no-problem/0001/math-ph0001043.html
# Padé Interpolation: Methodology and Application to Quarkonium ## Abstract A novel application of the Padé approximation is proposed in which the Padé approximant is used as an interpolation for the small and large coupling behaviors of a physical system, resulting in a prediction of the behavior of the system at intermediate couplings. This method is applied to quarkonium systems and reasonable values for the $`c`$ and $`b`$ quark masses are obtained. PACS: 11.15.Me, 11.15.Tk, 11.80.Fv, 12.39.Pn The Padé approximation seeks to approximate the behavior of a function, $`f(x)`$, by a ratio of two polynomials of $`x`$. This ratio is referred to as the Padé approximant. Compared to the usual perturbative power series approximation, the Padé approximant has the advantage that it deviates less rapidly from the true values of $`f(x)`$ as $`x`$ becomes large. Recently, the Padé approximation has been applied to quantum field theories to estimate the next order term in a perturbation series. The method involves calculating a certain physical quantity perturbatively to $`n`$th order in the coupling constant and then forming a Padé approximant which, when expanded in a power series of the coupling constant, reproduces the perturbative result. The $`(n+1)`$th order term in the expansion of the Padé approximant yields an estimate of the $`(n+1)`$th order term in the perturbation series for the physical quantity. It turns out one can obtain reasonably good estimates from this approach. In this paper a different usage of the Padé approximation is proposed. We observe that, because of the nature of the Padé approximant, it can be expanded in a power series in $`x`$ when $`x`$ is small as well as in a power series in $`\frac{1}{x}`$ when $`x`$ is large. 
It is therefore interesting to ask the question: in cases when both the small $`x`$ (e.g., weak coupling) and the large $`x`$ (e.g., strong coupling) behaviors of a theory can be computed perturbatively, is it possible to form a single Padé approximant which interpolates the weak and strong coupling behaviors, and if so, how well does this Padé interpolation approximate the behaviors of the theory at intermediate values of the coupling constant? This is a particularly timely question since, with the advance of duality in supersymmetric gauge theories, we may someday be able to compute the strong coupling behaviors of a theory from its dual theory. The Padé interpolation will then provide a means to estimate the behaviors of the theory for the entire range of the coupling constant. The method proposed here goes beyond interpolating the strong and weak coupling behaviors of a system. For example, the expansion parameter $`x`$ can be the temperature, the strength of an applied field, or, as discussed below in the application to heavy quarkonia, a parameter introduced to implement the Padé interpolation. We have tested the Padé interpolation method with examples in which the exact result is known, with encouraging success. To see how accurate the Padé interpolation can be and to illustrate the methodology involved, let us consider a simple quantum mechanical two-state system with the Hamiltonian, $$H=\sigma _x+\lambda \sigma _z,$$ (1) where the $`\sigma `$’s are the Pauli matrices and the coupling constant $`\lambda `$ is assumed to be positive. For $`\lambda \ll 1`$, the $`\sigma _z`$ term may be treated as a perturbation and we find, to second order in perturbation theory, the eigenvalues of $`H`$ are $$E_\pm ^<=\pm 1\pm \frac{\lambda ^2}{2}.$$ (2) For $`\lambda \gg 1`$, the Hamiltonian can be written as $`H=\lambda (\sigma _z+\frac{1}{\lambda }\sigma _x)`$, and the $`\sigma _x`$ term can be treated as a perturbation. 
We find, again to second order in perturbation theory, the eigenvalues of $`H`$ are now $$E_\pm ^>=\pm \lambda \pm \frac{1}{2\lambda }.$$ (3) A Padé approximant which interpolates the small and large $`\lambda `$ behaviors of the energies can now be constructed. For the higher energy level $`E_+`$, we find $$E_+^{(\mathrm{PA})}=\frac{\lambda ^3+\frac{3}{2}\lambda ^2+\frac{3}{2}\lambda +1}{\lambda ^2+\frac{3}{2}\lambda +1}.$$ (4) This Padé approximant is uniquely determined from the perturbative expansions for $`E_+`$ given in (2) and (3). The large $`\lambda `$ behavior indicates that the polynomial in the numerator must be one degree higher than the polynomial in the denominator and that the coefficient for the highest order term in $`\lambda `$ must be the same for the two polynomials. Without loss of generality, we may choose this coefficient to be 1. If the numerator is a polynomial of degree $`d`$, there will be a total of $`(2d-1)`$ coefficients to be determined for the Padé approximant. Because the small $`\lambda `$ behavior requires the numerator and the denominator to have the same constant (i.e., $`\lambda `$-independent) term, there are only $`(2d-2)`$ remaining coefficients to be determined. Expanding the Padé approximant and matching against the perturbation series in (2) and (3) provide an additional four conditions, which selects $`d=3`$. In Figure 1, the approximate result generated from Padé interpolation is compared with the exact result, $`E_+=\sqrt{\lambda ^2+1}`$. We see that the Padé approximant (open squares) tracks the exact result (solid curve) for all values of the coupling constant. In fact, $`E_+^{(\mathrm{PA})}`$ differs from $`E_+`$ by no more than about 1% for the entire range of $`\lambda `$. For example, for $`\lambda `$ = 0.5, 1.0, 2.0, and 4.0, $`E_+^{(\mathrm{PA})}`$ is larger than $`E_+`$ by 0.63%, 1.02%, 0.62%, and 0.18%, respectively. 
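The coefficient counting above can be made explicit. Cross-multiplying the small- and large-$`\lambda `$ series turns the matching requirements into linear equations for the four unknown coefficients; the short sketch below (an illustration added here, not part of the original calculation) solves them and reproduces Eq. (4) and the quoted percentage deviations.

```python
import numpy as np

# Write the d = 3 Pade approximant as
#   E(l) = (l^3 + a*l^2 + b*l + c) / (l^2 + d*l + c).
# Cross-multiplying the series gives four linear matching conditions:
#   small l, num = (1 + l^2/2) * den + O(l^3):  b - d = 0,  a - c/2 = 1
#   large l, num = (l + 1/(2l)) * den + O(l^0): a - d = 0,  b - c = 1/2
A = np.array([[0.0, 1.0,  0.0, -1.0],
              [1.0, 0.0, -0.5,  0.0],
              [1.0, 0.0,  0.0, -1.0],
              [0.0, 1.0, -1.0,  0.0]])
rhs = np.array([0.0, 1.0, 0.0, 0.5])
a, b, c, d = np.linalg.solve(A, rhs)   # -> 3/2, 3/2, 1, 3/2, as in Eq. (4)

def E_pade(l):
    return (l**3 + a*l**2 + b*l + c) / (l**2 + d*l + c)

def E_exact(l):
    return np.sqrt(l**2 + 1.0)

for l in (0.5, 1.0, 2.0, 4.0):
    print(f"lambda = {l}: excess = {100*(E_pade(l)/E_exact(l) - 1):.2f}%")
```

The printed deviations stay around 1% or below over the whole range, in line with the numbers quoted in the text.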
One may improve the approximation by calculating more terms in the perturbation series for $`E_+^<`$ and $`E_+^>`$ and constructing the corresponding Padé approximant. However, our example suffices to demonstrate the potential power of the Padé interpolation method in that very few terms in the perturbation expansions can yield a very accurate approximation to the exact result for the entire range of the coupling constant. For comparison, we have also plotted in Figure 1 the perturbative result for small $`\lambda `$, $`E_+^<`$ (dotted curve). As expected, it only agrees with the exact result for small values of $`\lambda `$ and diverges significantly from the exact result when $`\lambda `$ becomes large. Similarly, $`E_+^>`$ will diverge from the exact result for small $`\lambda `$. In contrast, by interpolating $`E_+^<`$ and $`E_+^>`$, the Padé approximant is constrained not to deviate too far from the exact result for the full range of the coupling constant. In this way, the Padé interpolation method can yield a very good approximation, provided the quantity we try to approximate is a smooth, continuous function of the coupling constant. When applying the Padé interpolation, one should beware of potential unphysical singularities coming from the zeroes of the polynomial in the denominator of the Padé approximant. This complication may limit the scope of applicability of the method. On the other hand, this property may prove useful in some applications. For instance, when interpolating the high and low temperature behaviors of a system for which a phase transition takes place at some intermediate temperature, one may try to construct a Padé-like approximant (perhaps involving fractional powers in the polynomials) which mimics the singular behavior near the phase transition point. 
Another way to implement the Padé interpolation method is in cases when the Hamiltonian can be expressed as $`H=H_1+H_2`$, where the exact solutions for $`H_1`$ and $`H_2`$ (but not $`H`$) are known. Introducing the interpolating Hamiltonian, $$H(\beta )\equiv H_1+\beta H_2,$$ (5) where $`\beta `$ is a positive constant, we can then treat $`H_2`$ as a perturbation when $`\beta \ll 1`$ and treat $`H_1`$ as a perturbation when $`\beta \gg 1`$, in exactly the same way as in the example (1). A Padé approximant is formed interpolating the perturbative results for large and small $`\beta `$. Finally, an approximate solution for the original Hamiltonian $`H`$ is obtained by setting $`\beta `$ equal to 1 in the Padé approximant. This method will be applied below to calculate quarkonium spectra. Reasonable values for the $`c`$ and $`b`$ quark masses are obtained by fitting the calculated levels to their measured values, which demonstrates the legitimacy of this Padé interpolation approach. Quarkonium refers to the bound state of a heavy quark $`Q`$ (e.g., $`c`$ or $`b`$ quark) with its antiquark $`\overline{Q}`$. It is well known that such systems can be described reasonably well using nonrelativistic quantum mechanics. Various potential energy functions have been used to model the $`Q\overline{Q}`$ interaction. It has been found that the potential description is flavor independent, i.e., the same potential describes equally well the $`c\overline{c}`$ and the $`b\overline{b}`$ systems. We consider here a central potential consisting of an attractive Coulomb term and a confining linear potential: $$V(r)=-\frac{\alpha }{r}+\lambda r,$$ (6) where $`\alpha `$ and $`\lambda `$ are positive coupling constants. We shall focus on the S states for the purpose of testing the proposed interpolation method. 
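Before turning to the radial problem, the $`\beta `$ recipe can be sanity-checked on the two-state example of Eq. (1), taking $`H_1=\sigma _x`$ and $`H_2=\sigma _z`$. The sketch below is a small illustration (not from the original calculation) that reuses the approximant of Eq. (4) with $`\lambda `$ replaced by $`\beta `$:

```python
import numpy as np

sx = np.array([[0.0, 1.0], [1.0, 0.0]])   # H_1 = sigma_x
sz = np.array([[1.0, 0.0], [0.0, -1.0]])  # H_2 = sigma_z

def H(beta):
    # interpolating Hamiltonian H(beta) = H_1 + beta * H_2
    return sx + beta * sz

# Setting beta = 1 recovers the original problem; its exact upper level:
E_exact = np.linalg.eigvalsh(H(1.0))[-1]   # sqrt(2)

# Pade interpolant of the small/large-beta series (Eq. (4) with lambda -> beta):
def E_pade(b):
    return (b**3 + 1.5*b**2 + 1.5*b + 1.0) / (b**2 + 1.5*b + 1.0)

print(E_exact, E_pade(1.0))   # the two values differ by about 1%
```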
In this case, the Hamiltonian for the radial Schrödinger equation is simply $$H_r=-\frac{1}{2\mu }\frac{d^2}{dr^2}-\frac{\alpha }{r}+\lambda r,$$ (7) where $`\mu `$ is the reduced mass for the heavy quark $`Q`$, $`\mu =m_Q/2`$. Note that $`H_r`$ can be expressed as the sum of two exactly solvable Hamiltonians: a Hamiltonian for the Coulomb potential, $$H_C=-\frac{1}{4\mu }\frac{d^2}{dr^2}-\frac{\alpha }{r},$$ (8) and a Hamiltonian for the linear potential, $$H_L=-\frac{1}{4\mu }\frac{d^2}{dr^2}+\lambda r.$$ (9) We have split the kinetic energy term in half so that the “effective mass” that appears in $`H_C`$ and in $`H_L`$ is $`2\mu =m_Q`$. We may now form the interpolating Hamiltonian, $`H_r(\beta )=H_C+\beta H_L`$, and perform perturbative calculations for small $`\beta `$ as well as for large $`\beta `$. We shall summarize the results of our calculation here. Details of the calculation can be found in Ref. where the calculation including the quarkonium P states is also discussed. (For the P states, the centrifugal potential energy term, $`\frac{l(l+1)}{2\mu r^2}`$, should be included with $`H_C`$, resulting in a solvable “hydrogen-like” Hamiltonian. Because of the half kinetic energy term, care must be taken to redefine the orbital angular momentum quantum number in order to extract the energy eigenvalues.) The bound state energies for the S states of $`H_r(\beta )`$ are computed for small $`\beta `$ as well as for large $`\beta `$ to the same order in perturbation theory. Wherever necessary (e.g., integrals involving the Airy functions, the eigenfunctions of $`H_L`$), terms in the perturbation series are evaluated numerically. In addition, for second and higher order calculations, the infinite series that appear in the perturbation expansions are estimated using the method of acceleration of convergence. A separate Padé approximant is formed interpolating the small and large $`\beta `$ results from our first, second, and third order calculations. 
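For readers who want an independent cross-check of the spectra discussed here, the radial Hamiltonian (7) can also be diagonalized directly on a grid. The sketch below is a brute-force finite-difference solver (an illustration with arbitrary grid parameters, not the paper's perturbative Padé construction); in the Coulomb limit $`\lambda =0`$ it reproduces the hydrogen-like S levels $`E_n=-\mu \alpha ^2/(2n^2)`$.

```python
import numpy as np

def s_state_energies(mu, alpha, lam, R=30.0, N=1500, k=3):
    """Lowest k S-state energies of H_r = -(1/(2*mu)) d^2/dr^2 - alpha/r + lam*r,
    discretized with central differences on (0, R], u(0) = u(R) = 0."""
    h = R / (N + 1)
    r = h * np.arange(1, N + 1)
    diag = 1.0 / (mu * h * h) - alpha / r + lam * r   # 2/(2*mu*h^2) + V(r)
    off = -0.5 / (mu * h * h) * np.ones(N - 1)
    Hmat = np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)
    return np.sort(np.linalg.eigvalsh(Hmat))[:k]

# Coulomb limit: E_n = -mu*alpha^2/(2 n^2), i.e. -0.5, -0.125, ... for mu = alpha = 1
print(s_state_energies(mu=1.0, alpha=1.0, lam=0.0))
```

With $`\lambda >0`$ the same routine gives the confined spectrum; a direct numerical solution of this kind is what the final consistency check quoted later (last column of Table 1) amounts to.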
Our estimates for the S state energies are obtained by letting $`\beta `$ equal 1 in the respective Padé approximant. These are fitted to the corresponding measured values treating $`\alpha `$, $`\lambda `$, $`m_c`$, $`m_b`$ as well as the zero-point energies $`V_c`$ (for charmonium) and $`V_b`$ (for bottomonium) as free parameters. We used the data for the $`J/\psi `$(1S), $`J/\psi `$(2S), $`\mathrm{{\rm Y}}`$(1S), $`\mathrm{{\rm Y}}`$(2S), and $`\mathrm{{\rm Y}}`$(3S) given in Ref.. When performing the fit, care must be taken to avoid the artificial singularities of the Padé approximant. The details of the fit results are presented in Table 1. We see that the first-order approximation already produces a rather good fit to the measured S-state energies, although the best fit values for $`m_c`$ and $`m_b`$ are somewhat high. The second-order approximation improves the fit quality, reproducing all of the quarkonium S-levels. The fit quality, defined as $`\sum _i\left(m_i^{\text{(experiment)}}-m_i^{\text{(calculated)}}\right)^2`$, worsens (from less than 1 to 240) as we go to the third-order approximation, primarily due to the increased difficulty of avoiding a larger number of unphysical singularities of the Padé approximant in performing the fit. Using the second-order results, our best fit values for $`m_c`$ and $`m_b`$ are 1.521 GeV and 5.046 GeV, respectively, to be compared with the values given in Ref.: $`m_c=1.0`$ to $`1.6`$ GeV; $`m_b=4.1`$ to $`4.5`$ GeV. The best fit values for the parameters in the quarkonium potential (6), $`\alpha =0.4984`$ and $`\lambda =0.1771\mathrm{GeV}^2`$, also compare favorably with earlier results: $`\alpha =0.520`$ and $`\lambda =0.183\mathrm{GeV}^2`$ in Ref.; $`\alpha =0.507`$ and $`\lambda =0.169\mathrm{GeV}^2`$ in Ref.. We have also performed a fit with the constraint $`V_b-V_c=2(m_b-m_c)`$ on the model parameters. The results are presented in Table 2. 
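As a quick arithmetic check of the quoted fit qualities, one can recompute the sum of squared residuals from the rounded level values in Table 1 (the small difference from the quoted 13 MeV<sup>2</sup> at first order presumably reflects rounding of the displayed levels):

```python
# Level values (MeV) transcribed from Table 1: J/psi(1S,2S), Y(1S,2S,3S)
measured     = [3097, 3686, 9460, 10023, 10355]
first_order  = [3097, 3686, 9459, 10026, 10353]
second_order = [3097, 3686, 9460, 10023, 10355]

def fit_quality(calc):
    # sum_i (m_i(experiment) - m_i(calculated))^2, in MeV^2
    return sum((m - c) ** 2 for m, c in zip(measured, calc))

print(fit_quality(first_order))   # 14 from rounded entries (13 is quoted)
print(fit_quality(second_order))  # 0, consistent with the quoted "< 1"
```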
The fit quality in this case is comparable to that of the unconstrained fit, and the best fit values of the model parameters differ somewhat from those of the unconstrained fit, indicating that the minimum found is not sharp and allows for some variation of the quark masses as long as their difference remains equal to half of the difference between the zero-point energies $`V_b`$ and $`V_c`$. The model parameters appear to be more stable than before as we go from first- to second-order Padé interpolation, which corroborates the physical significance of the constraint. As a final check, we have numerically integrated the Schrödinger equation for the quarkonium systems with the second-order best fit values of the parameters to obtain the energy levels. The results are shown in the last column of Table 1. This verifies the validity of the Padé interpolation method. In conclusion, using quarkonia and a simple two-state model as our testing grounds, we have shown that Padé interpolation can be a powerful method for estimating physical quantities at intermediate values of the coupling constant where perturbative calculations are not reliable. There are many areas to which this method may be applicable. One of these is the K-meson system. The strange quark mass, $`m_s`$, has a value such that neither chiral perturbation theory (for small quark masses) nor heavy quark effective theory (for large quark masses) gives a good description of K-meson properties. With Padé interpolation we may be able to obtain a more accurate estimate of the K-meson properties by interpolating the small $`m_s`$ and large $`m_s`$ behaviors, which can be obtained perturbatively through chiral perturbation theory and heavy quark effective theory, respectively. These issues are being examined by one of us and the results will be reported in the near future. Acknowledgement This work was supported in part by the U.S. Department of Energy under grant DE-FG02-84ER40163. Table 1. 
Fit results from Padé interpolation.

| Energy level | Measured | 1st order | 2nd order | 3rd order | Numerical |
| --- | --- | --- | --- | --- | --- |
| $`J/\psi (1S)`$ \[MeV\] | 3097 | 3097 | 3097 | 3089 | 3097 |
| $`J/\psi (2S)`$ \[MeV\] | 3686 | 3686 | 3686 | 3694 | 3687 |
| $`\mathrm{{\rm Y}}(1S)`$ \[MeV\] | 9460 | 9459 | 9460 | 9464 | 9456 |
| $`\mathrm{{\rm Y}}(2S)`$ \[MeV\] | 10023 | 10026 | 10023 | 10028 | 10020 |
| $`\mathrm{{\rm Y}}(3S)`$ \[MeV\] | 10355 | 10353 | 10355 | 10347 | 10356 |
| Fit quality \[MeV<sup>2</sup>\] | | 13 | $`<1`$ | 240 | |
| Fit parameters | | | | | |
| $`\alpha `$ | | 0.4600 | 0.4984 | 0.7510 | 0.4984 |
| $`\lambda `$ \[GeV<sup>2</sup>\] | | 0.1834 | 0.1771 | 0.1344 | 0.1771 |
| $`V_c`$ \[MeV\] | | 2767 | 2765 | 2953 | 2765 |
| $`V_b`$ \[MeV\] | | 9573 | 9585 | 9761 | 9585 |
| $`m_c`$ \[MeV\] | | 1719 | 1521 | 1253 | 1521 |
| $`m_b`$ \[MeV\] | | 5538 | 5046 | 4143 | 5046 |

Table 2. Fit results from Padé interpolation with the constraint $`V_b-V_c=2(m_b-m_c)`$.

| Energy level | Measured | 1st order | 2nd order |
| --- | --- | --- | --- |
| $`J/\psi (1S)`$ \[MeV\] | 3097 | 3097 | 3098 |
| $`J/\psi (2S)`$ \[MeV\] | 3686 | 3686 | 3685 |
| $`\mathrm{{\rm Y}}(1S)`$ \[MeV\] | 9460 | 9459 | 9460 |
| $`\mathrm{{\rm Y}}(2S)`$ \[MeV\] | 10023 | 10026 | 10022 |
| $`\mathrm{{\rm Y}}(3S)`$ \[MeV\] | 10355 | 10353 | 10356 |
| Fit quality \[MeV<sup>2</sup>\] | | 13 | 3 |
| Fit parameters | | | |
| $`\alpha `$ | | 0.4850 | 0.4964 |
| $`\lambda `$ \[GeV<sup>2</sup>\] | | 0.1741 | 0.1784 |
| $`V_c`$ \[MeV\] | | 2770 | 2773 |
| $`V_b`$ \[MeV\] | | 9571 | 9574 |
| $`m_c`$ \[MeV\] | | 1560 | 1572 |
| $`m_b`$ \[MeV\] | | 4960 | 4972 |

FIGURE CAPTION Figure 1: see description in the text.
no-problem/0001/astro-ph0001505.html
# ROSAC: A ROSAT based Search for AGN Clusters ## 1 ROSAC: A ROSAT based Search for AGN Clusters The ROSAT All-Sky Survey (RASS) provides an excellent opportunity to study AGNs at low redshifts. For the identification of RASS sources, objective prism and direct plates from the Hamburg Quasar Survey were used, giving a list of AGN candidates. The AGN nature of these candidates has to be confirmed by follow-up spectroscopy. Our confirmation rate for AGN candidates is $`\sim `$ 95%, which makes this identification strategy powerful for creating AGN samples. The ROSAC project makes use of this work to study the spatial properties of low-redshift AGNs. In particular, the search for clusters or groups of AGNs and the determination of the 2-point correlation function come to the fore. Three regions in the constellations Ursa Major (UMa), Coma Berenices, and Pisces were selected due to a) low hydrogen column densities, b) large numbers of known redshifts to reduce the observing time, and c) the presence of interesting structures found in a first minimal spanning tree analysis. The most advanced ’subsample’ today is that in UMa, with a completeness of 87%. This region covers an area of 363 deg<sup>2</sup> and consists of 200 confirmed AGNs. A first clustering investigation within the scope of the ROSAC project is restricted to UMa (Fig. 1). ## 2 Clustering Analyses The 2-point correlation function $`\xi \left(r\right)=\frac{N_{AGN}}{N_{Random}}-1`$ was applied. Clustering properties of AGNs in the low-redshift regime are uncertain. Only a few investigations, based on smaller samples with lower surface densities than those of the ROSAC project, have been conducted so far. Specifically, the two studies of X-ray selected AGNs (Boyle & Mo 1993, Carrera et al. 1998) did not show clustering on small scales. The results of our investigation are outlined in Fig. 1. A power law $`\xi \left(r\right)=\left(\frac{r}{r_0}\right)^{-\gamma }`$ was fitted to the data. 
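Schematically, the pair counting behind this estimator can be sketched as follows (a simplified illustration: the actual ROSAC analysis must also fold in the survey geometry and selection function, and the random catalogue is assumed here to share the data's geometry):

```python
import numpy as np

def pair_counts(pts, edges):
    # histogram of all unique pair separations of an (N, 3) point set
    d = np.sqrt(((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1))
    iu = np.triu_indices(len(pts), k=1)
    return np.histogram(d[iu], bins=edges)[0].astype(float)

def xi(data, random, edges):
    # natural estimator xi(r) = (DD/RR) - 1, with pair counts
    # normalized to equal catalogue sizes
    dd = pair_counts(data, edges)
    rr = pair_counts(random, edges)
    norm = (len(random) * (len(random) - 1)) / (len(data) * (len(data) - 1))
    return dd / rr * norm - 1.0
```

An unclustered sample gives $`\xi \approx 0`$ in every bin; clustering appears as $`\xi >0`$ at small separations, to which the power law above is then fitted.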
The correlation length gives $`r_0=8.1_{-3.9}^{+2.7}`$ and $`\gamma =1.08_{-0.23}^{+0.45}`$, which is consistent with the favoured value of $`r_0=6.0`$ for AGNs. Additionally, a search for groups of AGNs was carried out using the minimal spanning tree (MST) technique. Earlier studies had found 18 groups of AGNs in total. All of these groups were confirmed by the MST technique. We could detect two further AGN groups, comprising 21 and 14 members at mean redshifts of 0.081 and 0.222. Further work is necessary to quantify the exact significance of these results. ## 3 Future prospects The final ROSAC project will result in a sample of about 700 AGNs with surface densities between 0.3 and 0.5 AGNs/deg<sup>2</sup> at redshifts z$`<`$0.5. Consequently, the incorporation of the two other regions, which would add about 500 objects, will provide a much better sample for studying AGN clustering.
no-problem/0001/hep-lat0001007.html
# Lattice QCD Calculations of the Sigma Commutator ## Abstract As a direct source of information on chiral symmetry breaking within QCD, the sigma commutator is of considerable importance. With recent advances in the calculation of hadron masses within full QCD it is of interest to see whether the sigma commutator can be calculated directly from the dependence of the nucleon mass on the input quark mass. We show that provided the correct chiral behaviour of QCD is respected in the extrapolation to realistic quark masses one can indeed obtain a fairly reliable determination of the sigma commutator using present lattice data. Within two-flavour, dynamical-fermion QCD the value obtained lies in the range 45 to 55 MeV. ADP-00-01/T390 In the quest to understand hadron structure within QCD, small violations of fundamental symmetries play a vital role. The sigma commutator, $`\sigma _N`$: $$\sigma _N=\frac{1}{3}\langle N\left|[Q_{i5},[Q_{i5},H_{\mathrm{QCD}}]]\right|N\rangle $$ (1) (with $`Q_{i5}`$ the two-flavour ($`i`$=1, 2, 3) axial charge) is an extremely important example. Because $`Q_{i5}`$ commutes with the QCD Hamiltonian in the chiral SU(2) limit, the effect of the double commutator is to pick out the light quark mass term from $`H_{\mathrm{QCD}}`$: $$\sigma _N=\langle N\left|\left(m_u\overline{u}u+m_d\overline{d}d\right)\right|N\rangle $$ (2) Neglecting the very small effect of the $`u`$-$`d`$ mass difference we can write Eq. (2) in the form $`\sigma _N`$ $`=`$ $`\langle N\left|\overline{m}\left(\overline{u}u+\overline{d}d\right)\right|N\rangle `$ (3) $`=`$ $`\overline{m}{\displaystyle \frac{\partial M_N}{\partial \overline{m}}}`$ (4) with $`\overline{m}=(m_u+m_d)/2`$. Equation (4) follows from the Feynman-Hellmann theorem. While there is no direct experimental measurement of $`\sigma _N`$, the value inferred from world data has been $`45\pm 8`$ MeV for some time. Recently there has been considerable interest in this value because of progress in the determination of the pion-nucleon scattering lengths and new phase shift analyses. 
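The Feynman-Hellmann theorem invoked for Eq. (4) states that for $`H(\lambda )=H_0+\lambda V`$ the derivative of an eigenvalue with respect to $`\lambda `$ equals the expectation value of $`V`$ in the corresponding eigenstate. A toy numerical check on a random Hermitian matrix (purely illustrative, unrelated to the lattice data) is:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
def herm(m):
    return (m + m.conj().T) / 2

H0 = herm(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
V  = herm(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))

lam, eps = 0.3, 1e-6
w, U = np.linalg.eigh(H0 + lam * V)
psi0 = U[:, 0]                              # ground state at coupling lam

fh = (psi0.conj() @ V @ psi0).real          # Feynman-Hellmann: <psi|dH/dlam|psi>
fd = (np.linalg.eigvalsh(H0 + (lam + eps) * V)[0]
      - np.linalg.eigvalsh(H0 + (lam - eps) * V)[0]) / (2 * eps)
print(fh, fd)                               # the two derivatives agree
```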
For a summary of the sources of the proposed variations and the disagreements between various investigators, we refer to the excellent review of Knecht. For our purposes the experimental value is of limited interest, as the full lattice QCD calculations upon which our work is based involve only two active flavours. Nevertheless, as a guide, the current work suggests that the best value of $`\sigma _N`$ may be between 8 and 26 MeV larger than the value quoted above. Numerous calculations of $`\sigma _N`$ have been made within QCD-motivated models and there has been considerable work within the framework of chiral perturbation theory. However, direct calculations of $`\sigma _N`$ within QCD itself have proven to be difficult. Early attempts to extract $`\sigma _N`$ from the quark mass dependence of the nucleon mass in quenched QCD (using Eq. (4)) produced values in the range 15 to 25 MeV. Attention subsequently turned to determining $`\sigma _N`$ by calculating the scalar matrix element of the nucleon, $`\langle N|\overline{u}u+\overline{d}d|N\rangle `$. There it was discovered that the sea quark loops make a dominant contribution to $`\sigma _N`$. These works, based on quenched QCD simulation, found values in the 40 to 60 MeV range, which are more compatible with the experimental values quoted earlier. On the other hand, the most recent estimate of $`\sigma _N`$, and the only one based on a two-flavour, dynamical-fermion lattice QCD calculation, comes from the SESAM collaboration. They obtain a value of $`18\pm 5`$ MeV, through a direct calculation of the scalar matrix element $`\langle N|\overline{u}u+\overline{d}d|N\rangle `$. The discrepancy from the quenched results of Refs. is not so much an unquenching effect in the scalar matrix element but rather a significant suppression of the quark mass in going from quenched to full QCD. 
The difficulty in all approaches which evaluate $`\langle N|\overline{u}u+\overline{d}d|N\rangle `$ is that neither it nor $`\overline{m}`$ is renormalization group invariant. One must reconstruct the scale-invariant result from the product of the scale-dependent matrix element and the scale-dependent quark masses. The latter are extremely difficult to determine precisely and are the chief source of uncertainty in this approach. An additional difficulty in extracting $`\sigma _N`$ from lattice studies is the need to extrapolate from quite large pion masses, typically above 500 or 600 MeV. An important innovation adopted by Dong et al., but not by the SESAM collaboration, was to extrapolate the computed values of $`\langle N|\overline{u}u+\overline{d}d|N\rangle `$ using a form motivated by chiral symmetry, namely $`a+b\overline{m}^{\frac{1}{2}}`$. On the other hand, the value of $`b`$ used was not constrained by chiral symmetry and higher order terms of the chiral expansion were not considered. Furthermore, since the work was based on a quenched calculation, the chiral behaviour implicit in the lattice results involves incorrect chiral coefficients. Our work is motivated by recent, dramatic improvements in computing power which, together with the development of improved actions, mean that we now have accurate calculations of the mass of the nucleon within full QCD (for two flavours) as a function of $`\overline{m}`$ down to $`m_\pi \sim 500`$ MeV. (Since $`m_\pi ^2`$ is proportional to $`\overline{m}`$ over the range studied we choose to display all results as a function of $`m_\pi ^2`$.) In addition, CP-PACS has recently published a result at $`m_\pi \sim 300`$ MeV, albeit with somewhat large errors. Provided that one has control over the extrapolation of this lattice data to the physical pion mass, $`m_\pi =\mu =140`$ MeV, one can calculate $`\sigma _N`$ by evaluating Eq. (4) at the physical pion mass. 
Note that this approach has the important advantage over the calculation of the scalar density that one only needs to work with renormalization group invariant quantities. We therefore turn to a consideration of the method of extrapolation. The lattice data for the nucleon mass calculated by UKQCD and CP-PACS is shown in Fig. 1. Both groups cite a 10% uncertainty in setting the lattice scale, so we have scaled the former down and the latter up by 5% so that the data sets are consistent. Over almost the entire range of $`m_\pi ^2`$, the data shows a dependence on quark mass that is essentially linear. However, the preliminary point at $`m_\pi ^2\sim 0.1`$ GeV<sup>2</sup> suggests some curvature in the low mass region. This is indeed expected on the basis of chiral symmetry, with the leading non-analytic (LNA) correction (in terms of $`\overline{m}`$) being proportional to $`m_\pi ^3`$ ($`\overline{m}^{3/2}`$): $$\delta M_N^{\text{LNA}}=\gamma ^{\text{LNA}}m_\pi ^3,\gamma ^{\text{LNA}}=-\frac{3g_A^2}{32\pi f_\pi ^2}.$$ (5) These observations led the CP-PACS group to extrapolate their data with the simple, phenomenological form: $$M_N=\stackrel{~}{\alpha }+\stackrel{~}{\beta }m_\pi ^2+\stackrel{~}{\gamma }m_\pi ^3.$$ (6) The corresponding fit to the combined data set, using Eq. (6), is shown as the short-dashed curve in Fig. 1, and the parameters are $`(\stackrel{~}{\alpha },\stackrel{~}{\beta },\stackrel{~}{\gamma })=(0.912,1.69,-0.761)`$ (the units are appropriate powers of GeV). This yields a value for the sigma commutator, $`\sigma _N^{(p)}=29.7`$ MeV, where the superscript stands for “phenomenological”. The difficulty with this purely phenomenological analysis was discussed in Ref. . That is, the magnitude of $`\stackrel{~}{\gamma }=-0.761`$ is almost an order of magnitude smaller than that of the model independent LNA term, $`\gamma ^{\text{LNA}}=-5.60`$ GeV<sup>-2</sup>. Clearly this presents some concern when evaluating $`\sigma _N`$, because of the derivative required. 
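Since $`m_\pi ^2`$ is proportional to $`\overline{m}`$ over the fitted range, Eq. (4) becomes $`\sigma _N=m_\pi ^2\,dM_N/dm_\pi ^2`$ evaluated at $`m_\pi =\mu `$. Applying this to the cubic fit of Eq. (6), with the cubic coefficient taken negative as the downward curvature requires, reproduces the quoted $`\sigma _N^{(p)}`$ to within rounding of the fit parameters:

```python
# Cubic fit M_N = a + b*m^2 + g*m^3 (GeV units), parameters quoted in the text
a, b, g = 0.912, 1.69, -0.761
mu = 0.140                     # physical pion mass, GeV

# sigma_N = m^2 * dM_N/d(m^2) at m = mu, with dM_N/d(m^2) = b + (3/2)*g*m
sigma_N = mu**2 * (b + 1.5 * g * mu)
print(1000 * sigma_N, "MeV")   # ~30.0 MeV, vs. the quoted 29.7 MeV
```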
An alternative approach to this problem was recently suggested by Leinweber et al. . They realised that the pion loop diagrams, Figs. 2(a) and 2(b), not only yield the most important non-analytic structure in the expression for the nucleon mass, but, amongst all the possible meson-baryon states which contribute to the nucleon mass within QCD, they alone give rise to a significant variation of the nucleon mass as $`m_\pi \rightarrow 0`$. In Ref. it was suggested that one should extrapolate $`M_N`$ as a function of quark mass using: $$M_N=\alpha +\beta m_\pi ^2+\sigma _{NN}(m_\pi ,\mathrm{\Lambda })+\sigma _{N\mathrm{\Delta }}(m_\pi ,\mathrm{\Lambda }),$$ (7) where $`\sigma _{NN}`$ and $`\sigma _{N\mathrm{\Delta }}`$ are the self-energy contributions of Figs. 2(a) and 2(b), respectively, using a sharp cut-off in momentum, $`\theta (\mathrm{\Lambda }-k)`$. The full analytical expressions for $`\sigma _{NN}`$ and $`\sigma _{N\mathrm{\Delta }}`$ are given in Ref. . For our purposes it suffices that they have precisely the correct LNA and next-to-leading non-analytic behaviour required by chiral perturbation theory as $`m_\pi \rightarrow 0`$. In addition, $`\sigma _{N\mathrm{\Delta }}`$ contains the correct square-root branch point ($`[m_\pi ^2-(M_\mathrm{\Delta }-M_N)^2]^{\frac{3}{2}}`$) at the $`\mathrm{\Delta }N`$ threshold, which is essential for extrapolations from above the $`\mathrm{\Delta }N`$ threshold. Fitting Eq. (7) to the data, including the point near 0.1 GeV<sup>2</sup>, gives the dot-dash curve in Fig. 1 ($`(\alpha ,\beta ,\mathrm{\Lambda })=(1.42,0.564,0.661)`$). The corresponding value of $`\sigma _N`$ is 54.6 MeV and the physical nucleon mass is 870 MeV. Omitting the lowest data point from the fit yields the long-dash curve in Fig. 1 ($`(\alpha ,\beta ,\mathrm{\Lambda })=(1.76,0.386,0.789)`$) with $`\sigma _N=65.8`$ MeV. Clearly the curvature associated with the chiral corrections at low quark mass is extremely important in the evaluation of $`\sigma _N`$. 
In order to estimate the error in the extracted value of $`\sigma _N`$ we would need to have the full data set on a configuration by configuration basis. As this is not available, the errors that we quote are naive estimates only. The extracted value of $`\sigma _N`$ is very well determined by the present data, the result being $`54.6\pm 2.0`$ MeV. Since the process of setting the physical mass scale via the string tension is thought to have a systematic error of 10%, one might naively expect this to apply to $`\sigma _N`$. However, all masses in the problem including the pion (or quark) mass, as well as that of the nucleon, scale with the lattice parameter $`a`$. It turns out that when one uses Eq. (4) at the physical pion mass (which means a slightly different value of $`\overline{m}a`$ if $`a`$ changes), the value of $`\sigma _N`$ is extremely stable. If, for example, one raises the CP-PACS data by 15% and the UKQCD data by 5% (instead of 5% and $`-5\%`$, respectively), the value of $`\sigma _N`$ shifts from $`54.6\pm 2.0`$ to $`55.2\pm 2.1`$ MeV. We present calculations in Table 1 that show, for a variety of scalings of the lattice data, how stable our results are. The remaining issue, for the present data, is the model dependence associated with the choice of a sharp cut-off in the pionic self-energies. Our investigations in Ref. showed that Eq. (7) could reproduce the dependence of $`M_N`$ on $`m_\pi ^2`$ within the cloudy bag model, and that it could also describe the dependence of pion self-energy terms calculated with dipole form factors. Thus we believe that any model satisfying the essential chiral constraints and fitting the lattice data should give essentially the same answer. We checked this by numerically fitting the lattice data (solid curve) with the form of Eq. (7) but with $`\sigma _{NN}`$ and $`\sigma _{N\mathrm{\Delta }}`$ calculated with dipole form factors of mass $`\mathrm{\Lambda }_D`$ at all pion-baryon vertices. 
Since the preferred phenomenological form of the $`N\pi `$ form factor is a dipole, we regard the dipole result shown in the first line of Table 1 as our best estimate, namely $`\sigma _N=47.2\pm 1.8`$ MeV with fit parameters $`(\alpha ,\beta ,\mathrm{\Lambda }_D)=(2.02,0.398,1.225)`$. A remaining source of error is that, although the lattice results were calculated with an improved action, there still is an error associated with the extrapolation to the infinite volume, continuum limit. The importance of the inclusion of the correct chiral behaviour is clearly seen by the fact that it increases the value of the sigma commutator from the 30 MeV of the unconstrained cubic fit to around 50 MeV. Clearly an enormous amount of work remains to be done before we will fully understand the structure of the nucleon within QCD. It is vital that the rapid progress on improved actions and faster computers continue and that we have three-flavour calculations within full QCD at masses as close as possible to the physical quark masses. Nevertheless, it is a remarkable result that the present lattice data for dynamical-fermion, two-flavour QCD yields such a stable and accurate answer for the sigma commutator, an answer which is already within the range of the experimental values. The implications of this result for models of hadron structure need to be explored urgently. One of us (SVW) would like to acknowledge helpful discussions with Tom Cohen at an early stage of this work. We would also like to acknowledge helpful comments from Chris Allton, Craig Roberts and Robert Perry. This work was supported in part by the Australian Research Council.
no-problem/0001/astro-ph0001127.html
ar5iv
text
# EUV and X-ray observation of Abell 2199: a three-phase intracluster medium with a massive warm component ## Abstract Various independent ways of constraining the Hubble constant and the baryonic content of the universe have finally converged on a consensus range of values which indicates that at the present epoch the bulk of the universe’s baryons is in the form of a warm $`\sim `$ 10<sup>6</sup> K gas - a temperature regime which renders them difficult to detect. The discovery of EUV and soft X-ray excess emission from clusters of galaxies was originally interpreted as the first direct evidence for the large scale presence of such a warm component . We present results from an EUVE Deep Survey (DS) observation of the rich cluster Abell 2199 in the Lex/B (69 - 190 eV) filter passband. The soft excess radial trend (SERT), shown by a plot against cluster radius $`r`$ of the percentage EUV emission $`\eta `$ observed above the level expected from the hot intracluster medium (ICM), reveals that $`\eta `$ is a simple function of $`r`$ which decreases monotonically towards $`r=0`$; it smoothly turns negative at $`r\sim 6`$ arcmin, and inwards of this radius the EUV is absorbed by cold matter with a line-of-sight column density of $`\sim `$ 2.7 $`\times `$ 10<sup>19</sup> cm<sup>-2</sup>. The area of absorption is much larger than that of the cooling flow. These facts together provide strong evidence for a centrally concentrated but cluster-wide distribution of clumps of cold gas which co-exist with warm gas of similar spatial properties. Further, the simultaneous modeling of EUV and X-ray data requires a warm component even within the region of absorption. The phenomenon demonstrates a three-phase ICM, with the warm phase estimated to be $`\sim `$ 5-10 times more massive than the hot. The A2199 sky area was observed by EUVE for $`\sim `$ 57 ksec in February 1999. 
The programme featured an in situ background measurement by pointing at a small offset from the cluster, which yielded an accurate background template for point-to-point subtraction . Complementary data in the X-ray (0.2 - 2.0 keV) passband, as gathered by a ROSAT PSPC observation which took place in July 1990, with an exposure of 8.5 ksec, were extracted from the public archive . For correct comparison between the EUV and X-ray emissions, the Galactic HI column density was measured at N<sub>H</sub> = (8.3 $`\pm `$ 1.0) $`\times `$ 10<sup>19</sup> cm<sup>-2</sup> by a dedicated observation at Green Bank , and was found to be spatially smooth. The EUV and X-ray data were simultaneously modeled with a thin plasma emission code and appropriate line-of-sight Galactic absorption for the above value of N<sub>H</sub>. At a given radius the hot ICM was assumed to be isothermal, with the abundance fixed at 0.3 solar apart from the cooling flow region, where the parameter became part of the data fitting in order to account for any possible abundance gradient within this region (a different way of handling the abundance does not sensitively affect the results presented in this work). It is found that the aforementioned model, when applied to the PSPC spectra at all radii, generally leads to acceptable fits. At low energies the EUV measurements gave crucial new information. The overall effect is a soft excess as reported previously . A plot of the SERT indicates, however, that the average percentage EUV excess at a given radius is smaller towards the centre, see Figure 1. In fact, the trend takes the form of a very negative central excess (i.e. absorption), which steadily rises with radius until the 6 arcmin point, beyond which this fractional excess turns positive and continues to increase until the limiting radius of EUV detection ($`\sim `$ 20 arcmin). 
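The SERT just described is summarized by the parametric fit quoted in the Figure 1 caption, $`\eta (r)=-0.45+0.0075r^{2.5}`$ with $`r`$ in arcmin (the negative intercept encodes the central absorption). The following short sketch, not from the paper, locates the sign change of the fractional excess:

```python
# Sketch of the SERT radial profile from the Figure 1 caption:
# eta(r) = -0.45 + 0.0075 * r**2.5, r in arcmin. Negative eta means the EUV
# is absorbed; we locate the radius where the fractional excess changes sign.

def eta(r):
    return -0.45 + 0.0075 * r ** 2.5

lo, hi = 1.0, 20.0  # bracket the zero crossing inside the EUV detection radius
for _ in range(60):  # bisection
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if eta(mid) < 0 else (lo, mid)

r0 = 0.5 * (lo + hi)
print(f"eta(1') = {eta(1):.2f}, sign change at r ~ {r0:.1f}', eta(15') = {eta(15):.2f}")
```

The fit crosses zero near $`r\sim 5`$ arcmin, in line with the $`\sim `$ 6 arcmin transition quoted above.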
We first address the outer parts of the cluster, where the data already demonstrated the implausibility of a non-thermal interpretation of the soft excess (which postulates a large population of intracluster relativistic electrons undergoing inverse-Compton (IC) interaction with the cosmic microwave background (CMB)). Figure 2 shows a composite plot of the EUV and X-ray data for the 12 - 15 arcmin annulus. The prominent EUV excess, unaccompanied by any similar effect in soft X-rays, implies that the bulk of the relativistic electrons have energies below 200 MeV, a cut-off which is most obviously understood as due to aging (i.e. synchrotron and inverse-Compton losses): the electrons are at least 3 $`\times `$ 10<sup>9</sup> years old. However, in order to account for the large EUV excess the highly evolved electron spectrum at the present epoch must still include sufficient particles ahead of the cut-off. This means that for the region of concern, at injection (when the power-law differential number index is assumed to be 2.5, in accordance with our Galactic cosmic ray index) the relativistic electron pressure would have exceeded that of the hot ICM by a factor of $`\sim `$ 4, leading to a major confinement problem for the hot gas. The inclusion of cosmic ray protons exacerbates the difficulty, as protons carry 10 - 100 times more pressure than electrons. Thus, by elimination, the only viable alternative, viz. the originally proposed thermal (warm) gas scenario , must now be considered seriously. This is especially so in the light of the recent constraints on cosmological parameters, which point to the existence of a warm and massive baryonic component, as mentioned earlier. To appreciate the multi-phase nature of the ICM of A2199, we move radially inwards, where Figure 1 indicates that the EUV is absorbed. For more details, we show in Figure 3 an image of the fractional EUV excess $`\eta `$. 
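The age bound invoked above can be checked against the textbook inverse-Compton loss time on the CMB, $`t_{IC}=3m_ec/(4\sigma _T\gamma u_{CMB})`$. The constants below are standard values rather than numbers taken from the paper, so this is a rough sketch, not the authors' calculation; synchrotron losses would shorten the time further.

```python
# Rough IC/CMB cooling-time check for the ~200 MeV electron cutoff discussed
# above. cgs units; standard constants (not taken from the paper).
m_e = 9.109e-28       # electron mass [g]
c = 2.998e10          # speed of light [cm/s]
sigma_T = 6.652e-25   # Thomson cross section [cm^2]
u_cmb = 4.2e-13       # CMB energy density near z ~ 0 [erg/cm^3]

E_cut_erg = 200.0 * 1.602e-6          # 200 MeV in erg
gamma_e = E_cut_erg / (m_e * c ** 2)  # Lorentz factor, a few hundred

t_ic_s = 3 * m_e * c / (4 * sigma_T * gamma_e * u_cmb)
t_ic_yr = t_ic_s / 3.156e7
print(f"gamma ~ {gamma_e:.0f}, IC cooling time ~ {t_ic_yr:.1e} yr")
```

IC losses alone give a cooling time of several Gyr at the cutoff energy; adding synchrotron losses only lowers it, consistent with the electrons being at least 3 $`\times `$ 10<sup>9</sup> years old.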
The data suggest an intermixed model of the ICM: the lack of soft excess at small radii is due entirely to the larger amount of cold absorbing matter collected in this region. Thus, while in the north-south direction severe absorption persists out to a radius of $`\sim `$ 4 arcmin, in the east-west direction signatures of soft excess are already present as close as $`r\sim 2`$ arcmin. Our inference of the state of the ICM is reinforced by the behavior of the SERT: it follows a simple parametric profile which applies equally satisfactorily to the absorption and soft excess regions, with no change of behavior at the transition radius of $`\sim `$ 6 arcmin (Figure 1). In fact, there is no particular significance to this radius (it is much larger than the cooling flow radius of $`\sim `$ 2 arcmin). The observation is naturally interpreted as the combined effect of clumped emission regions containing a warm component, absorbed by blobs of cold gas sandwiched in between. Both distributions are cluster-wide and centrally condensed, but with increasing radius the lines-of-sight are more transparent to EUV photons created at locations along them. For comparable intrinsic emission profiles of the soft excess and the hot ICM, the result is an outwardly rising SERT. The correctness of this approach is confirmed by our study of the nearby Virgo and Coma clusters. In the former case a strong SERT exists despite no apparent absorption (i.e. $`\eta >0`$ everywhere). Yet a spatial analysis of the PSPC image showed that a central circular area has a statistically significant enhancement in the number of small regions where the soft excess brightness is below the mean value for this circle (i.e. some of them must be due to resolved absorption clouds), with the effect disappearing gradually towards annuli of larger radius. For Coma the SERT is very weak, implying little absorption, and indeed a similar analysis revealed no evidence of clouds at any radius. 
Results on these two clusters will be published shortly. The argument for an intermixed ICM also rests upon direct evidence for the presence of soft excess even in the absorbed regions. We show in Figure 4 a core spectrum, where it can be seen that by the time intrinsic absorption accounts for the EUV decrement, an excess is seen in soft X-rays (which are less absorbed). This clearly indicates a complex ICM where the various gas phases co-exist. The apparent negative soft excess within the absorption radius is simply due to an abundance of cold clouds masking EUV emissions from the warm and hot components. The thermal origin of the EUV is compelling for another reason: the widespread absorption reported here implies the existence of a cold phase in the midst of the well known hot phase, and the question then naturally arises as to why a warm phase should be absent, and not be the cause of the soft excess. At the very least, mixing layers on the surface of the cold clouds would suffice to produce the intermediate phase . The mass budgets of the three ICM components in consideration are estimated as follows. The intrinsic HI column density as inferred from the central EUV absorption converts to a density of cold clouds of $`\sim `$ 5 $`\times `$ 10<sup>10</sup> M<sub>⊙</sub> Mpc<sup>-3</sup>. This gives a mass ratio of 1:2000 between the cold and hot gas along the line-of-sight. Any estimate of the mass of warm gas at the centre is likely to be inaccurate, since the soft emission is significantly absorbed. We therefore considered the 12 - 15 arcmin region where this complication is not as severe as in the center. The extreme softness of the emission (Figure 2) limits the gas temperature to kT $`<`$ 100 eV (or T $`<`$ 10<sup>6</sup> K), with a correspondingly large mass estimate of 1.25 $`\pm _{0.9}^{0.4}\times `$ 10<sup>14</sup> M<sub>⊙</sub>, i.e. $`\sim `$ 43 $`\pm _{29}^{13}`$ times more massive than the hot ICM in this region. 
The 1-$`\sigma `$ lower limit ratio implies $`\sim `$ 3 times more missing baryons than expected , although it must be emphasized that both the mass and bolometric luminosity can be substantially reduced if the gas turns out to be warmer. This can be realized by adopting alternative emission models for the warm phase, especially those which involve an underionized gas, since the EUV emission efficiency is then enhanced, and the gas can be warmer than the above temperature constraint. Plasma in such an ionization state is easily produced by mixing layers or shock heating. Another possibility is that the gas is actually warmer than our inferred temperature, and the lack of a soft X-ray excess is only an apparent effect caused by residual absorption at these outer radii. Figure Captions Figure 1: The SERT effect illustrated by a plot against cluster radius $`r`$ of the EUV fractional excess $`\eta `$, defined as $`\eta =(p-q)/q`$, where $`p`$ is the DS Lex/B observed signal and $`q`$ is the expected EUV emission from the hot ICM. $`q`$ is determined from the best model of the PSPC data (single temperature fits were found to be satisfactory at all radii) with details of Galactic absorption as quoted in the text. The data follow a parametric profile $`\eta =-0.45+0.0075r^{2.5}`$ (solid line). Figure 2: Emission models (solid line) used to simultaneously fit the EUVE/DS and ROSAT/PSPC data of the 12 - 15 arcmin annular region of A2199. Left Panel: isothermal thin plasma spectrum [16-18] at kT = 4.08 keV and an abundance of 0.5 solar. Note the strong EUV excess recorded by the DS (leftmost data point) which is not seen in soft X-rays by the PSPC (remaining data points). Right Panel: same as the previous model, except with an additional non-thermal component due to the IC/CMB effect (see text). 
The electron population (assumed to have an initial differential number index of 2.5, similar to that of Galactic cosmic rays) is $`\sim `$ 3.5 Gyr old, as during this period the IC/CMB and synchrotron losses would have secured the necessary high energy cut-off to avoid emissions in the PSPC passband. At the present epoch the electron pressure is $`\sim `$ 25% that of the hot ICM, while the initial value of this ratio was $`\sim `$ 400%. Figure 3: An image of the surface brightness of EUV excess for the central region of Abell 2199, obtained after subtraction of background and contributions from the hot ICM emission (see text). The pixel units (color coded) are in 10<sup>-3</sup> photons arcmin<sup>-2</sup> s<sup>-1</sup>. Pixels of negative excess correspond to areas where the EUV from the warm and hot components is absorbed by a cold component. The common centroid of the cluster EUV and soft X-ray emissions is marked by a cross. Figure 4: Data are as in Figure 2, except for the 1 - 2 arcmin radius of A2199. Left Panel: single temperature emission model (kT = 3.58 $`\pm _{0.69}^{1.07}`$ keV, abundance = 0.56 $`\pm _{0.25}^{0.43}`$ solar) showing the EUV signal in absorption. Right Panel: plasma properties as above, with an intrinsic cold gas of line-of-sight HI column density $`N_H`$ = 2.7 $`\times `$ 10<sup>19</sup> cm<sup>-2</sup> invoked to account for the depleted EUV flux. Note this correction reveals a soft X-ray excess in the PSPC 1/4-keV band, thus clearly indicating the presence of an underlying warm component which is masked by the cold absorbing phase. References 1. Cen, R. and Ostriker, J.P. 1999, ApJ, 514, 1-6. 2. Maloney, P.R. and Bland-Hawthorn, J. 1999, ApJ, 522, L81-84. 3. Lieu, R., Mittaz, J.P.D., Bowyer, S., Breen, J.O., Lockman, F.J., Murphy, E.M. and Hwang, C.-Y. 1996b, Science, 274, 1335-1338. 4. Lieu, R., Bonamente, M., Mittaz, J.P.D., Durret, F., Dos Santos, S. and Kaastra, J.S. 1999, ApJ, 527, L77. 5. 
See the High Energy Astrophysics Archive available at http://heasarc.gsfc.nasa.gov/docs/rosat/archive.html . 6. Kaastra, J.S., Lieu, R., Mittaz, J.P.D., Bleeker, J.A.M., Mewe, R., Colafrancesco, S. and Lockman, F.J. 1999, ApJ, 519, L119-122. 7. Mewe, R., Gronenschild, E.H.B.M. and van den Oord, G.H.J. 1985, A&A, 62, 197. 8. Mewe, R., Lemen, J.R. and van den Oord, G.H.J. 1986, A&A, 65, 511. 9. Morrison, R. and McCammon, D. 1983, ApJ, 270, 119. 10. Sarazin, C.L. and Lieu, R. 1998, ApJ, 494, L177-180. 11. Ensslin, T.A. and Biermann, P.L. 1998, A&A, 330, 90-98. 12. Jakobsen, P. and Kahn, S.M. 1986, ApJ, 309, 682. 13. Siddiqui, H., Stewart, G.C. and Johnston, R.M. 1998, A&A, 334, 71-86. 14. Fabian, A.C. 1997, Science, 275, 48-49.
no-problem/0001/astro-ph0001002.html
ar5iv
text
# On the inverse Compton scattering model of radio pulsars ## 1. Introduction “The crisis today, over twenty years later, is that theory has produced nothing more useful to compare observations with in order to interpret them” (Radhakrishnan 1992). On the observational side, the emission beam of a radio pulsar can be divided into two (core, inner conical) or three (plus an outer conical) components through careful studies of the observed pulse profiles and polarization characteristics (Rankin 1983; Lyne & Manchester 1988). Many pulsar profiles at meter wavelengths are dominated by core components. In the usual models of radio pulsars which are related to the polar cap, it is difficult to get a central or “core” emission beam. Considering such an obvious discrepancy between observations and theory, several authors (e.g., Beskin et al. 1988; Qiao 1988a,b; Wang et al. 1988; Lyutikov et al. 1999) presented their models for the core emission. In the following, we discuss some characteristics of the inverse Compton scattering (ICS) model (Qiao 1992; Qiao and Lin 1998, hereafter QL98; Liu et al. 1999; Qiao et al. 1999; Xu et al. 2000). ## 2. On Emission beams ### 2.1. Basic idea of the ICS model The basic idea of the ICS model (Qiao 1988a,b; QL98) can be briefly described as follows. Low-frequency electromagnetic waves are produced in the polar cap due to gap sparking and afterwards propagate out freely; such low energy photons are then inverse-Compton-scattered by the secondary particles from the pair cascades, and the up-scattered waves are the radio emission observed from pulsars. With the conditions $`B\ll B_q=4.414\times 10^{13}`$ Gauss and $`\gamma \hbar \omega _0\ll m_ec^2`$, the frequency formula of the ICS mechanism (Qiao 1988a,b; QL98) is $$\omega \approx 2\gamma ^2\omega _0(1-\beta \mathrm{cos}\theta _i),$$ $`(1)`$ where $`\omega _0`$ and $`\omega `$ are the frequencies of incident and scattered waves, respectively. 
The Lorentz factor of the secondary particles is $`\gamma =1/\sqrt{1-\beta ^2}`$, and $`\theta _i`$ is the incident angle (the angle between the direction of motion of a particle and an incoming photon). Using the formula above, we have obtained the so-called “beam-frequency figure” (see QL98, Qiao et al. 1999), which is the plot of the beaming angle $`\theta _\mu `$ (the angle between the emission direction and the magnetic axis) versus observing frequency $`\nu `$. The shape of the emission beams, the emission heights and other emission properties can be derived from the “beam-frequency figure”. ### 2.2. The central emission beam and hollow cones Two distinct types of emission components were identified from observations, namely, a ‘core’ or central emission beam near the center and one or two hollow cone(s) surrounding the core. The ICS model can reproduce one core and two cones at the same time. The emission beam in the model could consist of (1) core + inner cone emission, for pulsars with shorter rotational periods; (2) core + inner and outer cones, for pulsars with longer periods. Furthermore, the core emission may in fact be a small hollow cone, i.e., not fully filled. This can be identified from some pulsar profiles through “Gaussian de-composition” (Qiao et al. 1999). In the model, we found that these emission components are emitted at different heights. The ‘core’ emission is emitted at a place close to the surface of the neutron star, the ‘inner cone’ at a higher place, and the ‘outer cone’ at the highest place. We also found that the sizes of the emission beams should change with frequency. As the observing frequency increases, the ‘core’ emission beam becomes narrower; the ‘inner cone’ size increases or at least stays constant; but the ‘outer cone’ size decreases. For given magnetic inclination and impact angles, we can figure out how the shape of pulse profiles varies with frequency (see Qiao et al. 1999). 
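As a small numerical illustration of eq. (1) (a sketch, not a calculation from the paper), one can check that with representative values $`\gamma \sim 10^3`$ and $`\omega _0\sim 10^5`$ s<sup>-1</sup> (the orders of magnitude adopted in Sect. 3.1 below), the up-scattered frequencies fall in the radio band and rise steeply with the incident angle $`\theta _i`$:

```python
import math

# Sketch of the ICS frequency formula, eq. (1):
#   omega ~ 2 * gamma^2 * omega_0 * (1 - beta*cos(theta_i)).
# gamma ~ 1e3 and omega_0 ~ 1e5 s^-1 are representative orders of magnitude.

def scattered_freq_hz(gamma, omega0, theta_i):
    beta = math.sqrt(1.0 - 1.0 / gamma ** 2)
    omega = 2.0 * gamma ** 2 * omega0 * (1.0 - beta * math.cos(theta_i))
    return omega / (2.0 * math.pi)  # convert angular frequency to Hz

gamma, omega0 = 1.0e3, 1.0e5
for theta in (0.01, 0.05, 0.1, 0.2):
    print(f"theta_i = {theta:4.2f} rad -> nu ~ {scattered_freq_hz(gamma, omega0, theta):.2e} Hz")
```

For $`\theta _i\sim 0.1`$ rad this gives $`\nu \sim 1.6\times 10^8`$ Hz, i.e. meter-wavelength emission; the strong dependence on $`\theta _i`$ is what generates the beam-frequency mapping discussed above.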
This is the theoretical basis for the classification of radio pulsars. ### 2.3. Classification of radio pulsars The pulse profile shapes, and especially the variation of the profiles with observing frequency, are very important for understanding the emission mechanism of pulsars. In the ICS model, various pulse profiles and their evolution over a wide frequency range can be well simulated. As the impact angle gradually increases, pulsars can be grouped into two types (and further sub-types) according to the ICS model (Qiao et al. 1999). Type I – Pulsars with only core and inner conical emission. Such pulsars usually have shorter periods (and thus larger polar caps). There are two sub-types. Type Ia: Pulsars of this type have a very small impact angle, and normally show single pulse profiles at low frequencies, but triple profiles at high frequencies. Prototypes of this form are PSR B1933+16 and PSR B1913+16. Type Ib: Pulsars of this type have a larger impact angle than that of Type Ia. Though at low frequencies these pulsars also show single profiles, they will evolve to double profiles at higher frequencies, since the line-of-sight only cuts the inner conical beam. An example of this type is PSR B1845-01. Type II – Pulsars with all three emission beams. Pulsars with average periods often have such a feature. IIa: Pulsars of this type have five components at most observing frequencies, since the small impact angle makes the line-of-sight cut all three beams (the core, the inner cone and the outer cone). One important point is that in the ICS model, the five pulse components will evolve to three or even one component at very low frequencies (see QL98, Fig. 6a, line A). Such a feature has been observed from PSR B1237+25 (Kuzmin et al. 1998). IIb: For this type, the impact angle is larger than for IIa, so that at higher frequencies the line-of-sight does not cut the core beam. 
Thus these pulsars show three components at low observing frequencies, but four components at higher frequencies, and finally just double profiles at the highest frequencies. PSR B2045-16 is a good example. IIc: Pulsars of this type have the largest impact angle, so that only the outer conical beam is cut by the line-of-sight. The pulse profiles of this type are double at all observing frequencies, with the separation between the two peaks decreasing at higher frequencies. This is just the traditional radius-to-frequency mapping, which has also been calculated in the curvature radiation (CR) picture. The prototype is PSR B0525+21. An alternative situation of this type is that pulsars have single profiles at most observing frequencies, but develop double components at very low observing frequencies. An example is PSR B0950+08. ## 3. Linear and circular polarization Pulsar radio emission is generally found to be linearly polarized over all longitudes of the profile, sometimes as high as 100%. The position angle sweep has an ‘S’ shape, which can be well understood within the rotating vector model. However, depolarization and position angle jumping are often found in the integrated profiles of some pulsars. Considering the retardation effect due to the relative phase shift between pulsar beam components (the core and conal emission components are emitted from different heights in the ICS model), we find that the phase shift of the beam centers of the different components could cause the depolarization and position angle jump(s) in integrated profiles (Xu et al. 1997). ### 3.1. Basic idea for polarization in the ICS model We have calculated the polarization features of the ICS model. Now we would like to present one key point about how circular polarization is produced in the ICS model. As many authors (Ruderman & Sutherland 1975; QL98) argued, there may exist an inner gap above the pulsar polar cap. 
The continuous gap formation and breakdown provide both low frequency waves with $`\omega _0\sim 10^5`$ s<sup>-1</sup> and out-streaming relativistic particles with $`\gamma \sim 10^3`$. The low frequency waves will be up-scattered by the relativistic particles (moving direction $`𝐧_\mathrm{e}`$) to the observed radio waves. The frequency ($`\omega `$) of the out-going waves is determined by eq. (1), while their complex amplitudes $`𝐄`$ are determined by (Liu et al. 1999) $$𝐄=C\frac{\mathrm{sin}\theta ^{}}{\gamma ^2(1-\beta \mathrm{cos}\theta ^{})^2}e^{i(\frac{\omega _0}{c}R-\frac{\omega }{c}𝐑\cdot 𝐧+\varphi _0)}𝐞_\mathrm{s},$$ $`(2)`$ here n is the observing direction, R is the low frequency wave vector, $`\theta ^{}`$ is the angle between $`𝐧_\mathrm{e}`$ and n, $`C`$ is a constant, e<sub>s</sub> is an electric unit vector in the plane of n<sub>e</sub> and n, and $`\varphi _0`$ is the initial phase determined by the incident wave. Moving out and losing energy via inverse Compton scattering, the particle undergoes a decay in $`\gamma `$, which is assumed to be (QL98) $$\gamma =\gamma _0[1-\xi (r-R_0)/R_e].$$ $`(3)`$ The electric field of the total scattered wave in a direction n is the sum of the E of each electron if such electrons scatter coherently. The emission region for a certain $`\omega `$ can be obtained from eq. (1). As pointed out in QL98, there are generally three possible emission zones, corresponding to the core, inner and outer cones. The scattered electromagnetic waves can be superposed coherently if the low-frequency waves are from the same sparking and the emission region is smaller than $`2\pi c/\omega _0`$. The coherent superposition of scattered waves will result in circular polarization as well as linear polarization (Xu et al. 2000). ### 3.2. Further considerations and numerical results ‘Subpulse’ circular polarization patterns When the line of sight sweeps across a mini-beam, we can see a transient “subpulse”. 
If the line of sight sweeps across the center of a core or inner conal mini-beam, the circular polarization will experience a central sense reversal; otherwise it will be dominated by one sense, either left-handed or right-handed, according to its traversal relative to the mini-beam. Subpulse Position Angles Subpulse position angles show diverse values, which are generally centered on the projection of the magnetic field. The variation range is quite small for sub-pulses from outer cone emission zones, and becomes larger as the emission height decreases, that is, larger for the inner cone and core components. The position angle values are scattered around the projection of the magnetic field lines. When the emission of all the sub-pulses is summed up, the position angle takes the mean value. So naturally the mean position angle is related to the projection of the magnetic field lines, exactly as assumed in the rotating vector model. Circular and linear polarization of mean profiles An observer can see bunches of particles all around the line of sight. The polarization received from individual bunches is different. When the magnetic axis is inclined from the rotation axis, the probability of the sparkings was assumed in our simulation to decrease exponentially with the azimuthal angle from the projection of the rotational axis. We then get significant (but smaller than in subpulses) circular polarization for the core components of mean pulses. But this is not the case for the cone components. ## 4. Conclusion The ICS model can reproduce many observational properties of radio pulsars. There are emission beams of the core, inner and outer cones. These beams are emitted from different heights. The pulse profiles change with frequency, similar to what is observed from many pulsars. The transient ‘sub-pulses’ have very high circular polarization (sometimes as strong as 100%). 
In the mean pulse profile, circular polarization is much higher in the core than in the inner and outer cones. #### Acknowledgments. This work is supported by the National Natural Science Foundation of China, by the Climbing project of China, by the Doctoral Program Foundation of Institution of Higher Education in China and by the Youth Foundation of Peking University.
no-problem/0001/cond-mat0001299.html
ar5iv
text
# Coarsening Dynamics of a Nonconserved Field Advected by a Uniform Shear Flow ## Abstract We consider the ordering kinetics of a nonconserved scalar field advected by a uniform shear flow. Using the Ohta-Jasnow-Kawasaki approximation, modified to allow for shear-induced anisotropy, we calculate the asymptotic time dependence of the characteristic length scales, $`L_{\parallel }`$ and $`L_{\perp }`$, that describe the growth of order parallel and perpendicular to the mean domain orientation. In space dimension $`d=3`$ we find $`L_{\parallel }\sim \gamma t^{3/2}`$, $`L_{\perp }\sim t^{1/2}`$, where $`\gamma `$ is the shear rate, while for $`d=2`$ we find $`L_{\parallel }\sim \gamma ^{1/2}t(\mathrm{ln}t)^{1/4}`$, $`L_{\perp }\sim \gamma ^{-1/2}(\mathrm{ln}t)^{-1/4}`$. Our predictions for $`d=2`$ can be tested by experiments on twisted nematic liquid crystals. The coarsening dynamics of systems quenched from a disordered phase into a two-phase region are by now reasonably well understood . Domains of the ordered phases form rapidly, and then coarsen with time $`t`$, i.e. there is a characteristic length scale (‘domain size’), $`L(t)`$, which grows with time, typically as a power law. Furthermore, there is good evidence for a form of dynamical scaling in which the domain morphology is statistically scale-invariant when all lengths are scaled by $`L(t)`$ . In recent years attention has been directed at systems subjected to external driving, e.g. an imposed shear . The shear induces an anisotropy, leading to different coarsening rates in directions parallel and perpendicular to the flow. This has been observed in experiments . An important open question for such driven systems is whether the coarsening continues indefinitely (for an infinite system), as in the case of no driving, or whether the driving force arrests the coarsening, leading to a steady state. 
For the case of a sheared phase-separating binary fluid, it has been argued, on the basis of the stability of a single drop of one fluid immersed in another, that the domain scale will eventually saturate at a maximum length scale, $`L_{max}`$, determined by the shear rate: $`L_{max}\sim \sigma /\gamma \eta `$, where $`\sigma `$ and $`\eta `$ are the surface tension and viscosity respectively . In the multi-domain context, it has been argued that a steady state would be achieved through the shear-induced stretching and breaking of domains. However, the experimental evidence for a steady state is not completely clear-cut. In particular, as emphasized by Cates et al. , saturation of the domain length occurs naturally in a finite-size system when the domain length becomes of the same order as the system size. A second important question concerns the nature of the growth laws, and the nature of scaling – if it exists – in this anisotropic system. Naively, one might expect two characteristic length scales, $`L_{\parallel }`$ and $`L_{\perp }`$, measuring correlations parallel and perpendicular to the flow. In dimension $`d=2`$, for example, it would be natural to conjecture that the coarsening follows conventional scaling when lengths along and perpendicular to the flow are scaled by these characteristic lengths. We will show, however, that the actual scaling is subtly different. To address these issues it would be helpful to have an exactly soluble model. The large-$`n`$ limit of the $`n`$-vector model has been solved for a conserved order-parameter field, but without hydrodynamics (i.e. the order parameter is simply advected by the shear flow) . Length scales $`L_{\parallel }`$ and $`L_{\perp }`$, growing as $`\gamma (t^5/\mathrm{ln}t)^{1/4}`$ and $`(t/\mathrm{ln}t)^{1/4}`$ respectively, describe correlations along and perpendicular to the flow. 
Scaling is not strictly satisfied (there is instead a form of ‘multiscaling’), but this is presumably an artifact of the large-$`n`$ limit, as in the zero-shear case . There is no saturation at late times in this model. However, for the large-$`n`$ model there are no domains, so the concept of stretching and breaking loses its meaning. What is needed is an analytically tractable model for a scalar order parameter. For a nonconserved field, the Ohta-Jasnow-Kawasaki (OJK) approximation fulfills this requirement: in the absence of shear, it captures the essential features of the coarsening process . In this Letter we present the results of applying the OJK approach to a nonconserved scalar field advected by a uniform shear flow. The equation of motion for the order parameter is $$\partial _t\varphi +\nabla \cdot (𝐮\varphi )=\nabla ^2\varphi -V^{\prime }(\varphi ),$$ (1) where $`V(\varphi )`$ is a symmetric double well potential, and $`𝐮=\gamma y\widehat{𝐱}`$ is the velocity of the imposed shear flow, with the flow in the $`x`$-direction. Two aspects of real binary fluids are neglected in this model: the order parameter is not conserved, and it is simply advected by the shear rather than being coupled to the fluid velocity through the Navier-Stokes equation. This approach is, however, a very instructive first step which yields important insights concerning the main questions raised earlier, namely the nature of the asymptotics and of dynamical scaling, in the physically correct context of a scalar field. Furthermore, in $`d=2`$ this model describes the coarsening dynamics of a twisted nematic liquid crystal, in which disclination lines, separating domains of opposite twist, relax viscously, driven by their line tension . Under shear, this system will furnish an experimental test of our predictions. It should be noted that our analysis leads to very different behavior in $`d=2`$ and $`d=3`$: this is one of the main results of the present work. 
The result of the OJK analysis is that, for space dimension $`d=3`$, $`L_{\parallel }\sim \gamma t^{3/2}`$ and $`L_{\perp }\sim t^{1/2}`$, i.e. the coarsening rate parallel to the flow is enhanced by a factor $`\gamma t`$ relative to the unsheared system, while the growth of $`L_{\perp }`$ is unchanged (the same features describe the large-$`n`$ result ). Hence there is no saturation of the coarsening. For $`d=3`$, furthermore, conventional scaling holds. For $`d=2`$, however, very different results are obtained: $`L_{\parallel }\sim \gamma ^{1/2}t(\mathrm{ln}t)^{1/4}`$, and $`L_{\perp }\sim \gamma ^{-1/2}(\mathrm{ln}t)^{-1/4}`$. These results imply $`L_{\parallel }L_{\perp }\sim t`$ for $`d=2`$, i.e. the product of the two length scales, or ‘scale area’, is independent of the shear rate and has the same form as for the unsheared system, where $`L_{\parallel }=L_{\perp }\sim t^{1/2}`$. We will show that this result can be understood by a topological argument. An important subtlety in $`d=2`$ is that $`L_{\parallel }`$ and $`L_{\perp }`$ have to be defined as characteristic scales parallel and perpendicular to the mean domain orientation, instead of the flow direction. This distinction is not important for $`d=3`$, but crucial for $`d=2`$. Only with the new definition is dynamical scaling recovered. For $`d=2`$, furthermore, $`L_{\perp }`$ decreases with time asymptotically. Our approach breaks down, however, when $`L_{\perp }`$ becomes comparable with the width, $`\xi `$, of the interfaces between domains, which occurs after a time of order $`\mathrm{exp}(\mathrm{const}/\gamma ^2\xi ^4)`$, when we expect the domains to break, as observed in simulations with conserved dynamics in $`d=2`$ . The OJK approach starts from the Allen-Cahn equation relating the normal component of the interface velocity, $`v_n`$, to the local curvature of the interface, $`K=\nabla \cdot 𝐧`$, where $`𝐧`$ is the normal to the interface, $$v_n=-\nabla \cdot 𝐧+𝐮\cdot 𝐧,$$ (2) where the final term is the drift due to the shear. The derivation of this equation from (1) follows the same route as the zero-shear case . 
The next step is to introduce a smooth auxiliary field $`m(𝐱,t)`$ whose zeroes coincide with those of $`\varphi `$. In a frame comoving with the interface one has $`dm/dt=0=\partial _tm+v_n|\nabla m|`$. Combining this with (2), and using $$𝐧=\nabla m/|\nabla m|,$$ (3) yields the following equation for $`m`$, $$\partial _tm+𝐮\cdot \nabla m=\nabla ^2m-\underset{a,b=1}{\overset{d}{\sum }}n_an_b\partial _a\partial _bm.$$ (4) So far this is exact. Equation (4) is highly non-linear, however, due to the implicit dependence of $`𝐧`$ on $`m`$ through (3). The OJK approximation involves linearizing the $`m`$ equation by replacing the product $`n_an_b`$ by its spatial average, $$D_{ab}(t)=\langle n_a(𝐱,t)n_b(𝐱,t)\rangle ,$$ (5) leading to the following equations for $`m`$ and $`D`$: $`\partial _tm+\gamma y\partial _xm`$ $`=`$ $`\nabla ^2m-D_{ab}(t)\partial _a\partial _bm`$ (6) $`D_{ab}`$ $`=`$ $`{\displaystyle \frac{\langle \partial _am\partial _bm\rangle }{\langle (\nabla m)^2\rangle }}.`$ (7) In the absence of shear the coarsening is isotropic, and the matrix $`D`$ has the simple form $`D_{ab}=\delta _{ab}/d`$, independent of $`t`$. The resulting diffusion equation for $`m`$ is readily solved, leading to a $`\sqrt{t}`$ coarsening. For $`\gamma \ne 0`$, the coarsening is anisotropic: $`D_{ab}`$ is both non-diagonal and time-dependent. From the definition (5) of $`D_{ab}`$, though, the sum-rule, $`\mathrm{Tr}D=1`$, is trivially valid. From the symmetry of (6) under the combined transformations $`x\to -x`$, $`y\to -y`$ at fixed $`z`$, and under the separate transformation $`z\to -z`$ at fixed $`x,y`$, we see that $`D_{ab}`$ has a block diagonal form, with $`D_{xy}=D_{yx}`$ the only non-zero off-diagonal elements. 
In Fourier space, (6) reads $`{\displaystyle \frac{\partial m(𝐤,t)}{\partial t}}-\gamma k_x{\displaystyle \frac{\partial m(𝐤,t)}{\partial k_y}}`$ $`=`$ $`-{\displaystyle \underset{ab}{\sum }}\mathrm{\Omega }_{ab}(t)k_ak_bm(𝐤,t),`$ (8) $`\mathrm{\Omega }_{ab}(t)`$ $`=`$ $`\delta _{ab}-D_{ab}(t).`$ (9) This is readily solved by the change of variables $`𝐪=A𝐤`$, $`\tau =t`$, and $`\mu (𝐪,\tau )=m(𝐤,t)`$, where $`A`$ has elements $$A_{ab}=\delta _{ab}+\gamma t\delta _{a2}\delta _{b1}.$$ (10) In the new variables, the left-hand side of (8) becomes $`\partial _\tau \mu (𝐪,\tau )`$. Integrating the equation, and transforming back to the original variables, gives $$m(𝐤,t)=m(\stackrel{~}{𝐤}(t),0)\mathrm{exp}\left(-\frac{1}{4}\underset{ab}{\sum }k_aM_{ab}(t)k_b\right),$$ (11) where $`\stackrel{~}{𝐤}(t)=(k_x,k_y+\gamma k_xt,k_z)`$ and the matrix $`M`$ is given by $`M(t)`$ $`=`$ $`A^T(t)R(t)A(t)`$ (12) $`R(t)`$ $`=`$ $`4{\displaystyle \int _0^t}𝑑t^{\prime }[A^T(t^{\prime })]^{-1}\mathrm{\Omega }(t^{\prime })[A(t^{\prime })]^{-1}.`$ (13) Equations (11)–(13) determine the function $`m(𝐤,t)`$ completely if the matrix $`D_{ab}(t)`$ is known. However, $`D`$ is itself determined from the distribution for $`m`$, via (7), so we have to solve these equations self-consistently. We take the initial condition, $`m(𝐤,0)`$, to be a gaussian random variable with correlator $`\langle m(𝐤,0)m(𝐤^{\prime },0)\rangle =\mathrm{\Delta }\delta (𝐤+𝐤^{\prime })`$. Then the real-space correlation function of $`m`$, $`G(𝐫,t)=\langle m(𝐱+𝐫,t)m(𝐱,t)\rangle `$, is obtained from (11) as $$G(𝐫,t)=G(0,t)\mathrm{exp}\left(-\frac{1}{2}\underset{a,b}{\sum }r_a(M^{-1})_{ab}r_b\right),$$ (14) where the precise expression for $`G(0,t)`$ is not relevant in what follows. All the information concerning domain growth in this system is contained in the matrix $`M_{ab}(t)`$. To obtain a closed set of equations we have to express the elements of the matrix $`D`$ in terms of the elements of $`M`$. 
To do this we first rewrite (7) using an integral representation for the denominator: $$D_{ab}=\frac{1}{2}\int _0^{\infty }𝑑u\left\langle \partial _am\partial _bm\mathrm{exp}\left(-\frac{u}{2}(\nabla m)^2\right)\right\rangle .$$ (15) Since $`m`$ is a Gaussian field, the required average can be computed using the probability distribution $`P(𝐯)`$ of the vector $`𝐯=\nabla m`$. This distribution is determined by the correlator $`\langle v_av_b\rangle =-\partial _a\partial _b\langle m(𝐫,t)m(\mathrm{𝟎},t)\rangle |_{𝐫=\mathrm{𝟎}}=\langle m^2\rangle (M^{-1})_{ab}`$, from which one infers that $$P(𝐯)\propto \mathrm{exp}\left(-\frac{1}{2\langle m^2\rangle }\underset{a,b}{\sum }v_aM_{ab}v_b\right).$$ (16) Carrying out the average in (15) gives $`D_{ab}`$ $`=`$ $`{\displaystyle \frac{1}{2}}{\displaystyle \int _0^{\infty }}𝑑u\left({\displaystyle \frac{detM}{detN}}\right)^{1/2}(N^{-1})_{ab}`$ (17) $`N_{ab}`$ $`=`$ $`M_{ab}+u\delta _{ab}.`$ (18) We now proceed to outline the solution of this closed set of equations. Full details will be given elsewhere . We are interested in the large-$`t`$ asymptotics. This limit simplifies the analysis, which is still, however, quite subtle. The results are very different in three and two dimensions, so we discuss these cases separately. We begin with some general remarks on the expected form of $`D_{ab}`$. The effect of the shear is to produce elongated domain structures aligned, at late times, at a small angle ($`\sim 1/\gamma t`$; see below) to the flow direction. As a result, the component $`n_1`$ of the normal to the interface is very small almost everywhere at late times, implying $`D_{11}\to 0`$ for $`t\to \infty `$. The sum rule, $`\mathrm{Tr}D=1`$, is therefore exhausted by the remaining diagonal components of $`D`$ for large $`t`$. In particular, for $`d=2`$ we have $`D_{22}\to 1`$ for $`t\to \infty `$, and $`\mathrm{\Omega }_{11}\to 0`$ in (8). This case requires special care. For $`d=3`$, on the other hand, $`D_{22}+D_{33}\to 1`$, and it turns out that both $`D_{22}`$ and $`D_{33}`$ approach non-zero limits. This case is, therefore, simpler to analyse. The case $`𝐝=\mathrm{𝟑}`$. 
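The representation (17)-(18) can be checked in the isotropic case: for $`M=mI`$ the $`u`$ integral must return $`D_{ab}=\delta _{ab}/d`$, recovering the zero-shear result quoted earlier. A small numerical sketch (the values of $`m`$ and $`d`$ are arbitrary test choices):

```python
import numpy as np
from scipy.integrate import quad

# Check of eqs. (17)-(18): with M = m*I the integral must give
# D_ab = delta_ab/d, i.e. the isotropic zero-shear form D = I/d, and the
# sum rule Tr D = 1 must hold.  m and d are arbitrary test values.
def D_from_M(M):
    dim = len(M)
    detM = np.linalg.det(M)
    def integrand(u, a, b):
        N = M + u * np.eye(dim)
        return 0.5 * np.sqrt(detM / np.linalg.det(N)) * np.linalg.inv(N)[a, b]
    return np.array([[quad(integrand, 0.0, np.inf, args=(a, b))[0]
                      for b in range(dim)] for a in range(dim)])

D2 = D_from_M(0.7 * np.eye(2))
D3 = D_from_M(2.5 * np.eye(3))
```

The same routine evaluates (17) for any anisotropic $`M`$, which is how the self-consistency conditions below can be explored numerically.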
With the assumption that $`D_{11}\to 0`$ and $`D_{12}\sim 1/t`$, while $`D_{22}`$ and $`D_{33}`$ remain non-zero for $`t\to \infty `$, the asymptotics of the matrix elements $`M_{ab}`$ are readily obtained from (9), (10), (12) and (13): $`M_{11}`$ $`=`$ $`{\displaystyle \frac{4}{3}}\gamma ^2t^3(1-D_{22}^{\infty })`$ (19) $`M_{12}`$ $`=`$ $`2\gamma t^2(1-D_{22}^{\infty })`$ (20) $`M_{22}`$ $`=`$ $`4t(1-D_{22}^{\infty })`$ (21) $`M_{33}`$ $`=`$ $`4tD_{22}^{\infty },`$ (22) where $`D_{22}^{\infty }`$ is the large-$`t`$ limit of $`D_{22}`$, while $`M_{13}=M_{23}=0`$ by symmetry. Using these limiting forms, the integrals (17) can be evaluated asymptotically. After some algebra one finds $$D_{22}^{\infty }=\left(1+\frac{1}{2}\left(\frac{1-D_{22}^{\infty }}{D_{22}^{\infty }}\right)^{1/2}\right)^{-1}$$ (23) while $`D_{33}^{\infty }=1-D_{22}^{\infty }`$ and $`D_{13}=D_{23}=0`$. Equation (23) has the non-trivial solution $`D_{22}^{\infty }=4/5`$, implying $`D_{33}^{\infty }=1/5`$ and demonstrating the self-consistency of the initial ansatz. Finally we find $`D_{11}\simeq 3\mathrm{ln}(\gamma t)/(\gamma t)^2`$ and $`D_{12}\simeq -6/(5\gamma t)`$, consistent with our initial assumption. The characteristic length scales in the system are given, from (14), by the square roots of the eigenvalues of the matrix $`M`$. Using (19)–(22), with $`D_{22}^{\infty }=4/5`$, we find $$L_{\parallel }=\frac{2}{\sqrt{15}}\gamma t^{3/2},L_{\perp }=\frac{1}{\sqrt{5}}t^{1/2},L_3=\frac{4}{\sqrt{5}}t^{1/2},$$ (24) for $`t\to \infty `$. The corresponding eigenvectors are $$𝐞_{\parallel }=\left(\begin{array}{c}1\\ \frac{3}{2\gamma t}\\ 0\end{array}\right),𝐞_{\perp }=\left(\begin{array}{c}-\frac{3}{2\gamma t}\\ 1\\ 0\end{array}\right),𝐞_3=\left(\begin{array}{c}0\\ 0\\ 1\end{array}\right),$$ (25) implying that the principal axes in the $`xy`$ plane are rotated, relative to the $`x`$ and $`y`$ axes, through an angle $`3/2\gamma t`$, which we can interpret as the angle between the average orientation of the domain structure and the flow direction. 
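Both $`d=3`$ statements can be verified numerically: $`D_{22}=4/5`$ is indeed a fixed point of eq. (23), and evaluating the integral (13) with the constant asymptotic matrix $`\mathrm{\Omega }=\mathrm{diag}(1,1/5,4/5)`$ and forming $`M=A^TRA`$ (eq. (12)) reproduces the leading growth laws (19)-(22). A sketch, where $`\gamma `$ and $`t`$ are arbitrary large-$`t`$ test values:

```python
import numpy as np

# (i) D22 = 4/5 is a fixed point of eq. (23).
f = lambda D: 1.0 / (1.0 + 0.5 * np.sqrt((1.0 - D) / D))
assert abs(f(0.8) - 0.8) < 1e-12

# (ii) With the constant asymptotic Omega = 1 - D = diag(1, 1/5, 4/5), the
# integral (13) plus M = A^T R A reproduces the leading forms (19)-(22).
gamma, t, D22 = 1.0, 100.0, 0.8
Omega = np.diag([1.0, 1.0 - D22, D22])

def A(s):
    a = np.eye(3)
    a[1, 0] = gamma * s       # eq. (10): A_ab = delta_ab + gamma*t*d_a2*d_b1
    return a

s = np.linspace(0.0, t, 4001)
B = np.array([np.linalg.inv(A(u).T) @ Omega @ np.linalg.inv(A(u)) for u in s])
R = 4.0 * ((B[:-1] + B[1:]) * 0.5 * np.diff(s)[:, None, None]).sum(axis=0)
M = A(t).T @ R @ A(t)

assert np.isclose(M[0, 0], (4.0/3) * gamma**2 * t**3 * (1 - D22), rtol=2e-3)
assert np.isclose(M[0, 1], 2.0 * gamma * t**2 * (1 - D22), rtol=1e-3)
assert np.isclose(M[1, 1], 4.0 * t * (1 - D22), rtol=1e-3)
assert np.isclose(M[2, 2], 4.0 * t * D22, rtol=1e-3)
```

The small residual in $`M_{11}`$ is the subleading $`4t`$ contribution, which is negligible once $`\gamma t\gg 1`$.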
The three length scales are all distinct, though $`L_{\perp }`$ and $`L_3`$ grow in the same way, and coarsening continues indefinitely – the system does not approach a stationary state. The matrix elements $`M_{ab}`$ grow in the way expected if a naive form of scaling holds: $`M_{11}=L_{\parallel }^2`$, $`M_{12}\sim L_{\parallel }L_{\perp }`$, $`M_{22}\sim L_{\perp }^2`$, and $`M_{33}=L_3^2`$. This means that dynamical scaling holds, and the scaling variables can be taken to be $`x/L_{\parallel }`$, $`y/L_{\perp }`$, and $`z/L_3`$. The same simple structure does not, however, hold in $`d=2`$. The case $`𝐝=\mathrm{𝟐}`$. For $`d=2`$ the self-consistency problem is more tricky, because the quantity $`1-D_{22}(t)=D_{11}(t)`$ tends to zero as $`t\to \infty `$. For $`d=2`$ the integrals (17) can be evaluated exactly. Making the assumptions, to be verified subsequently, that asymptotically $`M_{11}\gg M_{12}\gg M_{22}`$, that $`\mathrm{Tr}M\gg \sqrt{detM}\gg M_{22}`$, and that $`D_{12}\sim 1/t`$, one can derive the following self-consistent equation for $`D_{11}(t)`$: $$D_{11}(t)=\frac{1}{\gamma t^2}\left(\frac{\int _0^t𝑑t^{\prime }t^{\prime 2}D_{11}(t^{\prime })}{\int _0^t𝑑t^{\prime }D_{11}(t^{\prime })}\right)^{1/2},$$ (26) with asymptotic solution $$D_{11}(t)=\frac{1}{2\gamma t\sqrt{\mathrm{ln}\gamma t}}.$$ (27) Using this result, the asymptotic results for the matrix elements, and the determinant, of the correlation matrix $`M`$ are obtained as $`M_{11}(t)`$ $`=`$ $`4\gamma t^2\sqrt{\mathrm{ln}\gamma t}`$ (28) $`M_{12}(t)`$ $`=`$ $`4t\sqrt{\mathrm{ln}\gamma t}`$ (29) $`M_{22}(t)`$ $`=`$ $`(4/\gamma )\sqrt{\mathrm{ln}\gamma t}`$ (30) $`detM(t)`$ $`=`$ $`4t^2,`$ (31) while $`D_{12}=-1/\gamma t`$. These results confirm, a posteriori, the assumptions made in their derivation, i.e. the solution is self-consistent. The asymptotic results for the $`M_{ab}`$ seem to imply $`detM=0`$, in contradiction to (31). To obtain (31) one has to keep subdominant contributions to the $`M_{ab}`$. \[Note that $`detM=detR`$ from (12)\]. 
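One can check numerically that the asymptotic form (27) solves eq. (26) up to slowly decaying logarithmic corrections. A sketch with $`\gamma =1`$; the lower integration limit is moved to $`t^{\prime }=e`$, which avoids the integrable singularity of $`1/\sqrt{\mathrm{ln}t^{\prime }}`$ and only affects subleading terms:

```python
import numpy as np
from scipy.integrate import quad

# Insert D11 from eq. (27) into the right-hand side of eq. (26) and compare
# with D11(t) itself.  Substituting u = ln t' turns the integrals into
#   int t'^2 D11 dt' = int e^{2u}/(2 sqrt(u)) du,  int D11 dt' = int du/(2 sqrt(u)).
# Agreement is only up to O(1/ln t), so convergence is logarithmically slow.
gamma = 1.0
D11 = lambda s: 1.0 / (2.0 * gamma * s * np.sqrt(np.log(gamma * s)))

t = 1.0e10
num = quad(lambda u: np.exp(2 * u) / (2 * np.sqrt(u)), 1.0, np.log(t))[0]
den = quad(lambda u: 0.5 / np.sqrt(u), 1.0, np.log(t))[0]
rhs = np.sqrt(num / den) / (gamma * t**2)

ratio = rhs / D11(t)   # ~1.13 at t = 1e10; drifts toward 1 as ln t grows
```
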
The characteristic length scales are given, as before, by the square roots of the eigenvalues of $`M`$. The eigenvalues of any $`2\times 2`$ matrix can be expressed as $`\lambda _\pm =[\mathrm{Tr}M\pm \sqrt{(\mathrm{Tr}M)^2-4detM}]/2`$. Using $`\mathrm{Tr}M\gg \sqrt{detM}`$ we obtain $`\lambda _+=\mathrm{Tr}M`$ and $`\lambda _{-}=detM/(\mathrm{Tr}M)`$, whence $$L_{\parallel }=2\sqrt{\gamma }t(\mathrm{ln}\gamma t)^{1/4},L_{\perp }=\frac{1}{\sqrt{\gamma }(\mathrm{ln}\gamma t)^{1/4}}.$$ (32) The corresponding eigenvectors are $$𝐞_{\parallel }=\left(\begin{array}{c}1\\ \frac{1}{\gamma t}\end{array}\right),𝐞_{\perp }=\left(\begin{array}{c}-\frac{1}{\gamma t}\\ 1\end{array}\right),$$ (33) giving a tilt angle $`1/\gamma t`$ between the domain orientation and the flow direction. The scale area in two dimensions is $`L_{\parallel }L_{\perp }=2t`$, independent of $`\gamma `$. This is the same result as the zero-shear case, where $`D_{ab}=\delta _{ab}/d`$ implies $`M_{ab}=2t\delta _{ab}`$ for $`d=2`$, i.e. $`L(t)=\sqrt{2t}`$. This result is special to $`d=2`$ and can be understood as follows. For an isolated domain, the rate of change of the area enclosed by the domain boundary is $`dA/dt=\oint 𝑑l\,v_n=\oint 𝑑l\,(𝐮\cdot 𝐧-\nabla \cdot 𝐧)`$ from (2). The second term is a topological invariant, equal to $`-2\pi `$ from the Gauss-Bonnet Theorem, while the first term is equal to $`\int _A𝑑^2x\,\nabla \cdot 𝐮`$, which vanishes for any divergence-free shear flow. While the $`\gamma `$-independence of $`dA/dt`$ has been proved only for closed loops of domain wall, we expect a similar result to hold for the scale area, i.e. $`d(L_{\parallel }L_{\perp })/dt=\mathrm{const}`$. It is very encouraging that the OJK approximation captures this feature of the $`d=2`$ coarsening. There is no equivalent result in $`d=3`$ because the surface integral $`\oint 𝑑S\,\nabla \cdot 𝐧`$ is no longer a topological invariant. The scaling is nontrivial in $`d=2`$ because, although $`M_{11}=L_{\parallel }^2`$, consistent with naive scaling, the corresponding results $`M_{12}\sim L_{\parallel }L_{\perp }`$, $`M_{22}\sim L_{\perp }^2`$, found in $`d=3`$, no longer hold in $`d=2`$. 
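The eigenvalue estimates $`\lambda _+\approx \mathrm{Tr}M`$ and $`\lambda _{-}\approx detM/\mathrm{Tr}M`$, together with the tilt angle $`\sim 1/\gamma t`$, can be checked against a direct diagonalization of $`M`$ built from (28)-(31); below, $`\gamma `$ and $`t`$ are arbitrary test values, and $`M_{22}`$ is adjusted at subleading order so that $`detM=4t^2`$ holds exactly:

```python
import numpy as np

# Build the d=2 correlation matrix from the leading forms (28)-(30), with
# M22 shifted at subleading order to enforce det M = 4 t^2 (eq. (31)), and
# compare eigenvalues/eigenvectors with eq. (32) and the 1/(gamma*t) tilt.
gamma, t = 0.5, 1.0e5
L = np.log(gamma * t)
M11 = 4 * gamma * t**2 * np.sqrt(L)
M12 = 4 * t * np.sqrt(L)
M22 = (M12**2 + 4 * t**2) / M11          # enforces det M = 4 t^2 exactly
M = np.array([[M11, M12], [M12, M22]])

evals, evecs = np.linalg.eigh(M)         # ascending: [lambda_minus, lambda_plus]
L_par, L_perp = np.sqrt(evals[1]), np.sqrt(evals[0])

v = evecs[:, 1]                          # major principal axis
tilt = abs(v[1] / v[0])                  # angle to the flow direction
```
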
This is associated with the fact that the leading-order contribution to $`detM`$ vanishes. The consequence is that scaling only holds when referred to the unique scaling axes (33), which are themselves time-dependent. The most interesting and suggestive feature of the $`d=2`$ result is that $`L_{\perp }`$ tends to zero as $`t\to \infty `$. Since our treatment is based on the ‘thin wall’ limit, in which domain walls are treated as having zero width, it will break down when $`L_{\perp }`$ becomes comparable with the width, $`\xi `$, of the walls, at which point we conjecture that domains will break, possibly arresting the coarsening. This can be tested by experiments on twisted nematic liquid crystals. In $`d=3`$, the present work provides strong evidence that, at least for the nonconserved scalar field considered here, the coarsening proceeds indefinitely. In this respect it is interesting that, in their $`d=3`$ simulations (including both conservation of the order parameter and hydrodynamics), Cates et al. found no evidence for a steady-state structure emerging that is independent of the system size, i.e. observed steady states could be attributed to finite size effects. We thank Peter Sollich for a useful discussion. This work was supported by EPSRC (UK) under grant GR/L97698.
# Excitation Spectra and Thermodynamic Response of Segmented Heisenberg Spin Chains ## I Introduction Low-dimensional electron systems are known to be particularly sensitive to disorder. It is therefore difficult to realize pure one- or two-dimensional behavior at very low temperatures and small frequencies in nature. This is somewhat disappointing in light of recent detailed theoretical predictions for the low-energy scaling behavior of paradigmatic quantum-many-body models, such as antiferromagnetic spin chains and ladders . It is the enhanced quantum fluctuations in these compounds which give rise to particular low-temperature scaling regimes in very pure samples, and at the same time make them highly unstable towards externally induced low-temperature transitions, such as localization by impurity scattering or three-dimensional long-range ordering due to residual small couplings between the lower-dimensional subsystems. It has recently been demonstrated that in certain 1D subsystems, such as random-exchange and random-spin chains, anomalously extended states can persist against disorder . The physical picture is that while most spins are bound in randomly distributed valence bonds, the unbound spins interact via virtual excitations, resulting in a zero-frequency band with power-law scaling. Furthermore, in the case of spin ladders and spin-Peierls compounds, doping with randomly placed non-magnetic impurities may actually induce quasi-long-range ordering due to effective inverse-power-law interactions between the residual “pruned” spins. This replacement of an originally short-ranged spin liquid state by impurity-induced quasi-long-range order can be viewed as a realization of the “order by disorder” phenomenon. In other compounds, impurity scattering may completely destroy the connectivity within one-dimensional subsystems. If this is the case, extended states cannot survive. 
Let us examine two specific realizations of such segmented quantum spin systems: (i) CuO chains, with non-magnetic impurities, and (ii) pinned charge density waves in quasi-1D materials. The first situation can be realized by doping a quasi-1D compound such as SrCuO<sub>2</sub> with Zn. In the pure material, antiferromagnetic superexchange between neighboring Cu<sup>2+</sup> $`\mathrm{d}_{\mathrm{x}^2-\mathrm{y}^2}`$ electrons is mediated by the filled O<sup>2-</sup> p-orbitals. By substituting Zn<sup>2+</sup> for Cu<sup>2+</sup>, static vacancies are created, and the infinite chain is separated into segments of length $`l`$ which follow a discrete distribution. A second physical way of realizing segmented spin chains is the pinning of one-dimensional charge density waves. If there is competition between poorly screened long-ranged repulsive Coulomb forces and short-range attractive forces, highly inhomogeneous density wave modulations occur, favoring particular segment lengths. Even small impurity scattering leads to a pinning of such structures. In both cases, there are ensembles of correlated spin segments which are most straightforwardly modelled by taking appropriate averages over distributions of finite clusters with open boundary conditions. The specific form of the distribution function strongly depends on the details of how the segments are formed. For example, in the case of randomly doped CuO chains a (discrete) Poisson distribution is natural, whereas for pinned, spatially modulated density waves, only two or three cluster sizes may dominate. An important factor, determining the proper distribution function, is the (meta-)stability of the random realizations: are they obtained from a quenched or an annealed cooling procedure? While in quenched realizations all clusters that occur at high temperatures also have non-vanishing weight in the zero-temperature distribution function, slower “annealed” cooling processes can lead to preferred sizes and shapes. 
In particular, segments with an electronic closed shell configuration have more stable groundstates than others, and thus receive a higher weight in an annealed cooling process. In this work, we systematically study such ensembles of antiferromagnetically correlated spin clusters, focusing on the evolution of the corresponding low-energy features in the dynamical spin excitation spectrum and on the uniform static susceptibility as a function of the hole concentration. In particular for small clusters, quantum effects are important, which makes any theoretical approach to this problem challenging. We therefore attack this task numerically, using exact numerical diagonalization and scaling laws derived from conformal field theory to calculate the static and dynamical magnetic response for variable segment sizes. This paper is organized as follows. In the next section, a procedure to obtain excitation spectra for segmented Heisenberg chains is explained. The appropriate distribution functions are derived, and the evolution of the spectra with impurity concentration is discussed. In the subsequent section, the static magnetic response, i.e. the uniform susceptibility, and the heat capacity are calculated for various ensembles of finite chains. In the final section we conclude with a discussion of possible extensions and experimental consequences of the procedure outlined in this paper. ## II Excitation Spectra at T=0 To study the zero-temperature dynamical response of randomly segmented Heisenberg chains we examine the dynamical structure factor $`S(q,\omega ,l)`$ of finite chains with open boundary conditions: $$S(𝐪,\omega ,l)=\frac{1}{lZ}\underset{n}{\sum }|<n|S_𝐪^z|0>|^2\delta (\omega -E_n+E_0),$$ (1) where $`l`$ is the length of the finite chain, $`Z`$ the partition function and $`n`$ runs over all possible final states, whereas $`|0>`$ is the groundstate. 
Let us first concentrate on the staggered magnetization $$S_\pi ^z=\underset{n=1}{\overset{l}{\sum }}(-1)^nS_n^z.$$ (2) Then $`S(\pi ,\omega ,l)`$ is well defined for open boundary conditions and chains of an even or odd number of sites $`l`$. In the case of even $`l`$ there is a unique singlet groundstate, whereas one has to take into account the doublet nature of the groundstate for odd $`l`$. Using numerical diagonalization techniques, we have obtained $`S(\pi ,\omega ,l)`$ with $`l=1,\mathrm{\dots },20`$. The resulting spectra are shown in Fig. 1. We observe that segments of even length $`l`$ have $`l/2`$ major peaks at non-zero frequencies, whereas those of odd length have a significant pole at $`\omega =0`$ and $`(l-1)/2`$ additional peaks at higher frequencies. The higher energy peaks have a more complex structure, because some of the final states are singlets. Zero-frequency peaks occur only in the odd length segments, reflecting the fact that their groundstates are doublets. As in the case of closed finite chains, equations derived from conformal field theory can be used to extract the finite-size scaling behavior of $`S(\pi ,\omega ,l)`$. To leading order, the pole positions and amplitudes are given by $`\omega _i(l)`$ $`=`$ $`\alpha _i/l+\beta _i/[l\mathrm{ln}(l)],`$ (3) $`A_i(l)`$ $`=`$ $`a_i+b_i\mathrm{ln}(1+l)+c_i\mathrm{ln}[1+\mathrm{ln}(1+l)],`$ (4) where the coefficients $`\alpha _i,\beta _i,a_i,b_i,`$ and $`c_i`$ can be treated as fit parameters. In Fig. 2 the pole positions and amplitudes of the lowest three poles are shown for clusters of up to 20 sites, along with the fits to the above scaling equations. In particular for the larger-size segments, these equations give an excellent fit to the numerical data. In the shorter segments higher-order logarithmic corrections for the amplitudes become increasingly relevant, and the quality of the fits deteriorates slightly in this regime. 
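For the smallest segments the construction of $`S(\pi ,\omega ,l)`$ can be reproduced with a few lines of dense exact diagonalization ($`J=1`$); this is a minimal sketch, not the code used for Figs. 1 and 2:

```python
import numpy as np

# Minimal exact diagonalization of open Heisenberg segments (J = 1), giving
# the T = 0 poles of S(pi, omega, l) of eqs. (1)-(2): frequencies
# omega_n = E_n - E_0 and weights |<n|S_pi^z|0>|^2 / l.
sx = np.array([[0., 1.], [1., 0.]]) / 2
sy = np.array([[0., -1j], [1j, 0.]]) / 2
sz = np.array([[1., 0.], [0., -1.]]) / 2

def site_op(op, i, l):
    m = np.eye(1)
    for j in range(l):
        m = np.kron(m, op if j == i else np.eye(2))
    return m

def poles(l):
    H = sum(site_op(s, i, l) @ site_op(s, i + 1, l)
            for i in range(l - 1) for s in (sx, sy, sz))
    E, V = np.linalg.eigh(H)
    Spi = sum((-1)**n * site_op(sz, n, l) for n in range(l))
    amps = np.abs(V.conj().T @ Spi @ V[:, 0])**2 / l
    return E - E[0], amps

w2, a2 = poles(2)   # a single pole, at omega = 1 J
w4, a4 = poles(4)   # l/2 = 2 poles; the lowest at omega ~ 0.659 J
assert len(np.unique(np.round(w4[a4 > 1e-8], 6))) == 2
```

The two-site pole at $`\omega =1J`$ and the four-site pole at $`\omega \approx 0.659J`$ agree with the values quoted for these segments later in this section.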
Note also that the peak of the one-site cluster at $`\omega =0`$ has an amplitude that does not follow the general trend. The amplitudes shown in Fig. 2 correspond to the dynamical spin response per site (Eq. (1)). In the following, we will consider averages over ensembles of finite chains, where the amplitudes of the individual segments enter as extensive quantities. In this case, the segment amplitudes per site shown in Fig. 2 have to be multiplied by the segment length $`l`$ (Eqs. 5 and 6). Now consider an infinite Heisenberg chain, doped with non-magnetic impurities, resulting in an ensemble of open-end segments of various lengths $`l`$. The average dynamical spin structure factor can then be calculated from $`S(\pi ,\omega )={\displaystyle \underset{l}{\sum }}lP(l)S(\pi ,\omega ,l),`$ (5) where $`P(l)`$ is an appropriately chosen distribution function. Using the pole structure of the response functions for the individual segments, we obtain $`S(\pi ,\omega )={\displaystyle \underset{l}{\sum }}{\displaystyle \underset{i}{\sum }}lP(l)A_i(l)\delta (\omega -\omega _i(l)).`$ (6) In practice, the $`\delta `$-function in Eq. 6 is replaced by a Lorentzian of width $`ϵ`$, which will be taken as $`ϵ=0.1J`$ throughout the paper. $`P(l)`$ determines the weight of each segment in the ensemble average, and extrinsic factors favoring certain cluster shapes over others enter through this function. If the chain segmentation occurs completely randomly, the corresponding distribution function is given by $`P(l)=\rho ^2(1-\rho )^l`$, where $`\rho \in [0,1]`$ is the concentration of vacant sites. This distribution function is normalized by $$\underset{l=1}{\overset{\infty }{\sum }}lP(l)=1-\rho ,$$ (7) and for $`\rho \ll 1`$ it can be approximated by $`P(l)\approx \rho ^2\mathrm{exp}(-\rho l)`$. We also note that since the total number of clusters per site is given by $`n_c=\underset{l}{\sum }P(l)=\rho (1-\rho )`$, the average length of the clusters is $`l_{av}=\underset{l}{\sum }lP(l)/n_c=1/\rho `$. 
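The quoted properties of the quenched distribution follow from standard geometric-series identities and are easily confirmed numerically ($`\rho =0.3`$ is an arbitrary test value):

```python
import numpy as np

# Numerical check of the properties of P(l) = rho^2 (1 - rho)^l:
# normalization sum_l l P(l) = 1 - rho, cluster density n_c = rho(1 - rho),
# and average cluster length l_av = 1/rho.
rho = 0.3
l = np.arange(1, 20001)
P = rho**2 * (1 - rho)**l

norm = (l * P).sum()
n_c = P.sum()
l_av = norm / n_c
```
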
The above distribution describes the case of quenched disorder in the infinite chain, i.e. the positions of the impurities are uncorrelated. In an annealed cooling process, even-length segments are favoured over odd ones, because they have a lower groundstate energy. We describe this situation by a similar distribution function $`P_a(l)=C\rho ^2(1-\rho )^l\delta _{0,l\,\mathrm{mod}\,2}`$ where $`C`$ is determined by the normalization condition (7). Only even-length spin segments occur in a chain with this distribution function for the impurities. Note that, according to common terminology, all of the ensemble averages which are discussed here are “quenched” because frozen disorder realizations are used. In this paper, the terms “annealed” and “quenched” only refer to the cooling procedures giving rise to different distribution functions. For true annealed disorder, however, the disorder variables would have to be treated dynamically. Here we wish to evaluate and analyze $`S(\pi ,\omega )`$ and $`S(\pi )`$ by using the finite-size scaling behavior of the lowest few poles in the spectrum. This few-mode approximation has been shown to be valid for sufficiently large impurity concentrations, but it tends to underestimate the spectral response for very small $`\rho `$. Let us use the lowest three poles of each segment as shown in Fig. 2, i.e. the index $`i`$ in Eq. (6) runs from $`i=1`$ to 3. Also, a cutoff length $`l_{max}=10000`$ is used in the sum over $`l`$. 
The resulting frequency-integrated staggered dynamical structure factor $`S(\pi )=\int 𝑑\omega S(\pi ,\omega )`$ is then given by $$S(\pi )=\underset{l=1}{\overset{l_{\mathrm{max}}}{\sum }}lP(l)S(\pi ,l),$$ (8) where $`S(\pi ,l)`$ is the frequency-integrated dynamical structure factor at wavevector $`\pi `$, with the scaling form $$S(\pi ,l)=a+b\mathrm{ln}(1+l)+c\mathrm{ln}[1+\mathrm{ln}(1+l)].$$ (9) Comparing the values of $`S(\pi )`$ obtained from exact numerical diagonalizations of ensembles of finite clusters with the result for $`S(\pi )`$ within the three-mode approximation, one finds that the three-mode approximation tends to underestimate $`S(\pi )`$ for small impurity concentrations. The difference in the integrated weight is due to neglecting the higher frequency poles that become more relevant for larger clusters and thus smaller impurity concentrations. This weight can be approximately restored by adding the properly normalized dynamical structure factor of an infinite Heisenberg chain, broadened by $`ϵ`$ to $`\mathrm{\Delta }S(\pi ,\omega )\sim ϵ/[\pi (\omega ^2+ϵ^2)]`$, neglecting logarithmic corrections. It turns out that these corrections to $`S(\pi )`$ are only relevant for very low impurity concentrations and are negligible for $`\rho >0.2`$. The resulting spectra are shown for different $`\rho `$ in Fig. 3 for the quenched and the annealed case. For both types of impurity distribution functions there are two main features that occur in $`S(\pi ,\omega )`$ upon increasing the number of vacancies in the infinite chain. First, the integrated spectral weight decreases rapidly upon increasing the impurity concentration. This is shown explicitly in Fig. 4, where the solid lines in part (a) and (b) represent the dependence of $`S(\pi )`$ on $`\rho `$. Note that in the annealed case the integrated spectral weight is decreasing at a slower rate than in the quenched case, especially for large impurity concentrations. 
This is due to the fact that the integrated spectral weight of finite chains is a monotonically increasing function of $`l`$. Thus the total response increases if at constant impurity concentration the odd-length segments (starting with $`l=1`$) are substituted by even-length segments (starting with $`l=2`$). Most of the suppressed spectral weight comes from the low-energy continuum of the response function due to the large segments. It can be shown by considering a lowest-order single-mode approximation that $`S(\pi ,\omega )`$ is in fact exponentially suppressed at low frequencies . The second common feature is the emergence of a discrete peak structure at larger frequencies (of order $`J`$), dominated by the contributions of the smallest cluster segments. The dominant segments, occurring according to the distribution function $`P(l)`$, can be identified from these higher energy peaks. In the annealed case, the smallest segments are the two-site clusters with a pole in $`S(\pi ,\omega ,l)`$ at $`\omega =1J`$, and the four-site chain with a major pole at $`\omega =0.66J`$. These poles are well separated from the low-energy continuum and carry most of the spectral weight of $`S(\pi ,\omega )`$ at impurity concentrations $`\rho \gtrsim 0.5`$ (Fig. 3(c)–(e)). The pole of the smallest segment at $`\omega =1J`$ dominates the dynamical response function in the annealed case at high concentrations, $`\rho >0.7`$. One thus expects that most of the spectral weight can be obtained from the two-site cluster for $`\rho `$ close to one. In fact, from Eq. (8) one finds $$S(\pi )\approx 2P(2)S(\pi ,2)\approx \frac{1-\rho }{2},\rho \to 1,$$ (10) explaining the linear behavior of $`S(\pi )`$ for large impurity concentrations. Since in the quenched case the single spin sites and the three-site chains are also present, their low frequency poles contribute strongly to the spectral weight. 
The major pole of the three-site chain at $`\omega =1.5J`$ is the first well-defined pole to separate from the low-energy continuum upon increasing the impurity concentration, as can be seen in Fig. 3(a) already at $`\rho =0.1`$. It dominates the spectrum away from the low-energy pseudo-branch until, at intermediate concentrations, the lowest-energy poles of the two- and four-site clusters are also separated from the continuum. Characteristic features of the underlying impurity distribution can thus be read off from the higher frequency part of the spectrum. The major difference between the two impurity distributions we have studied lies in the rate at which $`S(\pi ,\omega )`$ is suppressed for small frequencies. Consider the annealed case first. The even-site segments do not have a pole at $`\omega =0`$ due to their inherent finite spin gap. Thus the exponential suppression of the contributions from the large segments leads to the development of a pseudogap at small frequencies with increasing impurity concentration. The finite values of $`S(\pi ,0)`$ are due to our replacement of the $`\delta `$-peak in Eq. (6) by Lorentzians. This mimics the various broadening mechanisms in real materials, such as thermal broadening, scattering from phonons, and interactions with out-of-chain impurities. As shown in Fig. 4(c), the residual spectral weight $`S(\pi ,0)`$ is exponentially suppressed with increasing $`\rho `$. In Fig. 4(d) the important higher-energy peaks are compared to $`S(\pi ,0)`$, clearly indicating the reduction of the zero-frequency weight in the annealed case. Now consider the quenched case. All odd-length segments and especially the dominating one-spin segment show a pole in their dynamical response function at zero frequency. Therefore one expects the rate of suppression of $`S(\pi ,0)`$ to be significantly reduced in the quenched case with respect to the annealed case. This is clearly observed in Fig. 
4, comparing the value of $`S(\pi ,0)`$ for both distributions. In the quenched case the weight at $`\omega =0`$ remains a major contribution to $`S(\pi )`$, even at large impurity concentrations. Fig. 4(a) shows that the reduction of $`S(\pi ,0)`$ with increasing $`\rho `$ is less pronounced in the quenched case. In fact, the amplitude of the peak at $`\omega =0`$ dominates the spectrum for all $`\rho `$, as shown in Fig. 4(b). ## III Thermodynamic Response: Uniform Susceptibility and Heat Capacity In this section we examine the thermodynamic response of segmented spin-1/2 Heisenberg chains. Based on simple scaling arguments, low-temperature approximations for the uniform susceptibility and the specific heat are derived and shown to be in good agreement with a numerical solution of the problem, using the Bethe ansatz equations for open-end chains. The impurity tail of the low-temperature susceptibility (i.e. the divergence of $`\chi (T)`$ at $`T=0`$) in an ensemble of segmented chains is caused by the odd-length segments. Before examining the sub-dominant contributions of the even-length segments it is therefore necessary to first identify and discuss the dominant divergent contributions of the odd-length clusters. This can be achieved by analyzing $`[\chi (T)T]`$. Its value at zero temperature, $`lim_{T\to 0}[\chi (T)T]`$, gives the prefactor of the low-temperature impurity tail in $`\chi (T)`$. For chains of even length, which always have a finite spin gap, the value of $`[\chi (T)T]`$ approaches $`0`$ as $`T`$ goes to zero, whereas for odd lengths $`l`$, $`lim_{T\to 0}[\chi (T)T]=1/(4l)`$. It follows that in the quenched case $$[\chi (T)T]_0(\rho )\equiv \underset{T\to 0}{lim}[\chi (T)T](\rho )=\underset{T\to 0}{lim}\underset{l=1}{\overset{\infty }{\sum }}lP(l)[\chi (T)T](l)=\frac{\rho (1-\rho )}{4(2-\rho )},$$ (11) with a maximum at $`\rho =2-\sqrt{2}\approx 0.586`$. In the ideal annealed case only even-length segments contribute to the average, and therefore $`[\chi (T)T]_0(\rho )`$ vanishes. 
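Eq. (11) can be reproduced by summing the odd-length contributions directly; a quick numerical check, including the position of the maximum:

```python
import numpy as np

# Each odd segment contributes l*P(l)*(1/(4l)) per site to
# lim_{T->0}[chi(T) T]; summing over odd l gives rho(1-rho)/(4(2-rho)),
# which is maximal at rho = 2 - sqrt(2) ~ 0.586.
def chiT0(rho):
    l = np.arange(1, 100000, 2)            # odd lengths only
    return (rho**2 * (1 - rho)**l / 4.0).sum()

for rho in (0.1, 0.3, 0.586, 0.9):
    assert np.isclose(chiT0(rho), rho * (1 - rho) / (4 * (2 - rho)))

rho_grid = np.linspace(0.01, 0.99, 9801)
vals = rho_grid * (1 - rho_grid) / (4 * (2 - rho_grid))
rho_max = rho_grid[np.argmax(vals)]
```
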
The Curie constant is $`C(\rho )=lim_{T\rightarrow \mathrm{\infty }}[\chi (T)T]=(1-\rho )/4`$, independent of the length of the contributing segments. To study the temperature dependence of $`\chi (T)`$ we can calculate the finite-size susceptibilities from the Bethe ansatz equations which determine the energy levels of open Heisenberg chains: $$\left(\frac{x_k+i}{x_k-i}\right)^{2l}=\underset{j\ne k}{\overset{M}{}}\frac{x_k-x_j+2i}{x_k-x_j-2i}\frac{x_k+x_j+2i}{x_k+x_j-2i}.$$ (12) Here the number of roots M determines the total $`S^z`$ component of the state through the relation $`S^z=l/2-M`$. In logarithmic form this equation becomes $$2l\text{ctan}^{-1}(x_k)=\pi I_k+\underset{j\ne k}{\overset{M}{}}\left[\text{ctan}^{-1}\left(\frac{x_k-x_j}{2}\right)+\text{ctan}^{-1}\left(\frac{x_k+x_j}{2}\right)\right],$$ (13) where all the $`I_k`$ are integers with $`k=1,\mathrm{\dots },M`$. Given a solution of the above equations for a set of $`I_k`$’s, the energy of the corresponding eigenstate is $$\frac{E}{J}=\frac{l-1}{4}-2\underset{k=1}{\overset{M}{}}\frac{1}{x_k^2+1}.$$ (14) The ground state in a given $`S^z`$ sector, $`E_0(S^z,l)`$, is obtained from the set $`\{I_k\}=\{l+1,l+3,l+5,\mathrm{\dots },l+2M-1\}`$. The first excited state in the $`S^z>0`$ sectors can be determined from the set $`\{I_k\}=\{2S^z,2S^z+3,2S^z+5,2S^z+7,\mathrm{\dots },l-1\}`$. Using an iterative method to determine the energy gaps for the even-length segments, $`\mathrm{\Delta }E_l=E_0(S^z=1,l)-E_0(S^z=0,l)`$, one obtains for the low-temperature susceptibility of the segmented chain: $$[\chi (T)T](\rho )=[\chi (T)T]_0(\rho )+2\underset{l=2}{\overset{\mathrm{\infty }}{}}{}^{}\frac{P(l)}{1+\mathrm{exp}(\mathrm{\Delta }E_l/T)},$$ (15) where the sum is restricted to even $`l`$. This equation can be evaluated numerically, using the energies from the finite-size Bethe ansatz equations (Eq. 14) and an appropriate distribution function $`P(l)`$. It is found that the results for $`[\chi (T)T]`$ typically converge for a length cut-off around $`l_{max}=1000`$.
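Eq. (15) is straightforward to evaluate once the gaps are known. The Python sketch below replaces the full Bethe ansatz gaps by their leading finite-size form $`\mathrm{\Delta }E_l\approx \alpha _1/l`$ (with the fitted value $`\alpha _1=3.88J`$ quoted later in the text) and again assumes the quenched distribution $`P(l)=\rho ^2(1-\rho )^l`$; both are stand-ins, not the exact input used for the figures:

```python
import math

ALPHA_1 = 3.88  # in units of J; fitted value quoted later in the text

def chi_T(T, rho, l_max=1000):
    # Eq. (15): [chi*T](rho) = [chi*T]_0(rho)
    #           + 2 * sum over even l of P(l) / (1 + exp(Delta_E_l / T))
    chi0 = rho * (1.0 - rho) / (4.0 * (2.0 - rho))   # Eq. (11)
    s = 0.0
    for l in range(2, l_max + 1, 2):
        x = ALPHA_1 / (l * T)        # Delta_E_l / T with Delta_E_l = alpha_1 / l
        if x < 700.0:                # skip terms that would underflow anyway
            s += rho**2 * (1.0 - rho)**l / (1.0 + math.exp(x))
    return chi0 + 2.0 * s
```

At low temperature the even-length contribution is exponentially suppressed and $`[\chi (T)T]`$ approaches the odd-segment value of Eq. (11), as described in the text.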
Results for the quenched case are shown in Fig. 5. For low temperatures there appears to be a universal scaling behavior which can be elucidated by expanding Eq. 15. To determine the low-temperature scaling behavior of the susceptibility we consider the lowest-order finite-size scaling behavior of the energy gap, $`\mathrm{\Delta }E_l\approx \alpha _1/l`$, neglecting higher-order logarithmic corrections (Eq. 3) which become increasingly important when larger segments occur (i.e. at very small impurity concentrations). For low temperatures one obtains $`[\chi (T)T](\rho )-[\chi (T)T]_0(\rho )`$ $`\approx `$ $`{\displaystyle \int _0^{\mathrm{\infty }}}\mathrm{exp}\left[-{\displaystyle \frac{\alpha _1}{Tl}}-\mathrm{ln}\left({\displaystyle \frac{1}{1-\rho }}\right)l\right]𝑑l=\sqrt{{\displaystyle \frac{4\alpha _1}{T\mathrm{ln}\left(\frac{1}{1-\rho }\right)}}}K_1\left[\sqrt{{\displaystyle \frac{4\alpha _1}{T}}\mathrm{ln}\left({\displaystyle \frac{1}{1-\rho }}\right)}\right]`$ (16) $`\approx `$ $`\sqrt{{\displaystyle \frac{\pi }{2}}\sqrt{{\displaystyle \frac{\alpha _1}{T}}}}\left[\mathrm{ln}{\displaystyle \frac{1}{1-\rho }}\right]^{-\frac{3}{4}}\mathrm{exp}\left[-\sqrt{{\displaystyle \frac{4\alpha _1}{T}}\mathrm{ln}\left({\displaystyle \frac{1}{1-\rho }}\right)}\right],T\rightarrow 0.`$ (17) The expected low-temperature behavior is thus of the form $$[\chi (T)T](\rho )-[\chi (T)T]_0(\rho )\propto \left(\frac{T}{J}\right)^{-\frac{1}{4}}e^{-\gamma /\sqrt{T/J}},$$ (18) where $`\gamma `$ turns out to be $$\gamma =\sqrt{4\frac{\alpha _1}{J}\mathrm{ln}\left(\frac{1}{1-\rho }\right)}.$$ (19) The observed linearity in Fig. 5, where we have plotted $`\mathrm{ln}\{([\chi (T)T](\rho )-[\chi (T)T]_0(\rho ))(T/J)^{1/4}\}`$ vs. $`(T/J)^{-1/2}`$ for various $`\rho `$, confirms the validity of this low-temperature expansion. The free parameter $`\alpha _1`$ can be determined by fitting $`\gamma (\rho )`$ of Eq. 19 to the exact numerical Bethe ansatz solution of $`[\chi (T)T](\rho )`$ (Eq. 15). As shown in Fig.
6, for sufficiently small concentrations the fit turns out to be very good, giving $`\alpha _1=3.88J`$. A similar procedure can be used to determine the scaling behavior of the specific heat of segmented Heisenberg spin-1/2 chains. For the quenched case, one finds $$C\propto \left(\frac{T}{J}\right)^{-\frac{5}{4}}e^{-\gamma ^{\prime }/\sqrt{T/J}}$$ (20) at sufficiently low temperatures. The corresponding effective gap $`\gamma ^{\prime }`$ turns out to be $$\gamma ^{\prime }=\gamma _0+\sqrt{4\frac{\alpha _1^{\prime }}{J}\mathrm{ln}\left(\frac{1}{1-\rho }\right)},$$ (21) where a fit of the numerical data yields $`\gamma _0=0.182`$ and $`\alpha _1^{\prime }=4.106J`$. ## IV Conclusions In summary, we have examined the spectral and thermodynamic response of various ensembles of segmented antiferromagnetic Heisenberg spin-1/2 chains. We have calculated the dynamical spin structure factor which can be probed by neutron scattering experiments. The particular dependence of this quantity on the impurity concentration is determined by the distribution of segment lengths. However, there are several generic features which are observed for the most common distribution functions. These are (i) a rapid decay of the integrated spectral weight with increasing impurity concentration, (ii) a suppression of low-frequency poles (pseudo-gap), and (iii) the emergence of a discrete pole structure at higher energies, dominated by the smallest contributing clusters in the average. While we find that the first two predictions are consistent with the presently available experimental data, point (iii) might be the hardest to verify because the corresponding signals in a highly disordered sample are typically rather small and broad. Two main contributions to the low-temperature thermodynamic response of segmented chains can be identified: a dominant divergent component from the odd-site clusters and a sub-dominant exponentially activated component due to the even-site clusters.
The sub-dominant contribution is analyzed by subtracting the impurity tail from the total response function, as is commonly done in the analysis of experiments. Assuming a certain type of distribution function, the corresponding effective gaps, here $`\gamma `$ for the uniform susceptibility and $`\gamma ^{\prime }`$ for the specific heat, can be calculated. An analysis of these gaps can thus be used as an indicator of the underlying distribution function of segment lengths. Complete segmentation of quantum spin chains, as treated in this work, can be viewed as an extreme case of impurity scattering with an infinitely large on-site repulsive potential. While this mechanism may indeed lead to segmentation, in many physical realizations longer-range exchange paths exist which can partially restore extended states of the undoped parent systems. Consider for example Zn-doped $`\mathrm{CuGeO}_3`$. This compound is known to have sizeable next-nearest neighbor exchange interactions along the $`\mathrm{CuO}_2`$-chain direction, giving rise to an effective $`J_1J_2`$ model. Below the transition temperature $`T_{SP}`$ the compound goes into a dimerized spin-Peierls phase, whereas above $`T_{SP}`$ it is in a critical quasi-one-dimensional state. This spin-Peierls transition is suppressed upon replacing the Cu-atoms randomly with Zn, most likely because the partial segmentation due to the non-magnetic impurities impedes the quantum-critical extended states within the chains, which in turn are a prerequisite for Peierls transitions. Because of the longer-range exchange paths $`J_2`$, these extended states are not completely destroyed, and a remnant spin-Peierls phase is observed in $`\mathrm{Cu}_{1\mathrm{x}}\mathrm{Zn}_\mathrm{x}\mathrm{GeO}_3`$ at sufficiently small impurity concentrations x. The segmentation of a critical one-dimensional system competes with transitions such as three-dimensional ordering due to small inter-chain interactions or Peierls-type transitions.
Segmented one-dimensional phases are more stable against low-temperature ordering transitions, and critical states can in turn be created in a controlled manner by impurity-doping short-range ordered systems, for example by introducing non-magnetic sites into quasi-one-dimensional spin liquids such as two-leg spin-1/2 Heisenberg ladders. We thank A. Honecker and B. Normand for useful discussions, and acknowledge the Zumberge Foundation for financial support.
# The low-energy scale of the periodic Anderson model ## I Introduction Lanthanide or Actinide-based heavy fermion compounds can be viewed as a paradigm for strong correlation effects in solids. These systems comprise a great variety of different low-temperature states, including paramagnetic metals with either Fermi liquid or non-Fermi liquid properties, long-range magnetic order, superconductivity or coexistent magnetism and superconductivity. While it is very interesting and challenging to understand the physics of these ordered ground states, not even the basic ingredient, namely the physics of the paramagnetic metal, has been fully captured yet on a microscopic basis. The common understanding is that the physical properties of heavy fermion compounds are largely determined by spin-flip scattering between spins localized on the Lanthanide or Actinide sites and delocalized conduction quasi particles usually formed by the $`d`$-states in the system. In the case of dilute compounds this spin-flip scattering leads to the well-known Kondo effect, i.e. the screening of the localized spins by the conduction electrons and the formation of a Fermi liquid with a low-energy scale set by the so-called Kondo temperature $`T_\mathrm{K}`$, which depends exponentially and non-perturbatively on the system parameters. Interestingly, the physics of the metallic phase of concentrated systems can in some cases, e.g. CeAl<sub>3</sub>, CeB<sub>6</sub> or CeCu<sub>6</sub>, be at least qualitatively understood by a picture of independent, but coherent Kondo scatterers, with the low-energy scale set by the Kondo temperature of the dilute system.
However, for UPt<sub>3</sub>, URu<sub>2</sub>Si<sub>2</sub> or Yb<sub>4</sub>As<sub>3</sub>, or the compound LiV<sub>2</sub>O<sub>4</sub> recently characterized as a heavy fermion system, there seem to exist two distinct energy scales: one high-temperature scale, $`T_\mathrm{K}`$, describing conventional incoherent Kondo scattering, and a much smaller scale, $`T_0`$, also called the coherence scale in the literature, which marks the onset of Fermi liquid formation characterized by quasi particles with strongly enhanced effective masses. This apparent discrepancy in the experimental situation has triggered numerous theoretical investigations, from which two dominant competing scenarios have emerged. One scenario, based on a diagrammatic perturbation theory, succeeded in approximately mapping the concentrated system onto a set of independent, effective impurities which at low temperature form a coherent Fermi liquid state due to the lattice periodicity. The theory further predicts the existence of a single energy scale which uniquely determines the high-energy properties like Kondo screening as well as the formation of the low-temperature Fermi liquid. Most importantly, apart from moderate renormalizations, which usually lead to an enhancement, this lattice scale is given in terms of the energy scale of the corresponding diluted system, and the one-particle properties close to the Fermi energy are well described by a picture of hybridizing bands, leading to a density of states with a (pseudo-) gap slightly above the Fermi energy. The large effective masses can be accounted for by the observation that the Fermi energy lies in the region with flat bands. A further theoretical study, based on a variational treatment of the system with a Gutzwiller wave function and employing the Gutzwiller approximation, seems to support this result in the sense that the energy scale calculated is enhanced over the corresponding scale of the dilute system, i.e.
there is quite likely only one relevant energy scale. However, this variational scale is more strongly renormalized. The other scenario is based on Nozières’ argument that in concentrated systems there will generally not be enough conduction states available to screen all of the localized spins. This situation is engendered by the fact that only the band states within $`T_K`$ of the Fermi surface can effectively participate in screening the local moments. The number of screening electrons can be estimated as $`\rho _0^c(0)T_K\ll 1`$, where $`\rho _0^c(0)`$ is the conduction band density of states at the Fermi level. Thus, in a concentrated system one should encounter an intermediate temperature regime where all band states available for screening are “exhausted” – from which the name “exhaustion physics” was coined – and the system starts to resemble an incoherent metal where only part of the spins will be screened. However, since the Kondo screening clouds are not pinned to a particular site they can move through the system with an effective hopping matrix element and a residual strong local correlation, because there can never be two screening clouds on the same site. Based on these arguments Nozières thus suggests that at low temperatures the system effectively behaves similar to a Hubbard model with a small number of holes and strong local Coulomb repulsion. The remaining entropy can now be quenched by forming a state with either long-range antiferromagnetic order or short-ranged antiferromagnetic fluctuations, which give rise to a small low-energy scale and a corresponding heavy Fermi liquid, too. It is important to note that these phenomenological arguments by Nozières do not make any reference to a particular model or specific parameter regime of a model like the periodic Anderson model; thus, if Nozières’ arguments are correct, exhaustion physics should actually be the generic situation.
Although both scenarios seem to be in accordance with some experimental facts, they apparently fail to capture the whole story. Moreover, in some cases they rely either on approximations that are difficult to control on a microscopic basis or are entirely based on phenomenological arguments, as is the case with Nozières’ point of view. In order to discriminate which ansatz is correct or outline the borders where each of the two scenarios may be valid, a more thorough microscopic study of the properties of the paramagnetic phase is highly desirable. Recently, Quantum Monte Carlo (QMC) simulations of the periodic Anderson model (PAM) within the Dynamical Mean-Field Theory (DMFT) have shown that when the conduction band is nearly half filled, there is a single scale consistent with the predictions of Rice and Ueda, but obviously inconsistent with Nozières’ picture of exhaustion. On the other hand, when the conduction band filling is significantly reduced, the states available for screening of the local moments appear to be depleted near the Fermi surface and the lattice coherence scale is strongly reduced from the corresponding impurity scale. A protracted evolution of the photoemission spectra and transport is predicted and can be understood in terms of a crossover between the two scales. Although a quantitative relationship between the two scales could not be established, the reduction was ascribed to exhaustion. Nozières subsequently argued that exhaustion should lead to a significant reduction of the lattice scale and predicted that $`T_0`$ is at most $`N_d(0)T_K^2`$. In this contribution we present results for the zero-temperature properties of the periodic Anderson model obtained within the DMFT in conjunction with Wilson’s numerical renormalization group (NRG) calculations.
We will show that for systems with approximately half filled conduction band, one finds a single energy scale, in accordance with the independent impurity picture, which qualitatively behaves as predicted by Rice & Ueda; moreover, in this parameter regime we do not find any hint of the occurrence of exhaustion physics. This raises the question to what extent Nozières’ phenomenological argument, that only a fraction $`\rho _0^c(0)T_\mathrm{K}\ll 1`$ of the band states can contribute to the screening of the f-spins, is valid. In order to observe a behavior that is in at least qualitative agreement with Nozières’ exhaustion scenario, one actually has to strongly reduce the carrier concentration of the conduction band. For these low-carrier systems the physical behavior completely changes and can now be understood in the framework of exhaustion. However, a quantitative comparison between the lattice energy scale and that of the corresponding impurity model is possible with our method, and even in this parameter regime it indicates a strikingly different relation between the two than recently predicted by Nozières. The paper is organized as follows. In the next section we will introduce the periodic Anderson model and its treatment within the DMFT using NRG. In section III we will discuss recent developments in understanding the physics of the PAM in some detail and provide a basis to interpret the NRG results presented in section IV. The paper will be concluded by a summary and outlook in section V.
## II Model and Method ### A Periodic Anderson Model The standard model used to describe the physics of heavy fermion systems is the periodic Anderson model (PAM) $$\begin{array}{cc}H=\hfill & \underset{𝐤\sigma }{}\epsilon _𝐤c_{𝐤\sigma }^{\dagger }c_{𝐤\sigma }^{}+\epsilon _f\underset{𝐤\sigma }{}f_{𝐤\sigma }^{\dagger }f_{𝐤\sigma }^{}+U\underset{i}{}n_{i\uparrow }^fn_{i\downarrow }^f\hfill \\ & +\underset{𝐤\sigma }{}V_𝐤\left(c_{𝐤\sigma }^{\dagger }f_{𝐤\sigma }^{}+\text{h.c.}\right).\hfill \end{array}$$ (1) In (1), $`c_{𝐤\sigma }^{\dagger }`$ creates a conduction quasi particle with momentum $`𝐤`$, spin $`\sigma `$ and dispersion $`\epsilon _𝐤`$, with $`\frac{1}{N}\underset{𝐤}{}\epsilon _𝐤=\epsilon _c`$ as its center of mass; $`f_{𝐤\sigma }^{\dagger }`$ creates an f-electron with momentum $`𝐤`$, spin $`\sigma `$ and energy $`\epsilon _f`$ and $`n_{i\sigma }^f`$ is the number operator for f-electrons at lattice site $`i`$ with spin $`\sigma `$. The localized f-states experience a Coulomb repulsion $`U`$ when occupied by two electrons and the two subsystems are coupled via a hybridization $`V_𝐤`$. Although it is in general a crude approximation, we will assume for computational reasons $`V_𝐤=\text{const.}`$ for the rest of the paper. Given the usually rather complex crystal structure of heavy fermion compounds, the question arises to what extent the simple model (1), which describes orbitally non-degenerate f-states, is appropriate. However, especially in Ce-based systems the relevant configuration is f<sup>1</sup>, whose multiplicity will in general be reduced to a Kramers doublet in the crystal field; the other crystal field multiplets are generally well separated from this ground state doublet. The situation is more complicated in Uranium compounds, where with equal probability an f<sup>2</sup> state can be the lowest configuration. In this case the assumption of a Kramers doublet is of course not justified.
Nevertheless, even for these compounds the PAM seems to provide an at least qualitatively correct description, so that we assume it to be the relevant model for those compounds, too. In contrast to e.g. the Hubbard model, where for one dimension an exact solution via Bethe ansatz is available and a combination of many-body techniques has succeeded in determining the ground-state properties exactly, no such exact benchmark is available for the PAM. The only limit where the physical properties are almost completely known is the impurity version of the model (1) (single impurity Anderson model, SIAM), where a single site with f-states exists which couples to the conduction states. In this limit one finds for $`|\epsilon _f|/V,(\epsilon _f+U)/V\gg 1`$ the mentioned Kondo behavior, namely a screening of the local moment by the conduction states with an energy scale (Kondo temperature) $$\begin{array}{cc}T_\mathrm{K}=\hfill & \lambda \sqrt{I}\mathrm{exp}\left(-\frac{1}{I}\right)\hfill \\ I=\hfill & 2V^2\rho _0^c(0)\left|\frac{1}{\epsilon _f}-\frac{1}{\epsilon _f+U}\right|\hfill \end{array}$$ (2) with $`\rho _0^c(0)`$ the density of states at the Fermi energy of the conduction band and $`\lambda `$ a cut-off energy. Note that $`T_\mathrm{K}`$ depends exponentially on the system parameters, and especially non-analytically on $`V^2`$. It is important to stress that the occurrence of the Kondo temperature is intimately connected to the thermodynamic limit, where a (quasi-) continuum of band states exists at the Fermi energy. This property makes reliable calculations with techniques usually suitable to treat finite-sized two or three dimensional systems, like exact diagonalization or quantum Monte Carlo, of limited value, since they can only handle small to moderately sized systems. For the quantum Monte Carlo an additional problem is the sign problem, which becomes increasingly serious as $`U`$ and the system size increase. Direct perturbational approaches like e.g.
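The exponential, non-analytic dependence of $`T_\mathrm{K}`$ on the couplings in Eq. (2) is easy to make quantitative. A minimal Python sketch (the parameter values are illustrative only, and the cut-off $`\lambda `$ is simply set to one):

```python
import math

def kondo_scale(V2, rho0, eps_f, U, cutoff=1.0):
    # Eq. (2): T_K = lambda * sqrt(I) * exp(-1/I), with
    # I = 2 V^2 rho0(0) |1/eps_f - 1/(eps_f + U)|
    I = 2.0 * V2 * rho0 * abs(1.0 / eps_f - 1.0 / (eps_f + U))
    return cutoff * math.sqrt(I) * math.exp(-1.0 / I)

# illustrative particle-hole symmetric parameters, eps_f = -U/2
tk      = kondo_scale(V2=0.2, rho0=1.0 / math.sqrt(math.pi), eps_f=-1.0, U=2.0)
tk_half = kondo_scale(V2=0.1, rho0=1.0 / math.sqrt(math.pi), eps_f=-1.0, U=2.0)
```

Halving $`V^2`$ suppresses the scale by far more than the square-root prefactor alone would suggest, which is the non-perturbative character referred to in the text.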
FLEX are by construction restricted to small values of $`U`$ and generally fail to capture even the basics of the Kondo physics. Notable exceptions occur again for the SIAM. Here, straightforward second-order perturbation theory in $`U`$ (not self-consistent) is able to reproduce at least qualitative features in the special case of full particle-hole symmetry. More recently, a novel ansatz using a perturbation theory about the unrestricted Hartree-Fock solution for the SIAM has been shown to give even quantitatively correct results for spectral functions and energy scales of the SIAM in the Kondo limit. ### B Dynamical Mean-Field Theory The other non-trivial limit, where an exact solution of the PAM (1) becomes at least in principle possible, is the limit of large dimensionality, where the dynamical mean-field theory (DMFT) becomes exact. Here, the renormalizations due to the local Coulomb repulsion become purely local, i.e. one obtains a one-particle self energy independent of momentum $$\mathrm{\Sigma }^f(𝐤,z)\mathrm{\Sigma }^f(z).$$ (3) This property may be used to remap the lattice problem onto an effective SIAM again. The non-trivial aspect of the theory comes about by the fact that the medium coupling to the effective impurity is a priori not known but has to be determined self-consistently in the course of the calculation. In particular, the self-consistency relation for the PAM reads $$G^{\mathrm{loc}}(z)\begin{array}{c}=\int 𝑑ϵ\frac{\rho _0^c(ϵ)}{z-\epsilon _f-\mathrm{\Sigma }^f(z)-\frac{V^2}{z-ϵ-\epsilon _c}}\hfill \\ =\frac{1}{z-\epsilon _f-\stackrel{~}{\mathrm{\Delta }}(z)-\mathrm{\Sigma }^f(z)}\stackrel{!}{=}G^{\mathrm{SIAM}}(z),\hfill \end{array}$$ (4) with $`\rho _0^c(ϵ)`$ the density of states of the bare conduction band in (1).
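One step of the self-consistency (4) can be sketched directly on a grid of complex frequencies: the first line yields $`G^{\mathrm{loc}}`$ by a Hilbert transform of the bare band DOS for a given trial self energy, and the second line is then inverted to read off $`\stackrel{~}{\mathrm{\Delta }}(z)`$. A hedged Python illustration (Gaussian DOS as used in section IV, trial $`\mathrm{\Sigma }^f=0`$, and all parameter values chosen only for demonstration):

```python
import math

def rho0_c(e, t_star=1.0):
    # Gaussian DOS of the d -> infinity hypercubic lattice
    return math.exp(-(e / t_star) ** 2) / (t_star * math.sqrt(math.pi))

def g_loc(z, sigma_f, eps_f=-1.0, eps_c=0.0, V2=0.2, emax=6.0, n=4000):
    # first line of Eq. (4), evaluated by trapezoidal quadrature
    h = 2.0 * emax / n
    s = 0.0 + 0.0j
    for i in range(n + 1):
        e = -emax + i * h
        w = 0.5 if i in (0, n) else 1.0
        s += w * rho0_c(e) / (z - eps_f - sigma_f - V2 / (z - e - eps_c))
    return h * s

def hyb(z, sigma_f, eps_f=-1.0):
    # second line of Eq. (4) inverted:
    # tilde-Delta(z) = z - eps_f - Sigma(z) - 1/G_loc(z)
    return z - eps_f - sigma_f - 1.0 / g_loc(z, sigma_f, eps_f=eps_f)
```

In an actual DMFT cycle `sigma_f` would be updated by solving the effective SIAM with this hybridization function, and the two steps would be iterated to convergence.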
The second line in (4) defines a generalized hybridization function $`\stackrel{~}{\mathrm{\Delta }}(z)`$, which implicitly depends on $`\mathrm{\Sigma }^f(z)`$ and is thus modified from its non-interacting form by the presence of correlated f-electrons on other sites in an averaged way. The remaining task is to solve the effective SIAM defined by the set of parameters $`\{\epsilon _f,U\}`$ and the generalized Anderson width $$\mathrm{\Gamma }(\omega )=-\mathrm{}m\{\stackrel{~}{\mathrm{\Delta }}(\omega +i\eta )\}.$$ (5) Note that the self-consistency condition (4) requires the knowledge of a dynamical quantity, viz. the one-particle Green function for all frequencies. This immediately rules out techniques like Bethe ansatz, since it is impossible to calculate dynamical correlation functions with this method. To perform this task nevertheless, a variety of different methods have been developed and applied during the past decade: Quantum Monte Carlo simulations, exact diagonalization, extended second-order perturbation theory (iterated perturbation theory, IPT), resolvent perturbation techniques, the local-moment approach and Wilson’s NRG. In this contribution we used the last method, for the following reasons. First, it is tailored to capture the low-energy physics of the Kondo problem with high accuracy. Second, it is able to identify exponentially small energy scales. Third, it is non-perturbative and thus also produces the correct dependence of the Kondo temperature on the parameters (see (2)) and, last but not least, it imposes hardly any numerical restrictions on the choice of model parameters. Together with the developments of the past years, we are able to study the low-temperature properties of the PAM with the NRG with good accuracy. ### C Details of Wilson’s NRG Quite generally, the NRG is based on a logarithmic discretization of the energy axis, i.e.
one introduces a parameter $`\mathrm{\Lambda }>1`$ and divides the energy axis into intervals $`[\mathrm{\Lambda }^{-(n+1)},\mathrm{\Lambda }^{-n}]`$ for $`n=0,1,2,\mathrm{\dots }`$. With some further manipulations one can map the original model onto a semi-infinite chain, which can be solved iteratively by starting from the impurity and successively adding chain sites. Since the coupling between two adjacent sites $`n`$ and $`n+1`$ vanishes like $`\mathrm{\Lambda }^{-n/2}`$ for large $`n`$, the low-energy states of the chain with $`n+1`$ sites are determined by a comparatively small number $`N_{\mathrm{states}}`$ of states close to the ground state of the $`n`$-site system. In practice, one retains only those $`N_{\mathrm{states}}`$ states from the $`n`$-site chain to set up the Hilbert space for $`n+1`$ sites and thus prevents the usual exponential growth of the Hilbert space as $`n`$ increases. Eventually, after $`n_{\mathrm{NRG}}`$ sites have been included in the calculation, adding another site will not change the spectrum significantly and one terminates the calculation. It is obvious that for any $`\mathrm{\Lambda }>1`$ the NRG constitutes an approximation to the system with a continuum of band states but becomes exact in the limit $`\mathrm{\Lambda }\rightarrow 1`$. Performing this limit is, of course, not possible as one has to simultaneously increase the number of retained states to infinity. One can, however, study the $`\mathrm{\Lambda }`$- and $`N_{\mathrm{states}}`$-dependence of the NRG results and perform the limit $`\mathrm{\Lambda }\rightarrow 1`$, $`N_{\mathrm{states}}\rightarrow \mathrm{\infty }`$ by extrapolating these data. Surprisingly one finds that the dependence of the NRG results on $`\mathrm{\Lambda }`$ as well as on the cut-off $`N_{\mathrm{states}}`$ is extremely mild; in most cases a choice of $`\mathrm{\Lambda }=2`$ and $`N_{\mathrm{states}}=300\mathrm{\dots }500`$ is sufficient.
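The discretization and truncation scheme just described can be summarized in a few lines. A schematic Python sketch ($`\mathrm{\Lambda }=2`$ and $`N_{\mathrm{states}}=300`$ as quoted above; the `spectrum` argument stands in for the many-body eigenenergies of the $`n`$-site chain):

```python
LAMBDA = 2.0      # discretization parameter Lambda > 1
N_STATES = 300    # number of retained states per iteration

def interval(n):
    # n-th positive-energy interval [Lambda^-(n+1), Lambda^-n]
    return (LAMBDA ** -(n + 1), LAMBDA ** -n)

def chain_coupling(n):
    # coupling between chain sites n and n+1 decays like Lambda^(-n/2)
    return LAMBDA ** (-n / 2.0)

def truncate(spectrum):
    # keep only the N_STATES lowest-lying many-body states
    return sorted(spectrum)[:N_STATES]
```

The exponential decay of `chain_coupling` is what justifies the truncation: states discarded at iteration $`n`$ can no longer be mixed down to low energies by the remaining, much weaker couplings.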
While the knowledge of the states is sufficient to calculate thermodynamic properties, the self-consistency (4) requires the knowledge of the one-particle Green function or, equivalently, of the single-particle self energy $`\mathrm{\Sigma }^f(z)`$. Since the NRG scheme works with a discretization of the energy axis, it corresponds to a discrete system; by construction the Green function consists of a set of poles, and an appropriate coarse-graining procedure has to be applied. During the last 15 years considerable progress has been made in extracting dynamical properties with the NRG, too, and it has been shown to give very accurate results for dynamical one- and two-particle correlation functions as well as transport properties. For dynamical properties the NRG works best at $`T=0`$, and various dynamic correlation functions can be calculated with an accuracy of a few percent. Although less well defined for finite temperatures, its extension to $`T>0`$ also shows very good agreement with exact results. This is quite remarkable, as no sum rules (Friedel sum rule, total spectral weight, etc.) need to be used as input for these calculations. On the contrary, they can serve as an independent a posteriori check on the quality of the results. The first application of the NRG to the DMFT known to us is the work of Sakai et al., where the symmetric Hubbard and periodic Anderson model in the metallic regime have been studied. In their papers, the authors point out some difficulties in the process of iterating the NRG results with the DMFT equations, which are in our opinion largely related to the necessary broadening of the NRG spectra. As has been shown in the work of Bulla et al.
, these difficulties can be circumvented if, instead of calculating $`G^f(z)`$ and extracting $`\mathrm{\Sigma }^f(z)`$ from it, one calculates the self energy directly via the relation $$\begin{array}{cc}\mathrm{\Sigma }_\sigma ^f(z)=\hfill & U\frac{F_\sigma ^f(z)}{G_\sigma ^f(z)}\hfill \\ F_\sigma ^f(z)=\hfill & \langle \langle f_\sigma f_{\overline{\sigma }}^{\dagger }f_{\overline{\sigma }},f_\sigma ^{\dagger }\rangle \rangle _z,\hfill \end{array}$$ (6) which originates from the equation of motion $$\begin{array}{cc}zG_\sigma ^f(z)=\hfill & 1+\langle \langle [f_\sigma ,H_{\mathrm{SIAM}}],f_\sigma ^{\dagger }\rangle \rangle (z)\hfill \\ \hfill =& 1+\left(\epsilon _f+\mathrm{\Delta }(z)\right)G_\sigma ^f(z)+UF_\sigma ^f(z)\hfill \end{array}$$ (7) with $$\mathrm{\Delta }(z)=\frac{1}{N}\underset{𝐤}{}\frac{|V_𝐤|^2}{z-\epsilon _𝐤}.$$ Both correlation functions $`G_\sigma ^f(z)`$ and $`F_\sigma ^f(z)`$ appearing in (6) can be calculated with the NRG and it turns out that the quotient of the two gives a much better result for $`\mathrm{\Sigma }^f(z)`$ than the use of $`G^f(z)`$ alone. Let us note one particular problem in dealing with the PAM in the DMFT+NRG. As we will see in the beginning of section IV, the effective hybridization for the PAM in the particle-hole symmetric limit exhibits a pole right at the Fermi level (see Fig. 3). It is clear that such a pole will lead to numerical difficulties. However, the NRG allows one to deal with such a structure in a very efficient way, namely by including the single state represented by the pole in the definition of the “impurity” defining the beginning of the NRG chain. ## III Recent Results Early studies of the PAM using the DMFT concentrated on the particle-hole symmetric case $`\epsilon _c=0`$, $`2\epsilon _f+U=0`$, i.e. $`n_f=1`$ and $`n_c=1`$. To solve the SIAM, the authors employed QMC and NRG. Although for this particular situation the system is a band insulator for symmetry reasons, one can extract a Wilson Kondo scale from the SIAM impurity susceptibility.
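Relation (6) can be checked exactly in the atomic limit $`V=0`$, where with $`n=n_{\overline{\sigma }}`$ one has the closed forms $`G(z)=(1-n)/(z-\epsilon _f)+n/(z-\epsilon _f-U)`$ and $`F(z)=n/(z-\epsilon _f-U)`$; the quotient $`UF/G`$ must then coincide with the self energy obtained from the Dyson equation. A small Python check (parameter values illustrative only):

```python
def atomic_limit_check(z, eps_f=-1.0, U=2.0, n=0.5):
    # closed-form atomic-limit (V = 0) correlation functions
    G = (1.0 - n) / (z - eps_f) + n / (z - eps_f - U)
    F = n / (z - eps_f - U)
    sigma_ratio = U * F / G            # Eq. (6)
    sigma_dyson = z - eps_f - 1.0 / G  # Dyson equation for the local level
    return sigma_ratio, sigma_dyson
```

The two expressions agree identically here; the practical advantage of Eq. (6) in the NRG context is that broadening errors largely cancel in the quotient.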
Interestingly, this energy scale is enhanced with respect to the corresponding SIAM Kondo temperature, in accordance with the results by Rice & Ueda. In addition, at low temperatures the system can order antiferromagnetically. To avoid confusion and make the discussion more transparent we will use $`T_0`$ and $`T_\mathrm{K}`$ from now on to distinguish the relevant lattice energy scale and the Kondo scale, respectively. One possibility to break the particle-hole symmetry is by depleting the band filling via changing $`\epsilon _c`$, but keeping $`n_f\approx 1`$, which has been done with QMC for $`U/V^2\approx 4`$. The resulting phase diagram turns out to be quite interesting for several reasons. First, one finds a suppression of the antiferromagnetic order with decreasing $`n_c`$ and around $`n_c=0.5`$ a region with ferromagnetic order emerges, which for $`n_c<0.5`$ can be attributed to a ferromagnetic effective RKKY-type exchange. Second, from studies of the resistivity and optical conductivity, one can infer that for $`n_c\lesssim 0.8`$ the physics of the paramagnetic metallic phase changes drastically. While for $`n_c\gtrsim 0.8`$ there seems to exist only one energy scale $`T_0\approx T_\mathrm{K}`$, the resistivity data suggest that for $`n_c\lesssim 0.8`$ the onset of coherence is marked by a temperature scale $`T_0\ll T_\mathrm{K}`$. In addition, the spectral function drastically changes as one decreases $`n_c`$ and the effective Anderson width (5), which for $`n_c\approx 1`$ has a peak at the Fermi energy, starts to develop a dip at $`\omega =0`$. These observations were taken as fingerprints of Nozières’ exhaustion scenario. Further evidence for this interpretation comes from the work by Vidhyadhiraja et al., which is based on $`2^{nd}`$ order perturbation theory in $`U`$ (IPT). These authors, too, find a similar behaviour as a function of $`n_c`$. In addition, since their method allows them to study $`T=0`$, they could extract the energy scale $`T_0`$ from their data.
Interestingly, they found a relation between $`T_0`$ and $`T_\mathrm{K}`$ of the form $`T_0\propto (T_\mathrm{K})^2`$ for their results, which is precisely the behaviour predicted by the phenomenological theory of Nozières . However, we note that this particular relationship $`T_0\propto T_\mathrm{K}^2`$ was found only if the width of the Kondo peak was used to estimate $`T_0`$. When the ratio of the effective masses was used instead, the relationship rather follows $`T_0\propto T_\mathrm{K}`$ . The estimates of $`T_0`$ coming from Gutzwiller ansatz calculations, on the other hand, apparently fail to capture the essential physics in this parameter regime, since they predict a ratio $`T_0/T_\mathrm{K}>1`$ for all values of $`n_c`$ . Although the results from QMC and IPT are at first glance very convincing, some problems remain. First, both series of calculations were done with a comparatively small value of $`U/V^2\approx 4`$, and a systematic study of the behaviour as a function of $`U/V^2`$, especially for larger values, is clearly necessary. Second, the results were obtained either by QMC or by IPT . However, for large $`U/V^2`$ the identification of exponentially small energy scales with QMC is problematic due to its restriction to finite temperatures. The IPT, on the other hand, leads to ambiguous results as mentioned before; in addition, as a perturbational approach in $`U`$, it certainly cannot produce exponentially small energy scales. Thus, for a quantitative description of the low-temperature phase, and especially a reliable calculation of the low-energy scale $`T_0`$, a non-perturbative technique at $`T=0`$ is necessary. ## IV NRG results Such a method has recently become available through the application of Wilson’s NRG to the DMFT , which we use to study the paramagnetic phase of the PAM within the DMFT at $`T=0`$. In order to perform the energy integral in (4) analytically, and to be able to make contact with earlier results, we study a simple hypercubic lattice in the limit of dimensionality $`d\to \infty `$.
With the proper scaling this leads to $`\rho _0^c(ϵ)=\mathrm{exp}\left(-(ϵ/t^{*})^2\right)/(t^{*}\sqrt{\pi })`$ for the noninteracting DOS of the band states. In the following we use $`t^{*}=1`$ as our energy unit. Let us start by briefly discussing the particle-hole symmetric situation, i.e. $`2\epsilon _f+U=0`$ and $`\epsilon _c=0`$. Here it is expected from symmetry, and shown by calculations , that the concentrated system exhibits a hybridization gap. This is shown in Fig. 1, where the $`f`$-DOS $`A_f(\omega )=-\frac{1}{\pi }\mathrm{Im}\left\{G^f(\omega +i\delta )\right\}`$ is plotted for the SIAM (dashed line) and the PAM (full line). The model parameters are $`U=2`$ (i.e. $`\epsilon _f=-1`$) and $`V^2=0.2`$. The result for the SIAM nicely shows the well-known structures, namely the charge excitation peaks at $`\omega \approx \pm U/2`$ and the Abrikosov-Suhl resonance (ASR) at the Fermi level. The latter structure, which can be regarded as an effective local level, essentially gives rise to the hybridization gap in the PAM. The enlarged view of the region around the Fermi energy also shows that the width of this hybridization gap and the width of the ASR are of a similar order of magnitude. As we will discuss in a moment, the corresponding energy scale in the lattice is actually enhanced over $`T_\mathrm{K}`$, which sets the width of the ASR. This result can readily be anticipated from the self-energy of the $`f`$-states in Fig. 2. For both the SIAM (dashed line) and the PAM (full line) one observes a nice parabolic extremum in $`\mathrm{Im}\mathrm{\Sigma }^f(\omega +i\delta )`$ at the Fermi energy (see Fig. 2a), which is accompanied by a linear region in $`\mathrm{Re}\mathrm{\Sigma }^f(\omega +i\delta )`$ in Fig. 2b; the slope is negative in both cases and definitely smaller in magnitude for the PAM, as can be seen from the inset to Fig. 2b.
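As a quick numerical cross-check of this DOS (a sketch, with $`t^{*}=1`$ and an arbitrarily chosen test frequency), one can verify the normalisation and evaluate the local Green function $`G(z)=\int d\epsilon \,\rho _0^c(\epsilon )/(z-\epsilon )`$ that enters the DMFT self-consistency:

```python
import numpy as np

# Sanity check of the d -> infinity hypercubic DOS,
# rho(eps) = exp(-(eps/t*)^2) / (t* sqrt(pi)), with t* = 1:
# it integrates to one, and its Hilbert transform gives the local
# Green function, whose imaginary part must be negative above the real axis.
t_star = 1.0
eps = np.linspace(-8, 8, 200001)
rho = np.exp(-(eps / t_star) ** 2) / (t_star * np.sqrt(np.pi))
norm = np.trapz(rho, eps)

z = 0.5 + 0.1j                             # arbitrary test frequency
G_loc = np.trapz(rho / (z - eps), eps)     # G(z) = int deps rho(eps)/(z - eps)
print(round(norm, 6))  # -> 1.0
```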
This means that, at least as far as the self energy is concerned, the system can be viewed as a Fermi liquid with a quasi-particle weight $$Z^{-1}=1-\frac{d\mathrm{Re}\mathrm{\Sigma }^f(\omega )}{d\omega }|_{\omega =0}=m^{*}/m,$$ (8) where we introduced the notion of the effective mass $`m^{*}`$. Note that the above result, viz. that the system is a Fermi liquid, is not a priori apparent from Fig. 1, since the spectrum represents an insulator. From the self-energy it is, however, evident that this insulating state can be trivially accounted for by the picture of hybridizing quasi-particle bands, one of which is located at the Fermi energy and describes an effective f-level. The characterization of the particle-hole symmetric system as a band insulator is a priori not the only possible scenario. Indeed, for small $`V^2`$ and $`U\to \infty `$ an alternative route to an insulating state is via a Mott-Hubbard transition, as found by Held et al. for non-constant $`V_𝐤`$ . We did not observe such a transition for the case $`V_𝐤=`$const. studied here for values of $`U`$ as large as $`U=10`$, but we also cannot exclude such a scenario on the basis of the available data. It is also quite instructive to have a look at the effective Anderson width as defined by (5). This function is rather featureless for the SIAM (dashed line in Fig. 3), but exhibits a very pronounced structure near the Fermi level for the PAM (full line in Fig. 3), namely a pseudo gap plus a strong peak right at $`\omega =0`$; one can in fact show that the latter is a $`\delta `$-peak. This $`\delta `$-peak can be understood as emerging from the sharp quasi-particle band with dominantly f-character at the Fermi level. The results by QMC and IPT suggest that one can expect drastic changes in the physics of the model if one breaks the particle-hole symmetry.
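The extraction of $`Z`$ and $`m^{*}/m`$ from Eq. (8) amounts to a numerical derivative of $`\mathrm{Re}\mathrm{\Sigma }^f`$ at the Fermi level. A minimal sketch with a model self energy (the Fermi-liquid form and the slope value $`a=-3`$ are arbitrary stand-ins, not NRG output):

```python
import numpy as np

# Extracting the quasi-particle weight Z (Eq. 8) from a model self energy.
# Re Sigma(w) = const + a*w + O(w^3) near w = 0 is assumed, with a = -3 as a
# placeholder for an actual calculated slope.
a = -3.0

def re_sigma(w):                 # hypothetical Re Sigma with a linear region
    return 1.0 + a * w + 0.5 * w ** 3

h = 1e-5
slope = (re_sigma(h) - re_sigma(-h)) / (2 * h)   # d ReSigma/dw at w = 0
Z = 1.0 / (1.0 - slope)                          # Z^{-1} = 1 - d ReSigma/dw
m_ratio = 1.0 / Z                                # m*/m = Z^{-1}
print(round(Z, 4), round(m_ratio, 4))  # -> 0.25 4.0
```

A negative slope thus directly translates into a mass enhancement, which is why the smaller slope magnitude of the PAM in Fig. 2b signals the larger lattice scale.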
There are actually two possible routes to accomplish this goal; the first is to keep $`\epsilon _c=0`$, and thus $`n_c\approx 1`$, but to increase $`U`$ so that $`2\epsilon _f+U>0`$. An example of the spectrum and the corresponding hybridization function for $`\epsilon _f=-1`$, $`U=6`$ and $`V^2=0.2`$ is shown in Fig. 4, where we plot the DOS $`A_f(\omega )`$ (Fig. 4a) and the hybridization function $`\mathrm{\Gamma }(\omega )`$ (Fig. 4b). For these particular parameters the occupancies are $`n_c\approx 1`$ and $`n_f\approx 0.92`$. As in the particle-hole symmetric case, one finds the characteristic three-peak structure, again with a hybridization gap in the DOS of the lattice (full line, see inset to Fig. 4a). Note that this gap is now located above the Fermi energy and its width is visibly larger than the width of the ASR in the SIAM, pointing again towards an enhanced energy scale for the lattice, which is also confirmed by an inspection of the self energy. A replica of this (pseudo-) gap is also visible in the effective hybridization function of the lattice in Fig. 4b, which in addition shows a pronounced peak slightly above the Fermi energy. The origin of this peak is the same as for $`n_f=1`$; only now it is damped due to the finite lifetime of the quasi-particles for $`\omega >0`$. It is quite interesting to note that the value of $`\mathrm{\Gamma }(0)`$ is actually reduced from its value in the SIAM, but the average of $`\mathrm{\Gamma }(\omega )`$ over a region of the order $`T_0`$ around the Fermi energy is enhanced. If one assumes that it is such an averaged value that determines the low-energy scale, one can readily understand from this simple picture that $`T_0`$ is enhanced over $`T_\mathrm{K}`$. We should mention that our results are very similar to those obtained more than 10 years ago with resolvent perturbation theories .
In fact, the physical situation studied then was basically the same, namely a particle-hole symmetric conduction band hybridizing with an asymmetric $`f`$-level, however with $`U=\infty `$. The interpretation of the structures in $`\mathrm{\Gamma }(\omega )`$, which may be viewed as the effective DOS of the medium visible to the impurity, then leads to the picture that, due to the coherent admixture of $`f`$-states to the system, there will be an effective enhancement of $`\mathrm{\Gamma }(\omega )`$ close to the Fermi energy. In this sense one may identify the physics in this region of parameter space with the picture of coherent Kondo scatterers. The most interesting question is how the energy scales of the dilute system and the lattice, $`T_\mathrm{K}`$ and $`T_0`$, are related in this parameter regime. Let us recall that from the Gutzwiller ansatz one obtains $`T_\mathrm{K}/T_0=m_{\mathrm{PAM}}^{*}/m_{\mathrm{SIAM}}^{*}=\mathrm{exp}(-1/(2I))`$, where $`I=8\rho _0^c(0)V^2/U`$; this prediction was found to be consistent with recent DMFT QMC simulations , where the Wilson Kondo scale was estimated from the excess impurity susceptibility. At $`T=0`$, the most efficient way to extract the low-energy scale is to calculate the effective mass according to (8). With $`m/m^{*}\propto T_0`$ we are then able to discuss the variation of $`T_\mathrm{K}`$ and $`T_0`$ with $`U`$. The result is shown in Fig. 5, where $`m/m^{*}`$ is plotted versus $`1/I`$, where $`I=2\rho _0^c(0)V^2U/\left(|\epsilon _f|(\epsilon _f+U)\right)`$ is the Schrieffer-Wolff exchange coupling. As already anticipated from the spectra and self energies, the general relation $`T_0>T_\mathrm{K}`$ holds. In addition, both $`T_\mathrm{K}`$ and $`T_0`$ apparently behave exponentially as functions of $`1/I`$, but with different slopes; in qualitative agreement with the predictions, the slope for $`T_0`$ is smaller. To quantify the relation between $`T_\mathrm{K}`$ and $`T_0`$ further we show in the inset to Fig.
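The slope extraction from such a semi-logarithmic plot can be sketched as follows. The Schrieffer-Wolff coupling is computed from the formula quoted above; the prefactor $`0.5`$ and the slope $`a=1`$ of the synthetic scale are placeholders, not fitted NRG numbers.

```python
import numpy as np

# Schrieffer-Wolff exchange I = 2*rho0*V^2*U/(|ef|*(ef+U)) and a check that a
# scale T ~ exp(-a/I) shows up as a straight line of slope -a versus 1/I on a
# semilog plot. The synthetic scale below uses a = 1 as a placeholder.
rho0 = 1.0 / np.sqrt(np.pi)              # rho_0^c(0) for the Gaussian DOS, t* = 1

def I_sw(ef, U, V2):
    return 2.0 * rho0 * V2 * U / (abs(ef) * (ef + U))

U_vals = np.array([2.0, 3.0, 4.0, 5.0, 6.0])
inv_I = np.array([1.0 / I_sw(-1.0, U, 0.2) for U in U_vals])
T = 0.5 * np.exp(-inv_I)                 # synthetic low-energy scale, a = 1
slope = np.polyfit(inv_I, np.log(T), 1)[0]
print(round(slope, 6))  # -> -1.0
```

Note that for the particle-hole symmetric case $`\epsilon _f=-U/2`$ the general formula reduces to $`I=8\rho _0^c(0)V^2/U`$ quoted in the text.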
5 the ratio $`T_\mathrm{K}/T_0=m_{\mathrm{PAM}}^{*}/m_{\mathrm{SIAM}}^{*}`$ as a function of $`1/I`$. Again, an exponential behavior is observed; however, in contrast to the predicted factor $`1/2`$, we find $`\mathrm{ln}(T_\mathrm{K}/T_0)=-1/(3I)`$ from our data. For the time being we do not have an explanation for this discrepancy between the QMC and NRG results; whether it might be related to the different schemes used to extract $`T_0`$ – scaling behavior of the excess susceptibility as a function of temperature in the QMC versus effective mass at $`T=0`$ in the NRG – remains to be clarified. The second possibility to destroy particle-hole symmetry is to choose $`\epsilon _c\ne 0`$. As a typical result for that parameter regime we show in Fig. 6 the DOS $`A_f(\omega )`$ (Fig. 6a) and the corresponding effective hybridization function $`\mathrm{\Gamma }(\omega )`$ (Fig. 6b) for $`\epsilon _c=0.5`$, $`2\epsilon _f+U=0`$, $`U=2`$, and $`V^2=0.2`$. For the filling we obtain $`n_f\approx 1`$ and $`n_c\approx 0.6`$ for both the PAM (dashed line) and the single impurity Anderson model (SIAM, full line). As usual, one sees the characteristic structures, namely the charge-excitation peaks at $`\omega \approx \pm U/2`$ and the Kondo resonance at the Fermi level. However, in contrast to the results with $`\epsilon _c=0`$, we do not find any hint of a hybridization gap in the lattice DOS; in fact, the DOS looks pretty much like that of a conventional SIAM. The major difference from the DOS of the SIAM is an enhancement of the ASR and a reduction of its width , as is apparent from the inset to Fig. 6a. Particularly interesting is the behavior of the effective hybridization function in Fig. 6b.
Similar to the case $`\epsilon _c=0`$, $`2\epsilon _f+U>0`$, it is considerably reduced in the region around the Fermi level; in contrast to the former case, however, the sharp quasi-particle contribution is missing, and the average value of $`\mathrm{\Gamma }(\omega )`$ over a region of the order $`T_0`$ around $`\omega =0`$ is here still reduced from the non-interacting value. The depletion in $`\mathrm{\Gamma }(\omega )`$ around the Fermi level has been coined a hallmark of exhaustion physics in the PAM and related models , since according to Nozières’ phenomenological picture the effective density of medium states available at a given site should be reduced due to screening at other sites. The fundamental difference in the physics between $`n_c\approx 1`$ and $`n_c<1`$ also manifests itself in the behavior of the DOS of the conduction states. Typical results for this quantity are shown in Fig. 7 for $`\epsilon _f=-1`$, $`V^2=0.2`$ and $`\epsilon _c=0`$, $`U=6`$ ($`n_c\approx 1`$, upper panel) and $`\epsilon _c=0.5`$, $`U=2`$ ($`n_c\approx 0.6`$, lower panel). For comparison, the bare band DOS for the corresponding value of $`\epsilon _c`$ is also included (dashed curves); the choice of different values of $`U`$ for the cases $`\epsilon _c=0`$ and $`\epsilon _c=0.5`$ is necessary to ensure that both systems are metallic. One observes fundamental qualitative differences in the DOS, especially close to the Fermi surface, which in our opinion are related to the different physical properties of the two regimes and do not depend critically on the particular choice of the interaction strength. For $`n_c\approx 1`$, the appearance of a gap slightly above the Fermi level in the conduction DOS again supports the notion of hybridizing bands in this region of parameter space. On the other hand, the conduction DOS for the case $`n_c\approx 0.6`$ does not show the typical form of hybridized bands, but merely a pseudo-gap at the Fermi energy.
The fact that the spectrum only develops a pseudo-gap with a finite DOS at $`\omega =0`$ can again be understood as a sign of exhaustion: the formation of a full hybridization gap requires complete Kondo screening by the band states to occur for each f-site, while the formation of a pseudo-gap can be interpreted as indicating that only part of the f-sites are screened by the conduction electrons. The reduced effective hybridization around the Fermi level observed in Fig. 6 also gives rise to a reduced low-energy scale, characterized by an effective mass $`m^{*}/m\approx 17`$ in the PAM, whereas the corresponding quantity for the SIAM is $`m^{*}/m\approx 8`$. The behavior of the low-energy scale as a function of $`n_c`$ for fixed $`U=2`$, $`V^2=0.25`$ and $`n_f\approx 1`$ is shown in Fig. 8. Note that for $`n_c\approx 1`$ the value of $`T_0`$ is again enhanced over the impurity scale, with $`\mathrm{ln}(T_\mathrm{K}/T_0)=-1/(3I)`$. Below $`n_c\approx 0.8`$ the ratio $`T_0/T_\mathrm{K}`$ decreases below one and falls rapidly and monotonically with $`n_c`$, being almost two orders of magnitude smaller for $`n_c\approx 0.2`$. The ratio $`m_{\mathrm{SIAM}}^{*}/m_{\mathrm{PAM}}^{*}=T_0/T_\mathrm{K}`$ is shown in the inset to Fig. 8. This ratio falls more rapidly than $`T_\mathrm{K}`$, i.e. $`T_0/T_\mathrm{K}^2`$ is not constant. The ratio $`T_0/T_\mathrm{K}`$ can be fit to the form $`T_0/T_\mathrm{K}\propto n_c\mathrm{exp}\left(cn_c\right)`$ with $`c\approx 5/2`$ (full curve in the inset to Fig. 8); it gives an excellent account of the data. This behaviour especially means that $`T_0\propto n_cT_\mathrm{K}`$ as $`n_c\to 0`$ . Note that the Gutzwiller result for $`n_c<1`$ predicts both $`T_0`$ and $`T_\mathrm{K}`$ to behave like $`n_c\mathrm{exp}\left(cn_c\right)`$, but (i) predicts $`T_0/T_\mathrm{K}>1`$ and (ii) clearly gives no proportionality to $`n_c`$ in the ratio $`T_0/T_\mathrm{K}`$. Nozières’ phenomenological arguments also lead to an estimate of $`T_0`$ as a function of $`n_c`$, namely $`T_0\propto (T_\mathrm{K})^2\rho _0^c(0)`$ .
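The fit quoted above can be reproduced schematically: taking the logarithm of $`r/n_c`$ with $`r=T_0/T_\mathrm{K}`$ linearizes the form $`r\propto n_c\mathrm{exp}(cn_c)`$. The data points below are synthetic, generated with $`c=5/2`$, and merely illustrate the fitting procedure rather than the actual NRG values.

```python
import numpy as np

# Fitting the ratio r = T_0/T_K to the form r = A * n_c * exp(c*n_c).
# log(r/n_c) = log(A) + c*n_c is linear in n_c, so a polynomial fit of
# degree one recovers c. Synthetic data with c = 2.5 stand in for the NRG.
n_c = np.linspace(0.2, 0.9, 8)
r = 0.1 * n_c * np.exp(2.5 * n_c)          # placeholder for the NRG data points
c_fit, lnA = np.polyfit(n_c, np.log(r / n_c), 1)
print(round(c_fit, 6))  # -> 2.5
```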
This relation has recently been tested with IPT and, within this approach, found to be fulfilled for $`U/V^2\approx 4`$ at least in the range $`0.4\lesssim n_c\lesssim 0.8`$, when $`T_0`$ is estimated from the width of the Kondo resonance. We first note that our result in Fig. 8 is clearly not compatible with this prediction. In order to clarify the relation between $`T_0`$ and $`T_\mathrm{K}`$, we compare $`T_0`$ and $`T_\mathrm{K}`$ as functions of $`U/V^2`$ for fixed $`n_c`$. The results for $`n_c\approx 0.6`$ as a function of $`U/V^2`$, for both varying $`U`$ at constant $`V^2=0.2`$ (circles) and varying $`V^2`$ at constant $`U=2`$ (squares), are shown in Fig. 9 on a semi-logarithmic scale; to study the dependence on the filling $`n_c`$, we included calculations for $`n_c=0.31`$ (diamonds, varying $`U`$ only). Evidently $`T_\mathrm{K}`$ and $`T_0`$ both follow an exponential law $`T_0,T_\mathrm{K}\propto \mathrm{exp}(-a/I)`$, where $`I=8\rho _0^c(0)V^2/U`$ is the Schrieffer-Wolff exchange coupling for $`2\epsilon _f+U=0`$. However, the curves for the SIAM and the PAM at fixed $`n_c`$ are parallel in the semi-logarithmic plot in Fig. 9, i.e. the coefficients of $`U/V^2`$ in the exponents of both quantities are identical. This of course means $`T_0\propto T_\mathrm{K}`$ rather than $`T_0\propto (T_\mathrm{K})^2`$, as predicted by Nozières. Thus, neither as a function of $`n_c`$ nor as a function of $`U/V^2`$ does the lattice scale $`T_0`$ obey Nozières’ prediction. ## V Summary and Conclusion We have presented results for the PAM obtained within the DMFT at $`T=0`$ using Wilson’s NRG approach. For the range of parameters studied here, the system can always be characterized as a Fermi liquid with a strongly enhanced effective mass; the corresponding lattice scale $`T_0`$ is enhanced over the impurity Kondo scale $`T_\mathrm{K}`$ for a particle-hole symmetric conduction band, in accordance with perturbational results or those from the Gutzwiller ansatz . Moreover, the picture of hybridizing quasi-particle bands leading to (pseudo-) gaps in the DOS was found to be valid here.
On the other hand, in the case of an asymmetric conduction band, and especially for low carrier concentration $`n_c\lesssim 0.8`$, the spectral functions and corresponding effective hybridization functions show the signs of exhaustion, and we observe a corresponding strong reduction of the lattice scale $`T_0`$ . Together with our results, an extensive picture of exhaustion physics in the infinite-dimensional PAM has begun to emerge. Close to half filling, the low-temperature properties of the model can be characterized by one energy scale $`T_0>T_\mathrm{K}`$; away from half filling, two scales are apparent: $`T_\mathrm{K}`$, where screening begins, and $`T_0\ll T_\mathrm{K}`$, where coherence sets in. At low temperatures $`T\lesssim T_0`$, the quasi-particle features in the single-particle spectra become pronounced and have predominantly f character . Since only the states near the Fermi energy can participate in screening, the f-electron moments are screened predominantly by other f-electron states. At a temperature $`T_m`$ ($`T_0<T_m<T_\mathrm{K}`$), a dip begins to develop in the effective hybridization rate at the Fermi surface, $`\mathrm{\Gamma }(\omega \approx 0)`$ , indicating that the states available for screening are becoming exhausted. This is a direct confirmation of the qualitative features of Nozières’ exhaustion scenario. Nevertheless, for fixed $`n_c`$ we find that $`T_0\propto T_\mathrm{K}`$, and as the conduction band filling changes we find $`T_0\propto n_c\mathrm{exp}(cn_c)`$, both in direct contradiction with the predictions of Nozières. Thus, we conclude that Nozières’ phenomenological arguments are too crude to capture the quantitative features of exhaustion. To our knowledge there has not yet been a systematic experimental study concentrating on the signatures of exhaustion in metallic HF compounds. However, there are several experimentally relevant consequences of exhaustion predicted by DMFT calculations.
Most of these predictions are related to the presence of two relevant scales and the protracted behavior of measured quantities as a function of temperature in the crossover regime between these scales . For example, compared to the predictions of the SIAM with the same $`T_0`$, the photoemission peak should evolve much more slowly with temperature . In addition, it should have significantly more weight, since the height of the peak goes like $`1/\mathrm{\Gamma }(0)`$ and its width is set by $`T_0`$. Although these features have been reported in photoemission experiments on Yb-based HF materials , these results remain controversial, and it has been suggested that the experimental spectrum is representative of the surface, not the bulk. Fortunately, transport and neutron scattering experiments probe much further into the bulk, and should display characteristic features due to exhaustion. The calculated resistivity displays a non-universal peak and two other regions typical of HF systems, associated with impurity-like physics at high temperatures $`T\gtrsim T_\mathrm{K}`$ and Fermi liquid formation at low $`T\lesssim T_0`$ . The peak resistivity shows features characteristic of exhaustion. It occurs at $`T\approx T_m`$, the temperature at which exhaustion first becomes apparent as a dip in $`\mathrm{\Gamma }(\omega \approx 0)`$. $`T_m`$ is non-universal in that it increases with decreasing $`n_c`$, $`T_\mathrm{K}`$ and $`T_0`$. The Drude peak in the optical conductivity persists up to much higher temperatures than predicted by the SIAM, and the Drude weight rises dramatically with temperature. The quasielastic peak in the angle-integrated dynamic spin susceptibility also evolves more slowly with temperature than predicted by the SIAM, and it displays more pronounced charge-transfer features . Despite the rich picture which has begun to emerge from DMFT calculations, many questions remain unresolved. Among these, three seem most prominent to us. First, it is unclear what happens as $`n_c\to 1`$.
In this regime, following Nozières’ argument, there should also remain too few states to screen the moments, so the exhaustion scenario would seem to apply, too; nevertheless, $`T_0`$ is enhanced relative to $`T_\mathrm{K}`$ and all effects of exhaustion vanish. It is tempting to attribute this vanishing of exhaustion to another energy scale associated with the bare gap that appears in the spectra as $`n_c\to 1`$. However, in recent calculations for a model with f-d hopping, such that the hybridization $`V_k\propto ϵ_k`$ and there is no conduction band gap when $`n_f=n_c=1`$, one again finds $`T_0>T_\mathrm{K}`$ in the regime of strong f-d hybridization, suggesting that there must be some more fundamental reason for the absence of exhaustion . Thus, these latter results, together with the ones presented here, surely call for a critical reinvestigation of the phenomenology of exhaustion. Second, thus far all calculations are for orbitally non-degenerate models. The effect of orbital degeneracy and crystal field splittings has yet to be determined. However, in the limit of infinite orbital degeneracy, the Kondo scale would seem to be unrenormalized. Third, following Doniach’s arguments, the RKKY interaction and superexchange will compete with Kondo screening in the formation of the ground state. In the present work we have explicitly concentrated on the paramagnetic state, i.e. these types of exchange do not enter the calculation. However, within the DMFT it is possible to study the influence of RKKY or superexchange at the mean-field level by either looking at the susceptibility or allowing for a symmetry-broken state. Generally, since the RKKY exchange grows like $`J^2`$ while the Kondo scale is exponential in $`1/J`$, Kondo screening and hence exhaustion is expected to be most pronounced when the Kondo exchange $`J\sim V^2/U`$ is large.
However, we have found that exhaustion can dramatically reduce the relevant low-energy scale, which may extend the region where the magnetic exchange dominates the formation of the ground state. Thus, a systematic study of the magnetic phase diagram as a function of $`U/V^2`$ is clearly desirable. Acknowledgements: It is a pleasure to acknowledge fruitful discussions with D. Logan, H.R. Krishnamurthy, F. Anders, A. Georges, N. Grewe, J. Keller, and D. Vollhardt. One of us (MJ) would like to acknowledge support by the NSF via grants DMR9357199 and DMR 9704021. This work was in part supported by a NATO collaborative research travel grant.
## 1 Introduction Electron–photon scattering, $$e\gamma \rightarrow \gamma ^{*}\gamma \rightarrow hadrons,$$ (1) studied in high energy $`e^{+}e^{-}`$ collisions with a tagged electron, is an analogue of inelastic lepton–nucleon scattering. Here the probe – a virtual photon of four momentum $`q`$ ($`q^2=-Q^2<0`$) – tests the target particle, the real photon of four momentum $`p`$ ($`p^2=0`$). The corresponding spin averaged cross section, Fig.1, can be parametrized e.g. by the photon structure functions $`F_1^\gamma (x,Q^2)`$ and $`F_2^\gamma (x,Q^2)`$. The Bjorken parameter $`x`$ is conventionally defined as $`x=Q^2/(2pq)`$. Thus the process (1) permits an insight into the inner structure of the real photon. At large $`Q^2`$ the photon structure function is described by perturbative QCD . However, in the low $`Q^2`$ region, $`Q^2\lesssim 1`$ GeV<sup>2</sup>, it is expected that the Vector Meson Dominance (VMD) contribution is important. In this paper we present a model of the photon structure function $`F_2^\gamma `$ which includes both the VMD contribution and the QCD term, suitably extrapolated to the low $`Q^2`$ region. This approach is based on the extension of a similar representation of the nucleon structure function to the case of the photon. Possible parametrisations of the photon structure function which extend to the low $`Q^2`$ region have also been discussed in Ref. . The parametrisation proposed in Ref. is based upon the Quark Parton Model supplemented by a contribution from the hadronic structure of the photon. The energy dependence of the latter has a Regge form. The $`Q^2`$ dependence is parametrised in terms of simple form-factors which, when combined with the Regge-type energy dependence, generate at large $`Q^2`$ the Bjorken scaling behaviour of the corresponding part of the structure function. In Ref.
the energy dependence of the cross-section is also parametrised in a Regge-like form, with the $`Q^2`$ dependence specified by suitable form factors which contain terms corresponding to the VMD contribution. The parametrisation discussed in is based upon a model corresponding to the interaction of colour dipoles, i.e. the $`q\overline{q}`$ pairs which the photon(s) fluctuate into. In our approach the contribution coming from the light vector mesons within the Vector Meson Dominance model is similar to that used by other authors (see e.g. ), although the details concerning the estimate of the relevant total cross-sections are slightly different. The novel feature of our model is the treatment of the contribution coming from high masses of the hadronic states which couple to the virtual photons. In our scheme this contribution is directly related to the photon structure function in the large $`Q^2`$ region. The low and high mass hadronic states are separated at $`Q_0`$, a parameter whose value was taken to be identical with that for $`F_2^p`$. Our framework makes it possible to describe the $`\gamma \gamma `$ and $`\gamma ^{*}\gamma `$ total cross sections as functions of energy. The energy dependence of $`\sigma _{\gamma \gamma }`$ is also described by other models . Most of them incorporate a Regge-like parametrisation of the total $`\gamma \gamma `$ cross-sections; some provide a detailed insight into the structure of final states and a decomposition of the $`\gamma \gamma `$ total cross-section into terms corresponding to an appropriate subdivision of photon interactions and event classes . The possibility that part of the $`\gamma \gamma `$ cross-section is driven by the production of minijets has been discussed in . Certain approaches analyse the dependence of the cross-sections on the virtualities of both interacting photons .
The content of our paper is as follows: In the next section we recall the QCD description of the photon structure functions, and in section 3 we briefly describe the Vector Meson Dominance model in the process $`\gamma ^{*}\gamma \rightarrow hadrons`$. In section 4 we present a parametrisation of the photon structure function, as well as the total cross section for the interaction of two real photons and of a virtual and a real photon. In section 5 we compare our theoretical predictions with the experimental data on $`F_2^\gamma `$ and on the total cross sections $`\sigma _{\gamma \gamma }`$ and $`\sigma _{\gamma ^{*}\gamma }`$. We also give predictions for $`\sigma _{\gamma \gamma }`$ in the very high energy range which may become accessible in future linear colliders. Finally, in section 6 we give a summary of our results. ## 2 Partonic content of the photon At large $`Q^2`$, i.e. in the deep inelastic limit, the virtual photon probes the quark (antiquark) structure of the (real) photon, in analogy to deep inelastic lepton–hadron scattering, Fig.2. The corresponding photon structure function $`F_2^\gamma (x,Q^2)`$ may thus be related to the quark and antiquark distributions $`q_i^\gamma (x,Q^2)`$, $`\overline{q}_i^\gamma (x,Q^2)`$ in the photon: $$F_2^\gamma (x,Q^2)=x\sum _ie_i^2[q_i^\gamma (x,Q^2)+\overline{q}_i^\gamma (x,Q^2)],$$ (2) where $`e_i`$ denote the charges of the quarks and antiquarks and the sum runs over all active quark flavours. To be precise, equation (2) holds in the leading logarithmic approximation of perturbative QCD. It acquires higher order corrections in the next-to-leading approximation and beyond . A special feature of the quark structure of the photon, as compared with the proton, is the possibility of direct $`q\overline{q}`$ production through the process $`\gamma ^{*}\gamma \rightarrow q\overline{q}`$, see Fig.3, leading to the parton model predictions for $`q_i^\gamma (x,Q^2)`$ and $`\overline{q}_i^\gamma (x,Q^2)`$.
A contribution from this process introduces an inhomogeneous term into the equation describing the QCD evolution of the quark (and antiquark) distributions in the photon. The process $`\gamma ^{*}\gamma \rightarrow q\overline{q}`$ (modified by the QCD evolution), with the pointlike quark coupling to both the real and the virtual photon, dominates in the large $`Q^2`$ limit, making the photon structure functions exactly calculable in this limit . Striking features of these functions are: * $`F_{1,2}^\gamma `$ rise with increasing $`x`$ at large $`x`$; * $`F_{1,2}^\gamma `$ show scaling violation, $`F_{1,2}^\gamma \sim \mathrm{ln}Q^2`$. At low values of $`x`$ the dominant role in the photon structure functions is played by gluons. The situation here is similar to that for hadronic structure functions, which exhibit a very strong increase with decreasing $`x`$, see e.g. . Those effects are still rather weak in the kinematical region of $`F_2^\gamma `$ probed by present experiments, but they will be very important in the regime accessible at future linear $`e^{+}e^{-}`$ ($`e\gamma `$, $`\gamma \gamma `$) colliders . Besides the direct, point-like coupling to quarks, the target photon can fluctuate into vector mesons and other hadronic states which can also have their own partonic structure. The latter cannot be calculated perturbatively and thus has to be parametrised phenomenologically . Finally, it should be pointed out that the charm quark, which plays the dominant role in the heavy quark contributions to $`F_2^\gamma (x,Q^2)`$, is often described just by the lowest order Bethe-Heitler cross-section for the process $`\gamma ^{*}\gamma \rightarrow c\overline{c}`$, plus the additional contribution generated by the radiation $`g\rightarrow c\overline{c}`$ . ## 3 Dispersive relation for $`\gamma ^{*}\gamma `$ scattering. $`F_2^{VMD}`$ and $`F_2^{partons}`$ contributions to $`F_2^\gamma `$ QCD describes the photon structure functions in the large $`Q^2`$ region.
In the low $`Q^2`$ region, however, one expects that the VMD mechanism is important. By the Vector Meson Dominance mechanism we understand in this case the model in which the virtual photon of virtuality $`Q^2`$ fluctuates into vector mesons which subsequently interact with the (real) photon of virtuality $`p^2\approx 0`$, see Fig.4. In order to describe the photon structure function for arbitrary values of $`Q^2`$, it would be very useful to have a unified scheme which contains both the VMD and the QCD contributions, the latter suitably extended to the region of low values of $`Q^2`$. This may be achieved by utilising the dispersive representation of the structure function in $`Q^2`$. To this aim let us notice that the $`\gamma ^{*}\gamma `$ collision can be viewed as the interaction of a real photon target with a photon of virtuality $`Q^2`$ which fluctuates into a general hadronic state, cf. Fig.5. We consider the virtual photon first fluctuating into a $`q\overline{q}`$ state which then interacts with the real photon. As in $`\gamma ^{*}p`$ scattering, one can write the dispersion relation for $`\gamma ^{*}\gamma `$ scattering as follows : $$F_2^\gamma (W^2,Q^2)=\frac{Q^2}{4\pi ^2\alpha }\sum _q\int \frac{dM^2}{M^2+Q^2}\int \frac{dM^{\prime 2}}{M^{\prime 2}+Q^2}\,\rho (M^2,M^{\prime 2})\frac{1}{W^2}\mathrm{Im}A_{(q\overline{q})\gamma }(W^2,M^2,M^{\prime 2})$$ (3) where $`M`$ and $`M^{\prime }`$ are the invariant masses of the incoming and outgoing $`q\overline{q}`$ pair. In eq. (3), $`\rho (M^2,M^{\prime 2})`$ is the density matrix of the $`q\overline{q}`$ states and $`\mathrm{Im}A_{(q\overline{q})\gamma }`$ is the imaginary part of the corresponding forward scattering amplitude.
The above formula can be rewritten in the form of a single dispersion relation as follows, $$F_2^\gamma (W^2,Q^2)=Q^2\int _0^{\mathrm{}}\frac{dQ^{\prime 2}}{(Q^{\prime 2}+Q^2)^2}\mathrm{\Phi }(W^2,Q^{\prime 2})$$ (4) where the spectral function $`\mathrm{\Phi }(W^2,Q^{\prime 2})`$ $`=`$ $`{\displaystyle \frac{1}{4\pi ^2\alpha }}{\displaystyle \int _0^1}d\lambda {\displaystyle \int dM^2\int dM^{\prime 2}\delta (Q^{\prime 2}-\lambda M^2-(1-\lambda )M^{\prime 2})}`$ (5) $`\times \rho (M^2,M^{\prime 2}){\displaystyle \frac{1}{W^2}}\mathrm{Im}A_{(q\overline{q})\gamma }(W^2,M^2,M^{\prime 2}).`$ The centre-of-mass energy squared $`W^2=(p+q)^2`$ is related to the Bjorken parameter $`x`$ in the following way: $$W^2=Q^2(\frac{1}{x}-1).$$ (6) One can now separate the regions of low and high values of $`Q^{\prime 2}`$ in the integral (4), by noticing that this integral corresponds to the (Generalised) Vector Meson Dominance representation of $`F_2^\gamma `$. For low values of $`Q^{\prime 2}`$, $`Q^{\prime 2}<Q_0^2`$, one uses the Vector Meson Dominance model. In this approach one assumes that the virtual photon forms a vector meson rather than a pair of well separated $`q`$ and $`\overline{q}`$. The integrand of the integral on the right hand side of equation (5) defining the spectral function $`\mathrm{\Phi }(W^2,Q^{\prime 2})`$ is then given by the following formula: $$\rho (M^2,M^{\prime 2})\frac{1}{W^2}\mathrm{Im}A_{(q\overline{q})\gamma }(W^2,M^2,M^{\prime 2})$$ $$=\pi \alpha \underset{V}{\sum }\frac{M_V^4}{\gamma _V^2}\sigma _{V\gamma }(W^2)\delta (M^2-M_V^2)\delta (M^{\prime 2}-M_V^2),$$ (7) where $`M_V`$ is the mass of the vector meson $`V`$ and $`\sigma _{V\gamma }(W^2)`$ denotes the $`V\gamma `$ total cross section. The couplings $`\gamma _V^2`$ can be estimated from the leptonic widths of the vector mesons, $$\frac{\gamma _V^2}{\pi }=\frac{\alpha ^2M_V}{3\mathrm{\Gamma }_{e^+e^{-}}^V}.$$ (8) In equation (7) we have included only diagonal transitions between vector mesons of equal masses.
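As a quick aside, the kinematic mapping (6) and its inverse $`x=Q^2/(Q^2+W^2)`$ are used throughout what follows; a minimal helper sketch (the function names are ours, for illustration only):

```python
def W2_from_x(x, Q2):
    """Centre-of-mass energy squared, eq. (6): W^2 = Q^2 * (1/x - 1)."""
    return Q2 * (1.0 / x - 1.0)

def x_from_W2(W2, Q2):
    """Inverse relation: x = Q^2 / (Q^2 + W^2)."""
    return Q2 / (Q2 + W2)
```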
The corresponding spectral function $`\mathrm{\Phi }^{VMD}(W^2,Q^{\prime 2})`$ thus reads: $$\mathrm{\Phi }^{VMD}(W^2,Q^{\prime 2})=\underset{V}{\sum }\frac{M_V^4}{4\pi \gamma _V^2}\sigma _{V\gamma }(W^2)\delta (Q^{\prime 2}-M_V^2).$$ (9) The resulting VMD contribution to the photon structure function $`F_2^\gamma `$, $`F_2^{\mathrm{VMD}}`$, is given by the following equation: $$F_2^{\mathrm{VMD}}(x,Q^2)=\frac{Q^2}{4\pi }\underset{V}{\sum }\frac{M_V^4\sigma _{V\gamma }(W^2)}{\gamma _V^2(Q^2+M_V^2)^2}.$$ (10) In eq.(10) we only consider mesons with masses squared $`M_V^2<Q_0^2`$. The contribution coming from the region of high values of $`Q^{\prime 2}`$ ($`Q^{\prime 2}>Q_0^2`$) can be related to the photon structure function in the large $`Q^2`$ domain. It defines the partonic contribution $`F_2^{\mathrm{partons}}`$ to the structure function $`F_2^\gamma `$, extended to arbitrarily low values of $`Q^2`$. For convenience, we adopt the approximation used in Ref. , which gives: $$F_2^{\mathrm{partons}}(x,Q^2)=\frac{Q^2}{Q^2+Q_0^2}F_2^{\mathrm{QCD}}(\overline{x},Q^2+Q_0^2)$$ (11) where $$\overline{x}=\frac{Q^2+Q_0^2}{W^2+Q^2+Q_0^2}.$$ (12) The structure function $`F_2^{\mathrm{QCD}}`$ is taken from the QCD analysis valid in the large $`Q^2`$ region, i.e. it is calculated from the existing parametrisations of the parton distributions. The modifications of the QCD contribution (the replacement of the parameter $`x`$ by $`\overline{x}`$ defined in equation (12), the shift of the scale $`Q^2\to Q^2+Q_0^2`$, and the factor $`Q^2/(Q^2+Q_0^2)`$ instead of 1) introduce power corrections which vanish as $`1/Q^2`$ and are negligible at large $`Q^2`$. The magnitude of $`Q_0^2`$ is set to $`1.2\mathrm{GeV}^2`$, as in the case of the proton . $`F_2^{\mathrm{partons}}`$ thus defines a contribution to $`F_2^\gamma `$ at arbitrarily low values of $`Q^2`$. A more elaborate treatment of the partonic contribution to the proton structure function has been developed in Ref. , where the long and short distance components have been carefully separated.
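The low-$`Q^2`$ extrapolation (11)-(12) can be sketched in code. The function `F2_QCD` below is a crude toy stand-in (not the GRV/GRS parametrisations actually used in the paper); only the kinematic structure of eqs. (11)-(12) is faithful:

```python
import math

Q0_2 = 1.2  # GeV^2, as for the proton

def F2_QCD(x, Q2):
    """Crude toy stand-in for a large-Q^2 parton parametrisation (illustrative)."""
    return 0.2 * math.log(Q2) * (x**0.5 + 2.0 * (1.0 - x))

def F2_partons(x, Q2):
    """Low-Q^2 extrapolation of the partonic contribution, eqs. (11)-(12)."""
    W2 = Q2 * (1.0 / x - 1.0)                          # eq. (6)
    xbar = (Q2 + Q0_2) / (W2 + Q2 + Q0_2)              # eq. (12)
    return Q2 / (Q2 + Q0_2) * F2_QCD(xbar, Q2 + Q0_2)  # eq. (11)
```

At large $`Q^2`$ the prefactor tends to 1 and $`\overline{x}\to x`$, so the power corrections die away as stated in the text, while at $`Q^2\to 0`$ the contribution vanishes linearly in $`Q^2`$.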
According to that paper the low mass region is dominated by $`q\overline{q}`$ pairs with large transverse sizes in the impact parameter space (thus corresponding to the VMD), whereas the QCD part is dominated by pairs of small transverse size. ## 4 Parametrisation of the photon structure function and of the total photon–photon interaction cross sections Our representation of the photon structure function $`F_2^\gamma (x,Q^2)`$ is based on the following decomposition: $$F_2^\gamma (x,Q^2)=F_2^{\mathrm{VMD}}(x,Q^2)+F_2^{\mathrm{partons}}(x,Q^2)$$ (13) where $`F_2^{\mathrm{VMD}}`$ and $`F_2^{\mathrm{partons}}(x,Q^2)`$ are defined by eqs. (10) and (11). The total $`\gamma ^{*}\gamma `$ cross-section in the high energy limit is given by $$\sigma _{\gamma ^{*}\gamma }(W,Q^2)=\frac{4\pi ^2\alpha }{Q^2}F_2^\gamma (x,Q^2),$$ (14) with $`x=Q^2/(Q^2+W^2)`$. The $`Q^2=0`$ (for fixed $`W`$) limit of eq. (14) gives the total cross-section $`\sigma _{\gamma \gamma }(W^2)`$ corresponding to the interaction of two real photons. From (13), (10) and (11) we obtain the following expression for this cross-section at high energy: $$\sigma _{\gamma \gamma }(W)=\alpha \pi \underset{V=\rho ,\omega ,\varphi }{\sum }\frac{\sigma _{V\gamma }(W^2)}{\gamma _V^2}+\frac{4\pi ^2\alpha }{Q_0^2}F_2^{\mathrm{QCD}}(Q_0^2/W^2,Q_0^2).$$ (15) At large $`Q^2`$ the structure function given by eq. (13) becomes equal to the QCD contribution $`F_2^{\mathrm{QCD}}(x,Q^2)`$. The VMD component gives a power correction term which vanishes as $`1/Q^2`$ for large $`Q^2`$. It should be noted that the VMD part contains only a finite number of vector mesons, those with masses squared smaller than $`Q_0^2`$. In the quantitative analysis of the photon structure function and of the total cross sections we have taken the structure function $`F_2^{\mathrm{QCD}}`$ from the LO analyses presented in Ref. (GRV) and (GRS’), with the number of active flavours equal to four.
The latter parton parametrisation is based on an updated data analysis and holds for both virtual and real photons. The VMD part was estimated using the following assumptions: 1. Numerical values of the couplings $`\gamma _V^2`$ are the same as those used in Ref. . They were estimated from the relation (8), which gives the following values: $$\frac{\gamma _\rho ^2}{\pi }=1.98,\frac{\gamma _\omega ^2}{\pi }=21.07,\frac{\gamma _\varphi ^2}{\pi }=13.83.$$ (16) 2. Cross-sections $`\sigma _{V\gamma }`$ are represented as sums of the Pomeron and Reggeon contributions: $$\sigma _{V\gamma }(W^2)=P_{V\gamma }(W^2)+R_{V\gamma }(W^2)$$ (17) where $$P_{V\gamma }(W^2)=a_{V\gamma }^P\left(\frac{W^2}{W_0^2}\right)^{\lambda _P}$$ (18) $$R_{V\gamma }(W^2)=a_{V\gamma }^R\left(\frac{W^2}{W_0^2}\right)^{-\lambda _R}$$ (19) with $$\lambda _R=0.4525,\lambda _P=0.0808$$ (20) and $`W_0^2=`$1 GeV<sup>2</sup> . 3. Pomeron couplings $`a_{V\gamma }^P`$ are related to the corresponding couplings $`a_{\gamma p}^P`$ controlling the Pomeron contributions to the total $`\gamma p`$ cross-sections. We assume the additive quark model and reduce the total cross-sections for the interaction of strange quarks by a factor of 2. This gives: $$a_{\rho \gamma }^P=a_{\omega \gamma }^P=\frac{2}{3}a_{\gamma p}^P,$$ $$a_{\varphi \gamma }^P=\frac{1}{2}a_{\rho \gamma }^P.$$ (21) 4. Reggeon couplings $`a_{V\gamma }^R`$ are estimated assuming the additive quark model and duality (i.e. a dominance of planar quark diagrams). We also assume that the quark couplings to a photon are proportional to the quark charge, with a flavour-independent proportionality factor. This gives: $$a_{\rho \gamma }^R=a_{\omega \gamma }^R=\frac{5}{9}a_{\gamma p}^R,$$ $$a_{\varphi \gamma }^R=0.$$ (22) 5. Couplings $`a_{\gamma p}^P`$ and $`a_{\gamma p}^R`$ are taken from the fit discussed in Ref.
which gave: $$a_{\gamma p}^R=0.129\mathrm{mb},a_{\gamma p}^P=0.0677\mathrm{mb}$$ (23) Since we are using the Regge description of the total cross sections $`\sigma _{V\gamma }(W)`$, our approach can only work for large values of $`W`$, $`W^2\stackrel{>}{\sim }2\mathrm{GeV}^2`$, away from the resonance region. ## 5 Numerical results In this section we compare our results for the real photon structure function $`F_2^\gamma (x,Q^2)`$ and the two-photon cross sections, $`\sigma _{\gamma \gamma }(W)`$ and $`\sigma _{\gamma ^{*}\gamma }(W)`$, with the corresponding measurements. In some cases our predictions have been extended to the region $`W^2<2\mathrm{GeV}^2`$ where the model may not be applicable. Theoretical curves were obtained using two different parametrisations of the structure function $`F_2^{\mathrm{QCD}}`$, GRV and GRS’ . In Fig.6 we show predictions for the photon structure function based on equation (13), plotted as a function of $`x`$ for different values of $`Q^2`$ in the region of small $`Q^2`$. In Fig.7 the $`Q^2`$ dependence of the photon structure function for different values of $`x`$ is presented <sup>1</sup><sup>1</sup>1 In both figures the curves are plotted only for $`W>2m_\pi `$, which corresponds to the threshold energy of the reaction $`\gamma \gamma \to hadrons`$.. We confront our theoretical results with the existing experimental data . Measurements of $`F_2^\gamma `$ are scarce, especially at low values of $`Q^2`$. It can be seen, however, that our prediction reproduces the data well, independently of the parametrisation (GRV or GRS’) of $`F_2^{\mathrm{QCD}}`$ used in the model. The irregular behaviour of the dashed lines observed in Figs 6 and 7 at high values of $`x`$ is connected with the treatment of the charm contribution to $`F_2^\gamma `$ in the GRS’ approach. In Fig.8 we compare our predictions with the data on $`\sigma _{\gamma \gamma }(W)`$. Theoretical curves were obtained from equation (15).
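The VMD term of eq. (15) is completely fixed by the numbers quoted in eqs. (16)-(23); a minimal numerical transcription (the partonic $`F_2^{\mathrm{QCD}}`$ term of eq. (15) is omitted here; units in mb):

```python
import math

ALPHA = 1.0 / 137.036                    # fine-structure constant
LAM_P, LAM_R = 0.0808, 0.4525            # Pomeron / Reggeon powers, eq. (20)
A_P_GP, A_R_GP = 0.0677, 0.129           # gamma-p couplings in mb, eq. (23)
GAMMA2 = {"rho": 1.98 * math.pi,         # couplings gamma_V^2, eq. (16)
          "omega": 21.07 * math.pi,
          "phi": 13.83 * math.pi}
A_P = {"rho": 2.0 / 3.0 * A_P_GP, "omega": 2.0 / 3.0 * A_P_GP,
       "phi": 1.0 / 3.0 * A_P_GP}        # eq. (21): a^P_phi = a^P_rho / 2
A_R = {"rho": 5.0 / 9.0 * A_R_GP, "omega": 5.0 / 9.0 * A_R_GP,
       "phi": 0.0}                       # eq. (22)

def sigma_Vgamma(V, W2):
    """Pomeron + Reggeon form of eqs. (17)-(19); W2 in GeV^2 (W0^2 = 1 GeV^2)."""
    return A_P[V] * W2**LAM_P + A_R[V] * W2**(-LAM_R)

def sigma_gg_VMD(W):
    """VMD term of eq. (15) for the real gamma-gamma cross section, in mb."""
    W2 = W * W
    return ALPHA * math.pi * sum(sigma_Vgamma(V, W2) / GAMMA2[V] for V in GAMMA2)
```

With these inputs the VMD part at $`W=10`$ GeV comes out at roughly $`3\times 10^{-4}`$ mb, i.e. a few hundred nb, which is indeed the order of magnitude of the measured $`\sigma _{\gamma \gamma }`$.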
We show experimental points corresponding to the low energy region ($`W\stackrel{<}{\sim }`$ 10 GeV) and the recent preliminary high energy data obtained by the L3, OPAL and DELPHI collaborations at LEP . The representation (15) for the total $`\gamma \gamma `$ cross-section describes the data reasonably well. The result of the calculation based on the GRS’ parametrisation of $`F_2^{\mathrm{QCD}}`$ is slightly higher and has a shallower minimum than that based on the GRV parametrisation. Calculations using the latter give a good description of the shape of the energy dependence of the cross section, although the overall normalisation seems to be about $`15\%`$ too large. It should be stressed that our prediction is essentially parameter free. The magnitude of the cross-section is dominated by the VMD component, yet the partonic part is also non-negligible. In particular, the latter term is responsible for generating a steeper increase of the total cross-section with increasing $`W`$ than that embodied in the VMD part, which is described by the soft Pomeron contribution. The decrease of the total cross-section with increasing energy in the low $`W`$ region is controlled by the Reggeon component of the VMD part (see eqs. (17), (19) and (20)) and by the valence part of the partonic contribution. In Fig.9 we show predictions for the total $`\gamma \gamma `$ cross-section as a function of the total centre-of-mass energy $`W`$ in a wide energy range, including the energies that might be accessible at the future linear colliders. In this figure we also show the decomposition of $`\sigma _{\gamma \gamma }(W^2)`$ into its VMD and partonic components (only the GRV parametrisation was used in this analysis). At very high energies these two terms exhibit different energy dependence. The VMD part is described by the soft Pomeron contribution which gives the $`W^{2\lambda }`$ behaviour with $`\lambda `$ = 0.0808, eq. (20).
The partonic component increases faster with energy, since its energy dependence reflects the increase of $`F_2^{\mathrm{QCD}}(\overline{x},Q_0^2)`$ with decreasing $`\overline{x}`$ generated by the QCD evolution . This increase is stronger than that implied by soft Pomeron exchange. As a result, the total $`\gamma \gamma `$ cross-section, which is the sum of the VMD and partonic components, also exhibits a stronger increase with increasing energy than the VMD component alone. It is, however, milder than the increase generated by the partonic component alone, at least for $`W<10^3`$ GeV. This follows from the fact that in this energy range the magnitude of the cross-section is still dominated by its VMD component. We found that for sufficiently high energies $`W`$ the total $`\gamma \gamma `$ cross-section $`\sigma _{\gamma \gamma }(W)`$ described by eq. (15) can be parametrised by an effective power law dependence, $`\sigma _{\gamma \gamma }(W)\sim (W^2)^{\lambda _{eff}}`$, with $`\lambda _{eff}`$ slowly increasing with energy ($`\lambda _{eff}\simeq 0.1-0.12`$ for 30 GeV $`<W<`$ 10<sup>3</sup> GeV). In Fig.10 we show the $`\gamma ^{*}\gamma `$ cross section for different bins of the centre-of-mass energy $`W`$, plotted versus $`Q^2`$, the virtuality of the probing photon. Our theoretical predictions based on equation (14) are compared with the measurements by the TPC/2$`\gamma `$ Collaboration , and the agreement between the two is very good. In Fig.11 we show the $`\gamma ^{*}\gamma `$ cross section for large energies $`W`$ plotted versus $`Q^2`$ (only the GRV parametrisation was used here). At medium and large $`Q^2`$, $`\sigma _{\gamma ^{*}\gamma }`$ decreases as $`1/Q^2`$ (modulo logarithmic corrections), while at very small values ($`Q^2<10^{-1}\mathrm{GeV}^2`$) it exhibits a flattening behaviour.
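The effective exponent $`\lambda _{eff}`$ discussed above can be read off as a logarithmic derivative, $`\lambda _{eff}=d\mathrm{ln}\sigma /d\mathrm{ln}W^2`$; a toy sketch with a soft-Pomeron-like power plus a steeper partonic-like power (the normalisations `c1`, `c2` are ours, purely illustrative):

```python
import math

def sigma_toy(W2, c1=1.0, c2=0.01, lp=0.0808, lq=0.25):
    """Toy two-component cross section: soft-Pomeron-like plus a steeper term."""
    return c1 * W2**lp + c2 * W2**lq

def lambda_eff(W2, eps=1e-4):
    """Effective power lambda_eff = d ln(sigma) / d ln(W^2), symmetric difference."""
    up = sigma_toy(W2 * (1.0 + eps))
    dn = sigma_toy(W2 / (1.0 + eps))
    return math.log(up / dn) / (2.0 * math.log(1.0 + eps))
```

As in the text, the slope of the sum lies between the two input powers and creeps upward with energy as the steeper component gains weight.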
## 6 Concluding remarks We have presented an extension of the representation developed for the nucleon structure function $`F_2`$ at arbitrary values of $`Q^2`$ , to the structure function of the real photon. This representation includes both the VMD contribution and the QCD component, obtained from the QCD parton parametrisations for the photon, suitably extrapolated to the region of low $`Q^2`$. In the $`Q^2`$=0 limit the model gives predictions for the total cross section $`\sigma _{\gamma \gamma }`$ for the interaction of two real photons. We showed that our framework is fairly successful in describing the experimental data on $`\sigma _{\gamma \gamma }(W)`$, on $`\sigma _{\gamma ^{*}\gamma }(W,Q^2)`$ and on the photon structure function $`F_2^\gamma `$ at low $`Q^2`$. We also showed that one can naturally explain the fact that the increase of the total $`\gamma \gamma `$ cross-section with increasing CM energy $`W`$ is stronger than that implied by soft Pomeron exchange. The calculated total $`\gamma \gamma `$ cross-section exhibits an approximate power-law increase with increasing energy $`W`$, i.e. $`\sigma _{\gamma \gamma }(W)\sim (W^2)^{\lambda _{eff}}`$ with $`\lambda _{eff}`$ slowly increasing with energy: $`\lambda _{eff}\simeq 0.1-0.12`$ for 30 GeV $`<W<`$ 10<sup>3</sup> GeV. ## Acknowledgments This research was partially supported by the Polish State Committee for Scientific Research grants no. 2 P03B 089 13, 2 P03B 014 14, 2 P03B 184 10 and by the EU Fourth Framework Programme ”Training and Mobility of Researchers”, Network ’Quantum Chromodynamics and the Deep Structure of Elementary Particles’, contract FMRX - CT98 - 0194. AMS thanks the Foundation for Polish Science for financial support.
# Freeze-out in hydrodynamical models in relativistic heavy ion collisions ## 1 INTRODUCTION In continuum and fluid dynamical models, particles which leave the system and reach the detectors can be taken into account via freeze-out (FO) or final break-up schemes, where the frozen out particles are formed on a 3-dimensional hypersurface in space-time. Such FO descriptions are important ingredients of evaluations of two-particle correlation data, transverse-, longitudinal-, radial- and cylindrical-flow analyses, transverse momentum and transverse mass spectra, and many other observables. The FO on a hypersurface is a discontinuity, where the pre FO equilibrated and interacting matter abruptly changes to non-interacting particles, showing an ideal gas type of behavior. The frequently used Cooper-Frye formula, the rapidity distribution, the transverse momentum spectra, etc., all include the expression $`f_{FO}(x,p,T,n,u^\nu )p^\mu d\widehat{\sigma }_\mu ,`$ where $`f_{FO}(x,p,T,n,u^\nu )`$ is the post FO distribution, which is not known from the fluid dynamical model, and $`d\widehat{\sigma }_\mu `$ is the normal vector of the FO hypersurface. These formulas work well for timelike $`d\widehat{\sigma }^\mu `$ ($`p^\mu d\widehat{\sigma }_\mu >0`$). If $`d\widehat{\sigma }_\mu `$ is spacelike, we count particles going backwards through the FO front as well as outwards. The post FO distribution can not be a thermal one! In fact, $`f_{FO}`$ should contain only particles which cross the FO front outwards, $`p^\mu d\widehat{\sigma }_\mu >0`$, since rescattering and back scattering are not allowed any more on the post FO side.
$$f_{FO}(x,p,T,n,u^\nu ,d\widehat{\sigma }_\mu )=0,p^\mu d\widehat{\sigma }_\mu <0.$$ (1) If we initially know the pre FO baryon current and energy-momentum tensor, $`N_0^\mu `$ and $`T_0^{\mu \nu },`$ we can calculate locally, across a surface element of normal vector $`d\widehat{\sigma }^\mu `$, the post FO quantities, $`N^\mu `$ and $`T^{\mu \nu }`$, from the relations: $`[N^\mu d\widehat{\sigma }_\mu ]=0`$ and $`[T^{\mu \nu }d\widehat{\sigma }_\mu ]=0,`$ where $`[A]\equiv A-A_0`$. One should also check that the entropy is nondecreasing in the FO. In numerical calculations the local FO surface can be determined most accurately via self-consistent iteration. Initial ideas to improve the Cooper-Frye FO description in this way were suggested in refs. . ## 2 FREEZE-OUT DISTRIBUTION FROM KINETIC THEORY Let us assume an infinitely long tube with its left half ($`x<0`$) filled with nuclear matter, while in the right half vacuum is maintained. We remove the dividing wall at $`t=0`$, and then the matter expands into the vacuum. Continuously removing particles at the right end of the tube and supplying particles at the left end, we can establish a stationary flow in the tube, where the particles gradually freeze out in an exponential rarefaction wave propagating to the left in the matter. We can move with this front, so that we describe it from the reference frame of the front (RFF). We assume that there are two components of our momentum distribution: $`f_{free}(x,\stackrel{}{p})`$ and $`f_{int}(x,\stackrel{}{p})`$.
However, we assume that at $`x=0`$, $`f_{free}`$ vanishes exactly and $`f_{int}`$ is an ideal Jüttner distribution; then $`f_{int}`$ gradually disappears and $`f_{free}`$ gradually builds up, according to the differential equations: $`\partial _xf_{int}(x,\stackrel{}{p})dx=-\mathrm{\Theta }(p^\mu d\widehat{\sigma }_\mu ){\displaystyle \frac{\mathrm{cos}\theta _\stackrel{}{p}}{\lambda }}f_{int}(x,\stackrel{}{p})dx,`$ (2) $`\partial _xf_{free}(x,\stackrel{}{p})dx=+\mathrm{\Theta }(p^\mu d\widehat{\sigma }_\mu ){\displaystyle \frac{\mathrm{cos}\theta _\stackrel{}{p}}{\lambda }}f_{int}(x,\stackrel{}{p})dx,`$ where $`\mathrm{cos}\theta _\stackrel{}{p}=\frac{p^x}{p}`$ in the RFF. This expresses the fact that particles with momenta orthogonal to the FO surface leave the system with a bigger probability than particles emitted at an angle. Such a dramatically oversimplified model can reproduce a cut Jüttner distribution as the post FO one, but it does not allow a complete FO: the interacting component of the momentum distribution survives even for $`x\to \mathrm{}`$. To improve our model we take into account rescattering within the interacting component, which leads to re-thermalization and re-equilibration of this component. We use the relaxation time approximation to simplify the description of the dynamics. Thus, the two components of the momentum distribution develop according to the differential equations: $$\begin{array}{ccc}\hfill \partial _xf_{int}(x,\stackrel{}{p})dx=& -\mathrm{\Theta }(p^\mu d\widehat{\sigma }_\mu )\frac{\mathrm{cos}\theta _\stackrel{}{p}}{\lambda }f_{int}(x,\stackrel{}{p})dx+\hfill & \\ & & \\ & +\left[f_{eq}(x,\stackrel{}{p})-f_{int}(x,\stackrel{}{p})\right]\frac{1}{\lambda ^{\prime }}dx,\hfill & \end{array}$$ (3) $$\begin{array}{ccc}\hfill \partial _xf_{free}(x,\stackrel{}{p})dx=& +\mathrm{\Theta }(p^\mu d\widehat{\sigma }_\mu )\frac{\mathrm{cos}\theta _\stackrel{}{p}}{\lambda }f_{int}(x,\stackrel{}{p})dx.\hfill & \end{array}$$ (4) The interacting component of the momentum distribution, described by eq.
(3), shows the tendency to approach an equilibrated distribution with a relaxation length $`\lambda ^{\prime }`$. Of course, due to the energy, momentum and particle drain, this distribution $`f_{eq}(x,\stackrel{}{p})`$ is not the same as the initial Jüttner distribution, but its parameters, $`n_{eq}(x)`$, $`T_{eq}(x)`$ and $`u_{eq}^\mu (x)`$, change as required by the conservation laws. Let us assume that $`\lambda ^{\prime }\ll \lambda `$, i.e. re-thermalization is much faster than the rate at which particles freeze out, or much faster than the rate at which the parameters $`n_{eq}(x)`$, $`T_{eq}(x)`$ and $`u_{eq}^\mu (x)`$ change. Then $`f_{int}(x,\stackrel{}{p})\approx f_{eq}(x,\stackrel{}{p})`$, for $`\lambda ^{\prime }\ll \lambda .`$ For $`f_{eq}(x,\stackrel{}{p})`$ we assume the spherical Jüttner form at any $`x`$, including both the positive and negative momentum parts, with parameters $`n(x),T(x)`$ and $`u_{RFG}^\mu (x)`$. (Here $`u_{RFG}^\mu (x)`$ is the actual flow velocity of the interacting, Jüttner component, i.e. the velocity of the Rest Frame of the Gas (RFG).) In this case the changes of the conserved quantities due to particle drain or transfer can be evaluated for an infinitesimal $`dx`$, and the new parameters of $`f_{eq}(x+dx,\stackrel{}{p})`$ can be found . We would like to point out that, although for the spherical Jüttner distribution the Landau and Eckart flow velocities are the same, the changes of this flow velocity calculated from the loss of the baryon current and from the loss of the energy current turn out to be different, $`du_{i,E,RFG}^\mu (x)\ne du_{i,L,RFG}^\mu (x).`$ This is a clear consequence of the asymmetry caused by the FO process, as discussed in ref. . This problem does not occur for the freeze-out of a baryon-free plasma. We performed the calculations, according to this model, for a baryon-free and massless gas . We would like to note that now $`f_{int}(x,\stackrel{}{p})`$ does not tend to the cut Jüttner distribution in the limit $`x\to \mathrm{}`$. Furthermore, we obtain that $`T\to 0`$ when $`x\to \mathrm{}`$.
So, $`f_{int}(x,\stackrel{}{p})=\frac{1}{(2\pi \hbar )^3}\mathrm{exp}[(\mu -p^\nu u_\nu )/T]\to 0`$ when $`x\to \mathrm{}`$. Thus, all particles freeze out in the improved model, but such a physical FO requires an infinite distance (or time). This second problem may also be removed by using the volume emission model discussed in . The application of our one-dimensional model to the transverse expansion gives the result shown in Fig. 1, which is in qualitative agreement with experiment. Recent calculations have confirmed that the FO hypersurface idealization can be justified even in microscopic reaction models (like UrQMD or QGSM) for nucleon data in collisions of massive heavy ions. The improvements presented here are essential and lead to non-negligible qualitative and quantitative changes in calculations including FO. Several further details and consequences of this improved approach still have to be worked out (e.g. ) to obtain more accurate data from the numerous continuum and fluid dynamical models used for the description of heavy ion reactions.
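As an aside, the simplest version of the model, eqs. (2) without the rescattering term, can be integrated in closed form: for $`\mathrm{cos}\theta _\stackrel{}{p}>0`$ the interacting component decays as $`\mathrm{exp}(-x\mathrm{cos}\theta /\lambda )`$ while the free component collects the loss, and backward-moving particles are untouched. A minimal numerical sketch (the angular grid and units are ours):

```python
import math

LAM = 1.0  # mean free path lambda, arbitrary units

def evolve(f0, cos_thetas, x):
    """Closed-form solution of eqs. (2) at depth x: returns (f_int, f_free)."""
    f_int, f_free = [], []
    for f, c in zip(f0, cos_thetas):
        if c > 0.0:                        # Theta(p^mu dsigma_mu) gate
            fi = f * math.exp(-x * c / LAM)
        else:                              # backward-moving particles untouched
            fi = f
        f_int.append(fi)
        f_free.append(f - fi)              # drained particles become free
    return f_int, f_free
```

The sum $`f_{int}+f_{free}`$ is conserved momentum by momentum, and the surviving interacting part is anisotropic, which is exactly why the post FO distribution cannot be thermal.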
# Nonexistence of the non–Gaussian fixed point predicted by the RG field theory in 4-ϵ dimensions ## 1 Introduction Since the famous work of K.G. Wilson and M.E. Fisher , the renormalization group (RG) field theory has been widely used in calculations of critical exponents . The basic hypothesis of this theory is the existence of a certain fixed point for the RG transformation. However, the existence of such a stable fixed point for the Ginzburg–Landau model (which lies at the basis of the field theory) has not been proven mathematically in the case of spatial dimensionality $`d<4`$. The fact that such a fixed point can be found within a scheme of self–consistent approximations, assuming its existence at the very beginning, cannot be regarded as a mathematical proof. An attempt to prove the existence of the non–Gaussian fixed point in $`4-ϵ`$ dimensions has been made in Ref. . The authors have considered a rather artificial $`\phi ^4`$–type model, supposing that it simulates the Ginzburg–Landau model in $`4-ϵ`$ dimensions. The method of proof is to solve the problem for a finite system (of linear size $`L`$), considering the thermodynamic limit afterwards. However, we cannot, in principle, find the non–Gaussian fixed point at a finite $`L`$ for a system with a real interaction, and then consider $`L\to \mathrm{}`$. The problem is that, due to the absence of a phase transition at $`u>0`$, a stable fixed point with a nonzero coupling constant $`u`$ of the $`\phi ^4`$ interaction cannot exist in the case of a finite system. All such efforts to prove the existence of the non–Gaussian fixed point, predicted in the Ginzburg–Landau model by the RG field theory, are futile, since there exists an obviously simple proof of nonexistence, presented hereafter. ## 2 Fundamental equations We consider the Ginzburg–Landau phase transition model.
The Hamiltonian of this model in the Fourier representation reads $$\frac{H}{T}=\underset{𝐤}{\sum }\left(r_0+c𝐤^2\right)\phi _𝐤^2+uV^{-1}\underset{𝐤_1,𝐤_2,𝐤_3}{\sum }\phi _{𝐤_1}\phi _{𝐤_2}\phi _{𝐤_3}\phi _{-𝐤_1-𝐤_2-𝐤_3},$$ (1) where $`\phi _𝐤=V^{-1/2}\int \phi (𝐱)\mathrm{exp}(i\mathrm{𝐤𝐱})𝑑𝐱`$ are the Fourier components of the scalar order parameter field $`\phi (𝐱)`$, $`T`$ is the temperature, and $`V`$ is the volume of the system. In the RG field theory, Hamiltonian (1) is renormalized by integration of $`\mathrm{exp}(-H/T)`$ over $`\phi _𝐤`$ with $`\mathrm{\Lambda }/s<k<\mathrm{\Lambda }`$, followed by a certain rescaling procedure providing a Hamiltonian corresponding to the initial values of $`V`$ and $`\mathrm{\Lambda }`$, where $`\mathrm{\Lambda }`$ is the upper cutoff of the $`\phi ^4`$ interaction. Due to this procedure, additional terms appear in Hamiltonian (1), so that in general the renormalized Hamiltonian contains a continuum of parameters. The basic hypothesis of the RG theory in $`d<4`$ dimensions is the existence of a non–Gaussian fixed point $`\mu =\mu ^{*}`$ for the RG transformation $`R_s`$ defined in the space of Hamiltonian parameters, i.e., $$R_s\mu ^{*}=\mu ^{*}.$$ (2) The fixed-point values of the Hamiltonian parameters are marked by an asterisk ($`r_0^{*}`$, $`c^{*}`$, and $`u^{*}`$, in particular). Note that $`\mu ^{*}`$ is unambiguously defined by fixing the values of $`c^{*}`$ and $`\mathrm{\Lambda }`$. According to the RG theory, the main terms in the renormalized Hamiltonian in $`d=4-ϵ`$ dimensions are those contained in (1), with $`r_0^{*}`$ and $`u^{*}`$ of the order $`ϵ`$, whereas the additional terms are small corrections of order $`ϵ^2`$. Consider the Fourier transform $`G(𝐤,\mu )`$ of the two–point correlation (Green’s) function, corresponding to a point $`\mu `$.
Under the RG transformation $`R_s`$ this function transforms as follows $$G(𝐤,\mu )=s^{2-\eta }G(s𝐤,R_s\mu ).$$ (3) Let $`G(𝐤,\mu )\equiv G(k,\mu )`$ (at $`𝐤\ne \mathrm{𝟎}`$ and $`V\to \mathrm{}`$) be defined within $`k\le \mathrm{\Lambda }`$. Since Eq. (3) holds for any $`s>1`$, we can set $`s=\mathrm{\Lambda }/k`$, which at $`\mu =\mu ^{*}`$ yields $$G(𝐤,\mu ^{*})=ak^{-2+\eta }\quad \mathrm{for}\quad k<\mathrm{\Lambda },$$ (4) where $`a=\mathrm{\Lambda }^{2-\eta }G(\mathrm{\Lambda },\mu ^{*})`$ is the amplitude and $`\eta `$ is the universal critical exponent. According to the universality hypothesis, the infrared behavior of the Green’s function is described by the same universal value of $`\eta `$ at any $`\mu `$ on the critical surface (with the only requirement that all parameters of Hamiltonian (1) are present), i.e., $$G(𝐤,\mu )=b(\mu )k^{-2+\eta }\quad \mathrm{at}\quad k\to 0,$$ (5) where $$b(\mu )=\underset{k\to 0}{lim}k^{2-\eta }G(𝐤,\mu ).$$ (6) According to Eq.(3), which holds for any $`s=s(k)>1`$, Eq.(6) reduces to $$b(\mu )=\underset{k\to 0}{lim}k^{2-\eta }s(k)^{2-\eta }G(s𝐤,R_s\mu ).$$ (7) By setting $`s(k)=\mathrm{\Lambda }/k`$, we obtain $$b(\mu )=\mathrm{\Lambda }^{2-\eta }\underset{k\to 0}{lim}G(\mathrm{\Lambda },R_{(\mathrm{\Lambda }/k)}\mu )=\mathrm{\Lambda }^{2-\eta }G(\mathrm{\Lambda },\mu ^{*})=a,$$ (8) if the fixed point $`\mu ^{*}=\underset{s\to \mathrm{}}{lim}R_s\mu `$ exists. Let us define the function $`X(𝐤,\mu )`$ and the self–energy $`\mathrm{\Sigma }(𝐤,\mu )`$ as follows $$X(𝐤,\mu )=k^{-2}G^{-1}(𝐤,\mu ),$$ (9) $$k^2X(𝐤,\mu )=2(r_0+c𝐤^2)+\mathrm{\Sigma }(𝐤,\mu ).$$ (10) Equation (10) is usually used in the perturbation theory, since the self–energy has a suitable representation in terms of Feynman diagrams. According to Eqs.(4), (5), and (8), we have (for $`k<\mathrm{\Lambda }`$) $$X(𝐤,\mu ^{*})=\frac{1}{a}k^{-\eta }$$ (11) and $$X(𝐤,\mu )=\frac{1}{a}k^{-\eta }+\delta X(𝐤,\mu ),$$ (12) where $`\mu `$ belongs to the critical surface, $`\mu ^{*}=\underset{s\to \mathrm{}}{lim}R_s\mu `$, and $`\delta X(𝐤,\mu )`$ denotes the correction–to–scaling term.
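The scaling structure used here is easy to check numerically: the critical power-law form $`G(k)=ak^{-2+\eta }`$ of eq. (4) satisfies the fixed-point scaling relation (3), $`G(k)=s^{2-\eta }G(sk)`$, identically for any $`s>1`$. The values of $`a`$ and $`\eta `$ below are arbitrary illustrative numbers:

```python
A_AMP, ETA = 1.3, 0.04  # illustrative amplitude and critical exponent

def G_star(k):
    """Fixed-point correlation function G(k) = a * k**(-2 + eta), eq. (4)."""
    return A_AMP * k**(-2.0 + ETA)

def scaled(k, s):
    """Right-hand side of the scaling relation (3) evaluated at the fixed point."""
    return s**(2.0 - ETA) * G_star(s * k)
```

Algebraically $`s^{2-\eta }a(sk)^{-2+\eta }=ak^{-2+\eta }`$, so the two functions agree for every $`s`$, which is precisely what singles out the pure power law at the fixed point.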
From (11) and (12) we obtain the equation $$\delta X(𝐤,\mu ^{*}+\delta \mu )=X(𝐤,\mu ^{*}+\delta \mu )-X(𝐤,\mu ^{*}),$$ (13) where $`\delta \mu =\mu -\mu ^{*}`$. This equation, of course, makes sense only if the fixed point $`\mu ^{*}`$ exists and $`\mu `$ includes all the relevant Hamiltonian parameters to ensure the universal infrared critical behavior (5) of the correlation function. ## 3 Proof of the nonexistence On the basis of the fundamental equations obtained in the previous section, we prove here the nonexistence of the fixed point predicted by the RG field theory, i.e., we assume its existence and derive a contradiction. Since Eq. (13) is true for any small deviation $`\delta \mu `$ satisfying the relation $$\mu ^{*}=\underset{s\to \mathrm{}}{lim}R_s(\mu ^{*}+\delta \mu ),$$ (14) we choose $`\delta \mu `$ such that $`\mu ^{*}\to \mu ^{*}+\delta \mu `$ corresponds to the variation of the Hamiltonian parameters $`r_0^{*}\to r_0^{*}+\delta r_0`$, $`c^{*}\to c^{*}+\delta c`$, and $`u^{*}\to u^{*}+ϵ\times \mathrm{\Delta }`$, where $`\mathrm{\Delta }`$ is a small constant. The values of $`\delta r_0`$ and $`\delta c`$ are chosen to fit the critical surface and to meet the condition (14) at fixed $`c^{*}=1`$ and $`\mathrm{\Lambda }=1`$. In particular, the quantity $`\delta c`$ is found to be $$\delta c=Bϵ^2+o(ϵ^3),$$ (15) with some (small) coefficient $`B=B(\mathrm{\Delta })`$, to compensate the shift in $`c`$ of the order $`ϵ^2`$ due to the renormalization (cf. ). The formal $`ϵ`$–expansion of $`\delta X(𝐤,\mu )`$ can be obtained in the usual way from the perturbation theory. In this case Eq. (13) reduces to $$\delta X(𝐤,\mu )=2\delta c+k^{-2}[\delta \mathrm{\Sigma }(𝐤,\mu )-\delta \mathrm{\Sigma }(\mathrm{𝟎},\mu )],$$ (16) where $`\delta \mathrm{\Sigma }(𝐤,\mu )`$ is the variation of the self–energy due to the substitution $`\mu ^{*}\to \mu ^{*}+\delta \mu `$.
A simple calculation yields $$\delta X(𝐤,\mu )=ϵ^2[2B-12(2A\mathrm{\Delta }+\mathrm{\Delta }^2)k^{-2}(I(𝐤)-I(\mathrm{𝟎}))]+o(ϵ^3),$$ (17) where $`I(𝐤)`$ $`=`$ $`(2\pi )^{-8}{\displaystyle \underset{k_1<1}{\int }}d^4k_1{\displaystyle \underset{k_2<1}{\int }}d^4k_2\times k_1^{-2}k_2^{-2}|𝐤-𝐤_1-𝐤_2|^{-2}`$ (18) $`\times \theta (1-|𝐤-𝐤_1-𝐤_2|)`$ and $`A`$ is the expansion coefficient in the $`ϵ`$–expansion of the renormalized coupling constant $`u^{*}`$, i.e., $$u^{*}=Aϵ+o(ϵ^2).$$ (19) The theta function appears in Eq. (18) due to the cutting of the integration region at $`k=\mathrm{\Lambda }=1`$. The term (18) is well known . It behaves like $`const+k^2\mathrm{ln}k`$ at small $`k`$. The expansion coefficient at $`ϵ^2`$ in Eq. (17) is exact, since the uncontrolled parameters of order $`ϵ^2`$ contained in the renormalized Hamiltonian $`H^{*}`$, which are absent in Eq.(1), give a contribution of order $`ϵ^3`$ to $`\delta \mathrm{\Sigma }(𝐤,\mu )-\delta \mathrm{\Sigma }(\mathrm{𝟎},\mu )`$. In such a way, at small $`k`$ the expansion is unambiguous, i.e., $$\delta X(𝐤,\mu )=ϵ^2[C_1(\mathrm{\Delta })+C_2(\mathrm{\Delta })\mathrm{ln}k]+o(ϵ^3)\quad \mathrm{at}\quad k\to 0,$$ (20) where $`C_1(\mathrm{\Delta })`$ and $`C_2(\mathrm{\Delta })`$ are coefficients independent of $`ϵ`$. It is commonly accepted in the RG field theory to make an expansion like (17), obtained from the diagrammatic perturbation theory, fit an asymptotic expansion at $`k\to 0`$, thus determining the critical exponents. In general, such a method is not rigorous since, obviously, there exist functions which do not contribute to the asymptotic expansion in powers of $`k`$ at $`k\to 0`$, but give a contribution to the formal $`ϵ`$–expansion at any fixed $`k`$. Besides, the expansion coefficients do not vanish at $`k\to 0`$. A trivial example of such a function is $`ϵ^m[1-\mathrm{tanh}(ϵk^{-ϵ})]`$, where $`m`$ is an integer.
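The counterexample can be verified numerically (taking $`m=0`$): at fixed $`ϵ`$ the function vanishes faster than any power of $`k`$ as $`k\to 0`$, yet at fixed $`k`$ its formal $`ϵ`$–expansion, $`1-ϵ(1-ϵ\mathrm{ln}k)+\mathrm{}`$, is perfectly regular and contributes at every order:

```python
import math

def f(k, eps, m=0):
    """The counterexample eps**m * (1 - tanh(eps * k**(-eps)))."""
    return eps**m * (1.0 - math.tanh(eps * k**(-eps)))

def eps_series(k, eps):
    """Leading terms of the formal epsilon-expansion of f at fixed k (m = 0)."""
    return 1.0 - eps * (1.0 - eps * math.log(k))
```

At fixed $`ϵ`$ the argument $`ϵk^{-ϵ}`$ diverges as $`k\to 0`$ and $`1-\mathrm{tanh}`$ decays exponentially, so the function leaves no trace in the asymptotic expansion in powers of $`k`$.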
Nevertheless, according to the general ideas of the RG theory, in the vicinity of the fixed point the asymptotic expansion $$X(𝐤,\mu )=\frac{1}{a}k^\eta +b_1k^{ϵ+o(ϵ^2)}+b_2k^{2+o(ϵ)}+\mathrm{}$$ (21) is valid not only at $`k\to 0`$, but throughout $`k<\mathrm{\Lambda }`$. The latter means that terms of the kind $`ϵ^m[1-\mathrm{tanh}(ϵk^ϵ)]`$ are absent or negligible. Thus, if the fixed point exists, we can obtain the correct $`ϵ`$–expansion of $`\delta X(𝐤,\mu )`$ at small $`k`$ by expanding the term $`b_1k^{ϵ+o(ϵ^2)}`$ (with $`b_1=b_1(ϵ,\mathrm{\Delta })`$) in Eq. (21) in powers of $`ϵ`$, and the result must agree with (20), at least at small $`\mathrm{\Delta }`$. This, however, is impossible, since Eq. (20) can never agree with $$\delta X(𝐤,\mu )=b_1(ϵ,\mathrm{\Delta })[\mathrm{\hspace{0.17em}1}+ϵ\mathrm{ln}k+o(ϵ^2)]$$ (22) obtained from (21) at $`k\to 0`$. Thus, we have arrived at an obvious contradiction, which means that the initial assumption about the existence of the fixed point predicted by the RG field theory in $`4-ϵ`$ dimensions is not valid. The only reason this contradiction has not been detected before seems to be that Eq. (13) has never been considered in the literature. Since the results of the RG field theory in $`4-ϵ`$ dimensions are based entirely on the formal $`ϵ`$–expansion, the predicted “fixed point”, evidently, is a set of Hamiltonian parameters at which Eq. (2) is satisfied in the limit $`ϵ\to 0`$ at a fixed $`s`$, but is not satisfied in the limit $`s\to \mathrm{\infty }`$ at a fixed $`ϵ`$. ## 4 Conclusions We have demonstrated that, in its very foundations, the RG field theory in $`4-ϵ`$ dimensions is contradictory. Using a method that is mathematically correct according to the general arguments of the RG theory, we have shown that the fixed point in $`4-ϵ`$ dimensions predicted by the RG field theory does not exist. It should be noted, however, that our consideration does not exclude the existence of a different fixed point.
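The coefficient mismatch at the core of this argument can be displayed compactly. The matching step below is our own restatement of the comparison between Eqs. (20) and (22), not a formula from the original derivation:

```latex
\text{(20): }\;\delta X=\epsilon^{2}C_{1}(\Delta)+\epsilon^{2}C_{2}(\Delta)\ln k+o(\epsilon^{3}),
\qquad
\text{(22): }\;\delta X=b_{1}(\epsilon,\Delta)\bigl[1+\epsilon\ln k+o(\epsilon^{2})\bigr].
```

Matching the $k$-independent terms forces $b_{1}=\epsilon^{2}C_{1}(\Delta)+o(\epsilon^{3})$, so the $\ln k$ term of (22) is of order $\epsilon^{3}$; but (20) carries an $\epsilon^{2}\ln k$ term with $C_{2}(\Delta)\neq 0$, so the two expansions cannot coincide.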
# Mapping the Galactic Halo I. The “Spaghetti” Survey ## 1 INTRODUCTION How much of the Galaxy’s halo was accreted from satellite galaxies? What fraction of these accretions have left substructure that we can detect today? Hierarchical galaxy formation pictures (Davis et al., 1985; Governato et al., 1997; Klypin et al., 1999) suggest that structure forms first in small clumps which later combine to make larger galaxies. While this picture describes dark matter rather than stars, it is reasonable to expect that some stars would have formed in these dense clumps of matter at early times. This is borne out by the fact that almost all of the Local Group dwarf galaxies contain stars with ages greater than 10 Gyr (Mateo, 1998). In studies of the Milky Way, the first suggestion that the Galaxy’s halo did not form in a fast, uniform collapse (Eggen et al., 1962) was made by Searle and Zinn (1978), who noted that the horizontal-branch morphology of the outer halo globulars could be explained by a younger mean age. These clusters would have originated in “transient protogalactic fragments that continued to fall into … the Galaxy for some time after the collapse of its central regions”. Strong variations in the Galaxy’s potential associated with the formation of the inner disk and bar, plus the shorter orbital timescales there, may have erased the kinematic signature of halo accretion in its inner regions. Substructure may persist for many Gyr further from the galactic center (Johnston et al., 1995; Harding et al., 1999). We see evidence for accretion not only in the Sgr dwarf galaxy, which is being tidally disrupted on its current passage close to the Galaxy’s disk (Ibata et al., 1994), but also in the detection of various moving groups in the halo field (Majewski et al., 1994; Helmi et al., 1999). These latter objects are particularly interesting and surprising because they are at relatively small distances from the galactic center (8–10 kpc).
Figure 1(a) shows in histogram form the numbers of distant halo objects known to date. Globular clusters and dwarf spheroidal galaxies are not included. It is not surprising, with such small samples, that almost all the discoveries of halo substructure to date have been serendipitous. In this paper, we will describe a survey which will give a quantitative answer to the question “how much of the halo was accreted?” by identifying a sample comprising a large number of halo stars out to great distances. (Figure 1(b) shows how the situation has improved after our first spectroscopic followup run on the KPNO 4m.) In Section 2 we discuss the design of our survey and the tracers we use, together with the region of the halo that each tracer will sample. Section 3 discusses our photometric selection technique in detail for each tracer, showing its efficacy with spectra of stars found in each category. We also discuss the possible contaminants of our sample and how we reject them. Section 4 uses the numbers of turnoff stars found by our survey in various directions to constrain the density distribution of the halo. We also report the first evidence of spatial substructure in the halo. Future papers in this series will discuss our simulations of the breakup of satellites and their observational consequences (Harding et al., 1999), our photometric survey (Dohm-Palmer et al., 1999), our spectroscopy of distant metal-poor giants and BHB stars (Olszewski et al., 1999) and evidence for spatial substructure in our photometric data (Morrison et al., 2000). ## 2 SURVEY DESIGN Because substructure is visible in velocity space long after it disappears in density space (Johnston et al., 1995; Harding et al., 1999), we aim to obtain velocity data on a large sample of halo stars. Harding et al. 
(1999) discuss the signature that tidally disrupting streams will show in velocity histograms: although the observed signature does depend on the viewing geometry and initial conditions of the accretion, in many cases velocity histograms will be bimodal or multi-modal. The features that correspond to disrupted satellites will show a velocity dispersion of order tens of kilometers per second. Since substructure will survive longest in the outer Galaxy, distant halo tracers are advantageous. If we restrict our spectroscopic follow-up to 4m-class telescopes, this means that halo red giants and blue horizontal branch stars are the tracers of choice because of their intrinsic luminosity. However, these stars are intrinsically rare, and so they limit our detection methods for substructure — for example, it is impossible to find a sample of 100 halo giants in a field of size 1 deg<sup>2</sup>. Samples of this size are needed for methods of substructure detection based on velocity histograms. In order to have more sensitivity to subtle signatures of accretion, we also need tracers which are more numerous, such as halo turnoff stars. We shall, however, not be able to probe the extreme outer halo with these less luminous stars until 8m-class telescopes are more generally available. The stellar halo provides a very small fraction of the Milky Way’s luminosity — in the solar neighborhood, the disk-to-halo star number ratio is $`\sim `$800:1 (Bahcall and Casertano, 1986; Morrison, 1993), and even the thick disk outnumbers the halo by $`\sim `$40:1. Thus it is important to use a method which will efficiently pre-select halo stars before obtaining velocities. ### 2.1 Sky Coverage When aiming to answer a question about the origin of the entire halo, all-sky coverage would be ideal. Johnston et al. (1996) discuss such a survey for halo substructure, which, unfortunately, is not feasible at present.
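The multi-modality signature described above can be illustrated with a toy velocity sample. Everything below is invented for illustration — the component velocities, sample sizes, and the simple kernel-density mode counter are ours, not the survey's actual statistical machinery:

```python
import math
import random
import statistics

def count_modes(velocities, rel_floor=0.2, grid_step=5.0):
    """Count local maxima of a Gaussian kernel density estimate.
    The bandwidth is a deliberately oversmoothed rule of thumb
    (1.6 x Silverman) so that sampling wiggles are suppressed."""
    n = len(velocities)
    bw = 1.6 * statistics.pstdev(velocities) * n ** (-0.2)
    lo = min(velocities) - 3 * bw
    hi = max(velocities) + 3 * bw
    n_grid = int((hi - lo) / grid_step) + 1
    dens = [sum(math.exp(-0.5 * ((lo + i * grid_step - v) / bw) ** 2)
                for v in velocities) for i in range(n_grid)]
    floor = rel_floor * max(dens)
    return sum(1 for i in range(1, n_grid - 1)
               if dens[i - 1] < dens[i] >= dens[i + 1] and dens[i] > floor)

rng = random.Random(42)
# Two cold "streams" with dispersions of tens of km/s, as described above...
stream_sample = ([rng.gauss(-150.0, 25.0) for _ in range(200)]
                 + [rng.gauss(100.0, 25.0) for _ in range(200)])
# ...versus a single smooth, kinematically hot halo component.
smooth_sample = [rng.gauss(0.0, 120.0) for _ in range(300)]
```

A histogram yielding two or more well-separated modes, each tens of km/s wide, is the kind of signature described above; a real analysis would use a calibrated statistical test rather than this ad hoc mode count.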
Existing all-sky surveys based on photographic plates cannot produce photometry accurate enough to efficiently select halo objects apart from rare blue horizontal branch stars. Any spatial substructure in the halo will be washed out by the many foreground disk and thick disk stars in a photographic survey. The Sloan survey (Gunn, 1995) plans to cover one quarter of the sky at high galactic latitude in the North. The photometry from this survey will be sufficiently accurate to enable the identification of halo turnoff stars and BHB stars, but not the rare distant red halo giants we discuss below. These stars, which will be inseparably contaminated by foreground K dwarfs in the Sloan colors, are particularly valuable for this project because they probe the extreme outer halo. We have chosen to use a pencil-beam survey of various high-latitude fields, using CCD photometry and the Washington system to select the halo stars, and then carry out follow-up spectroscopy to search for kinematic substructure. Even with the most extreme hypothesis that the entire halo is composed of tidal streams, their filling factor on the sky will still be small. Some fields will therefore contain more stars than the average and some considerably fewer. A large number of different pointings is preferable to maximize the chance of hitting a single stream. For example, assuming a typical dimension for a stream of 2 degrees by 100 degrees, 50 of these would cover only 25 percent of the sky. We have chosen to survey initially 100 square degrees of the sky, in 100 different fields randomly distributed at high galactic latitude (generally above $`|b|=45`$), with most fields having galactic longitude between 90 and 270. We chose in general to stay away from the quadrants which include the galactic center for two reasons. First, structure is more readily destroyed close to the galactic center, which lowers our chance of detecting it.
The best place to look for substructure is the outer halo, where dynamical times are long and tidal forces small. Thus, the galactic anticenter is better since the galactocentric radii of the stars we detect will be larger. Second, there are many components represented at the galactic center – young and old disk, thick disk and bar, as well as the inner halo. Interpretation of our results would be more complex there due to the dynamical effect of the bar. The metal-weak tail of the thick disk is also minimized when we look at higher latitudes. ### 2.2 Tracers Traditional ways of searching for halo stars include: * Proper motion surveys (Sandage and Fouts, 1987; Carney and Latham, 1987; Ryan and Norris, 1991). These surveys identify stars at most a few hundred pc away, except for the surveys of Majewski (1992) and Mendez et al. (1999), which produced complete proper motion information on a sample of stars to B=22.5 and 19 respectively (a maximum distance of 30 kpc). * RR Lyrae surveys (Kinman et al., 1965; Saha, 1985; Suntzeff et al., 1991; Wetterer and McGraw, 1996; Kinman et al., 1996). These surveys sample more distant objects, but because of their extreme rarity – of order one per deg<sup>2</sup>– few distant RR Lyraes are known. * Objective prism surveys, generally for metal-poor giants or stars near the main sequence turnoff. (Bidelman and MacConnell, 1973; Ratnatunga and Freeman, 1989; Beers et al., 1985; Morrison et al., 1990). Because of the relatively high resolution needed to identify these stars spectroscopically, these surveys are in general restricted to relatively bright magnitudes and therefore relatively nearby objects. * Blue horizontal-branch (BHB) star surveys. The BHB stars are identified either from their unusually blue color (Sommer-Larsen and Christensen, 1985; Norris & Hawkins, 1991) or by using objective prism spectra near the Ca K line (Pier, 1982, 1984; Beers et al., 1985). 
BHB stars are almost as rare as RR Lyrae variables, and their identification is complicated by the presence of halo blue stragglers, which have the same broadband colors as BHB stars but higher gravity. As Norris and Hawkins show, the blue straggler fraction may be as high as 50% for samples of faint blue stars. * Carbon stars. These are extremely rare stars (of order one per 200 deg<sup>2</sup>, Totten and Irwin 1998) which are identified easily from objective-prism spectra. Totten and Irwin (1998) review the currently known halo carbon stars and their properties. Distances are more uncertain for these stars than for any other tracer we have discussed. Halo tracers we chose to use are: * Red giants. These are identified photometrically using a luminosity indicator based on the Mg $`b`$/MgH feature at 5170Å plus a metallicity indicator based on line blanketing near 4000Å , with spectroscopic confirmation. * Blue horizontal branch (BHB) stars, identified by their color plus a spectroscopic check of gravity. * Stars at the main sequence turnoff, identifiable from their blue color — the most metal-poor globular clusters have turnoff colors of B V$`\sim `$0.38, while the thick disk turnoff is B V$`\sim `$0.5, so stars with colors between these two values are most likely halo turnoff stars. * Blue metal-poor stars (Preston et al., 1994), which are halo field stars with colors bluer than B V=0.38, thought to be younger main sequence stars. When found in globular clusters, such stars are typically referred to as blue stragglers and may have a different origin from the field analogs. We will discuss these tracers, and possible contaminants in our sample, below. We reject another halo tracer, RR Lyrae variables, because of the large amount of telescope time needed to identify and phase these variable stars. Our technique reliably identifies both turnoff stars from the halo and the more distant halo giants and BHB stars.
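For concreteness, the broad-band windows just listed can be collected into a single first-pass cut. This is our own sketch of the color pre-selection only — the survey's actual selection also uses the $`M51`$ and $`CM`$ indices and follow-up spectra, and the giant window (B V of 0.9 to 1.2) is the one quoted in Section 3.1:

```python
def classify_by_color(b_minus_v):
    """First-pass tracer class from dereddened B-V alone (a sketch;
    every class still needs the further checks described in the text)."""
    if 0.0 <= b_minus_v <= 0.20:
        return "BHB"      # gravity check needed to reject blue stragglers
    if 0.20 < b_minus_v < 0.38:
        return "BMP"      # blue metal-poor star
    if 0.38 <= b_minus_v <= 0.49:
        return "turnoff"  # halo turnoff candidate
    if 0.9 <= b_minus_v <= 1.2:
        return "giant"    # needs M-51 dwarf rejection and C-M metallicity cut
    return "other"        # not a survey tracer
```

The ordering of the tests matters only for the BHB/BMP overlap: stars bluer than B V=0.38 but inside the horizontal-branch window are assigned to the BHB class first.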
Because of their different luminosities, these objects probe different regions of the galactic halo. Turnoff stars can only reach to galactocentric distances of 15–20 kpc using 4m-class telescopes for spectroscopic followup, while red giants and BHB stars will reach to distances of more than 100 kpc. However, due both to shorter evolutionary timescales for giants and to the strong decrease in halo density with galactocentric distance ($`\rho \propto r^{-3}`$ or $`r^{-3.5}`$, Zinn 1985, Saha 1985), there are far fewer halo giants detected than turnoff stars. Different techniques are therefore used to search for the kinematic signatures of accretion. For the more numerous turnoff stars, we use statistical techniques based on the appearance of the velocity histogram (testing for multimodality, for example, see Harding et al. (1999)). For the rarer red giants, we use the technique of Lynden-Bell and Lynden-Bell (1995) (see also Lynden-Bell 1999) to search for stars with similar energies and angular momenta, indicating a common origin. ## 3 SELECTION TECHNIQUE Our initial survey was done using the Burrell and Curtis Schmidt telescopes, which have CCD fields of order 1 deg<sup>2</sup>. The Burrell Schmidt is fitted with a back-illuminated SITe 2048$`\times `$4096 CCD with 1.5 arcsec per pixel, while the Curtis Schmidt (often) has a 2048$`\times `$2048 back-illuminated Tek CCD with 2.4 arcsec per pixel. Now that large mosaics are available, we have extended the survey using the CTIO 4m with the BTC (field = 0.25 deg<sup>2</sup>) and the 8-CCD NOAO mosaics. Our spectroscopic followup observations have been made using the Hydra multiobject fiber spectrograph on the 3.5m WIYN telescope and the RC spectrograph on the KPNO 4m. Future observations with the Hydra spectrograph on the CTIO 4m and the Magellan telescope are planned.
The Washington photometric system combines strong metallicity sensitivity for late-type giants with broad filter passbands, which contribute to observing efficiency. We use this system for our survey, as its filters can be used for selection of all the other tracers we need as well. We describe transformations between the Washington system and the BVI system in the Appendix. In each survey field (of area approx 1 deg<sup>2</sup>) we obtain photometry using a modified Washington (Canterna, 1976; Geisler, 1984) filter set (C,M, 51 and i’ filters). The large pixel area on the Schmidt telescopes leads to a high sky level in the I band. Thus we use the Sloan i’ filter, whose passband avoids the worst of the bright sky lines in the I band, in place of the Washington $`T_2`$ or I filters. This i’ filter transforms readily to the Washington system. Typical exposure times using back-illuminated CCDs and the 24-inch Schmidt telescopes are 6000 sec in C and M, 8400 sec in 51, and 4800 sec in i’. On the CTIO 4m with the BTC mosaic, exposure times were 100 sec in M, 500 sec in C, 120 sec in $`T_2`$ and 250 sec in 51. These give typical errors of 0.015 mag. in each filter for a V=19 star. Our photometry will be discussed in more detail by Dohm-Palmer et al. (1999). We have used the data of Schlegel et al. (1998) to estimate the values of reddening in our fields, and de-reddened the Washington colors according to the prescriptions of Canterna (1976) and Harris and Canterna (1979). The reddening values are small, so this did not have a strong effect on our results in any case. Figure 2 shows a typical color-magnitude diagram with the position of the halo and thick disk turnoffs marked. Since each tracer requires a different selection technique, we will discuss them separately. ### 3.1 Halo giants This tracer is the most exciting since it allows us to probe the extreme outer halo.
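The depth each tracer reaches follows directly from the distance modulus m - M = 5 log10(d / 10 pc). The two-line check below is our own arithmetic (extinction neglected), using the magnitudes quoted elsewhere in the text:

```python
def distance_kpc(apparent_mag, absolute_mag):
    """Distance in kpc from the distance modulus m - M = 5 log10(d / 10 pc)."""
    return 10 ** ((apparent_mag - absolute_mag) / 5 + 1) / 1000.0

# Giant-branch-tip star quoted in the text: M_V = -2 at V = 19.5  ->  ~200 kpc.
giant_reach = distance_kpc(19.5, -2.0)
# Halo turnoff star of Section 3.3: M_V = 4.5 at the V = 20.5 limit  ->  ~16 kpc.
turnoff_reach = distance_kpc(20.5, 4.5)
```

This order-of-magnitude gap in reach is why the giants and BHB stars carry the outer-halo part of the survey while turnoff stars supply the statistics closer in.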
These stars have been little used in the past because they are greatly outnumbered by foreground K dwarfs, and it is difficult to distinguish K dwarfs from giants without accurate intermediate-band photometry or spectroscopy. However, their potential is enormous: a metal-poor star near the giant branch tip with $`M_V`$=–2 and $`V`$=19.5 (easily observable at medium resolution on a 4m-class telescope) has a distance of 200 kpc! The combination of large CCD fields with the Washington photometric system makes the detection of such objects feasible for the first time. These distant halo giants are rare. Using the simple model of Morrison (1993), we find that there are of order 1–10 halo giants per square degree down to V=20, using a range of assumptions about the halo density distribution. In section 4 we will show that out to galactocentric distances of 20 kpc, the halo density law is well described by a flattened power-law with exponent –3.0 and flattening b/a=0.6. If this density law continues to larger distances, we would expect to see 4 halo giants deg<sup>-2</sup> brighter than V=20 at the NGP and 5 deg<sup>-2</sup> in an anticenter field with galactic latitude 45. There are three classes of objects found in the same range of color and magnitude as the halo giants we wish to find: * numerous K dwarfs of the thin and thick disk, which can be detected using a photometric survey ($`M51`$ color) * extremely metal-poor halo dwarfs, which are present in comparable numbers to halo giants for $`V>18`$, and need a good follow-up spectrum (with S/N$``$15) to distinguish from halo giants. * background objects such as compact galaxies and QSOs, which are easily weeded out using a low S/N followup spectrum. #### 3.1.1 Disk Dwarfs The major source of contamination of a halo giant sample is foreground K dwarfs. 
To quantify the numbers of foreground dwarfs that we will need to weed out, we have estimated their number using a modified version of the Bahcall-Soniera model which includes both thin disk and thick disk, using a 5% normalization for the thick disk. This predicts 70 thin disk dwarfs and 90 thick disk dwarfs ($`0.9<`$ B V$`<1.2`$) per deg<sup>2</sup> at the NGP. Classical spectroscopic luminosity indicators (originally developed for Pop. I stars, e.g., Seitter 1970) which are useful for our survey include: * the Mg $`b`$ triplet and MgH band near 5200Å. These features are much stronger in dwarfs than giants in this color range. They begin to lose sensitivity to luminosity blueward of B V = 0.9 ($`MT_2`$=1.2). These features are also temperature sensitive. * the Ca I resonance line at 4227Å shows marked sensitivity to both luminosity and temperature in K stars. Its strength increases as temperature and luminosity decrease. Since we have an independent photometric measure of temperature from $`MT_2`$ color, we can use the Ca I 4227Å line as a luminosity indicator. * The blue and UV CN bands (bandheads at 4216 and 3883Å ) are strong in giants and not in dwarfs. The bands become weaker with decreasing metallicity, and are not visible below \[Fe/H\] = –1.5. We have observed metal-poor dwarfs and giants in order to check whether these indicators retain their usefulness for more metal-poor stars. Our observations, plus those of the metal-weak giants we found, will be discussed in more detail by Olszewski et al. (1999). We use the Mg $`b`$/MgH region as our major method of rejecting disk dwarfs in our sample via photometric selection. Geisler (1984) augmented the original Washington system with the DDO “51” filter, an intermediate-band filter centered on the Mg $`b`$ and MgH features near 5200 Å , to give luminosity sensitivity for late G and K giants. The $`M51`$ color gives a photometric method of measuring the strength of these features.
Figures 3(a)–(d) show the strength of the MgH feature in spectra of dwarfs of solar and lower metallicity, and of the Mg $`b`$ feature for giants of different metallicity. Most of these stars do not have direct measures of $`MT_2`$ color. We transformed to Washington colors from existing $`by`$ or B V colors, using the method discussed in the Appendix. For ease of display we have sorted the stars into bins of metallicity and color. Table 2 gives sources of metallicity and color for all the stars shown in Fig. 3. These spectra clearly illustrate the luminosity indicators discussed above. A strong MgH band is seen for dwarfs with $`MT_2`$ = 1.2. The feature becomes weaker as temperature increases, until it is hard to see in the dwarfs with $`MT_2`$ = 1.0. Although we do not have any spectra for dwarfs redder than $`MT_2`$ = 1.3, the MgH band continues to strengthen with decreasing temperature. For all except the reddest giants ($`MT_2`$$`>1.45`$) there is no MgH feature visible, and the Mg $`b`$ lines are weak. For $`MT_2`$ of 1.45 and redder, there is a slight MgH feature visible for the giants with \[Fe/H\] = –1.0 and above. Our main method of detecting foreground dwarfs photometrically is via these features (MgH + Mg b). Other spectral features which are useful for luminosity discrimination with a follow-up spectrum are the Ca I 4227Å line and the CN bands. The Ca I 4227Å line is visible in all the dwarf spectra, and can be seen to increase in strength as temperature decreases. It is also visible in the spectra of the metal-poor giants (especially above \[Fe/H\] = –1.5) but is much weaker than in dwarfs of the same color. Blue and UV CN bands are visible in the more metal-rich giants (for example M71 stars l-1, S232) and will be discussed in more detail in the next section. The 51 filter gives us the ability to measure the strength of the MgH feature and reject the numerous foreground dwarfs.
Precise photometry is necessary, however: Geisler (1984) obtained $`M51`$ colors for a sample of metal-rich and metal-poor giants, and predominantly metal-rich dwarfs. He showed that giants and metal-rich dwarfs differ by at least 0.10 mag in M-51 color for B V$`>`$ 0.85, and that the difference becomes more pronounced between metal-poor giants (with weaker Mg $`b`$) and metal-rich dwarfs. Figure 4 shows $`M51`$ colors for known dwarf and giant stars. Geisler (1984) plotted $`M51`$ versus $`T_1-T_2`$; it is clear from Fig. 4 that the use of $`MT_2`$ as a color does not degrade the luminosity sensitivity. Our measurement errors for $`M51`$ are 0.02 to 0.04 mag., allowing us to discriminate easily between giants and all dwarfs except the most metal-poor using our photometric survey. Because we are searching for intrinsically very rare objects, we are particularly vulnerable to photometric errors — dwarfs with 3-sigma errors in their colors are more common than our most distant halo giants. Since all but a few percent of the known halo stars have \[Fe/H\]$`<`$–1.0, we can use metallicity as an additional criterion. We use an additional Washington filter to identify halo giants: C. $`CM`$ is a metallicity indicator which was calibrated for giants by Geisler et al. (1991). We require that candidates lie in the \[Fe/H\] $`<1.0`$ region of the Washington $`CM`$ vs. $`MT_2`$ diagram. Figures 5 and 6 show how successful our photometric classification has been. Photometric data from 22 high-latitude fields observed at the CTIO 4m with the BTC in April 1999 are plotted with spectroscopic confirmations shown as larger symbols. These data will be described in more detail by Dohm-Palmer et al. (1999). Note that we deliberately chose to observe candidates near the giant/dwarf boundary to mark it carefully for future work. #### 3.1.2 Extreme K subdwarfs Metal-poor halo dwarfs are of particular concern because their spectra more closely resemble metal-poor giants.
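The remark about 3-sigma photometric outliers can be made quantitative with a Gaussian-tail estimate. The arithmetic below is ours; the inputs — roughly 160 disk dwarfs per deg<sup>2</sup> in the giant color window (Section 3.1.1), a minimum 0.10 mag giant/dwarf separation in M-51 (Geisler, 1984), and a typical 0.03 mag measurement error — are the numbers quoted in the text:

```python
import math

def upper_tail(z):
    """P(Z > z) for a standard normal variate."""
    return 0.5 * math.erfc(z / math.sqrt(2.0))

n_dwarfs = 70 + 90     # thin + thick disk dwarfs per deg^2 in the color window
separation = 0.10      # mag: minimum giant/dwarf offset in M-51
sigma = 0.03           # mag: typical M-51 measurement error

# Dwarfs whose M-51 error scatters them across the giant/dwarf boundary:
scattered = n_dwarfs * upper_tail(separation / sigma)   # ~0.07 per deg^2
```

Even this small leakage matters, because the most distant giants — the prize of the survey — are rarer still; and at the pessimistic error of 0.04 mag the leakage grows by an order of magnitude, which is why precise photometry is emphasized here.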
The Bahcall-Soniera model (Bahcall and Soniera, 1984) predicts that 15 halo dwarfs will be found per square degree down to V=20 in the color range that we search for giants (B V= 0.9 to 1.2). (Recall from Section 3.1 that we expect to see 4-5 halo giants in the same magnitude interval.) There are very few metal-poor K dwarfs with $`M51`$ photometry available, so we cannot accurately measure the photometric separation between metal-poor dwarfs and giants. However, Paltoglou and Bell (1994) have calculated synthetic $`M51`$ colors for their grid of dwarf and giant models of different metallicity. These models are based on the linelists of Bell et al. (1994). We show their dwarf sequences for \[Fe/H\] from 0.0 to –3.0 in Fig. 4. It can be seen that the most metal-poor dwarfs, with \[Fe/H\] $`<`$ –2.0, overlap the region where giants are found in this diagram. Thus, the models suggest that the $`M51`$ photometric index will not be useful for weeding out the extreme K subdwarfs in our fields — we will need to examine their spectra in more detail. How common are these extremely metal-poor subdwarfs? We can estimate the number of halo dwarfs with \[Fe/H\] $`<`$ –2.0 using the halo metallicity distribution of Ryan and Norris (1991); 31% of their sample has \[Fe/H\] $`<`$ –2.0, which, in conjunction with the results of the Bahcall/Soniera model, translates to 4 very metal-poor halo dwarfs per deg<sup>2</sup> for $`V<20`$ at the NGP. Halo dwarfs only appear in significant numbers for $`V>18`$, but the very distant halo giants also have $`V>18`$. We need to consider these contaminants seriously. We have approached the problem of discriminating spectroscopically between halo giants and extreme K subdwarfs in two ways — by obtaining spectra of the few extremely metal-poor K dwarfs known in this temperature range, and supplementing these with synthetic spectra. It is practical to depend on follow-up spectroscopy to weed these objects out because of their rarity. 
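The scale of the subdwarf problem follows from two numbers already quoted: the model prediction of 15 halo dwarfs per deg<sup>2</sup> in the giant color window, and the 31% very metal-poor fraction in the Ryan and Norris (1991) sample. A quick check (our arithmetic, not the authors'):

```python
halo_dwarfs_per_deg2 = 15   # model prediction, V < 20, giant color window
frac_below_m2 = 0.31        # fraction of the halo sample with [Fe/H] < -2.0

subdwarfs = halo_dwarfs_per_deg2 * frac_below_m2   # ~4.7 per deg^2
giants_per_deg2 = 4.5       # the 4-5 halo giants per deg^2 expected in Section 3.1

# If photometry could not reject them at all, roughly every second
# surviving giant candidate would actually be an extreme K subdwarf.
naive_contamination = subdwarfs / (subdwarfs + giants_per_deg2)
```

This near-even split is why good signal-to-noise follow-up spectra, rather than photometry alone, are needed for the final giant/subdwarf separation.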
Since some of the major differences between K dwarfs and giants are due to molecular bands, we have used the NextGen model atmosphere grid (Hauschildt et al., 1999a, b) for the synthetic spectra. These models were originally designed to model the atmospheres of very low-mass stars and brown dwarfs, and include a detailed molecular equation of state and a set of molecular opacity sources which improve on those used in previous work. Hauschildt et al. (1999a) state that their models are more suitable for stars with $`T_{eff}<5000`$K than previous models such as that of Kurucz (1992). The NextGen models are described in Hauschildt et al. (1999a, b). Peter Hauschildt kindly computed NextGen models with \[Fe/H\] = –2.0 and an over-abundance of the alpha elements similar to that seen in halo stars (\[$`\alpha `$/Fe\]=0.4) for us. They were calculated with log g = 1.0 and 4.5 to match giants and dwarfs with effective temperatures of 4700 and 4500K. The model spectra are shown in Fig. 7, smoothed to the same resolution as our 4m spectra. It can be seen that there are marked differences between giants and dwarfs with \[Fe/H\]=–2. Both dwarf models, especially the cooler one, show MgH features, and both show strong lines of Ca I 4227, unlike the giants, and much stronger Mg $`b`$ lines. The dwarfs also show much stronger lines in the region blueward of the Ca H and K lines, particularly in the region near 3840 Å where lines of FeI and MgI contribute. In the next two figures we focus on two regions of the spectrum which are particularly useful for luminosity discrimination, comparing spectra of known metal-poor dwarfs and giants and supplementing with synthetic spectra when no real spectra are available. Fig 8 shows the region between 3700 and 4500Å . 
It can be seen that the most metal-poor dwarf for which we have a spectrum, HD 134440 with \[Fe/H\]=–1.5 and T<sub>eff</sub>=4740K (Carbon et al., 1987) matches the model spectrum with \[Fe/H\]=–2.0 and T<sub>eff</sub>=4700K quite well, giving confidence in the synthetic spectra. There is no marked difference in G band strength between giants and dwarfs, but all the dwarf spectra show a strong feature at Ca I 4227. The metal-poor giants of moderate metal deficiency (\[Fe/H\] $``$ –1.6) show a weaker Ca I line, while the very metal-poor giant shows almost none. The blue and UV CN bands are also visible in the giants with \[Fe/H\] $``$ –1.6, while the dwarf spectra look quite different in this region. The feature just blueward of 3840Å is very strong in the dwarfs, and is significantly narrower than the UV CN band. We note that because of CN anomalies in globular clusters, the CN strength of the stars in Fig 8 may not be typical of field stars. While CN measurements of the three globular cluster giants shown in the Figure are not available, field giant stars do show these CN features, as can be seen in Figure 4(b) of Flynn and Morrison (1990). This criterion can be used as a way of confirming that a star is a giant, because no dwarf shows these CN features. It cannot be used as a way of confirming that a star is a dwarf because some giants may be CN weak. Fig. 9 shows the region from 5000 to 5300Å . While the large-scale shape of the MgH band is not easily visible when such a small region of spectrum is displayed, it can be seen that at a given $`MT_2`$ color, both the MgH bandhead and the Mg triplet are stronger in the subdwarfs than in any of the giants. Even the M71 giants, whose metallicity is higher than we aim to identify in our halo sample, are distinguishable from subdwarfs. However, the effect is too small to depend on the $`M51`$ color to identify these subdwarfs reliably.
In conclusion, our photometric survey will weed out the more common dwarfs of the thin and thick disk, but is unable to identify the very metal-poor K subdwarfs of the halo. These stars are rare enough that it is practicable to use follow-up spectra with good S/N to reject them. We can measure our success rate for photometric pre-selection using the percentages of spectroscopically confirmed giants, foreground dwarfs and subdwarfs. 70% of giant candidates in the correct region of the $`M51`$ vs. $`MT_2`$ diagram were giants, 20% were subdwarfs and 10% were dwarfs. Given our expected lack of success in discriminating subdwarfs from giants photometrically, this confirms that our photometric selection technique is very effective. Figure 10 illustrates the success of our selection with spectra of metal-poor giants identified in our survey. Metallicity and distance values obtained from the Washington $`CM`$ vs. $`MT_2`$ calibration are given for each star. Our sample already includes two stars which are more distant than the LMC, one of which is shown in Fig. 10, and we are well placed to discover a large number of giants in the outer halo where only dwarf satellites and globular clusters were previously known. #### 3.1.3 Extragalactic Contaminants Possible extragalactic contaminants of our giant sample are QSOs and unresolved galaxies. Reid & Majewski (1993) show that there are $`\sim `$15 QSOs and unresolved galaxies deg<sup>-2</sup> down to R=19 at the NGP. Most QSOs are separable photometrically because of their unusual positions in the color-color diagram, although a few need to be weeded out spectroscopically. However, color discrimination does not work for unresolved normal galaxies, as their integrated colors are similar to those of metal-poor giants (Geisler, 1995). The large pixels on the Schmidt telescopes mean that we are more vulnerable to this problem there.
With smaller pixels the galaxy contamination is much less severe — on our recent 4m run (reported in Olszewski et al. (1999)), where fourteen halo giants were identified from CTIO 4m/BTC data, no galaxies were mistakenly selected.

### 3.2 Blue Horizontal-Branch Stars

These stars will cover a distance range of 5 – 50 kpc, and thus represent another important tracer of the outer halo in our survey. The number of BHB stars per deg<sup>2</sup> depends on the horizontal-branch morphology of the halo field, which is not well determined for large distances from the Galaxy’s center. If the halo field follows the globular clusters in having a redder horizontal-branch morphology at large radii, then we would expect a few BHB stars per deg<sup>2</sup> in the magnitude range $`V`$=15–20. We restrict ourselves to the portion of the horizontal branch which is flat in V magnitude, between B–V=0.0 and 0.20. This converts to a color range of 0.0 to 0.30 in $`MT_2`$ (see Fig. 20). It can be seen from Figure 2 that this is a sparsely-populated portion of the color-magnitude diagram, and the only non-halo contaminants of our sample in this color range are white dwarfs and QSOs. There are 3 white dwarfs deg<sup>-2</sup> and 5 QSOs deg<sup>-2</sup> down to V=20 in this color range (Fleming et al., 1986; Sandage and Luyten, 1969; Reid & Majewski, 1993). Both of these types of objects are easy to discriminate with even a low-dispersion, low signal-to-noise spectrum. We also need to be able to discriminate between halo blue stragglers (with main sequence gravities) and blue horizontal branch stars. Our follow-up spectroscopy allows us to do this via Balmer line profiles (Pier, 1983; Sommer-Larsen and Christensen, 1985; Norris & Hawkins, 1991; Kinman et al., 1994). In cases where we are able to obtain spectra which reach below the Balmer jump, its size can also be used as a discriminator. Kinman et al.
(1994) used spectra of 3.7Å resolution (very similar to our 4m spectral resolution) to make this measurement. Figure 11 shows spectra of known BHB standards and two BHB stars from our sample.

### 3.3 Halo turnoff stars

Halo turnoff stars are the most numerous but least luminous tracers we shall use, and sample distances from 2 – 16 kpc from the Sun (assuming an absolute magnitude $`M_V`$ = 4.5 and a limiting magnitude of V=20.5). We used the halo turnoff luminosity function calculated in Section 4.1 and the preferred halo model from Section 4 (power-law exponent of –3.0 and an axial ratio of 0.6) to calculate the numbers of turnoff stars per square degree we would expect to see in our fields. Under these assumptions, there should be 150 halo turnoff stars per square degree down to V=20.5 at the NGP, and 130 per square degree in the anticenter at $`b`$=45°. These stars are relatively easy to identify using accurate photometry because they are bluer than almost all stars at high galactic latitude. For the most metal-poor globular clusters such as M92, the turnoff is at B–V=0.38, significantly bluer than the turnoff color of the thick disk (B–V=0.5, Carney et al. (1989)). The thin disk has a sufficiently small scale height that very few young thin disk stars are found in our magnitude range at high galactic latitude (Bahcall and Soneira, 1984). We choose stars with B–V color of 0.38 to 0.49 as halo turnoff star candidates. The transformation to $`MT_2`$ color is complicated by the fact that we have no observations of metal-poor turnoff stars in this color. We approach the problem in two ways. First, only the most metal-poor stars will have a turnoff color as blue as B–V = 0.38, so we could use the synthetic colors of Paltoglou and Bell (1994) for their models with \[Fe/H\] = –2.0 to derive the transformation. This predicts a turnoff color of $`MT_2`$=0.6.
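The quoted 2–16 kpc distance range follows directly from the distance modulus with the stated $`M_V`$ = 4.5; a minimal sketch (the bright limit of V=16 is an illustrative assumption matching the quoted near distance, and the faint limit is the survey limiting magnitude):

```python
import math

def distance_kpc(apparent_v, absolute_v=4.5):
    """Distance from the distance modulus: m - M = 5 log10(d / 10 pc)."""
    return 10.0 ** ((apparent_v - absolute_v + 5.0) / 5.0) / 1000.0  # pc -> kpc

near = distance_kpc(16.0)   # bright limit: about 2 kpc
far = distance_kpc(20.5)    # survey limiting magnitude: about 16 kpc
```

Each magnitude of survey depth thus multiplies the distance reach by a factor of about 1.6.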
Second, we can derive the turnoff color using $`VI`$ and Stromgren photometry of metal-poor globular clusters from the literature. Table 3 summarizes the turnoff colors in $`VI`$ and $`by`$ of metal-poor globular clusters NGC 6397, NGC 7099 and M92, and the metal-rich cluster 47 Tucanae. The average $`VI`$ turnoff color for these three metal-poor clusters is 0.51, which transforms via Fig. 19 to $`MT_2`$=0.64. Thus the estimates from theory and observation are in reasonable agreement, and we have chosen to use the observational estimate here. We also use the $`VI`$ photometry of Kaluzny et al. (1998) for 47 Tuc to constrain the turnoff color of the thick disk to be $`MT_2`$=0.81. In summary, we take the color range from $`MT_2`$ = 0.64 to 0.80 for our halo turnoff stars. As a consistency check on our photometry, we also require that $`CM`$ is in the range 0.2 to 0.5. Thick disk stars scattering into the color range should be a minor problem because of our small photometric errors (our $`MT_2`$ errors are less than 0.02 mag. at V=19) and the fact that we work faint enough to be away from the regions of the Galaxy where the thick disk dominates. Using the models of turnoff stars in the thick disk and halo of section 3.1, we can show that only for the brightest part of our magnitude range (V=16–17, corresponding to distances of order 2.5 kpc, saturated on 4m/BTC data but not on Schmidt data) are numbers of thick disk stars per square degree greater than halo star numbers in our fields, and then only by a factor of $``$2, which is not large enough to make significant numbers of thick disk stars “leak” into the halo color range via photometric errors. For stars with V=17–18, numbers of thick disk and halo turnoff stars are approximately equal, and for fainter magnitudes, halo turnoff stars outnumber thick disk stars. The only other contaminants of our sample in this color range are small numbers of white dwarfs, QSOs and RR Lyraes. 
The QSOs and white dwarfs are obvious from spectra, and the RR Lyraes are sufficiently rare that few will be detected, and some of these will be removed due to their variability in either photometric or spectroscopic observations. Figures 2 and 14 illustrate the selection technique, showing color-magnitude diagrams ($`MT_2`$ vs. M) for a single field and for a composite of many fields. The position of the turnoff for both halo and thick disk is marked, and our candidate halo turnoff stars are shown. With very few exceptions, all stars in this region of the CMD show spectra typical of halo stars, i.e. metal-poor. Figure 12 shows WIYN/Hydra spectra of several of these stars. It is clear that the halo turnoff candidates are indeed metal-poor, confirming the accuracy of our photometry.

### 3.4 Blue Metal-Poor Stars

Preston et al. (1994) identified an important group of halo turnoff stars with color bluer than B–V = 0.38, and suggested that these stars were metal-poor stars with unusually young ages, which had originated in dwarf spheroidal satellites which were subsequently accreted into the Galaxy’s halo. Another possibility is that these stars are the result of the evolution of multiple star systems of the halo. Preston has obtained detailed follow-up spectroscopy of a number of these stars to test this possibility. Preston et al. (1994) estimated numbers in the solar neighborhood of 350–450 per kpc<sup>3</sup> (cf. the density from Table 4 of 2695 halo turnoff stars per kpc<sup>3</sup> with $`M_V`$=4.5). Unavane et al. (1996) found from photographic starcount data that $`10`$% of halo stars were bluer than B–V = 0.4, which is in rough agreement with the Preston et al. (1994) value. These BMP stars are particularly important for our survey if they have younger ages, and we select them by B–V color between 0.15 and 0.35 ($`MT_2`$ = 0.2 to 0.6). Using the Preston et al. (1994) local normalization, we expect to find 10–20 BMP stars deg<sup>-2</sup> to V=19.
Fig. 2 shows 23 such stars in the color range $`MT_2`$ = 0.2 to 0.6 for an area of 2.75 deg<sup>2</sup>, a little lower than but not significantly different from the value of Preston et al. (1994).

## 4 MAPPING THE HALO – “INTELLIGENT” STAR COUNTS

Accurate CCD photometry of large areas in many fields makes a new method of investigation of the halo possible. As discussed in Section 3.3, the halo turnoff color is B–V = 0.38, and the thick disk turnoff is B–V $``$ 0.5, so stars with colors between these two numbers are almost certainly halo stars close to the turnoff. Photographic colors have such large errors (0.05 to 0.10 magnitudes) that it is not possible to isolate halo turnoff stars from photographic data without simultaneously modelling the contribution of the thick disk. In contrast, our photometry is of such high and uniform quality that it is possible to separate halo turnoff stars cleanly from thick disk turnoff stars using the $`MT_2`$ color. Fig. 14 illustrates this in one of our lowest latitude fields, where both thick disk and halo turnoffs can be seen. We now make a preliminary analysis of the BTC data to illustrate the power of our survey technique. Our major aim here is to check whether our data agree with other models for the halo. The dataset obtained with the Big Throughput Camera on the CTIO 4m in April 1999 (Dohm-Palmer et al., 1999) is particularly useful for mapping the halo because of the uniform quality of the data, and the fact that conditions were photometric throughout the run. Fig. 13 shows in cartoon form the location of the fields where data were obtained. There are 46 fields with latitudes ranging from +25° to +73°, and longitudes from $`l`$=17° through the galactic center to $`l`$=218° (less than 40 degrees from the anticenter). We have checked for errors in reductions or reddening estimation by carefully examining the color-magnitude diagrams of all fields.
Figure 14 is typical: the position of the halo turnoff is clear, and agrees well with the calculated value of $`(MT2)_0`$=0.64 in all but two of the 46 fields. In these two fields it appears that the Schlegel reddening values that we used should be adjusted by a few hundredths in order to bring the turnoff position to this color. Because the stars are uniformly distributed across the color range from $`(MT2)_0`$=0.64 to 0.8, photometric or reddening errors of a few hundredths of a magnitude will in no case be large enough to make the turnoff star numbers vary significantly. Models of the shape of the halo have in many cases been derived from easily detectable tracers such as globular clusters (Zinn, 1985) or horizontal-branch stars (Kinman et al., 1965; Saha, 1985; Hartwick, 1987; Preston et al., 1991; Kinman et al., 1994). Studies using star counts (e.g. Bahcall and Soneira 1986) are handicapped by their inability to separate halo and thick disk accurately with photographic photometry, as discussed above. Models based on globular clusters and on field stars both find that the halo is centrally concentrated, with a power-law exponent varying from –3.0 to –3.5 or even steeper. Axial ratios vary from 0.5 to 1.0, with suggestions from several groups that the axial ratio might change with galactocentric radius, in the sense that the outer halo is spherical and the inner halo flattened. However, neither globular clusters nor horizontal-branch stars are ideal for measuring the density distribution of the halo. First, it is not clear that the halo field stars were formed under the same conditions as the globular clusters. Second, possible age and metallicity gradients in the halo (Searle and Zinn, 1978; Zinn, 1993; Preston et al., 1991; Kinman et al., 1994) are reflected in horizontal-branch morphology. This can cause a different power-law exponent to be derived for RR Lyraes and BHB stars, as found by Preston et al. (1991).
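The flattened power-law density models discussed here can be written down compactly. A sketch, assuming the density is normalized at a solar galactocentric radius of 8 kpc (the normalization and radius are illustrative, not values from the text):

```python
import math

def halo_density(R, z, exponent=-3.0, q=0.6, rho0=1.0, r_sun=8.0):
    """Flattened power-law halo: rho proportional to m**exponent, where
    m = sqrt(R**2 + (z/q)**2), q = b/a is the axial ratio, and (R, z) are
    cylindrical galactocentric coordinates in kpc."""
    m = math.sqrt(R ** 2 + (z / q) ** 2)
    return rho0 * (m / r_sun) ** exponent
```

For an exponent of –3.0 the density drops by a factor of 8 for each doubling of m, and the flattening q squashes the equal-density ellipsoids along z, which is what the turnoff-star counts at different z heights constrain.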
Thus a check of the earlier results with a different tracer is valuable. Although our turnoff star sample will also be sensitive to age and metallicity variations, we are able to check for such effects by examining the color-magnitude diagrams of discrepant fields directly.

### 4.1 Local Halo Density

The estimation of the halo density in the solar neighborhood is even more challenging than the measurement of its density in more distant fields. It has in most cases been based on the local density of stars selected by their high proper motion (e.g. Bahcall and Casertano 1986), with kinematic corrections made for the amount of the halo that would be missed using this selection technique. There have also been a small number of direct measurements of the local density of some tracer such as RR Lyraes or red giants (e.g. Preston et al. 1991, Morrison 1993) which agree within a factor of two with the proper-motion data, but different selection effects such as metallicity operate for these samples. Attempts to extrapolate the results of pencil-beam surveys inward to the solar neighborhood are not always successful — Preston et al. (1991) and Wetterer and McGraw (1996) note the disagreement of a factor of two between the counts of nearby RR Lyrae variables and the extrapolation of more distant RR Lyrae counts inward. Since we have few low-latitude fields and our BTC data saturate for magnitudes much brighter than V=17 (corresponding to a distance of 3 kpc for turnoff stars), we have no direct constraints on the local density from our data. Our preferred value of the solar neighborhood halo density will depend on the axial ratio adopted. We decided to re-examine the measurement of the local halo density from the proper-motion samples, as there have been significant advances in the information available on these stars since the work of Bahcall and Casertano (1986). We have used the extended sample of Carney et al.
(1994), which has been updated recently to have a distance scale consistent with the Hipparcos parallax measurements for subdwarfs, to make an estimate of the local density. Bruce Carney kindly made this sample available to us in advance of publication. Also, in order to make comparisons with our BTC data easier, we isolated stars in the turnoff star color bin we used (B–V between 0.38 and 0.5, corresponding to $`MT_2`$ between 0.64 and 0.8), and then derived a local luminosity function for these stars alone. Since this is a kinematically selected sample, we need to make corrections for the halo stars missed because of the proper motion selection. To minimise contamination of the sample by thick disk stars, which contribute strongly to the derived local density because the lowest velocity stars are given the highest weights (see Bahcall and Casertano 1986), we only used stars with tangential velocity greater than 220 km/s, and rejected all stars with \[Fe/H\] $`>`$ –1.0. This may reject a few genuine halo stars, but will cause an error of only $``$10% in the derived density (Carney et al., 1989). Erroneously including thick disk stars would have a much larger effect. The Carney et al. (1994) sample is drawn from the Lowell Proper Motion Catalog, which covers the entire Northern Hemisphere and has a proper motion lower limit of 0.26 <sup>′′</sup> yr<sup>-1</sup>. In the color range of concern here, the catalog’s magnitude limit is sufficiently faint that all we need to do is correct for proper motion selection effects, which we do by weighting by $`V_{tan}^3`$, following Bahcall and Casertano (1986). Table 4 gives our results. Carney and collaborators have a more complete kinematical analysis in progress, so we have chosen simply to use the simulations of Bahcall and Casertano (1986) to correct for the tangential velocity cut at 220 km/s.
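The proper-motion selection correction can be sketched with a 1/V_max-style estimator: a star of tangential velocity V_tan stays above the catalog's proper-motion limit μ_min out to d_max ∝ V_tan/μ_min, so its accessible survey volume grows as the cube of V_tan and its density contribution is the reciprocal. This is only a schematic of the procedure, not the authors' code; the sky fraction and cut values are taken from the text, and the discovery-fraction correction mentioned in Section 4.1 is applied separately.

```python
import math

def local_density(stars, mu_min=0.26, vtan_cut=220.0, sky_fraction=0.5):
    """Schematic 1/V_max density estimate for a proper-motion-limited sample.

    stars: iterable of (vtan_km_s, feh) pairs.
    d_max = vtan / (4.74 * mu_min) is the distance (pc) at which the proper
    motion drops to the catalog limit mu_min (arcsec/yr); each surviving
    star contributes 1/V_max, so fast stars, visible over a larger volume,
    are weighted down by vtan**-3.
    """
    rho = 0.0
    for vtan, feh in stars:
        if vtan <= vtan_cut or feh > -1.0:   # kinematic and abundance cuts
            continue
        d_max = vtan / (4.74 * mu_min)
        v_max = sky_fraction * (4.0 / 3.0) * math.pi * d_max ** 3
        rho += 1.0 / v_max
    return rho  # stars per cubic parsec, before the discovery-fraction correction
```

The strong V_tan dependence of the weights is why contamination by lower-velocity thick disk stars inflates the derived density so severely.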
Morrison (1993) notes that the sample used by Bahcall and Casertano to derive halo kinematics was probably contaminated by thick disk stars, and calculates that the “discovery fraction” for a sample with this $`V_{tan}`$ cutoff should be close to 0.5, not 0.33 as they found. We have used this higher value in our calculations of the turnoff star luminosity function.

### 4.2 Halo concentration and axial ratio

While our maps of the halo based on accurate color-magnitude diagrams are less sensitive to possible variations in age and metallicity than the estimates from horizontal-branch stars in particular, they are less easy to interpret because of the difficulty of obtaining accurate distances to turnoff stars. Turnoff stars in our chosen color range can have absolute magnitudes which vary from 3.5 to 5.5, although the luminosity function of Table 4 shows that many will have absolute magnitude near 4.5. We have chosen to divide our data into three magnitude ranges (V=17–18.5, 18.5–19.5 and 19.5–20.5) and to calculate model predictions for these magnitude ranges to compare directly with the star counts there. We plan to do a more statistically sophisticated analysis in a future paper — the values we derive here will be roughly correct for the distance ranges probed by the data, but may not be optimal. Fig. 15 shows the sensitivity of our data to the axial ratio of the halo. It shows the ratio of the number of stars in a given magnitude bin to the model predictions. Both models have a power-law exponent of –3.0; one has an axial ratio b/a=1.0 (spherical) while the other has a moderate flattening (b/a=0.6). We have plotted these ratios vs. a rough indicator of the z height travelled by each line of sight, calculated by assuming that all stars in the bin have absolute magnitude $`M_V`$=4.5. It can be seen in Fig. 15 that the spherical halo provides a significantly worse fit.
There is a trend with z: points with high z have fewer stars than the model predicts, and points with low z have more. The b/a=0.6 model residuals show little trend with z, and the small number of very discrepant points remaining are found at both small and large z. Fig. 16 shows the same data/model ratios against the range in $`R_{gc}`$, for models with b/a=0.6 and power-law exponents –3.0 and –3.5. A trend with $`R_{gc}`$ is visible in the panel with power-law exponent of –3.5. The model predicts too few stars with large $`R_{gc}`$ and too many with small $`R_{gc}`$. Note also that in the panel showing residuals from the model with exponent –3.0, the remaining residuals show no clear trend with $`R_{gc}`$. We adopt a model with b/a=0.6 and power-law exponent –3.0 for the rest of this analysis. Only a 10% correction to the local halo density of Table 4 is needed with these parameters. It is possible to examine the residuals of the model fits in more detail by plotting the position in the Galaxy traversed by each field, and highlighting the fields with large residuals. Figures 18 and 17 show a number of vertical and horizontal slices through the Galaxy, for different ranges of y and z. Fields with residuals more than 2.5$`\sigma `$ from the model fit are highlighted. It can be seen that four of the fields with large negative residuals are towards the galactic center, and two, with large positive residuals, are in our lowest latitude field, which is close to the anticenter. Interestingly, the discrepant fields cover a large range in both r and z, so a simple adjustment to the power-law exponent or flattening will not improve things. The variable axial ratio models of Hartwick (1987) and Preston et al. (1991) will not solve the problem completely. The model of Preston et al. (1991) has an axial ratio changing linearly from b/a=0.5 at the galactic center to b/a=1.0 for ellipsoids with semi-major axis of 20 kpc.
While the results in fields close to the minor axis which currently have large negative residuals may be improved by the adoption of a model where the axial ratio is closer to 1 at this point, the anticenter fields will become more discrepant, since the axial ratio must become close to 1 at this distance from the center too. However, it is possible that the thick disk might be partially responsible for the excess of stars in this color range, if its scale height increases in the outer Galaxy and/or it has a strong abundance gradient. We plan to investigate these issues further when we obtain more fields with low values of $`R_{gc}`$ and z.

## 5 CONCLUSIONS

We describe a survey designed to find field stars from the galactic halo in large enough numbers to provide a strong test of the question “Was the galactic halo accreted from satellite galaxies?” The survey will cover 100 deg<sup>2</sup> at high galactic latitude. It uses an efficient pre-selection technique based on the Washington photometric system to identify halo red giants, blue horizontal branch stars, blue metal-poor main sequence stars and turnoff stars. Follow-up spectroscopy (with multi-object spectrographs for the more numerous turnoff and BMP stars) tests for kinematic signatures of accretion. Our sample of halo stars will be unprecedentedly large, and will cover galactocentric distances from the solar circle to more than 100 kpc. Because the photometric selection has few and easily quantifiable selection effects, our sample will also enable thorough studies of the density distribution of the galactic halo, increasing the numbers of distant halo objects known by an order of magnitude. This will allow much more accurate measurement of the mass of the Galaxy than previously possible. We discuss the particular problems caused for identification of the very distant halo giants by foreground K subdwarfs with very low metallicity (\[Fe/H\]$`<`$–2.0).
These stars are roughly as numerous as the genuine halo giants for $`V>18`$, and are not distinguishable from giants via the Washington $`M51`$ filter that we use for photometric luminosity discrimination. However, with low-dispersion spectra of S/N 15 or more, a combination of several spectral features such as the Ca I line at 4227 Å suffices to distinguish these stars reliably from giants. Studies which use giants in the outer halo need to take particular care to eliminate these extreme K subdwarfs. We use one of our large photometric datasets, obtained in one run using the BTC on the CTIO 4m, to constrain the spatial distribution of the Galaxy’s halo over galactocentric distances from approximately 5–20 kpc and z heights from 2 to 15 kpc by using halo turnoff star numbers. We find that a power law with exponent –3.0 and a moderate flattening of b/a=0.6 gives a good fit to most of the data. However, there are a number of fields that show departures from this model, suggesting that we will need a more complex model in future as our coverage of fields in the halo increases. It is a pleasure to thank Peter Hauschildt for kindly running the alpha element-enriched NextGen models for us and making the NextGen models so readily available on the Web. We would also like to thank Doug Geisler for the careful work he has devoted to making the Washington system so useful, and for his help with our many questions about the system, and Bruce Carney for kindly making his database available in advance of publication. We have benefited from helpful discussions with Mike Bessell, Bruce Carney, Conard Dahn and Peter Dawson. We would also like to thank an anonymous referee for comments that improved this paper. This work was supported by NSF grants AST 96-19490 to HLM, AST 95-28367, AST96-19632 and AST98-20608 to MM, and AST 96-19524 to EWO. We used the SIMBAD database, maintained by CDS, Strasbourg, extensively.
## APPENDIX: CONVERSIONS BETWEEN WASHINGTON AND OTHER SYSTEMS

The Washington broadband photometric system (Canterna, 1976; Harris and Canterna, 1979; Geisler, 1996) includes the $`T_2`$ filter (which is very similar to the Cousins I filter), the M filter (which is 1050Å wide and has a central wavelength of 5100Å, slightly blueward of the V filter) and the C filter (which is 1100 Å wide, centered near the Ca K line at 3934 Å). Geisler (1984) has added the DDO “51” filter to make giant/dwarf discrimination possible for G and K stars. (The system also includes the $`T_1`$ filter, similar to the Cousins R filter, which we do not need to use.) The original temperature indicator for the Washington system was the $`T_1T_2`$ ($`RI`$) color. However, an alternative is the $`MT_2`$ color, which transforms well to $`VI`$, as can be seen in Figure 19. For $`VI`$ between 0.5 and 1.5, the simple linear relation $`MT_2`$ = 1.264 ($`VI`$) works well. The standard deviation of the residuals from this line is 0.25 magnitudes. We have included both dwarfs and giants in this diagram, and a significant number of metal-poor globular cluster giants. We used:

* Landolt (1992) standard stars which are also Washington standards (Harris and Canterna, 1979; Geisler, 1984, 1986, 1990; Geisler et al., 1991, 1992; Geisler, 1996). Only stars with E(B–V) less than 0.11 mag, from the reddening maps of Schlegel et al. (1998), are shown. We corrected the colors using the relations between E(B–V) and the extinction in the Washington passbands, using the ratios from Harris and Canterna (1979).

* Photometry of stars from three globular clusters: 47 Tucanae, NGC 1851 and NGC 6752. The $`VI`$ photometry is from Da Costa and Armandroff (1990), and the Washington photometry from Geisler (1986); Geisler et al. (1997).

* Accurate photometry of a sample of very nearby dwarfs obtained by Gonzalez and Piché (1992). They also measured B–V colors for these stars.
Although the authors did not measure $`VI`$ colors, Bessell (1979) has shown that the Stromgren $`by`$ color transforms very well to $`VI`$, and we have used his transformation, and the very accurate $`by`$ measurements of Eggen (1998), to derive $`VI`$ colors for these stars. We have used the six stars in their sample with $`BV0.51`$ to supplement the Landolt standards. Because these stars are so nearby (all have large and accurate parallaxes), reddening corrections are not needed. It is clear that there is no metallicity dependence in the transformation from $`MT_2`$ to $`VI`$. It is also possible to transform the $`MT_2`$ color to B–V, but this is less straightforward because there are different loci for late-type dwarfs and giants in this diagram. However, for bluer stars such as halo turnoff stars, the relation is single-valued, as can be seen in Figure 20. Since there are few metal-poor dwarfs with Washington photometry, we have also plotted the models of Paltoglou and Bell (1994), which are based on synthetic spectra, for dwarfs with solar abundance and \[Fe/H\] = –2.0. Paltoglou and Bell (1994) note that their models show good agreement with existing data for temperatures higher than 4500 K (B–V $``$ 1.0). Washington $`CM`$ is less useful as a temperature indicator because it has both metallicity sensitivity for late-type stars (from line blanketing around 4000 Å) and some gravity sensitivity for earlier types (from the Balmer jump). We give the relation between $`CM`$ and B–V in Figure 21 for completeness. It is also useful to derive transformations between the Stromgren $`by`$ color and $`MT_2`$. This is particularly useful for K star temperatures, because many of our spectroscopic luminosity standards have $`by`$ observations but not $`MT_2`$.
For main sequence stars, we used the compilations of Eggen (1998); Gonzalez and Piché (1992) of photometry of stars from the Yale Parallax Catalog (van Altena et al., 1991), whose stars have the advantage that they are so close to the Sun that reddening corrections are unlikely to be needed. For metal-poor giants we used stars from Bond (1980) with Washington photometry in Geisler (1986). E(B V) values were taken from Bond (1980), and were smaller than 0.04 in all cases. Figure 22 shows the relation between these two colors. It can be seen that there are different sequences for red dwarfs and giants, but that the sequences are fairly tight in both cases.
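The simple linear M−T2 / V−I relation quoted in this appendix is easy to apply in code. A sketch (the coefficient and the 0.5 ≤ V−I ≤ 1.5 validity range are from the text; it reproduces the V−I = 0.51 → M−T2 ≈ 0.64 turnoff conversion used in Section 3.3):

```python
def v_i_to_m_t2(v_i):
    """M - T2 = 1.264 (V - I), quoted as valid for 0.5 <= V-I <= 1.5."""
    return 1.264 * v_i

def m_t2_to_v_i(m_t2):
    """Inverse of the same linear relation."""
    return m_t2 / 1.264
```

Outside the stated color range, or for the B−V and b−y conversions, the relations are not single-valued for all luminosity classes, so a simple linear function like this does not apply.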
# Buhot Reply

In the letter I proposed a possible scenario for the phase-separation instability of binary mixtures of hard-core particles in the limit of high asymmetry between large and small particles, relating this transition to a bond-percolation transition. In his Comment, A. A. Louis claims that, at least in the case of hard spheres, the phase-separation transition is unrelated to bond percolation. In fact, the mapping to bond percolation relates to an instability of the homogeneous phase and, in the case of hard spheres (or hard disks), it gives only an upper bound for the packing fraction at the phase-separation transition. First of all, let me recall the mapping argument to the bond-percolation transition proposed in . Due to the large asymmetry, the radial distribution function $`g_{ll}(r)`$ of the large particles possesses a sharp and high peak of width $`\sigma _s`$ (size of small particles) at the contact value $`\sigma _l`$ between two large particles. This peak may be interpreted as bonds between large particles, and the number of bonds $`n_b`$ is then defined as: $$n_b=\rho _l\int _{\sigma _l\le r\le \sigma _l+\sigma _s}g_{ll}(r)\,d\mathbf{r}$$ (1) where $`\rho _l`$ is the number density of the large particles. As the number $`n_b`$ increases with the total packing fraction, larger and larger aggregates of large particles appear in the homogeneous fluid phase. For a sufficiently large number of bonds $`n_c`$, or equivalently a sufficiently large packing fraction, a macroscopic aggregate appears in the system. This macroscopic aggregate breaks the translational invariance of the system, which is no longer homogeneous. Thus, the homogeneous fluid phase is unstable. In the letter I approximate the number $`n_c`$ by the number of bonds $`zp_c`$ at the bond-percolation transition of the corresponding crystal lattice of large particles, where $`z`$ is the coordination number and $`p_c`$ the bond-percolation threshold.
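Equation (1) is straightforward to evaluate numerically for a model $`g_{ll}(r)`$; a minimal sketch, assuming the angular part of the integral contributes the usual 4πr² factor:

```python
import math

def bond_number(rho_l, g_ll, sigma_l, sigma_s, steps=1000):
    """Midpoint-rule evaluation of Eq. (1):
    n_b = rho_l * integral of g_ll(r) over the shell
    sigma_l <= r <= sigma_l + sigma_s, with d^3r = 4*pi*r**2 dr."""
    dr = sigma_s / steps
    total = 0.0
    for i in range(steps):
        r = sigma_l + (i + 0.5) * dr
        total += g_ll(r) * 4.0 * math.pi * r * r * dr
    return rho_l * total
```

Since the peak of g_ll is concentrated in this thin shell of width σ_s, n_b grows with the total packing fraction, and it is this quantity that is compared with z p_c in the argument above.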
This approximation neglects two effects: the slight modification due to the lack of a lattice and, more importantly, the possible correlations between bonds. Taking into account those possible correlations, the appearance of the solid phase (or macroscopic aggregate) will be shifted to a lower number of bonds ($`n_c<zp_c`$). The packing fraction $`\eta _c`$ corresponding to $`n_b=zp_c`$ is thus only an upper bound on the packing fraction $`\eta _t`$ at the phase-separation transition between the fluid phase and the fluid-solid phase. In the case where the phase-separation transition is strongly first order due to a large surface tension between the fluid and the solid, we may expect large correlations between the bonds, and then the bond-percolation transition is not directly related to the phase-separation transition. However, $`\eta _c`$ is an upper bound (and thus only a qualitative estimate) for $`\eta _t`$. This is exactly what is observed in Fig. 1 of the Comment since, for hard spheres, we expect a large surface tension. It is also interesting to notice that for hard-disk mixtures the prediction that instability of the homogeneous fluid phase occurs for sufficiently high asymmetry remains valid. Consequently, a phase-separation transition is predicted, as already said in . However, no simulations are yet able to confirm this result. Concerning the case of parallel hard cubes, as already said in the Comment , it is known that the freezing of the one-component fluid is a second-order transition . This is principally due to the lack of rotational symmetry since the cubes are parallel (the first-order nature of the freezing transition is restored if we allow the cubes to rotate). Thus, in the case of binary mixtures of parallel hard cubes (or squares), we may expect that the surface tension between the fluid and the solid is low. In that case, we may expect that the packing fraction $`\eta _c`$ is a quantitative approximation for $`\eta _t`$.
Numerical simulations for a few state points seem to confirm this result, but a complete numerical calculation of the phase diagram is still lacking. In conclusion, the phase-separation transition is not directly related to the bond-percolation transition. However, the mapping proposed in may be useful to predict the existence of phase separation in binary mixtures of hard-core particles, since it gives an upper bound on the packing fraction at the phase-separation transition. This is, for example, the case for the hard-disk mixture.

A. Buhot
Theoretical Physics
University of Oxford
1 Keble Road
Oxford OX1 3NP
UNITED KINGDOM
# Standard and exotic interpretations of the atmospheric neutrino data ## Abstract The present status of some theoretical interpretations of the atmospheric neutrino deficit is briefly discussed. Specifically, we show the results for the FC mechanism and for the standard oscillation hypothesis, both in the active and in the sterile channels. All these mechanisms are able to fit the present data to a good statistical level. Among them, the $`\nu _\mu \nu _\tau `$ oscillation is certainly the best explanation of the atmospheric neutrino deficit, providing remarkably good agreement with the data. When cosmic rays collide with nuclei in the upper atmosphere, they produce neutrino fluxes, which have been detected by several detectors over many years . Even though the absolute fluxes of atmospheric neutrinos are largely uncertain, the expected ratio $`R(\mu /e)`$ of the muon neutrino flux over the electron neutrino flux is robust, since it largely cancels out the uncertainties associated with the absolute fluxes. This ratio has been calculated with an uncertainty of less than 5% over energies varying from 0.1 GeV to 100 GeV. Since the calculated ratio does not match the observations, we believe we are facing an anomaly which can be ascribed to non–standard neutrino properties. Super-Kamiokande high-statistics observations indicate that the deficit in the total ratio $`R(\mu /e)`$ is due to the number of neutrinos reaching the detector at large zenith angles. The $`e`$-like events do not present any compelling evidence of a zenith-angle-dependent suppression, while the $`\mu `$-like event rates are substantially suppressed at large zenith angles. The simplest explanation for these features comes from the hypothesis of neutrino masses and neutrino flavour oscillations, where a $`\nu _\mu `$ transforms during propagation into a $`\nu _\tau `$ or, alternatively, a sterile neutrino $`\nu _s`$. 
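The zenith-angle pattern described above can be illustrated with the schematic two-flavour vacuum survival probability, P = 1 - sin^2(2θ) sin^2(1.27 Δm² L/E), with Δm² in eV², L in km and E in GeV. The sketch below is not the analysis of this paper: the mass splitting and the production altitude are assumed round numbers chosen only for illustration.

```python
import math

R_EARTH = 6371.0    # km
H_PROD = 15.0       # km, assumed mean production altitude of the neutrinos

def path_length(cos_zenith):
    """Distance (km) from the production point to a surface detector."""
    s2 = 1.0 - cos_zenith ** 2                    # sin^2 of the zenith angle
    return (math.sqrt((R_EARTH + H_PROD) ** 2 - R_EARTH ** 2 * s2)
            - R_EARTH * cos_zenith)

def survival_probability(cos_zenith, e_nu_gev, dm2_ev2=3.0e-3, sin2_2theta=1.0):
    """Two-flavour vacuum nu_mu survival probability (illustrative parameters)."""
    L = path_length(cos_zenith)
    return 1.0 - sin2_2theta * math.sin(1.27 * dm2_ev2 * L / e_nu_gev) ** 2

# Downward-going GeV neutrinos travel ~15 km and barely oscillate,
# while upward-going ones cross the Earth (~13000 km):
print(path_length(1.0), path_length(-1.0))
print(survival_probability(1.0, 1.0))     # close to 1
```

Averaged over a realistic energy spectrum, the upward-going flux is suppressed by roughly sin^2(2θ)/2, reproducing the up-down asymmetry of the μ-like events.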
However, alternative (“exotic”) interpretations of the atmospheric neutrino deficit have been proposed. These include, among others, flavour-changing (FC) neutrino interactions in matter, neutrino decay, violation of relativity principles and violation of the CPT symmetry (see Ref. for relevant references). In this paper, in addition to presenting the most up-to-date results for the “standard” solution in terms of neutrino oscillations, we will also discuss the status of the FC hypothesis. For the latter case, we will perform the analysis of the latest 52 kton-yr Super-Kamiokande data , while in the case of the oscillation mechanism we will report on our global fit to all the available atmospheric neutrino data . More detailed discussions can be found in Ref. for the FC hypothesis and in Ref. for the oscillation mechanism. We refer to these papers also for a more exhaustive set of references. Here we briefly recall that we calculate the expected number of $`\mu `$-like and $`e`$-like contained events as $`N_\mu =N_{\mu \mu }+N_{e\mu }`$ and $`N_e=N_{ee}+N_{\mu e}`$ where $`N_{\alpha \beta }`$ $`=`$ $`n_tT{\displaystyle \frac{d^2\mathrm{\Phi }_\alpha }{dE_\nu d(\mathrm{cos}\theta _\nu )}\kappa _\alpha (h,\mathrm{cos}\theta _\nu ,E_\nu )}`$ (1) $`P_{\alpha \beta }{\displaystyle \frac{d\sigma }{dE_\beta }}\epsilon (E_\beta )dE_\nu dE_\beta d(\mathrm{cos}\theta _\nu )dh`$ where $`n_t`$ is the number of targets, $`T`$ is the experiment’s running time, $`E_\nu `$ is the neutrino energy and $`\mathrm{\Phi }_\alpha `$ is the flux of atmospheric neutrinos ($`\alpha =\mu ,e`$); $`E_\beta `$ is the final charged-lepton energy and $`\epsilon (E_\beta )`$ is the detection efficiency for such a charged lepton; $`\sigma `$ is the neutrino-nucleon interaction cross section, and $`\theta _\nu `$ is the zenith angle; $`h`$ and $`\kappa _\alpha `$ are geometrical factors . $`P_{\alpha \beta }`$ is the conversion probability of $`\nu _\alpha \nu _\beta `$, which depends on the conversion mechanism. See Refs. 
for the relevant expressions. For the upgoing muon data we calculate the fluxes as $$\mathrm{\Phi }_\mu (\theta )_{S,T}=\frac{1}{A(L,\theta )}\frac{d\mathrm{\Phi }_\mu (E_\mu ,\theta )}{dE_\mu }A_{S,T}(E_\mu ,\theta )$$ (2) where $`{\displaystyle \frac{d\mathrm{\Phi }_\mu }{dE_\mu }}`$ $`=`$ $`{\displaystyle \frac{d\mathrm{\Phi }_{\nu _\mu }(E_\nu ,\theta )}{dE_\nu }P_{\mu \mu }\frac{d\sigma }{dE_{\mu 0}}R(E_{\mu 0},E_\mu )}`$ (3) $`\kappa _\mu (h,\mathrm{cos}\theta _\nu ,E_\nu )dE_{\mu 0}dE_\nu dh`$ where $`R(E_{\mu 0},E_\mu )`$ is the muon range function, and $`A(L,\theta )=A_S(E_\mu ,\theta )+A_T(E_\mu ,\theta )`$ is the projected detector area for internal pathlengths longer than $`L`$. $`A_S`$ and $`A_T`$ are the corresponding areas for stopping and through-going muon trajectories. The fitting procedure we adopt is discussed in detail in . Here we only recall that we define a $`\chi ^2`$ function $$\chi ^2\underset{I,J}{}(N_I^{da}N_I^{th})(\sigma _{da}^2+\sigma _{th}^2)_{IJ}^1(N_J^{da}N_J^{th}),$$ (4) where $`I`$ and $`J`$ stand for any combination of the experimental data sets and event-types considered. The error matrices are defined as $`\sigma _{IJ}^2\sigma _\alpha (A)\rho _{\alpha \beta }(A,B)\sigma _\beta (B)`$ where $`\rho _{\alpha \beta }(A,B)`$ is the correlation matrix. A detailed discussion of the errors and correlations used in our analysis can be found in Ref. . The final step is the minimization of the $`\chi ^2`$ function, from which we determine the allowed region in the parameter space as: $`\chi ^2\chi _{min}^2+4.6,6.0,9.2`$ for 90, 95 and 99% C.L. As for the FC mechanism, we report in Table 1 the results of our fits to the different 52 kton-yr SK data samples . The same table also shows our results for the oscillation interpretation. We notice that the FC hypothesis is able to fit all the different data sets well, with statistical confidence comparable to the oscillation cases. 
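The structure of Eq. (4) can be sketched with a hypothetical two-bin example (the data values, 20% errors and 50% correlation below are invented for illustration). For two fitted parameters the quoted Δχ² thresholds follow in closed form, since the χ² cumulative distribution with two degrees of freedom is 1 - exp(-x/2):

```python
import math
import numpy as np

def chi_squared(n_data, n_theory, cov):
    """Correlated chi^2 of Eq. (4): r^T C^{-1} r, with r = N^da - N^th."""
    r = np.asarray(n_data, float) - np.asarray(n_theory, float)
    return float(r @ np.linalg.solve(cov, r))

# Hypothetical two-bin example: 20% errors with 50% bin-to-bin correlation.
data = np.array([100.0, 80.0])
theory = np.array([110.0, 70.0])
sigma = 0.2 * data
rho = np.array([[1.0, 0.5], [0.5, 1.0]])
cov = np.outer(sigma, sigma) * rho          # sigma_I rho_IJ sigma_J

chi2_val = chi_squared(data, theory, cov)
print(chi2_val)

# For two fitted parameters the chi^2 inverse CDF is -2 ln(1 - CL),
# which reproduces the Delta chi^2 thresholds quoted in the text:
thresholds = [-2.0 * math.log(1.0 - cl) for cl in (0.90, 0.95, 0.99)]
print([round(x, 1) for x in thresholds])    # -> [4.6, 6.0, 9.2]
```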
When a global analysis is performed, the FC hypothesis turns out to be a worse explanation than oscillations. This is mainly due to a too strong suppression of the horizontal through-going muons . Nevertheless, the FC mechanism is still acceptable at 90% C.L. . Fig. 1 shows the allowed regions in the two-parameter space of the FC mechanism . We notice that, in order to describe the data, a somewhat large amount of FC in the neutrino sector is required. As for the oscillation mechanism, we have performed a global fit to all the available atmospheric neutrino data: Nusex , IMB , Frejus , Kamiokande , Soudan , Super-Kamiokande , MACRO and Baksan . Some of our results are shown in Fig. 2 and in Table 2. We see that all three oscillation channels describe the data to a good statistical level (for details, see ). From the results of the fit, we can conclude that, among the three possibilities, the $`\nu _\mu \nu _\tau `$ oscillation hypothesis is currently the most favoured option. Acknowledgements. I wish to express my warmest thanks to the TAUP99 Organizers for allowing me to deliver this second talk at the Conference. This work was supported by the Spanish DGICYT under grant number PB95–1077 and by the TMR network grant ERBFMRXCT960090 of the European Union.
# The Spectroscopic Orbit of the Planetary Companion Transiting HD 209458Some of the data presented herein were obtained at the W. M. Keck Observatory, which is operated as a scientific partnership among the California Institute of Technology, the University of California and the National Aeronautics and Space Administration. The Observatory was made possible by the generous financial support of the W.M. Keck Foundation. The other data were obtained at Observatoire de Haute-Provence (France) and with the 1.2-m Euler Swiss telescope at La Silla Observatory, ESO Chile. ## 1 INTRODUCTION We report in this Letter on our spectroscopic observations of HD 209458, observations that led to the discovery of a transiting planet with an orbital period of 3.5 days. We have been observing HD 209458 since August 1997 as one of the targets in two large independent radial-velocity surveys, both searching for extrasolar planets around solar-type stars. One program uses HIRES (Vogt et al. 1994) on the Keck I telescope, and the other uses ELODIE (Baranne et al. 1996) on the 1.93-m telescope at Observatoire de Haute Provence (France). In June 1999, after observations from both efforts showed that the radial velocity of HD 209458 was variable, additional frequent observations were obtained with ELODIE, as well as with CORALIE (Queloz et al. 1999) on the new 1.2-m Swiss telescope at La Silla. In August 1999 the identity of HD 209458 and its orbital elements were provided to D. Charbonneau and T. Brown so that they could look for transits with the STARE photometric instrument (Brown & Kolinski 1999). Two transits were successfully observed in September 1999 (Charbonneau et al. 2000; hereafter C00). An independent discovery in November 1999 of the planetary orbit, as well as the detection of a transit ingress, are reported by Henry et al. (2000). 
Transit observations together with an orbital solution allow us to determine directly the mass, radius, and density of the planet, provided we have good estimates for the mass, radius, and limb darkening of the star (e.g., C00). We describe in this Letter our efforts to derive better estimates for these parameters. ## 2 OBSERVATIONS One of the two radial-velocity projects, the results of which we report here, is the G Dwarf Planet Search (Latham 2000). The sample for this project is composed of more than 1000 targets whose effective temperatures, luminosities, chemical compositions, and Galactic-population memberships have been determined using precise Strömgren photometry (Olsen 1993; private communications). In addition, the radial velocities of these stars were known to be constant at a precision of 300–600 m s<sup>-1</sup>, based on more than ten years of observations with the CORAVELs (Mayor 1985) and the CfA Digital Speedometers (Latham 1985, 1992). The observations for this project were performed with HIRES and its iodine gas-absorption cell (Marcy & Butler 1992) on the Keck I telescope. The G Dwarf Planet Search observing strategy is designed to carry out an initial reconnaissance of the sample stars, with the immediate goal of identifying the stars whose radial velocity is modulated with an amplitude of about 50 m s<sup>-1</sup> or larger. To increase the number of target visits we concede velocity precision, and therefore have exposed no longer than needed to achieve a precision of 10 m s<sup>-1</sup>. Radial velocities are derived from the spectra with TODCOR (Zucker and Mazeh 1994) — a two dimensional correlation algorithm. Although we are still in the development stage of our software (Zucker, Drukier & Mazeh, in preparation), preliminary results from the global analysis of the 675 stars with two or more iodine observations show that we are already close to achieving our goal of 10 m s<sup>-1</sup> or better. 
The other program whose results are presented here is the ELODIE planet search survey (Mayor & Queloz 1995a). After the discovery of the planetary companion around 51 Peg (Mayor & Queloz 1995b), the surveyed sample was extended to about 320 northern hemisphere solar-type stars brighter than $`m_V=7.65`$ and with small projected rotational velocities ($`v\mathrm{sin}i`$ from CORAVEL, Benz & Mayor 1984). From CORAVEL data, the stars in this sample were known to have constant radial velocities at a 300 m s<sup>-1</sup> precision level. Radial-velocity measurements are obtained with the ELODIE echelle fiber-fed spectrograph (Baranne et al. 1996) mounted on the Cassegrain focus of the 1.93-m telescope of the Observatoire de Haute-Provence. The reduction technique used for this sample is known as the “simultaneous Thorium-Argon technique” described by Baranne et al (1996). The precision achieved with this instrument is of the order of 10 m s<sup>-1</sup> over more than 3 years. After the two independent detections of the short-term variability of HD 209458 by the HIRES and ELODIE teams, we decided to add this object to the CORALIE planet search sample (Udry et al. 1999a,b) in order to gather more radial-velocity data and therefore increase the precision of the orbital elements. The precision achieved with CORALIE is of the order of 7–8 m s<sup>-1</sup> over 18 months. To check for other possible sources of line shifts besides orbital motion we computed the mean bisector profiles (as described by Queloz et al. 2000, in preparation) for all the ELODIE and CORALIE spectra. No correlation between the observed velocities and the line profiles was detected, convincing us that the planetary interpretation was correct even before transits were detected. ## 3 RADIAL-VELOCITY ANALYSIS As of November 16, 1999, we had a total of 150 radial-velocity measurements of HD 209458 available for analysis: 11 from HIRES, 31 from ELODIE, and 108 from CORALIE. 
Initially we applied shifts of $`5`$ m s<sup>-1</sup> to the ELODIE velocities and $`14780`$ m s<sup>-1</sup> to the HIRES measurements to bring them to the CORALIE system, the latter offset being much larger due to the arbitrary zero point of the HIRES velocities (Zucker, Drukier & Mazeh, in preparation). To account for possible errors in these shifts, the orbital solutions described below included two additional free parameters — $`\mathrm{\Delta }_{HC}`$ and $`\mathrm{\Delta }_{EC}`$ for the HIRES and ELODIE shifts — along with the orbital elements. In addition, the two transit timings recorded by C00 provide useful information on the orbital period and $`T_c`$, the time of inferior conjunction. These timings were therefore included in the derivation of the spectroscopic orbital elements, and we treated them as independent measurements with their corresponding uncertainties. In a preliminary solution weights were assigned to each observation based on the internal errors. From this fit we computed the RMS residuals separately for each data set – $`\sigma _H`$, $`\sigma _E`$, and $`\sigma _C`$ — and then scaled the internal errors for each instrument to match the corresponding RMS residuals on average. We re-solved for the orbital parameters, and the procedure converged in one iteration with essentially no change in the elements. Tests allowing for a non-circular Keplerian orbit for HD 209458 resulted in an eccentricity indistinguishable from zero: $`e=0.016\pm 0.018`$. We therefore assume in the following that the orbit is circular. Our final orbital solution is represented graphically in Figure 1. The elements are given in Table 1, where the value of the planetary mass, $`M_p`$, depends on the inclination angle and on the adopted stellar mass, $`M_{}`$, to the power of 2/3. The orbital elements reported by Henry et al. (2000) are consistent with our results, although their quoted errors are substantially larger. 
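Because a circular orbit is linear in its cosine and sine components, v(t) = γ_inst + A cos(2πt/P) + B sin(2πt/P) with K = sqrt(A² + B²), the amplitude and the per-instrument offsets can be recovered together by ordinary linear least squares once the period is fixed. The sketch below runs on synthetic data: the period and the instrument offsets echo values quoted in the text, while the amplitude, epoch, sampling and noise level are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

P = 3.52447                     # d, orbital period (held fixed here)
K_TRUE, TC_TRUE = 85.0, 0.31    # m/s and d: invented amplitude and T_c
OFFSETS = {"CORALIE": 0.0, "ELODIE": -5.0, "HIRES": -14780.0}

# Synthetic radial velocities from three instruments with 8 m/s noise.
t, v, inst = [], [], []
for name, off in OFFSETS.items():
    ti = rng.uniform(0.0, 30.0, 40)
    phase = 2.0 * np.pi * (ti - TC_TRUE) / P
    vi = off - K_TRUE * np.sin(phase) + rng.normal(0.0, 8.0, ti.size)
    t.append(ti); v.append(vi); inst += [name] * ti.size
t, v, inst = np.concatenate(t), np.concatenate(v), np.array(inst)

# Design matrix: v = A cos(2 pi t/P) + B sin(2 pi t/P) + gamma_instrument.
cols = [np.cos(2.0 * np.pi * t / P), np.sin(2.0 * np.pi * t / P)]
cols += [(inst == name).astype(float) for name in OFFSETS]
X = np.column_stack(cols)
coef, *_ = np.linalg.lstsq(X, v, rcond=None)

K_fit = np.hypot(coef[0], coef[1])
print(K_fit)                    # recovers ~85 m/s
```

With real data one would additionally rescale the weights per instrument and iterate, as described above; a non-zero eccentricity would make the model non-linear in the orbital elements.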
Robichon & Arenou (2000) have identified three transits in the Hipparcos photometry and have derived a more precise period, $`P=3.524739\pm 0.000014`$ d, consistent with the value of Table 1 within 1.5-$`\sigma `$. ## 4 STELLAR PARAMETERS In this section we compare the results of two different approaches for estimating the mass and radius of HD 209458. In the first approach we rely on the effective temperature, $`T_{\mathrm{eff}}`$, and the surface gravity, $`\mathrm{log}g`$, derived from a fine analysis of the iron lines in the HIRES template spectrum of HD 209458. In the second approach we take advantage of the accurate distance available from the Hipparcos astrometric mission (ESA 1997) and rely on the stellar absolute magnitude and observed color. For both approaches we matched theoretical isochrones from four different sets of stellar evolution models with the location of HD 209458 in the corresponding parameter plane, estimating the stellar mass, radius and age. The stellar models depend sensitively on the metallicity, and therefore a critical first step for both approaches is to determine the metallicity. An analysis of eight spectra obtained with the CfA Digital Speedometers (Latham 1992) with the techniques reported by Carney et al. (1987) gave $`T_{\mathrm{eff}}=5975`$ K, $`\mathrm{log}g=4.25`$, and $`[\mathrm{m}/\mathrm{H}]=+0.11\pm 0.1`$. Another independent analysis of the cross-correlation dips from CORAVEL observations (Mayor 1980 as revised by Pont 1997 using primary calibrators by Edvardsson et al. 1993) gave \[Fe/H\]$`=-0.14\pm 0.1`$. This large range in metallicity values, $`-0.14`$ to $`+0.11`$, implied a large uncertainty in the mass and radius of HD 209458, so we undertook a detailed analysis of selected Fe I and Fe II lines measured on our HIRES template spectrum, which has a resolving power of about 70,000 and signal-to-noise ratio per resolution element of about 300. We adopted a line list developed by L. 
de Almeida (private communication) and solar gf values based on the National Solar Observatory solar flux atlas (Kurucz et al. 1984). For this analysis of HD 209458 we used model atmospheres and computer codes based on the work of R. Kurucz. Selected weak Fe I lines were used to set $`T_{\mathrm{eff}}`$ by adjusting the $`T_{\mathrm{eff}}`$ until the plot of abundance versus excitation potential was flat. Stronger Fe I lines were then included and the microturbulent velocity was adjusted to get a flat dependence of abundance on line strength. Finally, the surface gravity was adjusted until the abundances from the Fe II and Fe I lines agreed. This analysis gave $`T_{\mathrm{eff}}=6000`$ K, microturbulent velocity $`\xi =1.15\mathrm{km}\mathrm{s}^1`$, $`\mathrm{log}g=4.25`$, and \[Fe/H\]$`=0.00`$. The errors in these values are undoubtedly dominated by systematic effects, and we estimate that they are $`\pm 50`$ K in $`T_{\mathrm{eff}}`$, $`\pm 0.2`$ in $`\mathrm{log}g`$, and $`\pm 0.02`$ in \[Fe/H\]. This approach yielded $`T_{\mathrm{eff}}`$ and $`\mathrm{log}g`$, as well as the metallicity. In the other approach we used the absolute magnitude and color of $`M_V=4.28\pm 0.11`$ and $`BV=+0.594\pm 0.015`$ (assuming no extinction or reddening), together with the metallicity derived from the iron lines. In Table 2 we compare the values of the stellar mass and radius, $`M_{}`$ and $`R_{}`$, and age that we derive for HD 209458 using the two approaches and isochrones from four different stellar evolution codes: Geneva (Schaller et al. 1992), Bertelli (Bertelli et al. 1994), Claret (Claret 1995), and Yale (Demarque et al. 1996). The results of this comparison are given in the first four lines of Table 2. The last two lines of Table 2 demonstrate the effects of changing the stellar helium abundance and metallicity. The next-to-the-last line indicates that helium-rich models give slightly lower values of the mass and radius. 
The effect of changing the metallicity, with the helium scaled by the enrichment law, is illustrated by the last line. We therefore conclude that if Z = 0.019 is adopted for the solar metallicity (Anders & Grevesse 1989), all the results for the Z = 0.02 models should be reduced by about 0.01 in both mass and radius. Note that all four sets of evolutionary models and the two approaches yielded similar results, with only small differences. The mass estimates are systematically smaller for the results based on the observational $`M_V`$ vs $`BV`$ plane, by about $`0.05M_{}`$. The radius estimates are also smaller, by about $`0.03R_{}`$. Note, however, that the good agreement between the different models may be misleading, and that systematic errors are likely to be larger. Based on all these considerations, we adopt for our best estimate of the mass and radius of HD 209458 $`1.1\pm 0.1M_{}`$ and $`1.3\pm 0.1R_{}`$. The uncertainty estimates are somewhat arbitrary, and are based mainly on the assumed uncertainty in $`T_{\mathrm{eff}}`$, $`\mathrm{log}g`$, $`M_V`$ and $`BV`$. Using ELODIE and CORAVEL cross-correlation dip widths, we infer the projected rotational velocity (calibration by Queloz et al. (1998) for ELODIE and by Benz & Mayor (1984) for CORAVEL). The results are $`v\mathrm{sin}i=4.4\pm 1`$ km s<sup>-1</sup> for CORAVEL and $`v\mathrm{sin}i=4.1\pm 0.6`$ km s<sup>-1</sup> for ELODIE. ## 5 PLANETARY PARAMETERS C00 present preliminary estimates of the planetary radius, $`R_p`$, and orbital inclination, $`i`$, based on initial estimates of $`M_{}`$, $`R_{}`$, and the $`R`$-band limb-darkening parameter $`c_R`$. We now present values for these quantities based on the more accurate analysis of the stellar parameters presented in this Letter. In addition, we present estimates of the uncertainties that combine the effects both due to the uncertainties in the stellar parameters, and due to the level of precision in the photometric measurements of the transit. 
All uncertainties presented below correspond to 1-$`\sigma `$ confidence levels. Using the calculated limb darkening coefficients presented in Claret, Díaz-Cordovés, & Giménez (1995), we adopted a value of $`c_R=0.56\pm 0.05`$, based on the values for $`T_{\mathrm{eff}}`$ and $`\mathrm{log}g`$ derived in the previous section. As described in C00, we then calculated the $`\mathrm{\Delta }\chi ^2`$ of the photometric points for the model light curve as a function of $`R_p`$ and $`i`$, using the revised values of {$`M_{}`$, $`R_{}`$, $`c_R`$} presented here. To evaluate the uncertainty in the derived values of $`R_p`$ and $`i`$, we calculated the $`\mathrm{\Delta }\chi ^2`$ for all combinations of the stellar parameters at 1-$`\sigma `$ above and below their respective best fit values. We then assign 1-$`\sigma `$ error bars based on the intervals which are excluded with this confidence for all these combinations. We find $`R_p=1.54\pm 0.18R_{\mathrm{Jup}}`$ and $`i=85\stackrel{}{\mathrm{.}}2\pm 1\stackrel{}{\mathrm{.}}4`$. The primary mass and the inclination imply (see Table 1) the planetary mass to be $$M_p=0.69\pm 0.05M_{Jup}.$$ (1) From the planetary radius and mass we calculate the density, surface gravity, and escape velocity to be $`\rho =0.23\pm 0.08\mathrm{g}\mathrm{cm}^{-3}`$, $`g=720\pm 180\mathrm{cm}\mathrm{s}^{-2}`$, and $`v_e=40\pm 4\mathrm{km}\mathrm{s}^{-1}`$. In the interpretation of the transit curve of HD 209458, the dominant uncertainty in the planetary parameters is due to the uncertainty in the stellar radius, rather than the observational uncertainty in the photometric points. The planetary radius found here is larger and the orbital inclination is smaller than the values presented in C00. This is due primarily to the fact that the value of the stellar radius found here ($`1.3R_{}`$) is larger than the one assumed in the initial analysis ($`1.1R_{}`$). 
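The quoted density, surface gravity, and escape velocity follow directly from the planetary mass and radius; a quick consistency check with standard constants (the Jupiter reference values below are nominal):

```python
import math

G = 6.674e-11            # m^3 kg^-1 s^-2
M_JUP = 1.898e27         # kg
R_JUP = 7.1492e7         # m (equatorial)

M_p = 0.69 * M_JUP
R_p = 1.54 * R_JUP

rho = M_p / (4.0 / 3.0 * math.pi * R_p ** 3)    # mean density, kg m^-3
g = G * M_p / R_p ** 2                          # surface gravity, m s^-2
v_e = math.sqrt(2.0 * G * M_p / R_p)            # escape velocity, m s^-1

print(round(rho / 1000.0, 2))    # g cm^-3 -> 0.23
print(round(g * 100.0))          # cm s^-2 -> 721
print(round(v_e / 1000.0))       # km s^-1 -> 40
```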
A larger star requires a larger planet crossing at a lower inclination to fit the same photometric data. The results presented here and in C00 are based on the analysis of the detailed observed transit lightcurve. Henry et al. (2000), who did not observe the full transit, assumed a value of 90° for the orbital inclination, and a stellar radius and mass of $`1.15R_{}`$ and $`1.03M_{}`$. With these assumptions they derived a planetary mass and radius of $`R_p=1.42\pm 0.08R_{Jup}`$ and $`M_p=0.62M_{Jup}`$. ## 6 DISCUSSION The spectroscopic orbit, when combined with the inclination derived from transits, enables us to derive the planetary mass directly. This demonstrates the power of combining spectroscopy and photometry for transiting planets (C00; Henry et al. 2000). To derive masses for non-transiting planets that have spectroscopic orbits, we are forced to turn to other approaches for determining the orbital inclination, such as astrometry (e.g., Mazeh et al. 1999; Zucker & Mazeh 2000). HD 209458 is the first extrasolar planet whose orbital inclination is known with high precision. In principle we might be able to derive the inclination of the stellar rotational axis, if we could obtain the stellar rotational period from photometric observations. Together with the projected rotational velocity and the stellar radius derived here, this will enable us for the first time to check the assumption that the stellar rotation is aligned with the orbital motion for such short-period systems. With $`v\mathrm{sin}i=4.2\pm 0.5`$ km s<sup>-1</sup>, alignment implies a rotational period of $`P=15.7\pm 2.4`$ days. One unique feature of the transit technique is its ability to derive the planetary radius. As described in the review by Guillot (1999) and the references listed therein, the radius of an extrasolar giant planet is determined by its mass, age, degree of insolation, and composition. 
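The rotational period quoted above is simply P = 2πR*/v for spin aligned with the nearly edge-on orbit (sin i close to 1); with the stellar radius adopted in this Letter:

```python
import math

R_SUN = 6.957e8                     # m
v_rot = 4.2e3                       # m/s: v sin i, with sin i ~ 1 assumed
R_star = 1.3 * R_SUN

P_rot = 2.0 * math.pi * R_star / v_rot
print(round(P_rot / 86400.0, 1))    # days -> 15.7
```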
Now that an accurate measurement of the planetary radius has been made, it should be possible to infer specifics of the planetary composition, since the mass and insolation are known, and the age can be reasonably constrained based on the value of the age of the star determined above. More specifically, as described in Guillot (1999), it may be possible to calculate the amount of heavy elements for a given hydrogen/helium ratio, and to infer the presence or absence of certain atmospheric grains. These very fascinating possibilities, together with the analysis of other known short-period extrasolar planets, like 51 Peg, $`\tau `$ Boo, and $`\upsilon `$ And (e.g., Ford, Rasio & Sills 1999), promise us new insights to the formation and evolution of these systems. We give special thanks to Robert L. Kurucz for generously sharing his expertise and procedures for analyzing stellar spectra. We are grateful to Karin Sandstrom for her help with the abundance analysis and to the referee for his useful comments. This work was supported by the US-Israel Binational Science Foundation through grant 97-00460, the Israeli Science Foundation, the Swiss NSF (FNRS) and the French NSF (CNRS). ## REFERENCES Anders, E., & Grevesse, N. 1989, Geochim. Cosmochim. Acta, 53, 197 Baranne, A., Queloz, D., Mayor, M., Adrianzyk, G., Knispel, G., Kohler, D., Lacroix, D., Meunier, J.-P., Rimbaud, G., & Vin, A. 1996, A&AS, 119, 373 Benz W., & Mayor M., 1984, A&A, 138, 183 Bertelli, G., Bressan, A., Chiosi, C., Fagotto, F., & Nasi, E. 1994, A&AS, 106, 275 Brown, T. M., & Kolinski, D. 1999, http://www.hao.ucar.edu/public/research/stare/stare.html Carney, B. W., Laird, J. B., Latham, D. W., & Kurucz, R. L. 1987, AJ, 94, 1066 Charbonneau, D., Brown, T. M., Latham, D. W., & Mayor, M. 2000, ApJL, 529, L45 Claret, A. 1995, A&AS, 109, 441 Claret, A., Díaz-Cordovés, J., & Giménez, A. 1995, A&AS, 114, 247 Demarque, P., Chaboyer, B., Guenther, D., Pinsonneault, M., Pinsonneault, L., & Yi, S. 
1996, Yale Isochrones 1996 in ”Sukyoung Yi’s WWW Homepage”. Edvardsson, B., Andersen, J., Gustafsson, B., Lambert, D. L., Nissen, P.E., & Tomkin, J., 1993, A&A 275, 101 ESA 1997, The Hipparcos and Tycho Catalogues, ESA SP-1200 Ford, E. B., Rasio, F. A., & Sills, A. 1999, ApJ, 514, 411 Guillot, T. 1999, Science, 286, 72 Henry, G., Marcy, G. W., Butler, R. P., & Vogt, S. S. 2000, ApJ, 529, L41 Kurucz, R. L., Furenlid, I., & Brault, J. 1984, Solar Flux Atlas from 296 to 1300 nm, National Solar Observatory Atlas No. 1 Latham, D. W. 1985, in Stellar Radial Velocities, IAU Coll. 88, ed. A. G. D. Philip & D. W. Latham, (Schenectady: L. Davis Press), p. 21 Latham, D. W. 1992, in Complementary Approaches to Double and Multiple Star Research, IAU Coll. 135, ed. H. A. McAlister & W. I. Hartkopf, ASP Conference Series, Vol. 32, p. 110. Latham, D. W. Charbonneau, D., Brown, T. M., Mayor, M., & Mazeh, T. 1999, IAUC, 7315 Latham, D. W. 2000, in Bioastronomy 99: A New Era in the Search for Life in the Universe, ed. G. Lemarchand and K. Meetch, ASP Conference Series, in press Marcy, G. W., & Butler, R. P. 1992, PASP, 104, 270 Mayor, M. 1980, A&A, 87, L1 Mayor, M. 1985, in Stellar Radial Velocities, IAU Coll. 88, ed. A. G. D. Philip & D. W. Latham, (Schenectady: L. Davis Press), p. 35 Mayor, M., & Queloz, D. 1995a, in Cool Stars, Stellar Systems and the Sun, 9th Cambridge workshop, ed. R. Pallavicini & A. K. Dupree, ASP Conference Series, Vol. 109, p.35 Mayor, M., & Queloz, D., 1995b, Nature, 378, 355 Mazeh, T., Zucker, S., Dalla Torre, A., & van Leeuwen, F., 1999, ApJ, 522, L149 Pont, F., 1997, Ph.D. Thesis, Geneva Observatory Queloz, D., Allain, S., Mermillod, J.-C., Bouvier, J., & Mayor, M. 1998, A&A, 335, 183 Queloz, D., Mayor, M., Weber, L., Blécha, A., Burnet, M., Confino, B., Naef, D., Pepe, F., Santos, N., & Udry, S. 1999, A&A, in press Robichon, N., & Arenou, F. 2000, A&A, in press Schaller, G., Schaerer, D., Meynet, G., & Maeder, A. 
1992, A&AS, 96, 269 Udry S., Mayor M., Queloz, D., Naef D., Santos N.C., 1999a, VLT Opening Symposium, “From extrasolar planets to brown dwarfs”, ESO Astrophys. symp. Ser., in press Udry, S., Mayor, M., Naef, D., Pepe, F., Santos, N. C., Queloz, D., Burnet, M., Confino, B., & Melo, C., 1999b, A&A, submitted Vogt, S. S., et al. 1994, Proc. Soc. Photo-Opt. Instrum. Eng., 2198, 362 Zucker, S., & Mazeh, T. 1994, ApJ, 420, 806 Zucker, S., & Mazeh, T. 2000, ApJL, in press
# Untitled Document CERN-TH/2000-004 NEUTRINO MASSES AND GRAND UNIFICATION G. Altarelli Theoretical Physics Division, CERN 1211 Geneva 23, Switzerland and Università di Roma Tre, Rome, Italy Abstract We discuss some models of neutrino masses and mixings in the context of fermion masses in Grand Unified Theories. Talk given at the 6th International Workshop on Topics in Astroparticle and Underground Physics (TAUP 99) 6–10 September 1999, Paris, France CERN-TH/2000-004 January 2000
# Stellar-Dynamics of Young Star Clusters ## 1. Introduction The evolution of the primordial binary population, mass segregation, the ejection of massive stars, and the effect of mass loss through evolving stars, are dynamical processes that shape the stellar content and structure of young clusters. A particularly interesting problem is the attempt to reconstruct the initial configuration of young clusters, such as the Orion Nebula Cluster and the Pleiades, by using the binary population as a memory agent for the dynamical history of a population. Since most stars probably form in clusters of some sort, the multiple-star properties of the stellar population in the Galactic field are expected to be significantly shaped by the dynamical processes in these. To understand the appearance of star clusters of any age, their dynamical and astrophysical evolution needs to be treated accurately and consistently. Direct $`N`$-body integration with special mathematical methods to treat accurately and efficiently multiple close stellar encounters as well as perturbed binary and higher-order systems, together with modern stellar evolution algorithms, is achieved with the Nbody6 programme (Aarseth 1999). Using a specially modified version of this code, the evolution of clusters with $`N10^4`$ stars is presented here. ## 2. Models Table 1 lists the cluster models. Each was followed for 150 Myr. $`N`$ is the total number of stars, and $`N_{\mathrm{bin}}`$ the number of primordial binaries. Birth binary proportions $`f_\mathrm{b}=1,0.6,0`$ are used with distributions of binding energies as in Taurus–Auriga or the Galactic field (Kroupa, Petr, & McCaughrean 1999, hereinafter KPM), respectively. In all cases the central number density has the value observed in the Trapezium Cluster, $`\rho _\mathrm{c}=10^{4.78}`$ stars/pc<sup>3</sup>, and the half-mass-diameter crossing time $`t_{\mathrm{cr}}=0.24`$ Myr. 
The half-mass radius and three-dimensional velocity dispersion are $`R_{0.5}`$ and $`\sigma _{3\mathrm{D}}`$, respectively; the initial density distributions are Plummer laws. All results shown here are averages of $`N_{\mathrm{run}}`$ calculations per model. In all cases the KTG93 IMF (Kroupa, Tout, & Gilmore 1993) is used for $`0.08`$–$`1.0M_{}`$. For all but the 100f10 models, a power-law IMF with $`\alpha =2.3`$ (Salpeter: $`\alpha =2.35`$) for $`m>1M_{}`$, and with $`\alpha =0.3`$ for $`0.01\le m<0.08M_{}`$, is employed. The 100f10 models have $`\alpha =2.7`$ for $`m>1M_{}`$, and $`\alpha =0.5`$ for brown dwarfs. The mass range is thus $`0.01`$–$`50M_{}`$ giving the average stellar mass $`<m>`$. Random pairing from the IMF is assumed to give the primordial mass-ratio distribution. The resulting median relaxation time, $`t_{\mathrm{rel}}`$, assumes the cluster either consists of centre-of-mass systems (short values) or only single stars. Owing to the rapidly changing number of systems the true $`t_{\mathrm{rel}}`$ lies between these two values. ## 3 Stimulated binary evolution Binary systems are ubiquitous in all known stellar populations. In particular, open clusters such as the Pleiades may contain a binary proportion of up to 70 per cent ($`f_{\mathrm{tot}}=0.70`$) (Kähler 1999). In the Galactic field, $`f_{\mathrm{tot}}\approx 0.6`$, with a Gaussian distribution of $`\mathrm{log}_{10}P`$, where $`P`$ is the binary-star period in days. In sparse nearby star-forming regions $`f_\mathrm{b}\approx 1`$, with a primordial period distribution that is consistent with rising to the longest periods, and $`f_{\mathrm{tot}}\gtrsim 0.5`$ even in the highly concentrated very young Trapezium Cluster (see KPM and references therein). Realistic star-cluster models should therefore contain $`f_\mathrm{b}>0.5`$ at birth. The evolution of binary orbital parameters through encounters with other systems is termed stimulated evolution. 
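Masses can be drawn from a segmented power-law IMF of the kind described above by first choosing a segment with probability proportional to its number content and then inverse-transform sampling within it. In the sketch below the high-mass slope (2.3) and the brown-dwarf slope (0.3) are the values quoted in the text, while the two intermediate slopes are illustrative stand-ins rather than the actual KTG93 values; the normalization keeps dN/dm continuous across the break masses.

```python
import random

# Segments: (m_lo, m_hi, alpha) for dN/dm proportional to m^-alpha.
# The intermediate slopes 1.3 and 2.2 are illustrative assumptions.
SEGMENTS = [(0.01, 0.08, 0.3), (0.08, 0.5, 1.3),
            (0.5, 1.0, 2.2), (1.0, 50.0, 2.3)]

def _segment_setup(segments):
    """Continuity coefficients and relative number weights per segment."""
    coeff, weights, c = [], [], 1.0
    for i, (lo, hi, a) in enumerate(segments):
        if i > 0:
            prev_a = segments[i - 1][2]
            c *= lo ** (a - prev_a)          # keep dN/dm continuous at lo
        coeff.append(c)
        e = 1.0 - a
        weights.append(c * (hi ** e - lo ** e) / e)
    return coeff, weights

def sample_imf(n, segments=SEGMENTS, seed=0):
    """Draw n stellar masses (in solar masses) from the segmented power law."""
    rng = random.Random(seed)
    _, weights = _segment_setup(segments)
    masses = []
    for _ in range(n):
        lo, hi, a = rng.choices(segments, weights=weights)[0]
        e = 1.0 - a
        u = rng.random()                     # inverse-transform within segment
        masses.append((lo ** e + u * (hi ** e - lo ** e)) ** (1.0 / e))
    return masses
```

The mean of a large sample gives the average stellar mass entering Table 1-style bookkeeping.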
During the initial few crossing times the weakly bound binary systems are disrupted, which can cool the cluster (KPM). This occurs for those binaries that have a binding energy, $`|e_\mathrm{b}|`$, smaller than the average kinetic energy of the cluster stars, $`\overline{e}_\mathrm{k}`$. Binaries with $`|e_\mathrm{b}|>\overline{e}_\mathrm{k}`$, on the other hand, are hard, and on average they harden further through interactions with cluster stars, thereby heating the cluster (Heggie 1975). At any time, binaries near the hard/soft boundary, with periods near the thermal period, $`PP_{\mathrm{th}}`$, are most active in the energy exchange between the cluster field and the binary population. The cluster expands as a result of binary heating and mass segregation (see below), and the hard/soft boundary, $`P_{\mathrm{th}}`$, shifts to longer periods (Fig. 1). Meanwhile, binaries with $`P>P_{\mathrm{th}}`$ continue to be disrupted while $`P_{\mathrm{th}}`$ keeps shifting to longer periods. This process ends when $`P_{\mathrm{th}}\begin{array}{c}>\hfill \\ \hfill \end{array}P_{\mathrm{cut}}`$, which is the cutoff or maximum period in the surviving period distribution. At this critical time, $`t_\mathrm{t}`$, further cluster expansion can be reduced since the population of heating sources, i.e. the binaries with $`PP_{\mathrm{th}}`$, is significantly reduced. The details depend strongly on the initial value of $`P_{\mathrm{th}}`$, since it determines the amount of binding energy stored in soft binaries, which can cool the cluster if significant enough. Stimulated evolution also leads to a depletion of the mass-ratio distribution at small values, but is not efficient enough to thermalize the eccentricity distribution, requiring binaries to form with the observed thermal eccentricity distribution (Kroupa 1995). 
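The hard/soft boundary invoked above can be made quantitative with a standard Heggie-style estimate. The following lines are our reconstruction, not equations from the text: equating the orbital binding energy with the mean stellar kinetic energy defines the thermal semi-major axis, and Kepler's third law then gives the thermal period:

```latex
% Hard/soft boundary: |e_b| = \bar{e}_k defines the thermal orbit.
|e_{\mathrm{b}}| = \frac{G m_1 m_2}{2a},\qquad
\bar{e}_{\mathrm{k}} = \tfrac{1}{2}\langle m\rangle\,\sigma_{3\mathrm{D}}^{2}
\;\Longrightarrow\;
a_{\mathrm{th}} = \frac{G\,m_1 m_2}{\langle m\rangle\,\sigma_{3\mathrm{D}}^{2}},
\qquad
P_{\mathrm{th}} = 2\pi\sqrt{\frac{a_{\mathrm{th}}^{3}}{G\,(m_1+m_2)}}.
```

As the cluster expands and $`\sigma _{3\mathrm{D}}`$ drops, $`a_{\mathrm{th}}`$ and hence $`P_{\mathrm{th}}`$ grow, which is exactly the outward shift of the hard/soft boundary described above.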
After the critical time, $`t_\mathrm{t}`$, the expanded cluster has reached a temporary state of thermal equilibrium, and further stimulated evolution will occur with a significantly reduced rate determined by the velocity dispersion in the cluster, the cross section given by the semi-major axis of the binaries, and their number density and that of single stars in the cluster. Stimulated evolution during this slow phase will usually involve partner exchanges and unstable but also long-lived hierarchical systems. The IMF is critically important for this stage, as the initial number of massive stars determines the cluster density at $`t\begin{array}{c}>\hfill \\ \hfill \end{array}5`$ Myr. Further binary depletion will occur once the cluster goes into core-collapse and the kinetic energy in the core rises. This phase of advanced cluster evolution is, however, not addressed here. Fig. 2 illustrates the evolution of the binary proportion. All systems are counted within and outside the clusters. Initially, the binary proportion decays on the crossing time-scale, but slows significantly in all cases after $`t1`$ Myr $`4t_{\mathrm{cr}}t_\mathrm{t}`$. The value of $`f_\mathrm{b}`$ is remembered by 150 Myr for the same IMF. However, for model 100f10 $`f_{\mathrm{tot}}`$ crosses over to the $`f_\mathrm{b}=0.6`$ cases (models 8f06 and 30f06) which is due to the smaller number of massive stars. Cluster expansion owing to stellar evolutionary mass loss is lessened in model 100f10, and consequently, the clusters retain a higher density. Stimulated evolution also leads to the generation of binary systems with forbidden orbits. These binaries have an eccentricity, $`e`$, that places them above or to the left of the upper envelope in $`e\mathrm{log}_{10}P`$ diagrams evident in binary populations (Kroupa 1995). Forbidden orbits are expected to circularise (i.e. eigenevolve) rapidly. 
This can lead to coalescence in some cases, but when $`e1`$ after an encounter, the two stars can collide and merge within an orbital period. Since the cross section for encounters is much larger for binaries than for single stars, this channel leading to stellar collisions will dominate throughout the cluster life-time, possibly leading to exotic very massive stars in the early phase and/or blue stragglers in older clusters. This is also discussed by Portegies-Zwart (2000) for clusters with extraordinarily high central densities and assuming stars stick once they touch. A more conservative approach is taken here by counting the number of forbidden orbits, $`O_{\mathrm{forb}}`$, present at discrete time-intervals (Fig. 3). The figure shows that larger $`f_\mathrm{b}`$ leads to larger $`O_{\mathrm{forb}}`$, and that $`O_{\mathrm{forb}}`$ increases significantly during $`t\begin{array}{c}<\hfill \\ \hfill \end{array}t_{\mathrm{cr}}`$. After this time, the generation of further forbidden orbits is reduced, and $`O_{\mathrm{forb}}`$ is about 0.3 % of the initial number of systems when $`f_\mathrm{b}=0.6`$, but about 0.7 % when $`f_\mathrm{b}=1`$. The maximum in $`O_{\mathrm{forb}}`$ for $`t\begin{array}{c}>\hfill \\ \hfill \end{array}2`$ Myr when $`N3000`$ is interesting, but remains to be investigated. ## 4. Cluster evolution Three mechanisms drive internal cluster evolution: (i) energy equipartition (two-body relaxation), (ii) binary star activity, and (iii) stellar evolution. Fig. 4 illustrates the evolution of the central number density, which is defined by the number of stars within the central radius of 0.1 pc. The central number density decreases on the relaxation time-scale. It also decreases for the clusters with $`f_\mathrm{b}=0`$. This is due to the shedding of binding energy, gained by the massive stars as they rapidly sink to the cluster centre as a result of energy equipartition, to the cluster field. 
The central average mass increases by a factor of 2 within 1 Myr, and a factor of 4 by 2 Myr, in clusters with the Salpeter IMF above $`1M_{}`$, independently of the binary proportion. The cluster (100f10) with the steeper IMF for massive stars also shows a linear increase of the central average mass, but the rate of increase is a factor of 2 slower. Returning to the central number density, the clusters with $`f_\mathrm{b}>0.5`$ expand at a faster rate than the single-star clusters, which is due to heating from the significant binary population. The higher density in model 30f10 (thick solid line) than in model 30f06 (medium solid line) for $`t\begin{array}{c}<\hfill \\ \hfill \end{array}0.6`$ Myr is significant and stems from the cooling of the cluster through disrupting wide binaries in 30f10. This effect is seen in reverse in models 8f$`x`$ (thick and medium dashed lines) because the initially smaller $`\sigma _{3\mathrm{D}}`$ makes more of the binary population effectively hard, thus leading to more heating in the case $`f_\mathrm{b}=1`$. The overall bulk evolution is illustrated in Fig. 5. The tidal radius, $`R_{\mathrm{tid}}`$, decreases as stars escape, and $`R_{0.5}`$ increases owing to cluster expansion. The clusters composed of single stars only (8f00, 30f00) expand at a slower rate because binary-heating is not effective. Only the models with $`N=800`$ stars lose approximately 50 % of their mass within about 150 Myr. Owing to the rapid onset of mass segregation, stars and systems with lower mass have, on average, larger radii than those with higher mass. In particular, the calculations show that brown dwarfs have the largest mean radius and are preferentially lost from the cluster. However, not only low-mass stars escape. 
The massive stars sink to the cluster core on a time-scale given roughly by $`\left(<m>/m_{\mathrm{mass}}\right)t_{\mathrm{rel}}`$ which is $`\begin{array}{c}<\hfill \\ \hfill \end{array}0.1t_{\mathrm{rel}}`$ for the cases studied here. The massive stars interact energetically with each other, often leading to their ejection. Many calculations show two O and/or B stars leaving a cluster in opposite directions. This occurs if at least one of the stars is a binary system that gains binding energy and propels itself and the other star out of the cluster. Fig. 6 shows the ratio $`R_{\mathrm{esc}}=N_{\mathrm{esc}}/N_{\mathrm{now}}`$, where $`N_{\mathrm{esc}}`$ is the number of stars with $`m8M_{}`$ and $`R>2R_{\mathrm{tid}}`$, and $`N_{\mathrm{now}}`$ is the number of such stars remaining in the calculation ($`R_{\mathrm{esc}}=0`$ if $`N_{\mathrm{now}}=0`$). Thus, between about 5 and 50 Myr, 10–50 % of all massive stars are found at distances larger than $`2R_{\mathrm{tid}}`$ (Fig. 5). This has significant implications for studies of the IMF in young clusters, as well as in the field. The figure also shows that there is no significant difference between the cluster models. However, more massive clusters have a larger $`R_{\mathrm{tid}}`$, so that it takes longer for stars with equal velocity to reach $`2R_{\mathrm{tid}}`$ for massive clusters. ## 5. Conclusions Binary–binary and binary–single-star interactions dominate the dynamical evolution during the first few crossing times, during which time the population of soft binaries is depleted significantly, and binaries near the hard/soft boundary contribute to cluster heating. The rapid sinking of the most massive stars to the centre of the cluster, on a time-scale which is a small fraction of the relaxation time, also leads to cluster heating and consequently cluster expansion. 
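That this sinking time-scale is indeed a small fraction of the relaxation time follows from one line of arithmetic on the quoted estimate; a minimal sketch with assumed masses (illustrative, not values from the models):

```python
def sinking_timescale_fraction(m_avg, m_massive):
    """Energy-equipartition (sinking) time-scale over the relaxation time:
    t_equ / t_rel = <m> / m_massive (Spitzer-style estimate quoted in the text)."""
    return m_avg / m_massive

# Illustrative masses in solar units (assumptions, not Table 1 values):
# <m> ~ 0.5 for the IMF used here, m_massive ~ 30 for an O star.
frac = sinking_timescale_fraction(0.5, 30.0)
print(round(frac, 3))  # → 0.017, comfortably below the quoted ~0.1*t_rel
```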
About 0.3 and 0.7 % of the initial number of systems for $`f_\mathrm{b}=0.6`$ and 1, respectively, are hardened sufficiently during an encounter such that efficient circularisation sets in. A fraction of such binaries will have their constituent stars collide. Massive stars are readily ejected from the cluster cores, and between 10 and 50 % of O and B stars are located further than $`2R_{\mathrm{tid}}`$ from their clusters between about 5 and 50 Myr. The redistribution of binding and kinetic energy leads to cluster expansion, and the central density falls on a relaxation time-scale. After about 4 Myr the massive stars that remain in the cluster start evolving significantly, and their combined mass loss leads to further cluster expansion which increases the evolution time-scale of the cluster. The number of massive stars, and thus the IMF, is critical in determining the expansion of the cluster during this phase, and by how much the primordial binary population is eroded during the first 150 Myr. The proportion of primordial binary systems that survive the first 150 Myr, in clusters that initially have a central number density such as in the Orion Nebula Cluster, is 34 and 43 % for $`f_\mathrm{b}=0.6`$ and 1, respectively, for a Salpeter IMF ($`\alpha =2.3`$) above $`1M_{}`$. If $`\alpha =2.7`$ for $`m>1M_{}`$, then $`f_{\mathrm{tot}}0.35`$ despite $`f_\mathrm{b}=1`$. Finally, the brown dwarf population lies at a larger average radius than more massive stars during cluster evolution. The results presented here are preliminary in that a larger model library is being completed now, and the formidable data reduction is being advanced. Many questions remain to be answered. In particular, the effect of gas ejection from a newly fledged cluster needs to be addressed. Work on this and other issues is in progress. 
Acknowledgements It is a pleasure to thank Sverre Aarseth and Jarrod Hurley for making Nbody6 freely available and aiding in code development specific to this project, which is supported through DFG grant KR1635. ## References
Aarseth, S. J., 1999, PASP, 111, 1333
Bonnell, I. A., Davies, M. B., 1998, MNRAS, 295, 691
Bonnell, I. A., Bate, M. R., Zinnecker, H., 1998, MNRAS, 298, 93
Heggie, D. C., 1975, MNRAS, 173, 729
Kähler, H., 1999, A&A, 346, 67
Kroupa, P., 1995, MNRAS, 277, 1507
Kroupa, P., Petr, M. G., & McCaughrean, M. J., 1999, NewA, 4, 495 (KPM)
Kroupa, P., Tout, C. A., & Gilmore, G., 1993, MNRAS, 262, 545 (KTG93)
Portegies-Zwart, S. F., 2000, these proceedings
## 6. Discussion David Schaerer: 1) Is there a contradiction between your scenario where massive stars are very rapidly ejected from clusters and the scenario of Bonnell, Davies, & Zinnecker (1998) requiring massive stars to form in dense clusters? PK: No, not at all. It supports it since the location of massive stars away from clusters can be interpreted readily as being due to dynamical ejection. Velocity measurements, however, are required to confirm this in any particular case. 2) Could you comment on the implications of your scenario on the IMF of massive stars? Also, Massey finds indications for an IMF less populated by massive stars in the field compared to clusters. How does this fit into your picture? PK: This is a very important question which I am working on. Indeed, the MF for massive stars measured in clusters cannot be taken to be the IMF (even if stellar evolution is taken account of), and the field stars are likely to have been ejected from clusters. Qualitatively this agrees with the finding by Massey that the field IMF is steep, because less massive OB stars have a higher chance of being ejected with large velocities than the massive ones. 
However, this requires extensive numerical experiments to quantify the ejection velocities and ejected mass spectrum, before it can be ascertained that a “primordial field” population of OB stars with its own IMF does not exist. Hans Zinnecker: I noticed that the time scale for mass segregation you gave for the Orion Nebula Cluster was shorter than the crossing time. How can that be? If mass segregation is so fast, does that mean that the central Trapezium stars may not have been born in the very centre? In other words: do you disagree with the recent simulations of Bonnell & Davies (1998)? PK: The timescale for mass segregation is not well defined. Spitzer estimated the timescale for energy equipartition assuming an isothermal stellar population. This estimate is $`t_{\mathrm{equ}}=\left(<m>/m_{\mathrm{mass}}\right)t_{\mathrm{rel}}`$ which can be, formally, shorter than $`t_{\mathrm{cross}}`$. However, $`t_{\mathrm{equ}}`$ is only the timescale for the onset of mass segregation, and as soon as the density distribution has changed as a result of energy equipartition, the estimate is no longer applicable. Only numerical experiments can lead to a better understanding of the mass-segregation time-scale. But little work has been done so far on this. Unfortunately Bonnell & Davies used a code that was developed by Aarseth for galaxy simulations, and they do not specify the value of the softening parameter they adopted. Therefore, it can, at this stage, not be said that their calculations correctly treat the cumulative effects of many close encounters. One important effect not considered by Bonnell & Davies is the very high primordial binary population. This aids mass segregation because the binaries have a significantly larger encounter cross section.
# Modes of counterion density-fluctuations and counterion-mediated attractions between like-charged fluid membranes ## Abstract Counterion-mediated attractions between like-charged fluid membranes are long-ranged and non-pairwise additive at high temperatures. At zero temperature, however, they are pairwise additive and decay exponentially with the membrane separation. Here we show that the nature of these attractions is determined by the dominant modes of fluctuations in the density of counterions. While the non-pairwise additive interactions arise from long-wavelength fluctuations and vanish at zero temperature, the short-ranged pairwise additive interactions arise from short-wavelength fluctuations and are stronger at low temperatures. 87.22.Bt,82.65Dp Counterion-mediated attractions play a significant role in many physical and biological phenomena. These attractions are responsible for fitting DNA inside a small biological container such as a viral capsid or a nuclear envelope and can also be crucial in promoting adhesion and fusion of biological membranes. Accordingly, significant effort has been expended in developing a practical way of investigating the nature of these counterion-mediated attractions. Besides an integral equation method, there have emerged two distinct approaches. The first approach, based on a charge fluctuation picture, suggests that these attractions are mediated by correlated fluctuations of ion clouds of counterions. This approach is consistent with our conception of counterions as fluctuating objects and thus merits significant consideration. In the second approach, based on a zero temperature picture, the appearance of attractions between like-charged molecules is attributed to the strong charge correlations that drive the systems, together with counterions, into an ionic crystal. At first glance these two approaches appear to be contradictory to one another, but there is evidence that they can, in fact, be complementary. 
Despite this, there still remain fundamental discrepancies between the two that have yet to be resolved. For the case of two planar surfaces a distance $`h`$ apart, the zero-temperature picture leads to an attractive force that decays exponentially with $`h`$. In the charge-fluctuation approach, however, the attractive force decays algebraically as $`h^3`$, as long as $`h`$ is sufficiently large. Most recently, the apparent discrepancy between these two approaches has been examined by Lau et al. Lau et al. have shown that the zero-temperature quantum fluctuations give rise to a long-ranged attraction which varies as $`h^{7/2}`$. Their theory, however, does not explain the crossover between the long-ranged interactions driven by thermal fluctuations and the exponentially-decaying interactions expected at low temperature. At finite temperatures, the long-ranged interactions supported by the charge-fluctuation approach should exist and constitute a dominant contribution to the plate-plate interactions. Furthermore, when applied to many rod systems, the charge-fluctuation approach suggests that interactions between rods are not pairwise additive, while the exponentially decaying interactions between plates as implied by the zero-temperature approaches are pairwise additive. In this Letter, we present a simple theoretical approach to counterion-mediated interactions between fluid membranes in order to bridge the gap between the two existing theories. In particular, we show that the nature of these attractions is controlled by the dominant modes of fluctuations in the density of counterions. At high temperature, the membrane interactions are dominated by long-wavelength fluctuations in the counterion density, i.e., fluctuations at large length scales. The resulting long-wavelength interactions are shown to be long-ranged and non-pairwise additive. 
As the temperature decreases, the high-temperature behavior of the membrane interactions crosses over to the behavior expected at low temperatures. At low temperatures, the membrane interactions are mainly determined by the short-wavelength charge fluctuations. We find that the resulting interactions decay exponentially with the separation between the membranes and are approximately pairwise additive. Finally, we obtain a phase diagram to depict the two distinctive regimes characterized by the corresponding dominant modes, and the crossover boundaries between the two. The system we consider here consists of negatively charged parallel membranes surrounded by neutralizing counterions of opposite charge. In the following derivation, we assume that counterions are localized in the plane of the membrane. Thus our approach presented here is relevant to the strongly charged case. More precisely, the separation between plates is much larger than the Gouy-Chapman length, a typical length scale within which counterions are confined. The main purpose of the present work is to study the crossover from the high-temperature results for the membrane attractions to the behavior expected at low temperatures. Since this crossover occurs at low temperatures or at high densities of counterions, the assumption of localized counterions is reasonable. The charge distribution on a layer $`j`$ is described by the local surface charge density, $`\widehat{\sigma }_j(𝐫_{})=e\sigma _0+emZ`$, where $`e`$ is the electronic charge, $`\sigma _0`$ is the average counterion number density, $`m=0,1,2,3,`$ etc., is the number of counterions per unit area at $`𝐫_{}(x,y)`$, and $`Z`$ is the counterion valency. In order to calculate the free energy and the corresponding charge correlations (cf. Eq. 6), we use two-dimensional Debye-Hückel (DH) theory. This approximation, however, fails to capture the strong charge-correlations at low temperatures. 
This defect in the DH theory has been corrected in an approximate way by taking into account short-ranged charge correlations over the size of ions. We thus implement DH theory with the counterion size via the two-dimensional form factor $`g(𝐫_{},𝐫_{}^{})=\mathrm{\Theta }(|𝐫_{}𝐫_{}^{}|D)/\pi D^2`$ where $`D`$ is the diameter of the counterions. We find that the charge-fluctuation contribution to the free energy per area is $$\frac{\mathrm{\Delta }F_N}{k_BT}=\frac{1}{2}\frac{d𝐤_{}}{(2\pi )^2}\left\{\mathrm{log}\left[detQ(𝐤_{})\right]N\frac{g(𝐤_{})}{\lambda k_{}}\right\},$$ (1) where $`\lambda ^1=2\pi Z\mathrm{}_B\sigma _0`$ is the inverse Gouy-Chapman length and $`\mathrm{}_B=e^2/ϵk_BT`$ is the Bjerrum length, i.e., the length scale at which the electrostatic energy between two charges is comparable to the thermal energy. The matrix $`Q(𝐤_{})`$ is defined by the matrix elements $$Q_{ij}(𝐤_{})=\delta _{ij}+\frac{g_{ij}(𝐤_{})}{\lambda k_{}}\mathrm{e}^{k_{}h_{ij}}$$ (2) where $`g_{ij}(𝐤_{})`$ is $`g(𝐤_{})\frac{2J_1(k_{}D)}{k_{}D}`$ if $`i=j`$ and 1 otherwise, and $`J_1(x)`$ is the first-order Bessel function of the first kind. Finally, $`h_{ij}`$ is the separation between plates $`i`$ and $`j`$. First note that the free energy of an $`N`$-plate system is not simply a pairwise sum of the corresponding two-plate results over all pairs of plates. Thus the pairwise additivity is not always satisfied as will be detailed later. For the $`N=2`$ case, our result in Eq. 1 reduces to the previous two-plate result \[see Eq. 3 of Ref. \], if $`D`$ is set to zero. When $`D=0`$, the free energy has a single minimum at a nonzero value $`k_{}=k_{}^<1\AA ^1`$ for all values of $`\lambda `$. The dominance of the long-wavelength charge fluctuations is responsible for the breakdown of the pairwise additivity of electrostatic interactions between macroions. 
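For two plates of point counterions (D = 0, so g = 1), the matrix in Eq. 2 is 2×2 and its determinant has the closed form det Q = (1 + 1/λk)² − exp(−2kh)/(λk)². A minimal numeric sketch, with λ and h in matching arbitrary units and the values purely illustrative:

```python
import math

def det_Q_two_plates(k, lam, h):
    """det Q(k) for N = 2 plates of point counterions (D = 0, so g = 1).

    From Eq. 2: Q_ii = 1 + 1/(lam*k), Q_ij = exp(-k*h)/(lam*k) for i != j,
    hence det Q = (1 + 1/(lam*k))**2 - exp(-2*k*h)/(lam*k)**2.
    """
    diag = 1.0 + 1.0 / (lam * k)
    off = math.exp(-k * h) / (lam * k)
    return diag * diag - off * off

def Q_of_k(k, lam, h):
    """k * log[det Q(k)], the quantity called Q(k) in the text (cf. Fig. 1)."""
    return k * math.log(det_Q_two_plates(k, lam, h))

# Illustrative values (lam = Gouy-Chapman length and h in the same units):
k, lam, h = 0.1, 1.0, 5.0
# The inter-plate coupling always lowers det Q below the uncoupled value,
# which is the origin of the fluctuation attraction:
assert det_Q_two_plates(k, lam, h) < (1.0 + 1.0 / (lam * k)) ** 2
print(round(det_Q_two_plates(k, lam, h), 3))  # → 84.212
```

Restoring a finite counterion diameter only requires replacing the diagonal 1 by the form factor 2J1(kD)/(kD), which is what produces the second, short-wavelength minimum discussed below.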
It has been shown that the pairwise additivity for the case of charged rods breaks down if the expansion of the corresponding interaction free energy in powers of $`\mathrm{}_B`$ diverges. For rod systems, the free energy is dominated by the zero-$`k`$ mode and thus this expansion converges only when the charge fluctuation along the rods is vanishingly small. For the two-dimensional case, the convergence of the $`\mathrm{}_B`$-expansion can be tested by estimating $`\delta (k_{}^<)^1\lambda ^1`$; the $`\mathrm{}_B`$-expansion is convergent if $`\delta <1`$. We, however, find that $`\delta `$ is smaller for smaller values of $`\lambda ^1`$ and is comparable to unity if $`\lambda ^1<10^3`$. As in the rodlike systems, the pairwise additivity is violated unless the inverse of the Gouy-Chapman length is very small. Thus the pairwise additivity can easily be violated even in two-dimensional systems. The above analysis, however, cannot be pursued when the temperature is low or when the density of ions is high. In this case, it is crucial to incorporate $`D0`$; the ion size constitutes an important length scale at low temperatures or at high density of ions. Having understood that the dominance of long-wavelength charge fluctuations for the case $`D=0`$ leads to the breakdown of pairwise additivity, we now examine the low-$`T`$ behavior of the free energy (thus with $`D`$ set to a finite value) in Fourier space. Let us first consider the case of two plates separated by a distance $`h`$. For convenience, we consider the following quantity: $`𝒬(𝐤_{})k_{}\mathrm{log}[detQ(𝐤_{})]`$, i.e., the first term in $`\{\mathrm{}\}`$ of Eq. 1 multiplied by $`k_{}`$. In Fig. 1, we plot this quantity $`𝒬(𝐤_{})`$ as a function of $`k_{}`$ for several different values of the Gouy-Chapman length $`\lambda `$. We have chosen $`D=5\AA `$ and $`h=5\AA `$. When $`\lambda ^1=1\AA ^1`$, $`𝒬(𝐤_{})`$ has a single minimum at $`k_{}=k_{}^<1`$. 
This implies that the free energy is dominated by long-wavelength charge fluctuations as in the previous case of point charges. We find that the function $`𝒬(𝐤_{})`$ has two minima at $`k_{}=k_{}^<1\AA ^1`$ and at $`k_{}=k_{}^>=𝒪(1\AA ^1)`$, respectively, for $`\lambda ^1>4.2\AA ^1`$ (not shown in the figure). As $`\lambda ^1`$ changes, the minimum at $`k_{}=k_{}^>`$ varies monotonically and is deeper for larger $`\lambda ^1`$. The minimum at $`k_{}=k_{}^<`$, however, is roughly independent of $`\lambda `$. When $`\lambda ^15.9\AA ^1`$, the two minima are comparable in magnitude. When $`\lambda ^1=7\AA ^1`$, the function $`𝒬(𝐤_{})`$ is overwhelmingly dominated by the second minimum at $`k_{}=k_{}^>`$, as shown in the figure. At $`\lambda ^1=\lambda _X^1=7.2\AA ^1`$, the second minimum diverges. This is suggestive of the onset of crystallization of counterions as will be detailed later. Even though the region $`\lambda ^1\lambda _X^1`$ is certainly beyond the validity of our theory, it is nevertheless interesting to see what our theory implies for that region. Notably, the free energy curve corresponding to $`\lambda ^1=10\AA ^1`$ has two local minima at large $`k_{}=𝒪(1\AA ^1)`$. The existence of multiple minima at large $`k_{}`$ assures that the system is in a solidlike phase. Our results in Fig. 1 imply that there are two distinct contributions to the free energy: long-wavelength fluctuations and short-wavelength fluctuations in the density of counterions. They also imply that the short-wavelength fluctuation contribution to the free energy has a much narrower peak if $`\lambda ^15.9\AA ^1`$. This enables us to separate the short-wavelength contribution from the long-wavelength contribution. 
By noting that $`\mathrm{e}^{k_{}h}`$ does not change appreciably over the region inside the peak at $`k_{}=k_{}^>`$, we find, up to $`h`$-independent terms $`\mathrm{\Delta }F_2{\displaystyle \frac{k_BT}{16\pi }}{\displaystyle \frac{\zeta (3)}{h^2}}`$ (3) $`{\displaystyle \frac{k_BT}{8\pi ^2}}{\displaystyle \frac{\mathrm{e}^{2k_{}^>h}}{\lambda ^2}}{\displaystyle _{k_{}k_{}^>}}{\displaystyle \frac{k_{}^1dk_{}}{[1+k_{}^1\lambda ^1g(k_{})]^2}},`$ (4) where $`\zeta (x)`$ is the zeta function (thus $`\zeta (3)/16\pi 0.024`$). The first term denoted by $`F_{LW}`$ is the free energy calculated with $`D`$ set to zero and is the long-wavelength free energy. Our previous analysis on $`𝒬(𝐤_{})`$ implies that the short-wavelength free energy denoted by $`F_{SW}`$, i.e., the second term in Eq. 3, is dominant over the long-wavelength free energy $`F_{LW}`$ at low temperatures ($`\lambda ^15.9\AA ^1`$) and decays exponentially in space. The exponentially decaying interaction between two plates is consistent with the zero-$`T`$ analysis in Refs. At high temperatures corresponding to $`\lambda ^15.9\AA ^1`$, however, the free energy is mainly determined by $`F_{LW}`$. In this case, the fluctuation contribution to the pressure between the two plates scales as $`h^3`$. Also note that this long-wavelength contribution vanishes at $`T=0`$. This follows from the fact that $`𝒬(k_{}^<)`$ is roughly independent of $`\lambda `$. To estimate the temperature dependence of $`F_{SW}`$, note that the prefactor of this term varies as $`T^1`$. The $`k_{}`$-integral of this term depends on the depth and width of the second minimum of $`𝒬(k_{})`$ at $`k_{}=k_{}^>`$. While the width is roughly independent of $`\lambda `$ for given $`h`$, the minimum becomes deeper with increasing $`\lambda ^1`$ (or decreasing $`T`$). This proves that $`F_{SW}`$ is more negative at low $`T`$ than at high $`T`$, as opposed to $`F_{LW}`$. 
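The numerical prefactor quoted after Eq. 3 and the resulting inverse-cube pressure scaling of the long-wavelength term are easy to verify; a quick sketch (sign convention here: negative pressure means attraction):

```python
import math

# zeta(3) by direct summation; the truncation error is ~1/(2N^2), negligible here.
zeta3 = sum(1.0 / n**3 for n in range(1, 200000))
print(round(zeta3 / (16.0 * math.pi), 3))  # → 0.024, the prefactor quoted after Eq. 3

def F_LW(h):
    """Long-wavelength free energy per area, in units of k_B*T."""
    return -zeta3 / (16.0 * math.pi * h * h)

def P_LW(h, dh=1e-6):
    """Pressure -dF_LW/dh by central differences (negative = attraction)."""
    return -(F_LW(h + dh) - F_LW(h - dh)) / (2.0 * dh)

# Halving the separation increases the attraction eightfold, i.e. P ~ h**-3:
print(round(P_LW(2.0) / P_LW(4.0), 2))  # → 8.0
```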
For a given value of $`h`$, there exists a special value $`\lambda _{cr}`$ at which the crossover between the two distinctive behaviors of the plate interaction ($`F_{LW}`$ and $`F_{SW}`$) takes place. By requiring $`\frac{}{h}\left(F_{LW}F_{SW}\right)=0`$, we have the following transcendental equation for $`\lambda _{cr}`$: $$\left(\frac{\pi }{2}\right)\frac{\zeta (3)}{k_{}^>}\frac{\mathrm{e}^{2k_{}^>h}}{h^3}=\frac{1}{\lambda _{cr}^2}_{k_{}k_{}^>}\frac{k_{}^1dk_{}}{[1+k_{}^1\lambda _{cr}^1g(k_{})]^2}.$$ (5) To solve the transcendental equation, we have chosen $`D=5\AA `$. Fig. 2 describes distinct regimes characterized by the corresponding dominant modes of fluctuations, and the crossover boundaries between them; the regimes where the long-wavelength (LW) and short-wavelength (SW) fluctuations dominate are denoted by LW and SW, respectively. When $`\lambda ^1`$ is smaller than $`2.7\AA ^1`$, the plate interaction is solely determined by the LW fluctuations for the whole range of $`h`$. At $`\lambda ^12.7\AA ^1`$, marked by the vertical dotted line on the left, the SW fluctuations start to contribute to the plate interaction. When $`2.7\AA ^1\lambda ^17.6\AA ^1`$, however, the plate interaction is determined by the competition between the two; the crossover from the LW to SW regime takes place for larger values of $`h`$ at low temperatures (corresponding to larger $`\lambda ^1`$). At $`\lambda ^1=\lambda _X^17.6\AA ^1`$, marked by the vertical dotted line on the right, the crossover occurs only when $`h\mathrm{}`$. Beyond $`\lambda _X^1`$, the SW fluctuations solely determine the plate interaction. The appearance of two distinctive competing interactions, i.e., $`F_{LW}`$ and $`F_{SW}`$, can also be understood in terms of in-plane charge correlations for a single plate: $`G_1(𝐫_{},𝐫_{}^{})=\widehat{\sigma }(𝐫_{})\widehat{\sigma }(𝐫_{}^{})\widehat{\sigma }(𝐫_{})\widehat{\sigma }(𝐫_{}^{})`$. 
The long-wavelength contribution to the charge correlation was shown to scale as $`G_{LW}(𝐫_{},𝐫_{}^{})k_BT/|𝐫_{}𝐫_{}^{}|^3`$. Note that this correlation vanishes as $`T0`$. Our result in Eq. 1 implies that the short-wavelength charge correlation function is given by $`G_{SW}(𝐫_{},𝐫_{}^{})g(𝐫_{},𝐫_{}^{})`$ (6) $`{\displaystyle \frac{\sigma _0e^2}{2\pi }}{\displaystyle _{k_{}0}}{\displaystyle \frac{k_{}dk_{}g(𝐤_{})}{1+\frac{\lambda k_{}}{g(𝐤_{})}}}J_0(k_{}|𝐫_{}𝐫_{}^{}|),`$ (7) where $`J_0(x)`$ is the zeroth-order Bessel function of the first kind. Unlike the long-wavelength correlation, the behavior of the short-wavelength correlation is determined by the nature of the poles of $`\left[1+\lambda k_{}/g(𝐤_{})\right]^1`$. We find that $`G_{SW}`$ shows an oscillatory decay, and that the amplitude of $`G_{SW}`$ varies as $`T^1\left[1+(k_{}^>\lambda )^1g(k_{}^>)\right]^1`$ and is larger at low temperatures. For the two-plate case, this oscillatory charge correlation essentially gives rise to the SW interaction. At high temperatures, we can consider each plate to consist of large domains, i.e., counterion-rich and counterion-poor domains. The size of the domains is on the order of $`(k_{}^<)^11\AA `$, and thus these domains can form huge dipoles, resulting in a long-ranged attraction. The long-wavelength fluctuations couple over many plates, leading to breakdown of pairwise additivity in the $`N`$-plate case. We also find that the power-law behavior of the in-plane charge correlation crosses over to an exponentially decaying form as $`N\mathrm{}`$. This is clearly suggestive of the breakdown of pairwise additivity. As the temperature decreases, however, the plate interaction driven by long-wavelength fluctuations crosses over to a distinct behaviour controlled by non-monotonically decaying charge correlations. At low temperatures, each domain becomes overall charge neutral and thus the distinction between domains is meaningless. 
In this case, the local correlation between a counterion and a backbone charge in its neighborhood dominates the free energy. There is thus strong cancellation of repulsions (between like charges) with attractions (between opposite charges). This results in an exponentially decaying, short-ranged attraction between the plates. In conclusion, we have presented a systematic approach to study the nature of counterion-mediated attractions between fluid membranes. We have shown that the nature of these attractions is determined by the dominant modes of fluctuations in the density of counterions. At high temperatures, fluctuations at large length scales determine the membrane interactions; the resulting interactions are long-ranged and not pairwise additive. Charge densities of biomembranes range from $`0.03`$ to $`0.24/\mathrm{nm}^2`$, corresponding to LW regimes at room temperature. It is thus clear that many-body, non-pairwise additive interactions operate in biomembrane systems at room temperature. At low temperatures, however, the membrane interactions are dominated by SW charge fluctuations and are exponentially decaying with the membrane spacing. In this case, only the nearest pairs of plates couple strongly with each other. Surprisingly, this implies that the pairwise additivity is restored at very low temperatures (the non-pairwise additive interaction becomes smaller, and eventually vanishes, as $`T0`$). The approach presented here allows one to systematically study the crossover, as the temperature decreases, from the high-temperature, long-ranged attractions to the behaviors expected at low temperatures. We have benefited from illuminating discussions with A.W.C. Lau and A.J. Liu. We are also grateful to W.M. Gelbart, H. Schiessel, and R. Menes for scientific stimulation. We thank M. Howard for reading the manuscript carefully, and D. Boal, M. Wortis, and M. Plischke for providing an excellent research environment. 
This work was in part supported by the NSERC of Canada.
# Commensurate and Incommensurate Structure of the Neutron Cross Section in LaSrCu and YBaCuO ## Abstract We study the evolution of the $`d`$-wave neutron cross-section with variable frequency $`\omega `$ and fixed $`T`$ (below and above $`T_c`$) in two different cuprate families. The evolution from incommensurate to commensurate to incommensurate peaks is rather generic within an RPA-like scheme. This behavior seems to be in reasonable accord with experiments, and may help distinguish between this and the “stripe” scenario. The goal of the present paper is to address the frequency evolution of the neutron cross section, over the entire range of $`\omega ,𝐪`$, using a scheme which we have previously applied to the normal and $`d`$-wave superconducting states. This work is viewed as significant because it leads to a fairly generic frequency evolution, which seems to be observed experimentally in two cuprate families. These calculations, which have no adjustable parameters (besides those which were used to fit the normal state), can help establish whether the details of the fermiology plus $`d`$-wave pairing can account for the observed incommensurabilities and their evolution with frequency, or whether (by default) some new phenomenon such as stripe or other exotic phases may be required. Our starting point is a three band, large $`U`$ calculation which yields a dynamical susceptibility $`\chi (q,\omega )=\chi ^o(q,\omega )/[1+J(q)\chi ^o(q,\omega )]`$ where the Lindhard function $`\chi ^o`$ is appropriate to the ($`d`$-wave) superconducting state and the underlying Fermi surface. Here the residual exchange $`J(q)=J_o[\mathrm{cos}q_x+\mathrm{cos}q_y]`$ arises from Cu-Cu interactions via the mediating oxygen band. While the YBaCuO system is a two layer material, our past experience has shown that most of the peak structures associated with the neutron cross section are captured by an effective one layer band calculation, which we will investigate here. 
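The RPA form quoted above is simple to evaluate once a Lindhard function is supplied. The sketch below treats $`\chi ^o`$ as a given number (in practice it is computed from the band structure and the $`d`$-wave gap), and the value of $`J_o`$ is an assumed illustration, not the fitted value used in the paper.

```python
import numpy as np

def J_q(qx, qy, J0=1.0):
    """Residual Cu-Cu exchange J(q) = J0*(cos qx + cos qy); note J(pi, pi) = -2*J0."""
    return J0 * (np.cos(qx) + np.cos(qy))

def chi_rpa(chi0, qx, qy, J0=1.0):
    """RPA-dressed susceptibility chi(q, w) = chi0(q, w) / (1 + J(q) * chi0(q, w))."""
    return chi0 / (1.0 + J_q(qx, qy, J0) * chi0)
```

Near $`(\pi ,\pi )`$ the denominator $`12J_o\chi ^o`$ is suppressed, which is what produces the commensurate resonance enhancement; the incommensurate structure comes from the momentum dependence of $`\chi ^o`$ itself.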
For definiteness, we fix the temperature at $`4`$ K and vary frequency in increments of a few meV. We take the electronic excitation gap to be described by an ideal $`d`$-wave, $`\mathrm{\Delta }(q)=\mathrm{\Delta }(\mathrm{cos}q_x+\mathrm{cos}q_y)`$, where at $`T=4`$ K, $`\mathrm{\Delta }`$ is taken to be $`17`$ meV in YBaCuO<sub>6.6</sub> and $`8`$ meV in optimally doped LaSrCuO. In Figure 1a we show the frequency evolution of a YBaCuO<sub>6.6</sub> sample. A node-to-node peak appears at low frequencies but the magnitude is small compared to that of all of the other features shown. Incommensurate peaks at $`(\pi ,\pi \pm \delta ),(\pi \pm \delta ,\pi )`$ are first seen at around $`\omega \mathrm{\Delta }`$, although their magnitude is somewhat smaller than found experimentally. As frequency increases there is a clear trend: the incommensurability decreases continuously; this decrease is most apparent in the immediate vicinity of the resonance frequency. At $`27`$ meV the resonant, commensurate peak is now well established. As can be seen, there is some fine structure near $`(\pi ,\pi )`$ which is a remnant of the incommensurate peaks. And there is a pronounced evolution in the peak shape and height above resonance in the underdoped YBaCuO sample. The $`(\pi ,\pi )`$ regime is a flat-topped, possibly weakly incommensurate peak, just after resonance. It then broadens and remains structureless (as shown by our $`34`$ meV plot) between $`34`$ and $`48`$ meV. Finally, above $`48`$ meV, clear incommensurate structure appears. Figure 1b shows the frequency evolution for the LaSrCuO family, here shown at optimal doping. Node-to-node structures are seen at low frequencies with very small amplitude. A four-peaked structure is seen at frequencies just around the gap frequency, $`8`$ meV. These peaks are sharper, but in roughly the same position as their counterparts in the normal state. 
These peaks persist (with slightly growing amplitude) until around $`12`$ meV, at which point the incommensurability seems to decrease while their overall magnitude increases. In between $`14`$ meV and $`2\mathrm{\Delta }=16`$ meV is an interesting (evidently “resonant”) structure, but there have been, thus far, no reports of this resonance. However, it should be stressed that if there is a commensurate feature in LaSrCuO, it should be seen only over a very narrow ( $`2`$ meV) frequency window. Finally, just beyond $`2\mathrm{\Delta }`$, as shown by the last panel at $`\omega =19`$ meV, the cross section becomes very similar in shape to its normal state counterpart, although the magnitude is larger. This evolution from incommensurate to commensurate peaks and then back to incommensurate behavior with increasing $`\omega `$ can, thus, be seen to be the case for both cuprates. There are claims for these effects in recent experiments on YBaCuO and possibly in BSCCO. What appears to be different between our observations and these particular experiments is that we do not find two distinct energy scales: $`E_c`$, where the incommensurate peaks merge, and $`E_r`$, where the resonance occurs. Our calculations for the reduced YBaCuO case suggest that the incommensurate peaks probably never merge, but rather that at resonance the $`(\pi ,\pi )`$ feature fills in the gap between the two incommensurate features. In this sense $`E_c`$ and $`E_r`$ are the same frequency, although it should be stressed that here we have incorporated no resolution limiting effects. This work was supported by the NSF under awards No. DMR-91-20000 (through STCS) and No. DMR-9808595 (through MRSEC).
# Damped Ly𝛼 Systems in Semi-Analytic Models: Sensitivity to dynamics, disk properties, and cosmology. ## 1. Introduction Previously (Maller et al. 1999, hereafter MSPP) we have shown that it is possible to account for the kinematic properties of damped Lyman alpha systems (DLAS) as measured by Prochaska & Wolfe (1997, 1998, hereafter PW97 and PW98) in the context of semi-analytic models (SAMs) (Somerville & Primack 1999, hereafter SP99). In these models, hierarchical structure formation is approximated by constructing a merger tree for each dark matter halo. A natural consequence is that every virialized halo may contain not only a central galaxy, but also a number of satellite galaxies as determined by its merging history. Thus the kinematics of the DLAS arise from the combined effects of the internal rotation of gas disks and the motions between gas disks within a common halo. Here we investigate the sensitivity of this model to some of the assumptions made in MSPP, including the modeling of satellite dynamics, the scale height of the gas, and the cosmology. ## 2. Satellite Dynamics In the SAMs, a merger tree represents a halo’s growth through the mergers of smaller halos (see Somerville 1997). When halos merge, the central galaxy of the largest progenitor halo becomes the central galaxy of the new halo, and all other galaxies become satellites. These satellites then fall in towards the central galaxy due to dynamical friction, and eventually merge with it (see SP99). Because the treatment of the dynamics of the satellites is necessarily simplified in the usual semi-analytic spirit, and since the kinematics of the DLAS arise from both the rotation of disks and the motions of satellite galaxies in the common halo, it is important to test whether our results are sensitive to the details of our modeling. 
In the models presented in MSPP, all satellites were assumed to start falling in from the virial radius of the newly formed halo, and all satellites were assumed to be on circular orbits. These assumptions will tend to maximize the dynamical friction timescale of the satellites, and correspond to the assumptions made in some earlier versions of the SAMs (e.g. Somerville 1997). In the models presented here, we have modified these assumptions in two ways. We initially place the satellites at one half of the virial radius, and assign the orbits from a random distribution as observed in N-body simulations by Navarro, Frenk, & White (1995). They found that the “circularity” parameter $`ϵ`$ has a flat distribution, where $`ϵJ/J_c`$ is defined as the ratio of the angular momentum of the satellite to that of a circular orbit with the same energy. The dynamical friction time scale is then scaled by a factor $`ϵ^{0.78}`$ (Lacey & Cole 1993). This causes some of the satellites to fall in faster than in our previous modeling. In addition, we have changed from the SCDM cosmology used in MSPP to a more fashionable $`\mathrm{\Omega }_0=0.4`$, $`\mathrm{\Omega }_\mathrm{\Lambda }=0.6`$ cosmology. These ingredients are compatible with the models that were shown to produce good agreement with both local galaxy observations (SP99) and the high redshift Lyman-break galaxies (Somerville, Primack, & Faber 1999). We find that the combined effect of these changes has a negligible effect on our results. Although we do see fewer satellites within a halo of a given mass, the number of satellites in the inner part of the halo is similar. These are the satellites most likely to give rise to multiple hits, which as we argued in MSPP is the crucial factor in matching the observed kinematics of the DLAS. As in MSPP, we still find that the gaseous disks must have very large radial extents in order to match the observations. ## 3. 
Disk Thickness Another important feature of the models is the assumed distribution of the gas within the disks. In MSPP, we investigated several radial profiles and obtained the best results with an assumed gas distribution of the form $$n(R,z)=\frac{N_t}{2h_z}\frac{R_t}{R}\mathrm{exp}\left(-\frac{|z|}{h_z}\right)(R<R_t)$$ (1) where the truncation density $`N_t`$ is an adjustable parameter. The vertical scale height of the gas disks is another uncertainty. In MSPP we assumed it to be one tenth of the stellar disk scale radius, i.e. $`h_z=0.1R_{}`$ where $`R_{}`$ is the stellar disk scale length as given by the SAMs (see SP99). Because the gas disks in the successful models tend to have a large radial extent, much larger than the stellar disk, this leads to very thin gaseous disks. When the scale height is increased to half the stellar scale radius ($`h_z=0.5R_{}`$), we are able to use more physically plausible values for the truncation density $`N_t`$ (see Table 1), and the gas disks are now typically contained within the truncation radius of the dark matter sub-halos surrounding the satellites. This model therefore seems to be more physical, yet still produces good agreement with all four of the diagnostic statistics of PW97 (see Table 1). Note that PW98 have shown that the effect of including a warp in the gas disk is the same as increasing its thickness, so our exploration of thicker disks can also be thought of as including warps in the disk. Figure 2 shows the distribution of thicknesses and inclinations for the disks that contribute to DLAS in the two models. We find that the disks that produce damped systems in model B are more likely to have a face-on geometry. With thinner disks, more of the cross section comes from an inclined geometry as $`nh_x^1`$. The total cross section in the two models is roughly the same; in model A, more of it comes from extended highly inclined disks, while in model B, denser, face-on disks are more important. 
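Equation (1) integrates trivially along a face-on sight line: the exponential contributes a factor of $`2h_z`$, so the column is $`N_tR_t/R`$, independent of the scale height (which is why thickening the disk changes the inclined, rather than the face-on, cross section). A sketch with illustrative parameter values, not the fitted values of the paper:

```python
import math
import numpy as np

def gas_density(R, z, Nt=1.0, Rt=12.0, hz=1.2):
    """Eq. (1): n(R, z) = Nt/(2 hz) * (Rt/R) * exp(-|z|/hz) for R < Rt, else 0."""
    if R >= Rt:
        return 0.0
    return Nt / (2.0 * hz) * (Rt / R) * math.exp(-abs(z) / hz)

def face_on_column(R, Nt=1.0, Rt=12.0):
    """Vertical column through the disk: integral of n over z gives Nt * Rt / R."""
    return 0.0 if R >= Rt else Nt * Rt / R
```

Numerically integrating `gas_density` over `z` reproduces `face_on_column`, confirming that `hz` drops out of the face-on column.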
Thus the need for gas disks with large radial extent can be reduced by using thicker disks, though not by a large amount. Increasing the thickness by a factor of five only reduces the radial truncation value by $`30\%`$. The gaseous disks in model B are still quite large compared to the stellar component. ## 4. Conclusions In MSPP we argued that the observed kinematics of DLAS can be reproduced in hierarchical models if a significant fraction of the lines of sight pass through multiple disks orbiting within a common dark matter halo. However, we found that in order to obtain a high enough cross section for multiple hits, we had to assume very large radial extents for the gaseous disks in our models. We have tested the robustness of these conclusions by modifying several of the uncertain ingredients of our models. We find that modifying the dynamical friction timescale of the satellites by assuming a different initial radius or orbit has a small effect on our results. Similarly, our results do not seem to be sensitive to the assumed cosmology. Increasing the vertical scale height of the disks has a larger effect and leads to models that are more physically plausible and still produce good agreement with the diagnostic statistics of PW. Our results suggest that in order to reconcile the observed kinematics of DLAS with hierarchical theories of structure formation, gaseous disks at high redshift must be large in radial extent and thickened or warped. The physical cause of these properties remains obscure; however, we speculate that tidal encounters or outflows could be responsible.
# Distortion of leaves in product foliations ## 1. Introduction It is a basic problem, given a foliation of a Riemannian manifold, to compare the extrinsic and the intrinsic geometry of leaves of the foliation. For leaves which are compact, this amounts to studying the distortion of $`\pi _1`$ of the leaf in $`\pi _1`$ of the ambient manifold; much is known about this problem. For a partial survey, see . For taut codimension one foliations of $`3`$–manifolds, one knows by a result of Sullivan in that one can choose a metric on the ambient manifold such that the leaves are minimal surfaces. One could think of this as saying that taut foliations “measure area well”. However, leaves of taut foliations are far from being quasi–isometrically embedded, in general. For instance, a theorem of Fenley in states that for an $``$–covered foliation $``$ of a hyperbolic $`3`$–manifold $`M`$, leaves of the pulled–back foliation $`\stackrel{~}{}`$ of $`\stackrel{~}{M}`$ limit to the entire sphere at infinity of $`\stackrel{~}{M}=^3`$. A foliation is $``$–covered if the pulled–back foliation $`\stackrel{~}{}`$ is topologically the product foliation of $`^3`$ by horizontal $`^2`$’s. Fenley’s proof is attractive but somewhat lengthy, and therefore we think it is useful to give a simple proof of this fact here: ###### Theorem 1.1. For $``$ an $``$–covered foliation of $`M`$, a finite volume hyperbolic $`3`$–manifold, every leaf $`\lambda `$ of $`\stackrel{~}{}`$ limits to the entire sphere at infinity $`S_{\mathrm{}}^2`$ of $`\stackrel{~}{M}=^3`$. Proof: For a leaf $`\lambda `$ of $`\stackrel{~}{}`$, let $`\lambda _{\mathrm{}}=\overline{\lambda }\lambda S_{\mathrm{}}^2`$ be the set of points in $`S_{\mathrm{}}^2`$ in the closure of $`\lambda `$. We claim that $`\lambda _{\mathrm{}}=S_{\mathrm{}}^2`$ for each $`\lambda `$. Suppose some $`\lambda _{\mathrm{}}`$ omits some point $`pS_{\mathrm{}}^2`$, and therefore an open disk $`U`$ containing $`p`$. 
Let $`U^{}`$ and $`U^{\prime \prime }`$ be two open disks in $`S_{\mathrm{}}^2`$ whose closures are disjoint and contained in $`U`$. Since $`M`$ has finite volume, there are elements $`\alpha ,\beta \pi _1(M)`$ such that $`\alpha (S_{\mathrm{}}^2U)U^{}`$ and $`\beta (S_{\mathrm{}}^2U)U^{\prime \prime }`$. Let $`\lambda ^{}=\alpha (\lambda )`$ and $`\lambda ^{\prime \prime }=\beta (\lambda )`$. Since the leaf space of $`\stackrel{~}{}`$ is $``$, after relabeling the leaves if necessary, we can assume $`\lambda `$ separates $`\lambda ^{}`$ from $`\lambda ^{\prime \prime }`$. However, there is clearly an arc $`\gamma `$ in $`S_{\mathrm{}}^2`$ joining $`\lambda _{\mathrm{}}^{}`$ to $`\lambda _{\mathrm{}}^{\prime \prime }`$ which avoids $`\lambda _{\mathrm{}}`$. $`\gamma `$ is a Hausdorff limit (in $`^3S_{\mathrm{}}^2`$) of arcs $`\gamma _i^3`$ running between $`\lambda ^{}`$ and $`\lambda ^{\prime \prime }`$. Each $`\gamma _i`$ must intersect $`\lambda `$ in some point $`q_i`$, and by extracting a subsequence, we find $`q_iq\gamma `$. But by construction, $`q\lambda _{\mathrm{}}`$, giving us a contradiction. If we want to study the distortion of leaves in taut foliations, then $``$–covered foliations are the most delicate case. For, if $``$ is a taut foliation which is not $``$–covered, then the leaf space of $`\stackrel{~}{}`$ is non–Hausdorff. That is, there is a sequence of pairs of points $`p_i,q_i\stackrel{~}{M}`$ with $`p_ip`$ and $`q_iq`$ such that $`p_i,q_i\lambda _i`$ but $`p`$ and $`q`$ lie on different leaves of $`\stackrel{~}{}`$. It follows that the distance between $`p_i`$ and $`q_i`$ as measured in $``$ is going to infinity; that is, $`d_{\lambda _i}(p_i,q_i)\mathrm{}`$. On the other hand, the distance between $`p_i`$ and $`q_i`$ as measured in $`\stackrel{~}{M}`$ is bounded: $`d_{\stackrel{~}{M}}(p_i,q_i)d_{\stackrel{~}{M}}(p,q)`$. By contrast, leaves of $``$–covered foliations are never infinitely distorted. 
In fact in we show the following: ###### Theorem 1.2. Let $``$ be an $``$ covered foliation of an atoroidal $`3`$–manifold $`M^3`$. Then leaves $`\lambda `$ of $`\stackrel{~}{}`$ are quasi–isometrically embedded in their $`t`$–neighborhoods, for any $`t`$. That is, there is a uniform $`K_t,ϵ_t`$ such that for any leaf $`\lambda `$ of $`\stackrel{~}{}`$, the embedding $`\lambda N_t(\lambda )`$ is a $`(K_t,ϵ_t)`$ quasi–isometry. A heuristic scheme to measure the distortion of a leaf $`\lambda `$ runs as follows. We assume that $``$ is co–oriented, so that in the universal cover there is a well–defined notion of the space “above” and the space “below” a given leaf. Since leaves are minimal surfaces, the mean curvature is zero, so there will be well–defined approximate directions at every point in which the leaf “bends upwards” and approximate directions in which it “bends downwards”. For an arc $`\alpha `$ between points $`p,q`$ of a leaf $`\lambda `$ which is roughly parallel to a direction of positive extrinsic curvature, the geodesic $`\alpha ^+`$ in $`\stackrel{~}{M}`$ running between $`p`$ and $`q`$ will lie mostly above $`\lambda `$. We can cap off the circle $`\alpha \alpha ^+`$ with a disk $`D_\alpha `$ of minimal area, and consider its projection to $`M`$. The intersection of $`D_\alpha `$ with $`\stackrel{~}{}`$ gives a foliation of $`D_\alpha `$ by arcs parallel to the principal directions of positive curvature in the leaves of $`\stackrel{~}{}`$. Thus the projection of $`D_\alpha `$ to $`M`$ should be close to an embedding, since its self–intersections cannot have a large dihedral angle. If we consider longer and longer leafwise geodesics $`\alpha `$, the geometric limit of the disks $`D_\alpha `$ in $`M`$ should be a geodesic lamination $`\mathrm{\Lambda }^+`$ in $`M`$ which describes the “eigendirections” of positive curvature of $``$. 
If we consider the principal directions of negative curvature, we should get a complementary lamination $`\mathrm{\Lambda }^{}`$, transverse to $``$ and to $`\mathrm{\Lambda }^+`$, which describes the “eigendirections” of negative curvature of $``$. To study the distortion of leaves of $``$ in such a setup, one need only study, for a typical leaf $`l`$ of $`\mathrm{\Lambda }^+`$ say, how distorted the induced one–dimensional foliation $`l`$ is in $`l`$. Such a foliation has bounded geometry — its extrinsic curvature is bounded above and below by some constant — because of the compactness of $`M`$. Moreover it is topologically a product. The point of this paper is to show that these two conditions in no way allow one to establish any bound on the distortion function of $``$. One remark worth making is that a pair of laminations $`\mathrm{\Lambda }^\pm `$ transverse to an $``$–covered foliation of an atoroidal $`3`$–manifold $`M`$ are constructed in and independently by Sérgio Fenley (). The interpretation of these foliations as eigendirections of extrinsic curvature is still conjectural, however. ## 2. Non–recursive distortion ###### Definition 2.1. The distortion function $`𝒟`$ of a foliation $``$ in a Riemannian manifold $`M`$ is defined as follows: if $`d_{}`$ denotes the intrinsic distance function in a leaf of $``$ thought of as a geodesic metric space, and $`d_{\text{amb}}`$ denotes the Riemannian distance function in the ambient space, then $$𝒟(t)=\underset{x,y\lambda |d_{\text{amb}}(x,y)=t}{sup}d_{}(x,y)$$ where the supremum is taken over all pairs of points in all leaves of $``$. We usually expect that $`M`$ is simply connected. ###### Theorem 2.1. Let $`\lambda `$ be a smooth oriented bi–infinite ray properly immersed in $`^2`$. 1. If $`1\kappa 1`$ everywhere, then the distortion function $`𝒟`$ is at most exponential. 2. If $`\kappa 1`$ everywhere and $`>1`$ somewhere, then $`\lambda `$ has a self–intersection. 3. 
If $`\kappa =1`$ everywhere then $`\lambda `$ is a horocircle and therefore has exponential distortion. Proof: The proof of 1. is a standard comparison argument, and amounts to showing that $`\lambda `$ makes at least as much progress as a horocircle. To see that 2. holds, we consider the progress of the osculating horocircle to $`\lambda `$. At a point $`x`$ on $`\lambda `$, the osculating horocircle $`H_x`$ is the unique horocircle on the positive side of $`\lambda `$ which is tangent to $`\lambda `$ at $`x`$. Let $`B_xS_{\mathrm{}}^1`$ be the basepoint of $`H_x`$. Then the curvature condition on $`\lambda `$ implies that as one moves along $`\lambda `$ in the positive direction, $`B_x`$ always moves anticlockwise. Since $`\kappa >1`$ somewhere, the derivative of $`B_x`$ is nonzero somewhere. If we truncate $`\lambda `$ on some sufficiently big compact piece and fill in the remaining segments with horocircles, we get a properly immersed arc $`\lambda ^{}`$ in $`B^2`$ with positive winding number, which consequently has a self–intersection somewhere. If this intersection is in the piece agreeing with $`\lambda `$, we are done. Otherwise, the curvature condition implies that $`\lambda `$ must have an intersection in the region enclosed by $`\lambda ^{}`$. In light of this theorem, it is perhaps surprising that we can make the following construction: ###### Theorem 2.2. For any $`ϵ>0`$ there is a foliation $``$ of $`^2`$ which is topologically the standard foliation of $`^2`$ by horizontal lines with the following properties: * Leaves are smooth. * The extrinsic curvature of any leaf at any point is bounded between $`1ϵ`$ and $`1+ϵ`$. * The distortion function $`𝒟`$ grows faster than any recursive function. Proof: We consider the upper half-space model of $`^2`$ and let $`\lambda `$ be the leaf passing through the point $`i`$. $`\lambda `$ will be the graph of a function $`r=\varphi (\theta )`$ in polar co–ordinates for $`\theta (0,\pi )`$. 
Choose some very small $`\delta `$. Then for $`\pi \delta >\theta >\delta `$ we let $`\lambda `$ agree with the horocircle with “center” at $`\mathrm{}`$ passing through $`i`$. Let $`r_n`$ be a sequence of positive real numbers which grows faster than any recursive function. Then we define $$\varphi \left(\frac{\delta }{2^n}\right)=r_n$$ Since $`r_n`$ grows so very fast, one can easily choose $`\varphi `$ to interpolate between $`\frac{\delta }{2^n}`$ and $`\frac{\delta }{2^{n+1}}`$ so that $`\lambda `$ is very close to a radial (Euclidean) line of very small slope in this range. The extrinsic curvature of such a line in $`^2`$ is very close to $`1`$. Define $`\varphi `$ in the range $`\theta >\pi \delta `$ by $`\varphi (\theta )=\varphi (\pi \theta )`$. Then the extrinsic distance between $`(r,\theta )`$ and $`(r,\pi \theta )`$ depends only on $`\theta `$. It follows that for $`\theta `$ between $`\frac{\delta }{2^n}`$ and $`\frac{\delta }{2^{n+1}}`$ the extrinsic distance between $`(r,\theta )`$ and $`(r,\pi \theta )`$ is bounded above by some recursive function in $`n`$. However, the length of the path in $`\lambda `$ between $`(r,\theta )`$ and $`(r,\pi \theta )`$ is growing approximately like $`\mathrm{ln}(r_n)`$. The distortion function $`𝒟`$ is therefore non–recursive. Let $`\lambda _1=\lambda `$ and for $`t>0`$, define $`\lambda _t`$ to be the graph of the function $`r=t\varphi (\theta )`$ in polar co-ordinates. Then there is a hyperbolic isometry taking $`\lambda _t`$ to $`\lambda _s`$ for any $`s,t`$ and therefore each of the $`\lambda _t`$ satisfies the same curvature bounds as $`\lambda `$. Moreover, the union of the $`\lambda _t`$ gives a product foliation of $`^2`$, as required. ###### Remark 2.1. For those unfamiliar with the concept, it should be noted that it is easy to produce a function (even an integer valued function) such as $`r_n`$ which grows faster than any recursive function. 
For, enumerate all the recursive functions somehow as $`\varphi _n`$. Then define $`r(n)=\mathrm{max}_{mn}\varphi _m(n)`$. Such an $`r`$ grows (eventually) at least as fast as any recursive function, and therefore faster than any recursive function. Essentially the same construction works for Euclidean space, a fact which we now establish. ###### Theorem 2.3. For any $`ϵ>0`$ there is a foliation $``$ of $`𝔼^2`$ which is topologically the standard foliation of $`^2`$ by horizontal lines with the following properties: * Leaves are smooth. * The extrinsic curvature of any leaf at any point is bounded by $`ϵ`$ * The distortion function $`𝒟`$ grows faster than any recursive function. Proof: The leaf $`\lambda `$ of $``$ passing through the origin will be the graph of an even function $`y=\varphi (x)`$ and every other leaf will be a translate of $`\lambda `$. After choosing a sufficiently large $`K`$ and small $`\delta `$, we make $`\varphi (x)=\delta x^2`$ for $`|x|<K`$, so that $`\lambda `$ has curvature bounded below $`ϵ`$, and is very nearly vertical at $`x=K`$. Then let $`r_i`$ be a sequence growing faster than any recursive function, as before, and define $`\varphi (K+n)=r_n`$. Then we can choose $`\varphi `$ to interpolate between $`K+n`$ and $`K+n+1`$ for each $`n`$ to make it very smooth and almost straight, since $`\lambda `$ will be almost vertical in this region. ## 3. Minimality ###### Definition 3.1. A decorated metric space is a metric space in the usual sense with some auxiliary structure. For instance, this structure could consist of a basepoint, some collection of submanifolds, a foliation or lamination, etc. together with a notion of convergence of such structures in the geometric topology, in the sense of Gromov. 
One says a decorated metric space $`M`$ has bounded geometry if the metric space itself has bounded geometry, and if for every $`t`$, the decorated metric spaces obtained as restrictions of $`M`$ to the balls of radius $`t`$ form a precompact family. For instance, we might have a family of foliated Riemannian manifolds $`M_i,_i`$. The geometric topology requires a choice of basepoints $`p_i`$ in each $`M_i`$. Then we say the family $`(M_i,_i,p_i)`$ is a Cauchy sequence if there are a sequence of radii $`r_i`$ and $`ϵ_i0`$ so that the Gromov–Hausdorff distance between the ball of radius $`r_i`$ in $`M_i`$ and $`M_j`$ about $`p_i,p_j`$ for $`j>i`$ is at most $`ϵ_i`$, and that such “near–isometries” can be chosen in a way that the leaves of $`_i`$ can be taken $`ϵ_i`$–close to the leaves of $`_j`$. From such a Cauchy sequence we can extract a limit $`(M,,p)`$. ###### Definition 3.2. A decorated metric space $`M`$ with bounded geometry is minimal if for every limit $`(M^+,p)`$ of the pointed decorated metric spaces $`(M,p_i)`$ and every point $`qM`$, there is a sequence $`q_iM^+`$ such that the pointed decorated metric spaces $`(M^+,q_i)(M,q)`$. The foliations constructed in the last section are certainly not minimal. We ask the question: does the assumption of minimality allow one to make some estimate on the distortion function for a product foliation?
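The diagonalization described in Remark 2.1 can be sketched directly. The enumeration below is a finite illustrative list standing in for a (necessarily non-effective) enumeration of all recursive functions; with a genuine enumeration, the resulting $`r`$ eventually dominates every recursive function and so is not itself recursive.

```python
def dominating_function(enumeration):
    """Return r with r(n) = max over m <= n of enumeration[m](n),
    the diagonal construction of Remark 2.1."""
    def r(n):
        return max(f(n) for f in enumeration[: n + 1])
    return r

# three sample functions standing in for phi_0, phi_1, phi_2
phis = [lambda n: n + 1, lambda n: n * n, lambda n: 2 ** n]
r = dominating_function(phis)
```

Against this finite list, `r` simply tracks whichever of the listed functions is currently largest; the non-recursiveness of the true `r` comes entirely from diagonalizing over *all* recursive functions.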
# Rotations and Abundances of Blue Horizontal-Branch Stars in Globular Cluster M15Based in large part on observations obtained at the W.M. Keck Observatory, which is operated jointly by the California Institute of Technology and the University of California ## 1 Introduction M15 (NGC 7078) is one of the most metal-poor globulars known, with a metallicity $`[\mathrm{Fe}/\mathrm{H}]2.4`$ dex measured from red giant abundances (Cohen 1979, Sneden et al. 1997). Like many other such clusters, M15’s horizontal branch lies predominantly bluewards of the instability strip, and color-magnitude diagrams (Buonanno et al. 1983; Durrell & Harris 1993) show an extended “blue tail” reaching $`T_{\mathrm{eff}}`$ as high as $`20000\mathrm{K}`$, which is separated from the horizontal part of the HB by a “gap” in the distribution of stars along the HB (Moehler et al. 1995). Similar gaps appear in the CMDs of M13, M80, NGC 6752, NGC 288, and other clusters, but are difficult to explain via standard models of RGB mass loss or HB evolution. Recently, attention has focused on atmospheric effects as a possible explanation for these photometric features (Caloi 1999; Grundahl et al. 1998). At $`T_{\mathrm{eff}}10000\mathrm{K}`$, they suggest, the stellar atmospheres become susceptible to diffusion effects, and thus develop chemical peculiarities similar to those that appear in main-sequence CP stars. The resulting changes in atmospheric opacity alter the emitted flux distributions and thus the measured photometry of the hotter stars, giving rise to the gaps. These claims have been bolstered by measurements of large photospheric abundance anomalies among hotter BHB stars in M13 (Behr et al. 1999) and NGC 6752 (Moehler et al. 1999), which show 30 to 50 times the iron abundance expected for these metal-poor clusters. 
In M13, the transition from normal-metallicity cooler stars to metal-enhanced hotter stars is remarkably abrupt, and coincides with the location of the gap, further supporting the surface-effect hypothesis. It appears, however, that stellar rotation also plays some role in this process. Theoretical treatments of the diffusion mechanisms (Michaud et al. 1983) suggest that circulation currents induced by higher rotation velocities can easily prevent abundance variations from appearing. This prediction is borne out by measurements of $`v\mathrm{sin}i`$ for the M13 stars (Behr et al. 2000), which show that although the cooler stars exhibit a range of $`v\mathrm{sin}i`$, some as high as $`40\mathrm{km}\mathrm{s}^1`$, the hot metal-enhanced stars all show very low rotations, $`v\mathrm{sin}i<10\mathrm{km}\mathrm{s}^1`$. This correlation suggests that slow rotation may be required in order for the metal enhancement and helium depletion to appear in the photosphere. Although these results from M13 and NGC 6752 imply that we are on the right track towards explaining the photometric peculiarities, BHB stars in many other clusters will have to be analyzed in a similar fashion before we can make any firm claims. In particular, since the radiative levitation that causes the observed metal enhancements is thought to depend strongly on the intrinsic metallicity of the atmosphere, we should study clusters spanning a range of \[Fe/H\], to see whether the onset and magnitude of the enhancements vary. To this end, we have observed BHB stars in five clusters in addition to M13. In this Letter, we describe the results for both rotation and photospheric abundances in M15. The data for the other clusters — M92, M68, NGC 288, and M3 — are being analyzed currently, and will be presented in a future publication. ## 2 Observations and Reduction The eighteen stars in our sample were selected from Buonanno et al. (1983), and are listed in Table 1. 
We acquired supplementary Strömgren photometry of the target stars using the Palomar 60-inch telescope, in order to better constrain the effective temperatures. The stars generally lie in the cluster outskirts, where crowding and confusion are less of a problem than towards the core, and our CCD imaging confirmed an absence of faint companions. The spectra were collected using the HIRES spectrograph (Vogt et al. 1994) on the Keck I telescope, during four observing runs on 1997 August 01–03, 1997 August 26–27, 1998 June 27, and 1999 August 14–17. A 0.86-arcsec slit width yielded $`R=45000`$ ($`v=6.7\mathrm{km}\mathrm{s}^1`$) per 3-pixel resolution element. We limited frame exposure times to 1200 seconds, to minimize susceptibility to cosmic ray accumulation, and then coadded four frames per star. $`S/N`$ ratios were on the order of $`30`$ to $`60`$ per resolution element. We used a suite of routines developed by J.K. McCarthy (1988) for the FIGARO data analysis package (Shortridge 1988) to reduce the HIRES echellograms to 1-dimensional spectra. Frames were bias-subtracted, flat-fielded against exposures of HIRES’ internal quartz incandescent lamps (thereby removing much of the blaze profile from each order), cleaned of cosmic ray hits, and coadded. A thorium-argon arc lamp provided wavelength calibration. Sky background was negligible, and 1-D spectra were extracted via simple pixel summation. A 10th-order polynomial fit to line-free continuum regions completed the normalization of the spectrum to unity. ## 3 Analysis The resulting spectra show a few to over 140 metal absorption lines each, with the hottest stars showing the largest number of lines. Line broadening from stellar rotation is evident in a few of the stars, but even in the most extreme cases, the line profiles were close to Gaussian, so line equivalent widths ($`W_\lambda `$) were measured by least-square fitting of Gaussian profiles to the data. 
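A minimal version of the equivalent-width measurement described above, assuming a continuum-normalized spectrum and a single unblended line (the actual reduction also propagated fit-$`\chi ^2`$ errors, which are omitted here):

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian_line(wl, depth, center, sigma):
    """Continuum-normalized absorption profile: 1 - depth * exp(-(wl-center)^2 / (2 sigma^2))."""
    return 1.0 - depth * np.exp(-0.5 * ((wl - center) / sigma) ** 2)

def equivalent_width(wl, flux):
    """Least-squares Gaussian fit to one line; for a Gaussian profile,
    W_lambda = integral of (1 - F) dlambda = sqrt(2 pi) * depth * sigma."""
    p0 = [1.0 - flux.min(), wl[np.argmin(flux)], 0.1]  # crude starting guesses
    popt, _ = curve_fit(gaussian_line, wl, flux, p0=p0)
    depth, _, sigma = popt
    return np.sqrt(2.0 * np.pi) * depth * abs(sigma)
```

For example, a line of depth 0.3 and sigma 0.05 Å corresponds to an equivalent width of about 38 mÅ, comfortably above the 10 mÅ reliability floor quoted below.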
Equivalent widths as small as 10 mÅ were measured reliably, and errors in $`W_\lambda `$ (estimated from the fit $`\chi ^2`$) were typically 5 mÅ or less. Lines were then matched to the atomic line lists of Kurucz & Bell (1995), and those that were attributed to a single species (i.e. unblended) were used to determine radial velocity $`v_r`$ for each of the stars. On the basis of $`v_r`$, all of the targets appear to be cluster members. To derive photospheric parameters $`T_{\mathrm{eff}}`$ and $`\mathrm{log}g`$, we compared the published photometry and our own Strömgren indices to synthesized colors from ATLAS9 (Kurucz 1997), adopting a cluster reddening of $`E(B-V)=0.09`$. Temperatures are well-determined ($`\pm `$ a few hundred K) for the cooler stars, but are somewhat less firm (as much as $`\pm 1000\mathrm{K}`$) for the hotter stars. This situation will improve with a more sophisticated treatment of the various Strömgren colors, and by using transitions with different excitation potentials $`\chi `$ to constrain $`T_{\mathrm{eff}}`$. We estimated surface gravities using the $`AB_\nu `$ flux method (Oke & Gunn 1984), which relates $`\mathrm{log}g`$, stellar mass $`M_{\ast }`$, distance $`d`$, and photospheric Eddington flux $`H_\nu `$ at 5480 Å. Since $`AB_\nu (5480\AA )`$ equals the $`V`$ magnitude, we can derive $$\mathrm{log}g=3.68+\mathrm{log}(M_{\ast }/M_{\odot })+\mathrm{log}(H_\nu )-(M-m)_V+0.4V_0,$$ where unextincted magnitude $`V_0=V+A_V=V+0.30`$ for M15. We assumed $`M_{\ast }=0.6M_{\odot }`$ as a representative BHB star mass, used a distance modulus of 15.26 (Silbermann & Smith 1995), and drew the $`H_\nu `$ values from the ATLAS9 model atmospheres (Kurucz 1997), iterating until $`\mathrm{log}g`$ converged. The resulting $`\mathrm{log}g`$ values agree well with model ZAHB tracks (Dorman 1993), except for the hotter stars, which are “overluminous” as described by Moehler et al. (1995).
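As a concrete sketch of the flux-method iteration described above: since $`H_\nu `$ itself depends on $`\mathrm{log}g`$ through the model atmosphere, the relation is solved by fixed-point iteration. In the sketch below, `model_H_nu` is a purely hypothetical stand-in for the ATLAS9 grid lookup (its functional form and numbers are invented for illustration); the mass, distance modulus, and extinction are the values quoted in the text.

```python
import math

def log_g_from_flux(m_star, h_nu, dist_mod, v0):
    """Surface gravity from the AB_nu flux relation quoted above.
    m_star in solar masses; h_nu is the Eddington flux at 5480 A."""
    return 3.68 + math.log10(m_star) + math.log10(h_nu) - dist_mod + 0.4 * v0

def model_H_nu(t_eff, log_g):
    """HYPOTHETICAL stand-in for a model-atmosphere grid lookup
    (ATLAS9 in the text); form and numbers are illustrative only."""
    return 10.0 ** (-3.0 + 0.1 * log_g + 0.2 * (t_eff / 1.0e4))

def converge_log_g(t_eff, v_mag, m_star=0.6, dist_mod=15.26, a_v=0.30, tol=1e-6):
    v0 = v_mag + a_v      # unextincted magnitude as defined in the text
    lg = 3.0              # initial guess
    for _ in range(100):  # fixed-point iteration until log g converges
        new = log_g_from_flux(m_star, model_H_nu(t_eff, lg), dist_mod, v0)
        if abs(new - lg) < tol:
            return new
        lg = new
    return lg
```

Because the grid lookup is a weak function of $`\mathrm{log}g`$, the iteration is a contraction and converges in a handful of steps; with a real ATLAS9 tabulation the structure of the loop would be the same.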
Table 1 lists the final photospheric parameters used for each of the target stars, as well as the heliocentric radial velocities. For the chemical abundance analyses, we use the LINFOR/LINFIT line formation analysis package, developed at Kiel, based on earlier codes by Baschek, Traving, and Holweger (1966), with subsequent modifications by M. Lemke. Our spectra of these very metal-poor stars are sufficiently uncrowded that we can simply compute abundances from equivalent widths, instead of performing a full spectral synthesis fit. Only lines attributed to a single chemical species were considered; potentially blended lines are ignored in this analysis. Upper bounds on \[Ti/H\] were determined for two of the hotter stars, which did not show any titanium lines, by assigning an equivalent width of 20 mÅ (double the strength of the weakest metal lines actually measured in those spectra) to the six strongest Ti II transitions and calculating the implied abundance. Microturbulent velocity $`\xi `$ was chosen such that the abundance derived for a single species (Fe II for most cases) was invariant with $`W_\lambda `$. For those stars with an insufficient number of lines to utilize this technique, we adopted a typical value of $`\xi =2\mathrm{km}\mathrm{s}^{-1}`$. We assumed a cluster metallicity of \[Fe/H\] $`=-2.4`$ dex in computing the initial model atmospheres, and then adjusted as necessary for those stars that turned out to be considerably more metal-rich, although these adjustments to the atmospheric input were found to have only modest effects ($`<0.2`$ dex) on the abundances of individual elements. Measurement of $`v\mathrm{sin}i`$ broadening traditionally entails cross-correlation of the target spectrum with a rotational reference star of similar spectral type, but this approach assumes that the template star is truly at $`v\mathrm{sin}i=0`$, which is rare.
Furthermore, given the abundance peculiarities that many of these BHB stars exhibit, it is difficult to find an appropriate spectral analog. Since we are able to resolve the line profiles of our stars, we instead chose to measure $`v\mathrm{sin}i`$ by fitting the profiles directly, taking into account other non-rotational broadening mechanisms. We used bright unsaturated arc lines to construct an instrumental profile, and then combined it with an estimated thermal Doppler broadening of $`3\mathrm{km}\mathrm{s}^{-1}`$ FWHM and the previously-determined $`\xi `$. This profile was convolved with hemicircular rotation profiles for various $`v\mathrm{sin}i`$ to create the final theoretical line profiles. Each observed line in a spectrum was fit to the theoretical profile using an iterative least-squares algorithm, solving for $`v\mathrm{sin}i`$ and line depth. The values for $`v\mathrm{sin}i`$ from different spectral lines generally agree quite well, once we removed helium lines and blended lines such as the Mg II 4481 triplet. ## 4 Results In Figure 1, abundance determinations for key chemical species are plotted as a function of stellar $`T_{\mathrm{eff}}`$. The values \[X/H\] represent logarithmic offsets from the solar values of Anders & Grevesse (1993). The error bars incorporate the scatter among multiple lines of the same species, plus the uncertainties in $`T_{\mathrm{eff}}`$, $`\mathrm{log}g`$, $`\xi `$, $`W_\lambda `$ for each line, and \[Fe/H\] of the input atmosphere. In the cooler stars ($`T_{\mathrm{eff}}<10000\mathrm{K}`$), the compositions are largely as expected. The iron abundance \[Fe/H\] averages $`-2.5`$, slightly below the value of $`-2.4`$ expected for this metal-poor cluster. Magnesium and titanium appear at \[Mg/H\] $`\approx -2.2`$ and \[Ti/H\] $`\approx -1.8`$, respectively, which are also reasonable for this environment. As we move to the hotter stars, however, the iron abundances change radically.
Six of the eight stars at $`T_{\mathrm{eff}}>10000\mathrm{K}`$ show solar iron abundances, \[Fe/H\] $`\sim 0`$, an enhancement by a factor of 300 over the cooler stars. (The other two hot stars show the same \[Fe/H\] as the cool stars, for reasons which will be discussed shortly.) Titanium, in a similar fashion, rises to \[Ti/H\] $`>0`$, although there are hints of a monotonic increase with $`T_{\mathrm{eff}}`$ rather than an abrupt jump. Magnesium, on the other hand, appears to be unaltered, maintaining a roughly constant metal-poor level across the entire temperature range. The hotter stars also start to show helium lines, providing evidence of helium depletion, since \[He/H\] $`\sim 0`$ at $`11000\mathrm{K}`$ but it then drops by factors of 10 to 30 for $`T_{\mathrm{eff}}=12000`$–$`13000\mathrm{K}`$. Figure 1 also charts the values of $`v\mathrm{sin}i`$ derived for the target stars. Among the cooler stars, we find a range of rates, with most of the stars rotating at $`15\mathrm{km}\mathrm{s}^{-1}`$ or less, except for two stars at $`29\mathrm{km}\mathrm{s}^{-1}`$ and $`36\mathrm{km}\mathrm{s}^{-1}`$, respectively. This sort of distribution of $`v\mathrm{sin}i`$ is not what one would expect given a single intrinsic rotation speed $`v`$ and random orientation of the rotation axes, since large $`\mathrm{sin}i`$ are more likely than small $`\mathrm{sin}i`$. Instead, the cool end of the HB appears to contain two rotational populations, one with $`v\sim 35\mathrm{km}\mathrm{s}^{-1}`$, and another with $`v\sim 15\mathrm{km}\mathrm{s}^{-1}`$, much like in M13 (Peterson et al. 1995). For the hotter stars, there also appears to be a bimodal distribution in $`v`$, although the difference is less pronounced. Six of these eight stars show $`v\mathrm{sin}i<7\mathrm{km}\mathrm{s}^{-1}`$, while the other two have $`v\mathrm{sin}i\sim 12\mathrm{km}\mathrm{s}^{-1}`$. Interestingly, these two faster-rotating hot stars are the same stars that show “normal” (metal-poor) iron abundances.
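The geometric expectation invoked here, that randomly oriented rotation axes make large $`\mathrm{sin}i`$ more likely than small $`\mathrm{sin}i`$, is easy to verify with a short Monte Carlo sketch (independent of the data in this Letter): for isotropic orientations $`\mathrm{cos}i`$ is uniform, the mean of $`\mathrm{sin}i`$ is $`\pi /4\approx 0.79`$, and only about 13% of stars fall below $`\mathrm{sin}i=0.5`$.

```python
import math, random

random.seed(42)
n = 200_000
# Isotropic axis orientations: cos(i) is uniform on [0, 1).
sin_i = [math.sqrt(1.0 - random.random() ** 2) for _ in range(n)]

mean_sin_i = sum(sin_i) / n                          # expect pi/4 ~ 0.785
frac_below_half = sum(s < 0.5 for s in sin_i) / n    # expect 1 - cos(30 deg) ~ 0.134
print(mean_sin_i, frac_below_half)
```

This is why a cluster of stars with a single intrinsic $`v`$ should show $`v\mathrm{sin}i`$ values piled up near $`v`$ itself, and why the observed spread is better read as two distinct rotational populations.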
## 5 Discussion These BHB stars in M15 exhibit abundance and rotation characteristics very similar to those reported for BHB stars in M13 (Behr et al. 1999 & 2000). The enhancements of metals except for magnesium, the depletion of helium, and the difference in maximum $`v\mathrm{sin}i`$ between hotter and cooler stars are shared by both clusters. The abundance anomalies in M15 are therefore likely to be due to the same diffusion processes that were invoked for the prior study — radiative levitation of metals, and gravitational settling of helium, in the stable non-convective atmospheres of the hotter, higher-gravity stars, as hypothesized by Michaud et al. Unfortunately, the stars selected in M15 do not sample the immediate vicinity of the photometric gap as well as those in M13, so the association between the onset of diffusion-driven abundance variations and the gap is not as clear-cut, but the general trend still supports this association. There are, however, two notable differences between the results for M15 and those for M13. First, the magnitudes of the metal enhancements are somewhat different. Iron and titanium are each enhanced by $`2.5\mathrm{dex}`$ in M15, while in M13, iron increases by only $`2\mathrm{dex}`$, and titanium by $`1.5\mathrm{dex}`$. Despite this difference, the enhancement mechanism yields the same final metallicities for the hot stars in both clusters: \[Fe/H\] $`\sim 0`$ and \[Ti/H\] $`\sim 0`$. This correspondence suggests that the radiative levitation mechanism reaches equilibrium with gravity at or near solar metallicity, independent of the initial metal content of the atmosphere, as the metal lines become saturated and are thus unable to support further enhancements. Second, there is the issue of the two hotter M15 stars which do not show metal enhancement, denoted by circles in Figure 1. Their derived temperatures associate them with the hotter population, as does their photometry, which places them bluewards of the photometric gap.
Their iron abundances, however, are $`<-2.5\mathrm{dex}`$, like the cooler unenhanced stars in the cluster, and stringent upper bounds on their titanium abundances again suggest that they are metal-poor. These two stars are also distinguished by having $`v\mathrm{sin}i\sim 12\mathrm{km}\mathrm{s}^{-1}`$, nearly twice as large as any of the other hot stars. Although their temperatures and gravities are high enough to support radiative levitation, it appears that their faster rotations induce meridional circulation, which keeps the atmosphere well-mixed and prevents the metal enhancements from appearing. Such sensitive dependence upon rotation speed was mentioned by Michaud et al., but these observations provide direct evidence that rotationally-driven mixing can and does directly influence the atmospheric composition. With these results from M15, we add to the growing body of evidence that element diffusion, regulated by stellar rotation, is at least partially responsible for the observed photometric morphology of globular cluster HBs. The findings from this cluster corroborate the prior work on M13 and NGC 6752, while adding some new twists which further illuminate the diffusion mechanisms. Analysis of many other clusters, spanning a range of metallicity and HB morphology, will be necessary before all of the implications of these effects can be known. These observations would not have been feasible without the HIRES spectrograph and the Keck I telescope. We are indebted to Jerry Nelson, Gerry Smith, Steve Vogt, and many others for making such marvelous machines, to the W. M. Keck Foundation for making it happen, and to a bevy of Keck observing assistants for making them work. J.G.C. and B.B.B. acknowledge support from NSF Grant AST-9819614.
# Nonlinear analysis of the shearing instability in granular gases ## I Introduction The understanding of the dynamics of granular fluids is crucial for various industrial processes. This has led to many investigations where theory, experiments and simulations are used in order to construct a predictive theory . There is hope that for moderate densities and slightly dissipative grains, grain dynamics may be described on large scales by using fluid hydrodynamics with slight modifications. One way to guess which changes to make in standard hydrodynamics is to use the tools developed by statistical mechanics in order to derive hydrodynamic equations. This is justified by the fact that a simple grain model, the inelastic hard sphere model, has been shown to reproduce most of the phenomena occurring in granular systems: in some sense, it has proven to contain the essential ingredients necessary to predict the peculiar physics observed . The scheme used in the study of nonequilibrium fluids can then be extended to granular fluids: “microscopic” simulations make it possible to compute the equation of state and values for transport coefficients, which are then fed into the guessed macroscopic equations. Comparison between direct nonequilibrium simulations of microscopic and macroscopic models then allows one to test the validity of the proposed macroscopic equations. This approach was used recently by two of us to investigate heat transport (heat being identified with the kinetic energy associated with the grains’ motion) and it has been shown that Fourier’s law has to be generalized with a density gradient term appearing in the expression for the heat flux . In the Inelastic Hard Sphere model (IHS), grains are spherical hard particles with only translational degrees of freedom. The energy dissipation is included through a restitution coefficient $`r`$ lower than one.
As for hard spheres, the collision is an instantaneous event and the grain velocities after a collision are given by $`𝐯_1^{}`$ $`=`$ $`𝐯_1+{\displaystyle \frac{1}{2}}(1+r)\left[\widehat{𝐧}\cdot (𝐯_2-𝐯_1)\right]\widehat{𝐧}`$ (1) $`𝐯_2^{}`$ $`=`$ $`𝐯_2-{\displaystyle \frac{1}{2}}(1+r)\left[\widehat{𝐧}\cdot (𝐯_2-𝐯_1)\right]\widehat{𝐧}`$ (2) where $`\widehat{𝐧}`$ is the unit vector pointing from the center of particle 1 towards the center of particle 2. It is convenient to define the dissipation coefficient $`q=(1-r)/2`$ which vanishes when collisions are elastic. In what follows units are chosen such that the disk diameter $`\sigma `$ and the particle masses are set to one. The IHS model, like the elastic hard sphere model, does not have an intrinsic energy scale. This means that two systems with the same initial configuration, with the particle velocities of one system being equal to those of the other multiplied by a common scaling factor, will follow the same trajectory but at different speeds. This lack of an intrinsic energy scale also implies a simple temperature dependence for all hydrodynamic quantities, which may be found by simple dimensional analysis. When a fluidized granular medium is allowed to evolve freely in a box with periodic boundary conditions, the energy decreases continuously in time and the system remains homogeneous. This non-steady state is called the Homogeneous Cooling State (HCS) and it is the reference state from which perturbations are studied: it is analogous to the thermodynamic equilibrium state for elastic systems . In the IHS model this state is particularly simple and the energy decreases obeying Haff’s law $$E(t)=\frac{E(0)}{\left(1+t/t_0\right)^2}$$ (3) In order to avoid the cooling of the system, the simulations are made at constant energy: at every collision between the grains, the dissipated energy is redistributed by scaling all the velocities.
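The binary collision rule (1)-(2) and its dissipation properties can be checked with a few lines of code; this sketch uses unit-mass disks in two dimensions. Momentum is conserved for any $`r`$, while the kinetic energy of each pair drops by $`\frac{1}{4}(1-r^2)\left[\widehat{𝐧}\cdot (𝐯_2-𝐯_1)\right]^2`$, which vanishes in the elastic limit $`r=1`$ (this energy-loss expression is our own elementary consequence of the rule, not quoted in the text).

```python
def collide(v1, v2, n, r):
    """Post-collision velocities of two unit-mass inelastic hard disks,
    following Eqs. (1)-(2); n is the unit vector from center 1 to center 2."""
    g = n[0] * (v2[0] - v1[0]) + n[1] * (v2[1] - v1[1])  # normal relative velocity
    f = 0.5 * (1.0 + r) * g
    return ((v1[0] + f * n[0], v1[1] + f * n[1]),
            (v2[0] - f * n[0], v2[1] - f * n[1]))

# Example: an approaching pair (n . (v2 - v1) < 0) with restitution r = 0.9
v1p, v2p = collide((1.0, 0.3), (-0.5, 0.1), (1.0, 0.0), 0.9)
```

Only the normal component of the relative velocity is affected, which is the microscopic statement, used later in the paper, that the center-of-mass (convective) energy is conserved while relative (thermal) energy is dissipated.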
This is equivalent to a time rescaling and can be treated as such in the equations for the continuous model. Using appropriate hydrodynamic equations, this homogeneous state is predicted to be unstable under certain conditions of density, system size and dissipativity . Considering the dissipation coefficient $`q`$ as a bifurcation parameter, while increasing $`q`$ at constant density and number of grains, the system first develops an instability characterized by two counterflows, the shearing instability, and then either a clustering regime in which the density becomes inhomogeneous or a vortex state where many small vortices develop throughout the system. For given values of total number of grains $`N`$ and number density $`\rho =N/V`$ the shearing instability appears when the dissipation coefficient is larger than a critical value. In the low density and large system limit, the critical value is given (in two dimensions) by $$\widehat{q}=\frac{\pi }{N\rho }$$ (4) Note that in the thermodynamic limit the system is always unstable for any finite dissipation. In this article we will study the shearing instability using a nonlinear hydrodynamic approach. In Sec. II we will develop a formalism that allows us to treat the HCS as a nonequilibrium steady state, and we will present a stability analysis around the steady state. Next, in Sec. III we will present the nonlinear analysis of the instability, obtaining expressions for the hydrodynamic fields beyond the instability. It will also be shown that the presence of the instability modifies the collision rate and energy dissipation rate. Finally, in Sec. IV we compare the predictions of the continuous model with molecular dynamics simulations of inelastic hard disks. Conclusions are presented in Sec. V. In what follows we will treat the two dimensional case, but the extension to three dimensions is direct.
## II Rescaled time formalism Consider a two dimensional system composed of $`N`$ grains interacting with the IHS model in a square box of size $`L`$ that has periodic boundary conditions in both directions. Units are chosen such that Boltzmann’s constant and the particle masses are set to one. Granular temperature is defined analogously to the kinetic definition for classical fluids $$T=\frac{1}{N}\underset{i}{\sum }\frac{1}{2}(𝐯_i-𝐯)^2$$ (5) where $`𝐯`$ is the hydrodynamic velocity. The shearing instability has been predicted by linear analysis of the hydrodynamic equations for the IHS. Also an analysis of the first stages of the nonlinear regime has been done in Ref. . As we shall see, the relative simplicity of the model allows for a complete nonlinear analysis as well. The hydrodynamic equations for the low-density IHS are similar to the usual hydrodynamic equations for fluids except that there is an energy sink term and the heat flux has contributions from the density gradient . The equations read $`{\displaystyle \frac{\partial \rho }{\partial t}}+\nabla \cdot (\rho 𝐯)=0`$ (6) $`\rho \left({\displaystyle \frac{\partial 𝐯}{\partial t}}+(𝐯\cdot \nabla )𝐯\right)=-\nabla \cdot IP`$ (7) $`\rho \left({\displaystyle \frac{\partial T}{\partial t}}+(𝐯\cdot \nabla )T\right)=-\nabla \cdot 𝐉-IP:\nabla 𝐯-\omega `$ (8) with the following constitutive equations $`IP_{ij}`$ $`=`$ $`\rho T\delta _{ij}-\eta _0\sqrt{T}\left({\displaystyle \frac{\partial v_i}{\partial x_j}}+{\displaystyle \frac{\partial v_j}{\partial x_i}}-(\nabla \cdot 𝐯)\delta _{ij}\right)`$ (9) $`𝐉`$ $`=`$ $`-k_0\sqrt{T}\nabla T-\mu _0{\displaystyle \frac{T^{3/2}}{\rho }}\nabla \rho `$ (10) $`\omega `$ $`=`$ $`\omega _0\rho ^2T^{3/2}`$ (11) where $`\eta _0`$, $`k_0`$, $`\mu _0`$, and $`\omega _0`$ do not depend on density or temperature but on the dissipation coefficient $`q`$. In particular, $`\omega _0`$ and $`\mu _0`$ vanish with $`q`$.
The total energy dissipation rate can be computed from the hydrodynamic equations, obtaining $`{\displaystyle \frac{dE}{dt}}`$ $`=`$ $`{\displaystyle \frac{d}{dt}}{\displaystyle \int (\rho T+\rho v^2/2)𝑑V}`$ (12) $`=`$ $`-{\displaystyle \int \omega 𝑑V}`$ (13) In the HCS, where density and temperature are homogeneous and there is no velocity field, the energy dissipation rate is $$\frac{dE}{dt}=-V\omega _0\rho _0^2T^{3/2}$$ (14) This continuous cooling down has the disadvantage that any perturbation analysis must be done with respect to this non-steady state. To overcome this difficulty we propose a differential time rescaling $`ds=\gamma dt`$ such that in this rescaled time, energy is conserved. The transformation corresponds to a continuous rescaling of all the particle velocities such that the kinetic energy remains constant. This rescaling does not, however, introduce new phenomena since, as we have mentioned, the IHS does not have an intrinsic time scale, and thus a time rescaling gives rise to the same phenomena viewed at a different speed. In order to keep the rescaled energy constant the appropriate value of $`\gamma `$ is given by $$\gamma =\sqrt{\frac{E(t)}{E(0)}}$$ (15) The rescaled time hydrodynamic fields transform as $`𝐯`$ $`=`$ $`\gamma \stackrel{~}{𝐯}`$ (16) $`T`$ $`=`$ $`\gamma ^2\stackrel{~}{T}`$ (17) $`IP_{ij}`$ $`=`$ $`\gamma ^2\stackrel{~}{IP_{ij}}`$ (18) $`𝐉`$ $`=`$ $`\gamma ^3\stackrel{~}{𝐉}`$ (19) $`\omega `$ $`=`$ $`\gamma ^3\stackrel{~}{\omega }`$ (20) where the tilde denotes the rescaled variable.
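In the HCS, Eqs. (3) and (15) give $`\gamma (t)=1/(1+t/t_0)`$, so integrating $`ds=\gamma dt`$ yields $`s=t_0\mathrm{ln}(1+t/t_0)`$: the algebraic cooling becomes a steady state on a logarithmic clock. This closed form is our own elementary consequence of the two equations (not stated in the text), and is easy to confirm numerically:

```python
import math

t0 = 2.0                                  # Haff time, arbitrary units
gamma = lambda t: 1.0 / (1.0 + t / t0)    # sqrt(E(t)/E(0)) in the HCS

# integrate ds = gamma(t) dt with the trapezoidal rule
t_max, n_steps = 50.0, 200_000
dt = t_max / n_steps
s = 0.0
for i in range(n_steps):
    t = i * dt
    s += 0.5 * (gamma(t) + gamma(t + dt)) * dt

s_exact = t0 * math.log(1.0 + t_max / t0)
print(s, s_exact)   # the two values agree to better than 1e-6
```

The logarithmic relation makes explicit why a long real-time cooling history compresses into a modest interval of rescaled time $`s`$.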
Due to the non-constant character of the time rescaling, extra source terms appear in the hydrodynamic equations, which can be simplified using the relation $$\frac{1}{\gamma }\frac{d\gamma }{ds}=-\frac{1}{2E(0)}\int \stackrel{~}{\omega }𝑑V$$ (21) Suppressing the tildes everywhere, the equations now read: $`{\displaystyle \frac{\partial \rho }{\partial s}}+\nabla \cdot (\rho 𝐯)=0`$ (22) $`\rho \left({\displaystyle \frac{\partial 𝐯}{\partial s}}+(𝐯\cdot \nabla )𝐯\right)=-\nabla \cdot IP+{\displaystyle \frac{\rho 𝐯}{2E(0)}}{\displaystyle \int \omega 𝑑V}`$ (23) $`\rho \left({\displaystyle \frac{\partial T}{\partial s}}+(𝐯\cdot \nabla )T\right)=-\nabla \cdot 𝐉-IP:\nabla 𝐯-\omega +{\displaystyle \frac{\rho T}{E(0)}}{\displaystyle \int \omega 𝑑V}`$ (24) Note that the constitutive relations (9) remain unchanged under time rescaling. In the rescaled time, the HCS reduces to a non-equilibrium steady state with a continuous energy supply that compensates the energy dissipation. As there is no energy scale, we fix the reference temperature to be one, so that $`E(0)=N`$. The HCS is then characterized by $`\{\rho =\rho _H,T=1,𝐯=0\}`$. To study the stability of this state, we first introduce the change of variables: $`\rho `$ $`=`$ $`\rho _H+\delta \rho `$ (25) $`𝐯`$ $`=`$ $`\delta 𝐯`$ (26) $`T`$ $`=`$ $`1+\delta T`$ (27) Taking the (discrete) Fourier transform of the linearized hydrodynamic equations around the HCS, it is easy to check that the transverse velocity perturbation decouples from the rest and satisfies the equation $$\rho _H\frac{\partial \delta 𝐯_{k\perp }}{\partial s}=\lambda _k\delta 𝐯_{k\perp }$$ (28) with $$\lambda _k\equiv -\frac{4k^2\pi ^2}{L^2}\eta _0+\frac{\rho _H^2\omega _0}{2};\{k_x,k_y\}=\mathrm{\hspace{0.17em}0},\pm 1,\pm 2,\mathrm{}$$ (29) For small values of $`\omega _0`$ (proportional to the dissipation) $`\lambda _k<0`$, so that the perturbations $`\delta 𝐯_{k\perp }`$ decay exponentially (the case $`k=0`$ should not be taken into account since the center of mass velocity is strictly zero).
But there exists a critical value of $`\omega _0`$ for which $`\lambda _k`$ vanishes, thus indicating the stability limit of the corresponding mode. The first modes to become unstable correspond to $`|𝐤|=1`$ (i.e., $`k_x=\pm 1,k_y=0`$ and $`k_x=0,k_y=\pm 1`$). The instability threshold for $`\omega _0`$ is then given by $$\widehat{\omega }_0=\frac{8\pi ^2\eta _0}{\rho _H^2L^2}$$ (30) The stability for the other modes has been studied previously . In this last reference, it was shown that, for low dissipation, the first instability that arises is indeed the transverse velocity instability. The origin of the instability can be understood also in terms of the real time hydrodynamics. In fact, the relaxation of the transverse hydrodynamic velocity is basically due to the viscous diffusion, which depends on the system size, whereas the cooling process is governed by local dissipative collisions. There exists therefore a system length beyond which the dissipation of thermal energy is faster than the relaxation of the transverse hydrodynamic velocity. The latter will then increase, when observed on the scale of thermal motion, thus producing the shearing pattern. ## III Nonlinear analysis of the instability In this section we propose to work out the explicit form of the velocity field beyond the instability. The calculations are tedious and quite lengthy, so that, here, we only describe explicitly the basic steps. We start by taking the Fourier transform of the full nonlinear hydrodynamic equations, obtaining a set of coupled nonlinear equations for the modes { $`\delta \rho _k,\delta T_k,\delta 𝐯_{k\perp },\delta 𝐯_{k\parallel }`$ }, where $`\delta 𝐯_{k\parallel }`$ represents the longitudinal component of the velocity field. Close to the instability ($`\omega _0\to \widehat{\omega }_0`$, $`|𝐤|=1`$), the modes $`\delta 𝐯_{1\perp }`$ exhibit a critical slowing down since $`\lambda _1\to 0`$, whereas the other hydrodynamic modes decay exponentially (cf. ref. ). On this slow time scale, i.e.
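Equations (29) and (30) are easy to evaluate numerically. The sketch below combines them with the quasielastic transport coefficients and the simulation parameters quoted later in Sec. IV (an assumption imported from that section), recovering the critical dissipation $`\widehat{q}=\pi /(N\rho )\approx 0.0628`$ and the fact that only the $`|𝐤|=1`$ modes grow just above threshold:

```python
import math

eta0 = 1.0 / (2.0 * math.sqrt(math.pi))          # quasielastic shear viscosity (Sec. IV)

def omega0(q):                                    # quasielastic dissipation coefficient
    return 4.0 * math.sqrt(math.pi) * q

def lam(k2, q, rho, L):
    """Growth rate, Eq. (29), for a transverse mode with |k|^2 = k2."""
    return -4.0 * k2 * math.pi ** 2 * eta0 / L ** 2 + rho ** 2 * omega0(q) / 2.0

N, rho = 10_000, 0.005                            # Sec. IV simulation parameters
L = math.sqrt(N / rho)

q_hat = math.pi / (rho ** 2 * L ** 2)             # threshold; equals pi/(N rho)
print(q_hat)                                      # -> 0.0628...

q = 1.01 * q_hat                                  # just above threshold:
growing = [k2 for k2 in (1, 2, 4) if lam(k2, q, rho, L) > 0.0]
print(growing)                                    # -> [1]
```

Setting $`\lambda _1=0`$ with these coefficients reproduces Eq. (4) analytically; the shorter-wavelength modes ($`|𝐤|^2=2,4,\mathrm{}`$) remain damped until much larger dissipation.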
$`s\sim O(\lambda _1^{-1})`$, the “fast” modes {$`\delta \rho _k;\delta T_k;\delta 𝐯_{k\parallel };\delta 𝐯_{k\perp },k\ne 1`$} can be considered as stationary, their time dependence arising mainly through $`\delta 𝐯_{1\perp }`$. Setting the time derivatives of these fast modes to zero one can express them in terms of the slow modes $`\delta 𝐯_{1\perp }`$. If one now inserts the resulting expressions into the evolution equation for the slow modes, one gets a set of closed nonlinear equations for $`\delta 𝐯_{1\perp }`$ (adiabatic elimination ), usually referred to as normal form or amplitude equations \[G. Nicolis, Introduction to Nonlinear Science, Cambridge University Press (1995) \]. Note that such a calculation is only possible close to the instability threshold, where the amplitude of $`\delta 𝐯_{1\perp }`$ approaches zero as $`\omega _0\to \widehat{\omega }_0`$. On the other hand, in this limit the only Fourier modes that will have large amplitudes are the transverse velocity modes with wavevector equal to $`\pm \mathrm{\hspace{0.17em}2}\pi /L`$ (i.e. $`|𝐤|=1`$).
There are four such modes: $$\delta 𝐯_{k\perp }=\{\begin{array}{cc}A_1\hfill & 𝐤=(1,0)\hfill \\ A_1^{\ast }\hfill & 𝐤=(-1,0)\hfill \\ A_2\hfill & 𝐤=(0,1)\hfill \\ A_2^{\ast }\hfill & 𝐤=(0,-1)\hfill \end{array}$$ (31) After some tedious algebra, one finds $`\rho _H{\displaystyle \frac{dA_1}{ds}}`$ $`=`$ $`\lambda _1A_1-4\pi ^2L^{-2}\eta _0\left(C_1\left|A_1\right|^2+C_2\left|A_2\right|^2\right)A_1`$ (32) $`\rho _H{\displaystyle \frac{dA_2}{ds}}`$ $`=`$ $`\lambda _1A_2-4\pi ^2L^{-2}\eta _0\left(C_1\left|A_2\right|^2+C_2\left|A_1\right|^2\right)A_2`$ (33) where $`C_1`$ $`=`$ $`{\displaystyle \frac{8(k_0-\mu _0)-3\eta _0}{8(k_0-\mu _0)-2\eta _0}}`$ (34) $`C_2`$ $`=`$ $`{\displaystyle \frac{2(k_0-\mu _0)+\eta _0}{2(k_0-\mu _0)-\eta _0}}`$ (35) The amplitude equations (32) and (33) admit three different stationary solutions: $`(a)`$ $`A_1=0,A_2=0`$ (36) $`(b)`$ $`\left|A_1\right|=\widehat{A},A_2=0`$ (38) $`A_1=0,\left|A_2\right|=\widehat{A}`$ $`(c)`$ $`\left|A_1\right|=\left|A_2\right|=\widehat{A}\sqrt{{\displaystyle \frac{C_1}{C_1+C_2}}}`$ (39) where we have set $$\widehat{A}=\frac{L}{2\pi }\sqrt{\frac{\lambda _1}{\eta _0C_1}}$$ (40) Note that the phases of the above stationary solutions are arbitrary (recall that the amplitudes at $`𝐤`$ and $`-𝐤`$ are complex conjugates). The trivial solution (a) corresponds to a motionless fluid, whereas the solutions (b) are shearing states with the corresponding fluxes oriented either in the $`y`$ or $`x`$ direction. The mixed mode solution (c) represents a vortex state with two counter-rotating vortices in the box. Below the critical point ($`\lambda _1<0`$), the trivial solution (a) is the only stable one. As we cross the critical point this solution becomes unstable. A linear stability analysis of Eqs. 32 and 33 shows that the shearing states are stable provided $$\frac{C_2}{C_1}>1$$ (41) while the mixed mode solution is unstable. Satisfying Eq. 41 depends on the values of the transport coefficients.
For small dissipativity it is always fulfilled, provided the number density remains relatively low. In fact, a mixed mode state has been observed recently in a highly dense system . The occurrence of either shearing state depends on the initial state. In a statistical sense, they are equally probable. For example, in the case that the system chooses the solution {$`A_1=\widehat{A}`$, $`A_2=0`$}, the velocity field reads $`𝐯=2\widehat{A}\mathrm{cos}(2\pi x/L)\widehat{𝐲}`$ (42) where $`\widehat{𝐲}`$ represents the unit vector in the $`Y`$ direction. The temperature and density perturbations are then given by $`\delta T`$ $`=`$ $`-(\widehat{A})^2\left[1+(1-C_1)\mathrm{cos}(4\pi x/L)\right]`$ (43) $`\delta \rho `$ $`=`$ $`\rho _H(\widehat{A})^2(1-C_1)\mathrm{cos}(4\pi x/L)`$ (44) The instability of the transverse velocity field thus gives rise to modifications of the temperature and density fields. The temperature decreases globally, since part of the energy is taken by the convective motion. Moreover, because of the viscous heating, the temperature profile exhibits a spatial modulation, i.e. it is higher where the viscous heating is higher. The density profile also shows a spatial modulation that keeps the pressure homogeneous (recall that for a two dimensional low density gas, $`\delta p\approx \rho _H\delta T+\delta \rho `$, in system units). This modulation of the density, which was first observed by Goldhirsch and Zanetti (Fig. 2 of their article) in the study of the clustering instability, is stable and does not lead to clustering.
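The selection of the shearing state can be made concrete by evaluating the coefficients (34)-(35) with the quasielastic transport coefficients quoted in Sec. IV ($`\eta _0=1/(2\sqrt{\pi })`$, $`k_0=2/\sqrt{\pi }`$, $`\mu _0=0`$), an assumption imported from that section. One finds $`C_1=29/30`$ and $`C_2=9/7`$, so $`C_2/C_1>1`$ and the shearing states are indeed stable; the amplitude (40) then reproduces the closed form $`\widehat{A}=\sqrt{(30/29)\delta q/\widehat{q}}`$ quoted in Sec. IV. A sketch:

```python
import math

sp = math.sqrt(math.pi)
eta0, k0, mu0 = 1.0 / (2.0 * sp), 2.0 / sp, 0.0   # quasielastic coefficients (Sec. IV)

C1 = (8.0 * (k0 - mu0) - 3.0 * eta0) / (8.0 * (k0 - mu0) - 2.0 * eta0)  # Eq. (34)
C2 = (2.0 * (k0 - mu0) + eta0) / (2.0 * (k0 - mu0) - eta0)              # Eq. (35)
print(C1, C2, C2 / C1 > 1.0)     # -> C1 = 29/30, C2 = 9/7, True

# Amplitude of the shearing state, Eq. (40), evaluated 10% above threshold
N, rho = 10_000, 0.005
L = math.sqrt(N / rho)
q_hat = math.pi / (rho ** 2 * L ** 2)
q = 1.1 * q_hat
lam1 = -4.0 * math.pi ** 2 * eta0 / L ** 2 + 2.0 * sp * rho ** 2 * q
A_hat = (L / (2.0 * math.pi)) * math.sqrt(lam1 / (eta0 * C1))
print(A_hat)                      # matches sqrt((30/29) * 0.1) ~ 0.32
```

The numerical amplitude agrees with the closed form to machine precision, which is a useful cross-check on the sign and $`L`$-dependence of the cubic term in Eqs. (32)-(33).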
Using the above expressions for the hydrodynamic fields, the energy density profile reads $$e=\rho _H\left(1+\widehat{A}^2\mathrm{cos}(4\pi x/L)\right)$$ (45) Assuming the molecular chaos hypothesis, the mean collision rate, $`\overline{\nu }`$, and dissipation rate, $`\overline{\omega }`$, can be computed as well: $`\overline{\nu }`$ $`=`$ $`{\displaystyle \frac{2\sqrt{\pi }}{V\rho _H}}{\displaystyle \int \rho ^2\sqrt{T}𝑑V}=\nu _0\rho _H\left(1-{\displaystyle \frac{\widehat{A}^2}{2}}\right)`$ (46) $`\overline{\omega }`$ $`=`$ $`{\displaystyle \frac{1}{V}}{\displaystyle \int \omega 𝑑V}=\omega _0\rho _H^2\left(1-{\displaystyle \frac{3\widehat{A}^2}{2}}\right)`$ (47) These relations show that the global decrease of the temperature leads to corresponding decreases of the collision frequency and dissipation rate. It is important to note that the origin of the nonlinear coupling of slow modes lies in the viscous heating term and the state-dependence of the transport coefficients, and not in the usual convective derivatives ($`(𝐯\cdot \nabla )𝐯`$). In fact, as we have shown above, the shearing state produces a variation in the density and temperature fields that modifies locally the value of the transport coefficients. This effect, which is negligible in classical fluids, can become very important in granular fluids, mainly because of the lack of scale separation between the kinetic and hydrodynamic regimes. Contrary to normal fluids, here, the convective energy is comparable to the thermal energy. In fact, as will be shown in the MD simulations, in a well developed shearing state up to half of the total kinetic energy corresponds to the convective motion. The microscopic source of this phenomenon lies in the fact that only the relative energy is dissipated in binary collisions, i.e. the center of mass energy is conserved. In other words, the thermal energy is dissipated but the convective one is conserved.
This asymmetry in the energy dissipation mechanism is at the very origin of the shearing instability. ## IV Molecular dynamics simulations For the molecular dynamics simulations we have considered a system made of $`N=10000`$ hard disks with a global number density $`\rho _H=0.005`$. Inelastic collision rules are adopted, with a dissipativity varying from $`q=0.0`$ to $`q=0.12`$. The boundary conditions are periodic in all directions. A spatially homogeneous initial condition is adopted, with velocities sampled from an equilibrium (zero mean velocity) Maxwellian distribution. We note that the density is low enough so that the system remains within the low-density regime. The simulations have been performed in the rescaled time. Computationally, this is achieved by doing a normal IHS simulation (event driven molecular dynamics ), but at each collision the value of the kinetic energy is updated according to the energy dissipated. The instantaneous value of $`\gamma `$ is computed from the kinetic energy, allowing all the rescaled quantities to be evaluated. Note that the $`s`$-time can be integrated in the simulation because $`\gamma `$ is a piecewise constant function, thus allowing periodic measurements to be made in the system. Finally, to avoid roundoff errors, a real velocity rescaling is performed whenever the kinetic energy decreases by a given amount (typically $`10^{-7}`$ of the initial value). In each simulation, the collision frequency and temperature dissipation rate are computed with respect to the $`s`$-time, after the system has reached a stationary regime. In Fig. 1, the collision frequency and the temperature dissipation rate are presented as a function of the dissipation coefficient $`q`$. As expected, the shearing instability is associated with an abrupt decrease of these functions. It must be noted that the decrease of the collision frequency is more than $`30\%`$, which corresponds to a global decrease of the temperature of more than $`50\%`$.
This means that when the shearing is fully developed, about half the total kinetic energy is taken by the macroscopic motion. This phenomenon is typical of granular media and has no counterpart in classical fluids. To measure the critical point, $`q_0`$, we fit the collision frequency according to the following piecewise function $$\overline{\nu }=\{\begin{array}{cc}a_0\hfill & q\le q_0\hfill \\ a_0+a_1(q-q_0)+a_2(q-q_0)^2\hfill & q>q_0\hfill \end{array}$$ (48) obtaining $`q_0`$ $`=`$ $`0.0686`$ (49) $`a_0`$ $`=`$ $`0.0178`$ (50) $`a_1`$ $`=`$ $`0.167`$ (51) $`a_2`$ $`=`$ $`1.03`$ (52) Similarly, the dissipation rate is fitted according to $$\overline{\omega }=\{\begin{array}{cc}b_0q\hfill & q\le q_0\hfill \\ b_0q+b_1(q-q_0)q+b_2(q-q_0)^2q\hfill & q>q_0\hfill \end{array}$$ (53) Using $`q_0=0.0686`$, one finds $`b_0`$ $`=`$ $`0.000166`$ (54) $`b_1`$ $`=`$ $`0.00466`$ (55) $`b_2`$ $`=`$ $`0.0546`$ (56) To compare these results with our theoretical predictions we need the explicit form of the transport coefficients, up to the critical dissipativity. Unfortunately, there are no known expressions for them in the case of the $`2d`$ IHS model in the low-density regime. 
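As an illustration, a fit of the piecewise form of Eq. (48) can be carried out with `scipy.optimize.curve_fit`. The data below are a synthetic stand-in (the actual MD measurements are not reproduced here), and negative slope coefficients are an assumption chosen so that the collision frequency decreases past the critical point, as observed in the simulations:

```python
import numpy as np
from scipy.optimize import curve_fit

def nu_model(q, q0, a0, a1, a2):
    """Piecewise model of Eq. (48): constant below q0, quadratic in (q - q0) above."""
    dq = np.where(q > q0, q - q0, 0.0)
    return a0 + a1 * dq + a2 * dq**2

# Synthetic stand-in for the measured collision frequency vs. dissipativity q.
rng = np.random.default_rng(1)
q = np.linspace(0.0, 0.12, 61)
nu_true = nu_model(q, 0.0686, 0.0178, -0.167, -1.03)  # signs: nu decreases past q0
nu_meas = nu_true + rng.normal(0.0, 2e-5, q.size)

popt, pcov = curve_fit(nu_model, q, nu_meas, p0=(0.06, 0.018, -0.1, -1.0))
print("fitted critical point q0 = %.4f" % popt[0])
```

The same procedure, with the model of Eq. (53), yields the $`b_i`$ coefficients.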
However, as the critical dissipativity is small we can use the quasielastic approximation for the transport coefficients (that is, taking the first non-trivial order in $`q`$) $`\omega _0`$ $`=`$ $`4\sqrt{\pi }q`$ (57) $`\eta _0`$ $`=`$ $`{\displaystyle \frac{1}{2\sqrt{\pi }}}`$ (58) $`k_0`$ $`=`$ $`2/\sqrt{\pi }`$ (59) $`\mu _0`$ $`=`$ $`0`$ (60) The critical dissipativity and the amplitude of the shearing state are given by $`\widehat{q}`$ $`=`$ $`{\displaystyle \frac{\pi }{\rho _H^2L^2}}`$ (61) $`\widehat{A}`$ $`=`$ $`\rho _HL\sqrt{{\displaystyle \frac{30}{29\pi }}\delta q}`$ (62) $`=`$ $`\sqrt{{\displaystyle \frac{30}{29}}{\displaystyle \frac{\delta q}{\widehat{q}}}}`$ (63) For the presented simulation, the predicted critical dissipativity is $$\widehat{q}=0.0628$$ (64) which shows a discrepancy of $`8\%`$ with respect to the observed value. This difference is consistent with the adopted approximations. The predicted values for $`a_0`$, $`a_1`$, $`b_0`$, $`b_1`$ (cf. Eqs. 46, 47, 61, and 63) are $`a_0`$ $`=`$ $`0.0177`$ (65) $`a_1`$ $`=`$ $`0.146`$ (66) $`b_0`$ $`=`$ $`0.000177`$ (67) $`b_1`$ $`=`$ $`0.00438`$ (68) which are also consistent with the adopted approximations. Since the system is periodic, the developed convective pattern can diffuse in the direction perpendicular to the flow (the phases of the complex amplitudes $`A_i`$ are arbitrary due to Galilean invariance). As a result, the average hydrodynamic fields remain vanishingly small, mainly because of “destructive” interference. To overcome this difficulty we have performed another series of simulations, keeping periodic boundary conditions in the vertical direction, while introducing a pair of stress-free and perfectly insulating parallel walls in the horizontal direction (in a collision with a wall the tangential velocity is conserved whereas the normal one is inverted). As a consequence, the total vertical momentum is conserved; it is simply set to zero initially. 
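The predicted numbers quoted above can be reproduced directly; a minimal check (the 1/2 and 3/2 slope factors follow from inserting $`\widehat{A}^2=\frac{30}{29}\delta q/\widehat{q}`$ into Eqs. (46) and (47)):

```python
import math

# System of the simulations: N = 10000 disks at number density rho_H = 0.005,
# so the box side is L = sqrt(N / rho_H).
N, rho_H = 10000, 0.005
L = math.sqrt(N / rho_H)

q_hat = math.pi / (rho_H**2 * L**2)       # Eq. (61): predicted critical dissipativity
a0 = 2.0 * math.sqrt(math.pi) * rho_H     # homogeneous collision frequency, Eq. (46)
b0 = 4.0 * math.sqrt(math.pi) * rho_H**2  # from omega_0 = 4*sqrt(pi)*q, Eq. (57)

# Slope magnitudes just above threshold, using A^2 = (30/29)*(q - q_hat)/q_hat:
a1 = a0 * (30.0 / 29.0) * 0.5 / q_hat
b1 = b0 * (30.0 / 29.0) * 1.5 / q_hat

print(f"q_hat = {q_hat:.4f}")            # 0.0628
print(f"a0 = {a0:.4f}, a1 = {a1:.3f}")   # 0.0177, 0.146
print(f"b0 = {b0:.6f}, b1 = {b1:.5f}")   # 0.000177, 0.00438
```

The output matches the predicted values in Eqs. (64)-(68).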
The nonlinear analysis for this case is similar to the periodic one, except that here the direction of the flow pattern always remains parallel to the walls. Furthermore, the unstable wavevector is now $`k=\pi /L`$, because of the fixed boundary conditions. As a result all the previous predictions remain valid, except that everywhere $`L`$ must be replaced by $`2L`$. We have used the same number of particles and density for this series of simulations, but, of course, the different boundary conditions produce a new critical dissipativity. Performing the same analysis as before, the measured critical point and fit parameters turn out to be $`q_0`$ $`=`$ $`0.0163`$ (69) $`a_0`$ $`=`$ $`0.0178`$ (70) $`a_1`$ $`=`$ $`0.562`$ (71) $`a_2`$ $`=`$ $`8.93`$ (72) $`b_0`$ $`=`$ $`0.000175`$ (73) $`b_1`$ $`=`$ $`0.0179`$ (74) $`b_2`$ $`=`$ $`1.06`$ (75) while the predicted ones are $`\widehat{q}`$ $`=`$ $`0.0157`$ (76) $`a_0`$ $`=`$ $`0.0177`$ (77) $`a_1`$ $`=`$ $`0.583`$ (78) $`b_0`$ $`=`$ $`0.000177`$ (79) $`b_1`$ $`=`$ $`0.0175`$ (80) Eqs. 42, 44, and 45, once $`L`$ is replaced by $`2L`$, indicate that the perturbation of the transverse momentum density ($`𝐣=\rho 𝐯`$) has a wavevector equal to $`k_x=\pi /L`$, while the density and energy density have wavevectors $`k_x=2\pi /L`$. In the simulations we computed the amplitudes of these Fourier modes using the microscopic definitions for the particle, momentum, and energy densities. Figs. 2, 3, and 4 show the predicted and computed Fourier mode amplitudes. The predictions are in good agreement with the simulations in the neighborhood of the critical point, showing that not only are average quantities like the collision frequency well predicted, but also that the whole hydrodynamic picture is correct. ## V Conclusions Taking advantage of the lack of an energy scale in the IHS model, a rescaled time formalism was introduced that allows us to study the homogeneous cooling state as a nonequilibrium steady state. 
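The wall-geometry predictions follow from the same expressions with $`L`$ replaced by $`2L`$; a quick check of the quoted values:

```python
import math

# Same system as before, but with L -> 2L for the wall geometry
# (the unstable wavevector becomes k = pi/L).
N, rho_H = 10000, 0.005
L_eff = 2.0 * math.sqrt(N / rho_H)

q_hat = math.pi / (rho_H**2 * L_eff**2)   # one quarter of the periodic-case value
a0 = 2.0 * math.sqrt(math.pi) * rho_H
b0 = 4.0 * math.sqrt(math.pi) * rho_H**2
a1 = a0 * (30.0 / 29.0) * 0.5 / q_hat
b1 = b0 * (30.0 / 29.0) * 1.5 / q_hat

print(f"q_hat = {q_hat:.4f}")           # 0.0157
print(f"a1 = {a1:.2f}, b1 = {b1:.4f}")  # ~0.58, 0.0175
```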
Using a hydrodynamic description for granular media written in terms of a rescaled time variable, the shearing instability has been studied in the nonlinear regime. It has been shown that the shearing state is the stable solution, and its amplitude has been computed. The appearance of the velocity field causes part of the kinetic energy to pass from the kinetic to the hydrodynamic scale. In usual fluids this redistribution of the energy is negligible, but in granular fluids it can represent an important fraction of the total energy. This phenomenon is a manifestation of a global property of granular fluids: there is, in general, no clear separation between the kinetic and the hydrodynamic regimes. This could call into question the validity of a hydrodynamic description. Nevertheless, at small values of the dissipativity coefficient, predictions based on the nonlinear hydrodynamic equations are in excellent agreement with molecular dynamics simulations. Both the value of the critical dissipativity and the behavior after the instability has developed are well predicted. This is a remarkable result which shows again how robust the hydrodynamic fluid equations are when they are tested at time and length scales where their validity could be questioned. ## ACKNOWLEDGMENTS This work is supported by a European Commission DG 12 Grant PSS\*1045 and by a grant from FNRS Belgium. One of us (R.S.) acknowledges a grant from MIDEPLAN.
# Unitary transformation approach for the trapped ion dynamics ## I Introduction Trapped ions interacting with laser beams have become an extremely interesting system for the investigation of fundamental physics, e.g. the generation of states of the harmonic oscillator, as well as for potential applications such as precision spectroscopy and quantum computation. The theoretical treatment of the interaction of a trapped ion with one or several laser beams constitutes a complicated problem, since the full Hamiltonian is highly nonlinear. Therefore approximations are normally required, such as, for instance, the Lamb-Dicke approximation, in which the ion is considered to be confined within a region much smaller than the laser wavelength. This makes it possible to obtain Hamiltonians of the Jaynes-Cummings type, in which the center-of-mass motion of the trapped ion plays the role of the field mode in cavity QED. Recently, a new approach to this problem has been suggested, based on the application of a unitary transformation which linearizes the total ion-laser Hamiltonian. Moreover, that approach yields a Jaynes-Cummings-type Hamiltonian including counter-rotating terms. A rotating wave approximation (RWA) may be performed provided we are in the special “resonant” regime, $`\mathrm{\Omega }\approx \nu `$, where $`\mathrm{\Omega }`$ is basically related to the laser intensity and $`\nu `$ is the frequency of the ion in the magneto-optical trap. The RWA performed this way allows the exact diagonalization of the transformed Hamiltonian, but on the other hand poses an upper limit on the possible values of the Lamb-Dicke parameter. In this paper we propose a method of diagonalization of the ion-laser Hamiltonian entirely based on unitary transformations. As a first step we linearize the Hamiltonian, as is done in . The resulting Hamiltonian is further transformed, and cast in a form that is suitable for diagonalization. 
Although the diagonalization procedure is an approximate one, based on recursion relations, it does not rely on an expansion in any of the parameters of the problem. Therefore, the solution found does not depend on any specific regime, such as the Lamb-Dicke regime, for instance. This paper is organized as follows: in section 2 we obtain the linearized form of the full Hamiltonian; in section 3 we show how the resulting Hamiltonian may be further transformed in order to be diagonalized; in section 4 we summarize our conclusions. ## II Linearization of the Hamiltonian We consider a single trapped ion interacting with two (classical) laser plane waves (frequencies $`\omega _1`$ and $`\omega _2`$), in a Raman-type configuration ($`\omega _L=\omega _1-\omega _2`$). The lasers effectively drive the electric-dipole forbidden transition $`|g\rangle \leftrightarrow |e\rangle `$ (frequency $`\omega _0`$), with $`\delta =\omega _0-\omega _L`$. We end up with an effective two-level system for the internal degrees of freedom of the atom coupled to the vibrational motion, after adiabatically eliminating the third level in the Raman configuration. This situation is described, in the atomic basis, by the following effective Hamiltonian $$\widehat{H}=\hbar \left(\begin{array}{cc}\nu \widehat{n}+\frac{\delta }{2}& \mathrm{\Omega }e^{i\eta \widehat{X}}\\ \mathrm{\Omega }e^{-i\eta \widehat{X}}& \nu \widehat{n}-\frac{\delta }{2}\end{array}\right),$$ (1) where $`\widehat{X}=\widehat{a}+\widehat{a}^{\dagger }`$, $`\widehat{n}=\widehat{a}^{\dagger }\widehat{a}`$, $`\eta `$ is the Lamb-Dicke parameter, and $`\widehat{a}^{\dagger }`$ ($`\widehat{a}`$) is the ion’s vibrational creation (annihilation) operator. 
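For concreteness, the Hamiltonian (1) can be represented numerically in a truncated Fock basis (a sketch with $`\hbar =1`$; the truncation dimension and parameter values are illustrative only, and eigenvalues near the truncation edge are unreliable):

```python
import numpy as np
from scipy.linalg import expm

dim = 40                                     # Fock-space truncation (illustrative)
a = np.diag(np.sqrt(np.arange(1, dim)), 1)   # annihilation operator
ad = a.conj().T                              # creation operator
n = ad @ a
X = a + ad

nu, Omega, delta, eta = 1.0, 0.5, 0.0, 0.2   # illustrative parameters
E = expm(1j * eta * X)                       # exp(i * eta * X), unitary

# 2x2 block structure in the atomic basis; Hermitian by construction.
I = np.eye(dim)
H = np.block([[nu * n + 0.5 * delta * I, Omega * E],
              [Omega * E.conj().T,       nu * n - 0.5 * delta * I]])

assert np.allclose(H, H.conj().T)
evals = np.linalg.eigvalsh(H)
print("lowest eigenvalues:", np.round(evals[:4], 3))
```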
By applying the unitary transformation $$\widehat{T}=\frac{1}{\sqrt{2}}\left(\begin{array}{cc}\widehat{D}^{\dagger }(\beta )& \widehat{D}^{\dagger }(\beta )\\ -\widehat{D}(\beta )& \widehat{D}(\beta )\end{array}\right),$$ (2) to the Hamiltonian in equation (1), where $`\widehat{D}(\beta )=\mathrm{exp}(\beta \widehat{a}^{\dagger }-\beta ^{\ast }\widehat{a})`$ is Glauber’s displacement operator, with $`\beta =i\eta /2`$, we obtain the following transformed Hamiltonian $$\widehat{\mathcal{H}}\equiv \widehat{T}^{\dagger }\widehat{H}\widehat{T}=\hbar \left(\begin{array}{cc}\nu \widehat{n}+\nu \frac{\eta ^2}{4}+\mathrm{\Omega }& i\lambda \widehat{Y}-\frac{\delta }{\eta \nu }\\ i\lambda \widehat{Y}-\frac{\delta }{\eta \nu }& \nu \widehat{n}+\nu \frac{\eta ^2}{4}-\mathrm{\Omega }\end{array}\right),$$ (3) where $`\widehat{Y}=\widehat{a}-\widehat{a}^{\dagger }`$, and $`\lambda =\frac{1}{2}\eta \nu `$. This result holds for any value of the Lamb-Dicke parameter $`\eta `$. We have thus transformed our original Hamiltonian in equation (1) into a Jaynes-Cummings-type Hamiltonian, and therefore its exact diagonalization is possible provided we perform the rotating wave approximation (RWA). However this imposes limitations on the values of the Lamb-Dicke parameter. Because the “effective coupling constant” of the transformed Hamiltonian is given by $`\lambda =\frac{1}{2}\eta \nu `$, we must have $`\frac{1}{2}\eta \nu \ll \nu `$ if we want to neglect the counter-rotating terms (RWA). Moreover, we still have to be “in resonance”, or $`\nu \approx 2\mathrm{\Omega }`$. In this regime the trapped ion-laser system, suitably prepared, exhibits long-time-scale revivals (superrevivals), as discussed in . Here we are not going to perform the RWA. We propose instead a way of further transforming the Hamiltonian in equation (3) in order to allow its diagonalization in an approximate, although nonperturbative, way. In what follows we present a novel method of transforming the Hamiltonian, showing how it may be diagonalized. 
## III Diagonalization of the Hamiltonian We have succeeded, via the transformation in Equation (2), in casting the ion-laser Hamiltonian into the more tractable form (3). Nevertheless, its exact diagonalization has not been achieved yet. The problem is either treated exactly after performing the RWA, or by means of approximate methods such as perturbative expansions, or even numerically . Here we propose a different approach to the problem, based on unitary transformations. We are going to restrict ourselves to the resonant case $`\delta =0`$. After discarding the constant term $`\frac{1}{4}\hbar \nu \eta ^2`$, which just represents an overall phase factor, we obtain $$\widehat{\mathcal{H}}=\hbar \left(\begin{array}{cc}\nu \widehat{n}+\mathrm{\Omega }& i\lambda \widehat{Y}\\ i\lambda \widehat{Y}& \nu \widehat{n}-\mathrm{\Omega }\end{array}\right),$$ (4) Now we define the following unitary transformations $$\widehat{T}_1=\frac{1}{\sqrt{2}}\left(\begin{array}{cc}1& 1\\ -1& 1\end{array}\right),$$ (5) and $$\widehat{T}_2=\left(\begin{array}{cc}(-1)^{\widehat{n}}& 0\\ 0& 1\end{array}\right).$$ (6) These transformations have interesting effects when applied to the Jaynes-Cummings-type Hamiltonian in (4). First we apply $`\widehat{T}_1`$, which yields $$\frac{1}{2}\left(\begin{array}{cc}1& -1\\ 1& 1\end{array}\right)\left(\begin{array}{cc}\mathrm{\Omega }& i\lambda \widehat{Y}\\ i\lambda \widehat{Y}& -\mathrm{\Omega }\end{array}\right)\left(\begin{array}{cc}1& 1\\ -1& 1\end{array}\right)=\left(\begin{array}{cc}-i\lambda \widehat{Y}& \mathrm{\Omega }\\ \mathrm{\Omega }& i\lambda \widehat{Y}\end{array}\right),$$ (7) i.e., the operators are moved to the diagonal. 
Then we apply $`\widehat{T}_2`$ to the matrix which resulted from the operation above, such that $$\left(\begin{array}{cc}(-1)^{\widehat{n}}& 0\\ 0& 1\end{array}\right)\left(\begin{array}{cc}-i\lambda \widehat{Y}& \mathrm{\Omega }\\ \mathrm{\Omega }& i\lambda \widehat{Y}\end{array}\right)\left(\begin{array}{cc}(-1)^{\widehat{n}}& 0\\ 0& 1\end{array}\right)=\left(\begin{array}{cc}i\lambda \widehat{Y}& \mathrm{\Omega }(-1)^{\widehat{n}}\\ \mathrm{\Omega }(-1)^{\widehat{n}}& i\lambda \widehat{Y}\end{array}\right),$$ (8) where we have used the fact that $`(-1)^{\widehat{n}}\widehat{a}(-1)^{\widehat{n}}=-\widehat{a}`$ (and the same for $`\widehat{a}^{\dagger }`$). $`\widehat{Y}`$ now appears multiplied by the identity matrix. At this stage we apply $`\widehat{T}_1`$ again, rearranging the terms in such a way that we obtain a Hamiltonian diagonal in the atomic state basis, or $$\frac{1}{2}\left(\begin{array}{cc}1& -1\\ 1& 1\end{array}\right)\left(\begin{array}{cc}i\lambda \widehat{Y}& \mathrm{\Omega }(-1)^{\widehat{n}}\\ \mathrm{\Omega }(-1)^{\widehat{n}}& i\lambda \widehat{Y}\end{array}\right)\left(\begin{array}{cc}1& 1\\ -1& 1\end{array}\right)=i\lambda \widehat{Y}+\left(\begin{array}{cc}-\mathrm{\Omega }(-1)^{\widehat{n}}& 0\\ 0& \mathrm{\Omega }(-1)^{\widehat{n}}\end{array}\right).$$ (9) Diagonalization of the total Hamiltonian becomes an easy task now, because the atomic part is already in diagonal form. The total transformed Hamiltonian may be conveniently expressed in operator form as $$\widehat{\mathcal{H}}^{\mathrm{\prime }}=\widehat{T}_1^{\dagger }\widehat{T}_2^{\dagger }\widehat{T}_1^{\dagger }\widehat{T}^{\dagger }\widehat{H}\widehat{T}\widehat{T}_1\widehat{T}_2\widehat{T}_1=\hbar \left(\nu \widehat{n}+i\lambda \widehat{Y}-\mathrm{\Omega }\sigma _z(-1)^{\widehat{n}}\right).$$ (10) A general expression for the eigenstates of the Hamiltonian in expression (10) is $$|\mathrm{\Psi }_l\rangle =|\phi _l^g\rangle |g\rangle +|\phi _l^e\rangle |e\rangle ,$$ (11) with $`|\phi _l^e\rangle =\sum _nC_{n,l}^e|n\rangle `$ and $`|\phi _l^g\rangle =\sum _nC_{n,l}^g|n\rangle `$. 
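The algebra above is easy to verify numerically in a truncated Fock basis. The block below checks the identity $`(-1)^{\widehat{n}}\widehat{a}(-1)^{\widehat{n}}=-\widehat{a}`$ and that the $`\widehat{T}_1,\widehat{T}_2,\widehat{T}_1`$ sequence removes the off-diagonal atomic blocks (a sketch with $`\hbar =1`$ and illustrative parameters; the placement of the minus sign inside $`\widehat{T}_1`$ is an assumption of this reconstruction):

```python
import numpy as np

dim = 30
a = np.diag(np.sqrt(np.arange(1, dim)), 1)
P = np.diag((-1.0) ** np.arange(dim))        # parity operator (-1)^n
assert np.allclose(P @ a @ P, -a)            # the identity used above

I = np.eye(dim)
Y = a - a.conj().T
lam, Omega = 0.1, 0.5                        # illustrative values

# Eq. (4) without the nu*n term, which commutes with T1 and T2:
H4 = np.block([[Omega * I, 1j * lam * Y],
               [1j * lam * Y, -Omega * I]])
T1 = np.block([[I, I], [-I, I]]) / np.sqrt(2.0)
T2 = np.block([[P, 0 * I], [0 * I, I]])

Hp = T1.conj().T @ T2.conj().T @ T1.conj().T @ H4 @ T1 @ T2 @ T1

assert np.allclose(Hp[:dim, dim:], 0)        # atomic off-diagonal blocks vanish
assert np.allclose(Hp[dim:, :dim], 0)
# Diagonal blocks reproduce  i*lam*Y -/+ Omega*(-1)^n, as in Eq. (10):
assert np.allclose(Hp[:dim, :dim], 1j * lam * Y - Omega * P)
assert np.allclose(Hp[dim:, dim:], 1j * lam * Y + Omega * P)
print("transformed Hamiltonian is block diagonal")
```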
From the eigenvalue equation $`\widehat{\mathcal{H}}^{\mathrm{\prime }}|\mathrm{\Psi }_l\rangle =\mathrm{\Lambda }_l|\mathrm{\Psi }_l\rangle `$ we obtain $$\left(\nu \widehat{n}+i\lambda \widehat{Y}-\mathrm{\Omega }(-1)^{\widehat{n}}\right)\sum _nC_{n,l}^e|n\rangle =\mathrm{\Lambda }_l\sum _nC_{n,l}^e|n\rangle ,$$ (12) and $$\left(\nu \widehat{n}+i\lambda \widehat{Y}+\mathrm{\Omega }(-1)^{\widehat{n}}\right)\sum _nC_{n,l}^g|n\rangle =\mathrm{\Lambda }_l\sum _nC_{n,l}^g|n\rangle .$$ (13) The expansion coefficients ($`C`$’s) may be obtained by means of recursion relations. For instance, for the coefficients $`C_{n,l}^e`$, from equation (12) we may write $$\sum _n\left(\nu nC_{n,l}^e|n\rangle +i\lambda C_{n,l}^e\sqrt{n}|n-1\rangle -i\lambda C_{n,l}^e\sqrt{n+1}|n+1\rangle -\mathrm{\Omega }(-1)^nC_{n,l}^e|n\rangle \right)=\mathrm{\Lambda }_l\sum _nC_{n,l}^e|n\rangle .$$ (14) After rearranging some of the terms, we have the following relation between the coefficients $$C_{n+2,l}^e=\frac{\sqrt{n+1}}{\sqrt{n+2}}C_{n,l}^e-\frac{i}{\lambda }\frac{\left[\mathrm{\Lambda }_l+\mathrm{\Omega }(-1)^{n+1}-\nu (n+1)\right]}{\sqrt{n+2}}C_{n+1,l}^e.$$ (15) By multiplying expression (14) by $`\langle 0|`$, we also obtain the first coefficient $$C_{1,l}^e=-i\frac{\mathrm{\Omega }+\mathrm{\Lambda }_l}{\lambda }C_{0,l}^e.$$ (16) Similar relations may be found for the coefficients $`C_{n,l}^g`$. Of course the normalization condition $`\sum _n|C_{n,l}^g|^2+\sum _n|C_{n,l}^e|^2=1`$ should be satisfied. Having diagonalized the transformed Hamiltonian, the evolution of the state vector becomes a trivial task. For that we have to express a generic state $`|\psi \rangle `$ in terms of the basis states, or $$|\psi \rangle =\sum _lA_l|\mathrm{\Psi }_l\rangle .$$ (17) The choice of a specific initial state determines the set of coefficients $`A_l`$. 
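The recursion (15)-(16) is straightforward to implement; the sketch below generates the $`C_{n,l}^e`$ for a trial value of $`\mathrm{\Lambda }_l`$ (illustrative only — in an actual calculation $`\mathrm{\Lambda }_l`$ must be adjusted so that a normalizable, truncation-insensitive set of coefficients results):

```python
import numpy as np

nu, Omega, eta = 1.0, 0.5, 0.2   # illustrative parameters
lam = 0.5 * eta * nu
Lam = 0.3                         # trial eigenvalue, not an actual root
nmax = 15

C = np.zeros(nmax, dtype=complex)
C[0] = 1.0                                    # normalization imposed at the end
C[1] = -1j * (Omega + Lam) / lam * C[0]       # Eq. (16)
for n in range(nmax - 2):                     # Eq. (15)
    C[n + 2] = np.sqrt(n + 1.0) / np.sqrt(n + 2.0) * C[n] \
        - (1j / lam) * (Lam + Omega * (-1.0) ** (n + 1) - nu * (n + 1)) \
        / np.sqrt(n + 2.0) * C[n + 1]

C /= np.linalg.norm(C)                        # impose sum_n |C_n|^2 = 1
print("ratio C1/C0 =", C[1] / C[0])
```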
For instance, if we initially prepare the trapped ion in the state $`|\mathrm{\Psi }(0)\rangle =\frac{1}{\sqrt{2}}|\beta \rangle \left(|g\rangle -|e\rangle \right)`$, the transformed state will read $$|\psi (0)\rangle =\widehat{T}_1^{\dagger }\widehat{T}_2^{\dagger }\widehat{T}_1^{\dagger }\widehat{T}^{\dagger }|\mathrm{\Psi }(0)\rangle =|0\rangle |e\rangle =\sum _lA_l|\mathrm{\Psi }_l\rangle .$$ (18) The coefficients $`A_l`$ may be determined by multiplying the left-hand side of Equation (18) by $`\langle n|\langle e|`$ and $`\langle n|\langle g|`$, so that we obtain a set of coupled equations for the coefficients $`A_l`$. It is now easy to calculate the time evolution of the state vector, namely $$|\psi (t)\rangle =\mathrm{exp}(-i\widehat{\mathcal{H}}^{\mathrm{\prime }}t/\hbar )\sum _lA_l|\mathrm{\Psi }_l\rangle =\sum _lA_l\mathrm{exp}(-i\mathrm{\Lambda }_lt/\hbar )|\mathrm{\Psi }_l\rangle .$$ (19) The next step is to apply to $`|\psi (t)\rangle `$ the sequence of transformations $`\widehat{T}_1^{\dagger }\widehat{T}_2^{\dagger }\widehat{T}_1^{\dagger }\widehat{T}^{\dagger }`$ backwards in order to recover the desired solution for the state vector describing the trapped ion interacting with the laser beams. ## IV Conclusions We have presented a novel way of treating the problem of the interaction of trapped ions with laser beams, entirely based on unitary transformations. The system Hamiltonian is successively modified and cast in a more tractable form. We propose a method of diagonalization of the transformed Hamiltonian by means of recursion relations for the expansion coefficients in the Fock state basis. Despite not being an exact diagonalization, there is no need to perform any approximation, such as taking the Lamb-Dicke limit, for instance. Interesting possibilities are opened up for the investigation of such a system in different regimes, given that there are no restrictions on the relevant parameters and no approximations have been made so far. ###### Acknowledgements. 
This work was partially supported by Consejo Nacional de Ciencia y Tecnología (CONACyT), México, Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq).
# The Detection of Multimodal Oscillations on 𝛼 UMa ## 1 Introduction Over the past few decades, our understanding of the interior of the Sun – its thermodynamic structure, internal rotation, and dynamics – has been revolutionized by the technique of helioseismology, the study of the frequencies and amplitudes of seismic waves that penetrate deep into the solar interior (Leibacher et al. 1985; Duvall et al. 1988; Schou et al. 1998). The high quality of the modeling in the solar case is made possible by the large number ($`10^7`$) of modes visible in the Sun. Unfortunately, the lack of spatial resolution inherent in stellar observations limits the number of detectable modes in stars to only a few (those with low degree $`l`$). Nonetheless, successful detection of even a few modes has the potential to provide greatly improved values for fundamental stellar parameters such as mass, abundance, and age (Gough 1987; Brown et al. 1994). While oscillations have been successfully detected on roAp stars, $`\delta `$ Scuti stars, and white dwarfs, these stars all show oscillation amplitudes several orders of magnitude larger than is expected from solar-like stars. More recently, however, a number of authors (Hatzes & Cochran 1994a, 1994b; Edmonds & Gilliland 1996) have reported the detection of periodic variability at levels of a few hundred $`\mathrm{m}\mathrm{s}^{-1}`$ and/or several millimagnitudes in a number of K giants. However, each detection is of only a single mode, and multimodal oscillations have yet to be unambiguously detected in any cool star other than the Sun. In March 1999, NASA launched the Wide-Field Infrared Explorer satellite, with the intent of carrying out an infrared sky survey to better understand galaxy evolution. Unfortunately, within days of launch the primary science instrument on WIRE failed due to loss of coolant. 
However, the satellite itself continues to function nearly perfectly, and in May we began a program of asteroseismology using WIRE’s onboard 52 mm aperture star camera. Below we report WIRE’s probable detection of multimodal oscillations on a cool star, which is the first of its kind. ## 2 Instrument Description and Data Reduction Technique The WIRE satellite star camera, a Ball Aerospace model CT-601, consists of a $`512\times 512`$ SITe CCD with $`27\mu `$m pixels (1 arc minute on the sky) and gain of $`15\mathrm{e}^{-}/\mathrm{ADU}`$, fed by a 52 mm, f/1.75 refractive optical system. The read noise of the system is 30 electrons. The pixel data is digitized with a 16 bit ADC, and up to five $`8\times 8`$ pixel fields can be digitized and transmitted to the ground. For this work, we used only one field, which permitted us to read out the CCD at a rate of 10 Hz. The stellar image is somewhat defocused, but essentially all of the light falls on the central $`2\times 2`$ pixel spot. The spectral response of the system is governed entirely by the response of the CCD plus the optical system, and is approximately equivalent to the V+R bandpass. The WIRE satellite is in a sun-synchronous orbit which, when combined with constraints imposed by the solar panels, limits pointing to two strips, each approximately $`\pm 30^{\circ }`$ wide, located perpendicular to the Earth-Sun line. In addition, continuous observing is not possible. Early in the program, we were able to obtain only 7 or 8 minutes’ worth of data during every 96-minute orbit. Later, after viewing constraints were relaxed (which involved scheduling software and onboard data-table modifications), observing efficiency rose to as much as 40 minutes per target per orbit – up to two targets are possible during any orbit. Thus, with integrations every 0.1 s, we acquired continuous data segments of up to 24,000 observations. 
Bias correction was performed on board the satellite, and further data reduction was accomplished using software developed at IPAC. Each $`8\times 8`$ pixel field was extracted by summing the flux in the central $`4\times 4`$ pixel region. Although scattered light in the field is limited by the one-meter sun-shield mounted on the star camera, we performed a background subtraction using the flux from a 20-pixel octagonal annulus surrounding the central region of the image. Finally, we converted our fluxes to an instrumental magnitude. After removal of thermal effects (see below), the rms noise in the final reduced time series was typically comparable to the 1.8 mmag noise expected from pure photon statistics, although non-Poisson noise is certainly present as well. The lack of a good flat field for the instrument was a concern, which we dealt with by rejecting those frames in which the mean centroid of the stellar image lay more than $`4\sigma `$ from the mean position, where $`\sigma `$ is the mean standard deviation of the image centroid. This criterion applied to approximately 3.8% of the observations, and the vast majority of these observations were at the start or end of an orbital segment. Overall, the satellite displayed excellent attitude stability during our run, with $`\sigma `$ measured to be typically 0.7 arc sec or less (Laher et al. 2000). ## 3 Observations and Data Analysis $`\alpha `$ UMa was the primary target for WIRE from 18 May through 23 June 1999. It is a K0 III star (Taylor 1999) with an angular diameter of $`6.79\mathrm{mas}`$ (Hall 1996; Bell 1993). At the Hipparcos distance of 38 pc, this corresponds to a stellar radius of $`28\mathrm{R}_{\odot }`$, in substantial agreement with the earlier value of $`25\mathrm{R}_{\odot }`$ derived by Bell (1993). 
The effective temperature of the star is variously reported as $`T_e=3970\mathrm{K}`$ to $`T_e=4660\mathrm{K}`$ (Cayrel de Stroebel 1992), with the latter value being the most recent (Taylor 1999). $`\alpha `$ UMa is a member of a binary system, with a total system mass of $`5.94\mathrm{M}_{\odot }`$ (Soderhjelm 1999). The spectral type of $`\alpha `$ UMa B is somewhat unclear, with F7 V most often cited in the literature (see, e.g., the SAO catalog). However, on the basis of IUE observations, Kondo, Morgan, & Modisette (1977) estimate the secondary to be “late A” (see also Ayres, Marstad, & Linsky 1981). In either case, however, the secondary makes only a small contribution ($`6\%`$) to the total system luminosity, and should show oscillation frequencies quite different from those of the primary. During the observation period, WIRE made a total of $`4,036,448`$ observations of $`\alpha `$ UMa, after removal of those observations with poor pointing characteristics. The data were first phased at the period of the spacecraft, to determine if any obvious instrumental periodicities existed. This phasing showed the existence of a strong sinusoidal variation, with amplitude $`\sim 8`$ mmag. A thermistor is mounted on the star camera, and examination of data from this thermistor showed that these variations were in fact correlated with temperature variations of a few tenths of a degree. Although a thermoelectric cooler (TEC) is mounted on the CCD, the star tracker thermal design lowered the CCD temperature below the default setpoint, so the TEC never actually turned on. In order to reduce the impact of this signal on our data analysis, we prewhitened the data by fitting and subtracting from the entire time series a sinusoid constrained to have the satellite orbital period (the phase was allowed to vary). The best-fit sinusoid has an amplitude of 8.63 mmag, and its subtraction results in an rms residual of 2.05 mmag, compared to the 1.84 mmag expected from photon statistics. 
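Since the orbital period is known, a prewhitening fit of this kind is linear in the sine and cosine amplitudes and can be done by least squares; a sketch on synthetic data standing in for the actual time series:

```python
import numpy as np

P_orb = 96.0 * 60.0                  # satellite orbital period [s]
rng = np.random.default_rng(2)
t = np.sort(rng.uniform(0.0, 10 * P_orb, 5000))
y = 8.63e-3 * np.sin(2 * np.pi * t / P_orb + 0.7) + rng.normal(0.0, 2e-3, t.size)

# A sinusoid of fixed frequency with free amplitude and phase is linear in
# the coefficients of sin(wt) and cos(wt):
w = 2 * np.pi / P_orb
M = np.column_stack([np.sin(w * t), np.cos(w * t), np.ones_like(t)])
coef, *_ = np.linalg.lstsq(M, y, rcond=None)
amp = np.hypot(coef[0], coef[1])
resid = y - M @ coef
print(f"fitted amplitude = {amp * 1e3:.2f} mmag, rms residual = {resid.std() * 1e3:.2f} mmag")
```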
We also explored other means of fitting and removing the thermal signature, including high-order polynomial fits to the phased data. Such approaches, while more complex than sine fitting, did not lead to any appreciable improvement in the fit, and were therefore discarded. The use of any of these thermal fitting procedures did not affect the peaks in the amplitude spectrum described below. In fact, these peaks were visible even before the application of any fit to the thermal variation, though removal of the thermal signature, by removing the largest single peak, did help to render them more easily visible. Data from the first portion of the run (18 May - 7 June) were all in short ($`t<8\mathrm{minutes}`$) segments, which obviously extended over a short range of orbital phase ($`0.08`$). We were concerned that including these data in our sine fit would bias the result, so we decided to exclude them from our analysis. In addition, NASA responded to our expressed concern about the TEC by lowering the set point on 18 June, and the CCD behavior subsequent to this had not yet reached an equilibrium point before the end of our run. Thus, we excluded data taken after 18 June from the analysis as well, leaving only the 8 June – 18 June window of approximately $`9.2\times 10^5\mathrm{s}`$. Shortening the time series clearly adversely affects our frequency resolution, but we felt that confidence in the data quality was the overriding issue. The data that were subjected to analysis are shown in Figure 1, after removal of the thermal variation. Data were searched for periodicities using Discrete Fourier Transform (DFT; Foster 1996), Lomb-Scargle periodogram (Scargle 1982; Horne & Baliunas 1986), and epoch-folding techniques (Davies 1990) which are essentially equivalent to phase dispersion modulation (PDM; see Stellingwerf 1978; Schwarzenberg-Czerny 1998). 
The Scargle periodogram analysis was conducted as described in Scargle (1982; see also Hatzes & Cochran 1994a,b); the data were windowed using a Parzen function and the resulting periodogram was oversampled by a factor of 8 in frequency space. The DFT analysis was similarly oversampled, and implemented the CLEAN algorithm of Roberts, Lehar, & Dreher (1987) to remove alias peaks. When using the DFT, the data were windowed using a Parzen function, and 100 iterations of CLEAN were performed. Due to its relative slowness, epoch-folding analysis was conducted only within the frequency range of interest, as identified by the Scargle and DFT analysis, and was used only to aid in interpretation of the results from the other two algorithms. In the discussion that follows, we will concentrate on the Lomb-Scargle periodogram results, though, in general, the three techniques gave similar results. Figure 2 shows the window function for the time series. The upper frequency limit for the figure has been arbitrarily set at $`5\mathrm{mHz}`$ to enhance the visibility of the amplitude spectrum at frequencies near 1 mHz, while the lowest frequencies are shown in the inset. No significant features are present in the window function above 5 mHz. The large evenly spaced peaks correspond to the satellite orbital frequency and its aliases. The Lomb-Scargle periodogram for the time series is shown in Figure 3 on the same frequency scale as the window function. The low-frequency inset shows ten significant peaks, and the frequencies and amplitudes of these peaks are given in Table 1 (derived from Lorentzian fits), along with conservative formal error estimates derived from the half-width of the periodogram peaks. We note that periodograms of portions of the time series (halves and thirds) give results similar to that of the whole, with decreased frequency resolution, and that sine fits to these frequencies show coherent phasing in these different portions. 
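A minimal version of the periodogram step on gapped data can be sketched with `scipy.signal.lombscargle`; the synthetic series below only mimics the segmented structure of the WIRE sampling (the actual analysis also applied a Parzen window and 8x frequency oversampling):

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(3)
# 40-minute segments sampled every 10 s, once per 96-minute orbit, for 150 orbits:
t = np.concatenate([k * 96 * 60 + np.arange(0.0, 40 * 60, 10.0) for k in range(150)])
f_true = 2.0e-6                                  # a 2 muHz oscillation [Hz]
y = 1.0e-3 * np.sin(2 * np.pi * f_true * t) + rng.normal(0.0, 2e-3, t.size)

freqs = np.linspace(0.2e-6, 10e-6, 2000)         # trial frequencies [Hz]
pgram = lombscargle(t, y - y.mean(), 2 * np.pi * freqs)  # expects angular frequencies
f_peak = freqs[np.argmax(pgram)]
print(f"strongest peak at {f_peak * 1e6:.2f} muHz")
```

Because the gaps repeat at the orbital period, aliases of a low-frequency signal appear offset by the orbital frequency (~174 muHz), well outside the low-frequency grid used here.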
Unfortunately, the implementation of the on-board data collection on WIRE means that we lack simultaneous observations of a comparison star, and are thus essentially performing absolute photometry with an instrument not designed for that purpose. However, we have observed stars other than $`\alpha `$ UMa, and the sun-synchronous orbit of WIRE implies that most instrumental effects should be similar for all sources. The dashed line in the Figure 3 inset shows the periodogram from a time series of $`\alpha \mathrm{Leo}`$, a B7 V star not expected to show significant low-frequency oscillations. The $`\alpha `$ Leo data set, which was obtained from 23 May through 3 June 1999, consists of segments similar in length to those of $`\alpha `$ UMa, has similar rms noise to the $`\alpha `$ UMa time series, and was reduced in exactly the same manner. Our object here is not to perform analysis of the $`\alpha `$ Leo data (which might well benefit from a different approach than we have used for $`\alpha `$ UMa), but rather to show that the particular low frequency peaks in the periodogram of $`\alpha `$ UMa do not arise from either instrumental effects or the data reduction procedure itself. The larger peaks in the $`\alpha `$ Leo periodogram may arise from an imperfect removal of both long-term and orbital variations; none show coherent phasing across different segments of the time series. The dissimilarity of the two periodograms increases our confidence that the peaks visible in the $`\alpha `$ UMa periodogram are due to the star itself, although it is of course possible that the instrumental behavior changes significantly for different targets. As is apparent from Figures 2 and 3, the family of peaks visible at low frequencies is repeated at higher frequencies, which leads to the difficulty of determining which set of peaks is the correct one. 
We can easily eliminate peaks above $`200\mu \mathrm{Hz}`$ by examining the summed power spectrum of the individual orbital segments, which shows no significant power at these frequencies. We are therefore left with the problem of selecting between the set of peaks in the $`2`$–$`50\mu \mathrm{Hz}`$ range and the similar set around $`200\mu \mathrm{Hz}`$. We believe that the low-frequency peaks are the physical solution and the higher-frequency set an alias for the following two reasons:
1. The low-frequency peaks are always of larger amplitude than the alias peaks. While a resonance of stochastic noise with an alias frequency can enhance an individual alias peak such that it is larger than the corresponding true peak, this is unlikely to occur simultaneously for multiple peaks. We have performed simulations which confirm this reasoning.
2. Hatzes (1999, private communication) has searched for oscillations in $`\alpha `$ UMa using ground-based spectroscopic methods. He reports finding frequencies of 1.36 and 6.0 $`\mu \mathrm{Hz}`$ (the latter less convincingly), although his observing run was too short to lend much confidence to the exact values. While not identical to the frequencies we report here, these frequencies are certainly comparable to our lower-frequency peaks rather than to the alias peaks.

Though neither of these factors is conclusive on its own, we believe that together they indicate that we are on solid ground in interpreting the observed peaks in the amplitude spectrum as stellar oscillations. Of course, it remains possible that the observed variations are due to instrumental effects or to non-oscillatory stellar phenomena such as granulation.

## 4 Interpretation

Below we discuss the astrophysical implications of our results. A more detailed discussion, in the context of a complete stellar interiors model for $`\alpha `$ UMa A, can be found in Guenther et al. (1999).
### 4.1 Mode Frequencies and Spacings

The frequency of the fundamental mode is determined primarily by structure in the envelope, and so its determination requires a complete stellar model. However, we can easily determine the range in which it should lie. The fundamental period $`P_0`$ is given by (Christy 1966; 1968) $$Q_0=P_0\sqrt{\frac{\rho }{\rho _{\mathrm{\odot }}}}$$ (1) where $`Q_0`$ should lie between the value of 0.038 for a polytrope with $`\gamma =4/3`$ and 0.116 (for $`\gamma =5/3`$). Using the value $`R=28R_{\mathrm{\odot }}`$ from interferometry and the mass $`M\simeq 4M_{\mathrm{\odot }}`$ appropriate to a K0 giant yields a fundamental-mode period of between 2.8 and 8.6 days. The lowest frequency that we see corresponds to a period of 6.35 days, so we identify it as the fundamental mode for $`\alpha `$ UMa. As noted above, the average mode spacing for the first 8 modes is $`2.94\mu \mathrm{Hz}`$. The last two modes have much larger spacings, which we interpret as signifying that we are not detecting all of the possible oscillation modes for the star, presumably because they are not excited to large amplitudes. The large separation $`\mathrm{\Delta }\nu _0`$ is related to the mean stellar density, as shown by Cox (1980): $$\mathrm{\Delta }\nu _0=135\left(\frac{\rho }{\rho _{\mathrm{\odot }}}\right)^{1/2}\mu \mathrm{Hz}$$ (2) Using the values appropriate to a typical K0 giant yields a predicted spacing of $`1.82\mu \mathrm{Hz}`$, about half the observed value. Once again, this discrepancy can be accounted for by assuming that not all modes are excited. In particular, if only even- or odd-valued radial $`n`$ modes are excited, we would expect the observed large separation to be twice the predicted value. The simplest explanation is then that only the $`l=0`$ modes are excited, and the frequencies we observe correspond to radial oscillations of the star.
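The arithmetic behind Eqs. (1) and (2) is easy to check directly with the values quoted in the text (R = 28 solar radii, M of about 4 solar masses); a minimal sketch:

```python
import numpy as np

# Mean density relative to solar: (rho/rho_sun) = (M/M_sun) / (R/R_sun)^3.
rho = 4.0 / 28.0**3

# Eq. (1): P0 = Q0 / sqrt(rho/rho_sun), with Q0 in days.
P0_min = 0.038 / np.sqrt(rho)      # gamma = 4/3 polytrope, ~2.8 d
P0_max = 0.116 / np.sqrt(rho)      # gamma = 5/3 polytrope, ~8.6 d

# Eq. (2): large separation in microHz, ~1.82 uHz as quoted.
dnu = 135.0 * np.sqrt(rho)
```

The observed 6.35-day period falls inside the predicted fundamental-mode range, and the predicted spacing comes out at the 1.82 μHz quoted in the text.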
### 4.2 Mode Amplitudes

The Kjeldsen & Bedding (1995) scaling law $$\delta L/L=\frac{L/L_{\mathrm{\odot }}}{(\lambda /550\mathrm{nm})(T_e/5777\mathrm{K})^2(M/M_{\mathrm{\odot }})}\times 5.1\mu \mathrm{mag}$$ (3) predicts oscillation amplitudes of $`500\mu \mathrm{mag}`$ for $`\alpha `$ UMa, which is essentially in agreement with our results. It should be noted that the WIRE data are obtained in white light and, consequently, phase differences in the oscillation amplitudes as a function of wavelength would tend to combine to reduce the observed amplitude. Thus, it is not surprising that the observed amplitudes are somewhat smaller than those predicted by theory. In addition, of course, extending a relationship derived for lower main sequence stars to giants is a risky enterprise! Nonetheless, the near-agreement between theory and observation may imply that the excitation mechanism for oscillations in $`\alpha `$ UMa is fundamentally similar to the solar mechanism (presumably convection; see, e.g., Bogdan et al. 1993), unlike oscillations observed in other K giants, which show amplitudes an order of magnitude greater than those we have detected. We gratefully acknowledge the support of Dr. Harley Thronson and Dr. Phillipe Crane at NASA Headquarters for making this unusual use of WIRE possible. J.C., T.C., R.L., T.G., and D.S. would like to thank Drs. Carol Lonsdale and Perry Hacking for the opportunity to work with them on the WIRE project, and their support of the WIRE asteroseismology effort. The hard work of many people, including the WIRE operations and spacecraft teams at GSFC and the timeline generation team at IPAC, was essential to making this project a reality. We would also like to acknowledge the contributions of the anonymous referee, whose criticisms helped to greatly improve the presentation of our results.
no-problem/0002/cond-mat0002057.html
ar5iv
text
# Velocity Autocorrelation and Harmonic Motion in Supercooled Nondiffusing Monatomic Liquids ## 1 Introduction Recent work by Wallace and Clements has uncovered several important properties of the many-body potential underlying the motion of liquid sodium systems. Specifically, it has been shown that (a) the potential surface consists of a large number of intersecting nearly harmonic valleys, (b) these valleys can be classified as symmetric (crystalline, microcrystalline, or retaining some nearest-neighbor remnants of crystal symmetry) or random, with the random valleys vastly outnumbering the symmetric ones, (c) the frequency spectra of different random valleys are nearly identical (while those of the symmetric valleys vary widely), and (d) below 35 K the system remains in a single valley throughout the longest molecular dynamics (MD) runs that were performed. Results (a) through (c) verify predictions made by Wallace in his theory of liquid dynamics , which has been successfully applied to account for the high-temperature specific heats of monatomic liquids and a study of the velocity autocorrelation function . These four results together suggest that below 35 K the motion of the atoms in liquid sodium is purely harmonic to a high degree of approximation, again as predicted by Wallace in , and we would like to test this hypothesis further. One check is to compare the mean square displacement from MD with the prediction from purely harmonic motion, which is done in Fig. 12 of , where the two are found to agree closely. However, it would be more convincing if the theory could be shown to reproduce an entire scalar function calculated from MD (instead of just a single number), such as the normalized velocity autocorrelation function $`\widehat{Z}(t)`$. That is the aim of this paper. 
We will show that purely harmonic motion of the atoms in a potential valley produces a $`\widehat{Z}(t)`$ which matches that of MD calculations to within the calculations’ accuracy; thus we will conclude that the motion of atoms in a nondiffusing supercooled liquid state is very nearly entirely harmonic. For completeness, in Sec. 2 we briefly review the calculation of $`\widehat{Z}(t)`$ assuming harmonic motion, and in Sec. 3 we compare this result with MD. Finally, in Sec. 4 we make contact with work by others in this field, as well as comparing these results to Wallace’s earlier effort mentioned above.

## 2 Harmonic Theory

If an $`N`$-body system is moving in a potential valley, the potential can be expanded about the valley minimum with the resulting Hamiltonian $$H=\underset{Ki}{}^{}\frac{p_{Ki}^2}{2M}+\frac{1}{2}\underset{Ki,Lj}{}^{}\mathrm{\Phi }_{Ki,Lj}u_{Ki}u_{Lj}+\mathrm{\Phi }_A$$ (1) where $`u_{Ki}`$ is the $`i`$th component of the $`K`$th particle’s displacement from equilibrium, $`p_{Ki}`$ is the corresponding momentum, and the anharmonic term $`\mathrm{\Phi }_A`$ contains all of the higher order parts of the expansion. The primed sum indicates that the sum is performed under the constraint that the center of mass of the system is stationary. (As a result, the system has only $`3N-3`$ independent degrees of freedom.) The matrix $`\mathrm{\Phi }_{Ki,Lj}`$ is called the dynamical matrix of the system. If the valley is approximately harmonic, we can neglect $`\mathrm{\Phi }_A`$.
If coordinates $`q_\lambda `$ are defined by the relation $$u_{Ki}=\underset{\lambda }{}w_{Ki,\lambda }q_\lambda $$ (2) where the $`w_{Ki,\lambda }`$ form a $`3N\times 3N`$ orthogonal matrix, satisfying $$\underset{Ki}{}w_{Ki,\lambda }w_{Ki,\lambda ^{}}=\delta _{\lambda \lambda ^{}},$$ (3) then the Hamiltonian in these new coordinates is $$H=\underset{\lambda }{}\frac{p_\lambda ^2}{2M}+\frac{1}{2}\underset{Ki,Lj}{}^{}\underset{\lambda \lambda ^{}}{}w_{Ki,\lambda }\mathrm{\Phi }_{Ki,Lj}w_{Lj,\lambda ^{}}q_\lambda q_\lambda ^{}$$ (4) where the $`p_\lambda `$ are the momenta conjugate to the $`q_\lambda `$. Now one can always choose the $`w_{Ki,\lambda }`$ to diagonalize $`\mathrm{\Phi }_{Ki,Lj}`$, so that $$\underset{Ki,Lj}{}w_{Ki,\lambda }\mathrm{\Phi }_{Ki,Lj}w_{Lj,\lambda ^{}}=M\omega _\lambda ^2\delta _{\lambda \lambda ^{}}.$$ (5) (This equation defines the frequencies $`\omega _\lambda `$ in terms of the eigenvalues of $`\mathrm{\Phi }_{Ki,Lj}`$.) With this choice, the Hamiltonian becomes $$H=\underset{\lambda }{}\left(\frac{p_\lambda ^2}{2M}+\frac{1}{2}M\omega _\lambda ^2q_\lambda ^2\right).$$ (6) Three of the $`\omega _\lambda `$ are zero; these modes correspond to uniform motion of the center of mass. Since we have restricted the center of mass position and velocity to zero, these modes are not excited. The classical equations of motion for the remaining modes are solved by $$q_\lambda (t)=a_\lambda \mathrm{sin}(\omega _\lambda t+\alpha _\lambda ),$$ (7) or, returning to the original coordinates, $$u_{Ki}(t)=\underset{\lambda }{}w_{Ki,\lambda }a_\lambda \mathrm{sin}(\omega _\lambda t+\alpha _\lambda ),$$ (8) with the understanding that the sum on $`\lambda `$ ranges from $`1`$ to $`3N-3`$.
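Eqs. (3) and (5) can be illustrated numerically: diagonalizing a symmetric matrix yields an orthogonal transformation w and the mode frequencies. The matrix below is a random positive-definite stand-in for the true dynamical matrix (in particular it lacks the three zero-frequency center-of-mass modes of the real system):

```python
import numpy as np

rng = np.random.default_rng(0)
M = 1.0
n = 12                                   # 3N with N = 4 particles, say

# A toy symmetric positive-definite "dynamical matrix".
A = rng.standard_normal((n, n))
Phi = A @ A.T

# Eq. (5): the columns of w diagonalize Phi; eigenvalues are M * omega^2.
eigvals, w = np.linalg.eigh(Phi)
omega = np.sqrt(eigvals / M)

# Eq. (3): the transformation is orthonormal.
assert np.allclose(w.T @ w, np.eye(n))
# Eq. (5) check: w^T Phi w = diag(M omega^2).
assert np.allclose(w.T @ Phi @ w, np.diag(M * omega**2))
```

`numpy.linalg.eigh` is the appropriate routine here because the dynamical matrix is symmetric, which guarantees real frequencies and orthogonal eigenvectors.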
The velocities of the particles are $$v_{Ki}(t)=\underset{\lambda }{}w_{Ki,\lambda }\omega _\lambda a_\lambda \mathrm{cos}(\omega _\lambda t+\alpha _\lambda ).$$ (9) We compute $`\langle 𝒗(t)\cdot 𝒗(0)\rangle `$ in $`Z(t)`$ by calculating $`𝒗_K(t)\cdot 𝒗_K(0)`$, summing over $`K`$ and dividing by $`N-1`$ (remember that only $`3N-3`$ coordinates are independent), and averaging over the amplitudes $`a_\lambda `$ and phases $`\alpha _\lambda `$. Thus $`Z(t)`$ $`=`$ $`{\displaystyle \frac{1}{3}}\langle 𝒗(t)\cdot 𝒗(0)\rangle `$ (10) $`=`$ $`{\displaystyle \frac{1}{3N-3}}{\displaystyle \underset{Ki}{}}{\displaystyle \underset{\lambda \lambda ^{}}{}}w_{Ki,\lambda }w_{Ki,\lambda ^{}}\omega _\lambda \omega _\lambda ^{}\langle a_\lambda a_\lambda ^{}\mathrm{cos}(\omega _\lambda t+\alpha _\lambda )\mathrm{cos}(\alpha _\lambda ^{})\rangle `$ $`=`$ $`{\displaystyle \frac{1}{3N-3}}{\displaystyle \underset{\lambda \lambda ^{}}{}}\delta _{\lambda \lambda ^{}}\omega _\lambda \omega _\lambda ^{}\langle a_\lambda a_\lambda ^{}\mathrm{cos}(\omega _\lambda t+\alpha _\lambda )\mathrm{cos}(\alpha _\lambda ^{})\rangle `$ $`=`$ $`{\displaystyle \frac{1}{3N-3}}{\displaystyle \underset{\lambda }{}}\omega _\lambda ^2\langle a_\lambda ^2\mathrm{cos}(\omega _\lambda t+\alpha _\lambda )\mathrm{cos}(\alpha _\lambda )\rangle `$ $`=`$ $`{\displaystyle \frac{1}{6N-6}}{\displaystyle \underset{\lambda }{}}\omega _\lambda ^2\langle a_\lambda ^2\rangle \mathrm{cos}(\omega _\lambda t).`$ By the equipartition theorem, $$\frac{1}{2}M\omega _\lambda ^2\langle q_\lambda ^2\rangle =\frac{1}{2}kT$$ (11) for any nonzero $`\omega _\lambda `$, from which it follows that $$\langle a_\lambda ^2\rangle =\frac{2kT}{M\omega _\lambda ^2},$$ (12) so $$Z(t)=\frac{1}{3N-3}\frac{kT}{M}\underset{\lambda }{}\mathrm{cos}(\omega _\lambda t).$$ (13) Notice that $`Z(0)=kT/M`$, so $`\widehat{Z}(t)`$ defined by $$Z(t)=Z(0)\widehat{Z}(t)$$ (14) is given in this theory by $$\widehat{Z}(t)=\frac{1}{3N-3}\underset{\lambda }{}\mathrm{cos}(\omega _\lambda t).$$ (15) This is the result we wish to compare with MD.
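A quick numerical sketch of Eq. (15). The frequencies below are drawn from an arbitrary uniform distribution rather than from the quenched eigenvalue spectrum of liquid sodium, so the curve is only illustrative; it nonetheless shows the generic features discussed later: the function starts at 1, its first minimum stays above −1, and it decays by dephasing.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative stand-in for the 3N - 3 mode frequencies of a random valley
# (the real spectrum comes from diagonalizing the quenched dynamical matrix).
nmodes = 1497                            # 3N - 3 for N = 500
omega = rng.uniform(0.2, 1.0, nmodes)    # arbitrary units

t = np.linspace(0.0, 60.0, 1200)
Zhat = np.cos(np.outer(t, omega)).mean(axis=1)   # Eq. (15)

assert np.isclose(Zhat[0], 1.0)
assert Zhat.min() > -1.0      # many dephased modes keep the minimum above -1
assert abs(Zhat[-1]) < 0.1    # dephasing damps the correlations at long time
```

A single frequency would give an undamped cosine; it is the spread of frequencies that produces the damping, which is the point made in the comparison with Wallace's one-frequency model in Sec. 4.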
To do so, we need the frequencies $`\omega _\lambda `$, which are related to the eigenvalues of the dynamical matrix $`\mathrm{\Phi }_{Ki,Lj}`$ as indicated in Eq. (5). These were evaluated for five separate random valleys in by quenching all the way down to a valley minimum and diagonalizing $`\mathrm{\Phi }_{Ki,Lj}`$ there; as pointed out in Sec. 1, these eigenvalues were found to be independent of the specific random valley chosen. All five sets of eigenvalues are shown in Fig. 7 of , and we picked one set at random to use in performing the sum in Eq. (15); the other sets produce identical graphs of $`\widehat{Z}(t)`$. We can also use the set of eigenvalues to reconstruct the density of frequencies $`g(\omega )`$; the results are shown in Fig. 1. Note that we do not actually integrate over this $`g(\omega )`$ to evaluate $`\widehat{Z}(t)`$ below; we directly sum over the given set of frequencies as indicated in Eq. (15). The Figure is provided only to convey a sense of the shape of the frequency distribution. Also note that this $`g(\omega )`$ is determined from fully mechanical considerations; as a result, it is not temperature-dependent as are the frequency spectra used in Instantaneous Normal Mode (INM) studies . We will expand on this point in the Conclusion.

## 3 Comparison with MD

The MD setup used to calculate $`\widehat{Z}(t)`$ to compare with Eq. (15) is essentially that described in : $`N`$ particles interact through a potential that is known to reproduce accurately a wide variety of experimental properties of metallic sodium (see discussion in for details). The two significant changes are that we used $`N=500`$ for all runs and that the MD timestep was reduced to $`\delta t=0.2t^{*}`$, where $`t^{*}=7.00\times 10^{-15}`$ s is the natural timescale defined in . (The system’s mean vibrational period $`\tau =2\pi /\omega _{\mathrm{rms}}`$, where the rms frequency $`\omega _{\mathrm{rms}}`$ is calculated in , is approximately $`300\delta t`$.)
We cooled the sodium sample to 22.3 K and 6.69 K, and then we ran each at equilibrium to collect velocities $`𝒗_K(t)`$ to be used to calculate $`Z(t)`$ by the formula $$Z(t)=\frac{1}{3N}\underset{K}{}\frac{1}{n+1}\underset{t^{}=0}{\overset{n}{}}𝒗_K(t+t^{})\cdot 𝒗_K(t^{}).$$ (16) We then divided by $`Z(0)`$ to obtain $`\widehat{Z}(t)`$. The number $`n`$ was chosen as large as possible without running beyond the data calculated in the MD run. We know that at these temperatures the sodium is nondiffusing for two reasons: Both temperatures are below the 35 K threshold , and $`\widehat{Z}(t)`$ from either MD run (shown below in Fig. 2) integrates to zero, yielding zero diffusion coefficient. The formula above may fail to produce reliable values of $`\widehat{Z}(t)`$ for three reasons. First, the number of data points in the time average may be too small; if the MD simulation is run out to time $`t_{\mathrm{max}}`$, then for a given value of $`t`$ in Eq. (16), the maximum possible value of the upper limit $`n`$ is $`t_{\mathrm{max}}-t`$. Thus we require $`t_{\mathrm{max}}>>t`$; we have chosen $`t_{\mathrm{max}}=50,000`$ timesteps and we have calculated $`\widehat{Z}(t)`$ only to $`t=1000`$. To ensure that this value of $`t_{\mathrm{max}}`$ is large enough, we also performed MD runs out to 200,000 timesteps and calculated $`\widehat{Z}(t)`$ from them; the differences from the 50,000 timestep result were of order $`10^{-3}`$. Hence we are confident that 50,000 timesteps is enough if we calculate $`\widehat{Z}(t)`$ to only 1000 timesteps. Second, it is possible that reducing the timestep (thus increasing the accuracy of the simulation) might improve the accuracy of $`\widehat{Z}(t)`$. To test this, we performed another MD run with $`\delta t`$ reduced to $`0.05t^{*}`$, keeping the “real” time of the run the same; this also produced differences in $`\widehat{Z}(t)`$ of order $`10^{-3}`$. Thus we are sure that our timestep is small enough.
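The estimator of Eq. (16) can be exercised on synthetic harmonic "velocities"; on such toy data it should reproduce the harmonic prediction of Eq. (15). The mode count here is 3N (no center-of-mass constraint is imposed) and every parameter below is an arbitrary choice for illustration, not a value from the sodium runs:

```python
import numpy as np

rng = np.random.default_rng(3)

N, T, nlag = 20, 8000, 200          # particles, recorded timesteps, max lag
dt = 0.05                           # toy sampling interval (arbitrary units)
omega = rng.uniform(0.2, 1.0, 3 * N)
phase = rng.uniform(0.0, 2.0 * np.pi, 3 * N)

# Toy "MD" velocity record: independent harmonic modes, shape (T, 3N).
v = np.cos(np.outer(np.arange(T) * dt, omega) + phase)

# Eq. (16): average v_K(t + t') . v_K(t') over t' = 0..n, divide by 3N.
n = T - nlag
Z = np.array([(v[lag:lag + n + 1] * v[:n + 1]).sum(axis=1).mean() / (3 * N)
              for lag in range(nlag)])
Zhat = Z / Z[0]

# The time-averaged estimator matches the Eq. (15) form for these modes.
theory = np.cos(np.outer(np.arange(nlag) * dt, omega)).mean(axis=1)
assert np.isclose(Zhat[0], 1.0)
assert np.allclose(Zhat, theory, atol=0.05)
```

The residual difference between the two curves shrinks as the averaging window n grows, which mirrors the paper's first reliability check on t_max.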
Finally, there is the possibility of finite size effects. Since the MD system has periodic boundary conditions, an acoustic wave sent out from the system at $`t=0`$ could propagate across the simulation region and return to its point of origin in a finite time, producing spurious correlations that would show up in $`\widehat{Z}(t)`$ but would not be present in a large-$`N`$ system. To see if this effect is relevant, we estimated the time it would take for an acoustic wave to cross the region, using the numbers from . The speed of sound in sodium at its melting temperature is $`2.5\times 10^5`$ cm/s, and the volume of the region occupied by one atom is $`278a_0^3`$, so from the fact that there are 500 atoms one finds that the time required for an acoustic wave to cross the region is $`783\delta t`$, or about 800 timesteps. (The speed of sound in sodium at our lower temperatures varies from that at the melting point by roughly 5%, so this result is valid to the same accuracy.) In the Figure below, the MD result for $`\widehat{Z}(t)`$ begins to show small oscillatory revivals at about this time; we conclude that this is a finite size effect, but it does not affect the data before that time. In Fig. 2, Eq. (15) is plotted on top of the MD data for $`\widehat{Z}(t)`$ of sodium at the two temperatures. Although both temperatures compare exceptionally well to the harmonic theory, the match is visibly poorer for the lower temperature. However, repeated MD runs at the lower temperature revealed that overall variations in $`\widehat{Z}(t)`$ amount to $`10^{-2}`$ on average, which is of the same order as the differences between theory and MD in this Figure. By 500 timesteps the theory is slightly out of phase with the MD data, and this small difference persists out to more timesteps.
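The acoustic crossing-time estimate quoted above can be reproduced directly from the numbers in the text:

```python
a0 = 0.52918e-8        # Bohr radius in cm
tstar = 7.00e-15       # natural timescale t* in s (from the text)
dt = 0.2 * tstar       # MD timestep in s
c_s = 2.5e5            # speed of sound in sodium, cm/s
N, v_atom = 500, 278.0 # number of atoms, atomic volume in units of a0^3

L = (N * v_atom) ** (1.0 / 3.0) * a0   # edge length of the cubic MD cell
t_cross = L / c_s                      # acoustic crossing time
steps = t_cross / dt                   # ~783 MD timesteps, as quoted
```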
## 4 Conclusions These results show that the motion of a liquid in a single potential valley is harmonic to an extremely high approximation; the harmonic prediction for the function $`\widehat{Z}(t)`$ matches the calculation from MD very closely. Any contributions due to anharmonicity (which are certainly present) are at most of the same order as the accuracy of the MD calculations. Some form of harmonic approximation, such as the one used here, has been taken up by many workers attempting to understand the dynamics of liquids, and it is helpful to compare their models with our approach. One of the most popular is the theory of Instantaneous Normal Modes (INM), introduced by Rahman, Mandell, and McTague and LaViolette and Stillinger and developed extensively by Stratt (for example, ). Stratt expands the many-body potential in the neighborhood of an arbitrary point to second order in displacements from that point, and he expresses the potential as a quadratic sum of normal modes, in which the frequencies may be either real or imaginary. He then replaces the frequencies by their thermal averages over the potential surface, resulting in a temperature-dependent density of frequencies. From this point he calculates the system’s motion and considers various time correlation functions, including $`Z(t)`$. He observes that his results are accurate to order $`t^4`$ for short times, but his predictions also diverge from MD results very rapidly, in a time shorter than half of one vibrational period. The agreement with MD at long times can be improved by omitting the imaginary frequencies from the calculation of $`Z(t)`$, but of course this makes the short time behavior inexact. (The work of Vallauri and Bermejo follows Stratt’s procedure.) 
Efforts to improve the long time behavior of the correlation functions calculated using INM have been made by Madan, Keyes, and Seeley , who have attempted to extract from the imaginary part of the INM spectrum a damping factor for $`Z(t)`$ of the general type suggested by Zwanzig . Also taking their cue from Zwanzig, Cao and Voth have followed a slightly different path, replacing the actual potential by a set of temperature-dependent effective normal modes which, as they emphasize, bears little resemblance to the mechanical normal modes of a single many-particle valley. In fact, they state quite explicitly that a theory based on purely mechanical normal modes will have little success in accounting for equilibrium or dynamical properties of liquids. An obvious difference between our theory and INM is the nature of our approximation. In INM, one approximates the potential quadratically at an arbitrary point, with the result that the motion so predicted is accurate only for very small times; in our theory, we expand the potential only at very special points where we know the predicted motion will be valid for very long times. Both theories then face the problem of extending their validity beyond the initial approximation, of course, and we will briefly mention our extension in the final paragraph below, but there is one particular reason why we strongly prefer the approach taken here: The other models all replace the true potential by a temperature-dependent potential determined by one or another thermal averaging process. A temperature-dependent potential does not provide a true Hamiltonian, and therefore it cannot be used to calculate the quantum or classical motion, i.e., it cannot be used in the Schrödinger equation or Newton’s law. (On the dynamical level, temperature is not even a well-defined concept.) 
Further, the Hamiltonian resulting from a temperature-dependent potential cannot be used to do statistical mechanics, except through uncontrolled self-consistent procedures. We prefer to build our theory in terms of the actual potential, hence in terms of its true Hamiltonian, and to find at least approximate solutions for the Hamiltonian motion, so we can apply the standard procedures of equilibrium and nonequilibrium statistical mechanics. Further, we would argue that Cao and Voth’s skepticism regarding purely dynamical approaches is unfounded, given the results here. It is difficult to compare our $`\widehat{Z}(t)`$ results to those of others, because their MD-simulated states are not always characterized as diffusing or nondiffusing. We are fairly confident that Vallauri and Bermejo’s Fig. 2b is a comparable state (glassy Cs at 20 K), and we believe our fit to MD is slightly better. Madan, Keyes, and Seeley’s Fig. 3b is an ambiguous case (it is likely that a glass transition has occurred), but there also we are confident that our match with MD is better. Hence we would claim that this method shows as much promise as the others currently available, and with the physical potential as opposed to a thermal average potential. It is also instructive to compare the results of this paper with a model for $`\widehat{Z}(t)`$ previously proposed by Wallace in which a single particle oscillates in a three-dimensional harmonic valley, and at each turning point it may with probability $`\mu `$ “transit” to an adjacent valley. To apply that model to a nondiffusing case, we set $`\mu =0`$ (indicating no transits), yielding $`\widehat{Z}(t)=\mathrm{cos}(\omega t)`$. Clearly this would not fit the MD data for any $`\omega `$, and it is easy to see why: Wallace included only one frequency in his earlier model, whereas our Eq. 
(15) contains contributions from many frequencies, all of which are necessary to raise the first minimum in $`\widehat{Z}(t)`$ above $`1`$ and then damp $`\widehat{Z}(t)`$ out by dephasing. This suggests an alternate path to understanding diffusing states: Begin with a mean atom trajectory model that by construction reproduces the correct result for $`\widehat{Z}(t)`$ in the nondiffusing regime (Eq. (15)), and then incorporate Wallace’s notion of transits into this model. Our work in this direction, with comparison to MD data for higher-temperature diffusing states of liquid sodium, will be described in a subsequent paper .
no-problem/0002/astro-ph0002148.html
ar5iv
text
# Spectral Analysis of the Ly𝛼 Forest Using Wavelets

## 1 Introduction

Measurements of QSO spectra show that the Intergalactic Medium (IGM) is composed of highly inhomogeneous structures. Ever since their identification by Lynds (1971) and the pioneering survey of Sargent et al. (1980), these inhomogeneities have been described as discrete absorption systems, the Ly$`\alpha `$ forest. With the view that the systems arise from individual intervening gas clouds, the Ly$`\alpha `$ forest has been characterized using traditional absorption line statistics, most notably the line equivalent widths and, as the spectra improved in resolution and signal–to–noise ratio, the Doppler widths and H $`\mathrm{I}`$ column densities through Voigt profile line fitting to the features. In the past few years, numerical simulations have successfully modelled many of the measured properties of the forest, showing that the absorption systems may arise as a consequence of cosmological structure formation (Cen et al. 1994; Zhang, Anninos & Norman 1995; Hernquist et al. 1996; Bond & Wadsley 1997; Zhang et al. 1997; Theuns, Leonard & Efstathiou 1998). The simulations have shown, contrary to the picture in which the systems are isolated intergalactic gas clouds, that most of the systems originate in an interconnected web of sheets and filaments of gas and dark matter (Cen et al. 1994; Bond & Wadsley 1997; Zhang et al. 1998). Alternative statistical methods were subsequently introduced for describing the forest using the more direct measurements of the induced light fluctuations. These include the 1-point distribution of the fluctuations (Miralda-Escudé et al. 1996; Zhang et al. 1997), and a quantity related to the 2-point distribution based on a weighted difference of the light fluctuations in neighbouring wavelength pixels (Miralda-Escudé et al. ). A direct estimate of the 2-point transmission correlation function was made by Zuo & Bond (1994).
While the newer methods for analysing the Ly$`\alpha `$ forest avoid the identification of absorption lines and the fitting of Voigt profiles, they are not necessarily fundamentally different in their description of the spectra. For instance, Zhang et al. (1998) find that the distribution of optical depth per pixel in their simulation may be recovered by modelling the spectra entirely by discrete absorption lines with Voigt profiles. Rather, the more direct methods circumvent a difficulty that has long plagued attempts to characterize the absorbers in terms of Voigt profiles: the sensitivity of the resulting line statistics to noise and to the fitting procedure. Absorption line fitting of necessity requires arbitrary decisions to be made regarding the setting of the continuum level, the deblending of features, and a decision on the acceptability of a fit. Different observational groups report different distributions for the line parameters. Most discrepant has been the inferred distribution of line widths. Even with the highest quality data gathered to date using the Keck HIRES, agreement is still lacking, with Hu et al. (1995) finding a narrower Doppler parameter distribution with a significantly higher mean than found by Kirkman & Tytler (1997). The differences are important, as cosmological simulations predict comparable differences for a range of plausible cosmological models (Machacek et al. 2000; Meiksin et al. 2000). The purpose in this paper is to develop a method that provides an alternative objective description of the statistics of the Ly$`\alpha `$ forest. Ultimately the goal is to employ the same method for analysing both observational data and data derived from numerical simulations in order to compare the two on a fair basis. Because of the large number of synthetic spectra generated from a simulation necessary to provide a correct average description of the forest, two principal requirements of the procedure are that it be fast and easily automated.
Although automated or semi–automated Voigt profile fitting procedures exist (AutoVP, Davé et al. 1997; VPFIT, developed by Carswell and collaborators), these procedures still require arbitrary decisions to be made to obtain acceptable fits. The complexity of the codes makes it difficult to assess the statistical significance of differences between the measured distributions of the absorption line parameters and those predicted. The codes also are computationally expensive, making very costly their application to the large number of simulated spectra required to obtain a statistically valid average of the line parameters. For these reasons, a faster, less complex method would be desirable. The Voigt profile fitting codes yield important parameters, like the linewidths, which contain physical information (e.g., gas temperature and turbulent velocities), that the direct-analysis methods do not. It would thus be desirable for an alternative method to retain some of this information. The method presented here utilizes wavelets to characterize the absorption statistics of the Ly$`\alpha `$ forest. It is not intended to be a replacement for Voigt profile fitting, but a fast alternative that allows a ready comparison between the predictions of numerical models and measured spectra and a clear statistical analysis of the results. The outline of the paper is as follows: in §2 it is shown how the statistics of the Ly$`\alpha `$ forest may be characterized using wavelets. In §3 the method is applied to the measured spectrum of a high redshift QSO. The results are summarized in §4.

## 2 Analysing the Ly$`\alpha `$ Forest with Wavelets

### 2.1 Terminology

Although wavelets have been used in signal processing, image analysis, and the study of fluid dynamics for a decade, they are only beginning to enter the vernacular of astronomers. Accessible introductions are provided in Press et al.
(1992), and in Slezak, Bijaoui & Mars (1990) and Pando & Fang (1996), who apply wavelets to study the clustering of galaxies and Ly$`\alpha `$ absorbers, respectively. More complete accounts of wavelet methodology are Chui (1992), Daubechies (1992), and Meyer (1993). The description here is confined to those elements necessary to introduce the notation and terminology that will be used below. Wavelets are defined variously in the literature. The definition of most use here, somewhat restrictive but appropriate to a multiresolution analysis using the Discrete Wavelet Transform (DWT), is (Meyer):

> A wavelet is a square–integrable function $`\psi (x)`$ defined in real space such that $`\psi _{jk}\equiv 2^{j/2}\psi (2^jx-k)`$, where $`j`$ and $`k`$ are integers, is an orthonormal basis for the set of square–integrable functions.

The wavelet $`\psi (x)`$ satisfies $`\int _{-\mathrm{\infty }}^{\mathrm{\infty }}𝑑x\psi (x)=0`$, and is generally chosen to be concentrated near $`x=k2^{-j}`$. Its defining properties permit it to perform two operations governed by the values of $`j`$ and $`k`$. Smaller values of $`j`$ correspond to coarser variations in $`f(x)`$, while differing values of $`k`$ correspond to shifting the centre of the transform. The wavelet coefficients of a function $`f(x)`$ are defined by $$w_{jk}\equiv \int 𝑑xf(x)\psi _{jk}(x).$$ (1) The set of coefficients $`\{w_{jk}\}`$ comprises the wavelet transform of the function $`f(x)`$. The function may then be recovered through the inverse transform $$f(x)=\underset{j,k}{}w_{jk}\psi _{jk}(x),$$ (2) since the set of functions $`\psi _{jk}`$ forms a complete orthonormal basis. The wavelet coefficients at a level $`j`$ express the changes between the smoothed representations of $`f(x)`$ at the resolution scales $`j+1`$ and $`j`$. Several functions may serve as wavelets. A set that has proven particularly useful was developed by Daubechies (Daubechies 1992).
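The transform of Eq. (1) and the reconstruction of Eq. (2) can be made concrete with a minimal pyramidal DWT. For brevity this sketch uses the Haar wavelet rather than the order-20 Daubechies wavelet adopted in the text; the orthonormality and exact invertibility it demonstrates hold for both:

```python
import numpy as np

def haar_dwt(f):
    """Pyramidal Haar DWT of a length-2^m array: [coarse average | w_jk by level j]."""
    out, a = [], np.asarray(f, dtype=float)
    while len(a) > 1:
        s = (a[0::2] + a[1::2]) / np.sqrt(2.0)   # smoothed representation
        d = (a[0::2] - a[1::2]) / np.sqrt(2.0)   # wavelet coefficients w_jk
        out.insert(0, d)                         # coarse levels first
        a = s
    return np.concatenate([a] + out)

def haar_idwt(c):
    """Inverse of haar_dwt: rebuild the signal level by level (Eq. (2))."""
    a, k = c[:1], 1
    while k < len(c):
        d = c[k:2 * k]
        nxt = np.empty(2 * k)
        nxt[0::2] = (a + d) / np.sqrt(2.0)
        nxt[1::2] = (a - d) / np.sqrt(2.0)
        a, k = nxt, 2 * k
    return a

f = np.random.default_rng(4).standard_normal(128)
c = haar_dwt(f)
assert np.allclose(haar_idwt(c), f)    # exact reconstruction
assert np.isclose(c @ c, f @ f)        # orthonormal basis preserves the norm
```

For the smoother Daubechies wavelets used in the paper, the averaging/differencing pair is replaced by longer filter convolutions, but the pyramidal structure is identical.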
These functions are constructed to have vanishing moments up to some value $`p`$, and the functions themselves vanish outside the range $`0<x<2p-1`$. The wavelet coefficients decrease rapidly with $`p`$ for smooth functions. Accordingly, the higher order Daubechies wavelets are the most suitable for analyzing smooth data. The DWT is computed using the pyramidal algorithm as implemented in Numerical Recipes (Press et al. ). The Daubechies wavelet of order 20 is chosen throughout. ### 2.2 Monte Carlo simulations The properties of the wavelet transform of the Ly$`\alpha `$ forest are examined by performing Monte Carlo realizations of spectra. The spectra are constructed from discrete lines with Voigt profiles using the H $`\mathrm{I}`$ column density and Doppler parameter distributions found by Kirkman & Tytler. Specifically, the H $`\mathrm{I}`$ column densities $`N_{\mathrm{HI}}`$ are drawn from a power law distribution of slope $`-1.5`$ between $`12.5<\mathrm{log}_{10}N_{\mathrm{HI}}<16`$ and the Doppler parameters $`b`$ from a gaussian with mean 23 $`\mathrm{km}\mathrm{s}^{-1}`$ and standard deviation 14 $`\mathrm{km}\mathrm{s}^{-1}`$. A cut–off in $`b`$ is imposed according to $`b>14+4(\mathrm{log}_{10}N_{\mathrm{HI}}-12.5)`$ $`\mathrm{km}\mathrm{s}^{-1}`$. The resulting average Doppler parameter is 31 $`\mathrm{km}\mathrm{s}^{-1}`$. The number density of lines per unit redshift matches that of Kirkman & Tytler at $`z=3`$. The resolution is set at $`\lambda /d\lambda =5\times 10^4`$, and gaussian noise is added according to a specified continuum signal–to–noise ratio per pixel. This is the fiducial model used in all the simulations unless stated otherwise. Segments 128 pixels wide were found adequate for extracting the statistical properties of the wavelet coefficients. A representative spectrum and its discrete wavelet transform are shown in Figure 1. A block at resolution $`j`$ is $`128/2^j`$ pixels wide and $`2^j`$ pixels long for $`j=1`$ to 6. 
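The draw of the line parameters for this fiducial model can be coded in a few lines. This is only a sketch of the sampling step (it produces the (log₁₀N_HI, b) pairs, not full Voigt-profile spectra), using inverse-CDF sampling for the power law and rejection sampling for the cut–off in b; the bounds and cut-off are those quoted in the text.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_lines(n):
    """Draw (log10 N_HI, b) pairs for the fiducial forest model:
    dN/dN_HI ~ N_HI^-1.5 over 12.5 < log10 N_HI < 16, and
    b ~ Gaussian(23, 14) km/s subject to b > 14 + 4(log10 N_HI - 12.5)."""
    lo, hi = 10**12.5, 10**16.0
    u = rng.random(n)
    # inverse-CDF draw for a power law of slope -1.5
    nhi = (lo**-0.5 + u * (hi**-0.5 - lo**-0.5))**-2.0
    lognhi = np.log10(nhi)
    bcut = 14.0 + 4.0 * (lognhi - 12.5)
    b = np.empty(n)
    todo = np.ones(n, dtype=bool)
    while todo.any():                       # rejection-sample the cut-off
        b[todo] = rng.normal(23.0, 14.0, todo.sum())
        todo &= (b <= bcut)
    return lognhi, b

lognhi, b = sample_lines(200000)
print(round(b.mean(), 1))   # the text quotes an average of 31 km/s for this model
```

The rejection step preferentially removes low-b draws for high column density lines, which is why the mean of the accepted sample sits well above the underlying Gaussian mean of 23 km/s.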
The resolution becomes finer as $`j`$ increases from 1 to 6 (downwards). The uppermost level ($`j=0`$) corresponds to smoothed averages of the spectrum. The wavelet coefficients tend to increase in magnitude with decreasing resolution (decreasing $`j`$). The low values at the finer levels indicate that only small changes occur in the spectrum when it is smoothed from one resolution level to the next higher one. The small values are desirable, as they signify that the dominant absorption features in the spectra are adequately resolved. Because the wavelet functions form a complete set of basis functions, the full set of wavelet coefficients completely describes the spectrum: the spectrum may be reconstructed identically from the inverse transform. For noisy spectra, however, it will generally be unnecessary to retain the full set of coefficients. Indeed, this is the motivation for multi–resolution data compression. By employing a judicious set of basis functions, a signal may be compressed into only a small fraction of its original size. The method of choosing the optimal basis set such that the compressed signal matches the original as closely as possible in a least squares sense with the least number of retained basis elements is known as Proper Orthogonal Decomposition or the Karhunen–Loève procedure (see Berkooz, Holmes & Lumley 1993 for a review). The basis set, however, will in general differ from signal to signal if its components are highly variable, as in the case of the Ly$`\alpha `$ forest. Although not optimal in the least squares sense, the wavelet basis nonetheless achieves a large amount of data compression and has the advantage of generality. It is next described how wavelets may be applied to assess the amount of useful information in a spectrum. Two measures of the information content of a noisy spectrum are considered, one based on $`\chi ^2`$ and the second on entropy. 
If $`s(x_i)`$ is the original spectrum defined at $`N`$ points $`x_i`$ (e.g., wavelength or velocity), and $`s_n(x_i)`$ is the spectrum reconstructed from the $`n`$ largest (in magnitude) wavelet coefficients, then $$\chi ^2=\sum _{i=1}^{N}\left[\frac{s(x_i)-s_n(x_i)}{\sigma _i}\right]^2$$ (3) where $`\sigma _i`$ is the measurement error associated with pixel $`i`$. For gaussian distributed measurements, the expectation value of $`\chi ^2`$ is the number of degrees–of–freedom. If $`n`$ wavelet coefficients are retained, the number of degrees–of–freedom is $`N-n`$. (Hence, for example, $`\chi ^2=0`$ is expected for $`n=N`$.) The reduced $`\chi _{\mathrm{red}}^2=\chi ^2/(N-n)`$ then defines the optimal value of $`n`$ for truncating the wavelet coefficients. The information content may also be expressed in terms of the wavelet coefficients directly as an “entropy” (Meyer 1993 defines the entropy to be the exponential of $`S`$) $$S=-\sum _{jk}\alpha _{jk}^2\mathrm{log}\alpha _{jk}^2,$$ (4) where the $`\alpha _{jk}`$ are the normalized coefficients $$\alpha _{jk}=\frac{w_{jk}}{\left(\sum _{jk}w_{jk}^2\right)^{1/2}}.$$ (5) This quantity behaves like a physical entropy in the sense that it is maximum when the signal is completely random, so that the full set of coefficients $`\{w_{jk}\}`$ is required to describe it, while it vanishes when the signal may be entirely described by a single coefficient. The reduced $`\chi ^2`$ for an ensemble of Monte Carlo realizations is shown in Fig. 2 as a function of the fraction $`(N-n)/N`$ of the wavelet coefficients discarded. As the signal–to–noise ratio per pixel increases, the value of $`\chi _{\mathrm{red}}^2`$ for a given $`n`$ increases. In all cases, however, there is some $`n<N`$ for which $`\chi _{\mathrm{red}}^2=1`$. 
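For an orthonormal transform, Parseval's theorem lets both information measures be evaluated directly on the coefficient array: the pixel sum in Eq. (3) equals the sum of the squares of the discarded coefficients. The sketch below assumes, as a simplification, a uniform noise level σ per pixel, and also implements the truncation criterion just described (the smallest n with reduced χ² at or below unity).

```python
import numpy as np

def entropy(w):
    """Wavelet entropy of Eq. (4), S = -sum a^2 log a^2, with the
    a_jk the L2-normalized coefficients of Eq. (5)."""
    a2 = w**2 / np.sum(w**2)
    a2 = a2[a2 > 0]                          # 0 log 0 -> 0
    return -np.sum(a2 * np.log(a2))

def chi2_truncated(w, n, sigma):
    """chi^2 of Eq. (3) between a spectrum and its reconstruction from
    the n largest coefficients, for uniform noise sigma: by Parseval,
    the pixel sum equals the sum of squares of the discarded coefficients."""
    return np.sum(np.sort(w**2)[::-1][n:]) / sigma**2

def n_optimal(w, sigma):
    """Smallest n with reduced chi^2 = chi^2/(N-n) <= 1, the truncation
    criterion adopted in the text."""
    N = len(w)
    for n in range(N):
        if chi2_truncated(w, n, sigma) <= N - n:
            return n
    return N

w = np.array([5.0, -2.0, 1.0, 0.5, 0.1, 0.0])
print(chi2_truncated(w, len(w), 1.0) == 0.0)        # True: all coefficients kept
print(entropy(np.array([1.0, 0.0, 0.0])) == 0.0)    # True: one coefficient suffices
print(n_optimal(w, 1.0))                            # 2 retained coefficients here
```

Note that discarding small coefficients lowers χ² faster than it lowers the entropy, which is the behaviour seen in Figs. 2 and 3.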
This suggests that an acceptable fit to a noisy spectrum may be provided by only a fraction $`n/N`$ of the full set of coefficients, with the fraction required increasing as the noise level decreases. The entropy $`S`$ is shown in Figure 3. The entropy stays nearly constant out to $`\chi _{\mathrm{red}}^2=1`$, indicating that little information has been lost by discarding the small coefficients. As $`\chi _{\mathrm{red}}^2`$ increases, eventually the entropy decreases as information is lost. Due to the greater information content of the less noisy spectra, as the noise level is decreased, the entropy remains constant to increasingly higher values of $`\chi _{\mathrm{red}}^2`$ before declining. ### 2.3 Statistics of the wavelet coefficients It was shown above how wavelets may be used to characterize the noise properties of a spectrum. The wavelet coefficients, however, may also be used to characterize the statistics of the Ly$`\alpha `$ forest itself. The distributions of the coefficients (in absolute value) for the several resolution levels are shown in Fig. 4 for a set of simulated spectra with $`S/N=50`$, typical of the Keck HIRES spectra. The number of coefficients at a level $`j`$ is $`2^j`$, with $`j=1`$ corresponding to the coarsest resolution, and $`j=6`$ to the finest for the $`2^7=128`$ pixels used in a spectrum. The finest resolution ($`j=6`$) curve is the steepest. As the resolution becomes increasingly coarse, the amplitude of the coefficients increases, as was found in Fig. 1. This indicates that most of the information in the spectrum is carried by the coarser levels (as well as by the two coarse scale averages, not shown). The finest level has resolved the spectral structures, with little difference between the smoothed representations of the spectrum at resolution levels $`j=5`$ and 6. 
Applying a cut–off in the coefficients corresponding to $`\chi _{\mathrm{red}}^2=1`$ yields, for the average number retained of the initially $`2^j`$ coefficients for $`j=1`$ to 6, the respective values 1.9, 3.8, 7.4, 11.6, 6.9, and 3.4. While almost all of the coefficients for $`j\le 3`$ are needed, a decreasing fraction is required to describe the spectra at higher resolution. The distributions are insensitive to the signal–to–noise ratio, as shown in Fig. 5. Except for the lowest ratio of 10, the curves coincide, showing that they may be measured accurately even for a varying signal–to–noise ratio in a spectrum, provided it is not too low. To demonstrate that the wavelet coefficient distributions may be used to discriminate between different predictions for the statistical properties of the Ly$`\alpha `$ forest, a second set of Monte Carlo realizations with alternative column density and Doppler parameter distributions is generated. The parameters adopted are those reported by Hu et al. They found that the forest statistics are consistent with an H $`\mathrm{I}`$ column density distribution with a slope of $`-1.5`$ for clouds with $`12.3<\mathrm{log}N_{\mathrm{HI}}<14.5`$ and a Gaussian Doppler parameter distribution with mean 28 $`\mathrm{km}\mathrm{s}^{-1}`$, standard deviation 10 $`\mathrm{km}\mathrm{s}^{-1}`$, and a sharp cut–off below 20 $`\mathrm{km}\mathrm{s}^{-1}`$. The resulting average Doppler parameter is 37 $`\mathrm{km}\mathrm{s}^{-1}`$. The simulation is normalized to the same line density per unit redshift at $`z=3`$ as found by Hu et al., but the column density distribution is extended to $`\mathrm{log}N_{\mathrm{HI}}=16`$ to be consistent with the previous set of simulations. The resulting wavelet coefficient distributions are compared with those from the previous simulation for $`j=3,4,`$ and 5 in Fig. 6. The wavelet coefficients are able to distinguish between the two models. 
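The model discrimination illustrated in Fig. 6 can be made quantitative with a per-level two-sample Kolmogorov–Smirnov test, whose probabilities may then be combined across levels; the justification for treating the levels as approximately independent is discussed next. In the sketch below, Fisher's method is used for the combination purely as one standard choice for merging approximately independent p-values (the text does not prescribe a specific combination rule), and placeholder Gaussian samples stand in for the real coefficient sets.

```python
import numpy as np
from scipy import stats

def level_probabilities(coeffs_by_level, model_samples_by_level):
    """Two-sample KS probability that the measured coefficients at each
    resolution level j are drawn from the model distribution.
    Absolute values are compared, as in Figs. 4-6."""
    return [stats.ks_2samp(np.abs(c), np.abs(m)).pvalue
            for c, m in zip(coeffs_by_level, model_samples_by_level)]

def combine_fisher(pvals):
    """Combine per-level probabilities as if independent.
    Fisher's method: -2 sum log p ~ chi^2 with 2*len(p) dof."""
    statistic = -2.0 * np.sum(np.log(pvals))
    return stats.chi2.sf(statistic, df=2 * len(pvals))

rng = np.random.default_rng(2)
data = [rng.normal(size=2**j) for j in range(1, 7)]    # stand-in coefficients
model = [rng.normal(size=4096) for _ in range(6)]      # stand-in model draws
p = level_probabilities(data, model)
print(0.0 < combine_fisher(p) <= 1.0)                  # True
```

With real data, a small combined probability would flag a mismatch between the measured forest and the assumed line model.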
The Kolmogorov–Smirnov test may be used to assess the probability that the wavelet coefficients of a measured spectrum match a given distribution for each resolution level $`j`$. The most stringent test, however, is given by combining the probabilities for all the distributions. Because any given absorption feature may be expected to affect the coefficients at more than a single resolution level $`j`$, it is possible that the coefficients corresponding to a given set of nested blocks for different $`j`$ (see Fig. 1) may be correlated. In this case, the probabilities of matching the various distributions may not be combined as if they were independent. To determine the degree to which the distributions may be treated as independent, the correlations are measured for coefficients between the various levels $`j`$ corresponding to the same hierarchy of blocks, and then averaged over all the hierarchies, for a set of Monte Carlo realizations using the fiducial forest model. The results are shown in Table 1. (The level $`j=0`$ refers to the correlations with the pair of coefficients corresponding to the coarse scale averages.) A signal–to–noise ratio of 50 is assumed, and a cut-off in the coefficients is applied to ensure $`\chi _{\mathrm{red}}^2=1`$. The error on the correlations is $`0.1`$%. Although the correlations are small, they are not absent. They are sufficiently small, however, that treating the probabilities for the different distributions as independent should be an adequate approximation for model testing. ### 2.4 Data compression One of the key features of wavelets is their ability to compress data. Figs. 2 and 3 show that it is possible to fit a spectrum using only a subset of the wavelets used in its DWT at a statistically acceptable level ($`\chi _{\mathrm{red}}^2=1`$), without significantly degrading the information content of the spectrum as measured by the wavelet entropy. 
This suggests that filtering the spectrum in this way may provide a usable spectrum that is relatively noiseless and suitable for absorption line fitting. This is illustrated by performing Voigt profile fitting to Monte Carlo realisations of the fiducial line model, with an assumed signal–to–noise ratio of 50. A wavelet filtered representation of each realised spectrum is generated with coefficients truncated to give a reduced $`\chi _{\mathrm{red}}^2=1`$ for the difference between the original and wavelet filtered spectra. This corresponds on average to retaining only 30% of the full set of coefficients. A representative spectrum is shown in Fig. 7. Absorption lines are then identified in the filtered spectrum and fit using AutoVP. The results of $`10^4`$ realisations are shown in Figs. 8 and 9. Also shown are the distributions obtained from AutoVP using the original spectra with no wavelet filtering applied. The distributions are nearly identical. A negligible loss is incurred in the recovery of the line parameters despite the exclusion of 70% of the information in the original spectra. ## 3 Application to Q1937–1009 In this section, the Discrete Wavelet Transform is used to analyse the Ly$`\alpha `$ forest as measured in the $`z=3.806`$ QSO Q1937–1009. The spectrum was taken with the Keck HIRES at a resolution of $`8.5`$ $`\mathrm{km}\mathrm{s}^{-1}`$ (Burles & Tytler 1997). The signal–to–noise ratio per pixel was $`50`$. The spectrum covers the range between Ly$`\alpha `$ and Ly$`\beta `$ in the QSO restframe. (The region analysed is restricted to the redshift interval $`3.055<z<3.726`$ to avoid any possible influence by the QSO.) The distribution of wavelet coefficients is shown in Fig. 10 for $`j=2`$, 3, 4, and 5. As in Fig. 4, the high frequency coefficients are generally smaller than at lower frequencies, indicating that the fluctuations that dominate the spectra have been resolved. 
The cumulative distributions of the coefficients are compared with the predicted distributions for the fiducial model in Fig. 11. The predicted distributions were generated by simulating spectra with the same pixelization, resolution, signal–to–noise ratio and wavelength coverage as for the measured spectrum of Q1937–1009. An increase in line density per unit redshift proportional to $`(1+z)^{2.6}`$ (Kirkman & Tytler) was included to match the redshift range of Q1937–1009. While the distributions generally agree well, a large variation is found for $`j=4`$, corresponding to fluctuations on the scale of 17–34 $`\mathrm{km}\mathrm{s}^{-1}`$, suggesting some differences from the line model of Kirkman & Tytler. Effects neglected in the simulations that could produce a difference are the presence of metal systems and redshift correlations between the Ly$`\alpha `$ absorption systems. The changes that would be produced, however, are most likely small: the number of metal systems is small, and the correlations appear weak or absent (Meiksin & Bouchet 1995; Kim et al. 1997). Still, the sensitivity of the wavelet coefficient distributions to these effects may be worth more careful consideration. ## 4 Summary Wavelets may be usefully employed to provide a statistical characterization of the absorption properties of the Ly$`\alpha `$ forest. An approach is presented that performs a multiresolution analysis of the forest using the Discrete Wavelet Transform of the QSO spectrum. The transform decomposes the local frequency dependence of the light fluctuations into an orthogonal hierarchy of basis functions, the wavelets. It is found that in spectra of better than 10 $`\mathrm{km}\mathrm{s}^{-1}`$ resolution, most of the information of the spectrum is carried by the lower frequency wavelets. 
For a signal–to–noise ratio typical of even the highest quality spectra ($`S/N=10`$–100), only 10–30% of the wavelets are required to provide a statistically acceptable description of the spectrum, corresponding to a data compression factor of 3–10. It is shown that a Voigt profile line analysis performed on the wavelet filtered spectra yields line parameter distributions nearly identical to those obtained from the original unfiltered spectra. The distributions of the wavelet coefficients offer an alternative statistical description of the Ly$`\alpha `$ forest while retaining information on the line widths. It is demonstrated that the correlations of coefficients between different levels in the wavelet hierarchy are weak (a few percent or smaller). Consequently, each of the distributions may be treated as statistically independent to good approximation. The method is applied to a Keck HIRES spectrum of Q1937–1009. The wavelet coefficient distributions behave qualitatively similarly to those found in Monte Carlo simulations based on the line parameter distributions reported by Kirkman & Tytler. The measured distributions, however, show some differences on the scale 17–34 $`\mathrm{km}\mathrm{s}^{-1}`$. The results demonstrate that Multiresolution Analysis using the Discrete Wavelet Transform provides an alternative, objective, easily automated procedure for analysing the Ly$`\alpha `$ forest, suitable for basing a comparison between the measured properties of the Ly$`\alpha `$ forest and the predictions of numerical models. The author thanks S. Burles and D. Tytler for kindly providing the spectrum of Q1937–1009, and R. Davé for permission to use AutoVP.
# Non perturbative renormalization group potentials and quintessence \[ ## Abstract New solutions to the non perturbative renormalization group equation for the effective action of a scalar field theory in the Local Potential Approximation having the exponential form $`e^{\pm \varphi }`$ are found. This result could be relevant for those quintessence phenomenological models where this kind of potential is already used, giving them a solid field theoretical derivation. Other non perturbative solutions, that could also be considered for the quintessence scenario, are also found. Apart from this particular cosmological application, these results could be relevant for other models where scalar fields are involved, in particular for the scalar sector of the standard model. PACS: 98.80.Cq, 11.10.Hi, 11.10.Gh \] One of the most challenging problems in fundamental physics has been the search for a theoretical argument, hopefully a symmetry, that could explain the vanishing of the cosmological constant. Recent observations from high redshift supernovae, combined with the data on the fluctuation of the cosmic microwave background, have changed our perspective. Apparently $`\mathrm{\Omega }_M`$, the ratio of the baryonic and cold dark matter density to the critical density, is about $`\frac{1}{3}`$. This means that either the universe is open or the missing energy is provided by some new form of matter. The simplest candidate is a cosmological constant term. Alternative models where the missing energy is given by a scalar field slowly rolling down its potential, the “quintessence”, have recently attracted a lot of attention. For a truly constant vacuum energy term the “old” problem of explaining the vanishing of the cosmological constant is replaced by the equally difficult one of explaining why it has the small observed value of about $`(3\times 10^{-3}\mathrm{eV})^4`$. In the quintessence scenario the equivalent problem is the so called “coincidence problem”. 
The matter and the scalar fields evolve differently but we observe today an order of magnitude “coincidence” between the matter energy density $`\mathrm{\Omega }_M`$ and the quintessence energy density $`\mathrm{\Omega }_\varphi `$ that requires a fantastic fine tuning of the initial conditions. The notion of “tracker field” , a quintessence scalar field that evolves to an attractor solution during its rolling down, has been introduced to circumvent this problem. For a wide range of initial conditions the attractor is stable. An explanation for the coincidence can be obtained this way and this has been advocated as an argument in favour of the quintessence scenario. Recently another interesting argument has been given in the framework of the brane-world picture. Even restricting ourselves to tracker fields there is still a large amount of arbitrariness in the possible form of this potential. Different proposals have been essentially based on their capability to reproduce the observational data. Also interesting attempts have been done to derive its form from particle physics models, but they still leave the door open to several possibilities and it is not possible to discriminate between the different proposals. In this Letter I show that the renormalization group equation for the effective action of a single component scalar field theory in the Local Potential Approximation (LPA) possesses non perturbative solutions in addition to the well known perturbative one. These are of the form of exponential potentials that are among the favoured candidates in the quintessence scenario. I think that this result can give a strong motivation to these potentials. I will comment more on this point later. The exact renormalization group equation for the effective action has been found in Ref.. 
By considering only the potential term, an approximation to this equation (the LPA) is obtained, and for a single component scalar field theory in $`d=4`$ dimensions it reads: $$k\frac{\partial }{\partial k}U_k(\varphi )=-\frac{k^4}{16\pi ^2}\mathrm{ln}\left(\frac{k^2+U_k^{\prime \prime }(\varphi )}{k^2+m^2}\right).$$ (1) Here $`k`$ is the current scale, $`m^2`$ is a constant with dimension $`(\mathrm{mass})^2`$, $`U_k`$ is the potential at the scale $`k`$ and the primes denote derivatives with respect to the field $`\varphi `$. Eq.(1) is a non perturbative evolution equation for $`U_k`$. It is immediate to see that the k-independent solution to Eq.(1), i.e. the fixed point potential, is the trivial gaussian one: $$U_f(\varphi )=\frac{1}{2}m^2\varphi ^2+\alpha \varphi +\beta .$$ (2) One simple way to show that Eq.(1) admits the well known perturbative solution is as follows. Let's consider first a small deviation of the potential $`U_k(\varphi )`$ around $`U_f(\varphi )`$, $$U_k(\varphi )=U_f(\varphi )+\delta U_k(\varphi ).$$ (3) We develop now the logarithm in Eq.(1) in powers of $`\delta U_k`$ and expand $`\delta U_k`$ in powers of the field (for the sake of simplicity we consider a potential with the $`Z(2)`$ symmetry $`\varphi \rightarrow -\varphi `$) $$\delta U_k(\varphi )=\frac{\lambda _2(k)}{2}\varphi ^2+\frac{\lambda _4(k)}{4!}\varphi ^4+\mathrm{}.$$ (4) At any order in $`\delta U_k`$ we obtain this way an infinite system of equations for the coupling constants. By truncating this system to the first two equations for $`\lambda _2(k)`$ and $`\lambda _4(k)`$ and solving iteratively up to the second order in $`\delta U_k`$ we get the known perturbative one loop RG flow for the coupling constants. Extrapolating down to $`k=0`$, this flow identifies the gaussian fixed point as infrared stable, i.e. we have the standard result about the triviality of the theory. We ask now the question about the possibility of having non perturbative solutions. The only k-independent solution to Eq.(1), i.e. 
the only fixed point potential, is the gaussian one found in Eq.(2). We want to linearize now Eq.(1) around $`U_f`$ and look for a small but non perturbative $`\delta U_k`$. We have (for notational simplicity I write $`U_k`$ rather than $`\delta U_k`$): $$k\frac{\partial }{\partial k}U_k(\varphi )=-\frac{k^4}{16\pi ^2}\frac{1}{k^2+m^2}U_k^{\prime \prime }(\varphi ).$$ (5) There is a class of non perturbative solutions to Eq.(5) that is very easy to find. Let's seek for solutions of the form: $$U_k(\varphi )=f(k)g(\varphi ).$$ (6) Once inserted in Eq.(5), the ansatz (6) gives: $$\frac{d^2g(\varphi )}{d\varphi ^2}-\frac{1}{\mu ^2}g(\varphi )=0$$ (7) $$\frac{df(k)}{dk}+\frac{1}{16\pi ^2\mu ^2}\frac{k^3}{k^2+m^2}f(k)=0,$$ (8) where $`\mu ^2`$ is any constant with dimension $`(\mathrm{mass})^2`$ that allows us to separate Eq.(5) in the two Eqs. (7) and (8). Solving now these equations is a simple exercise. For positive values of the constant $`\mu ^2`$ the solutions to Eq.(5) have the form $$U_k(\varphi )=M_1^4e^{-\frac{1}{32\pi ^2\mu _1^2}\left(k^2-m^2\mathrm{ln}\frac{k^2+m^2}{m^2}\right)}e^{-\frac{\varphi }{\mu _1}}$$ (9) $$U_k(\varphi )=M_2^4e^{-\frac{1}{32\pi ^2\mu _2^2}\left(k^2-m^2\mathrm{ln}\frac{k^2+m^2}{m^2}\right)}e^{+\frac{\varphi }{\mu _2}},$$ (10) for arbitrary values of the mass dimension constants $`M_i`$ and $`\mu _i`$. For negative values of $`\mu ^2`$ (calling $`\overline{\mu }^2=-\mu ^2>0`$), the solutions to Eq.(5) have the form: $$U_k(\varphi )=M_3^4e^{+\frac{1}{32\pi ^2\overline{\mu }_3^2}\left(k^2-m^2\mathrm{ln}\frac{k^2+m^2}{m^2}\right)}\mathrm{cos}\left(\frac{\varphi }{\overline{\mu }_3}\right)$$ (11) $$U_k(\varphi )=M_4^4e^{+\frac{1}{32\pi ^2\overline{\mu }_4^2}\left(k^2-m^2\mathrm{ln}\frac{k^2+m^2}{m^2}\right)}\mathrm{sin}\left(\frac{\varphi }{\overline{\mu }_4}\right).$$ (12) For solutions of the kind (9) and (10) the gaussian potential $`U_f(\varphi )`$ is an ultraviolet fixed point. 
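The separable solutions can be checked by finite differences. In the sketch below the overall sign of the linearized flow is fixed so as to be consistent with Eq. (8), i.e. k dU/dk = −(k⁴/16π²) U″/(k²+m²); under that convention the type-(9) solution, with the k²−m²ln[(k²+m²)/m²] exponent, satisfies the equation to high numerical accuracy.

```python
import numpy as np

# Finite-difference check that U_k = f(k) exp(-phi/mu), with
# f(k) = exp[-(k^2 - m^2 ln((k^2+m^2)/m^2)) / (32 pi^2 mu^2)],
# solves  k dU/dk = -(k^4/16pi^2) U'' / (k^2 + m^2).
m, mu = 1.0, 0.7   # arbitrary mass-dimension constants

def U(k, phi):
    f = np.exp(-(k**2 - m**2 * np.log((k**2 + m**2) / m**2))
               / (32 * np.pi**2 * mu**2))
    return f * np.exp(-phi / mu)

k, phi, h = 1.3, 0.4, 1e-4
lhs = k * (U(k + h, phi) - U(k - h, phi)) / (2 * h)          # k dU/dk
Upp = (U(k, phi + h) - 2 * U(k, phi) + U(k, phi - h)) / h**2  # U''
rhs = -(k**4 / (16 * np.pi**2)) * Upp / (k**2 + m**2)
print(abs(lhs - rhs) < 1e-7)   # True
```

The same check with μ² < 0 and a cosine in place of the exponential reproduces the solutions of the type (11) and (12), with the opposite sign in the k-dependent exponent.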
From that we immediately understand the completely different nature of these solutions with respect to the perturbative one. As already mentioned these exponential potentials $`e^{\pm \varphi }`$, as well as linear combinations of them, are among the favourite candidates for the quintessence scenario. The attempts that have been done to derive different forms of quintessence potentials all started from some sort of “fundamental” higher energy model. For example, inverse power-law potentials have been motivated from Supersymmetric QCD . For another derivation of the same kind of potentials see. Exponential potentials of the form found above arise naturally in several higher energy/higher dimensional theories. As we simply don’t know the theory that describes our world at very high energies, the use of phenomenologically motivated potentials is certainly well justified. In that respect indications coming directly from the effective theory of the quintessence field should be considered as very welcome. Actually, whatever the structure of the higher energy/higher dimensional theory, i.e. whatever the fundamental origin of the scalar quintessence field is, the “low energy” effective theory for this field should be very well described by Eq.(1). This happens because the higher energy degrees of freedom decouple from the quintessence field. There could still be the problem of the coupling of this field to ordinary matter, i.e. to ordinary standard model fields. As the long range forces that these couplings would generate are not observed, we can suppose that they are suppressed through some mechanism as for instance the one proposed in Ref.. In this case the renormalization group Eq.(1) gives the flow equation for the quintessence field irrespectively of the nature of the higher energy theory and the above results (9) and (10) give a solid motivation from the “low energy side” to the phenomenological exponential potentials $`e^{\pm \varphi }`$. 
Concerning the solutions of the kind (11) and (12), we see that the gaussian potential $`U_f(\varphi )`$ is neither an infrared nor an ultraviolet fixed point for them, even though we can multiply these solutions by a small dimensionless constant so that they still make sense as linearized solutions of Eq.(1). I want to mention here that cosine potentials have also been considered as possible quintessence candidates. We now seek other non perturbative solutions. Actually there is at least another class of such solutions that can be easily found. To see that, we switch first to the dimensionless form of Eq.(1). If we define the dimensionless field $`\phi `$, the dimensionless scale parameter $`t`$ and the dimensionless potential $`v(\phi ,t)`$ from $$\phi =4\pi \frac{\varphi }{k},t=\mathrm{ln}\frac{\mathrm{\Lambda }}{k},U_k(\varphi )=\frac{k^4}{16\pi ^2}v(\phi ,t),$$ (13) where $`\mathrm{\Lambda }`$ is a boundary value for $`k`$, Eq.(1) becomes: $$\frac{\partial v}{\partial t}+\phi \frac{\partial v}{\partial \phi }-4v=\mathrm{ln}\left(\frac{1+\frac{\partial ^2v}{\partial \phi ^2}}{1+\frac{m^2}{\mathrm{\Lambda }^2}e^{2t}}\right).$$ (14) The dimensionless potential $`v_f(\phi ,t)`$ that corresponds to the gaussian potential $`U_f(\varphi )`$ of Eq.(2) is: $$v_f(\phi ,t)=\frac{1}{2}\frac{m^2}{\mathrm{\Lambda }^2}e^{2t}\phi ^2+\frac{4\pi \alpha }{\mathrm{\Lambda }^3}e^{3t}\phi +\frac{16\pi ^2\beta }{\mathrm{\Lambda }^4}e^{4t},$$ (15) and solves the equation: $$\frac{\partial v}{\partial t}+\phi \frac{\partial v}{\partial \phi }-4v=0.$$ (16) Actually Eq.(14) is more often written as: $$\frac{\partial v}{\partial t}+\phi \frac{\partial v}{\partial \phi }-4v=\mathrm{ln}\left(1+\frac{\partial ^2v}{\partial \phi ^2}\right),$$ (17) i.e. by setting $`m^2=0`$. This simply corresponds to choosing a massless fixed point potential and from now on we also restrict ourselves to this case. 
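A quick finite-difference check confirms that the gaussian potential of Eq. (15) solves the scaling equation (16), with the signs as required for consistency with Eqs. (1) and (13); the constants below are arbitrary illustrative values.

```python
import numpy as np

m, Lam, alpha_c, beta = 1.0, 10.0, 0.3, 0.2   # arbitrary constants

def v_f(phi, t):
    """Dimensionless gaussian fixed point potential, Eq. (15)."""
    return (0.5 * (m / Lam)**2 * np.exp(2 * t) * phi**2
            + 4 * np.pi * alpha_c / Lam**3 * np.exp(3 * t) * phi
            + 16 * np.pi**2 * beta / Lam**4 * np.exp(4 * t))

phi, t, h = 0.7, 0.2, 1e-6
vt = (v_f(phi, t + h) - v_f(phi, t - h)) / (2 * h)
vphi = (v_f(phi + h, t) - v_f(phi - h, t)) / (2 * h)
residual = vt + phi * vphi - 4 * v_f(phi, t)   # LHS of Eq. (16)
print(abs(residual) < 1e-6)   # True
```

Each monomial in Eq. (15) cancels separately in the residual, reflecting the fact that the scaling operator of Eq. (16) counts the explicit t-dependence against the canonical dimension of each term.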
We consider now a small fluctuation around the fixed point potential, $$v(\phi ,t)=v_f(\phi ,t)+\delta v(\phi ,t),$$ (18) and linearize Eq.(17) around $`v_f`$ (again we write $`v(\phi ,t)`$ rather than $`\delta v(\phi ,t)`$) to get: $$\frac{\partial v}{\partial t}+\phi \frac{\partial v}{\partial \phi }-4v=\frac{\partial ^2v}{\partial \phi ^2}.$$ (19) By following the same strategy as before we look for solutions to Eq.(19) with factorized $`t`$ and $`\phi `$ dependence: $$v(\phi ,t)=f(t)g(\phi ).$$ (20) Inserting the ansatz (20) in Eq.(19) we have: $$\frac{d^2g(\phi )}{d\phi ^2}-\phi \frac{dg(\phi )}{d\phi }+4g(\phi )=\alpha g(\phi )$$ (21) $$\frac{df(t)}{dt}=\alpha f(t),$$ (22) where $`\alpha `$ is an arbitrary dimensionless constant that allows us to separate Eq.(19) in the two Eqs. (21) and (22). Equation (22) is trivially solved and gives: $$f(t)=Ae^{\alpha t},$$ (23) where $`A`$ is the integration constant. The solution to Eq.(21) can be found by series. Writing $$g(\phi )=\sum _{n=0}^{\infty }c_n\phi ^n,$$ (24) inserting (24) in Eq.(21) and exploiting the recurrence relations between the coefficients $`c_n`$, we get the two linearly independent solutions ($`a=\alpha -4`$): $$g_1(\phi )=c_0\left(1+\sum _{n=1}^{\infty }\frac{\mathrm{\Pi }_{j=1}^n(a+2j-2)}{(2n)!}\phi ^{2n}\right)$$ (25) $$g_2(\phi )=c_1\left(\phi +\sum _{n=1}^{\infty }\frac{\mathrm{\Pi }_{j=1}^n(a+2j-1)}{(2n+1)!}\phi ^{2n+1}\right)$$ (26) where I have explicitly kept $`c_0`$ and $`c_1`$, the first two coefficients in Eq.(24), as integration constants. 
After some trivial algebra we see that Eqs.(25) and (26) can be written in terms of confluent hypergeometric functions and the general solution to Eq.(21) takes the compact form $$g(\phi )=c_0M(\frac{a}{2},\frac{1}{2},\frac{\phi ^2}{2})+c_1\phi M(\frac{a+1}{2},\frac{3}{2},\frac{\phi ^2}{2}).$$ (27) We recall here that the confluent hypergeometric function $`M(a,b,x)`$ is defined as $$M(a,b,x)=1+\frac{a}{b}x+\frac{1}{2!}\frac{a(a+1)}{b(b+1)}x^2+\mathrm{}.$$ (28) Of course once dimensionless solutions to Eq.(19) are known, it is a trivial exercise to reconstruct dimensionful potentials from Eq.(13). Some of the potentials obtained this way could also be considered as quintessence candidates. It is worthwhile comparing the general solution (27) to Eq.(21) with a non perturbative result that was obtained some years ago by Halpern and Huang following a different but essentially equivalent approach. Searching for alternatives to the trivial $`\varphi ^4`$ theory, the authors expanded the potential in even powers of the field and derived an infinite system of differential equations for the coupling constants. Looking for new eigendirections in the parameter space, they actually ended up with the solution $`g_1(\phi )`$ to Eq.(21). More precisely, after making the trivial changes to match the two different notations and restricting ourselves to consider the $`N=1`$ and $`d=4`$ case as in the present paper, we can immediately check that eq.(49) of Ref. coincides with the $`g_1(\phi )`$ solution above. As these authors considered potentials containing only even powers of the field, obviously they could only get the solution $`g_1(\phi )`$. At first sight it could seem that the solution $`g_2(\phi )`$ should be discarded as it contains odd powers of the field and as such it is unbounded from below. We would conclude in this case that the only physically acceptable general solution to Eq.(19) is $`g_1(\phi )`$, i.e. the Halpern-Huang result. But this is not always true. 
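The equivalence of the series (25) and the compact hypergeometric form (27) is easy to verify numerically with SciPy's implementation of Kummer's function M(a,b,x) (`hyp1f1`). For a = 1 (α = 5) the even solution reduces to e^(φ²/2), a case discussed below.

```python
import math
from scipy.special import hyp1f1

def g1_series(a, phi, nmax=60):
    """Even solution g1 of Eq. (25) (with c0 = 1):
    1 + sum_n [prod_{j=1..n}(a + 2j - 2)] phi^(2n) / (2n)!"""
    total, prod = 1.0, 1.0
    for n in range(1, nmax + 1):
        prod *= a + 2 * n - 2
        total += prod * phi**(2 * n) / math.factorial(2 * n)
    return total

a, phi = 1.0, 0.8
series = g1_series(a, phi)
compact = hyp1f1(a / 2.0, 0.5, phi**2 / 2.0)   # Eq. (27) with c0 = 1, c1 = 0
print(abs(series - compact) < 1e-10)           # True: the two forms agree
```

The same comparison with `hyp1f1((a+1)/2, 1.5, phi**2/2)` reproduces the odd solution (26).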
It is immediate to see from Eq.(27) that for all the odd integer values of $`\alpha `$, positive or negative, such that $`\alpha <4`$, $`g_2(\phi )`$ is a polynomial in $`\phi `$ and it can be combined with $`g_1(\phi )`$ to give a bounded from below potential. It is also easy to give examples where, even when the hypergeometric function in $`g_2(\phi )`$ keeps all the infinite terms, a linear combination of $`g_1(\phi )`$ and $`g_2(\phi )`$ still gives a bounded from below potential. Take for instance the case $`\alpha =5`$ for which neither $`g_1`$ nor $`g_2`$ is a polynomial. From the asymptotic behaviour of the hypergeometric function we easily see that $`c_0`$ and $`c_1`$ can be chosen in such a way that the resulting potential is bounded from below. The class of physically acceptable potentials, that are solutions of Eq.(21), is larger than that spanned by $`g_1`$ only. I should mention at this point that an attempt to solve Eq.(19) has been recently made in Ref. where the solution $`v(\phi ,t)=e^{5t}e^{\frac{\phi ^2}{2}}`$ is presented. The author says that this solution is asymptotically similar to those of Ref. but that the connection between the two results is unclear to him. From Refs. and from Eq.(27) above (setting $`c_1=0`$) we immediately see that this solution is just the Halpern-Huang result for $`\alpha =5`$. It is also a trivial exercise to verify that the other solutions presented there (see Eq.(21) of that paper) are particular cases of the general solution, Eq.(27), obtained for the integer values $`\alpha =6,7,8,\mathrm{}`$. To summarize, in this Letter I have presented new solutions to the LPA of the exact renormalization group equation for the effective action of a single component scalar field theory. These potentials have the exponential form $`e^{\pm \varphi }`$ and have been recently used in phenomenological quintessence models. 
As the effective theory for the quintessence field should be governed by Eq. (1) whatever its higher-energy origin, I argue that the fact that these potentials arise as solutions of the renormalization group equation gives a derivation that is alternative and complementary to those based on higher-energy theories, and should allow one to discriminate between different proposals. I have also presented other solutions of the LPA renormalization group equation, some of which were already partially known. Apart from the application to the particular cosmological problem suggested in the present paper, I think that these results could also be relevant in other frameworks where scalar fields play a role. For the nonperturbative solutions of Eq. (19) of the kind $`e^{\pm \varphi }`$, the Gaussian potential is a UV-stable fixed point. The same is true for those solutions of the kind (27) with $`\alpha >0`$. This result is opposite to the perturbative one, and its implications for particle physics models are certainly worth exploring. The scalar sector of the standard model is itself an open problem. It is not a priori clear whether the existence of these solutions could have some relevance for the theory. I hope to come back to this issue in a future paper. Work is in progress in this direction.
# Elastic properties of the vortex lattice for a superconducting film of finite thickness ## Abstract In this paper we investigate the elastic properties of the vortex lattice for a superconducting film of finite thickness. We derive an analytic expression for the compression modulus. The shear modulus is evaluated numerically by using both the Pearl interaction potential, valid in the limit of a very thin film, and a potential for films of arbitrary thickness. A comparative study of the shear moduli is carried out. The problem of a vortex emerging perpendicularly from the surface of a superconductor was first considered by Pearl. He pointed out that the vortex-vortex interaction potential at the surface of a superconductor follows the power law $`1/r`$ ($`1/k`$ in Fourier space) at large distances. The Pearl interaction potential has been widely used to investigate the equilibrium and elastic properties of the vortex lattice in superconducting films. In this work we will show that the Pearl interaction potential does indeed describe satisfactorily the superconducting properties of a very thin film. However, we will also show that, as the thickness of the film approaches the London penetration depth $`\lambda `$, the use of this potential may significantly underestimate the shear modulus at sufficiently low induction. It has been shown elsewhere that the energy of an ensemble of interacting vortices inside a superconducting film of finite thickness $`d`$, together with the energy of the stray fields in the vacuum, is given by $$F=\frac{\mathrm{\Phi }_0^2}{8\pi }\int \frac{d^2k}{(2\pi )^2}\frac{1}{\lambda ^2\alpha ^2}\left\{d+\frac{2}{\lambda ^2k\alpha \left[\alpha +k\mathrm{coth}\left(\frac{\alpha d}{2\lambda }\right)\right]}\right\}|S(𝐤)|^2,$$ (1) where $`𝐤=(k_x,k_y)`$, $`k=\sqrt{k_x^2+k_y^2}`$, $`\alpha =\sqrt{k^2+\lambda ^{-2}}`$, and $`\mathrm{\Phi }_0`$ is the flux quantum. 
The structure factor is given by $$S(𝐤)=\underset{i}{\sum }e^{i𝐤𝐑_i}.$$ (2) Here $`𝐑_i`$ is the position of the $`i`$th vortex line. Note that this result is valid for an ensemble of distorted vortices, that is, the positions of the vortices need not correspond to the equilibrium configuration. The first term inside Eq. (1) represents the interaction energy of the vortex lines as if the surfaces were absent. The second term represents the surface energy associated with the magnetic energy of the stray field in the vacuum. Note that for $`k`$ small (large $`r`$), $`\alpha ^2\to 1/\lambda ^2`$. Thus, the surface energy goes as $`\mathrm{\Phi }_0^2/8\pi ^2r`$. This is the Pearl result for vortices emerging from a semi-infinite isotropic superconductor. Another interesting particular case of Eq. (1) is the limit of a very thin film, $`d\to 0`$, and $`k`$ small. In this limit, from Eq. (1) it is straightforward to show that $$F=E_0\int \frac{d^2k}{(2\pi )^2}\frac{2\pi d}{k\mathrm{\Lambda }^{-1}+k^2}|S(𝐤)|^2,$$ (3) where $`E_0=(\mathrm{\Phi }_0/4\pi \lambda )^2`$ and $`\mathrm{\Lambda }=2\lambda ^2/d`$. This is precisely the energy of an ensemble of interacting vortices in a very thin film, first obtained by Pearl. His derivation, however, is supposed to be valid for any $`k`$. In this work we use Eq. (3) with no restriction on $`k`$. The integrand of Eq. (1) contains two terms. The bulk term follows the power law $`1/k^2`$ for large $`k`$. The integrand of Eq. (3) has the same large-$`k`$ behavior. As a result, the self-energy contributions to both Eqs. (1) and (3) diverge. The divergence is logarithmic, because London theory neglects the size of the vortex core. To remove this short-length-scale divergence we use a Gaussian cutoff. This regularization procedure consists of multiplying the integrands of Eqs. (1) and (3) by a factor $`e^{-2\xi ^2k^2}`$, where $`\xi `$ is the coherence length. In the first case, only the bulk term needs a cutoff. 
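The logarithmic character of the regularized self-energy can be made explicit. With $`\lambda =1`$, the bulk contribution is proportional to $`I(\xi )=\int d^2k\,e^{-2\xi ^2k^2}/(1+k^2)`$; substituting $`u=k^2`$ and then $`t=2\xi ^2u`$ gives $`I(\xi )=\pi \int _0^{\infty }e^{-t}/(t+2\xi ^2)dt=\pi e^{2\xi ^2}E_1(2\xi ^2)`$, so that $`I(\xi )-I(2\xi )\to 2\pi \mathrm{ln}2`$ as $`\xi \to 0`$. A hedged numerical sketch (Python; the exponential-integral series is standard, the parameter values are illustrative only):

```python
import math

EULER_GAMMA = 0.5772156649015329

def E1(a, terms=40):
    """Exponential integral E1(a) via its convergent small-argument series:
    E1(a) = -gamma - ln(a) + sum_{n>=1} (-1)^{n+1} a^n / (n * n!)."""
    s = -EULER_GAMMA - math.log(a)
    for n in range(1, terms + 1):
        s += (-1) ** (n + 1) * a ** n / (n * math.factorial(n))
    return s

def self_energy(xi):
    """Regularized bulk self-energy integral I(xi) = pi e^{2 xi^2} E1(2 xi^2),
    in units where lambda = 1."""
    a = 2.0 * xi * xi
    return math.pi * math.exp(a) * E1(a)

xi = 0.005                               # small cutoff to expose the logarithm
diff = self_energy(xi) - self_energy(2 * xi)
# As xi -> 0 this difference approaches 2 pi ln 2, i.e. I(xi) grows like -2 pi ln xi.
```

The divergence is thus only logarithmic in the vortex-core size, as stated above.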
The London picture is valid in the limit of low induction and a very strong type-II superconductor, $`\kappa =\lambda /\xi \gg 1`$. Let us suppose that initially the vortex lines are located at the equilibrium positions $`𝐑_i^0=(X_i^0,Y_i^0)`$. Let us denote by $`𝐮_i\equiv 𝐮(𝐑_i^0)`$ the displacement of the $`i`$th vortex line from its equilibrium position. The new position of the $`i`$th vortex line is then given by $`𝐑_i=𝐑_i^0+𝐮_i`$. The elastic energy of the distorted vortex lattice is most conveniently written in $`𝐤`$-space by expressing the displacement in terms of its Fourier transform $$𝐮_i=\int _{BZ}\frac{d^2k}{(2\pi )^2}\frac{1}{d}𝐮(𝐤)e^{i𝐤𝐑_i^0},$$ (4) where the integration is taken over the first Brillouin zone (BZ) of the vortex lattice. Substituting this into Eq. (1) and expanding up to second order in $`𝐮_i`$, we obtain for the excess free energy $$\delta F=\frac{1}{2}\int \frac{d^2k}{(2\pi )^2}\frac{1}{d}u_\alpha (𝐤)\mathrm{\Phi }_{\alpha \beta }(𝐤)u_\beta (𝐤),$$ (5) with $`(\alpha ,\beta )=(x,y)`$. Here $`B=\mathrm{\Phi }_0/A`$ is the induction, where $`A`$ is the area of a unit cell of the vortex lattice. The factor $`1/d=\int _{-\pi /d}^{\pi /d}dk_z/(2\pi )`$ in the integrals was introduced deliberately to resemble the corresponding three-dimensional expressions. The coefficients $`\mathrm{\Phi }_{\alpha \beta }(𝐤)`$, which form the elasticity matrix, are real, symmetric, and periodic in $`𝐤`$-space. One has $$\mathrm{\Phi }_{\alpha \beta }(𝐤)=\frac{B^2}{4\pi }\underset{𝐐}{\sum }[(𝐤+𝐐)_\alpha (𝐤+𝐐)_\beta V(𝐤+𝐐)-Q_\alpha Q_\beta V(𝐐)],$$ (6) where $`𝐐`$ are the reciprocal lattice vectors. Although the last equation is valid for any type of vortex lattice, we will use a triangular lattice. The basis vectors of the reciprocal lattice are given by $$𝐐_1=\frac{2\pi }{a}\left(\widehat{𝐱}-\frac{\widehat{𝐲}}{\sqrt{3}}\right),𝐐_2=\frac{4\pi }{a}\frac{\widehat{𝐲}}{\sqrt{3}},$$ (7) where $`a^2=2\mathrm{\Phi }_0/\sqrt{3}B`$. 
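Eq. (7) and the relation $`a^2=2\mathrm{\Phi }_0/\sqrt{3}B`$ can be cross-checked in a few lines: the reciprocal basis determines the unit-cell area through $`A=(2\pi )^2/|𝐐_1\times 𝐐_2|`$, and with $`B=\mathrm{\Phi }_0/A`$ each cell must carry exactly one flux quantum. An illustrative Python sketch (the numerical values of $`\mathrm{\Phi }_0`$ and $`B`$ are arbitrary):

```python
import math

Phi0 = 1.0    # flux quantum (arbitrary units)
B = 0.37      # induction (arbitrary units)
a = math.sqrt(2.0 * Phi0 / (math.sqrt(3.0) * B))   # triangular lattice constant

# Basis vectors of the reciprocal lattice, Eq. (7)
Q1 = (2.0 * math.pi / a, -2.0 * math.pi / (a * math.sqrt(3.0)))
Q2 = (0.0, 4.0 * math.pi / (a * math.sqrt(3.0)))

# Unit-cell area from |Q1 x Q2| = (2 pi)^2 / A
cross = Q1[0] * Q2[1] - Q1[1] * Q2[0]
A = (2.0 * math.pi) ** 2 / abs(cross)

flux_per_cell = B * A   # should reproduce Phi0 exactly
```

The check confirms that $`A=\sqrt{3}a^2/2`$ and that one vortex (one $`\mathrm{\Phi }_0`$) occupies each cell.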
Then, the vortex positions in reciprocal space are given by $`𝐐\equiv 𝐐_{mn}=n𝐐_1+m𝐐_2`$, with $`m,n`$ integers. The interaction potential $`V(𝐤)`$ for a film of arbitrary thickness is $$V(𝐤)=\frac{1}{\lambda ^2\alpha ^2}\left[1+\frac{2}{d\lambda ^2k\alpha \left[\alpha +k\mathrm{coth}\left(\frac{\alpha d}{2\lambda }\right)\right]}\right],$$ (8) and in the Pearl limit it reduces to $$V(𝐤)=\frac{2}{kd+\lambda ^2k^2}.$$ (9) Since we have assumed that the vortex lines are straight and parallel to each other, there are only two elastic constants connected with excitations of the lattice: the compression modulus and the shear modulus. Usually, shear is softer than compression. Let us first neglect the discreteness of the lattice by considering only the $`𝐐=0`$ contribution to Eq. (6). Within this continuum approximation, only the compression modulus is obtained. One has $$c_{11}(𝐤)=\frac{B^2}{4\pi }V(𝐤).$$ (10) Note that in the long-wavelength limit, $`k\to 0`$, the interaction potential diverges, both for the full expression and in the Pearl limit \[see Eqs. (8) and (9)\]. As a result, in this local limit the compression modulus diverges. Thus, as the vortex density becomes very small, the energy cost to compress the lattice is very large. The evaluation of the more important shear modulus is possible only if one goes beyond the continuum limit. Over the first Brillouin zone the shear modulus is nearly constant, that is, unlike the compression modulus, it does not contain any significant non-locality. Thus, either of the following definitions is sufficient to describe the shear deformations $$c_{66}=\underset{k\to 0}{lim}\frac{\mathrm{\Phi }_{yy}(k,0)}{k^2},c_{66}=\underset{k\to 0}{lim}\frac{\mathrm{\Phi }_{xx}(0,k)}{k^2}.$$ (11) In what follows, the induction $`B`$ is in units of the upper critical field $`H_{c2}`$, $`b=B/H_{c2}`$; lengths are in units of $`\lambda `$; the Ginzburg-Landau parameter used is $`\kappa =50`$. 
By using these parameters we evaluated numerically $`c_{66}^p`$ and $`c_{66}`$, where the superscript $`p`$ means that the Pearl potential was used to calculate the shear modulus, whereas with no superscript the full expression of Eq. (8) was used. Fig. 1 shows the ratio $`c_{66}^p/c_{66}`$ as a function of the film thickness. As one can see from this figure, for sufficiently low induction the difference between the two shear moduli grows as $`d`$ increases. Note that this occurs already for values of $`d`$ only slightly larger than $`\xi `$. However, these differences tend to disappear as the induction increases. In Fig. 2 we also show a plot of the ratio $`c_{66}^b/c_{66}`$ as a function of the film thickness, where the superscript $`b`$ stands for bulk. Note that as the induction increases, the bulk shear modulus becomes approximately equal to the full shear modulus. This was first experimentally observed by Fiory. The conclusion we draw from this scenario is that the Pearl potential is indeed valid in the limit of a very thin film. However, a study of the elastic properties of the vortex lattice in films of thickness slightly larger than $`\xi `$, with the induction very close to the lower critical field $`H_{c1}`$, may become unreliable if this potential is employed. In this situation it is more appropriate to use the full expression for the interaction potential, although only for films of thickness not much larger than $`\lambda `$, because beyond this limit bending of the vortex lines may become important. ###### Acknowledgements. The author thanks the Brazilian Agencies FAPESP and CNPq for financial support.
# Anomalous Diffusion in Quasi One Dimensional Systems ## Abstract In order to perform quantum Hamiltonian dynamics minimizing localization effects, we introduce a quasi-one dimensional tight-binding model whose mean free path is smaller than the size of the sample. This size, in turn, is smaller than the localization length. We study the return probability to the starting layer using direct diagonalization of the Hamiltonian. We create a one dimensional excitation and observe sub-diffusive behavior for times larger than the Debye time but shorter than the Heisenberg time. The exponent corresponds to the fractal dimension $`d^{*}\approx 0.72`$, which is compared to that calculated from the eigenstates by means of the inverse participation number. PACS numbers: 72.10.Bg, 73.20.Dx, 73.20.Fz. The classical kinetic theory predicts that, in a disordered system, the return probability of an excitation decays with the diffusive law $`P(t)\propto (4\pi Dt)^{-d/2}`$, where $`D`$ is the diffusion coefficient and $`d`$ the dimension of the system. This fact has been extensively used in many areas of physics, in particular in electronic transport. However, the theory of quantum localization, built on steady-state transport properties, made clear that different regimes arise as a function of the disorder $`W`$. For $`W\to \infty `$, the eigenstates are completely localized and wave packets do not move through the system (insulating phase). If $`W\to 0`$, the motion is completely ballistic, with a velocity $`Ja/\hbar `$. Between these two limits the diffusive behavior (metallic phase) is observed. The particles move around freely between collisions over a certain average length $`\ell `$, called the mean free path. More recently, theoretical studies have considered the regime where the critical amount of disorder $`W_C`$ is such that the system is at the metal-insulator transition ($`W_C\approx zJ`$, where $`zJ`$ is the typical exchange energy with $`z`$ neighbors at distance $`a`$). 
In that case, the dynamics of the system is diffusive, but with a smaller exponent, implying $`d^{*}<d`$. This reduction in the effective dimension is attributed to the (multi)fractality of the eigenstates at the transition. On the other hand, the diffusion of a spin excitation was directly observed in NMR experiments. When the spin network has a cubic structure filling the space, the intensity of the excitation decays as $`(t-t_0)^{-1.5}`$, which is the expected value for diffusion in a three-dimensional system. However, the same experiment performed in a chain-like structure (powdered polyethylene) shows anomalous exponents in the diffusion of the excitation, namely $`0.9`$ and $`0.7`$ for the crystalline and amorphous parts of the sample, respectively. In these systems, the role of disorder is played by typical energy differences $`W`$ between states, which are smaller than the exchange energies $`zJ`$. This assures a diffusive dynamics with $`D\sim Ja^2/\hbar `$. Therefore, the effective dimensions $`d^{*}=1.8`$ and $`1.4`$ corresponding to these exponents reflect the spatial structure of the spin network. In this work we study the conditions needed to reach the diffusive regime from actual quantum dynamics, namely the numerical study of model Hamiltonians. In particular, we developed a quasi one dimensional tight-binding model, which we call the Stars necklace model, whose basic unit (layer) is a highly connected cluster (see inset in figure 1) with $`N`$ sites and intralayer hopping $`J=V/N^{0.5}`$. Disorder is introduced through on-site energies characterized by a random distribution of width $`W`$. The initial wave function is a packet defined in one layer of the system. We are interested in the probability of return to the layer, $`P`$, which is the sum of the probabilities of finding the particle at each site of the initial layer. To calculate the dynamics of the system we perform an exact diagonalization of the Hamiltonian. 
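The procedure just described (build the disordered Hamiltonian, diagonalize it exactly, and follow the probability of return to the initial layer) can be sketched as follows. The connectivity used here is a stand-in: the actual Stars necklace couplings follow the inset of Fig. 1, which is not reproduced in the text, so each layer is simply taken fully connected and adjacent layers are coupled site by site; only the hopping $`J=V/N^{0.5}`$ and the disorder of width $`W`$ are taken from the model as stated.

```python
import numpy as np

rng = np.random.default_rng(0)

V, W = 1.0, 3.0          # hopping scale and disorder strength
N, L = 4, 20             # sites per layer, number of layers
J = V / np.sqrt(N)       # intralayer hopping, as in the model

dim = N * L
H = np.zeros((dim, dim))
for l in range(L):
    for i in range(N):
        s = l * N + i
        H[s, s] = rng.uniform(-W / 2, W / 2)      # on-site disorder
        for j in range(i + 1, N):                 # intralayer (assumed all-to-all)
            t = l * N + j
            H[s, t] = H[t, s] = J
        t = ((l + 1) % L) * N + i                 # periodic "necklace" of layers
        H[s, t] = H[t, s] = V

# One-dimensional excitation: uniform amplitude on layer 0
psi0 = np.zeros(dim, dtype=complex)
psi0[:N] = 1.0 / np.sqrt(N)

# Exact diagonalization gives the evolution and the return probability P(t)
E, U = np.linalg.eigh(H)
c = U.conj().T @ psi0

def P_return(t):
    psi_t = U @ (np.exp(-1j * E * t) * c)
    return float(np.sum(np.abs(psi_t[:N]) ** 2))
```

Fitting $`\mathrm{log}P`$ versus $`\mathrm{log}t`$ between the ballistic and saturation regimes would then give the effective exponent $`d^{*}/2`$ discussed below.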
One key aspect of the numerical calculation is that the mean free path $`\ell `$, the size of the system $`L`$ and the localization length $`\xi `$ must obey some restrictive relationships. The condition for a diffusive regime is $`\ell `$ much smaller than $`L`$, thus ensuring that the particle will collide many times before reaching the boundaries of the system. In turn, to stay away from the localized regime, $`L`$ must be smaller than $`\xi `$. For a strictly one dimensional wire $`\xi =2\ell `$, while for strips and bars with a given number of transverse modes (channels) $`M`$ the localization length is expected to go as $`\xi =2M\ell `$. In our model the $`M=N-1`$ channels available for transport have the same group velocity $`v=2Va/\hbar `$. This striking feature of the model allows one to reach a one-dimensional diffusive dynamics when $`N\to \infty `$. For finite $`N`$ it provides an optimal representation of one-dimensional excitations. We studied many system sizes and amounts of disorder; a typical evolution is shown in Fig. 1. The particular system for this figure has $`N=12`$ and a perimeter of $`100a`$, with $`W=3V`$, $`\ell \approx 6a`$ and $`\xi \approx 140a`$. We see that after a ballistic time $`\ell /v`$, the evolution follows a power law, which indicates a diffusive behavior. Nevertheless, the exponent of this power law is somewhat different from the expected one-dimensional value $`0.5`$. As in the examples mentioned above, this anomalous exponent could be due to a fractal effective dimension of the system. Fitting the evolution to a power law with a free exponent resulted in an effective dimension of the system $`d^{*}\approx 0.7`$. In our model, a possible cause of fractality in the eigenfunctions of the system is disorder. Strongly localized states in the band tails are confined around some random points. 
This means that they represent “holes” in the real space available to the extended wave functions, thus making the effective dimension of the system smaller than the real one. For times longer than those shown in the figure, the autocorrelation function saturates; this is a finite-size effect (the saturation value depends linearly on the system size). We also observed that a magnetic field does not change the exponent of the power law noticeably, but reduces the value of the saturation, meaning that there are fewer localized states. Another way to study how eigenstates of energy $`\epsilon `$ occupy some fraction of the volume of space is by means of the inverse participation number $`p^{-1}=\sum _i\left|\phi _i(\epsilon )\right|^4`$. For plane waves one obtains $`p=L^d`$, i.e., the volume of the system. For a localized state $`p`$ is proportional to the volume in which the state has a non-vanishing amplitude. However, if the states are extended but fractal, in the thermodynamic limit $`p`$ diverges as $`p=L^{d^{*}}`$, with $`d^{*}`$ an effective dimension that may be different from the dimension $`d`$ of the ordered system. We calculated the inverse participation number for each of the eigenstates of the system and, through it, the effective dimension ratio $`d^{*}/d`$ of each of them. The results (shown in Fig. 2) are in very good agreement with the effective dimension calculated through the fit of the autocorrelation function depicted in Fig. 1. Summarizing, we have introduced a numerical Hamiltonian model whose exact solution shows a regime with sub-diffusive behavior. Moreover, we have presented hints of a fractal dimension of the extended eigenstates induced by the presence of disorder. By hindering particles from a fraction of the available real space, disorder induces a weak breaking of ergodicity that anticipates the non-ergodicity associated with full localization.
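For concreteness, the participation-number diagnostic used above can be illustrated on three hand-built states (a Python sketch; the states are illustrative and are not eigenstates of the model):

```python
import numpy as np

def participation_number(phi):
    """p = 1 / sum_i |phi_i|^4 for a normalized state phi."""
    phi = np.asarray(phi, dtype=complex)
    phi = phi / np.linalg.norm(phi)
    return 1.0 / float(np.sum(np.abs(phi) ** 4))

L = 100
plane_wave = np.exp(2j * np.pi * 3 * np.arange(L) / L)   # extended: p = L
localized = np.zeros(L)
localized[0] = 1.0                                       # fully localized: p = 1
envelope = np.exp(-np.arange(L) / 5.0)                   # exponentially localized:
                                                         # intermediate 1 < p < L
p_ext = participation_number(plane_wave)
p_loc = participation_number(localized)
p_mid = participation_number(envelope)
```

A fractal extended state would give $`p=L^{d^{*}}`$, i.e. an exponent between the two limiting cases when $`p`$ is measured as a function of the system size.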
# Quantum Field Symbolic Analog Computation: Relativity Model ## 1 Introduction Recently, there has been considerable interest in what is called quantum computation . The efforts in this regard seek improved ways of performing computations or building new types of computing machines. It is hoped that not only will faster computation be achieved, but also that a better understanding of computational complexity will come about . Quantum computers might appear to be discrete systems, such as a finite collection of spins or qubits . But this is not the only possibility. Instead of thinking of a computer based on a physical system which is understood in terms of non-relativistic quantum mechanics with spin added on <sup>1</sup><sup>1</sup>1 Spin used to be added in an ad hoc fashion to non-relativistic quantum mechanics until Dirac produced his relativistic equation for the electron ., one can also consider a system based on relativistic quantum mechanics . In another direction, the classical discrete digital computer has been generalized to include the possibility of computing over the continuum . This is because computing over the continuum is more appropriate to the way we do analysis, physics, and engineering problems. So this is a computing model based on classical mechanics. But classical mechanics can also be extended to include relativity, yielding relativistic mechanics . Because, in studying atomic phenomena, classical mechanics has been replaced by quantum mechanics, we can also think of more general models of computing based on adding relativity to quantum theory, obtaining relativistic quantum field theory. It is natural to consider our physical or quantum systems in the continuum limit. In fact Isaac Newton , when studying gravitation, found it natural to consider a continuous distribution of matter to model the earth’s gravitational action at external points. 
Combining continuum quantum mechanics with relativity, we obtain quantum field theory. ## 2 Generalized Quantum Computation The continuum limit (in space-time configuration) of quantum mechanics, together with the incorporation of Einstein’s special relativity, leads naturally to the relativistic quantum theory of fields. By taking the limit as the velocity of light $`c\to \infty `$ we expect to get non-relativistic quantum mechanics. The limit as Planck’s constant $`h\to 0`$ gives classical mechanics . Quantum field theory includes all of quantum mechanics, classical mechanics, and much more <sup>2</sup><sup>2</sup>2Examples are discrete anti-unitary symmetries, CPT invariance and the spin-statistics connection .. Also included are unitary transformations and superposition of amplitudes, which are regarded as prerequisites for quantum computation. It has been possible to generalize quantum computation to relativistic quantum field computation in a certain model . We expect other forms of quantum computation, namely those based on non-relativistic quantum mechanics, topological quantum field theories and classical mechanics, to be related to this generalization. The relationship should shed light not only on computing possibilities , but also on quantum field theory itself. ## 3 Quantum Fields Non-relativistic quantum mechanics is not complete, because radiative corrections have to be made to it using field theory. In dealing with a system with an infinite number of degrees of freedom, it is well known historically that formulations of quantum field theory like perturbation theory lead to infinities, resulting in the need for renormalization. Nevertheless, quantum electrodynamics has turned out to be “the most accurate theory known to man” <sup>3</sup><sup>3</sup>3This statement is attributed to Feynman.. 
Dirac, Schwinger and Feynman are some of the principal contributors to quantum electrodynamics <sup>4</sup><sup>4</sup>4The spectacular history of this is related in . and hence to quantum field theory . Relativistic covariance is of paramount importance in correctly performing the renormalization process. It is useful here to work within the Wightman formulation of quantum field theory <sup>5</sup><sup>5</sup>5The fruitfulness and utility of this formulation, from a current perspective, are discussed in .. We are dealing with fields in the Heisenberg picture, without using perturbation theory or any particular time-frame-related Hamiltonians. The theory is formulated in terms of analytic functions (Wightman functions) of several complex variables. These functions arise from their boundary values, which are vacuum expectation values of the form $$𝒲_m(x_1,x_2,\mathrm{}x_m)=(\mathrm{\Omega },\varphi _1(x_1)\varphi _2(x_2),\mathrm{}\varphi _m(x_m)\mathrm{\Omega })$$ of products of $`m`$ quantum field operators in a separable Hilbert space. The field operators transform according to appropriate unitary spin representations of the Poincaré (inhomogeneous $`SL(2,\mathbb{C})`$) group. Quantum fields are uniquely reconstructed from these analytic functions by Wightman. Let the ($`m`$-point) Wightman functions be denoted by $`W(n;z)`$, where $`z`$ denotes the set of $`n`$ complex variables. Here, $`n=sm`$, where $`s\ge 2`$ is the space-time dimension; space-time will consist of 1 time and $`(s-1)`$ space dimensions <sup>6</sup><sup>6</sup>6Because it is not known, at present, how to physically understand concepts like closed time-like loops in more than one time dimension, there will be only one dimension in time . Because these analytic functions are fundamental to the theory, one is led to the computation of holomorphy domains for these functions over the space of several complex variables, $`\mathbb{C}^n`$ . 
The mass spectrum is assumed to be reasonable, in the sense that momentum vectors $`p^\mu `$ lie in the closed forward light cone, with time component $`p^0>0`$ except for the unique vacuum state, which has $`p=0`$. ## 4 Computer Automation When $`s=2`$, i.e. in 1-dimensional space and 1-dimensional time, a system of light-cone coordinates is appropriate . In this 2-dimensional space-time, computer automation has been accomplished as deterministic exact analog computation (computation over “cells” in the continuum of $`\mathbb{C}^n`$) to obtain primitive extended tube domains of holomorphy for $`W(n;z)`$. By a series of abstractions the computation is done with essentially reversible logic, programmed in the Prolog language, and simulated on a Turing machine. Just as the classical computer, the Turing machine, computes over $`\mathbb{Z}`$ or equivalently over $`\mathbb{Z}_2`$, we now have what can be called a complex Turing machine, in fact a severally complex Turing machine. The primitive extended tube domains are bounded by analytic hypersurfaces, namely Riemann cuts denoted by $`C_{ij}`$ and other hypersurfaces denoted by $`S_{ij,kl}`$ and $`F_{ij,kl}`$ . These domains are in the form of semi-algebraic sets. Since the computation is symbolic, it is also exact, which is important in handling analytic functions. Because of the Lorentz invariance properties of the physics involved, the domains have a structure referred to as complex Lorentz projective spaces <sup>7</sup><sup>7</sup>7This is different from Euclidean complex projective spaces, well known in mathematics. The difference is captured in the Hall-Wightman theorem .. Related to this invariance are certain continuum cells over which the computation occurs. Thus this computation is also like analog computation, which would otherwise be regarded as impossible to do exactly. ## 5 Analytic Extensions In relativistic quantum field theory it is possible to implement the physical requirement of microcausality. 
There exists quantum microcausality (field operators commute or anticommute) at totally space-like points. Together with the requirement of permutation invariance of the domains, the edge-of-the-wedge theorem provides enlargements of the original primitive domains of analyticity into unions of permuted primitive domains. Mapping these union domains creates some Boolean satisfiability problems. In fact, the novel methods of computation raise interesting issues of computability and complexity. ## 6 Non-deterministic Holomorphic Extensions By the nature of analytic domains in more than one complex variable, it is in general possible to further extend these domains towards the maximal domains, called envelopes of holomorphy. By considering boundary-related semi-algebraic sets, there are non-deterministic computations of holomorphic extensions of domains. After the guessing step, the verification is by the deterministic processes mentioned above. Built-in permutation invariance has considerable power, just as $`n!`$ rapidly dominates $`2^n`$ for large $`n`$. ## 7 Uniformity of Computation Uniformity in the direction of universal computation has been discussed in , in different contexts, including numerical analysis. We do have certain types of uniformity here. First we note that the computation is independent of any particular form of Lagrangian or dynamics, and is uniform in $`n`$, qualifying for a universal quantum machine over $`\mathbb{C}^{\infty }`$. The latter space is the infinite discrete union $`\bigcup _{n=1}^{\infty }\mathbb{C}^n`$. ### 7.1 Function Order Uniformity When the program runs for $`s=2`$, dynamic memory allocation is used through the operating system. Because $`n`$ can be input as a variable, only part of the whole memory-management cost lies outside the program. The program itself is independent of $`n=sm`$ and therefore is uniform in $`n`$, which is unbounded above. We can call this function order uniformity in $`n_{\infty }`$. 
### 7.2 Space-time Uniformity In addition, there is uniformity in the dimension $`s\ge 2`$ of space-time, in the following manner. Given a dimension $`s\ge 2`$ of space-time, looking at the semi-algebraic sets defining the primitive extended tube domains of holomorphy (hypersurface boundaries), and at function orders, there are three different classes of orders. These classes comprise a) lower order W functions, b) intermediate order W functions, and c) high order W functions . Extended tube domains for all high order W functions have the same complicacy. For a) we have $`m\le s+1`$, and for c), $`m>s(s-1)/2+2`$. The remaining cases lie in class b). For example, there is no class b) for $`s=2`$, the most complicated primitive domain being that of the 3-point function. If $`s=3`$, then $`m=5`$ is the only case in class b). When $`s=4`$, we have in class b) the cases $`m=6,7`$ and $`8`$. Since $`s\ge 2`$ is unbounded above, we can call this space-time dimension uniformity in $`s_{\infty }`$. ### 7.3 co-NHolo Uniformity The holomorphy envelopes $`H[D_m]`$ for different orders $`m`$ of Wightman functions are related in the following way. For $`0<r<m`$, and relative to $`H[D_m]`$, $$H[D_m]\subseteq \underset{\sigma \in \mathrm{P}_m}{\bigcup }\left\{H[\sigma D_{m-r}]\times (\sigma \mathbb{C}^{sr})\right\},$$ where $`\sigma `$ denotes permutations in $`\mathrm{P}_m`$, the permutation group of the $`m`$ points of the $`m`$-point W function. In the case of Schlicht domains (analogous to single-sheeted Riemann surfaces in $`\mathbb{C}`$), the $`\subseteq `$ sign means set-theoretic inclusion. For example, in $`s=2`$, the 4-point function cannot be continued beyond the 2-point function Riemann cuts nor the (permuted) 3-point function Källén-Wightman domains of holomorphy. This is a statement regarding analyticity that does not exist, and thus refers to the complements of the domains of holomorphy; hence the use of the prefix co-. 
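The order classification of Sec. 7.2 (classes a, b, c) is purely arithmetic and easy to tabulate. The sketch below (illustrative Python; the original automation was done in Prolog, as described in Sec. 4) reproduces the quoted examples for $`s=2,3,4`$:

```python
def wightman_class(s, m):
    """Classify the m-point W function in space-time dimension s >= 2:
    'a' (lower order)        if m <= s + 1,
    'c' (high order)         if m >  s(s-1)/2 + 2,
    'b' (intermediate order) otherwise."""
    if m <= s + 1:
        return 'a'
    if m > s * (s - 1) // 2 + 2:
        return 'c'
    return 'b'

def class_b_orders(s, m_max=50):
    """All intermediate-order m up to m_max for the given dimension s."""
    return [m for m in range(2, m_max) if wightman_class(s, m) == 'b']
```

For $`s=2`$ class b) is empty, for $`s=3`$ it contains only $`m=5`$, and for $`s=4`$ it contains $`m=6,7,8`$, exactly as stated above.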
Because computations of analytic extensions of domains are non-deterministic (hence the notation N), we can say that we have co-NHolo uniformity. ## 8 Conclusion We started with relativity and continuum quantum field theory. Arbitrary numbers of particles (optionally with spins) can be created or annihilated. Relying on a fruitful set of models, we have related what appeared to be different models of quantum and classical computation based on non-relativistic quantum mechanics and classical mechanics. Exact deterministic and non-deterministic computation over continuous domains appears naturally. Furthermore, there is uniformity in computation over arbitrarily high (unbounded above) order $`n`$ of $`W(n;z)`$ and arbitrarily high dimension $`s`$ of space-time. In the present context this can be called co-NHolo uniformity in $`n_{\infty }`$ and $`s_{\infty }`$. The novel methods of computation raise interesting issues of computability and complexity, and could possibly shed more light on quantum field theory itself.
Figure 1: Chiral order parameter 𝐵(𝑝̃_𝑘) obtained from Eq. (), as a function of (𝑇,𝜇): (𝑇_𝑐=158 MeV, 𝜇=0), (𝑇=0, 𝜇_𝑐=275 MeV). As described in the text, the phase boundary is fixed by the condition ℬ(𝑇,𝜇)=0. TEMPERATURE, CHEMICAL POTENTIAL AND THE $`\rho `$–MESON C.D. ROBERTS and S.M. SCHMIDT Physics Division, Argonne National Laboratory, Argonne IL 60439-4843, USA 1. Introduction. Models of QCD must confront nonperturbative phenomena such as confinement, dynamical chiral symmetry breaking (DCSB) and the formation of bound states. In addition, a unified approach should describe the deconfinement and chiral-symmetry-restoring phase transition exhibited by strongly-interacting matter under extreme conditions of temperature and density. Nonperturbative Dyson-Schwinger equation (DSE) models provide insight into a wide range of zero-temperature hadronic phenomena; e.g., non-hadronic electroweak interactions of light- and heavy-mesons , and diverse meson-meson and meson-nucleon form factors. This is the foundation for their application at nonzero-$`(T,\mu )`$ . Herein we describe the calculation of the deconfinement and chiral-symmetry-restoring phase boundary, and the medium dependence of $`\rho `$-meson properties. We also introduce an extension to describe the time evolution in the plasma of the quark’s scalar and vector self-energies, based on a Vlasov equation. 2. Dyson-Schwinger Equation at Nonzero-($`T,\mu `$). The dressed-quark DSE at nonzero-($`T,\mu `$) is $`S^{-1}(\stackrel{~}{p}_k)=i\stackrel{}{\gamma }\stackrel{}{p}A(\stackrel{~}{p}_k)+i\gamma _4\omega _{k_+}C(\stackrel{~}{p}_k)+B(\stackrel{~}{p}_k)=i\stackrel{}{\gamma }\stackrel{}{p}+i\gamma _4\omega _{k_+}+\mathrm{\Sigma }(\stackrel{~}{p}_k),`$ (1) where $`\stackrel{~}{p}_k=(\stackrel{}{p},\omega _{k_+})`$, $`\omega _{k_+}=\omega _k+i\mu `$, and $`\omega _k=(2k+1)\pi T`$ is the quark’s Matsubara frequency. 
The complex-valued scalar functions: $`A(\stackrel{}{p},\omega _{k_+})`$, $`B(\stackrel{}{p},\omega _{k_+})`$ and $`C(\stackrel{}{p},\omega _{k_+})`$, depend only on $`(|\stackrel{}{p}|^2,\omega _{k_+}^2)`$. With a given dressed-gluon propagator the solutions are determined by $`B(\stackrel{~}{p}_k)-m_0`$ $`=`$ $`{\displaystyle \frac{8}{3}}{\displaystyle \int \frac{d^4\stackrel{}{q}}{(2\pi )^4}D(\stackrel{~}{p}_k-\stackrel{~}{q}_k)\frac{B(\stackrel{~}{q}_k)}{\stackrel{~}{q}_k^2C^2(\stackrel{~}{q}_k)+B^2(\stackrel{~}{q}_k)}},`$ (2) $`(C(\stackrel{~}{p}_k)-1)\stackrel{~}{p}_k^2`$ $`=`$ $`{\displaystyle \frac{4}{3}}{\displaystyle \int \frac{d^4\stackrel{}{q}}{(2\pi )^4}D(\stackrel{~}{p}_k-\stackrel{~}{q}_k)\frac{\stackrel{~}{p}_k\cdot \stackrel{~}{q}_k\,C(\stackrel{~}{q}_k)}{\stackrel{~}{q}_k^2C^2(\stackrel{~}{q}_k)+B^2(\stackrel{~}{q}_k)}},`$ (3) where herein we consider only models with $`A(p)=C(p)`$. It is the interplay between the functions $`B`$ and $`C`$ that leads to confinement, realised via the absence of a Lehmann representation for the dressed-quark $`2`$-point function. $`B\ne 0`$ in the chiral limit signals DCSB. To provide an illustrative solution of the quark DSE we employ an Ansatz for the scalar function characterising the dressed-gluon propagator: $$D(p)=3\pi ^2\frac{\eta ^2}{T}\delta _{k0}\delta ^3(p).$$ (4) The infrared enhancement in this choice ensures quark confinement and DCSB. As an infrared dominant model, Eq. (4) does not represent the interaction well away from $`(\stackrel{~}{p}_k-\stackrel{~}{q}_k)^2\simeq 0`$, and that introduces some model-dependent artefacts. However, they are easily identified and the model yields qualitatively reliable results, preserving features of more sophisticated studies. Using Eq. (4) in Eqs. (2-3) we obtain a system with two phases. The Nambu-Goldstone (NG) phase is characterised by dynamically broken chiral symmetry and confined dressed-quarks.
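With the infrared-dominant Ansatz Eq. (4) the loop integrals collapse and Eqs. (2)-(3) become algebraic at each $`\stackrel{~}{p}_k`$. A minimal numerical sketch of the two branches, assuming the standard Munczek-Nemirovsky closed form of the chiral-limit solution (the normalization of $`\eta `$ is convention-dependent, and this piecewise form is our assumption, not code from the original):

```python
import cmath

def gap_solution(p2, eta):
    """Chiral-limit solution of the algebraic gap equations that follow from
    Eqs. (2)-(3) with the delta-function kernel Eq. (4); p2 is the squared
    momentum p~_k^2 (complex at nonzero (T, mu))."""
    if (eta**2 - 4 * p2).real > 0:
        # Nambu-Goldstone branch: dynamical chiral symmetry breaking, B != 0
        return cmath.sqrt(eta**2 - 4 * p2), 2.0
    # Wigner-Weyl branch: B = 0, chirally symmetric
    return 0.0, 0.5 * (1.0 + cmath.sqrt(1.0 + 8.0 * eta**2 / p2))

eta = 1.0
B0, C0 = gap_solution(0.0 + 0.0j, eta)
assert abs(B0 - eta) < 1e-12 and C0 == 2.0   # B(0) = eta: the order parameter
Bb, _ = gap_solution(0.25 * eta**2 + 0.0j, eta)
assert abs(Bb) < 1e-12                        # B vanishes at p~^2 = eta^2/4
```

Both branches coexist as solutions; which one is realized at given $`(T,\mu )`$ is decided by the relative pressure of the two phases.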
The alternative Wigner-Weyl (WW) solution describes a phase of the model with restored chiral symmetry and deconfinement. In studying the phase transition one must consider the relative stability of the confined and deconfined phases, which is measured by the $`(T,\mu )`$-dependent pressure difference between the two distinct phases: $`ℬ(T,\mu )=P[S_{\mathrm{NG}}]-P[S_{\mathrm{WW}}]`$. $`ℬ(T,\mu )>0`$ indicates the stability of the confined (Nambu-Goldstone) phase and hence the phase boundary is specified by that curve in the $`(T,\mu )`$-plane for which $`ℬ(T,\mu )=0`$. The critical line is depicted in Fig. 1. The phase transition is first order for any non-zero $`\mu `$ and second order for $`\mu =0`$. The model has mean field critical exponents, which is a feature of the rainbow-ladder truncation . The study of thermodynamic properties shows that it is essential to keep the scalar and vector self energies as well as their momentum dependence . Mesons are quark-antiquark bound states and their masses are obtained by solving the Bethe-Salpeter equation . Here we focus on the vector channel; employing Eq. (4), the eigenvalue equation for the bound state mass is $$\frac{\eta ^2}{2}\mathrm{𝖱𝖾}\left\{\sigma _B(\omega _{0+}^2-\frac{1}{4}M_{\rho \pm }^2)^2\left[\pm \omega _{0+}^2-\frac{1}{4}M_{\rho \pm }^2\right]-\sigma _C(\omega _{0+}^2-\frac{1}{4}M_{\rho \pm }^2)^2\right\}=1,$$ (5) where $`\sigma _{B,C}(\stackrel{~}{p}_k^2)=\{B(\stackrel{~}{p}_k^2),C(\stackrel{~}{p}_k^2)\}/[\stackrel{~}{p}_k^2C^2(\stackrel{~}{p}_k^2)+B^2(\stackrel{~}{p}_k^2)].`$ The equation for the $`\rho `$-meson’s transverse component is obtained with $`[-\omega _{0+}^2-{\displaystyle \frac{1}{4}}M_\rho ^2]`$ in Eq. (5) and in the chiral limit yields $`M_\rho ^2={\displaystyle \frac{1}{2}}\eta ^2`$, independent of $`T`$ and $`\mu `$. This is the $`T=0=\mu `$ result of Ref. .
Even for nonzero current-quark mass, $`M_\rho `$ changes by less than 1% as $`T`$ and $`\mu `$ are increased from zero toward their critical values. Its insensitivity is consistent with the absence of a constant mass-shift in the transverse polarization tensor for a gauge-boson. For the longitudinal component one obtains in the chiral limit: $$M_{\rho +}^2=\frac{1}{2}\eta ^2-4(\mu ^2-\pi ^2T^2).$$ (6) The results for the medium-dependence of the $`\rho `$ meson are summarised in Fig. 2. As in the case of the dressed-quark mass function, the response to increasing $`T`$ and $`\mu `$ is anti-correlated: the $`\rho `$-meson mass decreases with increasing chemical potential and increases with temperature. This anti-correlation leads to an edge along which the $`T`$ and $`\mu `$ effects compensate and the mass remains unchanged up to the transition point. 3. Nonequilibrium Application. The time evolution of the self energies can be studied using Vlasov’s equation $$\partial _tf(p,x)+\partial _pE(p,x)\partial _xf(p,x)-\partial _xE(p,x)\partial _pf(p,x)=0.$$ (7) Solving this equation is complicated for two reasons. (i) The energy is a functional of the scalar and vector self energies, which in general are nonzero and momentum-dependent. While the scalar self energy is small in the plasma phase due to chiral symmetry restoration, the vector self energy remains significant . (ii) The absence of a Lehmann representation for the dressed-quark propagator in the confined phase precludes the existence of a single particle distribution function, $`f`$, in this phase. Therefore a conventional kinetic theory is only reasonable in the deconfined phase. This situation is adequately represented in DSE models; e.g., Refs. describe a quark’s $`(T,\mu )`$-evolution from a confined to a propagating mode, and Ref. makes use of this evolved quasiparticle behaviour in calculating the plasma’s thermodynamic properties.
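The anti-correlation expressed by Eq. (6) is easy to check numerically; a small sketch (the value of $`\eta `$ below is an arbitrary illustrative scale, not a fitted one):

```python
import math

def m_rho_long_sq(eta, T, mu):
    # Eq. (6): chiral-limit longitudinal rho-meson mass squared;
    # the transverse mass stays at eta^2/2 for all (T, mu).
    return 0.5 * eta**2 - 4.0 * (mu**2 - math.pi**2 * T**2)

eta = 1.0
base = 0.5 * eta**2
assert m_rho_long_sq(eta, 0.0, 0.1) < base        # decreases with mu
assert m_rho_long_sq(eta, 0.1, 0.0) > base        # increases with T
# along the edge mu = pi T the two effects compensate exactly
assert abs(m_rho_long_sq(eta, 0.05, math.pi * 0.05) - base) < 1e-12
```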
Therefore, approaching the phase boundary from the plasma domain we anticipate a discontinuous disappearance of the quark distribution function, $`f`$. As an illustration we employ an instantaneous interaction of the form $$D(p)=3\pi ^2\eta \delta ^3(p)$$ (8) to represent dynamics in the deconfined phase. In this case the Matsubara sum in Eq. (2) can be performed analytically and we obtain: $`\mathrm{\Sigma }^B(p,x)`$ $`=`$ $`\eta {\displaystyle \frac{\mathrm{\Sigma }^B(p,x)+m_0}{(1+\mathrm{\Sigma }^C(p,x))E^{}(p,x)}}[1-2f(p,x)],`$ (9) $`\mathrm{\Sigma }^C(p,x)`$ $`=`$ $`\eta {\displaystyle \frac{1}{(1+\mathrm{\Sigma }^C(p,x))E^{}(p,x)}}[1-2f(p,x)],`$ (10) with the quasi-particle energy: $`E^{}(p,x)=\sqrt{(\stackrel{}{p}^{})^2+M^{}(p,x)^2}`$, the renormalized momentum: $`\stackrel{}{p}^{}=\stackrel{}{p}(1+\mathrm{\Sigma }^C(p,x))`$, and mass: $`M^{}(p,x)=m_0+\mathrm{\Sigma }^B(p,x)`$. As a test of whether this simplification still yields the necessary and qualitatively important features, such as $`C\ne 1,B\ne m_0`$, in Fig. 3 we compare the momentum dependence obtained in the models specified by Eqs. (4,8) in the vicinity of $`T_c`$. Both functions are well reproduced and hence Eq. (8) can be used to model the persistence of non-perturbative effects in the deconfined domain. The solution of Eqs. (7,9-10) provides the time-evolution of the quark self-energy and distribution function. As in the case of thermal equilibrium, the vector self energy plays an important role. Neglecting $`\mathrm{\Sigma }^C`$ and the momentum dependence of $`\mathrm{\Sigma }^B`$, a simpler equation is obtained $$\partial _tf(p,x)+\frac{\stackrel{}{p}}{E(p,x)}\partial _xf(p,x)-\frac{m(x)}{E(p,x)}\partial _xm(x)\partial _pf(p,x)=0,$$ (11) with $`m(x)`$ the quark mass obtained as a solution of the gap equation in models without confinement. This equation has been widely studied; e.g. Refs. . However, we anticipate that the numerical solution of Eq.
(7) will yield significantly different results because of the presence and persistence of the vector self energy in the deconfined domain. Acknowledgments. This work was supported by the US Department of Energy, Nuclear Physics Division, under contract no. W-31-109-ENG-38, the National Science Foundation under grant nos. INT-9603385 and PHY97-22429, and benefited from the resources of the National Energy Research Scientific Computing Center. S.M.S. acknowledges financial support from the A. v. Humboldt foundation.
# Energy level dynamics in systems with weakly multifractal eigenstates: equivalence to 1D correlated fermions at low temperatures

## Abstract

It is shown that the parametric spectral statistics in the critical random matrix ensemble with multifractal eigenvector statistics are identical to the statistics of correlated 1D fermions at finite temperatures. For weak multifractality the effective temperature of the fictitious 1D fermions is $`T_{eff}\propto (1-d_n)/n\ll 1`$, where $`d_n`$ is the fractal dimension found from the $`n`$-th moment of the inverse participation ratio. For large energy and parameter separations the fictitious fermions are described by the Luttinger liquid model which follows from the Calogero-Sutherland model. The low-temperature asymptotic form of the two-point equal-parameter spectral correlation function is found for all energy separations and its relevance for the low temperature equal-time density correlations in the Calogero-Sutherland model is conjectured. The spectral statistics in complex quantum systems are signatures of the underlying dynamics of the corresponding classical counterpart. The spectral statistics in chaotic and disordered systems in the limit of infinite dimensionless conductance $`g`$ are described by the classical random matrix theory of Wigner and Dyson (WD statistics). The WD statistics possess a remarkable property of universality: they depend only on the symmetry class with respect to the time-reversal transformation $`𝒯`$. The three symmetry classes correspond to the lack of $`𝒯`$-invariance (the unitary ensemble, $`\beta =2`$); the $`𝒯`$-invariant systems with $`𝒯^2=1`$ (the orthogonal ensemble, $`\beta =1`$), and the $`𝒯`$-invariant systems with $`𝒯^2=-1`$ (the symplectic ensemble, $`\beta =4`$), respectively.
The physical ground behind this universality is the structureless nature of the eigenfunctions in the ergodic regime, which implies the invariance of the eigenfunction statistics with respect to a unitary transformation of the basis. In real disordered metals the eigenfunctions are not basis-invariant. The basis-preference reaches its extreme form for strongly impure metals, where all eigenfunctions are localized in the coordinate space but delocalized in the momentum space. In this case the spectral statistics is Poissonian in the thermodynamic (TD) limit. For low-dimensional systems $`d=1,2`$, where all states are localized in the TD limit, one can observe the smooth crossover from the WD to the Poisson spectral statistics as a function of the parameter $`\xi /L`$, where $`\xi `$ is the localization radius and $`L`$ is the system size. The dependence of the spectral correlation functions on the energy variable $`s=E/\mathrm{\Delta }`$ ($`\mathrm{\Delta }`$ is the mean level separation) is non-universal for finite $`L/\xi `$ but all of them tend to the Poisson limit as $`L/\xi \to \infty `$. In systems of higher dimensionality $`d>2`$ the situation is different because of the presence of the Anderson localization transition at a critical disorder $`W=W_c`$. In the metal phase $`W<W_c`$ the dimensionless conductance $`g(L)\to \infty `$ as $`L\to \infty `$ and one obtains the WD spectral statistics in the TD limit. In the insulator state $`g(L)\to 0`$ at $`L\to \infty `$ and the limiting statistics is Poissonian. However, there is a fixed point $`W=W_c`$ at which the spectral statistics are nearly independent of $`L`$. Thus at the critical point there exist universal spectral statistics which are neither WD nor Poissonian but rather a hybrid of both.
However, the universality of the critical spectral statistics (CSS) is somewhat limited, since it depends not only on the Dyson symmetry parameter $`\beta `$ but also on the critical value of the dimensionless conductance $`g^{*}`$ which in turn depends on the dimensionality $`d`$ of the system . Thus for each universality class there is a family of critical spectral statistics parametrized by the critical dimensionless conductance $`g^{*}`$. The very existence of the subject of critical level statistics imposes a constraint on the possible values of the localization length exponent $`\nu =[d\beta (g)/d\mathrm{ln}g]^{-1}|_{g=g^{*}}`$, where $`\beta (g)\equiv d\mathrm{ln}g/d\mathrm{ln}L`$ is the scaling function. Indeed, for the spectral statistics to be meaningful the width of the critical energy window $`\delta E`$ must be much larger than the mean level separation $`\mathrm{\Delta }\propto 1/L^d`$. The quantity $`\delta E`$ is defined as the distance from the mobility edge $`E=E_c`$ at which the localization or correlation radius $`\xi (\delta E)\propto |\delta E|^{-\nu }`$ is equal to the system size $`L`$. The number of critical eigenstates $`𝒩=\delta E/\mathrm{\Delta }`$ is proportional to $`L^{d-\frac{1}{\nu }}`$. For $`\nu d>1`$ this number tends to infinity in the TD limit $`L\to \infty `$, even though the width of the critical energy window shrinks to zero. This necessary condition for the existence of the critical statistics is secured by the famous Harris criterion $`\nu d>2`$. However, the critical exponent $`\nu `$ enters not only the necessary condition for the CSS but also the correlation functions of the density of energy levels $`\rho (E)`$.
It has been shown in that there is a power-law tail in the critical two-level correlation function (TLCF) $`R(\omega )=<\rho (E)\rho (E+\omega )>`$ that arises because of the finite-size correction to the dimensionless conductance $`g(L_\omega )/g^{*}-1\propto (L_\omega /L)^{1/\nu }=s^{-\frac{1}{\nu d}}`$, where $`L_\omega \propto \omega ^{-1/d}\ll L`$ is the length scale set by the energy difference $`\omega =E-E^{\prime }\equiv s\mathrm{\Delta }`$ between two levels. The sign of this tail depends on whether the critical energy levels are on the metal ($`E<E_c`$) or on the insulator ($`E>E_c`$) side of the mobility edge, in the same way as for the 2D systems in the weak delocalization ($`\beta =4`$) or weak localization ($`\beta =1`$) regimes . Clearly, this power-law tail does not reflect properties of the critical eigenstates but rather the behavior $`\xi (|E-E_c|)`$ of the size of the space region where eigenstates show the critical space correlations. In order to study the relationship between the properties of the critical eigenstates and the CSS in its pure form one should consider a system with a continuous line of critical points where $`\beta (g)=0`$. This case formally corresponds to $`\nu =\infty `$ and the finite-size effects are absent. Another complication which makes the definition of the CSS ambiguous is the fact that, being independent of the system size, the spectral correlation functions depend on the boundary conditions and topology of the system. Therefore we will consider a system of the torus topology, where the CSS takes its ‘canonical’ form. In particular, the TLCF decays exponentially in this case. As has already been mentioned, the universality of the WD statistics is based on the ergodic, basis-invariant statistics of eigenfunctions which one may encounter in different physical situations. The characteristic feature of all critical quantum systems is the multifractal statistics of the critical eigenfunctions .
The simplest two-point correlations of the critical wave functions can be obtained from the renormalization group result for $`l<r<\xi <L`$: $$<|\mathrm{\Psi }_E(0)|^{2n}|\mathrm{\Psi }_E(r)|^{2n}>=p<|\mathrm{\Psi }_E(0)|^{2n}>^2(\xi /r)^{\alpha _n},$$ (1) where $`l`$ is the short-distance cut-off of the order of the elastic scattering length, $`p=(\xi /L)^d`$ is the probability for a reference point to be inside a localization region. The exponent $`\alpha _n=2n(d_n-d_{2n})+d_{2n}-2d_n+d`$ is expressed through the fractal dimensions $`d_n`$ defined by the $`L`$-dependence of the moments of the inverse participation ratio: $$L^d<|\mathrm{\Psi }_E(r)|^{2n}>\propto L^{-d_n(n-1)},\quad n\ge 2.$$ (2) At the critical point $`p=1`$ and the correlation radius $`\xi \to \infty `$; in Eq.(1) it must be replaced by the sample size $`L`$. Spectral statistics are related with eigenfunction correlations at different energies $`E`$ and $`E^{\prime }=E+\omega `$. If $`\omega \gg \mathrm{\Delta }`$ and thus $`L_\omega \ll L`$ one should substitute $`L_\omega `$ for $`L`$ in the $`r`$-dependent term of Eq.(1). In this way the multifractality exponents $`d_n`$ enter the spectral $`\omega `$-dependences . For weak multifractality one can expect the fractal dimensions to be a linear function of $`n`$ which is controlled by only one parameter $`a`$: $$d_n/d=1-an,\quad a\propto 1/g^{*}\ll 1.$$ (3) This relationship holds approximately for the Anderson transition in $`2+ϵ`$ dimensions and is fulfilled exactly for the critical eigenstate of the Dirac equation in the random vector-potential . Thus for critical quantum systems with weak multifractality it is natural to expect that the spectral statistics depend on only one system-specific parameter, the critical conductance $`g^{*}`$. In view of the expected universality, it is useful to find a simple one-parameter random matrix ensemble with multifractal eigenfunction statistics which would play the same role for the critical systems as the classical RMT does for the ergodic systems.
As a matter of fact there are a few candidates . However, here we focus only on one of them, since for this ensemble the multifractality of eigenstates has been rigorously proven . Consider a Hermitean $`N\times N`$ matrix with the real ($`\beta =1`$), complex ($`\beta =2`$) or quaternionic ($`\beta =4`$) entries $`H_{ij}`$ $`(i\le j)`$ which are independent Gaussian random numbers with zero mean and the variance: $$<|H_{ij}|^2>=\frac{1}{1+\frac{(i-j)^2}{B^2}}\times \{\begin{array}{cc}1/\beta ,& i=j,\\ 1/2,& i\ne j\end{array}\equiv J(i-j)$$ (4) This model has been shown to be critical both for large and for small values of $`B`$ with the fractal dimensionality $`d_2`$ at the center of the spectral band being: $$d_2=\{\begin{array}{cc}1-\frac{1}{\pi \beta B},& B\gg 1,\\ 2B,& B\ll 1\end{array}$$ (5) Thus the 1D system with long-range hopping described by the matrix Hamiltonian Eq.(4) possesses the line of critical points $`B\in (0,\infty )`$, the fractal dimensionality $`d_2`$ changing from 1 to 0 with decreasing $`B`$. One can extend this matrix 1D model by closing it into a ring and applying a ‘flux’ $`\phi \in [0,1]`$. In this case $$H_{ij}(\phi )=H_{ij}+H_{ij}^{(1)}e^{2\pi i\phi sgn(i-j)}$$ (6) is a sum of two independent Gaussian random numbers with the variance of $`H_{ij}^{(1)}`$ given by: $$<|H_{ij}^{(1)}|^2>=J(N-|i-j|).$$ (7) For large values of $`B`$, which correspond to weak multifractality, one can derive an effective field theory – the supersymmetric nonlinear sigma-model – which describes the spectral and eigenfunction correlations of the critical random matrix ensemble Eq.(4): $$F[𝐐]=\frac{g^{*}}{16}\underset{i,j=1}{\overset{N}{\sum }}Str[𝐐_iU_{|i-j|}𝐐_j]+\frac{i\pi s}{4N}\underset{i=1}{\overset{N}{\sum }}Str[\sigma _z𝐐_i],$$ (8) where $`𝐐`$ is the supermatrix with $`𝐐_i^2=\mathrm{𝟏}`$ and $$g^{*}=4\beta B.$$ (9) The symmetry with respect to time reversal is encoded in the symmetry of $`𝐐_i`$ in exactly the same way as for the diffusive sigma-model .
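A hedged sketch of the ensemble Eq. (4) for the unitary class ($`\beta =2`$); the sampling below is one standard way to realize the stated variances and is not taken from the original:

```python
import numpy as np

def critical_prbm(N, B, rng):
    """Hermitian matrix from the critical power-law banded ensemble, Eq. (4),
    unitary class (beta = 2): off-diagonal variance (1/2)/(1 + (i-j)^2/B^2),
    diagonal variance 1/beta = 1/2."""
    H = np.zeros((N, N), dtype=complex)
    for i in range(N):
        H[i, i] = rng.normal(0.0, np.sqrt(0.5))
        for j in range(i + 1, N):
            var = 0.5 / (1.0 + (i - j) ** 2 / B**2)   # total variance of H_ij
            s = np.sqrt(var / 2.0)                    # per real/imag component
            H[i, j] = rng.normal(0.0, s) + 1j * rng.normal(0.0, s)
            H[j, i] = H[i, j].conjugate()
    return H

rng = np.random.default_rng(1)
H = critical_prbm(64, 1.0, rng)
assert np.allclose(H, H.conj().T)      # Hermitian by construction
levels = np.linalg.eigvalsh(H)         # level statistics interpolate between
assert levels.shape == (64,)           # WD (large B) and Poisson (small B)
```

Varying $`B`$ sweeps along the line of critical points; multifractal exponents such as $`d_2`$ can then be estimated from the eigenvector moments, Eq. (2).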
The only difference is the long-range kernel $`U_{|i-j|}`$ with the Fourier-transform $`\stackrel{~}{U}_k=|k|`$. For a torus geometry $`k=2\pi m/N`$, where $`m`$ is an arbitrary integer. One can explicitly resolve the constraint $`𝐐^2=\mathrm{𝟏}`$ by switching to the integration over the ‘angles’ $`W`$. Then the Gaussian fluctuations of ‘angles’ recover the spectrum of ‘quasi-diffusion’ modes: $$\epsilon _m=g^{*}|m|,\quad m=0,\pm 1,\pm 2,\dots $$ (10) The problem of spectral statistics can be generalized to include the dependence of the spectrum on the flux $`\phi `$ introduced by Eq.(6). One can define the parametric two-level correlation function $`R(s,\phi )=<\rho (E,0)\rho (E+s\mathrm{\Delta },\phi )>`$ which can be treated in the framework of the same nonlinear sigma-model but with the phase-dependent $`\epsilon _m`$: $$\epsilon _m(\phi )=g^{*}|m-\phi |.$$ (11) Following the work by Andreev and Altshuler Ref. we introduce the spectral determinant: $$D^{-1}(s,\phi )=\underset{m\ne 0}{\prod }\frac{\epsilon _m^2(\phi )+s^2}{\epsilon _m^2(0)}$$ (12) Then it can be shown in the same way as in Ref. that the parametric TLCF for $`s\gg 1`$ and $`g^{*}\phi \gg 1`$ can be expressed in terms of the spectral determinant as follows: $`R^u(s,\phi )`$ $`=`$ $`-{\displaystyle \frac{1}{4\pi ^2}}{\displaystyle \frac{\partial ^2G(s,\phi )}{\partial s^2}}+\mathrm{cos}(2\pi s)e^{G(s,\phi )}`$ (13) $`R^o(s,\phi )`$ $`=`$ $`-{\displaystyle \frac{1}{2\pi ^2}}{\displaystyle \frac{\partial ^2G(s,\phi )}{\partial s^2}}+2\mathrm{cos}(2\pi s)e^{2G(s,\phi )}`$ (14) $`R^s(s,\phi )`$ $`=`$ $`-{\displaystyle \frac{1}{8\pi ^2}}{\displaystyle \frac{\partial ^2G(s,\phi )}{\partial s^2}}+{\displaystyle \frac{\pi }{\sqrt{8}}}\mathrm{cos}(2\pi s)e^{G(s,\phi )/2}+`$ (15) $`+`$ $`{\displaystyle \frac{1}{8}}\mathrm{cos}(4\pi s)e^{2G(s,\phi )},`$ (16) where $`G(s,\phi )=G(s,\phi +1)`$ is a function periodic in $`\phi `$: $$e^{G(s,\phi )}=\frac{D(s,\phi )}{2\pi ^2(s^2+\epsilon _0^2(\phi ))}.$$ (17) Eqs.(13-16) coincide with the corresponding formulae in Ref.
for the unitary, orthogonal and symplectic ensembles after some misprints are corrected as in Ref. and $`s\to 2s`$ for the symplectic ensemble to take account of the Kramers degeneracy. The only difference is in the form of the spectral determinant Eq.(12) due to the specific spectrum $`\epsilon _m(\phi )`$ of the quasi-diffusion modes. Using the functional representation Eq.(8) one can also find the leading term in the deviation from the WD statistics at $`s\ll g^{*}`$ and $`\phi =0`$ using the results of Ref.: $$\delta R(s)=\frac{1}{2\pi ^2\beta }\left(\underset{m\ne 0}{\sum }\frac{1}{\epsilon _m^2}\right)\frac{d^2}{ds^2}\left[s^2R_{WD}(s)\right],$$ (18) where $`R_{WD}(s)`$ is the Wigner-Dyson TLCF. A remarkable property of the function $`G(s,\phi )`$ for the critical sigma-model Eq.(8) on a torus is that it can be decomposed into the sum $`G(s,\phi )=F(z)+F(\overline{z})`$ of analytic functions $`F(z)`$ and $`F(\overline{z})`$, where $`\tau =g^{*}\phi `$, $`z=\tau +is`$, $`\overline{z}=\tau -is`$ and $$F(z)=-\mathrm{ln}[\sqrt{2}g^{*}\mathrm{sin}(\pi z/g^{*})].$$ (19) Eq.(19) results from a straightforward evaluation of the product in Eq.(12).
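This evaluation can be checked numerically: truncate the product in Eq. (12) and compare the logarithm of Eq. (17) with $`F(z)+F(\overline{z})`$, taking $`F(z)=-\mathrm{ln}[\sqrt{2}g^{*}\mathrm{sin}(\pi z/g^{*})]`$ (the sign in front of the logarithm is our reading of Eq. (19); the parameter values below are illustrative):

```python
import cmath
import math

def log_spectral_det(s, phi, gstar, M=2000):
    """ln D(s, phi) from Eq. (12), truncating the product over the
    quasi-diffusion modes eps_m(phi) = g*|m - phi| at |m| <= M."""
    total = 0.0
    for m in range(1, M + 1):
        for mm in (m, -m):        # pair +m and -m so the tail converges fast
            eps0 = gstar * abs(mm)
            eps = gstar * abs(mm - phi)
            total += math.log(eps0**2 / (eps**2 + s**2))
    return total

def G_closed(s, phi, gstar):
    """G = F(z) + F(zbar), z = g* phi + i s, F(z) = -ln[sqrt(2) g* sin(pi z/g*)]."""
    z = complex(gstar * phi, s)
    def F(u):
        return -cmath.log(math.sqrt(2.0) * gstar * cmath.sin(cmath.pi * u / gstar))
    return (F(z) + F(z.conjugate())).real

s, phi, gstar = 0.6, 0.2, 3.0
G_from_det = (log_spectral_det(s, phi, gstar)
              - math.log(2.0 * math.pi**2 * (s**2 + (gstar * phi) ** 2)))
assert abs(G_from_det - G_closed(s, phi, gstar)) < 1e-3
```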
On the other hand, it can be easily verified that $`G(s,\phi )`$ given by Eq.(19) is proportional to the Green’s function of the free-boson field $`\mathrm{\Phi }(z)`$ on a torus in the $`(1+1)`$ $`z`$-space: $`0<\mathrm{Re}z<g^{*}`$, $`-\infty <\mathrm{Im}z<+\infty `$: $$<\mathrm{\Phi }(s,g^{*}\phi )\mathrm{\Phi }(0,0)>_S-<\mathrm{\Phi }(0,0)\mathrm{\Phi }(0,0)>_S=KG(s,\phi ),$$ (20) where $`<\dots >_S`$ denotes the functional average with the free-boson action: $$S[\mathrm{\Phi }]=\frac{1}{8\pi K}\int _0^{g^{*}}d\tau \int _{-\infty }^{+\infty }ds\left[(\partial _s\mathrm{\Phi })^2+(\partial _\tau \mathrm{\Phi })^2\right].$$ (21) Now we are in a position to make a crucial step and suggest that for the critical RMT described by Eq.(4), the Andreev-Altshuler equations (13-16) are nothing but density-density correlations in the Luttinger liquid of fictitious 1D fermions at a finite temperature $`T=1/g^{*}`$: $$R(s,\phi )=\overline{n}^{-2}<n(s,\tau )n(0,0)>_S-1,\quad \tau =g^{*}\phi .$$ (22) Indeed, the density operator $`n(s,\tau )`$ ($`s`$ is the space and $`\tau \in (0,1/T)`$ the imaginary-time coordinate) for 1D interacting fermions with the Fermi-momentum $`k_F=\pi `$ can be expressed through the free boson field $`\mathrm{\Phi }(s,\tau )`$ as follows : $`n(s,\tau )`$ $`=`$ $`{\displaystyle \frac{1}{2\pi }}\partial _s\mathrm{\Phi }(s,\tau )+A_K\mathrm{cos}[2\pi s+\mathrm{\Phi }(s,\tau )]+`$ (23) $`+`$ $`B_K\mathrm{cos}[4\pi s+2\mathrm{\Phi }(s,\tau )].`$ (24) The constants $`A_K`$ and $`B_K`$ are independent of ‘temperature’ $`1/g^{*}`$ but depend on the interaction constant $`K`$. They can be uniquely determined from the WD limit $`g^{*}\to \infty `$.
Using Eqs.(22,23) and the well known result for the Gaussian average of the exponent: $$<e^{ip\mathrm{\Phi }(s,\tau )}e^{-ip\mathrm{\Phi }(0,0)}>=e^{Kp^2[G(s,\tau )-G(0,0)]},$$ (25) one can verify that for the choice: $$K=\frac{2}{\beta },\quad \beta =1,2,4$$ (26) the Andreev-Altshuler formulae Eqs.(13-16) are reproduced exactly for the orthogonal, unitary and symplectic ensembles, respectively. We now recall the known result that the parametric spectral statistics in the WD limit $`g^{*}=\infty `$ is equivalent to the Tomonaga-Luttinger liquid at zero temperature. It follows directly from Ref. and the equivalence of the Calogero-Sutherland model and the Tomonaga-Luttinger liquid for large distances $`|z|\gg 1`$. The critical random matrix ensemble Eqs.(4,6,7) and the critical 1D sigma-model Eq.(8) turn out to be the simplest generalization of the WD theory that retains the Tomonaga-Luttinger liquid analogy, extended to finite ‘temperatures’ $`T=1/g^{*}`$ which are related to the spectrum of fractal dimensions Eq.(3). The Wigner-Dyson two-level statistics for all three symmetry classes can be expressed through the single kernel $`K(s)=\mathrm{sin}(\pi s)/(\pi s)`$ in the following way: $`R^u(s)=-K^2(s),`$ (27) $`R^o(s)=-K^2(s)-{\displaystyle \frac{dK(s)}{ds}}{\displaystyle \int _s^{\infty }}K(x)dx,`$ (28) $`R^s(s)=-K^2(s)+{\displaystyle \frac{dK(s)}{ds}}{\displaystyle \int _0^s}K(x)dx.`$ (29) It turns out that such a representation is also valid for the critical TLCF at $`g^{*}\gg 1`$ if the kernel is replaced by: $$K(s)=\frac{T}{\alpha }\frac{\mathrm{sin}(\pi \alpha s)}{\mathrm{sinh}(\pi Ts)},\quad T=\frac{1}{g^{*}}.$$ (30) where $`\alpha =1`$ for the orthogonal and the unitary ensemble and $`\alpha =2`$ for the symplectic ensemble. The form of the kernel Eq.(30) can be guessed from the well known density correlation function for the case of free fermions in one dimension at a finite temperature, which corresponds to the unitary ensemble .
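A small numerical sketch of the kernel Eq. (30) and the unitary TLCF, with $`\alpha =1`$ and the sign convention $`R^u(s)=-K^2(s)`$ so that the WD kernel is recovered as $`T=1/g^{*}\to 0`$ (parameter values are illustrative):

```python
import math

def K(s, T, alpha=1.0):
    """Finite-temperature kernel, Eq. (30); reduces to sin(pi s)/(pi s) as T -> 0."""
    if s == 0.0:
        return 1.0   # limiting value of the ratio at s = 0
    return (T / alpha) * math.sin(math.pi * alpha * s) / math.sinh(math.pi * T * s)

def R_unitary(s, T):
    return -K(s, T) ** 2          # unitary-class TLCF, Eq. (27)

# T -> 0 recovers the Wigner-Dyson kernel ...
assert abs(K(0.7, 1e-8) - math.sin(0.7 * math.pi) / (0.7 * math.pi)) < 1e-6
# ... while at finite T = 1/g* the kernel decays exponentially, not as a power law
assert abs(K(10.5, 0.5)) < 1.0 / (10.5 * math.pi)
assert R_unitary(0.0, 0.5) == -1.0
```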
In order to prove this statement we note that for $`s\gg 1`$ and $`g^{*}\gg 1`$ Eqs.(27-29) with the kernel Eq.(30) give the same leading terms as Eqs.(13-16) with $`G(s,0)`$ given by Eq.(19). For $`s\ll g^{*}`$ Eqs.(27-29) give corrections to the WD statistics that coincide with Eq.(18). Thus the representation Eqs.(27-29) with the kernel Eq.(30) gives the correct asymptotic expressions for both $`s\gg 1`$ and $`s\ll g^{*}`$. At $`g^{*}\gg 1`$ these regions have a parametrically large overlap, so that Eqs.(27-29) are valid for all values of $`s`$. Given that for all $`s`$ at $`T=0`$ and for large energy separations $`s\gg 1`$ at $`T\ll 1`$ expressions Eqs.(27-29) correspond to equal-time correlations in the Calogero-Sutherland model, one can expect Eqs.(27-29) with the kernel Eq.(30) to describe the low-temperature equal-time correlations in the Calogero-Sutherland model for all $`s`$ and the interaction constant $`K=2/\beta `$. One can use the kernel Eq.(30) to compute the spacing distribution function $`P(s,g^{*})`$ which can be compared with the corresponding distribution function $`P_c(s)`$ obtained by the numerical diagonalization of the three-dimensional Anderson model at the critical point. Such a comparison is done in Ref. for $`\beta =1,2,4`$. It turns out that identifying the parameter $`g^{*}`$ from the fitting of $`P(s,g^{*})\propto e^{-\kappa (g^{*})s}`$ to the far exponential tail of $`P_c(s)\propto e^{-\kappa s}`$ one reproduces the entire distribution function $`P_c(s)`$ extremely well. A. M. T. acknowledges the kind hospitality of the Abdus Salam ICTP, where this work was performed.
# Finite-genus solutions for Hirota’s bilinear difference equation.

## Abstract

The finite-genus solutions for Hirota’s bilinear difference equation are constructed using the Fay’s identities for the $`\theta `$-functions of compact Riemann surfaces. In the present work I want to consider once more the question of constructing the finite-genus solutions for the famous Hirota’s bilinear difference equation (HBDE) $$\tau (k-1,l,m)\tau (k+1,l,m)+\tau (k,l-1,m)\tau (k,l+1,m)+\tau (k,l,m-1)\tau (k,l,m+1)=0$$ (1) which has been solved in using the so-called algebraic-geometrical approach. This method, which is the most powerful method for deriving the quasi-periodic solutions (QPS) and which has been developed for almost all known integrable systems, exploits some rather sophisticated pieces of the theory of functions of complex variables and is based on some theorems determining the number of functions with a prescribed structure of singularities on the Riemann surfaces (see for review). However, in some cases the QPS can be found with less effort, in a more straightforward way, using the fact that the finite-genus QPS (and namely they are the subject of this note) of all integrable equations possess a similar and rather simple structure: up to some phases they are ’meromorphic’ combinations of the $`\theta `$-functions associated with compact Riemann surfaces (the situation resembles the pure soliton case, where solutions are rational functions of exponentials). Thus, to construct these solutions we only have to determine some constant parameters. This, as in the pure soliton case, can be done directly, using the well-known properties of the $`\theta `$-functions of compact Riemann surfaces. Such an approach was developed in for the Ablowitz-Ladik hierarchy, where the finite-genus solutions were ’extracted’ from the so-called Fay’s formulae .
The fact that the HBDE is closely related to the Fay’s identities is not new and was mentioned, e.g., in (see Remark 2.7), but contrary to this work I will use these identities as a starting point and will show how one can derive from them, by rather short and very simple calculations (which to my knowledge have not been presented explicitly in the literature), a wide family of solutions for the HBDE. Consider a compact Riemann surface $`X`$ of genus $`g`$. One can choose a set of closed contours (cycles) $`\{a_i,b_i\}_{i=1,\dots ,g}`$ with the intersection indices $$a_i\circ a_j=b_i\circ b_j=0,\quad a_i\circ b_j=\delta _{ij},\quad i,j=1,\dots ,g$$ (2) and find $`g`$ independent holomorphic differentials which satisfy the normalization conditions $$\oint _{a_i}\omega _k=\delta _{ik}$$ (3) The matrix of the $`b`$-periods, $$\mathrm{\Omega }_{ik}=\oint _{b_i}\omega _k$$ (4) determines the so-called period lattice, $`L_\mathrm{\Omega }=\left\{𝒎+\mathrm{\Omega }𝒏,𝒎,𝒏\in ℤ^g\right\}`$, the Jacobian of this surface $`\mathrm{Jac}(X)=ℂ^g/L_\mathrm{\Omega }`$ (a $`2g`$-torus) and the Abel mapping $`X\to \mathrm{Jac}(X)`$, $$𝓐(P)=\int _{P_0}^P𝝎$$ (5) where $`𝝎`$ is the $`g`$-vector of the 1-forms, $`𝝎=(\omega _1,\dots ,\omega _g)^T`$ and $`P_0`$ is some fixed point of $`X`$. A central object of the theory of compact Riemann surfaces is the $`\theta `$-function, $`\theta (𝜻)=\theta (𝜻,\mathrm{\Omega })`$, $$\theta \left(𝜻\right)=\underset{𝒏\in ℤ^g}{\sum }\mathrm{exp}\left\{\pi i(𝒏,\mathrm{\Omega }𝒏)+2\pi i(𝒏,𝜻)\right\}$$ (6) where $`(𝝃,𝜼)`$ stands for $`\underset{i=1}{\overset{g}{\sum }}\xi _i\eta _i`$, which is a quasiperiodic function on $`ℂ^g`$: $`\theta \left(𝜻+𝒏\right)`$ $`=`$ $`\theta \left(𝜻\right)`$ (7) $`\theta \left(𝜻+\mathrm{\Omega }𝒏\right)`$ $`=`$ $`\mathrm{exp}\left\{-\pi i(𝒏,\mathrm{\Omega }𝒏)-2\pi i(𝒏,𝜻)\right\}\theta \left(𝜻\right)`$ (8) for any $`𝒏\in ℤ^g`$.
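The sum in Eq. (6) converges rapidly whenever $`\mathrm{\Omega }`$ has positive-definite imaginary part, so the quasi-periodicity relations (7)-(8) can be checked numerically; a genus-one sketch (the truncation order and the sample values of $`\mathrm{\Omega }`$ and $`𝜻`$ are our own choices):

```python
import cmath

def theta(z, Omega, N=40):
    """Genus-1 theta function of Eq. (6), truncating the sum at |n| <= N;
    requires Im(Omega) > 0 for convergence."""
    return sum(cmath.exp(cmath.pi * 1j * n * n * Omega + 2j * cmath.pi * n * z)
               for n in range(-N, N + 1))

Omega = 1.0j
z = 0.3 + 0.1j
# Eq. (7): invariance under the lattice shift z -> z + 1
assert abs(theta(z + 1, Omega) - theta(z, Omega)) < 1e-10
# Eq. (8): quasi-periodicity under z -> z + Omega
lhs = theta(z + Omega, Omega)
rhs = cmath.exp(-cmath.pi * 1j * Omega - 2j * cmath.pi * z) * theta(z, Omega)
assert abs(lhs - rhs) < 1e-10
```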
The famous Fay’s trisecant formula can be written as $$\underset{i=1}{\overset{3}{\sum }}a_i\theta \left(𝜻+𝜼_i\right)\theta \left(𝜻-𝜼_i\right)=0$$ (9) where $`2𝜼_1`$ $`=`$ $`𝓐(P_1)+𝓐(P_2)+𝓐(P_3)-𝓐(P_4),`$ (10) $`2𝜼_2`$ $`=`$ $`𝓐(P_1)-𝓐(P_2)+𝓐(P_3)-𝓐(P_4),`$ (11) $`2𝜼_3`$ $`=`$ $`𝓐(P_1)+𝓐(P_2)-𝓐(P_3)-𝓐(P_4).`$ (12) Here $`P_1,\dots ,P_4`$ are arbitrary points of $`X`$, and the constants $`a_i`$ are given by $`a_1`$ $`=`$ $`e(P_4,P_1)e(P_2,P_3),`$ (13) $`a_2`$ $`=`$ $`e(P_4,P_2)e(P_3,P_1),`$ (14) $`a_3`$ $`=`$ $`e(P_4,P_3)e(P_1,P_2).`$ (15) The skew-symmetric function $`e(P,Q)`$, $`e(P,Q)=-e(Q,P)`$, is closely related to the prime form and is given by $$e(P,Q)=\theta [𝜹^{},𝜹^{\prime \prime }]\left(𝓐(Q)-𝓐(P)\right)$$ (16) where $`\theta [𝜶,𝜷]\left(𝜻\right)`$ is the so-called $`\theta `$-function with characteristics, $$\theta [𝜶,𝜷]\left(𝜻\right)=\mathrm{exp}\left\{\pi i(𝜶,\mathrm{\Omega }𝜶)+2\pi i(𝜶,𝜻+𝜷)\right\}\theta \left(𝜻+\mathrm{\Omega }𝜶+𝜷\right),$$ (17) and $`(𝜹^{},𝜹^{\prime \prime })\in \frac{1}{2}ℤ^{2g}/ℤ^{2g}`$ is a non-singular odd characteristic, $$\theta [𝜹^{},𝜹^{\prime \prime }]\left(\mathrm{𝟎}\right)=0,\quad \mathrm{grad}_𝜻\theta [𝜹^{},𝜹^{\prime \prime }]\left(\mathrm{𝟎}\right)\ne \mathrm{𝟎}$$ (18) Now it is very easy to establish relations between (9) and the HBDE (1). To do this one has first to introduce the discrete variables by $$\mathrm{\Theta }(k,l,m)=\theta \left(𝜻+k𝜼_1+l𝜼_2+m𝜼_3\right)$$ (19) The Fay’s identity (9) can now be rewritten as $`a_1\mathrm{\Theta }(k-1,l,m)\mathrm{\Theta }(k+1,l,m)+a_2\mathrm{\Theta }(k,l-1,m)\mathrm{\Theta }(k,l+1,m)`$ (20) $`+a_3\mathrm{\Theta }(k,l,m-1)\mathrm{\Theta }(k,l,m+1)`$ $`=`$ $`0`$ (21) from which it follows that the quantity $$\tau (k,l,m)=a_1^{k^2/2}a_2^{l^2/2}a_3^{m^2/2}\mathrm{\Theta }(k,l,m)$$ (22) satisfies the HBDE (1).
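To see why the Gaussian prefactor in Eq. (22) removes the constants $`a_i`$, note that $`((k-1)^2+(k+1)^2)/2=k^2+1`$, so for the first term

$$\tau (k-1,l,m)\tau (k+1,l,m)=a_1^{k^2+1}a_2^{l^2}a_3^{m^2}\mathrm{\Theta }(k-1,l,m)\mathrm{\Theta }(k+1,l,m),$$

and analogously for the $`l`$ and $`m`$ terms. Summing the three products, the common factor $`a_1^{k^2}a_2^{l^2}a_3^{m^2}`$ can be divided out, and what remains is exactly the left-hand side of Eq. (20), which vanishes by the Fay identity; hence $`\tau `$ satisfies Eq. (1).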
Thus the last formula, $$\tau (k,l,m)=a_1^{k^2/2}a_2^{l^2/2}a_3^{m^2/2}\theta \left(𝜻+k𝜼_1+l𝜼_2+m𝜼_3\right)$$ (23) where $`\theta `$ is the $`\theta `$-function of some compact Riemann surface $`X`$ and the constants $`a_i`$ and $`\eta _i`$ depend on four points (paths) on this surface, determines a family of finite-genus solutions of the HBDE. Now I want to discuss the solutions obtained above. First of all it should be noted that these solutions are ’finite-genus’ but not quasiperiodic. At first glance, this seems strange, because finite-genus solutions usually appear naturally when one solves quasiperiodic problems. For example, in $`1+1`$-dimensional discrete systems, such as the Toda chain, the Ablowitz-Ladik equations, etc., quasiperiodicity leads to a polynomial dependence of the scattering matrix of the auxiliary problem on the spectral parameter. These polynomials determine some hyperelliptic curve (the spectral curve) of finite genus, and the quasiperiodic solutions are built up of the $`\theta `$-functions corresponding to the latter. Thus, in some sense, in discrete systems quasiperiodicity implies the ’finite-genus’ property. In other words, quasiperiodic solutions are finite-genus. The converse, however, is not necessarily true. Fay's identities do not imply that the integrals $`𝓐(P)`$ are in any way related to the periods $`𝝎`$. Formula (9) describes, so to say, ’local’ properties of the $`\theta `$-functions, and all of the above considerations were local, without reference to any boundary conditions (quasiperiodicity). Another point I would like to discuss here is the comparison of the approach of this note with the algebro-geometric one. The ideology of the latter is to operate on the Riemann surface: the key step in calculating the Baker-Akhiezer function, the central object of the algebro-geometric method, is to satisfy the condition that it be a single-valued function of the point of the Riemann surface. 
Since we did not introduce the Baker-Akhiezer function and did not study its analytical properties, the question of how our solutions depend on the points $`P_i\in X`$ (or, to be more precise, on the integration paths from $`P_0`$ to $`P_i`$) is not as crucial as in the algebro-geometric approach and is in some sense a question of the parametrization of constants. One can restrict the points $`P_i`$, $`i=1,2,3`$, to some neighborhood of the point $`P_0`$ and rewrite the Abel integrals in terms of local coordinates (with some polynomial representing the Riemann surface). This makes it possible to avoid mentioning the Riemann surface at all and to reformulate all results in terms of integrals over the complex plane. As to the ’global’ (or ’homotopical’) effects, which arise when we add to the paths $`(P_0,P_i)`$ some integer cycles ($`\sum _{k=1}^gm_ka_k+n_kb_k`$, $`m_k,n_k\in \mathbb{Z}`$), it should be noted that, if we consider the Abel integral as a mapping $`X\to \mathbb{C}^g`$, then such deformations of the contours change $`𝓐(P)`$ as $`𝓐(P)\to 𝓐(P)+𝜸`$, $`𝜸\in L_\mathrm{\Omega }`$. This results, first, in adding some half-period to the argument of the $`\theta `$-function in (23), $$𝜻+\sum _{i=1}^3k_i𝜼_i\to 𝜻+\sum _{i=1}^3k_i𝜼_i+\frac{1}{2}\sum _{i=1}^3k_i𝜸_𝒊,𝜸_𝒊\in L_\mathrm{\Omega }$$ (24) and, second, in altering the constants $`a_i`$, $`a_i\to a_i\mathrm{exp}(f_i)`$. Thus, we come to the point where one can try to apply the theory of Bäcklund transformations for the HBDE to describe the transformations of $`\theta `$-functions due to half-period shifts (and vice versa). This is an interesting problem, which deserves a separate study. To conclude, I would like to note the following. In principle, the direct approach based on Fay's identities can be used to derive finite-genus solutions not only for the HBDE but for almost all known integrable systems (some examples can be found in the book ). 
However, in contrast to the case of the HBDE, where all ’calculations’ take only two lines (formulae (19) and (22) above), in the case of partial differential equations such as, e.g., the KP equation the corresponding calculations become rather cumbersome. For example, to solve the KP equation one has to expand the Fay identities up to third order in some small parameter. On the other hand, it is a widely known fact that almost all known integrable equations can be derived from the HBDE. Hence one can ’skip’ the Fay identities and use the solutions (23) as a starting point. In it was shown how to obtain some finite-genus solutions for the nonlinear Schrödinger and KP equations using the corresponding solutions of the Ablowitz-Ladik hierarchy, which can be viewed as a ’pre-continuous’ version of the HBDE (in the Ablowitz-Ladik hierarchy, two of the three discrete coordinates of the HBDE are represented as Miwa shifts of two infinite sets of continuous variables).
# Induced instability for boson-fermion mixed condensate of alkali atoms due to attractive boson-fermion interaction

## A Lifetime of the Collective Tunneling Effect in the Metastable Region

To estimate the lifetime of metastable mixed condensates, we consider the collective tunneling effect with the collective variable $`R`$ (the boson radius) and its effective energy $`E(R)`$ as a collective potential. The effective energy $`E(R)`$ is obtained from (14); for the metastable condensates, a typical shape of it is line (c) in Fig. 1. We approximate this $`E(R)`$ with the linear-plus-harmonic-oscillator type potential of eq. (16) (Fig. 3). $`M`$ is the inertial mass for the collective variable $`R`$. The meanings of the other parameters ($`F`$, $`G`$, $`\mathrm{\Omega }`$, $`R_{eq}`$, $`R_t`$) can be read off in Fig. 3, and they are fixed so that the potential $`V(R)`$ reproduces the $`E(R)`$ obtained numerically from (14). This approximation should be sufficient for an order estimation of the lifetime. For the kinetic term, we take $$T=-\frac{\hbar ^2}{2M}\frac{d^2}{dR^2}.$$ (A1) In $`V(R)`$, the metastable state $`\psi _M(R)`$ before tunneling is approximately given by the ground-state wave function in the harmonic-oscillator potential $`V_2(R)`$: $$\psi _M(R)=\frac{1}{\sqrt{\pi ^{\frac{1}{2}}a_{HO}}}\mathrm{exp}\left[-\frac{(R-R_{eq})^2}{2a_{HO}^2}\right],$$ (A2) where $`a_{HO}=\sqrt{\hbar /M\mathrm{\Omega }}`$ is the harmonic-oscillator length. This state has the zero-point energy $`\hbar \mathrm{\Omega }/2`$ measured from $`V_0`$. The continuum state $`\psi _D(R)`$ after the tunneling decay is obtained from the wave function in the linear potential $`V_1(R)`$, and its Schrödinger equation becomes $$\left[\frac{\hbar ^2}{2M}\frac{d^2}{dR^2}+E-(FR-G)\right]\psi _D(R)=0.$$ (A3) To solve the above equation, we use the WKB approximation. 
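The metastable state in (A2) is the standard harmonic-oscillator ground state; as a quick sanity check, a quadrature over the assumed Gaussian form $`\mathrm{exp}[-(R-R_{eq})^2/(2a_{HO}^2)]`$ confirms unit normalization (illustrative parameter values, $`\hbar =1`$):

```python
import math

a_ho, R_eq = 0.7, 2.0     # illustrative oscillator length and potential minimum

def psi_M(R):
    # ground state of V_2: normalized Gaussian of width a_ho centred at R_eq
    return math.exp(-(R - R_eq)**2/(2*a_ho**2))/math.sqrt(math.sqrt(math.pi)*a_ho)

# trapezoidal quadrature of |psi_M|^2 over +/- 12 widths
n = 20000
lo, hi = R_eq - 12*a_ho, R_eq + 12*a_ho
h = (hi - lo)/n
norm = h*(0.5*psi_M(lo)**2 + 0.5*psi_M(hi)**2
          + sum(psi_M(lo + i*h)**2 for i in range(1, n)))
print(norm)   # ~ 1.0
```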
It should be noticed that, because the state energy is lower than the potential maximum ($`E<FR_t-G`$), a turning point exists at $`R_E=(E+G)/F`$. Thus, the WKB connection formula should be used at $`R_E`$ for the continuum state $`\psi _D`$: $`\psi _D(R)`$ $`=`$ $`{\displaystyle \frac{A}{2\sqrt{K(R)}}}\mathrm{sin}\left[{\displaystyle \frac{2}{3}}\left({\displaystyle \frac{2MF}{\hbar ^2}}\right)^{\frac{1}{2}}(R_E-R)^{\frac{3}{2}}+{\displaystyle \frac{\pi }{4}}\right],\text{for }0\le R<R_E`$ (A4) $`\psi _D(R)`$ $`=`$ $`{\displaystyle \frac{A}{2\sqrt{K(R)}}}\mathrm{exp}\left[-{\displaystyle \frac{2}{3}}\left({\displaystyle \frac{2MF}{\hbar ^2}}\right)^{\frac{1}{2}}(R-R_E)^{\frac{3}{2}}\right],R_E<R`$ (A5) where $`A=\sqrt{2M/(\pi \hbar ^2)}`$ and $`\hbar K(R)=\sqrt{2MF|R_E-R|}`$. Let us consider the tunneling decay rate $`\mathrm{\Gamma }`$ from $`\psi _M(R)`$ to $`\psi _D(R)`$. Using Fermi's golden rule, it becomes $$\mathrm{\Gamma }=\frac{2\pi }{\hbar }\left|\int 𝑑R\psi _D^{\ast }(R)[V(R)-V_2(R)]\psi _M(R)\right|_{E=E_M}^2,$$ (A6) where $`E_M=V_0+\hbar \mathrm{\Omega }/2`$ is the energy of $`\psi _M(R)`$. Because $`V(R)-V_2(R)=0`$ for $`R_t<R`$ and $`\psi _M(0),\psi _M^{\prime }(0)\ll 1`$, eq. (A6) becomes $$\mathrm{\Gamma }=\frac{2\pi }{\hbar }\left(\frac{\hbar ^2}{2M}\right)^2\left[\frac{d\psi _D(R)}{dR}\psi _M(R)-\psi _D(R)\frac{d\psi _M(R)}{dR}\right]_{R=R_t}^2.$$ (A7) Using eqs. (A2, A4, A5), the tunneling rate $`\mathrm{\Gamma }`$ is shown to have the form $`\mathrm{\Gamma }=D\mathrm{exp}(-W)`$: $`D`$ can be interpreted as the staying probability per unit time and $`\mathrm{exp}(-W)`$ is the transmission coefficient. 
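The exponent in the transmission coefficient is dominated by the standard WKB barrier integral $`2\int _{R_E}^{R_t}K(R)\,dR`$, which for the linear potential $`V_1(R)=FR-G`$ has a closed form proportional to the square root of the barrier height above $`E`$ (cf. the first term of (A9)). A quick numeric check with hypothetical dimensionless parameters ($`\hbar =1`$):

```python
import math

# hypothetical dimensionless parameters (hbar = 1)
M, F, G = 1.0, 2.0, 0.5      # inertia mass; slope and offset of V1(R) = F*R - G
E = 1.3                      # state energy, below the barrier top
R_E = (E + G)/F              # turning point, V1(R_E) = E
R_t = 2.5                    # matching radius, R_t > R_E
dV = F*R_t - G - E           # barrier height above E

# WKB exponent: 2 * integral of sqrt(2M(V1(R)-E)) from R_E to R_t (midpoint rule)
n = 200000
h = (R_t - R_E)/n
W_num = 2*h*sum(math.sqrt(2*M*(F*(R_E + (i + 0.5)*h) - G - E)) for i in range(n))

# closed form for the linear-barrier WKB exponent
W_exact = (4.0/3.0)*math.sqrt(2*M)*math.sqrt(dV)*(R_t - R_E)
print(W_num, W_exact)
```

The two values agree to the accuracy of the quadrature, confirming the $`(4/3)\sqrt{2M/\hbar ^2}`$ prefactor of the barrier term.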
The explicit formulae for $`D`$ and $`W`$ are $`D`$ $`=`$ $`{\displaystyle \frac{4\sqrt{2}\mathrm{\Omega }}{\sqrt{\pi }}}\sqrt{{\displaystyle \frac{\mathrm{\Delta }V}{\hbar \mathrm{\Omega }}}-{\displaystyle \frac{1}{2}}}`$ (A8) $`W`$ $`=`$ $`{\displaystyle \frac{4}{3}}\sqrt{{\displaystyle \frac{2M}{\hbar ^2}}}\left(\mathrm{\Delta }V-{\displaystyle \frac{\hbar \mathrm{\Omega }}{2}}\right)^{\frac{1}{2}}(R_t-R_E)+{\displaystyle \frac{(R_0-R_t)^2}{a_{HO}^2}},`$ (A9) where $`\mathrm{\Delta }V=V_1(R_t)-V_0`$ is the barrier height. To express the parameters $`F`$ and $`G`$ in $`V_1`$ in terms of $`\mathrm{\Delta }V`$, we have used $$V_1(R_t)=FR_t-G,V_1(R_E)=FR_E-G=E=V_0+\frac{1}{2}\hbar \mathrm{\Omega }.$$ (A10) The tunneling lifetime is defined as the inverse of the decay rate: $`\tau _{ct}=\mathrm{\Gamma }^{-1}`$.

## B Critical Condition for the Unstable Region

In this appendix, we rederive the Mølmer scaling law, which gives the critical condition for the unstable region . We assume the Thomas-Fermi approximation for both the boson and fermion density distributions. In this approximation, the density distributions are given as solutions of the Thomas-Fermi equations: $`\stackrel{~}{g}n_b(x)+x^2+\stackrel{~}{h}n_f(x)=\stackrel{~}{\mu }_b,`$ (B1) $`[6\pi ^2n_f(x)]^{\frac{2}{3}}+x^2+\stackrel{~}{h}n_b(x)=\stackrel{~}{\mu }_f,`$ (B2) where $`n_b(x)`$ is the boson density distribution scaled by the harmonic-oscillator length $`\xi =(\hbar /m\omega )^{1/2}`$ ($`n_b(x)=|\mathrm{\Phi }(x)|^2\xi ^3`$). The scaled chemical potentials, $`\stackrel{~}{\mu }_{b,f}`$, and coupling constants, $`\stackrel{~}{g}`$ and $`\stackrel{~}{h}`$, have been defined in the main body of this paper. 
Eliminating $`n_b(x)`$ from (B2) with the help of (B1), we obtain the equation $`F[n_f(x)]=G[n_f(x)]`$ that determines the fermion density $`n_f(x)`$, where $$F[n_f(x)]\equiv [6\pi ^2n_f(x)]^{\frac{2}{3}},G[n_f(x)]\equiv \stackrel{~}{\mu }_f+\frac{|\stackrel{~}{h}|}{\stackrel{~}{g}}\stackrel{~}{\mu }_b-\left(1+\frac{|\stackrel{~}{h}|}{\stackrel{~}{g}}\right)x^2+\frac{|\stackrel{~}{h}|^2}{\stackrel{~}{g}}n_f(x).$$ (B3) We concentrate on the central density $`n_f(0)`$. In order to determine the critical value of $`n_f(0)`$, two conditions should be satisfied: $$F[n_f(0)]=G[n_f(0)],\frac{\delta F}{\delta n_f}[n_f(0)]=\frac{\delta G}{\delta n_f}[n_f(0)].$$ (B4) Evaluating the equations in (B4) with (B3), we obtain the critical condition for the unstable region: $$\frac{|\stackrel{~}{h}|^2}{\stackrel{~}{g}}=\frac{4\pi ^2}{\sqrt{3}}\left(\stackrel{~}{\mu }_f+\frac{|\stackrel{~}{h}|}{\stackrel{~}{g}}\stackrel{~}{\mu }_b\right)^{-1/2}.$$ (B5) To evaluate the right-hand side of (B5), we should use the relations between $`\stackrel{~}{\mu }_{b,f}`$ and the boson/fermion particle numbers $`N_{b,f}`$, which are obtained by solving the Thomas-Fermi equations (B1, B2). Here we assume $`\stackrel{~}{\mu }_b=0`$ and $`\stackrel{~}{\mu }_f=2(6N_f)^{1/3}`$ ($`\stackrel{~}{\mu }_f`$ for a free fermion system) in (B5). Consequently, we obtain the Mølmer scaling relation: $$\frac{|\stackrel{~}{h}|^2}{\stackrel{~}{g}}=\frac{4\pi ^2}{6^{1/6}\sqrt{6}}N_f^{-1/6}\simeq 12.0N_f^{-1/6}.$$ (B6) It should be noted that the coefficient 12.0 in (B6) is close to the value 13.8 obtained by Mølmer by numerically evaluating the Thomas-Fermi equations.
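The tangency conditions $`F=G`$, $`\delta F/\delta n_f=\delta G/\delta n_f`$ can also be solved numerically and compared with the closed-form critical slope $`c=|\stackrel{~}{h}|^2/\stackrel{~}{g}=4\pi ^2/\sqrt{3\stackrel{~}{\mu }_f}`$ at $`\stackrel{~}{\mu }_b=0`$; the same check confirms the coefficient $`4\pi ^2/(6^{1/6}\sqrt{6})\approx 12.0`$ and the $`N_f^{-1/6}`$ scaling. A sketch, with an illustrative $`N_f=10^6`$:

```python
import math

def critical_c(A):
    """Slope c at which the line A + c*n is tangent to F(n) = (6*pi^2*n)^(2/3)."""
    def gap(c):
        # n from the tangency condition F'(n) = c, then residual of F(n) = A + c*n
        n = ((2.0/3.0)*(6*math.pi**2)**(2.0/3.0)/c)**3
        return (6*math.pi**2*n)**(2.0/3.0) - (A + c*n)
    lo, hi = 1e-6, 1e6                    # gap(c) decreases monotonically in c
    for _ in range(200):                  # geometric bisection
        mid = math.sqrt(lo*hi)
        if gap(mid) > 0:
            lo = mid
        else:
            hi = mid
    return lo

N_f = 1.0e6                               # illustrative fermion number
mu_f = 2.0*(6.0*N_f)**(1.0/3.0)           # scaled free-fermion chemical potential
c_num = critical_c(mu_f)                  # tangency solved numerically
c_closed = 4*math.pi**2/math.sqrt(3.0*mu_f)                    # closed form
c_scaling = 4*math.pi**2/(6**(1.0/6.0)*math.sqrt(6.0))*N_f**(-1.0/6.0)
print(c_num, c_closed, c_scaling)         # all three agree, ~ 12.0 * N_f**(-1/6)
```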
# TeV Emission by Ultra-High Energy Cosmic Rays in Nearby, Dormant AGNs

The curvature radiation produced by particles accelerating near the event horizon of a spinning supermassive black hole, threaded by externally supported magnetic field lines, is considered. It is shown that light nuclei suffer catastrophic curvature losses that limit the maximum energy they can attain to values well below that imposed by the maximum potential difference induced by the black hole dynamo, unless the curvature radius of magnetic field lines largely exceeds the gravitational radius. It is further shown that the dominant fraction of the rotational energy extracted from the black hole is radiated in the TeV band. Given the observed flux of ultra-high energy cosmic rays, and the estimated number of nearby supermassive black holes, it is expected that if dormant AGNs are the sources of UHECRs, as proposed recently by Boldt & Ghosh, then they should also be detectable at TeV energies by present TeV experiments. Energy losses resulting from interactions with the cosmic microwave background limit the distance a cosmic ray (CR) of energy $`>10^{20}`$ eV can traverse to less than 50 Mpc . As a consequence, if the origin of the ultra-high energy cosmic rays (UHECRs) observed is associated with astrophysical objects, rather than decaying supermassive X particles as in the top-down scenario , then the sources of UHECRs must be close by. Plausible classes of UHECR sources have been discussed in the literature, including active galactic nuclei (AGNs) and gamma-ray bursts . Recently, it has been proposed that the UHECRs observed at Earth may originate from dormant AGNs in the local Universe having masses in excess of $`10^9M_{\odot }`$. The idea is that these objects, although underluminous relative to active quasars, contain spinning supermassive black holes that, instead of producing luminous radio jets as seen in blazars, serve as accelerators of a small number of particles to ultra-high energies. 
The acceleration mechanism invoked is associated with a BZ-type process ; specifically, individual particles are accelerated by the potential difference induced by the rotation of a black hole that is threaded by externally produced magnetic field lines, during episodes when breakdown of the vacuum does not occur. In the absence of energy losses, the maximum energy a particle can gain by this mechanism is limited to the voltage drop involved, and is proportional to the product of the black hole mass and the strength of the magnetic field. A recent analysis indicates that massive dark objects (MDOs) are present in the centers of nearby galaxies, some of which have masses in excess of $`10^{10}M_{\odot }`$. A plausible interpretation is that the MDOs are supermassive black holes, and may represent quasar remnants or dormant AGNs. This interpretation is further supported by the fact that a correlation between the black hole mass and bulge luminosity, similar to that found for the sample of galaxies studied in ref. , has been found for a sample of bright quasars, using a completely different method. By applying their model to a list of objects from ref. , of which 14 have compact central masses larger than $`10^9M_{\odot }`$, Boldt & Ghosh estimate that protons can be accelerated up to energies in excess of a few times $`10^{20}`$ eV. As emphasized by them, in order to account for the measured flux of UHECRs, an average power of only $`10^{42}`$ erg s<sup>-1</sup> needs to be extracted from a black hole by the accelerated cosmic rays, corresponding to a mass loss rate of order $`10^{10}`$ g s<sup>-1</sup>, and a loss rate of electric charge that constitutes only a small fraction of the total effective current required to induce the potential difference across the gap. Acceleration to the maximum energy allowed is possible provided that radiative losses are sufficiently small. 
Boldt & Ghosh argued that proton energy losses during the acceleration phase due to pair production and photomeson production on ambient photons are unimportant by virtue of the low accretion luminosity anticipated in those objects. However, they have not discussed the radiation associated with the acceleration process itself. In the following, we consider the curvature radiation produced by the accelerated particles, and show that in the case of light nuclei it limits the maximum energy attainable to values below the full voltage, unless the average radius of curvature of the particle’s trajectory exceeds the gravitational radius by at least a factor of a few. We further show that the curvature radiation is emitted predominantly in the TeV band, and conclude that if dormant AGNs are indeed the sources of UHECRs, then they should also be detectable by present, ground-based experiments at TeV energies. Curvature radiation: The electric potential difference generated by a maximally rotating black hole of mass $`M=10^9M_9M_{\odot }`$, threaded by a magnetic field of strength $`B=10^4B_4`$ Gauss, is $$\mathrm{\Delta }V\simeq 4.4\times 10^{20}B_4M_9(h/R_g)^2\mathrm{volts},$$ (1) where $`h`$ is the gap height, and $`R_g=GM/c^2`$ is the gravitational radius. In the presence of a nonuniform magnetic field, particles accelerated by this potential difference will suffer energy losses through curvature radiation, even if initially they move along magnetic field lines. Since the gyroradius of a proton having energy $`ϵ`$, $`R_c=ϵ/eB`$, is smaller than the gravitational radius: $$R_c/R_g=(ϵ/e\mathrm{\Delta }V)e\mathrm{\Delta }Vc^2/(eGBM)\simeq (ϵ/e\mathrm{\Delta }V)<1,$$ (2) (the gyroradius of an ion of charge $`Z`$ having energy near the maximum imposed by the voltage drop will be larger by a factor of $`Z`$), we expect the average radius of curvature of a particle’s trajectory to be of order the curvature radius of magnetic field lines in the gap. 
(The curvature radii of different trajectories should span some range though, reflecting the different boundary conditions.) The computations of particles’ trajectories, even for relatively simple magnetic field topologies, are complicated by the fact that the gyroradius at the highest energies is comparable to the size of the hole, as can be seen from eq. (2), and are beyond the scope of this paper. In what follows, we denote by $`\rho `$ the average curvature radius of an accelerating ion, and assume that it is independent of the ion energy. The rate of energy loss through curvature radiation by a particle of energy $`ϵ=mc^2\gamma `$ can then be expressed as $$P=\frac{2}{3}\frac{e^2c\gamma ^4}{\rho ^2}.$$ (3) The energy change per unit length of an accelerating ion having charge $`Z`$ and mass $`m_i=\mu m_p`$ is given by $$dϵ/ds=eZ\mathrm{\Delta }V/h-P/c,$$ (4) yielding a maximum acceleration energy, $$ϵ_{max}=3\times 10^{19}\mu Z^{1/4}M_9^{1/2}B_4^{1/4}(\rho ^2h/R_g^3)^{1/4}\mathrm{eV},$$ (5) where eq. (3) has been used. Consequently, only a fraction $$\eta =0.1\mu M_9^{-1/2}(ZB_4)^{-3/4}(\rho /R_g)^{1/2}(h/R_g)^{-7/4};\eta \le 1,$$ (6) of the potential energy available will be released as UHECRs; the rest will be radiated in the form of curvature photons. For the most massive MDOs listed in table 2 of ref. ($`M_9>20`$) we obtain $`ϵ_{max}\simeq 1.5\times 10^{20}\mu (ZB_4\rho ^2h/R_g^3)^{1/4}`$ eV, and $`\eta \simeq 0.02\mu (ZB_4)^{-3/4}(\rho /R_g)^{1/2}(h/R_g)^{-7/4}`$. Thus $`B_4\rho ^2h/R_g^3\gtrsim 20`$ is required in order to accelerate a proton ($`Z=\mu =1`$) to energies $`3\times 10^{20}`$ eV in these systems, corresponding to $`\eta =10^{-3}(\rho /R_g)^2(h/R_g)^{-1}`$. The requirement on heavier nuclei is more relaxed. Estimates of the maximum value of the horizon-threading magnetic field strength yield $`B_4\lesssim 1`$ , and according to recent numerical simulations $`B_4`$ may be well below unity . 
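The coefficient in eq. (5) follows from balancing the two terms of eq. (4), i.e. $`eZ\mathrm{\Delta }V/h=(2/3)e^2\gamma ^4/\rho ^2`$. A rough Gaussian-units evaluation for a proton with $`B_4=M_9=1`$ and $`\rho =h=R_g`$ (standard constants, fiducial values only):

```python
import math

# CGS-Gaussian constants
e_esu  = 4.803e-10      # elementary charge [esu]
G      = 6.674e-8       # gravitational constant [cm^3 g^-1 s^-2]
c      = 2.998e10       # speed of light [cm/s]
Msun   = 1.989e33       # solar mass [g]
erg_eV = 1.602e-12      # erg per eV
mp_eV  = 938.3e6        # proton rest energy [eV]

M9 = B4 = Z = mu = 1.0
Rg = G*(M9*1e9*Msun)/c**2        # gravitational radius [cm]
rho = h = Rg                     # curvature radius and gap height

eDV = 4.4e20*B4*M9*(h/Rg)**2*erg_eV    # e * potential drop of eq. (1), in erg

# balance acceleration against curvature losses: eZ dV/h = (2/3) e^2 gamma^4 / rho^2
gamma = (3.0*Z*eDV*rho**2/(2.0*e_esu**2*h))**0.25
eps_max = mu*mp_eV*gamma         # maximum proton energy [eV]
print(f"eps_max = {eps_max:.2e} eV")   # ~ 3e19 eV, the coefficient in eq. (5)
```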
Consequently, acceleration of light nuclei to the highest energies measured by current experiments requires $`\rho `$ to be larger than $`R_g`$ by a factor of at least a few (assuming $`h\simeq R_g`$). The values of $`B_4`$ obtained by Boldt & Ghosh for the Magorian et al. sample , assuming equipartition between the magnetic field in the vicinity of the horizon and the matter infalling into the center, lie in the range between 0.1 and 1, given the estimated mass loss rate of the galaxy bulge. The spectrum produced by the curvature radiation of a single ion will peak at an energy $$ϵ_{\gamma max}=1.5\gamma ^3\hbar c/\rho =1.6\times 10^{-7}ϵ_{max}\mu ^{-1}(ZB_4)^{1/2}(h/R_g)^{1/2}=5M_9^{1/2}(ZB_4)^{3/4}(\rho ^2h^3/R_g^5)^{1/4}\mathrm{TeV},$$ (7) and is a power law $`I(ϵ_\gamma )\propto ϵ_\gamma ^{1/3}`$ below the cutoff. The overall spectrum of curvature photons would depend on the energy distribution of the accelerating particles, and is expected to be somewhat softer below the peak. For $`ϵ_{max}=3\times 10^{20}`$ eV and $`h\simeq R_g`$ we obtain from eq. (7), $`ϵ_{\gamma max}\simeq 50\mu ^{-1}(ZB_4)^{1/2}`$ TeV. The number of TeV photons per proton produced in the process is roughly $$n_\gamma \simeq eZ\mathrm{\Delta }V/ϵ_{\gamma max}=10^8M_9^{1/2}(ZB_4)^{1/4}(h/R_g)^{5/4}(\rho /R_g)^{-1/2}.$$ (8) The mean free path to pair creation of a photon moving at an angle $`\chi `$ to the magnetic field is $`l\simeq 95(B_4\mathrm{sin}\chi )^{-1}e^q`$, with $`q=1.3\times 10^3M_9^{-1/2}B_4^{-7/4}(\mathrm{sin}\chi )^{-1}(\rho ^2h^3/R_g^5)^{-1/4}`$ and, therefore, the radiated photons escape the system freely for the range of parameters considered here. However, when $`q`$ becomes sufficiently small (at $`q`$ of about 50 one photon per proton will be converted into an electron-positron pair), a pair cascade may be initiated with high enough probability to lead to a breakdown of the gap. 
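Continuing the same loss-balance estimate, taking $`\rho =100R_g`$ (so that $`ϵ_{max}\simeq 3\times 10^{20}`$ eV for a proton) places the peak of the curvature spectrum, $`ϵ_{\gamma max}=1.5\gamma ^3\hbar c/\rho `$, at several tens of TeV, consistent with the $`\sim 50`$ TeV quoted above (standard constants, fiducial values only):

```python
import math

# CGS-Gaussian constants
e_esu, Grav, c = 4.803e-10, 6.674e-8, 2.998e10
hbar, Msun = 1.055e-27, 1.989e33
erg_eV, mp_eV = 1.602e-12, 938.3e6

M9 = 1.0
Rg = Grav*(M9*1e9*Msun)/c**2
h, rho = Rg, 100.0*Rg            # large curvature radius, as required for protons

eDV = 4.4e20*M9*erg_eV           # e * potential drop for B4 = 1 and h = Rg, in erg
gamma = (3.0*eDV*rho**2/(2.0*e_esu**2*h))**0.25   # loss-limited Lorentz factor
eps_max = mp_eV*gamma                             # ~ 3e20 eV
eps_gmax = 1.5*gamma**3*hbar*c/rho/erg_eV         # peak photon energy [eV]
print(f"eps_max = {eps_max:.2e} eV, eps_gmax = {eps_gmax/1e12:.0f} TeV")
```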
Observational consequences: As shown recently , the observed CR spectrum above $`10^{19}`$ eV, as measured by the AGASA and Fly’s Eye experiments, can be accounted for by a homogeneous cosmological distribution of CR sources with power-law spectra and an energy production rate of $`1.5\times 10^{37}`$ ergs Mpc<sup>-3</sup> s<sup>-1</sup>. Given this energy production rate, and denoting by $`n_{CR}`$ the density of objects contributing to the observed CR flux in this energy range, the average power released in the form of UHECRs by a single source can be expressed as $$L_{CR}=2\times 10^{42}\left(\frac{n_{CR}}{10^{-4}\mathrm{Mpc}^{-3}}\right)^{-1}\mathrm{erg}\mathrm{s}^{-1}.$$ (9) Employing eq. (9), one finds that the average TeV flux emitted by a single CR source at a distance of $`D=50D_{50}`$ Mpc is $$F_\gamma \simeq 10^{-12}\left(\frac{n_{CR}}{10^{-4}\mathrm{Mpc}^{-3}}\right)^{-1}\eta ^{-1}D_{50}^{-2}\mathrm{erg}\mathrm{cm}^{-2}\mathrm{s}^{-1},$$ (10) where $`\eta <1`$ is the UHECR production efficiency defined in eq. (6). As seen from eq. (6), $`\eta `$ should be of order unity if the particles accelerated by the black hole dynamo are predominantly heavy nuclei. There is, however, evidence that the CR composition changes from heavy nuclei below the ankle (at an energy of $`5\times 10^{18}`$ eV) to light nuclei above it . Assuming a protonic CR composition above the ankle, we obtain values of $`\eta `$ between 10<sup>-1</sup> and 10<sup>-3</sup> for the range of parameters considered above. The density of black holes above a certain mass can be estimated using the correlation between bulge luminosity and MDO mass found by Magorian et al. , and an appropriate luminosity function of nearby galaxies to correct for the incompleteness of their sample. However, as seen from eq. (5), the maximum energy a particle can attain depends on a combination of parameters and not solely on the mass. 
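The flux estimate (10) is the isotropic inverse-square law applied to the curvature-photon luminosity $`\sim L_{CR}(1-\eta )/\eta `$; a sketch of the arithmetic with fiducial values ($`\eta =0.1`$ is an assumption), which agrees with eq. (10) to within an order of magnitude:

```python
import math

Mpc_cm = 3.086e24
L_CR = 2.0e42                 # erg/s per source, eq. (9), for n_CR = 1e-4 Mpc^-3
eta = 0.1                     # assumed UHECR production efficiency
D = 50.0*Mpc_cm               # source distance, D_50 = 1

L_TeV = L_CR*(1.0 - eta)/eta          # power emerging as curvature photons
F_gamma = L_TeV/(4.0*math.pi*D**2)    # erg cm^-2 s^-1
print(f"F_gamma ~ {F_gamma:.1e} erg cm^-2 s^-1")
```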
Therefore, the black hole mass above which particles can accelerate to the required energies is uncertain, and since the MDOs having relevant masses lie on the bright end of the luminosity function, this uncertainty in mass translates into a large uncertainty in $`n_{CR}`$. For a reasonable choice of parameters we anticipate that $`M_9`$ of at least a few would be required in order to account for the UHECR energies observed. Using the luminosity function measured by Efstathiou et al. , we estimate that the density of MDOs having masses $`M_9>1`$ is of order a few times $`10^{-4}`$ Mpc<sup>-3</sup>. An estimate based on a k-band luminosity function measured more recently yielded a similar value. The threshold flux for a 5$`\sigma `$ detection of gamma-rays above 1 TeV by current TeV experiments is $`5\times 10^{-12}t_{day}^{-1/2}`$ erg s<sup>-1</sup> cm<sup>-2</sup>, where $`t_{day}`$ is the exposure time measured in days . With the above estimates of $`\eta `$ and $`n_{CR}`$, we expect that at least some fraction of the UHECR sources will be detectable by present TeV experiments, provided that the TeV photons escape the system. Conceivable sources of opacity that may give rise to attenuation of the TeV flux are considered next. The curvature photons produced near the black hole can be absorbed through pair production on IR photons in the galaxy. The corresponding optical depth at a given energy depends on the spectrum of IR photons and the energy dependence of the cross section. To an order of magnitude it is given by $`\tau _{\gamma \gamma }\simeq \sigma _{PP}n_{IR}R`$, where $`R`$ is the size of the IR emission region - of order several kpc as inferred from the light profiles, $`n_{IR}`$ is the number density of IR photons, and $`\sigma _{PP}`$ is the pair-production cross section, given approximately by $`\sigma _{PP}\simeq 0.2\sigma _T`$ at energies near the threshold. 
Using this approximation one finds that TeV photons would escape the galaxy provided that the IR luminosity at energy $`ϵ_{IR}=(m_ec^2)^2/ϵ_\gamma `$ satisfies: $$L_{IR}(ϵ_\gamma ^{-1})<3\times 10^{45}\left(\frac{R}{3\mathrm{kpc}}\right)\left(\frac{ϵ_\gamma }{1\mathrm{TeV}}\right)^{-1}\mathrm{erg}\mathrm{s}^{-1}.$$ (11) The integrated IR luminosity implied by eq. (11) would presumably be larger. Condition (11) is satisfied in most cases. A central continuum source, if present, would also contribute a pair-production opacity. The IR emission may arise from a cold accretion disk or dust reprocessing, and is likely to originate from small radii, of order 10 to 100 $`R_g`$ in these low-luminosity objects. Adopting a size of 10 times the gravitational radius of a $`10^{10}M_{\odot }`$ black hole for the IR emission region, we find that the TeV flux will be strongly attenuated if the corresponding IR luminosity exceeds $`10^{40}(ϵ_\gamma /1\mathrm{TeV})^{-1}`$ ergs s<sup>-1</sup>. This is slightly below the luminosity inferred for low-luminosity AGNs , and comparable to upper limits on the luminosity of the point source in elliptical galaxies . In conclusion, it has been shown that particles accelerating near the horizon of a spinning supermassive black hole threaded by externally supported magnetic field lines suffer severe energy losses through curvature emission. The curvature losses limit the maximum energy attainable by light nuclei to values well below that imposed by the voltage drop. The major fraction of the energy extracted from the rotating hole is radiated in the TeV band, with a rather hard spectrum that extends well beyond 10 TeV. Given the energy flux of cosmic rays above $`10^{19}`$ eV, as reported recently by current CR experiments, and an estimate of the density of supermassive black holes in the universe, it is concluded that if dormant AGNs are the sources of the ultra-high energy cosmic rays, then they should be detectable by current TeV experiments. 
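The escape condition (11) follows from requiring $`\tau _{\gamma \gamma }=\sigma _{PP}n_{IR}R\lesssim 1`$ with $`n_{IR}\sim L_{IR}/(4\pi R^2c\,ϵ_{IR})`$. Evaluating the resulting critical luminosity at $`ϵ_\gamma =1`$ TeV and $`R=3`$ kpc (this simple isotropic estimate is an assumption) reproduces the same order of magnitude and the $`R/ϵ_\gamma `$ scaling:

```python
import math

sigma_T = 6.652e-25            # Thomson cross section [cm^2]
kpc_cm  = 3.086e21
erg_eV  = 1.602e-12
c_cgs   = 2.998e10

eps_gamma = 1.0e12                          # TeV photon energy [eV]
eps_IR = (0.511e6)**2/eps_gamma             # target IR photon energy [eV]
R = 3.0*kpc_cm
sigma_PP = 0.2*sigma_T                      # near-threshold pair cross section

# tau = sigma_PP * n_IR * R = 1 with n_IR = L / (4 pi R^2 c eps_IR)
L_crit = 4.0*math.pi*R*c_cgs*eps_IR*erg_eV/sigma_PP
print(f"L_crit ~ {L_crit:.1e} erg/s")       # ~ 1e46, same order as eq. (11)
```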
I thank Ari Laor, Dan Maoz, Eli Waxman, David Eichler, Amiel Sternberg, and Hagai Netzer for discussions and useful comments. Support from the Israel Science Foundation is acknowledged.
# X-Ray Wakes in Abell 160 ## 1 Introduction With the advent of satellite-based X-ray astronomy, it was discovered that elliptical galaxies can contain as much interstellar gas as their spiral kin \[for example, see Forman et al. (1979)\]. In the case of ellipticals, however, this interstellar medium (ISM) predominantly takes the form of a hot plasma at a temperature of $`10^7\mathrm{K}`$. The vast majority of elliptical galaxies are found in clusters, which themselves are permeated by very hot gas — the intracluster medium (ICM) — at a temperature of $`10^8\mathrm{K}`$. The existence of these two gaseous phases raises the question of how they interact with each other. The collisional nature of such material means that one might expect ram pressure to strip the ISM from cluster members. But since the ISM is continually replenished by mass loss from stellar winds, planetary nebulae, and supernovae, it is not evident *a priori* that galaxies will be entirely denuded of gas by this process. Observing the X-ray emission from individual cluster galaxies is quite challenging, since they are viewed against the bright background of the surrounding ICM. Sakelliou & Merrifield (1998) used a deep *ROSAT* observation to detect the X-ray emission from galaxies in the moderately-rich cluster Abell 2634. They showed that the level of galaxy emission is consistent with the expected X-ray binary content of the galaxies, and hence that there is no evidence of surviving ISM in this rich environment. When one looks in somewhat poorer environments, one does see evidence for surviving interstellar gas, and for the stripping process itself. The best-documented example is M86 in the Virgo Cluster which, when mapped in X-rays, reveals a tail or plume of hot gas apparently being stripped from the galaxy by ram pressure \[Rangarajan et al. (1995), and references therein\]. 
A similar process appears to be happening to NGC 1404 in the Fornax Cluster, which displays a clear wake of X-ray emission pointing away from the cluster centre (Jones et al. 1997). Evidence for a ‘cooling wake’ formed by gravitational accretion is found in the NGC 5044 group where a soft, linear X-ray feature is seen trailing from NGC 5044 itself (David et al. 1994). Given the difficulty of detecting such faint wakes against the bright background of the ICM, it is quite possible that this phenomenon is widespread amongst galaxies in poor clusters. This possibility is intriguing, since wakes indicate the direction of motion of galaxies on the plane of the sky, and it has been shown that this information can be combined with radial velocity data to solve for both the distribution of galaxy orbits in a cluster and the form of the gravitational potential (Merrifield 1998). The existing isolated examples do not tell us, however, how common wake formation might be amongst cluster galaxies: although the observed wakes might represent the most blatant examples of widespread on-going ISM stripping, the galaxies in question might have merged with their current host clusters only recently, or be in some other way exceptional. In order to obtain a more objective measure of the importance of ram pressure stripping in poor clusters, and the frequency with which it produces wakes behind galaxies, we need to look at a well-defined sample of cluster members within a single system. The cluster Abell 160 provides an ideal candidate for such a study. Its richness class of 0 makes it a typical poor system. It lies at a redshift of $`z=0.045`$, and hence at a distance of $`270\mathrm{Mpc}`$,<sup>1</sup><sup>1</sup>1We adopt a value for the Hubble constant of $`H_0=50\mathrm{km}\mathrm{s}^1\mathrm{Mpc}^1`$ throughout this paper. 
which is sufficiently close to allow galactic-scale structure to be resolved in its X-ray emission and corresponds to a size scale of $`79`$ kpc arcmin<sup>-1</sup>. Furthermore, its Bautz-Morgan class of III means that it contains quite a number of comparably luminous galaxies, and one might hope to detect ISM emission most readily from such a sample. In addition, its Rood-Sastry classification of C means that its members are concentrated towards the centre of the cluster: galaxies lying in the cluster core, where the ICM density is high, will be most affected by ram pressure stripping. Finally, Pinkney (1995) has obtained positions and redshifts for an almost complete, independently-defined set of galaxies in the field of Abell 160, providing an objective sample of cluster members for this study. We therefore obtained a deep *ROSAT* HRI X-ray image of Abell 160 in order to investigate ISM stripping in this typical poor cluster. We use a Galactic H I column density towards A160 of $`4.38\times 10^{20}`$ cm<sup>-2</sup> (Stark et al. 1992) throughout this paper. In the next section, we present the X-ray observation and the redshift data employed. Section 3 describes an objective method for detecting and quantifying wake features in the X-ray data, and Section 4 presents the results of applying this approach to the Abell 160 data. We conclude in Section 5 with a discussion of the interpretation of the results.

## 2 X-Ray Data and Optical Redshifts

Abell 160 was observed with the *ROSAT* HRI in three pointings (1997 January and July, and 1998 January) for a total integration time of 70.4 ksec (see Table 1). The data were reduced with the *ROSAT* Standard Analysis pipeline, with subsequent analysis performed using the IRAF/PROS software package. Point sources detected in the three X-ray observations were used to examine the registration in relation to the nominal *ROSAT* pointing position. 
A *Digitized Sky Survey* image of Abell 160 was employed to provide the optical reference frame. The second observation set was shifted $`0.9`$ arcseconds east and $`1.7`$ arcsec south, and the third set was shifted $`0.3`$ arcsec east and $`1.8`$ arcsec south. Registration of all three sets of observations was then within 0.5 arcseconds of the optical reference, tied down to five sources. A greyscale image of the merged X-ray dataset is shown in Figure 1. The centroid of the diffuse X-ray emission in this image was calculated by interpolating over any bright point sources, and projecting the emission down on to two orthogonal axes. Fitting a Gaussian to each of these one-dimensional distributions then gives a robust estimate for the centroid of the emission. This procedure yielded a location of $$\begin{array}{c}\alpha _{2000.0}=01^\mathrm{h}13^\mathrm{m}05^\mathrm{s}\\ \delta _{2000.0}=+15^{\circ }29^{\prime }48^{\prime \prime }\end{array}\}\pm 43\mathrm{arcsec},$$ which was adopted as the cluster centre for the subsequent analysis. Pinkney (1995) obtained redshifts for the 94 brightest galaxies in the field of A160 using the MX multi-object spectrograph on the Steward Observatory 2.3m telescope. Figure 2 shows the resulting velocity distribution. In order to investigate the X-ray properties of normal cluster members, we have excluded the central galaxy since it contains a twin-jet radio source (Pinkney 1995), so its X-ray emission may well contain a significant contribution from the central AGN. There are 91 galaxies within 8,000 km s<sup>-1</sup> of the twin-jet source ($`v_{\mathrm{TJ}}=13,173\pm 100`$ km s<sup>-1</sup>); some of these form a background cluster detected at approximately 18,000 km s<sup>-1</sup>. After further eliminating galaxies outside the field of view of the HRI, we end up with a sample of 35 cluster members whose X-ray emission we wish to quantify. 
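The projection-and-Gaussian-fit centroiding described above is easy to prototype. The sketch below is our own toy implementation with synthetic data; none of the names or numbers come from the paper. It recovers a known centre from a noisy, diffuse blob:

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, mu, sigma, base):
    return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2) + base

def centroid_from_projections(image):
    """Project counts on to each axis and fit a 1-D Gaussian to each profile;
    the two fitted means give a robust centroid estimate."""
    centre = []
    for axis in (0, 1):
        profile = image.sum(axis=axis)   # axis=0 -> profile vs x, axis=1 -> vs y
        coords = np.arange(profile.size, dtype=float)
        p0 = [profile.max() - profile.min(),
              coords[np.argmax(profile)], 10.0, profile.min()]
        popt, _ = curve_fit(gaussian, coords, profile, p0=p0)
        centre.append(popt[1])
    return tuple(centre)                 # (x_centre, y_centre) in pixels

# Synthetic 'diffuse emission' with a known centre at (120, 80)
rng = np.random.default_rng(0)
y, x = np.mgrid[0:200, 0:200]
image = 50.0 * np.exp(-((x - 120) ** 2 + (y - 80) ** 2) / (2 * 25.0 ** 2))
image = image + rng.poisson(2.0, size=image.shape)   # flat background + noise
xc, yc = centroid_from_projections(image)
```

Interpolating over bright point sources, as done for the real data, would simply precede the projection step.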
This subsample, highlighted in Figure 2, has a line-of-sight velocity dispersion of 560 km s<sup>-1</sup>, directly comparable to that of other poor clusters. The locations of these galaxies are marked on Figure 1, where the different symbols indicate different subsamples of galaxies based upon optical luminosity, as described in the figure’s caption. ## 3 Detecting Wakes Having combined the optical cluster member locations with the X-ray data, we now investigate whether there is X-ray emission associated with any individual galaxy, and whether it takes the form of an X-ray wake. On examining Figure 1, the eye is drawn to a number of cases where there is an enhancement in the X-ray emission near, but offset from, the optical galaxy position — see, for example, the galaxy at RA 1:12:38, Dec 15:28:03. It would be tempting to ascribe these near-coincidences to X-ray wakes. However, it is also clear from Figure 1 that there are many apparent enhancements in the X-ray emission that are totally unrelated to cluster members: some will be from foreground and background point sources, while others are probably substructure or noise associated with the ICM itself. What we need, therefore, is some objective criterion for assessing the probability that any given wake is a true association rather than a chance superposition. Further, even if we cannot unequivocally decide whether some particular feature is real, we need to be able to show that there are too many apparent wakes for all to be coincidences. The test adopted to meet these requirements is as follows. First, for each galaxy we must seek to detect the most significant wake-like emission that might be associated with it. We must therefore choose a range of radii from the centre of the galaxy in which to search for a wake. 
Scaling the wakes previously detected in other clusters to the distance of Abell 160, we might expect enhanced X-ray emission at radii $`r<8\mathrm{arcsec}`$, equivalent to a distance of approximately 10 kpc from the galaxy. We also want, as far as possible, to exclude emission from any faint central AGN component in the galaxy, so we only consider emission from radii $`r>3\mathrm{arcsec}`$. Balsara, Livio & O’Dea (1994) found wake-like structure in high-resolution hydrodynamic simulations with a scalelength of $`\sim 2`$ arcseconds at the distance of Coma (see also Stevens, Acreman & Ponman 1999); in the poor environment of Abell 160 we expect wakes to be longer as their formation should be dominated by ISM stripping. We adopt the annulus $`3<r<8\mathrm{arcsec}`$ to search for wakes: searches using annuli with larger inner and outer radii were also performed, but did not improve the statistical results described below. In our chosen annulus, we search for the most significant emission feature by taking a wedge with an opening angle of 45 degrees, rotating about the centre of the galaxy in 10 degree increments, and finding the angle that produces the maximum number of counts in the intersection of the wedge and the annulus. Finally, the contribution to the emission in this wedge from the surrounding ICM is subtracted by calculating an average local background between radii $`25<r<60\mathrm{arcsec}`$, centered on the galaxy and each comparison region at the same cluster radius (see below), to give a brightest wake flux, $`f_{\mathrm{wake}}`$. To provide a diagnostic as to the nature of this wake, its direction on the plane of the sky, $`\mathrm{\Theta }`$, was also recorded. 
This angle was measured relative to the line joining the galaxy to the cluster centre, so that $`|\mathrm{\Theta }|=0^{\circ }`$ corresponds to a wake pointing directly away from the cluster centre, while $`|\mathrm{\Theta }|=180^{\circ }`$ indicates one pointing directly toward the cluster centre (under the assumption of approximately spherical symmetry in the cluster, there is no physical information in the sign of $`\mathrm{\Theta }`$). Having found this strongest wake feature, we must assess its significance. To do so, we simply repeated the above procedure using $`n_{\mathrm{comp}}=100`$ points for each galaxy at the same projected distance from the cluster centre, but at randomly-selected azimuthal angles. These comparison points were chosen to lie at the same distance from the cluster centre so that the properties of the ICM and the amount of vignetting in the *ROSAT* image were directly comparable to that at the position of the real galaxy. As noted above, counts in ‘background annuli’ were also acquired and all these counts were averaged together in order to obtain a value for the background to be subtracted from the counts in each wedge region. Fewer comparison regions were used for the 12 galaxies closest to the cluster centre, as otherwise the count regions would overlap. The comparison regions and real data were then sorted by their values of $`f_{\mathrm{wake}}`$, from faintest to brightest, and the rank of the galaxy (i.e. the position of the real data in this ordered list), $`\mathrm{Rank}_{\mathrm{gal}}`$, computed. The statistic $$k=\frac{\mathrm{Rank}_{\mathrm{gal}}}{n_{\mathrm{comp}}+1}$$ (1) was then calculated. Clearly, if all the apparent galaxy wakes were spurious, then nothing would differentiate these regions from the comparison regions, and we would expect $`k`$ to be uniformly distributed between 0 and 1. 
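A minimal implementation of the wedge search and the rank statistic of equation (1) might look as follows (the function names and defaults are ours; photon positions are taken to be in arcsec relative to the galaxy, and ties in flux are not handled specially):

```python
import numpy as np

def brightest_wedge(x, y, cx, cy, r_in=3.0, r_out=8.0, opening=45.0, step=10.0):
    """Maximum photon count over a 45-degree wedge rotated in 10-degree steps
    about (cx, cy), restricted to the 3-8 arcsec search annulus."""
    dx, dy = x - cx, y - cy
    r = np.hypot(dx, dy)
    phi = np.degrees(np.arctan2(dy, dx)) % 360.0
    in_annulus = (r > r_in) & (r < r_out)
    best_counts, best_angle = -1, 0.0
    for a0 in np.arange(0.0, 360.0, step):
        in_wedge = ((phi - a0) % 360.0) < opening
        n = int(np.count_nonzero(in_annulus & in_wedge))
        if n > best_counts:
            best_counts, best_angle = n, a0
    return best_counts, best_angle

def k_statistic(f_gal, f_comp):
    """Rank of the galaxy's wake flux among the comparison regions,
    normalised so k is uniform on (0, 1] under the null hypothesis."""
    rank = int(np.sum(np.asarray(f_comp) < f_gal)) + 1
    return rank / (len(f_comp) + 1)
```

Under the null hypothesis the rank is uniform over its $`n_{\mathrm{comp}}+1`$ possible values, so $`k`$ is (discretely) uniform on (0, 1], as stated above.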
For significant wake features, on the other hand, we would expect the distribution of $`k`$ values to be skewed toward $`k\sim 1`$. As an additional comparison to the X-ray emission around galaxies in Abell 160, we performed the same analysis on 70.4 ksec of ‘blank field’ data, extracted from the *ROSAT* Deep Survey, which encompasses $`\sim 1,320`$ ksec of HRI pointings towards the Lockman Hole (see e.g. Hasinger et al. 1998). We searched for wake-like features around 35 random positions across this HRI field, using comparison regions for each ‘galaxy’ as defined above. ## 4 Results We have applied the above analysis to the confirmed cluster members using IRAF software packages and the merged QP datafile. Figure 3 shows the distribution of the $`k`$ statistic for both the complete sample of 35 galaxies and the subsample of the 15 brightest galaxies. The full sample of A160 galaxies yields a reduced chi-squared value of $`\chi ^2=1.617`$ when fitted by a uniform distribution, which is approximately a $`3\sigma `$ deviation; the probability of obtaining $`\chi ^2\geq 1.617`$ for 34 degrees of freedom is only $`5.1\times 10^{-3}`$. The full sample has a mean $`k`$ of 0.628 and the subsample of the 15 brightest galaxies yields a mean $`k`$ of 0.741; indeed, selecting subsamples of the 20 or so optically brightest galaxies always gives a mean $`k>0.7`$ and the distributions are clearly skewed towards $`k=1`$. This suggests that we have detected significant wake-like excesses in these data. Kolmogorov-Smirnov tests were performed to compare the $`k`$ statistic results to a uniform distribution. Figure 4 presents these test results for the complete sample of galaxies as well as the brightest 15 subsample. The figure’s annotation gives the values of the K-S statistic, $`d`$, which is simply the greatest distance between the data’s cumulative distribution and that of a uniform distribution, and *prob*, which is a measure of the level of significance of $`d`$. 
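The comparison of the $`k`$ values with a uniform distribution can be reproduced with a standard one-sample Kolmogorov-Smirnov test. The illustration below uses synthetic data (the seed, the sample size of 35, and the skew transform are our inventions, not the Abell 160 measurements):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Null case: 35 'k' values drawn from a genuinely uniform distribution
k_null = rng.uniform(size=35)
d_null, p_null = stats.kstest(k_null, "uniform")

# Wake-like case: 35 'k' values artificially skewed toward k = 1
k_skew = rng.uniform(size=35) ** 0.2
d_skew, p_skew = stats.kstest(k_skew, "uniform")

print(f"null: d = {d_null:.3f}, prob = {p_null:.3f}")
print(f"skew: d = {d_skew:.3f}, prob = {p_skew:.2e}")
```

The skewed sample, whose values pile up near $`k=1`$, yields a much larger $`d`$ and a far smaller *prob* than the genuinely uniform one.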
The probability of the detected features arising from chance is less than $`3.5\times 10^{-4}`$ for the bright sample and $`3.8\times 10^{-3}`$ for the entire sample of 35 galaxies. It is clear from Figure 4 that the more luminous galaxies do not follow a uniform distribution in $`k`$-space. The analysis applied to the Deep Survey ‘blank field’ data yielded a mean value for the $`k`$ statistic of 0.48, indicating that the results are uniformly distributed in $`k`$-space and that we do not detect ‘wakes’ in this comparison field. Indeed, assuming a uniform distribution for $`k`$, the fit to the data has a chi-squared value of $`\chi ^2=1.015`$, implying that the values of $`k`$ are uniformly distributed to high precision. In order to investigate the nature of the wake-like excesses in the A160 data, we now consider the distribution of their directions on the sky. Figure 5 shows the directions of the strongest wake-like features found using the $`k`$ statistic analysis, for all 35 cluster members. The lines on this figure represent the wakes and their lengths are drawn proportional to wake strength. The azimuthal distributions of the counts found in some of the highest-ranked wakes are shown in Figure 6. The wake profiles show wakes for these galaxies to be statistically significant, with a mean net count of $`\sim 5`$, corresponding to a mean net X-ray flux of $`1.7\times 10^{-15}`$ erg cm<sup>-2</sup> s<sup>-1</sup>. The wake fluxes translate into X-ray luminosities in the range (1–2) $`\times 10^{40}`$ erg s<sup>-1</sup>. Grebenev et al. (1995) used a wavelet transform analysis to study the small-scale X-ray structure of the richness class 2 cluster Abell 1367: they found 16 extended features, of which nine were associated with galaxies and had luminosities in the range (3–30) $`\times 10^{40}`$ erg s<sup>-1</sup>. 
They concluded that the features could be associated with small galaxy groups, as suggested by Canizares, Fabbiano & Trinchieri (1987), rather than individual galaxies. The wake-like features we have detected have X-ray luminosities of the same order as individual galaxies in Abell 160, and the emission is clearly confined to extensions in specific directions away from the galaxies. Furthermore, the features noted by Canizares, Fabbiano & Trinchieri (1987) have size scales of $`\sim 1`$ arcmin, much larger than the expected wake size at the distance of A1367 and so not directly comparable with the current work. Figure 7 shows the distribution of apparent wake angles as a function of the galaxies’ radii in the cluster, for different strengths of wake as quantified by the $`k`$ statistic. As we might expect, for low values of $`k`$, where the wake is almost certainly a noise feature, the values of $`\left|\mathrm{\Theta }\right|`$ are randomly distributed between $`0^{\circ }`$ and $`180^{\circ }`$. However, for values of $`k>0.7`$, which are unlikely to be attributable to noise, there is only one wake pointing at an angle of $`\left|\mathrm{\Theta }\right|>135^{\circ }`$. If the distribution of wake directions were intrinsically isotropic, the probability of finding only one of the 12 most significant wakes in this range of angles is only 0.01. Given the *a posteriori* nature of this statistical measure, its high formal significance should not be over-interpreted. Nonetheless, there definitely appears to be a deficit of wakes pointing toward the cluster centre. ## 5 Discussion As a first attempt at an objective determination of the frequency of wakes behind cluster galaxies, we have found significant excesses of X-ray emission apparently offset from their host galaxies. Before exploring the possible astrophysical meaning of such features, we must rule out more prosaic possibilities. 
If the X-ray emission were truly centred on the galaxies, we would still detect offset X-ray emission if there were significant positional errors in the optical galaxy locations. The uncertainties on these positions, however, are much less than the radii at which we have detected the wakes, so this possibility can be excluded. Similarly, an overall mismatch between the optical and X-ray reference frames would produce offsets between X-ray and optical locations of coincident sources, but the distribution of apparent offset directions shown in Figure 5 is not consistent with the coherent pattern that one would expect from either an offset or a rotation between the two frames. We could explain the excess of sources where the X-ray emission lies at larger radii in the cluster than the optical position if the spatial scale of the optical data had been underestimated relative to that of the X-ray data. But both the *ROSAT* and optical data image scales are extremely well calibrated. Further, if such a mismatch in magnification were responsible for the effect, one would expect the radial offsets to increase with distance from the field centre, and Figure 7 provides no evidence that the wakes become more radially oriented at large distances from the cluster centre. A further possibility is that the distorted nature of the X-ray emission could arise from an asymmetry in the *ROSAT* HRI point-spread function (PSF). Such asymmetries are documented (see, for example, Morse et al. 1995), but the observed shape of the HRI PSF actually becomes tangentially extended at large off-axis angles, so one would expect the wakes to be oriented at angles of $`|\mathrm{\Theta }|\sim 90^{\circ }`$. The three wakes at very large radii and $`|\mathrm{\Theta }|\sim 70^{\circ }`$–$`120^{\circ }`$ in Figure 7 could well result from this phenomenon, but there is no evidence for any such effect at smaller radii. We are therefore forced to return to trying to find an astrophysical explanation for the bulk of the observed wakes. 
As discussed in the introduction, an individual galaxy can emit at X-ray wavelengths due to both its hot ISM component and its contingent of X-ray binaries. Such emission could extend to the radii where we have been searching for wakes, or appear to do so due to the blurring influence of the PSF, so we might expect some wake-like features to appear simply due to this component. Such asymmetric wake features could arise from Poisson noise on intrinsically symmetric emission, or they could reflect a real asymmetry in the emission. For example, the emission from X-ray binaries could be dominated by one or two ultra-luminous sources in the outskirts of a galaxy, leading to an offset in the net X-ray emission. Even the X-ray wake phenomenon that we are seeking to detect can be described as an asymmetric distortion in the normal ISM emission. How, then, are we to distinguish between these possibilities? Perhaps the best clue as to the nature of the detected asymmetric emission comes from the distribution of the angles at which it is detected, $`\mathrm{\Theta }`$. As we have described above, there is a deficit of wakes pointing toward the cluster centre. It is hard to see how such a systematic effect can be attributed to any of the more random processes such as Poisson noise on a symmetric component, or even the azimuthal distribution of X-ray binaries within the galaxy. It therefore seems highly probable that we are witnessing the more systematic wake phenomenon that we seek. If a wake indicates the direction of motion of the galaxy, then the deficit of detections at large values of $`|\mathrm{\Theta }|`$ implies that the production mechanism becomes ineffective when a galaxy is travelling out from the cluster centre. This conclusion has a simple physical explanation: if a galaxy is travelling on a reasonably eccentric orbit, by conservation of angular momentum it will spend a large fraction of its time close to the orbit’s apocentre. 
During this period, its velocity is slow and the ICM it encounters is tenuous, so it is able to retain its ISM. In fact, continued mass loss from stellar winds and planetary nebulae means that the amount of gas in its ISM will increase. Ultimately, however, its orbit will carry it inward toward the core of the cluster. At this point, the galaxy is travelling more rapidly, and encounters the higher density gas near the centre of the cluster, so ram pressure stripping becomes more efficient, and a wake of stripped ISM material will be seen behind the infalling galaxy. By the time the galaxy passes the pericentre of its orbit, the ISM will have been stripped away to the extent that the outgoing galaxy does not contain the raw material to create a measurable wake, explaining the lack of detected wakes at large values of $`|\mathrm{\Theta }|`$. This simple picture seems to fit the data on Abell 160 rather well; a similar scenario was invoked by McHardy (1979) to explain the locations of weak radio sources in clusters. It is also notable that the beautiful wake feature behind NGC 1404 in the Fornax Cluster detected by Jones et al. (1997) is oriented such that it points radially away from the cluster centre. Clearly, though, more deep X-ray observations of clusters are required if we are to confirm the widespread applicability of this scenario. ### Acknowledgements The authors are grateful to the referee for helpful comments and suggestions, and to Ian McHardy for several fruitful discussions. ND acknowledges receipt of a PPARC Studentship. This research has made use of data obtained from the Leicester Database and Archive Service at the Department of Physics and Astronomy, Leicester University, UK.
# The effect of the annealing temperature on the local distortion of La0.67Ca0.33MnO3 thin films ## I Introduction The Double Exchange (DE) mechanism was originally considered to be the main interaction contributing to the colossal magnetoresistance (CMR). In the DE model, if the spins of two neighboring Mn ions are aligned, then an electron will require less energy to hop from one Mn site to another. Consequently, at low temperature, the lattice will have ferromagnetic (FM) order such that the total system (both local and itinerant sub-systems) has the lowest energy. Although the DE model can explain many properties of CMR materials, the magnitude of the MR calculated from the DE model is much smaller than the actual measured MR. Millis et al. suggested that local Jahn-Teller distortions also play an important role in CMR materials, and are needed to explain the large magnitude of the MR in these materials. Both X-ray Absorption Fine Structure (XAFS) and pair-distribution function (PDF) analysis of neutron diffraction data have been done to study the local structure of the CMR materials, and an important relationship between the local distortions and magnetism in these materials has been found. These new experiments investigate the local structure of thin films of La<sub>0.67</sub>Ca<sub>0.33</sub>MnO<sub>3</sub> (LCMO) to understand more about this relationship. Recent experiments show that the non-fully annealed thin film samples are oxygen deficient; the Curie temperature, T<sub>c</sub>, the saturated magnetization, M<sub>0</sub>, and the resistivity peak temperature, T<sub>MI</sub>, of the samples increase with increasing oxygen stoichiometry, while the resistivity of the samples decreases. For the fully annealed thin film sample, T<sub>c</sub> and T<sub>MI</sub> are almost the same as those of the bulk material. 
It is not yet clear what mechanisms are responsible for suppressing T<sub>c</sub>, M<sub>0</sub> and T<sub>MI</sub> in the other samples. Our experiments on transport and magnetization measurements show similar annealing effects with significant changes in the resistivity and the magnetization. They also show that a huge MR occurs for the films, especially for the as-deposited sample at low temperature. XAFS experiments on these thin film samples allow us to observe the local structure around the Mn sites, primarily the local distortion of the Mn-O bonds. This paper focuses on the *changes* in the local structure of CMR thin films that are induced by annealing at different temperatures. These results may also help us to better understand how annealing modifies other sample properties. Diffraction studies of LaMnO<sub>3</sub> and CaMnO<sub>3</sub> have been carried out by several groups. They show that there are three groups of Mn-O bonds in LaMnO<sub>3</sub> with different lengths: 1.91, 1.97 and 2.17 Å, while in CaMnO<sub>3</sub> all bond lengths are nearly the same, at 1.90 Å (the variation is within 0.01 Å). Our previous work has compared our XAFS data with the diffraction results, and found that the local structure around the Mn site in LCMO CMR samples is very similar to the average structure determined by the diffraction data. More comparison details are shown in reference . In Sec. II, we provide a brief description of the samples and some experimental details. We present the magnetization and transport property data in Sec. III and our XAFS results in Sec. IV. The conclusions are given in Sec. V. ## II Samples and Experiments The La<sub>0.67</sub>Ca<sub>0.33</sub>MnO<sub>3</sub> thin-film samples were deposited on SrTiO<sub>3</sub> substrates using pulsed laser deposition (PLD); each film is 3000 Å thick. See Ref. for additional details. The samples we chose for the XAFS studies were: as-deposited at 750 K, annealed at 1000 K and annealed at 1200 K. 
The annealed samples were held at their respective temperatures for 10 hours in flowing oxygen, and were heated and cooled at 2 K per minute. The XAFS experiments were done on beamline 10-2 at SSRL using Si $`\langle 220\rangle `$ monochromator crystals and a 13-element Ge detector to collect Mn $`K_\alpha `$ fluorescence data. The thin film samples were aligned at $`\sim 55.0^{\circ }`$ with respect to the X-ray beam to make the x, y and z axes equivalent, so that the spectra correspond to those of a powder. This angle follows from the polarization dependence of the photoelectric effect. For each sample, we made four sweeps at each temperature; for two of these sweeps, we rotated the sample by $`1.5^{\circ }`$ in order to determine the position of glitches, which must be removed. ## III Magnetization and Resistivity The magnetization $`vs`$ temperature data for these thin film samples are shown in Fig. 1. All samples have broad ferromagnetic-to-paramagnetic phase transitions and T<sub>c</sub> increases with the annealing temperature. From these measurements, we extract T<sub>c</sub> for these samples (see Table I). The saturated magnetization of the samples increases with annealing temperature. This indicates that at higher annealing temperature, a larger fraction of the sample becomes ferromagnetic in the FM phase at low temperature. Our previous experiments show that the 30% Ca doped LCMO powder sample has a Curie temperature of $`\sim 260`$ K, which is very close to the T<sub>c</sub> of the fully annealed thin film sample. Fig. 2 is a plot of log<sub>e</sub> of the resistivity $`vs.`$ temperature for all three samples. We show both the data in zero field and at 5.5 Tesla. The resistivity decreases with annealing temperature. At zero field, a large peak is present for the 1000 K annealed sample; the resistivity decreases and the peak moves to higher temperature for the 1200 K annealed sample. 
There is no metal-to-insulator phase transition for the as-deposited sample; this sample also shows a very large resistivity at low temperature. When the external magnetic field is raised to 5.5 Tesla, the resistivity drops dramatically with magnetic field for the 1000 K and 1200 K annealed samples at temperatures near the resistivity peak, and the resistivity peaks move to higher temperature. For the as-deposited sample, the biggest change in MR occurs at the lowest measuring temperature (70 K). It should also be noted that the peak in resistivity is very close to T<sub>c</sub> for the 1200 K annealed sample, but well below T<sub>c</sub> (over 10 K difference) for the 1000 K annealed sample. Fig. 3 shows CMR% (expressed as a percentage) $`vs.`$ temperature data for these samples on a log<sub>e</sub> scale. Here, we define CMR% to be: CMR% = 100 $`\times `$ (R<sub>0</sub> − R<sub>H</sub>)/R<sub>H</sub>, where R<sub>0</sub> is the resistivity without magnetic field and R<sub>H</sub> is the resistivity with a 5.5 T field. The magnetoresistance for these thin films is very large compared to that of the corresponding powder samples. This is especially true for the as-deposited sample at low temperature—the change of magnetoresistance at 70 K is about 3200%. The maximum CMR% is about 1000% for the 1000 K sample and about 250% for the 1200 K sample. There is a double peak structure for the 1200 K annealed sample clearly visible in Fig. 3. As mentioned earlier, our thin film samples are deposited on SrTiO<sub>3</sub> (STO) substrates. When the sample is deposited on STO, some Sr diffuses into the first few hundred Å of the film and changes the stoichiometry of that layer. This Sr diffusion could produce a resistance peak at a lower temperature and may be responsible for this extra peak. 
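The CMR% definition above translates directly into code; a one-line helper (ours), with a consistency check against the quoted 3200% figure:

```python
def cmr_percent(r_zero_field, r_in_field):
    """CMR% = 100 * (R_0 - R_H) / R_H, following the definition in the text."""
    return 100.0 * (r_zero_field - r_in_field) / r_in_field

# A resistivity that drops by a factor of 33 in the 5.5 T field corresponds
# to the ~3200% figure quoted for the as-deposited sample at 70 K:
print(cmr_percent(33.0, 1.0))   # 3200.0
```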
We chose to use STO substrates instead of LaAlO<sub>3</sub> (LAO) substrates, which have no diffusion problem, because the large La $`L_I`$-edge XAFS from the substrate would interfere with our Mn K edge XAFS data. Our XAFS data are sensitive to this Sr diffusion layer only if we focus on the further neighbors such as Mn-La, Mn-Ca and Mn-Sr. The Mn-O bond distance does not change much between Ca and Sr substitution. Consequently, for the Mn-O pair distribution function which is the focus of this paper, we are not very sensitive to Sr in the lower 10% of the film. ## IV XAFS Data Analysis and Discussion We collected all Mn-$`K`$ edge data in fluorescence mode. First, the pre-edge absorption (absorptions from other edges) was removed using the Victoreen formula. Next we extract k$`\chi `$(k), where the photoelectron wave vector k is obtained from k=$`\sqrt{2m_e(E-E_0)/\hbar ^2}`$ and the XAFS function $`\chi `$(k) is defined as $`\chi `$(k)=$`\mu `$(k)/$`\mu _0`$(k)-1. We fit a 7-knot spline to $`\mu `$(k) (the $`K`$-edge absorption data above the edge) to obtain the background function $`\mu _0`$(k) (embedded atom function). An example of such data is shown in Fig. 4. Next we obtain the r-space data from the Fourier Transform (FT) of k$`\chi `$(k); fits to the data were carried out in r-space. Some details of these fits are shown later in this paper. See references for additional details. In Fig. 5, we show the Mn $`K`$-edge Fourier transformed (FT) $`r`$-space data for all three thin-film samples. In this figure, the position of each peak corresponds to an atom pair shifted by a well understood phase shift, $`\mathrm{\Delta }`$r; for example, the first peak corresponds to the Mn-O pairs and the second peak, which is near 3 Å, corresponds to the Mn-La (Mn-Ca) pairs. The width, $`\sigma `$, of the pair distribution function is a measure of the local distortions in a shell of neighboring atoms. 
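The energy-to-wavevector conversion used in the extraction above is worth a quick numerical check. The sketch below uses CODATA constants and reads the formula with the reduced Planck constant; it reproduces the familiar rule of thumb k(Å⁻¹) ≈ 0.512 √(ΔE/eV):

```python
import math

M_E = 9.1093837015e-31     # electron mass (kg)
HBAR = 1.054571817e-34     # reduced Planck constant (J s)
EV = 1.602176634e-19       # joules per electron-volt

def photoelectron_k(delta_e_ev):
    """k = sqrt(2 m_e (E - E0)) / hbar, converted to inverse angstroms."""
    k_per_m = math.sqrt(2.0 * M_E * delta_e_ev * EV) / HBAR
    return k_per_m * 1e-10   # m^-1 -> Angstrom^-1

# 100 eV above the edge corresponds to k of about 5.12 inverse angstroms
print(photoelectron_k(100.0))
```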
In XAFS, a large $`\sigma `$ leads to a decrease in the amplitude of the $`r`$-space peak. We obtain some results from these data without having to resort to curve fitting. First, the amplitude of the Mn-O peak (the first peak) decreases with increasing temperature. That means there are increasing distortions in each sample with increasing temperature. Second, for the higher annealing temperature, the amplitude at low temperature increases and the change of the amplitude with temperature for the Mn-O peak is larger than that for the other samples. The amount of distortion removed at low temperature is smaller for the 1000 K annealed sample, while most of the distortion in the as-deposited sample is still present at $`T`$ = 20 K. The amplitude of the r-space data at high temperature (310 K) is almost the same for all three samples; this suggests that the amount of distortion at high temperatures doesn’t change very much with the annealing temperature; however, it is still clearly less than the local distortion observed previously in LaMnO<sub>3</sub> at 300 K. Our r-space data were fit using a Gaussian pair distribution function for the Mn-O, Mn-Ca/La and Mn-Mn shells. The pair-distribution width, $`\sigma `$(T), (for Mn-O) was determined from these detailed fits to the data, which were carried out using FEFF6 theoretical functions (see Fig. 6). In the fit, $`S_0^2N`$ was fixed at 4.3, where N is the number of nearest neighbors (6 O neighbors), and $`S_0^2`$ is an amplitude reduction factor. k$`\chi `$(k) arises from single-electron processes and is normalized to the step height; $`S_0^2`$ corrects for the fact that the measured step height also includes multi-electron processes. There is an absolute uncertainty in $`S_0^2`$ of roughly 10%; however, small changes in $`S_0^2`$ move all the curves up or down in Fig. 6 and do not change the shape or relative position. 
The values of $`\sigma ^2`$ obtained from the fits provide a measure of the distortion of the Mn-O bond. Different contributions to the broadening add in quadrature, and hence $`\sigma ^2`$ has the general form: $`\sigma ^2=\sigma _{static}^2+\sigma _{phonons}^2+\sigma _{other\ mechanisms}^2`$ At low temperature, $`\sigma ^2`$ is dominated by zero-point motion and some static distortions. For all the manganites, the smallest value for $`\sigma `$ is about 0.04 Å; consequently, small variations in bond lengths, such as those occurring in CaMnO<sub>3</sub>, are not directly observed in $`\sigma `$. It is surprising that the net distortion of the substituted CMR samples is about as small as that observed for the more ordered CaMnO<sub>3</sub> structure at 20 K. For each temperature, four traces are analyzed and averaged. The relative errors shown in Fig. 6 are the root mean square (rms) variation of the fit result at each temperature. For the as-deposited sample, $`\sigma ^2`$ changes very little with decreasing temperature; there is a larger change for the 1000 K annealed sample, and an even larger temperature dependence for the 1200 K annealed sample. We also find in Fig. 6 that, at 320 K, $`\sigma ^2`$ increases as the annealing temperature is lowered. This indicates that some of the static defects in the as-deposited sample can be removed during the annealing process. The solid line in Fig. 6 corresponds to the data for the Mn-O bond in CaMnO<sub>3</sub>, which has a high Debye temperature (950 K); $`\sigma ^2`$ for this sample will be denoted $`\sigma _T^2`$. (The CaMnO<sub>3</sub> data we use in this figure are obtained from a powder sample. There might be a difference of up to 10% between powder samples and thin film samples since they are in different forms, and this difference may change the effective value of $`S_0^2`$ in the fit process.) 
The difference between $`\sigma _{data}^2`$ and $`\sigma _T^2`$ at low temperature is due to a static distortion $`\sigma _{static}^2`$, which is defined as: $`\sigma _{static}^2=\sigma _{data}^2(20K)-\sigma _T^2(20K).`$ To estimate this quantity, we shift the solid line vertically until it fits the low-temperature data for the 1200 K annealed sample. This yields the dashed line in Fig. 6, which is defined to be $`\sigma _T^2`$ + $`\sigma _{static}^2`$. Although $`\sigma _{static}^2`$ is almost zero for the 1200 K annealed sample, it is large for the other samples; we include it here for the 1200 K annealed sample to clarify its definition. The contribution to $`\sigma _{data}^2`$ above the dashed line is attributed to a polaron distortion, where the full (maximum) polaron distortion, $`\sigma _{FP}^2`$, is defined by: $`\sigma _{FP}^2=\sigma _{data}^2(300K)-\sigma _T^2(300K)-\sigma _{static}^2.`$ We have found in previous work that a useful parameter is the distortion removed as T drops below T<sub>c</sub>, $`\mathrm{\Delta }\sigma ^2`$, which we define as follows. First, the $`\sigma _T^2`$ curve is shifted vertically (by an amount $`\sigma _{FP}^2`$ + $`\sigma _{static}^2`$) such that it fits the high-temperature data. This yields the dotted line shown in Fig. 6, which is $`\sigma _T^2`$ + $`\sigma _{static}^2`$ + $`\sigma _{FP}^2`$. This dotted line represents the expected Debye behavior plus static distortion if no polaron distortion were removed upon cooling. We define $`\mathrm{\Delta }\sigma ^2`$ as the difference between the dotted line and the data: $`\mathrm{\Delta }\sigma ^2=\sigma _T^2+\sigma _{FP}^2+\sigma _{static}^2-\sigma _{data}^2.`$ A similar analysis is carried out for the as-deposited sample and the 1000 K annealed sample (the corresponding curves for $`\sigma _T^2`$ + $`\sigma _{static}^2`$ and $`\sigma _T^2`$ + $`\sigma _{static}^2`$ + $`\sigma _{FP}^2`$ are not shown in Fig. 6). 
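The decomposition defined above can be sketched numerically. The temperature grid and $`\sigma ^2`$ values below are illustrative placeholders, not the measured data of Fig. 6:

```python
import numpy as np

# Illustrative sketch of the distortion decomposition; the numbers below
# are placeholders, not the measured values shown in Fig. 6.
T = np.array([20.0, 100.0, 200.0, 300.0])            # temperature (K)
sigma2_ref = np.array([3.0, 3.5, 4.5, 5.5]) * 1e-3   # sigma_T^2 (A^2), CaMnO3 reference
sigma2_data = np.array([4.0, 4.8, 7.0, 9.0]) * 1e-3  # sigma_data^2 (A^2), sample

# Static distortion: offset of the data above the reference at 20 K
sigma2_static = sigma2_data[0] - sigma2_ref[0]

# Full polaron distortion: excess at 300 K beyond reference plus static part
sigma2_FP = sigma2_data[-1] - sigma2_ref[-1] - sigma2_static

# Distortion removed on cooling: dotted line (reference + static + full polaron)
# minus the data
delta_sigma2 = sigma2_ref + sigma2_static + sigma2_FP - sigma2_data
```

By construction, delta_sigma2 vanishes at the high-temperature end and grows as the sample is cooled through T<sub>c</sub>.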
It is also important to point out that the difference between $`\sigma ^2`$ for the 1200 K annealed sample and that of CaMnO<sub>3</sub> at 20 K is very small, which suggests that the Mn-O local structure of the fully annealed sample can be as ordered as that of CaMnO<sub>3</sub>, even though it is a doped sample. The same result was obtained for La<sub>1-x</sub>Ca<sub>x</sub>MnO<sub>3</sub> CMR powder samples (x = 0.2$``$0.5) in our previous experiments. This phenomenon in the thin films can be explained as follows: first, the high-temperature annealing process appears to remove most of the static defects such as dislocations and vacancies; second, at very low temperatures there is almost no difference between the two types of Mn sites in the DE model: the electron moves rapidly from one site to another on a time scale fast compared to the relevant phonon frequency. Consequently, Jahn-Teller distortions do not have time to form. For the as-deposited sample, the large CMR% occurs when there is a large value of $`\sigma ^2`$ at low temperature; this suggests that the distortion in the sample may, in part, be the origin of the unusually large magnetoresistance in thin-film samples. We have recently observed similar results in our study of Ti- and Ga-doped LCMO powder samples. In Fig. 7, we plot $`\mathrm{ln}\mathrm{\Delta }\sigma ^2`$ vs $`M/M_0`$. Our previous studies of La<sub>1-x</sub>Ca<sub>x</sub>MnO<sub>3</sub> powder samples showed that there is a linear relationship between $`\mathrm{ln}\mathrm{\Delta }\sigma ^2`$ and the magnetization, which provides evidence of a strong connection between the local distortion and magnetism in these materials. The solid squares in Fig. 7 show the linear relationship for a La<sub>0.70</sub>Ca<sub>0.30</sub>MnO<sub>3</sub> powder sample from our previous work. For the thin-film samples, there is a similar connection between local distortion and magnetization. 
For the 1000 K and 1200 K annealed samples, we find a small deviation from a straight line below M/M<sub>0</sub> $``$0.3. Since the error of the data in this range (M/M<sub>0</sub> $`<`$ 0.3) is large, it is not clear whether this is a real effect. The data for the as-deposited sample appear to follow a straight line, but the errors in the difference are too large to draw conclusions. Also, we find that the data for the 1200 K annealed thin film and the powder sample (both La<sub>0.70</sub>Ca<sub>0.30</sub>MnO<sub>3</sub>) almost overlap each other. This suggests that the local structure of the fully annealed thin film behaves similarly to that of the corresponding bulk sample. In order to see the effect of annealing temperature on the local distortion more clearly, we plot the distortion contributions, $`\sigma _{static}^2`$ and $`\sigma _{FP}^2`$, as a function of annealing temperature in Fig. 8. This figure shows that the static distortion decreases with annealing temperature, while the polaron contribution increases with annealing temperature. This suggests that part of the static distortion observed in the as-deposited sample becomes dynamic, polaron-like distortion after annealing, for T $`>`$ T<sub>c</sub>. It should be noted that although the magnetization only drops by roughly 50% for the as-deposited sample, the $`\sigma _{FP}^2`$ contribution becomes much smaller (of order 5%). Consequently, there must be statically distorted regions in the as-deposited sample that are also ferromagnetic. The reduction of the saturated magnetization can arise in several ways. First, because the material is inhomogeneous, small regions may be antiferromagnetic (AF). Second, it has been suggested that the crystal field, particularly in regimes where the inhomogeneity causes a local reduction of the tolerance factor, can result in a low-spin Mn (t<sub>2g</sub><sup>4</sup>) configuration with a local 50% decrease in the Mn moment. 
Third, the spins in a small domain may not be exactly parallel; however, to explain the entire decrease in saturated magnetization would require very large canting angles. Fourth, the magnetization vectors of the individual domains may not be aligned. The number of domain walls may be important for calculating the resistivity. However, for domains large compared to a unit cell, slightly canted spins within a domain or a lack of alignment of the magnetization of various domains would not lead to a significant decrease in the polaron contribution to the broadening in XAFS. Consequently, the presence of static distortions but essentially no polaron-like contribution in the as-deposited material indicates both a significant fraction of AF material and the presence of some low-spin Mn sites, possibly induced by the disorder (interstitials, vacancies, inhomogeneous Ca concentrations, etc.). This disorder must pin the local distortions, which would also suppress the electron hopping frequency, thereby reducing the effectiveness of the DE interaction. All three thin-film samples were prepared in the same way, except for the annealing temperature. During the annealing process, some of the static defects in the sample, such as vacancies and interstitials, can be removed. In addition, the annealing process can also change the amount of oxygen in the sample. The sample is slightly oxygen deficient before the annealing process, and oxygen is incorporated during annealing. The fully annealed sample (1200 K) is expected to be fully stoichiometric (similar to the corresponding powder sample). It is well documented that samples without sufficient oxygen can have higher resistivities, a lower resistance peak temperature, a lower T<sub>c</sub> and a lower saturated magnetization. We observe the same trends in our experiments. 
Compared to the 1200 K annealed sample, the samples annealed at lower temperatures have a large decrease in $`\sigma _{FP}^2`$ and a large increase in resistivity, while the saturated magnetization changes by less than a factor of two. This suggests that much of the distorted magnetic material probably does not contribute to the conductivity and that the fraction of conducting material is very low for the as-deposited sample. However, the six-order-of-magnitude increase in resistivity is much larger than the volume reduction of the regions that still have a polaron-like distortion. Consequently, it is likely that percolation also plays a role in transport in the as-deposited sample. The magnetic field may then play two roles for this sample: it will decrease the resistivity of the conducting pathways that exist at B = 0, and it may also make some "marginal" pathways become conducting. ## V Conclusion From our analysis, we find that the annealing temperature of the thin films affects the local distortion of the materials appreciably. The large change in resistivity and the small change in local structure with temperature for the as-deposited sample suggest that only small regions contribute to the conductivity and that percolation may play a role. We also find that there is still a strong connection between local distortions, resistivity and magnetism in the fully annealed thin-film material, which behaves much like a powder sample. For the 1000 K annealed sample, the resistivity and MR peaks are well below T<sub>c</sub>. In this case (as for the as-deposited sample), the local distortions correlate well with the magnetization, but there is no feature in $`\sigma `$ at the temperature at which the resistivity has a peak. ###### Acknowledgements. This work was supported in part by NSF grant DMR9705117. 
The experiments were performed at SSRL, which is operated by the DOE, Division of Chemical Sciences, and by the NIH, Biomedical Resource Technology Program, Division of Research Resources.
# Temporal and Spectral Variabilities of High Energy Emission from Blazars Using Synchrotron Self-Compton Models ## 1 INTRODUCTION Blazars are a class of flat radio spectrum, core-dominated active galactic nuclei (AGNs). The overall radiation spectra of blazars show two broad peaks in the $`\nu F_\nu `$ space; one is between IR and X-rays, and the other in the $`\gamma `$-ray regime (e.g., von Montigny et al. (1995)). Flares have also been observed in the X- and gamma-ray bands in multiwavelength observations of Mrk 421 (e.g., Macomb et al. 1995; Macomb et al. 1996 for erratum; Buckley et al. 1996) and Mrk 501 (Catanese et al. 1997; Pian et al. 1998). The tremendous luminosities and fast time variability of blazars have led to the standard argument that relativistic motion occurs in the emitting plasma. Moreover, the favored scenario for these sources is that we are viewing nearly along the axis of a relativistically outflowing plasma jet that has been ejected from an accreting supermassive black hole (e.g., Blandford & Rees (1978)). Although the origin of these multiwavelength spectra is still under debate, several models of the radiative processes have been put forward; in particular, models of Compton scattering of synchrotron photons or external photons have been developed in recent years (e.g., Bloom & Marscher 1996; Inoue & Takahara 1996; Ghisellini & Madau 1996; Dermer, Sturner, & Schlickeiser 1997; Mastichiadis & Kirk 1997; Sikora et al. 1997; Böttcher, Mause, & Schlickeiser 1997; Georganopoulos & Marscher 1998; Ghisellini et al. 1998). Most of these calculations are semi-analytic, restricted to steady-state situations, or do not include the Compton scattering process self-consistently. The main purpose of this paper is to improve upon this situation. The physics of how energy is dissipated into relativistic particles is, unfortunately, not well understood (see, however, Romanova & Lovelace (1997)) and will not be treated fully in this paper. 
Among the various blazar models, synchrotron self-Compton (SSC) models have received a fair amount of attention by virtue of their simplicity and possible predictive power. In these models, it is proposed that the nonthermal synchrotron emission forms the radio-through-X-ray continuum, and that the Compton scattering of these (soft) synchrotron photons by the same nonthermal electrons produces the gamma rays ($``$ GeV – TeV). In this paper, we focus on the so-called homogeneous SSC model, in which a spherical blob of uniform relativistic plasma is postulated. Even with such a greatly simplified picture, a number of parameters have to be invoked, whose interplay gives rise to a rich dynamic behavior of the observed radiation. Of particular interest is the correlated variability in the X-ray and $`\gamma `$-ray fluxes, since these bands represent the tail of the nonthermal electron distribution, which has the shortest cooling timescale. Although the generic multiwavelength spectra from radio to TeV can be fitted by a steady-state model with fixed parameters (e.g., Kataoka et al. 1999), time-dependent calculations almost always offer stronger constraints. Furthermore, when the self-Compton component carries a comparable or even larger fraction of the radiative energy than the synchrotron component, the whole problem becomes inherently nonlinear and both components need to be calculated simultaneously and self-consistently. This naturally leads to the need to solve coupled, time-dependent, nonlinear particle and photon kinetic equations. Moreover, by examining the energy dependence of flare data at gamma-ray energies, one could potentially discriminate between SSC and external Compton-scattering origins of the seed photons (Dermer 1998). 
The simplest model for the time variability of blazars (Mastichiadis & Kirk 1997; hereafter MK97) assumes that electrons obeying a power-law distribution are injected uniformly throughout a relativistically moving blob over an extended period of time, and that the electrons cool by both synchrotron radiation and Compton scattering. The blob is assumed not to accelerate or decelerate, and the energy loss by Compton scattering of photons impinging from outside the blob is assumed to be small in comparison with the synchrotron self-Compton loss. MK97 reproduced the qualitative behavior of the energy-dependent lags and the hysteresis diagrams (Takahashi et al. 1996). Much of the work presented here follows closely the previous study by MK97, but we use a completely different kinetic code, which will be discussed in later sections. Kirk, Rieger, & Mastichiadis (1998) further modeled the evolution of synchrotron emission, calculating acceleration and cooling regions separately, though Compton scattering was not included. In this paper we present a detailed study of the time evolution of an electron-photon plasma (the positive particles could be either protons or positrons) by solving the kinetic equations numerically. We briefly describe our model in §2 and show numerical results in §3. A summary is given in §4. ## 2 MODEL We assume that the observed photons are emitted from a blob moving relativistically towards us with a Doppler factor $`𝒟=[\mathrm{\Gamma }(1-\beta _\mathrm{\Gamma }\mathrm{cos}\theta )]^{-1}`$, where $`\mathrm{\Gamma }`$ is the Lorentz factor of the blob, $`\beta _\mathrm{\Gamma }c`$ is its velocity with $`c`$ being the speed of light, and $`\theta `$ is the angle between the direction of blob motion and the line of sight. The blob is a spherical and uniform cloud with radius $`R`$. Relativistic electrons are injected into the blob and produce high-energy emission. Electrons and photons are uniformly distributed throughout the blob. 
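As a quick numerical illustration (our own sketch, not part of the code described in this paper), the Doppler factor defined above can be written as:

```python
import math

def doppler_factor(Gamma: float, theta: float) -> float:
    """D = 1 / [Gamma (1 - beta cos(theta))], with theta in radians
    and beta = sqrt(1 - 1/Gamma^2)."""
    beta = math.sqrt(1.0 - 1.0 / Gamma**2)
    return 1.0 / (Gamma * (1.0 - beta * math.cos(theta)))
```

For a jet viewed nearly head-on ($`\theta 0`$), this gives $`𝒟=\mathrm{\Gamma }(1+\beta _\mathrm{\Gamma })2\mathrm{\Gamma }`$, while at $`\mathrm{cos}\theta =\beta _\mathrm{\Gamma }`$ one recovers $`𝒟=\mathrm{\Gamma }`$.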
The most important physical processes for the electrons are synchrotron radiation and Compton scattering. The spectra of electrons and photons in the blob are calculated by solving the kinetic equations described below (see also Coppi & Blandford (1990)). The kinetic equation describing the time evolution of the electron distribution $`n(\gamma )`$ is given by $$\frac{\partial n}{\partial t}=\frac{\partial }{\partial \gamma }\left[\left(\frac{d\gamma }{dt}\right)_{\mathrm{loss}}n\right]+\frac{1}{2}\frac{\partial ^2}{\partial \gamma ^2}(D_en)-\frac{n}{t_{e,\mathrm{esc}}}+Q(\gamma ),$$ (1) where $`n`$ is the electron number density per unit $`\gamma `$, $`\gamma `$ is the electron Lorentz factor, and $`t_{e,\mathrm{esc}}`$ is the time for electrons to escape from the blob. The term $`(d\gamma /dt)_{\mathrm{loss}}`$ represents the various electron energy-loss processes, such as synchrotron radiation and Compton scattering; the corresponding energy diffusion coefficient is $`D_e`$. We also include the synchrotron "boiler" effect (Ghisellini et al. 1988) and other processes such as Coulomb collisions, though they are not important in the present study. Pair production and annihilation are not treated in the present code, though they tend to be less important as well. In our code, the particle equation is actually discretized in momentum space so that thermal particles and their processes can be handled accurately. This is less important for AGN jet parameters but will be very useful for modeling the emission from stellar-mass black hole systems. Note that equation (1) assumes a continuous electron energy loss (i.e., it is of Fokker-Planck type). This assumption does not account for the situation in which there is a significant energy loss in a single Compton scattering, as is the case in the Klein-Nishina regime. This discrete nature of the Compton energy loss is, however, included in the photon kinetic equation, i.e., equations (2) and (3). 
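As a minimal illustration of the loss term in equation (1) (a sketch, not the discretized solver described in the text), the single-particle cooling law $`d\gamma /dt=-A\gamma ^2`$, appropriate for synchrotron or Thomson-regime Compton losses with a loss constant $`A`$, integrates in closed form:

```python
def cooled_gamma(gamma0: float, A: float, t: float) -> float:
    """Solution of d(gamma)/dt = -A gamma^2: gamma(t) = gamma0 / (1 + A gamma0 t)."""
    return gamma0 / (1.0 + A * gamma0 * t)

def cooling_time(gamma: float, A: float) -> float:
    """t_cool = gamma / |d(gamma)/dt| = 1 / (A gamma)."""
    return 1.0 / (A * gamma)
```

After one cooling time the Lorentz factor is halved, which is why the highest-energy electrons (largest $`\gamma `$, shortest $`t_{\mathrm{cool}}`$) respond first when injection starts or stops.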
We have checked the accuracy of the continuous approximation of the Compton energy loss in equation (1) in terms of energy conservation. Our numerical tests show that the total energy is conserved to an accuracy of better than 5 per cent after 10 $`R/c`$, even when the SSC component is dominant and the scatterings occur frequently in the Klein-Nishina regime. The kinetic equation for the time evolution of the photons is given by $$\frac{\partial n_{\mathrm{ph}}(ϵ)}{\partial t}=\dot{n}_\mathrm{C}(ϵ)+\dot{n}_{\mathrm{em}}(ϵ)-\dot{n}_{\mathrm{abs}}(ϵ)-\frac{n_{\mathrm{ph}}(ϵ)}{t_{\mathrm{ph},\mathrm{esc}}},$$ (2) where $`n_{\mathrm{ph}}(ϵ)`$ is the photon number spectrum per unit volume per unit energy $`ϵ`$. Compton scattering is calculated as $$\dot{n}_\mathrm{C}(ϵ)=-n_{\mathrm{ph}}(ϵ)\int d\gamma \,n(\gamma )R_\mathrm{C}(ϵ,\gamma )+\int dϵ^{}\int d\gamma \,P(ϵ;ϵ^{},\gamma )R_\mathrm{C}(ϵ^{},\gamma )n_{\mathrm{ph}}(ϵ^{})n(\gamma ).$$ (3) The first term on the right-hand side of equation (3) gives the rate at which photons with energy $`ϵ`$ are scattered by electrons with number spectrum $`n(\gamma )`$ per unit volume per unit $`\gamma `$; $`R_\mathrm{C}`$ is the angle-averaged scattering rate. The second term of equation (3) gives the spectrum of the scattered photons: $`P(ϵ;ϵ^{},\gamma )`$ is the probability that a photon with energy $`ϵ^{}`$ is scattered by an electron with Lorentz factor $`\gamma `$ to an energy $`ϵ`$. The probability $`P`$ is normalized such that $`\int P(ϵ;ϵ^{},\gamma )dϵ=1`$. The details of $`R_\mathrm{C}`$ and $`P`$ are given in Jones (1968) and in appendix A of Coppi & Blandford (1990). We use the exact Klein-Nishina cross section in the calculations of Compton scattering. Photon production and self-absorption by synchrotron radiation are included in $`\dot{n}_{\mathrm{em}}(ϵ)`$ and $`\dot{n}_{\mathrm{abs}}(ϵ)`$, respectively. 
The synchrotron emissivity and the absorption coefficient are calculated using the approximations given in Robinson and Melrose (1984) for transrelativistic electrons and in Crusius and Schlickeiser (1986) for relativistic electrons. External photon sources such as disk photons can be included, though they are not considered here. The rate of photon escape is estimated as $`n_{\mathrm{ph}}(ϵ)/t_{\mathrm{ph},\mathrm{esc}}`$. Since we are in the optically thin limit, we set $`t_{\mathrm{ph},\mathrm{esc}}=R/c`$, which is a good approximation. The photon spectra obtained by solving equation (2) have been extensively compared with those from Monte-Carlo simulations, and we find very good agreement between them (Kusunose, Coppi, & Li 1999). The comoving quantities are transformed into the observer's frame using $`ϵ_{\mathrm{obs}}=ϵ𝒟/(1+z)`$ and $`dt_{\mathrm{obs}}=dt(1+z)/𝒟`$, where $`z`$ is the redshift of the source. We assume that electrons are injected with a power-law distribution in energy: $$Q(\gamma ,t)=Q_0(t)\gamma ^{-p}$$ with $`\gamma _{\mathrm{min}}\le \gamma \le \gamma _{\mathrm{max}}`$. The total energy in the electrons can be represented by a compactness parameter, which is proportional to $`L/R`$, where $`L`$ is the source luminosity. We do not consider specific acceleration mechanisms in this paper; particles are simply injected into the blob over a finite time. We emphasize that, in order to be consistent with the basic assumption of spatial homogeneity, we require the injection time $`t_{\mathrm{inj}}`$ to be longer than $`t_{\mathrm{dyn}}=R/c`$. In effect, we cannot probe variability shorter than $`t_{\mathrm{dyn}}`$ in the comoving frame. Other physical effects, such as adiabatic loss via expansion, might play an important role but are not considered here. It is unfortunate that we need such a large number of parameters to proceed with the calculations, and this is the main reason we opt not to add further complications such as acceleration. 
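The frame transformations quoted above can be captured in a short helper (our own sketch; the function and variable names are ours):

```python
def to_observer_frame(eps: float, dt: float, D: float, z: float):
    """Transform a comoving photon energy and time interval to the observer's
    frame: eps_obs = eps * D / (1 + z), dt_obs = dt * (1 + z) / D."""
    return eps * D / (1.0 + z), dt * (1.0 + z) / D
```

Note that the product of energy and time interval is invariant under this transformation: energies are boosted by $`𝒟/(1+z)`$ while comoving timescales are compressed by the same factor.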
## 3 RESULTS The dynamic behavior of the emission spectra is controlled by several timescales, namely, the cooling time $`t_{\mathrm{cool}}`$, the dynamic time $`t_{\mathrm{dyn}}`$, the injection duration $`t_{\mathrm{inj}}`$, and the escape time $`t_{\mathrm{esc}}`$. The causality argument requires that $`t_{\mathrm{dyn}}\le t_{\mathrm{inj}},t_{\mathrm{esc}}`$, whereas $`t_{\mathrm{cool}}`$ can be smaller than $`t_{\mathrm{dyn}}`$. For SSC models, both synchrotron and Compton processes contribute to the electron cooling, so $`1/t_{\mathrm{cool}}=1/t_{\mathrm{syn}}+1/t_{\mathrm{ssc}}`$, where $`t_{\mathrm{syn}}=\gamma /|\dot{\gamma }_{\mathrm{syn}}|`$ and $`t_{\mathrm{ssc}}=\gamma /|\dot{\gamma }_{\mathrm{ssc}}|`$. In the following analyses, we will divide our results into two major parts, based on whether $`t_{\mathrm{cool}}`$ is longer or shorter than $`t_{\mathrm{inj}}`$. In the limit $`t_{\mathrm{cool}}\gg t_{\mathrm{inj}}\ge t_{\mathrm{dyn}}`$, where $`t_{\mathrm{cool}}`$ is evaluated at the highest particle energy, the injected particle distribution does not change appreciably during the injection process. We call this the injection-dominated limit. On the other hand, if $`t_{\mathrm{inj}}\ge t_{\mathrm{dyn}}\gg t_{\mathrm{cool}}`$, then the particles are cooled substantially while the injection still occurs, and the emission comes from a cooled particle distribution rather than from the injected one. We call this the cooling-dominated limit. Consequently, we expect rapid variations in both fluxes and spectra in the latter case and relatively slow spectral variations in the former case. The primary purpose of this paper is to understand the dynamics of the SSC model. We have therefore chosen a broad range of parameters rather than trying to fit any specific source spectrum, but we certainly use parameters thought to be applicable to such sources (Mrk 421 in particular) to guide our calculations. 
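The timescale bookkeeping above can be summarized in a small helper (our own sketch of the classification, not code from the paper):

```python
def total_cooling_time(t_syn: float, t_ssc: float) -> float:
    """Combine synchrotron and SSC cooling: 1/t_cool = 1/t_syn + 1/t_ssc."""
    return 1.0 / (1.0 / t_syn + 1.0 / t_ssc)

def regime(t_cool: float, t_inj: float, t_dyn: float) -> str:
    """Classify the limit, assuming causality (t_inj >= t_dyn = R/c)."""
    if t_inj < t_dyn:
        raise ValueError("causality requires t_inj >= t_dyn")
    if t_cool >= t_inj:
        return "injection-dominated"
    if t_cool <= t_dyn:
        return "cooling-dominated"
    return "intermediate"
```

With comparable synchrotron and SSC losses the combined cooling time is half of either one, so a system can slip from the injection-dominated into the cooling-dominated regime as the photon energy density builds up.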
### 3.1 Long Cooling Time Limit We show that, in this limit, a set of SSC model parameters can be uniquely determined from the observable quantities. We use observations of Mrk 421 as an example and further discuss their implications for multiwavelength observations. #### 3.1.1 A Unique Solution Let $`\mathcal{E}`$ represent the energy (in ergs) injected in nonthermal electrons, which we assume can be described as $`N_e(\gamma )=N_0\gamma ^{-p}`$ with $`\gamma _{\mathrm{min}}\le \gamma \le \gamma _{\mathrm{max}}`$. Then $`N_0=\mathcal{E}(2-p)/[m_ec^2(\gamma _{\mathrm{max}}^{2-p}-\gamma _{\mathrm{min}}^{2-p})]`$ (for $`p\ne 2`$). In the limit of long cooling time, we can use the injected electron distribution to calculate the synchrotron and SSC fluxes. Thus, for the peak energies of the synchrotron and SSC components, we have $$𝒟\gamma _{\mathrm{max}}^2B=\nu _{\mathrm{syn}}/(2.8\times 10^6)\equiv \stackrel{~}{\nu }_{\mathrm{syn}},$$ (4) $$𝒟\gamma _{\mathrm{max}}=\nu _{\mathrm{ssc}}/(1.236\times 10^{20})\equiv \stackrel{~}{\nu }_{\mathrm{ssc}},$$ (5) where $`\nu _{\mathrm{syn}}`$ and $`\nu _{\mathrm{ssc}}`$ are the synchrotron and $`\gamma `$-ray peak frequencies in Hz, respectively, and $`B`$ is the magnetic field in Gauss. The numerical normalization factors follow from the standard expressions for the peak synchrotron energy and the inverse-Compton peak energy in the KN limit. For Mrk 421, we have $`\stackrel{~}{\nu }_{\mathrm{syn}}\approx 1.72\times 10^{11}`$ (about $`2`$ keV) and $`\stackrel{~}{\nu }_{\mathrm{ssc}}\approx 10^7`$ (about $`5`$ TeV). From equations (4) and (5), we get $$B=𝒟\stackrel{~}{\nu }_{\mathrm{syn}}/\stackrel{~}{\nu }_{\mathrm{ssc}}^2.$$ (6) The ratio $`\eta `$ of the SSC to the synchrotron flux for Mrk 421 is close to 1. This ratio can be approximately represented by the ratio of the comoving-frame synchrotron photon energy density, $`U_{\mathrm{syn}}=L_{\mathrm{syn}}/(4\pi R^2c𝒟^4)`$, to the magnetic field energy density, $`U_B=B^2/(8\pi )`$. 
But the exact value of $`\eta `$ could be quite different from this estimate owing to several factors: the end-point effect at the peaks of both the synchrotron and SSC fluxes, <sup>1</sup><sup>1</sup>1For $`p<3`$, the peak of $`\nu L_\nu `$ is at $`\nu _{\mathrm{syn}}\propto \gamma _{\mathrm{max}}^2B`$, but its flux is smaller than that obtained using the $`\delta `$-function approximation to the scattering cross section. To be consistent with our numerical code results, which use the exact synchrotron emissivity formulations, we have introduced a reduction factor $`f_{\mathrm{syn}}(<1)`$ in our simplified analytic estimates for the peak synchrotron flux (see also equation (10)). Note that this reduction only applies to the end points of the emissivity; for energies much smaller than $`\nu _{\mathrm{syn}}`$, the $`\delta `$-function approximation is quite accurate. the KN effect in Compton scattering for producing TeV emission, and the fact that the electron distribution will be slightly cooled even though the cooling timescale is long. It is very difficult to obtain an exact analytic value accounting for all these effects, so we introduce a correction factor $`f_c`$ in calculating $`\eta `$. Thus, we have $$\eta =f_c\frac{U_{\mathrm{syn}}}{U_B}\mathrm{or}L_{\mathrm{syn}}=𝒟^44\pi R^2cU_B(\eta /f_c).$$ (7) Furthermore, since all blazar sources are observed to be highly variable, an additional constraint is usually imposed, $$R\simeq 𝒟ct_{\mathrm{var}},$$ (8) where $`t_{\mathrm{var}}`$ is a variability timescale in the observer's frame. Combining all the equations given above, we have 4 equations for 4 independent variables (i.e., $`𝒟,B,\gamma _{\mathrm{max}},`$ and $`R`$). The supplemental information includes $`\stackrel{~}{\nu }_{\mathrm{syn}},\stackrel{~}{\nu }_{\mathrm{ssc}},L_{\mathrm{syn}}`$, $`\eta `$, and $`t_{\mathrm{var}}`$, together with a somewhat adjustable factor $`f_c`$. Thus we can uniquely determine a solution set for all the parameters. 
The Doppler factor for this solution (denoted by a subscript '$`s`$') can be expressed as $$𝒟_s\approx 22.2\left(\frac{L_{\mathrm{syn}}}{6\times 10^{44}}\right)^{\frac{1}{8}}\left(\frac{\stackrel{~}{\nu }_{\mathrm{ssc}}}{10^7}\right)^{\frac{1}{2}}\left(\frac{\stackrel{~}{\nu }_{\mathrm{syn}}}{1.7\times 10^{11}}\right)^{-\frac{1}{4}}\left(\frac{t_{\mathrm{var}}}{10^4}\right)^{-\frac{1}{4}}\left(\frac{f_c}{0.4}\right)^{\frac{1}{8}}\left(\frac{\eta }{1.0}\right)^{-\frac{1}{8}}.$$ (9) The peak luminosity ($`L_{\mathrm{syn}}`$) of the synchrotron component in the observer's frame can be estimated using $`\nu L_\nu `$ at $`\nu _{\mathrm{syn}}`$ (again taking into account the end-point effects), $$L_{\mathrm{syn}}\approx 𝒟^4f_{\mathrm{syn}}\frac{2(2-p)}{3}\sigma _\mathrm{T}cU_B\frac{\mathcal{E}}{m_ec^2}\gamma _{\mathrm{max}},$$ (10) where $`\sigma _\mathrm{T}`$ is the Thomson cross section, $`f_{\mathrm{syn}}(<1)`$ represents the reduction of the synchrotron flux at the end point, and $`𝒟^4`$ accounts for the Doppler boosting. Observationally, $`L_{\mathrm{syn}}`$ at about 2 keV is $`6\times 10^{44}`$ ergs s<sup>-1</sup> with a luminosity distance of $`3.8\times 10^{26}`$ cm ($`z=0.0308`$) for a $`q_0=1/2`$ and $`\mathrm{\Lambda }=0`$ cosmology. Here, $`H_0=75`$ km s<sup>-1</sup> Mpc<sup>-1</sup> is assumed. Also, fitting of Mrk 421's synchrotron spectrum suggests $`p\approx 1.65`$ (MK97). 
From this, we can determine the rest of the parameters for this unique solution (again denoted by a subscript 's'): $`B_s`$ $`=`$ $`𝒟_s\stackrel{~}{\nu }_{\mathrm{syn}}/\stackrel{~}{\nu }_{\mathrm{ssc}}^2\approx 0.038\mathrm{G},`$ (11) $`\gamma _{\mathrm{max},s}`$ $`=`$ $`\stackrel{~}{\nu }_{\mathrm{ssc}}/𝒟_s\approx 4.5\times 10^5,`$ (12) $`R_s`$ $`=`$ $`ct_{\mathrm{var}}𝒟_s\approx 6.5\times 10^{15}\mathrm{cm},`$ (13) $`\mathcal{E}_s`$ $`=`$ $`{\displaystyle \frac{1.55\times 10^9}{(2-p)f_{\mathrm{syn}}}}{\displaystyle \frac{L_{\mathrm{syn}}}{\gamma _{\mathrm{max},s}B_s^2𝒟_s^4}}\approx 4.1\times 10^{46}\mathrm{ergs},`$ (14) where we have used $`p=1.65`$ and $`f_{\mathrm{syn}}=0.4`$. <sup>2</sup><sup>2</sup>2The fact that $`f_c`$ and $`f_{\mathrm{syn}}`$ are both chosen as $`0.4`$ is a coincidence. The evaluation of $`f_c`$ involves end-point effects from both synchrotron emission and Compton scattering, whereas $`f_{\mathrm{syn}}`$ concerns only the synchrotron emission. The value $`0.4`$ is obtained by comparing the analytic estimates with the exact numerical calculations. More importantly, we can check our original assumption that the cooling time is long compared to $`t_{\mathrm{dyn}}=R/c`$. This is clearly satisfied, since $`t_{\mathrm{syn}}=(6\pi m_ec/\sigma _\mathrm{T})/(\gamma _{\mathrm{max}}B^2)\approx 1.2\times 10^6`$ sec, which is much longer than $`t_{\mathrm{dyn}}\approx 2.2\times 10^5`$ sec. Using the above derived parameters, we solve equations (1) and (2) simultaneously and follow the evolution until $`20t_{\mathrm{dyn}}`$. A total energy of $`\mathcal{E}`$ is injected in nonthermal electrons over a comoving timescale of $`t_{\mathrm{inj}}=2t_{\mathrm{dyn}}`$. In these calculations electrons are not allowed to escape. Figure 1 shows the time evolution of the electron and photon distributions as the system evolves. Note that the times for the synchrotron and SSC components to reach their peak fluxes are different, and that the peaks occur after the electron injection has stopped. 
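The unique-solution estimates of equations (9) and (11)-(14), together with the slow-cooling consistency check, can be reproduced numerically. The sketch below uses the fiducial Mrk 421 numbers quoted above, with constants in cgs; the prefactor 1.55×10<sup>9</sup> in equation (14) is $`12\pi m_ec^2/(\sigma _\mathrm{T}c)`$ in these units:

```python
import math

SIGMA_T = 6.652e-25   # Thomson cross section (cm^2)
M_E_C2 = 8.187e-7     # electron rest energy (erg)
C = 2.998e10          # speed of light (cm/s)

# Fiducial Mrk 421 inputs quoted in the text
L_syn, nu_syn_t, nu_ssc_t = 6e44, 1.72e11, 1e7
t_var, f_c, eta, f_syn, p = 1e4, 0.4, 1.0, 0.4, 1.65

# Eq. (9): Doppler factor of the unique solution
D = (22.2 * (L_syn / 6e44) ** 0.125 * (nu_ssc_t / 1e7) ** 0.5
     * (nu_syn_t / 1.72e11) ** -0.25 * (t_var / 1e4) ** -0.25
     * (f_c / 0.4) ** 0.125 * (eta / 1.0) ** -0.125)

B = D * nu_syn_t / nu_ssc_t**2      # eq. (11), Gauss
gamma_max = nu_ssc_t / D            # eq. (12)
R = C * t_var * D                   # eq. (13), cm

# Eq. (14): injected energy (erg); 12 pi m_e c^2 / (sigma_T c) ~ 1.55e9
E_inj = (12 * math.pi * M_E_C2 / (SIGMA_T * C)
         / ((2 - p) * f_syn) * L_syn / (gamma_max * B**2 * D**4))

# Slow-cooling consistency check: t_syn >> t_dyn = R/c
t_syn = 6 * math.pi * (M_E_C2 / C) / (SIGMA_T * gamma_max * B**2)
t_dyn = R / C
```

This reproduces the values quoted in the text: B of about 0.038 G, gamma_max of about 4.5×10<sup>5</sup>, R of about 6.5×10<sup>15</sup> cm, an injected energy of about 4×10<sup>46</sup> erg, and t_syn of about 1.2×10<sup>6</sup> s, well above t_dyn of about 2.2×10<sup>5</sup> s.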
To understand this qualitatively, we can write the photon kinetic equation symbolically as $$\frac{\partial n_{\mathrm{ph}}(ϵ)}{\partial t}=\mathrm{Production}(ϵ)-\mathrm{Escape}(ϵ).$$ (15) Thus, the photon flux at energy $`ϵ`$ will increase if the production rate is larger than the escape rate, and decrease if escape is quicker; this determines when the peak of the photon flux at a given energy is reached. Using the keV photon flux as an example, the production of these photons continues even after the electron injection stops, because the cooling timescale is much longer than $`R/c`$. Eventually, as the electrons cool, they can no longer produce keV synchrotron emission, and the flux at keV starts to decline. Figure 2 shows the light curves at different energy bands expected from this injection event. Since the cooling time is rather long, only the energy bands corresponding to the tail of the electron distribution show short-time variability, caused by the electron injection being turned on and off, whereas the other energy bands show a long plateau, representing a balance between photon production and escape. A clear prediction is that there should be very little spectral evolution except at the peaks of the synchrotron and SSC components. All of this is commensurate with the dynamics of electron cooling. In order to further differentiate the roles of synchrotron versus Compton cooling of the electrons, we plot the ratio $`|\dot{\gamma }_{\mathrm{ssc}}/\dot{\gamma }_{\mathrm{syn}}|`$ as a function of electron energy at different times in Figure 3. It is clear that SSC cooling becomes more important than synchrotron cooling as the photon energy density builds up within the system. After reaching its peak, the SSC cooling starts to decrease as the photon energy density decreases. The dependence of this ratio on the electron energy is partly due to the KN effect. 
This figure clearly indicates that one cannot ignore SSC cooling in estimating certain parameters and that the evolution is very nonlinear; thus a self-consistent numerical calculation is required for fitting the data more accurately. Note that the spectra given in Figure 1 are not intended to be an accurate fit to the observed spectra from Mrk 421. In fact, the TeV spectrum of Mrk 421 is known to be roughly a power law (Krennrich et al. (1999)), but the generic TeV spectrum obtained here has a clear curvature, due to the fact that the Compton scattering is in the KN regime. Nevertheless, this exercise allows us to establish a parameter space where a reasonable fit to the actual spectrum might be obtainable, and it has the nice feature that the electron energy distribution retains the injected form without much softening, which greatly simplifies the analysis. #### 3.1.2 Parameter Variations on the Unique Solution In this subsection, we explore how sensitive the above results are to parameter variations. The parameter we want to emphasize is the ratio $`\eta `$ of the SSC component to the synchrotron component. From equations (6) and (9), and holding other parameters unchanged, we can see that $$\eta \propto 𝒟^{-8}\propto B^{-8},$$ (16) which implies that a small change in the magnetic field and/or the Doppler factor could result in a large variation in $`\eta `$. Figure 4 shows this effect when $`B`$ is varied. The general trend from equation (16) is indeed confirmed, i.e., smaller/larger $`B`$ (versus $`B_s`$) gives much larger/smaller $`\eta `$. For $`B<B_s`$, the variation amplitude in $`\eta `$ does not exactly follow equation (16). We attribute this discrepancy mostly to variation in $`f_c`$, because both $`\gamma _{\mathrm{max}}`$ and the synchrotron photon energy in the comoving frame vary for different $`B`$. Additionally, when $`\eta >1`$ (SSC cooling is more important than synchrotron cooling), the electron distribution is subject to stronger cooling.
This is evident in the rapid spectral variation in the middle panel of Figure 4. For larger $`B`$, which results in $`\eta <1`$, equation (16) is mostly confirmed. Note that the Doppler factor $`𝒟`$, using equation (6), is now getting uncomfortably large. To conclude, in the limit that the cooling time of the highest energy electrons is longer than the dynamic timescale on which injection occurs, we can most likely find a unique set of parameters that roughly satisfy the observational constraints. The most important prediction is that even though the fluxes of the synchrotron and SSC peaks can vary by a large factor ($`>10`$) on short timescales, with time histories that look like “pulses”, the duration of emission at other wave bands (such as GeV, MeV, eV) can be considerably longer, by at least a factor of $`4`$, which is commensurate with the long electron cooling times at those energies. This parameter space has some interesting implications for interpreting multiwavelength data from blazar monitoring campaigns. Using the keV and eV bands as an example (similar arguments can be made for the TeV and GeV bands too), since the lifetimes of keV-producing and eV-producing electrons are different, there is really no reason to expect their light curves to track each other and to have the same rise and fall patterns or timescales. Furthermore, when there are multiple injections occurring over a timescale shorter than the lifetime of keV-producing electrons but longer than the lifetime of eV-producing electrons, fluxes at keV could vary rapidly to reflect the multiple injections. Fluxes at eV, however, might only show a continuous increase with no obvious decline, because of the accumulation of eV-producing electrons from the multiple injections (see also §3.2.5). ### 3.2 Short Cooling Time Limit In this subsection we explore the limit where $`t_{\mathrm{cool}}(\gamma _{\mathrm{max}})<t_{\mathrm{dyn}}`$, i.e., electrons are efficiently cooled while they are injected into the system.
All the runs in this section have size $`R=1.5\times 10^{16}`$ cm, which gives a comoving dynamic timescale $`R/c=5\times 10^5`$ sec, particle Lorentz factors $`\gamma _{\mathrm{min}}=10`$ and $`\gamma _{\mathrm{max}}=10^6`$, and index $`p=2`$. The Doppler factor is chosen as 10 and the magnetic field $`B=0.1`$ G, though most of the conclusions in this section depend only weakly on $`𝒟`$ and $`B`$. The particularly attractive feature of this limit is that it is possible to achieve short time variabilities in many wave bands, contrary to the case shown in Figure 2 of the previous section. One serious problem in comparing the theoretical results with actual observations is that most observations need to accumulate over a certain time interval (to collect enough photons), and different integration times are needed for different energy bands. So, in order to make a direct comparison with observations, properly averaged fluxes are needed, rather than the prompt fluxes we have presented above. This inevitably introduces many additional parameters governing how photons in different energy bands are sampled. To avoid these complications, as before, we will continue to present the prompt flux results, leaving the problem of integrated fluxes to future studies on detailed spectral fitting of particular sources. To quantify the relative importance of SSC versus synchrotron, the previous expression for $`\eta `$ has to be modified. To recapitulate, the comoving synchrotron photon energy density is $`U_{\mathrm{syn}}\simeq [m_ec^2/(4\pi R^2c)]\int _{\gamma _{\mathrm{min}}}^{\gamma _{\mathrm{max}}}N_e(\gamma )|\dot{\gamma }|𝑑\gamma `$, where the electron energy loss rate through synchrotron radiation is given by $`\dot{\gamma }=-[4\sigma _\mathrm{T}/(3m_ec)]U_B\gamma ^2`$.
Thus, we get $$\eta =f_c\frac{U_{\mathrm{syn}}}{U_B}=f_c\frac{\sigma _\mathrm{T}}{3\pi R^2}\frac{\mathcal{E}}{m_ec^2}\frac{2-p}{3-p}\frac{\gamma _{\mathrm{max}}^{3-p}-\gamma _{\mathrm{min}}^{3-p}}{\gamma _{\mathrm{max}}^{2-p}-\gamma _{\mathrm{min}}^{2-p}}\approx f_c\frac{\sigma _\mathrm{T}}{3\pi R^2}\frac{\mathcal{E}}{m_ec^2}\frac{\gamma _{\mathrm{max}}}{\mathrm{ln}(\gamma _{\mathrm{max}}/\gamma _{\mathrm{min}})},$$ (17) where the last expression applies to the case $`p=2`$. On the other hand, if $`t_{\mathrm{dyn}}>t_{\mathrm{cool}}`$, a cooled electron distribution has to be used when calculating the synchrotron photon energy density. A very rough estimate of the electron break energy $`\gamma _{\mathrm{br}}`$, beyond which cooling dominates, can be given as $$\frac{6\pi m_ec}{\sigma _\mathrm{T}}\frac{1}{\gamma _{\mathrm{br}}B^2}\approx \mathrm{max}(t_{\mathrm{inj}},t_{\mathrm{dyn}}).$$ The cooled electron distribution has the original index $`p`$ between $`\gamma _{\mathrm{min}}`$ and $`\gamma _{\mathrm{br}}`$, and roughly $`p+1`$ between $`\gamma _{\mathrm{br}}`$ and $`\gamma _{\mathrm{max}}`$. Since the number density of electrons at the high-energy end is very small for $`p>1`$, the previous expression for $`N_0`$ (§3.1.1) still applies. Then we can derive another expression for $`\eta `$ as $$\eta _c=f_c\frac{\sigma _\mathrm{T}}{3\pi R^2}\frac{\mathcal{E}}{m_ec^2}\left[\left(\frac{2-p}{3-p}\right)\frac{\gamma _{\mathrm{br}}^{3-p}}{\gamma _{\mathrm{max}}^{2-p}}+1\right]\approx f_c\frac{\sigma _\mathrm{T}}{3\pi R^2}\frac{\mathcal{E}}{m_ec^2}\left[\frac{\gamma _{\mathrm{br}}+\mathrm{ln}(\gamma _{\mathrm{max}}/\gamma _{\mathrm{br}})}{\mathrm{ln}(\gamma _{\mathrm{max}}/\gamma _{\mathrm{min}})}\right],$$ (18) where again, the last expression applies to the case $`p=2`$. Note that this ratio $`\eta _c`$ depends on $`B`$ through $`\gamma _{\mathrm{br}}`$. Using the same parameters given previously, we find that equations (17) and (18) give $`\eta _c\approx (\gamma _{\mathrm{br}}/\gamma _{\mathrm{max}})\eta `$.
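The ratio between the cooled and uncooled expressions can be checked directly from the $`p=2`$ forms of equations (17) and (18). In this sketch (illustrative, not the paper's code) only the $`\gamma `$-dependent factors are evaluated, since the common prefactor cancels in the ratio; $`t_{\mathrm{inj}}=2t_{\mathrm{dyn}}`$ is assumed, as in the runs of this section:

```python
# Numerical check of the p = 2 forms of eqs. (17) and (18); the common
# prefactor f_c*sigma_T*E/(3*pi*R^2*m_e*c^2) drops out of the ratio eta_c/eta.
import math

SIGMA_T = 6.6524e-25                 # Thomson cross section [cm^2]
M_E_C = 9.1094e-28 * 2.9979e10       # m_e * c [g cm/s]

gamma_min, gamma_max = 10.0, 1e6
B, t_dyn = 0.1, 5e5                  # Section 3.2 parameters [G], [s]
t_inj = 2.0 * t_dyn                  # assumption: injection over 2*t_dyn

# break energy from (6*pi*m_e*c/sigma_T)/(gamma_br*B^2) ~ max(t_inj, t_dyn)
gamma_br = (6.0 * math.pi * M_E_C / SIGMA_T) / (B**2 * max(t_inj, t_dyn))

log_mm = math.log(gamma_max / gamma_min)
eta_uncooled = gamma_max / log_mm                                   # eq. (17)
eta_cooled = (gamma_br + math.log(gamma_max / gamma_br)) / log_mm   # eq. (18)

print(f"gamma_br ~ {gamma_br:.2e}")
print(f"eta_c/eta = {eta_cooled / eta_uncooled:.3e} "
      f"~ gamma_br/gamma_max = {gamma_br / gamma_max:.3e}")
```

The ratio indeed equals $`\gamma _{\mathrm{br}}/\gamma _{\mathrm{max}}`$ to very good accuracy, since the logarithmic term in the numerator of equation (18) is negligible compared to $`\gamma _{\mathrm{br}}`$.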
In other words, in order to reach the same relative ratio between synchrotron and SSC, more energy is needed (by a factor of $`\gamma _{\mathrm{max}}/\gamma _{\mathrm{br}}`$ for $`p=2`$) if electrons are cooled efficiently during injection. #### 3.2.1 Dynamics of a Single Injection In this subsection we concentrate on the dynamics of a single injection event lasting $`t_{\mathrm{inj}}`$ and the corresponding evolution of the electron and photon distributions. The idea is to mimic individual flaring events in blazars and to gain some basic knowledge of how the synchrotron and SSC components are dynamically linked. To simplify the calculations and analysis, we choose 6 total injection energies increasing by factors of 10 from $`10^{44}`$ ergs to $`10^{49}`$ ergs. These energies in nonthermal electrons are injected over a comoving timescale of $`t_{\mathrm{inj}}=2t_{\mathrm{dyn}}`$. We solve equations (1) and (2) simultaneously and follow the evolution until $`10t_{\mathrm{dyn}}`$. Electrons are not allowed to escape. With these parameters, $`t_{\mathrm{cool}}(\gamma _{\mathrm{max}})`$ is much smaller than $`t_{\mathrm{dyn}}`$, so electrons are appreciably cooled during injection. Figure 5 shows the $`\nu F_\nu `$ spectra of these 6 injections taken at $`t=t_{\mathrm{inj}}`$ and with fluxes divided by $`\mathcal{E}/10^{44}`$. It is evident that synchrotron is the dominant cooling process for the $`\mathcal{E}\le 10^{47}`$ ergs cases, whereas SSC becomes the dominant cooling process as the photon energy density builds up in the $`\mathcal{E}=10^{48}`$ and $`10^{49}`$ ergs cases. In fact, electron cooling is so significant in the $`\mathcal{E}=10^{49}`$ ergs case that the maximum synchrotron flux is reduced by almost a factor of 10 compared to the other, lower injection energy cases, and its SSC peak energy is also much softer than the others. As given in equation (18), the ratio of SSC to synchrotron is roughly proportional to $`\mathcal{E}`$. Thus, as $`\mathcal{E}`$ increases, so does this ratio. This is shown as the (almost) linear increase in the SSC peaks of Figure 5.
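Taken together with a synchrotron peak luminosity that itself grows roughly linearly with the injected energy, the proportionality $`\eta \mathcal{E}`$ from equation (18) gives a one-line scaling argument (a sketch; the linear synchrotron scaling is an assumption that holds while synchrotron cooling dominates):

```latex
% Quadratic SSC scaling (sketch). Assumes L_syn grows linearly with the
% injected energy E while synchrotron cooling dominates.
L_{\mathrm{ssc}} = \eta\, L_{\mathrm{syn}}, \qquad
\eta \propto \mathcal{E} \;\text{[eq.~(18)]}, \qquad
L_{\mathrm{syn}} \propto \mathcal{E}
\;\Longrightarrow\;
L_{\mathrm{ssc}} \propto \mathcal{E}^{2} \propto L_{\mathrm{syn}}^{2}.
```

That is, as long as the SSC peak lies below the synchrotron peak, a factor-$`f`$ rise of the synchrotron peak implies roughly a factor-$`f^2`$ rise of the SSC peak.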
So we conclude that, as long as the SSC peak is lower than the synchrotron peak, the magnitude of the increase in the SSC peak will be the square of the magnitude of the increase in the synchrotron peak. When the SSC component becomes comparable to the synchrotron component, the system becomes highly nonlinear, and the estimate of $`\gamma _{\mathrm{br}}`$ based on pure synchrotron cooling is no longer valid, even though equation (18) probably still applies as long as a cooled electron distribution is used. A further point regarding the relative ratio of the SSC and synchrotron components is that the initial $`10`$ GeV – TeV production via SSC is in the KN regime, which reduces the SSC flux. This implies that an even larger $`\mathcal{E}`$ is needed than that given by equation (18). Observations of Mrk 421 and 501 seem to indicate roughly equal heights of the synchrotron and SSC peaks. Thus, in modeling the time-dependent (or even steady state) emission from these objects, the full KN cross section has to be used, as is done in our code. #### 3.2.2 Dynamics and Light Curves We now study in detail the full time evolution of three injection cases: $`\mathcal{E}=10^{44},5\times 10^{47},`$ and $`10^{49}`$ ergs. They are shown in Figures 6, 7, and 8, where the time evolution of the electron distributions and photon spectra is presented. Figures 9, 10, and 11 show the corresponding light curves at different photon energies for these three injection cases. To qualitatively understand the light curves, we refer to equation (15) again. The peak of a light curve is reached when production and escape are balanced, which depends on whether the particle distribution has softened enough. Furthermore, once the production at a certain photon energy has stopped, the pure escape process will produce an exponential decay. Since the photon escape timescale is $`t_{\mathrm{dyn}}=R/c`$, one could get an estimate of the size by fitting the declining portion of the light curves.
This can be done using Figures 9, 10, and 11, where the fluxes are plotted on a logarithmic scale. The straight lines give a clear representation of photon escape, especially when it shows up in several energy bands; thus this might be a useful method for analyzing real blazar data. #### 3.2.3 Spectral Variations The flux changes depicted in the above subsection are accompanied by large spectral variations as well, as shown in Figures 6, 7, and 8. These curves contain a wealth of information, some of which is rather parameter dependent. Nevertheless, we can draw some general conclusions: (1) Since the cooling time at $`\gamma _{\mathrm{max}}`$ is the shortest timescale in our system, the electron distribution at the high-energy end is always substantially softened. Furthermore, the production of photons $`>`$ tens of GeV is always in the KN regime in the SSC model. Both effects make the TeV spectrum very soft, with a large curvature. \[Additional effects such as intrinsic absorption at the source or intergalactic absorption by the infrared background can cause further curvature (e.g., Coppi & Aharonian 1999).\] This curvature is not consistent with the TeV spectra we have seen from Mrk 421 (Krennrich et al. (1999)), but it is perhaps consistent with the observations of Mrk 501, where a curvature in the Compton component is clearly seen (Djannati-Atai et al. (1999)). (2) The “hysteresis” in the relation between photon energy flux and spectral index was first pointed out by Takahashi et al. (1996) in the keV band. In Figures 12, 13, and 14, we show the evolution of the photon index as a function of energy flux in the observer’s frame. Clockwise rotation is always seen at 1 GeV, regardless of whether synchrotron or SSC cooling dominates. Clockwise rotation is found at 2 keV for the case where synchrotron losses dominate the electron cooling ($`\mathcal{E}=10^{44}`$ ergs).
We find that this clockwise rotation at 2 keV holds for all cases with injection energy less than $`10^{48}`$ ergs (with the above injection form). This is mostly related to the fact that, if we associate 2 keV with the synchrotron peak, the spectrum always softens when its flux is decreasing, because the electrons that can produce 2 keV photons have been depleted. When the injection energy is large and the SSC loss is dominant ($`\mathcal{E}=10^{49}`$ ergs), the hysteresis diagrams rotate in the opposite sense at 2 keV. The hardening at later times is actually due to the fact that the 2 keV flux then comes from the first generation of SSC, not from synchrotron anymore. Different behaviors of the hysteresis, however, are found at 100 keV: counter-clockwise for $`\mathcal{E}=10^{44}`$ ergs and clockwise for $`\mathcal{E}=10^{49}`$ ergs. Given these large variations and their sensitivity to various parameters, it is difficult to draw firm conclusions from these “hysteresis” diagrams. (3) If observations show a synchrotron peak in the 1 – 10 keV band, one should be very careful when fitting the spectrum in the 100 keV energy band (such as with OSSE and BeppoSAX), since it is right in the region where the synchrotron and SSC components meet. There is a very large and rapid spectral evolution during the flare (e.g., Figure 13). (4) In the rising parts of the $`\nu F_\nu `$ spectra, such as 1 – 10 eV and MeV – GeV, the spectral index varies much more slowly than in the keV and TeV energy bands, even though the fluxes vary by a large factor (e.g., Figures 12, 13, and 14). (5) To further quantify the spectral evolution, we plot the peak energies of the synchrotron and SSC components in $`\nu F_\nu `$ as a function of time in Figure 15 for the case with $`\mathcal{E}=5\times 10^{47}`$ ergs. An important part is the early softening stage, where the synchrotron peak energy decreases as $`\gamma ^2`$ whereas the SSC peak energy first goes down as $`\gamma `$ because the scattering is in the KN regime.
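The different softening rates of the two peaks in point (5) follow from standard order-of-magnitude relations, not spelled out in the text above (here $`\gamma `$ is the Lorentz factor of the electrons radiating at the peak):

```latex
% Peak-energy scalings during the cooling phase (order-of-magnitude sketch).
\nu_{\mathrm{syn}}^{\mathrm{peak}} \propto \gamma^{2} B \mathcal{D}, \qquad
h\nu_{\mathrm{ssc}}^{\mathrm{peak}} \approx \gamma\, m_e c^{2}\, \mathcal{D}
\quad \text{(KN regime: the scattered photon carries most of the electron energy)} ,
```

so as the highest-energy electrons cool, the synchrotron peak drops as $`\gamma ^2`$ while the KN-limited SSC peak drops only as $`\gamma `$.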
This effect might be observable with the current keV and TeV observational capabilities. Such a “correlated” evolution in peak energies might provide a definitive test of SSC. (6) As shown in Figure 5, when the SSC cooling becomes comparable to or dominant over the synchrotron cooling, the synchrotron peak becomes broader than in the cases dominated by synchrotron cooling alone. Some of the above conclusions might be testable using the current data collected on blazars, and some might require much higher quality data. #### 3.2.4 Time-Dependent Injection In this subsection we show how different injection profiles change the light curves in a single injection event. Unlike in the previous subsection, where a constant injection rate was used (box-shaped injection), we calculate another case with a linearly increasing injection rate (triangle-shaped injection), i.e., $`Q(\gamma )=Q_0\gamma ^{-2}t/t_{\mathrm{dyn}}`$ for $`0\le t\le 2t_{\mathrm{dyn}}`$, and $`Q(\gamma )=0`$ for $`t>2t_{\mathrm{dyn}}`$. The time evolution is followed until $`t=10t_{\mathrm{dyn}}`$ and electrons are allowed to escape with $`t_{e,\mathrm{esc}}=5t_{\mathrm{dyn}}`$. We use the same number of particles and the same amount of total injected energy in both the box- and triangle-shaped injections; the injected energy is $`5\times 10^{47}`$ ergs per $`2t_{\mathrm{dyn}}`$ in the blob frame. In Figure 16, we compare their light curves at 1 keV. The light curve for the box-shaped injection is asymmetric, because more electrons are injected at an early stage than in the triangle injection. On the other hand, the light curve from the triangle-shaped injection is almost symmetric. In both cases, the light curves decay exponentially after the end of the injection. #### 3.2.5 Multiple Injections We now move on to study other injection profiles, produced by turning the electron source term on and off. This is admittedly quite artificial.
The purpose is to understand whether there are any generic features associated with these multiple injections, which might aid us in modeling the multiple flares often observed from blazars. In all the following runs, we allow electrons to escape with $`t_{e,\mathrm{esc}}=5t_{\mathrm{dyn}}`$. Other parameters are the same as in the single injection case. Successive flares can be produced by repeated injections of nonthermal electrons. As an example of this picture, we present light curves for two repeated-injection cases. The top panels of Figures 17 and 18 show the injection profiles, which consist of two triangle injections separated by a long ($`8t_{\mathrm{dyn}}`$) and a short ($`2t_{\mathrm{dyn}}`$) interval (in the comoving frame), respectively. In both cases $`5\times 10^{47}`$ ergs are injected in each “flare” with a triangle-shaped time profile. The lower panels of Figures 17 and 18 show the expected light curves calculated by our code. The shape of the light curve from each injection is very similar to that in Figure 16, where a single injection is involved (i.e., quasi-symmetric). The peak fluxes of the light curves in multiple injections are, however, affected by the separation time of the two flares. When the separation interval is longer than the electron escape time ($`5t_{\mathrm{dyn}}`$), the light curves can almost be regarded as a simple sequence of two separate single injections. But when the separation interval is shorter than the electron escape time, the multiwavelength light curves become rather complicated. The main physical reason behind this complication is the dynamic accumulation of both photons and relativistic electrons. First of all, the 1 eV and 1 keV emissions are from synchrotron, and the others are from SSC.
All the SSC emissions have a second peak higher than the first one; this is due to the increase both in the soft-photon energy density and in the number of electrons that are not yet completely cooled when the second injection occurs. This accumulation of electrons also accounts for the increase in the 1 eV synchrotron flux. The 1 keV emission, however, shows a lower flux in the second peak; this is because the relativistic electrons from the second injection are subject to much stronger cooling, due to the enhanced photon energy density from the first injection. In addition to the flux differences, there are obvious delays in reaching the peak fluxes at wavelengths away from the synchrotron and SSC peaks, though the 1 keV and 1 TeV fluxes track each other rather well. As demonstrated in these figures, the slow response and relatively small amplitude variations at photon energies other than the synchrotron and SSC peaks argue against the usual belief of closely correlated variations in multiwavelength observations. Only the emissions from the tail of the electron energy distribution can be reliably used as diagnostics for separate injections. Still, extra caution is obviously needed when relating the energy contained in nonthermal particles to the observed fluxes. ## 4 SUMMARY AND DISCUSSIONS Using a homogeneous synchrotron self-Compton model, we have calculated the time evolution of the emission spectra and electron energy distributions when nonthermal electrons are uniformly injected into a relativistically moving plasma blob with constant velocity. We have found that: (1) When the luminosities of the synchrotron and SSC peaks are comparable, the electron cooling by inverse Compton scattering is not negligible and the system is inherently nonlinear and dynamic. One has to solve the time-dependent, coupled electron and photon kinetic equations self-consistently.
Furthermore, since observations are most sensitive to the peak fluxes of the synchrotron and SSC components, accurate treatments of the synchrotron emissivity (including the end-point effects) and of inverse Compton scattering in the KN regime are quite essential. (2) When the cooling time at the maximum particle energy is longer than the injection timescale ($`R/c`$), the light curves of emissions corresponding to the tail of the electron distribution can show short, large-amplitude variations, but emissions at other wavelengths show considerably longer and smaller-amplitude changes. Additionally, the spectral evolution is rather slow. All these features are simply caused by the long cooling time of the electrons. (3) When the cooling time at the maximum particle energy is shorter than the dynamic timescale, strong spectral evolution is observed for both the synchrotron and SSC components, and short-duration flares are obtained in most energy bands. (4) Generally, the prompt TeV spectrum is curved due to the KN effect and the fact that the TeV-producing electrons are usually in a cooled distribution. This consideration does not take into account the possible infrared-background attenuation of the TeV photons, which might cause further curvature in the TeV spectrum. On the other hand, most current TeV observations require an accumulation time probably much longer than the dynamic timescale of the blob, so it might still be possible to obtain a quasi power-law TeV spectrum by averaging over an evolving spectrum. Further study is needed to address this issue. (5) We recommend plotting the light curves as logarithmic flux versus linear time. The goal is to look for exponential decays in specific energy bands, which might give direct measurements of the size of the system, as indicated in Figures 9 – 11. (6) One has to be cautious about the common belief that light curves in different energy bands should track each other.
The electrons responsible for producing photons of a specific energy might have quite different lifetimes, especially when multiple, closely spaced injections are involved. This complication also applies to the leading/lagging analysis for different photon energy bands. (7) When high time-resolution spectroscopy is available in both the keV and TeV bands, one should be able to test whether TeV production is via the SSC process by comparing the rates of spectral softening, as done in Figure 15. The primary purpose of this paper is to investigate the radiative signatures of a purely cooling, dynamic system, thus providing a bridge between observations and the detailed but largely unknown physics of particle energization processes. Since we did not address the particle acceleration problem here, some of the conclusions drawn above are certainly subject to revision as our understanding of energy flow in AGNs improves. In conclusion, we have found that solving the time-dependent, coupled electron and photon kinetic equations provides an easy and efficient way of comparing multiwavelength, time-dependent observations with simplified SSC models. It has the advantage of naturally combining the spectral and temporal evolution of a dynamic system, which is very useful as more and more high quality data become available. We thank C. Dermer for useful discussions and the anonymous referee for the helpful comments. H.L. gratefully acknowledges the support of an Oppenheimer Fellowship, and his research is supported by the Department of Energy, under contract W-7405-ENG-36. M.K. thanks F. Takahara for stimulating discussions and his research was partially supported by Scientific Research Grants (09223219, 10117215) from the Ministry of Education, Science, Sports and Culture of Japan.
# Generalization properties of finite size polynomial Support Vector Machines ## I Introduction In the last decade, the typical properties of neural networks that learn classification tasks from a set of examples have been analyzed using the approach of Statistical Mechanics. In the general setting, the value of a binary output neuron represents whether or not the input vector, describing a particular pattern, belongs to the class to be recognized. Handwritten character recognition and medical diagnosis are examples of such classification problems. The process of inferring the rule underlying the input-output mapping given a set of examples is called learning. The aim is to correctly predict the class of novel data, i.e. to generalize. In the simplest neural network, the perceptron, the inputs are directly connected to a single output neuron. The output state is given by the sign of the weighted sum of the inputs. Then, learning amounts to determining the weights of the connections in order to obtain the correct outputs for the training examples. Considering the weights as the components of a vector, the network classifies the input vectors according to whether their projections onto the weight vector are positive or negative. Thus, patterns of different classes are separated by the hyperplane orthogonal to the weight vector. Beyond these linear separations, two different learning schemes have been suggested. Either the input vectors are mapped by nonlinear hidden units to so-called internal representations that must be linearly separable by the output neuron, or a more powerful output unit is defined, able to perform more complicated functions than just the weighted sum of its inputs. The first solution is implemented using feedforward layered neural networks. The classification of the internal representations, performed by the output neuron, corresponds in general to a complicated separation surface in input space.
However, the relation between the number of hidden units of a network and the class of rules it can infer is still an open problem. In practice, the number of hidden neurons is either guessed or determined through constructive heuristics. A solution that uses a more complex output unit, the Support Vector Machine (SVM), has recently been proposed. The input patterns are transformed into high dimensional feature vectors, whose components may include the original input together with specific functions of its coordinates selected a priori, with the aim that the learning set be linearly separable in feature space. In that case the learning problem is reduced to that of training a simple perceptron. For example, if the feature space includes all the pairwise products of the input vector components, the SVM may implement any classification rule corresponding to a quadratic separating surface in input space. Higher order polynomial SVMs and other types of SVMs may be defined by introducing the corresponding features. A big advantage is that learning a linearly separable rule is a convex optimization problem. The difficulties caused by the many local minima that hinder the training of multilayered neural networks are thus circumvented. Once the adequate feature space is defined, the SVM selects the particular hyperplane, called the Maximal Margin (or Maximal Stability) Hyperplane (MMH), which lies at the largest distance from its closest patterns in the training set. These patterns are called Support Vectors (SV). The MMH solution has interesting properties. In particular, the fraction of learning patterns that belong to the SVs provides an upper bound to the generalization error, that is, to the probability of incorrectly classifying a new input. It has been shown that the perceptron weights are a linear combination of the SVs, an interesting property in high dimensional feature spaces, as their number is bounded.
A perceptron can learn, with very high probability, any set of examples, regardless of the underlying classification rule, provided that their number does not exceed twice its input space dimension . However, this simple rote learning does not capture the rule underlying the classification. As it may arise that the feature space dimension of the SVM is comparable to, or even larger than, the number of available training patterns, we would expect SVMs to have poor generalization performance. Surprisingly, this seems not to be the case in the applications . Two theoretical papers have recently addressed this interesting question. They determined the typical properties of a family of polynomial SVMs in the limit of large dimensional spaces, reaching completely different results in spite of the seemingly innocuous differences between the models. Both papers consider polynomial SVMs in which the input vectors $`𝐱\in \text{IR}^N`$ are mapped onto quadratic features $`𝚽`$. More precisely, the normalized mapping $`𝚽_n(𝐱)=(𝐱,x_1𝐱/\sqrt{N},x_2𝐱/\sqrt{N},\dots ,x_N𝐱/\sqrt{N})`$ has been considered in . The non-normalized mapping $`𝚽_{nn}(𝐱)=(𝐱,x_1𝐱,x_2𝐱,\dots ,x_k𝐱)`$ has been studied in as a function of $`k`$, the number of quadratic features. For $`k=N`$ the dimension of both feature spaces is the same, corresponding to a linear subspace of dimension $`N`$ and a quadratic subspace of dimension $`N^2`$. The mappings only differ in the distributions of the quadratic components in feature space. Due to the normalization, those of $`𝚽_n`$ are squeezed by a normalizing factor $`a=1/\sqrt{N}`$ with respect to those of $`𝚽_{nn}`$. In the case of learning a linearly separable rule with the non-normalized mapping $`𝚽_{nn}`$, the generalization error at any given learning set size increases dramatically with the number $`k`$ of quadratic features included .
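As a toy illustration of this setup — not the simulations of the papers discussed here — the following sketch builds the normalized quadratic features $`𝚽_n`$ and trains a plain perceptron on a linearly separable teacher rule. Because the linear subspace is contained in the feature vector, a separating solution exists in feature space (with weights only on the linear components), and the perceptron rule converges to zero training error:

```python
# Toy illustration: perceptron learning in the normalized quadratic feature
# space Phi_n. The teacher rule is linearly separable, so a separating
# hyperplane exists in feature space and the perceptron converges.
import numpy as np

rng = np.random.default_rng(0)
N, M = 6, 40  # input dimension and number of training patterns (illustrative)

def phi_n(x):
    """Normalized mapping: (x, x_i x_j / sqrt(N)) -> N + N^2 features."""
    return np.concatenate([x, np.outer(x, x).ravel() / np.sqrt(len(x))])

teacher = rng.standard_normal(N)
patterns = []
while len(patterns) < M:
    x = rng.standard_normal(N)
    # keep a safety margin so convergence is provably fast (Novikoff bound)
    if abs(x @ teacher) / np.linalg.norm(teacher) > 0.3:
        patterns.append(x)
X = np.array(patterns)
tau = np.sign(X @ teacher)            # linearly separable labels
F = np.array([phi_n(x) for x in X])   # patterns in feature space

w = np.zeros(F.shape[1])
for epoch in range(5000):
    errors = 0
    for f, t in zip(F, tau):          # classic perceptron updates
        if t * (w @ f) <= 0:
            w += t * f
            errors += 1
    if errors == 0:
        break

train_acc = float(np.mean(np.sign(F @ w) == tau))
print(f"epochs used: {epoch}, training accuracy: {train_acc}")
```

Note that the learned solution is a plain perceptron, not the MMH that an actual SVM would select; the sketch only illustrates that training in feature space reduces to perceptron learning once the data are separable there.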
On the contrary, in the case of mapping $`𝚽_n`$, the generalization error exhibits an interesting stepwise decrease, also found within the Gibbs learning paradigm in a quadratic feature space . If the number of training patterns scales with $`N`$, the dimension of the linear subspace, it decreases down to an asymptotic lower bound. If the number of examples scales proportionally to $`N^2`$, it vanishes asymptotically. In particular, if the rule to be inferred is linearly separable in the input space, learning in the feature space with the mapping $`𝚽_n`$ is harmless, as the decrease of the generalization error with the number of training patterns presents only a slight slow-down with respect to that of a simple perceptron learning in input space. As this stepwise learning is exclusively related to the fact that the normalizing factor of the quadratic features vanishes in the thermodynamic limit $`N\to \mathrm{\infty }`$, in the present paper we determine the influence of the normalizing factor on the typical generalization performance of finite size SVMs. To this end, we introduce two parameters, $`\sigma `$ and $`\mathrm{\Delta }`$, characterizing the mapping of the $`N`$-dimensional input patterns onto the feature space. The variance $`\sigma `$ reflects the width of the high-order features distribution and is related to the normalizing factor $`a`$. The inflation factor $`\mathrm{\Delta }`$ accounts for the proportion of quadratic features with respect to the input space dimension $`N`$. Actual quadratic SVMs are characterized by different values of $`\mathrm{\Delta }`$ and $`\sigma `$, depending on $`N`$ and $`a`$. Keeping $`\sigma `$ and $`\mathrm{\Delta }`$ fixed in the thermodynamic limit allows us to determine the typical properties of actual SVMs, which have finite compressing factors and inflation ratios.
In fact, the behaviour of the SVMs is the same as that of a simple perceptron learning a training set with patterns drawn from a highly anisotropic probability distribution, such that a macroscopic fraction of components has a variance different from the others. Not surprisingly, we find that the asymptotic behaviour corresponding to both the small and large training set size limits is the same as that of the perceptron's maximal margin hyperplane (MMH). Only the prefactors depend on the mapping used by the SVM. As expected, the stepwise learning obtained with the normalized mapping in the thermodynamic limit becomes a crossover. Upon increasing the number of training patterns, the generalization error first presents an abrupt decrease, which corresponds to learning the weight components in the linear subspace, followed by a slower decrease corresponding to the learning of the quadratic components. The steepness of the crossover depends not only on $`\mathrm{\Delta }`$ and $`\sigma `$, but also on the task to be learned. The agreement between our analytic results and numerical simulations is excellent. The paper is organized as follows: in section II we introduce the model and the main steps of the Statistical Mechanics calculation. Numerical simulation results are compared to the corresponding theoretical predictions in section III. The two regimes of the generalization error and the asymptotic behaviours are discussed in section IV. The conclusion is left to section V.

## II The model

We consider the problem of learning a binary classification task from examples with a SVM in polynomial feature spaces. The learning set contains $`M`$ patterns $`(𝐱^\mu ,\tau ^\mu )`$ ($`\mu =1,\mathrm{},M`$) where $`𝐱^\mu `$ is an input vector in the $`N`$-dimensional input space, and $`\tau ^\mu \in \{-1,1\}`$ is its class. We assume that the components $`x_i^\mu `$ ($`i=1,\mathrm{},N`$) are independent identically distributed (i.i.d.)
random variables drawn from gaussian distributions with zero mean and unit variance: $$P(𝐱)=\underset{i=1}{\overset{N}{\prod }}\frac{1}{\sqrt{2\pi }}\mathrm{exp}\left(-\frac{x_i^2}{2}\right).$$ (1) In the following we concentrate on quadratic feature spaces, although our conclusions are more general and may be applied to higher order polynomial SVMs, as discussed in section IV. The mappings $`𝚽_{nn}(𝐱)=(𝐱,x_1𝐱,x_2𝐱,\mathrm{},x_N𝐱)`$ and $`𝚽_n(𝐱)=(𝐱,x_1𝐱/\sqrt{N},x_2𝐱/\sqrt{N},\mathrm{},x_N𝐱/\sqrt{N})`$ are particular instances of mappings of the form $`𝚽(𝐱)=(\varphi _1,\varphi _2,\mathrm{},\varphi _N,\varphi _{11},\varphi _{12},\mathrm{},\varphi _{NN})`$ with $`\varphi _i=x_i`$ and $`\varphi _{ij}=ax_ix_j`$, where $`a`$ is the normalizing factor of the quadratic components: $`a=1`$ for mapping $`𝚽_{nn}`$ and $`a=1/\sqrt{N}`$ for $`𝚽_n`$. The probability distribution of the patterns in feature space is: $$P\left(𝚽\right)=\underset{i=1}{\overset{N}{\prod }}\int \frac{dx_i}{\sqrt{2\pi }}\mathrm{exp}\left(-\frac{x_i^2}{2}\right)\delta (\varphi _i-x_i)\underset{j=1}{\overset{N}{\prod }}\delta \left(\varphi _{ij}-ax_ix_j\right).$$ (2) Clearly, the components of $`𝚽`$ are not independent random variables. For example, a number $`O(N^3)`$ of triplets of the form $`\varphi _{ij}\varphi _{jk}\varphi _{ki}`$ have positive correlations. These contribute to the third order moments, which would vanish if the features were gaussian. Moreover, the fourth order connected correlations do not vanish in the thermodynamic limit. Nevertheless, in the following we will neglect these and higher order connected moments. This approximation, used in and implicit in , is equivalent to assuming that all the components in feature space are independent gaussian variables. Then, the only difference between the mappings $`𝚽_n`$ and $`𝚽_{nn}`$ lies in the variance of the distribution of the quadratic components.
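To make the difference between the two mappings concrete, the following sketch (our own illustration; the helper `quadratic_features` is not from the paper) builds both feature vectors for a Gaussian input and verifies that the quadratic part of $`𝚽_n`$ is squeezed by $`a^2=1/N`$:

```python
import math
import random

def quadratic_features(x, a):
    """Map x onto (x, a*x_i*x_j): a = 1 gives Phi_nn, a = 1/sqrt(N) gives Phi_n."""
    return list(x) + [a * xi * xj for xi in x for xj in x]

random.seed(0)
N = 200
x = [random.gauss(0.0, 1.0) for _ in range(N)]

phi_nn = quadratic_features(x, 1.0)                 # non-normalized mapping
phi_n = quadratic_features(x, 1.0 / math.sqrt(N))   # normalized mapping

# Empirical second moment of the quadratic components of each mapping:
m_nn = sum(v * v for v in phi_nn[N:]) / N ** 2
m_n = sum(v * v for v in phi_n[N:]) / N ** 2
print(round(m_nn / m_n))  # squeezed by a^2 = 1/N, so the ratio is N = 200
```

The ratio of the two second moments is exactly $`1/a^2=N`$, independently of the particular input drawn.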
The results obtained using this simplification are in excellent agreement with the numerical tests described in the next section. Since, due to the symmetry of the transformation, only $`N(N+1)/2`$ among the $`N^2`$ quadratic features are different, hereafter we restrict the feature space and only consider the non-redundant components, which we denote $`𝝃=(𝝃_u,𝝃_\sigma )`$. The first $`N`$ components $`𝝃_u=(\xi _1,\mathrm{},\xi _N)`$, hereafter called u-components, represent the input pattern of unit variance, lying in the linear subspace. The remaining components $`𝝃_\sigma =(\xi _{N+1},\mathrm{},\xi _{\stackrel{~}{N}})`$, of variance $`\sigma `$, stand for the non-redundant quadratic features and are hereafter called $`\sigma `$-components. The dimension of the restricted feature space is $`\stackrel{~}{N}=N(1+\mathrm{\Delta })`$, where the inflation ratio $`\mathrm{\Delta }`$ is the relative number of non-redundant quadratic features per input space dimension. The quadratic mapping has $`\mathrm{\Delta }=(N+1)/2`$. According to the preceding discussion, we assume that learning $`N`$-dimensional patterns selected with the isotropic distribution (1) with a quadratic SVM is equivalent to learning the MMH with a simple perceptron in an $`\stackrel{~}{N}`$-dimensional space where the patterns are drawn using the following anisotropic distribution: $$P\left(𝝃\right)=\underset{i=1}{\overset{N}{\prod }}\frac{1}{\sqrt{2\pi }}\mathrm{exp}\left(-\frac{\xi _i^2}{2}\right)\underset{j=N+1}{\overset{\stackrel{~}{N}}{\prod }}\frac{1}{\sqrt{2\pi \sigma ^2}}\mathrm{exp}\left(-\frac{\xi _j^2}{2\sigma ^2}\right).$$ (3) The second moment of the u-features is $`\langle 𝝃_u^2\rangle =N`$ and that of the $`\sigma `$-features is $`\langle 𝝃_\sigma ^2\rangle =N\mathrm{\Delta }\sigma ^2`$. If $`\sigma ^2\mathrm{\Delta }=1`$, we get $`\langle 𝝃_\sigma ^2\rangle =\langle 𝝃_u^2\rangle `$, which is the relation satisfied by the normalized mapping considered in . The non-normalized mapping corresponds to $`\sigma ^2\mathrm{\Delta }=N`$.
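A short sketch (ours; the function name is hypothetical) samples from the anisotropic distribution (3) and checks the two second moments, $`\langle 𝝃_u^2\rangle =N`$ and $`\langle 𝝃_\sigma ^2\rangle =N\mathrm{\Delta }\sigma ^2`$, which coincide when $`\sigma ^2\mathrm{\Delta }=1`$:

```python
import random

def sample_feature_pattern(N, Delta, sigma, rng):
    """One pattern from distribution (3): N unit-variance u-components
    and N*Delta components of standard deviation sigma."""
    xi_u = [rng.gauss(0.0, 1.0) for _ in range(N)]
    xi_s = [rng.gauss(0.0, sigma) for _ in range(int(N * Delta))]
    return xi_u, xi_s

rng = random.Random(1)
N, Delta = 50, 25.5
sigma2 = 1.0 / Delta        # normalized mapping: sigma^2 * Delta = 1
trials = 1000
su = ss = 0.0
for _ in range(trials):
    xi_u, xi_s = sample_feature_pattern(N, Delta, sigma2 ** 0.5, rng)
    su += sum(v * v for v in xi_u) / trials
    ss += sum(v * v for v in xi_s) / trials
print(round(su, 1), round(ss, 1))  # both second moments are close to N = 50
```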
In the following, instead of selecting either of these possibilities a priori, we consider $`\mathrm{\Delta }`$ and $`\sigma `$ as independent parameters that are kept constant when taking the thermodynamic limit. Since the rules to be inferred are assumed to be linear separations in feature space, we represent them by the weights $`𝐰^{}=(w_1^{},w_2^{},\mathrm{},w_{\stackrel{~}{N}}^{})`$ of a teacher perceptron, so that the class of the patterns is $`\tau =\mathrm{sign}(𝝃𝐰^{})`$. Without any loss of generality we consider normalized teachers: $`𝐰^{}𝐰^{}=\stackrel{~}{N}`$. The training set in feature space is then $`_M=\{(𝝃^\mu ,\tau ^\mu )\}_{\mu =1,\mathrm{},M}`$. In the following we study the typical properties of polynomial SVMs learning realizable classification tasks, using the tools of Statistical Mechanics. If $`𝐰=(w_1,\mathrm{},w_{\stackrel{~}{N}})`$ is the student perceptron weight vector, $`\gamma ^\mu =\tau ^\mu 𝝃^\mu 𝐰/\sqrt{𝐰𝐰}`$ is the stability of pattern $`\mu `$ in feature space. The pertinent cost function is: $$E(𝐰,\kappa ;_M)=\underset{\mu =1}{\overset{M}{\sum }}\mathrm{\Theta }(\kappa -\gamma ^\mu ).$$ (4) $`\kappa `$, the smallest allowed distance between the hyperplane and the training patterns, is called the margin. The MMH corresponds to the weights with vanishing cost (4) that maximize $`\kappa `$. The typical properties of cost (4) in the case of isotropic pattern distributions have been exhaustively studied. The case of a single anisotropy axis has also been investigated. Here we study the case of the anisotropic distribution (3), where a macroscopic fraction of components has a variance different from the others, which is pertinent for understanding the properties of the SVM.
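In code, the stabilities and the cost (4) read as follows (a minimal sketch of ours; $`\mathrm{\Theta }`$ is the Heaviside step, here counted with a strict inequality, and the tiny two-pattern example is purely illustrative):

```python
import math

def stabilities(patterns, labels, w):
    """gamma^mu = tau^mu * (xi^mu . w) / |w|, the stability of each pattern."""
    norm = math.sqrt(sum(wi * wi for wi in w))
    return [tau * sum(x * wi for x, wi in zip(xi, w)) / norm
            for xi, tau in zip(patterns, labels)]

def cost(patterns, labels, w, kappa):
    """E = number of patterns with stability below the margin kappa (eq. 4)."""
    return sum(1 for g in stabilities(patterns, labels, w) if g < kappa)

# Tiny illustrative example: both patterns have stability 1/sqrt(2) ~ 0.707.
patterns = [(1.0, 0.0), (0.0, 1.0)]
labels = [1, -1]
w = (1.0, -1.0)
print(cost(patterns, labels, w, kappa=0.0))   # all correctly classified -> 0
print(cost(patterns, labels, w, kappa=1.0))   # margin 0.707 < 1 -> 2
```

With $`\kappa =0`$ the cost simply counts training errors; the MMH is the zero-cost weight vector with the largest $`\kappa `$.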
Considering the cost (4) as an energy, the partition function at temperature $`1/\beta `$ reads $$Z(\kappa ,\beta ;_M)=\int \mathrm{exp}[-\beta E(𝐰,\kappa ;_M)]p(𝐰)𝑑𝐰.$$ (5) Without any loss of generality, we assume that the a priori distribution of the student weights is uniform over the hypersphere of radius $`\stackrel{~}{N}^{1/2}`$, i.e. $`p(𝐰)=\delta (𝐰𝐰-\stackrel{~}{N})`$, meaning that the student weights are normalized in feature space. In the limit $`\beta \rightarrow \infty `$, the corresponding free energy $`f(\kappa ,\beta ;_M)=-(1/\beta N)\mathrm{ln}Z(\kappa ,\beta ;_M)`$ is dominated by the weights that minimize the cost (4). The typical properties of the MMH are obtained by looking for the largest value of $`\kappa `$ for which the quenched average of the free energy over the patterns distribution, in the zero temperature limit $`\beta \rightarrow \infty `$, vanishes. This average is calculated by the replica method, using the identity $$f(\kappa ,\beta )=-\frac{1}{N\beta }\overline{\mathrm{ln}Z(\kappa ,\beta ;_M)}=-\frac{1}{N\beta }\underset{n\rightarrow 0}{lim}\frac{\mathrm{ln}\overline{Z^n(\kappa ,\beta ;_M)}}{n},$$ (6) where the overline represents the average over $`_M`$, composed of patterns selected according to (3). We obtain the typical properties of the MMH corresponding to given values of $`\mathrm{\Delta }`$ and $`\sigma `$ by taking the thermodynamic limit $`N\rightarrow \infty `$, $`M\rightarrow \infty `$, with $`\alpha \equiv M/N`$, $`\mathrm{\Delta }`$ and $`\sigma `$ constant. Notice that the ratio of the number of training examples to the feature space dimension, $`\stackrel{~}{\alpha }\equiv M/\stackrel{~}{N}=\alpha /(1+\mathrm{\Delta })`$, remains finite. Thus, we are able to study the dependence of the learning properties not only on the training set size, as usual, but also on the inflation factor that characterizes the SVM, as well as on the variance of the quadratic components. As we only consider realizable rules, i.e.
classification tasks that are linearly separable in feature space, the energy (4) is a convex function of the weights $`𝐰`$, and replica symmetry holds. For any $`\kappa <\kappa _{max}`$, there is a macroscopic number of weights that minimize the cost function (4). In particular, in the case of $`\kappa =0`$, the cost is the number of training errors, and is minimized by any weight vector that correctly classifies the training set. The typical properties of such a solution, called Gibbs learning, may be expressed in terms of several order parameters. Among them are $`q_u^{ab}=\sum _{i=1}^N\overline{w_i^aw_i^b}/\stackrel{~}{N}`$, $`q_\sigma ^{ab}=\sum _{i=N+1}^{\stackrel{~}{N}}\overline{w_i^aw_i^b}/\stackrel{~}{N}`$ and $`Q^a=\sum _{i=N+1}^{\stackrel{~}{N}}\overline{w_i^aw_i^a}/\stackrel{~}{N}`$, where $`a\ne b`$ are replica indices and $`\langle \cdot \rangle `$ stands for the usual thermodynamic average (with Boltzmann factor corresponding to the partition function (5)). $`q_u^{ab}`$ and $`q_\sigma ^{ab}`$ represent the overlaps between different solutions in the u- and $`\sigma `$-subspaces respectively. $`\stackrel{~}{N}Q^a`$ is the typical squared norm of the $`\sigma `$-components of replica $`a`$. Because of replica symmetry we have $`Q^a=Q^b=Q`$, $`q_\sigma ^{ab}=q_\sigma `$ and $`q_u^{ab}=q_u`$ for all $`a`$, $`b`$. Upon increasing $`\kappa `$, the volume of the error-free solutions in weight space shrinks, and vanishes when $`\kappa `$ is maximized. Correspondingly, $`q_u\rightarrow 1-Q`$ and $`q_\sigma \rightarrow Q`$, with $`x\equiv lim_{\kappa \rightarrow \kappa _{max}}(1-q_u/(1-Q))/(1-q_\sigma /Q)`$ finite.
In the limit $`\kappa \rightarrow \kappa _{max}`$, the properties of the MMH may be expressed in terms of $`x`$, $`\kappa _{max}`$ and the following order parameters, $`Q`$ $`=`$ $`{\displaystyle \frac{1}{\stackrel{~}{N}}}{\displaystyle \underset{i=N+1}{\overset{\stackrel{~}{N}}{\sum }}}\overline{w_i^2},`$ (7) $`R_u`$ $`=`$ $`{\displaystyle \frac{1}{\sqrt{(1-Q)(1-Q^{})}}}{\displaystyle \frac{1}{\stackrel{~}{N}}}{\displaystyle \underset{i=1}{\overset{N}{\sum }}}\overline{w_iw_i^{}},`$ (8) $`R_\sigma `$ $`=`$ $`{\displaystyle \frac{1}{\sqrt{QQ^{}}}}{\displaystyle \frac{1}{\stackrel{~}{N}}}{\displaystyle \underset{i=N+1}{\overset{\stackrel{~}{N}}{\sum }}}\overline{w_iw_i^{}},`$ (9) where $`Q^{}=\sum _{i=N+1}^{\stackrel{~}{N}}(w_i^{})^2/\stackrel{~}{N}`$ is the fraction of the teacher’s squared weights lying in the $`\sigma `$-subspace. $`Q`$ is the corresponding typical value for the student. $`R_u`$ and $`R_\sigma `$ are proportional to the overlaps between the student and the teacher weights in the u- and $`\sigma `$-subspaces respectively. The factors in the denominators arise because the weights are not normalized in each subspace.
The saddle point equations corresponding to the extremum of the free energy for the MMH are $`2{\displaystyle \frac{\alpha }{\mathrm{\Delta }}}\mathrm{\Delta }_\sigma I_1`$ $`=`$ $`(1-R_\sigma ^2){\displaystyle \frac{(x+\mathrm{\Delta }_\sigma )^2}{1+\mathrm{\Delta }_\sigma }},`$ (10) $`2{\displaystyle \frac{\alpha }{\mathrm{\Delta }}}I_2`$ $`=`$ $`\sqrt{{\displaystyle \frac{1+\mathrm{\Delta }_\sigma ^{}}{\mathrm{\Delta }_\sigma ^{}}}}R_\sigma {\displaystyle \frac{x+\mathrm{\Delta }_\sigma }{\sqrt{\mathrm{\Delta }_\sigma (1+\mathrm{\Delta }_\sigma )}}},`$ (11) $`2{\displaystyle \frac{\alpha }{\mathrm{\Delta }}}Q(1-\sigma ^2)I_3`$ $`=`$ $`\left(1-x{\displaystyle \frac{1-R_\sigma ^2}{1-R_u^2}}\right){\displaystyle \frac{x+\mathrm{\Delta }_\sigma }{1+\mathrm{\Delta }_\sigma }},`$ (12) $`{\displaystyle \frac{R_\sigma ^2}{1-R_\sigma ^2}}`$ $`=`$ $`{\displaystyle \frac{\mathrm{\Delta }_\sigma ^{}}{\mathrm{\Delta }}}{\displaystyle \frac{R_u^2}{1-R_u^2}},`$ (13) $`{\displaystyle \frac{R_u}{R_\sigma }}`$ $`=`$ $`x{\displaystyle \frac{\mathrm{\Delta }}{\sqrt{\mathrm{\Delta }_\sigma \mathrm{\Delta }_\sigma ^{}}}}.`$ (14) where $`\mathrm{\Delta }_\sigma \equiv \sigma ^2Q/(1-Q)`$ and $`\mathrm{\Delta }_\sigma ^{}\equiv \sigma ^2Q^{}/(1-Q^{})`$.
The integrals on the left-hand side of equations (10)-(12) are $`I_1`$ $`=`$ $`{\displaystyle \int _{-\stackrel{~}{\kappa }}^{\infty }}Dt(t+\stackrel{~}{\kappa })^2H\left({\displaystyle \frac{tR}{\sqrt{1-R^2}}}\right),`$ (15) $`I_2`$ $`=`$ $`{\displaystyle \frac{1}{\sqrt{2\pi }}}\left[{\displaystyle \frac{\sqrt{1-R^2}\mathrm{exp}(-\stackrel{~}{\kappa }^2/(2(1-R^2)))}{\sqrt{2\pi }}}+\stackrel{~}{\kappa }H\left({\displaystyle \frac{\stackrel{~}{\kappa }}{\sqrt{1-R^2}}}\right)\right],`$ (16) $`I_3`$ $`=`$ $`{\displaystyle \int _{-\stackrel{~}{\kappa }}^{\infty }}Dt\stackrel{~}{\kappa }(t+\stackrel{~}{\kappa })H\left({\displaystyle \frac{tR}{\sqrt{1-R^2}}}\right),`$ (17) with $`Dt\equiv dt\mathrm{exp}(-t^2/2)/\sqrt{2\pi }`$, $`H(x)=\int _x^{\infty }Dt`$, and $`\stackrel{~}{\kappa }`$ $`=`$ $`{\displaystyle \frac{\kappa _{max}}{\sqrt{(1-Q)(1+\mathrm{\Delta }_\sigma )}}},`$ (18) $`R`$ $`=`$ $`{\displaystyle \frac{R_u+\sqrt{\mathrm{\Delta }_\sigma \mathrm{\Delta }_\sigma ^{}}R_\sigma }{\sqrt{(1+\mathrm{\Delta }_\sigma )(1+\mathrm{\Delta }_\sigma ^{})}}}.`$ (19) The value of $`R`$ determines the generalization error through $`ϵ_g=(1/\pi )\mathrm{arccos}(R)`$. After solving the above equations for $`Q`$, $`R_u`$, $`R_\sigma `$, $`x`$ and $`\stackrel{~}{\kappa }`$, it is straightforward to determine $`\rho _{SV}`$, the fraction of training patterns that belong to the subset of support vectors (SVs): $$\rho _{SV}=2\int _{-\infty }^{\stackrel{~}{\kappa }}H\left(tR/\sqrt{1-R^2}\right)Dt.$$ (20) In summary, instead of considering a particular scaling of the fraction of high order features components and their normalization with $`N`$, we analyzed the more general case where these quantities are kept as free parameters. We determined the saddle point equations that define the typical properties of the corresponding SVM. This approach allows us to consider several learning scenarios and, more interestingly, to study the crossover between the different generalization regimes.
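Once the saddle point equations are solved, $`ϵ_g`$ and $`\rho _{SV}`$ follow from $`R`$ and $`\stackrel{~}{\kappa }`$. The sketch below is our own numerical illustration (it assumes the sign conventions as printed and uses the identity $`H(x)=\mathrm{erfc}(x/\sqrt{2})/2`$):

```python
import math

def H(x):
    """Gaussian tail: H(x) = integral_x^inf Dt = erfc(x / sqrt(2)) / 2."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def generalization_error(R):
    """eps_g = arccos(R) / pi (text below eq. (19))."""
    return math.acos(R) / math.pi

def rho_sv(kappa_t, R, steps=20000):
    """Trapezoidal estimate of eq. (20), the fraction of support vectors."""
    lo = -10.0                 # effectively -infinity for the Gaussian measure
    h = (kappa_t - lo) / steps
    s = 0.0
    for k in range(steps + 1):
        t = lo + k * h
        w = 0.5 if k in (0, steps) else 1.0
        dt = math.exp(-t * t / 2.0) / math.sqrt(2.0 * math.pi)
        s += w * H(t * R / math.sqrt(1.0 - R * R)) * dt
    return 2.0 * h * s

print(generalization_error(0.0))          # uncorrelated student: eps_g = 1/2
# Sanity check: for R -> 0 the integrand is Dt/2, so rho_SV(kappa_t = 0) -> 1/2.
print(round(rho_sv(0.0, 1e-9), 3))
```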
## III Results

We describe first the experimental data, obtained with quadratic SVMs, using both mappings, $`𝚽_{nn}`$ and $`𝚽_n`$, which have normalizing factors $`a=1`$ and $`a=1/\sqrt{N}`$ respectively, where $`N`$ is the input space dimension. The $`M=\alpha N`$ random input examples of each training set were selected with probability (1) and labelled by teachers of normalized weights $`𝒘^{}\equiv (𝒘_l^{},𝒘_q^{})`$ drawn at random. $`𝒘_l^{}`$ are the $`N`$ components in the linear subspace and $`𝒘_q^{}`$ are the $`N^2`$ components in the quadratic subspace. Notice that, because of the symmetry of the mappings, teachers having the same value of the symmetrized weights in the quadratic subspace, $`(w_{q,ij}^{}+w_{q,ji}^{})/2`$, are all equivalent. The teachers are characterized by the proportion of (squared) weight components in the quadratic subspace, $`Q^{}=𝒘_q^{}𝒘_q^{}/𝒘^{}𝒘^{}`$. In particular, $`Q^{}=0`$ and $`Q^{}=1`$ correspond to a purely linear and a purely quadratic teacher respectively. The experimental student weights $`𝒘\equiv (𝒘_l,𝒘_q)`$ were obtained by numerically solving the dual problem, using the Quadratic Optimizer for Pattern Recognition program, which we adapted to the case without threshold treated in this paper. We determined $`Q`$, and the overlaps $`R_l`$ and $`R_q`$ in the linear and the quadratic subspaces, respectively. For each value of $`M`$, averages were performed over a large enough number of different teachers and training sets to get the precision shown in the figures. Experiments were carried out for $`N=50`$. The corresponding feature space dimension is $`N(N+1)=2550`$. The restricted feature space considered in our model is composed of the $`N`$ (linear) input components, which define the u-subspace of the feature space, and the $`N\mathrm{\Delta }`$ non-redundant quadratic components of the $`\sigma `$-subspace.
For the sake of comparison with the theoretical results determined in the thermodynamic limit, we characterize the actual SVM by its (finite-size) inflation factor $`\mathrm{\Delta }=(N+1)/2`$, and the variance $`\sigma ^2`$ of the components in the $`\sigma `$-subspace, related to the normalizing factor $`a`$ of the new features through $`\sigma ^2=Na^2/\mathrm{\Delta }`$. In our case, since $`N=50`$, $`\mathrm{\Delta }=25.5`$ and $`\sigma ^2=1.960784a^2`$, that is, $`\sigma ^2=1.960784`$ for the non-normalized mapping and $`\sigma ^2=0.039216`$ for the normalized one. The values of $`Q`$, the fraction of squared student weights in the $`\sigma `$-subspace, and the teacher-student overlaps $`R_u`$ and $`R_\sigma `$, normalized within the corresponding subspace, are represented on figures 1 to 4 as a function of $`\alpha \equiv M/N`$, using full and open symbols for the mappings $`𝚽_n`$ and $`𝚽_{nn}`$ respectively. Notice that the abscissas correspond to the fraction of training patterns per input space dimension. Error bars are smaller than the symbols’ size. The lines are not fits, but the theoretical curves corresponding to the same classes of teachers as the experimental results. The excellent agreement with the experimental data is striking. Thus, the high order correlations of the features, neglected in the theoretical models, are indeed negligible. Fig. 1 corresponds to a purely linear teacher ($`Q^{}=0`$), i.e. to a quadratic SVM learning a rule linearly separable in input space. As in this case $`R_\sigma =0`$, only $`R_u`$ and $`Q`$ are represented. In the case of a purely quadratic rule, $`Q^{}=1`$, represented on fig. 2, $`R_u=0`$. Notice that the corresponding overlaps, $`R_u`$ and $`R_\sigma `$, do not have a similar behaviour, as the latter increases much more slowly than the former, irrespective of the mapping.
This happens because, as the number of quadratic components scales like $`N\mathrm{\Delta }`$, a number of examples of the order of $`N\mathrm{\Delta }`$ is needed to learn them. Indeed, $`R_u`$ reaches a value close to $`1`$ with $`\alpha \sim O(1)`$ while $`R_\sigma `$ needs $`\alpha \sim O(\mathrm{\Delta })`$ to reach similar values. Fig. 3 shows the results corresponding to the isotropic teacher, having $`Q^{}=Q_{iso}^{}\equiv \mathrm{\Delta }/(1+\mathrm{\Delta })`$. For $`\mathrm{\Delta }=25.5`$ we have $`Q_{iso}^{}=0.962`$. A particular case of such a teacher has all its weight components of equal absolute value, i.e. $`(w_i^{})^2=1/\stackrel{~}{N}`$, and was studied in and . Finally, the results corresponding to a general rule, with $`Q^{}=0.5`$, are shown in fig. 4. Notice that at fixed $`\alpha `$, $`R_u`$ decreases and $`R_\sigma `$ increases with $`Q^{}`$ at a rate that depends on the mapping. These quantities determine the student’s generalization error through the combination (19). The fact that they increase as a function of $`\alpha `$ at different speeds is a signature of hierarchical learning. The generalization error $`ϵ_g`$ corresponding to the different rules is plotted against $`\alpha `$ on fig. 5, for both mappings. At any fixed $`\alpha `$, the performance obtained with the normalized mapping is better the smaller the value of $`Q^{}`$. The non-normalized mapping shows the opposite trend: its performance for a purely linear teacher is extremely bad, but it improves for increasing values of $`Q^{}`$ and slightly surpasses that of the normalized mapping in the case of a purely quadratic teacher. These results reflect the competition in learning the anisotropically distributed features. In the case of the normalized mapping, the $`\sigma `$-components are compressed ($`\sigma ^2=0.039`$) with respect to the u-components, which have unit variance.
This is advantageous whenever the linear components carry the most significant information, which is the case for $`Q^{}\ll 1`$. When $`Q^{}=1`$, the linear components only introduce noise that hinders the learning process. As the number of linear components is much smaller than the number of quadratic ones, their pernicious effect should be more conspicuous the smaller the value of $`\mathrm{\Delta }`$. Conversely, the non-normalized mapping has $`\sigma ^2=1.96`$, meaning that the compressed components are those of the u-subspace. Therefore, this mapping is better when most of the information is contained in the $`\sigma `$-subspace, which is the case for teachers with large $`Q^{}`$ and, in particular, with $`Q^{}=1`$. Finally, for the sake of completeness, the fraction of support vectors $`\rho _{SV}\equiv M_{SV}/M`$, where $`M_{SV}`$ is the number of training patterns with maximal stability, is represented on figure 6. This fraction is an upper bound to the generalization error. Notice that these curves present qualitatively the same trends as $`ϵ_g`$. Interestingly, $`\rho _{SV}`$ is smaller for the normalized mapping than for the non-normalized one for most of the rules. Since the student’s weights can be expressed as a linear combination of SVs, this result is of practical interest.

## IV Discussion

In order to understand the results obtained in the previous section, we first analyze the relative behaviour of $`R_u`$ and $`R_\sigma `$, which can be deduced from equation (13). If $`\mathrm{\Delta }_\sigma ^{}\ll \mathrm{\Delta }`$, which is the case for sufficiently small $`Q^{}`$, we find that $`R_\sigma \ll R_u`$. This means that the quadratic components are more difficult to learn than the linear ones. On the other hand, if the teacher lies mainly in the quadratic subspace, $`\mathrm{\Delta }_\sigma ^{}\gg \mathrm{\Delta }`$, then $`R_\sigma >R_u`$.
The crossover between these different behaviours occurs at $`\mathrm{\Delta }_\sigma ^{}=\mathrm{\Delta }`$, for which equation (13) gives $`R_\sigma =R_u`$. For $`N=50`$, which is the case in our simulations, this arises for $`Q_n^{}=0.998`$ or $`Q_{nn}^{}=0.929`$, depending on whether we use the normalized or the non-normalized mapping. In the particular case of the isotropic teacher and the non-normalized mapping, $`Q^{}>Q_{nn}^{}`$, so that $`R_\sigma >R_u`$, as shown on figure 3. These considerations alone are not sufficient to understand the behaviour of the generalization error, which depends on the weighted sum of $`R_\sigma `$ and $`R_u`$ (see equation (19)). The behaviour at small $`\alpha `$ is useful to understand the onset of hierarchical learning. A close inspection of equations (10)-(13) shows that in the limit $`\alpha \rightarrow 0`$, $`x=\sigma ^2`$ and $`Q\rightarrow \mathrm{\Delta }\sigma ^2/(\mathrm{\Delta }\sigma ^2+1)`$ to leading order in $`\alpha `$. This result may be understood with the following simple argument: if there is only one training pattern, clearly it is an SV and the student’s weight vector is proportional to it. As a typical example has $`N`$ components of unit length in the u-subspace and $`N\mathrm{\Delta }`$ components of length $`\sigma `$ in the $`\sigma `$-subspace, we have $`Q=N\mathrm{\Delta }\sigma ^2/(N\mathrm{\Delta }\sigma ^2+N)`$. With the normalized mapping, $`lim_{\alpha \rightarrow 0}Q=1/2`$. In the case of the non-normalized one, $`lim_{\alpha \rightarrow 0}Q=(2\mathrm{\Delta }-1)/2\mathrm{\Delta }`$, which depends on the inflation factor of the SVM.
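The single-pattern argument above is easy to check numerically; the sketch below (our own, with a hypothetical helper name) draws one pattern and measures the fraction of its squared norm in the $`\sigma `$-subspace:

```python
import random

def single_pattern_Q(N, Delta, sigma, rng):
    """With a single training pattern the student weight vector is
    proportional to it; Q is the fraction of squared norm in the sigma-subspace."""
    nu = sum(rng.gauss(0.0, 1.0) ** 2 for _ in range(N))
    ns = sum(rng.gauss(0.0, sigma) ** 2 for _ in range(int(N * Delta)))
    return ns / (nu + ns)

rng = random.Random(7)
N, Delta = 50, 25.5
sigma = (1.0 / Delta) ** 0.5      # normalized mapping: sigma^2 * Delta = 1
q = sum(single_pattern_Q(N, Delta, sigma, rng) for _ in range(500)) / 500
print(round(q, 2))                # close to the predicted limit Q -> 1/2
```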
In this limit, we obtain: $`\kappa _{max}`$ $`\simeq `$ $`{\displaystyle \frac{1+\sigma ^2\mathrm{\Delta }}{\sqrt{1+\sigma ^4\mathrm{\Delta }}}}{\displaystyle \frac{1}{\sqrt{\alpha }}},`$ (21) $`R_u`$ $`\simeq `$ $`\sqrt{{\displaystyle \frac{2}{\pi }}}{\displaystyle \frac{1}{\sqrt{1+\mathrm{\Delta }_\sigma ^{}}}}\sqrt{\alpha },`$ (22) $`R_\sigma `$ $`\simeq `$ $`\sqrt{{\displaystyle \frac{2}{\pi }}}\sqrt{{\displaystyle \frac{\mathrm{\Delta }_\sigma ^{}}{1+\mathrm{\Delta }_\sigma ^{}}}}\sqrt{{\displaystyle \frac{\alpha }{\mathrm{\Delta }}}}.`$ (23) Therefore, $`R\propto \sqrt{\alpha }`$, as for the simple perceptron MMH, but with a prefactor that depends on the mapping and the teacher. In our model, we expect hierarchical learning to correspond to a fast increase of $`R`$ at small $`\alpha `$, mainly dominated by the contribution of $`R_u`$. As in the limit $`\alpha \rightarrow 0`$, $$R\simeq \frac{R_u+R_\sigma \sqrt{\sigma ^4\mathrm{\Delta }\mathrm{\Delta }_\sigma ^{}}}{\sqrt{1+\sigma ^4\mathrm{\Delta }}\sqrt{1+\mathrm{\Delta }_\sigma ^{}}},$$ (24) we expect hierarchical learning if $`\sigma ^4\mathrm{\Delta }\ll 1`$ and $`\mathrm{\Delta }_\sigma ^{}\ll 1`$. The first condition establishes a constraint on the mapping, which is only satisfied by the normalized one. The second condition, which ensures that $`R_\sigma <R_u`$ holds, gives the range of teachers for which this hierarchical generalization takes place. Under these conditions, $`R`$ grows fast and the contribution of $`R_\sigma `$ is negligible because it is weighted by $`\sqrt{\sigma ^4\mathrm{\Delta }\mathrm{\Delta }_\sigma ^{}}`$. The effect of hierarchical learning is more important the smaller $`\mathrm{\Delta }_\sigma ^{}`$. The most dramatic effect arises for $`Q^{}=0`$, i.e. for a quadratic SVM learning a linearly separable rule. On the other hand, if $`\sigma ^4\mathrm{\Delta }\gg 1`$, which is the case for the non-normalized mapping, both $`R_u`$ and $`R_\sigma `$ contribute to $`R`$ with comparable weights.
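As a numerical check (ours, using the simulations' $`N=50`$), the hierarchical-learning condition $`\sigma ^4\mathrm{\Delta }\ll 1`$ and the margin prefactor of eq. (21) can be evaluated for both mappings, with $`\sigma ^2=Na^2/\mathrm{\Delta }`$ as defined in section III:

```python
import math

N = 50
Delta = (N + 1) / 2      # 25.5 non-redundant quadratic features per input dimension

def sigma2_from_a(a):
    """Variance of the sigma-components, sigma^2 = N a^2 / Delta (section III)."""
    return N * a ** 2 / Delta

def kappa_prefactor(sigma2):
    """Coefficient of 1/sqrt(alpha) in the small-alpha margin, eq. (21)."""
    return (1 + sigma2 * Delta) / math.sqrt(1 + sigma2 ** 2 * Delta)

for name, a in [("normalized", 1 / math.sqrt(N)), ("non-normalized", 1.0)]:
    s2 = sigma2_from_a(a)
    # sigma^4 * Delta << 1 holds only for the normalized mapping (0.039 vs 98).
    print(name, round(s2, 6), round(s2 ** 2 * Delta, 3), round(kappa_prefactor(s2), 3))
```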
Notice that, if the normalized mapping is used, the condition $`\mathrm{\Delta }_\sigma ^{}\ll 1`$ implies that $`Q^{}<Q_{iso}^{}\equiv \mathrm{\Delta }/(1+\mathrm{\Delta })`$, where $`Q_{iso}^{}`$ corresponds to the isotropic teacher. A straightforward calculation shows that a fraction of $`47.5\%`$ of teachers satisfies this constraint for $`N=50`$. In fact, the distribution of teachers as a function of $`Q^{}`$ has its maximum at $`Q_{iso}^{}`$. When $`N\rightarrow \infty `$, the distribution becomes $`\delta (Q^{}-Q_{iso}^{})`$, and $`Q_{iso}^{}`$ tends to the median, meaning that in this limit only about $`50\%`$ of the teachers give rise to hierarchical learning when using the normalized mapping. In the limit $`\alpha \rightarrow \infty `$, all the generalization error curves converge to the same asymptotic value as the simple perceptron MMH learning in the feature space, namely $`ϵ_g=0.500489(1+\mathrm{\Delta })/\alpha `$, independently of $`\sigma `$ and $`Q^{}`$. Thus, $`ϵ_g`$ vanishes more slowly the larger the inflation factor $`\mathrm{\Delta }`$. Finally, it is worth pointing out that for $`\sigma =1`$, which would correspond to a normalizing factor $`a=\sqrt{\mathrm{\Delta }/N}`$, the pattern distribution in feature space is isotropic. Irrespective of $`Q^{}`$, the corresponding generalization error is exactly the same as that of a simple perceptron learning the MMH with isotropically distributed examples in feature space. Since the inflation factor $`\mathrm{\Delta }`$ of the SVM feature space in our approach is a free parameter, it does not diverge in the thermodynamic limit $`N\rightarrow \infty `$. As a consequence, $`ϵ_g`$ does not present any stepwise behaviour, but just a crossover between a fast decrease at small $`\alpha `$ and a slower decrease at large $`\alpha `$. The results of Dietrich et al.
for the normalized mapping, which corresponds to $`\sigma ^2\mathrm{\Delta }=1`$ in our model, can be deduced by appropriately taking the limits before solving our saddle point equations. The regime where the number of training patterns $`M=\alpha N`$ scales with $`N`$, the dimension of the linear subspace, is straightforward. It is obtained by taking the limit $`\sigma \rightarrow 0`$ and $`\mathrm{\Delta }\rightarrow \infty `$ keeping $`\sigma ^2\mathrm{\Delta }=1`$ in our equations, with $`\alpha `$ finite. The regime where the number of training patterns $`M=\alpha N`$ scales with $`N\mathrm{\Delta }`$, the number of quadratic features, is obtained by keeping $`\stackrel{~}{\alpha }\equiv \alpha /(1+\mathrm{\Delta })`$ finite whilst taking, here again, the limit $`\sigma \rightarrow 0`$, $`\mathrm{\Delta }\rightarrow \infty `$ with $`\sigma ^2\mathrm{\Delta }=1`$. The corresponding curves are represented on figure 7 for the case of an isotropic teacher. In order to make the comparison with our results at finite $`\mathrm{\Delta }`$, the regime where $`\stackrel{~}{\alpha }`$ is finite is represented as a function of $`\alpha =(1+\mathrm{\Delta })\stackrel{~}{\alpha }`$ using the value of $`\mathrm{\Delta }`$ corresponding to our numerical simulations, namely $`\mathrm{\Delta }=25.5`$. In the same figure we represent the generalization error $`ϵ_g=(1/\pi )\mathrm{arccos}(R)`$ where $`R`$, given by eq. (19), is obtained after solving the saddle point equations with parameter values $`\sigma ^2=0.039`$ and $`\mathrm{\Delta }=25.5`$. These results, obtained for quadratic SVMs, are easily generalizable to higher order polynomial SVMs. The corresponding saddle point equations are cumbersome, and will not be given here. We expect a cascade of hierarchical generalization behaviour, in which successively more and more compressed features are learned. This may be understood by considering the set of saddle point equations that generalize equation (13). These equations relate the teacher-student overlaps in the successive subspaces.
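The $`47.5\%`$ fraction of teachers allowing hierarchical learning, quoted above for $`N=50`$, can be checked with a quick Monte Carlo estimate; this sketch is our own, drawing teachers uniformly on the sphere in the restricted feature space:

```python
import random

def teacher_Qstar(N, Delta, rng):
    """Q* for a teacher drawn uniformly on the sphere in the restricted
    feature space of dimension N*(1 + Delta)."""
    g = [rng.gauss(0.0, 1.0) for _ in range(N + int(N * Delta))]
    total = sum(v * v for v in g)
    return sum(v * v for v in g[N:]) / total

rng = random.Random(3)
N, Delta = 50, 25.5
Q_iso = Delta / (1 + Delta)        # isotropic teacher, 0.962 for Delta = 25.5
trials = 2000
frac = sum(teacher_Qstar(N, Delta, rng) < Q_iso for _ in range(trials)) / trials
print(round(frac, 2))              # compatible with the 47.5% quoted in the text
```

Since the distribution of $`Q^{}`$ is peaked slightly below its median at $`Q_{iso}^{}`$, the estimated fraction falls just under one half, as stated.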
The sequence of the different feature subspaces learned by the SVM depends on the relative complexity of the teacher and the student. This is contained in the factors $`\mathrm{\Delta }_{\sigma _m}^{}/\mathrm{\Delta }_m`$ corresponding to the $`m^{th}`$ subspace, which appear in the set of equations that generalize eq. (13).

## V Conclusion

We introduced a model that clarifies some aspects of the generalization properties of polynomial Support Vector Machines (SVMs) in high dimensional feature spaces. To this end, we focused on quadratic SVMs. The quadratic features, which are the pairwise products of input components, may be scaled by a normalizing factor. Depending on its value, the generalization error presents very different behaviours in the thermodynamic limit. In fact, a finite-size SVM may be characterized by two parameters: $`\mathrm{\Delta }`$ and $`\sigma `$. The inflation factor $`\mathrm{\Delta }`$ is the ratio between the quadratic and the linear features dimensions. Thus, it is proportional to the input space dimension $`N`$. The variance $`\sigma `$ of the quadratic features is related to the corresponding normalizing factor. Usually, either $`\sigma \sim 1/\sqrt{N}`$ (normalized mapping) or $`\sigma \sim 1`$ (non-normalized mapping). In previous studies, not only does the input space dimension diverge in the thermodynamic limit $`N\rightarrow \infty `$, but $`\mathrm{\Delta }`$ and $`\sigma `$ are also correspondingly scaled. In our model, neither the proportion of quadratic features $`\mathrm{\Delta }`$ nor their variance $`\sigma `$ is necessarily related to the input space dimension $`N`$. They are considered as parameters characterizing the SVMs. Since we keep them constant when taking the thermodynamic limit, we can study the learning properties of actual SVMs with finite inflation ratios and normalizing factors, as a function of $`\alpha \equiv M/N`$, where $`M`$ is the number of training examples.
Our theoretical results were obtained neglecting the correlations among the quadratic features. The agreement between our computer experiments with actual SVMs and the theoretical predictions is excellent. The effect of the correlations does not seem to be important, as there is almost no difference between the theoretical curves and the numerical results. We find that the generalization error $`ϵ_g`$ depends on the type of rule to be inferred through $`Q^{}`$, the (normalized) sum of the teacher’s squared weight components in the quadratic subspace. If $`Q^{}`$ is small enough, the quadratic components need more patterns to be learned than the linear ones. However, only if the quadratic features are normalized is $`ϵ_g`$ dominated by the high rate learning of the linear components at small $`\alpha `$. Then, on increasing $`\alpha `$, there is a crossover to a regime where the decrease of $`ϵ_g`$ becomes much slower. The crossover between these two behaviours is smoother for larger values of $`Q^{}`$, and this effect of hierarchical learning disappears for large enough $`Q^{}`$. On the other hand, if the features are not normalized, the contributions of both the linear and the quadratic components to $`ϵ_g`$ are of the same order, and there is no hierarchical learning at all. In the case of the normalized mapping, if the limits $`\mathrm{\Delta }\sim N\to \mathrm{\infty }`$ and $`\sigma ^2\sim 1/N\to 0`$ are taken together with the thermodynamic limit, the hierarchical learning effect gives rise to the two different regimes, corresponding to $`M\sim N`$ or $`M\sim N^2`$, described previously. It is worth pointing out that if the rule to be learned allows for hierarchical learning, the generalization error of the normalized mapping is much smaller than that of the non normalized one. In fact, the teachers corresponding to such rules are those with $`Q^{}\lesssim Q_{iso}^{}`$, where $`Q_{iso}^{}`$ corresponds to the isotropic teacher, the one having all its weight components equal. 
For the others, both the normalized mapping and the non normalized one present similar performances. If the weights of the teacher are selected at random on a hypersphere in feature space, the most probable teachers have precisely $`Q^{}=Q_{iso}^{}`$, and the fraction of teachers with $`Q^{}\lesssim Q_{iso}^{}`$ represents of the order of $`50\%`$ of the inferable rules. Thus, from a practical point of view, without any prior knowledge about the rule underlying a set of examples, the normalized mapping should be preferred. ## Acknowledgements It is a pleasure to thank Arnaud Buhot for a careful reading of the manuscript, and Alex Smola for providing us with the Quadratic Optimizer for Pattern Recognition program. The experimental results were obtained with the Cray-T3E computer of the CEA (project 532/1999). SR-G acknowledges financial support from the EU research contract ARG/B7-3011/94/97. MBG is a member of the CNRS.
# Exact 𝒪⁢(𝑔²⁢𝛼_𝑠) top decay width from general massive two-loop integrals ## Abstract We calculate the $`b`$-dependent self-energy of the top quark at $`𝒪(g^2\alpha _s)`$ by using a general massive two-loop algorithm proposed in a previous article. From this we derive by unitarity the $`𝒪(\alpha _s)`$ radiative corrections to the decay width of the top quark, where all effects associated with the $`b`$ quark mass are included without resorting to a mass expansion. Our results agree with the analytical results available for the $`𝒪(\alpha _s)`$ correction to the top quark width. Precision measurements of the electroweak parameters are a powerful tool for testing the validity of the standard model and searching for new physics. The impressive accuracy attained experimentally, which is also expected to improve in the future, makes a complete two-loop analysis of the data necessary. However, on the theoretical side, such a complete two-loop electroweak analysis is far from being a simple task. Apart from the proliferation of diagrams, a special kind of technical problem is encountered in electroweak two-loop calculations. These processes involve particles with different masses, and in general need to be evaluated at finite external momenta. It has been known already for some time that general massive two-loop Feynman diagrams, when evaluated at nonvanishing external momenta, lead to complicated and often unknown special functions. See, for instance, ref. which relates the sunset two-point topology to the Lauricella function. Recent progress in two-loop electroweak analyses was mainly attained by using mass or momentum expansion methods. We note that in some situations mass expansions turn out to converge well, such as the top mass expansion of the $`\alpha _s`$ corrections to the $`Zb\overline{b}`$ process , while in other cases subleading terms are known to be substantial . 
In certain situations it was possible to recover the exact function starting from an expansion. On the other hand, when the process under consideration involves more than one ratio of masses, recovering the exact result from an expansion would be difficult, the functions involved being complicated. In order to circumvent the use of a mass or momentum expansion, we proposed in ref. a general framework for treating two-loop Feynman graphs by a combination of analytical and numerical methods. Given a Feynman diagram, its tensorial structure is first reduced analytically into a combination of ten special functions $`h_i`$, which are defined by fairly simple integral representations. The result of this analytical reduction is then integrated numerically. A subset of the ten special functions $`h_i`$, namely $`h_1`$ and $`h_2`$, is sufficient for treating two-loop corrections in a theory involving only scalars and no derivative couplings. This is the case for radiative corrections of enhanced electroweak strength in the standard model, and in ref. this method was used for deriving corrections of leading power in the Higgs mass. The resulting radiative corrections agree with independent calculations which use different techniques. Going from a scalar theory to a full renormalizable theory with fermions and derivative couplings, such as the standard model, greatly increases the complexity of the problem. The analytical reduction of the tensor structure of a graph into $`h_i`$ functions along the lines of ref. needs to be handled by a symbolic manipulation program such as FORM or Schoonschip, and results in rather lengthy expressions. Once the reduction is performed, the numerical integration methods are the same as those used in ref., with the only difference that more computing power is needed to handle the number of $`h_i`$ functions which need to be integrated. In ref. 
we have shown how these methods can be used for calculating momentum derivatives of two-loop two-point functions around a finite, non-zero value of the external momentum. Such momentum derivatives are encountered for instance when evaluating wave function renormalization constants on-shell. In this letter we show how this method can be used for calculating two-point functions of physical interest. We note that, due to the general nature of the tensor reduction algorithm given in ref. , the inclusion of more than two external legs proceeds in the same way. The difference is that the resulting expressions are more complicated than those stemming from two-point functions, and more computing power is needed for performing higher-dimensional numerical integration. In figure 1 we show the two-loop Feynman graphs relevant for the $`b`$-dependent self-energy of the top quark at order $`g^2\alpha _s`$. As for the counterterm structure, we only show in figure 1 the counterterm diagrams which are necessary for renormalizing the imaginary part of the self-energy. These are the on-shell one-loop QCD counterterms of the $`b`$-quark mass $`\delta m_b`$ and of the top wave function renormalization constant $`\delta Z_t`$: $`\delta m_b`$ $`=`$ $`2\alpha _sN_c\pi ^2m_b\left\{{\displaystyle \frac{3}{ϵ}}+{\displaystyle \frac{3}{2}}\left[\gamma +\mathrm{log}\pi +\mathrm{log}\left({\displaystyle \frac{m_b^2}{\mu ^2}}\right)\right]2\right\}`$ (1) $`\delta Z_t`$ $`=`$ $`\alpha _sN_c\pi ^2\left[{\displaystyle \frac{2}{ϵ}}+\gamma +\mathrm{log}\pi +\mathrm{log}\left({\displaystyle \frac{m_t^2}{\mu ^2}}\right)+2\mathrm{log}\left({\displaystyle \frac{m_t^2}{m_g^2}}\right)4\right]`$ Here, $`\mu `$ is the ’t Hooft mass scale, and $`m_g`$ is the gluon mass infrared regulator. $`N_c=4/3`$ is the color factor. The imaginary part of the two-loop self-energy on-shell is physical. 
Writing the top self-energy as $`\mathrm{\Sigma }(p\gamma )=\mathrm{\Sigma }_1(p\gamma )+\gamma _5\mathrm{\Sigma }_{\gamma _5}(p\gamma )`$, the top decay width is given by $`\mathrm{\Gamma }_t=2Im\mathrm{\Sigma }_1(p\gamma =m_t)`$. This can be compared to known analytical and numerical results for the $`𝒪(\alpha _s)`$ QCD corrections to the top decay width , and thus provides a nontrivial test of the two-loop algorithm. The algebraic reduction of the tensorial structure of the graphs was done by implementing the reduction formulae of ref. into a symbolic manipulation program. The resulting intermediary decomposition into $`h_i`$ functions is too lengthy to reproduce here. Instead, we give the final results obtained after numerical integration over the analytical decomposition. Here we would like to discuss briefly the treatment of infrared divergence associated with the massless gluon. To regularize the infrared singularities, we introduce a mass regulator for the gluon. We note that in higher-order QCD calculations introducing a gluon mass would affect the Slavnov-Taylor identities. In such a case, a different approach is needed to treat the infrared singular diagrams. One possibility is to try to extract analytically the IR singular part of the diagram and to treat this by dimensional regularization, while the remaining IR finite part of the diagram can be calculated by numerical integration. Whether or not this can always be done in a simple way is beyond the scope of this article. We also note that our approach is mainly designed to treat massive graphs, while other approaches are available for massless QCD calculations. However, this being an $`𝒪(g^2\alpha _s)`$ correction, the use of a gluon mass regulator is legitimate in this case. 
The physical quantity $`Im\mathrm{\Sigma }_1(p\gamma =m_t)`$, which is associated with the top quark width, is free of infrared singularities, and we checked numerically that the result indeed becomes independent of the gluon regulator when the regulator mass is much smaller than the $`b`$ mass. In figure 2 we give numerical results for the ultraviolet finite part of the quantity $`\mathrm{\Sigma }_1(p\gamma =m_t)`$ derived from figure 1; the inclusion of the counterterm contributions shown in figure 1 makes only the imaginary part (which is physical) of the self-energy finite, both UV and IR. We plot the results for a range of the top mass, and assume $`G_F=1.16637\times 10^{-5}`$ GeV<sup>-2</sup>, $`m_W=80.41`$ GeV, $`m_b=4.7`$ GeV, and $`\alpha _s(m_t)=0.108`$. In table 1 we give numerical values for the $`𝒪(\alpha _s)`$ correction to the top decay rate $`t\to W+b`$ obtained from the imaginary part of the self-energy. Therefore, the $`𝒪(\alpha _s)`$ QCD correction is obtained as an inclusive quantity, integrated over the gluon spectrum. To check the effect of the finite $`b`$ quark mass, we ran our programs in the vanishing $`b`$ mass limit. At tree level the finite $`b`$ mass amounts to a 3-4 MeV correction to the width, and in the $`𝒪(\alpha _s)`$ correction the effect is negligible. These results agree with existing calculations of $`𝒪(\alpha _s)`$ corrections to $`t\to W+b`$, which provides a good test of the two-loop tensor decomposition and numerical integration algorithm. To conclude, we have shown that the general methods proposed in ref. can be used to calculate physical radiative corrections. We treated at two-loop order the $`b`$-mass dependent self-energy of the top quark at $`𝒪(g^2\alpha _s)`$. From the imaginary part of the self-energy we extract the $`𝒪(\alpha _s)`$ corrections to the top decay process $`t\to b+W`$, and find agreement with existing results for this process. 
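For orientation only (these are not the paper's exact numbers, which keep $`m_b`$ and $`m_W`$ effects in full), the tree-level width that the $`𝒪(\alpha _s)`$ correction multiplies can be sketched from the standard formula with $`m_b=0`$ and $`|V_{tb}|=1`$, together with the well-known correction factor valid in the additional $`m_W\to 0`$ limit:

```python
import math

G_F = 1.16637e-5    # GeV^-2, Fermi constant (value quoted in the text)
m_W = 80.41         # GeV    (value quoted in the text)

def gamma_tree(mt, mw=m_W, gf=G_F):
    """Tree-level Gamma(t -> W b) with m_b = 0 and |V_tb| = 1."""
    xw = (mw / mt) ** 2
    return gf * mt**3 / (8.0 * math.pi * math.sqrt(2.0)) * (1.0 - xw)**2 * (1.0 + 2.0 * xw)

def qcd_factor(alpha_s):
    """O(alpha_s) correction factor in the m_W -> 0, m_b -> 0 limit."""
    return 1.0 - (2.0 * alpha_s / (3.0 * math.pi)) * (2.0 * math.pi**2 / 3.0 - 5.0 / 2.0)

print(f"Gamma_tree(175 GeV) = {gamma_tree(175.0):.3f} GeV")  # about 1.56 GeV
print(f"QCD factor          = {qcd_factor(0.108):.3f}")      # about 0.91
```

With finite $`m_W`$ the coefficient of the correction changes somewhat; quantifying that, including the $`b`$ mass, is precisely what the exact calculation in the text does.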
The calculation is performed while respecting the actual mass and momentum kinematics of the process, and without resorting to a mass or momentum expansion of the diagrams. Acknowledgements The work of A.G. was supported by the US Department of Energy. The work of Y.-P. Y. was supported in part by the US Department of Energy.
# An Abel ordinary differential equation class generalizing known integrable classes ## 1 Introduction Abel-type differential equations of the first kind are of the form $$y^{}=f_3y^3+f_2y^2+f_1y+f_0,$$ (1) where $`y\equiv y(x)`$ and $`f_i`$ are arbitrary functions of $`x`$. Abel equations appear in the reduction of order of many second and higher order families, and hence are frequently found in the modelling of real problems in varied areas. A general “exact integration” strategy for these equations was first formulated by Liouville and is based on the concepts of classes, invariants and the solution of the equivalence problem. Generally speaking, two Abel equations of the first kind belong to the same equivalence class if and only if one can be obtained from the other by means of a transformation of the form $$\{x=F(t),y=P(t)u+Q(t)\},$$ (2) where $`t`$ and $`u\equiv u(t)`$ are respectively the new independent and dependent variables, and $`F`$, $`P`$ and $`Q`$ are arbitrary functions satisfying $`F^{}\ne 0`$ and $`P\ne 0`$. By changing variables $`\{x=t,y=\left(g_1u+g_0\right)^{-1}\}`$, where $`\{g_1,g_0\}`$ are arbitrary functions of $`t`$, Abel equations of the first kind can always be written in second-kind format $$y^{}=\frac{\stackrel{~}{f_3}y^3+\stackrel{~}{f_2}y^2+\stackrel{~}{f_1}y+\stackrel{~}{f_0}}{g_1y+g_0}.$$ (3) Abel equations of the second kind belong to the same class as their first-kind partners. However, due to the arbitrariness introduced when switching from first to second kind, the transformation preserving the class for the latter becomes $$\{x=F(t),y=\frac{P_1(t)u+Q_1(t)}{P_2(t)u+Q_2(t)}\},$$ (4) where $`P_1Q_2-P_2Q_1\ne 0`$. There are infinitely many Abel equation classes, and their classification is performed by means of algebraic expressions invariant under (2), the so-called invariants, built with the coefficients $`f_i`$ in (1) and their derivatives. 
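The statement that transformations of the form (2) preserve the first-kind Abel form can be checked symbolically; a minimal sympy sketch (our verification, with $`F`$, $`P`$, $`Q`$ and the $`f_i`$ kept generic):

```python
import sympy as sp

t, u = sp.symbols('t u')
F, P, Q = (sp.Function(n)(t) for n in ('F', 'P', 'Q'))
f3, f2, f1, f0 = (sp.Function(n) for n in ('f3', 'f2', 'f1', 'f0'))

# Right-hand side of (1) evaluated at x = F(t), y = P(t)*u + Q(t):
y = P*u + Q
rhs = f3(F)*y**3 + f2(F)*y**2 + f1(F)*y + f0(F)

# From dy/dx = (P*u' + P'*u + Q')/F', solve for u':
u_prime = sp.expand((sp.diff(F, t)*rhs - sp.diff(P, t)*u - sp.diff(Q, t)) / P)

# u' is again a cubic polynomial in u, i.e. a first-kind Abel equation,
# with leading coefficient F'(t) * f3(F(t)) * P(t)**2:
print(sp.expand(sp.diff(u_prime, u, 4)))  # 0, so the degree in u is at most 3
print(sp.simplify(sp.diff(u_prime, u, 3)/6 - sp.diff(F, t)*f3(F)*P**2))  # 0
```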
When the invariants of a given Abel equation are constant, its integration is straightforward: the equation can be transformed into a separable equation as explained in textbooks. On the contrary, when the invariants are non-constant, the integration strategy relies on recognizing the equation as equivalent to one of a set of previously known integrable equations, and then applying the equivalence transformation to that equation. However, for non-constant-invariant Abel equations, only a few integrable classes are known. In a recent work, for instance, a classification of all integrable cases presented by Abel, Liouville and others, including examples from Kamke’s book, showed, in all, only four classes depending on one parameter and seven classes depending on no parameters. In this work, we present a single multi-parameter Abel equation class (AIA<sup>1</sup><sup>1</sup>1The acronyms AIA, AIR and AIL are explained below.) generalizing all the integrable classes collected in . In addition, AIA contains as a particular case a new subclass (AIR), depending on 6 parameters, all of whose members can be systematically transformed into Riccati-type equations. This AIR class, in turn, includes a 4-parameter fully integrable subclass (AIL). We will also see in §2.3 that many representatives of subclasses of this AIA class can be mapped into non-trivial Abel equations belonging to different subclasses. Hence, if the member being mapped is solvable, it can be used to generate a different, maybe new, solvable and non-constant-invariant class, as will be shown in §3. Due to its simplicity and the potential for preparing computer algebra routines to solve the related equivalence problem, the material being presented appears to us as a convenient starting point for a more systematic determination of exact solutions for Abel equations. 
Another important differential equation problem, complementary to the one discussed in this paper, is the one where a first-order equation $`y^{}=\mathrm{\Phi }(x,y)`$, where $`\mathrm{\Phi }`$ is arbitrary, can be transformed into separable form by means of linear transformations (2). This problem is a generalization of the case where $`\mathrm{\Phi }`$ is cubic in $`y`$ and the Abel equation has constant invariant. An algorithmic approach for solving this problem is presented in the preceding paper in this issue. ## 2 The AIL, AIR and AIA classes As mentioned in the introduction, the whole collection of integrable classes presented in , consisting of four 1-parameter classes and 7 classes without parameters, can be obtained by assigning particular values to the parameters of a single multi-parameter Abel class, AIA. In turn, AIA contains a subclass, AIR, all of whose members can be transformed into Riccati-type equations, and inside AIR there is a fully integrable subclass (AIL) all of whose members can be mapped into first-order linear equations. Since all these classes are obtained by means of the same procedure, to better illustrate the ideas we discuss first this AIL (Abel, Inverse-Linear) subclass. So, consider the general linear equation $`y^{}+g(x)y+f(x)=0`$, where $`f`$ and $`g`$ are arbitrary, after changing variables by means of the inverse transformation $`\{x\leftrightarrow y\}`$:<sup>2</sup><sup>2</sup>2By $`\{x\leftrightarrow y\}`$ we mean changing variables $`\{x=u,y=t\}`$ followed by renaming $`\{u\to y,t\to x\}`$. $$y^{}=\frac{1}{g(y)x+f(y)}.$$ (5) An implicit solution to this equation is easily expressed in terms of quadratures as $$C_1=x\mathrm{exp}\left(-\int g(y)dy\right)-\int \mathrm{exp}\left(-\int g(y)dy\right)f(y)dy.$$ (6) The key observation here is that (5) will be of second-kind Abel type for many choices of $`f`$ and $`g`$. 
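The implicit solution can be verified without specializing $`f`$ and $`g`$: under $`\{x\leftrightarrow y\}`$, (5) is equivalent to the linear equation $`dx/dy=g(y)x+f(y)`$, whose integrating-factor solution is exactly the quadrature above. A sympy sketch of this check (ours):

```python
import sympy as sp

y, C1 = sp.symbols('y C1')
f, g = sp.Function('f'), sp.Function('g')

# Inverting y' = 1/(g(y)*x + f(y)) gives the linear equation dx/dy = g(y)*x + f(y).
# Its general solution, via the integrating factor exp(-Integral(g)), is:
E = sp.exp(sp.Integral(g(y), y))
x = E*(C1 + sp.Integral(f(y)/E, y))

residual = sp.diff(x, y) - g(y)*x - f(y)
print(sp.simplify(residual))  # 0
```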
For instance, by taking $$f(y)=\frac{s_0y+r_0}{a_3y^3+a_2y^2+a_1y+a_0},g(y)=\frac{s_1y+r_1}{a_3y^3+a_2y^2+a_1y+a_0},$$ (7) where $`\{s_1,s_0,r_1,r_0,a_i\}`$ are arbitrary constants, the resulting Abel family is $$y^{}=\frac{a_3y^3+a_2y^2+a_1y+a_0}{\left(s_1x+s_0\right)y+r_1x+r_0}.$$ (8) This equation has non-constant invariant, and can be seen as a representative of a non-trivial multi-parameter class, and from its connection with a linear equation, its general solution is obtained directly from Eqs.(6) and (7). As shown in the next subsection, four of the eight parameters in (8) are superfluous, and this number can be reduced further by splitting the class into subclasses. Even so, the class is surprisingly large, including multi-parameter subclasses we have not found elsewhere.<sup>3</sup><sup>3</sup>3We noted afterwards, however, that (8) could also be obtained using a different approach; for instance by following Olver and considering (1) as an “inappropriate reduction” of a second-order equation that has a solvable non-Abelian Lie algebra; the resulting restrictions on the coefficients in (1) surprisingly lead to a family of the same class as (8). Among others, the class represented by (8) contains as particular cases the two integrable 1-parameter classes related to Abel’s work, and most of the examples found in Kamke’s book as well as in other textbooks. We note that if instead of starting with a linear equation we were to start with a Bernoulli equation, instead of (8) we would obtain $$y^{}=\frac{a_3y^3+a_2y^2+a_1y+a_0}{\left(s_1x+s_0x^\lambda \right)y+r_1x+r_0x^\lambda },$$ (9) which is reducible to (8) by changing $`\{x=t^{1/(1-\lambda )},y=u\}`$ followed by a redefinition of the constants $`c_i=(1-\lambda )a_i`$, and so it belongs to the same class as (8). 
However, if we start with a Riccati equation instead of a Bernoulli equation, and hence instead of (5) we consider $$y^{}=\frac{1}{h(y)x^2+g(y)x+f(y)},$$ (10) and then choose $`f(y)`$, $`g(y)`$ and $`h(y)`$ as in (7), we obtain a 10-parameter $`\{s_i,r_i,a_k\}`$ Abel type family (AIR, meaning Abel, Inverse-Riccati) $$y^{}=\frac{a_3y^3+a_2y^2+a_1y+a_0}{\left(s_2x^2+s_1x+s_0\right)y+r_2x^2+r_1x+r_0},$$ (11) which becomes a Riccati-type equation by changing $`\{x\leftrightarrow y\}`$. Four of these ten parameters are superfluous. The remaining 6-parameter class includes as particular cases the parameterized families presented by Liouville and Appell, as well as other classes depending on no parameters shown in Kamke and having solutions in terms of special functions. We note that (8) is also a particular case of (11). Finally, the same ideas can be used to construct a more general Abel class (AIA, meaning Abel, Inverse-Abel) embracing the previous classes Eqs.(8) and (11) as particular cases. This AIA family is obtained by taking as starting point an Abel equation of the second kind (3) and, taking the coefficients $`f_i`$ of the form (7), arriving at $$y^{}=\frac{\left(g_1a_3x+g_0a_3\right)y^3+\left(g_1a_2x+g_0a_2\right)y^2+\left(g_1a_1x+g_0a_1\right)y+g_1a_0x+g_0a_0}{\left(s_3x^3+s_2x^2+s_1x+s_0\right)y+r_3x^3+r_2x^2+xr_1+r_0}.$$ (12) Taking now $`g_1`$ and $`g_0`$ as constants and redefining $`g_1a_i\to a_i`$, $`g_0a_i\to b_i`$, instead of (11) we obtain the 16-parameter Abel family $$y^{}=\frac{\left(a_3x+b_3\right)y^3+\left(a_2x+b_2\right)y^2+\left(a_1x+b_1\right)y+a_0x+b_0}{\left(s_3x^3+s_2x^2+s_1x+s_0\right)y+r_3x^3+r_2x^2+r_1x+r_0}.$$ (13) Remarkably, by changing $`\{x\leftrightarrow y\}`$, we arrive at an equation of exactly the same type, $$y^{}=\frac{\left(s_3x+r_3\right)y^3+\left(s_2x+r_2\right)y^2+\left(s_1x+r_1\right)y+s_0x+r_0}{\left(a_3x^3+a_2x^2+a_1x+a_0\right)y+b_3x^3+b_2x^2+b_1x+b_0}.$$ (14) Although we are not aware of a method for solving this AIA class for arbitrary values of its 
16 parameters, by applying the change of variables $`\{x\leftrightarrow y\}`$ to representatives of solvable subclasses of AIA, one may obtain representatives of non-trivial new solvable Abel classes (see §2.3). Due to this feature, and since AIA already contains as a particular case the AIR class and hence AIL (Eqs.(11) and (8)), this AIA class generalizes in one single class all the solvable classes presented in , collected from the literature (see §3). ### 2.1 The intrinsic parameter-dependence of the classes AIL and AIR For the purposes of finding exact solutions to Abel equations, and in so doing, solving the equivalence problem with respect to the classes represented by (8) and to some extent (11), it is relevant to determine on how many parameters these classes intrinsically depend. In fact, the technique we have been using for tackling the equivalence problem relies heavily on the calculation of multivariate resultants, and even for classes depending on just one parameter these calculations require the use of special techniques. Although the procedure presented below does not exhaust the possible reduction of the number of parameters found in (8) and (11), it suffices to reduce this number by four. The idea is to explore fractional linear transformations of the form $$\{x=t,y(x)=\frac{au+b}{cu+d}\}.$$ (15) We note that this transformation does not change the degrees in $`x`$ of the denominators of the right-hand sides of (8) and (11). Keeping that fact in mind, we rewrite the representatives of the AIL and AIR classes as $$y^{}=\frac{\left(y-\alpha _0\right)\left(y-\alpha _1\right)\left(y-\alpha _2\right)}{G_1(x)y+G_0(x)},$$ (16) where $`\{G_1(x),G_0(x)\}`$ are polynomials in $`x`$ of the same degree (the ones implied by either (8) or (11)). 
By changing variables $`\left\{x=t,y=u(t)+\alpha _0\right\}`$, (16) becomes $$y^{}=\frac{y\left(y-\mathrm{\Delta }_{1,0}\right)\left(y-\mathrm{\Delta }_{2,0}\right)}{G_1(x)y+\stackrel{~}{G_0}(x)},$$ (17) where $`\mathrm{\Delta }_{i,0}\equiv \alpha _i-\alpha _0`$ and $`\stackrel{~}{G_0}(x)`$ is a polynomial in $`x`$ of the same degree as $`G_0(x)`$. Three cases now arise, depending on whether three, two or none of the roots $`\alpha _i`$ are equal. Case 1: $`\alpha _0=\alpha _1=\alpha _2`$ In such a case, (16) already depends on four fewer parameters than (8) or (11); a further change of variables $`\{x=t,y=1/u\}`$ transforms (16) to the form $$y^{}=\frac{1}{M_1(x)y+M_0(x)},$$ (18) where $`\{M_1(x),M_0(x)\}`$ are polynomials in $`x`$ of the same degree as $`\{G_1(x),G_0(x)\}`$. Case 2: $`\alpha _0=\alpha _1\ne \alpha _2`$ In this case, $`\mathrm{\Delta }_{1,0}=0`$, so that changing variables $$\{x=t,y=\frac{1}{u(t)+\mathrm{\Delta }_{2,0}^{-1}}\}$$ in (17) leads to an equation of the form $$y^{}=\frac{y}{\stackrel{~}{M_1}(x)y+\stackrel{~}{M_0}(x)},$$ (19) where $`\{\stackrel{~}{M_1}(x),\stackrel{~}{M_0}(x)\}`$ are polynomials in $`x`$ of the same degree as $`\{G_1(x),G_0(x)\}`$. Note that (19) also depends on four fewer parameters than (8) or (11). Case 3: $`\alpha _0\ne \alpha _1\ne \alpha _2`$ In this case, $`\mathrm{\Delta }_{1,0}\mathrm{\Delta }_{2,0}\ne 0`$, and by changing variables $$\{x=t,y=\frac{1}{\left(\mathrm{\Delta }_{2,0}^{-1}-\mathrm{\Delta }_{1,0}^{-1}\right)u(t)+\mathrm{\Delta }_{1,0}^{-1}}\}$$ in (17) we obtain $$y^{}=\frac{y(y-1)}{\stackrel{~}{\stackrel{~}{M_1}}(x)+\stackrel{~}{\stackrel{~}{M_0}}(x)y},$$ (20) where $`\{\stackrel{~}{\stackrel{~}{M_1}}(x),\stackrel{~}{\stackrel{~}{M_0}}(x)\}`$ are polynomials in $`x`$ of the same degree as $`\{G_1(x),G_0(x)\}`$ and so (20) too depends on four fewer parameters than either (8) or (11). 
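The Case 3 substitution is a Möbius map sending $`u=0`$, $`u=1`$ and $`u=\mathrm{\infty }`$ to the three roots $`\alpha _1`$, $`\alpha _2`$ and $`\alpha _0`$ respectively, which is why the cubic numerator collapses to the $`y(y-1)`$ form of (20). A sympy check of this (ours; the inverse differences $`\mathrm{\Delta }_{i,0}^{-1}`$ are written as 1/d below):

```python
import sympy as sp

u = sp.symbols('u')
al0, al1, al2 = sp.symbols('alpha0 alpha1 alpha2')
d1, d2 = al1 - al0, al2 - al0          # Delta_{1,0} and Delta_{2,0}

# Case 3 change of variables:
D = (1/d2 - 1/d1)*u + 1/d1
y = al0 + 1/D

# The cubic numerator of (16), re-expressed in u, is proportional to u*(u - 1):
cubic = (y - al0)*(y - al1)*(y - al2)
ratio = sp.simplify(cubic * D**3 / (u*(u - 1)))
print(ratio)  # a u-independent constant, equal to (d1 - d2)**2/(d1*d2)
```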
### 2.2 Splitting of AIL into cases Since the AIL class is fully solvable, it makes sense, for the purpose of tackling its related equivalence problem, to consider the further maximal reduction in the number of parameters, and hence completely split the class into a set of non-intersecting subclasses, all of which depend intrinsically on a minimal number of parameters. With this motivation, we performed some algebraic manipulations, finally determining that the AIL class actually consists of two different subclasses, respectively depending on two and one parameters. To arrive at convenient representatives for these two subclasses of AIL, we start by changing variables<sup>4</sup><sup>4</sup>4We are assuming $`\omega \equiv r_1s_0-r_0s_1\ne 0`$ and $`s_1\ne 0`$, which is justified, since when $`\omega =0`$ (8) has constant invariant, therefore presenting no interest, and when $`s_1=0`$ it can be transformed into an equation of the form (26) anyway by means of $`\{x=t/r_1,y=(u-r_0)/s_0\}`$. in (8) according to $$\{x=\frac{t}{s_1^2},y=\frac{r_0s_1^2+r_1u}{\left(s_1s_0+u\right)s_1}\},$$ (21) and introducing new parameters $`\{k_0,k_1,k_2,k_3\}`$ according to $`k_0`$ $`=`$ $`{\displaystyle \frac{\left(\left(a_2r_0^2+\left(s_0a_0-a_1r_0\right)s_0\right)s_0-a_3r_0^3\right)s_1^2}{\left(r_1s_0-r_0s_1\right)^2}},`$ (22) $`k_1`$ $`=`$ $`{\displaystyle \frac{\left(a_2s_1-3a_3r_1\right)r_0^2+\left(\left(2r_1a_2-2a_1s_1\right)r_0+\left(3a_0s_1-a_1r_1\right)s_0\right)s_0}{\left(r_1s_0-r_0s_1\right)^2}},`$ (23) $`k_2`$ $`=`$ $`{\displaystyle \frac{\left(\left(2a_2s_1-3a_3r_1\right)r_1-a_1s_1^2\right)r_0+\left(3s_1^2a_0+\left(r_1a_2-2a_1s_1\right)r_1\right)s_0}{s_1^2\left(r_1s_0-r_0s_1\right)^2}},`$ (24) $`k_3`$ $`=`$ $`{\displaystyle \frac{a_0s_1^3+\left(\left(a_2s_1-a_3r_1\right)r_1-a_1s_1^2\right)r_1}{s_1^4\left(r_1s_0-r_0s_1\right)^2}}.`$ (25) 
Thus (8) is transformed into $$y^{}=\frac{k_3y^3+k_2y^2+k_1y+k_0}{y+x},$$ (26) whose solution can be obtained directly from Eqs.(6) and (7) by taking $`s_1=0,r_1=1,s_0=1,r_0=0`$ and $`a_i\to k_i`$. For classification purposes (see §3), it is convenient to write (26) in first-kind form by changing variables $`\{x=t,y=\frac{1}{u}-t\}`$, leading to $$y^{}=(k_3x^3-k_2x^2+k_1x-k_0)y^3-(3k_3x^2-2k_2x+k_1+1)y^2+(3k_3x-k_2)y-k_3.$$ (27) Two different cases now arise, leading to two non-intersecting subclasses. Case $`k_3\ne 0`$ By redefining $`k_3=-k_4^2`$, then changing variables in (26) according to $$\{x=\frac{k_2+3tk_4}{3k_4^2},y=\frac{1}{k_4u}+\frac{k_2+3tk_4}{3k_4^2}\},$$ (28) and next redefining $`\{k_4,k_0,k_1\}`$ in terms of new parameters $`\{\alpha ,\beta ,\gamma \}`$ according to $$k_4=\frac{\beta }{\gamma },k_0=\frac{k_2^3\gamma ^4}{27\beta ^4}-\frac{k_2\alpha \gamma ^2}{3\beta ^2}+\gamma ,k_1=\alpha -\frac{k_2^2\gamma ^2}{3\beta ^2},$$ we obtain a 2-parameter representative of the class, already in first-kind format: $$y^{}=\left(x-\alpha -\beta x^3\right)y^3+\left(3x^2-1-\alpha \right)y^2-3xy+1.$$ (29) Case $`k_3=0`$ This other branch of (26) splits into two subcases: $`k_2=0`$ and $`k_2\ne 0`$. In the former case (26) becomes constant-invariant, and is thus of no interest. When $`k_2\ne 0`$, by introducing a new parameter $`\alpha `$ by means of $$\alpha =k_2k_0-\frac{k_1^2}{4},$$ (30) and changing variables in (26) according to $$\{x=\frac{k_1-2t}{2k_2},y=\frac{2t-k_1}{2k_2}-\frac{1}{k_2u}\},$$ (31) one obtains a simpler representative for this class, depending on just one parameter $`\alpha `$: $$y^{}=\left(\alpha +x^2\right)y^3-\left(2x+1\right)y^2+y.$$ (32) ### 2.3 Generating new integrable classes from solvable members of AIA The motivation for this work was to try to unify the integrable Abel classes we have seen in the literature. 
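As a check on the reduction just performed in §2.2, the passage from (26) to its first-kind form (27) under $`\{x=t,y=1/u-t\}`$ can be verified mechanically; in the sympy sketch below (ours), the target polynomial is (27) written with all signs explicit:

```python
import sympy as sp

x = sp.symbols('x')
u = sp.Function('u')(x)
k0, k1, k2, k3 = sp.symbols('k0:4')

y = 1/u - x                                    # the change of variables {x = t, y = 1/u - t}
lhs = sp.diff(y, x)
rhs = (k3*y**3 + k2*y**2 + k1*y + k0)/(y + x)  # equation (26)

u_prime = sp.solve(sp.Eq(lhs, rhs), sp.diff(u, x))[0]

# Equation (27), with (t, u) renamed back to (x, y):
target = ((k3*x**3 - k2*x**2 + k1*x - k0)*u**3
          - (3*k3*x**2 - 2*k2*x + k1 + 1)*u**2
          + (3*k3*x - k2)*u - k3)
print(sp.simplify(sp.expand(u_prime - target)))  # 0
```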
This goal has been partially accomplished with the formulation of the AIL and AIR Abel classes, but there are still other integrable classes, not included in AIR, which however all have the following property: these Abel classes have representatives which could be obtained by changing variables $`\{x\leftrightarrow y\}`$ in the representative of other Abel classes. In this sense, these representatives are both of Abel and inverse-Abel types, which led us to ask the following question: > Which Abel classes lead to other Abel classes by applying the inverse transformation $`\{x\leftrightarrow y\}`$ to one of their representatives? We have formulated the answer to this question in terms of the following proposition and its corollary. Proposition: If one Abel equation, $`\alpha `$, maps into another Abel equation, $`\beta `$, by means of the inverse transformation $`\{x\leftrightarrow y\}`$, then both are of the form (13). Proof: By hypothesis $`\beta `$ is both of Abel and inverse-Abel type, that is, it is of the form $`y^{}=G(x,y)`$, where $`G`$ is both cubic over linear in $`y`$ (the Abel condition) and linear over cubic in $`x`$ (the inverse-Abel condition). Hence, $`G`$ is a rational function of $`x`$ and $`y`$, with numerator cubic in $`y`$ and linear in $`x`$, and denominator cubic in $`x`$ and linear in $`y`$, and so $`\beta `$ is in fact of the form (13). Corollary: If a given Abel class is related through the inverse transformation to another Abel class then both classes have representatives of the form (13). One consequence of this proposition is that AIA, (13), is in fact the most general equation which is both Abel and inverse-Abel; similarly AIR, (11), is the most general which is both Abel and inverse-Riccati; and AIL, (8), is the most general which is both Abel and inverse-linear. It is worth mentioning here that an Abel class may have many different representatives of the form (13). 
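The swap symmetry behind the proposition, namely that $`\{x\leftrightarrow y\}`$ turns the general form (13) into (14), can be confirmed by a short computation (ours):

```python
import sympy as sp

x, y = sp.symbols('x y')
a = sp.symbols('a0:4'); b = sp.symbols('b0:4')
s = sp.symbols('s0:4'); r = sp.symbols('r0:4')

num13 = sum((a[i]*x + b[i])*y**i for i in range(4))
den13 = sum(s[i]*x**i for i in range(4))*y + sum(r[i]*x**i for i in range(4))

# Under {x <-> y} the slope inverts and the variables swap names:
swapped = (den13/num13).subs({x: y, y: x}, simultaneous=True)

num14 = sum((s[i]*x + r[i])*y**i for i in range(4))
den14 = sum(a[i]*x**i for i in range(4))*y + sum(b[i]*x**i for i in range(4))

print(sp.simplify(swapped - num14/den14))  # 0, i.e. the swapped equation is exactly (14)
```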
Consequently, for instance, the AIL integrable class represented by (26), which naturally maps into a linear equation by means of $`\{x\leftrightarrow y\}`$, also contains members which map into non-trivial Abel equations by means of the same transformation. As an example of this, consider equation 151 of Kamke’s book: $$y^{}=\frac{1-2xy+y^2-2y^3x}{x^2+1}.$$ (33) This equation has the form (13) and, by changing variables $`\{x\leftrightarrow y\}`$, it leads to a non-trivial new Abel class; nevertheless it belongs to the AIL class (see (46)). In other words, the class represented by this equation has a representative of the form (13) different from (33), which, by means of $`\{x\leftrightarrow y\}`$, maps into a linear equation. An example where the same thing happens with a member of the AIR class is given by $$y^{}=2xy^2+y^3.$$ (34) This equation was presented by Liouville; it is of the form (13) and by means of $`\{x\leftrightarrow y\}`$ leads to another non-trivial Abel class. In addition, (34) belongs to the AIR class, so that it has another representative of the form (13), which by means of $`\{x\leftrightarrow y\}`$ maps into a Riccati equation. Some other examples illustrating how new solvable classes can be obtained from solvable members of AIA by changing variables $`\{x\leftrightarrow y\}`$ are shown in the next section. ## 3 AIA: a generalization of known integrable classes As mentioned in the Introduction, all the solvable classes collected in , four depending on one parameter, labelled A, B, C and D, and another seven not depending on parameters, labelled 1 to 7, are particular members of the class represented by (13). The “classification” of these solvable classes as particular members of AIL, AIR or AIA is as follows: Although this classification shown in Table 1 is not difficult to verify, it is worthwhile to show it explicitly. 
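The two degree conditions that characterize the form (13) are easy to test mechanically on examples like (33); in the sketch below (ours), Kamke's equation 151 is entered with its signs written out:

```python
import sympy as sp

x, y = sp.symbols('x y')

# Kamke's equation 151: y' = (1 - 2*x*y + y**2 - 2*x*y**3)/(x**2 + 1)
num = 1 - 2*x*y + y**2 - 2*x*y**3
den = x**2 + 1

# Form (13): numerator cubic in y and linear in x; denominator linear in y and at most cubic in x.
print(sp.degree(num, y), sp.degree(num, x), sp.degree(den, y), sp.degree(den, x))  # 3 1 0 2

# Applying {x <-> y} therefore yields another Abel equation (here quadratic over linear in y):
swapped = (den/num).subs({x: y, y: x}, simultaneous=True)
snum, sden = sp.fraction(sp.together(swapped))
print(sp.degree(snum, y), sp.degree(sden, y))  # 2 1
```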
Starting with the parameterized class by Abel , shown in as “Class A”, $$y^{}=\left(\alpha x+\frac{1}{x}+\frac{1}{x^3}\right)y^3+y^2,$$ (35) where $`\alpha `$ is the parameter. This equation can be obtained from (27) by taking $`\{k_3=2\alpha ,k_2=1,k_1=1/2,k_0=0\}`$ and changing variables $$\{x=\frac{t^2}{2},y=2\frac{u+t}{t^3}\}.$$ So, (35) is a member of AIL. Concerning the 1-parameter class by Liouville , labelled in as class “B”, $$y^{}=2\left(x^2-\alpha \right)y^3+2\left(x+1\right)y^2,$$ (36) this equation is obtained from (11) by taking $`\{s_2=0,s_1=0,s_0=1,r_2=1,r_1=0,r_0=0,a_3=0,a_2=0,a_0=2\alpha ,a_1=2\}`$ and changing variables $$\{x=t,y=\frac{1}{u}-t^2\}.$$ (36) is then a member of AIR. The next solvable class, related to Abel’s work, presented in as class “C”, is represented by $$y^{}=\frac{\alpha \left(1-x^2\right)y^3}{2x}+\left(\alpha -1\right)y^2-\frac{\alpha y}{2x},$$ (37) and this equation can be obtained from (27) by taking $`\{k_2=0,k_1=\alpha /2,k_0=0,k_3=1/2\}`$ and changing variables $$\{x=\frac{\sqrt{\alpha }}{t},y=\frac{t\left(1-tu\right)}{\sqrt{\alpha }}\}.$$ (37) is then a member of AIL. The last parameterized solvable class shown in , labelled there as class “D”, is related to Appell’s work , and is given by $$y^{}=\frac{y^3}{x}-\frac{\left(\alpha +x^2\right)y^2}{x^2}.$$ (38) This equation is obtained from (11) by taking $`\{s_2=0,s_1=1,s_0=0,r_2=1,r_1=0,r_0=\alpha ,a_3=0,a_2=0,a_1=0,a_0=1\}`$ and changing variables $$\{x=t,y=\frac{1}{u}+\frac{\alpha -t^2}{t}\}.$$ So, (38) is a member of AIR. 
Concerning the classes collected in not depending on parameters, the first one, there labelled as “Class 1”, was presented by Halphen in connection with doubly periodic elliptic functions: $$y^{}=\frac{3y\left(1+y\right)-4x}{x\left(8y-1\right)}.$$ (39) This equation, clearly a member of (13), can be obtained by changing $`\{x\leftrightarrow y\}`$ in $$y^{}=\frac{y\left(8x-1\right)}{3x\left(x+1\right)-4y}$$ which in turn can be obtained from (37) (the solvable AIL class) by taking $`\alpha =2/3`$ and changing variables $$\{x=\frac{(12t)\sqrt{36t}}{9t},y=\frac{12ut\sqrt{36t}}{\left(t+1\right)^2\left(4u3t^23t\right)}\}.$$ This derivation also illustrates how new solvable classes can be obtained by interchanging the roles of the dependent and independent variables in solvable members of AIA. As for the representative of Class 2, by Liouville , shown in as $$y^{}=y^3-2xy^2,$$ (40) this equation is obtained from (11) by taking $`\{s_2=0,s_1=0,s_0=1,r_2=1,r_1=0,r_0=0,a_3=0,a_2=0,a_1=0,a_0=1\}`$ and converting the resulting equation into first-kind format by changing variables $$\{x=t,y=\frac{1}{u}-t^2\}.$$ (40) is then a member of AIR. Also of note here, (40) is a special case of the equation presented by Appell in : $$y^{}=\frac{y^3}{\alpha x^2+\beta x+\gamma }-\frac{d}{dx}\left(\frac{ax^2+bx+c}{\alpha x^2+\beta x+\gamma }\right)y^2$$ (41) with $`\alpha =0,\beta =0,\gamma =1,a=1,b=0,c=0`$. (41) is also seen to be a member of AIR since it can be obtained by changing $$\{x=t,y=\frac{1}{u}-\frac{at^2+bt+c}{\alpha t^2+\beta t+\gamma }\}$$ in (11) with $`s_2=\alpha ,s_1=\beta ,s_0=\gamma ,r_2=a,r_1=b,r_0=c,a_3=0,a_2=0,a_1=0,a_0=1`$. Class 3, also by Liouville , presented in as $$y^{}=\frac{y^3}{4x^2}-y^2$$ (42) can be obtained from (40) by changing $`\{x\leftrightarrow y\}`$ and then converting it to first-kind format by means of $$\{x=2t,y=-\frac{1}{u}+t\}.$$ (42) is then a member of AIA. This also illustrates the derivation of a solvable class by changing $`\{x\leftrightarrow y\}`$ in solvable members of the AIR subclass. 
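The first step of the Halphen derivation can be checked symbolically: since $`dy/dx=F(x,y)`$ becomes $`dx/dy=1/F(x,y)`$, the transformed equation is $`y^{}=1/F(y,x)`$. A short sympy sketch:

```python
# Check that changing {x <-> y} in y' = y(8x-1)/(3x(x+1)-4y)
# reproduces Halphen's equation (39).
import sympy as sp

x, y = sp.symbols('x y')

rhs_39 = (3*y*(1 + y) - 4*x) / (x*(8*y - 1))    # eq. (39)
F = y*(8*x - 1) / (3*x*(x + 1) - 4*y)           # the intermediate equation

# dy/dx = F(x,y)  ->  dx/dy = 1/F(x,y); renaming the variables x <-> y
# then gives the equation y' = 1/F(y,x).
transformed = 1 / F.subs([(x, y), (y, x)], simultaneous=True)

assert sp.simplify(transformed - rhs_39) == 0
```

The remaining steps (the parameterized substitutions into (37), (11) and (27)) can be checked the same way once the explicit forms of those equations are in hand.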
The next class, presented in as Class 4, collected among the Abel equation examples of Kamke’s book, $$y^{}=y^3-\frac{\left(x+1\right)y^2}{x},$$ (43) can be obtained from (32) by taking $`\alpha =0`$ and changing variables $$\{x=\frac{1}{t},y=t^2u-t\},$$ so it belongs to the AIL subclass. In , three new integrable classes not depending on parameters were also presented: classes “5”, “6” and “7”. Starting with class 5, given by $$y^{}=\frac{\left(2x+3\right)\left(x+1\right)y^3}{2x^5}+\frac{\left(5x+8\right)y^2}{2x^3},$$ (44) this equation can be obtained from (8) by taking $`\{s_1=0,s_0=1,r_1=1,r_0=0,a_3=6,a_2=10,a_1=4,a_0=0\}`$ and changing variables $$\{x=\frac{1}{t},y=\frac{2t^2\left(t+1\right)u}{t\left(\left(t+1\right)u+2t\right)}\}.$$ Regarding Class 6, given by $$y^{}=\frac{y^3}{x^2\left(x-1\right)^2}+\frac{\left(1-x-x^2\right)y^2}{x^2\left(x-1\right)^2},$$ (45) this equation is obtained from (43) by changing $`\{x\leftrightarrow y\}`$ and then converting it to first-kind form by means of $$\{x=t,y=\frac{t-1+u}{\left(t-1\right)u}\}.$$ Finally, class 7 is given by $$y^{}=\frac{\left(4x^4+5x^2+1\right)y^3}{2x^3}+y^2+\frac{\left(1-4x^2\right)y}{2x\left(x^2+1\right)}.$$ (46) This equation can be obtained from Kamke’s first order example 151, $$y^{}=\frac{\left(y^2+1\right)\left(1-2yx\right)}{x^2+1},$$ (47) by first changing $`\{x\leftrightarrow y\}`$ and then converting the resulting equation to first kind format by means of $$\{x=t,y=\frac{1}{2t}-\frac{1}{2\left(t^2+1\right)u}\}.$$ In turn, (47) is a member of the AIL subclass and can be obtained from (37) by taking $`\alpha =4`$ and changing variables $$\{x=\frac{i}{t},y=\frac{it\left(tu-1\right)}{t^2+1}\}.$$ ## 4 Conclusions In this paper, a multi-parameter non-constant-invariant Abel class has been presented which generalizes in a single class the integrable cases shown in the works by Abel, Liouville, Appell and others, including all those shown in Kamke’s book and other references. 
This new class splits into various subclasses, many of which are fully integrable, including some not previously shown elsewhere to the best of our knowledge. The particular subclass represented by (26), mapping non-constant-invariant Abel equations into linear first-order equations, contains by itself most of the exactly integrable cases we have seen. The subclass represented by (11) appears to us to be the most general class mapping Abel equations into Riccati ones; indeed, it includes the parameterized mappings presented by Liouville and Appell as particular members. Finally, the mapping of Abel classes into other Abel classes presents a useful way of finding new integrable classes from other classes known to be solvable, as shown in §3. We are presently working on analyzing different connections between all these subclasses and expect to find reportable results in the near future. Acknowledgments This work was supported by the Centre of Experimental and Constructive Mathematics, Simon Fraser University, Canada, and by the Symbolic Computation Group, Faculty of Mathematics, University of Waterloo, Ontario, Canada. The authors would like to thank K. von Bülow (Symbolic Computation Group of the Theoretical Physics Department at UERJ, Brazil) for a careful reading of this paper, and T. Kolokolnikov (Department of Mathematics, University of British Columbia, Canada) and A. Wittkopf (Centre for Experimental and Constructive Mathematics, Simon Fraser University, Canada) for useful related discussions. We also acknowledge T. Kolokolnikov and one anonymous referee for their suggestion of normalizing AIR by normalizing the cubic polynomial in the numerator of (8) and (11).
## 1 Introduction Electromagnetic probes provide a unique tool to investigate the early and interior properties of the system created in a heavy ion collision. Dileptons are of particular interest as their production cross section is related to the correlation function of the iso-vector vector current, the conserved current of the $`\mathrm{SU}_\mathrm{R}(2)\times \mathrm{SU}_\mathrm{L}(2)`$ chiral symmetry. Thus, a careful study of the dilepton spectrum may provide evidence for the existence of a chirally restored phase created in heavy ion collisions. Of course, unambiguous proof of chiral restoration requires the measurement of both the iso-vector vector and the iso-vector axial-vector correlation functions. The detection of the latter, however, is rather difficult, as it does not couple to a penetrating probe but rather to pionic degrees of freedom, and thus is strongly affected by final state interactions. However, at least at low temperatures one can show that the onset of chiral restoration goes via the mixing of the vector and axial correlators. Assuming that this mixing is the dominant mechanism for chiral restoration (for a discussion of other possibilities see e.g. ), one expects considerable changes in the vector correlator, which is measurable via dilepton production. Let us, however, already note at this point that the medium can not only give rise to mixing of vector and axial-vector correlation functions in the iso-vector channel, but also to mixing between the iso-scalar and iso-vector (‘$`\rho `$ and $`\omega `$’) correlators. This mixing is not related to chiral restoration, but certainly contributes to the dilepton spectrum, as both iso-vector and iso-scalar currents couple to the electromagnetic field. For the purpose of extracting information about possible chiral restoration, it has to be considered as ‘background’. 
## 2 Chiral restoration and mixing As already mentioned in the introduction, to leading order in the temperature the iso-vector vector correlator receives an admixture from the axial correlator . Similar arguments, although somewhat model dependent, can also be given at finite density, where one again finds this mixing of the vector and axial-vector correlators . In more physical terms, in both cases a pion from either the heat bath (finite temperature) or from the pion cloud of the nucleons (finite density) couples to the vector current ($`\rho `$ meson) to form an axial-vector ($`a_1`$ or pion) intermediate state. This is depicted schematically on the left hand side of fig. 1. Conversely, the pions from the heat bath and/or pion cloud also give rise to a vector admixture to the axial-vector correlator. One can therefore imagine that at a certain temperature/density the vector and axial-vector correlators are fully mixed and thus indistinguishable, which in turn means that chiral symmetry is restored. The pions in the medium, however, also give rise to other mixing. For example, a pion from the heat bath may couple to a $`\rho `$-meson to give an $`\omega `$ intermediate state. This corresponds to the mixing between the iso-vector vector and iso-scalar vector correlators. This process is depicted schematically on the right hand side of fig. 1. This mixing is not related to chiral symmetry restoration, but still affects the dilepton spectrum. The contribution of these mixings to the dilepton spectrum can be easily understood in simple physical terms. The dilepton production cross section is proportional to the imaginary part of these correlation functions, which corresponds to a simple cut of the diagrams depicted in fig. 1. Therefore, the matrix elements responsible for vector axial-vector mixing are identical to those for the Dalitz decay of the $`a_1`$-meson. 
A similar relation holds between the mixing of the iso-vector and iso-scalar correlators and the Dalitz decay of the $`\omega `$ meson. In other words, the contribution of the $`a_1`$-Dalitz decay to the total dilepton spectrum provides a measure for the importance of the vector axial-vector mixing, i.e. for the sensitivity to chiral restoration effects. While it might be impossible to extract this channel from the data, in a given model calculation it can be easily accessed. We will show below that the $`a_1`$-Dalitz contribution is so small that its absence or presence would make an indistinguishable difference in the total spectrum. Consequently, the sensitivity to chiral restoration appears to be very weak. Fig. 1. Schematic illustration of mixing induced by the medium. Left: Vector axial-vector mixing. Right: Iso-vector iso-scalar mixing. Finally, there has been discussion about the contributions of baryons, notably the $`N(1520)`$ resonance . Again, this can be understood either as a mixing of the $`\rho `$-meson with $`N(1520)`$-hole states or simply as the Dalitz decay of the $`N(1520)`$. At present, it is unclear what the chiral properties of an $`N(1520)`$-hole state are. Therefore, one cannot say to which extent this mixing contributes to chiral restoration. However, as will be shown below, the contribution of the $`N(1520)`$-Dalitz again is so small that it hardly affects the total dilepton spectrum. ## 3 The N(1520) There has been some discussion as to how important the contribution of baryons to the dilepton spectrum is. Baryon resonances contribute to the dilepton spectrum chiefly through their Dalitz decay $`N^{}\to e^+e^{}N`$. In this contribution has been estimated to be at most 50 % of that of the $`\omega `$-Dalitz decay in the mass range of $`400-500\mathrm{MeV}`$. The $`N(1520)`$ plays a special role, as it couples very strongly to the $`\rho `$ meson and thus should also contribute most to the dilepton spectrum . 
In order to explore this, we have calculated the Dalitz decay rate and width of the $`N(1520)`$ using the model and parameters of , which is based on an analysis of pion photoproduction data. A detailed description of this calculation can be found in . We have also compared to the nonrelativistic models, which are usually employed in the calculation of the $`\rho `$-meson spectral function . In fig. 2 we show the resulting width for two relativistic parameterizations and the nonrelativistic result. They agree reasonably well, and in the transport calculations shown below we employ the nonrelativistic result. We should also mention that we found a rather weak dependence on the off-shell parameter, which is always present in the relativistic description of a spin 3/2 object. Fig. 2. Dalitz decay width of the N(1520) based on relativistic and nonrelativistic descriptions. The difference between the calculations labeled ‘Relativistic I’ and ‘Relativistic II’ is discussed in detail in . ## 4 Transport calculation The results which we will present in the subsequent section are based on a transport calculation. Therefore, it is appropriate to discuss how the dilepton spectrum calculated in transport is related to that obtained from in-medium correlation functions (see e.g. ). 
In general, the dilepton production rate is related to the imaginary part of the current-current correlation function $`C(q_0,\vec{q})`$ $`{\displaystyle \frac{dR}{d^4q}}={\displaystyle \frac{\alpha ^2}{\pi ^3q^2}}\mathrm{Im}C(q_0,\vec{q}).`$ (1) Assuming vector dominance, and concentrating on the iso-vector vector ($`\rho `$) channel, the current-current correlator is, up to constant factors, given by the in-medium rho propagator $`D_\rho (q_0,\vec{q})`$ $`\mathrm{Im}C(q_0,\vec{q})\propto \mathrm{Im}D_\rho (q_0,\vec{q})\propto {\displaystyle \frac{\mathrm{Im}(\mathrm{\Sigma })}{(m^2-m_\rho ^2+\mathrm{Re}(\mathrm{\Sigma }))^2+\mathrm{Im}(\mathrm{\Sigma })^2}}`$ (2) with $`\mathrm{\Sigma }=\mathrm{\Sigma }_{\rho \pi \pi }+\mathrm{\Sigma }_{\pi +\rho \to a_1}+\dots `$ (3) being the selfenergy of the $`\rho `$ meson. In free space $`\mathrm{\Sigma }=\mathrm{\Sigma }_{\rho \pi \pi }`$. In the medium, one gets additional contributions from all the other channels coupling to the $`\rho `$, such as the $`a_1`$, the $`\omega `$, the $`N(1520)`$, etc. In transport, on the other hand, one folds the collisions of the respective particles with the branching probability into the dilepton channel in order to generate the dilepton spectrum. At first sight, this appears to be a different approach than the one based on the imaginary part of the correlator. However, the relevant cross sections which control the collision probabilities are directly related to the imaginary part of the selfenergies. 
Thus the transport calculation can also be formulated in terms of the imaginary part of a correlation function, which for the results shown below has the following form $`\mathrm{Im}C(q_0,\vec{q})_{transport}\propto \mathrm{Im}D_\rho (q_0,\vec{q})_{transport}\propto {\displaystyle \frac{\mathrm{Im}(\mathrm{\Sigma })}{(m^2-m_\rho ^2)^2+\mathrm{Im}(\mathrm{\Sigma }_{\rho \pi \pi })^2}}.`$ (4) Note that there are two differences between the correlator (2) and that resulting from transport (4). First, the real part of the selfenergy is neglected in the denominator. This is a reasonable approximation, as the real part appears to be small . The other approximation is that only the free selfenergy enters the imaginary part in the denominator. Additional broadening due to other processes is not taken into account. This approximation becomes bad close to the resonance mass, where the imaginary part of the selfenergy dominates the denominator. There, the transport over-predicts the actual dilepton yield. However, in the results below we will see that in this region the total dilepton spectrum is dominated by the decay of the $`\omega `$ meson, which happens mostly in the final state. Thus, a reduction of the $`\rho `$ peak will reduce the overall strength only by a small amount. This effect will become more visible once the experimental mass resolution becomes sufficiently good to resolve the omega peak. Then the collisional broadening of the $`\rho `$ should be directly visible in the neighborhood of the omega. One should also note that this second approximation can in principle be overcome within a transport calculation. The transport gives the full information about the collisional width ($`\mathrm{Im}(\mathrm{\Sigma })`$) for each spacetime point. It, however, requires a numerical tour de force to do statistically reliable calculations. 
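The size of the discrepancy between (2) and (4) near the pole can be illustrated numerically. In the sketch below, the p-wave parameterization of the free $`\rho \pi \pi `$ width and the 80 MeV collisional width are illustrative stand-ins, not values taken from the calculation described in the text.

```python
# Eqs. (2) vs (4): dropping the in-medium broadening from the denominator
# (the transport approximation) over-predicts Im D_rho near the rho pole,
# while far below the pole the two expressions agree.
import math

m_rho, m_pi = 0.770, 0.140      # GeV
gamma0 = 0.150                  # free rho width at the pole (GeV), illustrative
gamma_coll = 0.080              # assumed collisional broadening (GeV)

def gamma_pipi(m):
    """Free rho -> pi pi width with p-wave phase space (a common form)."""
    if m <= 2 * m_pi:
        return 0.0
    p = math.sqrt(m**2 / 4 - m_pi**2)
    p0 = math.sqrt(m_rho**2 / 4 - m_pi**2)
    return gamma0 * (m_rho / m) * (p / p0)**3

def im_d_full(m):
    """Eq. (2) with Re(Sigma) neglected but the full Im(Sigma) kept."""
    im_sig = m * (gamma_pipi(m) + gamma_coll)
    return im_sig / ((m**2 - m_rho**2)**2 + im_sig**2)

def im_d_transport(m):
    """Eq. (4): only the free selfenergy enters the denominator."""
    im_sig_free = m * gamma_pipi(m)
    im_sig = m * (gamma_pipi(m) + gamma_coll)
    return im_sig / ((m**2 - m_rho**2)**2 + im_sig_free**2)

# At the pole the transport form clearly over-counts ...
assert im_d_transport(m_rho) > 1.5 * im_d_full(m_rho)
# ... while at 450 MeV, where the (m^2 - m_rho^2)^2 term dominates, the
# two expressions differ by only a few percent.
assert abs(im_d_transport(0.45) / im_d_full(0.45) - 1.0) < 0.1
```

This is consistent with the statement in the text that the approximation only matters in the resonance region, where the spectrum is anyhow dominated by the $`\omega `$ decay.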
While this has not yet been attempted in the framework of nucleus-nucleus collisions, such a calculation has been carried out for dilepton production in photon-nucleus reactions . ## 5 Results for Pb+Au at 160 GeV/A and predictions for 40 GeV/A Let us now turn to the actual results. Most of the details of the calculations can be found in . The new element here is the inclusion of the N(1520). In fig. 3 and fig. 4 we show the resulting dilepton spectra in comparison with the preliminary CERES data from 1996 . We find an overall agreement with the data, including those at low transverse momenta. Note that these results have been obtained without any in-medium effects. As explained in the initial conditions have been chosen such that the final hadronic spectra agree with experiment. Fig. 3. Results for the dilepton invariant mass spectrum at $`160\mathrm{GeV}/\mathrm{A}`$. All transverse momenta. Calculation based on model of . As discussed above, an indicator for the chiral mixing of the axial-vector and vector correlators is the $`a_1`$-Dalitz contribution (thick full line at the bottom of the graph). In our calculation the $`a_1`$-Dalitz is at best a 2 % contribution. Considering the size of the experimental error bars, it is clear that the present data do not allow for any conclusions concerning chiral restoration. Note that our findings here are also in agreement with refs. and . Both references find that the $`a_1`$-Dalitz rate is considerably smaller than the pion annihilation rate. We also find a rather small contribution from the $`N(1520)`$ decay (thick dashed line at the bottom of the graph). It is about a factor of 4 below the contribution of the $`\omega `$-Dalitz decay, supporting the early estimate of . This also implies that, at least for the $`160\mathrm{GeV}`$ data, the contribution of the baryons is rather small. Fig. 4. Results for the dilepton invariant mass spectrum at $`160\mathrm{GeV}/\mathrm{A}`$. Transverse momenta below 500 MeV/c. 
Calculation based on model of . Let us next turn to our prediction for the ‘low energy’ ($`40\mathrm{GeV}`$) run. In figs. 5 and 6 we show our prediction together with the 160 GeV data, which are intended as a reference. The prediction is based on central ($`N_{charge}=220`$) events generated with URQMD (for details of the calculation see ). In our calculation, we have assumed a mass resolution of 1 %, in order to account for the substantially improved mass resolution of the CERES spectrometer. Given this mass resolution, the omega should be clearly visible and should thus put a strong constraint on the model calculations. Aside from the improved resolution, however, we do not predict a significant difference between the high energy and the low energy spectrum, if plotted in the CERES normalization, i.e. divided by the number of charged particles. The only small but visible difference is around 400 MeV, where the yield is somewhat smaller than that of the high energy run. Finally, we find that the contribution of the baryons is still comparatively small, although somewhat stronger than for the high energy collisions. We have also carried out a calculation in the spirit of , where we have adjusted the initial hadronic state in order to reproduce the final spectra of an RQMD calculation at 40 GeV/A. The results are virtually identical to the ones using the full URQMD events. Fig. 5. Prediction for the dilepton invariant mass spectrum for Pb+Pb at $`40\mathrm{GeV}/\mathrm{A}`$. All transverse momenta. Calculation based on URQMD . The data are those for 160 GeV/A and shown for comparison only. ## 6 Conclusions The imaginary part of the iso-vector vector correlator contributes significantly to the dilepton spectrum. Therefore, a dilepton measurement is in principle sensitive to in-medium changes of this correlation function, and thus may be utilized to investigate effects of chiral symmetry restoration, at least indirectly. 
We have argued that the strength of the $`a_1`$-Dalitz decay provides a good measure for the mixing of the vector and axial-vector correlators, which is one way of restoring chiral symmetry. Our analysis of the presently available dilepton data by the CERES collaboration shows a negligible contribution from the $`a_1`$-Dalitz decay, in agreement with other calculations. Therefore, the present dilepton data do not allow for any conclusions concerning the mixing of the axial-vector and vector correlation functions, and thus about the possible restoration of chiral symmetry achieved in SPS-energy heavy ion collisions. Fig. 6. Prediction for the dilepton invariant mass spectrum for Pb+Pb at $`40\mathrm{GeV}/\mathrm{A}`$. Transverse momenta below 500 MeV/c. Calculation based on URQMD . The data are those for 160 GeV/A and shown for comparison only. We have also provided a prediction for the dilepton spectrum in central Pb+Pb collisions at 40 GeV/A. If normalized by the number of charged particles we find only small differences between the low energy and the high energy spectra. Furthermore, even at the low bombarding energy, we find that the contribution from baryons is rather small. Acknowledgments: The work of CG and AKDM was supported by the Natural Sciences and Engineering Research Council of Canada and the Fonds FCAR of the Québec Government. The work of CMK was supported in part by the National Science Foundation under Grant No. PHY-9509266 and PHY-9870038, the Welch Foundation under Grant No. A-1358, the Texas Advanced Research Program under Grant FY97-010366-068, and the Alexander v. Humboldt Foundation. MB and VK were supported by the Director, Office of Science, Office of High Energy and Nuclear Physics, Division of Nuclear Physics, and by the Office of Basic Energy Sciences, Division of Nuclear Sciences, of the U.S. Department of Energy under Contract No. DE-AC03-76SF00098. MB was also supported by a Feodor Lynen Fellowship of the Alexander v. 
Humboldt Foundation, Germany.
# Mass-loss induced instabilities in fast rotating stars ## 1 Introduction Although far from complete, the overall picture describing the angular momentum evolution of solar type stars is well established. Despite modest mass-loss rates, $`\dot{M}=10^{-14}\mathrm{M}_{\odot }\mathrm{yr}^{-1}`$ for the Sun, magnetised stellar winds strongly brake the rotation of the star. The decrease of the angular velocity then reacts back on the magnetic field by reducing the efficiency of the dynamo process. This picture cannot be applied as such to early-type stars. First of all, the history of stellar rotation at intermediate and high masses is generally still poorly known; besides, the presence of magnetic fields in early-type stars has not been established, except for some categories of chemically peculiar stars, as well as for a couple of particular cases (Donati et al. 1997; Henrichs et al. 2000). On the other hand, we know that early type stars can experience strong angular momentum losses, as stellar winds with very large mass-loss rates have been observed ($`\dot{M}\simeq 10^{-5}-10^{-8}\mathrm{M}_{\odot }\mathrm{yr}^{-1}`$). In this paper, we investigate how such angular momentum losses will affect the stellar rotation, assuming magnetic fields are not dynamically relevant. This question has not yet received much attention, although it might have important consequences for stellar structure and evolution as well as for the understanding of early-type star activity. In the context of stellar evolution, the effect of mass-loss on rotation has to be investigated since the rotation strongly influences the stellar structure. However, current models of stellar evolution with rotation do not take this effect into account (Talon et al. 1997, Denissenkov et al. 1999). 
As explained below, we have been confronted with the same question while investigating the origin of the very strong activity of Herbig Ae/Be stars, a class of pre-main-sequence stars with masses ranging from $`2`$ to $`5\mathrm{M}_{\odot }`$. A significant fraction of these objects is known to possess extended chromospheres and winds, and to show high levels of spectral variability. In addition, the presence of magnetic fields, first suggested by the rotational modulation of certain spectral lines (Catala et al. 1986), has been recently supported by a direct detection at the surface of the Herbig Ae star HD 104237 (Donati et al. 1997). Detailed estimates of the non-radiative heating in the outer atmosphere of Herbig Ae/Be stars (Catala 1989, Bouret et al. 1998), compared to observational constraints on the available energy sources, strongly suggest that the rotation of the star is the only energy source capable of powering such activity (see discussion in F. Lignières et al. 1996). This led Vigneron et al. (1990) to propose a scenario whereby the braking torque exerted by the stellar wind forces turbulent motions in a differentially rotating layer below the stellar surface. Then, by invoking an analogy with stellar convection zones, these turbulent motions could generate a magnetic field which would transfer and dissipate the turbulent kinetic energy into the outer layers of the star. Unlike for Herbig stars, there is at present no observational evidence of non-radiative energy input in the atmospheres of OB stars. Their radiatively-driven winds do not require any additional acceleration mechanism, and any non-radiative heating would be very difficult to detect because most lines are saturated. However, they show various forms of spectral variability. Recent observations seem to indicate that these phenomena are not due to an intrinsic variability of the wind but are instead caused by co-rotating features on the stellar surface (Massa et al. 1995). 
As proposed in the literature (Howarth et al. 1995; Kaper et al. 1996), these corotating features could be due to a magnetic structuration of the wind. Since OB stars rotate fast and possess strong winds, the Vigneron et al. scenario might also explain these phenomena. The recent direct detection of a magnetic field in an early Be star, $`\beta `$ Cep, adds some credit to this hypothesis (Henrichs et al. 2000). In this paper, we shall investigate the starting assumption of the Vigneron et al. scenario, namely that the braking torque of an unmagnetised wind generates strong enough velocity shear to trigger an instability in the subphotospheric layers. This is a crucial step in the scenario, since the onset of the instability allows kinetic energy to be transferred from rotational motions to turbulent motions. The possible connection between mass-loss and instability had already been suggested by Schatzman (1981). However, the onset of the instability is not really considered in this study since it is assumed from the start that the wind-driven angular momentum losses induce a turbulent flux of angular momentum. Here, we propose a simple model of angular momentum transport by a purely radial mass flux in order to estimate the angular velocity gradients induced by mass loss. Then, the stability of these gradients is studied according to existing stability criteria. Note that, as we neglect latitudinal flows, our model is best regarded as an equatorial model. In the absence of magnetic fields, the braking effect of a stellar wind is simply due to the fact that matter going away from the rotation axis has to slow down to preserve its angular momentum. Thus, for the star’s surface to be significantly braked, fluid parcels coming from deep layers inside the star have to reach the surface. Since radial velocities induced by mass-loss are very small deep inside the star, this is expected to take a relatively long time, not very different from the mass-loss time scale. 
By contrast, we shall see that the formation of unstable angular velocity gradients near the surface takes place in a very short time, much smaller than the braking time scale. The paper is organised as follows: first, we estimate the time scale characterising the braking of the stellar surface and relate it to the mass loss time scale (Sect. 2). Then, we show that radial outflows in stellar envelopes generate differential rotation and we estimate the time scale of this process (Sect. 3). The stability of these angular velocity gradients is considered (Sect. 4) and the results are summarised and discussed (Sect. 5). ## 2 Braking of mass-losing stars Generally speaking, the braking of the star’s surface depends on the mass-loss mechanism and on the efficiency of angular momentum transfer inside the star. In this section, we shall make assumptions regarding these processes in order to estimate the braking time scale. However, before we consider these particular assumptions, it is interesting to show that the simple fact that the star adjusts its hydrostatic structure to its decreasing mass already implies that the star slows down as it loses mass. Indeed, according to models of pre-main-sequence evolution (Palla & Stahler 1993), a $`2\mathrm{M}_{\odot }`$ pre-main-sequence star losing mass at a rate of the order of $`\dot{M}=10^{-8}\mathrm{M}_{\odot }\mathrm{yr}^{-1}`$ will have lost about one percent of its mass when it arrives on the main sequence. Because mass is concentrated in the core of stars, one percent of the total mass corresponds to a significant fraction of the envelope. For pre-main-sequence models of $`2`$ to $`5\mathrm{M}_{\odot }`$, this means that all the matter initially located above the radius $`R_I\simeq 0.63R_{\ast }`$ is expelled during the pre-main-sequence evolution. 
But, during this mass-loss process, the star continuously adjusts its structure to its decreasing mass and, from the point of view of stellar structure, the loss of 1 percent of mass has a negligible effect on the stellar radius. Thus, as represented in Fig. 1, the sphere containing 99 % of the initial mass must have expanded significantly during its pre-main-sequence evolution. Its moment of inertia has increased and, due to angular momentum conservation, its mean angular velocity has been reduced. Thus, we conclude that the hydrodynamic adjustment of stars ensures that mass loss is accompanied by a mean braking of the remaining matter. Now, if one wants to estimate the actual braking of the stellar surface, assumptions have to be made on the mass-loss process and on the efficiency of angular momentum transfer inside the star. In order to obtain the order of magnitude of the braking time scale, we assume that mass-loss is isotropic and constant in time and we consider two extreme assumptions regarding the efficiency of angular momentum transfer. First, we assume that the transfer of angular momentum is only due to radial expansion. Then, angular momentum conservation states that the angular velocity $`\mathrm{\Omega }_I`$ of a fluid parcel located at a radius $`r=R_I`$ will have decreased by $`(R_I/R)^2`$ when it reaches the stellar surface. This occurs when all the matter above $`r=R_I`$ has been expelled, so that the decrease of the surface angular velocity can be related to the mass-loss. We used the density structure of a $`2\mathrm{M}_{\odot }`$ pre-main-sequence model (Palla & Stahler 1993) for the following calculation. Second, we assume that an unspecified transfer mechanism enforces solid body rotation throughout the star, so that angular momentum losses are distributed over the whole star and the braking of the stellar surface is less effective. 
In this case, global angular momentum conservation reads $$\frac{dJ}{J}=\frac{2}{3}\frac{MR^2}{I}\frac{dM}{M},$$ (1) where $`J=I\mathrm{\Omega }`$ is the total angular momentum. According to stellar structure models of $`2`$ to $`5M_{\mathrm{\odot}}`$ pre-main-sequence stars, the radius of gyration, $`I/MR^2`$, is close to $`0.05`$, so that the angular velocity decrease is approximately given by $$\frac{\mathrm{\Omega }_f}{\mathrm{\Omega }_i}=\frac{I_i}{I_f}\left(\frac{M_f}{M_i}\right)^{40/3}.$$ (2) To simply relate the braking to the mass-loss, we also assumed that the decrease of the moment of inertia is proportional to the mass decrease, or equivalently that mass-loss induces a uniform density decrease throughout the star. Albeit rough, this assumption is not critical for the conclusion drawn in this section. Starting with a uniformly rotating star, Fig. 2 presents the braking of the stellar surface as a function of the percentage of mass lost assuming solid-body rotation (solid line) and radial expansion (dashed line). This shows that, in the case of solid-body rotation, the braking time scale is about ten times smaller than the mass-loss time scale, whereas it is a hundred times smaller in the case of radial expansion (the braking time scale is defined as the time required to decrease the surface angular velocity by a factor $`e`$). We therefore conclude that the braking time scale is significantly smaller than the mass-loss time scale, although the two time scales do not differ by many orders of magnitude. In the next section, we study the formation of angular velocity gradients in a stellar envelope assuming the transport of angular momentum is only due to a radial mass flux. We shall see that this process generates strong near-surface gradients on a time scale smaller by many orders of magnitude than the mass-loss time scale.
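The two extreme cases can be made concrete with a short numerical sketch (ours, not part of the paper); it merely evaluates Eqs. (1) and (2) under the stated assumptions of a constant radius of gyration $`I/MR^2=0.05`$ and a moment of inertia proportional to the mass:

```python
import math

def omega_ratio_radial_expansion(r_initial_over_R):
    """Omega_f/Omega_i for a parcel that conserves its own angular
    momentum while expanding from r = R_I to the surface R."""
    return r_initial_over_R ** 2

def omega_ratio_solid_body(mass_ratio, k2=0.05):
    """Omega_f/Omega_i for enforced solid-body rotation, Eq. (2),
    assuming a constant radius of gyration I/MR^2 = k2 and a moment
    of inertia proportional to M (uniform density decrease)."""
    exponent = 2.0 / (3.0 * k2)            # = 40/3 for k2 = 0.05
    # J_f/J_i = (M_f/M_i)^(40/3) from Eq. (1); Omega = J/I and I ~ M
    return mass_ratio ** (exponent - 1.0)

# parcel initially at R_I = 0.63 R reaching the surface
print(f"radial expansion: {omega_ratio_radial_expansion(0.63):.2f}")

# solid body: mass fraction to lose for an e-folding of Omega,
# solving (M_f/M_i)^(40/3 - 1) = 1/e
mass_ratio_e = math.exp(-1.0 / (40.0 / 3.0 - 1.0))
print(f"solid body, mass lost for e-folding: {1 - mass_ratio_e:.3f}")
```

With these assumptions, an e-folding of the surface angular velocity requires losing roughly 8% of the mass in the solid-body case, consistent with a braking time scale about ten times smaller than the mass-loss time scale.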
## 3 Radial advection of angular momentum across a stellar surface Non-uniform radial expansion tends to generate differential rotation, and this is particularly true across the stellar surface, where the steep density increase has to be accompanied by a steep decrease of the radial velocity. To show this, let us follow the expansion of two spherical layers located at different depths in the star’s envelope, and rotating at the same rate. After a given time, the outer layer will have travelled a much larger distance than the inner one, so that its angular velocity will have decreased more than that of the inner layer. A gradient of angular velocity has then appeared between the two layers. To estimate the gradient generated by this non-uniform expansion, we write down angular momentum conservation assuming the transport of angular momentum is only due to a radial flow. Consequently, angular momentum transfers by viscous stresses, meridional circulation, gravity waves or turbulence are neglected altogether. Although latitudinal inhomogeneities in the mass-loss mechanism or Eddington-Sweet circulation are expected to induce latitudinal flows, we note that the present assumption would still be justified in the equatorial plane if the meridional circulation is symmetric with respect to the equatorial plane. In the context of our simplified model, the angular momentum balance reads $$\frac{\partial }{\partial t}(\omega )+v(r,t)\frac{\partial }{\partial r}(\omega )=0,$$ (3) where $`\omega `$ is the specific angular momentum and $`v(r,t)`$ is the radial velocity. Up to mass-loss rates of the order of $`10^{-6}\mathrm{M}_{\mathrm{\odot}}\mathrm{yr}^{-1}`$, the radial velocities carrying a time-independent, isotropic mass-loss are much smaller than the local sound speed. Consequently, the star is always very close to hydrostatic equilibrium and continuously adjusts its structure to its decreasing mass.
This means in particular that the temporal variations of the density near the surface are small, because the effect of mass-loss is distributed over the whole star through hydrostatic equilibrium. Accordingly, temporal variations of the radial velocities are small, and we shall neglect them, as their inclusion would not modify our conclusions. Then, the radial outflow satisfies $$4\pi r^2\varrho (r)v(r)=\dot{M},$$ (4) where $`\varrho (r)`$ is given by stellar structure models. With the above assumptions, the specific angular momentum evolves like a passive scalar advected in a one-dimensional stationary flow $`v(r)`$. The mathematical problem can be readily solved, and we will do so in the following for a radial flow corresponding to a $`2M_{\mathrm{\odot}}`$ star with a mass-loss rate of $`\dot{M}=10^{-8}\mathrm{M}_{\mathrm{\odot}}\mathrm{yr}^{-1}`$. But first we derive useful properties by studying the evolution of the angular momentum gradient in the general case. Except when $`\omega `$ or $`v`$ are uniform in space, advection always modifies the distribution of the conserved quantity. While decelerated flows tend to sharpen $`\omega `$-gradients, accelerated flows smooth them out. To illustrate the process of gradient smoothing in an accelerated flow, the angular momentum evolution of two neighbouring fluid elements $`A`$ and $`B`$ is represented in Fig. 3. Following the motions, the angular momentum gradient between these two points decreases because their separation $`\mathrm{\Delta }r`$ increases while the angular momentum difference $`\mathrm{\Delta }\omega `$ is conserved. This simple sketch can also be used to quantify the gradient decrease. We first note that $`A`$ and $`B`$ travel the distance separating $`r_1+\mathrm{\Delta }r_1`$ from $`r_2`$ in the same time. Thus, the time interval required by $`A`$ to go from $`r_1`$ to $`r_1+\mathrm{\Delta }r_1`$ is the same as that taken by $`B`$ to go from $`r_2`$ to $`r_2+\mathrm{\Delta }r_2`$.
This is expressed by $`\mathrm{\Delta }r_2/v(r_2)=\mathrm{\Delta }r_1/v(r_1)`$, and then $$v(r_1)\frac{\mathrm{\Delta }\omega }{\mathrm{\Delta }r_1}=v(r_2)\frac{\mathrm{\Delta }\omega }{\mathrm{\Delta }r_2}.$$ (5) Taking the limit of vanishing separation between $`A`$ and $`B`$, we conclude that the product $`v(r)\,\partial \omega /\partial r`$ is conserved following the motions, a property which can be readily verified by calculating the Lagrangian derivative of $`v\,\partial \omega /\partial r`$. This property implies that angular momentum gradients can be completely smoothed out if they are advected in a sufficiently accelerated flow. This is particularly relevant near stellar surfaces, where steep density gradients induce steep radial velocity gradients. To quantify this effect, we express the conservation of $`v\,\partial \omega /\partial r`$ for the radial outflow given by Eq. (4). Then, the evolution of the logarithmic gradient of the angular velocity following the flow between radii $`r_1`$ and $`r_2`$ reads $$\frac{\partial \mathrm{ln}\mathrm{\Omega }}{\partial \mathrm{ln}r}(r_2)=-2+\frac{\varrho (r_2)}{\varrho (r_1)}\left(\frac{r_2}{r_1}\right)^3\left(\frac{\partial \mathrm{ln}\mathrm{\Omega }}{\partial \mathrm{ln}r}(r_1)+2\right),$$ (6) where the quantity $`\frac{\partial \mathrm{ln}\mathrm{\Omega }}{\partial \mathrm{ln}r}+2`$ measures departures from uniform specific angular momentum. In this expression, $`\varrho (r)r^3`$ decreases almost like $`\varrho (r)`$ in the vicinity of the photosphere, since the density decreases considerably over distances small compared to the stellar radius. Then, the above expression shows that any departure from constant specific angular momentum will have decreased by a factor $`e^n`$ after crossing $`n`$ density scale heights. Moreover, initial departures cannot be too large, otherwise the corresponding differential rotation would be subject to powerful instabilities.
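As a quick check on Eq. (6), the sketch below (ours; it assumes an idealised, geometrically thin layer, so that $`r_2/r_1\simeq 1`$ and $`\varrho r^3`$ decreases like $`\varrho `$) evaluates how a departure from uniform specific angular momentum decays across density scale heights:

```python
import math

def log_gradient_after_advection(g1, rho_ratio, r_ratio):
    """Eq. (6): d ln(Omega)/d ln(r) of a fluid element after it moved
    from r1 to r2, given rho(r2)/rho(r1) and r2/r1."""
    return -2.0 + rho_ratio * r_ratio ** 3 * (g1 + 2.0)

# start from solid-body rotation (gradient 0, departure from -2 is 2)
# and cross n density scale heights in a geometrically thin layer
for n in range(4):
    g = log_gradient_after_advection(0.0, math.exp(-n), 1.0)
    print(f"after {n} scale heights: departure from -2 = {g + 2.0:.3f}")
```

The departure is divided by $`e`$ at each scale height crossed, as stated above.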
Then, for any realistic values of the initial angular velocity gradients, fluid elements reaching the surface will have an angular velocity gradient close to $`-2\mathrm{\Omega }/R`$. We confirmed the validity of this simple picture by solving the advection problem for the density profile corresponding to the stellar structure model of a pre-main-sequence $`2M_{\mathrm{\odot}}`$ star (Palla & Stahler 1993). Again, the mass-loss rate has been fixed to $`\dot{M}=10^{-8}\mathrm{M}_{\mathrm{\odot}}\mathrm{yr}^{-1}`$. Starting from solid-body rotation, the evolution of the rotation profile in the vicinity of the stellar surface is presented in Fig. 4. The different curves correspond to increasing advection times, $`10^{-1}`$, $`1`$, $`10`$, $`10^2`$, $`10^3`$ and $`10^4`$ years, respectively. We observe that the evolution is first rapid and then much slower. This is because the evolution starts with the rapid expansion of the outer layers, which tends to set up a profile of uniform specific angular momentum, $`\mathrm{\Omega }\propto 1/r^2`$. Once this is done, the evolution takes place on much larger time scales, as it involves the much more slowly expanding inner layers. The gradual set-up of the $`\mathrm{\Omega }\propto 1/r^2`$ profile is confirmed by the evolution of the angular velocity logarithmic gradients. As shown in Fig. 5, specific angular momentum gradients are rapidly smoothed out in the outer layers, while the inner ones are affected only after a long time. We can therefore conclude that, starting with any realistic angular velocity profile in a stellar envelope, radial expansion will generate angular velocity gradients very close to $`-2\mathrm{\Omega }/R`$ near the surface. Whether such gradients are actually present in the subphotospheric layers of mass-losing stars depends on how the characteristic time for the formation of these gradients compares with the time scales of other angular momentum transport processes.
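The two-stage behaviour described above can be reproduced with a toy model. The sketch below is ours and replaces the Palla & Stahler density structure with a plane-parallel exponential layer in arbitrary units; it advects $`\omega =\mathrm{\Omega }r^2`$ along the analytic characteristics of the flow $`v\propto 1/\varrho `$ implied by Eq. (4), and finite-differences the logarithmic gradient just below the surface:

```python
import math

# toy plane-parallel outflow: rho ~ exp(-(r-R)/H), so continuity gives
# v(r) = v_R * exp((r-R)/H); all values are illustrative, not the paper's
R, H, v_R, Omega0 = 1.0, 0.01, 1e-4, 1.0

def launch_radius(r, t):
    """Radius at t = 0 of the element now at r (analytic characteristic
    of dr/dt = v_R * exp((r-R)/H))."""
    return R - H * math.log(math.exp(-(r - R) / H) + v_R * t / H)

def Omega(r, t):
    """Angular velocity at (r, t): omega = Omega*r^2 is conserved along
    the flow, and the star starts in solid-body rotation Omega0."""
    r0 = launch_radius(r, t)
    return Omega0 * (r0 / r) ** 2

def dlnOmega_dlnr(r, t, eps=1e-6):
    """Centred finite difference of the logarithmic gradient."""
    o1, o2 = Omega(r - eps, t), Omega(r + eps, t)
    return r * (math.log(o2) - math.log(o1)) / (2.0 * eps)

for t in (0.0, 10.0, 100.0, 1000.0):
    print(f"t = {t:7.1f}: dlnOmega/dlnr just below the surface = "
          f"{dlnOmega_dlnr(R - 0.001, t):+.3f}")
```

The gradient starts at zero (solid-body rotation) and tends towards $`-2`$ as elements originating several scale heights deeper reach the surface, mimicking the behaviour of Figs. 4 and 5.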
The angular velocity gradients being produced by radial acceleration, the associated time scale is $$t_G=\frac{1}{\partial v/\partial r}=\frac{H_\varrho }{v(r)}=\frac{4\pi r^2\varrho (r)H_\varrho }{M}t_\mathrm{M},$$ (7) where $`H_\varrho `$ is the density scale height and $`t_\mathrm{M}=M/\dot{M}`$ is the mass-loss time scale. Fig. 5 shows that no more than one month is necessary to form gradients of the order of $`\mathrm{\Omega }/R`$ for a mass-loss rate equal to $`\dot{M}=10^{-8}\mathrm{M}_{\mathrm{\odot}}\mathrm{yr}^{-1}`$. This time scale is smaller by many orders of magnitude than the mass-loss time scale or the braking time scale. This is because the formation of gradients requires radial displacements corresponding to a density scale height, whereas a significant braking requires radial displacements of the order of the stellar radius. ## 4 Stability of the angular velocity gradients In this section, we investigate whether the differential rotation induced by the radial mass-loss is sufficient to trigger a hydrodynamical instability. Since the uniform angular momentum profile is marginally stable with respect to the Rayleigh-Taylor instability, we consider its stability with respect to shear instabilities. When, as is the case here, the vorticity associated with the velocity profile does not possess extrema, the study of shear instabilities is greatly complicated by the fact that finite-amplitude perturbations involving non-linear effects have to be taken into account. Existing stability criteria for such velocity profiles rely on laboratory experiments, which show critical Reynolds numbers above which destabilization occurs. Although the critical Reynolds number depends on the particular flow configuration, its value is generally of the order of $`1000`$. J.P.
Zahn (1974) proposed the following instability criterion $$\left|\frac{\partial \mathrm{\Omega }}{\partial r}\right|>\left(PrRe^{\mathrm{crit}}\right)^{1/2}\frac{N}{r},$$ (8) where $`Re^{\mathrm{crit}}`$ is the critical Reynolds number, $`Pr=\nu /\kappa `$ is the Prandtl number comparing the thermal diffusivity $`\kappa `$ to the viscosity $`\nu `$, and $`N`$ is the Brunt-Väisälä frequency, which measures the strength of stable stratification in radiative interiors. The combined effect of the stable stratification and the thermal diffusion has been derived on phenomenological grounds. However, the way the criterion depends on this effect has recently been supported in the context of linear stability theory (Lignières et al. 1999). Applying the criterion to a uniform angular momentum profile, we find that instability occurs if the rotation period at the surface satisfies $$P\lesssim 18.4\frac{g_{\mathrm{\odot}}}{g}\mathrm{days},$$ (9) where the actual temperature gradient has been taken equal to $`\nabla =0.25`$, a typical value for radiative envelopes according to Cox (1968), and the radiative viscosity has been assumed to dominate the molecular one, which is true in the envelopes of intermediate-mass stars. Note also that this expression holds for a mono-atomic, completely ionised gas in a chemically homogeneous star. When radiation pressure is taken into account, the upper limit of the rotation period has to be multiplied by $`1/\sqrt{4-3\beta }`$, where $`\beta `$ is the ratio between the gas pressure and the total pressure. This factor remains very close to one for pre-main-sequence models from $`2`$ to $`5M_{\mathrm{\odot}}`$. Since the above instability condition is easily met by early-type stars, we conclude that mass-loss tends to impose an unstable differential rotation below the surface of early-type stars.
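For orientation, the period limit of Eq. (9) and its radiation-pressure correction can be evaluated numerically. The sketch below is ours, and it assumes the reference gravity in Eq. (9) is the solar surface gravity:

```python
import math

def critical_period_days(g_over_gsun, beta=1.0):
    """Upper limit on the rotation period for shear instability,
    Eq. (9), times the radiation-pressure factor 1/sqrt(4 - 3*beta);
    g_over_gsun is the surface gravity in solar units (assumption)."""
    return 18.4 / (g_over_gsun * math.sqrt(4.0 - 3.0 * beta))

print(critical_period_days(1.0))           # pure gas pressure, solar gravity
print(critical_period_days(0.1))           # lower gravity relaxes the limit
print(critical_period_days(1.0, beta=0.95))  # weak radiation pressure
```

For the inflated, lower-gravity envelopes of pre-main-sequence stars the limit grows well beyond 18.4 days, which is why the condition is easily met, and the correction factor stays close to one for $`\beta `$ near 1.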
In the same way as an inhomogeneous distribution of a passive scalar tends to be smoothed out in an expanding flow, the radial gradients of specific angular momentum are smoothed out by radial expansion in stellar envelopes. This process becomes more and more effective as one approaches the surface, because the expansion becomes stronger and stronger. As shown in Figs. 4 and 5, a profile of uniform angular momentum, $`\mathrm{\Omega }\propto 1/r^2`$, is rapidly set up in the outermost layers of the star and then spreads towards the interior on much larger time scales. These time scales being proportional to the mass-loss time scale, they vary greatly from star to star. For a typical Herbig star ($`\dot{M}=10^{-8}\mathrm{M}_{\mathrm{\odot}}\mathrm{yr}^{-1}`$), an angular velocity gradient close to $`-2\mathrm{\Omega }/R`$ appears in one month. By contrast, for a solar-type mass-loss rate, $`\dot{M}=10^{-14}\mathrm{M}_{\mathrm{\odot}}\mathrm{yr}^{-1}`$, it would take $`10^6`$ times longer to reach the same level of differential rotation. According to existing stability criteria, the uniform angular momentum profile is subject to shear instabilities provided the rotation period is shorter than $`18.4\,g_{\mathrm{\odot}}/g`$ days. The present one-dimensional model is admittedly not realistic, at least because latitudinal variations occur in rotating stars and give rise to a meridional circulation. Nevertheless, as already mentioned, neglecting latitudinal flows may be justified in the equatorial plane. In the following, we discuss to what extent the neglected processes can prevent the formation of turbulent differentially rotating layers below the surface of rapidly rotating mass-losing stars. We first consider the angular momentum transport and then the stability problem. Latitudinal variations could be inherent to the mass-loss mechanism, as proposed for radiatively driven winds emitted from fast-rotating stars (Owocki & Gayley 1997).
Such variations are likely to generate a meridional circulation (Maeder 1999) but, at the latitudes where outflows occur, we still expect radial expansion to play a major role in shaping the angular velocity profile. The Eddington-Sweet circulation driven by the departure from sphericity operates on a time scale larger than the Kelvin-Helmholtz time scale of the star. This time scale is of the order of $`10^7`$ years for a $`2M_{\mathrm{\odot}}`$ Herbig star. Consequently, the Eddington-Sweet circulation is unlikely to prevent the formation of the differentially rotating subphotospheric layer for strong mass-loss rates. Turbulent motions like those generated in thermal convection zones might prevent the formation of the differentially rotating layer. In radiative envelopes, however, it is not clear whether turbulent motions (not generated by the mass-loss process) would be vigorous enough. Strong magnetic fields could in principle prevent the formation of these gradients. Note that this objection is not relevant in the context of the Vigneron et al. scenario, where the shear layer is first produced by an unmagnetised wind. Finally, we also neglected the effect of pre-main-sequence contraction. This is justified because the negative radial velocities associated with the contraction are much smaller in magnitude than the positive radial velocities induced by mass-loss in the vicinity of the stellar surface. As concerns the stability of the uniform angular momentum profile, one has to bear in mind that the stability criterion results from an extrapolation of laboratory results. Despite recent numerical simulations which contradict the validity of this extrapolation for a differential rotation stable with respect to the Rayleigh-Taylor instability (Balbus et al.
1996), analysis of the experimental results reveals that, for large values of the Reynolds number, the properties of the turbulent flow do not depend on its stability or instability with respect to the Rayleigh-Taylor criterion (Richard & Zahn 1999). Settling this debate will have to await numerical simulations with higher resolution or specifically designed laboratory experiments. The fact that the shear layer is embedded in an expanding flow may also affect the stability, because expansion is known to suppress turbulence. For example, this phenomenon is observed in numerical simulations of turbulent convection near the solar surface (Stein & Nordlund 1998). However, the growth time scale of the instability being of the order of the rotation period ($`1`$ day for Herbig stars), we do not expect the expansion to be rapid enough to suppress shear turbulence. We conclude that the present model of angular momentum advection by a radial flow supports the assumption made by Vigneron et al. (1990) and F. Lignières et al. (1996) that unmagnetised winds tend to force an unstable differential rotation in the subphotospheric layers of Herbig Ae/Be stars. Note that this work is part of an ongoing effort to assess the viability of the Vigneron et al. scenario. F. Lignières et al. (1996) and F. Lignières et al. (1998) have already studied the next step, namely what happens once turbulent motions are generated at the top of the radiative envelope; their work points towards the formation and inward expansion of a differentially rotating turbulent layer. In addition, three-dimensional numerical simulations are being performed to investigate whether magnetic fields can be generated in such a layer. ###### Acknowledgements. We are very grateful to F. Palla and S.W. Stahler for making the results of their stellar evolution code available to us.
# Ab initio Hartree-Fock Born effective charges of LiH, LiF, LiCl, NaF, and NaCl ## I INTRODUCTION The Born effective charge (also called transverse or dynamic charge) of a crystalline system, defined as the induced polarization due to a unit sublattice displacement, is a fundamental quantity connecting the electrostatic fields of the lattice to its phonon properties. It contains important information not only about the electronic structure and the bonding properties of the system, but also about the coupling of its longitudinal and transverse optical phonon modes to external infrared radiation. In addition, the Born charges also find use in first-principles-based construction of effective Hamiltonians describing phase transitions in ferroelectric materials. Traditionally, the ab initio computations of Born charges of dielectrics have been performed either within density-functional linear-response theory or density-functional perturbation theory. At a more phenomenological level, the bond-orbital method of Harrison has been very insightful. Recently, however, a very elegant formalism has been proposed by King-Smith and Vanderbilt, and Resta, which formulates the general problem of symmetry breaking induced by the macroscopic polarization (of which the Born charge is a special case) of a crystalline dielectric in terms of the Berry phase of its wave function. This Berry-phase-based approach to the macroscopic polarization of dielectrics has come to be known as the “modern theory of polarization” (MTP) in the current physics literature. The MTP has been used both within density-functional theory (DFT) based implementations and within wave-function-based Hartree-Fock (HF) formulations, to evaluate a variety of polarization-related properties. DFT-based calculations of macroscopic polarization properties are very efficient, so that, without excessive effort, one can perform ab initio computations on complex compounds.
Normally, the ab initio Born charges computed using the DFT-based formulations are within 10% of the experimental values for simple zinc-blende semiconductors, while the disagreement can be worse for more complex systems. Therefore, it is of interest to systematically explore alternative methods for computing the macroscopic polarization properties of crystalline insulators. For ionic systems, the HF method provides a powerful alternative in that it performs much better than local-density-approximation-based schemes. The other advantage of the HF method is that it can be systematically improved, by both perturbative and nonperturbative methods, to account for many-body effects. Recently, we have developed a Wannier-function-based ab initio HF approach to compute the ground-state properties of crystalline insulators. The approach has been successfully applied to compute the ground-state properties not only of three-dimensional crystals, but also of one-dimensional periodic insulators such as polymers. In the present paper, we extend our approach to the problem of the macroscopic polarization of dielectrics, and apply it to obtain benchmark HF values for the Born charges of the diatomic ionic systems LiH, LiF, LiCl, NaF and NaCl. Besides their simplicity, the main criteria behind the choice of these materials for the present study were: (a) to the best of our knowledge, no prior benchmark calculations of the Born charges of these materials exist, and (b) high-quality experimental data have been available for them for a long time. Thus, by comparing benchmark HF results such as these with the corresponding experimental values, one can gauge the applicability of the HF method to a wide variety of systems. When we compare the HF values of the Born charges computed in the present work with the experimental ones, we find that the agreement is good for LiH, LiF, and NaF, where the agreement between theory and experiment is always better than 7%.
For NaCl and LiCl, however, the error is 10% and 16%, respectively, suggesting the possibility that the many-body effects may be stronger in these systems. Since this is the first application of our Wannier-function-based method to the problem of macroscopic polarization, we also present the associated computational details. The Wannier functions, being very similar to the molecular orbitals routinely encountered in quantum chemistry, have the added advantage of being intuitive in character. Indeed, as demonstrated later on, they lead to a pictorial description of the symmetry-breaking processes associated with the macroscopic polarization of insulators. The remainder of the paper is organized as follows. In section II we briefly cover the theoretical aspects of the present work. Our numerical results for the Born effective charges of several ionic crystals are presented in section III. Finally, the conclusions are presented in section IV. ## II THEORETICAL BACKGROUND ### A Born Effective Charge The Born effective charge tensor, $`Z_{\alpha \beta }^{\ast}(i)`$, associated with the atoms of the $`i`$-th sublattice, is defined as $$Z_{\alpha \beta }^{\ast}(i)=Z_i+(\mathrm{\Omega }/e)\frac{\partial P_\alpha ^{(el)}}{\partial u_{i\beta }}|_{𝐄=\mathrm{𝟎}},$$ (1) where $`Z_i`$ is the charge associated with the nuclei (or the core) of the sublattice, $`\mathrm{\Omega }`$ is the volume of the unit cell, $`e`$ is the electronic charge, and $`P_\alpha ^{(el)}`$ is the $`\alpha `$-th Cartesian component of the electronic part of the macroscopic polarization induced as a result of the displacement of the sublattice in the $`\beta `$-th Cartesian direction, $`u_{i\beta }`$.
For small $`\mathrm{\Delta }u_{i\beta }`$, one assumes $`\frac{\partial P_\alpha ^{(el)}}{\partial u_{i\beta }}|_{𝐄=\mathrm{𝟎}}=\frac{\mathrm{\Delta }P_\alpha ^{(el)}}{\mathrm{\Delta }u_{i\beta }}`$, and computes the change in the polarization $`\mathrm{\Delta }P_\alpha ^{(el)}`$ following Resta’s approach $$\mathrm{\Delta }𝐏_{el}=𝐏_{el}^{(1)}-𝐏_{el}^{(0)}\text{,}$$ (2) where $`𝐏_{el}^{(0)}`$ and $`𝐏_{el}^{(1)}`$, respectively, denote the electronic parts of the macroscopic polarization of the system for its initial ($`\lambda =0`$) and final ($`\lambda =1`$) states, where $`\lambda `$ is a parameter characterizing the adiabatic symmetry-breaking transformation of the lattice. Clearly, for the present case, $`\lambda `$ is to be identified with the sublattice displacement $`\mathrm{\Delta }u_{i\beta }`$. For one-electron theories such as the Kohn-Sham theory, or the HF theory, King-Smith and Vanderbilt showed that $$𝐏_{el}^{(\lambda )}=-(fe/\mathrm{\Omega })\underset{n=1}{\overset{M}{\sum }}\int 𝐫|\varphi _n^{(\lambda )}(𝐫)|^2𝑑𝐫\text{,}$$ (3) where $`\{\varphi _n^{(\lambda )}(𝐫),n=1,\mathrm{\dots },M\}`$ represent the $`M`$ occupied Wannier functions of the unit cell for a given value of $`\lambda `$, and $`f`$ is the occupation number of each Wannier function ($`f=2`$ for the restricted-Hartree-Fock theory). King-Smith and Vanderbilt also showed that the r.h.s. of Eq. (3) is proportional to the sum of the Berry phases associated with the individual Wannier functions (or bands), thus equating the change in the macroscopic polarization of the solid with the change in the Berry phase of its wave function during the corresponding adiabatic transformation. In addition, Resta demonstrated that $`\mathrm{\Delta }𝐏_{el}`$ computed via Eqs. (2) and (3) is invariant under the choice of Wannier functions, even though the individual $`𝐏_{el}^{(\lambda )}`$’s are not. Computation of the Wannier functions for different values of $`\lambda `$ is discussed in the next section.
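The finite-difference use of Eqs. (1)-(3) can be sketched in a few lines. The snippet below is ours; the Wannier centers are made-up toy values, not the paper's computed ones, and we adopt the convention $`e>0`$ so the electronic contribution enters with a minus sign:

```python
def born_charge(Z_core, centers_0, centers_1, du, f=2.0):
    """Finite-difference Z* along the displacement direction, Eq. (1).
    centers_0/centers_1: Cartesian components (along du) of the occupied
    Wannier-function centers before and after displacing the sublattice
    by du.  The cell volume Omega cancels, since P enters Eq. (1)
    multiplied by Omega/e."""
    # Eq. (3): P^(lambda) * Omega / e = -f * (sum of Wannier centers)
    dP_omega_over_e = -f * (sum(centers_1) - sum(centers_0))
    return Z_core + dP_omega_over_e / du    # Eq. (1) with Delta P / Delta u

# toy F-like anion: core charge 9, five doubly occupied Wannier functions;
# four follow the nucleus rigidly, the valence p function overshoots a bit
rigid = born_charge(9.0, [0.0] * 5, [0.01] * 5, du=0.01)
soft = born_charge(9.0, [0.0] * 5, [0.01] * 4 + [0.0105], du=0.01)
print(f"rigid ions: Z* = {rigid:+.2f}; with overshoot: Z* = {soft:+.2f}")
```

Rigid translation of all centers recovers the nominal ionic charge, while any nonrigid response of the valence Wannier functions shifts $`Z^{\ast}`$ away from it, which is exactly the kind of decomposition presented in table II below.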
### B Wannier Functions In principle, any approach which can yield the Wannier functions of a crystal corresponding to its Bloch orbitals can be used to compute its Born charge tensor. However, in the present work we have applied a framework, recently developed by us, which directly yields the restricted-Hartree-Fock (RHF) Wannier functions of a crystalline insulator employing an LCAO approach. In our previous work we showed that one can obtain $`M`$ RHF Wannier functions, $`\{|\alpha \rangle ,\alpha =1,\mathrm{\dots },M\}`$, occupied by $`2M`$ electrons localized in the reference unit cell $`𝒞`$, by solving the equations $$\left(T+U+\underset{\beta }{\sum }(2J_\beta -K_\beta )+\underset{k\in 𝒩}{\sum }\underset{\gamma }{\sum }\lambda _\gamma ^k|\gamma (𝐑_k)\rangle \langle \gamma (𝐑_k)|\right)|\alpha \rangle =ϵ_\alpha |\alpha \rangle \text{,}$$ (4) where $`T`$ represents the kinetic-energy operator, $`U`$ represents the interaction of the electrons of $`𝒞`$ with the nuclei of the whole of the crystal, while $`J_\beta `$ and $`K_\beta `$, respectively, represent the Coulomb and exchange interactions felt by the electrons occupying the $`\beta `$-th Wannier function of $`𝒞`$, due to the rest of the electrons of the infinite system. The first three terms of Eq. (4) constitute the canonical Hartree-Fock operator, while the last term is a projection operator which makes the orbitals localized in $`𝒞`$ orthogonal to those localized in the unit cells in the immediate neighborhood of $`𝒞`$ by means of infinitely high shift parameters $`\lambda _\gamma ^k`$’s. These neighborhood unit cells, whose origins are labeled by lattice vectors $`𝐑_k`$, are collectively referred to as $`𝒩`$. The projection operators along with the shift parameters play the role of a localizing potential in the Fock matrix, and once self-consistency has been achieved, the occupied eigenvectors of Eq. (4) are localized in $`𝒞`$ and are orthogonal to the orbitals of $`𝒩`$, thus making them Wannier functions.
As far as the orthogonality of the orbitals of $`𝒞`$ to those contained in unit cells beyond $`𝒩`$ is concerned, it should be automatic for systems with a band gap once $`𝒩`$ has been chosen to be large enough. As in our previous calculations performed on three-dimensional ionic insulators, we included up to third-nearest-neighbor unit cells in the region $`𝒩`$. For computing the Born charges, first $`𝐏_{el}^{(0)}`$ is computed from Eq. (3) using the Wannier functions of the unit cell, with all the sublattices of the crystal in their original positions, corresponding to the case $`\lambda =0`$. Next, the $`i`$-th sublattice is displaced in the Cartesian direction $`\beta `$ by a small amount $`\mathrm{\Delta }u_{i\beta }`$, and $`𝐏_{el}^{(1)}`$ is computed in a manner identical to the previous case, except that the Wannier functions used for the purpose are recomputed for the transformed lattice. We can now compute $`\mathrm{\Delta }𝐏_{el}`$ (cf. Eq. (2)), and obtain the Born effective charge tensor by substituting it in Eq. (1). The Wannier functions obtained by solving Eq. (4) are canonical Hartree-Fock solutions for the unit cell $`𝒞`$, and thus will satisfy the spatial symmetries of the unit cell. In appearance they look identical to the molecular orbitals encountered in any quantum-chemical calculation on a finite system, as was discussed in our earlier work. Therefore, by comparing the spatial appearances of the Wannier functions for the most symmetric case ($`\lambda =0`$) with those for the broken-symmetry one ($`\lambda =1`$), we can obtain a pictorial representation of the polarization process. ## III RESULTS AND DISCUSSION In this section we present the results of our calculations of the Born effective charges for LiH, LiF, LiCl, NaF, and NaCl. Because of the cubic nature of the underlying Bravais lattices, the Born charge tensor for these systems has only one independent component.
In all the cases, we assumed the corresponding experimental fcc crystal structure, with the anion at the $`(0,0,0)`$ position and the cation at the $`(a/2,0,0)`$ position, $`a`$ being the lattice constant. The lattice constants used in the calculations were the theoretical ones, obtained by minimizing the total energy per unit cell at the Hartree-Fock level. The Wannier functions used in the approach were obtained by performing all-electron HF calculations using a computer program developed by us recently. In order to evaluate the centers of the Wannier functions needed for computing the polarization properties, we added a small subroutine to the existing module. The program is implemented within an LCAO approach, employing Gaussian lobe-type functions. Lobe-type functions simulate the Cartesian $`p`$ and higher-angular-momentum orbitals located on a given atomic site as linear combinations of $`s`$-type functions slightly displaced from the site. For this reason, it is possible that in our approach we obtain somewhat different numerical values of the Wannier-function centers, as compared to the ones computed with equivalent genuine Cartesian basis functions as implemented, e.g., in the CRYSTAL95 program. For these calculations we used the lobe representation of the state-of-the-art contracted Gaussian basis sets developed by the Torino group. For LiH, the details of the basis set can be obtained in ref. , while for the alkali halides, they are available in ref. . Using these basis sets we had studied NaCl earlier at the Hartree-Fock level; therefore, an optimized lattice constant was already available for it. However, for the remaining systems, we performed fresh Hartree-Fock calculations to obtain the optimized lattice constants. The theoretical lattice constants finally used in these calculations were 4.106 $`\AA `$ (LiH), 4.018 $`\AA `$ (LiF), 4.633 $`\AA `$ (NaF), 5.262 $`\AA `$ (LiCl), and 5.785 $`\AA `$ (NaCl).
These are in close agreement with the values 4.102 $`\AA `$ (LiH) , 4.02 $`\AA `$ (LiF) , 4.63 $`\AA `$ (NaF) , 5.28 $`\AA `$ (LiCl) , and 5.80 $`\AA `$ (NaCl) reported by the Torino group. The computed Born effective charges are presented in table I. These results were obtained by translating the sublattices of a given crystal by the amount $`\mathrm{\Delta }u=0.01a`$ in the (100) direction. To ensure the stability of the results, several calculations were performed with different directions and magnitudes of $`\mathrm{\Delta }u`$, and no significant changes in the results were observed. It was also verified by explicit calculations that the sum of all the effective charges corresponding to the different atoms of a unit cell was always zero, in agreement with the electrical neutrality of the cell. These checks give us confidence in the correctness of our results. From table I it is obvious that the theoretical Born effective charges obtained for LiH and the alkali halides are quite close to their nominal ionicities, in perfect agreement with the intuitive picture of these systems as highly ionic in nature. The agreement of the HF results with the experimental values is very good for LiH, LiF, and NaF. However, for NaCl and LiCl, the disagreement is more than 10%. Similar differences with respect to the experiments were also observed by Yaschenko et al. , who computed the HF Born charge of MgO to be 1.808, while the experimental value for that compound is in the range 1.96–2.02. One possible reason for the discrepancy between the theoretical and the experimental values of the Born charges could be the missing many-body effects. A qualitative discussion of these many-body effects was given by Harrison, in the context of his “ion-softening theory”.
When, e.g., the anionic sublattice of an alkali halide is translated, the bulk of the contribution to the Born charge (which, for the HF case, we call the mean-field contribution) is due to the electron transfer along the direction of the movement of the anion, and is associated with the top-most occupied $`p`$-type Wannier function, as is obvious from Fig. 1. However, according to Harrison , because of the many-body effects we can have a single (virtual) excitation from the top $`p`$-type occupied Wannier function (the bonding orbital) into the first unoccupied Wannier function (the antibonding orbital) on the nearest-neighbor cations, thereby modifying the Born charge. This virtual charge fluctuation, in effect, introduces some covalency into the system as compared to the mean-field HF results. In its simple parametrized form, the ion-softening theory of Harrison predicts a uniform value of $`Z^{*}=1.16`$ for the alkali halides. This value of $`Z^{*}`$, although reasonable, is clearly at variance with the experimental results, which show a clear variation in the $`Z^{*}`$ values of different alkali halides. Therefore, it is of interest to borrow the essence of the many-body effects incorporated in the ion-softening theory and apply it to these systems within a rigorous ab initio formalism, to test its applicability. Indeed, this is what we intend to explore in a future paper. In table II we give the detailed contributions of the various Wannier functions to the Born effective charges of the alkali halides when the anionic sublattice is translated. It is clear from the table that the low-lying core-like orbitals basically translate rigidly along with the nuclei. Nonrigid translation is seen mainly for the $`n`$s and $`n`$p Wannier functions of the anion, where $`n`$ defines the top of the valence band. In particular, the $`n`$s orbital gains some effective charge at the expense of the $`n`$p orbital.
The case of NaF is an exception to this rule, where the Na 2p Wannier function makes a significant contribution to the effective charge (-0.216). However, this contribution is due to an accidental near degeneracy of the sodium 2p Wannier function with the 2s Wannier function of fluorine, which leads to their mixing when the HF equations (cf. Eq. (4)) are solved. For this reason, some of the Born effective charge associated with the 2s Wannier function of F is transferred to the 2p function of Na (cf. table II). This is an instructive example of the nonuniqueness of the individual Wannier functions. As should be the case, however, the total Born charge of fluorine in NaF is free of this ambiguity, in that it has a normal value of -0.956. It is also instructive to examine the polarization process pictorially, as depicted by the Wannier functions. We shall do so for the specific case of NaCl. The 1s and 3p Wannier functions, localized on the Cl<sup>-</sup> site of the unit cell, are plotted along the (100) direction in Figs. 2 and 1, respectively, both before and after the translation of the Cl sublattice. As discussed earlier, on intuitive grounds we would expect the highly localized 1s Wannier function, which is the deepest-lying core orbital, to move rigidly with the nucleus. At the other extreme, we would expect the 3p Wannier function, which forms the top of the valence band, to show significant nonrigid behavior because of its relatively delocalized character. This is indeed what we observe in Figs. 2 and 1, respectively. Owing to the perfectly cubic crystal field that the Cl site sees in the undeformed lattice, one would expect the corresponding 3p Wannier function to exhibit perfect antisymmetry about its center.
Once the Cl sublattice is moved along the (100) direction, the crystal symmetry is reduced, and one would expect to see the signatures of the broken symmetry in the 3p Wannier function of Cl. Both these phenomena are clearly visible in Fig. 1, where, for the undeformed lattice, the 3p Wannier function is perfectly antisymmetric about its center, while for the deformed case it is no longer so and shows clear signs of induced polarization due to the broken symmetry.

## IV CONCLUSIONS

In conclusion, we have applied the Berry-phase-based theory of macroscopic polarization, developed by King-Smith and Vanderbilt , to obtain benchmark values for the Born effective charges of several ionic compounds at the Hartree-Fock level. In the present work, we have utilized the Wannier functions as the single-particle orbitals, and demonstrated that they lead to a pictorial description of the polarization process. Our results are in good agreement with the experiments for all the systems except LiCl and NaCl, where the disagreement was more than 10%. One of the reasons behind this disagreement could be that the many-body effects in these systems are significant. Although there have been generalizations of the theory of macroscopic polarization to include many-body effects, their implementation is not as straightforward as that of the single-particle theory. Recently, we have generalized our Wannier-function-based approach to include many-body effects by systematically augmenting the many-particle ground-state wave function with virtual excitations from the space of the occupied Wannier functions to that of the virtual ones. The approach was demonstrated by computing the correlation contributions to the total energy per unit cell of bulk LiH . In a future paper, we intend to generalize our approach to compute the influence of many-body effects on macroscopic polarization properties as well.

###### Acknowledgements.
I am thankful to Professor R. Resta for clarifying the evaluation of the experimental value of the Born effective charge of MgO reported in an earlier work of his (ref. ).
# Hard X-Ray Spectra of Broad-Line Radio Galaxies from the Rossi X-Ray Timing Explorer

## 1 Introduction

An important issue in our study of active galactic nuclei (hereafter AGNs) is the as yet unexplained difference between radio-loud and radio-quiet objects. All AGNs are thought to be powered by accretion of matter onto a supermassive black hole, presumably via an equatorial accretion disk. From a theoretical perspective, the accretion disk is an essential ingredient for the formation of radio jets, although the exact mechanism is not well known (see the review by Livio 1996). The observational evidence<sup>1</sup><sup>1</sup>1The most direct observational evidence for the presence of accretion disks in AGNs takes the form of Fe K$`\alpha `$ lines with disk-like profiles in the X-ray spectra of Seyfert galaxies (e.g., Tanaka et al. 1995; Nandra et al. 1997b) and double-peaked H$`\alpha `$ lines in the optical spectra of BLRGs (Eracleous & Halpern 1994). suggests that both radio-loud and radio-quiet AGNs harbor accretion disks, but it is a mystery why well-collimated, powerful, relativistic radio jets exist only in the former class of objects. The origin of the difference could lie in the nature of the host galaxy. At low redshifts ($`z<0.5`$) radio-loud AGNs are found only in elliptical galaxies, whereas radio-quiet AGNs can have either elliptical or spiral hosts (Smith et al. 1986; Hutchings, Janson, & Neff 1989; Véron-Cetty & Woltjer 1990; Dunlop et al. 1993; Bahcall et al. 1997; Boyce et al. 1998). This observational trend has led to the suggestion that the interstellar medium of the host galaxy may play an important role in the propagation and collimation of the radio jets on large scales (e.g., Blandford & Levinson 1995; Fabian & Rees 1995). Alternatively, one may seek the fundamental cause of the difference between radio-loud and radio-quiet AGNs in the properties of their accretion flows or the properties of their central black holes.
One possibility is that radio-loud AGNs may harbor rapidly spinning black holes, whose energy is extracted electromagnetically via the Blandford & Znajek (1977) mechanism and used to power the radio jets. Rapidly spinning black holes could be associated with elliptical galaxies if both the black hole and the host galaxy result from the merger of two parent galaxies, each with its own nuclear black hole (see Wilson & Colbert 1995). Another possibility is that the inner accretion disks of radio-quiet AGNs are geometrically thin and optically thick throughout (Shakura & Sunyaev 1973) while the inner accretion disks of radio-loud AGNs are ion-supported tori (Rees et al. 1982; known today as advection-dominated accretion flows, or ADAFs, after the work of Narayan & Yi 1994, 1995). Because ADAFs are nearly spherical and parts of the flow are unbound, they can lead to the formation of outflows (e.g., Blandford & Begelman 1999). In either of the above pictures, an additional mechanism may be necessary to collimate the radio jets (see for example, Blandford & Payne 1982; Meier 1999). If the difference between radio-loud and radio-quiet AGNs is related to differences in their central engines, it would lead to observable differences in their respective X-ray spectra, in particular in the properties (profiles and equivalent width) of the Fe K$`\alpha `$ lines and in the shape of the continuum. This is because the Fe K$`\alpha `$ lines are thought to result from fluorescence of the dense gas in the geometrically thin and optically thick regions of the disk (e.g., George & Fabian 1991; Matt et al 1992). Similarly, the continuum above 10 keV is thought to include a significant contribution from X-ray photons from the “primary” X-ray source, near the center of the disk, which undergo Compton scattering (“reflection”) in the same regions of the disk where the Fe K$`\alpha `$ line is produced (e.g., Lightman & White 1988; George & Fabian 1991; Matt, Perola, & Piro 1991). 
It is, therefore, extremely interesting that studies of the X-ray spectra of broad-line radio galaxies (BLRGs) by Zdziarski et al. (1995) and Woźniak et al. (1998) found them to be systematically different from those of (radio-quiet) Seyfert galaxies. In particular, these authors found that the signature of Compton reflection, which is very prominent in the spectra of Seyfert galaxies above 10 keV (Pounds et al. 1989; Nandra & Pounds 1994), is weak or absent in the spectra of BLRGs. Moreover, Woźniak et al. (1997) found that the Fe K$`\alpha `$ lines of BLRGs are narrower and weaker than those of Seyfert galaxies. Motivated by the above theoretical considerations and observational results, we have undertaken a systematic study of the X-ray spectra of radio-loud AGNs with ASCA and RXTE in order to characterize their properties. Our main goal is to compare their spectroscopic properties with those of Seyfert galaxies and test the above ideas for the origin of the difference between the two classes. In our re-analysis of archival ASCA spectra of BLRGs and radio-loud quasars (Sambruna, Eracleous, & Mushotzky 1999) we found that the Fe K$`\alpha `$ lines of some objects are indeed weaker and narrower than those of Seyferts, in agreement with the findings of Woźniak et al. (1997). In other objects, however, the uncertainties are large enough that we cannot reach firm conclusions, thus we have not been able to confirm this result in general. In this paper we present the results of new observations of four BLRGs with RXTE, aimed at measuring the shape of their hard X-ray continuum and the equivalent width of their Fe K$`\alpha `$ lines. As such, these observations complement our study of the ASCA spectra of these objects. In §2 we describe the observations and data screening. In §3 we present and discuss the light curves and in §4 we compare the observed spectra with models. In §5 we discuss the implications of the results, while in §6 we summarize our conclusions.
Throughout this paper we assume a Hubble constant of $`H_0=50\mathrm{km}\mathrm{s}^{-1}\mathrm{Mpc}^{-1}`$ and a deceleration parameter of $`q_0=0.5`$.

## 2 Targets, Observations, and Data Screening

Our targets were selected to be among the X-ray brightest BLRGs, since the background in the RXTE instruments is rather high. They are listed in Table 1 along with their basic properties, namely the redshift, the inferred inclination of the radio jet (see Eracleous & Halpern 1998a and references therein), and two different estimates of the column density in the Galactic interstellar medium. The 2–10 keV fluxes are about a few $`\times 10^{-11}\mathrm{erg}\mathrm{s}^{-1}\mathrm{cm}^{-2}`$, which makes the targets readily detectable by the RXTE instruments. The observations were carried out with the Proportional Counter Array (PCA; Jahoda et al. 1996) and the High-Energy X-Ray Timing Experiment (HEXTE; Rothschild et al. 1998) on RXTE. Three out of the four objects in our collection (3C 111, Pictor A, and 3C 382) were observed in the spring of 1997 as part of our own guest-observer programs. The fourth object (3C 120) was observed in 1998 February as part of a different program and the data were made public immediately. Although a fair number of observations of 3C 120 exist in the RXTE archive, only the above observation was a long, continuous observation suitable for our purposes. All other observations were intended to monitor the variability of this object; they consist of short snapshots spanning a long temporal baseline. Similarly, the only observations of 3C 390.3 available in the archive are also monitoring observations of this type. Therefore we have not included 3C 390.3 in our sample. A log of the observations is included in Table 2. All of the observations fall within PCA gain epoch 3.
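For reference, with these values of $`H_0`$ and $`q_0`$ an observed flux converts to a luminosity through the $`q_0=0.5`$ (Einstein–de Sitter) luminosity distance, $`d_L=(2c/H_0)(1+z-\sqrt{1+z})`$. The following is our own illustrative sketch of that conversion, not part of the analysis software used in this paper:

```python
import math

C_KMS = 2.99792458e5   # speed of light in km/s
MPC_CM = 3.0857e24     # centimeters per megaparsec

def luminosity(flux_cgs, z, h0=50.0):
    """X-ray luminosity (erg/s) from an observed flux (erg/cm^2/s),
    using the q0 = 0.5 luminosity distance d_L = (2c/H0)(1 + z - sqrt(1 + z))."""
    d_l_cm = (2.0 * C_KMS / h0) * (1.0 + z - math.sqrt(1.0 + z)) * MPC_CM
    return 4.0 * math.pi * d_l_cm ** 2 * flux_cgs
```

For example, a 2–10 keV flux at the top of the range quoted in §4, $`6\times 10^{-11}`$ erg cm<sup>-2</sup> s<sup>-1</sup>, at a redshift like that of 3C 120 ($`z=0.033`$) corresponds to roughly $`3\times 10^{44}`$ erg s<sup>-1</sup>, consistent with the luminosities quoted there.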
The PCA data were screened to exclude events recorded when the pointing offset was greater than 0°.02, when the Earth elevation angle was less than 10°, or when the electron rate was greater than 0.1. Events recorded within 30 minutes of passage through the South-Atlantic Anomaly were also excluded. After screening, time-averaged spectra were extracted by accumulating events recorded in the top layer of each Proportional Counter Unit (PCU) since these include about 90% of the source photons and only about 50% of the internal instrument background. In the case of 3C 111 and 3C 382 we extracted light curves and spectra from all five PCUs. In the case of 3C 120 and Pictor A, PCUs 3 and 4 were turned off part of the time, therefore the spectra from these two PCUs and PCUs 0, 1, and 2 were accumulated separately and then added together. Light curves corresponding to the energy range 3.6–11.6 keV (PCA channels 6–27), were also extracted for all objects and rectified over the time intervals when one or more of the PCUs were not turned on. The PCA background spectrum and light curve were determined using the `L7_240` model developed at the RXTE Guest-Observer Facility (GOF) and implemented by the program `pcabackest v.2.1b`. This model is appropriate for “faint” sources, i.e., those producing count rates less than 40 s<sup>-1</sup> PCU<sup>-1</sup>. All of the above tasks were carried out using the `FTOOLS v.4.2` software package and with the help of the `rex` script provided by the RXTE GOF, which also produces response matrices and effective area curves appropriate for the time of the observation. The net exposure times after data screening, as well as the total source and background count rates, are given in Table 2. The HEXTE data were also screened to exclude events recorded when the pointing offset was greater than 0°.02, or when the Earth elevation angle was less than 10°.
The background in the two HEXTE clusters is measured during each observation by rocking the instrument slowly on and off source. Therefore, source and background photons are included in the same data set and are separated into source and background spectra according to the time they were recorded. After screening we extracted source and background spectra from each of the two HEXTE clusters and corrected them for the dead-time effect, which can be significant because of the high background rate. As with the PCA data, the HEXTE data were screened and reduced using the `FTOOLS v.4.2` software package. The exposure times and count rates in HEXTE/cluster 0 are given in Table 2 for reference. In Figure 1 we show the on-source and background spectra from the PCA and HEXTE/cluster 0, using 3C 111 as an example. This figure illustrates that the background makes the dominant contribution to the HEXTE count rate. In the case of the PCA, the background contributes approximately the same count rate as the source, and its relative contribution increases with energy. At energies above 30 keV the background dominates the count rate in the PCA.

## 3 Light Curves and Time Variability

In Figure 2 we show the light curves of the four targets. For each object we plot the net count rate (after background subtraction) as well as the background count rate vs. time, for reference. The mean background count rate is generally comparable to the net source count rate, although the background level varies dramatically over the course of an observation. A visual inspection of the object light curves shows no obviously significant variability. To quantify this result we searched the light curves for variability using two complementary methods:

1. We compared the variance (i.e., the r.m.s. dispersion about the mean) with the average uncertainty of points in the light curve.
This method is sensitive to fluctuations in the count rate that exceed the noise level on time scales comparable to the width of the bins in the light curve (the bin width in the light curves of Figure 2 is 640 s). We found no large fluctuations in any of the light curves, save for two instances of poor background subtraction (in Pictor A and 3C 382). We have also computed the “excess variance”, $`\sigma _{\mathrm{rms}}^2`$, following Nandra et al. (1997a), with the following results: $`(3.1\pm 0.1)\times 10^{-4}\mathrm{s}^{-2}`$ for 3C 111, $`(1.23\pm 0.01)\times 10^{-3}\mathrm{s}^{-2}`$ for 3C 120, $`(8.3\pm 0.6)\times 10^{-4}\mathrm{s}^{-2}`$ for Pictor A, and $`(6.1\pm 0.4)\times 10^{-4}\mathrm{s}^{-2}`$ for 3C 382. Since their luminosities are $`L_\mathrm{x}`$(2–10 keV) $`\sim 10^{44}\mathrm{erg}\mathrm{s}^{-1}`$, these objects fall on the extrapolation of the $`\sigma _{\mathrm{rms}}^2`$–$`L_\mathrm{x}`$ trend for Seyfert galaxies, found by Nandra et al. (1997a). It is noteworthy, however, that the excess variance is small enough at this luminosity that it is comparable to that of LINERs and other very low-luminosity AGNs (Ptak et al. 1998).

2. We fitted each light curve with a polynomial to find the lowest order that gives an acceptable fit. This method detects small, relatively slow variations in the light curve on time scales somewhat shorter than the length of the observations. We found that 3C 111, Pictor A, and 3C 382 show small secular variations on the order of a few percent relative to the mean, while 3C 120 shows relatively slow variations with excursions of $`\pm 6`$% from the mean. For reference, we note that previous observations of 3C 120 with ASCA have shown it to be variable at the 20% level (Grandi et al. 1997).
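The excess variance referred to above can be computed directly from a background-subtracted light curve: the mean measurement variance is subtracted from the total variance so that only variability in excess of the noise remains, and the result is normalized by the squared mean rate. A minimal sketch following the definition of Nandra et al. (1997a) (the function name and interface are ours):

```python
import numpy as np

def excess_variance(rates, errors):
    """Normalized excess variance of a light curve (cf. Nandra et al. 1997a).

    rates  : background-subtracted count rates in the light-curve bins.
    errors : 1-sigma measurement uncertainties of those rates.
    """
    rates = np.asarray(rates, dtype=float)
    errors = np.asarray(errors, dtype=float)
    mu = rates.mean()
    # Subtract the mean measurement variance from the total variance,
    # then normalize by the squared mean rate.
    return np.sum((rates - mu) ** 2 - errors ** 2) / (rates.size * mu ** 2)
```

A light curve whose scatter is entirely consistent with its error bars gives a value near zero (it can even be slightly negative), while intrinsic variability yields a positive value.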
The above results are not surprising since BLRGs are generally known not to be highly variable on short time scales, although they can vary substantially on time scales of several days (see for example the soft X-ray light curve of 3C 390.3 presented by Leighly & O’Brien 1997). We will return to the issue of variability in our later discussion of our overall findings.

## 4 Model Fits to the Observed Spectra

### 4.1 The Continuum Shape

The shape of the continuum was determined by fitting models to the observed spectra with the help of the `XSPEC v.10.0` software package (Arnaud 1996). We used PCA response matrices and effective area curves created specifically for the individual observations by the program `pcarsp v.2.37`, taking into account the evolution of the detector properties. All the spectra from individual PCUs were added together and the individual response matrices and effective area were combined accordingly. In fitting the HEXTE spectra we used the response matrices and effective area curves created on 1997 March 20. The spectra from the two HEXTE clusters were not combined. All spectra were rebinned so that each bin contained enough counts for the $`\chi ^2`$ test to be valid. The PCA spectra were truncated at low energies at 4 keV and at high energies at either 20 or 30 keV depending on the signal-to-noise ratio. In the case of the HEXTE spectra we retained the energy channels between 20 and 100 keV. We compared the spectra with models consisting of a continuum component and a Gaussian line, modified by interstellar photoelectric absorption. We adopted the absorbing column densities measured by ASCA (listed in Table 1), which we held fixed throughout the fitting process. These column densities are comparable to or greater than the Galactic column densities inferred from H I 21 cm observations (see Table 1). The photoelectric absorption cross-sections used were those of Morrison & McCammon (1983).
We tried three different models for the continuum shape: a simple power law, a broken power law, and a power law plus its Compton reflection from dense, neutral matter. The Compton reflection model is meant to describe the effects of Compton scattering of photons from an X-ray source associated with the inner parts of the accretion disk in the gas that makes up the disk proper (e.g., Lightman & White 1988; George & Fabian 1991). The spectrum of reflected X-rays was computed as a function of disk inclination using the transfer functions of Magdziarz & Zdziarski (1995)<sup>2</sup><sup>2</sup>2The Compton reflection calculation is carried out by the `pexrav` model routine in `XSPEC`. The free parameters of this model, in addition to the spectral index of the primary power law and the inclination angle of the disk, are the folding (i.e., upper cut-off) energy of the primary X-ray spectrum, the solid angle subtended by the disk to the central X-ray source, and the abundances of iron and other heavy elements. In our fits the inclination angles were constrained to lie within the limits inferred from the radio properties of each object (see Table 1), and the folding energy of the primary X-ray spectrum was constrained to be greater than 100 keV since all objects are detected by HEXTE up to that energy. The broken power law model is effectively a parameterization of the Compton reflection model: we included it in our suite of continuum models because it serves as an additional verification of the possible departure of the continuum shape from a simple power law. Although the broken power law and Compton reflection models have different shapes in detail, the difference is not discernible at the signal-to-noise ratio of the available HEXTE spectra. The results of fitting the above models to the observed spectra are summarized in Table 3.
The fits yield 2–10 keV unabsorbed fluxes for the target objects in the range 2–6$`\times 10^{-11}`$ erg cm<sup>-2</sup> s<sup>-1</sup> and corresponding luminosities in the range 1–5$`\times 10^{44}`$ erg s<sup>-1</sup>, as listed in Table 3. Spectra, with models superposed, are shown in Figures 3 and 4. We find that the spectra of two objects, 3C 111 and Pictor A, are described quite well by a simple power law throughout the observed energy range (4–100 keV), as shown in Figure 3. A broken power law model produces a slightly better fit but the improvement is not statistically significant in view of the additional free parameters (the F-test gives chance improvement probabilities of 0.35 and 0.29, respectively). Similarly, the Compton reflection model does not produce a significantly improved fit either.<sup>3</sup><sup>3</sup>3In fact, the Compton reflection fit results in a higher value of $`\chi ^2`$ than the broken power law, even though it has more free parameters. In the case of the other two objects, 3C 120 and 3C 382, we find that a simple power law does not provide an adequate description of the continuum shape (see Figure 3); a broken power law or a Compton reflection model is required by the data (the F-test gives chance improvement probabilities for the Compton reflection model of $`5\times 10^{-5}`$ and $`5\times 10^{-3}`$, respectively). Fits of power-law plus Compton reflection models to the spectra of these two objects are shown in Figure 4. Finally, we note that all of the BLRGs in our collection were also observed with SAX and detected at high energies up to 50 keV (Padovani et al. 1999; Grandi et al. 1999a,b). The SAX spectra yield very similar results to what we obtain here, namely very similar spectral indices and Compton reflection strengths. We have explored the multidimensional space defined by the free parameters of the Compton reflection model to find the range of acceptable parameter values.
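The chance-improvement probabilities quoted above come from the standard F-test for nested models, which weighs the drop in $`\chi ^2`$ against the extra degrees of freedom consumed by the more complex model. A sketch of the computation (our own illustration; within `XSPEC` itself this is provided by the `ftest` command):

```python
from scipy.stats import f as f_dist

def ftest_chance_probability(chi2_simple, dof_simple, chi2_complex, dof_complex):
    """Probability that the chi^2 improvement of the complex model over the
    simple (nested) model arises by chance. Small values favor the complex model."""
    delta_chi2 = chi2_simple - chi2_complex
    delta_dof = dof_simple - dof_complex
    # F statistic: chi^2 improvement per extra parameter, relative to the
    # reduced chi^2 of the complex model.
    f_stat = (delta_chi2 / delta_dof) / (chi2_complex / dof_complex)
    return f_dist.sf(f_stat, delta_dof, dof_complex)
```

A large drop in $`\chi ^2`$ for few extra parameters yields a small probability, as for 3C 120 and 3C 382, while a marginal drop yields a probability of order unity, as for 3C 111 and Pictor A.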
We found that the folding energy of the primary X-ray spectrum is not constrained very well by the observed spectra. Thus our initial physical restriction of $`E_{\mathrm{fold}}>100`$ keV is the only constraint on the folding energy of the primary power-law spectrum. Only in the case of 3C 120 (the brightest object, with the longest exposure time) are we able to derive a somewhat better constraint from the data of $`E_{\mathrm{fold}}>300`$ keV. We are particularly interested in the solid angle subtended by the reflector to the primary X-ray source, which is a diagnostic of the geometry and structure of the accretion flow. This parameter determines the absolute strength of the reflected component. In practice, however, it is not straightforward to constrain the reflector solid angle because the inclination angle of the disk and the iron abundance affect the observed strength of this component, through projection and photoelectric absorption above the Fe K edge at 7.1 keV (see, for example, George & Fabian 1991; Reynolds, Fabian, & Inoue 1995), respectively. Moreover, the folding energy of the primary X-ray spectrum also affects the determination of the reflector solid angle because it controls the number of energetic photons available for Compton downscattering. We have, therefore, examined the effect of each of the above parameters on our derived values of the reflector solid angle by searching the inclination angle–solid angle ($`\mathrm{cos}i`$–$`\mathrm{\Omega }/2\pi `$) plane for regions with acceptable fits for different assumed values of the folding energy and iron abundance. In other words, we have taken 2-dimensional slices of the 4-dimensional parameter space defined by the solid angle, inclination angle, folding energy, and iron abundance.
In Figure 5 we show the 90% confidence contours in the $`\mathrm{cos}i`$–$`\mathrm{\Omega }/2\pi `$ plane for a Solar iron abundance and three different assumed values of the folding energy ($`E_{\mathrm{fold}}=100`$, 300, and 600 keV). The same results are also summarized in Table 3. The strength of the Compton reflection component, as parameterized by $`\mathrm{\Omega }/2\pi `$, is systematically lower than what is found in Seyferts observed with either Ginga or RXTE. In particular, we find that $`\mathrm{\Omega }/2\pi \lesssim 0.5`$, while in Seyferts $`\mathrm{\Omega }/2\pi \sim 0.8`$, with a dispersion of 0.2 (Nandra & Pounds 1994; see also Weaver, Krolik, & Pier 1998 and Lee et al. 1998, 1999 for results from RXTE observations). 3C 382 is the only case where the measured value of $`\mathrm{\Omega }/2\pi `$ is consistent, within uncertainties, with what is found in Seyferts. Pictor A represents the opposite extreme where only an upper limit of $`\mathrm{\Omega }/2\pi <0.3`$ can be obtained. In the case of 3C 111, the detection of a Compton reflection hump should be regarded as only marginal since (a) it is only detected at the 90% confidence level and is consistent with zero at the 99% confidence level for $`E_{\mathrm{fold}}=300,\mathrm{\hspace{0.17em}600}`$ keV, and (b) the probability of chance improvement of using the Compton-reflection model compared to the power-law model is 0.39 according to the F-test. In the case of 3C 111 and 3C 382 our results are consistent with what Nandra & Pounds (1994) and Woźniak et al. (1998) report based on Ginga observations. The iron abundance does not affect the above conclusions at all. We find that the confidence contours in the $`\mathrm{cos}i`$–$`\mathrm{\Omega }/2\pi `$ plane are nearly identical for iron abundances ranging from one quarter to twice the Solar value. We illustrate this result in Figure 6 using 3C 111 as an example.
This issue is relevant to the strength of the Fe K$`\alpha `$ lines whose measurements we report in the next section.

### 4.2 The Fe K$`\alpha `$ Line

To study the properties of the Fe K$`\alpha `$ line we modeled its profile as a Gaussian of energy dispersion, $`\sigma `$, and intensity, $`I_{\mathrm{Fe}\mathrm{K}\alpha }`$. Because of the low energy resolution of the PCA, more sophisticated models for the line profile are not warranted. For each object we used the best-fitting continuum model and we ignored the HEXTE spectra since they do not extend to energies below 20 keV. After checking that the line centroid energies were consistent with the value for “cold” Fe, we fixed them to the nominal value based on the redshift of each object. We scanned the $`I_{\mathrm{Fe}\mathrm{K}\alpha }`$–$`\sigma `$ plane for regions where the fit was acceptable, allowing the continuum parameters to vary freely. We found that variations in the continuum parameters were negligible throughout this plane, which we attribute to the fact that the continuum is well constrained over a wide range of energies. As a result the intensity of the line can be converted to an equivalent width ($`EW`$) via a simple scaling. We, therefore, present the outcome of the parameter search in the form of confidence contours in the $`EW`$–$`FWHM`$<sup>4</sup><sup>4</sup>4$`FWHM`$ is the full velocity width of the line at half maximum. plane, which we display in Figure 7. These results are also summarized in Table 3. The lines are unresolved in all cases, with upper limits on the $`FWHM`$ ranging from 44,000 to 70,000 km s<sup>-1</sup> (at 90% confidence). The $`EW`$s range from 60 to 90 eV with uncertainties around 30–50% (at 90% confidence). In the case of 3C 111 and Pictor A, the results obtained here are consistent with the results of ASCA and SAX observations in which the Fe K$`\alpha `$ line was either marginally detected (3C 111; Reynolds et al.
1998; Eracleous & Halpern 1998b) or not detected at all (Pictor A; Eracleous & Halpern 1998a; Padovani et al. 1999). In Figure 7 we overlay the ASCA upper limits on the RXTE confidence contours for reference. In the case of 3C 120 and 3C 382 there is a significant discrepancy between the parameters determined from the ASCA spectra and those we determine here: the lines appear to be much broader and much stronger in the ASCA spectra than in the RXTE spectra. In particular, Grandi et al. (1997) find that the Fe K$`\alpha `$ line of 3C 120 has $`FWHM=91,000\mathrm{km}\mathrm{s}^{-1}`$ and $`EW=380`$ eV while Reynolds (1997) finds that the Fe K$`\alpha `$ line of 3C 382 has $`FWHM=197,000\mathrm{km}\mathrm{s}^{-1}`$ and $`EW=950`$ eV. These velocity widths seem uncomfortably large, as pointed out by the authors themselves. In view of the jet inclination angles the line widths imply that some of the line-emitting gas moves faster than light. We suspect that the discrepancy is due to incorrect determination of the continuum in the ASCA spectra (see Woźniak et al. 1998 and the discussion by Sambruna et al. 1999). This may be related to the fact that the sensitivity of the ASCA SIS, the instrument most commonly used to measure the lines, is extremely low at $`E>8`$ keV. Another possible cause for this discrepancy is calibration uncertainties in the ASCA SIS below 1 keV, which affect the detection of a soft excess, hence the determination of the spectral index. In fact, Woźniak et al. (1998) find that if the continuum in the ASCA spectra of 3C 120 and 3C 382 is modeled as a broken power law (to account for a soft excess) the measured $`FWHM`$ and $`EW`$ of the Fe K$`\alpha `$ line become considerably smaller and consistent with our findings. It is also noteworthy, however, that Woźniak et al. (1998) have almost certainly underestimated the $`EW`$ of the Fe K$`\alpha `$ line of 3C 120 since they assumed the line to be unresolved. 
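For reference, the velocity widths quoted above follow from the fitted Gaussian dispersion $`\sigma `$ through a standard conversion; a minimal sketch (the 6.4 keV rest energy is the usual value for “cold” Fe K$`\alpha `$, and the $`\sigma `$ in the example is an illustrative placeholder, not a fitted value from this paper):

```python
import math

# Convert a Gaussian line dispersion sigma (keV) into a full velocity width
# at half maximum (km/s). FWHM_E = 2*sqrt(2 ln 2) * sigma ~ 2.3548 * sigma.
C_KM_S = 299792.458  # speed of light in km/s

def fwhm_velocity(sigma_kev, line_energy_kev=6.4):
    fwhm_kev = 2.0 * math.sqrt(2.0 * math.log(2.0)) * sigma_kev
    return C_KM_S * fwhm_kev / line_energy_kev

# e.g. sigma = 0.6 keV gives a FWHM of order 66,000 km/s, comparable to the
# 44,000-70,000 km/s upper limits quoted above.
```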
We have also been investigating the cause of this discrepancy through simultaneous ASCA and RXTE observations of 3C 382, the results of which we will report in a forthcoming paper. ## 5 Discussion ### 5.1 Comparison With Seyfert 1 Galaxies To search for systematic differences between BLRGs and Seyfert 1 galaxies, we compare the X-ray spectral properties of the two classes of objects using the results of our observations. In particular we compare the photon indices, the strength of the Compton reflection hump (parameterized by $`\mathrm{\Omega }/2\pi `$), and the $`EW`$ of the Fe K$`\alpha `$ line. We use the collection of Seyfert galaxies observed with Ginga (Nandra & Pounds 1994) as a comparison sample since the Ginga LAC has a broad enough bandpass to allow a measurement of the Compton reflection hump. We also use the collection of Seyfert galaxies observed by ASCA (Nandra et al. 1997b) as an additional comparison sample for the $`EW`$ of the Fe K$`\alpha `$ line since the uncertainties in the determination of the $`EW`$ from the Ginga spectra are relatively large. In Figure 8 we show the distribution of the above parameters (spectral index, $`\mathrm{\Omega }/2\pi `$, and Fe K$`\alpha `$ $`EW`$) among Seyfert 1 galaxies with the values for the 4 BLRGs observed by RXTE superposed as filled bins for comparison. A visual inspection of the histograms shows that the spectral indices of BLRGs and Seyfert 1s are fairly similar but the Compton reflection humps and Fe K$`\alpha `$ lines of BLRGs are considerably weaker than those of Seyfert 1s. Application of the Kolmogorov-Smirnov (KS) test shows that the distributions of Compton-reflection strengths and Fe K$`\alpha `$ $`EW`$s are indeed significantly different between Seyferts and BLRGs (chance probabilities of 2% for the former and 4% for the latter; if the ASCA $`EW`$ measurements are used instead of the Ginga measurements, the chance probability is 3%). 
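The two-sample comparison just described can be reproduced with a Kolmogorov–Smirnov test; a self-contained sketch (using the asymptotic p-value approximation, not necessarily the exact small-sample statistic used for the published numbers):

```python
import math

def ks_2samp(a, b):
    """Two-sample Kolmogorov-Smirnov test.

    Returns (D, p): the maximum distance between the two empirical CDFs and
    the asymptotic probability that samples this different arise from a
    common parent population."""
    a, b = sorted(a), sorted(b)
    n, m = len(a), len(b)
    i = j = 0
    d = 0.0
    # Walk both sorted samples, tracking the gap between the empirical CDFs.
    while i < n and j < m:
        if a[i] <= b[j]:
            i += 1
        else:
            j += 1
        d = max(d, abs(i / n - j / m))
    en = math.sqrt(n * m / (n + m))
    lam = (en + 0.12 + 0.11 / en) * d
    # Asymptotic Kolmogorov distribution: Q(lam) = 2 * sum (-1)^(k-1) e^(-2 k^2 lam^2)
    p = 2.0 * sum((-1) ** (k - 1) * math.exp(-2.0 * k * k * lam * lam)
                  for k in range(1, 101))
    return d, min(max(p, 0.0), 1.0)
```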
In the case of the spectral indices the KS test gives a 41% probability that the spectral indices of Seyferts and BLRGs were drawn from the same parent population. Sambruna et al. (1999) reach a similar conclusion based on the larger sample of BLRGs observed by ASCA. To investigate whether the weakness of the Fe K$`\alpha `$ lines of BLRGs is a consequence of the X-ray Baldwin effect (Nandra et al. 1997c) we also plot the BLRGs observed by RXTE in the $`EW`$-$`L_\mathrm{X}`$ diagram shown in Figure 9. This figure shows clearly that the BLRGs have weaker lines than Seyfert 1s of comparable X-ray luminosity, supporting the outcome of the KS test presented above. These results are in agreement with the conclusions of Woźniak et al. (1998) and Sambruna et al. (1999), who found that radio-loud AGNs in general, and BLRGs in particular have weaker Fe K$`\alpha `$ lines and Compton reflection humps than their radio-quiet counterparts. They are also supported by the results of SAX observations by Padovani et al. (1999) and Grandi et al. (1998,1999). This raises the question of the origin of the observed differences, which we discuss below. ### 5.2 Interpretation Possible causes of the systematic differences between radio-loud and radio-quiet AGNs were discussed by Woźniak et al. (1998) and by Eracleous & Halpern (1998a). Woźniak et al. (1998) favored a scenario in which the Fe K$`\alpha `$ is produced in the obscuring torus invoked in the unification schemes for type 1 and type 2 AGNs. A column density in the reprocessing medium of $`N_\mathrm{H}10^{23}\mathrm{cm}^2`$ was found to reproduce the equivalent width of the Fe K$`\alpha `$ line. In this picture the reflection hump is still due to Compton scattering in the accretion disk. 
However the continuum is postulated to be beamed away from the disk, so that the illumination of the disk is ineffective, with the result that Compton reflection and line emission from the disk make a relatively small contribution to the observed spectrum of BLRGs. Eracleous & Halpern (1998a) concluded that there were two equally viable explanations for the weakness of the Fe K$`\alpha `$ line in the ASCA spectra of Pictor A and 3C 111: either the iron abundance in BLRGs is low (it need only be lower by a factor of 2 compared to Seyferts), or the solid angle subtended by the reprocessor to the primary X-ray source is about a factor of 2 smaller in BLRGs than in Seyferts. Another possibility worth examining is that the disks of BLRGs are in a higher ionization state than those of Seyfert galaxies. In such a case, photons emerging from the disk will be Compton scattered by electrons in the ionized skin of the disk, with the result that both the Fe K$`\alpha `$ line and the Fe K edge are smeared out (Ross, Fabian, & Brandt 1996; Ross, Fabian, & Young 1999) and appear weaker. Finally, we mention one more possible explanation: X-rays with a featureless spectrum from the jets of BLRGs “dilute” the X-ray spectrum from the central engine, which is otherwise similar to that of Seyferts. This possibility was dismissed by both Woźniak et al. (1998) and by Eracleous & Halpern (1998a), but we re-examine it here in the light of the latest observational results. We evaluate the merits and applicability of these scenarios below. 1. 
Continuum beamed away from the accretion disk resulting in weak Fe K$`\alpha `$ emission and Compton reflection from it; additional Fe K$`\alpha `$ emission from obscuring torus: This explanation requires mild beaming by a sub-relativistic outflow, so that the X-ray beam has an opening angle that is just right to illuminate the obscuring torus effectively, while it illuminates the accretion disk only “mildly” so that the strength of the Compton reflection hump from the disk is suppressed. This scenario suffers from the following drawbacks, which make it untenable: 1. It is contrary to observational evidence showing that the jets of BLRGs are highly relativistic. In particular, such jets are thought to have bulk Lorentz factors of $`\gamma =(1-\beta ^2)^{-1/2}\approx 10`$ (Padovani & Urry 1992; Ghisellini et al. 1993, 1998), which implies that their emission is tightly focused in a cone of opening angle $`\psi \sim 1/\gamma \approx 6^{\circ }`$. This means that neither the accretion disk nor the obscuring torus can be illuminated by the primary X-ray source, if this is associated with a jet. Postulating that the X-ray continuum is beamed away from the disk without being associated with the jet is an ad hoc solution with no clear physical foundation. In fact, Reynolds & Fabian (1997) consider beaming of the X-ray continuum produced in a disk corona and conclude that the beamed emission is directed in large part towards the disk. The result is that the $`EW`$ of the Fe K$`\alpha `$ line is enhanced. 2. If the accretion disk is responsible for producing a Compton reflection hump, then it should also be producing Fe K$`\alpha `$ emission, which should be added to the emission from the obscuring torus. If this were the case, the Fe K$`\alpha `$ $`EW`$ should have been disproportionately large compared to the strength of the Compton reflection hump, since it would include contributions from two different sources. 
The observations, however, show that both the strength of the Compton reflection hump and the $`EW`$ of the Fe K$`\alpha `$ line are weaker by about the same factor compared to Seyfert galaxies. 3. Part of the motivation of Woźniak et al. (1998) for associating the source of the Fe K$`\alpha `$ line with the obscuring torus was their claimed lack of correlated variations of the continuum and line intensities in 3C 390.3. This conclusion is not justified by their data, however, because of the large uncertainties in the line intensity: a factor of 2 at 68% confidence (see their Figure 7). Since the continuum variations span a range of a factor of 2, it is next to impossible to find correlated variations in the line intensities when the error bars are as large as they are. Independently of the above arguments, this interpretation can be tested observationally using the profiles of the Fe K$`\alpha `$ lines. If the lines are produced in the accretion disk, their profiles should be considerably broader than if they are produced in the obscuring torus, with very distinct asymmetries caused by Doppler and gravitational redshift. 2. Low Iron Abundance: A low iron abundance would have a very clear observational consequence: it would make the Fe K$`\alpha `$ lines weaker but at the same time it would make the Compton reflection humps stronger by reducing the opacity above the Fe K edge (George & Fabian 1991; Reynolds, Fabian, & Inoue 1995). The RXTE spectra presented here contradict this interpretation since both the Fe K$`\alpha `$ lines and the Compton reflection humps are weak. We have specifically tested the hypothesis that the iron abundance is low with negative results (see §4.1 and Figure 6). Based on these findings we conclude that a low iron abundance is not a viable interpretation. 3. 
Small solid angle subtended by the reprocessor to the primary X-ray source: The weakness of both the Fe K$`\alpha `$ line and the Compton reflection hump can be explained if the reprocessing medium subtends a relatively small solid angle to the primary X-ray source. In the case of Seyfert galaxies, the primary X-ray source is thought to make up a hot corona overlying the disk proper, which serves as the reprocessor. In such a picture the reprocessor covers half the sky as seen from the primary X-ray source, i.e., the solid angle is $`\mathrm{\Omega }/2\pi =1`$, in reasonable agreement with the observed X-ray spectra of Seyfert galaxies. In the case of BLRGs, on the other hand, the RXTE spectra indicate that $`\mathrm{\Omega }/2\pi \lesssim 0.5`$. This can be explained in a picture where the primary X-ray source is a quasi-spherical ion torus (or ADAF) occupying the inner disk and the reprocessor is the geometrically thin, optically thick outer disk. The solid angle subtended by the outer disk to the ion torus is $`\mathrm{\Omega }/2\pi <0.5`$ on geometrical grounds (Chen & Halpern 1989; Zdziarski, Lubiński & Smith 1999), in agreement with our findings. This is our preferred interpretation over all the others we consider here and has the following additional attractive features: (a) the association of ion tori with BLRGs is appealing because these structures offer a way of producing radio jets as we have mentioned in §1, and (b) such an accretion disk structure also explains the double-peaked profiles of the Balmer lines in some of these objects (Pictor A, 3C 382, and 3C 390.3; Chen & Halpern 1989; Eracleous & Halpern 1994; Halpern & Eracleous 1994). 
The profiles of the Fe K$`\alpha `$ lines offer a way of testing this scenario further: if the lines do indeed originate in the outer parts of the accretion disk at radii $`R\gtrsim 100R_\mathrm{g}`$ (where $`R_\mathrm{g}\equiv GM/c^2`$ is the gravitational radius, with $`M`$ the mass of the black hole), then their profiles should be narrower than those observed in Seyfert galaxies, although still skewed, asymmetric, and possibly double-peaked. 4. Smearing by Compton scattering in an ionized accretion disk: If the disk is photoionized so that the ionization parameter reaches values of $`\xi >10^4`$, photons emerging from the disk will be Compton scattered by hot electrons in the disk atmosphere. This process will smear out the Fe K$`\alpha `$ line and the Compton reflection and make them appear weaker (see Ross et al. 1996, 1999). We consider this an unlikely explanation for our results because the observed spectra lack the distinct signature of this process. Namely, at $`\xi >10^3`$, the Fe K$`\alpha `$ line is considerably broadened by Compton scattering, while at $`\xi >10^4`$, the line centroid shifts to energies greater than 6.7 keV because most of the iron atoms are highly ionized. Neither of these effects is observed in either the RXTE PCA spectra presented here or in published ASCA spectra. 5. Dilution of the X-ray spectrum of the central engine by beamed emission from the jet: At first glance this is a plausible interpretation since a flux from the jet comparable to the flux from the central engine would provide the necessary dilution. This hypothesis does not withstand close scrutiny, however, for the following reasons: (a) as shown by Woźniak et al. 
(1998) the spectra of BLRGs cannot be described as the sum of two power-law components, which should have been possible if they included comparable contributions from the jet and the central engine, (b) the light curves of our targets (§3 and Figure 2) do not show a great deal of variability on time scales up to a few days, contrary to what is observed in blazars; 3C 120 shows the largest variability amplitude, consistent with the small inclination angle of the jet to the line of sight (Table 1), which makes the contribution of the jet to the observed X-ray emission rather significant, (c) the X-ray spectral indices of BLRGs are systematically different from those of blazars, which are typically around 1.4 to 1.5 (Kubo et al. 1998), and (d) 3C 120 is the object where dilution of the observed spectrum by emission from the jet is expected to be most severe because of the very small inclination angle; nevertheless both an Fe K$`\alpha `$ line and a Compton reflection hump are detected, when they should have been undetectable. ## 6 Conclusions and Future Prospects Our study of the hard X-ray spectra of 4 BLRGs observed with RXTE has shown them to be systematically different from those of Seyfert galaxies. In particular, the Fe K$`\alpha `$ lines and Compton reflection humps in the spectra of BLRGs are at least a factor of 2 weaker than those observed in the spectra of Seyfert galaxies. This result is consistent with the conclusions of previous studies and is supported by the results of SAX observations of the same targets. After examining several possible explanations for this difference, we conclude that the most likely one is that the solid angle subtended by the source of these spectral features to the primary X-ray source is a factor of 2 smaller in BLRGs than in Seyferts. Since this reprocessing medium is thought to be the accretion disk, we interpret this difference as the result of a difference in the accretion disk structure between BLRGs and Seyferts. 
More specifically, we argue that if the inner accretion disks of BLRGs have the form of an ion-supported torus (or an ADAF) that irradiates the outer disk, then the observed differences can be explained. We find this explanation particularly appealing because ADAFs offer a possible way of producing the radio jets in these objects and because such a disk structure can also account for the double-peaked Balmer line profiles observed in these objects. This interpretation, as well as some of the other scenarios we have considered, can be tested further by studying the profiles of the Fe K$`\alpha `$ lines. Unfortunately, such a test has not been possible so far using spectra from ASCA, because of their low signal-to-noise ratio. It will be possible, however, to carry out the test using spectra from upcoming observatories such as XMM and Astro-E. Correlated variations of the intensity of the Fe K$`\alpha `$ line and the strength of the X-ray continuum afford an additional observational test of these ideas. The light travel time between the ion torus and the outer accretion disk is $`\tau \approx 1.7(R/300R_\mathrm{g})(M/10^8M_{\odot })`$ days, which means that one may expect lags of this order between line and continuum variations. If, on the other hand, the line is produced very close to the center of the disk, the lags should be considerably shorter, while if the line is produced in the obscuring torus, the lags should be on the order of several years. Monitoring campaigns with RXTE may provide the data needed for such a test. We are grateful to Karen Leighly and Niel Brandt for useful discussions and we thank the anonymous referee for thoughtful comments. This work was supported by NASA through grants NAG5–7733 and NAG5–8369. During the early stages of this project M.E. 
was based at the University of California, Berkeley, and was supported by a Hubble Fellowship (grant HF-01068.01-94A from the Space Telescope Science Institute, which is operated for NASA by the Association of Universities for Research in Astronomy, Inc. under contract NAS 5–2655). R.M.S. also acknowledges support from NASA contract NAS–38252.
# Measuring 𝐵→𝜌⁢𝜋 decays and the unitarity angle 𝛼 ## I Introduction The measurement of the angle $`\alpha `$ in the unitarity triangle will be one of the paramount tasks of the future b–factories, such as the dedicated $`e^+e^-`$ machines for the BaBar experiment at SLAC and the BELLE experiment at KEK , or hadron machines such as the LHC at CERN, with its program for $`B`$–physics <sup>*</sup><sup>*</sup>*Opportunities for $`B`$–physics at the LHC have been recently discussed at the workshop on Standard Model Physics (and More) at the LHC, 14-15 October 1999; copies of transparencies can be found at the site http://home.cern.ch/~mlm/lhc99/oct14ag.html.. Unlike the investigation of the $`\beta `$ angle, for which the $`B\to J/\mathrm{\Psi }K_s`$ channel has been singled out and ambiguities can be resolved , the task of determining the angle $`\alpha `$ is complicated by the problem of separating two different weak hadronic matrix elements, each carrying its own weak phase. The evaluation of these contributions, referred to in the literature as the tree ($`T`$) and the penguin ($`P`$) contributions, suffers from the common theoretical uncertainties related to the estimate of composite four-quark operators between hadronic states. For these estimates, only approximate schemes, such as the factorization approximation, exist at the moment, and for this reason several ingenious schemes have been devised to disentangle the $`T`$ and $`P`$ contributions. In general one tries to exploit the fact that in the $`P`$ amplitudes only the isospin–$`1/2`$ part (if one neglects electroweak penguins) of the non–leptonic Hamiltonian is active ; by a combined measurement involving several different isospin amplitudes, one should be able to separate the two amplitudes and to get rid of the ambiguities arising from the poorly known penguin matrix elements. One of the favorite proposals involves the study of the reaction $`B\to \rho \pi `$, i.e. 
six channels arising from the neutral $`B`$ decay: $`\overline{B}^0\to \rho ^+\pi ^-,`$ (1) $`\overline{B}^0\to \rho ^-\pi ^+,`$ (2) $`\overline{B}^0\to \rho ^0\pi ^0,`$ (3) together with the three charge–conjugate channels, and the charged decay modes: $`B^-\to \rho ^-\pi ^0,`$ (4) $`B^-\to \rho ^0\pi ^-,`$ (5) with two other charge–conjugate channels. Different strategies have been proposed to extract the angle $`\alpha `$, either involving all the decay modes of a $`B`$ into a $`\rho \pi `$ pair as well as three time–asymmetric quantities measurable in the three channels for neutral $`B`$ decays , or attempting to measure only the neutral $`B`$ decay modes by looking at the time-dependent asymmetries in different regions of the Dalitz plot (in this way the measurement of a decay mode with two neutral pions in the final state, eq. (4), can be avoided). Preliminary to these analyses is the assumption that, using cuts in the three invariant masses for the pion pairs, one can extract the $`\rho `$ contribution without significant background contamination. The $`\rho `$ has spin $`1`$, while the $`\pi `$ and the initial $`B`$ have spin $`0`$; therefore the $`\rho `$ decay products follow a $`\mathrm{cos}^2\theta `$ angular distribution ($`\theta `$ is the angle of one of the $`\rho `$ decay products with the other $`\pi `$ in the $`\rho `$ rest frame). This means that the Dalitz plot is mainly populated at the border, especially the corners, by this decay. Only very few events should be lost by excluding the interior of the Dalitz plot, which is considered a good way to exclude or at least reduce backgrounds. Analyses following these hypotheses were performed by the BaBar working groups ; Monte Carlo simulations, including the background from the $`f_0`$ resonance, show that, with cuts at $`m_{\pi \pi }=m_\rho \pm 300`$ MeV, no significant contributions from other sources are obtained. Also the role of excited resonances such as the $`\rho ^{\prime }`$ and the non–resonant background has been discussed . 
A signal of possible difficulties for this strategy arises from new results from the CLEO Collaboration recently reported at the DPF99 and APS99 Conferences : $$\mathcal{B}(B^\pm \to \rho ^0\pi ^\pm )=(1.5\pm 0.5\pm 0.4)\times 10^{-5},$$ (6) $$\mathcal{B}(B\to \rho ^{\mp }\pi ^\pm )=(3.5_{-1.0}^{+1.1}\pm 0.5)\times 10^{-5},$$ (7) with a ratio $$R=\frac{\mathcal{B}(B\to \rho ^{\mp }\pi ^\pm )}{\mathcal{B}(B^\pm \to \rho ^0\pi ^\pm )}=2.3\pm 1.3.$$ (8) As discussed in , this ratio looks rather small; as a matter of fact, when computed in simple approximation schemes, including factorization with no penguins, one gets, from the DDGN model of Ref. , $`R\approx 13`$, admittedly with a large uncertainty; another popular approach, i.e. the WBS model , gives $`R\approx 6`$ (in both cases we use $`a_1=1.02`$, $`a_2=0.14`$). The aim of the present study is to show that a new contribution, not discussed before, is indeed relevant to the decay (5) and, to a lesser extent, to the decay (3). It arises from the virtual resonant production depicted in Fig. 1, where the intermediate particle is the $`B^{*}`$ meson resonance or other excited states. The $`B^{*}`$ resonance, because of phase–space limitations, cannot be produced on the mass shell. Nonetheless the $`B^{*}`$ contribution might be important, owing to its near degeneracy in mass with the $`B`$ meson; therefore its tail may produce sizeable effects in some of the decays of $`B`$ into light particles, also because it is known theoretically that the strong coupling constant between $`B`$, $`B^{*}`$ and a pion is large . Concerning other states, we expect their role to decrease with their mass, since there is no enhancement from the virtual particle propagator; we shall only consider the scalar state $`B_0`$ with $`J^P=0^+`$ because its coupling to a pion and the meson $`B`$ is known theoretically to be uniformly (in momenta) large . The plan of the paper is as follows. 
In Section II we list the hadronic quantities that are needed for the computation of the widths; in Section III we present the results and finally, in Section IV, we give our conclusions. ## II Matrix elements The effective weak non-leptonic Hamiltonian for the $`|\mathrm{\Delta }B|=1`$ transition is <sup>§</sup><sup>§</sup>§We omit, as usual in these analyses, the electroweak operators $`Q_k`$ ($`k=`$ 7, 8, 9, 10); they are in general small, but for $`Q_9`$, whose role might be sizeable; its inclusion in the present calculations would be straightforward.: $$H=\frac{G_F}{\sqrt{2}}\left\{V_{ub}^{*}V_{ud}\underset{k=1}{\overset{2}{\sum }}C_k(\mu )Q_k-V_{tb}^{*}V_{td}\underset{k=3}{\overset{6}{\sum }}C_k(\mu )Q_k\right\}.$$ (9) The operators relevant to the present analysis are the so–called current–current operators: $`Q_1`$ $`=`$ $`(\overline{d}_\alpha u_\beta )_{V-A}(\overline{u}_\beta b_\alpha )_{V-A}`$ (10) $`Q_2`$ $`=`$ $`(\overline{d}_\alpha u_\alpha )_{V-A}(\overline{u}_\beta b_\beta )_{V-A},`$ (11) and the QCD penguin operators: $`Q_3`$ $`=`$ $`(\overline{d}_\alpha b_\alpha )_{V-A}{\displaystyle \underset{q^{\prime }=u,d,s,c,b}{\sum }}(\overline{q}_\beta ^{\prime }q_\beta ^{\prime })_{V-A}`$ (12) $`Q_4`$ $`=`$ $`(\overline{d}_\alpha b_\beta )_{V-A}{\displaystyle \underset{q^{\prime }=u,d,s,c,b}{\sum }}(\overline{q}_\beta ^{\prime }q_\alpha ^{\prime })_{V-A}`$ (13) $`Q_5`$ $`=`$ $`(\overline{d}_\alpha b_\alpha )_{V-A}{\displaystyle \underset{q^{\prime }=u,d,s,c,b}{\sum }}(\overline{q}_\beta ^{\prime }q_\beta ^{\prime })_{V+A}`$ (14) $`Q_6`$ $`=`$ $`(\overline{d}_\alpha b_\beta )_{V-A}{\displaystyle \underset{q^{\prime }=u,d,s,c,b}{\sum }}(\overline{q}_\beta ^{\prime }q_\alpha ^{\prime })_{V+A}.`$ (15) We use the following values of the Wilson coefficients: $`C_1=-0.226`$, $`C_2=1.100`$, $`C_3=0.012`$, $`C_4=-0.029`$, $`C_5=0.009`$, $`C_6=-0.033`$; they are obtained in the HV scheme , with $`\mathrm{\Lambda }_{\overline{MS}}^{(5)}=225`$ MeV, $`\mu =\overline{m}_b(m_b)=4.40`$ GeV and $`m_t=170`$ GeV. 
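Two of the numbers quoted above can be cross-checked directly from these inputs. Assuming the usual factorization combinations $`a_1=C_2+C_1/3`$ and $`a_2=C_1+C_2/3`$ (with the sign $`C_1=-0.226`$ implied by $`a_2=0.14`$), and propagating the CLEO errors of eqs. (6)–(8) in quadrature after symmetrizing the asymmetric error, a short sketch reproduces both $`a_1`$, $`a_2`$ and the ratio $`R=2.3\pm 1.3`$:

```python
import math

# Effective factorization coefficients (assumed combinations; C1 is taken
# negative, as required to reproduce the quoted a2 = 0.14).
C1, C2 = -0.226, 1.100
a1 = C2 + C1 / 3.0   # ~ 1.02
a2 = C1 + C2 / 3.0   # ~ 0.14

def quad(*errs):
    """Combine errors in quadrature."""
    return math.sqrt(sum(e * e for e in errs))

# CLEO branching fractions: statistical and systematic errors added in
# quadrature; the asymmetric +1.1/-1.0 error is symmetrized to 1.05.
br0, err0 = 1.5e-5, quad(0.5e-5, 0.4e-5)      # B+- -> rho0 pi+-
brpm, errpm = 3.5e-5, quad(1.05e-5, 0.5e-5)   # B  -> rho-+ pi+-

R = brpm / br0
sigma_R = R * quad(err0 / br0, errpm / brpm)
# R ~ 2.3 and sigma_R ~ 1.3, matching the quoted value.
```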
For the CKM mixing matrix we use the Wolfenstein parameterization with $`\rho =0.05`$, $`\eta =0.36`$ and $`A=0.806`$ in the approximation accurate to order $`\lambda ^3`$ in the real part and $`\lambda ^5`$ in the imaginary part, i.e. $`V_{ud}=1-\lambda ^2/2`$, $`V_{ub}=A\lambda ^3[\rho -i\eta (1-\lambda ^2/2)]`$, $`V_{td}=A\lambda ^3(1-\rho -i\eta )`$ and $`V_{tb}=1`$. The diagram of Fig. 1 describes two processes. For the $`B^{*}`$ intermediate state there is an emission of a pion by strong interactions, followed by the weak decay of the virtual $`B^{*}`$ into two pions; for the $`\rho `$ intermediate state there is a weak decay of $`B\to \rho \pi `$ followed by the strong decay of the $`\rho `$ resonance. We compute these diagrams as Feynman graphs of an effective theory within the factorization approximation, using information from the effective Lagrangian for heavy and light mesons and form factors for the couplings to the weak currents In the second reference a similar approach has been used to describe the decay mode $`B^0\to D^+D^-\pi ^0`$; the main difference is that for $`B\to 3\pi `$ we cannot use soft pion theorems and chiral perturbation theory, because the pions are in general hard; therefore we have to use information embodied in the semileptonic beauty meson form factors. This is also the main difference with respect to .. To start with we consider the strong coupling constants. 
They are defined as $`\langle \overline{B}^0(p^{\prime })\pi ^-(q)|B^{*-}(p,ϵ)\rangle `$ $`=`$ $`g^{B^{*}B\pi }\,ϵ\cdot q`$ (16) $`\langle B^-(p^{\prime })\pi ^+(q)|\overline{B}_0^0(p)\rangle `$ $`=`$ $`G^{B_0B\pi }(p^2)`$ (17) $`\langle \pi ^0(q^{\prime })\pi ^-(q)|\rho ^-(p,ϵ)\rangle `$ $`=`$ $`g_\rho \,ϵ\cdot (q^{\prime }-q).`$ (18) In the heavy quark mass limit one has $`g^{B^{*}B\pi }`$ $`=`$ $`{\displaystyle \frac{2m_Bg}{f_\pi }}`$ (19) $`G^{B_0B\pi }(s)`$ $`=`$ $`\sqrt{{\displaystyle \frac{m_{B_0}m_B}{2}}}{\displaystyle \frac{s-m_B^2}{m_{B_0}}}{\displaystyle \frac{h}{f_\pi }}.`$ (20) For $`g`$ and $`h`$ we have limited experimental information and we have to use some theoretical inputs. For $`g`$ and $`h`$ reasonable ranges of values are $`g=0.3`$–$`0.6`$, $`h=0.4`$–$`0.7`$ . These numerical estimates encompass results obtained by different methods: QCD sum rules , potential models , effective Lagrangian , NJL-inspired models . Moreover $`g_\rho =5.8`$ and $`f_\pi \approx 130`$ MeV. This value of $`g_\rho `$ is commonly used in the chiral effective theories including the light vector meson resonances and corresponds to $`\mathrm{\Gamma }_\rho \approx 150`$ MeV; see, for instance, , where a review of different methods for the determination of $`g`$ is also given. 
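To get a feeling for the numbers, eq. (19) with the quoted range of $`g`$ gives a large dimensionless $`B^{*}B\pi `$ coupling; a quick sketch (the value $`m_B=5.279`$ GeV is an assumed PDG input, not quoted in the text above):

```python
# Numerical size of g^{B*Bpi} = 2 m_B g / f_pi (eq. 19) over the quoted
# range g = 0.3-0.6. m_B = 5.279 GeV is an assumed (PDG) value.
f_pi = 0.130  # GeV
m_B = 5.279   # GeV

def g_BstarBpi(g):
    return 2.0 * m_B * g / f_pi

low, high = g_BstarBpi(0.3), g_BstarBpi(0.6)  # roughly 24 to 49
```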
For the matrix elements of quark bilinears between hadronic states, we use the following matrix elements: $`\langle \pi ^-|\overline{d}\gamma _5u|0\rangle `$ $`=`$ $`{\displaystyle \frac{if_\pi m_\pi ^2}{2m_q}}`$ (21) $`\langle \pi ^0(q)|\overline{u}\gamma _5b|B^{*-}(p,ϵ)\rangle `$ $`=`$ $`iϵ^\mu (q-p)_\mu {\displaystyle \frac{2m_{B^{*}}A_0^\pi }{m_b+m_q}}`$ (22) $`\langle \rho ^+(q,ϵ)|\overline{u}\gamma _5b|\overline{B}^0(p)\rangle `$ $`=`$ $`iϵ^\mu (p-q)_\mu {\displaystyle \frac{2m_\rho A_0}{m_b+m_q}}`$ (23) $`\langle \pi ^+(q)|\overline{u}\gamma _\mu b|\overline{B}^0(p)\rangle `$ $`=`$ $`F_1\left[(p+q)^\mu -{\displaystyle \frac{m_B^2-m_\pi ^2}{(p-q)^2}}(p-q)^\mu \right]+F_0{\displaystyle \frac{m_B^2-m_\pi ^2}{(p-q)^2}}(p-q)^\mu `$ (24) $`\langle \pi ^+(q)|\overline{u}\gamma ^\mu (1-\gamma _5)b|\overline{B}_0^0(p)\rangle `$ $`=`$ $`i\left\{\stackrel{~}{F}_1\left[(p+q)^\mu -{\displaystyle \frac{m_{B_0}^2-m_\pi ^2}{(p-q)^2}}(p-q)^\mu \right]+\stackrel{~}{F}_0{\displaystyle \frac{m_{B_0}^2-m_\pi ^2}{(p-q)^2}}(p-q)^\mu \right\}`$ (25) $`\langle 0|\overline{u}\gamma ^\mu d|\rho ^-(q,ϵ)\rangle `$ $`=`$ $`f_\rho ϵ^\mu ,`$ (26) where $`f_\rho =0.15\mathrm{GeV}^2`$ and $`A_0^\pi `$ $`=`$ $`A_0^\pi (0)=0.16,A_0=A_0(0)=0.29,`$ (27) $`F_1`$ $`=`$ $`F_1(0)=F_0(0)=0.37,F_0^\pi =\stackrel{~}{F}_1(0)=\stackrel{~}{F}_0(0)=0.19.`$ (28) The first three numerical inputs have been obtained by the relativistic potential model; $`A_0`$ and $`F_1`$ can be found in , while $`A_0^\pi `$ has been obtained here for the first time, using the same methods. The last figure in (28) concerns $`F_0^\pi `$, for which such information is not available; for it we used the methods of and the strong coupling $`BB_0\pi `$ computed in . ## III Amplitudes and numerical results For all the channels we consider three different contributions $`A_\rho `$, $`A_{B^{*}}`$, $`A_{B_0}`$, due respectively to the $`\rho `$ resonance, the $`B^{*}`$ pole and the $`B_0`$ positive parity $`0^+`$ resonance, whose mass we take (we identify the $`0^+`$ state mass with the average mass of the excited states given in ) to be $`5697`$ MeV. 
For each of the amplitudes $`A^+`$ $`=`$ $`A(B^-\to \pi ^-\pi ^-\pi ^+)`$ (29) $`A^{00}`$ $`=`$ $`A(B^-\to \pi ^-\pi ^0\pi ^0)`$ (30) $`A^{+0}`$ $`=`$ $`A(\overline{B}^0\to \pi ^+\pi ^-\pi ^0)`$ (31) we write the general formula<sup>\**</sup><sup>\**</sup>\**We add coherently the three contributions; the relative sign between the $`B`$ resonances on one side and the $`\rho `$ contribution on the other is irrelevant, as the former are dominantly real and the latter is dominantly imaginary. The relative sign between $`B^{*}`$ and $`B_0`$ is fixed by the effective Lagrangian for heavy mesons. $`A^{ijk}=A_\rho ^{ijk}+A_{B^{*}}^{ijk}+A_{B_0}^{ijk}`$. We get, for the process (29): $`A_\rho ^+`$ $`=`$ $`\overline{\eta }^0\left[{\displaystyle \frac{t^{\prime }-u}{t-m_\rho ^2+i\mathrm{\Gamma }_\rho m_\rho }}+{\displaystyle \frac{t-u}{t^{\prime }-m_\rho ^2+i\mathrm{\Gamma }_\rho m_\rho }}\right]`$ (32) $`A_{B^{*}}^+`$ $`=`$ $`K\left[{\displaystyle \frac{\mathrm{\Pi }(t,u)}{t-m_{B^{*}}^2+i\mathrm{\Gamma }_{B^{*}}m_{B^{*}}}}+{\displaystyle \frac{\mathrm{\Pi }(t^{\prime },u)}{t^{\prime }-m_{B^{*}}^2+i\mathrm{\Gamma }_{B^{*}}m_{B^{*}}}}\right],`$ (33) $`A_{B_0}^+`$ $`=`$ $`\stackrel{~}{K}^0(m_{B_0}^2-m_\pi ^2)\left[{\displaystyle \frac{1}{t-m_{B_0}^2+i\mathrm{\Gamma }_{B_0}m_{B_0}}}+{\displaystyle \frac{1}{t^{\prime }-m_{B_0}^2+i\mathrm{\Gamma }_{B_0}m_{B_0}}}\right],`$ (34) where, if $`p_{\pi ^-}`$ is the momentum of one of the two negatively charged pions, $`t=(p_{\pi ^-}+p_{\pi ^+})^2`$, $`t^{\prime }`$ is obtained by exchanging the two identical pions and $`u`$ is the squared invariant mass of the two identical negatively charged pions. Clearly one has $`u+t+t^{\prime }=m_B^2+3m_\pi ^2`$. 
The expressions entering in the previous formulas are $`\overline{\eta }^0`$ $`=`$ $`{\displaystyle \frac{G_F}{\sqrt{2}}}V_{ub}V_{ud}^{}{\displaystyle \frac{g_\rho }{\sqrt{2}}}\left[f_\rho F_1\left(c_1+{\displaystyle \frac{c_2}{3}}\right)+m_\rho A_0f_\pi \left(c_2+{\displaystyle \frac{c_1}{3}}\right)\right]+`$ (35) $`+`$ $`{\displaystyle \frac{G_F}{\sqrt{2}}}V_{tb}V_{td}^{}{\displaystyle \frac{g_\rho }{\sqrt{2}}}\left[\left(c_4+{\displaystyle \frac{c_3}{3}}\right)\left(f_\rho F_1m_\rho A_0f_\pi \right)+2\left(c_6+{\displaystyle \frac{c_5}{3}}\right)m_\rho A_0f_\pi {\displaystyle \frac{m_\pi ^2}{(m_b+m_q)2m_q}}\right],`$ (36) $`K`$ $`=`$ $`4\sqrt{2}gm_B^2A_0^\pi {\displaystyle \frac{G_F}{\sqrt{2}}}\left\{V_{ub}V_{ud}^{}\left(c_2+{\displaystyle \frac{c_1}{3}}\right)V_{tb}V_{td}^{}\left[c_4+{\displaystyle \frac{c_3}{3}}2\left(c_6+{\displaystyle \frac{c_5}{3}}\right){\displaystyle \frac{m_\pi ^2}{(m_b+m_q)2m_q}}\right]\right\}`$ (37) $`\stackrel{~}{K}^0`$ $`=`$ $`h\sqrt{{\displaystyle \frac{m_B}{m_{B0}}}}F_0^\pi {\displaystyle \frac{G_F}{\sqrt{2}}}(m_{B0}^2m_B^2)\left\{V_{ub}V_{ud}^{}\left(c_2+{\displaystyle \frac{c_1}{3}}\right)V_{tb}V_{td}^{}\left[c_4+{\displaystyle \frac{c_3}{3}}2\left(c_6+{\displaystyle \frac{c_5}{3}}\right){\displaystyle \frac{m_\pi ^2}{(m_b+m_q)2m_q}}\right]\right\},`$ (38) with $`m_b=4.6`$ GeV, $`m_qm_um_d6`$ MeV, $`\mathrm{\Gamma }_B^{}=0.2`$ keV, $`\mathrm{\Gamma }_{B_0}=0.36`$ GeV . 
Moreover, for the process (30) $`A_\rho ^{00}`$ $`=`$ $`\overline{\eta }^{}\left[{\displaystyle \frac{s^{}u}{sm_\rho ^2+i\mathrm{\Gamma }_\rho m_\rho }}+{\displaystyle \frac{su}{s^{}m_\rho ^2+i\mathrm{\Gamma }_\rho m_\rho }}\right],`$ (39) $`A_B^{}^{00}`$ $`=`$ $`{\displaystyle \frac{1}{\sqrt{2}}}\left\{K_1{\displaystyle \frac{s+s^{}4m_\pi ^2}{2m_B^{}^2}}+{\displaystyle \frac{K\mathrm{\Pi }(s^{},s)+K_1\mathrm{\Pi }(s^{},u)}{s^{}m_B^{}^2+i\mathrm{\Gamma }_B^{}m_B^{}}}+{\displaystyle \frac{K\mathrm{\Pi }(s,s^{})+K_1\mathrm{\Pi }(s,u)}{sm_B^{}^2+i\mathrm{\Gamma }_B^{}m_B^{}}}\right\},`$ (40) $`A_{B_0}^{00}`$ $`=`$ $`\left({\displaystyle \frac{\stackrel{~}{K}^0+\stackrel{~}{K}^{cc}}{sm_{B_0}^2+i\mathrm{\Gamma }_{B_0}m_{B_0}}}+{\displaystyle \frac{\stackrel{~}{K}^0+\stackrel{~}{K}^{cc}}{s^{}m_{B_0}^2+i\mathrm{\Gamma }_{B_0}m_{B_0}}}+{\displaystyle \frac{\stackrel{~}{K}^0}{um_{B_0}^2+i\mathrm{\Gamma }_{B_0}m_{B_0}}}\right){\displaystyle \frac{(m_{B_0}^2m_\pi ^2)}{2}}`$ (41) In this case we define $`s=(p_\pi ^{}+p_{\pi ^0})^2`$, if $`p_{\pi ^0}`$ is the momentum of one of the two identical neutral pions, $`s^{}`$ is obtained by exchanging the two neutral pions and $`u`$ is their invariant mass (again we have a relation among the different Mandelstam variables: $`s+s^{}+u=m_B^2+3m_\pi ^2`$). 
Then $`\overline{\eta }^{}`$, $`K_1`$ and $`\stackrel{~}{K}^{cc}`$ are given by $`\overline{\eta }^{}`$ $`=`$ $`{\displaystyle \frac{G_F}{\sqrt{2}}}V_{ub}V_{ud}^{}{\displaystyle \frac{g_\rho }{\sqrt{2}}}\left[f_\rho F_1\left(c_2+{\displaystyle \frac{c_1}{3}}\right)+m_\rho A_0f_\pi \left(c_1+{\displaystyle \frac{c_2}{3}}\right)\right]+`$ (42) $`+`$ $`{\displaystyle \frac{G_F}{\sqrt{2}}}V_{tb}V_{td}^{}{\displaystyle \frac{g_\rho }{\sqrt{2}}}\left[\left(c_4+{\displaystyle \frac{c_3}{3}}\right)\left(f_\rho F_1+m_\rho A_0f_\pi \right)2\left(c_6+{\displaystyle \frac{c_5}{3}}\right)m_\rho A_0f_\pi {\displaystyle \frac{m_\pi ^2}{(m_b+m_q)2m_q}}\right],`$ (43) $`K_1`$ $`=`$ $`4gm_B^2A_0^\pi {\displaystyle \frac{G_F}{\sqrt{2}}}\left\{V_{ub}V_{ud}^{}\left(c_1+{\displaystyle \frac{c_2}{3}}\right)+V_{tb}V_{td}^{}\left[c_4+{\displaystyle \frac{c_3}{3}}2\left(c_6+{\displaystyle \frac{c_5}{3}}\right){\displaystyle \frac{m_\pi ^2}{(m_b+m_q)2m_q}}\right]\right\},`$ (44) $`\stackrel{~}{K}^{cc}`$ $`=`$ $`h\sqrt{{\displaystyle \frac{m_B}{m_{B0}}}}F_0^\pi {\displaystyle \frac{G_F}{\sqrt{2}}}(m_{B0}^2m_B^2)\left\{V_{ub}V_{ud}^{}\left(c_1+{\displaystyle \frac{c_2}{3}}\right)+V_{tb}V_{td}^{}\left[c_4+{\displaystyle \frac{c_3}{3}}2\left(c_6+{\displaystyle \frac{c_5}{3}}\right){\displaystyle \frac{m_\pi ^2}{(m_b+m_q)2m_q}}\right]\right\},`$ (45) and $$\mathrm{\Pi }(x,y)=m_\pi ^2\frac{y}{2}+\frac{x(m_B^2m_\pi ^2x)}{4m_B^{}^2},$$ (46) while $`\stackrel{~}{K}^0`$ was given above in (38). 
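The kinematic function of Eq. (46), read with the extraction-lost minus signs restored as $`\mathrm{\Pi }(x,y)=m_\pi ^2-y/2+x(m_B^2-m_\pi ^2-x)/(4m_B^{}^2)`$, is easy to tabulate. A sketch with approximate masses (illustrative values only, and the sign restoration is an assumption about the original formula):

```python
M_B, M_PI, M_BSTAR = 5.279, 0.1396, 5.325  # GeV, approximate values

def pi_func(x, y):
    # Eq. (46): Pi(x, y) = m_pi^2 - y/2 + x (m_B^2 - m_pi^2 - x) / (4 m_B*^2)
    return M_PI**2 - y / 2 + x * (M_B**2 - M_PI**2 - x) / (4 * M_BSTAR**2)
```

As written, $`\mathrm{\Pi }`$ decreases linearly in its second argument and is quadratic in the first, with a maximum at $`x=(m_B^2-m_\pi ^2)/2`$.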
Finally, for the neutral $`B`$ decay (31), we have $`A_\rho ^{+0}`$ $`=`$ $`\eta ^0{\displaystyle \frac{us}{tm_\rho ^2+i\mathrm{\Gamma }_\rho m_\rho }}+\eta ^+{\displaystyle \frac{st}{um_\rho ^2+i\mathrm{\Gamma }_\rho m_\rho }}+\eta ^{}{\displaystyle \frac{tu}{sm_\rho ^2+i\mathrm{\Gamma }_\rho m_\rho }}`$ (47) $`A_B^{}^{+0}`$ $`=`$ $`{\displaystyle \frac{K\mathrm{\Pi }(s,t)+K_1\mathrm{\Pi }(s,u)}{sm_B^{}^2+i\mathrm{\Gamma }_B^{}m_B^{}}}{\displaystyle \frac{K\mathrm{\Pi }(t,s)}{tm_B^{}^2+i\mathrm{\Gamma }_B^{}m_B^{}}}`$ (48) $`A_{B_0}^{+0}`$ $`=`$ $`\left({\displaystyle \frac{\stackrel{~}{K}^0+\stackrel{~}{K}^{cc}}{sm_{B_0}^2+i\mathrm{\Gamma }_{B_0}m_{B_0}}}+{\displaystyle \frac{\stackrel{~}{K}^0}{tm_{B_0}^2+i\mathrm{\Gamma }_{B_0}m_{B_0}}}\right)(m_{B_0}^2m_\pi ^2),`$ (49) where $`s=(p_\pi ^{}+p_{\pi ^0})^2`$, $`t=(p_\pi ^{}+p_{\pi ^+})^2`$, $`u=(p_{\pi ^+}+p_{\pi ^0})^2`$, and $`s+t+u=m_B^2+3m_\pi ^2`$. The constants appearing in these equations are : $`\eta ^0`$ $`=`$ $`{\displaystyle \frac{g_\rho }{2}}(f_\rho F_1+m_\rho A_0f_\pi ){\displaystyle \frac{G_F}{\sqrt{2}}}\left[V_{ub}V_{ud}^{}\left(c_1+{\displaystyle \frac{c_2}{3}}\right)+V_{tb}V_{td}^{}\left(c_4+{\displaystyle \frac{c_3}{3}}\right)\right]+`$ (50) $`+`$ $`g_\rho m_\rho A_0f_\pi {\displaystyle \frac{G_F}{\sqrt{2}}}V_{tb}V_{td}^{}\left(c_6+{\displaystyle \frac{c_5}{3}}\right){\displaystyle \frac{m_\pi ^2}{(m_b+m_q)2m_q}}`$ (51) $`\eta ^+`$ $`=`$ $`g_\rho m_\rho A_0f_\pi {\displaystyle \frac{G_F}{\sqrt{2}}}\left\{V_{ub}V_{ud}^{}\left(c_2+{\displaystyle \frac{c_1}{3}}\right)V_{tb}V_{td}^{}\left[c_4+{\displaystyle \frac{c_3}{3}}2\left(c_6+{\displaystyle \frac{c_5}{3}}\right){\displaystyle \frac{m_\pi ^2}{(m_b+m_q)2m_q}}\right]\right\}`$ (52) $`\eta ^{}`$ $`=`$ $`g_\rho f_\rho F_1{\displaystyle \frac{G_F}{\sqrt{2}}}\left\{V_{ub}V_{ud}^{}\left(c_2+{\displaystyle \frac{c_1}{3}}\right)V_{tb}V_{td}^{}\left(c_4+{\displaystyle \frac{c_3}{3}}\right)\right\}.`$ (53) For the charged $`B`$ decays we obtain the results 
in Tables I and II. In order to show the dependence of the results on the numerical values of the different input parameters, we consider in Table I results obtained with $`g=0.40`$ and $`h=0.54`$, which lie in the middle of the allowed ranges, while in Table II we present the results obtained with $`g=0.60`$ and $`h=0.70`$, which represent in a sense an extreme case (we do not consider the dependence on other numerical inputs, e.g. form factors, which can introduce further theoretical uncertainty). In both cases the branching ratios are obtained with $`\tau _B=1.6`$ psec and by integration over a limited section of the Dalitz plot, defined as $`m_\rho \delta (\sqrt{t},\sqrt{t}^{})m_\rho +\delta `$ for $`B^{}\pi ^{}\pi ^{}\pi ^+`$ and $`m_\rho \delta (\sqrt{s},\sqrt{s}^{})m_\rho +\delta `$ for $`B^{}\pi ^{}\pi ^0\pi ^0`$. For $`\delta `$ we take $`300`$ MeV. This amounts to requiring that two of the three pions (those corresponding to the charge of the $`\rho `$) reconstruct the $`\rho `$ mass within an interval of $`2\delta `$. The numerical uncertainty due to the integration procedure is $`\pm 5\%`$. We note that the inclusion of the new diagrams ($`B`$ resonances in Fig. 1) produces practically no effect for the $`B^{}\pi ^{}\pi ^0\pi ^0`$ decay mode, while for $`B^{}\pi ^+\pi ^{}\pi ^{}`$ the effect is significant. For the choice of parameters in Table I the overall effect is an increase of $`50\%`$ in the branching ratio as compared to the result obtained with the $`\rho `$ resonance alone. In the case of Table II we obtain an even larger result, i.e. a total branching ratio $`(B^{}\pi ^+\pi ^{}\pi ^{})`$ of $`0.82\times 10^{-5}`$, in reasonable agreement with the experimental result (6) (the contribution of the $`\rho `$ alone would produce a result smaller by a factor of 2). It should be observed that the events arising from the $`B`$-resonance diagrams represent an irreducible background, as one can see from the sample Dalitz plot depicted in Fig.
2 for $`B^{}\pi ^+\pi ^{}\pi ^{}`$ (the axes show the two $`m_{\pi ^+\pi ^{}}^2`$ squared invariant masses). The contributions from the $`B`$ resonances populate the whole Dalitz plot and, therefore, cutting around $`\sqrt{t}\simeq \sqrt{t^{}}\simeq m_\rho `$ significantly reduces them. Nevertheless their effect can survive the experimental cuts, since there will be enough data at the corners, where the contribution from the $`\rho `$ dominates. Integrating over the whole Dalitz plot, with no cuts and including all contributions, gives: $$\mathrm{B}r(B^{}\pi ^{}\pi ^0\pi ^0)=1.5\times 10^{-5},\mathrm{B}r(B^{}\pi ^+\pi ^{}\pi ^{})=1.4\times 10^{-5}$$ (54) where the values of the coupling constants are as in Table I. We now turn to the neutral $`B`$ decay modes. We define effective widths by integrating the Dalitz plot only in a region around the $`\rho `$ resonance: $`\mathrm{\Gamma }_{eff}(\overline{B}^0\rho ^{}\pi ^+)`$ $`=`$ $`\mathrm{\Gamma }(\overline{B}^0\pi ^+\pi ^{}\pi ^0)|_{m_\rho \delta \sqrt{s}m_\rho +\delta }`$ (55) $`\mathrm{\Gamma }_{eff}(\overline{B}^0\rho ^+\pi ^{})`$ $`=`$ $`\mathrm{\Gamma }(\overline{B}^0\pi ^+\pi ^{}\pi ^0)|_{m_\rho \delta \sqrt{u}m_\rho +\delta }`$ (56) $`\mathrm{\Gamma }_{eff}(\overline{B}^0\rho ^0\pi ^0)`$ $`=`$ $`\mathrm{\Gamma }(\overline{B}^0\pi ^+\pi ^{}\pi ^0)|_{m_\rho \delta \sqrt{t}m_\rho +\delta }.`$ (57) The Mandelstam variables have been defined above and again we use $`\delta =300`$ MeV. Similar definitions hold for the $`B^0`$ decay modes. The results in Table III show basically no effect for the $`\overline{B}^0\rho ^\pm \pi ^{}`$ decay channels and a moderate effect for the $`\rho ^0\pi ^0`$ decay channel. The effect in this channel is of the order of 20% (resp. 50%) for $`\overline{B}^0`$ (resp. $`B^0`$) decay, for the choice $`g=0.60,h=0.70`$; for smaller values of the strong coupling constants the effect is reduced.
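The effective widths (55)–(57) restrict the Dalitz-plot integration to a band $`|\sqrt{s}-m_\rho |<\delta `$. The sketch below is purely schematic: it uses the standard three-body Dalitz boundary with all pions at the same mass and a flat matrix element, and simply estimates what fraction of the Dalitz-plot area such a band retains (masses and $`\delta =300`$ MeV as quoted in the text):

```python
import math

M_B, M_PI, M_RHO, DELTA = 5.279, 0.1396, 0.7755, 0.300  # GeV

def t_range(s):
    """Allowed range of t = m23^2 for a given s = m12^2 in B -> 3 pi
    (standard Dalitz-plot boundary; all pions taken with the same mass)."""
    rs = math.sqrt(s)
    e2 = rs / 2                               # E2* = (s - m1^2 + m2^2)/(2 sqrt(s)), m1 = m2
    e3 = (M_B**2 - s - M_PI**2) / (2 * rs)    # pion-3 energy in the (12) rest frame
    p2 = math.sqrt(max(e2**2 - M_PI**2, 0.0))
    p3 = math.sqrt(max(e3**2 - M_PI**2, 0.0))
    lo = (e2 + e3)**2 - (p2 + p3)**2
    hi = (e2 + e3)**2 - (p2 - p3)**2
    return lo, hi

def band_fraction(n=400):
    """Fraction of the Dalitz-plot area with |sqrt(s) - m_rho| < delta
    (flat matrix element, uniform grid in s)."""
    s_min, s_max = 4 * M_PI**2, (M_B - M_PI)**2
    total = cut = 0.0
    for i in range(n):
        s = s_min + (i + 0.5) * (s_max - s_min) / n
        lo, hi = t_range(s)
        width = max(hi - lo, 0.0)
        total += width
        if abs(math.sqrt(s) - M_RHO) < DELTA:
            cut += width
    return cut / total
```

This only illustrates the geometry of the cut; the actual branching-ratio integrals of the paper weight the region with the full amplitudes.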
Integration over the whole Dalitz plot, including all contributions, gives $$\mathrm{B}r(\overline{B}^0\pi ^+\pi ^{}\pi ^0)=2.6\times 10^{-5}$$ (58) confirming again that most of the branching ratio is due to the $`\rho `$-exchange (the first three lines of the $`\rho `$ column in Table III sum up to $`2.3\times 10^{-5}`$). To allow the measurement of $`\alpha `$, the experimental programmes will consider the asymmetries arising from the time–dependent amplitude: $$𝒜(t)=e^{-\mathrm{\Gamma }t/2}\left(\mathrm{cos}\frac{\mathrm{\Delta }mt}{2}A^{+0}\pm i\mathrm{sin}\frac{\mathrm{\Delta }mt}{2}\overline{A}^{+0}\right),$$ (59) where one chooses the $`\pm `$ sign according to the flavor of the $`B`$, and $`\mathrm{\Delta }m`$ is the mass difference between the two mass eigenstates in the neutral $`B`$ system. Here $`\overline{A}^{+0}`$ is the charge–conjugate amplitude. We have performed asymmetric integrations over the Dalitz plot for three variables: $`R_1`$, $`R_2`$ and $`R_3`$, which multiply, in the time–dependent asymmetry, respectively $`1`$, $`\mathrm{cos}\mathrm{\Delta }mt`$ and $`\mathrm{sin}\mathrm{\Delta }mt`$. We have found no significant effect due to the $`B^{}`$ or the $`B_0`$ resonance for $`R_1`$ and $`R_3`$. On the other hand, these effects are present in $`R_2`$, but $`R_2`$ is likely to be too small to be accurately measurable. ## IV Conclusions Our analysis shows that the effect of including the $`B`$ resonance polar diagrams is significant for the $`B^{}\pi ^{}\pi ^{}\pi ^+`$ mode and negligible for the other charged $`B`$ decay mode. This result is of some help in explaining the recent results from the CLEO Collaboration, since we obtain $$R=3.5\pm 0.8,$$ (60) to be compared with the experimental result in eq. (8). The $`\rho `$ resonance alone would produce a result up to a factor of 2 higher. Therefore we conclude that the polar diagrams examined in this paper are certainly relevant in the study of the charged $`B`$ decay into three pions.
In the case of neutral $`B`$ decays we have found that, as far as the branching ratios are concerned, the only decay mode where the contribution from the fake $`\rho `$’s (production of a pion and the $`B^{}`$ or the $`B_0`$ resonance) may be significant is the neutral $`\rho ^0\pi ^0`$ decay channel. As for the time–dependent asymmetry no significant effect is found. Therefore the $`B\pi \pi \pi `$ decay channel allows an unambiguous measurement of $`\alpha `$, with two provisos: 1) only the neutral $`B`$ decay modes are considered; 2) the $`\rho ^0\pi ^0`$ final state can be disregarded from the analysis. ###### Acknowledgements. We thank J. Charles, Y. Gao, J. Libby, A. D. Polosa and S. Stone for discussions.
# Qualitative Signals of New Physics in 𝐵-𝐵̄ Mixing ## I Introduction The strongest constraints on the CKM matrix likely to be achieved in the next three years will arise from the values of $`\mathrm{sin}2\beta `$ and $`x_s`$. Results from CDF, Babar and Belle should provide $`\mathrm{sin}(2\beta )`$ with an error of $`0.08`$ or less . If $`x_s`$ is not too large, then it is hoped that $`x_s`$ will be measured to better than $`10\%`$ in Run 2 of CDF . For the analysis of the CKM matrix the useful quantity is $$\frac{x_s}{x_d}=\left(\frac{V_{ts}}{V_{td}}\right)^2K,$$ (1) where $`K`$ is a SU(3) correction factor, calculated to be $`1.3`$ with a claim by lattice calculations of a $`7\%`$ error . In terms of CKM parameters , $$\mathrm{sin}2\beta =\frac{2\eta (1\rho )}{(1\rho )^2+\eta ^2},$$ (2) and $$\frac{x_s}{x_d}=\frac{K}{\lambda ^2R_t^2}$$ (3) where $$R_t=\sqrt{(1\rho )^2+\eta ^2}=\frac{\mathrm{sin}\gamma }{\mathrm{sin}(\beta +\gamma )}.$$ (4) These two results will produce a small allowed region for the CKM parameters, assuming that they are consistent with the present loose constraints from $`|V_{ub}|`$ and $`ϵ_K`$. It is interesting to note that both Eqs. (2) and (3) have to do with $`B\overline{B}`$ mixing. Thus, if there is a significant new physics contribution to $`B\overline{B}`$ mixing, then these measurements provide no constraint on CKM parameters. Rather, they would provide wrong values for the parameters, which we denote by $`\stackrel{~}{\rho }`$, $`\stackrel{~}{\eta }`$, $`\stackrel{~}{\beta }`$, $`\stackrel{~}{\gamma }`$, and $`\stackrel{~}{R}_t`$. 
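Equations (2)–(4) map the Wolfenstein parameters onto the observables (reading the extracted $`(1\rho )`$ as $`1-\rho `$, the minus signs having been lost in extraction). A compact sketch; $`K\simeq 1.3`$ is the quoted SU(3) factor, $`\lambda \simeq 0.22`$ is the standard Wolfenstein expansion parameter, and the test point is illustrative:

```python
import math

K_SU3, LAM = 1.3, 0.22  # SU(3) correction factor and Wolfenstein lambda

def observables(rho, eta):
    """sin(2 beta), R_t and x_s/x_d from the Wolfenstein parameters, Eqs. (2)-(4)."""
    rt = math.hypot(1 - rho, eta)             # Eq. (4)
    sin2beta = 2 * eta * (1 - rho) / rt**2    # Eq. (2)
    xs_over_xd = K_SU3 / (LAM**2 * rt**2)     # Eq. (3)
    return sin2beta, rt, xs_over_xd
```

As a cross-check, the same $`\mathrm{sin}2\beta `$ follows from $`\beta =\mathrm{arctan}[\eta /(1-\rho )]`$.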
Naturally, $$\stackrel{~}{R}_t=\sqrt{(1\stackrel{~}{\rho })^2+\stackrel{~}{\eta }^2}=\frac{\mathrm{sin}\stackrel{~}{\gamma }}{\mathrm{sin}(\stackrel{~}{\beta }+\stackrel{~}{\gamma })}.$$ (5) We also define $$\stackrel{~}{R}_b=\sqrt{\stackrel{~}{\rho }^2+\stackrel{~}{\eta }^2}=\frac{\mathrm{sin}\stackrel{~}{\beta }}{\mathrm{sin}(\stackrel{~}{\beta }+\stackrel{~}{\gamma })}.$$ (6) A new physics contribution to $`B_d\overline{B_d}`$ mixing, described by an effective superweak-like interaction $$G_{bd}\overline{b}𝒪d\overline{b}𝒪d+\text{h.c.},$$ (7) with $`G_{bd}`$ of order $`10^7G_F`$ to $`10^8G_F`$, will affect both Eqs. (2) and (3). Eq. (2) is also sensitive to new physics in $`B_s\overline{B_s}`$ mixing. Considering only new physics of the form in Eq. (7), the measurements of $`\mathrm{sin}2\beta `$ and $`x_s`$ really determine the $`B_d\overline{B_d}`$ matrix $`M_{12}`$: $`\text{Im}M_{12}`$ $``$ $`2\stackrel{~}{\eta }(1\stackrel{~}{\rho }),`$ (8) $`\left|M_{12}\right|`$ $``$ $`(1\stackrel{~}{\rho })^2+\stackrel{~}{\eta }^2,`$ (9) $`\text{Re}M_{12}`$ $``$ $`\pm \left[(1\stackrel{~}{\rho })^2\stackrel{~}{\eta }^2\right].`$ (10) The CKM fit requires the positive value; the negative value corresponds to the well known ambiguity $`\stackrel{~}{\beta }\pi /2\stackrel{~}{\beta }`$ . The present limit on $`x_s`$ yields the result that $`\stackrel{~}{R}_t<\mathrm{\hspace{0.17em}1}`$. If we assume that $`x_s`$ will be measured to be between $`20`$ and $`30`$, then $`\stackrel{~}{R}_t`$ will lie between $`1.0`$ and $`0.8`$, corresponding to $`\stackrel{~}{\gamma }`$ in the region between $`80^{}`$ and $`50^{}`$. On the other hand, it has been suggested on the basis of the measured $`B`$ decay rates that $`\gamma `$ is greater than $`90^{}`$ . If this were true, then it could be a sign of new physics. As a possible way of finding a sign of this new physics, we consider the asymmetry in the time dependent decays $`B_d(\overline{B_d})\pi ^+\pi ^{}`$. In Fig. 
1 the solid line shows the expected asymmetry for values of $`\stackrel{~}{R_t}`$ between $`1.0`$ and $`0.8`$, with $`\stackrel{~}{\beta }=25^{}`$ (this corresponds to values for $`\stackrel{~}{R_b}`$ which lie between $`0.422`$ and $`0.436`$). The values of the asymmetry are well above zero. In contrast, if the true $`\gamma 115^{}`$, then the expected asymmetry is approximately $`0.9`$. Thus, a discovery of such a large negative asymmetry in this scenario could be a signal of new physics in $`B\overline{B}`$ mixing. Note that the penguin contribution plays an important role in this analysis. In the absence of penguins the first asymmetry would be $`\mathrm{sin}2(\stackrel{~}{\beta }+\stackrel{~}{\gamma })`$, which is already significantly larger than $`\mathrm{sin}2(\stackrel{~}{\beta }+\gamma )`$. However, inclusion of the penguin separates the asymmetries further by increasing the first asymmetry well above $`\mathrm{sin}2(\stackrel{~}{\beta }+\stackrel{~}{\gamma })`$. This makes the difference even more striking. Details of this calculation are given in the appendix. We have used a method proposed by Silva and Wolfenstein in which one invokes SU(3) to relate the tree and penguin contributions in the $`B_d\pi ^+\pi ^{}`$ decays to those in the $`B_dK^\pm \pi ^{}`$ decays. The details of this calculation depend on SU(3), factorization and were originally performed assuming that the strong phase shift between the penguin and the tree contributions in the $`B_d\pi ^+\pi ^{}`$ and in the $`B_dK^\pm \pi ^{}`$ channels is equal to a common value $`\mathrm{\Delta }`$ and, moreover, that $`\mathrm{\Delta }=0^{}`$. Recently, this analysis has been revisited by Fleischer , allowing $`\mathrm{\Delta }`$ to take any value. Since we are interested in a large qualitative effect, small changes in the assumptions made in the calculation will not affect the conclusion. For example, setting $`\mathrm{\Delta }=45^{}`$ yields the dash-dotted curve in Fig. 1. 
There is, however, one critical assumption: the sign of the penguin term relative to that of the tree. If we choose the negative sign, or equivalently let $`\mathrm{\Delta }=180^{}`$, we obtain the dashed curve in Fig. 1. Assuming this sign, the asymmetry for $`\gamma =115^{}`$ is about $`0.7`$. Thus, except for low values of $`\stackrel{~}{R}_t`$ (large values of $`x_s`$), the large qualitative difference disappears. As discussed by Fleischer , allowing for any value of $`\mathrm{\Delta }`$, the measured asymmetry allows a limited range of values for $`\gamma `$. This leads us to consider the following three possibilities: * If a sizeable positive asymmetry is found, this would be consistent with values of $`\stackrel{~}{\gamma }`$ well below $`90^{}`$, as postulated in our scenario, and rule out values greater than $`90^{}`$. * A very large negative asymmetry would be consistent with a true value of $`\gamma `$ in the neighborhood of $`115^{}`$, as has been proposed , and could rule out values of $`\stackrel{~}{\gamma }`$ corresponding to $`\stackrel{~}{R}_t<0.95`$. In this case we could have a signal of new physics independent of any assumption about $`\mathrm{\Delta }`$. * A negative asymmetry in the neighborhood of $`0.5`$ could be interpreted in different ways: (1) if one believes from factorization that $`\mathrm{cos}\mathrm{\Delta }`$ is close to $`+1`$, then one obtains $`\gamma `$ slightly above $`90^{}`$ and this could be a sign of new physics in our scenario; (2) the true value of $`\gamma `$ is really $`\stackrel{~}{\gamma }`$ for values of $`\stackrel{~}{R}_t`$ above $`0.85`$ but $`\mathrm{cos}\mathrm{\Delta }`$ is close to $`1`$. It is interesting to note that, if $`\mathrm{cos}\mathrm{\Delta }`$ is close to $`1`$, then the motivation for a value $`\gamma >90^{}`$ given in Ref. may disappear. 
As a particular example, let us take $`\stackrel{~}{\eta }=0.4`$ and $`(1\stackrel{~}{\rho })=0.8`$, corresponding to $`\stackrel{~}{R}_t=0.89`$, $`\stackrel{~}{\gamma }=63.5^{}`$, and $`\stackrel{~}{\beta }=26.5^{}`$. If the true values were $`\eta =0.4`$ and $`(1\rho )=1.2`$, $`\beta =18.5^{}`$, and $`\gamma =116.5^{}`$, then, in order to fit the $`B_d\overline{B_d}`$ mixing from Eqs. (8)–(10), with the positive sign in Eq. (10), we would need $`{\displaystyle \frac{\left(\text{Im}M_{12}\right)_{sw}}{\left(\text{Im}M_{12}\right)_{\mathrm{CKM}}}}`$ $`=`$ $`{\displaystyle \frac{1}{3}}`$ (11) $`{\displaystyle \frac{\left(\text{Re}M_{12}\right)_{sw}}{\left(\text{Re}M_{12}\right)_{\mathrm{CKM}}}}`$ $`=`$ $`{\displaystyle \frac{5}{8}},`$ (12) where the subscript $`sw`$ denotes a superweak-like contribution, as in Eq. (7), and the subscript $`\mathrm{CKM}`$ denotes the Standard Model contribution. Another possible interpretation of the large negative asymmetry in this scenario would be that the true value of $`\text{Re}M_{12}`$ is given by the minus sign in Eq. (10). In that case, the phase $`2\stackrel{~}{\beta }`$ would be about $`130^{}`$ instead of $`50^{}`$. Then the true value of $`\gamma `$ would have to be around $`70^{}`$ in order to explain the large negative asymmetry. Therefore, the main new contribution would be $$\left(\text{Re}M_{12}\right)_{sw}2\left(\text{Re}M_{12}\right)_{\mathrm{CKM}}.$$ (13) ## II Conclusions In the Standard Model the best determination of the CKM matrix elements, from $`\mathrm{sin}2\beta `$ and $`x_s`$, depends on the absence of new physics effects in the mixing. Therefore, it is possible that experiments that are really sensitive to the phases of decay amplitudes may show qualitative deviations from expectations based on $`\mathrm{sin}2\beta `$ and $`x_s`$. One example is discussed in this paper, where we have stressed the usefulness of a qualitative measurement of the CP-violating asymmetry in $`B_d\pi ^+\pi ^{}`$ decays. 
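A quick numerical check of the particular example above: the ratios quoted in Eqs. (11)–(12) follow from Eqs. (8)–(10) by simple arithmetic, and the superweak pieces come out with sign opposite to the CKM ones (magnitudes 1/3 and 5/8):

```python
def m12_parts(rho, eta):
    # Eqs. (8)-(10), up to a common positive normalization
    im = 2 * eta * (1 - rho)
    re = (1 - rho)**2 - eta**2
    return im, re

# "Measured" (tilde) values and assumed true CKM values from the text
im_meas, re_meas = m12_parts(rho=0.2, eta=0.4)    # 1 - rho~ = 0.8, eta~ = 0.4
im_ckm,  re_ckm  = m12_parts(rho=-0.2, eta=0.4)   # 1 - rho  = 1.2, eta  = 0.4

# Superweak contribution = measured - CKM
im_sw, re_sw = im_meas - im_ckm, re_meas - re_ckm
ratio_im = im_sw / im_ckm   # -> -1/3
ratio_re = re_sw / re_ckm   # -> -5/8
```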
###### Acknowledgements. This work is supported in part by the Department of Energy under contracts DE-AC03-76SF00515 and DE-FG02-91-ER-40682. The work of J. P. S. is supported in part by Fulbright, Instituto Camões, and by the Portuguese FCT, under grant PRAXIS XXI/BPD/20129/99 and contract CERN/S/FIS/1214/98. L. W. thanks SLAC for support during the time this work was carried out. ## Determining the asymmetry in $`B_d\pi ^+\pi ^{}`$ decays For completeness, we include in this section a description of the method proposed by Silva and Wolfenstein to estimate the asymmetry in the time dependent decays $`B_d(\overline{B_d})\pi ^+\pi ^{}`$. In this method one uses two observables as input: $`\mathrm{sin}2\stackrel{~}{\beta }`$ obtained from the CP-violating asymmetry in $`B_dJ/\psi K_S`$ decays; and $$R\frac{\mathrm{\Gamma }[B_dK^+\pi ^{}]+\mathrm{\Gamma }[\overline{B_d}K^{}\pi ^+]}{\mathrm{\Gamma }[B_d\pi ^+\pi ^{}]+\mathrm{\Gamma }[\overline{B_d}\pi ^+\pi ^{}]},$$ (14) recently measured by CLEO to be $`R=4.0\pm 2.2`$ , where we have added the errors in quadrature. In addition, one uses the spectator approximation and SU(3) (in fact a U-spin rotation interchanging the $`d`$ and $`s`$ quarks) to relate the tree dominated process $`B_d\pi ^+\pi ^{}`$ to the penguin dominated $`B_dK^\pm \pi ^{}`$. Here we will follow the notation of Ref. . 
The relevant decay amplitudes may be written as $`A(B_d\pi ^+\pi ^{})`$ $`=`$ $`{\displaystyle \frac{V_{ub}^{}V_{ud}}{|V_{ub}V_{ud}|}}T+{\displaystyle \frac{V_{tb}^{}V_{td}}{|V_{tb}V_{td}|}}Pe^{i\mathrm{\Delta }}=e^{i\gamma }T\left[1+re^{i(\beta \gamma +\mathrm{\Delta })}\right],`$ (15) $`A(B_dK^+\pi ^{})`$ $`=`$ $`{\displaystyle \frac{V_{ub}^{}V_{us}}{|V_{ub}V_{us}|}}T^{}+{\displaystyle \frac{V_{tb}^{}V_{ts}}{|V_{tb}V_{ts}|}}P^{}e^{i\mathrm{\Delta }^{}}=e^{i\gamma }T^{}\left[1r^{}e^{i(\gamma +\mathrm{\Delta }^{})}\right].`$ (16) Here $`\mathrm{\Delta }`$ and $`\mathrm{\Delta }^{}`$ are strong phases, while $`r=P/T`$ and $`r^{}=P^{}/T^{}`$ are the ratios of the magnitudes of the penguin to the tree contributions in each channel, respectively. Therefore , $$R=\frac{T_{}^{}{}_{}{}^{2}}{T^2}\frac{r_{}^{}{}_{}{}^{2}2r^{}\mathrm{cos}\gamma \mathrm{cos}\mathrm{\Delta }^{}+1}{1+2r\mathrm{cos}\left(\beta +\gamma \right)\mathrm{cos}\mathrm{\Delta }+r^2}.$$ (17) Using U-spin we may relate the two decay channels through $`\mathrm{\Delta }^{}`$ $`=`$ $`\mathrm{\Delta },`$ (18) $`{\displaystyle \frac{T^{}}{T}}`$ $`=`$ $`\left|{\displaystyle \frac{V_{us}}{V_{ud}}}\right|{\displaystyle \frac{f_K}{f_\pi }}\lambda {\displaystyle \frac{f_K}{f_\pi }},`$ (19) $`{\displaystyle \frac{P^{}}{P}}`$ $`=`$ $`\left|{\displaystyle \frac{V_{ts}}{V_{td}}}\right|{\displaystyle \frac{f_K}{f_\pi }}{\displaystyle \frac{1}{\lambda R_t}}{\displaystyle \frac{f_K}{f_\pi }}.`$ (20) In the last two expressions we have used factorization to estimate the ratio of matrix elements in the two channels. This provides a first order estimate of the SU(3) breaking effects. 
Therefore, defining $$a\frac{r^{}}{r}=\frac{1}{\lambda ^2R_t},$$ (21) we obtain $$R=\left(\lambda \frac{f_K}{f_\pi }\right)^2\frac{a^2r^22ar\mathrm{cos}\gamma \mathrm{cos}\mathrm{\Delta }+1}{1+2r\mathrm{cos}(\beta +\gamma )\mathrm{cos}\mathrm{\Delta }+r^2}.$$ (22) Here, we have used the true values of $`\beta `$ and $`\gamma `$ ($`R_t`$) as they appear in the CKM matrix. Eq. (22) determines $`r`$. Finally, the time dependent asymmetry in $`B_d(\overline{B_d})\pi ^+\pi ^{}`$ decays is given by $$\mathrm{asym}=\frac{2\text{Im}\lambda _f}{1+|\lambda _f|^2}$$ (23) where $$\lambda _f=\frac{q}{p}\frac{A(\overline{B_d}\pi ^+\pi ^{})}{A(B_d\pi ^+\pi ^{})}=e^{2i(\stackrel{~}{\beta }+\gamma )}\frac{1+re^{i(\beta +\gamma +\mathrm{\Delta })}}{1+re^{i(\beta \gamma +\mathrm{\Delta })}}.$$ (24) Notice that we have used $`q/p=\mathrm{exp}(2i\stackrel{~}{\beta })`$ (as opposed to the true CKM phase $`\beta `$) because any new physics appearing in the mixing will affect the interference CP-violating asymmetries in both the $`B_d\pi ^+\pi ^{}`$ and $`B_dJ/\psi K_S`$ channels, through the same $`q/p`$. In the presence of new physics $`\beta `$ in Eqs. (15) and (16) need not be the same as $`\stackrel{~}{\beta }`$. Fig. 1 was obtained in the following way. We have set $`\stackrel{~}{\beta }=25^{}`$. Then, for each value of $`\stackrel{~}{R}_t`$ between $`0.8`$ and $`1`$, we calculate $`\stackrel{~}{\gamma }`$ and $`\stackrel{~}{R}_b`$. Our choice of $`\stackrel{~}{\beta }`$ guarantees that $`\stackrel{~}{R}_b`$ lies within the range currently allowed by the measurement of $`|V_{ub}|`$. We have taken $`R=3`$, which is in the lower side of the range allowed by CLEO , in order to facilitate comparison with Ref. . Next, we consider the scenario in which the true value of $`\gamma `$ is really $`115^{}`$. The corresponding value of $`\beta `$ is obtained by requiring $`R_b=\stackrel{~}{R_b}`$, thus continuing to satisfy the $`|V_{ub}|`$ measurement. 
For each case, we can determine $`r`$ from Eq. (22), $`\lambda _f`$ from Eq. (24), and the asymmetry from Eq. (23).
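The appendix procedure can be scripted: Eq. (22) is a quadratic in $`r`$ once $`R`$, $`\beta `$, $`\gamma `$, $`\mathrm{\Delta }`$ and $`a`$ are fixed, and the asymmetry then follows from Eqs. (23)–(24). The sketch below restores the minus signs lost in this extraction and adopts one common phase convention for $`q/p`$ and $`\lambda _f`$; since such conventions vary, treat it as illustrative rather than a faithful reproduction of Fig. 1:

```python
import cmath, math

LAM, FK_OVER_FPI = 0.22, 1.22  # Wolfenstein lambda and f_K/f_pi (approximate)

def solve_r(R, beta, gamma, delta, rt):
    """Solve Eq. (22) for the penguin-to-tree ratio r (angles in radians)."""
    L = (LAM * FK_OVER_FPI)**2
    a = 1.0 / (LAM**2 * rt)                   # Eq. (21): a = r'/r = 1/(lambda^2 R_t)
    cd = math.cos(delta)
    # R (1 + 2 r cos(beta+gamma) cd + r^2) = L (a^2 r^2 - 2 a r cos(gamma) cd + 1)
    A = R - L * a**2
    B = 2 * cd * (R * math.cos(beta + gamma) + L * a * math.cos(gamma))
    C = R - L
    disc = math.sqrt(B**2 - 4 * A * C)
    roots = [(-B + disc) / (2 * A), (-B - disc) / (2 * A)]
    return min(r for r in roots if r > 0)     # smallest physical root

def asymmetry(r, beta_tilde, beta, gamma, delta):
    """Eqs. (23)-(24); beta_tilde enters via q/p, beta via the decay amplitudes."""
    lam_f = (cmath.exp(-2j * (beta_tilde + gamma))
             * (1 + r * cmath.exp(1j * (beta + gamma + delta)))
             / (1 + r * cmath.exp(1j * (beta - gamma + delta))))
    return 2 * lam_f.imag / (1 + abs(lam_f)**2)
```

With the Standard Model identification $`\beta =\stackrel{~}{\beta }`$ the two phase arguments coincide, reproducing the no-new-physics case.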
# Eclipse maps of spiral shocks in the accretion disc of IP Pegasi in outburst ## 1 Introduction Accretion discs are widespread in astrophysical environments, from sheltering the birth of stars to providing the energetics for the most violent phenomena such as relativistic jets. Despite their general importance, and although considerable effort both in observation and theory has been invested over the past decade, the structure and underlying physics of accretion discs remain poorly understood. One of the major unsolved problems concerns the nature of the anomalous viscosity mechanism responsible for the inward spiraling of the disc material (Frank, King & Raine 1992). The best prospects for progress in understanding accretion disc physics are possibly found in mass-exchanging binaries such as Cataclysmic Variables (CVs). In these close binaries mass is fed to a non-magnetic ($`B\stackrel{<}{}10^5`$ G) white dwarf via an accretion disc by a Roche lobe filling companion star (the secondary). The sub-class of dwarf novae comprises low mass-transfer CVs which show recurrent outbursts of 2–5 magnitudes on timescales of months, either due to an instability in the mass transfer from the secondary or due to a thermal instability in the accretion disc which switches the disc from a low- to a high-viscosity regime (Warner 1995 and references therein). Spiral shocks have been advocated by various researchers as a possible mechanism for transport of angular momentum in accretion discs (Savonije, Papaloizou & Lin 1994) and may be the key, together with magnetic viscosity (Hawley, Balbus & Winters, 1999), to understanding the viscosity mechanism. The recent discovery of spiral shocks in the accretion disc of the dwarf nova IP Pegasi in outburst – from Doppler tomography of emission lines (Steeghs, Harlaftis & Horne 1997, 1998; Harlaftis et al. 1999) – confirmed the results of hydrodynamical simulations (Armitage & Murray 1998, Stehle 1999).
The spiral shocks are produced in the outer regions of the disc by the tides raised by the secondary star. During the outburst the disc expands and its outer parts feel more effectively the gravitational attraction of the secondary star, leading to the formation of spiral arms. Here we report on the eclipse mapping analysis of the data obtained by Harlaftis et al. (1999; see there for observations and data reduction). Our goal is to confirm the existence, constrain the location, and investigate the spatial structure of the spiral shocks observed in the Doppler tomograms. Section 2 presents the data and gives details of the analysis procedures. In section 3 we present a set of simulations with the eclipse mapping method aimed to clarify the interpretation of the results of section 4 in terms of real spiral shocks. A summary of our findings is given in section 5. ## 2 Data Analysis ### 2.1 Lightcurves A time-series of high-resolution, optical spectrophotometry ($`\mathrm{\Delta }\lambda =4354-4747`$ Å, velocity dispersion of $`27\mathrm{km}\mathrm{s}^{-1}`$ per pixel) covering one eclipse of IP Peg was obtained during the third day of the November 1996 outburst. The reader is referred to Harlaftis et al. (1999) for a detailed description of the dataset and of the reduction procedures. Lightcurves were extracted for the blue continuum ($`4365-4440`$ Å) and for the C III+N III $`\lambda 4650`$ (Bowen blend) and He II $`\lambda 4686`$ lines and phase folded according to the sinusoidal ephemeris of Wolf et al. (1993), $$T_{\mathrm{mid}}(\mathrm{HJD})=\mathrm{2\hspace{0.17em}445\hspace{0.17em}615.4156}+\mathrm{0.158\hspace{0.17em}206\hspace{0.17em}16}E+(OC)$$ (1) where $$(OC)=1.0903\times 10^{-3}\mathrm{sin}\left[2\pi \frac{E-10258}{10850.88}\right].$$ The line lightcurves were continuum subtracted and, therefore, correspond to net line emission. The three lightcurves are shown in Fig. 1 as gray open squares.
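The ephemeris of Eq. (1) is straightforward to evaluate; a sketch (times in days; the O–C amplitude is of order $`10^{-3}`$ d, and its overall sign, ambiguous in this extraction, is taken as printed):

```python
import math

T0, P = 2445615.4156, 0.15820616             # HJD and period in days (Wolf et al. 1993)
A_OC, E0, P_OC = 1.0903e-3, 10258, 10850.88  # O-C amplitude (d), cycle offset, period (cycles)

def t_mid(E):
    """Mid-eclipse time (HJD) for cycle number E, Eq. (1)."""
    oc = A_OC * math.sin(2 * math.pi * (E - E0) / P_OC)
    return T0 + P * E + oc
```

The sinusoidal term repeats every 10850.88 cycles and shifts mid-eclipse by at most about 1.6 minutes.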
The C III+N III lightcurve shows a peculiar double-stepped eclipse shape revealing the presence of two asymmetric brightness sources displaced from disc centre. Although less pronounced, the same morphology can also be seen in the continuum lightcurve. The shape of the He II eclipse is more symmetrical than that of the other passbands but mid-eclipse occurs earlier with respect to the continuum eclipse, indicating that the line surface distribution is also asymmetric. The continuum and He II lightcurves show a conspicuous orbital modulation with maximum at phase $`\varphi 0.15`$ cycle and minimum at $`\varphi +0.15`$ cycle. This is not seen in the C III+N III lightcurve although an increase in flux is clearly visible after phase $`\varphi =+0.22`$ cycle. For any reasonable mass ratio, $`q<1`$ \[$`q`$=0.5 for IP Peg, Wood & Crawford (1986)\], the shadow of the secondary star covers regions outside the primary lobe for orbital phases $`|\varphi |>0.2`$ cycle. Therefore, it is hard to explain the observed modulation in terms of occultation by the secondary star unless the eclipsed source lies outside the primary lobe. Thereafter, we assign the orbital modulation to gas obscuration by the spiral arm seen at maximum between phases 0.0-0.25 cycle (see section 4). Out-of-eclipse brightness changes are not accounted for by the eclipse mapping method, which assumes that all variations in the eclipse lightcurve are due to the changing occultation of the emitting region by the secondary star (but see Bobinger et al. 1996 for an example of how to include orbital modulations in the eclipse mapping scheme). Orbital variations were therefore removed from the lightcurves by fitting a spline function to the phases outside eclipse, dividing the lightcurve by the fitted spline, and scaling the result to the spline function value at phase zero. This procedure removes orbital modulations with only minor effects on the eclipse shape itself. The corrected lightcurves are shown in Fig. 
1 as filled circles. For the purpose of the eclipse mapping analysis the lightcurves were limited to the phase range ($`-0.18,+0.28`$), since data outside eclipse are basically used to set the out-of-eclipse level, and flickering in this phase range only adds unnecessary noise to the fit. ### 2.2 Eclipse maps The eclipse mapping method is an inversion technique that uses the information contained in the shape of the eclipse to recover the surface brightness distribution of the eclipsed accretion disc. The reader is referred to Horne (1985), Rutten et al. (1992) and Baptista & Steiner (1993) for the details of the method. As our eclipse map we adopted a grid of $`51\times 51`$ pixels centered on the primary star with side 2 R<sub>L1</sub>, where R<sub>L1</sub> is the distance from the disc centre to the inner Lagrangian point. The eclipse geometry is controlled by a pair of $`q`$ and $`i`$ values. The mass ratio $`q`$ defines the shape and the relative size of the Roche lobes. The inclination $`i`$ determines the shape and extent of the shadow of the secondary star as projected onto the orbital plane. We obtained reconstructions for two sets of parameters, ($`q=0.5,i=81\text{}`$) (Wood & Crawford 1986) and ($`q=0.58,i=79.5\text{}`$) (Marsh 1988), which correspond to an eclipse width of the disc centre of $`\mathrm{\Delta }\varphi =0.086`$ (Wood & Crawford 1986; Marsh & Horne 1990). These combinations of parameters ensure that the white dwarf is at the centre of the map. There is no perceptible difference between eclipse maps obtained with either geometry. Hence, for the remainder of the paper we will refer to and show the results for ($`q=0.5,i=81\text{}`$). The lightcurves were analyzed with eclipse mapping techniques to solve for a map of the disc brightness distribution and for the flux of an additional uneclipsed component in each passband. 
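The modulation-removal procedure of section 2.1 (spline fit to the out-of-eclipse phases, division, rescaling to the phase-zero spline value) can be sketched as follows. The eclipse window and the synthetic lightcurve are assumptions of this illustration, not the real data:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

def remove_orbital_modulation(phase, flux, ecl=(-0.1, 0.1)):
    """Fit a spline to the out-of-eclipse points, divide the
    lightcurve by the fitted spline, and rescale to the spline
    value at phase zero."""
    out = (phase < ecl[0]) | (phase > ecl[1])
    spl = UnivariateSpline(phase[out], flux[out], s=0)
    return flux / spl(phase) * spl(0.0)

# synthetic lightcurve: sinusoidal modulation plus an eclipse dip
ph = np.linspace(-0.2, 0.3, 201)
fl = 1.0 + 0.2 * np.sin(2 * np.pi * ph)
fl[np.abs(ph) < 0.08] *= 0.4
corrected = remove_orbital_modulation(ph, fl)
```

After the correction the out-of-eclipse level is flat, while the eclipse shape itself is only marginally affected.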
The uneclipsed component accounts for all light that is not contained in the eclipse map in the orbital plane (i.e., light from the secondary star and/or a vertically extended disc wind). The reader is referred to Rutten et al. (1992) and Baptista, Steiner & Horne (1996) for a detailed description of and tests with the uneclipsed component. For the reconstructions we adopted the default of limited azimuthal smearing of Rutten et al. (1992), which is better suited for recovering asymmetric structures than the original default of full azimuthal smearing (see Baptista et al. 1996). Lightcurves, fitted models and grayscale plots of the resulting eclipse maps are shown in Fig. 2 and will be discussed in detail in section 4. ## 3 Eclipse mapping simulations We performed various simulations with asymmetric sources in order (i) to investigate how the presence of spiral structures in the accretion disc affects the shape of the eclipse lightcurve, and (ii) to evaluate the ability of the eclipse mapping method to reconstruct these structures in the eclipse maps. For the simulations we adopted the geometry of IP Peg ($`q=0.5`$ and $`i=81\text{}`$) and constructed lightcurves with the same signal-to-noise ratio and orbital phases as the real data of section 2. Figure 3 shows the results of the simulations. Asymmetric compact sources (model #1) result in eclipse lightcurves with rapid brightness changes at ingress/egress phases. The azimuthal smearing effect characteristic of the eclipse mapping method is responsible for the distortion which makes the compact sources appear ‘blurred’ in azimuth. Nevertheless, their radial and azimuthal locations are satisfactorily recovered. Brightness distributions with spiral structures (models #2 to #4) result in eclipse shapes with characteristic ‘bulges’, whose extension and location in phase reflect the orientation and radial extent of the spiral arms. 
Due to the azimuthal smearing effect these structures are reproduced in the form of asymmetric arcs, whose maximum brightness and radial position yield information about the orientation, position and radial extent of the original spiral arms. The addition of a symmetric brightness source (i.e., centred in the eclipse map, model #5) can dilute the presence of spiral arms. In this case the eclipse shape is smoother and more symmetric in comparison with those of models #2-4 and the asymmetric arcs are less clearly visible in the eclipse map, mixing with the brightness distribution of the central source. These simulations show that the eclipse mapping method is able to reproduce asymmetric light sources such as spiral arms (provided the asymmetric sources are properly eclipsed) and that the asymmetric structures seen in the eclipse maps of Fig. 2 are not caused by artifacts of the method. Models #3 to #5 are the relevant ones for the purpose of comparing the results of the simulations with those from the IP Peg data (Fig. 2). The morphology of the continuum and C III+N III lightcurves is similar to that of models #3 and #4, while the He II lightcurve resembles that of model #5. ## 4 Results Data and model lightcurves are shown in the left panels of Fig. 2. Horizontal dashed lines indicate the uneclipsed component in each case. The uneclipsed component corresponds to about 12, 1 and 1 per cent of the total flux, respectively for the continuum, C III+N III and He II curves. While a non-negligible fraction of the light in the continuum probably arises from an emitting region outside of the orbital plane (possibly a disc wind), the net He II and C III+N III emission mostly arises from (or close to) the orbital plane. The middle panels of Fig. 2 show eclipse maps in a logarithmic grayscale. The maps in the right panels show the asymmetric part of the maps in the middle panels and are obtained by calculating and subtracting azimuthally-averaged intensities at each radius. 
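The construction of the asymmetric maps, subtracting the azimuthally-averaged intensity at each radius, can be sketched on a $`51\times 51`$ grid; the radial binning used here is an assumption of the sketch:

```python
import numpy as np

def asymmetric_part(intensity, r, nbins=25):
    """Subtract, at each radius, the azimuthally-averaged intensity,
    leaving only the asymmetric component of the map."""
    edges = np.linspace(0.0, r.max() + 1e-9, nbins + 1)
    idx = np.minimum(np.digitize(r, edges) - 1, nbins - 1)
    sym = np.zeros(nbins)
    for i in range(nbins):
        if np.any(idx == i):
            sym[i] = intensity[idx == i].mean()
    return intensity - sym[idx]

# pixel coordinates of a 51 x 51 map of side 2 R_L1 (section 2.2)
c = (np.arange(51) - 25) * (2.0 / 51)
x, y = np.meshgrid(c, c)
r = np.hypot(x, y)
```

An axisymmetric map yields a null asymmetric part, while a localized source (an arc, a bright spot) survives the subtraction almost unchanged.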
The continuum and C III+N III lightcurves display bulges similar to those of the lightcurves of models #3 and #4 (Section 3) and result in eclipse maps with two clearly visible asymmetric arcs, which are interpreted as being the spiral shocks seen in the Doppler tomograms of Harlaftis et al. (1999). In comparison with the models of Fig. 3, the orientation of the arcs suggests that the spirals are aligned in a direction perpendicular to the major axis of the binary (models #3 and #4). The arcs show an azimuthal extent of $`90\text{}`$ and extend from the intermediate to the outer disc regions ($`R\simeq 0.2`$–$`0.6R_{\mathrm{L1}}`$). Therefore, the outer radius of the spirals is of the same order as the disc outburst radius ($`R_d\simeq 0.34a\simeq 0.6R_{L1}`$) inferred by Wood et al. (1989). The eclipse maps show no evidence of the bright spot at the disc rim and no enhanced emission along the gas stream trajectory. The azimuthal location of the arcs is consistent with the results from hydrodynamical simulations and from the Doppler tomography. The arc in the upper left quadrant of the eclipse map (hereafter arc 1) corresponds to the spiral arm whose maximum occurs at phases 0.5–0.75 while the arc in the lower right quadrant (arc 2) corresponds to the spiral arm seen at maximum intensity at phases 0.0–0.25 (orbital phase increases clockwise in the eclipse maps of Fig. 2 and phase zero coincides with the inner Lagrangian point L1). The arcs are not symmetrical with respect to the centre of the disc. In C III+N III, arc 2 is further away from disc centre than arc 1 – in agreement with the Doppler tomography, which indicates smaller velocities for spiral arm 2 ($`\simeq 550`$ km s<sup>-1</sup>) than for spiral arm 1 ($`\simeq 700`$ km s<sup>-1</sup>). The lightcurve of He II is quite symmetrical with less pronounced bulges than in C III+N III, resulting in an eclipse map consisting of a symmetrical, centred brightness distribution and asymmetric arcs at different distances from disc centre. 
The outermost arc (arc 2) is more easily seen in the eclipse map, while the emission from the innermost arc (arc 1) is blended with that of the central source. Nevertheless, arc 1 is clearly seen in the asymmetric part of the He II map shown in the right panel of Fig. 2. The symmetrical emission component is probably related to the low-velocity component seen in He II Doppler tomograms and is suggested to be due to gas outflow in a wind emanating from the inner parts of the disc (see also Marsh & Horne 1990; for an alternative interpretation, a slingshot prominence from the secondary star, see Steeghs et al. 1996). The He II arcs contribute about 16 per cent of the total flux of the eclipse map – in good agreement with the results from the Doppler tomography, which indicate that the spirals contribute $`\simeq 15`$ per cent of the total He II emission (Harlaftis et al. 1999). In comparison, the arcs in the C III+N III and continuum maps contribute, respectively, about 30 and 13 per cent of the total flux. We quantify the properties of the asymmetric arcs by dividing the eclipse map into azimuthal slices (i.e., ‘slices of pizza’) and computing the distance at which the intensity is maximum for each azimuth. This exercise allows us to trace the distribution in radius and azimuth of the spiral structures. The results are plotted in Fig. 4 as a function of orbital phase. The diagrams for He II were computed from its asymmetric map and are noisier than those for C III+N III because most ($`\simeq 84`$ per cent) of the flux in the eclipse map is subtracted with the symmetric component. The two spiral shocks are clearly visible in the intensity diagrams, as well as their distinct locations with respect to disc centre. In He II the outer spiral (arc 2) is brighter than the inner spiral (arc 1), in line with the results of Harlaftis et al. (1999), while in C III+N III arc 1 is brighter than arc 2. 
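The ‘slices of pizza’ diagnostic can be sketched as follows; the number of slices and the test ring are illustrative assumptions:

```python
import numpy as np

def radius_of_maximum(intensity, r, theta, nslices=36):
    """For each azimuthal slice of the map, return the radius at
    which the intensity peaks, tracing structures in radius and
    azimuth."""
    edges = np.linspace(-np.pi, np.pi, nslices + 1)
    az = 0.5 * (edges[:-1] + edges[1:])
    rmax = np.full(nslices, np.nan)
    for i in range(nslices):
        sel = (theta >= edges[i]) & (theta < edges[i + 1])
        if np.any(sel):
            rmax[i] = r[sel][np.argmax(intensity[sel])]
    return az, rmax

# 51 x 51 map of side 2 R_L1, as in section 2.2
c = (np.arange(51) - 25) * (2.0 / 51)
x, y = np.meshgrid(c, c)
r, theta = np.hypot(x, y), np.arctan2(y, x)
# a narrow ring at r = 0.4 R_L1 should peak at that radius in every slice
ring = np.exp(-((r - 0.4) ** 2) / (2 * 0.03 ** 2))
az, rmax = radius_of_maximum(ring, r, theta)
```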
The middle panels give the radial position of the maximum intensity as a function of binary phase. For C III+N III, the maximum intensity along arc 1 lies at a constant distance of $`\simeq 0.28R_{L1}`$ from disc centre, while the maximum intensity of arc 2 occurs at $`\simeq 0.55R_{L1}`$. The numbers are similar for He II. We computed equivalent Keplerian velocities for each radius assuming $`M_1=1.0\pm 0.1M_{\odot }`$ and $`R_{\mathrm{L1}}=0.81R_{\odot }`$ (Marsh & Horne 1990). The results are plotted in the upper panel of Fig. 4. Gray lines show the corresponding uncertainties at the 1-$`\sigma `$ limit. For comparison, the results from the C III+N III and He II Doppler tomograms (Harlaftis et al. 1999; see their fig. 4 for the He II diagram) are shown as dashed lines. Since the C III+N III Doppler map is much noisier and more blurred than the He II Doppler map, the corresponding diagram is noisier and less reliable than the He II diagram in the right panel. We obtain velocities in the range $`850`$–$`1050`$ km s<sup>-1</sup> for spiral 1, compared to the observed $`400`$–$`770`$ km s<sup>-1</sup>, and in the range $`650`$–$`800`$ km s<sup>-1</sup> for spiral 2, compared to the observed $`400`$–$`550`$ km s<sup>-1</sup> (observed values from Harlaftis et al. 1999). The Keplerian velocities calculated from the radial position of the shocks are systematically larger than those inferred from the Doppler tomography, suggesting that the gas in the spiral shocks has sub-Keplerian velocities. This is in line with the results of the hydrodynamical simulations of Steeghs & Stehle (1999, see their fig. 5), which predict velocities lower than Keplerian (by as much as 15 per cent) in the outer disc near the spirals. We remark that with the white dwarf mass and Roche lobe radius of IP Peg the Keplerian velocity at the largest possible disc radius ($`R\simeq 0.85R_{\mathrm{L1}}`$) is about 530 km s<sup>-1</sup>. Therefore, if the observed velocities of Harlaftis et al. (1999) do reflect Keplerian motions, then the emitting gas should be at the border of and even outside the primary lobe. 
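The Keplerian velocities quoted above follow directly from $`v_K=\sqrt{GM_1/R}`$ with the primary mass and Roche lobe radius given in the text; a quick numerical check:

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # solar mass, kg
R_SUN = 6.957e8      # solar radius, m

def v_kepler(r_in_RL1, M1=1.0, RL1=0.81):
    """Keplerian velocity (km/s) at radius r (in units of R_L1)
    around a primary of M1 solar masses, with R_L1 in solar radii;
    the defaults are the values quoted in the text."""
    r = r_in_RL1 * RL1 * R_SUN
    return math.sqrt(G * M1 * M_SUN / r) / 1e3

v_arc1 = v_kepler(0.28)   # ~920 km/s, arc 1 in C III+N III
v_arc2 = v_kepler(0.55)   # ~650 km/s, arc 2
v_edge = v_kepler(0.85)   # ~530 km/s, largest possible disc radius
```

The three values fall at the quoted ranges: 850–1050 km s<sup>-1</sup> for arc 1, 650–800 km s<sup>-1</sup> for arc 2, and about 530 km s<sup>-1</sup> at the disc edge.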
Occultation of light from the inner disc regions by the spirals might produce the out-of-eclipse variations seen in Fig. 1. This is expected since the spiral waves are also vertically extended. From the azimuthal position of the spirals in the eclipse map, the maximum occultation (i.e., the minimum of the orbital modulation) should occur when the spirals are seen face on, at orbital phases $`-0.3`$ and $`+0.15`$ cycle, in agreement with the modulations seen in Fig. 1. ## 5 Conclusions We analyzed eclipse lightcurves of the dwarf nova IP Peg during the November 1996 outburst in order to confirm the existence, constrain the location and investigate the spatial structure of the spiral shocks observed in the Doppler tomograms. Our main results can be summarized as follows: * Eclipse maps in the blue continuum and in the C III+N III emission line reveal two asymmetric arcs at different azimuth and radius from disc centre which are consistent with the spiral shocks seen in the Doppler tomograms. The arcs show an azimuthal extent of $`90\text{}`$ and extend from the intermediate to the outer disc regions ($`R\simeq 0.2`$–$`0.6R_{L1}`$). The outer radius of the spirals is of the same order as the disc outburst radius ($`R_d\simeq 0.34a\simeq 0.6R_{L1}`$). * The He II eclipse map is composed of a central brightness source plus asymmetric arcs at different distances from disc centre. The symmetric component probably corresponds to the low-velocity component seen in He II Doppler tomograms and is understood in terms of gas outflow in a wind emanating from the inner parts of the disc. * The spirals contribute about 16 and 30 per cent of the total line flux, respectively, for the He II and C III+N III lines, and 13 per cent in the continuum. * The Keplerian velocities derived from the radial position of the shocks are systematically larger than those inferred from the Doppler tomography, indicating that the gas in the spiral shocks has sub-Keplerian velocities. 
## Acknowledgments We thank an anonymous referee for helpful discussions and comments. This work was partially supported by the PRONEX/Brazil program through the research grant FAURGS/FINEP 7697.1003.00. RB acknowledges financial support from CNPq/Brazil through grant no. 300 354/96-7. ETH was supported by the TMR contract ERBFMBICT960971 of the European Union.
# Geodetic Precession and the Binary Pulsar B1913+16 ## 1. Introduction The binary pulsar PSR B1913+16 exhibits a measurable amount of geodetic precession due to a misalignment of its spin axis and orbital angular momentum (relativistic spin-orbit coupling) by an angle $`\delta `$ (see also Stairs et al. for geodetic precession in PSR B1534+12). The angle between the line of sight and the spin axis, $`\zeta `$, changes according to: $$\mathrm{cos}\zeta (t)=\mathrm{cos}\delta \mathrm{cos}i+\mathrm{sin}\delta \mathrm{sin}i\mathrm{cos}\mathrm{\Omega }_{\mathrm{prec}}(t-T_0),$$ where $`i`$ is the orbital inclination of the binary system. General Relativity (GR) predicts a rate of precession $`\mathrm{\Omega }_{\mathrm{prec}}=1.21`$ deg yr<sup>-1</sup>, leading to a change of the pulse profile and polarization characteristics with time. Previous observations by Weisberg, Romani and Taylor (1989) detected a change in the relative amplitude of the trailing and leading components, but no change in the component separation; no changes in the position-angle swing of the linear polarization were noted either (Cordes, Wasserman & Blaskiewicz 1990). ## 2. Recent Observations Cast Some Light The evolution of the observed radio pulse profile of PSR B1913+16 is determined once the four parameters $`\alpha ,\delta ,\rho ,T_0`$ are specified for a given $`i`$, where $`\alpha `$ is the angle between the magnetic pole and the spin axis, $`\delta `$ is as defined above, $`\rho `$ is the angular radius of the emission cone and $`T_0`$ the epoch of precession. Based on recent data from the Effelsberg 100-m telescope, Kramer (1998) first noticed a change in the component separation $`W`$. This defines a solution region in parameter space that respects GR and the hollow-cone model (HCM). 
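The precession formula above is easy to evaluate. A sketch in which the misalignment angle $`\delta `$ is purely illustrative (it is one of the unknowns being solved for), with the GR rate of 1.21 deg yr<sup>-1</sup>:

```python
import math

def zeta(t_yr, delta_deg, i_deg, T0=0.0, omega_prec=1.21):
    """Angle between the line of sight and the spin axis (degrees)
    as a function of time, from the geodetic precession formula;
    omega_prec is the GR precession rate in deg/yr.  The value of
    delta passed here is a hypothetical illustration."""
    d = math.radians(delta_deg)
    i = math.radians(i_deg)
    phase = math.radians(omega_prec * (t_yr - T0))
    c = (math.cos(d) * math.cos(i)
         + math.sin(d) * math.sin(i) * math.cos(phase))
    return math.degrees(math.acos(c))

# a full precession cycle takes 360 / 1.21 ~ 300 yr; zeta swings
# between |i - delta| and i + delta over half a cycle
z0 = zeta(0.0, 20.0, 47.2)               # = 47.2 - 20 = 27.2 deg
z_half = zeta(180.0 / 1.21, 20.0, 47.2)  # = 47.2 + 20 = 67.2 deg
```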
$$W(t)=2\mathrm{arccos}\left[\frac{\mathrm{cos}\rho -\mathrm{cos}\alpha \mathrm{cos}\zeta (t)}{\mathrm{sin}\alpha \mathrm{sin}\zeta (t)}\right]$$ Effelsberg polarization data have been used in conjunction with the rotating vector model (RVM) to further narrow down the solution region. $$\psi (t)=\mathrm{arctan}\left[\frac{\mathrm{sin}\alpha \mathrm{sin}(\varphi -\varphi _0)}{\mathrm{cos}\alpha \mathrm{sin}\zeta (t)-\mathrm{sin}\alpha \mathrm{cos}\zeta (t)\mathrm{cos}(\varphi -\varphi _0)}\right]+const.$$ The inclination of the binary orbit with respect to the line of sight, $`i`$, is known from timing observations modulo the ambiguity $`i\to 180^{}-i`$. The possibilities are: Case A: $`i=47.2^{}`$, Case B: $`i=132.8^{}`$ ## 3. The Solution Today The set of plausible solutions has been narrowed considerably, as shown in Fig 1. The inclination of the binary orbit is most likely that of Case B rather than Case A (Wex, Kalogera & Kramer 2000). More precise use of the polarization data, plus a molding of the two models (HCM & RVM) to specifically suit PSR B1913+16 (see also Weisberg & Taylor in these proceedings), could further limit the solutions and enable a prediction of the future of this system. ## References Cordes, J.M., Wasserman, I., & Blaskiewicz, M. 1990, ApJ, 349, 546 Damour, T., & Ruffini, R. 1974, Acad. Sci. Paris, 279, série A, 971 Damour, T., & Taylor, J.H. 1992, Phys. Rev. D, 45, 1840 Hulse, R.A., & Taylor, J.H. 1975, ApJ, 195, L51 Kramer, M. 1998, ApJ, 509, 856 Taylor, J.H. 1999, Proceedings of the XXXIV’th Rencontres de Moriond, to be published Weisberg, J.M., Romani, R., & Taylor, J.H. 1989, ApJ, 347, 1029 Wex, N., Kalogera, V., & Kramer, M. 2000, ApJ, 528, 401
# The 𝐿_𝑥-𝑇, 𝐿_𝑥-𝜎 and 𝜎-𝑇 relations for groups and clusters of galaxies ## 1 Introduction It has been well established that there exists a strong correlation between the X-ray determined bolometric luminosity $`L_x`$, the X-ray temperature $`T`$ of the intracluster gas and the optically measured velocity dispersion $`\sigma `$ of the cluster galaxies (Wu, Xue & Fang 1999; hereafter Paper I, and references therein). A precise determination of these correlations is important not only for the study of the dynamical properties and evolution of clusters themselves but also for distinguishing between different cosmological models, including the prevailing estimate of the cosmic density parameter $`\mathrm{\Omega }_M`$ through a combination of the baryon fraction of clusters and Big Bang nucleosynthesis (White et al. 1993; David, Jones & Forman 1995). For instance, if the observed $`L_x`$-$`T`$ relation follows $`L_x\propto T^2`$ (e.g. Quintana & Melnick 1982; Markevitch et al. 1998), we would arrive at the conclusion that the X-ray emission is primarily due to purely gravitational heating and thermal bremsstrahlung, and the baryon fraction $`f_b`$ of clusters can be representative of the Universe, i.e., $`f_b`$ provides a robust estimate of $`\mathrm{\Omega }_M`$ (see Paper I). However, if the observed $`L_x`$-$`T`$ relation appears to be $`L_x\propto T^3`$ (e.g. White, Jones & Forman 1997), other non-gravitational heating mechanisms must be invoked in order to give the gas sufficient excess energy (e.g. Ponman, Cannon & Navarro 1999; Wu, Fabian & Nulsen 1999; Loewenstein 1999), unless the baryon fraction of clusters is required to vary with temperature (David et al. 1993). The latter challenges the standard model of structure formation. Another critical issue is whether the $`L_x`$-$`T`$, $`L_x`$-$`\sigma `$ and $`\sigma `$-$`T`$ relations for clusters of galaxies can naturally extend to group scales. 
In the hierarchical model of structure formation, groups are believed to be the scaled-down version of clusters, and the underlying gravitational potentials of groups and clusters are similar when scaled to their virial radii (e.g. Navarro, Frenk & White 1995). It is expected that groups and clusters should exhibit similar correlations between $`L_x`$, $`T`$ and $`\sigma `$, provided that gas and galaxies are in hydrostatic equilibrium with the underlying gravitational potential of groups and their X-ray emission is produced by thermal bremsstrahlung. Indeed, in the present Universe a majority of the baryons may be bound in the gravitational wells of groups (Fukugita, Hogan & Peebles 1998). All groups may contain hot X-ray emitting gas with $`kT`$ around or less than $`1`$ keV (e.g. Price et al. 1991; Ponman et al. 1996), and most of them should be detectable with future sensitive observations. Without knowledge of the dynamical properties of groups, characterized by the $`L_x`$-$`T`$, $`L_x`$-$`\sigma `$ and $`\sigma `$-$`T`$ relations, it could be misleading if the presently estimated baryon fraction of groups is immediately used as a cosmological indicator. Meanwhile, any difference in these correlations between clusters and groups will be helpful for our understanding of the formation and evolution of structures on scales of $`1`$–$`10`$ Mpc and of the significance of non-gravitational heating mechanisms. The pioneering work of constructing the $`L_x`$-$`\sigma `$ relation for groups was carried out by Dell’Antonio, Geller & Fabricant (1994). They found a significant flattening in the relation for groups with $`\sigma `$ below $`300`$ km s<sup>-1</sup>, as compared with the same relation for rich clusters. An extensive study of the $`L_x`$-$`T`$, $`L_x`$-$`\sigma `$ and $`\sigma `$-$`T`$ relations for groups was soon made by Ponman et al. (1996), based on 22 Hickson’s compact groups (HCG) whose X-ray emissions are detected. 
The most remarkable result is the steepening ($`L_x\propto T^{8.2}`$) of the $`L_x`$-$`T`$ relation, in contrast with the X-ray properties of clusters ($`L_x\propto T^{2.5}`$–$`T^3`$), while the significant flattening in the $`L_x`$-$`\sigma `$ relations for groups claimed by Dell’Antonio et al. (1994) was only marginally detected. Using the RASSCALS group catalog, Mahdavi et al. (1997, 2000) obtained a much shallower $`L_x`$-$`\sigma `$ relation ($`L_x\propto \sigma ^{0.37\pm 0.3}`$) for groups with $`\sigma <340`$ km s<sup>-1</sup> than the above results. Interestingly, Mulchaey & Zabludoff (1998) studied 12 poor ROSAT groups but arrived at an opposite conclusion that the X-ray component in groups follows the same $`L_x`$-$`T`$-$`\sigma `$ relations as those in clusters. Namely, groups are indeed low-mass versions of clusters. Theoretically, several investigations have been made on the possible reasons why the $`L_x`$-$`T`$ relation flattens with the increase of scale and temperature (e.g. Cavaliere, Menzi & Tozzi 1997, 1999; Valageas & Schaeffer 1999). In particular, if the $`L_x`$-$`T`$ and $`L_x`$-$`\sigma `$ relations show a significant departure from the expectations of $`L_x\propto T^2\propto \sigma ^4`$ within the framework of thermal bremsstrahlung and hydrostatic equilibrium, the previous estimate of the amount of baryonic matter in groups and its application to the determination of cosmic density parameter would become questionable. Consequently, one may need to study whether the observed X-ray emission of groups has partially arisen from the individual galaxies ( Dell’Antonio et al. 1994) or other non-gravitational mechanisms such as those suggested by the preheated ICM model (e.g. Ponman et al. 1999) and the shock model (Cavaliere et al. 1997). 
In this paper, we would like to update and then compare the $`L_x`$-$`T`$, $`L_x`$-$`\sigma `$ and $`\sigma `$-$`T`$ relations for groups and clusters, using all the available X-ray/optical data in the literature, especially the new measurements of poor clusters and groups. We wish to achieve a better statistical significance, which may essentially allow us to closely examine the question as to whether the similarity of these relations breaks down on group scales. Throughout the paper we assume a Hubble constant of $`H_0=50`$ km s<sup>-1</sup> Mpc<sup>-1</sup> and a flat cosmological model with $`\mathrm{\Omega }_0=1`$. ## 2 Sample We follow the same strategy as in Paper I to select groups and clusters of galaxies from the literature: we include all the groups and clusters for which at least two parameters among the X-ray bolometric luminosity ($`L_x`$) and temperature ($`T`$), and the optical velocity dispersion of galaxies ($`\sigma `$) are observationally determined. Essentially, we use the cluster catalog in Paper I, which contains a total of 256 clusters. We first remove MKW9 from the list and put it into our group catalog. We then add another 19 clusters whose temperatures or velocity dispersions have become available since then. This is mainly due to the recent temperature measurements by White (1999). Our final cluster sample consists of 274 clusters. Unlike the optimistic situation for X-ray clusters, the X-ray emission has remained undetectable for most groups because of their relatively low X-ray temperatures. By extensively searching the literature, we find 66 groups that meet our criteria, including 23 HCGs (Table 1). As compared with the sample used in the previous similar study by Ponman et al. (1996), the number of groups has almost tripled. 
We convert the observed X-ray luminosities to bolometric luminosities in the rest frame of the groups and clusters according to an optically thin and isothermal plasma emission model with 30 percent solar abundance, in which we assume $`T=6`$ and $`1`$ keV, respectively, for the 99 clusters and 26 groups whose temperatures remain unknown spectroscopically. ## 3 Results Essentially, two linear regression methods are used in the fit of the observed data ($`L`$, $`T`$ and $`\sigma `$) to a power-law relation: the standard ordinary least-squares (OLS) method and the orthogonal distance regression (ODR) technique (Feigelson & Babu 1992). The latter is preferred because it can account for data scatter around both variables, which is particularly suitable when the two variables contain significant measurement uncertainties. We perform both the OLS and ODR fits to the data sets of our group and cluster samples, respectively. In order to maximally use the published data, especially for groups, in the ODR fitting, we assign the average values of the measurement uncertainties in $`L_x`$, $`T`$ and $`\sigma `$ to those data whose error bars are not available. Specifically, the average relative errors ($`\mathrm{\Delta }L_x/L_x`$, $`\mathrm{\Delta }T/T`$, $`\mathrm{\Delta }\sigma /\sigma `$) are found to be ($`14.7\%`$, $`22.4\%`$, $`16.5\%`$) and ($`24.7\%`$, $`15.5\%`$, $`28.4\%`$) for clusters and groups, respectively. Finally, Monte-Carlo simulations are performed to estimate the standard deviations in the fitted relations. ### 3.1 $`L_x`$-$`T`$ relation The observed and our best-fit $`L_x`$-$`T`$ relations are shown in Fig.1 and also summarized in Table 2. It appears that the resultant $`L_x`$-$`T`$ relation for (184) clusters remains nearly the same as that given in Paper I for 142 clusters, $`L_x\propto T^{2.79\pm 0.08}`$. 
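A power-law fit with errors on both axes, of the kind described above, can be sketched with `scipy.odr` on a synthetic sample; the data below are simulated (slope 2.79, measurement errors mirroring the average relative errors quoted for clusters, converted to dex), not the paper's catalog:

```python
import numpy as np
from scipy.odr import ODR, Model, RealData

rng = np.random.default_rng(42)

# synthetic 'clusters': log Lx = 2.79 log T + 44, plus noise on
# BOTH axes (22.4% in T and 14.7% in Lx; dex = 0.434 * relative)
n = 100
logT = rng.uniform(0.2, 1.0, n)
sx = np.full(n, 0.434 * 0.224)
sy = np.full(n, 0.434 * 0.147)
x_obs = logT + rng.normal(0.0, sx)
y_obs = 2.79 * logT + 44.0 + rng.normal(0.0, sy)

linear = Model(lambda beta, x: beta[0] * x + beta[1])
fit = ODR(RealData(x_obs, y_obs, sx=sx, sy=sy), linear,
          beta0=[2.0, 44.0]).run()
slope, intercept = fit.beta
```

Unlike OLS on $`y`$ alone, the ODR slope is not biased low by the scatter in the horizontal variable.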
However, our best fit $`L_x`$-$`T`$ relation for 38 groups becomes somewhat flatter: the power-law index in the ODR fitting drops to $`5.57\pm 1.79`$, in contrast to the value of $`8.2\pm 2.7`$ reported by Ponman et al. (1996) based on 16 HCGs. Nevertheless, at the $`3\sigma `$ significance level we have confirmed their claim of a similarity breaking of the $`L_x`$-$`T`$ relation at the low temperature end. ### 3.2 $`L_x`$-$`\sigma `$ relation We display in Fig.2 and Table 3 the X-ray luminosity-velocity dispersion relations for 59 groups and 197 clusters. Again, the $`L_x`$-$`\sigma `$ relation for clusters agrees with our previous result based on 156 clusters (Paper I): $`L_x\propto \sigma ^{5.3}`$, while the best fit $`L_x`$-$`\sigma `$ relation for our group sample reads $`L_x\propto \sigma ^{2.35\pm 0.21}`$, which is significantly flatter than both the above relation for clusters and the Ponman et al. (1996) result for 22 HCGs, $`L_x\propto \sigma ^{4.9\pm 2.1}`$. Yet, our $`L_x`$-$`\sigma `$ relation for groups has not reached the shallow slopes (0.37 – 1.56) reported by Mahdavi et al. (1997, 2000). A glimpse of Fig.2 reveals the following two features: (1) the data of groups and clusters are joined together through the poor cluster population, and there is no apparent gap in between; (2) the scatter of $`\sigma `$ around the best fit $`L_x`$-$`\sigma `$ relation becomes relatively large on group scales. ### 3.3 $`\sigma `$-$`T`$ relation The best fit $`\sigma `$-$`T`$ relation for clusters, using 109 clusters, remains almost unchanged (Fig.3 and Table 4) as compared with the previous studies by Wu, Fang & Xu (1998; and references therein) and Paper I: $`\sigma \propto T^{0.65\pm 0.03}`$. Meanwhile, we have found roughly the same relation for our sample of 36 groups: $`\sigma \propto T^{0.64\pm 0.08}`$, which is also consistent with the previous results within uncertainties (Ponman et al. 1996; Mulchaey & Zabludoff 1998). 
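The $`\sigma `$-$`T`$ relation is closely connected to the ratio of specific energy in galaxies to that in gas, $`\beta _{spec}=\mu m_pa\sigma ^2/kT`$ with the mass term $`\mu m_p`$ the mean particle mass; a quick numerical sketch, where $`\mu =0.6`$ is an assumed mean molecular weight:

```python
import math

def beta_spec(sigma_kms, kT_keV, mu=0.6):
    """beta_spec = mu * m_p * sigma^2 / kT, the ratio of specific
    (kinetic) energy in the galaxies to the thermal energy of the
    gas; mu = 0.6 is an assumption of this sketch."""
    m_p = 1.6726e-27   # proton mass, kg
    keV = 1.6022e-16   # 1 keV in joules
    return mu * m_p * (sigma_kms * 1e3) ** 2 / (kT_keV * keV)

# e.g. a kT = 1 keV group with sigma = 250 km/s
b_group = beta_spec(250.0, 1.0)     # ~0.39
# and a kT = 6 keV cluster with sigma = 1000 km/s
b_cluster = beta_spec(1000.0, 6.0)  # ~1.04
```

Values of order unity indicate rough energy equipartition between galaxies and gas.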
For clusters, this relation alone indicates that the intracluster gas may not be in isothermal and hydrostatic equilibrium with the underlying gravitational potential of clusters. However, the same conclusion is not strictly applicable to groups if the large uncertainty in the presently fitted $`\sigma `$-$`T`$ relation is taken into account. Additionally, the ratios of specific energy in galaxies to that in gas for the 36 groups exhibit rather large scatter, ranging from 0.3 to 3.5, with an average value of $`\beta _{spec}=0.85`$. ### 3.4 Co-consistency test The employment of the ODR fitting method essentially ensures that the best fit relations between $`L_x`$, $`T`$ and $`\sigma `$ are self-consistent (Paper I). We now examine the co-consistency between these relations in the sense that the three relations are not independent. Our strategy is as follows: we first derive the correlation between $`\sigma `$ and $`T`$ from the best fit $`L_x`$-$`T`$ and $`L_x`$-$`\sigma `$ relations listed in Table 2 and Table 3. We then compare this ‘derived’ $`\sigma `$-$`T`$ relation with the one obtained independently from our ODR fitting shown in Table 4. Our fitted relations should be acceptable if a good agreement between the derived and fitted $`\sigma `$-$`T`$ relations is reached. Otherwise, at least one of the three fitted relations will be questionable. For the cluster sample, combining the $`L_x`$-$`T`$ and $`L_x`$-$`\sigma `$ relations in Table 2 and Table 3 yields $`\sigma \propto T^{0.53\pm 0.04}`$. Therefore, at the $`2\sigma `$ significance level the derived $`\sigma `$-$`T`$ relation is consistent with the directly fitted one, $`\sigma \propto T^{0.65\pm 0.03}`$. As for the group sample, the derived $`\sigma `$-$`T`$ relation reads $`\sigma \propto T^{2.37\pm 0.90}`$, which compares with the best fit one, $`\sigma \propto T^{0.64\pm 0.08}`$. Regardless of the apparent disagreement between the mean slopes, the large $`68\%`$ confidence interval makes the two relations show no difference within $`2\sigma `$. 
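The slope arithmetic of this co-consistency test is simple to reproduce: eliminating $`L_x`$ between $`L_x\propto T^a`$ and $`L_x\propto \sigma ^b`$ gives $`\sigma \propto T^{a/b}`$. A sketch with simple quadrature error propagation (the error bar assumed here for the cluster $`L_x`$-$`\sigma `$ slope is illustrative, and the quoted uncertainties in the text may instead come from the Monte-Carlo simulations):

```python
import math

def derived_sigma_T_slope(a, da, b, db):
    """If Lx ~ T^a and Lx ~ sigma^b, eliminating Lx gives
    sigma ~ T^(a/b); the error is propagated in quadrature."""
    s = a / b
    return s, s * math.hypot(da / a, db / b)

# clusters: Lx ~ T^(2.79+/-0.08), Lx ~ sigma^5.3 -> sigma ~ T^0.53
s_cl, _ = derived_sigma_T_slope(2.79, 0.08, 5.30, 0.20)
# groups: Lx ~ T^(5.57+/-1.79), Lx ~ sigma^(2.35+/-0.21)
s_gr, ds_gr = derived_sigma_T_slope(5.57, 1.79, 2.35, 0.21)
```

The derived mean slopes, 0.53 for clusters and 2.37 for groups, match the values quoted above.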
As a result, the three fitted relations for groups in Table 2 – Table 4 are still consistent with each other when the $`2\sigma `$ uncertainties are taken into account. Indeed, a visual examination of Fig.1–Fig.3 reveals that the data points of groups show very large dispersions, especially on the $`L_x`$-$`\sigma `$ plane. Either the sparse data points and/or the inclusion of a few unusual groups in our fitting (e.g. HCG94, S49-142, etc.) may account for the marginal co-consistency between the $`L_x`$-$`T`$, $`L_x`$-$`\sigma `$ and $`\sigma `$-$`T`$ relations for groups. In Table 2 – Table 3, we have also given the significance level of the resulting correlation coefficient for each data set, based on Kendall’s $`\tau `$. It appears that the probability of erroneously rejecting the null hypothesis of no correlation between $`L_x`$, $`T`$ and $`\sigma `$ is $`P<0.2\%`$ for all the cases. Therefore, it is unlikely that the correlations we have found for groups and clusters are an artifact of the small samples and/or a statistical fluke. ## 4 Discussion and conclusions ### 4.1 Global properties Groups and clusters constitute large and nearly virialized dynamical systems in the Universe. While the distribution of dark matter in these systems may exhibit a regularity such as the universal density profile, the hot intragroup/intracluster gas can be affected by non-gravitational heating mechanisms (e.g. star formation), especially for poor systems like groups of galaxies, where even individual galaxies may make a non-negligible contribution to the X-ray emission (Dell’Antonio et al. 1994; Mulchaey & Zabludoff 1998). Therefore, the standard scenario that groups and clusters are similar dynamical systems at different mass ranges is only applicable to the distribution and evolution of the dark matter particles. 
Whether or not the hot gas can be used as a good tracer of the underlying gravitational potential wells depends on how significant the X-ray emission of individual galaxies and the non-gravitational heating are, which in turn depends on the mass of the dynamical system. Because groups of galaxies connect individual galaxies to clusters of galaxies, one may expect to detect the transition between “galaxy-dominated” and “intracluster-medium-dominated” properties occurring on group scales (e.g. Dell’Antonio et al. 1994). Such a scenario has essentially been justified by our analysis of the correlations between X-ray luminosity, temperature and velocity dispersion for groups and clusters, although the current data, especially for groups, still have rather large uncertainties. It is remarkable that we have detected the similarity breaking in the $`L_x`$-$`T`$ and $`L_x`$-$`\sigma `$ relations between massive systems (clusters) and low mass ones (groups), indicating that the overall dominant X-ray emission and heating mechanisms are quite different in these two systems. Nevertheless, for the $`\sigma `$-$`T`$ relation we have not found an apparent discrepancy between the two systems. Our result essentially agrees with the previous findings of Dell’Antonio et al. (1994), Ponman et al. (1996) and Mahdavi et al. (1997, 2000).

### 4.2 Uncertainties

Major uncertainties in the presently fitted $`L_x`$-$`T`$, $`L_x`$-$`\sigma `$ and $`\sigma `$-$`T`$ relations for groups and clusters come from the sparse (X-ray) data sets of poor clusters and groups. Although an observational determination of these relations does not in principle require the completeness of the sample, the reliability and significance of our fitting can be seriously affected by small number statistics. The marginal co-consistency between the three relations for groups may be partially due to the small group sample.
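The sensitivity of the fitted slopes to the regression method, which figures prominently in the discussion of uncertainties, can be illustrated on synthetic data: when the horizontal variable carries measurement errors, ordinary least squares (OLS) and orthogonal distance regression (ODR) generally recover different slopes. A sketch using `scipy.odr`; all numbers here are synthetic and purely illustrative:

```python
import numpy as np
from scipy import odr

rng = np.random.default_rng(0)

# Synthetic log L_x -- log T data with scatter in BOTH coordinates,
# mimicking measurement errors in temperature as well as luminosity.
true_slope, true_norm = 3.0, 44.0
logT = rng.uniform(-0.3, 1.0, 40)
logT_obs = logT + rng.normal(0.0, 0.05, logT.size)
logL_obs = true_norm + true_slope * logT + rng.normal(0.0, 0.10, logT.size)

# ODR minimizes orthogonal residuals, weighting both error components.
linear = odr.Model(lambda beta, x: beta[0] * x + beta[1])
data = odr.RealData(logT_obs, logL_obs, sx=0.05, sy=0.10)
fit = odr.ODR(data, linear, beta0=[1.0, 40.0]).run()
print("ODR slope:", fit.beta[0])

# OLS ignores the x-errors entirely and tends to be biased low
# when the horizontal scatter is non-negligible.
ols_slope = np.polyfit(logT_obs, logL_obs, 1)[0]
print("OLS slope:", ols_slope)
```

With larger `sx` the OLS slope is pulled progressively farther below the true value, while the ODR estimate remains close to it.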
It is generally believed that poor clusters and groups should all contain hot X-ray emitting gas (e.g. Price et al. 1991; Ponman et al. 1996). However, most of them can hardly be detected by current X-ray observations because of their low temperatures, $`T\sim 0.1`$–$`1`$ keV. Therefore, our group sample suffers from a selection bias, in which the present X-ray data have been acquired by different authors with different criteria. For instance, about 1/3 of our group sample are HCGs, and the steepening of the $`L_x`$-$`T`$ relation for groups may have partially arisen from the contribution of these HCG populations. Our statistical results are sensitive to the linear fitting methods, especially for the $`L_x`$-$`T`$ and $`L_x`$-$`\sigma `$ relations. This arises because the observed quantities, $`L_x`$, $`T`$ and $`\sigma `$, all have measurement uncertainties, while the OLS method ignores the scatter in the horizontal variable (e.g. $`T`$ or $`\sigma `$). In this respect, the ODR fitting technique is strongly recommended. However, not all the data points have information about their measurement uncertainties, and some error bars are difficult to evaluate. In such cases, we have simplified the problem and employed the average values instead, in order to make maximal use of the available data points, which may have introduced further uncertainties into the ODR fitted relations. Note that the measurement uncertainties in $`L_x`$ are relatively smaller than those in $`T`$ and $`\sigma `$ (see Fig.1 and Fig.2). This explains the significant difference in the resultant $`L_x`$-$`T`$ and $`L_x`$-$`\sigma `$ relations between the OLS and ODR methods.

### 4.3 Physical implications

Under the assumption that both gas and galaxies are in hydrostatic equilibrium with the underlying gravitational potential of the group/cluster, we would expect the total X-ray luminosity via thermal bremsstrahlung to scale as (e.g.
Paper I)

$$L_x\propto T^{2.5}f_b^2r_c^{-1}S_{gas},$$ (1)

and

$$L_x\propto \sigma ^4T^{1/2}f_b^2r_c^{-1}S_{gal},$$ (2)

where $`f_b`$ is the (gas) baryon fraction, $`r_c`$ is the core radius and $`S`$ is the so-called dimensionless structure factor, which depends strongly on the power index of the gas/galaxy profile (e.g. the conventional $`\beta `$ model or the King model) but only weakly on the core radius. Using the emission weighted temperature instead of $`T`$ in eq.(1) only leads to a modification of $`S_{gas}`$. Additionally, the velocity dispersion of galaxies and the temperature of the hot X-ray emitting gas satisfy

$$\sigma \propto T^{1/2}.$$ (3)

From our fitted $`\sigma `$-$`T`$ relations alone, groups are still consistent with $`\sigma \propto T^{1/2}`$ within the $`2\sigma `$ uncertainties, while a significant deviation from what is expected under the scenario of isothermal and hydrostatic equilibrium is found for clusters. The latter is consistent with a number of similar studies on the issue (e.g. White, Jones & Forman 1997; Wu et al. 1998; 1999). Yet, it is unlikely that the intragroup gas is in a state of perfect isothermal and hydrostatic equilibrium with the underlying gravitational potential, even though this is allowed by the $`\sigma `$-$`T`$ relation within its $`95\%`$ confidence interval. This point can be further demonstrated by the apparent disagreement between the theoretical prediction of eq.(1), $`L_x\propto T^2`$, and our fitted $`L_x`$-$`T`$ relation for groups in Table 2, $`L_x\propto T^{5.57\pm 1.79}`$, unless the baryon fraction is assumed to vary with gas temperature. Meanwhile, the best fit $`L_x`$-$`\sigma `$ relation for groups, $`L_x\propto \sigma ^{2.35\pm 0.21}`$, differs from the theoretical expectation of eq.(2) in conjunction with eq.(3): $`L_x\propto \sigma ^4`$.
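The exponent bookkeeping behind these statements can be checked mechanically. The sketch below assumes, in addition to eqs. (1)–(3), a virial-like scaling $`r_c\propto T^{1/2}`$ (our assumption, not stated explicitly in the text) to reduce eqs. (1) and (2) to the quoted pure power laws $`L_x\propto T^2`$ and $`L_x\propto \sigma ^4`$:

```python
from fractions import Fraction as F

# Power-law exponents from eqs. (1)-(3).
T_exp_eq1, rc_exp = F(5, 2), F(-1)        # eq.(1): L_x ~ T^(5/2) r_c^(-1)
sigma_exp_eq2, T_exp_eq2 = F(4), F(1, 2)  # eq.(2): L_x ~ sigma^4 T^(1/2) r_c^(-1)
T_per_sigma = F(2)                        # eq.(3): T ~ sigma^2
rc_per_T = F(1, 2)                        # assumed: r_c ~ T^(1/2)

# eq.(1) reduced to a pure power of T:
print("L_x ~ T^", T_exp_eq1 + rc_exp * rc_per_T)           # -> 2
# eqs.(2)+(3) reduced to a pure power of sigma:
print("L_x ~ sigma^", sigma_exp_eq2
      + (T_exp_eq2 + rc_exp * rc_per_T) * T_per_sigma)     # -> 4
```

Both quoted predictions then follow exactly, which suggests that some such core-radius scaling is implicit in the comparison made in the text.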
Taking these results as a whole, we feel that the currently available optical/X-ray data have already provided convincing evidence for either the failure of the isothermal and hydrostatic equilibrium hypothesis for intragroup gas or a significant contribution of X-ray emission from individual group galaxies. Therefore, any cosmological applications of these fitted relations that do not consider these effects could be misleading. The physical implications of the $`L_x`$-$`T`$, $`L_x`$-$`\sigma `$ and $`\sigma `$-$`T`$ relations for clusters have been extensively studied in Paper I. For the group sample, there exists an apparent difference between the theoretically predicted $`L_x`$-$`T`$ and $`L_x`$-$`\sigma `$ relations (eqs. (1) and (2)) and the observationally determined ones. Dell’Antonio et al. (1994) attributed the flattening of the $`L_x`$-$`\sigma `$ relation to the additional X-ray emission of individual group galaxies, while Ponman et al. (1996) interpreted the steepening of the $`L_x`$-$`T`$ relation as the result of wind injection from galaxies. The recent detection of X-ray wakes in the poor cluster A160 may give strong support to these scenarios, i.e., a large fraction of the X-ray emission in groups and poor clusters can be associated with individual galaxies (Drake et al. 2000). It is thus worth exploring whether this scenario can quantitatively account for the reported $`L_x`$-$`T`$-$`\sigma `$ relations for groups. Alternatively, other mechanisms such as preheating by supernovae and substructure merging may also contribute extra heating to the intragroup medium. On the other hand, the prevailing determination of the baryon fraction of groups via hydrostatic equilibrium and thermal bremsstrahlung could be seriously contaminated by the X-ray emission of individual group galaxies and other non-gravitational heating. This might account for the relatively large variations of the presently derived baryon fractions among different groups.
It is unlikely that a robust estimate of the baryon fractions of groups and poor clusters within the framework of conventional dynamical models, and of their cosmic evolution, can be achieved without a better understanding of the various heating mechanisms. Indeed, we cannot exclude the possibility that some puzzles about the properties of the baryon fractions of groups and clusters are due to the contamination by the X-ray emission of individual galaxies and the non-gravitational heating. For instance, the standard model predicts an increase in the baryon fraction with radius, and a universal value of $`f_b`$ at large radii cannot be reached unless the X-ray temperature is required to rise in some clusters (Wu & Xue 2000 and references therein). The presence of such a puzzle has essentially prevented the baryon fractions of groups and clusters from being used in cosmological applications. Inclusion of the contributions of other emission/heating sources in the estimate of the total gravitational masses of groups and clusters may lead to a re-arrangement of the radial distribution of the baryon fraction. Further work will thus be needed to explore whether these effects can resolve the above puzzle.

We thank the referee, Andisheh Mahdavi, for many valuable suggestions and comments. This work was supported by the National Science Foundation of China, under Grant No. 19725311.
# Modifications of the hydrogen bond network of liquid water in a cylindrical $`\mathrm{SiO}_2`$ pore

## 1 Introduction

The increasing interest in the study of structural and dynamical properties of confined water in different environments arises from the close connection with many relevant technological and biophysical problems . It is well known that the properties of liquid water at ambient conditions are mainly determined by the microscopic local tetrahedral order . Understanding how the connected random hydrogen bond network of bulk water is modified when water is confined in small cavities inside a substrate material is very important, e.g., for studies of the stability and enzymatic activity of proteins, oil recovery, or heterogeneous catalysis, where water-substrate interactions play a fundamental role. The modifications of the short range order in the liquid depend on the nature of the water-substrate interaction, i.e. hydrophilic or hydrophobic, as well as on its spatial range and on the geometry of the substrate. Many experimental studies of structural and dynamical properties of water in confined regimes have been performed . Water-in-Vycor is one of the most extensively studied systems. In particular, it has been found in a recent neutron scattering experiment that the hydrogen bond network of water in Vycor is strongly distorted with respect to bulk water. In order to shed light on this behavior, which was not evident in previous experiments , a more extensive study of the microscopic details of the molecular arrangement is needed. Computer simulation methods allow a more detailed exploration of the molecular arrangements inside the system than most experimental techniques. They can therefore be considered as an ideal microscopic tool to study the properties of water in confined geometries, subject, of course, to the usual limitations of the method and the interaction potential functions employed.
In this paper we present results obtained by a computer simulation study of water confined in a model for a Vycor glass pore. The simulation cell is obtained by carving a single cylindrical cavity in a modeled silica glass, thus reproducing the symmetry, the average dimensions, and the structure of the rough surface of Vycor pores at the atomic level. In the next section we will briefly review the method for building the simulation cell and the implementation of the molecular dynamics. In the following sections we summarize some of the key results and compare them with recent experiments. The discussion is more detailed than in earlier work by our groups .

## 2 Model for the Vycor pore and molecular dynamics calculations

A glass of $`\mathrm{SiO}_2`$ is obtained by computer simulation, following the method proposed by Brodka and Zerda , using potential parameters given by Feuston and Garofalini . The system is simulated as a simple ionic model for $`\mathrm{Si}^{4+}`$ and $`\mathrm{O}^{2-}`$ ions . Inside a cell of vitreous SiO<sub>2</sub> with a box length of approximately 71 Å, a cylindrical cavity of 40 Å diameter, corresponding to the average pore size of the Vycor glass of interest, is created. The details for generating the cavity with a rough surface and microscopic structure representative for pores in Vycor glass are given in previous publications . On the surface of the pore two types of oxygen atoms can be distinguished, depending on the number of silicon atoms to which they are connected. Nonbridging oxygens (NBO’s) are bonded to only one silicon atom and are saturated with a hydrogen atom. The number density of the hydroxyl groups in our samples is 2.5 $`\mathrm{nm}^{-2}`$, in good agreement with the experimentally determined value . The second species of surface oxygen atoms are bridging oxygens (BO’s), which are connected to two silicon atoms. For the water-water interaction we assume the SPC/E model .
The atoms of the substrate are allowed to interact with the water sites by means of the Coulomb potential, where different fractional charges are assigned to BO’s ($`-0.629|e|`$), NBO’s ($`-0.533|e|`$), silicon atoms ($`+1.283|e|`$) and surface hydrogen atoms ($`+0.206|e|`$). In addition, both BO’s and NBO’s interact with the oxygen sites of water via a Lennard-Jones potential, whose parameters are $`\sigma =2.70`$ Å and $`\sigma =3.00`$ Å for BO and NBO atoms, respectively, and $`\epsilon =230`$ K in both cases . During the simulation the Vycor atoms are kept fixed, and periodic boundary conditions are applied along the $`z`$-axis. The shifted force method is used with a cut-off at 9 Å. As discussed in a previous paper , the use of a larger cut-off or Ewald summation does not change the nature of the obtained results. The temperature is controlled by means of a Berendsen thermostat with relaxation times between 0.5 and 1 ps. This procedure simplifies running the simulations over long times but has negligible effects on the calculated properties. All calculations have been performed at room temperature for several hundred picoseconds, after an equilibration run lasting more than 50 ps. More details can be found in Ref. .

## 3 Structural properties of the confined water

The molecular dynamics calculations have been performed for different numbers of water molecules, corresponding to different levels of hydration. The density profiles of the oxygen and hydrogen atoms of water for 96% (2600 water molecules) and 19% hydration (500 water molecules) are shown in Fig. 1. Density profiles and further results for several intermediate degrees of hydration have been published elsewhere . We observe already at low hydration the presence of a layer of water molecules wetting the substrate surface; at nearly full hydration two layers of water with higher than average density are evident. The same features are present in the density profiles for the hydrogen atoms.
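As an illustration of the substrate–water energetics described in §2, the sketch below evaluates the 12-6 Lennard-Jones term (with the well depth quoted in kelvin, as in the text) plus the Coulomb term for a single oxygen pair. The SPC/E oxygen charge of $`-0.8476|e|`$ and the unit-conversion constants are standard values; the negative sign on the BO charge is our reading of the fractional charges above, and the 3 Å separation is arbitrary:

```python
K_B = 0.0083144621   # Boltzmann constant, kJ/(mol*K)
COULOMB = 1389.35    # Coulomb prefactor e^2/(4*pi*eps0), kJ*Angstrom/(mol*e^2)

def lj(r, sigma, eps_kelvin):
    """12-6 Lennard-Jones energy (kJ/mol); well depth given in kelvin."""
    eps = eps_kelvin * K_B
    x = (sigma / r) ** 6
    return 4.0 * eps * (x * x - x)

def coulomb(r, q1, q2):
    """Coulomb energy (kJ/mol) between fractional charges in units of |e|."""
    return COULOMB * q1 * q2 / r

# Water oxygen (SPC/E charge -0.8476|e|, an assumed standard value) placed
# 3 Angstrom from a bridging oxygen, using the BO parameters from the text
# (sigma = 2.70 A, eps = 230 K; BO charge taken as -0.629|e|).
r = 3.0
print(lj(r, 2.70, 230.0) + coulomb(r, -0.8476, -0.629))
```

The like-signed oxygen charges make the Coulomb term repulsive here; in the full model the attraction to the surface comes from the water hydrogens and the silicon atoms.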
A few molecules are able to penetrate into the glass and are trapped inside small pockets close to the surface, which are a byproduct of the ’sample preparation process’. The molecules in these pockets can easily be identified in the snapshot along the pore axis in Fig. 2. At low hydration the surface is not covered completely; adsorption occurs preferentially in several regions, in which small clusters are formed. We have made no attempt to identify specific surface structures that favor cluster formation. In the comparison with experiment the definition of the hydration level deserves some consideration. Full hydration in the experimental sample preparation corresponds to an estimated water density which is $`11\%`$ less than the density of water at ambient conditions . If the simulated cavity were a perfect cylinder of radius 20 Å, this would translate into $`N_W=2670`$ molecules. Due to the roughness of the glass surface and the existence of the small pockets inside the glass close to the surface, which trap several water molecules over the entire simulation length, the exact number of molecules can only be determined by a grand canonical simulation scheme. For the investigated number $`N_W=2600`$ we observe indeed that the density profile does not reach the bulk value in the center of the pore. If one assumes that full hydration corresponds to a bulk-like region in the center of the pore, we estimate that an additional 100 molecules would correspond to full hydration, not much different from the experimental value. Thus, $`N_W=2600`$ corresponds to a hydration level of approximately 96%. We calculate the site-site radial distribution functions for $`N_W=2600`$, i.e., at approximately the experimental density. The calculation of these functions is not straightforward when dealing with a non-periodic system, since excluded volume effects must be carefully taken into account .
Unlike a bulk liquid, a *uniform confined fluid*, i.e., a collection of non-interacting particles in a confining volume with periodic boundary conditions only along the $`z`$-axis, does not yield a *uniform* radial distribution function $`g=1`$. Rather, the form of the radial distribution function depends on the geometry of the confining system. If we disregard the confinement effects, the raw site-site pair correlation functions can be calculated in the usual way from the simulation by taking the average number $`n_{\alpha \beta }^{(2)}(r)`$ of sites of type $`\beta `$ lying in a spherical shell $`\mathrm{\Delta }v(r)`$ at distance $`r`$ from a site of type $`\alpha `$ and normalizing it to the average number of atoms in the same spherical shell in an ideal gas at the same density

$$g_{\alpha \beta }^{(MD)}\left(r\right)=\frac{n_{\alpha \beta }^{(2)}(r)}{\frac{N_\beta }{V_p}\mathrm{\Delta }v(r)},$$ (1)

where $`N_\beta `$ is the total number of $`\beta `$ sites, and $`V_p`$ is the volume of the cylindrical cell. The three resulting raw site-site correlation functions are shown in Fig. 3. It is evident that the three functions do not approach a constant value at large $`r`$. The normalization factor in (1) must also include a uniform profile,

$$g_u(r)=\frac{V_p}{\left(2\pi \right)^3}\int d^3Q\,P_{cyl}\left(Q\right)e^{i\mathbf{Q}\cdot \mathbf{r}}.$$ (2)

$`P_{cyl}(Q)`$ is the form factor of the cylindrical simulation cell of height $`L`$ and radius $`R`$,

$$P_{cyl}\left(Q\right)=\int _0^1d\mu \left[j_0\left(\frac{\mu QL}{2}\right)\right]^2\left[\frac{2j_1\left(QR\sqrt{1-\mu ^2}\right)}{QR\sqrt{1-\mu ^2}}\right]^2,$$ (3)

where $`j_n(x)`$ are the spherical Bessel functions of the first kind of order $`n`$. The resulting $`g_u(r)`$ is the smooth dashed line shown in Fig. 3. The properly normalized oxygen-oxygen and oxygen-hydrogen site-site distribution functions are then obtained by dividing $`g_{\alpha \beta }^{(MD)}(r)`$ by $`f_cg_u(r)`$.
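Eqs. (2)–(3) can be evaluated numerically. In the sketch below the axial factor is the spherical $`j_0`$, while the radial disc factor is implemented with the cylindrical Bessel function $`J_1`$, as in the standard cylinder form factor (an assumption on our part; the text labels both factors as spherical). The dimensions are those quoted in the text, and the $`Q`$ cutoff is an assumption:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import j1  # cylindrical Bessel function J1

L, R = 71.0, 20.0               # cell height and pore radius, Angstrom
Vp = np.pi * R**2 * L

def sph_j0(x):
    return np.sinc(x / np.pi)   # spherical Bessel j0(x) = sin(x)/x

def P_cyl(Q):
    """Orientation-averaged cylinder form factor, eq. (3)."""
    def integrand(mu):
        s = max(np.sqrt(max(1.0 - mu * mu, 0.0)), 1e-12)
        x = Q * R * s
        disc = 2.0 * j1(x) / x if x > 1e-8 else 1.0
        return sph_j0(0.5 * mu * Q * L) ** 2 * disc ** 2
    return quad(integrand, 0.0, 1.0, limit=400)[0]

# Tabulate P(Q) once, then Fourier-transform to the uniform profile, eq. (2).
Q = np.linspace(1e-3, 3.0, 400)     # Angstrom^-1; cutoff is an assumption
PQ = np.array([P_cyl(q) for q in Q])

def g_u(r):
    kern = np.sinc(Q * r / np.pi)   # sin(Qr)/(Qr) from the angular average
    f = Q * Q * PQ * kern
    return Vp / (2.0 * np.pi**2) * np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(Q))

print(g_u(0.1), g_u(30.0))          # close to 1 at small r, decaying with r
```

Dividing the raw $`g^{(MD)}`$ by $`f_cg_u(r)`$ then removes this purely geometric decay, which is what makes the corrected functions approach one at large $`r`$.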
The correction factor $`f_c`$, which lies between 1.0 and 1.04, takes into account the roughness of the surface and has been adjusted in such a way that the corrected pair correlation functions are close to one in the region $`r>8`$ Å. The corrected pair correlation functions are shown in Fig. 4 and are compared to the corresponding functions of SPC/E water at ambient conditions (dashed lines). We note that the modification of the oxygen-oxygen function relative to bulk water shows a trend similar to the experimental one (see Fig. 7a in Ref. ). The first minimum becomes shallower and fills in. Such behavior was also observed in experiments and computer simulations of water under pressure, where it was explained as resulting from a collapsing hydrogen-bond network . In the oxygen-hydrogen function we notice that the first peak is lower in amplitude and the first minimum shifts towards shorter distances, as was observed in the experiment (Fig. 7b in Ref. ). Agreement with experimental results is less satisfactory in the case of the hydrogen-hydrogen pair correlation function. The experimental function is changed much more than the simulated one. The differences may be due to the fact that, in the experiment, the hydroxyl hydrogen atoms of Vycor are not distinguished from water hydrogen atoms. We conclude that our results are in qualitative agreement with the experimental trend for the confinement-induced changes in water structure. According to Ref. , those changes indicate a distortion of the H-bond ordering in the confined water, induced by the interaction with the substrate. The distortion of the hydrogen bond network deserves further investigation.
Since there are several approximations made in extracting the water-water contribution from the measured structure factors, computer simulations can contribute to our understanding, first by providing a more detailed analysis of the simulated hydrogen bond network than is possible in experiment, and second by investigating the differences between water molecules located at various distances from the substrate surface.

## 4 Hydrogen bond network of confined water

In a first attempt to characterize the hydrogen bond network, we calculated the number of hydrogen bonds between water molecules and between water molecules and surface atoms. A geometric criterion is applied: a hydrogen bond exists between two oxygen atoms if the angle between the intramolecular O-H vector and the intermolecular O$`\cdots `$O vector is less than $`30^{\circ }`$, provided that the O$`\cdots `$O separation is less than $`3.35`$ Å. In Fig. 5 we show the results for $`N_W=2600`$. Close to the surface the number of water-water hydrogen bonds is reduced; a large number of interfacial water molecules is engaged in hydrogen bonds between water molecules and oxygen atoms of the pore surface. Taking the sum of both contributions, one can see that the total number of hydrogen bonds remains constant: the loss of water-water hydrogen bonds is compensated by water-Vycor bonds. Only bridging oxygen atoms of the Vycor surface contribute significantly to the number of hydrogen bonds; the number of bonds between hydroxyl groups on the surface and water molecules is small and therefore not shown in Fig. 5. It is worth noticing that at this high level of hydration the average number of H-bonds is always lower than the number of bonds formed by bulk SPC/E water. In Fig. 6 we have investigated the distribution of hydrogen bond angles in more detail for the simulation with $`N_W=2600`$.
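The geometric criterion above is straightforward to implement. A minimal sketch for a single candidate bond (coordinates in Å; the example geometries are invented for illustration):

```python
import numpy as np

R_OO_MAX = 3.35    # Angstrom, O...O distance cutoff from the text
ANGLE_MAX = 30.0   # degrees, cutoff on the angle between O-H and O...O

def is_hbonded(o_donor, h, o_acceptor):
    """Geometric hydrogen-bond test as defined in the text."""
    o_donor, h, o_acceptor = map(np.asarray, (o_donor, h, o_acceptor))
    oo = o_acceptor - o_donor
    d_oo = np.linalg.norm(oo)
    if d_oo >= R_OO_MAX:
        return False
    oh = h - o_donor
    cosang = np.dot(oh, oo) / (np.linalg.norm(oh) * d_oo)
    angle = np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
    return bool(angle < ANGLE_MAX)

# A nearly linear bond at 2.8 Angstrom passes; the same geometry at 3.6 fails.
print(is_hbonded([0, 0, 0], [0.96, 0, 0], [2.8, 0.1, 0.0]))   # True
print(is_hbonded([0, 0, 0], [0.96, 0, 0], [3.6, 0.1, 0.0]))   # False
```

In the actual analysis this test would be applied to every donor–acceptor pair within the distance cutoff, per frame, and averaged over the trajectory.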
The figure depicts the distribution of $`\mathrm{cos}\varphi `$, where $`\varphi `$ is the angle between the intramolecular O-H and the intermolecular H$`\cdots `$O vector (see inset). Only that part of the distribution which corresponds to almost linear hydrogen bonds is shown. The top frame shows the distribution of water-water hydrogen bond angles close to the center of the cylinder (solid line) and close to the pore surface (dashed). While there is a reduction in the number of water-water bonds, their nature is not changed. By and large, linear water-water hydrogen bonds are favored. There is, however, a tendency toward slightly more distorted hydrogen bonds close to the pore surface, as can be inferred from the reduced height of the distribution (dashed line) at $`\mathrm{cos}\varphi =1`$ and the increase for $`\mathrm{cos}\varphi <0.9`$. The center frame of Fig. 6 contains the corresponding distribution for the hydrogen bonds between water molecules and the bridging oxygen atoms. The distribution is almost identical to that between water molecules. Hence, the loss in water-water bonds is quantitatively (see Fig. 5) and structurally compensated by hydrogen bonds to Vycor atoms. The linearity of hydrogen bonds between water molecules and surface oxygen atoms on hydrophilic substrates was already observed by Lee and Rossky . The angle distribution between water molecules and hydroxyl groups on the surface (bottom frame of Fig. 6) is uniform; there is no tendency to form linear bonds between water and NBO’s as in the case of water and BO’s. The probable cause is that, contrary to Lee and Rossky’s work, all substrate atoms including the hydrogen atoms are immobile during the simulation, so that the hydroxyl groups are not able to reorient optimally to form hydrogen bonds with water molecules. Almost linear hydrogen bonds can, however, form between water molecules and BO’s due to the rotational mobility of the water molecule.
Since the total number of hydrogen bonds between water and NBO’s is very small, we did not attempt to modify our model of the Vycor surface. We have further analyzed the local structure around a water molecule. We explored the arrangement of nearest neighbor molecules by means of the distribution of the angle $`\gamma `$, which is the angle between the two vectors from the oxygen atom of a water molecule to the oxygen atoms of the two closest water neighbors (see inset of Fig. 7). Figure 7 shows the angular distribution function $`P(\mathrm{cos}\gamma )`$ in the fully hydrated system with $`N_W=2600`$ for different radial layers (full lines, as indicated). For comparison, the corresponding distribution for bulk water is given as the dashed lines. The arrows point to the average value of $`\mathrm{cos}\gamma `$. Both in experiment and in simulation, the distribution function for bulk water at ambient conditions shows a well defined peak at $`\gamma \approx 105^{\circ }`$ ($`\mathrm{cos}\gamma \approx -0.26`$), representing the tetrahedral order, and a secondary peak at $`\gamma \approx 54^{\circ }`$ ($`\mathrm{cos}\gamma \approx 0.6`$), attributed to neighbors located in the cavities of the hydrogen bond network . In Vycor glass the distributions $`P(\mathrm{cos}\gamma )`$ near the center of the pore are similar to the bulk distribution. In the distribution of the innermost region, with density less than the ambient density, a more ordered tetrahedral structure than in the bulk phase is quite obvious. Closer to the boundary of the pore, the minimum between the main peak of the tetrahedral coordination and the peak at $`\mathrm{cos}\gamma =0.6`$ begins to fill in, and for $`\rho >16`$ Å the distribution is monotonic, indicating a departure from the preferential tetrahedral arrangement. In the layer between 18 and 19 Å, there is an obvious preference for small angles. The average value, $`\langle \mathrm{cos}\gamma \rangle `$ (arrows in Fig. 7), shifts to larger values from the center to the surface of the pore.
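The observable histogrammed in Fig. 7 is simple to compute per molecule. A sketch, with an invented configuration for illustration:

```python
import numpy as np

def cos_gamma(center, neighbors):
    """cos of the angle gamma subtended at a water oxygen by its two
    nearest oxygen neighbors (the quantity histogrammed in Fig. 7)."""
    center = np.asarray(center, float)
    pts = np.asarray(neighbors, float)
    d = np.linalg.norm(pts - center, axis=1)
    a, b = pts[np.argsort(d)[:2]] - center   # two closest neighbors
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# A perfectly tetrahedral neighbor pair gives cos(gamma) = -1/3 (~109.5 deg);
# the third point is farther away and is correctly ignored.
tet = [[1, 1, 1], [1, -1, -1], [5, 5, 5]]
print(cos_gamma([0, 0, 0], tet))   # -> -0.333...
```

Accumulating this value over all water oxygens in a radial layer and over all frames yields $`P(\mathrm{cos}\gamma )`$ for that layer.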
The shift to larger values is a consequence both of the substrate-induced distortion of individual water-water hydrogen bonds and of the higher local density, which leads to a collapse of the tetrahedral structure due to packing effects.

## 5 Summary and Conclusions

We have performed molecular dynamics simulations of water in porous Vycor glass for different degrees of hydration, ranging from 19 to 96%. In the present work, we analyzed the modifications of the hydrogen bond network of water at the highest degree of hydration. At nearly full hydration the structural properties of water in the center of the pore are similar to the bulk phase. We did not compare the chemical potential of water in the pore to that of bulk water. Close to the surface the structure changes in several ways. Due to the hydrophilic properties of the surface, the density distribution of hydrogen and oxygen atoms shows the formation of two layers of water molecules which interact with the Vycor surface; the water-water hydrogen bond network in the boundary region is distorted, as hydrogen bonds between water molecules are partially substituted by bonds from water to bridging oxygens of the Vycor glass. Hydrogen bonds close to the surface are still preferentially linear but show a larger deviation from linearity than hydrogen bonds in the center of the pore or in bulk water. The local tetrahedral arrangement of water molecules, however, is destroyed close to the surface due to the geometric confinement and the increased local density. The presence of the pore surface also changes the radial pair distribution functions relative to those of bulk water. The minimum of the oxygen-oxygen radial distribution function fills in with respect to the bulk; this observation is familiar from water under pressure, where it also indicates a decrease in hydrogen bond order.
The comparison of the pair correlation functions with experiment is encouraging, and we believe that further studies at different temperatures would be helpful for a deeper understanding of the phenomena concerned with the modification of hydrogen bonding in confined water.

## Acknowledgments

We gratefully acknowledge financial support by the Fonds der Chemischen Industrie and the Clothilde Eberhardt Foundation.
Figure 1: On the left, the total (heavy line), potential (dashed line), kinetic (dot-dash line) and magnetic (full line) energies of three CMEs. On the right, their mass (diamonds) and velocity (asterisks).

LASCO and EIT Observations of Coronal Mass Ejections

K. P. Dere, Naval Research Laboratory, Code 7663, Washington DC 20375

A. Vourlidas, P. Subramanian, Center for Earth Observing and Space Research, Institute for Computational Sciences, George Mason University, Fairfax, VA 22030

Proceedings of the YOHKOH 8th Anniversary International Symposium, “Explosive Phenomena in Solar and Space Plasma”, Dec 6-8 1999, Sagamihara, Japan

The LASCO and EIT instruments on the SOHO spacecraft have provided an unprecedented set of observations for studying the physics of coronal mass ejections (CMEs). They provide the ability to view the pre-event corona, the initiation of the CME and its evolution from the surface of the Sun through 30 $`R_{\odot }`$. An example of the capability of these instruments is provided in the description of a single event (Dere et al., 1997). During the first 2 years of operation of LASCO and EIT on SOHO, a substantial fraction, on the order of 25 to 50%, of the CMEs observed exhibited structure consistent with the ejection of a helical magnetic flux rope. Examples of these have been reported by Chen et al. (1997) and Dere et al. (1999). These events may be the coronal counterpart of the magnetic clouds discussed by Burlaga et al. (1981) and Klein and Burlaga (1982). They analyzed observations of magnetic fields behind interplanetary shocks and deduced that the field topology was that of a helical flux rope. Recently, we have explored a number of the consequences of the helical flux rope description of these types of CMEs. Vourlidas et al. (1999) examined the energetics of CMEs with data from the LASCO coronagraphs on SOHO. The LASCO observations provide fairly direct measurements of the mass, velocity and dimensions of CMEs.
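The kinetic and gravitational potential energies follow directly from such mass and velocity measurements. An order-of-magnitude sketch, treating the CME as a point mass; the mass, speed and height below are illustrative typical values, not numbers from the paper:

```python
G = 6.674e-8          # gravitational constant, cm^3 g^-1 s^-2
M_SUN = 1.989e33      # solar mass, g
R_SUN = 6.957e10      # solar radius, cm

def cme_energies(mass_g, v_cm_s, r_cm):
    """Kinetic energy and potential energy (relative to the solar surface)
    of a CME treated as a point mass, in erg."""
    ke = 0.5 * mass_g * v_cm_s**2
    pe = G * M_SUN * mass_g * (1.0 / R_SUN - 1.0 / r_cm)
    return ke, pe

# A 10^15 g CME at 5 solar radii moving at 400 km/s (illustrative values).
ke, pe = cme_energies(1e15, 4.0e7, 5 * R_SUN)
print(f"KE ~ {ke:.1e} erg, PE ~ {pe:.1e} erg")
```

Both terms come out near $`10^{30}`$ erg for these inputs, the scale on which the magnetic energy must be drawn down if the total is to stay roughly constant, as found below.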
Using these basic measurements, we determined the potential and kinetic energies and their evolution for several CMEs that exhibited a flux-rope morphology. Assuming magnetic flux conservation (’frozen-in’ fields), we used observations of the magnetic flux in a variety of magnetic clouds near the Earth to determine the magnetic flux and magnetic energy in CMEs near the Sun. Figure 1 shows these quantities for a few representative flux rope CMEs. In general, we find that the potential and kinetic energies increase at the expense of the magnetic energy as the CME moves out, keeping the total energy roughly constant. This demonstrates that flux rope CMEs are magnetically driven. Furthermore, since their total energy is constant, the flux rope parts of the CMEs can be considered to be a closed system above $``$ 2 $`R_{}`$. Subramanian et al. (1999) examined images from LASCO to study the relationship of coronal mass ejections (CMEs) to coronal streamers. We wished to test the suggestion of Low (1996) that CMEs arise from flux ropes embedded in streamers near their base. It is expected that the CME eruption would lead to the disruption of the streamer. To date, this is the most extensive observational study of the relation between CMEs and streamers. The data span a period of 2 years near sunspot minimum through a period of increased activity as sunspot numbers increased. We have used LASCO C2 coronagraph data which records Thomson scattered white light from coronal electrons at heights between 1.5 and 6$`R_s`$. Synoptic maps of the coronal streamers have been constructed from C2 observations at a height of 2.5$`R_s`$ at the east and west limbs. We have superposed the corresponding positions of CMEs observed with the C2 coronagraph onto the synoptic maps. We identified the different kinds of signatures CMEs leave on the streamer structure at this height (2.5$`R_s`$). We find four categories of CMEs with respect to their effect on streamers: 1. 
CMEs that disrupt the streamer. 2. CMEs that have no effect on the streamer, even though they are related to it. 3. CMEs that create streamer-like structures. 4. CMEs that are latitudinally displaced from the streamer. Figure 2 summarizes these results. CMEs in categories 3 and 4 are not related to the streamer structure. We therefore conclude that approximately 35% of the observed CMEs bear no relation to the pre-existing streamer, while 46% have no effect on the observed streamer, even though they appear to be related to it. Previous studies using SMM data (Hundhausen 1993) have made the general statement that CMEs are mostly associated with streamers and that they frequently disrupt them. Our conclusions thus significantly alter the prevalent paradigm about the relationship of CMEs to streamers. Subramanian and Dere (2000) have examined coronal transients observed on the solar disk in EIT 195 Å images that correspond to coronal mass ejections observed by LASCO during the solar minimum phase of January 1996 through May 1998. The objective of the study is to gain an understanding of the source regions from which the CMEs observed in LASCO images emanate. We compare the CME source regions as discerned from EIT 195 Å images with photospheric magnetograms from the MDI on SOHO and from NSO Kitt Peak, and also with BBSO H$`\alpha `$ images. The overall results of our study suggest that a majority of the CME-related transients observed in EIT 195 Å images are associated with active regions. We have carried out detailed case studies of 5 especially well-observed events. These case studies suggest that active region CMEs are often associated with the emergence of parasitic polarities into fairly rapidly evolving active regions. CMEs associated with prominence eruptions, on the other hand, are typically associated with long-lived active regions. Figure 3 summarizes these results.

References

Burlaga, L., Sittler, E., Mariani, F., Schwenn, R., 1981, JGR, 86, 6673

Chen, J., et al., 1997, ApJ, 338, L194

Dere, K. P., et al., 1997, Solar Phys., 175, 601

Dere, K. P., Brueckner, G. E., Howard, R. A., Michels, D. J., Delaboudiniere, J. P., 1999, ApJ, 516, 465

Klein, L. W., Burlaga, L. F., 1982, JGR, 87, 613

Subramanian, P., Dere, K. P., 2000, ApJ, in preparation

Subramanian, P., Dere, K. P., Rich, N. B., Howard, R. A., 1999, JGR, 104, 22331

Vourlidas, A., Subramanian, P., Dere, K. P., Howard, R. A., 2000, ApJ, in press
# Pulse driven switching in one-dimensional nonlinear photonic band gap materials: a numerical study ## I INTRODUCTION Nonlinear dielectric materials exhibiting a bistable response to intense radiation are key elements for an all-optical digital technology. For certain input optical powers there may exist two distinct transmission branches forming a hysteresis loop, which incorporates a history dependence in the system’s response. Exciting applications involve optical switches, logic gates, set-reset fast memory elements, etc. Much interest has been given lately to periodic nonlinear structures, in which, because of the distributed feedback mechanism, the nonlinear effect is greatly enhanced. In the low intensity limit, these structures are just Bragg reflectors characterized by high transmission bands separated by photonic band gaps. For high intensities and frequencies inside the transmission band, bistability results from the modulation of transmission by an intensity-dependent phase shift. For frequencies inside the gap, bistability originates from gap soliton formation, which can lead to much lower switching thresholds. The response of nonlinear periodic structures illuminated by a constant wave (CW) with a frequency inside the photonic gap is generally separated into three regimes: i) steady state response via stationary gap soliton formation, ii) self-pulsing via excitation of solitary waves, and iii) chaotic response. Much theoretical work has been done for systems with a weak sinusoidal refractive index modulation and uniform nonlinearity, or deep-modulation multilayered systems, as well as experimental work. One case of interest is when the system is illuminated by a CW bias and switching between different transmission states is achieved by means of external pulses.
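In the low-intensity limit the band/gap structure of such a stack can be checked with a standard characteristic-matrix (transfer-matrix) calculation. The sketch below uses the 21-layer geometry quoted later in the paper (20 nm slabs with $`ϵ_r=3.5`$ spaced by a lattice constant of 200 nm); it is an independent illustration of the linear transmission spectrum, not the authors' FDTD code:

```python
import numpy as np

def transmission(freq_norm, n_layer=np.sqrt(3.5), d=20e-9, a=200e-9, periods=21):
    """Linear transmission of the multilayer via the characteristic-matrix
    method: `periods` high-index slabs of thickness d and index n_layer,
    separated by vacuum gaps of width a - d; freq_norm = a / lambda."""
    lam = a / freq_norm
    m = np.eye(2, dtype=complex)
    for n, t in [(n_layer, d), (1.0, a - d)] * periods:
        delta = 2 * np.pi * n * t / lam          # phase thickness of the layer
        m = m @ np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                          [1j * n * np.sin(delta), np.cos(delta)]])
    b, c = m @ np.array([1.0, 1.0])              # exit medium: vacuum
    r = (b - c) / (b + c)                        # entry medium: vacuum
    return 1.0 - abs(r) ** 2                     # lossless stack: T = 1 - R

# deep in the Bragg gap vs. inside a transmission band (normalized frequencies)
t_gap, t_band = transmission(0.46), transmission(0.25)
```

For this geometry the half-wave Bragg condition puts the gap near $`\omega a/2\pi c\approx 0.46`$, where the 21-period stack is essentially opaque, while in-band frequencies transmit strongly.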
Such switching has already been demonstrated experimentally for various kinds of nonlinearities, but to our knowledge, a detailed study of the dynamics, the optimal pulse parameters and the stability under phase variations during injection has yet to be performed. In this paper we use the Finite-Difference-Time-Domain (FDTD) method to study the time-dependent properties of CW propagation in multilayer structures with a Kerr-type nonlinearity. We find our results generally in accord with those obtained for systems with weak linear index modulation, which were solved with approximate methods. We next examine the dynamics of driving the system from one transmission state to the other by injecting a pulse, and try to find the optimal pulse parameters for this switching. We also test how these parameters change for a different initial phase or frequency of the pulse. Finally, we repeat the analysis for the case of a linear multilayer structure with a nonlinear impurity layer. ## II FORMULATION Electromagnetic wave propagation in dielectric media is governed by Maxwell’s equations $`\mu {\displaystyle \frac{\partial \vec{H}}{\partial t}}=-\vec{\nabla }\times \vec{E}`$ $`{\displaystyle \frac{\partial \vec{D}}{\partial t}}=\vec{\nabla }\times \vec{H}`$ (1) Assuming here a Kerr-type saturable nonlinearity and an isotropic medium, the electric flux density $`\vec{D}`$ is related to the electric field $`\vec{E}`$ by $`\vec{D}=ϵ_0\vec{E}+\vec{P}_L+\vec{P}_{NL}=ϵ_0\left(ϵ_r+{\displaystyle \frac{\alpha |\vec{E}|^2}{1+\gamma |\vec{E}|^2}}\right)\vec{E}`$ (2) where $`\gamma \geq 0`$. $`\vec{P}_L`$ and $`\vec{P}_{NL}`$ are the induced linear and nonlinear electric polarizations, respectively. Here we will assume zero linear dispersion and so a frequency-independent $`ϵ_r`$. Inverting this to obtain $`\vec{E}`$ from $`\vec{D}`$ involves the solution of a cubic equation in $`|\vec{E}|`$.
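Recovering $`\vec{E}`$ from $`\vec{D}`$ at each grid point amounts to picking the physical root of that cubic. A sketch, assuming a self-defocusing choice $`\alpha =-1`$ with $`\gamma =(ϵ_r-1)^{-1}`$ (so that the dielectric function saturates toward vacuum at high intensity):

```python
import numpy as np

EPS0 = 8.854e-12  # vacuum permittivity [F/m]

def e_from_d(d_mag, eps_r=3.5, alpha=-1.0, gamma=1.0 / 2.5):
    """Invert |D| = eps0*(eps_r + alpha*e^2/(1 + gamma*e^2))*e for e = |E|.
    Multiplying through by (1 + gamma*e^2) gives the cubic
      eps0*(eps_r*gamma + alpha)*e^3 - |D|*gamma*e^2 + eps0*eps_r*e - |D| = 0."""
    coeffs = [EPS0 * (eps_r * gamma + alpha), -d_mag * gamma,
              EPS0 * eps_r, -d_mag]
    roots = np.roots(coeffs)
    # keep the real, non-negative root (unique for these parameter choices)
    real = roots[np.isreal(roots)].real
    return float(real[real >= 0].min())

# sanity check: at low field the response is linear, |E| ~ |D| / (eps0*eps_r)
e_small = e_from_d(1e-6 * EPS0 * 3.5)
```

For $`\alpha <0`$ and $`\gamma >0`$ the cubic has one real root, so the inversion is unambiguous, matching the statement in the next paragraph of the text.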
For $`\alpha \geq 0`$ there is always only one real root, so there is no ambiguity. For $`\alpha <0`$ this is true only for $`\gamma >0`$. In our study we will use $`\alpha =-1`$ and $`\gamma =(ϵ_r-1)^{-1}`$ so that for $`|\vec{E}|\rightarrow \mathrm{}`$, $`\vec{D}\rightarrow ϵ_0\vec{E}`$. The structure we are considering consists of a periodic array of 21 nonlinear dielectric layers in vacuum, each 20 nm wide with $`ϵ_r=3.5`$, separated by a lattice constant $`a=`$ 200 nm. The linear, or low-intensity, transmission coefficient as a function of frequency is shown in Fig. 1a. In the numerical setting each unit cell is divided into 256 grids, half of them defining the highly refractive nonlinear layer. For the midgap frequency, this corresponds to about 316 grids per wavelength in the vacuum area and 1520 grids per effective wavelength in the nonlinear dielectric, where of course the length scale is different in the two regions. Stability considerations only require more than 20 grids per wavelength. Varying the number of grids used, we found our results to be completely converged. On the two sides of the system we apply absorbing boundary conditions. We first study the structure’s response to an incoming constant plane wave of frequency close to the gap edge, $`\omega a/2\pi c=0.407`$. For each value of the amplitude, we wait until the system reaches a steady state and then calculate the corresponding transmission and reflection coefficients. If no steady state is achievable, we approximate them by averaging the energy transmitted and reflected over a certain period of time, always checking that energy conservation is satisfied. Then the incident amplitude is increased to its next value, which is done adiabatically over a time period of 20 wave cycles, and the measurements are repeated. This procedure continues until a desired maximum value is reached; we then decrease the amplitude, repeating the same routine backwards.
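The update loop behind such a simulation can be sketched as a minimal 1D Yee FDTD scheme (vacuum only, hard sinusoidal source, reflecting ends instead of absorbing boundary conditions — a toy version of the method, not the authors' production code):

```python
import numpy as np

C = 2.998e8                      # speed of light [m/s]
EPS0, MU0 = 8.854e-12, 1.257e-6  # vacuum permittivity / permeability

def fdtd_1d(n_cells=400, n_steps=800, src=100):
    """Minimal 1D Yee FDTD loop in vacuum: staggered E and H updates,
    a hard sinusoidal source at cell `src`, no absorbing boundaries."""
    dx = 10e-9                   # illustrative grid spacing
    dt = dx / (2 * C)            # Courant-stable time step
    ez = np.zeros(n_cells)
    hy = np.zeros(n_cells)
    for n in range(n_steps):
        # H update uses the spatial difference of E, then E uses H
        hy[:-1] += dt / (MU0 * dx) * (ez[1:] - ez[:-1])
        ez[1:] += dt / (EPS0 * dx) * (hy[1:] - hy[:-1])
        ez[src] = np.sin(2 * np.pi * 0.02 * n)   # hard source
    return ez

ez = fdtd_1d()
```

A real run of the kind described in the text would replace the vacuum E-update with the nonlinear $`\vec{D}\rightarrow \vec{E}`$ inversion inside the dielectric layers and terminate the grid with absorbing boundary conditions.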
The form of the incident CW is $`E_{CW}(t)=\left(A_{CW}+dA_{CW}{\displaystyle \frac{\mathrm{min}\{(t-t_0),20T\}}{20T}}\right)e^{i\omega t}`$ (3) where $`t_0`$ is the time when the amplitude change started, $`A_{CW}`$ is the last amplitude value considered and $`dA_{CW}`$ the amplitude increment. One wave cycle $`T`$ involves about 2000 time steps. ## III RESPONSE TO A CW BIAS The amplitude of the CW is varied from zero to a maximum of 0.7, with about 40 measurements in between. Results are shown in Fig. 1b, along with the corresponding one from a time-independent approximation. The agreement between the two methods is exact for small intensities; however, after a certain input the output waves are no longer constant but pulsating. This is in accord with the results obtained with the slowly varying envelope approximation for systems with a weak refractive index modulation. It is interesting that the averaged output power is still in agreement with the time-independent results, something not mentioned in earlier work. For higher input values, the solution will again reach a steady state just before going to the second nonlinear jump, after which it will again become pulsating. This time though, the averaged transmitted power is quantitatively different from the one predicted from time-independent calculations. The nonlinear transmission jump originates from the excitation of a stationary gap soliton when the incident intensity exceeds a certain threshold value. Due to the nonlinear change of the dielectric constant, the photonic gap is shifted locally in the area underneath the soliton, which becomes effectively transparent, resembling a quantum well with the soliton being its bound state solution. The incident radiation coupled to that soliton tunnels through the structure and large transmission is achieved. We obtain a maximum switching time of the order of 100 $`T_r`$, or a frequency of 360 GHz, where $`T_r=2L/c`$ is the round-trip time in vacuum.
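Eq. (3) is just a linear amplitude ramp spread over 20 wave cycles; a direct transcription (parameter values are illustrative):

```python
import numpy as np

def cw_drive(t, t0, a_prev, da, omega, T):
    """Incident CW of Eq. (3), for t >= t0: the amplitude ramps adiabatically
    from a_prev to a_prev + da over 20 wave cycles starting at t0."""
    ramp = np.minimum(t - t0, 20 * T) / (20 * T)
    return (a_prev + da * ramp) * np.exp(1j * omega * t)

T = 1.0
t = np.linspace(0.0, 40 * T, 2001)
e = cw_drive(t, t0=0.0, a_prev=0.1, da=0.05, omega=2 * np.pi / T, T=T)
```

The envelope starts at the previous amplitude (0.1 here), reaches the new value (0.15) after 20 cycles, and stays there, which is the adiabatic increment procedure described above.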
The second transmission jump is related to the excitation of two gap solitons, which, however, are not stable, and so transmission is pulsating. The Fourier transform of the output shows that, after the second transmission jump, the system pulsates at frequencies $`\omega a/2\pi c=0.407\pm n\times 0.024`$, a sideband spacing exactly three times that of the first pulsating solution, $`\omega a/2\pi c=0.407\pm n\times 0.008`$, where $`n`$ is an integer. For much higher input values the response eventually becomes chaotic. A more detailed description of the switching process as well as the soliton generation dynamics can be found in . ## IV PULSE DRIVEN SWITCHING We next turn to the basic objective of this work. We assume a specific constant input amplitude $`|A_{CW}|=0.185`$ corresponding to the middle of the first bistable loop. Depending on the system’s history, it can be either in the low transmission state I, shown in Fig. 1c, or in the high transmission state II, shown in Fig. 1d, which are both steady states. We want to study the dynamics of a pulse injected into such a system: specifically, whether it will drive the system to switch from one state to the other, how the fields change in the structure during switching, for which pulse parameters this will happen, and whether these parameters change under small phase and frequency fluctuations. We assume Gaussian envelope pulses $`E_P(0,t)=A_Pe^{-(t-t_0-5t_w)^2/t_w^2}e^{i\omega t}`$ (4) where $`A_P`$ is the pulse amplitude, $`t_0`$ the time when injection starts, and $`W=2t_wc`$ is the pulse’s full width at $`1/e`$ of maximum amplitude. The beginning of time $`t`$ is the same as for the CW, so there is no phase difference between them. After injection we wait until the system reaches a steady state again and then measure the transmission and reflection coefficients to determine the final state. During this time we save the field values inside the structure every few time steps, as well as the transmitted and reflected waves.
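Eq. (4) can be transcribed directly; the fluence integral below illustrates why the quantity $`W|A_P|^2`$ sets the natural energy scale of such a pulse (parameter values are illustrative):

```python
import numpy as np

def pulse(t, a_p, t0, t_w, omega):
    """Gaussian envelope pulse of Eq. (4); the peak is delayed by 5*t_w so
    the leading edge turns on smoothly from essentially zero field."""
    return a_p * np.exp(-((t - t0 - 5 * t_w) / t_w) ** 2) * np.exp(1j * omega * t)

t_w, omega = 10.0, 2 * np.pi
t = np.linspace(0.0, 20 * t_w, 20001)
e_p = pulse(t, a_p=0.3, t0=0.0, t_w=t_w, omega=omega)

# fluence: integral of |E|^2 dt = |A_P|^2 * t_w * sqrt(pi/2), i.e. ~ W|A_P|^2
fluence = np.sum(np.abs(e_p) ** 2) * (t[1] - t[0])
```

For a Gaussian envelope the integral evaluates to $`|A_P|^2t_w\sqrt{\pi /2}`$, proportional to the width-times-intensity product that reappears in the constant-energy switching curve discussed in the next section.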
This procedure is repeated for various values of $`A_P`$ and $`t_w`$, for both possible initial states. Our results are summarized in Fig. 2. White areas indicate the pulse parameters for which the intended switch was successful, while black areas are those for which it failed. In Fig. 2a, or the “Switch” graph, the intended switching scheme is for the same pulse to be able to drive the system from state I to state II and vice versa. Fig. 2b, or “Switch All Up”, is for a pulse able to drive the system from I to II but which fails to do the opposite, i.e., the final state is always II independently of the initial state. Similarly, Fig. 2c, or “Switch All Down”, is for the pulse whose final state is always I, and Fig. 2d, or “No Switch”, for the pulse that does not induce any switch for any initial state. We find a rich structure in these parameter planes. Note also that there is a specific cyclic order as one crosses the curves moving to higher pulse energies: … → d → b → a → c → … This indicates that there must be some kind of energy requirement for each desired switching scheme. After analyzing the curves it was found that only the first one in the “Switch” graph could be assigned to a simple constant-energy curve, $`\mathcal{E}\propto W|A_p|^2`$. Since any switching involves the creation or destruction of a stationary soliton, this should be its energy. To put in some numbers: if we assume a nonlinearity $`|\alpha |=10^{-9}`$ cm<sup>2</sup>/W, then we would need a CW of intensity 34 MW/cm<sup>2</sup> and a pulse of width $`W/c`$ of a few tens of femtoseconds and energy $`\mathcal{E}\sim 2.5`$ $`\mu `$J/cm<sup>2</sup>. These energies may seem large, but they can be sufficiently lowered by increasing the number of layers and using an incident frequency closer to the gap edge. To find out more about how the switching occurs, we plotted in Fig. 3 the effective transparent areas and the output fields as a function of time, for the first three curves of Fig. 2a.
As expected, for the pulse from the first curve, the energy for the soliton excitation is just right, and the output fields are small compared to the input. For the other curves, however, there is an excess of energy. The system has to radiate this energy away before a stable gap soliton can be created. It is interesting to note that this energy goes only into the transmitted wave, not the reflected one, and it consists of a series of pulses. For the second curve in Fig. 2a there is one pulse, for the third there are two, etc. The width and frequency of the pulses are independent of the incident pulse; they are the known pulsating solutions we found in the CW case. So the system temporarily goes into a pulsating state to radiate away the energy excess before settling down into a stable state. If this energy excess is approximately equal to an integer number of pulses (the solitary waves from the unstable solutions), then we will have a successful switch; otherwise it will fail. A similar behavior is found in the system’s response during switch-down for the first three curves in Fig. 2a, using exactly the same pulses as before for the switch-up. So the same pulse is capable of switching the system up and, if reused, switching the system back down. Using the numbers assumed before for the nonlinearity $`\alpha `$, the pulses used in Fig. 3 are (a) $`W/c=14`$ fs, $`\mathcal{E}=2.5`$ $`\mu `$J/cm<sup>2</sup>, (b) $`W/c=28`$ fs, $`\mathcal{E}=12`$ $`\mu `$J/cm<sup>2</sup>, (c) $`W/c=42`$ fs, $`\mathcal{E}=32`$ $`\mu `$J/cm<sup>2</sup>. Up to now, the injected pulse has been treated only as an amplitude modulation of the CW source, i.e., they had exactly the same frequency and there was no phase difference between them. The naturally arising question is how an initial random phase between the CW and the pulse, or a slightly different frequency, affects our results. We repeated the simulations for various values of an initial phase difference, first keeping them at the same frequency.
We find that although the results show qualitatively the same striped structure as in Fig. 2, there are quantitative differences. The main result is that there is no set of pulse parameters that would perform the desired switching successfully for any initial phase difference. Thus the pulse cannot be incoherent with the CW, i.e., generated at a different source, if a controlled and reproducible switching mechanism is desired; rather, it should be introduced as an amplitude modulation of the CW. However, if this phase could be controlled, then the switching operation would be controlled, and a single pulse would be able to perform all the different operations. The picture does not change if we use pulses of slightly different frequency from the source. We used various pulses with frequencies both higher and lower than the CW, and we found a sensitive, rather chaotic, dependence on the initial phase at injection time. The origin of this complex response, whether it is an artifact of the simple Kerr-type nonlinearity model that we used, and whether it should appear for other kinds of nonlinearities, is not yet clear to us. More work is also needed on how these results would change if one used a different $`|A_{CW}|`$, not in the middle of the bistable loop, a wider or narrower bistable loop, etc., but this would go more into the scope of engineering. ## V LINEAR LATTICE WITH A NONLINEAR IMPURITY LAYER Besides increasing the number of layers to achieve lower switching thresholds, one can use a periodic array of linear layers $`ϵ=ϵ_0ϵ_r`$ with a nonlinear impurity layer $`ϵ=ϵ_0(ϵ_r^{}+\alpha |\vec{E}|^2)`$, where $`ϵ_r\ne ϵ_r^{}`$ and we will use $`\alpha =+1`$ and $`\gamma =0`$. This system is effectively a Fabry-Perot cavity with the impurity (cavity) mode inside the photonic gap, as shown in Fig. 4a. The bistable response originates from the nonlinear modulation of this mode with light intensity.
The deeper this mode is in the gap, the stronger the linear dispersion for frequencies close to it. Because of the high $`Q`$ of the mode, we can use frequencies extremely close to it, achieving very low switching thresholds. Here, however, we only want to study the switching mechanism, so we will use a shallow impurity mode. The bistable input-output diagram, the output fields during switching and the field distributions in the two transmission branches are also shown in Fig. 4. We observe a smaller relaxation time and of course the absence of pulsating solutions. The parameters used are $`ϵ_r^{}=1`$ and $`\omega a/2\pi c=0.407`$, which corresponds to a frequency between the mode and the gap edge. We want to test whether a pulse can drive this system to switch between the two different transmission states, and again test our results against phase and frequency perturbations. The two states shown in Fig. 4 are for an input CW amplitude of $`|A_{CW}|=0.16`$. The results for a coherent pulse and CW are shown in Fig. 5. We see that any desired form of switching can still be achieved, but the parameter plane graphs no longer bear any simple explanation like the ones obtained for the nonlinear superlattice. Repeating the simulations for incoherent beams and different frequencies, we obtain exactly the same results as before: only phase-locked beams can produce controlled and reproducible switching. ## VI CONCLUSIONS We have studied the time-dependent switching properties of nonlinear dielectric multilayer systems for frequencies inside the photonic band gap of the corresponding linear structure. The system’s response is characterized by both stable and self-pulsing solutions. We examined the dynamics of driving the system between different transmission states by pulse injection, and found correlations between the pulse, the stationary gap soliton and the unstable solitary waves.
A small dependence on the phase difference between the pulse and the CW is also found, requiring coherent beams for fully controlled and reproducible switching. Similar results are also found for the case of a linear periodic structure with a nonlinear impurity. ###### Acknowledgements. Ames Laboratory is operated for the U. S. Department of Energy by Iowa State University under contract No. W-7405-ENG-82. This work was supported by the Director of Energy Research, Office of Basic Energy Sciences and Advanced Energy Projects, the Army Research Office, and a PENED grant.
# Silicate emission in Orion Based on observations with ISO, an ESA project with instruments funded by ESA member states (especially the PI countries: France, Germany, the Netherlands and the United Kingdom) and with the participation of ISAS and NASA. ## 1 Introduction The Orion nebula is one of the most studied star–forming regions in the Galaxy. The ionizing stars of the Orion nebula (the Trapezium stars, the hottest of which is $`\theta ^1`$ Ori C, O6) have eroded a bowl–shaped H ii region into the surface of the Orion molecular cloud. The Orion bar is the limb–brightened edge of this bowl where an ionization front is progressing into the molecular cloud. It is seen as an elongated structure at a position angle of approximately 60°. The Orion nebula extends to the North. The Trapezium stars are located at an angular distance of approximately 2.3 arc minutes from the bar, corresponding to 0.35 pc at a distance of 500 pc. The molecular cloud extends to the other side of the bar, but also to the back of the Orion nebula. The bright star $`\theta ^2`$ Ori A (O9.5Vpe) lies near the bar, and is clearly in front of the molecular cloud since its color excess is only E(B–V) $`\approx `$ 0.2 mag. Figs. 1 and 2 illustrate the geometry of the region observed. Fig. 1 shows six representative images of the region of the Orion bar (see the figure caption for details). Fig. 2 shows the contours of the \[Ne iii\] 15.5$`\mu `$m fine-structure line emission which delineates the H ii region. The emission in one of the mid-IR bands (hereafter called the Aromatic Infrared Bands, AIBs) at 6.2$`\mu `$m traces the Orion bar (an edge-on PhotoDissociation Region or PDR). The AIBs are usually strongly emitted by PDRs. The Trapezium region was avoided because of possible detector saturation. Pioneering infrared observations by Stein & Gillet (1969) and Ney et al. (1973) discovered interstellar silicate emission near 10$`\mu `$m in the direction of the Trapezium.
This was confirmed by Becklin et al. (1976), who also noticed extended silicate emission around $`\theta ^2`$ Ori A. Since that time, interstellar silicate emission has been found by the Infrared Space Observatory (ISO) in the H ii region N 66 of the Small Magellanic Cloud (Contursi et al. in preparation) and in a few Galactic compact H ii regions (Cox et al. in preparation) and Photodissociation Regions (PDRs, Jones et al. in preparation). The emission consists of two broad bands centered at 9.7 and 18$`\mu `$m, which show little structure and are clearly dominated by amorphous silicates. We report in the present article ISO observations of the Orion bar and of a part of the Orion nebula made with the Circular Variable Filter of the ISO camera (CAM-CVF) which allowed imaging spectrophotometry of a field $`3^{\prime }\times 3^{\prime }`$ at low wavelength resolution (R $`\sim `$ 40). We also use an ISO Short–Wavelength Spectrometer (SWS) observation which provides higher–resolution ($`R\sim 1000`$) spectroscopy at a position within the H ii region (see Fig. 2). This spectrum was taken as part of the MPEWARM guaranteed-time program. We show here that these new ISO data confirm and extend previous observations of the amorphous silicate emission and also give evidence for emission by crystalline silicates. Sect. 2 of this paper describes the observations and data reduction. In Sect. 3, we discuss the emission of dust and gas. The silicate emission is characterized in Sect. 4 through modelling of the observed continuum IR emission. Finally, conclusions are presented in Sect. 5. Our observations also give information on the fine–structure lines and on the AIBs. This will be presented in Appendices A and B respectively. ## 2 Observations and data reduction Imaging spectrophotometry was performed with the 32$`\times `$32 element mid-IR camera (CAM) on board the ISO satellite, using the Circular Variable Filters (CVFs) (see Cesarsky et al. 1996a for a complete description).
The observations employed the 6″ per pixel field-of-view of CAM. Full scans of the two CVFs in the long-wave channel of the camera were performed with both increasing and decreasing wavelength. The results of these two scans are almost identical, showing that the transient response of the detector was only a minor problem for these observations. The total wavelength range covered is 5.15 to 16.5$`\mu `$m and the wavelength resolution $`\lambda /\mathrm{\Delta }\lambda \sim 40`$. 10$`\times `$0.28 s exposures were added for each step of the CVF, and 7 more at the first step in order to limit the effect of the transient response of the detectors. The total observing time was about 1 hour. The raw data were processed as described in Cesarsky et al. (1996b), with improvements described by Starck et al. (1998), using the CIA software (CIA is a joint development by the ESA Astrophysics Division and the ISOCAM Consortium led by the ISOCAM PI, C. Cesarsky, Direction des Sciences de la Matière, C.E.A., France). The new transient correction described by Coulais & Abergel (1998) has been applied but the corrections introduced are minimal, as mentioned above. The bright star $`\theta ^2`$ Ori A is visible in the maps of several spectral components and has been used to re–position the data cube. This involved a shift of only 2″. The final positions are likely to be good to 3″ (half a pixel). All the maps presented here were obtained from the CVF data cube and have approximately the same resolution: namely, 6″ pixels at the short wavelengths increasing to about 8″ at 15$`\mu `$m; see Appendix C for more details. In several of these maps a faint emission can be seen on the south–east part of the ISOCAM field of view. This feature does not correspond to anything conspicuous in published images of the region, in particular in the near–IR images of Marconi et al. (1998).
It is a spurious feature due to multiple reflections of the strong Trapezium between the detector and the CVF filter wheel, as shown by the ISOCAM ray tracing studies of Okumura (1999). The complete SWS scan (2.4-46$`\mu `$m) was reduced with the latest version of SWS-IA running at the Institut d’Astrophysique Spatiale. Calibration files version CAL-030 were used. Fig. 3 presents the SWS spectrum ($`\lambda /\mathrm{\Delta }\lambda \sim 1000`$) obtained inside the H ii region at the position indicated on Fig. 2. Fig. 3 also shows the comparison of the SWS spectrum with that of the CAM-CVF pixels averaged in the SWS aperture. The agreement between these spectra is excellent, well within 20 percent for the continuum. ## 3 Gas and dust emission The spectra towards the H ii region of the Orion nebula are shown in Figs. 3 and 4. The CAM-CVF spectrum of Fig. 3 is representative of the whole field because CVF spectra obtained at different positions in the H ii region and around $`\theta ^2`$ Ori A look qualitatively similar (compare Figs. 3 and 10, which show the CVF spectra of different pixels; note particularly the rising long wavelength portion of the spectra). In the SWS spectrum, a large number of unresolved lines from atoms, ions and molecules are visible. We note the Pf$`\alpha `$ recombination line of hydrogen (emitted by the warm, ionized gas of the H ii region) and the molecular hydrogen pure rotation lines S(2) and, very faintly, S(3) and S(5) (stemming from the cooler, molecular PDR gas). The simultaneous presence of these lines reflects the variety of physical conditions present along the line of sight. Clearly, we are looking at emission from the H ii region mixed with some emission from the background PDR. These unresolved lines are briefly discussed in Appendix A. The other striking fact of the SWS spectrum is the strong continuum peaking at about 25$`\mu `$m.
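As a rough consistency check, Wien's displacement law converts a thermal continuum peaking near 25 μm into a temperature scale (a pure blackbody estimate; real grain emissivity shifts the peak, so this is only indicative):

```python
# Wien's displacement law: lambda_peak * T = b, with b = 2.898e-3 m K.
# A continuum peaking near 25 um thus implies warm dust at roughly 10^2 K.
B_WIEN = 2.898e-3            # Wien displacement constant [m K]
LAM_PEAK = 25e-6             # observed peak wavelength [m]
t_dust = B_WIEN / LAM_PEAK   # ~116 K
```

This back-of-the-envelope value sits comfortably inside the 85–145 K silicate range derived from the detailed modelling in the next section.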
It is emitted by warm dust in the H ii region, but dust from the background PDR probably also contributes. The broad emission bands of amorphous silicates centered at 10 and 18$`\mu `$m are visible. The classical AIBs at 6.2, 7.7, 8.6, 11.3 and 12.7$`\mu `$m dominate the mid-IR part of the spectrum. As discussed by Boulanger et al. (1998), the mid-IR spectrum can be decomposed into Lorentz profiles (the AIBs) and an underlying polynomial continuum. Maps of the various AIBs constructed in this way all show the same morphology originating mainly from the PDR gas in the Orion bar (see Appendix B). We will hereafter use the 6.2$`\mu `$m-band as representative of the behaviour of the AIBs. In Fig. 5 we compare the behaviour of the mid-IR continuum emission and of the AIBs. Clearly, the AIB emission is concentrated in the Orion bar whereas the 15.5$`\mu `$m-continuum emission extends throughout the whole CAM field and shows a local peak around $`\theta ^2`$ Ori A (note that the mid-IR emission around this star is foreground because the star lies in front of the nebula). The continuum emission, however, appears to peak towards $`\theta ^1`$ Ori C, outside the region observed with ISOCAM. The contrast in the emission morphology between the bands and continuum can be interpreted in terms of the photodestruction of the AIB carriers in the hard UV-radiation field of the H ii region. The AIB carriers must be efficiently destroyed while the larger grains are much more resistant (e.g. Allain et al. 1996). We detail the modelling of the dust thermal emission in the next section. ### 3.1 Modelling the dust emission To account for the observed SWS spectra, we have calculated the thermal equilibrium temperature of dust in the Orion H ii region as a function of the distance from the Trapezium stars, assuming that $`\theta ^1`$ Ori C (an O6 star) dominates the local radiation field.
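A crude version of such an equilibrium-temperature calculation balances the absorbed stellar flux against modified-blackbody emission. The sketch below assumes a grey UV absorber with an emissivity law $`Q(\lambda )=(\lambda _0/\lambda )^\beta `$ and round-number grain and stellar parameters — not the Draine or Rouleau & Martin optical constants used in the paper — and it lands in the same warm-dust regime:

```python
import numpy as np

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23     # Planck, c, Boltzmann (SI)
L_SUN, PC = 3.828e26, 3.086e16               # solar luminosity [W], parsec [m]

def grain_temperature(d_pc, a=1e-7, lum=2e5 * L_SUN, lam0=1e-7, beta=2.0):
    """Equilibrium temperature of a grain of radius a [m] at distance d_pc
    from a star of luminosity lum, absorbing UV with Q ~ 1 and emitting
    with Q(lam) = (lam0/lam)**beta (all assumed, illustrative properties)."""
    p_abs = lum / (4 * np.pi * (d_pc * PC) ** 2) * np.pi * a ** 2
    lam = np.logspace(-6, -3, 2000)          # 1 um .. 1 mm emission grid

    def p_emit(t):
        x = np.minimum(H * C / (lam * KB * t), 700.0)   # avoid overflow
        b = 2 * H * C ** 2 / lam ** 5 / np.expm1(x)     # Planck function
        f = (lam0 / lam) ** beta * np.pi * b
        return 4 * np.pi * a ** 2 * np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(lam))

    lo, hi = 3.0, 3000.0                     # bisect the balance equation
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if p_emit(mid) < p_abs else (lo, mid)
    return 0.5 * (lo + hi)

t_d = grain_temperature(0.35)   # distance of the Bar from the Trapezium
```

Even with these rough assumptions the result is a few hundred kelvin at 0.35 pc, the same order as the 85–200 K range quoted below for realistic optical constants; small grains run hotter than the grey-body estimate because they emit inefficiently in the infrared.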
We use the optical constants of the amorphous astronomical silicate of Draine (1985) and of the amorphous carbon AC1 of Rouleau & Martin (1991). Assuming typical interstellar grain sizes (e.g. Draine & Lee 1984), we find a temperature range of 85–145 K for amorphous silicates and a range of 110–200 K for amorphous carbon, corresponding to grains of radius 1500 and 100 Å respectively, at a distance of $`\sim 0.35`$ pc from $`\theta ^1`$ Ori C (the distance of the Orion Bar to the Trapezium stars). Using, for simplicity, discrete dust temperatures consistent with those calculated above (T<sub>silicate</sub> = 80 K and 130 K, T<sub>carbon</sub> = 85 K and 155 K), we are able to satisfactorily model the continuum emission spectrum from the dust in the Orion H ii region at the position of the ISO-SWS spectrum. In Fig. 4 we show the calculated emission spectrum from our model, where we adopt the carbon/silicate dust mass ratios of Draine & Lee (1984). In the calculated spectrum we have included the emission from carbon grains at 300 K, containing $`\sim `$1 percent of the total carbon dust mass, in order to fit the short wavelength continuum emission. The hot carbon grain emission mimics that of the stochastically-heated Very Small Grains (VSGs, Désert et al. 1990). The 300 K temperature represents a mean of the temperature fluctuations for these small particles in the radiation field of $`\theta ^1`$ Ori C, and therefore indicates a lower mass limit of $`\sim `$1 percent for the mass of the available carbon in VSGs. The results of our model show that the emission feature in the 10$`\mu `$m region is dominated by amorphous silicates at temperatures of the order of 130 K, but that there may also be a small contribution from amorphous carbon grains in the 12$`\mu `$m region (Fig. 4). We also note broad “features” in the SWS spectrum, above the modelled continuum in Fig.
4, at 15–20$`\mu `$m, 20–28$`\mu `$m and longward of 32$`\mu `$m, that are not explained by our model. These features bear a resemblance to the major bands at 19.5, 23.7 and 33.6$`\mu `$m seen in the crystalline forsterite spectra of Koike et al. (Koike (1993)) and of Jaeger et al. (Jaeger (1998)). Bands in these same wavelength regions were noted by Jones et al. (Jones (1998)) in the SWS spectra of the M 17 H ii region and were linked with the possible existence of crystalline Mg-rich olivines in this object. Thus, similar broad emission bands are now observed in the 15–40$`\mu `$m wavelength region of the SWS spectra of two H ii regions (Orion and M 17). These bands resemble those of the crystalline Mg-rich silicate forsterite. Another band at 9.6$`\mu `$m is probably due to some sort of crystalline silicate, and will be discussed in more detail in the next section. This dust model is simple–minded but emphasizes the dust spectral signatures in the mid-IR continuum, which was the main aim here. More detailed modelling treating temperature fluctuations and taking into account the grain size distribution is underway (Jones et al. in preparation). The broad continua that lie above the model fit (i.e. 20–28$`\mu `$m and $`>`$ 32$`\mu `$m, Fig. 4) can be associated with crystalline silicate emission bands. This seems to be a robust conclusion of this study. The features are too narrow to be explained by single-temperature blackbody emission and are therefore likely to be due to blended emission features from different materials. Unfortunately, having only one full SWS spectrum and CVF spectra that do not extend beyond 18$`\mu `$m, we are unable to say anything about the spatial variation of these broad bands in the Orion region. Interestingly, broad plateaux in the 15–20$`\mu `$m region have been associated with large aromatic hydrocarbon species containing of the order of a thousand carbon atoms (van Kerckhoven et al. vanKerckhoven (2000)).
However, in this study the integrated intensity of the 15–20$`\mu `$m plateaux does vary by a factor of up to 10 relative to the aromatic carbon features shortward of 13$`\mu `$m. Thus, the origin of these broad emission features does remain something of an open question at this time. ## 4 Tracing the silicate emission To delineate the spatial extent of the 10$`\mu `$m-silicate emission conspicuously visible in Figs. 3 and 4, we proceed as follows. We start with the spectrum towards $`\theta ^2`$ Ori A, which shows the most conspicuous silicate emission, and we represent the AIBs by Lorentz profiles, see Fig. 6 (top). Next we subtract them from the CVF spectra. The remaining continuum has the generic shape of a blackbody on top of which we see the broad bands corresponding to the silicate emission, Fig. 6 (middle). Finally, we subtract a second order polynomial from the continuum thus obtaining the well known silicate emission profile at that position, Fig. 6 (bottom). The profile thus obtained is then used as a scalable template to estimate the emission elsewhere, see Appendix C for more details. On top of the broad band of amorphous silicate centered near 9.7$`\mu `$m we see a band centered near 9.6$`\mu `$m, which we ascribe to crystalline silicates (Jaeger et al. Jaeger (1998)). This band was also used as a scalable template as explained above and in Appendix C. Finally, the S(5) rotation line of H<sub>2</sub> at 6.91$`\mu `$m is present and is probably blended with the \[Ar ii\] line at 6.99$`\mu `$m. In Fig. 7, we see that the spatial distribution of the 9.7$`\mu `$m-feature of amorphous silicate is quite similar to that of the 15.5$`\mu `$m-continuum. The 15.5$`\mu `$m continuum emission includes a strong contribution from silicates (see Fig. 4), but a peak in the silicate emission around $`\theta ^2`$ Ori A is also evident. The silicate emission is thus predominantly due to larger grains. The narrower 9.6$`\mu `$m feature is mapped in Fig. 8.
We note its similarity to the distribution of the 9.7$`\mu `$m broad band: this fact lends support to our assignment of this band to crystalline silicate. Due to the low spectral resolution of the CAM-CVF, however, the 9.6$`\mu `$m feature will certainly blend with the S(3) pure rotational line of molecular hydrogen - if present. To check this we have compared our 9.6$`\mu `$m map to that of molecular hydrogen in its fluorescent vibrational line 1–0 S(1) (2.12$`\mu `$m). Courtesy of P.P. van der Werf (van der Werf et al. vanderwerf (1996)), we reproduce in Fig. 9 the map of the fluorescent molecular hydrogen emission. The latter correlates better with the AIB emission as traced by the 6.2$`\mu `$m-feature (bottom figure) than it does with the tentative crystalline silicate emission (top), namely they both peak along the bar. This is not surprising because the H<sub>2</sub> and AIB emitters require shielding from far-UV radiation to survive. Conversely, the 9.6$`\mu `$m silicate feature is stronger where H<sub>2</sub> is weak, as can be seen around $`\theta ^2`$ Ori A. In addition, the H<sub>2</sub> S(3) rotational line at 9.66$`\mu `$m is detected in the ISO-SWS spectrum of the Orion bar presented in Verstraete et al. (1999, in preparation) with an intensity of $`6\times 10^{7}`$ W m<sup>-2</sup> sr<sup>-1</sup>. This value is a factor of 16 below the median flux of the 9.6$`\mu `$m feature in our map, namely $`10^{5}`$ W m<sup>-2</sup> sr<sup>-1</sup>. We can thus safely conclude that our 9.6$`\mu `$m emission predominantly originates from silicates. A confirmation of the identification of the 9.6$`\mu `$m band with a crystalline silicate dust component would be possible if a second signature band were seen in our spectra. The SWS spectrum (Fig. 4) shows only broad emission bands that are difficult to characterise, and additionally, the characteristic crystalline olivine band in the 11.2–11.4$`\mu `$m region (e.g. Jaeger et al.
Jaeger (1998)), if present, is blended with the 11.2$`\mu `$m aromatic hydrocarbon feature. Additionally, most of the characteristic crystalline bands fall longward of the CVF spectra. Thus, it is difficult to self-consistently confirm the 9.6$`\mu `$m band identification with the presented data. In summary, emission in the 9.7$`\mu `$m band of amorphous silicate exists everywhere inside the Orion H ii region. Previously, amorphous silicate emission had only been seen in the direction of the Trapezium (Stein & Gillett Stein (1969); Forrest et al. Forrest (1975); Gehrz et al. Gehrz (1975)). We may assume that the 18$`\mu `$m band is also widely present in the region, as witnessed by the single SWS spectrum (Fig. 4) and by the generally rising long wavelength end of ISOCAM spectra; the two spectra shown, Figs. 3 and 10, are quite representative of the steeply rising continuum longward of 15$`\mu `$m. ### 4.1 The interstellar silicate and H<sub>2</sub> emission around $`\theta ^2`$ Ori A The case of $`\theta ^2`$ Ori A is particularly interesting because the geometry is simple and therefore allows quantitative calculations. Moreover, the thermal radio continuum, the recombination lines and the fine–structure lines are faint in the neighbourhood of this star (Felli et al. Felli (1993); Pogge et al. Pogge (1992); Marconi et al. Marconi (1998), and the present paper, Fig. 2). $`\theta ^2`$ Ori A is classified as an O9.5Vpe star and shows emission lines (see e.g. Weaver & Torres–Dodgen Weaver (1997)). It is a spectroscopic binary and an X-ray source. There is little gas left around the star and the observed silicate dust (Fig. 8) is almost all that is visible of the interstellar material left over after its formation. Indeed, O stars are not known to produce dust in their winds, which are probably much too hot, so that the silicates we see here must be of interstellar origin.
The mid–IR continuum observed towards $`\theta ^2`$ Ori A can be accounted for by combining emission of warm silicate and carbon grains (see Fig. 10). The model continuum was obtained in the same way as for the SWS observation (see Fig. 4) and with the same assumptions. The grain temperatures are consistent with the heating of interstellar grains by the strong radiation field of the star. As discussed above, the band near 9.6 $`\mu `$m (Fig. 6 bottom and Fig. 8) may be due to crystalline silicates; any contribution of the S(3) H<sub>2</sub> line to this band is minor. Another band at 14 $`\mu `$m (see Figs. 4 and 10) might also be due to crystalline silicates. Amongst the crystalline silicates whose mid–IR absorption spectra are shown by Jaeger et al. (Jaeger (1998)), synthetic enstatite (a form of pyroxene) might perhaps match the $`\theta ^2`$ Ori A spectrum. The interest in the possible presence of crystalline silicates around this star is that they would almost certainly be of interstellar origin, pre–dating the formation of the star. Observations at longer wavelengths are needed for a definitive check of the existence of crystalline silicates and for confirming their nature. Such observations do not exist in the ISO archives and should be obtained by a future space telescope facility. ## 5 Conclusions We obtained a rather complete view of the infrared emission of the Orion nebula and its interface with the adjacent molecular cloud. The most interesting results are the observation of amorphous, and possibly crystalline, silicates in emission over the entire H ii region and in an extended region around the bright O9.5Vpe star $`\theta ^2`$ Ori A. We have fitted the mid–IR continuum of the H ii region and around $`\theta ^2`$ Ori A with the emission from amorphous silicate and amorphous carbon grains at the equilibrium temperatures predicted for the grains in the given radiation field.
This shows that both types of grains can survive in the harsh conditions of the H ii region. A number of bands (the 9.6$`\mu `$m bump seen in Fig. 6; the excess 14$`\mu `$m emission indicated in Figs. 4 and 10) suggest emission from crystalline silicates (essentially forsterite) in the H ii region. Crystalline silicates may also exist around $`\theta ^2`$ Ori A, but further, longer wavelength observations are required to confirm their presence. Do the observed crystalline silicates result from processing of amorphous silicates in the H ii region or in the environment of $`\theta ^2`$ Ori A? Silicate annealing into a crystalline form requires temperatures of the order of 1000 K for extended periods (Hallenbeck et al. Hallenbeck (1998)). The dust temperatures observed in the H ii region and around $`\theta ^2`$ Ori A are considerably lower than this annealing temperature. One might however invoke grain heating following grain–grain collisions in the shock waves that are likely to be present in the H ii region. However, grain fragmentation rather than melting is the more likely outcome of such collisions (Jones et al. JTH (1996)). It is probable that the crystalline silicates observed here were already present in the parent molecular cloud, and probably originate from oxygen–rich red giants. Emission by both amorphous and crystalline silicates has been observed with ISO around evolved stars (Waters et al. Waters (1996); Voors et al. Voors (1998)). The crystalline silicates there must have been produced locally by annealing of amorphous silicates. Gail & Sedlmayr (Gail (1999)) have shown that this is possible, and that both amorphous and crystalline forms can be released into the interstellar medium. However, there is no evidence for absorption by crystalline silicates in the general interstellar medium in front of the deeply embedded objects for which amorphous silicate absorption is very strong (Demyk et al. 1999, Dartois et al. Dartois (1998)). 
Consequently, crystalline silicates represent only a minor fraction compared to amorphous silicates. It would be difficult to detect the emission from a small crystalline component of dust in the diffuse interstellar medium because the dust is too cool (T $`\sim `$ 20 K) to emit strongly in the 15–40$`\mu `$m wavelength region. Observations of H ii regions and bright stars provide the opportunity of observing this emission due to the strong heating of dust. Emission from amorphous and crystalline silicates is seen around young stars (Waelkens et al. Waelkens (1996); Malfait et al. Malfait (1998)) as well as in comets (Crovisier et al. Crovisier (1998)). There are also silicates in meteorites, but their origin is difficult to determine because of secondary processing in the solar system. Crystalline silicates in comets, and perhaps in interplanetary dust particles believed to come from comets (Bradley et al. Bradley (1992)), must be interstellar since the material in comets never reached high temperatures. However, the silicates probably experienced changes during their time in the interstellar medium. It is interesting to note that while very small grains of carbonaceous material exist, there seem to be no very small silicate grains in the interstellar medium (Désert et al. 1986). ###### Acknowledgements. A.P. Jones is grateful to the Société de Secours des Amis des Sciences for funding during the course of this work. We are grateful to P.P. van der Werf for providing us with his map of molecular hydrogen emission. Appendix A: the mid–IR line emission from the Orion nebula We presented in Fig. 2 a map of the studied region in the \[Ne iii\] line at 15.5 $`\mu `$m. Fig. 11 displays the map of the \[Ar ii\] line at 7.0 $`\mu `$m superimposed on the map of the \[Ar iii\] 9.0$`\mu `$m line. These maps illustrate the ionization structure of the Orion nebula.
The spectral resolution of the CVF does not allow a separation of the \[Ar ii\] line at 6.99 $`\mu `$m from the S(5) pure rotation line of H<sub>2</sub> at 6.91 $`\mu `$m. However, the bulk of the H<sub>2</sub> emission comes from deeper in the molecular cloud than that of \[Ar ii\], i.e. more to the south-west (see Fig. 9), and the contamination by the S(5) line is probably minor. The SWS spectrum shown here and that taken towards the bar (Verstraete et al. 1999, in preparation), in which the \[Ar ii\] and the H<sub>2</sub> S(5) line are well separated from each other, show that the H<sub>2</sub> line is a factor 4 or 5 weaker and hence cannot seriously contaminate the \[Ar ii\] map. The emission by the singly–charged ion \[Ar ii\] is concentrated near the ionization front on the inner side of the bar. This is very similar to what is seen in the visible lines of \[N ii\] $`\lambda `$6578 and \[S ii\] $`\lambda `$6731 (Pogge et al. Pogge (1992) Fig. $`1c`$ and $`1d`$). The detailed correspondence between the maps in these three ions is excellent: note that the optical maps are not much affected by extinction. The ionization potentials for the formation of these ions are 15.8, 14.5 and 10.4 eV for Ar ii, N ii, S ii respectively, and are thus not too different from each other. The emission from the doubly–charged ions \[Ne iii\] and \[Ar iii\] shows a very different spatial distribution, with little concentration near the bar but increasing towards the Trapezium. The \[Ne iii\] map (Fig. 2) is very similar to the \[O iii\]$`\lambda `$5007 line map (Pogge et al. Pogge (1992) Fig. $`1e`$), as expected from the similarity of the ionization potentials of \[Ne ii\] and \[O ii\], respectively 41.1 and 35.1 eV. However, the distribution of the \[Ar iii\] line (Fig. 11) is somewhat different, with a trough where the \[Ne iii\] and the \[O iii\] lines exhibit maxima.
Ar iii is ionized to Ar iv at 40.9 eV, almost the same ionization potential as that of Ne ii, so that Ar iv (not observable) should co–exist with Ne iii and Ar iii with Ne ii. A map (not displayed) in the 12.7 $`\mu `$m feature, which is a blend of the 12.7 $`\mu `$m AIB and of the \[Ne ii\] line at 12.8 $`\mu `$m, is indeed qualitatively similar to the \[Ar iii\] line map in the H ii region. It differs in this region from the maps in the other AIBs, showing that it is dominated by the \[Ne ii\] line. As expected, the dereddened distribution of the H$`\alpha `$ line (Pogge et al. Pogge (1992) Fig. $`3b`$), an indicator of density, is intermediate between that of the singly–ionized and doubly–ionized lines. Appendix B: the AIB emission Maps of the emission of the 6.2 and 11.3$`\mu `$m AIBs are shown in Fig. 12. We do not display the distribution of the other AIBs because they are very similar. All the spectra of Figs. 3, 4 and 6 show the classical AIBs at 6.2, 7.7, 8.6, 11.3 and 12.7 $`\mu `$m (in the CAM-CVF data the latter is blended with the \[Ne ii\] line at 12.8 $`\mu `$m). There are fainter bands at 5.2, 5.6, 11.0, 13.5 and 14.2$`\mu `$m visible in the SWS spectrum of Fig. 3: they may be AIBs as well. All the main bands visible in the CVF spectra are strongly concentrated near the bar. Emission is observed everywhere, because of the extension of the PDR behind the Orion nebula and the presence of fainter interfaces to the South–East of the bar. We confirm the general similarity between the distributions of the different AIBs through the Orion bar observed by Bregman et al. (Bregman (1989)). We thus conclude that, although the excitation conditions vary greatly from the Trapezium region towards the South–West of the bar, the mixing of fore– and background material along the line of sight does not allow us to observe spectroscopic changes in the AIB emission features (due e.g. to ionization or dehydrogenation as in M17-SW, Verstraete et al. verstraete (1996)).
Appendix C: Estimates of emission strengths Spectral emission maps have been obtained using one or another of three different methods. The emission from well defined and rather narrow spectral features, viz. AIBs and ions, can be estimated either by numerical integration of the energy within the line and an ad–hoc baseline (method 1), or by a simultaneous fit of Lorentz (Boulanger et al. bbcr (1998)) and/or Gaussian profiles, including a baseline, determined by a least-squares fitting algorithm (method 2). The strength of features not amenable to an analytical expression, like the suspected amorphous silicate emission (see Fig. 6), has been estimated using the following method (method 3). We have constructed an emission template consisting of all the observed emission features, each one arbitrarily normalised to unit peak intensity, see Fig. 13. A least-squares computer code was then used to obtain, for each of the $`32\times 32`$ lines of sight, a set of multiplying coefficients for each feature present in the template plus a global parabolic baseline so as to minimize the distance between the model and the data points. The number of free parameters is then eleven “line intensities” and three polynomial coefficients, for a total of 14 free parameters to be determined from 130 observed spectral points per line of sight. The main drawback of this method is that it does not allow for varying line widths or line centres; however, given the low resolution of ISOCAM’s CVF this is not a serious drawback. We have found that the integrated line emission estimated from methods 2 and 3 agrees to within 20 percent; numerical integration of Lorentzian line strengths, on the other hand, badly underestimates the energy carried in the extended line wings and hence this method has not been used.
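Method 3 reduces to a linear least-squares problem, since the line centres and widths are frozen in the template and only the multiplying coefficients and the parabolic baseline are free. A minimal sketch of such a decomposition (illustrative code, not the code actually used for the maps):

```python
import numpy as np

def fit_templates(wave, spectrum, templates):
    """Decompose `spectrum` into fixed-shape `templates` (each normalised to
    unit peak intensity) plus a parabolic baseline, by linear least squares.
    Returns (template amplitudes, baseline coefficients, best-fit model)."""
    x = (wave - wave.mean()) / np.ptp(wave)        # scaled abscissa for stability
    A = np.column_stack(list(templates) + [np.ones_like(x), x, x**2])
    coeffs, *_ = np.linalg.lstsq(A, spectrum, rcond=None)
    n = len(templates)
    return coeffs[:n], coeffs[n:], A @ coeffs
```

With eleven unit-peak templates this gives exactly the 11 + 3 = 14 free parameters per line of sight quoted above.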
# The geometry of atmospheric neutrino production ## 1 Introduction Atmospheric neutrinos are produced in the hadronic showers generated by cosmic rays in the Earth’s atmosphere. Absorption in the Earth is negligible in the entire relevant energy range, and therefore a detector located near the surface of the Earth will receive a neutrino flux from all directions. In the presence of neutrino oscillations the observed rates of $`\nu `$ interactions will be modified, and the angular distributions distorted. These effects have in fact been measured by Super–Kamiokande and other atmospheric neutrino detectors and give clear evidence for the existence of flavor transitions. The evidence for oscillations is robust, and does not depend on a detailed calculation of the expected neutrino fluxes with and without neutrino oscillations. The strongest evidence comes in fact from the detection of an up–down asymmetry in the angular distribution of muon events and of a small ratio for the rates of $`\mu `$–like and $`e`$–like events. It is possible to predict with very simple considerations that in the absence of oscillations the $`\nu `$ fluxes are approximately up–down symmetric, and that the fluxes of electron and muon neutrinos are strictly related because they are produced in the chain decay of the same parent mesons. In order to extract the oscillation parameters from the data it is however important to have detailed predictions for the expected intensity and angular distributions of the no–oscillation fluxes. In this work I will discuss the angular distributions of the atmospheric neutrino fluxes. These distributions are determined by the presence of neutrino oscillations (if they exist) and a combination of the three most important effects: 1. Geomagnetic effects on the primary cosmic rays.
Low rigidity particles cannot reach the vicinity of the Earth, and the effect depends on the point of the Earth where cosmic rays arrive (being stronger near the geomagnetic equator), and on their direction (local zenith and azimuth angles). 2. The zenith angle dependence of the neutrino yields. Inclined showers produce more neutrinos than vertical ones because the decay of charged mesons and muons is then more probable. 3. The spherical geometry of the neutrino source volume. This results in an enhancement of the neutrino fluxes from horizontal directions and a suppression for the (up–going and down–going) vertical directions. The effect is exactly up–down symmetric, and is important for sub–GeV neutrinos. Additional smaller effects are due to the bending of charged particles during shower development, the presence of mountains, and the existence of different air density profiles over different geographical regions. The aim of this paper is to discuss in particular the effect of the spherical geometry of the neutrino source region. This effect has been recognized only recently, is not included in the calculations currently in use in the analysis of experimental data and is still the object of some controversy. The work is organized as follows: the next section discusses the one–dimensional (1D) approximation used in the first calculations of the atmospheric neutrino fluxes and why the approximation was considered necessary; sections 3 and 4 present, for completeness, very brief qualitative discussions of the geomagnetic effects and of the angular dependence of the neutrino yields; section 5 discusses the geometrical effects related to the shape of the neutrino source volume; section 6 presents the results of a detailed calculation; section 7 gives a summary.
## 2 The one–dimensional approximation Any cosmic ray shower, produced by a primary particle interacting at an arbitrary point of the atmosphere with an arbitrary direction, can result in one (or more) neutrinos with trajectories that intersect the detector under consideration. Of course only a very small fraction of the cosmic ray showers produce neutrinos that pass inside (or in the vicinity of) a detector, and therefore a full montecarlo calculation of the neutrino fluxes appears extremely inefficient. It is because of this difficulty that the first generation calculations of the atmospheric neutrino fluxes have been performed in what is called the one–dimensional approximation. In this approximation one considers only a very small subset of all possible cosmic ray primaries, those whose trajectory, when continued as a straight line beyond the interaction point, intersects the detector. The neutrinos produced in the showers of these primaries are considered as collinear to the parent primary, and also have trajectories that intersect the detector; therefore all $`\nu `$’s generated in a montecarlo calculation can be collected and analysed to estimate the atmospheric neutrino fluxes.
More explicitly, in a 1–D calculation the flux of neutrinos with flavor $`\alpha `$, energy $`E_\nu `$ and direction $`\mathrm{\Omega }`$ observed by a detector located at the position $`\stackrel{}{x}_d`$ is calculated as: $$\varphi _{\nu _\alpha }(E_\nu ,\mathrm{\Omega }_\nu ,\stackrel{}{x}_d)=\underset{A}{\sum }\int 𝑑E_0\varphi _A[E_0,\mathrm{\Omega }_0\left(\mathrm{\Omega }_\nu \right),\stackrel{}{x}_0(\mathrm{\Omega }_\nu ,\stackrel{}{x}_d)]\frac{dn_{A\nu _\alpha }}{dE_\nu }(E_\nu ;E_0,\mathrm{cos}\theta _0)$$ (1) where the sum is over all primary cosmic ray nuclear species; the quantity $`dn_{A\nu _\alpha }/dE_\nu `$ is the differential yield of neutrinos of flavor $`\alpha `$ from a primary of type $`A`$, energy $`E_0`$ and zenith angle $`\theta _0`$; and $`\varphi _A(E_0,\mathrm{\Omega }_0,\stackrel{}{x}_0)`$ is the flux of primary cosmic rays of type $`A`$, energy $`E_0`$ and direction $`\mathrm{\Omega }_0`$ that reach the Earth at point $`\stackrel{}{x}_0`$. The dependences on the direction and position for the primary flux are the consequence of geomagnetic effects. Since neutrinos travel along straight lines, the interaction position of the primary flux is determined to a good approximation by the neutrino direction $`\mathrm{\Omega }_\nu `$. All down–going neutrinos ($`\mathrm{cos}\theta _\nu >0`$) are produced in showers in the general vicinity of the detector: $`\stackrel{}{x}_0\simeq \stackrel{}{x}_d`$, while for up–going trajectories ($`\mathrm{cos}\theta _\nu <0`$) the primary interaction point is in the vicinity of the point where the neutrino enters the Earth. The crucial approximation made in the 1–D approach is to consider also the direction of the primary $`\mathrm{\Omega }_0`$ as determined by $`\mathrm{\Omega }_\nu `$.
For down–going neutrinos this simply means $`\mathrm{\Omega }_0\simeq \mathrm{\Omega }_\nu `$, and for up–going neutrinos, expressing $`\mathrm{\Omega }_0`$ as a zenith and azimuth angle with respect to the local vertical, this means: $`\mathrm{\Omega }_0(\mathrm{cos}\theta _0,\phi _0)=(\mathrm{cos}\theta _\nu ,\phi _\nu )`$. Equation (1) simplifies enormously the calculation of the atmospheric neutrino fluxes because now only a very small (indeed infinitesimal) fraction of the cosmic ray showers has to be studied. About half of these showers have trajectories that reach the Earth near the detector point (arriving from all directions in a $`2\pi `$ solid angle), the other half is distributed over the entire surface of the Earth, but a unique direction has to be considered for each point (see fig. 1). In equation (1) the problem of the calculation of the neutrino fluxes is “factorized” in three parts: 1. Determination of the primary cosmic ray fluxes. These fluxes have of course to be measured experimentally. The primary fluxes also have a time dependence due to the solar modulation. The effect decreases with momentum, and becomes negligible for rigidities $`p/q\stackrel{>}{_{}}100`$ GV. 2. Calculation of the geomagnetic effects. The primary cosmic ray fluxes reaching point $`\stackrel{}{x}_0`$ from the direction $`\mathrm{\Omega }_0`$ can be calculated from the isotropic flux unperturbed by geomagnetic effects (that is measured in the Earth’s magnetic polar regions) with a knowledge of the structure of the geomagnetic field, testing if the trajectories are allowed or forbidden. 3. Calculation of the neutrino yields. The number of neutrinos (and their energy spectrum) produced by a primary of energy $`E_0`$ and zenith angle $`\theta _0`$ can be calculated studying the average development of hadronic showers in the atmosphere.
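The factorization expressed by eq. (1) can be made concrete with a toy numerical sketch. The power-law primary spectrum and the yield function below are placeholders (all assumptions), standing in for the measured fluxes and the shower calculation of steps 1 and 3; the geomagnetic cutoff of step 2 is ignored:

```python
import numpy as np

def primary_flux(E0):
    """Toy isotropic primary spectrum ~ E^-2.7 (illustrative normalisation)."""
    return 1.8e4 * E0**-2.7

def toy_yield(Enu, E0, cos_theta):
    """Toy differential yield dn/dE_nu per primary: flat in x = Enu/E0 up to
    x = 0.1, with a mild enhancement for inclined showers (all assumptions)."""
    x = Enu / E0
    return (2.0 - abs(cos_theta)) / (0.1 * E0) if x < 0.1 else 0.0

def flux_1d(Enu, cos_theta, E0_max=1.0e4, n=2000):
    """Eq. (1) in the 1-D approximation: integrate primary flux times yield
    over the primary energy, at fixed direction."""
    E0 = np.logspace(np.log10(10.0 * Enu), np.log10(E0_max), n)
    f = primary_flux(E0) * np.array([toy_yield(Enu, e, cos_theta) for e in E0])
    return np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(E0))
```

The toy factor $`(2-|\mathrm{cos}\theta |)`$ in the yield makes the horizontal flux come out exactly twice the vertical one, mimicking qualitatively the zenith-angle enhancement discussed in section 4.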
A calculation using a three–dimensional method can still be considered as divided into the three parts discussed above, and indeed the first two steps are identical. The calculation is however much more complex because it is no longer sufficient to consider for the neutrino yield simply the inclusive energy distribution of the neutrinos produced in a shower; the angular distributions of the neutrinos (and their strong correlation with the energy) are also important and have to be calculated. Moreover the simple integral in (1), performed over the primary energy $`E_0`$ keeping the direction fixed, now has to be replaced by a more complex convolution that involves not only the energy but also the direction of the primary particle, since neutrinos with a given trajectory can be produced in showers with a non–collinear axis. ## 3 Geomagnetic effects An assumption common to all atmospheric neutrino calculations is that the fluxes of cosmic rays at one astronomical unit of distance from the sun, when they are not perturbed by the near presence of the Earth, are isotropic, and can be simply described by their energy dependence $`\varphi _A\left(E_0\right)`$. The fluxes reaching the surface of the Earth are however affected by the geomagnetic field and therefore are not isotropic, and have different intensities in different locations. For example it is intuitively clear that low rigidity particles can reach the Earth only traveling parallel to the magnetic field lines arriving near the magnetic poles, while high rigidity particles can reach all points on the Earth’s surface from all possible directions. Given a map of the magnetic field around the Earth it is a straightforward exercise to compute numerically if a given primary particle three–momentum and position correspond to an allowed or a forbidden trajectory.
It is sufficient to integrate the equations of motion of the charged particle in the geomagnetic field, and see if the past trajectory of the particle intersects the Earth’s surface or remains confined to a finite distance from the Earth (forbidden trajectory), or originates from large distances (allowed trajectory). For a qualitative understanding it can be useful to consider the historically important approximation of describing the geomagnetic field as exactly dipolar. In a dipolar field the problem can be solved analytically. All positively charged particles with rigidity $`R>R_S^+`$ are allowed and the trajectories of all particles with $`R<R_S^+`$ are forbidden because they remain confined to a finite distance from the dipole center <sup>1</sup><sup>1</sup>1In this solution it is assumed that the field fills the entire space. A fraction of the allowed trajectories actually have a segment “inside” the Earth (with $`r<R_{}`$) and therefore should be considered forbidden also in a dipole field.. The quantity $`R_S^\pm (\stackrel{}{x},\widehat{n})`$ is the Störmer rigidity cutoff: $$R_S^+(r,\lambda _M,\theta ,\phi )=\left(\frac{M}{2r^2}\right)\left\{\frac{\mathrm{cos}^4\lambda _M}{\left[1+\left(1-\mathrm{cos}^3\lambda _M\mathrm{sin}\theta \mathrm{sin}\phi \right)^{1/2}\right]^2}\right\}$$ (2) where we have made use of the cylindrical symmetry of the problem, $`M`$ is the magnetic dipole moment of the field, $`r`$ is the distance from the dipole center, $`\lambda _M`$ the magnetic latitude, $`\theta `$ the zenith angle and $`\phi `$ an azimuth angle measured counterclockwise from magnetic north. For negatively charged particles the cutoff is obtained with the reflection $`\phi \rightarrow \phi +\pi `$, that is exchanging east and west. The qualitative features that are important for our discussion are the following: 1. For a fixed direction the cutoff rigidity grows monotonically from a vanishing value at the magnetic pole to a maximum value at the magnetic equator. 2.
The cutoffs for particles traveling toward magnetic west are higher than those for particles traveling toward magnetic east. Note that geomagnetic effects are the only mechanism discussed in this work that can generate a non–flatness in the azimuthal angle distributions of atmospheric neutrinos. 3. The highest cutoff corresponds to westward–going, horizontal particles reaching the surface of the Earth at the magnetic equator ($`\phi =90^{}`$, $`\theta =90^{}`$, $`\lambda _M=0^{}`$). The maximum rigidity cutoff is approximately 60 GV. All geomagnetic effects vanish for particles above this rigidity value. ## 4 The neutrino yields The “neutrino yield” is the average number of neutrinos of a certain flavor produced by a primary cosmic ray particle. It depends on the primary particle type, energy, and zenith angle. There are also weaker dependences on the azimuth angle of the shower and on its geographical location, due to the effects of the geomagnetic field on the shower development. In the 1–D approximation the angles of the secondary particles with respect to the primary directions are neglected (or integrated over), and therefore the neutrino yields can be described simply as a set of functions that give the energy distribution of the neutrinos (of different flavors) produced by primaries of given type, energy, and zenith angle. The one–dimensional neutrino yields have been obtained with the integration (numerical or even analytical in simplified treatments) of a set of differential equations that describe the average development of hadronic showers in the atmosphere. The most accurate calculations of the yields are however performed with montecarlo techniques. A large number of showers (for primary particles of different type, energy and zenith angle) is generated, and the number, flavor and energy of all produced neutrinos is recorded to compute numerically the neutrino yields as a function of primary energy, direction and particle type.
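The Störmer cutoff of eq. (2) in the previous section is simple to evaluate numerically. In the sketch below the normalisation $`C=M/(2r^2)`$ at the Earth's surface is set to 59.6 GV, an assumed value chosen to reproduce the approximately 60 GV maximum cutoff quoted above:

```python
import math

def stormer_cutoff(lam_M, theta, phi, r_over_RE=1.0, C=59.6):
    """Stormer rigidity cutoff (eq. 2) for positive particles, in GV.
    lam_M: magnetic latitude; theta: zenith angle; phi: azimuth measured
    counterclockwise from magnetic north (all in radians). C = M/(2 R_E^2)
    is an assumed normalisation. For negative particles use phi -> phi + pi."""
    num = math.cos(lam_M)**4
    den = (1.0 + math.sqrt(1.0 - math.cos(lam_M)**3
                           * math.sin(theta) * math.sin(phi)))**2
    return C / r_over_RE**2 * num / den
```

The qualitative features listed above follow directly: the cutoff vanishes at the magnetic poles, west ($`\phi =90^{}`$) beats east, and the maximum, $`C`$ of about 60 GV, is reached for horizontal westward particles at the magnetic equator, with a vertical equatorial cutoff of $`C/4`$ of about 14.9 GV.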
The authors of the Monte Carlo calculations of the yields have also usually made the approximation of considering all secondary particles in the shower as collinear with the primary particle. This can be obtained by “rotating” all final state particles so that their momentum is parallel to the projectile (or parent) particle, and neglecting multiple scattering and bending in the geomagnetic field. Strictly speaking this rotation of course implies a small (negligible) deviation from exact conservation of the longitudinal momentum. In the procedure energy is exactly conserved. The neutrino yields have an important dependence on the zenith angle of the primary particle. The yields grow monotonically when the zenith angle of the primary particle changes from the vertical to the horizontal direction. The reasons for this growth are simple to understand, and originate from two effects. The first effect is simply that an inclined shower develops in air for a longer distance before hitting the ground; therefore the muons produced in high zenith angle showers have more time to decay in air and produce more neutrinos. Essentially all muons that hit the ground rapidly lose all their energy in ionization and radiation processes and decay at rest, producing very soft neutrinos (with $`E_\nu <m_\mu /2`$) that can be neglected because they are below the thresholds of the existing detectors. A second effect, also contributing to an enhancement of neutrino production for horizontal showers, is due to the fact that the air density is not constant but decreases (approximately exponentially in the stratosphere) with increasing altitude. Therefore the inclination of a trajectory (assuming the same starting point) determines the column density $`X`$ (g cm<sup>-2</sup>) that corresponds to a fixed length $`L`$ (cm). The column density is largest for vertically down–going particles (zenith angle $`\theta =0`$) and decreases monotonically with increasing zenith angle.
The relation between $`L`$ and $`X`$ determines the relative probability of decay or interaction for charged pions (or kaons), and the energy loss of a muon before decay. A smaller $`X`$ (for a fixed $`L`$) implies that the interaction of weakly decaying mesons is suppressed and their decay enhanced; it also means that the muons will decay with a higher energy. For inclined showers the first effect results in a higher number of neutrinos, and the second one in a slightly harder spectrum of neutrinos. The ratio between the vertical and horizontal yields is a function of the neutrino energy. For low $`E_\nu `$ the ratio is close to unity, and there is very little enhancement for the horizontal directions. This can be easily understood qualitatively. Low energy neutrinos are produced in the decay of low energy pions and low energy muons. Because of relativistic effects the decay length of unstable particles is proportional to the particle momentum; for low energy particles the decay is so rapid that the decay probability is unity, and the energy loss before decay is negligible independently of the direction of the particle. In these circumstances there is no enhancement for the horizontal directions, and the yield is isotropic. With increasing energy the decay length increases, and the effects outlined above start to be significant. As an illustration, the muon decay length is $$L_\mu =c\tau _\mu \frac{p_\mu }{m_\mu }=6.23\,p_\mu \left(\mathrm{GeV}\right)\mathrm{km}$$ (3) The muons are produced at an average altitude of $`17.5`$ km ($`30`$ km) for vertical (horizontal) particles, with only a weak energy dependence. For $`p_\mu <1`$ GeV, most muons decay independently of their direction. For $`p_\mu =3`$ GeV (10 GeV) the decay of the vertical muons is suppressed by a factor $`2`$ ($`4`$) while the decay probability of horizontal particles remains approximately unity.
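The numbers quoted above can be reproduced with a short numerical sketch of equation (3). This is an illustration under the simplifying assumptions of the text (a straight path from the average production altitude, no energy loss before decay); the function names are ours.

```python
import math

# Muon constants: c*tau ~ 0.6586 km, m_mu ~ 0.10566 GeV (PDG values).
C_TAU_KM = 0.6586
M_MU_GEV = 0.10566

def muon_decay_length_km(p_gev):
    """Decay length L_mu = c*tau*p/m of equation (3); about 6.23 km per GeV."""
    return C_TAU_KM * p_gev / M_MU_GEV

def decay_probability(p_gev, path_km):
    """Probability that a muon decays over a straight path of given length,
    neglecting energy loss (a rough approximation)."""
    return 1.0 - math.exp(-path_km / muon_decay_length_km(p_gev))

if __name__ == "__main__":
    print(muon_decay_length_km(1.0))   # ~6.23 km
    # Vertical muons produced at ~17.5 km altitude:
    for p in (1.0, 3.0, 10.0):
        print(p, decay_probability(p, 17.5))
```

Because energy loss is neglected, the suppression factors obtained this way are only indicative of the trend with momentum.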
These effects are carefully included in the existing calculations, which in fact give neutrino zenith angle distributions that are approximately flat for low $`E_\nu `$ (with distortions that reflect only geomagnetic effects and the angular distribution of the primary cosmic ray flux), and develop a stronger and stronger enhancement toward the horizontal with increasing energy. ## 5 Spherical geometry of the source Most atmospheric neutrinos are produced in a spherical shell of air at an altitude between 10 and 40 km. The approximately spherical geometry of the source region has some very important effects on the angular distribution of the neutrinos and results in an enhancement of the neutrino flux from the horizontal directions. This enhancement has been overlooked in all calculations of the atmospheric neutrino fluxes before the work of Battistoni et al. . To illustrate the nature of the new effect, let us consider a situation where the geomagnetic effects are absent (and therefore the flux of primary cosmic rays is exactly isotropic), and the zenith angle dependence of the neutrino yield is negligible, that is, the number of neutrinos produced by a primary particle does not depend on the inclination of the shower in the atmosphere (as is the case for low neutrino energy). In this situation a one–dimensional calculation predicts an isotropic neutrino flux. This result is incorrect. A realistic (three–dimensional) calculation results in an angular distribution of the neutrino flux that is exactly symmetric under up–down reflections and under rotations around the vertical axis, but that is not isotropic and exhibits an enhancement for the horizontal directions. The enhancement for the horizontal directions depends on the neutrino energy and becomes more and more marked with decreasing energy.
The origin of this result can appear surprising, and for “pedagogical reasons” it can be instructive to consider two simple problems that have clear analogies with the emission of neutrinos in the atmosphere. In problem A an observer is located inside a thin spherical shell of stars of radius $`R_s`$; in problem B the observer is inside a spherical cavity in a black body with temperature $`T`$. In both cases we want to compute the angular distribution of the photon flux measured by the observer. ### 5.1 Problem A: a shell of stars (isotropic emission) Let us consider an observer inside a thin shell composed of isotropically emitting stars (see fig. 2). Let us assume that $`L_0`$ (erg/s) is the average luminosity of the stars, $`n`$ is the surface number density (cm<sup>-2</sup>) of stars in the shell, and $`R_A`$ (with $`R_A<R_s`$) the distance of the observer from the center of the star shell. The energy flux per unit solid angle $`dF\left(\mathrm{\Omega }\right)/d\mathrm{\Omega }`$ (erg/(cm<sup>2</sup> s sr)) measured by the observer can be calculated as: $$\left(\frac{dF\left(\mathrm{\Omega }\right)}{d\mathrm{\Omega }}\right)_{\mathrm{iso}}=n\frac{L_0}{4\pi }\frac{1}{\mathrm{cos}\theta _e}=n\frac{L_0}{4\pi }\left[1-\left(\frac{R_A}{R_s}\right)^2\left(1-\mathrm{cos}^2\theta \right)\right]^{-\frac{1}{2}}$$ (4) where $`\theta `$ (see fig. 2) is a polar (“zenith”) angle around the axis that passes through the center of the spherical shell and the observer. The solution respects the cylindrical symmetry of the problem (it does not depend on the azimuthal angle $`\phi `$) and is up–down symmetric (it remains identical after a reflection $`\theta \to \pi -\theta `$), but, excluding the special case of an observer at the center of the shell, it is not spherically symmetric, and is peaked for horizontal directions ($`\mathrm{cos}\theta =0`$).
When the observer is close to the emitting shell (that is, when $`R_A`$ approaches $`R_s`$) the peaking is very strong, and the observer sees the shell as a narrow bright disk. As a numerical example: for $`R_A\simeq R_\oplus =6371`$ km and $`R_s=R_\oplus +15`$ km, formula (4) predicts a horizontal flux 14.6 times more intense than the vertical flux (see fig. 3). It is elementary to deduce equation (4). Only stars in the surface element $`dA`$ subtended by the solid angle $`d\mathrm{\Omega }`$ contribute to the energy flux from that solid angle interval, with a total of $`ndA`$ stars (see fig. 2); each star contributes an energy flux $`L_0/\left(4\pi \ell ^2\left(\theta \right)\right)`$, where $`\ell \left(\theta \right)`$ is the distance between the observer and the sources in the direction $`\theta `$. Therefore: $$\left(dF\right)_{\mathrm{iso}}=ndA\frac{L_0}{4\pi \ell ^2}=n\frac{d\mathrm{\Omega }\,\ell ^2}{\mathrm{cos}\theta _e}\frac{L_0}{4\pi \ell ^2}$$ (5) In the second equality we have written explicitly the expression for the area of the element of shell seen in the solid angle $`d\mathrm{\Omega }`$: $`dA=d\mathrm{\Omega }\,\ell ^2/\mathrm{cos}\theta _e`$. The area of the shell element obviously scales as $`\ell ^2`$, and this factor cancels exactly the factor $`\ell ^{-2}`$ that takes into account the reduction with distance of the apparent luminosity of the stars; however, the expression for $`dA`$ also contains a geometrical term $`\left(\mathrm{cos}\theta _e\right)^{-1}`$ that takes into account the orientation of the surface element of the shell with respect to the line of sight. The “emission” angle $`\theta _e`$ is the angle between the normal to the source surface (toward the center of the shell) and the line of sight from the source to the observer.
From elementary geometry one obtains: $$\mathrm{cos}\theta _e=\sqrt{1-\left(\frac{R_A}{R_s}\right)^2\left(1-\mathrm{cos}^2\theta \right)}$$ (6) It is this factor that is responsible for the strong enhancement of the flux from the horizontal direction. It is interesting to note that the up–going and down–going vertical fluxes ($`\mathrm{cos}\theta =\pm 1`$) are independent of the position of the observer inside the shell, while the horizontal flux depends on the observer position, and grows when the observer approaches the emitting spherical shell. It follows that the total (angle integrated) flux is a function of the observer position: $$F_{\mathrm{tot}}^{\mathrm{iso}}=\int _{\left[4\pi \right]}𝑑\mathrm{\Omega }\left(\frac{dF\left(\mathrm{\Omega }\right)}{d\mathrm{\Omega }}\right)_{\mathrm{iso}}=nL_0\frac{1}{r}\mathrm{sinh}^{-1}\left[\frac{r}{\sqrt{1-r^2}}\right]$$ (7) (where $`r=R_A/R_s`$) and grows monotonically from a value $`nL_0`$ for an observer at the center of the shell to a divergent value when $`r\to 1`$. The divergence is connected to the fact that an observer exactly on the shell must be (in the approximation we have used of a shell of vanishing thickness) infinitesimally close to a star. ### 5.2 Problem B: a cavity in a blackbody (emission $`\propto \mathrm{cos}\theta _e`$) The solution of problem B is of course well known: we actually live in a (very large) cavity whose walls have a temperature of 2.7 Kelvin. An observer inside a spherical “cavity” in a black body (and in fact in a cavity of arbitrary shape and dimension) observes an isotropic black body spectrum independently of its position. It can be instructive to deduce this well known result from the same considerations used in the previous discussion. The surface of a black body emits energy per unit surface and unit solid angle at a rate: $$\frac{dL_{bb}}{d\mathrm{\Omega }_e}=\sigma T^4\frac{\mathrm{cos}\theta _e}{\pi }$$ (8) where $`\sigma `$ is the Stefan–Boltzmann constant.
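The formulas of problem A are easy to check numerically. This sketch (variable names ours) evaluates equation (4) for the numerical example quoted above (expecting a horizontal/vertical ratio of about 14.6), and verifies the closed form (7) against a direct numerical integration of (4).

```python
import math

def flux_iso(cos_theta, r):
    """dF/dOmega of eq. (4), in units of n*L0/(4*pi); r = R_A/R_s."""
    return (1.0 - r**2 * (1.0 - cos_theta**2)) ** -0.5

def total_flux_closed_form(r):
    """Angle-integrated flux of eq. (7), in units of n*L0."""
    return math.asinh(r / math.sqrt(1.0 - r**2)) / r

def total_flux_numeric(r, n=200000):
    """Direct integration of eq. (4) over solid angle (midpoint rule in cos(theta))."""
    total = 0.0
    for i in range(n):
        c = -1.0 + (i + 0.5) * 2.0 / n
        total += flux_iso(c, r) * (2.0 / n)
    return 0.5 * total   # 2*pi azimuth / (4*pi) -> units of n*L0

if __name__ == "__main__":
    r = 6371.0 / 6386.0                          # R_A = 6371 km, R_s = R_A + 15 km
    print(flux_iso(0.0, r) / flux_iso(1.0, r))   # horizontal/vertical ratio ~ 14.6
    print(total_flux_closed_form(0.9), total_flux_numeric(0.9))
```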
The emission is not isotropic but decreases linearly with the cosine of the emission angle with respect to the normal to the surface element. The energy flux from the direction $`\mathrm{\Omega }`$ for the observer of problem B can be calculated as: $$\left(dF\right)_{bb}=dA\frac{dL_{bb}}{d\mathrm{\Omega }}\frac{1}{\ell ^2}=\left(\frac{d\mathrm{\Omega }\,\ell ^2}{\mathrm{cos}\theta _e}\right)\left(\sigma T^4\frac{\mathrm{cos}\theta _e}{\pi }\right)\left(\frac{1}{\ell ^2}\right)=\frac{\sigma T^4}{\pi }d\mathrm{\Omega }$$ (9) Note that in this case there is a cancellation not only of the factor $`\ell ^2`$, but also of the orientation factor $`\mathrm{cos}\theta _e`$. If the normal to the emitting surface has a large angle with the line of sight, a larger surface ($`\propto \left(\mathrm{cos}\theta _e\right)^{-1}`$) can contribute to the emission, but the large angle emission is suppressed by a factor $`\mathrm{cos}\theta _e`$. Combining the two effects one has an exact cancellation. The demonstration is actually general, and is valid for a cavity of arbitrary shape or dimension and for an arbitrary position of the observer. ### 5.3 Atmospheric neutrinos We can now come back to the problem of atmospheric neutrinos. The source volume of atmospheric neutrinos is also, with very good approximation, a spherical shell with the observers placed inside, very close to the inner radius of the source volume. However an element of atmosphere emits neutrinos with an angular distribution that is neither isotropic nor proportional to $`\mathrm{cos}\theta _e`$; therefore we can expect a neutrino flux at sea level with an angular distribution intermediate between the solution of problem A (isotropic emission in the source and strong horizontal peaking of the observed flux) and that of problem B (emission $`\propto \mathrm{cos}\theta _e`$ in the source and isotropic observed flux). Let us consider an isotropic flux of cosmic rays that impinges on the Earth's atmosphere.
The quantity $`C_0`$, the number of cosmic rays absorbed per unit time by an element of unit area of the atmosphere (units cm<sup>-2</sup> s<sup>-1</sup>), can be calculated as: $$C_0=\int _{\left[2\pi \right]}𝑑\mathrm{\Omega }\varphi _0\mathrm{cos}\theta _0=\pi \varphi _0$$ (10) (the integration is over one hemisphere). Note that the absorption rate of cosmic rays is proportional to $`\mathrm{cos}\theta _0`$, with $`\theta _0`$ the angle between the normal to the surface element and the cosmic ray direction. If the cosmic rays produce an average number $`n_\nu `$ of neutrinos per shower, then each element of the atmosphere is also a source of neutrinos, with an emission rate $`S_\nu =C_0n_\nu `$ (again in units cm<sup>-2</sup> s<sup>-1</sup>). The angular distribution of the neutrino emission from a “patch” of atmosphere is not easy to calculate and depends on the average angle between the neutrino and the primary particle. Knowing this distribution it is possible to calculate the observable flux at sea level, following the same steps outlined above. The results for two limiting cases are easy to obtain. If the neutrinos in a shower are produced collinearly with the primary trajectory ($`\mathrm{\Omega }_\nu =\mathrm{\Omega }_0`$), then the angular distribution of the neutrino emission from a surface element of the atmosphere is $`\propto \mathrm{cos}\theta _\nu `$, because it simply reflects the angular distribution of the absorbed primary flux (see equation (10)). From equation (9) (neglecting the angular dependence of the neutrino yield) it follows that the observed neutrino flux is isotropic. The opposite limiting case is obtained when the neutrinos are emitted quasi isotropically in the primary cosmic ray showers. This is approximately true only for very low neutrino energy ($`E_\nu \lesssim 10`$ MeV). In this case all memory of the direction of the primaries is erased, and the neutrino emission from an element of the atmosphere is isotropic.
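The factor $`\pi `$ in equation (10), and the corresponding zenith angle distribution of the absorbed primaries ($`\propto \mathrm{cos}\theta _0`$, which is used again in section 6), can be verified with a few lines of code; this sketch and its function names are ours.

```python
import math, random

def absorbed_rate_numeric(phi0=1.0, n=100000):
    """C_0 = integral over the downward hemisphere of phi0*cos(theta0),
    eq. (10); the exact result is pi*phi0 (midpoint rule in cos(theta0))."""
    total = 0.0
    for i in range(n):
        c = (i + 0.5) / n              # cos(theta0) in (0, 1)
        total += phi0 * c * (1.0 / n)
    return 2.0 * math.pi * total       # azimuth integration gives 2*pi

def sample_cos_theta0(rng):
    """Zenith angle of an absorbed primary: pdf proportional to
    cos(theta0)*sin(theta0), sampled by cos(theta0) = sqrt(u)."""
    return math.sqrt(rng.random())

if __name__ == "__main__":
    print(absorbed_rate_numeric())     # ~ pi
    rng = random.Random(1)
    mean = sum(sample_cos_theta0(rng) for _ in range(100000)) / 100000
    print(mean)   # ~ 2/3: the absorbed (interacting) tracks are mostly vertical
```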
From equation (4) it follows that the angular distribution of the observed neutrino fluxes is very sharply peaked for the horizontal directions. Considering the altitude distribution of the neutrino production points (which is related to the density profile of the atmosphere and the value of the hadronic cross sections), it can be estimated that for the very low energy neutrinos that are emitted quasi–isotropically, the flux on the horizontal plane is more than an order of magnitude more intense than in the vertical directions (see fig. 3). In general neutrinos are emitted in a cone with an axis that corresponds to the primary particle direction and an opening angle that shrinks with increasing neutrino energy. We can therefore expect that the emission of the neutrinos from the atmosphere is in general intermediate between the two extremes corresponding to isotropic emission and to an emission $`\propto \mathrm{cos}\theta _e`$. The two limiting cases are approached for very low neutrino energy, when the neutrino emission is very poorly correlated with the primary direction, and for high energy neutrinos, when because of the Lorentz boost the neutrinos are emitted quasi–collinearly with the primary particle. These expectations can be verified with more detailed calculations. As an illustration, let us consider the simplified problem where (i) the atmosphere is formed by a single thin layer at a height $`h`$ (the radius of the emitting layer is therefore $`R_s=R_\oplus +h`$); (ii) the primary cosmic ray flux is exactly isotropic; (iii) the neutrino production is independent of the zenith angle of the primary particle; and (iv) the angle $`\theta _{0\nu }`$ between each neutrino and its primary particle has a fixed value $`\theta _{0\nu }=\alpha `$ (and the emission has cylindrical symmetry around the axis defined by the primary particle trajectory). The problem is to compute the neutrino flux received by an observer located at sea level.
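A quantity that will be useful below is the fraction of neutrinos whose straight trajectories actually intersect the Earth. In the isotropic-emission limit this has a simple closed form: seen from a height $`h`$, the Earth subtends a cone of half-angle $`\theta _E`$ with $`\mathrm{sin}\theta _E=R_\oplus /(R_\oplus +h)`$, so the intersecting fraction is $`(1-\mathrm{cos}\theta _E)/2`$. The following sketch (an illustration of ours, not the solution of the full problem) checks the formula with a Monte Carlo ray test.

```python
import math, random

R_E = 6371.0   # Earth radius in km

def hit_fraction_analytic(h):
    """Fraction of isotropically emitted straight rays from height h
    that intersect the Earth's surface: (1 - cos(theta_E))/2,
    with sin(theta_E) = R_E/(R_E + h)."""
    sin_te = R_E / (R_E + h)
    return 0.5 * (1.0 - math.sqrt(1.0 - sin_te**2))

def hit_fraction_mc(h, n=200000, seed=2):
    """Monte Carlo check: sample isotropic directions from a point at
    height h and count ray-sphere intersections."""
    rng = random.Random(seed)
    rp = R_E + h
    hits = 0
    for _ in range(n):
        dz = rng.uniform(-1.0, 1.0)   # isotropic: uniform cosine; azimuth irrelevant
        # ray from (0,0,rp): t^2 + 2*b*t + c = 0, b = rp*dz, c = rp^2 - R_E^2
        b = rp * dz
        c = rp * rp - R_E * R_E
        if b < 0.0 and b * b >= c:
            hits += 1
    return hits / n

if __name__ == "__main__":
    h = 20.0
    print(hit_fraction_analytic(h))   # ~0.46: over half the rays are "lost in space"
    print(hit_fraction_mc(h))
```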
This simplified problem contains essentially all the interesting geometry of atmospheric neutrino production . Defining $`r=R_\oplus /R_s`$, and $`S_\nu =\pi \varphi _0n_\nu `$ the number of neutrinos emitted per unit time and unit surface by the emitting layer, the solution can be written as: $$\varphi _\nu (\mathrm{\Omega }_\nu ,r,\alpha )=\frac{S_\nu }{2\pi }\left[\frac{F_\alpha \left(y\right)}{y}\right]_{y=\sqrt{1-r^2\left(1-\mathrm{cos}^2\theta _\nu \right)}}$$ (11) with (for $`\alpha <90^\circ `$): $$F_\alpha (y)=\{\begin{array}{cc}0\hfill & \text{for }y<-\mathrm{sin}\alpha \text{,}\hfill \\ & \\ \mathrm{cos}\alpha \,y+\frac{2}{\pi }\sqrt{1-\mathrm{cos}^2\alpha -y^2}-\frac{2}{\pi }\mathrm{cos}\alpha \,y\,\mathrm{tan}^{-1}\left[\frac{\mathrm{cos}\alpha \,y\,\sqrt{1-\mathrm{cos}^2\alpha -y^2}}{\mathrm{cos}^2\alpha +y^2-1}\right]\hfill & \text{for }|y|<\mathrm{sin}\alpha \text{ ,}\hfill \\ & \\ 2\,\mathrm{cos}\alpha \,y\hfill & \text{for }y>\mathrm{sin}\alpha \text{.}\hfill \end{array}$$ (12) For $`\alpha >90^\circ `$ one has to substitute $`F_\alpha \left(y\right)=F_{\pi -\alpha }\left(-y\right)`$. This solution can be easily checked numerically with a Monte Carlo method, writing a simple program with a few lines of computer code. Some examples of the solution are shown in fig. 4 and fig. 5. For $`\theta _{0\nu }\to 0`$ the observed neutrino flux is isotropic with a value $`\varphi _\nu =\varphi _0n_\nu `$, where $`\varphi _0`$ is the isotropic primary flux and $`n_\nu `$ the average number of neutrinos produced by a primary particle. When the angle $`\theta _{0\nu }`$ grows the observed flux is suppressed in the vertical direction and is enhanced on the horizontal plane. The suppression and enhancement become more important when the angle $`\theta _{0\nu }`$ increases. For example (see fig.
4), if the neutrinos are produced at a height $`h=20`$ km and are emitted with an angle $`\theta _{0\nu }=10^\circ `$, $`30^\circ `$, or $`60^\circ `$ with respect to the primary particle direction, the observed neutrino flux at sea level has a horizontal/vertical ratio of 1.45, 3.25, and 8.5, respectively. It is important to observe that the enhancement for the horizontal directions for a fixed angle $`\theta _{0\nu }`$ depends on the ratio $`r=R_\oplus /R_s`$ (that is, on the altitude of the neutrino production), and is stronger when the altitude decreases. On the contrary, the suppression in the vertical direction does depend on the angle $`\theta _{0\nu }`$ but is independent of the ratio $`R_\oplus /R_s`$ (see equation (12) and fig. 5). An important consequence of this is the fact that the average (angle integrated) neutrino flux observed at sea level depends on the altitude of neutrino production, and grows when the average altitude of production decreases. This can easily be understood qualitatively by observing that for finite $`\theta _{0\nu }`$ a fraction of the produced neutrinos will “miss” the Earth, because they are emitted on trajectories that do not intersect its surface. The fraction of neutrinos “lost in space” will decrease if the production points are closer to the surface. It is less intuitive to note that for large $`\theta _{0\nu }`$ the flux of neutrinos in a full 3–D calculation can be higher than what is obtained in a collinear approximation. This can be checked by integrating equation (11) over angles, and it is possible because the enhancement on the horizontal plane can be stronger than the suppression on the vertical. This does not represent a violation of “unitarity”, since it can happen only for a small range of radii just below the source region in the atmosphere, while in most of the volume inside the Earth, for larger $`\theta _{0\nu }`$, the flux is suppressed. From fig. 3 and fig.
5 one can see that the height of neutrino production is very important in determining the enhancement for the horizontal directions. To compute the geometrical effects for atmospheric neutrinos it is not possible to approximate the emission volume as a thin surface. The results for a thick shell can of course be obtained by integrating over a continuous distribution of thin shells with different radii. The assumption of emission from an altitude of $`15`$–20 km can also give a reasonable first order approximation. In summary, this “geometrical effect” can be very important for low neutrino energy. In a realistic calculation it must of course be combined with the other effects, the geomagnetic effects and the zenith angle dependence of the neutrino yields, to determine the angular distribution of atmospheric neutrinos. It has been stated that the enhancement on the horizontal is the result of the inclusion in a 3–D calculation of primary particles with “grazing” trajectories that, if continued, would not intersect the Earth’s surface and are not included in a 1–D calculation. These grazing trajectories give a negligible contribution to the horizontal neutrino fluxes, and are not the source of the enhancement. The large flux from the horizontal directions can be understood as the effect of the production of “horizontal” neutrinos from the more numerous “vertical” showers, after emission with an appropriate angle with respect to the shower axis. Note that an isotropic flux implies that most of the interacting tracks are in fact vertical (the absorption rate is $`\propto \mathrm{cos}\theta _0`$), and therefore the opposite effects of “vertical” showers producing “horizontal” neutrinos and vice versa are of different size and do not cancel each other. Note also that the enhancement for the horizontal directions is linked with a suppression of the vertical fluxes. A second 3–D calculation of the atmospheric neutrino fluxes (Tserkovnyak et al. ) has recently been made public.
In this calculation the enhancement of the neutrino fluxes for the horizontal direction is not present, and the predicted angular distributions of the neutrinos are very similar to those calculated in a 1–D approach. The reasons for the discrepancy with the Battistoni et al. work are not easily understandable from a simple reading of the papers. Also an early attempt at a 3–D calculation by Lee and Koh did not find a flux enhancement for horizontal directions. As we are trying to illustrate in this work, the enhancement of the neutrino fluxes in the horizontal directions is the consequence of simple geometry and must be present in a correctly performed 3–D calculation. The absence of this effect in and is therefore evidence of the existence of errors in the calculation methods used for these predictions. ## 6 A complete three-dimensional calculation To illustrate the points discussed in the previous section with a concrete example, we have performed a detailed calculation of the neutrino fluxes with a three–dimensional method. The method of the calculation is conceptually very simple. The space around the Earth has been divided into two regions: an outer region ($`r>R_\oplus +\mathrm{\Delta }R`$, where $`R_\oplus =6371.2`$ km is the Earth’s average radius and $`\mathrm{\Delta }R=80`$ km) where the matter density is assumed to be vanishingly small and only a magnetic field is present, and an inner region ($`R_\oplus <r<R_\oplus +\mathrm{\Delta }R`$) where particles interact not only with the magnetic field but also with air. The air is modeled as having a spherically symmetric density profile $`\rho \left(r\right)`$. The value of $`\mathrm{\Delta }R`$ was determined by checking that less than one percent of the cosmic rays that graze the Earth with a distance of closest approach equal to $`\mathrm{\Delta }R`$ interact in the residual high altitude atmosphere.
The results of the calculation are independent of $`\mathrm{\Delta }R`$ (if it is sufficiently large), but the calculation becomes inefficient if $`\mathrm{\Delta }R`$ is too large. The fluxes of cosmic rays entering the spherical surface at $`R_s=R_\oplus +\mathrm{\Delta }R`$ were determined using a model for the isotropic flux in the absence of geomagnetic effects (we used the primary flux of the Bartol calculation ), and taking into account the effects of the geomagnetic field (we used the IGRF field model ) in the propagation of the particles. This was obtained by generating a uniform and isotropic flux that enters the spherical surface $`R_s`$, and rejecting the particles that correspond to forbidden trajectories. The primary particles are then propagated inside the inner shell, taking into account the bending in the magnetic field and the interactions with the air nuclei. When an inelastic interaction occurs, a set of final state particles is generated with realistic distributions of multiplicity, flavor, energy and transverse momentum. In this calculation we have used a simple model of the hadronic interactions due to Hillas . A fraction of approximately 3% of the primary cosmic rays that enter the inner shell exit without interacting. The subsequent development of the shower is a standard one. The charged particles are propagated along curved trajectories because of the presence of the magnetic field. When neutrinos are produced, their trajectories (simple straight lines) are studied. The trajectories intersect the surface of the Earth $`n`$ times, with $`n=0`$ or $`n=2`$. If $`n=0`$ the neutrino is discarded. If $`n=2`$ two neutrinos (corresponding to the down–going and up–going trajectories) are “collected” and their position and direction are recorded. Some of the results of our calculation are collected in six figures (from fig. 6 to fig. 11). For illustration (and debugging purposes) we have performed the calculation three times.
* A first calculation (represented by the thick histograms) was performed using a fully 3–D method. This includes the geomagnetic effects on the primary cosmic ray fluxes and, in the shower development, the inclusion of realistic $`p_\perp `$ distributions in hadronic interactions and the correct treatment of the kinematics in particle decays. * A second calculation (represented by thin histograms) was performed to reproduce the 1–D algorithms. The geomagnetic effects are calculated for the primary cosmic ray particles (exactly as in the previous case) but not for the shower development (particles travel along straight lines for $`r<R_\oplus +80`$ km), and all final state particles are collinear with the projectile (or parent) particle. This is achieved by modeling the interactions and particle decays exactly as in the previous case (including therefore transverse momentum), and performing as a last step a rotation of all the 3–momenta of the final state particles so that they become parallel to the projectile (for interactions) or parent (for decays) particle. * A third calculation (represented by dashed histograms) was performed neglecting the geomagnetic effects on the primary flux (therefore considering exactly isotropic primary fluxes) and using the 1–D algorithms outlined in the previous point. Since the atmospheric neutrino fluxes depend on the geographical position of the detector, we have divided the Earth’s surface into five equal area regions according to the geomagnetic latitude: two polar regions, two intermediate latitude regions and an equatorial region, and have collected separately all neutrinos generated in the Monte Carlo calculation that land in the different regions (each neutrino is recorded in two separate regions, once as down–going and once as up–going).
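The "collection" of each neutrino at its two crossings of the Earth's surface reduces to a standard ray-sphere intersection test with $`n=0`$ or $`n=2`$ solutions, as described above. A minimal sketch (our own helper, assuming the production point lies outside the sphere):

```python
import math

def sphere_crossings(p, d, radius):
    """Forward intersections of the ray x(t) = p + t*d (t > 0) with a sphere
    of given radius centered at the origin.  For a production point outside
    the sphere the answer is either 0 or 2 points, corresponding to the
    down-going and up-going crossings of the same straight trajectory."""
    dn = math.sqrt(d[0]**2 + d[1]**2 + d[2]**2)
    d = (d[0] / dn, d[1] / dn, d[2] / dn)        # normalize the direction
    b = sum(pi * di for pi, di in zip(p, d))     # p . d
    c = sum(pi * pi for pi in p) - radius**2     # |p|^2 - radius^2 (> 0 outside)
    disc = b * b - c
    if disc < 0.0 or b >= 0.0:                   # ray misses or points away
        return []
    s = math.sqrt(disc)
    return [tuple(pi + t * di for pi, di in zip(p, d)) for t in (-b - s, -b + s)]

if __name__ == "__main__":
    R = 6371.0
    p = (0.0, 0.0, R + 20.0)   # production point at 20 km altitude
    print(len(sphere_crossings(p, (0.0, 0.0, -1.0), R)))   # 2: down-going vertical
    print(len(sphere_crossings(p, (0.0, 0.0, 1.0), R)))    # 0: points away from Earth
```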
The results in the two (north and south) polar regions and the two intermediate latitude regions are essentially indistinguishable, and therefore only the results for the northern regions are presented here. The six figures (from 6 to 11) with the results of our calculation are organized as follows: three figures are for $`\mu `$–like events and three for $`e`$–like events; in each set the three figures correspond to the three different magnetic latitude regions: polar ($`\mathrm{sin}\lambda _M=[0.6,1]`$, where $`\lambda _M`$ is the magnetic latitude), intermediate ($`\mathrm{sin}\lambda _M=[0.2,0.6]`$) and equatorial ($`\mathrm{sin}\lambda _M=[-0.2,+0.2]`$). In each figure four panels show the zenith angle distributions of the event rates in four different neutrino energy intervals. To compute the rates the $`\nu `$ fluxes have been convolved with the neutrino cross section model described in . Note that the zenith angle is the neutrino one, and no experimental smearing or inefficiency has been included. The motivation for performing three different calculations is that a comparison between the three makes evident the different effects that determine the shape of the zenith angle distributions of the neutrino fluxes. In fact: 1. In calculation (iii) (1–D approximation with no geomagnetic effects) the only source of a non–flatness in the zenith angle dependence of the event rates is the neutrino yields. 2. In calculation (ii) (1–D approximation) the zenith angle distributions reflect both the geomagnetic effects on the primary cosmic rays and the zenith angle dependence of the neutrino yields. 3. Finally, for the full three–dimensional calculation (i) the zenith angle distributions are determined by a combination of all three effects: the geomagnetic effects, the neutrino yields and the spherical geometry of the source.
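The five bands above have equal areas because the area of a latitude band on a sphere is proportional to $`\mathrm{\Delta }(\mathrm{sin}\lambda _M)`$: each band spans 0.4 out of the full range of 2. A small sketch of the classification used here (function name ours):

```python
def magnetic_region(sin_lat):
    """Classify a point on the Earth's surface by geomagnetic latitude into
    the five equal-area bands used in the text, with boundaries at
    sin(lambda_M) = +-0.2 and +-0.6.  Each band covers 20% of the sphere,
    since band area is proportional to Delta(sin lambda_M)."""
    if sin_lat > 0.6:
        return "north polar"
    if sin_lat > 0.2:
        return "north intermediate"
    if sin_lat >= -0.2:
        return "equatorial"
    if sin_lat >= -0.6:
        return "south intermediate"
    return "south polar"

if __name__ == "__main__":
    print(magnetic_region(0.9))    # north polar
    print(magnetic_region(0.0))    # equatorial
    print(0.4 / 2.0)               # area fraction of each band: 0.2
```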
Note that in calculation (iii), where geomagnetic fields are neglected and the primary cosmic ray fluxes are considered as isotropic, the neutrino event rates are independent of the detector position; therefore the results (described by the dashed histograms) are equal (within the statistical errors of the Monte Carlo calculations) for the same event type ($`\mu `$ or $`e`$) and same energy range in each of the three regions that correspond to different magnetic latitude. For the other two calculations the neutrino event rates do depend on the detector position, and therefore the results are different in the three magnetic latitude regions. Let us consider first the zenith angle distributions of the $`\nu `$ fluxes in calculation (iii) (1–D with no geomagnetic effects, represented by the dashed histograms in fig. 6–11). For $`E_\nu \lesssim 1`$ GeV, the $`\nu `$ fluxes are essentially isotropic, while with increasing energy the fluxes start to develop an enhancement for horizontal directions. This enhancement is due to the larger neutrino yields in more inclined showers, and becomes more marked when the energy increases, as discussed in section 4. Note that the enhancement caused by the neutrino yields is more marked for $`e`$–like events, reflecting the fact that the effect of the shower inclination is especially important for muon decay, which is the source of essentially all $`e`$–like events and of only a fraction (less than or equal to one half) of the $`\mu `$ events. Note also that this effect is exactly up–down symmetric, and all dashed histograms remain equal (within statistical errors) under a reflection ($`\mathrm{cos}\theta _z\to -\mathrm{cos}\theta _z`$). The zenith angle distributions of calculation (ii) (1–D, represented by thin histograms) very clearly display the effect of the geomagnetic cutoffs.
This is reflected in two effects: the calculated event rates now depend on the detector position, being highest in the polar region and lowest in the equatorial region (the $`y`$ axis is an absolute scale in figs. 6–11), and the up–down symmetry is broken. For example, in the calculations for the polar region (fig. 6 and fig. 9) the up–going ($`\mathrm{cos}\theta _z<0`$) rates are smaller than the down–going ones ($`\mathrm{cos}\theta _z>0`$), reflecting the fact that the geomagnetic cutoffs above the detector (in the magnetic polar region) are lower than average, while up–going trajectories are produced everywhere on the Earth. Conversely, in the calculations for the equatorial region (fig. 8 and fig. 10) the up–going rates are larger than the down–going ones, which are suppressed because of the high geomagnetic cutoffs in the equatorial region. The geomagnetic effects become negligible when the neutrino energy exceeds a few GeV, reflecting the fact that higher energy neutrinos are produced in the showers of higher energy primary particles that are not affected by the geomagnetic effects. Finally, the 3–D calculation (thick histograms) clearly exhibits the contribution of all three sources of zenith angle dependence. A new and remarkable feature is present for $`E_\nu `$ below a few GeV: the sharp enhancement in the horizontal directions, whose origin we have discussed in section 5. It should be noted that the “geometric” enhancement on the horizontal plane in a realistic calculation that includes the geomagnetic effects is stronger than what can be estimated assuming that the primary spectrum is isotropic. This can be easily understood qualitatively by noting that the value of the geomagnetic rigidity cutoff (see eq. 2) for a fixed position and azimuth grows monotonically with zenith angle from a minimum value in the vertical direction to a maximum value on the horizontal plane. 
The enhancement on the horizontal plane, as discussed in the previous section, can be understood as a consequence of the fact that there are more “vertical” showers producing “horizontal” $`\nu `$’s than vice versa. This is true already for an isotropic flux, where the source of the vertical/horizontal asymmetry is the fact that the zenith angle distribution of the showers is proportional to $`\mathrm{cos}\theta _0`$ (with $`\theta _0`$ the shower zenith angle); the asymmetry between vertical and horizontal showers becomes more important when the geomagnetic effects are included, since in this case the horizontal primary flux is also suppressed by a higher rigidity cutoff. For a realistic calculation the size of the “geometric” horizontal enhancement also depends on the position of the detector, reflecting the effect discussed above and the fact that the energy spectrum of the $`\nu `$ flux observed at different locations changes, being softer (harder) at high (low) magnetic latitude, while the angle $`\theta _{0\nu }`$ is correlated with the neutrino energy. The “spherical geometry” enhancement can easily be distinguished from the “neutrino yield” enhancement since it is most important at low neutrino energy, reflecting the higher average neutrino–primary angle, and vanishes as the angle $`\theta _{0\nu }`$ shrinks with increasing $`E_\nu `$. Conversely, the “neutrino yield” enhancement becomes more important as the neutrino energy increases. ## 7 Conclusions In this work we have discussed the different sources that, together with neutrino oscillations, determine the zenith angle distributions of the atmospheric neutrino fluxes. The source volume of the atmospheric neutrinos has the shape of a spherical shell, and a simple consequence of this geometry is the existence of an enhancement of the flux intensity in the horizontal directions. 
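The purely geometric part of this horizontal enhancement can be illustrated with a minimal numeric sketch. The sketch below assumes, as a deliberate simplification not used in the actual calculation, that the production layer is a thin spherical shell at a single altitude emitting isotropically; the relative intensity in a given direction is then just the line-of-sight path length through the shell. The radius and altitude are illustrative round values.

```python
import math

R_E = 6371.0   # Earth radius in km (illustrative round value)
H = 15.0       # assumed mean neutrino production altitude in km

def shell_path_factor(cos_theta_z):
    """Relative line-of-sight path length through a thin spherical
    production shell at altitude H, seen from a detector on the
    surface, normalized to 1 in the vertical direction."""
    sin2 = 1.0 - cos_theta_z ** 2
    return (R_E + H) / math.sqrt((R_E + H) ** 2 - R_E ** 2 * sin2)

print(shell_path_factor(1.0))   # vertical: 1 by construction
print(shell_path_factor(0.0))   # horizontal: roughly 15
```

In this isotropic-emission limit the horizontal/vertical ratio is of order ten; collinear emission (the high-energy limit) would remove it entirely, consistent with the disappearance of the enhancement above a few GeV.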
This enhancement is stronger when the neutrinos are emitted with a large average angle with respect to the primary particle direction, and therefore it is most important at low energy and disappears for $`E_\nu \stackrel{>}{_{}}3`$–10 GeV, when the neutrino emission is approximately collinear with the shower axis. The detection of this enhancement is not easy, since the angular resolution for sub–GeV neutrinos is poor. This is true for a detector such as Super–Kamiokande that can measure only the charged lepton produced in a quasi–elastic scattering (such as $`\nu _\mu n\to \mu ^{-}p`$), since the angle between the neutrino and the final state charged lepton is large, but it is also true for higher resolution detectors (such as Soudan–2 or Icarus) that are capable of detecting the recoil protons. Even in this case the angular resolution will be limited by the fact that the spectator nucleons (which are undetected) can absorb a non-negligible 3–momentum, and therefore the vector sum of the recoil nucleon and charged lepton momenta does not correspond exactly to the initial neutrino momentum. This difficulty in the detection of the enhancement does not imply that the effect is negligible when the atmospheric neutrino data are analysed to extract information about the oscillation parameters. Several effects can be significant. The most important one is a significant modification of the pathlength distributions of the neutrino events. These distributions are controlled by the $`\nu `$ zenith angle distributions, which can be very significantly distorted by the geometrical effects discussed here. Note also that the height distribution of the horizontal neutrinos at a fixed value of the zenith angle is modified with respect to a 1–D calculation. 
For example, in a 3–D calculation most of the sub–GeV horizontal neutrinos are actually produced in less inclined (and deeper developing) showers after emission at large angles with respect to the shower axis, and are therefore produced at lower altitude than in the 1–D case. This effect is second order with respect to the distortion of the zenith angle distribution but must also be taken into account. As an example, even if the detector resolution is so poor that the shape of the neutrino angular distribution is not measurable, in the presence of $`\nu _\mu \to \nu _\tau `$ oscillations the pathlength distribution determines the average (zenith angle integrated) suppression of the $`\mu `$–like event rate for a given set of oscillation parameters. The geometrical effects can also be a non-negligible correction in the calculation of the $`\mu /e`$ ratio. The enhancement in the horizontal directions depends in fact on the average angle $`\theta _{0\nu }`$ between neutrino and primary particle and on the average altitude of the neutrino production. Both these quantities are slightly different for neutrinos produced directly in meson decay or in the chain decay of a muon, and are therefore also slightly different for electron and muon (anti)neutrinos. The absolute observable rates for sub–GeV neutrino events also depend on these effects. The absolute rate is not important in the determination of the oscillation parameters, but the consistency of data and prediction also for the absolute rate is certainly desirable. The improved calculation of the atmospheric $`\nu `$ fluxes is now an active field of research, strongly stimulated by the discovery of flavor transitions for these neutrinos. 
New improved calculations will be important in the detailed interpretation of the existing and future data on atmospheric neutrinos, and in the unbiased determination of the $`\nu `$ oscillation parameters (or of other parameters describing the new physics of the flavor transitions). New measurements of the primary cosmic rays have recently been performed , and have to be included in a new fit of the primary cosmic ray fluxes; at the same time a significant effort has been dedicated to the development of improved modeling of hadronic interactions in cosmic ray showers. Because of these developments in the field we postpone to a future work a quantitative discussion of a new estimate of the oscillation parameters. The newly recognized geometric effects that we have discussed in this work will be important in these future calculations. Acknowledgments Special thanks to Giuseppe Battistoni for several very useful discussions and to Takaaki Kajita for encouragement. I am grateful to Tom Gaisser, Teresa Montaruli and Todor Stanev for reading the manuscript and giving suggestions for improvements. I also acknowledge useful discussions with Ed Kearns, Todd Haines and M. Honda.
# Steady-state selection in driven diffusive systems with open boundaries ## Abstract We investigate the stationary states of one-dimensional driven diffusive systems, coupled to boundary reservoirs with fixed particle densities. We argue that the generic phase diagram is governed by an extremal principle for the macroscopic current, irrespective of the local dynamics. In particular, we predict a minimal current phase for systems with a local minimum in the current–density relation. This phase is explained by a dynamical phenomenon, the branching and coalescence of shocks. Monte Carlo simulations confirm the theoretical scenario. A recurrent problem in the investigation of many-body systems far from equilibrium is posed by the coupling of a driven particle system with locally conserved particle number to external reservoirs with which the system can exchange particles at its boundaries. In the presence of a driving force a particle current will be maintained and hence the system will always remain in a nonequilibrium stationary state characterized by some bulk density and the corresponding particle current. While for periodic boundaries the density is a fixed quantity, the experimentally more relevant scenario of open boundaries naturally leads to the question of steady-state selection, i.e. the question of which stationary bulk density the system will assume as a function of the boundary densities . In the topologically simplest case of quasi one-dimensional systems this is of importance for the understanding of many-body systems in which the dynamic degrees of freedom reduce to effectively one dimension, as e.g. in traffic flow , kinetics of protein synthesis , or diffusion in narrow channels . Within a phenomenological approach this problem was first addressed in general terms by Krug, who postulated a maximal-current principle for the specific case where the density $`\rho ^+`$ at the right boundary, to which particles are driven, is kept at zero. 
The exact solution of the totally asymmetric simple exclusion process (TASEP) for arbitrary left and right boundary densities $`\rho ^{-},\rho ^+`$ confirms and extends the results by Krug. The complete phase diagram comprises boundary-induced phase transitions of first order between a low-density phase and a high-density phase, and a second-order transition from these phases to a maximal current phase . Analysis of the density profile provides insight into the dynamical mechanisms that lead to these phase transitions and shows that the phase diagram is generic for systems with a single maximum in the current-density relation . Experimental evidence for the first-order transition is found in the process of biopolymerization, for which the TASEP with open boundaries was originally invented as a simple model , and more directly also in recent measurements of highway traffic close to on-ramps . Renormalization group studies indicate universality of the second-order phase transition. In this Letter we develop a dynamical approach to generic driven one-component systems with several maxima in the current-density relation, and we show that a novel phase of rather unexpected nature appears: for a certain range of boundary densities the steady state carries the minimal current between two maxima even though both boundary densities support a higher current. We shall refer to this phase as the minimal current phase. More generally, we claim that the current always obeys the extremal principle $`j`$ $`=`$ $`\underset{\rho \in [\rho ^+,\rho ^{-}]}{\mathrm{max}}j(\rho )\text{ for }\rho ^{-}>\rho ^+`$ (1) $`j`$ $`=`$ $`\underset{\rho \in [\rho ^{-},\rho ^+]}{\mathrm{min}}j(\rho )\text{ for }\rho ^{-}<\rho ^+.`$ (2) To understand the origin of this extremal principle we first note that in the absence of detailed balance stationary behavior cannot be understood in terms of a free energy, but has to be derived from the system dynamics. 
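A minimal numeric sketch of how the extremal principle (1), (2) selects the current: the toy current-density relation below is an invented form with two maxima and a local minimum at $`\rho =1/2`$, chosen only for illustration.

```python
import numpy as np

def j(rho):
    # assumed toy current: two maxima near rho = 0.19 and 0.81,
    # local minimum j = 0.05 at rho = 1/2 (invented for illustration)
    u = rho * (1.0 - rho)
    return u * (1.0 - 3.2 * u)

def steady_current(j, rho_minus, rho_plus, n=4001):
    """Current selected by eqs. (1)-(2): maximize j over the density
    interval if rho_minus > rho_plus, otherwise minimize it."""
    lo, hi = sorted((rho_minus, rho_plus))
    rho = np.linspace(lo, hi, n)
    cur = j(rho)
    return cur.max() if rho_minus > rho_plus else cur.min()

print(steady_current(j, 0.3, 0.7))   # selects the local minimum, about 0.05
print(steady_current(j, 0.9, 0.1))   # selects the global maximum
```

For boundary densities 0.3 and 0.7 both boundary currents are about 0.069, yet the selected current is the minimal one, 0.05 — precisely the situation of the minimal current phase discussed below.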
For definiteness consider a driven lattice gas with hard-core repulsion. At $`\rho =1`$ no hopping can take place and hence the current vanishes. Two local maxima can arise as the result of sufficiently strong repulsion between nearest-neighbor particles, as opposed to the pure on-site repulsion of the usual TASEP which leads to a single maximum (Fig. 1). At first sight one might not expect such a small change in the interaction radius of the particles to affect the phase diagram. However, the extremal principle (1), (2) predicts that the full phase diagram (Fig. 2) generically consists of seven distinct phases, including two maximal current phases with bulk densities corresponding to the respective maxima of the current and the minimal current phase in a regime defined by $$j(\rho ^+),j(\rho ^{-})>j(\rho _{min});\rho ^{-}<\rho _{min}<\rho ^+.$$ (3) Here the system organizes itself into a state with bulk density $`\rho _{bulk}`$ corresponding to the local minimum of the current. As in the maximal current phases, no fine-tuning of the boundary densities is required. In order to understand the basic mechanisms which determine the steady state selection we first follow Ref. and consider the collective velocity $$v_c=j^{\prime }(\rho )$$ (4) which is the velocity of the center of mass of a local perturbation in a homogeneous, stationary background of density $`\rho `$ (Fig. 3(a)). A second quantity of interest is the shock velocity $$v_s=\frac{j_2-j_1}{\rho _2-\rho _1}$$ (5) of a ‘domain wall’ between two stationary regions of densities $`\rho _{1,2}`$ (Fig. 3(b)). Notice that both velocities may change sign, depending on the densities. These velocities are sufficient to understand the phase diagram of systems with a single maximum in the current . Further to these notions we need to introduce here the concepts of coalescence and branching of shocks. 
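The two velocities (4) and (5) can be evaluated directly for any current-density relation. A small sketch, again with an assumed toy $`j(\rho )`$ of the two-maxima type (invented form, for illustration only):

```python
def j(rho):
    # assumed toy current with maxima near rho = 0.19, 0.81 and a
    # local minimum at rho = 1/2 (illustration only)
    u = rho * (1.0 - rho)
    return u * (1.0 - 3.2 * u)

def collective_velocity(j, rho, eps=1e-6):
    """v_c = j'(rho), eq. (4): drift velocity of the center of mass of
    a small perturbation, here by central finite difference."""
    return (j(rho + eps) - j(rho - eps)) / (2.0 * eps)

def shock_velocity(j, rho1, rho2):
    """v_s = (j2 - j1)/(rho2 - rho1), eq. (5): velocity of a shock
    between stationary densities rho1 and rho2."""
    return (j(rho2) - j(rho1)) / (rho2 - rho1)
```

With this $`j(\rho )`$, `shock_velocity(j, 0.3, 0.4)` is negative while `shock_velocity(j, 0.6, 0.7)` is positive: small shocks below the local minimum drift left and those above it drift right, which is exactly the ingredient of the branching scenario described next.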
A single large shock (with a large density difference $`\rho _2-\rho _1`$) may be understood as being composed of subsequent smaller shocks with narrow plateaux at each level of density (Fig. 4). In the usual asymmetric exclusion process these shocks travel with different relative speeds such that they coalesce and form a ‘bound state’ equivalent to a single shock . In the present situation, however, the minimum in the current-density relation leads to more complicated dynamics. A closer investigation of Eq. (5) shows that, depending on $`\rho _{1,2}`$, a single shock may branch into two distinct shocks moving away from each other, as discussed below. With these observations the dynamical origin of the phase transition lines can be understood by considering the time evolution of judiciously chosen initial states. Because of ergodicity, the steady state does not depend on the initial conditions and a specific choice involves no loss of generality. We turn our attention to a line $`\rho ^+=c`$ with $`\rho _{min}<c<\rho _2^{}`$ in the phase diagram which crosses the minimal current phase. Along this line it is convenient to consider an initial configuration with a shock with densities $`\rho ^{-}`$ and $`\rho ^+`$ on the left and on the right respectively, which is composed of many narrow subsequent shocks at various levels of intermediate densities (Fig. 4). (i) Let us start with the point of equal boundary densities, in which case the system evolves into a steady state with the same bulk density $`\rho _{bulk}=\rho ^{-}=\rho ^+`$. (ii) Lower $`\rho ^{-}`$ slightly below $`\rho ^+`$, with just a single shock separating the two regions. According to (5) the shock travels with speed $`v_s=(j^+-j^{-})/(\rho ^+-\rho ^{-})>0`$ to the right, making the bulk density equal to $`\rho ^{-}`$. At the same time, small disturbances will according to (4) also drift to the right, as $`v_c=j^{\prime }(\rho ^{-})>0`$ in this region, thus stabilizing the single shock. 
(iii) Now lower $`\rho ^{-}`$ below $`\rho _{min}`$. While the shock velocity $`v_s`$ is still positive, so that one expects the shock to move to the right, the collective velocity $`v_c=j^{\prime }(\rho ^{-})<0`$ indicates that disturbances will spread to the left. This discrepancy marks the failure of the single shock scenario. In order to resolve it, consider for simplicity $`\rho ^{-}=\rho _1^{}`$ and return to the picture with many subsequent shocks at each density level between $`\rho ^{-}`$ and $`\rho ^+`$ (Fig. 4). Eq. (5) shows that small shocks below $`\rho _{min}`$ will move to the left, while those above $`\rho _{min}`$ will move to the right<sup>*</sup><sup>*</sup>*Actually, the small shock between the levels $`\rho _k^{-},\rho _k^+`$ will be stable only if $`v_c(\rho _k^+)<v_s<v_c(\rho _k^{-})`$, see Eqs. (4,5). Some of the small shocks satisfy the above condition and some do not. This brings about additional structure in the resulting shock, which will be discussed elsewhere . The leftmost of the left-moving shocks will merge into a single one, and so will the rightmost of the right-moving shocks. The result is two single shocks moving in different directions, thus expanding the region with the density $`\rho _{bulk}=\rho _{min}`$. The system is in the minimal current phase. This picture is well supported by the Monte Carlo simulations shown in Fig. 5, demonstrating the branching of a single shock into two distinct shocks moving in opposite directions. The minimal current phase will persist for any left boundary density in the range $`\rho ^{-}\in [\stackrel{~}{\rho }_1,\rho _{min}]`$. Notice that the change of bulk density is continuous across the point $`\rho ^{-}=\rho _{min}`$ into the minimal current phase, so the transition is of second order. (iv) As we lower $`\rho ^{-}`$ below $`\stackrel{~}{\rho }_1`$, the shock velocity $`v_s=(j_{min}-j(\rho ^{-}))/(\rho _{min}-\rho ^{-})`$ becomes positive. 
The shock moves to the right, leading to a low-density phase with bulk density $`\rho _{bulk}=\rho ^{-}`$, which drops discontinuously from $`\rho _{bulk}=\rho _{min}`$ at $`\rho ^{-}=\stackrel{~}{\rho }_1+0`$ to $`\rho _{bulk}=\rho ^{-}`$ at $`\rho ^{-}=\stackrel{~}{\rho }_1-0`$. The system undergoes a first-order phase transition. On the transition line the shock performs an unbiased random walk, separating coexisting regions of densities $`\rho _{min}`$ and $`\rho ^{-}`$ respectively. (v) Let us start again from $`\rho ^{-}=\rho ^+`$ and now increase $`\rho ^{-}`$. Until one reaches $`\rho ^{-}=\rho _2^{}`$, the collective velocity $`v_c=j^{\prime }(\rho ^{-})`$ is positive, leading to $`\rho _{bulk}=\rho ^{-}`$. (vi) As soon as $`\rho ^{-}`$ crosses the point $`\rho ^{-}=\rho _2^{}`$, the sign of the collective velocity $`v_c`$ changes and the overfeeding effect occurs: a perturbation from the left does not spread into the bulk and therefore a further increase of the left boundary density does not increase the bulk density. The system enters the maximal current phase II through a second-order transition. Using analogous arguments one constructs the complete phase diagram (Fig. 2) and obtains the extremal principle (1, 2). The velocities (4), (5) which determine the phase transition lines follow from the current-density relation. This behavior can be checked with Monte Carlo simulations. 
A model with two maxima of the current is a TASEP with next-nearest-neighbor interaction defined by the bulk hopping rates (see A26, A27 in ) $`\mathrm{0\hspace{0.33em}1\hspace{0.33em}0\hspace{0.33em}0}`$ $`\to `$ $`\mathrm{0\hspace{0.33em}0\hspace{0.33em}1\hspace{0.33em}0}\text{ with rate }1+\delta `$ (6) $`\mathrm{1\hspace{0.33em}1\hspace{0.33em}0\hspace{0.33em}0}`$ $`\to `$ $`\mathrm{1\hspace{0.33em}0\hspace{0.33em}1\hspace{0.33em}0}\text{ with rate }1+ϵ`$ (7) $`\mathrm{0\hspace{0.33em}1\hspace{0.33em}0\hspace{0.33em}1}`$ $`\to `$ $`\mathrm{0\hspace{0.33em}0\hspace{0.33em}1\hspace{0.33em}1}\text{ with rate }1-ϵ`$ (8) $`\mathrm{1\hspace{0.33em}1\hspace{0.33em}0\hspace{0.33em}1}`$ $`\to `$ $`\mathrm{1\hspace{0.33em}0\hspace{0.33em}1\hspace{0.33em}1}\text{ with rate }1-\delta `$ (9) with $`|ϵ|<1;|\delta |<1`$. For sufficiently strong repulsive interaction, $`1-ϵ\ll 1`$, the current at half-filling is strongly suppressed, which brings about the two-maxima structure in the current-density relation (Fig. 1). The limit $`ϵ=1`$ leads to $`j_{min}=0`$. For negative $`ϵ`$ (attractive interaction) the current-density relation always has a single maximum. The other parameter, $`\delta `$, controls the particle-hole symmetry: $`\delta =0`$ corresponds to the symmetric graph $`j(\rho )=j(1-\rho )`$, while $`\delta \ne 0`$ breaks the particle-hole symmetry in favor of a larger particle current ($`\delta >0`$) or a larger vacancy current ($`\delta <0`$). In particular, $`\delta \ne 0`$ is responsible for the different heights of the two maxima in Fig. 1. The injection of particles at the left boundary site 1 and their extraction at the right boundary site $`L`$ are chosen to correspond to coupling to boundary reservoirs with densities $`\rho ^\pm `$ respectively. Along the line $`\rho ^+=\rho ^{-}`$ the stationary distribution is then exactly given by the equilibrium distribution of a 1-D Ising model with boundary fields and a bulk field such that the density profile is constant with density $`\rho =\rho ^+=\rho ^{-}`$ . 
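A short Monte Carlo sketch of the dynamics defined by rates (6)-(9). The random-sequential update and, in particular, the reservoir coupling at the two boundaries (injection and extraction with probabilities set by the reservoir densities) are simplified assumptions made for illustration; they are not the exact boundary rates of the model above.

```python
import random

def hop_rate(n_left, n_right, eps, delta):
    """Rate for the bulk move 1 0 -> 0 1, given the occupations of the
    two neighboring sites, as in rates (6)-(9)."""
    if n_left == 0 and n_right == 0:
        return 1.0 + delta
    if n_left == 1 and n_right == 0:
        return 1.0 + eps
    if n_left == 0 and n_right == 1:
        return 1.0 - eps
    return 1.0 - delta

def sweep(sites, eps, delta, rho_minus, rho_plus, rng):
    """One random-sequential sweep; sites beyond the boundaries are
    treated as empty when looking up the hop rate (an assumption)."""
    L = len(sites)
    rmax = 1.0 + max(abs(eps), abs(delta))   # bounds every rate
    for _ in range(L + 1):
        i = rng.randrange(-1, L)             # bond i -> i+1; -1 = injection
        if i == -1:
            if sites[0] == 0 and rng.random() < rho_minus:
                sites[0] = 1                 # inject from left reservoir
        elif i == L - 1:
            if sites[-1] == 1 and rng.random() < 1.0 - rho_plus:
                sites[-1] = 0                # extract to right reservoir
        elif sites[i] == 1 and sites[i + 1] == 0:
            nl = sites[i - 1] if i > 0 else 0
            nr = sites[i + 2] if i + 2 < L else 0
            if rng.random() < hop_rate(nl, nr, eps, delta) / rmax:
                sites[i], sites[i + 1] = 0, 1
```

Measuring the time-averaged bulk density after many sweeps for a grid of boundary densities, with strong repulsion (e.g. eps = 0.9), is the kind of procedure that maps out a simulated phase diagram.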
The current $`j=(1+\delta )\langle \mathrm{0100}\rangle +(1+ϵ)\langle \mathrm{1100}\rangle +(1-ϵ)\langle \mathrm{0101}\rangle +(1-\delta )\langle \mathrm{1101}\rangle `$ (the brackets denoting stationary probabilities of the corresponding local configurations) as a function of the density can be calculated exactly using standard transfer matrix techniques. The exact graph is shown in Fig. 1 for specific values of the hopping rates. We performed Monte Carlo simulations for systems of sizes $`L`$ from 100 to 1000. Densities and currents were averaged over at least 50$`L`$ rounds and over 7 different histories. We consider the bulk density $`\rho _{bulk}`$ as order parameter. In analogy to equilibrium phase transitions, a singularity (jump discontinuity) in the first derivatives of $`\rho _{bulk}`$ signals a first (second) order phase transition. In the finite system we determined the location of the first (second) order transition by the appearance of a peak (jump) in the first derivatives of the bulk density $`\rho _{bulk}(\rho ^+,\rho ^{-})`$ with respect to $`\rho ^+`$ and $`\rho ^{-}`$. As initial state we chose either the empty or the completely filled lattice, whichever gave the faster convergence. Despite finite-size effects, a precise analysis of which requires further investigation, the overall agreement of the simulated phase diagram with the predicted one is very good already for $`L=150`$ (Fig. 2). We have chosen a set of parameters $`ϵ`$ and $`\delta `$ which shows a distinct minimum between the two maxima; for other choices of $`ϵ,\delta `$ the phase diagram can be obtained in the same way, i.e. from the current-density relation and the extremal principles, Eqs. (1,2). We conclude that the interplay of density fluctuations and of shock diffusion, coalescence and branching, as described above, represents the basic mechanism which determines the steady state selection (1,2) of driven diffusive systems with a nonlinear current–density relation. A surprising phenomenon is the occurrence of the self-organized minimal current phase. 
Since little reference is made to the precise nature of the dynamics, we argue that the phase diagram is generic and hence knowledge of the macroscopic current-density relation of a given physical system is sufficient to calculate the exact nonequilibrium phase transition lines which determine the density of the bulk stationary state. For systems with more than two maxima in the current the interplay of more than two shocks has to be considered. A fundamentally different scenario seems likely in systems with long-range bulk correlations, which, however, are untypical in one dimension. Perhaps the most interesting open question which one might address in a similar fashion is the question of steady-state selection in systems with two or more local conservation laws. This could provide insight into the mechanisms responsible for boundary-induced spontaneous symmetry breaking in 1-D driven diffusive systems with open boundaries. Acknowledgements This work was partially supported by DAAD and CAPES within the PROBRAL programme. G.M.S. thanks H. Guiol for useful discussions.
# Ferroelectricity and structure of BaTiO3 grown on YBa2Cu3O7-δ thin films ## I Introduction In recent years, there has been much interest in the preparation of superconducting field effect transistors (SuFETs) with a (Ba,Sr)TiO<sub>3</sub> thin film as gate insulator and YBa<sub>2</sub>Cu<sub>3</sub>O<sub>7-δ</sub> (YBCO) as source-drain channel . More recently, the development of high frequency superconducting devices has increased the interest of the community in YBCO/ferroelectric heterostructures . From these studies it became clear that it is possible to grow high quality perovskite ferroelectric thin films on top of YBCO electrodes. This motivated us to investigate the room temperature ferroelectric and structural properties of YBCO/BaTiO<sub>3</sub>/Au heterostructures. The objective of this work is to analyse the domain wall contribution to the dielectric properties of high quality BaTiO<sub>3</sub> (BTO) thin films. So far, similar studies have only been done on Pt/PbZr<sub>x</sub>Ti<sub>1-x</sub>O<sub>3</sub> (PZT)/Pt heterostructures . ## II Experiment The 100 nm thick YBCO layers and the BTO films were sputtered in-situ in pure O<sub>2</sub> atmosphere. The top Au electrodes were thermally evaporated through a shadow mask. Before deposition, the YBCO target was pre-sputtered for 1 hour and the BTO target for 30 minutes. After this time the dc voltage of the plasma no longer decreased, showing that the targets had reached a stable state. For both processes the substrate–target distance was about 20 mm. The heterostructures were prepared on MgO substrates because of the better high frequency properties of this material in comparison to SrTiO<sub>3</sub>. The YBCO ground electrode was prepared following the method already described in detail in Ref. . The difference in the present work is that the post annealing was performed after the deposition of the BTO layer. The stoichiometric BTO targets were prepared at the University of Mainz. 
The BTO films were deposited by rf-sputtering in on-axis geometry without a magnetron, in O<sub>2</sub> pressures between 0.5 and 1.5 mbar. This was needed in order to prevent re-sputtering. During deposition, the rf power was regulated at 130 W, resulting in a typical bias voltage of 30 V. The best films were deposited at 700 °C. At the end of the process, the bi-layers were cooled down to room temperature over 1 hour and annealed in 1 bar oxygen pressure. The electrode contact size varied between 0.5 mm<sup>2</sup> and 1 mm<sup>2</sup>. X-ray diffraction patterns in Bragg-Brentano and four-circle geometry were obtained using a Philips X’Pert-MPD and a STOE STADI4 diffractometer, respectively. The capacitance and the hysteresis loops were measured with a HP4284A LCR-meter and a self-made Sawyer-Tower circuit, respectively. ## III Results and Discussion ### A X-ray diffraction The $`\theta `$/2$`\theta `$-scan shown in Fig. 1 reveals only $`c`$-axis oriented growth of the BTO films. However, as shown by detailed TEM studies, there may still exist small amounts of $`a`$-axis oriented domains embedded in a $`c`$-axis matrix even if by x-ray diffraction no splitting caused by $`a`$- and $`c`$-axis oriented domains can be found . The $`c`$-axis length of the BTO thin films in our work sputtered at 0.7 mbar is determined to be 3% larger than in the bulk material and coincides with the value reported by Abe et al. . On the other hand, by using a deposition pressure of 1.5 mbar we obtain the same $`c`$-axis length as the bulk material. Hence the lattice expansion at the lower pressure may result from atomic defects due to the high kinetic energy of the incident species. This explanation is in accordance with our observation that the lattice expansion occurs only for BTO but not for SrTiO<sub>3</sub> thin films. As barium ions are much heavier than strontium ions, they can induce more serious damage upon impact on the previously deposited film. 
We therefore expect the lattice parameters to depend also on the deposition technique, as already reported by Srikant et al. . In order to characterise the in-plane orientation of the YBCO/BTO heterostructures we performed a $`\varphi `$-scan of the YBCO (103) reflection (see Fig. 2) and a two-dimensional scan of the plane $`\mathrm{}=1`$ for BTO (see Fig. 3). For both films there are 8 peaks, indicating a fourfold symmetry and an in-plane alignment of YBCO’s and BTO’s crystallographic axes parallel and rotated 45° relative to the MgO lattice. The rotated alignment can be explained by the fact that 2 lattice constants of MgO equal 3 diagonal lattice parameters of YBCO. Since BTO grows epitaxially on YBCO, it reproduces this 45° rotation. A fourfold symmetry was obtained by the use of SrTiO<sub>3</sub> substrates. The orientation matrix gives for our films the in-plane lattice constants $`a=b=(0.3994\pm 0.0002)`$ nm for BTO. This value is in excellent agreement with the result for bulk materials. For YBCO we measured $`a=0.3847`$ nm, $`b=0.3851`$ nm. These values coincide well with the average between $`a`$ and $`b`$ in bulk materials. This is due to the usual mixing of these two orientations observed in thin films grown on standard substrates. The results above show that the $`c`$-axis distortion of BTO cannot be explained by the requirement to conserve the unit cell volume, according to the Poisson ratio. This indicates that the high energy of barium during the sputtering process might indeed be responsible for the observed lattice distortion. The mosaic spreads of the different layers were determined by measuring the rocking curves of BTO (002) and YBCO (005). For BTO we obtain a FWHM (full width at half maximum) of 0.38° and for YBCO 0.32°, indicating a good epitaxial growth of the heterostructures. The small FWHM obtained in our BTO films can be explained by the reduced lattice mismatch between YBCO and BTO. For comparison, Wills et al. 
obtained for $`a`$-axis oriented BTO on LaAlO<sub>3</sub> substrates a FWHM of 0.6° . ### B Ferroelectric hysteresis loops The non-remanent component of the polarisation is caused by non-ferroelectric ionic and electronic polarisability as well as by field-induced reversible ferroelectric domain wall motion. The remanent component is induced by switching ferroelectric domains. Using a Sawyer-Tower circuit, we obtain a remanent polarisation of $`P_\mathrm{r}=1.25\mu `$C/cm<sup>2</sup> and a coercive field of $`E_\mathrm{c}=30`$ kV/cm (Fig. 4). The remanent polarisation is consistent with other results obtained by rf-sputtering . Yoneda et al. even observed a ferroelectric hysteresis up to 600 °C for BTO films only 30 monolayers thick . Measuring the dielectric constant of our 300 nm thick BTO films from room temperature to 200 °C did not reveal any sign of a ferroelectric phase transition either. This result is rather surprising, since bulk BTO is known to have a Curie temperature of about 120 °C . This increase in the Curie temperature could be due to internal stresses induced by the substrate. ### C Contribution of the domain walls to the permittivity Most research on ferroelectric thin films emphasises the nucleation, growth and switching of domains, because these features are very important for nonvolatile memory applications. In contrast, contributions to the dielectric or the piezoelectric constant at sub-switching fields due to domain wall displacements and vibrations have been investigated mainly for single crystals or ceramics. We follow here the procedure of Taylor et al. by extending the models for the interaction and pinning of domain walls with randomly distributed pinning centres in magnetic materials to the dielectric displacement in ferroelectric thin films . Below we will address the dielectric response of a BTO thin film both in terms of the weak-field (Rayleigh law) and the logarithmic frequency dependence. 
The Rayleigh law is valid for low-field conditions, where no nucleation of domains occurs and the average structure of the domain walls drifts within the sample as the field is cycled . In order to achieve the linear Rayleigh behaviour of the dielectric constant we cycled the sample about 10<sup>7</sup> times prior to the measurement with an amplitude below the coercive voltage. Figure 5 shows the linear dependence of the capacitance on the ac voltage amplitude $`V_0`$. From the capacitance at 100 Hz and from the geometry of our electrodes we deduce a dielectric constant $`ϵ=300`$. In accordance with the Rayleigh law, the weak-field dependence can be described by the formula $`C=C_{\mathrm{init}}+aV_0`$. $`C_{\mathrm{init}}`$ represents the intrinsic lattice response and reversible domain wall movement. The second term $`aV_0`$, with the Rayleigh coefficient $`a`$, is responsible for the weak-field hysteresis due to the pinning and depinning of domain walls. According to Néel it is caused by the irreversible displacement of the walls between two meta-stable states when the applied field is large enough to overcome the barrier. In Fig. 6 we show the logarithmic dependence of the capacitance on frequency. As for magnetic systems, this logarithmic behaviour is generally attributed to domain wall motion across randomly distributed pinning centres . In order to determine whether the reversible $`C_{\mathrm{init}}`$ or the irreversible Rayleigh coefficient $`a`$ is responsible for this behaviour, we extracted both parameters for all frequencies from the ac field dependence of the capacitance. The result, indicating a logarithmic frequency dependence of both parameters, is presented in Fig. 7. Using a Sawyer-Tower circuit we finally measured the non-saturating hysteresis loop (Rayleigh loop) of our samples at sub-switching fields (see Fig. 8).
The full line representing the calculated hysteresis curve is based on the LCR-meter measurement, using the formula $`P=(ϵ_{\mathrm{init}}+\alpha E_0)E\pm \alpha /2(E_0^2-E^2)`$, where $`+`$ stands for the decreasing and $`-`$ for the increasing field. $`ϵ_{\mathrm{init}}`$ and $`\alpha `$ are derived from $`C_{\mathrm{init}}`$ and $`a`$, respectively. $`E_0`$ designates the maximum field reached during the cycle. While the slopes of the loops coincide, the area of the calculated hysteresis curve is underestimated in comparison to the measured one. This fact suggests that the observed hysteresis is not only due to the Rayleigh-type irreversible contribution to the dielectric constant, but also to losses having their origin in the series resistance and parallel conductivity of the capacitor. This explanation is supported by the fact that the calculated curve is based only on the imaginary and not on the real part of the impedance. So far there is no universal behaviour for the frequency response of capacitors based on insulating perovskites. While Taylor et al. observed, for Pt/PZT/Pt capacitors in the range from 10<sup>-1</sup> Hz to 10<sup>4</sup> Hz, a logarithmic law between capacitance and frequency , Zafar et al. used for Pt/(Ba,Sr)TiO<sub>3</sub>/Pt between 10<sup>2</sup> Hz and 10<sup>6</sup> Hz a power law to describe their data . While the logarithmic frequency dependence is in accordance with the contribution of domain walls, the power-law dependence is known as the Curie-von Schweidler law. Obviously further studies are necessary to determine the correct relationship between the dielectric constant and the frequency. Perhaps our data and the results of Taylor et al. fit the logarithmic frequency dependence better because the films exhibit a remanent polarisation and thus a higher domain wall contribution in comparison to non-ferroelectric (Ba,Sr)TiO<sub>3</sub> films.
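The sub-switching loop formula above lends itself to a direct numerical check; the sketch below uses purely illustrative values of the permittivity, the Rayleigh coefficient and the field amplitude (not our measured ones) and verifies that the two branches close at the field extrema and enclose an area of (4/3)αE₀³:

```python
import numpy as np

def rayleigh_loop(E, E0, eps_init, alpha):
    """Sub-switching polarisation loop P = (eps_init + alpha*E0)*E +/- (alpha/2)*(E0^2 - E^2);
    '+' holds on the decreasing branch, '-' on the increasing branch."""
    linear = (eps_init + alpha * E0) * E
    hyst = 0.5 * alpha * (E0**2 - E**2)
    return linear + hyst, linear - hyst   # (decreasing, increasing)

E0 = 5.0                                  # field amplitude, arbitrary units
E = np.linspace(-E0, E0, 201)
P_dec, P_inc = rayleigh_loop(E, E0, eps_init=300.0, alpha=2.0)

# The branches meet at the field extrema, so the loop is closed:
assert np.isclose(P_dec[0], P_inc[0]) and np.isclose(P_dec[-1], P_inc[-1])

# The enclosed area, the integral of (P_dec - P_inc) dE, equals (4/3)*alpha*E0^3
# and is carried entirely by the irreversible (Rayleigh) coefficient alpha.
gap = P_dec - P_inc
area = float(np.sum(0.5 * (gap[1:] + gap[:-1]) * np.diff(E)))
```

Setting alpha = 0 collapses the loop to a straight line, which is the reversible limit C = C_init of the capacitance formula above.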
The relative irreversible domain wall contribution to the total capacitance can be obtained from the ratio $`aV/C`$. If we apply a sinusoidal electric field with a frequency of 100 Hz and an amplitude of $`E_0=5`$ MV/cm we obtain a ratio of 3%. This value is much smaller than the 21% calculated by Taylor et al. for 1.3 $`\mu `$m thick PZT thin films. Since the irreversible part of the dielectric constant determines the remanent polarisation, it is reasonable that BTO shows a lower contribution of irreversible domain wall movement than PZT, because the remanent polarisation of PZT thin films is typically in the range of 40 $`\mu `$C/cm<sup>2</sup>. This is more than an order of magnitude larger than that of our BTO thin films, which show a remanent polarisation of 1.25 $`\mu `$C/cm<sup>2</sup>. Damjanovic et al. determined the Rayleigh parameters describing the piezoelectric response of PZT and BTO ceramics by applying an external ac pressure to the samples . The Rayleigh law for the piezoelectric behaviour can be described by the formula $`d=d_{\mathrm{init}}+\beta X_0`$ with the piezoelectric constant $`d`$, the amplitude $`X_0`$ of the applied pressure and the Rayleigh coefficient $`\beta `$. In contrast to our results for thin films, they observed for the coarse-grain (average grain size 27 $`\mu `$m) BTO ceramic a higher relative irreversible contribution $`\beta X/d`$ than for soft PZT. Thus the potential barrier for a domain wall displacement is not dominated by the material, but by the grain size and the type of domain walls, because 180° domain walls influence the dielectric constant but not the piezoelectric coefficient . Demartin concluded for fine-grain samples (0.7 $`\mu `$m grain size) of BTO that the domain walls are clamped to a considerable degree by the high internal pressure and hence contribute less to the piezoelectric coefficient.
In contrast to coarse-grain ceramics (26 $`\mu `$m grain size), the domain walls created in fine-grain ceramics are not sufficient to relieve the stress produced by the phase transition, resulting in a high internal stress and a weak activity of the domain walls. The decrease in dielectric constant with decreasing grain size (from 50 nm to 12 nm) also arises from the absence of 180° domain boundaries in smaller grains . On the other hand, it is well known that polycrystalline BTO with a grain size of approximately 1 $`\mu `$m possesses a much higher permittivity at room temperature than single-crystal BTO, due to the higher contribution of domain walls . The much larger film thickness of 1.3 $`\mu `$m chosen by Taylor and Damjanovic yields an enhanced activity of the domain walls and a higher Rayleigh coefficient in comparison to our 300 nm thick BTO films. In ceramics, the grain size and the stresses due to the phase transition determine the dielectric constant. Ferroelectric thin films are additionally influenced by the existence of interfaces and by additional stresses caused by the lattice mismatch between film and substrate. ## IV Conclusion The small lattice mismatch and the chemical compatibility between YBCO and BTO allow a good crystal structure for heterostructures of both materials. X-ray diffraction patterns display a high crystalline order of both YBCO and BTO, whereas their in-plane orientation is influenced by the choice of MgO as substrate material. The sub-switching ac electric field dependence of the permittivity may be described in terms of the Rayleigh law. At a field of 5 MV/m the total dielectric constant contains a 3% contribution from irreversible domain wall movement. In addition to $`C_{\mathrm{init}}`$, the irreversible Rayleigh parameter representing irreversible domain wall movement also shows a linear dependence on the logarithm of the frequency. This behaviour can be explained by pinning of domain walls.
In comparison to conventional Pt as base electrode for ferroelectric memories, YBCO offers the advantage of better fatigue behaviour. The authors gratefully acknowledge support by the Materialwissenschaftliches Forschungszentrum (MWFZ), Mainz, the Training and Mobility of Researchers (TMR) programme of the European Union under grant number ERBFM-BICT972217 and the German Bundesministerium für Bildung, Wissenschaft, Forschung und Technologie (BMBF) under project number 13N6916.
# Identification of a nearby stellar association in the Hipparcos catalog: implications for recent, local star formation. ## 1. Introduction For many years, the only known star clusters within 70 pc of Earth were the rich Hyades (at $`\sim `$45 pc) and the sparse U Ma nucleus (at $`\sim `$25 pc). Then a group of stars $`\sim `$55 pc from Earth was established as a bona fide association of T Tauri stars of age $`\sim `$10 million years (Webb et al 1999; Sterzik et al 1999; Webb, Reid, & Zuckerman 2000). This TW Hydrae Association (hereafter TWA) was unrecognized for many decades in spite of its being the nearest region of recent star formation (Kastner et al 1997). A ubiquitous signpost of newly formed stars has been a nearby molecular cloud. But no interstellar cloud has been found near TW Hya despite multiple searches. Therefore, we ask: do additional unrecognized young associations far from molecular clouds exist near Earth? Different teams have used the Hipparcos catalog to search for previously unrecognized stellar associations. For example, Platais et al (1998) undertook “A search for star clusters from the Hipparcos Data” and list basic data for 5 “very likely” new clusters and associations as well as 15 “possible” ones. Yet, in their own words, “At distances $`<`$100 pc the survey is incomplete as a result of the chosen search strategy.” Of the 20 potential new groupings in their Table 1, the closest, which contains 11 members from Hipparcos, is 132 pc away. By contrast, the “Tucanae Association” we propose in the present paper is only $`\sim `$45 pc from Earth and much younger than the U Ma and Hyades clusters. Just as the sparse U Ma cluster nucleus is accompanied by more numerous U Ma stream stars, we suggest that the Sun is embedded in a stream of stars with similar space motions (the “Tucanae Stream”), with the Tucanae Association playing a role analogous to the U Ma nucleus. Figures 1a and 1b depict the Tucanae Association and some stream stars, respectively.
These stars likely represent some of the younger, nearer members of the more extensive Pleiades group or Local Association proposed by Eggen (e.g., see Jeffries 1995 and references therein). ## 2. Observations Excepting TW Hya itself, the first T Tauri stars in the TWA were identified in a study of IRAS sources at high galactic latitude (de la Reza et al 1989; Gregorio-Hetem et al 1992). It has been shown that, on the main sequence, young stars are more likely to be 60 $`\mu `$m IRAS sources than are old stars (e.g., Jura et al 1993 & 1998; Silverstone et al 2000), and young stars are also more apt to be members of associations than are older stars. Therefore, we interrogated the Hipparcos catalog within a six degree radius of two dozen stars detected by IRAS at 60 $`\mu `$m and scattered around the sky. The stars were those with, in our opinion, reliable excesses at 60 $`\mu `$m listed by Mannings & Barlow (1998), Backman & Paresce (1993), and/or Silverstone et al. (2000). We searched for Hipparcos stars with proper motions and distances similar to those of the IRAS stars; results for our most interesting regions are presented in Figure 1 and Tables 1 & 2 and are discussed below. Results for additional IRAS stars will be discussed in a later paper. To verify or deny common space motions of Table 1 stars with similar distances and proper motions in the Hipparcos catalog, we measured radial velocities with the Bench Mounted Echelle (BME) spectrograph on the 1.5-m telescope at the Cerro Tololo Interamerican Observatory (CTIO). The spectra cover from $`\sim `$5000 - 8200Å at a typical measured resolution of 0.15Å. The data, obtained 21-26 (UT) August 1999, were reduced and calibrated in IRAF. Six radial velocity standard stars were observed during the run with spectral types ranging from F6 to M1.5. Radial velocity was determined by cross-correlating the spectrum of the target and that of a standard star of similar spectral type observed close in time to the target.
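The cross-correlation step can be illustrated with synthetic spectra; everything below (the line list, the grid, the 17 km/s shift) is invented for the sketch and is not taken from the observations. On a grid uniform in ln λ a Doppler shift becomes a constant pixel lag:

```python
import numpy as np

C_KMS = 299792.458                       # speed of light in km/s

# Grid uniform in ln(lambda): a Doppler shift is then a constant pixel lag.
n = 4000
dlnlam = 1e-5                            # ~3 km/s per pixel
lnlam = np.log(6000.0) + dlnlam * np.arange(n)

def spectrum(lnlam, v_kms):
    """Toy absorption spectrum with three Gaussian lines, shifted by v_kms."""
    shift = np.log(1.0 + v_kms / C_KMS)  # Doppler shift in ln(lambda)
    flux = np.ones_like(lnlam)
    for centre in np.log([6050.0, 6122.0, 6191.0]):
        flux -= 0.5 * np.exp(-0.5 * ((lnlam - centre - shift) / 2e-5) ** 2)
    return flux

template = spectrum(lnlam, 0.0)          # "standard star" at rest
target = spectrum(lnlam, 17.0)           # hypothetical star receding at 17 km/s

# Cross-correlate the mean-subtracted spectra and locate the peak lag.
ccf = np.correlate(target - target.mean(), template - template.mean(), mode="full")
lag = int(np.argmax(ccf)) - (n - 1)
v_meas = (np.exp(lag * dlnlam) - 1.0) * C_KMS   # recovers ~18 km/s, right to within a pixel
```

The ~1 km/s quantisation error from the integer lag is of the same order as the uncertainties quoted in the text; a parabolic fit to the CCF peak would refine it.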
Approximately 15 echelle orders, chosen to produce strong correlations and to have few atmospheric features, were used to compute the correlation. The accuracy of these measurements is strongly dependent on the S/N of the spectra and on the spectral type and rotational velocity of the target star. The uncertainty ranges from $`\sim `$1-10 km/s, the majority being $`<`$2 km/s. H$`\alpha `$ line profiles and equivalent widths of lithium (6708Å) were also measured and are listed in Table 2. Projected rotation velocities (v sin i) were measured and are listed in Table 2. We used a procedure similar to that of Strassmeier et al (1990); specifically, we followed the prescription described in their Section IIId. We measured the FWHM of lines in the 6420Å region and corrected them for the instrumental broadening (0.134 km/s, FWHM) determined from five ThAr lamp lines at similar wavelengths. Following Strassmeier et al, we assumed a macroturbulent velocity $`\zeta `$ equal to 3 km/s. For a few of the faint late K-type and M-type stars, v sin i is quite uncertain due to low S/N. With the 0.9-m telescope at CTIO we obtained BVRI photometry of the stars in Table 1. The purpose was to determine whether any of the stars are sufficiently young to still be above the main sequence. These data will be reported in a later paper. Properties of the 37 star systems we observed at CTIO are listed in Tables 1 and 2. The calculated UVW and the age indicators usually agree, in the sense that the stars with space motions similar to those of stars in the youthful TWA (Soderblom, King & Henry 1998; Webb et al 2000), the Beta Pic moving group (Barrado y Navascues et al 1999), and the Local Association (Jeffries 1995) usually also show additional indications of youth. Youth is deduced from one or more of the following characteristics: ROSAT All-Sky Survey Bright Source Catalog (RASS-BSC, Voges et al.
1998) X-ray source, strong lithium 6708Å absorption, H$`\alpha `$ emission or weak (filled-in) absorption, rapid rotation, IRAS far-infrared excess emission, and, for the A- and late-B type stars, location near the bottom envelope of brightness of stars of comparable spectral type, i.e. on or near the zero-age main sequence (Jura et al 1998; Lowrance et al 2000). We interpret Tables 1a and 2a in the following way. Nine or ten star systems near zero hours right ascension are likely to be part of a small stellar association similar to the TWA or the U Ma nucleus (see Figure 1c). These 10 stars are indicated in column 4 of Table 1a and we dub them “the Tucanae Association”; their distribution is shown in Figure 1a. Other stars in the Tables and Figures may be placed into one of the following categories: (1) a member of the stream of nearby, young Local Association stars, or (2) a star with no obvious indications of youth that happens to have distance from Earth and proper motions similar to those of the Tucanae Association. Some of these stars may indeed be Tucanae Association members without signatures of youth (e.g. see the discussion of HD 207129 below). An excess of stars with similar proper motions and distance from Earth seems to exist in the Tucanae region, indicating that some stars listed in Table 1b may be Tucanae members. But individually, each of these stars is unlikely to be a member. ## 3. Results & Discussion Ages have been deduced for a few stars in Table 1a. For example, HIP 92680 (PZ Tel) is above the main sequence and estimated to be 15-20 million years old (Favata et al 1998; Soderblom et al. 1998). With the NICMOS camera on HST, Lowrance et al. (2000) discovered a late M-type object within 4″ of the A-type star HR 7329; if a companion, which appears very likely, then the object is a brown dwarf of age $`\sim `$20 Myr.
Other stars with similar space motions and ages around 20 million years have been identified close to the Sun, for example Beta Pictoris (Jura et al 1993; Barrado y Navascues et al 1999) and Gliese 799 and 803 (Barrado y Navascues et al 1999). The Tucanae Association itself is probably somewhat older than 20 million years. We tentatively assign an age of 40 Myr based on the strength of the H$`\alpha `$ emission lines seen in HIP 1910, 1993, and 2729, which are brighter than in stars of similar spectral type in the $`\alpha `$ Per cluster (see Figure 16 in Prosser 1992), whose age has recently been estimated to be 90 $`\pm `$ 10 Myr (Stauffer et al 1999). Other indicators of age (the presence of rapid rotators and of stars with high Li abundance and/or large X-ray flux) all agree with this estimate but are somewhat less diagnostic. For example, the X-ray counts per second of the K- and M-type stars in Table 1a imply L<sub>x</sub>/L<sub>bol</sub> comparable to that of late-type stars in very young clusters (see Fig. 1 in Kastner et al. 1997). And L<sub>x</sub>/L<sub>bol</sub> of the late F- and G-type stars in Table 1a is typically three orders of magnitude larger than L<sub>x</sub>/L<sub>bol</sub> of the Sun (Fleming 1999), which suggests an age younger than the Pleiades (see Fig. 2 in Gaidos 1998). Over 40 Myr, a dispersion as small as 1 km/s in the critical V component of space velocity would lead to a 40 pc separation between stars. Since no such separation is present among the nuclear members (Fig. 1a), either the range in V given in Table 2a is due to measurement error, or the stars are younger than 40 Myr, or both. Measurement errors are characterized by, for example, differences between Hipparcos and PPM proper motions and the large uncertainty in radial velocity for A- and B-type members. Also, close companions in unrecognized binary systems will generate orbital motion that could shift the measured value of V away from the true V velocity of the binary system.
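The 1 km/s → 40 pc drift estimate above is straightforward unit arithmetic, e.g.:

```python
# Separation built up by a relative velocity dv (km/s) acting over t (Myr);
# 1 km/s sustained for 1 Myr covers about 1.02 pc.
KM_PER_PC = 3.0857e13        # kilometres in a parsec
S_PER_MYR = 3.1557e13        # seconds in a megayear

def drift_pc(dv_kms, t_myr):
    """Distance in parsecs drifted at dv_kms km/s over t_myr megayears."""
    return dv_kms * t_myr * S_PER_MYR / KM_PER_PC

separation = drift_pc(1.0, 40.0)   # ~41 pc, matching the ~40 pc quoted above
```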
We chose the stars in Table 1 for further study at CTIO because of their similar distances from Earth and proper motions. However, once radial velocities were measured it became clear that many of these stars share similar space motions (UVW) with other very young nearby stars found in very different directions. The mean UVW for the nine likely member systems of the Tucanae nucleus (PPM 366328 not included) is (-10.5, -20.8, +0.3) $`\pm `$ (2.3, 2.4, 3.0). For comparison, UVW for the TWA is (-11, -18, -5) (Webb et al 2000) and, for the Beta Pic moving group, (-10.3, -16.5, -10.2) (Barrado y Navascues et al 1999). In addition, we calculate UVW = (-11, -18, -10) for $`\eta `$ Cha, the brightest member of the recently identified, young, compact $`\eta `$ Chamaeleontis cluster (Mamajek, Lawson & Feigelson 1999). Similar space motions are also evident for some stars with far-IR excess emission as measured by IRAS or with strong lithium 6708Å lines (Jeffries 1995). As noted by Jeffries, many of these lithium stars have space motions similar to that of Eggen’s Local Association (U,V,W = -11, -21, -11). Four such stars, HD 172555, HD 195627, HD 207129 and HD 10647, are plotted as asterisks in Figures 1a and 1b. These four stars were not targeted by us for CTIO observations because they are significantly closer to Earth than the stars in Table 1 and we did not initially recognize them as potential members of a common stream. The G-type star HD 207129 is of special interest because it is surrounded by a cold dust ring detected by IRAS and is only 15.6 pc from Earth. Jourdain de Muizon et al (1999) argue that this star is 4.7 Gyr old and construct a corresponding model for the evolution of the dust ring. In contrast, we believe that HD 207129, with UVW = (-13.7, -22.3, +0.6), is actually a member of the Tucanae stream and probably only about as old as the Tucanae Association.
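The kinematic case for HD 207129 can be made quantitative with the numbers just quoted; a minimal sketch comparing its UVW with the nuclear mean and dispersion:

```python
import numpy as np

mean_uvw = np.array([-10.5, -20.8, +0.3])   # Tucanae nucleus mean (km/s)
sig_uvw = np.array([2.3, 2.4, 3.0])         # quoted dispersion (km/s)
hd207129 = np.array([-13.7, -22.3, +0.6])   # measured UVW of HD 207129 (km/s)

# Per-component offset from the nuclear mean, in units of the dispersion.
offsets = (hd207129 - mean_uvw) / sig_uvw   # about (-1.4, -0.6, +0.1)
```

All three components lie within about 1.4 sigma of the nuclear mean, which is what stream membership would require.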
Stars with space motions within the range encompassing the young groups listed above, (-15, -23, -13) $`<`$ (U, V, W) $`<`$ (-9, -16, +3), comprise less than 2% of the stars in Gliese’s Catalog of Nearby Stars. Thus the chance that HD 207129 is as old as 4.7 Gyr and yet has a space motion so similar to those of many very young stars is small. Also, the location of HD 207129 in the same direction as the Tucanae stream stars seen in Figure 1 supports the idea that they are kinematically associated. This strikes us as more compelling evidence for youth than the weak Ca II K-line emission, relied on by Jourdain de Muizon, as an indicator of a much older star. Not all young stars have activity in the Ca II lines. For example, the very young F-type star HD 135344, which has a huge far-IR excess and associated CO rotational emission (Zuckerman, Forveille, & Kastner 1995), has no Ca activity (Duncan, Barlow & Ryan 1997). Finally, we note that the intrinsic X-ray luminosity of HD 207129 as measured by ROSAT is about 10 times that of the Sun. Many papers published recently describe field stars with high lithium abundance, large X-ray fluxes, and other indicators suggestive of youth, but no consistent or compelling picture has been established for the solar vicinity. We believe it is now possible to paint a plausible picture of the recent star formation history of the present solar neighborhood. Between 10 and 40 million years ago, in a co-moving frame centered near the present position of the Sun, an ensemble of molecular clouds was forming stars at a modest rate. The spectral types of these stars ranged primarily from A to M, but included a few B-type stars as well. About 10 Myr ago, the most massive of the B-type stars exploded as a supernova at about the time that the stars in the TWA were forming.
This event terminated the star formation episodes and helped to generate a very low density region with a radius of order 70 pc in most directions from the present position of the Sun (Welsh, Crifo & Lallement 1998 and references therein). Thus we now have a “150 pc conspiracy” whereby molecular clouds (Taurus, Lupus, Cha, Sco, Oph) are seen in various directions, typically $`\sim `$150 pc from Earth, and, like the star-forming clouds of 10-40 million years ago, mostly at negative declinations. If the rate of supernovae in the Galaxy is one per 50 years, then in a typical sphere of radius 70 pc a supernova will explode every $`\sim `$10<sup>7</sup> years. Ten million years ago, the Sun would have been further than 100 pc from the supernova explosion. The above picture is consistent with one painted by Elmegreen (1992 & 1993), which was based on more general considerations pertaining to local galactic structure and Gould’s Belt. In particular, Elmegreen remarks that “The local star formation activity began 60 million years ago when the Carina arm passed through the local gas…This scenario is largely speculative…”. The recent discoveries of very nearby young star clusters and field stars in the southern hemisphere greatly enhance the likelihood of Elmegreen’s speculative scenario. ## 4. Conclusions We have found a previously unrecognized southern association, “the Tucanae Association”, which is only $`\sim `$45 pc from Earth, but not quite as young as the recently established TW Hydrae Association. Thus, in the past year, the number of known stellar associations within 60 pc of Earth has increased from two (the Hyades and U Ma) to four. The existence of the Tucanae and TW Hydrae Associations resolves the mystery of how the Beta Pictoris moving group can be so young, 20 $`\pm `$ 10 Myr (Barrado y Navascues et al 1999), and yet so near to Earth (within 20 pc).
That is, 10-40 million years ago, the region through which the Sun is now passing experienced a significant era of star formation which produced the Beta Pictoris group, the two southern associations, and related stream stars. We thank the crew at CTIO, including N. Suntzeff, T. Ingerson, M. Fernandez, A. Gomez, E. Cosgrove & R. Venegas, for critical help acquiring these data. A special thanks to M. Smith for his flexibility with telescope maintenance schedules. We are grateful to M. Jura for calling the Mannings and Barlow paper to our attention and for pointing out the likelihood of proper motion companions to some of the stars in Table 1. We thank Dr. Jura and E. E. Becklin and P. Lowrance for comments on drafts of the paper. This research was supported by NSF Grant 9417156 to UCLA and by the UCLA Center for Astrobiology.
# 1 Introduction to Gauge-Mediated SUSY Breaking ## 1 Introduction to Gauge-Mediated SUSY Breaking Since no superpartners have been detected at collider experiments so far, supersymmetry (SUSY) cannot be an exact symmetry of Nature. The requirement of “soft” supersymmetry breaking alone is not sufficient to reduce the free parameters to a number suitable for predictive phenomenological studies. Hence, motivated theoretical hypotheses on the nature of SUSY breaking and the mechanism through which it is transmitted to the visible sector of the theory \[here assumed to be the one predicted by the minimal SUSY extension of the standard model (MSSM)\] are highly desirable. If SUSY is broken at energies of the order of the Planck mass and the SUSY breaking sector communicates with the MSSM sector through gravitational interactions only, one falls in the supergravity-inspired (SUGRA) scheme. The most recognised alternative to SUGRA is based instead on the hypothesis that SUSY breaking occurs at relatively low energy scales and it is mediated mainly by gauge interactions (GMSB) . A good theoretical reason to consider such a possibility is that it provides a natural, automatic suppression of the SUSY contributions to flavour-changing neutral current and $`𝒞𝒫`$-violating processes. 
A pleasant consequence is that, at least in the simplest versions of GMSB, the MSSM spectrum and other observables depend on just a handful of parameters, typically $$M_{\mathrm{mess}},N_{\mathrm{mess}},\mathrm{\Lambda },\mathrm{tan}\beta ,\mathrm{sign}(\mu ),$$ (1) where $`M_{\mathrm{mess}}`$ is the overall messenger scale; $`N_{\mathrm{mess}}`$ is the so-called messenger index, parameterising the structure of the messenger sector; $`\mathrm{\Lambda }`$ is the universal soft SUSY breaking scale felt by the low-energy sector; $`\mathrm{tan}\beta `$ is the ratio of the vacuum expectation values of the two Higgs doublets; sign($`\mu `$) is the ambiguity left for the SUSY higgsino mass after conditions for correct electroweak symmetry breaking (EWSB) are imposed (see e.g. Refs. ). The phenomenology of GMSB (and more in general of any theory with low-energy SUSY breaking) is characterised by the presence of a very light gravitino $`\stackrel{~}{G}`$ , $$m_{3/2}\equiv m_{\stackrel{~}{G}}=\frac{F}{\sqrt{3}M_P^{\prime }}\simeq \left(\frac{\sqrt{F}}{100\mathrm{TeV}}\right)^2\times 2.37\mathrm{eV},$$ (2) where $`\sqrt{F}`$ is the fundamental scale of SUSY breaking, 100 TeV is a typical value for it, and $`M_P^{\prime }=2.44\times 10^{18}`$ GeV is the reduced Planck mass. Hence, the $`\stackrel{~}{G}`$ is always the lightest SUSY particle (LSP) in these theories. If $`R`$-parity is assumed to be conserved, any produced MSSM particle will finally decay into the gravitino. Depending on $`\sqrt{F}`$, the interactions of the gravitino, although much weaker than gauge and Yukawa interactions, can still be strong enough to be of relevance for collider physics. As a result, in most cases the last step of any SUSY decay chain is the decay of the next-to-lightest SUSY particle (NLSP), which can occur outside or inside a typical detector or even close to the interaction point.
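Eq. (2) can be checked numerically; a minimal sketch (the 2.37 eV normalisation just restates F/(√3 M′_P) with the reduced Planck mass given above):

```python
import math

M_PLANCK_RED_GEV = 2.44e18          # reduced Planck mass, GeV
GEV_TO_EV = 1.0e9

def m_gravitino_ev(sqrt_f_gev):
    """Gravitino mass m_3/2 = F / (sqrt(3) * M'_P), returned in eV."""
    return sqrt_f_gev**2 / (math.sqrt(3.0) * M_PLANCK_RED_GEV) * GEV_TO_EV

m32 = m_gravitino_ev(1.0e5)          # sqrt(F) = 100 TeV -> ~2.37 eV
ratio = m_gravitino_ev(1.0e6) / m32  # m_3/2 scales as F, i.e. as (sqrt(F))^2 -> 100
```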
The pattern of the resulting spectacular signatures is determined by the identity of the NLSP and its lifetime before decaying into the $`\stackrel{~}{G}`$, $$c\tau _{\mathrm{NLSP}}\simeq \frac{1}{100\mathcal{B}}\left(\frac{\sqrt{F}}{100\mathrm{TeV}}\right)^4\left(\frac{100\mathrm{GeV}}{m_{\mathrm{NLSP}}}\right)^5\mathrm{cm},$$ (3) where $`\mathcal{B}`$ is a number of order unity depending on the nature of the NLSP. The identity of the NLSP \[or, to be more precise, the identity of the sparticle(s) having a large branching ratio (BR) for decaying into the gravitino and the relevant SM partner\] determines four main scenarios giving rise to qualitatively different phenomenology. Neutralino NLSP scenario: occurs whenever $`m_{\stackrel{~}{N}_1}<(m_{\stackrel{~}{\tau }_1}-m_\tau )`$. Here typically a decay of the $`\stackrel{~}{N}_1`$ to $`\stackrel{~}{G}\gamma `$ is the final step of decay chains following any SUSY production process. As a consequence, the main inclusive signature at colliders is prompt or displaced photon pairs + X + missing energy. $`\stackrel{~}{N}_1`$ decays to $`\stackrel{~}{G}Z^0`$ and other minor channels may also be relevant at TeV colliders. Stau NLSP scenario: defined by $`m_{\stackrel{~}{\tau }_1}<\mathrm{Min}[m_{\stackrel{~}{N}_1},m_{\stackrel{~}{\ell }_R}]-m_\tau `$, features $`\stackrel{~}{\tau }_1\to \stackrel{~}{G}\tau `$ decays, producing $`\tau `$ pairs or charged semi-stable $`\stackrel{~}{\tau }_1`$ tracks or decay kinks + X + missing energy. Here and in the following, $`\ell `$ stands for $`e`$ or $`\mu `$. Slepton co-NLSP scenario: when $`m_{\stackrel{~}{\ell }_R}<\mathrm{Min}[m_{\stackrel{~}{N}_1},m_{\stackrel{~}{\tau }_1}+m_\tau ]`$, $`\stackrel{~}{\ell }_R\to \stackrel{~}{G}\ell `$ decays are also open with large BR. In addition to the signatures of the stau NLSP scenario, one also gets $`\ell ^+\ell ^{-}`$ pairs or $`\stackrel{~}{\ell }_R`$ tracks or decay kinks.
Neutralino-stau co-NLSP scenario: if $`|m_{\stackrel{~}{\tau }_1}-m_{\stackrel{~}{N}_1}|<m_\tau `$ and $`m_{\stackrel{~}{N}_1}<m_{\stackrel{~}{\ell }_R}`$, both the signatures of the neutralino NLSP and of the stau NLSP scenario are present at the same time, since $`\stackrel{~}{N}_1\to \stackrel{~}{\tau }_1`$ decays are not allowed by phase space. Note that in the GMSB parameter space the relation $`m_{\stackrel{~}{\ell }_R}>m_{\stackrel{~}{\tau }_1}`$ always holds. Also, one should keep in mind that the classification above is only valid as an indicative scheme in the limit $`m_e`$, $`m_\mu \to 0`$, neglecting also those cases where a fine-tuned choice of $`\sqrt{F}`$ and the sparticle masses may give rise to competition between phase-space suppressed decay channels from one ordinary sparticle to another and sparticle decays to the gravitino . In this report, we treat two important aspects of the GMSB phenomenology at TeV colliders: (A) the consequences of the GMSB hypotheses for the light Higgs spectrum, using the most accurate tools available today for model generation and $`m_h`$ calculation; (B) studies and possible measurements at the LHC with the ATLAS detector in the stau NLSP or slepton co-NLSP scenarios, with focus on determining the fundamental SUSY breaking scale $`\sqrt{F}`$. For this purpose, we generated about 30000 GMSB models under well-defined hypotheses, using the program SUSYFIRE , as described in the following section. ## 2 GMSB Models In the GMSB framework, the pattern of the MSSM spectrum is simple, as all sparticle masses are generated in the same way and scale approximately with a single parameter $`\mathrm{\Lambda }`$, which sets the amount of soft SUSY breaking felt by the visible sector. As a consequence, scalar and gaugino masses are related to each other at a high energy scale, which is not the case in other SUSY frameworks, e.g. SUGRA. Also, it is possible to impose other conditions at a lower scale to achieve EWSB and further reduce the dimension of the parameter space.
To build our GMSB models, we adopt the usual phenomenological approach, in particular following Ref. , where problems relevant for GMSB physics at TeV colliders were also approached. We do not specify the origin of the SUSY higgsino mass $`\mu `$, nor do we assume that the analogous soft SUSY breaking parameter $`B\mu `$ vanishes at the messenger scale. Instead, we impose correct EWSB to trade $`\mu `$ and $`B\mu `$ for $`M_Z`$ and $`\mathrm{tan}\beta `$, leaving the sign of $`\mu `$ undetermined. However, we are aware that to build a fully satisfactory GMSB model one should also solve this $`\mu `$ problem in a more fundamental way, perhaps by providing a dynamical mechanism to generate $`\mu `$ and $`B\mu `$, possibly with values of the same order of magnitude. This might be accomplished radiatively through some new interactions. However, in this case the other soft terms in the Higgs potential, namely $`m_{H_{1,2}}^2`$, will also be affected, and this will in turn change the values of $`|\mu |`$ and $`B\mu `$ coming from the EWSB conditions . Within study (A), we are currently considering some “non-minimal” possibilities for GMSB models that to some extent take this problem into account, and we are trying to assess the impact on the light Higgs mass. We do not treat this topic here, but refer to for further details. To determine the MSSM spectrum and low-energy parameters, we solve the renormalisation group equation (RGE) evolution with the following boundary conditions at the $`M_{\mathrm{mess}}`$ scale, $`M_a`$ $`=`$ $`N_{\mathrm{mess}}\mathrm{\Lambda }g\left({\displaystyle \frac{\mathrm{\Lambda }}{M_{\mathrm{mess}}}}\right){\displaystyle \frac{\alpha _a}{4\pi }},(a=1,2,3)`$ $`\stackrel{~}{m}^2`$ $`=`$ $`2N_{\mathrm{mess}}\mathrm{\Lambda }^2f\left({\displaystyle \frac{\mathrm{\Lambda }}{M_{\mathrm{mess}}}}\right){\displaystyle \sum _a}\left({\displaystyle \frac{\alpha _a}{4\pi }}\right)^2C_a,`$ (4) respectively for the gaugino and the scalar masses. In Eq.
(4), $`g`$ and $`f`$ are the one-loop and two-loop functions whose exact expressions can be found e.g. in Ref. , and $`C_a`$ are the quadratic Casimir invariants for the scalar fields. As usual, the scalar trilinear couplings $`A_f`$ are assumed to vanish at the messenger scale, as suggested by the fact that they (and not their squares) are generated via gauge interactions with the messenger fields at the two-loop level only. To single out the interesting region of the GMSB parameter space, we proceed as follows. Barring the case where a neutralino is the NLSP and decays outside the detector (large $`\sqrt{F}`$), the GMSB signatures are very spectacular and are generally free from SM background. Keeping this in mind and being interested in GMSB phenomenology at future TeV colliders, we consider only models where the NLSP mass is larger than 100 GeV, assuming that searches at LEP and Tevatron, if unsuccessful, will in the end exclude a softer spectrum in most cases. We require that $`M_{\mathrm{mess}}>1.01\mathrm{\Lambda }`$, to prevent an excess of fine-tuning of the messenger masses, and that the mass of the lightest messenger scalar be at least 10 TeV. We also impose $`M_{\mathrm{mess}}>M_{\mathrm{GUT}}\mathrm{exp}(125/N_{\mathrm{mess}})`$, to ensure perturbativity of gauge interactions up to the GUT scale. Further, we do not consider models with $`M_{\mathrm{mess}}\gtrsim 10^5\mathrm{\Lambda }`$. As a result of this and other constraints, the messenger index $`N_{\mathrm{mess}}`$, which we assume to be an integer independent of the gauge group, cannot be larger than 8. To prevent the top Yukawa coupling from blowing up below the GUT scale, we require $`\mathrm{tan}\beta >1.2`$ (and in some cases $`>1.5`$). This is also motivated by the current bounds from SUSY Higgs searches at LEP II .
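As a rough numerical illustration of the boundary conditions (4), the sketch below evaluates the gaugino and scalar soft masses in the limit $`\mathrm{\Lambda }/M_{\mathrm{mess}}\to 0`$, where the threshold functions satisfy $`g,f\to 1`$. The gauge coupling values at the messenger scale and the sample Casimir assignments are illustrative assumptions, not the output of an RGE code such as SUSYFIRE.

```python
# Hedged numerical sketch of the GMSB boundary conditions, Eq. (4), in the
# limit Lambda << M_mess where g, f -> 1. The alpha_a values are illustrative
# guesses for the gauge couplings at a messenger scale of order 100 TeV; a
# real analysis would take them from RGE evolution (e.g. with SUSYFIRE).
import math

Lambda_ = 100e3   # GeV, effective SUSY-breaking scale felt by the MSSM
N_mess = 1        # messenger index
alpha = {1: 0.017, 2: 0.033, 3: 0.09}   # illustrative alpha_a(M_mess)

# Gaugino masses: M_a = N_mess * Lambda * g * alpha_a / (4 pi), with g ~ 1
M = {a: N_mess * Lambda_ * alpha[a] / (4 * math.pi) for a in (1, 2, 3)}

# Scalar masses: m^2 = 2 N_mess Lambda^2 f sum_a (alpha_a/4pi)^2 C_a, f ~ 1
# Quadratic Casimirs (hypercharge part with the usual 3/5 normalisation):
C_eR = {1: 3 / 5}                        # right-handed slepton
C_qL = {1: 1 / 60, 2: 3 / 4, 3: 4 / 3}   # left-handed squark doublet

def scalar_mass(C):
    """Boundary-condition scalar mass (GeV) for a field with Casimirs C."""
    m2 = 2 * N_mess * Lambda_**2 * sum(
        (alpha[a] / (4 * math.pi))**2 * Ca for a, Ca in C.items())
    return math.sqrt(m2)

print(f"M_1 ~ {M[1]:.0f} GeV, M_2 ~ {M[2]:.0f} GeV, M_3 ~ {M[3]:.0f} GeV")
print(f"m(eR) ~ {scalar_mass(C_eR):.0f} GeV, m(qL) ~ {scalar_mass(C_qL):.0f} GeV")
```

These are messenger-scale boundary values only; the weak-scale spectrum (in particular the squark versus gluino mass ordering) is reshaped by the RGE running described in the text.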
Models with $`\mathrm{tan}\beta \gtrsim 55`$ (with a mild dependence on $`\mathrm{\Lambda }`$) are forbidden by the EWSB requirement and typically fail to give $`m_A^2>0`$. To calculate the NLSP lifetime relevant to our study (B), one needs to specify the value of the fundamental SUSY breaking scale $`\sqrt{F}`$ on a model-by-model basis. Using perturbativity arguments, for each given set of GMSB parameters, it is possible to determine a lower bound according to Ref. , $$\sqrt{F}\ge \sqrt{F_{\mathrm{mess}}}\ge \sqrt{\mathrm{\Lambda }M_{\mathrm{mess}}}>\mathrm{\Lambda }.$$ (5) On the contrary, no solid argument can be used to set an upper limit on $`\sqrt{F}`$ of relevance for collider physics, although some semi-qualitative cosmological arguments are sometimes invoked. To generate our model samples using SUSYFIRE, we used logarithmic steps for $`\mathrm{\Lambda }`$ (between about 45 TeV/$`N_{\mathrm{mess}}`$ and about 220 TeV/$`\sqrt{N_{\mathrm{mess}}}`$, which corresponds to excluding models with sparticle masses above $`\sim 4`$ TeV), $`M_{\mathrm{mess}}/\mathrm{\Lambda }`$ (between 1.01 and $`10^5`$) and $`\mathrm{tan}\beta `$ (between 1.2 and about 60), subject to the constraints described above. SUSYFIRE starts from the values of SM particle masses and gauge couplings at the weak scale and then evolves up to the messenger scale through RGE’s. At the messenger scale, it imposes the boundary conditions (4) for the soft sparticle masses and then evolves the full set of RGE’s back to the weak scale. The decoupling of each sparticle at the proper threshold is taken into account. Two-loop RGE’s are used for gauge couplings, third generation Yukawa couplings and gaugino soft masses. The other RGE’s are taken at the one-loop level. At the scale $`\sqrt{m_{\stackrel{~}{t}_1}m_{\stackrel{~}{t}_2}}`$, EWSB conditions are imposed by means of the one-loop effective potential approach, including corrections from stops, sbottoms and staus.
The program then evolves up again to $`M_{\mathrm{mess}}`$ and so on. Three or four iterations are usually enough to get a good approximation for the MSSM spectrum.

(A) The Light Higgs Boson Spectrum in GMSB Models

Contribution by: S. Ambrosanio, S. Heinemeyer, G. Weiglein

## A.1 Introduction

Within the MSSM, the masses of the $`𝒞𝒫`$-even neutral Higgs bosons are calculable in terms of the other low-energy parameters. The mass of the lightest Higgs boson, $`m_h`$, has been of particular interest, as at tree level it is bounded from above by the $`Z^0`$ boson mass. The one-loop results for $`m_h`$ have been supplemented in the last years with the leading two-loop corrections, performed in the renormalisation group (RG) approach , in the effective potential approach and most recently in the Feynman-diagrammatic (FD) approach . The two-loop corrections have turned out to be sizeable. They can lower the one-loop results by up to 20%. These calculations predict an upper bound of $`m_h\lesssim 130`$ GeV for an unconstrained MSSM with $`m_t=175`$ GeV and a common SUSY mass scale $`M_{\mathrm{SUSY}}=1`$ TeV. As discussed in Sec. 1, the GMSB scenario provides a relatively simple set of constraints and thus constitutes a very predictive and readily testable realization of the MSSM. The main goal of the present analysis is to study the mass of the lightest neutral $`𝒞𝒫`$-even Higgs boson, $`m_h`$, within the GMSB framework. Particular emphasis is given to the maximal value of $`m_h`$ achievable in GMSB after an exhaustive scan of the parameter space. Our results are discussed in terms of the GMSB constraints on the low-energy parameters and compared to the cases of a SUGRA-inspired or an unconstrained MSSM.

## A.2 Calculation of $`m_h`$

To evaluate $`m_h`$, we employ the currently most accurate calculation based on the FD approach .
The most important radiative corrections to $`m_h`$ arise from the top and scalar top sector of the MSSM, with the input parameters $`m_t`$, the masses of the scalar top quarks, $`m_{\stackrel{~}{t}_1}`$, $`m_{\stackrel{~}{t}_2}`$, and the $`\stackrel{~}{t}`$-mixing angle, $`\theta _{\stackrel{~}{t}}`$. Here we adopt the conventions introduced in Ref. . The complete diagrammatic one-loop result has been combined with the dominant two-loop corrections of $`𝒪(\alpha \alpha _s)`$ and with the subdominant corrections of $`𝒪(G_F^2m_t^6)`$ . GMSB models are generated with the program SUSYFIRE, according to the discussion of Sec. 2. For this study, we consider only models with $`\mathrm{tan}\beta >1.5`$ and $`m_A>80`$ GeV . In addition, we always use $`m_t=175`$ GeV. A change of 1 GeV in $`m_t`$ translates roughly into a shift of 1 GeV (with the same sign) in $`m_h`$ as well. Thus, changing $`m_t`$ affects our results on $`m_h`$ in an easily predictable way. The results of the $`m_h`$ calculation have been implemented in the program FeynHiggs . This Fortran code has been combined with SUSYFIRE, which has been used to calculate the low energy parameters $`m_{\stackrel{~}{t}_1}`$, $`m_{\stackrel{~}{t}_2}`$, $`\theta _{\stackrel{~}{t}}`$, $`\mu `$, $`M_1`$, $`M_2`$, $`m_{\stackrel{~}{g}}`$, $`\mathrm{}`$ for each of the $`\sim `$30000 GMSB models generated. These have then been passed to FeynHiggs for the $`m_h`$ evaluation in a consistent way: we transform the $`\overline{\mathrm{MS}}`$ parameters in the SUSYFIRE output into on-shell parameters before feeding them into FeynHiggs. Compact expressions for the relevant transition formulas can be found in Refs. . Compared to an existing analysis in the GMSB framework , we use a more complete evaluation of $`m_h`$. This leads in particular to smaller values of $`m_h`$ for a given set of input parameters in our analysis. Also, in Ref.
although some GMSB scenarios with generalised messenger sectors were considered, the parameter space for the “minimal” case with a unique, integer messenger index $`N_{\mathrm{mess}}=N_1=N_2=N_3`$ was not fully explored. Indeed, $`\mathrm{\Lambda }`$ was in most cases limited to values smaller than 100 TeV and $`M_{\mathrm{mess}}`$ was fixed to $`10^5`$ TeV. Furthermore, partly as a consequence of the above assumptions, the authors did not consider models with $`N_{\mathrm{mess}}>4`$, i.e. their requirements for perturbativity of the MSSM gauge couplings up to the GUT scale were stronger than ours. We will see in the following section that maximal $`m_h`$ values in our analysis are instead obtained for larger values of the messenger scale and the messenger index.

## A.3 The Light Higgs Spectrum in GMSB

In the following, we give some results in the form of scatter plots showing the pattern in GMSB for $`m_h`$, $`m_A`$ as well as other low-energy parameters of relevance for the light Higgs spectrum. In Fig. 1(a), we show the dependence of $`m_h`$ on $`\mathrm{tan}\beta `$, where only models with $`\mathrm{tan}\beta >1.5`$, $`m_A>80`$ GeV and $`m_{\mathrm{NLSP}}>100`$ GeV are considered, while $`m_t`$ is fixed to 175 GeV. The dependence is strong for small $`\mathrm{tan}\beta \lesssim 10`$, while for larger $`\mathrm{tan}\beta `$ the increase of the lightest Higgs mass is rather mild. The maximum values of $`m_h\simeq 124`$ GeV are achieved for $`\mathrm{tan}\beta >50`$. It should be noted that for very large $`\mathrm{tan}\beta \gtrsim 52`$, we also find a few models with relatively small $`m_h\lesssim 100`$ GeV. This is due to the fact that in this case the EWSB conditions tend to drive $`m_A`$ toward very small values (cf. Sec. 2). This is visible in the scatter plot in Fig. 1(b), where the pseudoscalar Higgs mass is shown as a function of $`\mathrm{tan}\beta `$.
For such small values of $`m_A`$ and for large $`\mathrm{tan}\beta `$, the relation $`m_h\simeq m_A`$ holds. Thus small $`m_h`$ values are quite natural in this region of the parameter space. On the other hand, one can see that extremely large values of $`m_A\gtrsim 2`$ TeV can only be obtained for small or moderate $`\mathrm{tan}\beta \lesssim 10`$. A comparison between Figs. 1(a) and (b) reveals that the largest $`m_h`$ values $`\gtrsim 123`$ GeV correspond in GMSB to $`m_A`$ values in the 300–800 GeV range. Indeed, it has been checked that such large $`m_h`$ values are in general obtained in the FD calculation for $`300<m_A\lesssim 1000`$ GeV, see Ref. . In Fig. 2, we show the dependence of the lightest Higgs boson mass on the stop mixing parameter $`x_{\mathrm{top}}`$ defined by $$x_{\mathrm{top}}\equiv \frac{A_{\mathrm{top}}-\mu /\mathrm{tan}\beta }{m_S},\quad \mathrm{where}\quad m_S=\sqrt{(m_{\stackrel{~}{t}_1}^2+m_{\stackrel{~}{t}_2}^2)/2}.$$ (6) For equal soft SUSY breaking parameters in the stop sector with the $`D`$-terms neglected, $`x_{\mathrm{top}}`$ corresponds to the ratio $`X_t/M_S`$ of the off-diagonal and diagonal entries in the stop mixing matrix, see e.g. Ref. . Maximal $`m_h`$ values are obtained for $`x_{\mathrm{top}}\simeq \pm 2`$, while a minimum is reached around $`x_{\mathrm{top}}\simeq 0`$. Thus, for large $`m_h`$ values a large numerator in Eq. (6) is required. From Fig. 3(a), one can see that in GMSB only negative values of $`A_{\mathrm{top}}`$ are allowed at the electroweak scale, as a consequence of the fact that the trilinear couplings are negligible at the messenger scale. Due to the logarithmic dependence of $`m_h`$ on the stop masses, relatively large values of $`|A_{\mathrm{top}}|`$ are needed for large $`m_h`$. In addition, large $`\mathrm{tan}\beta `$ is also required. From Fig. 3(b) one can check that this leads to values of $`x_{\mathrm{top}}\simeq -0.95`$, which can only be achieved for positive $`\mu `$. Fig.
4(a) shows the dependence of $`A_{\mathrm{top}}`$ on $`\mu `$. Large values of $`|A_{\mathrm{top}}|`$ are only reached for large $`|\mu |`$ values. Therefore maximal $`m_h`$ values are obtained for relatively large and positive $`\mu `$, as can be seen in Fig. 4(b).<sup>1</sup><sup>1</sup>1In general, for large values of $`|\mu |`$ and $`\mathrm{tan}\beta `$ the effects of the corrections from the $`b`$–$`\stackrel{~}{b}`$ sector can become important, leading to a decrease in $`m_h`$. For the GMSB models under consideration, however, this is not the case as a consequence of the relatively large $`\stackrel{~}{b}`$ masses. All these arguments about the combination of low energy parameters needed for large $`m_h`$ in GMSB are summarised in Tab. 1, where we report the 10 models in our sample that give rise to the highest $`m_h`$ values. Together with $`m_h`$, Tab. 1 shows the corresponding input GMSB parameters \[cf. Eq. (1)\] as well as the values of the low energy parameters mentioned above. It is interesting to note that all the models shown in Tab. 1 feature a large messenger index and values of the messenger scale not far from the maximum we allowed while generating GMSB models. We could not construct a single model with $`m_h\gtrsim 122.5`$ GeV having $`N_{\mathrm{mess}}<6`$ or $`M_{\mathrm{mess}}<10^5`$ TeV, for $`m_t=175`$ GeV. It is hence worth mentioning here that our choice of imposing $`M_{\mathrm{mess}}/\mathrm{\Lambda }<10^5`$, i.e. $`M_{\mathrm{mess}}\lesssim 2\times 10^{10}`$ GeV, does not correspond to any solid theoretical prejudice. On the other hand, it is true that $`M_{\mathrm{mess}}\gtrsim 3\times 10^8`$ GeV always corresponds to gravitino masses larger than $`\sim 1`$ keV, due to Eqs. (2) and (5). The latter circumstance might be disfavoured by cosmological arguments . A curious consequence is that the GMSB models with the highest $`m_h`$ always belong to the stau NLSP or slepton co-NLSP scenarios.
Note also that restricting ourselves to GMSB models with $`\mathrm{\Lambda }<100`$ TeV, $`M_{\mathrm{mess}}<10^5`$ TeV and $`N_{\mathrm{mess}}\le 4`$, we find a maximal $`m_h`$ value of 122.2 GeV, for $`m_t=175`$ GeV and $`\mathrm{tan}\beta \simeq 52`$. This is to be compared with the one-loop result of Ref. , $`m_h`$(max) = 131.7 GeV, for $`\mathrm{tan}\beta `$ around 30 (the assumed value of $`m_t`$ is not quoted). Values for $`m_h`$ slightly larger than those we found here may also arise from non-minimal contributions to the Higgs potential, in connection with a dynamical generation of $`\mu `$ and $`B\mu `$ (cf. Sec. 2). A treatment of this problem can be found in Ref. . One should also keep in mind that our analysis still suffers from uncertainties due to unknown higher order corrections both in the RGE’s for GMSB model generation and in the evaluation of $`m_h`$ from low energy parameters. A rough estimate of these effects leads to shifts in $`m_h`$ not larger than 3 to 5 GeV.

## A.4 Conclusions

We conclude that in the minimal GMSB framework described above, values of $`m_h\gtrsim 124.2`$ GeV are not allowed for $`m_t=175`$ GeV. This is almost 6 GeV smaller than the maximum value for $`m_h`$ one can achieve in the MSSM without any constraints or assumptions about the structure of the theory at high energy scales . On the other hand, the alternative mSUGRA framework allows values of $`m_h`$ that are $`\sim 3`$ GeV larger than in GMSB . This makes the GMSB scenario slightly easier to explore via Higgs boson searches. This result was expected in the light of the rather strong GMSB requirements, such as the presence of a unique soft SUSY breaking scale, the relative heaviness of the squarks and the gluino compared to non-strongly interacting sparticles, and the fact that the soft SUSY breaking trilinear couplings $`A_f`$ get nonzero values at the electroweak scale only by RGE evolution.
Nevertheless, once the whole parameter space is explored, it is not true that mGMSB gives rise to $`m_h`$ values that are considerably smaller than in mSUGRA. Even smaller differences in the maximal $`m_h`$ might be present when considering non-minimal, complex messenger sectors or additional contributions to the Higgs potential . In any case, as for mSUGRA, current LEP II or Tevatron data on Higgs boson searches are far from excluding mGMSB, and the upgraded Tevatron and the LHC will certainly be needed to test any realistic SUSY model in depth.

### Acknowledgements

S. A. and S. H. thank the organisers of the Workshop “Physics at TeV Colliders”, for the hospitality and the pleasant and productive atmosphere in Les Houches.

(B) Measuring the SUSY Breaking Scale at the LHC in the Slepton NLSP Scenario of GMSB Models

Contribution by: S. Ambrosanio, B. Mele, S. Petrarca, G. Polesello, A. Rimoldi

## B.1 Introduction

The fundamental scale of SUSY breaking $`\sqrt{F}`$ is perhaps the most important quantity to determine from phenomenology in a SUSY theory. In the mSUGRA framework, the gravitino mass sets the scale of the soft SUSY breaking masses in the MSSM ($`0.1`$–$`1`$ TeV), so that $`\sqrt{F}`$ is typically large, $`10^{10}`$–$`10^{11}`$ GeV \[cf. Eq. (2)\]. As a consequence, the interactions of the $`\stackrel{~}{G}`$ with the other MSSM particles, which scale as $`F^{-1}`$, are too weak for the gravitino to be of relevance in collider physics, and there is no direct way to access $`\sqrt{F}`$ experimentally. In GMSB theories, the situation is completely different. The soft SUSY breaking scale of the MSSM and the sparticle masses are set by gauge interactions between the messenger and low energy sectors to be $`\sim \alpha _{\mathrm{SM}}\mathrm{\Lambda }`$ \[cf. Eq. (4)\], so that typical $`\mathrm{\Lambda }`$ values are 10–100 TeV.
On the other hand, $`\sqrt{F}`$ is subject to the lower bound (5) only, which tells us that values well below $`10^{10}`$ GeV, and even as low as several tens of TeV, are perfectly reasonable. The $`\stackrel{~}{G}`$ is in this case the LSP and its interactions are strong enough to allow NLSP decays to the $`\stackrel{~}{G}`$ inside a typical detector size. The latter circumstance gives us a chance for extracting $`\sqrt{F}`$ experimentally through a measurement of the NLSP mass and lifetime \[cf. Eq. (3)\]. Furthermore, the possibility of determining $`\sqrt{F}`$ with good precision opens a window on the physics of the SUSY breaking (the so-called “secluded”) sector and the way this SUSY breaking is transmitted to the messenger sector. Indeed, the characteristic scale of SUSY breaking felt by the messengers (and hence the MSSM sector), given by $`\sqrt{F_{\mathrm{mess}}}`$ in Eq. (5), can also be determined once the MSSM spectrum is known. By comparing the measured values of $`\sqrt{F}`$ and $`\sqrt{F_{\mathrm{mess}}}`$ it might well be possible to get information on the way the secluded and messenger sectors communicate with each other. For instance, if it turns out that $`\sqrt{F_{\mathrm{mess}}}\ll \sqrt{F}`$, then it is very likely that the communication occurs radiatively and the ratio $`\sqrt{F_{\mathrm{mess}}/F}`$ is given by some loop factor. On the contrary, if the communication occurs via a direct interaction, this ratio is just given by a Yukawa-type coupling constant, with values $`\lesssim 1`$, see Refs. . An experimental method to determine $`\sqrt{F}`$ at a TeV scale $`e^+e^{}`$ collider through the measurement of the NLSP mass and lifetime was presented in Ref. , in the neutralino NLSP scenario. Here, we address the same problem, but at a hadron collider, the LHC, and in the stau NLSP or slepton co-NLSP scenarios.
These scenarios provide a great opportunity at the LHC, since the characteristic signatures with semi-stable charged tracks are muon-like, but come from massive sleptons with a $`\beta `$ significantly smaller than 1. In particular, we perform our simulations in the ATLAS muon detector, whose large size and excellent time resolution allow a precision measurement of the slepton time of flight from the production vertex out to the muon chambers, and hence of the slepton velocity. Moreover, in the stau NLSP or slepton co-NLSP scenarios, the knowledge of the NLSP mass and lifetime is sufficient to determine $`\sqrt{F}`$, since the factor $``$ in Eq. (3) is exactly equal to 1. This is not the case in the neutralino NLSP scenario, where $``$ depends at least on the neutralino physical composition, and more information and measurements are needed for extracting a precise value of $`\sqrt{F}`$.

## B.2 Choice of the Sample Models and Event Simulation

The two main parameters affecting the experimental measurement at the LHC of the slepton NLSP properties are the slepton mass and momentum distribution. Indeed, at a hadron collider most of the NLSP’s come from squark and gluino production, followed by cascade decays. Thus, the momentum distribution is in general a function of the whole MSSM spectrum. However, one can approximately assume that most of the information on the NLSP momentum distribution is provided by the squark mass scale $`m_{\stackrel{~}{q}}`$ only (in the stau NLSP or slepton co-NLSP scenarios of GMSB, one generally finds $`m_{\stackrel{~}{g}}>m_{\stackrel{~}{q}}`$). To perform detailed simulations, we select a representative set of GMSB models generated by SUSYFIRE. We limit ourselves to models with $`m_{\mathrm{NLSP}}>100`$ GeV, motivated by the discussion in Sec. 2, and $`m_{\stackrel{~}{q}}<2`$ TeV, in order to yield adequate event statistics after a three-year low-luminosity run (corresponding to 30 fb<sup>-1</sup>) at the LHC.
Within these ranges, we choose eight extreme points (four in the stau NLSP scenario and four in the slepton co-NLSP scenario) allowed by GMSB in the ($`m_{\mathrm{NLSP}}`$, $`m_{\stackrel{~}{q}}`$) plane, in order to cover the various possibilities. In Tab. 2, we list the input GMSB parameters we used to generate these eight points, while in Tab. 3 we report the corresponding values of the stau mass, the squark mass scale and the gluino mass. The “NLSP” column indicates whether the model belongs to the stau NLSP or slepton co-NLSP scenario. The last column gives the total cross section in pb for producing any pair of SUSY particles at the LHC. For each model, the events were generated with the ISAJET Monte Carlo , which incorporates the calculation of the SUSY mass spectrum and branching fractions using the GMSB parameters as input. We have checked that for the eight model points considered the sparticle masses calculated with ISAJET are in good agreement with the output of SUSYFIRE. The generated events were then passed through ATLFAST , a fast particle-level simulation of the ATLAS detector. The ATLFAST package, however, was only used to evaluate the efficiency of the calorimetric trigger that selects the GMSB events. The detailed response of the detector to the slepton NLSP has been parametrised for this work using the results of a full simulation study, as described in the next section.

## B.3 Slepton Detection

The experimental signatures of heavy long-lived charged particles at a hadron collider have already been studied, both in the framework of GMSB and in more general scenarios . The two main observables one can use to separate these particles from muons are the high specific ionisation and the time of flight in the detector. We concentrate here on the measurement of the time of flight, made possible by the timing precision ($`\lesssim 1`$ ns) and the size of the ATLAS muon spectrometer.
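To illustrate why the time-of-flight measurement is feasible, the short sketch below compares the arrival time of a heavy slepton with that of a $`\beta \simeq 1`$ muon over an assumed 10 m flight path to the muon chambers; the mass and momentum values are purely illustrative. The resulting delay of a few ns sits comfortably above the quoted $`\lesssim 1`$ ns timing precision.

```python
# Rough sketch: time-of-flight delay of a heavy slepton relative to a
# beta ~ 1 muon. The 10 m path length and the (mass, momentum) values
# are illustrative assumptions, not detector-simulation output.
import math

C = 0.2998   # m/ns, speed of light
L = 10.0     # m, assumed distance from the vertex to the muon chambers

def beta(m, p):
    """Velocity beta = p/E of a particle of mass m and momentum p (GeV)."""
    return p / math.hypot(p, m)

m_slep, p = 100.0, 250.0            # GeV: a 100 GeV NLSP with 250 GeV momentum
b = beta(m_slep, p)
delay = L / C * (1.0 / b - 1.0)     # ns, delay w.r.t. a beta = 1 particle
print(f"beta = {b:.3f}, delay over {L} m = {delay:.2f} ns")
```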
It was demonstrated with a full simulation of the ATLAS muon detector that the $`\beta `$ of a particle can be measured with a resolution that can be approximately parameterised as $`\sigma (\beta )/\beta ^2=0.028`$. The resolution on the transverse momentum measurement for heavy particles is found to be comparable to the one expected for muons. We have therefore simulated the detector response to NLSP sleptons by smearing the slepton momentum and $`\beta `$ according to the parameterisations in Ref. . An important issue is the online selection of the SUSY events. We have not made any attempt to evaluate whether the heavy sleptons can be selected using the muon trigger. For the event selection, we rely on the calorimetric $`E_T^{\mathrm{miss}}`$ trigger, consisting of the requirement of at least one hadronic jet with $`p_T>50`$ GeV and a transverse momentum imbalance, calculated only from the energy deposition in the calorimeter, larger than 50 GeV. We checked that this trigger has an efficiency in excess of 80% for all the considered models. A detailed discussion of the experimental assumptions underlying the results presented here is given in Ref. .

## B.4 Event Selection and Slepton Mass Measurement

In order to select a clean sample of sleptons, we apply the following requirements:

* at least a hadronic jet with $`P_T>50`$ GeV and a calorimetric $`E_T^{\mathrm{miss}}>50`$ GeV (trigger requirement);
* at least one candidate slepton satisfying the following cuts:
  + $`|\eta |<2.4`$, to ensure that the particle is in the acceptance of the muon trigger chamber, and therefore both coordinates can be measured;
  + $`\beta _{\mathrm{meas}}<0.91`$, where $`\beta _{\mathrm{meas}}`$ is the $`\beta `$ of the particle measured with the time of flight in the precision chambers;
  + the $`P_T`$ of the slepton candidate, after the energy loss in the calorimeters has been taken into account, must be larger than 10 GeV, to ensure that the particle traverses all of the muon stations.
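A minimal toy version of the mass determination built on this selection might look as follows: smear $`\beta `$ with the $`\sigma (\beta )/\beta ^2=0.028`$ resolution quoted above, smear the momentum with an assumed 3% resolution, and take the weighted average of $`m=p\sqrt{1-\beta ^2}/\beta `$. The true mass, the momentum spectrum and the sample size are illustrative assumptions, not taken from the ATLAS study.

```python
# Toy weighted-average slepton mass measurement. Resolutions: the
# sigma(beta)/beta^2 = 0.028 value is from the text; the 3% momentum
# resolution and the flat momentum spectrum are assumptions.
import math
import random

random.seed(1)
m_true, n_events = 300.0, 5000          # GeV, toy sample size

def measure(m, p):
    """Return smeared (momentum, beta) for a particle of mass m, momentum p."""
    b_true = p / math.hypot(p, m)
    b = random.gauss(b_true, 0.028 * b_true**2)   # sigma(beta) = 0.028 beta^2
    p_meas = random.gauss(p, 0.03 * p)            # assumed 3% momentum resolution
    return p_meas, b

masses, weights = [], []
for _ in range(n_events):
    p = random.uniform(300.0, 600.0)              # toy momentum spectrum
    p_meas, b = measure(m_true, p)
    if b < 0.91:                                  # selection cut from the text
        m = p_meas * math.sqrt(1.0 - b * b) / b
        # per-event error: propagate sigma(beta) and sigma(p) into sigma(m)
        dm_db = p_meas / (b * b * math.sqrt(1.0 - b * b))
        sigma = math.hypot(dm_db * 0.028 * b * b, 0.03 * m)
        masses.append(m)
        weights.append(1.0 / sigma**2)

m_avg = sum(m * w for m, w in zip(masses, weights)) / sum(weights)
print(f"weighted-average mass: {m_avg:.1f} GeV (true {m_true})")
```

The $`\beta <0.91`$ cut also guarantees that the square root in the mass formula stays real for every accepted candidate.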
Considering an integrated luminosity of 30 fb<sup>-1</sup>, a number of events ranging from a few hundred for the models with a 2 TeV squark mass scale to a few hundred thousand for a 500 GeV mass scale survive these cuts and can be used for measuring the NLSP properties. From the measurements of the slepton momentum and of the particle $`\beta `$, the mass can be determined using the standard relation $`m=p\frac{\sqrt{1-\beta ^2}}{\beta }`$. For each value of $`\beta `$ and momentum, the measurement error is known, being given by the parametrisations in Ref. . Therefore, the most straightforward way of measuring the mass is just to use the weighted average of all the masses calculated with the above formula. In order to perform this calculation, the particle momentum is needed, which implies measuring the $`\eta `$ coordinate. In fact, with the precision chambers alone one can only measure the momentum components transverse to the beam axis. The measurement of the second coordinate must be provided by the trigger chambers, for which only a limited time window around the beam crossing is read out, therefore restricting the $`\beta `$ range where this measurement is available. Hence, we have evaluated the measurement precision achieved for two different $`\beta `$ intervals, $`0.6<\beta <0.91`$ and $`0.8<\beta <0.91`$, for the eight sample points. We found a statistical error well below the 0.1% level for those model points having $`m_{\stackrel{~}{q}}<1300`$ GeV. Even for the three models (B5, B7, B8) with lower statistics ($`m_{\stackrel{~}{q}}\simeq 2`$ TeV), the error stays below the 0.4% level. Many more details, tables and figures about this part of our study can be found in Ref. .

## B.5 Slepton Lifetime Measurement

The measurement of the NLSP lifetime at a high energy $`e^+e^{}`$ collider was studied in detail in Ref. for the neutralino NLSP case.
As in that study, the measurement of the slepton NLSP lifetime we are interested in here can be performed by exploiting the fact that two NLSP’s are produced in each event. One can therefore select $`N_1`$ events where a slepton is detected through the time-of-flight measurement described above, count the number of times $`N_2`$ when a second slepton is observed, and use this information to measure the lifetime. Although in principle very simple, in practice this method requires an excellent control of all possible sources of inefficiency for detecting the second slepton. We give here the basis of the method, without mentioning the experimental details. We provide an estimate of the achievable statistical error for the models considered and a parametrisation of the effect on the lifetime measurement of a generic systematic uncertainty on the slepton efficiency. In case the sparticle spectrum and BR’s can be measured from the SUSY events, as e.g. shown in Ref. , an accurate simulation of all the SUSY production processes can be performed, and the results from this section are representative of the measurement precision achievable in a real experiment. Another method based on the same principles, but assuming minimal knowledge of the SUSY spectrum, is described in Ref. , where a detailed estimate of the achievable systematic precision is given. We define $`N_1`$ starting from the event sample defined by the cuts discussed in Sec. B.4, with the additional requirement that, for a given value of the slepton lifetime, at least one of the produced sleptons decays at a distance from the interaction vertex $`>10`$ m, and is therefore reconstructed in the muon system. For the events thus selected, we define $`N_2`$ as the subsample where a second particle with a transverse momentum $`>10`$ GeV is identified in the muon system.
The search for the second particle should be as inclusive as possible, in order to minimise the corrections to the ratio. In particular, the cut $`\beta _{\mathrm{meas}}<0.91`$ is not applied, but particles with a mass measured from $`\beta `$ and momentum incompatible with the measured slepton mass are rejected. This leaves a background of high momentum muons in the sample that can be statistically subtracted using the momentum distribution of electrons. The ratio $$R=\frac{N_2}{N_1}$$ (7) is a function of the slepton lifetime. Its dependence on the NLSP lifetime $`c\tau `$ in metres is shown in Fig. 5 for four of our eight sample models. The curves for the model points not shown are either very similar to one of the curves we show or lie mostly between the external curves corresponding to points B1 and B8, thus providing no essential additional information. Note that the curve for model point B6 starts from $`c\tau =2.5`$ m and not from $`c\tau =50`$ cm, as for the other models. This is due to the large value of $`M_{\mathrm{mess}}`$ (cf. Tab. 2), determining a minimum NLSP lifetime allowed by theory which is macroscopic in this case \[cf. Eqs. (3) and (5)\]. The probability for a particle of mass $`m`$, momentum $`p`$ and proper lifetime $`\tau `$ to travel a distance $`L`$ before decaying is given by the expression $$P(L)=e^{-mL/(pc\tau )}.$$ (8) $`N_2`$ is therefore a function of the momentum distribution of the sleptons, which is determined by the details of the SUSY spectrum. One therefore needs to be able to simulate the full SUSY cascade decays in order to construct the $`c\tau `$–$`R`$ relationship. The statistical error on $`R`$ can be evaluated as $$\sigma (R)=\sqrt{\frac{R(1-R)}{N_1}}.$$ (9) Relevant for the precision with which the SUSY breaking scale can be measured is instead the error on the measured $`c\tau `$. This can be extracted from the curves shown in Fig.
5 and can be evaluated as $$\sigma (c\tau )=\sigma (R)/\left[\frac{\partial R(c\tau )}{\partial (c\tau )}\right].$$ (10) The measurement precision calculated according to this formula is shown in Figs. 6 and 7 for the eight sample points, for an integrated luminosity of 30 fb<sup>-1</sup>. The full line in the plots is the error on $`c\tau `$ considering the statistical error on $`R`$ only. The available statistics is a function of the strongly interacting sparticles’ mass scale. Even if a precise $`R`$–$`c\tau `$ relation can be built from the knowledge of the model details, there will be a systematic uncertainty in the evaluation of the losses in $`N_2`$, because of sleptons produced outside the $`\eta `$ acceptance, or absorbed in the calorimeters, or escaping the calorimeter with a transverse momentum below the cuts. The full study of these uncertainties is in progress. At this level, we just parameterise the systematic error as a term proportional to $`R`$, added in quadrature to the statistical error. We choose two values, $`1\%R`$ and $`5\%R`$, and propagate the error to the $`c\tau `$ measurement. The results are represented by the dashed and dotted lines in Figs. 6 and 7. For the models with squark mass scales up to 1200 GeV, assuming a 1% systematic error on the measured ratio, a precision better than 10% on the $`c\tau `$ measurement can be obtained for lifetimes between 0.5–1 m and 50–80 m. If the systematic uncertainty grows to 5%, the 10% precision can only be achieved in the range 1–10 m. If the mass scale goes up to 2 TeV, even considering the statistical error alone, a 10% precision is not achievable. However, a 20% precision is possible over $`c\tau `$ ranges between 5 and 100 m, assuming a 1% systematic error. Note that the curves corresponding to the model points B2, B6 and B7 do not start from $`c\tau =50`$ cm, but from the theoretical lower limit on $`c\tau `$ of 1.8, 2.5 and 6.1 metres, respectively.
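The chain from the counting ratio $`R`$ to $`\sigma (c\tau )`$, Eqs. (7)–(10), and then to the fractional error on $`\sqrt{F}`$ discussed in Sec. B.6, can be sketched as a toy calculation. The momentum spectrum, the true lifetime, the event count and the mass error below are illustrative assumptions; the propagation to $`\sqrt{F}`$ uses the scaling $`\sqrt{F}\propto (m^5c\tau )^{1/4}`$, consistent with the quadrature combination of errors quoted in Sec. B.6.

```python
# Toy illustration (not the ATLAS simulation) of the counting method:
# build R(ctau) from the survival probability P(L) = exp(-m*L/(p*ctau)),
# propagate the binomial error on R to ctau (Eq. (10)) and then to sqrt(F).
import math
import random

random.seed(2)
m, L = 300.0, 10.0   # GeV slepton mass; metres from vertex to the muon system
momenta = [random.uniform(200.0, 900.0) for _ in range(20000)]   # toy spectrum

def R(ctau):
    """Average probability that a slepton survives out to L, per Eq. (8)."""
    return sum(math.exp(-m * L / (p * ctau)) for p in momenta) / len(momenta)

ctau = 5.0     # metres, assumed true lifetime
N1 = 10000     # assumed number of events with one identified slepton
r = R(ctau)
sigma_R = math.sqrt(r * (1.0 - r) / N1)                        # Eq. (9)
dR_dctau = (R(ctau * 1.01) - R(ctau * 0.99)) / (0.02 * ctau)   # numerical slope
sigma_ctau = sigma_R / dR_dctau                                # Eq. (10)

# sqrt(F) ~ (m^5 ctau)^(1/4): combine 1/4 of the fractional ctau error with
# 5/4 of the fractional mass error (per-mille level, Sec. B.4) in quadrature
sigma_m_over_m = 0.001
frac_F = math.hypot(0.25 * sigma_ctau / ctau, 1.25 * sigma_m_over_m)
print(f"R = {r:.3f}, sigma(ctau)/ctau = {sigma_ctau / ctau:.3f}, "
      f"sigma(sqrtF)/sqrtF = {frac_F:.3f}")
```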
## B.6 Determining the SUSY Breaking Scale $`\sqrt{F}`$ Using the measured values of $`c\tau `$ and the NLSP mass, the SUSY breaking scale $`\sqrt{F}`$ can be calculated from Eq. (3), where $`=1`$ for the case where the NLSP is a slepton. From simple error propagation, the fractional uncertainty on the $`\sqrt{F}`$ measurement can be obtained by adding in quadrature one fourth of the fractional error on $`c\tau `$ and five fourths of the fractional error on the slepton mass. In Figs. 8 and 9, we show the fractional error on the $`\sqrt{F}`$ measurement as a function of $`\sqrt{F}`$ for our three different assumptions on the $`c\tau `$ error. The uncertainty is dominated by $`c\tau `$ in the higher part of the $`\sqrt{F}`$ range and grows quickly when approaching the lower limit on $`\sqrt{F}`$. This is because very few sleptons survive and the statistical error on both $`m_\stackrel{~}{\mathrm{}}`$ and $`c\tau `$ gets very large. If we assume a 1% systematic error on the ratio $`R`$ from which $`c\tau `$ is measured (dashed lines in Figs. 8 and 9), the error on $`\sqrt{F}`$ is better than 10% for $`1000<\sqrt{F}<4000`$ TeV for model points B1–B4, which have higher statistics. For points B5–B8, one can in general explore a range of higher $`\sqrt{F}`$ values with a small relative error, essentially because of the heaviness of the decaying NLSP in these models. Note also that the theoretical lower limit (5) on $`\sqrt{F}`$ is equal to about 1200, 1500, 3900, 8900 TeV, respectively, in model points B2, B5, B6, B7, while it stays well below 1000 TeV for the other models. ## B.7 Conclusions We have discussed a simple method to measure at the LHC with the ATLAS detector the fundamental SUSY breaking scale $`\sqrt{F}`$ in the GMSB scenarios where a slepton is the NLSP and decays to the gravitino with a lifetime in the range 0.5 m $`<c\tau _{\mathrm{NLSP}}<\mathrm{\hspace{0.33em}1}`$ km.
This method requires the measurement of the time of flight of long lived sleptons and is based on counting events with one or two identified NLSP’s. It relies on the assumptions that a good knowledge of the MSSM sparticle spectrum and BR’s can be extracted from the observation of the SUSY events and that the systematic error in evaluating the slepton losses can be kept below the few percent level. We performed detailed, particle level simulations for eight representative GMSB models, some of them particularly hard due to low statistics. We found that a precision of a few tens of percent on the SUSY breaking scale measurement can be achieved in significant parts of the $`1000<\sqrt{F}<30000`$ TeV range, for all models considered. More details, as well as a full study of the systematics associated with this procedure and another less “model-dependent” method to measure $`\sqrt{F}`$, are presented in Ref. . ### Acknowledgements S. A. and G. P. thank the organisers of the Workshop “Physics at TeV Colliders” for the hospitality and the pleasant and productive atmosphere in Les Houches.
# Intrinsic tunneling spectroscopy in small Bi2212 mesas

Vladimir Krasnov<sup>1,2</sup>, August Yurgens<sup>1,3</sup>, Dag Winkler<sup>4,1</sup> and Per Delsing<sup>1</sup> <sup>1</sup> MINA, Chalmers University of Technology, S41296, Göteborg, Sweden <sup>2</sup> Institute of Solid State Physics, 142432 Chernogolovka, Russia <sup>3</sup> P.L. Kapitsa Institute, 117334 Moscow, Russia <sup>4</sup> IMEGO Institute, Aschebergsgatan 46, S41133, Göteborg, Sweden

FIG. 1 - $`J`$ vs. $`V/N`$ curves for three mesas at $`T`$=4.2 K and $`T`$=150 K. Inset a) shows the temperature dependencies of the gap voltage, $`V_g`$, and of the voltage separation of multiple quasiparticle branches, $`\mathrm{\Delta }V`$. Inset b) shows the temperature dependence of the zero-bias resistance.

Tunneling spectroscopy of high-$`T_c`$ superconductors (HTSC) provides important information about the quasiparticle density of states (DOS), which is crucial for understanding the HTSC mechanism. Surface tunneling experiments showed that, besides the superconducting gap in the DOS, $`\mathrm{\Delta }`$, there is a structure usually referred to as the ”pseudo-gap”, which exists well above $`T_c`$. Both the behaviour of the superconducting gap and its correlation with the pseudo-gap are still a matter of controversy. To avoid the drawbacks of surface tunneling experiments, such as dependence on surface deterioration, surface states and undefined geometry, we used ”intrinsic” tunneling spectroscopy. HTSC single crystals can be considered as stacks of atomic scale intrinsic Josephson junctions (IJJ’s). Using microfabrication, it is possible to make small HTSC mesa structures with a well defined geometry. Moreover, IJJ’s far from the sample surface can be measured, so that deterioration of the sample surface becomes less important. Current-voltage characteristics (IVC’s) of mesas exhibit tunnel junction behavior and can be used for studying the DOS.
On the other hand, intrinsic tunneling spectroscopy can suffer from ohmic heating of the IJJ’s and from steps (defects) on the surface of the crystal. In this paper we present experimental data for small area Bi<sub>2</sub>Sr<sub>2</sub>CaCu<sub>2</sub>O<sub>8+x</sub> (Bi2212) mesas containing a few IJJ’s. By decreasing the mesa area, $`S`$, we minimize both the effect of overheating and the probability of defects in the mesa. Mesas with dimensions from 2 to 20 $`\mu `$m were fabricated simultaneously on top of Bi2212 single crystals. First, a long and narrow mesa was fabricated using photolithography and chemical etching. Next, an insulating CaF<sub>2</sub> layer was deposited and lift-off was used to make an opening. Finally, an Ag film was deposited and electrodes were formed on top of the initial mesa by photolithography and Ar-ion etching. After etching, mesas beneath the Ag electrodes remain. In Fig.1, normalized IVC’s of three mesas at $`T`$=4.2 K and $`T`$=150 K are shown. The vertical axis represents the current density, $`J=I/S`$, and the horizontal axis shows the voltage per junction, $`V/N`$. IVC’s for different samples are plotted in different colours. Parameters of the mesas are listed in Table 1. From Fig.1 it is seen that the normalized IVC’s merge quite well into one curve. This indicates good reproducibility of the fabrication procedure. The $`c`$-axis normal resistivity was $`\rho _N=R_NS/(Ns)=44\pm 2`$ $`\mathrm{\Omega }`$cm, where $`s=15.5\AA `$ is the stacking periodicity of the IJJ’s and $`R_N`$ is the resistance at large bias current; $`T_c\approx 93`$ K.
Table 1:

| mesa | $`S`$ ($`\mu `$m<sup>2</sup>) | $`N`$ | $`R_N(\mathrm{\Omega })`$ |
| --- | --- | --- | --- |
| S251b | $`6\times 6`$ | 12 | 229.9 |
| S255b | $`5.5\times 6`$ | 12 | 256.4 |
| S211b | $`4\times 7.5`$ | 12 | 272.5 |
| S216b | $`4\times 20`$ | 10 | 87 |

From Fig.1 it is seen that the IVC’s exhibit clear tunnel junction behavior: (i) At low bias, multiple quasiparticle branches are seen, representing one-by-one switching of the IJJ’s into the resistive state. The number of IJJ’s in the mesa was obtained by counting those branches. (ii) At intermediate currents there is a pronounced knee in the IVC’s, representing the sum-gap voltage, $`V_g=2\mathrm{\Delta }/e`$. (iii) At high currents, there is a well defined normal resistance branch, $`R_N`$. As seen from Fig.1, $`R_N`$ is almost temperature independent, as may be expected for a tunnel resistance. In contrast, the zero bias resistance, $`R(0)`$, increases sharply with decreasing $`T`$, as shown in inset b). Inset a) in Fig.1 shows the temperature dependence of the gap voltage, $`V_g`$, and of the maximum spacing between multiple quasiparticle branches, $`\mathrm{\Delta }V`$, for four mesas on two different chips. Solid and open symbols represent branches for $`V>0`$ and $`V<0`$, respectively. $`\mathrm{\Delta }V`$ was determined at the ”maximum critical current”, at which the last IJJ switches to the resistive state; e.g. in Fig.1 that would correspond to $`J\approx 1.3\times 10^3`$ A/cm<sup>2</sup>. This current fluctuates from run to run, thus causing an uncertainty in the determination of $`\mathrm{\Delta }V`$. From inset a) it is seen that $`\mathrm{\Delta }V`$ is approximately two times smaller than $`V_g`$. This is simply due to the fact that all the IJJ’s switch to the resistive state before they reach the gap voltage, see Fig.1. However, it is seen that the temperature dependence of $`\mathrm{\Delta }V`$ reflects that of $`V_g`$.
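As a consistency check, the quoted $`c`$-axis resistivity $`\rho _N=R_NS/(Ns)`$ can be recomputed from the Table 1 parameters with a few lines of arithmetic (a sketch; unit conversions are spelled out in the comments):

```python
# rho_N = R_N * S / (N * s), with S converted to cm^2 and
# s = 15.5 Angstrom the stacking period of the intrinsic junctions.
s_cm = 15.5e-8            # interlayer period in cm
um2_to_cm2 = 1e-8         # 1 um^2 = 1e-8 cm^2

mesas = {                 # name: (S in um^2, N, R_N in Ohm), from Table 1
    "S251b": (6.0 * 6.0, 12, 229.9),
    "S255b": (5.5 * 6.0, 12, 256.4),
    "S211b": (4.0 * 7.5, 12, 272.5),
    "S216b": (4.0 * 20.0, 10, 87.0),
}

rho_N = {name: R * S * um2_to_cm2 / (N * s_cm)
         for name, (S, N, R) in mesas.items()}
# all four values come out close to the quoted 44 +/- 2 Ohm*cm
```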
The observed $`V_g`$ corresponds to $`\mathrm{\Delta }\approx 32`$ meV at $`T`$=4.2 K, in agreement with . The IVC’s in Fig.1 suggest that there is no significant heating effect in our mesas. Indeed, overheating should cause a reduction of $`V_g`$. Instead, we observed that the normalized IVC’s merge into a single curve with identical $`V_g`$, despite a considerable difference in the dissipated power, $`P\propto S`$. Moreover, we have checked that the voltages of the quasiparticle branches scale with their number, $`V_n\approx \frac{n}{N}V_N`$, where $`V_N`$ is the top branch, with all IJJ’s in the resistive state. Thus, switching of additional IJJ’s does not cause visible overheating of the mesa. On the other hand, we did observe strong overheating for even smaller mesas (2$`\times 4`$ $`\mu `$m<sup>2</sup>) containing a larger number of IJJ’s $`(\sim 100)`$, so that a clear back-bending was seen at large currents and a significantly lower $`V_g`$ was obtained, similar to that in Refs.. Therefore, a small number of IJJ’s in the mesa decreases the risk of overheating. In Fig.2, the conductance at different temperatures is shown for one of the samples. The sharp peak at $`V_g`$ and the depletion of the conductance at $`\left|V\right|<V_g`$, seen at low $`T`$, represent the superconducting gap in the DOS. The suppression of the DOS below the gap results in the strong temperature dependence of $`R(0)`$, see inset b) in Fig.1. With increasing temperature, the peak at $`V_g`$ shifts to lower voltages and decreases in magnitude. At $`T\approx 80`$ K, the peak is smeared out completely and only a smooth depletion of the conductance remains at $`V`$=0. With a further increase of $`T`$, this depletion gradually decreases, but is still visible even at room temperature, representing the pseudo-gap in the DOS. In agreement with the surface tunneling experiments, there is almost no change in the conductance at $`T_c`$, which implies that the pseudo-gap coexists with superconductivity.
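The sum-gap relation $`V_g=2\mathrm{\Delta }/e`$ and the branch-voltage scaling $`V_n\approx (n/N)V_N`$ can be sketched in a few lines. The top-branch voltage below is a hypothetical number chosen only to illustrate the equal spacing of branches, not a value from the text:

```python
def gap_from_sumgap(v_g_mV):
    """Gap Delta in meV from the per-junction sum-gap voltage in mV,
    via V_g = 2*Delta/e (so 64 mV per junction corresponds to 32 meV)."""
    return v_g_mV / 2.0

# Equally spaced quasiparticle branches V_n ~ (n/N)*V_N for N = 12
# junctions and a hypothetical top-branch voltage V_N:
N, V_N = 12, 0.5   # V_N in volts, illustrative only
branches = [n * V_N / N for n in range(1, N + 1)]
```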
There is a crucial difference between our, ”intrinsic”, tunneling experiments and the surface tunneling experiments, which allows us to distinguish the superconducting gap from the pseudo-gap. The difference is the existence of multiple quasiparticle branches in the IVC’s, see Fig.1. We therefore have an additional quantity, the spacing between the quasiparticle branches, $`\mathrm{\Delta }V`$, which can be used for estimating $`\mathrm{\Delta }`$. Even though the peak in the conductance is smeared out at $`T>`$80 K, the quasiparticle branches in the IVC’s remain well defined up to $`T_c`$. $`\mathrm{\Delta }V`$ decreases continuously with increasing $`T`$ and vanishes exactly at $`T_c`$, as shown in inset a) of Fig.1. This brings us to the conclusion that the superconducting gap does close at $`T_c`$, in contrast to the statement of Ref.. On the other hand, the pseudo-gap is almost independent of $`T`$, in agreement with . The pseudo-gap can exist well above $`T_c`$ and, probably, can coexist with the superconducting gap even at $`T<T_c`$, see Fig.2. We can conclude neither that the superconducting gap develops from the pseudo-gap nor that the two are competing with each other. From our experiment, it seems more natural to assume that these two gaps are independent or only weakly dependent, despite having the same order of magnitude. In Ref. possible scenarios for the pseudo-gap were reviewed. One of the possible mechanisms is the Coulomb charging effect in IJJ’s. Some experimental evidence for that was obtained in . For the smallest mesas, we have also seen certain features, such as a complete suppression of the critical current and an offset voltage in the IVC’s, which may be explained in terms of the Coulomb charging effect. However, further study is necessary before a decisive conclusion about the origin of the pseudo-gap can be made.
# Crystal-like high frequency phonons in the amorphous phases of solid water \[ ## Abstract The high frequency dynamics of low- ($`LDA`$) and high-density amorphous ice ($`HDA`$) and of cubic ice ($`I_c`$) has been measured by inelastic X-ray scattering (IXS) in the 1–15 nm<sup>-1</sup> momentum transfer ($`Q`$) range. Sharp phonon-like excitations are observed, and the longitudinal acoustic branch is identified up to $`Q=8`$ nm<sup>-1</sup> in $`LDA`$ and $`I_c`$ and up to 5 nm<sup>-1</sup> in $`HDA`$. The narrow width of these excitations is in sharp contrast with the broad features observed in all amorphous systems studied so far. The ”crystal-like” behavior of amorphous ices therefore implies a considerable reduction in the number of decay channels available to sound-like excitations, which points to a low degree of local disorder. \] Amorphous polymorphism, i.e. the existence of two or more amorphous states in the phase diagram of a chemical substance, has recently attracted wide interest from the scientific community . In water, amorphous polymorphism has received particular attention as it was associated with a possible phase separation in the deeply undercooled liquid; this is suggested by the idea that the two amorphous water phases may be identified with the glassy forms of two liquid phases. In systems like water, the ergodic to non-ergodic transition from the undercooled liquid to the glass cannot be studied continuously as a function of temperature due to homogeneous crystallization. The glassy nature of the amorphous ice phases can, therefore, only be established indirectly, and this has contributed to raising controversies on the exact nature of the phase diagram of water and on the inter-relations among the different stable and metastable phases of this molecule . The growing perception that the glassy state is a very general state of condensed matter justifies the many experimental, theoretical and simulation studies performed on glasses.
Among them, there is a growing interest in the study of the collective dynamical properties at high frequency, i.e. in the wavelength regime approaching the interparticle distances. Here one expects that either a continuum description, valid in the hydrodynamic limit, or the ordered-medium (phonon) picture developed for crystalline materials will fail as a consequence of the disorder. The deviations can then be used to characterize and better understand the glassy state. More specifically, there are a number of dynamic signatures which are associated with the glassy state . At very low temperatures, two-level states give rise to tunneling phenomena . At higher temperatures, disorder scattering and anharmonicities play an increasingly important role, leading to excess densities of states (Boson peak) and relaxational phenomena . Recently, the overdamping of sound waves at high frequency was proposed as a further criterion to characterize glassy behavior . The study of the collective dynamics of amorphous ice at wavelengths approaching the distance between water molecules may contribute, therefore, to the understanding of the differences between these two phases, and may help to shed light on such issues as their relation to the stable crystalline phases or the existence of two different liquid phases. In the present letter we present the excitation spectrum of both amorphous forms of water, i.e. high-density ($`HDA`$) and low-density amorphous ice ($`LDA`$), as determined by high resolution inelastic X-ray scattering (IXS) at momentum transfers ($`Q`$) comparable to the inverse of the inter-molecular distance. The experiment was performed at the very high energy resolution ID16 beamline of the European Synchrotron Radiation Facility in Grenoble. The instrument energy resolution was set to 1.6 meV full width at half maximum (FWHM) using 21.748 keV incident photons.
The detection system was made up of five independent analyzer systems, displaced from each other by a constant angular offset corresponding to a $`Q`$-spacing of 3 nm<sup>-1</sup>. The $`Q`$ resolution was set to 0.4 nm<sup>-1</sup> FWHM. The energy scans at constant $`Q`$-transfer took about 150 minutes, and each $`Q`$ point was obtained by a typical averaging of 3-5 scans. The transverse dimensions of the beam at the sample were 0.15 $`\times `$ 0.3 mm<sup>2</sup>. Further experimental details can be found elsewhere . The $`HDA`$ sample was obtained by pressurizing hexagonal D<sub>2</sub>O ice $`I_h`$ at 77 K beyond 10 kbar using the piston-cylinder apparatus described previously . Keeping the sample always at liquid nitrogen temperature, the metastable compound was retrieved from the cell at ambient pressure in mm-size chunks ($`\rho =1.17\pm 0.02`$ g/cm<sup>3</sup>) and placed into a steel container with two diametrically opposed openings to allow for the passage of the incident and scattered X-ray beams. The effective powder sample thickness along the beam was $`\sim `$ 15 mm, matching well the X-ray photo-absorption length. Once filled, the container was transferred under a continuous helium flow from the liquid nitrogen bath onto the cold finger of a precooled closed-cycle refrigerator. Above $`\sim 90`$ K, $`HDA`$ transforms with a strongly temperature dependent rate to $`LDA`$ ($`\rho =0.94\pm 0.02`$ g/cm<sup>3</sup>), which, in turn, converts into cubic ice $`I_\mathrm{c}`$ at $`\sim 140`$ K. $`HDA`$ and $`LDA`$, after proper annealing, were both measured at 60 K, while $`I_c`$ after annealing was measured at 80 K. The similarity of the temperatures allows a direct comparison of the scattering intensities. The purity of each phase was checked by measuring the respective static structure factors, $`S(Q)`$, reported in Fig. 1 with a momentum resolution of 0.04 nm<sup>-1</sup>.
Both the $`HDA`$ and $`LDA`$ samples show the known static structure factor with no signs of Bragg peaks and, therefore, they are free of crystalline ice XII . There is a pronounced small-angle signal in $`HDA`$ which disappears upon annealing to $`LDA`$. Within our energy resolution, its origin is purely elastic, as will be seen from the inelastic scans reported in the following. As only moderate changes in the elastic small angle signal were observed with neutrons , it is questionable whether the strong effect observed here is a real bulk property of the $`HDA`$ sample or whether its origin stems from surface scattering in the powder sample. A selection of inelastic spectra obtained in the range $`Q<8`$ nm<sup>-1</sup> is reported in Fig. 2 and Fig. 3. In both amorphous ice phases and in the cubic crystalline phase there are well pronounced resonances. With increasing $`Q`$, one initially observes a single feature with a marked dispersion. At $`Q`$-values higher than 5 nm<sup>-1</sup>, the spectra become more complicated owing to the appearance of additional inelastic features. As X-rays, like neutrons, couple directly to the longitudinal component of the density fluctuations, the dispersive excitation observed from the lowest $`Q`$-values can readily be identified with the longitudinal acoustic-like branches. In $`HDA`$ these excitations can be clearly observed up to about 5 nm<sup>-1</sup>, while in $`LDA`$ and $`I_c`$ they can be safely distinguished from the other feature(s) up to 10 nm<sup>-1</sup>. In order to derive the energy position, $`\mathrm{}\mathrm{\Omega }(Q)`$, and the width, $`\mathrm{}\mathrm{\Gamma }(Q)`$, of the excitations, they have been fitted by a Lorentzian function convoluted with the experimentally determined instrumental resolution function. This resolution function corresponds to the central peak at $`E=0`$ in Fig.
2, which, as discussed previously, is due to the static disorder in the samples and to small angle scattering from the sample environment (see Fig. 2 and Ref.). The best fit to the inelastic X-ray scattering data shows that the excitation energy $`\mathrm{}\mathrm{\Omega }(Q)`$ scales linearly with the momentum transfer $`Q`$ in the limit of small momentum transfers. From the derived linear relation, $`\mathrm{\Omega }(Q)=cQ`$, it is possible to obtain the longitudinal sound velocities $`c_{HDA}=3550\pm 50`$ m/s, $`c_{LDA}=3550\pm 50`$ m/s and $`c_{I_c}=3750\pm 50`$ m/s, respectively. The differences among these values are small, in agreement with the similar Debye levels found in these three materials by neutron scattering experiments . The determination of the resonance widths, $`\mathrm{}\mathrm{\Gamma }(Q)`$, from the best fits is more involved, and is complicated by two facts: i) the absolute values of $`\mathrm{\Gamma }(Q)`$ are highly correlated with the background, and ii), particularly in $`HDA`$, the features appearing with increasing $`Q`$ on the low-frequency side of the acoustic-like peaks interfere with the fit. Therefore, no systematic dependence of $`\mathrm{\Gamma }(Q)`$ on $`Q`$ has been established. In both $`LDA`$ and $`HDA`$ one observes, however, that the resonances become broader with increasing $`Q`$-transfer. In any case, care has to be taken in interpreting such broadening because the line shape of the resonances is unknown. For a single phonon it could be assumed Lorentzian, reflecting the finite lifetime of the excitation. However, when taking a powder average the line shape acquires a non-analytic form, even for the crystal, due to the anisotropy within the multidimensional dispersion sheet. Despite the difficulty of giving a full account of the observed broadening in the measured spectra, one can deduce from Figs.
2 and 3 the following important observation: as seen by the naked eye, the width of the resonances remains far smaller than the excitation energy, $`\mathrm{}\mathrm{\Omega }(Q)`$, for all the considered $`Q`$-values. This result is very different from the observations made so far in other glassy systems. There, in fact, one observes an acoustic phonon-like resonance with a linear dispersion of $`\mathrm{\Omega }(Q)`$ vs $`Q`$ and a quadratic dispersion of $`\mathrm{\Gamma }(Q)`$ vs $`Q`$ up to a value $`Q=Q_m`$. $`Q_m`$ is defined by the relation $`\mathrm{\Omega }(Q_m)\approx \mathrm{\Gamma }(Q_m)`$. At $`Q`$ larger than $`Q_m`$ it is no longer possible to observe well defined excitations, and the inelastic part of the spectrum is, at most, a broad and structureless background. Let us take the position, $`Q_M`$, of the first sharp diffraction peak as an indicator of the extent of structural correlations in the two amorphous phases. This allows us to define a pseudo Brillouin zone which extends to $`Q_M/2\approx 10`$ nm<sup>-1</sup> in $`HDA`$ and to $`Q_M/2\approx 8`$ nm<sup>-1</sup> in $`LDA`$. The quantity $`Q_m/Q_M`$ has so far always been found smaller than 0.5 . In the two amorphous ice phases studied here, excitations are very well defined up to $`Q`$-transfer values approaching $`Q_M`$, and, at least in $`LDA`$, the longitudinal acoustic-like branch can be identified very well at least up to $`Q=8`$ nm<sup>-1</sup>. In Ref. , the possible existence of a relation between the value of $`Q_m/Q_M`$ and the degree of fragility of the considered glass has been suggested . The high value of $`Q_m/Q_M`$, which seems to approach unity in these two glasses, would imply that the amorphous ices, and especially $`LDA`$, lie at the extreme end of fragile glasses. This is in contradiction with the view that water passes through an inflection in the deeply supercooled region, where the liquid behavior changes from extremely fragile to strong .
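The sound velocities quoted above follow directly from the slope of the linear dispersion $`\mathrm{\Omega }(Q)=cQ`$: an excitation energy in meV at a given $`Q`$ in nm<sup>-1</sup> converts to m/s via $`v=E/(\mathrm{}Q)`$. A minimal sketch (the 4.67 meV point is illustrative, chosen to reproduce the $`LDA`$/$`HDA`$ value, not a measured datum):

```python
HBAR = 1.0545718e-34        # reduced Planck constant, J*s
MEV_TO_J = 1.602176634e-22  # 1 meV in J

def sound_velocity(E_meV, Q_inv_nm):
    """v = Omega/Q = E/(hbar*Q) for one point on the acoustic branch,
    with E in meV and Q in nm^-1; returns m/s."""
    return (E_meV * MEV_TO_J) / (HBAR * Q_inv_nm * 1e9)

v = sound_velocity(4.67, 2.0)   # close to the 3550 m/s of LDA and HDA
```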
A further observation coming from the analysis of the spectra in Figs. 2 and 3 is that the spectra of $`LDA`$, at all the considered $`Q`$-values, are very similar to those measured in ice $`I_c`$ . In $`HDA`$ the resemblance is less pronounced . The similarity of $`LDA`$ and $`I_c`$ holds not only for the resonances assigned to the acoustic mode, but also for the excitations at $`\sim `$7 meV, which appear at higher $`Q`$ and recall the transverse dynamics found in ice $`I_h`$ and liquid water . In fact, as in these systems, the excitations set in around 5 nm<sup>-1</sup> and, as seen in Fig. 3, they become more pronounced beyond $`Q_M`$. The translational part of the density of states for $`I_h`$, $`I_c`$ and $`LDA`$, as obtained from INS spectra , is peaked around 6.5 meV (D<sub>2</sub>O). It is therefore to be expected that the IXS spectra show excitations in this energy region at low $`Q`$ due to Umklapp processes. These processes take place via the Bragg peaks in the crystalline state and via the static structure factor in the case of $`LDA`$ . Despite the high definition of the low $`Q`$ excitations in the two amorphous ice phases when compared to other glasses, one recovers a clear indication of their disordered character from the evolution of the inelastic spectrum at larger $`Q`$ values. The spectra of $`LDA`$ and $`HDA`$ are less structured, lacking the sharp features observed in ice $`I_c`$. To conclude, we have reported an IXS measurement of the $`S(Q,E)`$ of the two known phases of amorphous ice. This has allowed us to show that these two states of the water molecule possess a surprisingly crystal-like dynamic response. In both $`HDA`$ and $`LDA`$ the sound wave excitations are well defined. These experimental findings are in sharp contrast with the results found so far in other glasses, glass forming materials, liquids, dense gases and disordered materials in general.
In these systems, in fact, an important broadening has always been observed in the inelastic part of the dynamic structure factor $`S(Q,E)`$. In this $`Q`$-region the scattering experiment becomes sensitive to the topological disorder, which opens decay channels for sound excitations in addition to those available in the crystal. These channels are found to be practically absent in the case of the amorphous ice phases, indicating a very low degree of local disorder. Structural results are discussed controversially and to date do not give a clear picture of the topology . Experimentally, only the pair correlation functions are directly accessible. Higher order correlations, among them the orientational correlation function, must be obtained in an indirect way, e.g. via the dynamic response. We deduce from our data highly intact hydrogen bond networks both in $`LDA`$ and, to a somewhat lesser degree, in $`HDA`$. Although an intact network is in itself no guarantee of the absence of decay channels (e.g. an infinite random framework of corner-linked SiO<sub>2</sub>-tetrahedra can undergo large phonon-assisted distortions), it seems a necessary condition. In $`LDA`$ this network is perfectly annealed, as it is not obtained via a fast quench from the liquid. Due to the constraints of the network, the number of states the system can sample on the ps time scale should be small, i.e. there is a small configurational entropy, a view which is compatible with thermodynamic data . $`HDA`$ is expected to possess a larger configurational entropy, and on this basis one can argue that $`HDA`$ shows more ”glassy behavior” than $`LDA`$. Apart from structure and bonding, the dynamical properties of the water molecule influence the decay of sound-like excitations both in crystalline and amorphous ice. We point here to the clear separation of the translational and librational bands which, independent of the structural details, arises from the very small moment of inertia of the water molecule.
This separation closes decay channels (e.g. those present in SiO<sub>2</sub>) involving resonances of acoustic-like and librational modes, and may equally explain the absence of strong excess intensities in the inelastic neutron scattering data . Temperature and the concomitant anharmonicities equally have to be given proper consideration in the discussion . In the end, only detailed molecular dynamics calculations on well characterized ensembles, combined with experiments on similar systems, will be able to unambiguously give the reasons for the crystal-like dynamic response of the amorphous ice phases. We acknowledge A. Mermet for his help during the IXS measurements, B. Gorges and R. Verbeni for technical support, and F. Sciortino, C.A. Angell and G. Ruocco for useful discussions. We also acknowledge the financial support of the Bundesministerium für Bildung und Forschung under project number 03fu4dor5. FIGURE CAPTIONS FIG. 1 - Static structure factor of high density ($`HDA`$) and low density ($`LDA`$) amorphous ice as measured on the inelastic X-ray beamline prior to the inelastic experiments. No signs of Bragg peaks are observed, indicating that both amorphous phases are free of crystalline ice XII contamination. Note the strong small-angle signal in $`HDA`$. The inset shows the diffraction pattern of cubic ice. FIG. 2 - Inelastic X-ray spectra of high density amorphous ($`HDA`$), low density amorphous ($`LDA`$) and crystalline cubic ice ($`I_c`$) at the indicated $`Q`$ values, lying mainly in the first pseudo Brillouin zone. The dashed line is a fit to the signal using Lorentzian lineshapes convoluted with the resolution function (equally indicated). The solid line represents the inelastic contribution to the total fits. The numbers in brackets to the left of the elastic line give the elastic intensities in arbitrary units. Note the close resemblance of the sharp inelastic response of both amorphous phases to that of the crystalline phase. FIG.
3 - Inelastic X-ray spectra of high density amorphous ($`HDA`$), low density amorphous ($`LDA`$) and crystalline cubic ice ($`I_c`$) at high $`Q`$ values in the second pseudo Brillouin zone. The dashed line gives the resolution function. The elastic intensities are given in brackets. For these high $`Q`$ values the excitations become part of a broad intensity distribution reminiscent of the density of states.
# Resonant inverse Compton scattering by secondary pulsar plasma ## 1 Introduction Rotation of a highly magnetized neutron star is known to induce a strong electric field, which intensely accelerates charged particles. According to the customary polar gap models (Ruderman & Sutherland 1975, Arons & Scharlemann 1979), the acceleration takes place near the neutron star surface above the polar cap, the Lorentz-factor of the primary particles increasing up to $`10^6`$. The particles move along the magnetic field lines and emit curvature photons, which initiate a pair-production cascade. The first electron-positron pairs created screen the accelerating electric field, so that at higher altitudes the particle energy remains unaltered; the typical Lorentz-factors of the secondary plasma are $`10`$–$`10^4`$. Recent observations testify to thermal soft X-ray emission from some pulsars, indicating that neutron stars can be rather hot, $`T\sim 5\times 10^5`$ K, while the polar caps can have still higher temperatures of a few times $`10^6`$ K (Cordova et al. 1989, Ogelman 1991, Halpern & Holt 1992, Finley et al. 1992, Halpern & Ruderman 1993, Ogelman & Finley 1993, Ogelman et al. 1993, Yancopoulos et al. 1994, Ogelman 1995, Greiveldinger et al. 1996). Such high temperatures of the neutron star surface are also predicted theoretically (Alpar et al. 1984, Shibazaki & Lamb 1989, Van Riper 1991, Page & Applegate 1992, Umeda et al. 1993, Halpern & Ruderman 1993). The thermal X-ray photons should suffer inverse Compton scattering off the primary particles in the polar gap. In the presence of a strong magnetic field the scattering cross-section is substantially enhanced if the photon energy in the particle rest frame equals the cyclotron energy (Herold 1979, Xia et al. 1985). For pulsars with hot polar caps, the resonant Compton scattering in the polar gap was found to be rather efficient (Kardashev et al. 1984, Xia et al. 1985, Daugherty & Harding 1989, Dermer 1990, Sturner 1995, Chang 1995).
Firstly, it was recognized as an essential mechanism for energy loss of primary particles accelerating in the polar gap. Secondly, inverse Compton scattering was found to condition the gap formation, since the pair-production avalanche may be triggered by the upscattered Compton photons rather than by curvature photons (Zhang & Qiao 1996, Qiao & Zhang 1996, Luo 1996, Zhang et al. 1997). As shown by Sturner (1995), given typical values of neutron star temperature and surface magnetic field, the resonant Compton scattering is strongest for particle Lorentz-factors $`10^2`$–$`10^3`$. Theoretical models (Van Riper 1991, Page & Applegate 1992, Umeda et al. 1993) suggest that the typical temperatures of the neutron star surface are as high as a few times $`10^5`$ K. Hence, the scattering is likely to be essential for the secondary plasma in most pulsars. Daugherty & Harding (1989) and Zhang et al. (1997) traced the evolution of the Lorentz-factor of secondary particles on account of magnetic inverse Compton scattering above the polar gap. Our aim is to investigate in more detail the influence of resonant inverse Compton scattering on the parameters of the secondary plasma. Owing to the resonant character of the scattering, the evolution of the particle Lorentz-factor with distance depends strongly on the initial particle energy. Since the distribution function of the secondary plasma is generally believed to be rather broad (the Lorentz-factor ranges from $`10`$ to $`10^4`$), its evolution on account of the Compton scattering is of great interest. It will be shown that only particles with Lorentz-factors between 100 and 1000 are essentially decelerated in the course of the resonant scattering, forming a sharp peak at low energies. Particles with larger Lorentz-factors are not decelerated at all. Thus, the resultant distribution function of secondary particles becomes two-humped, giving rise to the two-stream instability. In Sect. 
2 we examine how the resonant inverse Compton scattering affects the Lorentz-factor of a secondary particle at various pulsar parameters. Section 3 is devoted to studying the evolution of the distribution function. The conditions for the development of the two-stream instability are also discussed. In Sect. 4 we estimate the gamma-ray luminosity caused by the upscattered Compton photons. Section 5 contains a brief summary. ## 2 Deceleration of secondary particles due to resonant inverse Compton scattering Consider the flow of secondary plasma streaming along the open magnetic field lines above the polar gap. The neutron star is supposed to emit blackbody radiation, which is scattered by the plasma particles. In a strong magnetic field the scattering is particularly efficient if the photon energy, $`\epsilon mc^2`$, satisfies the resonance condition: $$\epsilon \gamma (1-\beta \mathrm{cos}\theta )=\epsilon _B,$$ $`(1)`$ where $`\gamma `$ is the Lorentz-factor of the scattering particles, $`\beta `$ the particle velocity in units of $`c`$, $`\theta `$ the angle the photon makes with the particle velocity, $`\epsilon _B\equiv B/B_{cr}`$, with $`B_{cr}=m^2c^3/(\mathrm{}e)=4.414\times 10^{13}`$ G. At the distance $`z`$ from the neutron star the rate of energy loss due to the resonant inverse Compton scattering can be written as (Sturner 1995): $$\frac{d\gamma }{dt}=4.92\times 10^{11}\frac{T_6B_{12}^2(z)}{\beta \gamma }$$ $$\times \mathrm{ln}\left[1-\mathrm{exp}\left(-\frac{\epsilon _Bmc^2}{\gamma (1-\beta \mathrm{cos}\theta _c(z))kT}\right)\right]\mathrm{s}^{-1}.$$ $`(2)`$ Here $`T_6`$ is the neutron star temperature in units of $`10^6`$ K, $`B_{12}(z)`$ the magnetic field strength in units of $`10^{12}`$ G, $`\theta _c(z)`$ the maximum incident angle of photons, $$\mathrm{cos}\theta _c=\sqrt{1-\frac{1}{(1+z/R)^2}},$$ $`(3)`$ where $`R`$ is the neutron star radius. Provided that the magnetic field is dipolar, $`B_{12}(z)\propto (1+z/R)^{-3}`$. Numerical solutions of Eq. (2) are presented in Fig. 1. 
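The deceleration curves of Fig. 1 can be reproduced schematically by integrating Eq. (2) along the particle trajectory, with Eq. (3) for the geometry and a dipolar field. The following Python sketch takes the printed form of Eq. (2) at face value; the step control, the assumed stellar radius $`R=10^6`$ cm, and the starting height are illustrative choices, not values from the paper:

```python
import math

R_NS = 1.0e6   # neutron star radius, cm (assumed)
C = 3.0e10     # speed of light, cm/s

def cos_theta_c(z):
    """Maximum photon incidence angle, Eq. (3)."""
    return math.sqrt(1.0 - 1.0 / (1.0 + z / R_NS) ** 2)

def dgamma_dt(gamma, z, T6, B12_surf):
    """Resonant Compton energy-loss rate of Eq. (2), in s^-1 (negative)."""
    beta = math.sqrt(1.0 - 1.0 / gamma ** 2)
    B12 = B12_surf * (1.0 + z / R_NS) ** -3         # dipolar field falloff
    eps_B = B12 * 1.0e12 / 4.414e13                 # cyclotron energy in mc^2 units
    kT_mc2 = 1.686e-4 * T6                          # kT / (m c^2)
    x = eps_B / (gamma * (1.0 - beta * cos_theta_c(z)) * kT_mc2)
    # log(-expm1(-x)) = ln[1 - exp(-x)], evaluated stably for small x
    return 4.92e11 * T6 * B12 ** 2 / (beta * gamma) * math.log(-math.expm1(-x))

def final_lorentz_factor(gamma0, T6, B12_surf, z0=0.01 * R_NS, z_max=10.0 * R_NS):
    """Explicit Euler integration of d(gamma)/dz with an adaptive step
    that limits the fractional change of gamma per step to ~1%."""
    gamma, z = gamma0, z0
    while z < z_max and gamma > 1.5:
        beta = math.sqrt(1.0 - 1.0 / gamma ** 2)
        slope = dgamma_dt(gamma, z, T6, B12_surf) / (beta * C)  # d(gamma)/dz
        if slope == 0.0:        # scattering rate numerically negligible
            break
        dz = min(z_max - z, R_NS / 100.0, 0.01 * gamma / abs(slope))
        gamma += slope * dz
        z += dz
    return gamma
```

Consistently with Fig. 3, a hot star decelerates the particles strongly while a cold one (here $`T_6=0.05`$, i.e. $`T=5\times 10^4`$ K) barely affects them.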
One can see that the resonant inverse Compton scattering is significant up to $`z\sim R`$ and it affects the particle Lorentz-factor essentially. Note that the curves for different initial Lorentz-factors are not similar to each other. In agreement with Eq. (1), the larger the particle Lorentz-factor, the lower is the energy of the resonantly scattered photons. At $`\gamma _0=3000`$ the energy of the resonant photons is below $`\epsilon _{max}=2.82kT/(mc^2)`$, which corresponds to the maximum in the photon distribution. As $`\gamma `$ decreases with altitude, the resonant energy increases. Hence, the photon spectral density increases and the scattering becomes more efficient. However, at distances $`z\gtrsim R`$ it ceases due to the essential shrinking of the solid angle subtended by the neutron star. If the initial Lorentz-factor is $`\sim 100`$, the energy of resonant photons is above $`\epsilon _{max}`$ and further increases with altitude, so that the scattering gradually ceases. Theoretical models predict that the polar cap region should be significantly hotter than the rest of the neutron star surface (e.g. Cheng & Ruderman 1980, Arons 1981, Alpar et al. 1984, Umeda et al. 1993, Luo 1996). However, the luminosities observed from the hot spots appear to be too small (Finley et al. 1992, Halpern & Ruderman 1993, Becker & Trumper 1993, Yancopoulos et al. 1994, Greiveldinger et al. 1996). The latter is usually interpreted as a consequence of small hot spot radii (e.g. Yancopoulos et al. 1994, Greiveldinger et al. 1996). As is evident from Fig. 1, for typical hot spot parameters the particle energy loss is mainly determined by the scattering of the photons from the whole neutron star rather than by the scattering of hot spot photons. So hereafter we take into account only the photons from the whole neutron star surface, keeping in mind that the influence of hot spot photons can somewhat alter our quantitative results, while the qualitative picture should remain the same. 
It should be pointed out that the evolution of the Lorentz-factor with altitude shown in Fig. 1 is somewhat different from that reported by Zhang et al. (1997) (see their Fig. 1). These authors claim that at distances of a few $`R`$ the particles suffer severe nonresonant magnetized scattering, which leads to a drastic decrease of the final Lorentz-factor. In fact, the rate of energy loss on account of nonresonant magnetic scattering increases with decreasing field strength as $`B_{12}^{-2}(z)`$ (see Eq. (5) in Sturner 1995). However, this equation is applicable only if in the particle rest frame the cyclotron energy exceeds the energy of most of the photons, $`\epsilon _B>\epsilon \gamma (1-\beta \mathrm{cos}\theta _c)`$. For the photon energies at the peak of the Planck distribution, $`\epsilon \simeq 2.82kT/(mc^2)`$, the latter condition can be rewritten as $`4\times 10^{-3}B_{12}(z)/(T_6\gamma _3(1-\beta \mathrm{cos}\theta _c))>1`$, with $`\gamma _3\equiv \gamma /10^3`$. Taking $`T_6=0.5`$, $`\gamma _3=3`$, $`B_{12}=0.1`$ (Zhang et al. 1997, Fig. 1a), one can obtain that at $`z=10R`$ the left-hand side of this inequality is $`5\times 10^{-5}`$, while at $`z=0`$ it equals $`3\times 10^{-4}`$. So even at the stellar surface most of the photons scatter in the Thomson regime rather than in the magnetic one. The rate of energy loss due to the Thomson scattering is found to be $$\frac{d\gamma _T}{dt}=-30\gamma T_6^4(1-\beta \mathrm{cos}\theta _c)^3\mathrm{s}^{-1},$$ $`(4)`$ showing that this process is inefficient. Thus the resonant inverse Compton scattering is the only significant energy loss mechanism for the secondary particles in pulsar magnetospheres. We believe that the particles start decelerating just above the polar gap. In general the gap thickness is supposed to be of the order of the polar cap radius (Ruderman & Sutherland 1975) or even larger (Arons & Scharlemann 1979). 
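The Thomson-versus-magnetic regime check above can be evaluated directly from first principles, i.e. the ratio $`\epsilon _B/(\epsilon _{peak}\gamma (1-\beta \mathrm{cos}\theta _c))`$; a sketch (the numerical coefficient quoted in the text then follows up to a factor of order unity, which here is an assumption):

```python
import math

B_CR = 4.414e13   # critical magnetic field, G

def regime_ratio(B12_surf, T6, gamma, z_over_R):
    """Cyclotron energy over typical photon energy in the particle frame.
    Ratio >> 1 -> magnetic scattering regime; << 1 -> Thomson regime."""
    eps_B = B12_surf * 1.0e12 * (1.0 + z_over_R) ** -3 / B_CR
    kT_mc2 = 1.686e-4 * T6                  # kT / (m c^2)
    eps_peak = 2.82 * kT_mc2                # peak of the Planck distribution
    beta = math.sqrt(1.0 - 1.0 / gamma ** 2)
    cos_tc = math.sqrt(1.0 - 1.0 / (1.0 + z_over_R) ** 2)
    return eps_B / (eps_peak * gamma * (1.0 - beta * cos_tc))
```

For the parameters of Zhang et al. (1997, Fig. 1a) the ratio is far below unity both at the surface and at $`z=10R`$, confirming that the photons scatter in the Thomson regime there.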
The energy loss of secondary particles due to the resonant inverse Compton scattering above the gap should certainly be influenced by the adopted value of the gap height. In Fig. 2 we show the final Lorentz-factors versus the gap height. One can see that the dependence is fairly weak; therefore, below we fix $`z_0`$ at $`10^{-2}R`$. In contrast with the gap height, such pulsar parameters as the star temperature and magnetic field strength can influence particle deceleration essentially. In Fig. 3 we present the dependences of the normalized final Lorentz-factor on the neutron star temperature at various magnetic field strengths. Apparently, at temperatures $`<10^5`$ K the scattering is still inefficient, while at higher temperatures it becomes significant. Note that at various initial Lorentz-factors ($`\gamma _0=100`$ and $`3000`$) the scattering efficiency versus magnetic field strength is essentially different. The particles with $`\gamma _0=100`$ resonantly scatter the photons from the Wien region. Then the weaker the field strength, the lower is the energy of resonant photons and, correspondingly, the higher is the photon spectral density and the larger is the particle energy loss (see Fig. 3a). At $`\gamma _0=3000`$ the resonant energy lies in the Rayleigh-Jeans region. So the scattering is more efficient for higher magnetic field strengths, since the photon spectral density increases with the energy (see Fig. 3b). Owing to the resonant character of the scattering, the particle energy loss depends strongly on the initial Lorentz-factor. Figure 4 shows the final Lorentz-factor versus the initial one for various neutron star temperatures and magnetic field strengths. At $`\gamma _0\sim 10`$ as well as at $`\gamma _0\sim 10^4`$ the resonant scattering appears to be inefficient. Provided that the scattering is intense ($`\gamma _0\sim 10^2`$–$`10^3`$), the final Lorentz-factor appears to be independent of the initial one. According to Fig. 
4a, the length of the plateau increases with the star temperature. In fact, a higher temperature implies a larger number of photons at every energy, the scattering becoming more efficient. As can be seen from Fig. 4b, an increase of the magnetic field strength leads to a shift of the plateau toward higher energies; this is certainly consistent with the resonance condition (1). ## 3 The distribution function of the secondary plasma as a result of resonant inverse Compton scattering Since the energy loss depends essentially on the initial particle energy, we now investigate the evolution of the distribution function of secondary particles as a result of the resonant inverse Compton scattering. Conservation of the number of particles along a phase trajectory implies that $$f(z,\gamma )d\gamma =f(z_0,\gamma _0)d\gamma _0,$$ where $`f(z,\gamma )`$ is the particle distribution function and the subscript ”0” refers to the initial values. Thus, by following the phase trajectories of individual particles one can reconstruct the distribution function at any height $`z`$. We are particularly interested in the final distribution function arising at distances where the resonant scattering ceases. Let us begin with the evolution of a waterbag distribution function: $$f(z_0,\gamma _0)=\{\begin{array}{c}\frac{1}{\gamma _{max}-\gamma _{min}},\gamma _{min}\le \gamma \le \gamma _{max},\hfill \\ 0,\gamma <\gamma _{min}\mathrm{or}\gamma >\gamma _{max},\hfill \end{array}$$ with $`\gamma _{min}=10`$, $`\gamma _{max}=10^4`$. The final distribution functions at various pulsar parameters are plotted in Fig. 5. The particles with $`\gamma \sim 10^2`$–$`10^3`$ are essentially decelerated due to the scattering, their final energies becoming equal (see also Fig. 4). These particles form the sharp peak at $`\gamma <10^2`$. For Lorentz-factors of a few thousand the scattering becomes inefficient and the distribution function remains almost unaltered. 
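The phase-trajectory remapping described above can be illustrated with a toy deceleration map, in which particles inside the resonant window are driven to a common final Lorentz-factor (mimicking the plateau of Fig. 4) while the rest are untouched; the window boundaries and the peak position below are schematic assumptions, not fitted values:

```python
import random

def toy_final_gamma(g0, plateau=(100.0, 3000.0), g_peak=50.0):
    """Schematic deceleration map: particles in the resonant window all
    end up at g_peak; particles outside it keep their initial energy."""
    lo, hi = plateau
    return g_peak if lo <= g0 <= hi else g0

def evolve_waterbag(n=100000, gmin=10.0, gmax=1.0e4, seed=1):
    """Sample the waterbag f(gamma_0) by Monte Carlo and push each particle
    through the map; particle number is conserved along phase trajectories."""
    rng = random.Random(seed)
    return [toy_final_gamma(rng.uniform(gmin, gmax)) for _ in range(n)]
```

The resulting sample is two-humped: a sharp spike below $`10^2`$ containing the fraction of particles that started in the window, and a broad remnant above the window, with nothing in between.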
Thus, the resonant inverse Compton scattering leads to a two-humped distribution function of the secondary plasma. At higher neutron star temperatures the main peak of the function shifts towards lower energies and the humps become more prominent. The magnetic field strength variation also results in a shift of the main peak. For the more realistic distribution function resembling that found by Arons (1980), $$f(\gamma )=\{\begin{array}{c}\mathrm{exp}\left(-10\frac{\gamma _m-\gamma }{\gamma _m}\right),10\le \gamma \le \gamma _m,\hfill \\ (\gamma /\gamma _m)^{-3/2},\gamma _m\le \gamma \le \gamma _c,\hfill \\ (\gamma _c/\gamma _m)^{-3/2}\mathrm{exp}\left(-\frac{\gamma -\gamma _c}{\gamma _c}\right),\gamma _c\le \gamma \le 10^4,\hfill \end{array}$$ $`(5)`$ with $`\gamma _m=10^2`$, $`\gamma _c=10^{3.5}`$, the evolution on account of the resonant inverse Compton scattering is qualitatively the same (see Fig. 6). We next examine the dispersion properties of the plasma with the evolved distribution function. For simplicity let us assume that the plasma consists of two particle flows characterized by the number densities $`n_a`$, $`n_b`$ and by the Lorentz-factors $`\gamma _a`$, $`\gamma _b`$ ($`\gamma _a<\gamma _b`$). Given an infinitely strong magnetic field, the dispersion relation is as follows (see, e.g. Lyubarskii 1995): $$\frac{\omega _{pa}^2}{(\omega -kv_a)^2\gamma _a^3}+\frac{\omega _{pb}^2}{(\omega -kv_b)^2\gamma _b^3}=1.$$ $`(6)`$ Here $`\omega `$ is the frequency, $`k`$ the wave number, $`v_{a,b}`$ are the particle velocities, $`\omega _{pa,b}`$ the plasma frequencies given by the customary expression: $$\omega _{pa,b}=\sqrt{\frac{4\pi n_{a,b}e^2}{m}},$$ $`(7)`$ where $`e`$ is the electron charge, $`m`$ the electron mass. As is evident from Eq. (6), the dispersion properties of the plasma are mainly determined by one of the particle flows on condition that $$\frac{n_a}{\gamma _a^3}\gg \frac{n_b}{\gamma _b^3},$$ $`(8)`$ rather than $`n_a\gg n_b`$. 
This is because of the great inertia of the fast particles performing one-dimensional motion in the superstrong magnetic field. For the distribution functions plotted in Figs. 5 and 6, $`n_a`$ and $`n_b`$ are comparable, while $`\gamma _a/\gamma _b\sim 10^{-1}`$–$`10^{-2}`$. Therefore the low-energy particles of the main peak almost completely determine the dispersion properties of the plasma. The two-humped distribution function implies the possibility of the two-stream instability. Provided that the contribution of one of the plasma flows to the plasma dispersion is small (i.e. Eq. (8) is valid), the growth rate of the instability takes the form (Lominadze & Mikhailovskii 1979, Cheng & Ruderman 1980): $$\mathrm{Im}\omega \sim \left(\frac{n_b}{n_a}\right)^{1/3}\frac{\omega _{pa}}{\gamma _a^{1/2}\gamma _b}.$$ $`(9)`$ The two-stream instability results in an essential development of initial perturbations on condition that $$\frac{R_c}{c}\mathrm{Im}\omega >10,$$ $`(10)`$ where $`R_c`$ is the characteristic scale length for the increase of perturbations. It is convenient to normalize the number density of the secondary plasma by the Goldreich-Julian charge density: $$n=\frac{\kappa B}{Pce},$$ $`(11)`$ where $`\kappa `$ is the multiplicity factor of the secondary plasma, $`P`$ the pulsar period. The latter equation can be rewritten as: $$n=6.25\times 10^{13}P^{-1}\kappa _3B_{12}(1+z/R)^{-3}\mathrm{cm}^{-3},$$ $`(12)`$ where $`\kappa _3\equiv \kappa /10^3`$. Using Eqs. (9) and (12) in Eq. (10) we reduce the condition for efficient instability development to the form: $$\frac{\kappa _3B_{12}}{PR_{c7}\gamma _{b3}^2\gamma _{a2}}>4\times 10^{-4}.$$ $`(13)`$ Here $`R_{c7}\equiv R_c/10^7\mathrm{cm}`$, $`\gamma _{b3}\equiv \gamma _b/10^3`$, $`\gamma _{a2}\equiv \gamma _a/10^2`$ and it is assumed that $`(n_b/n_a)^{1/3}\sim 1`$. As can be seen from Eq. (13), at typical pulsar parameters the two-stream instability can develop readily, providing the growth of plasma oscillations. 
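Chaining Eqs. (9)–(12) gives a quick order-of-magnitude check that condition (10) is indeed satisfied at typical pulsar parameters; a sketch in cgs units, taking $`\kappa _3=B_{12}=P=1`$, $`\gamma _a=10^2`$, $`\gamma _b=10^3`$, $`R_c=10^7`$ cm and $`n_b\sim n_a`$ as in the text (the surface value $`z=0`$ in Eq. (12) is an assumption made for simplicity):

```python
import math

E_CGS = 4.803e-10   # electron charge, esu
M_E = 9.109e-28     # electron mass, g
C = 3.0e10          # speed of light, cm/s

def gj_density(kappa3, B12, P):
    """Secondary-plasma number density, Eq. (12), evaluated at z = 0, cm^-3."""
    return 6.25e13 * kappa3 * B12 / P

def growth_rate(n_a, n_b, gamma_a, gamma_b):
    """Two-stream growth rate, Eq. (9) (order-of-magnitude estimate), s^-1."""
    omega_pa = math.sqrt(4.0 * math.pi * n_a * E_CGS ** 2 / M_E)  # Eq. (7)
    return (n_b / n_a) ** (1.0 / 3.0) * omega_pa / (math.sqrt(gamma_a) * gamma_b)

# Condition (10): (R_c / c) * Im(omega) > 10
n = gj_density(1.0, 1.0, 1.0)
im_omega = growth_rate(n, n, 1.0e2, 1.0e3)
amplification = 1.0e7 / C * im_omega
```

With these numbers the amplification factor comes out near $`10^4`$, i.e. three orders of magnitude above the threshold of condition (10), consistent with the conclusion that the instability develops readily.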
The latter can be transformed into electromagnetic waves, thus giving rise to pulsar radio emission. It should be mentioned that the two-stream instability has always been one of the most popular mechanisms for pulsar radio emission. For more than two decades a number of scenarios for the instability development have been proposed. The first and most natural one involves the instability caused by the flows of primary and secondary pulsar plasma (Ruderman & Sutherland 1975). However, the growth rate of this instability appears to be too small because of the enormous inertia of the high-energy primary particles (Benford & Buschauer 1977). Cheng & Ruderman (1977) considered the two-stream instability arising in the secondary plasma due to the difference in the velocities of electrons and positrons moving along the curved magnetic lines. However, this difference is insufficient to cause the instability, since the particle distribution functions are too broad (Buschauer & Benford 1977). Lyubarskii (1993) suggested that current and charge density adjustment in the pulsar magnetosphere leads to a backward particle flow, which causes intense two-stream instability. However, numerical simulations of the plasma flow in the open field line tube are necessary to prove this idea. Given nonstationary generation of the secondary plasma, the particles are confined to separate clouds, and the fastest particles of a cloud can outstrip the slower particles of the previous cloud, giving rise to the two-stream instability (Usov 1987, Ursov & Usov 1988). Up to now it is not clear whether the instability initiated in such a way can account for pulsar radio emission, since the nonstationarity of plasma generation has not yet been studied in detail. The present paper suggests one more possibility of two-stream instability in pulsars, which is based on the selective energy loss of particles as a result of resonant inverse Compton scattering. 
Note that in this model the instability arises naturally, with no additional assumptions being involved. ## 4 Gamma-ray luminosity provided by resonantly upscattered Compton photons The photons produced by the resonant inverse Compton scattering have the energies $$E\sim \gamma \epsilon _Bmc^2\sim B_{12}\gamma _2\mathrm{MeV},$$ $`(14)`$ where $`\gamma _2\equiv \gamma /10^2`$, so that typically $`E\sim 1`$–$`10`$ MeV. Note that the curvature gamma-photons produced in the polar gap as well as the photons upscattered by the primary particles have essentially higher energies, $`E\gtrsim 100`$ MeV. Hence, the resonant scattering by secondary plasma results in an additional low-energy component in the pulsar gamma-ray spectrum. The spectrum of this component in the case of some specific distribution functions of the scattering particles is obtained by Daugherty & Harding (1989). In general, the low-energy tail of this component spreads even to the X-ray band on account of the photons resonantly scattered at distances $`z>R`$, where the magnetic field strength decreases essentially. However, these photons are very few, since at $`z>R`$ the scattering rate decreases significantly due to the shrinking of the solid angle subtended by the neutron star. Given that the scattering is efficient, most of the energy of the secondary plasma should be transferred to the low-energy gamma-rays. The luminosity provided by the upscattered photons can be estimated as follows: $$L=nSmc^3\mathrm{\Delta }\gamma ,$$ $`(15)`$ where $`S`$ is the cross-sectional area of the open field line tube, $$S=\frac{\pi R^3}{R_L},$$ $`(16)`$ $`R_L`$ is the light cylinder radius, $`\mathrm{\Delta }\gamma `$ the difference between the Lorentz-factors of the particles which mainly contribute to the final and initial plasma energy, $`\mathrm{\Delta }\gamma \sim 10^2`$–$`10^3`$. Substituting Eqs. (12) and (16) into Eq. 
(15) we find: $$L=0.942\times 10^{30}\frac{B_{12}R_6^3\kappa _3\gamma _3}{P^2}\mathrm{ergs}/\mathrm{s}.$$ $`(17)`$ Here $`R_6\equiv R/10^6`$ cm and $`\gamma _3\equiv \mathrm{\Delta }\gamma /10^3`$. Although in the particle rest frame the photons are scattered in all directions, in the laboratory frame they are beamed along the particle velocity. So the opening angle of the gamma-ray beam is given by $`\phi =3\sqrt{R/R_L}.`$ The averaged observed photon flux, $`F`$, is related to the luminosity as $$F=\frac{L\phi }{2\pi \mathrm{\Omega }d^2E\mathrm{\Delta }E},$$ $`(18)`$ where $`\mathrm{\Omega }=\pi \phi ^2/4`$ is the solid angle occupied by the beam of upscattered photons, $`d`$ the distance to the pulsar, $`\mathrm{\Delta }E`$ the energy band. Taking into account Eq. (17), Eq. (18) can be rewritten as $$F=3\times 10^{-10}\frac{B_{12}R_6^{5/2}\gamma _3\kappa _3}{P^{3/2}d_3^2E_6^2}\mathrm{photons}/(\mathrm{cm}^2\mathrm{s}\mathrm{keV}),$$ $`(19)`$ where $`d_3\equiv d/10^3\mathrm{pc}`$, $`E_6\equiv E/1\mathrm{MeV}`$. For most pulsars the flux given by Eq. (19) is too low to be detected. At present the detectors of the Compton Gamma-Ray Observatory are the most sensitive to the low-energy gamma-ray emission (OSSE at 0.05–10 MeV and COMPTEL at 1–30 MeV). At 1 MeV the source sensitivity of OSSE is only $`2\times 10^{-7}`$ photons/($`\mathrm{cm}^2\mathrm{s}\mathrm{keV}`$) (Gehrels & Shrader 1996). In the observations reported by Kuiper et al. (1996) the flux from Geminga in the band of 3–10 MeV was found to be $`10^{-7}E_6^{-2}`$ photons/($`\mathrm{cm}^2\mathrm{s}\mathrm{keV}`$). Substituting the Geminga parameters ($`P=0.237`$ s, $`B_{12}=3.3`$, $`d_3=0.15`$) into Eq. (19) one can obtain the flux provided by the upscattered Compton photons: $`F=3.8\times 10^{-7}\kappa _3\gamma _3R_6^{5/2}E_6^{-2}`$ photons/($`\mathrm{cm}^2\mathrm{s}\mathrm{keV}`$), which is consistent with the one detected. ## 5 Conclusions We have investigated resonant inverse Compton scattering by secondary pulsar plasma. 
The process is found to cause efficient energy loss of the secondary particles given neutron star temperatures $`>10^5`$ K, so that our results are applicable to most pulsars. Owing to the resonant character of the scattering, the energy loss depends strongly on the initial particle energy. At $`\gamma _0\sim 10^2`$–$`10^3`$ the scattering is most essential, the final Lorentz-factors of the particles being independent of the initial ones. The distribution function of the secondary plasma is significantly altered by the resonant inverse Compton scattering. It is shown that ultimately the distribution function becomes two-humped. The main peak at $`\gamma \sim 10^2`$ is very sharp. It is formed by particles which suffered severe energy loss on account of the scattering. The other hump is rather broad. It is associated with the particles whose Lorentz-factors are almost unaltered by the scattering. The two-humped distribution function of the plasma particles is known to be unstable. It is shown that at pulsar conditions the two-stream instability develops readily and leads to an essential increase of plasma oscillations, which are likely to be transformed into radio emission. We have also estimated the gamma-ray flux provided by the upscattered Compton photons. The resonantly scattered photons appear to gain energies of 1–10 MeV, forming an additional low-energy component in the pulsar gamma-ray spectrum.
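As a numerical cross-check of Sect. 4, evaluating the flux formula (19) for the Geminga parameters quoted there ($`P=0.237`$ s, $`B_{12}=3.3`$, $`d_3=0.15`$, with $`\kappa _3=\gamma _3=R_6=E_6=1`$) reproduces the value compared against the COMPTEL measurement; a sketch:

```python
def upscattered_flux(B12, P, d3, R6=1.0, gamma3=1.0, kappa3=1.0, E6=1.0):
    """Photon flux of resonantly upscattered photons, Eq. (19),
    in photons / (cm^2 s keV)."""
    return 3.0e-10 * B12 * R6 ** 2.5 * gamma3 * kappa3 / (P ** 1.5 * d3 ** 2 * E6 ** 2)

# Geminga: P = 0.237 s, B_12 = 3.3, d = 150 pc
F_geminga = upscattered_flux(B12=3.3, P=0.237, d3=0.15)
```

The result is about $`3.8\times 10^{-7}`$ photons/(cm² s keV), matching the number quoted in the text and comparable to the $`10^{-7}E_6^{-2}`$ flux reported by Kuiper et al. (1996).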
# For submission to Monthly Notices Gravitational lens magnification by Abell 1689: Distortion of the background galaxy luminosity function ## 1 Introduction The use of gravitational lensing as a means of cluster mass reconstruction provides a theoretically efficient approach without the equilibrium and symmetry assumptions which typically accompany virial and X-ray temperature methods. Mass determination through application of lens shear proves to give good resolution in mass maps, although measurement of absolute quantities is not possible without external calibration. This so-called sheet-mass degeneracy (Falco, Gorenstein & Shapiro 1985) is broken, however, by methods which exploit the property of lens magnification. First recognised by Broadhurst, Taylor & Peacock (1995, BTP hereafter) as a viable tool for the reconstruction of cluster mass, lens magnification has the twofold effect of amplifying background source galaxy fluxes as well as their geometrical size and separation. This immediately permits two separate approaches for measuring lensing mass. The first involves selecting a sample of sources with a flat or near-flat number count slope. Magnification results in a reduction of their local surface number density owing to the dominance of their increased separation over the enhanced number detectable due to flux amplification. Although contaminated by faint cluster members, Fort, Mellier & Dantel-Fort (1997) first reported this dilution effect using B and I band observations of the cluster CL0024$`+`$1654. Later, Taylor et al. (1998, T98 hereafter) demonstrated how the dilution in surface number density of a colour-selected sample of red galaxies lying behind the cluster Abell 1689 enables determination of its total mass profile and 2d distribution. 
A projected mass interior to $`0.24h^{-1}\mathrm{Mpc}`$ of $`M_{2d}(<0.24h^{-1}\mathrm{Mpc})=(0.50\pm 0.09)\times 10^{15}h^{-1}\mathrm{M}_{\odot }`$ was predicted, in good agreement with the shear analysis of Tyson & Fischer (1995) who measured $`M_{2d}(<0.24h^{-1}\mathrm{Mpc})=(0.43\pm 0.02)\times 10^{15}h^{-1}\mathrm{M}_{\odot }`$ and Kaiser (1995) with a measurement of $`M_{2d}(<0.24h^{-1}\mathrm{Mpc})=(0.43\pm 0.04)\times 10^{15}h^{-1}\mathrm{M}_{\odot }`$. Since then, several authors have detected source number count depletion due to cluster lensing. Athreya et al. (1999) observe MS1008$`-`$1224 and use photometric redshifts to identify a background population of galaxies within which they measure depletion. Mayen & Soucail (2000) constrain the mass profile of MS1008$`-`$1224 by comparing to simulations of depletion curves. Gray et al. (2000) measure the first depletion in the near infra-red due to lensing by Abell 2219. Finally and most recently, Rögnvaldsson et al. (2000) find depletion in the source counts behind CL0024$`+`$1654 in the R band and, for the first time, in the U band. The second mass reconstruction approach permitted by magnification forms the primary focus of this paper. The amplification of flux by lens magnification introduces a measurable shift in the luminosity function of background source galaxies. With a sufficiently well-defined luminosity function derived from an unlensed offset field for comparison, this shift can be measured to allow an estimate of the lens mass (BTP). This method relies upon a set of observed source magnitudes which, if assumed to form an effective random sampling of luminosity space, is not limited by noise from background source clustering, unlike the number count method (see Section 5.2 for further discussion). This paper presents the first application of mass reconstruction using lens flux magnification inferred from the luminosity function of background samples. 
Unlike the method of T98 who defined their background sample based on colour cuts, in this work photometric redshifts of all objects in the observed field have been estimated. This not only allows an unambiguous background source selection but alleviates the need to estimate source distances when scaling convergence to real lens mass. The following section details the theory of mass reconstruction from lens magnification of background source magnitudes. Section 3 describes the photometric analysis applied to observations of A1689 with the redshifts which result. Observations of the offset field which provide the absolute magnitude distribution required for comparison with the A1689 background source sample are presented in Section 4. From this, a parameterised luminosity function is calculated in Section 4.2 necessary for application of the maximum likelihood method. Following a discussion of sample incompleteness in Section 5.1, a mass measurement of A1689 is given in Section 6 where the effects of sample incompleteness are quantified. Finally, a signal to noise study is carried out in Section 7 to investigate the effects of shot noise, calibration uncertainty of the offset field and photometric redshift error. ## 2 Mass Reconstruction Measurement of lens magnification of background source fluxes requires a statistical approach in much the same way as do shear or number count depletion studies. The basis of this statistical method relies on the comparison of the distribution of lensed source luminosities with the luminosity function of an un-lensed offset field. As Section 5.2 discusses further, for a fair comparison, the population of sources detected behind the lens must be consistent with the population of objects used to form this un-lensed reference luminosity function. The effect of lens magnification by a factor $`\mu `$ on a source is to translate its observed magnitude from $`M`$ to $`M+2.5\mathrm{log}_{10}\mu `$. 
In terms of the reference luminosity function, $`\varphi (M,z)`$, the probability of a background galaxy with an absolute magnitude $`M`$ and redshift $`z`$ being magnified by a factor $`\mu `$ is (BTP) $$\mathrm{P}[M|\mu ,z]=\frac{\varphi (M+2.5\mathrm{log}_{10}\mu (z),z)}{\int \varphi (M^{\prime }+2.5\mathrm{log}_{10}\mu (z),z)\mathrm{d}M^{\prime }}.$$ (1) Magnification depends on the geometry of the observer-lens-source system hence for a fixed lens and observer, $`\mu `$ is a function of source redshift. This redshift dependence comes from the familiar dimensionless lens surface mass density or convergence, $`\kappa (z)`$, and shear, $`\gamma (z)`$, which are related to $`\mu (z)`$ via, $$\mu (z)=\left|[1-\kappa (z)]^2-\gamma ^2(z)\right|^{-1}.$$ (2) We wish to apply maximum likelihood theory using the probability in equation (1) to determine lens magnification and hence $`\kappa `$. A parametric luminosity function is therefore required and so we take $`\varphi (M,z)`$ to be a Schechter function (Schechter 1976), $$\varphi (M,z)=\varphi ^{\ast }(z)10^{0.4(M^{\ast }-M)(1+\alpha )}\mathrm{exp}\left[-10^{0.4(M^{\ast }-M)}\right].$$ (3) The Schechter parameters $`\varphi ^{\ast }`$, $`M^{\ast }`$ and $`\alpha `$ are determined by fitting to the magnitude distribution of the offset field (see Section 4). T98 use $`\mu `$ as a likelihood parameter by adopting the simplification that all sources lie at the same redshift. However this is not possible when each source is attributed its own redshift. We must therefore express $`\mu `$ in terms of a redshift-dependent quantity and a source-independent likelihood parameter. The most direct solution is to separate the convergence. 
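Equations (1) and (3) can be combined into a small numerical sketch of the magnified-magnitude probability; the Schechter parameter values and the integration range below are placeholders, not the fitted values of Section 4:

```python
import math

def schechter(M, M_star=-19.5, alpha=-1.0, phi_star=1.0):
    """Schechter luminosity function in magnitudes, Eq. (3);
    parameter values here are illustrative placeholders."""
    x = 10.0 ** (0.4 * (M_star - M))
    return phi_star * x ** (1.0 + alpha) * math.exp(-x)

def prob_M_given_mu(M, mu, M_lo=-24.0, M_hi=-14.0, n=2000):
    """P[M | mu, z], Eq. (1): the Schechter function evaluated at the
    de-magnified magnitude, normalized over the observable range
    by midpoint-rule integration."""
    shift = 2.5 * math.log10(mu)
    dM = (M_hi - M_lo) / n
    norm = sum(schechter(M_lo + (i + 0.5) * dM + shift) for i in range(n)) * dM
    return schechter(M + shift) / norm
```

As expected, magnification makes intrinsically rare bright magnitudes much more probable in the observed sample, which is the shift the likelihood analysis measures.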
Using the parameter $`\kappa _{\mathrm{\infty }}`$ introduced by BTP as the convergence for sources at $`z=\mathrm{\infty }`$, we can write $`\kappa (z)`$ as, $`\kappa (z)`$ $`=`$ $`\kappa _{\mathrm{\infty }}f(z),`$ $`f(z)`$ $`=`$ $`{\displaystyle \frac{\sqrt{1+z}-\sqrt{1+z_L}}{\sqrt{1+z}-1}}.`$ (4) We therefore choose $`\kappa _{\mathrm{\infty }}`$ as our likelihood parameter with all source redshift dependency being absorbed into the function $`f`$. The lens surface mass density, $`\mathrm{\Sigma }`$, is then related to $`\kappa _{\mathrm{\infty }}`$ and the lens redshift, $`z_L`$, by $`\mathrm{\Sigma }(z_L)`$ $`=`$ $`{\displaystyle \frac{cH_0}{8\pi G}}\kappa _{\mathrm{\infty }}\left[{\displaystyle \frac{(1+z_L)^2}{\sqrt{1+z_L}-1}}\right]`$ (5) $`=`$ $`2.75\times 10^{14}\kappa _{\mathrm{\infty }}\left[{\displaystyle \frac{(1+z_L)^2}{\sqrt{1+z_L}-1}}\right]h\mathrm{M}_{\odot }\mathrm{Mpc}^{-2}.`$ Here, we assume an Einstein-de-Sitter universe for reasons of simplicity and because BTP show that this result depends only weakly on the chosen cosmological model. Before choosing a likelihood function, consideration must be given to the shear term in equation (2). Since shear scales with source redshift in the same way as the convergence, we use the so-called $`\kappa `$ estimators discussed by T98 which relate $`\kappa `$ to $`\gamma `$. At the extremes these are $`\gamma =\kappa `$ for the isothermal sphere or $`\gamma =0`$ for the sheet-like mass. A third variation, motivated by cluster simulations and the fact that it has an invertible $`\mu (\kappa )`$ relation, is $`\gamma \propto \kappa ^{1/2}`$ (van Kampen 1998). This gives rise to the parabolic estimator which predicts values of $`\kappa `$ between those given by the sheet and isothermal estimators. 
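The redshift weight of Eq. (4) and the scaling of Eq. (5) are easy to encode directly; a sketch (Einstein-de Sitter geometry, as assumed in the text):

```python
import math

def f_lens(z, z_L):
    """Redshift weight f(z) of Eq. (4): zero at the lens redshift,
    approaching unity for sources at z -> infinity."""
    return (math.sqrt(1.0 + z) - math.sqrt(1.0 + z_L)) / (math.sqrt(1.0 + z) - 1.0)

def surface_density(kappa_inf, z_L):
    """Lens surface mass density, Eq. (5), in h M_sun Mpc^-2."""
    return 2.75e14 * kappa_inf * (1.0 + z_L) ** 2 / (math.sqrt(1.0 + z_L) - 1.0)
```

For Abell 1689 ($`z_L=0.185`$), a source exactly at the lens redshift contributes no convergence, and $`\kappa _{\mathrm{\infty }}=1`$ corresponds to a few times $`10^{15}h\mathrm{M}_{\odot }\mathrm{Mpc}^{-2}`$.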
Using equation (2), the magnification for these three different cases therefore relates to $`\kappa _{\mathrm{\infty }}`$ via $$\mu (z)=\{\begin{array}{cc}\left|1-2\kappa _{\mathrm{\infty }}f(z)\right|^{-1}\hfill & \mathrm{iso}.\hfill \\ \left|\left[\kappa _{\mathrm{\infty }}f(z)-c\right]\left[\kappa _{\mathrm{\infty }}f(z)-1/c\right]\right|^{-1}\hfill & \mathrm{para}.\hfill \\ \left[1-\kappa _{\mathrm{\infty }}f(z)\right]^{-2}\hfill & \mathrm{sheet}\hfill \end{array}$$ (6) and hence three different estimations of $`\kappa _{\mathrm{\infty }}`$ exist for a given $`\mu `$. The constant $`c`$ in the parabolic case is chosen to provide the best fit with the cluster simulations. As in T98, we take $`c=0.7`$ throughout this paper. The likelihood function for $`\kappa _{\mathrm{\infty }}`$ is then formed from equation (1), $$\mathrm{}(\kappa _{\mathrm{\infty }})\propto \underset{i}{\prod }\mathrm{P}[M_i|\mu (\kappa _{\mathrm{\infty }}),z_i]$$ (7) where $`\mu `$ is one of the three forms in equation (6) and the product applies to the galaxies behind the cluster region under scrutiny. Absolute surface mass densities are then calculated from $`\kappa _{\mathrm{\infty }}`$ using equation (5). The probability distribution for $`\kappa _{\mathrm{\infty }}`$ obtained from equation (1) for a single galaxy is typically double-peaked as two solutions for $`\kappa _{\mathrm{\infty }}`$ exist for a given magnification. The choice of peak is determined by image parity such that the peak at the higher value of $`\kappa _{\mathrm{\infty }}`$ is chosen for a galaxy lying inside the critical line and vice versa. The chosen peak is then extrapolated to extend over the full $`\kappa _{\mathrm{\infty }}`$ range before contributing to the likelihood distribution. In this way, a single-peaked likelihood distribution is obtained. Evidently, calculation of lens surface mass density in this way requires redshift and absolute magnitude data for background galaxies together with knowledge of the intrinsic distribution of magnitudes from an unlensed offset field. 
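The three estimators of Eq. (6) can be compared directly; a sketch, with $`c=0.7`$ as adopted in the text:

```python
def magnification(kappa_inf, fz, estimator="iso", c=0.7):
    """Magnification mu(z) for the three kappa-gamma estimators, Eq. (6)."""
    k = kappa_inf * fz
    if estimator == "iso":       # isothermal sphere, gamma = kappa
        return 1.0 / abs(1.0 - 2.0 * k)
    if estimator == "para":      # parabolic, gamma proportional to kappa^(1/2)
        return 1.0 / abs((k - c) * (k - 1.0 / c))
    if estimator == "sheet":     # sheet-like mass, gamma = 0
        return (1.0 - k) ** -2
    raise ValueError(estimator)
```

For a sub-critical convergence the magnifications are ordered with the parabolic case between the two extremes, consistent with the statement that the parabolic estimator predicts $`\kappa `$ values between the sheet and isothermal ones.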
The next section details the photometric analysis applied to our observations of Abell 1689 to arrive at background object redshifts and absolute magnitudes. ## 3 Photometric Analysis ### 3.1 Data acquisition Observations of Abell 1689 were performed with the Calar Alto 3.5m telescope in Spain using 8 different filters, chosen for photometric distinction between foreground, cluster and background objects. In addition, the I-band observations of T98 were included to bring the combined exposure time to a total of exactly 12 hours' worth of usable data characterised by a seeing better than $`2.1^{\prime \prime }`$. Table 1 details the summed integration time for each filter set together with the motivation for inclusion of the filter. Note the narrow band filters 466/8, 480/10 and 774/13, which were selected to pick out spectral features of objects lying at the cluster redshift of $`z=0.185`$ (Teague, Carter & Gray 1990). Image reduction and photometry were performed using the MPIAPHOT (Meisenheimer & Röser 1996) software written at the MPIA Heidelberg as an extension to the MIDAS reduction software package. Images were de-biased and flattened from typically four or five median filtered dusk sky flats observed each night for each filter set. Any large scale remnant flux gradients were subsequently removed by flattening with a second-order polynomial fitted to the image background. Cosmic ray removal was carried out using the pixel rejection algorithm incorporated in MPIAPHOT. All post-reduced images were flattened to a $`1\sigma `$ background flux variation of less than 0.02 mag. ### 3.2 Galaxy catalogue Instead of co-adding images in each filter set before object detection, photometric evaluation was carried out on images individually. In this way, an estimate of the uncertainty in the photon count for each galaxy could be obtained. 
The mean photon count $`I^{(b,m)}`$ of a galaxy $`m`$ observed in a filter $`b`$ was calculated as the usual reciprocal-variance weighted sum, $$I^{(b,m)}=\sum _i\frac{I_i^{(b,m)}}{\left(\sigma _i^{(b,m)}\right)^2}/\left[\sum _i\left(\sigma _i^{(b,m)}\right)^{-2}\right]$$ (8) where the summation acts over all images belonging to a particular filter set and the error on $`I^{(b,m)}`$ is $$\overline{\sigma }^{(b,m)}=\left[\sum _i\left(\sigma _i^{(b,m)}\right)^{-2}\right]^{-1/2}.$$ (9) The quantity $`\sigma _i^{(b,m)}`$ is the standard deviation of background pixel values surrounding galaxy $`m`$ in image $`i`$. Background pixels were segregated by applying an appropriate cut to the histogram of counts in pixels within a box of size $`13^{\prime \prime }\times 13^{\prime \prime }`$ ($`40\times 40`$ pixels) centred on the galaxy. This cut removed the high count pixels belonging to the galaxy itself and any other neighbouring galaxies within the box. Integrated galaxy photon counts were determined using MPIAPHOT which sums together counts in all pixels lying inside a fixed aperture of radius $`6^{\prime \prime }`$ centred on each galaxy. A ‘mark table’ accompanying every image in every filter set provided co-ordinates of galaxy centres. Tables were fitted within an accuracy of $`<1^{\prime \prime }`$ to individual images using copies of a master table derived from the deepest co-added image; that observed in the I-band. In this way consistent indexing of each galaxy was achieved throughout all catalogues. The master table was generated using the object detection software ‘SExtractor’ (Bertin & Arnouts 1996). Only galaxies were contained in the master table, star-like objects being removed after identification by their high brightness and low FWHM. 
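The weighted combination of equations (8) and (9) is a standard inverse-variance mean; a minimal sketch (names ours):

```python
def weighted_count(counts, sigmas):
    """Equations (8) and (9): inverse-variance weighted mean photon count
    over the images of one filter set, and its combined error."""
    weights = [1.0 / s ** 2 for s in sigmas]
    mean = sum(c * w for c, w in zip(counts, weights)) / sum(weights)
    error = sum(weights) ** -0.5
    return mean, error
```

Equal errors reduce this to the ordinary mean; an image with twice the background noise carries a quarter of the weight.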
With a detection threshold of $`3\sigma `$ above the average background flux, galaxies in the I-band image were catalogued after coincidence matching with objects detected at the $`3\sigma `$ level in the associated V-band data presented by T98. Despite cataloguing $`\sim 3000`$ galaxies, the resulting number was limited to a total of $`\sim 1000`$ due to the relatively shallow data observed with the 466/8 narrow-band filter. ### 3.3 Photometry The integration of photon counts in an aperture of fixed size requires constant seeing across all images to allow correct determination of colours. To ensure constant seeing, all images were degraded by Gaussian convolution to the worst seeing of $`2.1^{\prime \prime }`$ measured in the 466/8 filter before galaxy counts were evaluated. The effects of changing weather conditions were compensated for by normalising images within each filter set to an arbitrarily chosen image in that set. Normalisation was conducted by scaling the galaxy counts in each image so that the average counts of the same stars in all images were equal. This ensured correct calculation of the weighted counts and the errors from equations (8) and (9). These quantities were later scaled to their calibrated photometric values. Calibration of the photometric fluxes from the weighted counts was provided using the spectrum of the dominant elliptical galaxy in the centre of A1689 taken from Pickles & van der Kruit (1991). Denoting this spectrum as the function $`F_s(\lambda )`$, the calibration scale factors $`k_b`$ for each filter $`b`$ were calculated using $$I^{(b,s)}=k_b\int d\lambda \frac{E(\lambda )T_b(\lambda )F_s(\lambda )\lambda }{hc}$$ (10) where the function $`T_b(\lambda )`$ describes the filter transmission efficiency, $`E(\lambda )`$ is the combined filter-independent efficiency of the detector and telescope optics and $`I^{(b,s)}`$ is the measured integrated photon count rate of the central galaxy. 
The values of $`k_b`$ obtained in this way were only relatively correct owing to the lack of an absolute calibration of the published spectrum. Absolute calibration scale factors were calculated from observations of the standard star G60-54 (Oke 1990) in all filters in exactly the same manner. Verification of this absolute calibration was provided by the consistency of ratios of $`k_b(\mathrm{absolute})/k_b(\mathrm{relative})`$ to a zero point of $`\mathrm{\Delta }m=2.11\pm 0.01`$ magnitudes averaged over all filters. Consideration of equation (10) shows that only the quantity $$\int d\lambda E(\lambda )T_b(\lambda )F_m(\lambda )\lambda =\frac{hcI^{(b,m)}}{k_b}$$ (11) can be known for any galaxy $`m`$ with a calibrated photon count rate. The required photometric flux $$F^{(b,m)}=\int d\lambda T_b(\lambda )F_m(\lambda )$$ (12) cannot therefore be directly determined without making an approximation such as the simplification of filter transmission curves to top hat functions. Although this is acceptably accurate in narrow band filters, it is not for broad band filters. This problem was avoided by the more sophisticated technique of fitting model spectra to measured galaxy colours as the next section discusses. ### 3.4 Photometric redshift evaluation Direct calculation of photometric fluxes using equation (12) was made possible by fitting model spectra to the set of calibrated photon count rates measured for each galaxy across all filters. Expressed more quantitatively, equation (11) was applied for each filter to a library of template spectra to arrive at a set of scaled filter counts for each spectrum. Galaxies were then allocated library spectra by finding the set of library colours which best fit the measured galaxy colours. Note that this differs from conventional template fitting where spectra are redshifted and scaled to fit observed colours in a much more time costly manner. The spectral library was formed from the template galaxy spectra of Kinney et al. (1996). 
A regular grid of galaxy templates was generated, varying in redshift along one axis from $`z=0`$ to $`z=1.6`$ in steps of $`\mathrm{\Delta }z=0.002`$ and ranging over 100 spectral types from ellipticals, through spirals to starbursts along the other. The set of photometric errors given by equation (9) for an individual galaxy across all filters gives rise to an error ellipsoid in colour space. Using the size and location of these error ellipsoids, probabilities of each library entry causing the observed sets of colours for each galaxy were then calculated as $$p(𝒒|z,s)=\frac{1}{\sqrt{(2\pi )^n|V|}}\mathrm{exp}\left(-\frac{1}{2}\sum _{j=1}^{n}\frac{[q_j-Q_j(z,s)]^2}{\sigma _j^2}\right)$$ (13) where $`n`$ is the number of colours, $`\sigma _j`$ comes from propagation of the error given by equation (9) and $`V\equiv \mathrm{diag}(\sigma _1^2,\dots ,\sigma _n^2)`$. Each galaxy’s position vector in colour space, $`𝒒\equiv (q_1,\dots ,q_n)`$ is compared with the colour vector $`𝑸`$ of the library spectrum with a given redshift $`z`$ and type $`s`$. Finding the maximum probability corresponding to the closest set of matching colours therefore immediately establishes redshift and galaxy type. An assessment of the uncertainty in this redshift is subsequently obtained directly from the distribution of the probabilities associated with neighbouring library spectra. Figure 1 shows the distribution of the 958 successfully classified galaxy redshifts estimated from the full filter set. We measure an average redshift error of $`\sigma _z=0.08`$. The maximum redshift limit of $`z\sim 0.8`$ comes from the condition that the 4000Å break must lie in or blue-ward of the second reddest filter in the set. The peak at $`z\sim 0.18`$ in Figure 1 is clearly the contribution from the cluster galaxies. The feature at $`z\sim 0.4`$ is most likely real and not an artifact of the photometric method. 
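The template matching of equation (13) reduces to evaluating a diagonal-covariance Gaussian over the library grid and taking the maximum; a toy sketch with a two-entry library (the library keys and colour values are invented purely for illustration):

```python
import math

def template_probability(q, Q, sigma):
    """Equation (13): Gaussian probability that library colours Q produced
    the measured colours q, given per-colour photometric errors sigma."""
    n = len(q)
    det_v = math.prod(s ** 2 for s in sigma)            # |V| for diagonal V
    chi2 = sum((qi - Qi) ** 2 / s ** 2 for qi, Qi, s in zip(q, Q, sigma))
    return math.exp(-0.5 * chi2) / math.sqrt((2 * math.pi) ** n * det_v)

def best_template(q, sigma, library):
    """Return the (z, type) key of the library spectrum maximising equation (13)."""
    return max(library, key=lambda key: template_probability(q, library[key], sigma))
```

Because the normalisation is common to every entry, maximising the probability is equivalent to minimising $`\chi ^2`$, which is what makes the grid search over $`100\times 800`$ templates cheap.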
Such artifacts occur due to ‘redshift focusing’ when particular redshifts are measured more accurately than others. Where the uncertainty is larger, galaxies can be randomly scattered out of redshift bins, producing under-densities and corresponding over-densities where the redshift measurement is more accurate. This effect depends on the details of the filter set, being more common when fewer filters are used, but can be modelled by Monte Carlo methods. The top half of Figure 2 shows the results of one realisation of such a Monte Carlo test for redshift focusing. The plot indicates how accurately the method reproduces redshifts of spectra scaled to $`\mathrm{I}=20`$ with photometric noise levels taken from the A1689 filter set. Each point represents a single library spectrum. Reproduced spectral redshifts, $`z_{\mathrm{phot}}`$, were determined by calculating colours through application of equation (12) to the library spectra with redshifts $`z_{\mathrm{lib}}`$. These colours were then randomly scattered by an amount determined from the filter-specific photometric error measured in the A1689 data before application of the redshift estimation method outlined above. The bottom half of Figure 2 shows the same plot generated using spectra scaled to $`\mathrm{I}=21`$ with the same photometric error taken from the A1689 data. The accuracy of reproduced redshifts at $`\mathrm{I}=20`$ is clearly better than those at $`\mathrm{I}=21`$ where photometric noise is more dominant. The lack of any sign of redshift focusing in the vicinity of $`z\sim 0.4`$ leads us to conclude that the feature seen at this redshift in the A1689 data is probably real. The $`\mathrm{I}=21`$ plot, which corresponds approximately to our sample magnitude cut of $`\mathrm{B}=23.6`$ (see Section 5.1), shows that galaxies at redshifts below $`z=0.05`$ have on average higher estimated redshifts. 
This only marginally affects the overall redshift distribution and yet partly explains the lack of galaxies at $`z<0.05`$ in the A1689 redshifts of Figure 1. It is worth emphasising here that the significance of the peak at $`z\sim 0.18`$ attributed to the cluster galaxies is far in excess of any effects of redshift focusing. Figure 3 shows a comparison of the photometrically determined redshifts $`z_{\mathrm{phot}}`$ around the peak of the redshift distribution of Abell 1689, with spectroscopically determined (Teague, Carter & Gray 1990) redshifts $`z_{\mathrm{spec}}`$. A very slight bias between $`z_{\mathrm{phot}}`$ and $`z_{\mathrm{spec}}`$ can be seen. This bias is quantified by fitting the line $`z_{\mathrm{phot}}=z_{\mathrm{spec}}+c`$ by least-squares to the data points, which gives $`c=0.0036`$. Referring to equation (27) shows that if this small bias is applied to all redshifts in our sample, a negligible difference of $`\mathrm{\Delta }\kappa _{\mathrm{\infty }}=0.001`$ would result. Our filter set was selected primarily to distinguish the cluster members, hence at higher redshift we must rely on our Monte Carlo estimates of the redshift uncertainty (see Section 6.1). Abell 1689 lies in a region of sky where there is a very low level of galactic dust. Our redshifts are therefore not affected by this source of contamination. However, dust in the cluster itself is another concern. We have modelled the effects of reddening by cluster dust and find that although magnitudes are slightly affected, the redshifts remain the same. ## 4 Offset field and lens calibration The unlensed, intrinsic magnitude distribution required by the likelihood analysis was taken from an offset field observed as part of the Calar Alto Deep Imaging Survey (CADIS) conducted by the Max-Planck Institut für Astronomie, Heidelberg (Meisenheimer et al. 1998). 
Data for this survey were observed to a complete depth of $`\mathrm{B}\sim 24.5`$ mag in 16 filters from the B-band to the K-band with the 2.2m telescope at Calar Alto. We use the CADIS 16-hour field for our mass calibration. Using exactly the same methods outlined in the previous section for the A1689 data, photometric redshifts and rest-frame absolute magnitudes were determined for all objects in the field. In addition to galaxy templates in the spectral library however, quasar spectra from Francis et al. (1991) and stellar spectra from Gunn & Stryker (1983) were also included, the primary motivation for this difference being the CADIS quasar study. As a by-product, a more sophisticated object classification method was achieved by finding the overall object class yielding the highest significant probability given by equation (13). Details of this and the CADIS quasar study are given in Wolf et al. (1999). To ensure a fair comparison between the offset field and the cluster field, the CADIS B-band ($`\lambda _c/\mathrm{\Delta }\lambda =461/113`$ nm) galaxy magnitudes were used with the A1689 B-band magnitudes in the likelihood analysis discussed in Section 2. Investigation of evolution (see for example, Lilly et al. 1995, Ellis et al. 1996) of the CADIS luminosity function is left for future work. A preliminary study indicated no significant evolution which would impact on the lens mass determination. We therefore applied the same redshift selection as used for the Abell 1689 background sources, and assumed a no–evolution model. We present two estimates of the calibration B-band luminosity function: a nonparametric $`1/V_{\mathrm{max}}`$ method (Section 4.1) and a maximum likelihood parametric fit to a Schechter function (Section 4.2). The former method allows us to see the distribution of luminosities without imposing a preconceived function, and gives a visual impression of the uncertainties in the parametric fit. 
In addition the $`V_{\mathrm{max}}`$ approach allows us to make basic tests for sample completeness (we discuss this in Section 5.1). The latter maximum likelihood fit provides a convenient function for performing the second likelihood analysis to determine lens magnification. We begin by describing the nonparametric method. ### 4.1 The nonparametric CADIS B-band luminosity function An estimate of the luminosity function of galaxies in the CADIS B band was provided initially using the canonical $`1/V_{\mathrm{max}}`$ method of Schmidt (1968). The quantity $`V_{\mathrm{max}}`$ is computed for each galaxy as the comoving volume within which the galaxy could lie and still remain in the redshift and magnitude limits of the survey. For an Einstein-de-Sitter universe, this volume is, $$V_{\mathrm{max}}=\left(\frac{c}{H_0}\right)^3\delta \omega \int _{\mathrm{max}(z_l,z_{m_{\mathrm{min}}})}^{\mathrm{min}(z_u,z_{m_{\mathrm{max}}})}dz\frac{D^2(z)}{(1+z)^{3/2}}$$ where $`\delta \omega `$ is the solid angle of the observed field of view and $`D(z)`$ is $$D(z)=2(1-(1+z)^{-1/2}).$$ (14) The upper limit of the integral in equation (4.1) is set by the minimum of the upper limit of the redshift interval chosen, $`z_u`$, and the redshift at which the galaxy would have to lie to have an apparent magnitude of the faint limit of the survey, $`z_{m_{\mathrm{max}}}`$. Similarly, the maximum of the lower limit of the chosen redshift interval, $`z_l`$, and the redshift at which the galaxy would have to lie to have an apparent magnitude of the bright limit of the survey, $`z_{m_{\mathrm{min}}}`$, forms the lower limit of the integral. This lower integral limit plays a non-crucial role when integrating over large volumes originating close to the observer where the volume element makes only a relatively small contribution to $`V_{\mathrm{max}}`$. 
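The $`V_{\mathrm{max}}`$ integral above has no closed form once the magnitude limits enter, but is cheap to evaluate numerically; a sketch in units of $`(c/H_0)^3`$ using the trapezoidal rule (function names are ours):

```python
def comoving_distance(z):
    """Equation (14): D(z) = 2[1 - (1+z)^(-1/2)], dimensionless (units of c/H0, EdS)."""
    return 2.0 * (1.0 - (1.0 + z) ** -0.5)

def v_max(z_lo, z_hi, solid_angle, n_steps=10000):
    """Numerical V_max of Section 4.1 in units of (c/H0)^3, with the
    redshift limits z_lo, z_hi already combined as in the text."""
    def integrand(z):
        return comoving_distance(z) ** 2 / (1.0 + z) ** 1.5
    h = (z_hi - z_lo) / n_steps
    total = 0.5 * (integrand(z_lo) + integrand(z_hi))
    total += sum(integrand(z_lo + i * h) for i in range(1, n_steps))
    return solid_angle * h * total
```

The volume grows monotonically with the upper limit, so pushing $`z_{m_{\mathrm{max}}}`$ fainter always enlarges $`V_{\mathrm{max}}`$, which is the source of the completeness bias discussed in Section 5.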
The redshifts $`z_{m_{\mathrm{max}}}`$ and $`z_{m_{\mathrm{min}}}`$ are calculated for each object by finding the roots of $$M-m_{\mathrm{lim}}+5\mathrm{log}_{10}\left[\frac{(1+z)D(z)}{h_0}\right]-K(z)=-42.39$$ (15) where $`M`$ is the absolute magnitude of the object, $`m_{\mathrm{lim}}`$ is the appropriate maximum or minimum survey limit and $`K(z)`$ is the K-correction. Although the K-correction for each object at its actual redshift was known from its apparent and absolute rest-frame magnitude, the redshift dependence of this K-correction was not. In principle, this redshift dependence could have been calculated directly for each object using its best-fit spectrum returned from the photometric analysis. However, the much simplified approach of approximating the K-correction as a linear function of redshift was employed. This was primarily motivated by its improved efficiency and the relatively weak influence the K-correction was found to have on the final luminosity function. Lilly et al. (1995) find that the K-correction in their B-band for elliptical, spiral and irregular galaxies is proportional to redshift in the redshift range $`0\le z\le 0.8`$. The following form for $`K(z)`$ was thus adopted $$K(z)=\frac{K(z_0)}{z_0}z$$ (16) where $`K(z_0)`$ is the K-correction of the object at its actual redshift $`z_0`$. Once $`V_{\mathrm{max}}`$ has been calculated for all objects, the estimated luminosity function $`\varphi `$ at the rest-frame absolute magnitude $`M`$ in bins of width $`\mathrm{d}M`$ is then, $$\varphi (M)\mathrm{d}M=\sum _i\frac{1}{V_{\mathrm{max},i}}$$ (17) where the sum acts over all objects with magnitudes between $`M-\mathrm{d}M/2`$ and $`M+\mathrm{d}M/2`$. Figure 4 shows the luminosity function of B band magnitudes from the CADIS offset field which has a solid viewing angle of $`\delta \omega =100\mathrm{arcmin}^2`$. 
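Finding $`z_{m_{\mathrm{max}}}`$ and $`z_{m_{\mathrm{min}}}`$ means root-finding equation (15); a bisection sketch assuming the linear K-correction of equation (16) (the function names and the bracketing interval are ours):

```python
import math

def distance_modulus_residual(z, M, m_lim, k_slope, h=1.0):
    """Left-hand side minus right-hand side of equation (15),
    with K(z) = k_slope * z as in equation (16)."""
    d = 2.0 * (1.0 - (1.0 + z) ** -0.5)  # equation (14), EdS
    return M - m_lim + 5.0 * math.log10((1.0 + z) * d / h) - k_slope * z + 42.39

def z_limit(M, m_lim, k_slope, z_lo=1e-6, z_hi=10.0, tol=1e-8):
    """Bisection solve of equation (15) for the redshift at which an object
    of absolute magnitude M reaches apparent magnitude m_lim.
    Assumes the root is bracketed by [z_lo, z_hi]."""
    f_lo = distance_modulus_residual(z_lo, M, m_lim, k_slope)
    for _ in range(200):
        z_mid = 0.5 * (z_lo + z_hi)
        f_mid = distance_modulus_residual(z_mid, M, m_lim, k_slope)
        if f_lo * f_mid <= 0:
            z_hi = z_mid
        else:
            z_lo, f_lo = z_mid, f_mid
        if z_hi - z_lo < tol:
            break
    return 0.5 * (z_lo + z_hi)
```

For an $`M_{\ast }`$-like galaxy and the CADIS faint limit this puts $`z_{m_{\mathrm{max}}}`$ well beyond the $`z=0.8`$ sample boundary, so the upper integration limit of $`V_{\mathrm{max}}`$ is usually set by $`z_u`$ rather than by the survey depth.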
To match the selection of objects lying behind A1689, only objects within the redshift range $`0.3\le z\le 0.8`$ were chosen. A further restriction on the apparent B magnitude of $`m_\mathrm{B}\le 24.5`$ was applied for completeness of the sample (see Section 5.1), yielding a total of 371 galaxies. The data points in Figure 4 are centred on bins chosen to maintain an equal number of objects in each. The $`1\sigma `$ errors shown here were calculated from Monte Carlo simulations. Object redshifts were randomly scattered in accordance with their associated errors provided in the CADIS dataset. For each realisation, the $`V_{\mathrm{max}}`$ of each object was re-calculated using the re-sampled redshift. The resulting standard deviation of the distribution of values of $`\varphi `$ for each bin given by equation (17) was then taken as the error. In this particular instance, no consideration was given to the magnitude errors or the propagation of the redshift error into object magnitudes. Section 4.2 discusses this further. ### 4.2 Parameterisation of the CADIS B-band luminosity function The maximum likelihood method of Sandage, Tammann & Yahil (1979, STY hereafter) was employed to determine the Schechter function best describing the CADIS B-band magnitudes. This parameterisation is essential for the determination of lens mass using the likelihood approach. In much the same way as the probability in equation (1) was formed, the STY method forms the probability $`p_i`$ that a galaxy $`i`$ has an absolute magnitude $`M_i`$, $$p_i\equiv p(M_i|z_i)\propto \frac{\varphi (M_i)}{\int _{\mathrm{max}(M_{\mathrm{min}}(z_i),M_1)}^{\mathrm{min}(M_{\mathrm{max}}(z_i),M_2)}\varphi (M)dM}$$ (18) where $`M_{\mathrm{max}}(z_i)`$ and $`M_{\mathrm{min}}(z_i)`$ are the absolute magnitude limits corresponding to the apparent magnitude limits of the survey at a redshift of $`z_i`$. Conversion of these apparent magnitude limits includes the K-correction using equation (15) with $`z`$ set to $`z_i`$. 
A further restriction is placed upon the integration range by imposing another set of magnitude limits $`M_1<M<M_2`$ which for the CADIS data were set at the maximum and minimum absolute magnitudes found in the sample. The likelihood distribution in this case is a two dimensional function of the Schechter parameters $`M_{\ast }`$ and $`\alpha `$ formed from the product of all probabilities $`p_i`$. The best fit $`M_{\ast }`$ and $`\alpha `$ are therefore found by maximising the likelihood function, $$\begin{array}{c}\mathrm{ln}\mathcal{L}(M_{\ast },\alpha )=\hfill \\ \sum _{i=1}^{N}\left\{\mathrm{ln}\varphi (M_i)-\mathrm{ln}\int _{\mathrm{max}(M_{\mathrm{min}}(z_i),M_1)}^{\mathrm{min}(M_{\mathrm{max}}(z_i),M_2)}\varphi dM\right\}+c_p\hfill \end{array}$$ (19) with the constant $`c_p`$ arising from the proportionality in equation (18). An estimate of the errors on $`M_{\ast }`$ and $`\alpha `$ is calculated by finding the contour in $`\alpha ,M_{\ast }`$ space which encompasses values of $`\alpha `$ and $`M_{\ast }`$ lying within a desired confidence level about the maximum likelihood $`\mathcal{L}_{\mathrm{max}}`$. To account for uncertainties in the Schechter parameters due to the redshift and magnitude errors derived by the photometric analysis, Monte Carlo simulations were once again performed. Redshifts and absolute magnitudes of the entire sample were randomly scattered before re-application of the maximum likelihood process each time. The error in absolute magnitude was calculated using simple error propagation through equation (15) yielding, $$\sigma _{M_i}^2=\left(\frac{K_i}{z_i}-\frac{5}{\mathrm{ln}10}\frac{1-0.5(1+z_i)^{-1/2}}{1+z_i-(1+z_i)^{1/2}}\right)^2\sigma _{z_i}^2+\sigma _{m_i}^2$$ (20) for each object with redshift $`z_i`$ and apparent magnitude $`m_i`$. Here, the K-correction given by equation (16) has been used such that the quantity $`K_i\equiv K(z_i)`$ is calculated from equation (15) using $`m_i`$, $`M_i`$ and $`z_i`$ as they appeared in the CADIS dataset. 
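A compact sketch of the STY likelihood of equations (18) and (19) for a magnitude-limited sample, assuming a Schechter form for $`\varphi `$ (the data layout and function names are ours; the integral is done by the trapezoidal rule rather than with incomplete gamma functions):

```python
import math

def schechter(M, M_star, alpha):
    """Schechter function in magnitudes (shape only; the normalisation
    cancels in equation 18, which is why phi* drops out of the STY fit)."""
    x = 10.0 ** (0.4 * (M_star - M))
    return x ** (alpha + 1) * math.exp(-x)

def sty_log_likelihood(data, M_star, alpha, n_grid=400):
    """Equation (19) up to the constant c_p. `data` holds per-galaxy tuples
    (M_i, M_bright_i, M_faint_i): the absolute magnitude and the effective
    survey limits at that galaxy's redshift (limits already combined with
    M_1, M_2 as in the text)."""
    log_l = 0.0
    for M_i, M_lo, M_hi in data:
        h = (M_hi - M_lo) / n_grid
        grid = [M_lo + k * h for k in range(n_grid + 1)]
        vals = [schechter(M, M_star, alpha) for M in grid]
        integral = h * (0.5 * vals[0] + sum(vals[1:-1]) + 0.5 * vals[-1])
        log_l += math.log(schechter(M_i, M_star, alpha)) - math.log(integral)
    return log_l
```

Maximising this over a grid in $`(M_{\ast },\alpha )`$, and reading off the contour at $`\mathrm{ln}\mathcal{L}_{\mathrm{max}}-0.5\mathrm{\Delta }\chi ^2`$, gives the confidence regions of Figure 5.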
The typical ratio of apparent magnitude to redshift error was found to be $`\sigma _{m_i}/\sigma _{z_i}\sim 1\%`$ due to the relatively imprecise nature of photometric redshift determination. The final errors on $`M_{\ast }`$ and $`\alpha `$ were taken from an effectively convolved likelihood distribution obtained by combining the scattered distributions produced from the Monte Carlo simulations. Figure 5 shows the final $`1\sigma `$ and $`2\sigma `$ likelihood contours calculated allowing for redshift and magnitude errors. These predict the resulting parameters $$M_{\ast }=-19.43_{-0.64}^{+0.47}+5\mathrm{log}_{10}h,\alpha =-1.45_{-0.23}^{+0.25}$$ (21) where projected $`1\sigma `$ errors are quoted. For completeness, the normalisation was also calculated to be $$\varphi ^{\ast }=0.0164h^3\mathrm{Mpc}^{-3},$$ (22) in this case where evolution of the luminosity function has been neglected. ## 5 Sample Consistency Ensuring that the A1689 sample of sources is consistent with the CADIS offset field sample is necessary to prevent biases from entering our results. The first level of compatibility we have already enforced by applying a redshift selection of $`0.3\le z\le 0.8`$ to both samples. The second, discussed in Section 5.1 below, is sample completeness. A slightly less obvious consideration must also be given to galaxy morphological type as Section 5.2 explains. ### 5.1 Completeness Determination of the faint magnitude limit beyond which both the A1689 and CADIS data sets become incomplete is important for the calculation of an accurate lens mass. Both samples must be complete for fair comparison. Incorrect evaluation of the CADIS limiting magnitude results in larger values of $`V_{\mathrm{max}}`$ and hence a biased luminosity function not representative of the intrinsic A1689 distribution. Similarly, completeness of the A1689 sample also affects the lens mass in a manner quantified in Section 6.5. 
Estimation of the completeness of both data sets was provided using the $`V/V_{\mathrm{max}}`$ statistic (Schmidt 1968). In this ratio, $`V_{\mathrm{max}}`$ is calculated using equation (4.1) whereas $`V`$ is the comoving volume described by the observer’s field of view from the same lower redshift limit in the integral of equation (4.1) to the redshift of the object. If a sample of objects is unclustered, exhibits no evolution and is complete, the position of each object in its associated volume $`V_{\mathrm{max}}`$ will be completely random. In this case, the distribution of the $`V/V_{\mathrm{max}}`$ statistic over the range 0 to 1 will be uniform with $`\langle V/V_{\mathrm{max}}\rangle =0.5`$. If the sample is affected by evolution such that more intrinsically bright objects lie at the outer edges of the $`V_{\mathrm{max}}`$ volume, then $`\langle V/V_{\mathrm{max}}\rangle `$ is biased towards values larger than 0.5. The reverse is true if a larger number of brighter objects lie nearby. If the sample is incomplete at the limiting apparent magnitude chosen, estimations of $`V_{\mathrm{max}}`$ will be on average too large and will cause $`\langle V/V_{\mathrm{max}}\rangle `$ to be biased towards values less than 0.5. The requirement that $`\langle V/V_{\mathrm{max}}\rangle =0.5`$ for completeness is also subject to fluctuations due to finite numbers of objects. In the absence of clustering, the uncertainty due to shot noise on $`\langle V/V_{\mathrm{max}}\rangle `$ calculated from $`N`$ galaxies can be simply shown to be $$\sigma _{\langle V/V_{\mathrm{max}}\rangle }^2=\frac{1}{12N}.$$ (23) In order to arrive at an apparent magnitude limit for the CADIS and A1689 fields, values of $`\langle V/V_{\mathrm{max}}\rangle `$ were calculated for different applied limiting magnitudes and plotted as shown in Figure 6. The grey region in this plot corresponds to the $`1\sigma `$ errors described by equation (23) which lessen at the fainter limiting magnitudes due to the inclusion of more objects. 
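The completeness test then amounts to comparing the sample mean of $`V/V_{\mathrm{max}}`$ with 0.5 at the shot-noise level of equation (23); a minimal sketch (names ours):

```python
def v_over_vmax_test(v_ratios):
    """Mean V/Vmax of a sample and the shot-noise error of equation (23),
    i.e. the standard deviation of the mean of N uniform [0,1] variates."""
    n = len(v_ratios)
    mean = sum(v_ratios) / n
    return mean, (1.0 / (12.0 * n)) ** 0.5

# a complete, unclustered, non-evolving sample has V/Vmax uniform on [0, 1]
```

A mean significantly below $`0.5-\sigma `$ flags incompleteness at the trial magnitude limit, which is how the cuts in Figure 6 are read off.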
Clustering adds extra noise and so these errors are an underestimate of the true uncertainty (one can show that the uncertainty in $`\langle V/V_{\mathrm{max}}\rangle `$ increases by the square root of the average number of objects per cluster). Without knowledge of the effects of clustering, Figure 6 shows that a limiting magnitude of $`m_\mathrm{B}\simeq 24.5`$ for CADIS and $`m_\mathrm{B}\simeq 23.6`$ for A1689 corresponds to a value of $`\langle V/V_{\mathrm{max}}\rangle \simeq 0.5`$ and thus completeness. These magnitudes are in agreement with the apparent magnitude limits at which the number counts begin to fall beneath those measured by deeper surveys (such as the B band observations of Lilly, Cowie & Gardner (1991) which extend to a depth of $`m_\mathrm{B}\sim 26`$) and also correspond to a $`10\sigma `$ object detection threshold deduced from the photometry. Figure 7 shows the distribution of galaxies in the redshift–apparent magnitude plane in the Abell 1689 field. Abell 1689 itself can clearly be seen at $`z=0.18`$. Superimposed is the region of parameter space we use for the mass determination with $`0.3<z<0.8`$ and $`18\le m_\mathrm{B}\le 23.6`$ yielding 146 galaxies. ### 5.2 Morphological Type Perhaps the most difficult inconsistency to quantify is that of variation in galaxy morphological type between samples. It has been known for some time that elliptical galaxies cluster more strongly than spirals (Davis & Geller 1976) and that the elliptical fraction in clusters is an increasing function of local density (Dressler 1980). One might therefore expect a sample of galaxies lying behind a large cluster such as A1689 to contain a higher proportion of ellipticals than a sample of field galaxies away from a cluster environment. Since the luminosity function of E/S0 galaxies is thought to be different from that of spirals (see, for example, Chiba & Yoshii 1999 and references therein), comparison of our A1689 sample with the CADIS offset field sample may be expected to introduce a bias. 
A related issue stems from the fact that the determination of an object’s photometric redshift requires its detection in every filter belonging to the filter set. Both the CADIS and A1689 filter sets contain narrow band filters which cause the main restriction on which objects enter into the photometric analysis. Our $`V/V_{\mathrm{max}}`$ test in the B-band therefore does not give a true limiting B magnitude but one which applies only to objects detectable across all filters. As long as both samples under comparison are complete according to this test, the sole consequence of this is that certain morphological types will be under-represented. With the CADIS filter set of 16 filters differing from the 9 filters used for the observation of A1689, this again might be expected to cause inconsistent galaxy types between fields, biasing results. Fortunately, our photometric analysis yields morphological type in addition to redshift. We are therefore able to directly measure the fraction of galaxies of a given type in both samples and thus test for biases. We find that in the CADIS sample, the ratio of E/S0:Spiral galaxies is $`12\%\pm 7\%`$ and for the A1689 sample, this is $`25\%\pm 13\%`$ with Poisson errors quoted. This is in reasonable agreement with the canonical Postman and Geller (1984) E/S0:S fraction for field galaxies of about 30%. Approximately 60%-70% of both samples are classified as starburst galaxies. Given the uncertainty in these fractions, we would argue that, as far as we can tell, they are consistent with each other. Without a bigger sample of galaxies and possibly spectroscopically confirmed morphologies we are unable to do better, although, as the measurements stand, we would not expect any serious inconsistencies in morphology which would bias our results. 
## 6 Mass Determination Taking the CADIS luminosity function as a good estimate of the intrinsic distribution of A1689 source magnitudes in the range $`0.3\le z\le 0.8`$, we can calculate $`\kappa _{\mathrm{\infty }}`$ and $`\mathrm{\Sigma }`$ assuming a relation between $`\kappa `$ and $`\gamma `$. As noted in Section 4, in the absence of a detection of evolution of the luminosity function with redshift, we assume a no-evolution model for the background sources in this field. In general, the occurrence of evolution is anticipated; however, we expect its inclusion in the background model to have only a minor effect on the derived mass. For the purpose of comparison with other studies we shall quote values of $`\kappa `$ as well as $`\kappa _{\mathrm{\infty }}`$ and $`\mathrm{\Sigma }`$. As $`\kappa `$ is dependent on the source redshift, this is not a useful quantity to quote when the redshift distribution is known. The convergence we quote is the redshift averaged quantity defined by $$\kappa =\frac{\kappa _{\mathrm{\infty }}}{N_b}\sum _{i=1}^{N_b}f(z_i)=\kappa _{\mathrm{\infty }}f_b$$ (24) where $`N_b`$ is the number of source galaxies. For the field of Abell 1689 we find that $`f_b=0.57`$, giving an effective source redshift of $`z_{\mathrm{eff}}=0.45`$. ### 6.1 Sources of uncertainty Three sources of error on $`\kappa _{\mathrm{\infty }}`$ were taken into consideration: * The maximum likelihood error obtained from the width of the likelihood distribution at $`\mathrm{ln}\mathcal{L}_{\mathrm{max}}-0.5\mathrm{\Delta }\chi ^2`$, with $`\mathrm{\Delta }\chi ^2`$ the desired confidence level. All object magnitudes and redshifts were taken as presented directly in the A1689 data while assuming the Schechter parameters from equation (21). * The uncertainty of the Schechter parameters $`M_{\ast }`$ and $`\alpha `$ from the likelihood analysis of the CADIS offset field. * The redshift and magnitude uncertainties of individual objects in the A1689 data, derived from the photometric analysis. 
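Equation (24) is a straightforward average; the sketch below (names ours, EdS $`f(z)`$ from equation (4)) reproduces $`f_b\approx 0.57`$ for a source at the quoted effective redshift $`z_{\mathrm{eff}}=0.45`$ with $`z_L=0.185`$:

```python
import math

def f(z, z_lens):
    # equation (4): source-redshift scaling of the convergence (EdS)
    return (math.sqrt(1 + z) - math.sqrt(1 + z_lens)) / (math.sqrt(1 + z) - 1)

def mean_kappa(kappa_inf, source_redshifts, z_lens=0.185):
    """Equation (24): redshift-averaged convergence over the N_b sources.
    Returns (kappa, f_b)."""
    f_b = sum(f(z, z_lens) for z in source_redshifts) / len(source_redshifts)
    return kappa_inf * f_b, f_b
```

In practice the sum runs over the 146 selected background galaxies; the effective redshift is simply the $`z`$ at which $`f(z)`$ equals the sample mean $`f_b`$.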
In Section 7 we will show that the contribution of each source of uncertainty to the overall error depends on the number of galaxies included in the analysis. Taking all 146 galaxies across the entire field of view, the errors from each contributor listed above, expressed as a percentage of the total standard deviation, were found to be: 50% from the maximum likelihood (essentially the shot noise), 25% from the uncertainty in $`M_{\ast }`$ and $`\alpha `$, and 25% from the redshift and magnitude error. The latter two sources of error in the above list were simultaneously included using the Monte Carlo method. 1000 simulations were carried out, randomly drawing values of $`M_{\ast }`$ and $`\alpha `$ from the convolved likelihood distribution shown in Figure 5. For each realisation, redshifts and absolute magnitudes of objects in the A1689 field were scattered in exactly the same fashion as before with the CADIS dataset using their associated photometric errors. The standard deviation of the scattered values of $`\kappa _{\mathrm{\infty }}`$ produced in this way was then added in quadrature to the maximum likelihood error obtained from item one of the list above to give the overall error. The magnitude calibration error of $`\sigma _{\mathrm{\Delta }M}=0.01`$ discussed in Section 3.3 was ignored. Inspection of the form of the Schechter function in equation (3) shows that a systematic magnitude offset is exactly equivalent to an error in $`M_{\ast }`$. Clearly, the $`1\sigma `$ error quoted for $`M_{\ast }`$ in equation (21) completely overwhelms this magnitude calibration uncertainty, which was therefore deemed insignificant. Finally, the dependence of our measurement of $`\kappa _{\mathrm{\infty }}`$ on the feature seen in the A1689 redshift distribution at $`z\sim 0.4`$ was tested. We removed all galaxies contributing to this peak and re-calculated the results of the following two sections. 
Apart from a larger uncertainty due to the decreased number of objects, we found very little difference from the results obtained from the full dataset, indicating that our measurement is not dominated by the concentration of galaxies at $`z\sim 0.4`$. ### 6.2 The differential radial $`\kappa `$ profile Background source objects from the A1689 data set were binned in concentric annuli about the cluster centre for the calculation of a radial mass profile. The relatively small number of objects contained in the sample, however, was insufficient to allow computation of a profile similar in resolution to that of T98. Apart from the effects of shot noise, this limitation results from the simple fact that bins which are too narrow do not typically contain a large enough number of intrinsically bright objects. This has the effect that the knee of the Schechter function assumed in the likelihood analysis is poorly constrained. As equation (1) shows, a large uncertainty in $`M_{\ast }`$ directly results in a large error on the magnification and hence on $`\kappa _{\mathrm{\infty }}`$. Experimentation with a range of bin widths quickly showed that in order to achieve a tolerable precision for $`\kappa _{\mathrm{\infty }}`$, bins had to be at least $`1.1`$ arcmin in width. With the observed field of view, this gave a limiting number of three bins, illustrated spatially in the lower half of Figure 8. In Section 6.3 we find the average profile within an aperture, which provides a more robust measurement of $`\kappa `$. The top half of Figure 8 shows the $`\kappa `$ data points. These were converted from the maximum likelihood derived $`\kappa _{\mathrm{\infty }}`$ for each bin using equation (24). The profile of T98 is shown superimposed for comparison. Upper points correspond to the sheet estimator while the lower points are due to the isothermal estimator. The $`1\sigma `$ error bars plotted were calculated taking all three contributions listed in Section 6.1 into account. 
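The annular binning described above can be sketched as follows (a hedged illustration: the 1.1 arcmin bin width and the three-bin limit are from the text, while the galaxy coordinates and cluster centre are placeholders):

```python
import math

def annular_bin(x, y, centre=(0.0, 0.0), width=1.1, n_bins=3):
    # Assign a galaxy at (x, y), in arcmin, to one of n_bins concentric
    # annuli about the cluster centre; returns None beyond the last bin.
    r = math.hypot(x - centre[0], y - centre[1])
    i = int(r // width)
    return i if i < n_bins else None
```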
Despite relatively large errors, the data points show an amplitude in good agreement with the profile derived from the number count study of T98. These errors seem large in comparison to those of the number count profile, but the number count errors do not take the systematic uncertainties in background count normalisation, number count slope or background source redshift into consideration. It is noticeable that the data points suggest a profile that is perhaps a little flatter than that derived by T98. It appears that more mass is detected at larger radii, although this is not particularly significant. ### 6.3 Aperture $`\kappa `$ profile In addition to the radial profile, the variation of average surface mass density contained within a given radius can be calculated. By applying the likelihood analysis to the objects contained within an aperture of varying size, a larger signal to noise can be attained at larger radii where more objects are encompassed. With a small aperture, the same low galaxy count problem is encountered, as the large uncertainty in this vicinity in Figure 9 shows. In this plot, the parabolic estimator of equation (6) is used to obtain $`\kappa _{\mathrm{\infty }}`$ since, as T98 show, this is a good average of the isothermal and sheet estimators and agrees well with their self-consistent axi-symmetric solution. Application of the axi-symmetric solution is not viable in this case since we are limited to only 3 bins. Using equation (24), $`\kappa _{\mathrm{\infty }}`$ is again scaled to $`\kappa `$. The grey shaded region in Figure 9 depicts the $`1\sigma `$ errors, with the sources of uncertainty from Section 6.1 taken into account. The thin solid and dashed black lines show the variation of aperture $`\kappa `$ and its error calculated by averaging the parabolic estimator profile presented in T98. The error does not account for uncertainties arising from background count normalisation, number count slope or background source redshift. 
The results of T98 were shown to be in good agreement with the shear analysis of Kaiser (1996) and hence we find a consistent picture of the mass amplitude and slope from all three independent methods. As expected from the results of Section 6.2, generally more mass than that predicted from the number counts is seen, especially at large radii. The following section quantifies this for a comparison with the projected mass result of T98. ### 6.4 Projected mass From the values of $`\kappa _{\mathrm{\infty }}`$ used to generate the $`\kappa `$ profile in Section 6.2 and the result of equation (5), the cumulative projected masses in Table 2 were calculated. Errors were derived from propagation of the errors on the binned values of $`\kappa _{\mathrm{\infty }}`$. These projected masses are in excellent agreement with those of T98. At the redshift of the cluster $`1^{\prime }=0.117h^{-1}\mathrm{Mpc}`$ and hence the second cumulative mass listed in Table 2 gives $$M_{2d}(<0.25h^{-1}\mathrm{Mpc})=(0.48\pm 0.16)\times 10^{15}h^{-1}\mathrm{M}_{\mathrm{\odot }}$$ (25) which is perfectly consistent with the result from the number count study. The error here is comparable to the $`\sim 30\%`$ error quoted for the result of T98 when allowing for all sources of uncertainty. The projected mass contained within 195 arcsec is a little higher than that predicted by T98, although it remains arguably consistent given the errors involved in each. ### 6.5 Effects of sample incompleteness One final uncertainty not taken into consideration so far is that of sample incompleteness. Changing the limiting apparent B magnitude in the determination of the CADIS luminosity function directly affects the fitted values of $`M_{\ast }`$, $`\alpha `$ and hence the maximum likelihood $`\kappa _{\mathrm{\infty }}`$. Similarly, differing numbers of objects included in the A1689 sample from variations in its limiting magnitude also have an influence on $`\kappa _{\mathrm{\infty }}`$. Table 3 quantifies this effect for the CADIS objects. 
It can be seen that increasing the faint limit $`m_{\mathrm{max}}`$ (i.e. including fainter objects) has little effect on $`\kappa _{\mathrm{\infty }}`$ until the limit $`m_{\mathrm{max}}\sim 24.5`$ is reached. Beyond this limit, $`\kappa _{\mathrm{\infty }}`$ starts to fall. Two inferences can therefore be made. Firstly, this suggests that the magnitude limit in Section 5.1 from the $`V/V_{\mathrm{max}}`$ test, being consistent with the limit here, was correctly chosen. Secondly, $`\kappa _{\mathrm{\infty }}`$ is relatively insensitive to the choice of $`m_{\mathrm{max}}`$ if the sample is complete (and not so small that shot noise starts to take effect). The effect of varying the magnitude limit of the A1689 sample is quantified in Table 4. A clear trend is also seen here; as its limiting faint magnitude $`m_{\mathrm{max}}`$ is reduced, $`\kappa _{\mathrm{\infty }}`$ falls. Assuming linearity, a rough estimate of the uncertainty of $`\kappa _{\mathrm{\infty }}`$ given the uncertainty of the sample magnitude limit is given by: $$\mathrm{\Delta }\kappa _{\mathrm{\infty }}=\{\begin{array}{cc}0.1\mathrm{\Delta }m_{\mathrm{max}}\hfill & \mathrm{isothermal}\hfill \\ 0.2\mathrm{\Delta }m_{\mathrm{max}}\hfill & \mathrm{parabolic}\hfill \\ 0.4\mathrm{\Delta }m_{\mathrm{max}}\hfill & \mathrm{sheet}\hfill \end{array}$$ (26) Referring to Figure 6, a suitable uncertainty in $`m_{\mathrm{max}}`$ of the A1689 sample of say $`\pm 0.2`$ magnitudes might be argued. If this were the case, the projected masses of the previous section calculated with the parabolic estimator would have a further error of $`\sim 5\%`$ which is negligibly small. ## 7 Signal-to-Noise Predictions Including all possible contributions of uncertainty in the calculation of mass, the previous section showed that even with relatively few galaxies, a significant cluster mass profile can be detected. One can make predictions of the sensitivity of the method with differing input parameters potentially obtained by future measurements. 
This exercise also serves as an optimisation study, enabling identification of quantities requiring more careful measurement and those which play an insignificant part. The most convenient means of carrying out this investigation is by application of the reconstruction method to simulated galaxy catalogues. Catalogues were therefore constructed by randomly sampling absolute magnitudes from the Schechter function fitted to the CADIS offset field in Section 4.2. Redshifts were assigned to each magnitude by randomly sampling the distribution parameterised by T98 (their equation 22) from the Canada France Redshift Survey (Lilly et al. 1995). A range of catalogues was produced, varying in the number of objects they contained and in their distribution of galaxy redshift errors, modelled on the A1689 data. Figure 10 shows how the distribution of photometric redshift error, $`\sigma _z`$, correlates with the A1689 B-band apparent magnitude. No significant correlation between $`\sigma _z`$ and redshift was found. Catalogue objects were thus randomly assigned redshift errors in accordance with their apparent magnitude, given by the correlated distribution in Figure 10. Different catalogues were generated from different scalings of this distribution along the $`\sigma _z`$ axis. Each catalogue was then lensed with a sheet mass characterised by $`\kappa _{\mathrm{\infty }}=1`$ before applying the reconstruction. 1000 Monte Carlo realisations were performed for each reconstruction, scattering object redshifts according to their assigned errors in the same manner as in the reconstruction of A1689. Furthermore, to model the uncertainty associated with the offset field, assumed values of the Schechter parameters $`M_{\ast }`$ and $`\alpha `$ were once again subjected to Monte Carlo realisations. All catalogues were reconstructed assuming sets of Schechter parameters drawn from a range of scaled versions of the distribution shown in Figure 5. 
The resulting scatter measured in the reconstructed value of $`\kappa _{\mathrm{\infty }}`$ for each catalogue and assumed $`\alpha `$-$`M_{\ast }`$ scaling was combined with the average maximum likelihood error across all realisations of that catalogue to give an overall error. This total error was found to be well described by $$\sigma _{\kappa _{\mathrm{\infty }}}^2=\frac{1+(2\sigma _z)^2}{n}+(0.12\sigma _{M_{\ast }})^2+(0.37\sigma _\alpha )^2-0.18\sigma _{\alpha M_{\ast }}$$ (27) where $`n`$ is the number of galaxies, $`\sigma _z`$ is the sample average redshift error and $`\sigma _{M_{\ast }}`$ and $`\sigma _\alpha `$ are the projected errors on $`M_{\ast }`$ and $`\alpha `$ respectively, as quoted in equation (21). The quantity $`\sigma _{\alpha M_{\ast }}`$ is the covariance of $`\alpha `$ and $`M_{\ast }`$ defined by $$\sigma _{\alpha M_{\ast }}=\frac{\int \mathcal{L}(M_{\ast },\alpha )(M_{\ast }-\langle M_{\ast }\rangle )(\alpha -\langle \alpha \rangle )dM_{\ast }d\alpha }{\int \mathcal{L}(M_{\ast },\alpha )dM_{\ast }d\alpha }$$ (28) where the likelihood distribution $`\mathcal{L}`$ is given by equation (19). We find that $`\sigma _{\alpha M_{\ast }}=0.039`$ for the CADIS offset field. Equation (27) is valid for $`n\stackrel{>}{\sim }20`$ and $`0.0\le \sigma _z\le 0.3`$. Equation (27) shows that when the number of objects is low, shot noise dominates. With $`n\stackrel{>}{\sim }200`$ however, uncertainties from the calibration of the offset field start to become dominant. The factor of 2 in the photometric redshift error term stems from the fact that redshift errors also translate directly into absolute magnitude errors through equation (20). A discrepancy arises when comparing this redshift error with the redshift error contribution of 25% claimed for the A1689 data in Section 6.1. This is accounted for by the fact that K-corrections were present in the A1689 data whereas in the simulated catalogues there were not. Equation (20) quantifies the increase in magnitude error with the inclusion of K-corrections. This translates to an approximate increase of $`\sim 20\%`$ in the overall error with an average K-correction of $`\sim 1.0`$ for the A1689 data. 
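Equation (27) is straightforward to evaluate. A sketch (assuming the covariance term enters with a negative sign, and leaving the calibration errors as arguments since their values are quoted in equation (21), outside this excerpt; the numbers in the usage below are purely illustrative):

```python
import math

def sigma_kappa_inf(n, sigma_z, sigma_M, sigma_alpha, cov_alpha_M=0.039):
    # Overall error on kappa_inf, following equation (27). The default
    # cov_alpha_M = 0.039 is the CADIS covariance quoted after
    # equation (28). Stated validity: n >~ 20 and 0 <= sigma_z <= 0.3.
    var = ((1.0 + (2.0 * sigma_z) ** 2) / n
           + (0.12 * sigma_M) ** 2
           + (0.37 * sigma_alpha) ** 2
           - 0.18 * cov_alpha_M)
    return math.sqrt(var)
```

The 1/n term makes explicit why shot noise dominates for small samples, while the calibration terms set an error floor once the sample is large.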
Emphasis should be placed on the criteria for which equation (27) is valid. The predicted overall error rises dramatically when fewer than $`\sim 20`$ objects are included in the analysis. Simulations with 15 objects resulted in maximum likelihood errors rising to beyond twice that predicted by simple shot noise. This stems mainly from the effect mentioned in Section 6.2, namely the failure of the likelihood method when the knee of the Schechter function is poorly constrained. The most immediate improvement to a multi-colour study such as this would therefore be to increase galaxy numbers. As previously noted, only when bins contain $`\sim 200`$ objects do offset field uncertainties become important. Observing in broader filters is one way to combat the limit presented by galaxy numbers. Section 3.2 noted how the 3000 galaxies detected in the I band image were instantly reduced to 1000 by the shallow depth limit placed by the narrow 466/8 filter, even though both were observed for similar integration times. Using broader filters will also inevitably give rise to less accurate photometric redshifts. However, as the analysis of this section has shown, one can afford to sacrifice redshift accuracy quite considerably before its contribution becomes comparable to that of shot noise. Deeper observations provide another obvious means of increasing the number of galaxies. The error predictions above indicate that the expected increase in galaxy numbers using an 8 metre class telescope with the same exposure times as those in this work should reduce shot noise by a factor of $`\sim 3`$. Since deeper observations would also reduce redshift and offset field calibration uncertainties to negligible levels, the only source of error would be shot noise. In this case, the signal to noise for $`\kappa _{\mathrm{\infty }}`$ from equation (27) becomes simply $`\kappa _{\mathrm{\infty }}\sqrt{n}`$ and hence our mass estimate for A1689 could be quoted with a $`9\sigma `$ certainty. 
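The shot-noise-limited scaling quoted above follows directly: with only the 1/n term of equation (27) remaining, the significance is $`\kappa _{\mathrm{\infty }}\sqrt{n}`$, so a factor-3 reduction in shot noise corresponds to roughly nine times as many galaxies. A sketch (the 146-object count is from Section 6.1; reading the factor-3 claim as a ninefold gain in galaxy numbers is our illustrative assumption):

```python
import math

def snr_shot_limited(kappa_inf, n):
    # Equation (27) with only the shot-noise term: sigma = 1/sqrt(n),
    # so the detection significance is kappa_inf * sqrt(n).
    return kappa_inf * math.sqrt(n)

n_now = 146           # galaxies in the present A1689 sample
n_deep = 9 * n_now    # a factor-3 drop in shot noise needs ~9x the galaxies
boost = snr_shot_limited(1.0, n_deep) / snr_shot_limited(1.0, n_now)
```

The significance ratio `boost` is exactly 3, which is how a roughly 3-sigma measurement becomes the quoted 9-sigma one.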
## 8 Summary Photometric redshifts and magnitudes have been determined for objects in the field of Abell 1689 from multi-waveband observations. This has allowed calculation of the luminosity function of source galaxies lying behind the cluster. Comparison of this with the luminosity function obtained from a similar selection of objects in an unlensed offset field has resulted in the detection of a bias in the A1689 background object magnitudes attributed to lens magnification by the cluster. To ensure that systematic biases do not affect our results, we have given careful consideration to the consistency between the A1689 dataset and the CADIS offset field dataset. We find the distribution of galaxy types within the redshift range $`0.3\le z\le 0.8`$ applied to both samples to be very similar. This demonstrates that our lower redshift cutoff is sufficient to prevent objects in the A1689 dataset from being significantly influenced by the cluster environment. After allowing for sources of uncertainty due to redshift and magnitude error, offset field calibration error and likelihood error (including shot noise), a significant radial mass profile for A1689 has been calculated. We predict a projected mass interior to $`0.25h^{-1}\mathrm{Mpc}`$ of $$M_{2d}(<0.25h^{-1}\mathrm{Mpc})=(0.48\pm 0.16)\times 10^{15}h^{-1}\mathrm{M}_{\mathrm{\odot }}$$ (29) in excellent agreement with the number count analysis of T98 and the shear results of Tyson & Fischer (1995) and Kaiser (1996). We can compare the efficiency of the method presented in this paper in terms of telescope time and signal-to-noise with the number count method used by T98. The $`5.5\sigma `$ result quoted by T98 does not include uncertainty due to their background count normalisation, number count slope or background source redshift distribution. Adding these errors to their result gives an estimated signal-to-noise of $`\sim 3\sigma `$, the same as this work. 
Regarding telescope time, we define a ‘total information content’ for each study as the product of telescope collecting area and total integration time. T98 observed 6000s in each of the V and I bands with the NTT 3.6m. Comparing with the CA 3.5m telescope and 12 hours integration time used in this study shows that we have amassed a total information content of approximately 3 times that required for the T98 result. Despite the extra time penalty induced by our method, we note that deeper observations, especially in the narrow band filters used, would increase the signal-to-noise of our result significantly since we are dominated by shot noise. Our signal-to-noise analysis in Section 7 showed that a $`9\sigma `$ detection of mass would be possible using an 8 metre class telescope, equivalent to an increase in integration time by a factor of 5. This is in contrast to T98, whose main source of uncertainty comes from the unknown source redshift distribution. Shot noise makes a negligible contribution to their error, to the extent that increasing their total integration time by a factor of three to match the total information content of this work would still result in a signal-to-noise of $`3\sigma `$ (Dye 1999). This paper has been primarily devoted to establishing the viability of lens mass reconstruction using the luminosity function method. We have shown that the two main advantages over the number count method employed by T98 are that the use of photometric redshifts has enabled the mass/background degeneracy to be broken, and that the technique is independent of source clustering if it is assumed that the sources form an effective random sampling of luminosity space. ACKNOWLEDGEMENTS SD thanks PPARC for a studentship and the IfA Edinburgh for funding, ANT thanks PPARC for a research fellowship and EMT thanks the Deutsche Forschungsgemeinschaft for the research fellowship supporting his stay in Edinburgh. 
We also thank Tom Broadhurst for use of his I-band data and Narciso Benitez, who performed the original data reduction.

REFERENCES

Athreya R., Mellier Y., van Waerbeke L., Fort B., Pelló R. & Dantel-Fort M., 1999, submitted to A&A, astro-ph/9909518
Bertin E. & Arnouts S., 1996, A&AS, 117, 393
Broadhurst T.J., Taylor A.N. & Peacock J.A., 1995, ApJ, 438, 49
Chiba M. & Yoshii Y., 1999, ApJ, 510, 42
Davis M. & Geller M.J., 1976, ApJ, 208, 13
Dressler A., 1980, ApJ, 236, 351
Dye S., 1999, PhD Thesis, University of Edinburgh, UK
Ellis R.S., Colless M., Broadhurst T., Heyl J., Glazebrook K., 1996, MNRAS, 280, 235
Falco E.E., Gorenstein M.V. & Shapiro I.I., 1985, ApJ, 437, 56
Fort B., Mellier Y. & Dantel-Fort M., 1997, A&A, 321, 353
Francis P.J., Hewett P.C., Foltz C.B., Chaffee F.H., Weymann R.J., Morris S.L., 1991, ApJ, 373, 465
Gray M.E., Ellis R.E., Refregier A., Bézecourt J., McMahon R.G., Beckett M.G., Mackay C.D. & Hoenig M.D., 2000, submitted to MNRAS, astro-ph/0004161
Gunn J.E. & Stryker L.L., 1983, ApJS, 52, 121
Kaiser N., 1996, in O. Lahav, E. Terlevich, R.J. Terlevich, eds, Proc. Herstmonceux Conf. 36, Gravitational Dynamics, Cambridge University Press, p.181
Kinney A.L., Calzetti D., Bohlin R.C., McQuade K., Storchi-Bergmann T., Schmitt H.R., 1996, ApJ, 467, 38
Lilly S.J., Cowie L.L. & Gardner J.P., 1991, ApJ, 369, 79
Lilly S.J., Tresse L., Hammer F., Crampton D., Le Févre O., 1995, ApJ, 455, 108
Mayen C. & Soucail G., 2000, submitted to A&A, astro-ph/0003332
Meisenheimer K., Röser H.J., 1996, MPIAPHOT User Manual, MPIA Heidelberg
Meisenheimer K. et al., 1998, in S. D’Oderico, A. Fontana, E. Giallongo, eds, ASP Conf. Ser. Vol. 146, The Young Universe: Galaxy Formation and Evolution at Intermediate and High Redshift, p.134
Oke J.B., 1990, AJ, 99, 1621
Parzen E., 1963, Proc. Symp. on Time Series Analysis, p.155, ed. I.M. Rosenblatt
Pickles A.J. & van der Kruit P.C., 1991, A&AS, 91, 1
Postman M. & Geller M.J., 1984, ApJ, 281, 95
Rögnvaldsson Ö.E., Greve T.R., Hjorth J., Gudmundsson E.H., Sigmundsson V.S., Jakobsson P., Jaunsen A.O., Christensen L.L., van Kampen E. & Taylor A.N., 2000, submitted to MNRAS, astro-ph/0009045
Sandage A., Tammann G.A. & Yahil A., 1979, ApJ, 232, 352
Schmidt M., 1968, ApJ, 151, 393
Taylor A.N., Dye S., Broadhurst T.J., Benitez N., van Kampen E., 1998, ApJ, 501, 539
Teague P.F., Carter D. & Gray P.M., 1990, ApJS, 72, 715
Thommes E.M. et al., 1999, ‘CADIS Revised Proposal’, MPIA Publication
Tyson J.A. & Fischer P., 1995, ApJ, 446, L55
van Kampen E., 1998, MNRAS, 301, 389
Wolf C., Meisenheimer K., Röser H.J., Beckwith S., Fockenbrock H., Hippelein H., Phleps S., Thommes E., 1999, A&A, 343, 399
# Eccentric discs in binaries with intermediate mass ratios: Superhumps in the VY Sculptoris stars ## 1 Superhumps in Nova-like Variables The superhump phenomenon was first observed in superoutbursts of dwarf novae of short orbital period ($`P_{\mathrm{orb}}\stackrel{<}{\sim }2.8\mathrm{h}`$: see Warner 1995b). It can be explained as an effect of an eccentric disc that precesses with respect to the tidal field of the secondary star (Whitehurst 1988). In low mass ratio binaries ($`q\stackrel{<}{\sim }1/4`$), disc eccentricity can be excited via tidal resonance (Whitehurst 1988, Hirose & Osaki 1990, Lubow 1991). In systems with larger $`q`$, however, the disc is truncated at too small a radius for the resonance to be effective. Observationally there is evidence that discs can be tidally resonant for $`q`$ significantly greater than $`1/4`$. Superhumps appear in high mass transfer discs (those of nova-like variables and of nova remnants) in some systems with orbital periods up to 3.5 h (Skillman et al. 1998). At this period the mass of the secondary star is $`0.31\mathrm{M}_{\mathrm{\odot }}`$ (equation 2.100 of Warner 1995b). Unfortunately there are no reliable determinations of $`q`$ in the superhumping systems with $`P_{\mathrm{orb}}>3`$ h. The mean mass of the primaries of the nova-likes is $`0.74\mathrm{M}_{\mathrm{\odot }}`$ (Warner 1995b) and the range is likely to be at least $`0.5`$–$`1.0\mathrm{M}_{\mathrm{\odot }}`$. If the primary mass is at the upper end of this range, it is therefore possible that a system with $`P_{\mathrm{orb}}\sim 3.5`$ h would have a mass ratio approaching that needed for the eccentric resonance to lie within the accretion disc. There can be little doubt that a mechanism is required to extend the range of mass ratios for which tidal resonance occurs, beyond the $`q\stackrel{<}{\sim }1/4`$ established in equilibrium disc calculations. 
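The location of the 3:1 resonance follows from Kepler's third law: a test particle orbiting the primary with three times the binary's orbital frequency sits at r/d = 3^(-2/3) (1+q)^(-1/3), where d is the binary separation. A short sketch (the mass ratio values used below are purely illustrative):

```python
def resonance_radius(q, j=3):
    # Radius, in units of the binary separation d, at which a particle
    # orbiting the primary has j times the binary orbital frequency:
    # Omega^2 = G*M1/r^3 with M1 = M/(1+q), while Omega_orb^2 = G*M/d^3,
    # giving r/d = j**(-2/3) * (1+q)**(-1/3).
    return j ** (-2.0 / 3.0) * (1.0 + q) ** (-1.0 / 3.0)
```

For q = 1/4 this gives r close to 0.45d, which is why the resonance is only marginally accessible near that mass ratio: tidal truncation keeps equilibrium discs comparable in size to, or smaller than, this radius.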
One possibility is suggested by the VY Scl stars, systems with orbital periods in the range $`3.0<P_{\mathrm{orb}}<4.0`$ h (Warner 1995b) in which mass transfer is liable to reduce significantly at irregular intervals. Such a drop in $`\dot{M}_\mathrm{s}`$ would allow a tidally stable, equilibrium disc that just marginally fails to reach the eccentric resonance to expand radially and become eccentrically unstable. In this paper we demonstrate that the discs in high $`\dot{M}`$ systems can be excited into an eccentric state when the mass transfer from the secondary drops below its equilibrium value. In the next section we discuss the observational peculiarities of the VY Scl stars. In section three we outline our model for superhumps in these systems. We present numerical results that support our case in section four, and make our conclusions in section five. ## 2 Mass Transfer in Nova-like Variables The long-term light curves of nova-like variables show brightness variations on a range of time-scales. Quasi-periodic modulations with ranges $`\sim 0.7`$ mag and time-scales of 1–2 months may be intrinsic to the disc and in effect suppress dwarf nova outbursts in which the thermal instability of the disc is modified by irradiation from the white dwarf primary (Warner 1995a). Larger brightness variations on time-scales from days to years are also seen, in which the star shows a “bright state” before falling by 1 to 5 magnitudes in brightness. This behaviour is named after the type star, VY Sculptoris, and is generally believed to be the result of significant and relatively rapid drops in mass transfer from the secondary. The known VY Scl stars (Table 4.1 in Warner 1995b) have orbital periods in the range $`3.2\le P_{\mathrm{orb}}\le 4.0`$ h. Two apparently discrepant systems, V794 Aql and V745 Cyg, have since had their orbits redetermined. 
Thorstensen (private communication) found the orbital period of V745 Cyg to be $`4.05`$ h, and Honeycutt & Robertson (1998) found the period of V794 Aql to be $`3.68`$ h. Furthermore, every nova-like variable in the 3.0 to 4.0 h $`P_{\mathrm{orb}}`$ range that has a well observed long-term light curve has been found to be of VY Scl type (Warner 1995b). Unstable mass transfer in these systems may be due to irradiation of the secondary by the primary (Wu, Wickramasinghe & Warner, 1995). As a consequence, in the range $`3.0\stackrel{<}{\sim }P_{\mathrm{orb}}\stackrel{<}{\sim }4.0`$ h, high $`\dot{M}`$, stable accretion discs are subject to occasional reductions of received mass at their outer edge. In the following sections we numerically investigate the disc response. ## 3 Tidal instability in systems with intermediate mass ratios Eccentricity can be excited in a close binary accretion disc if it overlaps or very nearly overlaps the $`3:1`$ eccentric inner Lindblad resonance (Lubow 1991). With a characteristic radius almost half the interstellar separation, this resonance is only accessible to discs in low mass ratio systems. Tidal forces prevent discs in high mass ratio systems from reaching the resonance. Paczyński (1977) approximated the streamlines in a pressureless, inviscid disc with ballistic particle trajectories. He then suggested that such a disc could not be larger than the largest ballistic orbit that does not intersect orbits interior to it. Using Paczyński’s estimate, Hirose & Osaki (1990) and Lubow (1991) found that the $`3:1`$ resonance could only be reached if $`q\stackrel{<}{\sim }1/4`$. But the tides that truncate a close binary disc also remove angular momentum from it. Papaloizou & Pringle (1977) calculated the tidal removal of angular momentum from a disc comprised of gas in linearly perturbed circular orbits. 
They estimated that the disc would need to extend out to approximately $`0.85`$–$`0.9`$ times the mean Roche lobe radius for tidal torques to be effective (slightly beyond the Paczyński radius). However, the assumption of linearity made by Papaloizou & Pringle led to an underestimate of the dissipation in the neighbourhood of Paczyński’s orbit crossing radius and thus to a larger estimate for the tidal radius. Thus when Whitehurst & King (1991) used the Papaloizou & Pringle value for maximum disc size, they estimated that the $`3:1`$ resonance could be reached in systems with $`q`$ as high as $`1/3`$. For near equilibrium discs subjected to a fixed rate of mass addition from the secondary, the numerical results are more in keeping with the lower mass ratio limit of $`1/4`$. Hirose & Osaki (1990) obtained eccentric discs in simulations with $`0.10\le q\le 0.20`$, and Murray (1998) for mass ratios $`\le 0.25`$. Whitehurst (1994) found eccentricity in simulations at mass ratios as high as $`1/3`$. However, his discs were initialised with a rapid burst of mass transfer and so were not close to an equilibrium when they encountered the $`3:1`$ resonance. As discussed in the previous section, the observations are not consistent with a mass ratio limit as low as $`1/4`$. We propose then that higher mass ratio systems can become resonant during periods when the disc is not in equilibrium with the mass transfer stream. Gas passing through the inner Lagrangian point has a specific angular momentum with respect to the white dwarf equivalent to that of a circular Keplerian orbit of radius $`r_{\mathrm{circ}}`$. This circularisation radius is of the order of a tenth to one fifth the interstellar separation and is very much inside the disc’s tidal truncation limit (Lubow & Shu, 1975). Hence, mixing with the mass transfer stream reduces specific angular momentum in the outer disc and restricts the disc radially. 
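The circularisation radius invoked here can be estimated with a standard textbook argument (e.g. Frank, King & Raine, not a result of this paper): equate the stream's specific angular momentum, roughly Omega_orb * b1^2 with b1 the distance of the inner Lagrangian point from the primary, to that of a circular Keplerian orbit. A sketch using Plavec's approximation for b1 (both formulae are approximations):

```python
import math

def l1_distance(q):
    # Approximate distance of the inner Lagrangian point from the
    # primary, in units of the separation d (Plavec's fit, good to a
    # few per cent for mass ratios of order unity, with q = M2/M1).
    return 0.500 - 0.227 * math.log10(q)

def r_circ(q):
    # Circularisation radius in units of d: setting Omega_orb * b1**2
    # equal to sqrt(G * M1 * r) gives r_circ/d = (1 + q) * (b1/d)**4.
    return (1.0 + q) * l1_distance(q) ** 4
```

For the q = 0.29 case simulated below this gives r_circ of about 0.19d, consistent with the 'tenth to one fifth' of the separation quoted above.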
Of course, the principal confinement of the disc is provided by tidal torques; however, the additional confinement due to the stream can be significant. If, as is suspected to happen in the VY Scl stars, $`\dot{M}_\mathrm{s}`$ is suddenly reduced, then the radial restriction on the disc will be relaxed, and the disc will expand. The expansion can be only temporary, as the disc will subsequently adjust to the new $`\dot{M}_\mathrm{s}`$ on its viscous time-scale, and return to its initial size. However, we will show in the next section that the period of readjustment can be sufficient to allow the disc to encounter the $`3:1`$ resonance and become eccentric. ## 4 Results ### 4.1 Numerical Method In this section we present numerical results obtained using a two dimensional, fixed spatial resolution, smoothed particle hydrodynamics (SPH) code. Previous calculations and detailed descriptions of the code are to be found in Murray (1996, 1998). Although the code has been improved to allow for variable spatial resolution (see e.g. Murray, de Kool & Li, 1998) and for the three dimensional modelling of accretion discs (Murray & Armitage, 1998), we found that by restricting ourselves to two dimensions and fixed resolution, we could more thoroughly explore parameter space with a reasonable expenditure of computer time. Our ability to capture the essential features of the eccentric instability is not compromised, and previous results have been consistent with those of three dimensional calculations (see e.g. Simpson & Wood, 1998). The simulations described here demonstrate that, in systems with intermediate mass ratio, a sudden reduction in mass transfer from the secondary star can allow an equilibrium disc to expand so as to come into contact with the eccentric resonance. Two sequences of simulations were completed. 
In the first series we subjected near-equilibrium discs to sudden reductions in $`\dot{M}_\mathrm{s}`$, the aim being to demonstrate the viability of the proposed mechanism. We then completed a second set of simulations in which we replaced the mass transfer stream with steady mass addition at the circularisation radius, $`r_{\mathrm{circ}}`$. By removing the additional constraining effect of the stream, and allowing the disc to reach a fully extended steady state, we made it as easy as possible for the disc to interact with the eccentric resonance. We ran such simulations for several values of $`q`$ and thus determined the range of mass ratios for which a reduction in mass transfer might result in superhumps. As in Murray (1996) and Murray (1998), we use the interstellar separation $`d`$, the binary mass $`M`$, and the inverse of the orbital angular velocity $`\mathrm{\Omega }_{\mathrm{orb}}^{-1}`$, as the length, mass and time scalings for our simulations. ### 4.2 Variable $`\dot{M}_\mathrm{s}`$ Calculations In Murray (1998) we described a calculation for mass ratio $`q=0.29`$ that failed to excite the eccentric resonance (simulation 1 of that paper). In that calculation mass was introduced at a constant rate of one particle per $`\mathrm{\Delta }t=0.01`$$`\mathrm{\Omega }_{\mathrm{orb}}^{-1}`$ at the inner Lagrangian point. An isothermal equation of state was used with $`c=0.02`$$`d\mathrm{\Omega }_{\mathrm{orb}}`$. The spatial resolution was fixed with the smoothing length $`h=0.01`$$`d`$. The high rate of viscous dissipation that occurs in CV discs in outburst was modelled using an artificial viscosity term (Murray 1996, 1998). The artificial viscosity parameter $`\zeta =10`$. This gave an effective Shakura-Sunyaev parameter $`\alpha \sim 1.9`$ at the resonance radius. This was somewhat on the high side but made the calculations more manageable. 
At the end of the simulation ($`t=1652.00`$$`\mathrm{\Omega }_{\mathrm{orb}}^{-1}`$) the non-resonant disc had an equilibrium mass of 19950 particles. For this paper we used the final state of the above calculation as the initial state for a new simulation with $`\dot{M}_\mathrm{s}=0`$ (no mass addition). All other system parameters were left unchanged. The disc evolution was then followed to $`t=2000.00`$$`\mathrm{\Omega }_{\mathrm{orb}}^{-1}`$, by which time there were only 5031 particles remaining in the disc. By plotting the energy viscously dissipated in the disc as a function of time we obtained a “light curve” for the simulation (figure 1). The stream-disc impact is responsible for a significant fraction of the energy dissipation in the outer disc. Hence when the accretion stream was shut off (at $`t=1652.00`$$`\mathrm{\Omega }_{\mathrm{orb}}^{-1}`$) there was a sudden drop in disc luminosity. Almost immediately after mass transfer ceased, the accretion disc became resonant and eccentric. The subsequent motion of the eccentric mode relative to the binary frame introduced a cyclic contribution to the energy dissipation in the outer disc (this is discussed in more depth in Murray, 1998) which we identify as a superhump signal. Figure 1 shows the superhumps reaching maximum amplitude somewhere between $`t=1800`$ and $`1900`$$`\mathrm{\Omega }_{\mathrm{orb}}^{-1}`$. The subsequent decay of both the superhump amplitude and of the background luminosity was simply due to the decline in disc mass. As the simulation ran, we Fourier analysed the disc’s density distribution using the technique described in Lubow (1991). Each Fourier mode’s variation in azimuth and time is given by $`l\theta -m\mathrm{\Omega }_{\mathrm{orb}}t`$ where $`\theta `$ is azimuth measured in the inertial frame, $`\mathrm{\Omega }_{\mathrm{orb}}`$ is the angular velocity of the binary, and $`l`$ and $`m`$ are integers.
Hence the disc’s eccentricity is measured by the $`l=1`$, $`m=0`$ mode, and the largest amplitude non-resonant tidal response has $`l=m=2`$. In some sense this latter mode (which appears in density maps as a two armed spiral pattern fixed in the binary frame) indicates the radial extent of the disc. The larger the tidal response, the more disc material there is at large radii. The strengths of the eccentric and spiral modes are plotted in figure 2. The tidal mode strength jumped sharply once mass addition ceased, strongly indicating a redistribution of matter radially outward through the disc. We confirmed this interpretation by checking the actual particle distributions. Immediately prior to the demise of the mass transfer stream, $`18.2\%`$ of the disc mass was at radii $`r>0.4`$$`d`$. By $`t=1700.00`$$`\mathrm{\Omega }_{\mathrm{orb}}^{-1}`$, this figure had increased to $`25.6\%`$. There was also an increase in absolute terms from 3636 to 4046 particles. The eccentric mode also became immediately stronger upon cessation of mass transfer, and then continued to grow as the resonance took effect. The maximum in the eccentric mode strength coincided with the largest amplitude superhumps. We have produced a sequence of grey scale density maps of the disc (figure 3), from which it is clear that the mass transfer stream did indeed restrict the disc’s radial extent. In its original equilibrium state (top left panel), the disc immediately downstream of the stream impact region lay entirely within the Roche lobe of the primary. Compare that with the disc shortly after mass transfer had ceased (top right), which extended beyond the Roche lobe. Having determined the mechanism to be viable, we then investigated whether disc eccentricity could be excited with a less drastic reduction in $`\dot{M}_\mathrm{s}`$.
We completed a second calculation with identical initial conditions, but with the mass transfer rate reduced to half its initial value instead of being extinguished completely. Figures 4 and 5 follow the evolution of the disc to $`t=3000`$$`\mathrm{\Omega }_{\mathrm{orb}}^{-1}`$, by which time the disc mass had stabilised at approximately 10500 particles. As in the first simulation, the disc immediately came into contact with the resonance and became more eccentric. This time, however, the superhump amplitude was much reduced. The eccentricity reached a maximum at $`t\simeq 2300`$$`\mathrm{\Omega }_{\mathrm{orb}}^{-1}`$ and then declined. Note that despite the reduction in superhump amplitude, the amplitude of the $`(1,0)`$ mode was larger in the second calculation (compare figures 2 and 5). Whereas the eccentric mode strength is a measure of the eccentricity as a whole, it does not directly measure the eccentricity of individual particle orbits. The superhump signal, however, is generated by those particles on the most eccentric orbits. In the second simulation, the continued mass transfer restricted the maximum eccentricity which could be acquired by gas in the outer disc, and hence limited the superhump amplitude. We had expected a priori that the disc would return to an axisymmetric state in equilibrium with the reduced mass flux from the secondary. However, we followed this calculation to $`t=5700`$$`\mathrm{\Omega }_{\mathrm{orb}}^{-1}`$ and were surprised to find that the eccentricity did not decline to zero as expected, but instead stabilised with $`0.05<S_{(1,0)}<0.07`$. The plot of the tidal $`(2,2)`$ mode (figure 5) and density maps (not shown) indicate that the disc returned to its original radial extent by time $`t\simeq 2100`$$`\mathrm{\Omega }_{\mathrm{orb}}^{-1}`$. There is thus the intriguing possibility that this eccentric state, once reached, was quasi-stable.
To complete this section we performed a third simulation, in which we took the previous simulation at time $`t=2400.00`$$`\mathrm{\Omega }_{\mathrm{orb}}^{-1}`$ and restored the mass transfer rate to its original value. Figures 6 and 7 show that the disc was initially forced rapidly inwards and circularised by the increased mass flux from the secondary. Further adjustment then occurred on the viscous time-scale as the disc re-expanded to its equilibrium radius and mass flux, and the eccentricity decayed away. By the end of the calculation ($`t=2400.00`$$`\mathrm{\Omega }_{\mathrm{orb}}^{-1}`$) the disc had returned to its original axisymmetric state. ### 4.3 Stable $`\dot{M}_\mathrm{s}`$ Calculations In this section we describe four simulations of discs built up to steady state from zero initial mass via steady mass addition at the circularisation radius. Although one would be hard-pressed to find such a disc in nature, it is perhaps the most favourable disc configuration for the excitation of the eccentric resonance. Whereas in the simulations of the previous section the interaction with the resonance was limited to the viscous time-scale, there is no such time limitation in these calculations. Thus if, for a given mass ratio, eccentricity cannot be excited with this artificially favourable setup, then it is hard to see how it could be excited with more realistic mass input. Hence we can obtain a limiting mass ratio for which mass transfer reductions can inspire superhumps. As in the previous section we took an isothermal equation of state with $`c=0.02`$$`d\mathrm{\Omega }_{\mathrm{orb}}`$, an SPH smoothing length $`h=0.01`$$`d`$, and set the SPH artificial viscosity parameter $`\zeta =10`$. Each calculation was terminated at $`t=1000`$$`\mathrm{\Omega }_{\mathrm{orb}}^{-1}`$. Simulations were completed for $`q=0.29,0.30,0.31`$ and $`1/3`$.
In each case, mass was added at a fixed rate of one particle per $`\mathrm{\Delta }t=0.01`$$`\mathrm{\Omega }_{\mathrm{orb}}^{-1}`$, at a radius $`r=0.1781`$$`d`$ ($`r_{\mathrm{circ}}`$ for a system with $`q=0.18`$). By doing this we have kept the specific angular momentum of newly added material constant across the simulations. However the angular momentum is somewhat greater (14% in the case of $`q=1/3`$) than if we had used the correct circularisation radius for each mass ratio. The strength of the eccentric mode at the conclusion of each simulation, and the disc precession periods are listed in table 1. The higher the mass ratio, the more weakly the eccentric resonance was excited. For the $`q=1/3`$ simulation some volatility could still be discerned in the eccentric mode strength but no superhumps were apparent either in the light curve or in its power spectrum. We included in table 1 the second of the simulations completed for section 4.2 (for which $`\dot{M}_\mathrm{s}`$ was reduced by a factor of 2). Given its length (four times that of the calculations completed for this section), this calculation provided a very reliable measurement of $`P_\mathrm{d}`$ and thus a useful check on the other results tabulated here. A mass ratio of approximately $`1/3`$ would appear to be the upper limit beyond which the eccentric resonance cannot be excited even temporarily. Our results correspond well with those of Whitehurst (1994) who found superhumps in a simulation with $`q=1/3`$ but not in a simulation with $`q=0.34`$. He constructed his discs with a massive initial mass transfer burst, followed by a much reduced but constant $`\dot{M}_\mathrm{s}`$. The mass distribution in Whitehurst’s discs would therefore have been more akin to a disc built up via mass addition at $`r_{\mathrm{circ}}`$ than to a disc subject to steady mass addition from $`L_1`$.
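The mode strengths quoted in this section were obtained with the Fourier decomposition of the density distribution described in section 4.2. As a minimal sketch of how such an amplitude can be estimated directly from particle azimuths (our own illustration: the function name and normalisation are ours, and the actual technique of Lubow (1991) may differ in detail, e.g. by weighting particles by mass or binning in radius):

```python
import numpy as np

def mode_strength(theta, l, m, t, omega_orb=1.0):
    """Crude estimate of the (l, m) mode amplitude from particle
    azimuths theta (inertial frame) at time t, using the phase
    l*theta - m*Omega_orb*t of the decomposition quoted above."""
    phase = l * np.asarray(theta) - m * omega_orb * t
    return np.abs(np.exp(1j * phase).sum()) / len(theta)
```

An axisymmetric distribution of azimuths returns a value near zero for the $`(1,0)`$ eccentric mode, while a fully lopsided distribution returns a value near one.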
## 5 Conclusions Under conditions of steady mass transfer, a mass ratio $`\lesssim 1/4`$ is required for a close binary accretion disc to encounter the $`3:1`$ eccentric inner Lindblad resonance. However, it is possible for eccentricity to be excited in the disc of a high mass transfer system with $`q\lesssim 1/3`$ if $`\dot{M}_\mathrm{s}`$ is reduced, as is thought to occur in the VY Sculptoris systems. Our simulations suggest that the disc will remain eccentric as long as $`\dot{M}_\mathrm{s}`$ remains at the lower value. Thus, a precessing, eccentric disc remains the best explanation of superhumps, even for systems with $`3.0<P_{\mathrm{orb}}<4.0`$ h.
## 1 Introduction One of the most interesting and still unresolved problems is the structure of the (nonperturbative) vacuum in QCD and the mechanism of confinement. At present we have a number of different competing scenarios of confinement. One of the most promising is the quasiclassical approach promoted by Polyakov , which assumes that in the treatment of infrared problems certain classical field configurations are of paramount importance. These classical field configurations (”pseudoparticles”) are supposed to be stable, i.e., they correspond to local minima of the action, and the interaction of these pseudoparticles creates a correlation length which corresponds to a new scale – the confinement scale. This approach gives a clear field–theoretical prescription for calculating nonperturbative observables analytically in the weak coupling region. In principle, this approach can be extended to the case of ”quasistable” solutions. Another very attractive approach is the topological (or monopole) mechanism of confinement . This mechanism suggests that the QCD vacuum state behaves like a magnetic (dual) superconductor, with abelian magnetic monopoles playing the role of Cooper pairs, at least for the specially chosen (”maximally abelian”) gauge . At present this approach remains the most popular one in numerical studies of lattice QCD. It is rather tempting to try to interpret lattice (abelian) monopoles as pseudoparticles (stable or quasistable). Recently, classical solutions have been found which correspond to Dirac sheet (i.e. flux tube) configurations . It is the aim of this note to study the monopolelike ($`M\overline{M}`$) abelian solutions of the classical equations of motion in $`SU(2)`$ and $`U(1)`$ lattice gauge theories in $`d=3`$ and $`d=4`$ dimensions. In what follows, periodic boundary conditions are presumed.
Lattice derivatives are: $`\partial _\mu f_x=f_{x+\mu }-f_x`$ ; $`\overline{\partial }_\mu f_x=f_x-f_{x-\mu }`$ and $`\overline{\partial }_\mu (U)f_x=f_x-U_{x-\mu ;\mu }^{\dagger }f_{x-\mu }U_{x-\mu ;\mu }`$. $`\mathrm{\Delta }=-\underset{\mu }{\sum }\partial _\mu \overline{\partial }_\mu `$ and the lattice spacing is chosen to be unity. ## 2 Abelian classical solutions ### 2.1 Iterative procedure Classical equations of motion are $$\underset{\nu }{\sum }\text{Im}\text{Tr}\left\{\sigma ^a\overline{\partial }_\nu (U)U_{x\mu \nu }\right\}=0,$$ (2.1) where $`U_{x\mu \nu }\in SU(2)`$. For abelian solutions $`U_{x\mu \nu }=\mathrm{exp}\left(i\sigma _3\theta _{x\mu \nu }\right)`$ eq. (2.1) becomes $$\underset{\nu }{\sum }\overline{\partial }_\nu \mathrm{sin}\theta _{x\mu \nu }=0.$$ (2.2) Let us represent the plaquette angle $`\theta _{x\mu \nu }`$ in the form $$\theta _{x\mu \nu }=\stackrel{~}{\theta }_{x\mu \nu }+2\pi n_{x\mu \nu };-\pi <\stackrel{~}{\theta }_{x\mu \nu }\le \pi ,$$ (2.3) and $`n_{x\mu \nu }=-n_{x\nu \mu }`$ are integer numbers. The classical equations of motion (2.2) can be represented in the form $$\underset{\nu }{\sum }\overline{\partial }_\nu \theta _{x\mu \nu }=F_{x\mu }(\theta ),$$ (2.4) where $$F_{x\mu }(\theta )\equiv \underset{\nu }{\sum }\overline{\partial }_\nu \left(\theta _{x\mu \nu }-\mathrm{sin}\theta _{x\mu \nu }\right)$$ and $`\underset{\mu }{\sum }\overline{\partial }_\mu F_{x\mu }=0`$.
For any given configuration $`\left\{n_{x\mu \nu }\right\}`$ these equations can be solved iteratively $$\theta _{x\mu }^{(1)}\to \theta _{x\mu }^{(2)}\to \mathrm{}\to \theta _{x\mu }^{(k)}\to \mathrm{},$$ (2.5) where $$\underset{\nu }{\sum }\overline{\partial }_\nu \theta _{x\mu \nu }^{(k+1)}=F_{x\mu }(\theta ^{(k)});k=1;2;\mathrm{},$$ (2.6) and $$\underset{\nu }{\sum }\overline{\partial }_\nu \theta _{x\mu \nu }^{(1)}=2\pi \underset{\nu }{\sum }\overline{\partial }_\nu n_{x\mu \nu };$$ (2.7) In the Lorentz gauge $`\underset{\mu }{\sum }\overline{\partial }_\mu \theta _{x\mu }=0`$ eq.’s (2.6,2.7) are equivalent to $$\mathrm{\Delta }\theta _{x\mu }^{(k+1)}=J_{x\mu }^{(k)};k=0;1;\mathrm{},$$ (2.8) where $`J_{x\mu }^{(0)}=2\pi \underset{\nu }{\sum }\overline{\partial }_\nu n_{x\mu \nu }`$ and $`J_{x\mu }^{(k)}=F_{x\mu }(\theta ^{(k)})`$ at $`k\ge 1`$. Evidently, $$\underset{\mu }{\sum }\overline{\partial }_\mu J_{x\mu }^{(k)}=0;\underset{x}{\sum }J_{x\mu }^{(k)}=0.$$ (2.9) Defining the propagator $$G_{x;y}=\frac{1}{V}\underset{q\ne 0}{\sum }\frac{e^{iq(x-y)}}{𝒦^2};𝒦^2=\underset{\mu }{\sum }4\mathrm{sin}^2\frac{q_\mu }{2},$$ (2.10) one can easily find solutions of eq.’s (2.8) : $`\theta _{x\mu }^{(1)}`$ $`=`$ $`2\pi {\displaystyle \underset{y\nu }{\sum }}G_{x;y}\overline{\partial }_\nu n_{y\mu \nu };`$ (2.11) $`\theta _{x\mu }^{(k+1)}`$ $`=`$ $`{\displaystyle \underset{y}{\sum }}G_{x;y}J_{y\mu }^{(k)};{\displaystyle \underset{\mu }{\sum }}\overline{\partial }_\mu \theta _{x\mu }^{(k+1)}=0,`$ (2.12) and $`\underset{x}{\sum }\theta _{x\mu }^{(k+1)}=0`$. The results of the iterative solution can be summarized as follows. 1. The convergence of this iterative procedure is very fast and becomes even faster with increasing distance between the monopole and antimonopole. As an example, Figure 1a shows the dependence of the action on the number of iterations on the $`8^4`$ lattice, where $`\vec{R}_1`$ and $`\vec{R}_2`$ are the positions of the static monopole and antimonopole, respectively. In fact, the first approximation $`\theta _{x\mu }^{(1)}`$ as given in eq.(2.11) is a very good approximation to the exact solution. 2.
There are no solutions when the monopole and antimonopole are too close to each other. As an example, in Figure 1b one can see the dependence of the action on the number of iteration steps when $`\vec{R}_2-\vec{R}_1=(0,0,2)`$. 3. In four dimensions only static (i.e. threedimensional) solutions have been found. ### 2.2 Stability The question of stability of the classical solution $`U_{x\mu }^{cl}`$ is the question of the eigenvalues $`\lambda _j`$ of the matrix $`L_{xy;\mu \nu }^{ab}`$ where $$S(e^{i\delta \theta }U^{cl})=S_{cl}+\underset{abxy\mu \nu }{\sum }\delta \theta _{x\mu }^aL_{xy;\mu \nu }^{ab}\delta \theta _{y\nu }^b+\mathrm{},$$ (2.13) where $`S_{cl}=S(U^{cl})`$ and $`\delta \theta _{x\mu }^a`$ are infinitesimal variations of the gauge field. A solution $`U_{x\mu }^{cl}`$ is stable if all $`\lambda _j\ge 0`$. However, a solution can be unstable but ”quasistable” if, say, only one eigenvalue is negative : $`\lambda _1<0`$, $`\lambda _j\ge 0`$ for $`j\ne 1`$. A cooling history of such a configuration could have demonstrated an approximate plateau. If this were the case, one could, in principle, extend Polyakov’s approach to the case of quasistable solutions. It is rather easy to show that in the case of the $`U(1)`$ theory the $`M\overline{M}`$ solutions are stable, i.e. correspond to local minima of the action. Therefore, Polyakov’s approach based on the $`M\overline{M}`$ classical solutions is expected to describe confinement, with (anti)monopoles as pseudoparticles. The stability of $`M\overline{M}`$ classical solutions in $`SU(2)`$ theory has been studied numerically. To this purpose, every classical $`M\overline{M}`$ configuration has been (slightly) heated and then a (soft) cooling procedure has been used. In Figure 2 one can see a typical cooling history of such a configuration. The classical action $`S_{cl}`$ corresponding to the $`M\overline{M}`$ configuration is $`\simeq 130`$. Therefore, the $`M\overline{M}`$ classical solution looks completely unstable.
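The first approximation of Eq. (2.11) amounts to applying the propagator $`G_{x;y}`$ of Eq. (2.10) to a conserved, zero-mean source, which on a periodic lattice is most easily done with FFTs. The following sketch is our own illustration (the function name is ours, not from the paper); it uses the convention that the Fourier symbol of the lattice Laplacian is $`𝒦^2`$:

```python
import numpy as np

def apply_propagator(J):
    """Compute theta_x = sum_y G_{x;y} J_y on a periodic grid via FFT,
    with G the lattice propagator of Eq. (2.10) (zero mode excluded).
    For a zero-mean source J the result solves Delta theta = J, where
    the lattice Laplacian Delta has Fourier symbol K^2 = sum 4 sin^2(q/2)."""
    Jq = np.fft.fftn(J)
    K2 = np.zeros(J.shape)
    for axis, L in enumerate(J.shape):
        q = 2.0 * np.pi * np.fft.fftfreq(L)
        shape = [1] * J.ndim
        shape[axis] = L
        K2 = K2 + (4.0 * np.sin(q / 2.0) ** 2).reshape(shape)
    K2.flat[0] = 1.0           # dummy value; the q = 0 mode is removed below
    theta_q = Jq / K2
    theta_q.flat[0] = 0.0      # q = 0 excluded, as in Eq. (2.10)
    return np.real(np.fft.ifftn(theta_q))
```

For the monopole–antimonopole problem, $`J`$ would be the source $`J_{x\mu }^{(0)}`$ built from the integer plaquette numbers $`n_{x\mu \nu }`$.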
It is interesting to compare the stability of the monopole classical solutions with that of the Dirac sheet (flux tube) solutions. In Figure 3 one can see a typical cooling of the heated single Dirac sheet (SDS) in $`SU(2)`$ theory. Parameters of the cooling have been chosen the same for all configurations. In fact, it is also unstable. However, a pronounced plateau permits one to regard this configuration as quasistable. ## 3 Summary and discussions Classical solutions corresponding to monopole–antimonopole pairs in $`3d`$ and $`4d`$ $`SU(2)`$ and (compact) $`U(1)`$ lattice gauge theories have been found. In the case of the $`3d`$ and $`4d`$ $`U(1)`$ theories these monopole–antimonopole classical solutions ($`M\overline{M}`$–pseudoparticles) are stable, i.e. correspond to local minima of the action. Therefore, the quasiclassical approach has a chance of being successful. In contrast, in $`SU(2)`$ theory ($`d=3`$ and $`d=4`$) the $`M\overline{M}`$ classical solutions are completely unstable. At the moment it is not clear if Polyakov’s (quasiclassical) approach can be applied to nonabelian theories (at least, with monopoles as pseudoparticles). It is very probable that the vacuum in the (compact) $`U(1)`$ theory is a rather poor model of the vacuum in $`SU(2)`$ theory. It is interesting to note that the Dirac sheet (i.e. flux tube) solutions are quasistable in $`SU(2)`$ theories (for $`d=3`$ and $`d=4`$). This observation could be interesting in view of the famous spaghetti vacuum picture where the color magnetic quantum liquid state is a superposition of flux–tube states (Copenhagen vacuum) . However, the relevance of this scenario still needs further confirmation.
# Generalization of the Bloch-Messiah-Zumino theorem \[ ## Abstract It is shown how to construct a basis in which two arbitrary complex antisymmetric matrices $`C`$ and $`C^{\prime }`$ simultaneously acquire canonical forms. The present construction is not restricted by any conditions on properties of the $`C^+C^{\prime }`$ matrix. Canonical bases pertaining to the generator-coordinate-method treatment of many-fermion systems are discussed. \] The Bardeen-Cooper-Schrieffer (BCS) pairing theory , and its generalization by Bogolyubov , are based on fermion wave functions that have the form of fermion-pair condensates, i.e., $$|C\rangle =\mathrm{exp}\left(\frac{1}{2}\underset{mn}{\sum }C_{mn}^{\ast }a_m^+a_n^+\right)|0\rangle ,$$ (1) where $`a_m^+`$ are the fermion creation operators, $`|0\rangle `$ is the fermion vacuum, and $`C_{mn}`$ is an antisymmetric complex matrix. Up to a unitary transformation of the single-particle basis, $$\overline{a}_m^+=\underset{m^{\prime }}{\sum }U_{mm^{\prime }}^{\ast }a_{m^{\prime }}^+,$$ (2) state (1) is equal to the so-called BCS state $$|C\rangle =\underset{m>0}{\prod }\left(1+s_mc_m\overline{a}_{\stackrel{~}{m}}^+\overline{a}_m^+\right)|0\rangle ,$$ (3) better known in its normalized form, $$\frac{|C\rangle }{\langle C|C\rangle ^{1/2}}=\underset{m>0}{\prod }\left(u_m+s_mv_m\overline{a}_{\stackrel{~}{m}}^+\overline{a}_m^+\right)|0\rangle ,$$ (4) for $$u_m=\frac{1}{\sqrt{1+c_m^2}},v_m=\frac{c_m}{\sqrt{1+c_m^2}}.$$ (5) The Bloch-Messiah-Zumino theorem provides the link between the two forms of state $`|C\rangle `$, Eqs. (1) and (3), by stating that every complex antisymmetric matrix can be brought by a unitary transformation into its canonical form, i.e., $$(U^TCU)_{mn}=s_n^{\ast }c_n\delta _{m\stackrel{~}{n}},$$ (6) where index $`\stackrel{~}{m}`$ denotes the so-called canonical partner of the state $`m`$, the phase factors $`s_n^{\ast }`$=$`s_n^{-1}`$=$`-s_{\stackrel{~}{n}}`$ have opposite signs for the canonical partners, and the numbers $`c_n`$=$`c_{\stackrel{~}{n}}`$ are real and positive. Standard notation $`m`$$`>`$0, used in Eqs.
(3) and (4), means that only one state is taken from each canonical pair. The proof of the theorem goes through a diagonalization of the hermitian matrix $`C^+C`$, that yields the unitary transformation $`U`$ and real, non-negative, pairwise degenerate eigenvalues $`c_n^2`$. When using states (1) in applications beyond the mean-field approximation, and in particular in the generator coordinate method (GCM) , the matrix elements and overlaps depend on the product matrix $`C^+C^{\prime }`$. For example, the overlap of two states (1) reads $$\langle C^{\prime }|C\rangle ={\mathrm{det}}^{1/2}\left(1+C^+C^{\prime }\right),$$ (7) and the transition density matrix is given by $$\rho _{mn}=\frac{\langle C^{\prime }|a_n^+a_m|C\rangle }{\langle C^{\prime }|C\rangle }=\left[\left(1+C^+C^{\prime }\right)^{-1}C^+C^{\prime }\right]_{mn}.$$ (8) It has been realized a long time ago that the spectrum of the matrix $`C^+C^{\prime }`$ is also pairwise degenerate, which facilitates calculation of the phase of the overlap, otherwise ambiguous because of the square root appearing in Eq. (7). Moreover, under certain conditions it has been proved in Ref. that it is enough to give up the unitarity of the matrix $`U`$ to bring both matrices $`C`$ and $`C^{\prime }`$ simultaneously into canonical forms analogous to (6). The same fact has later been rediscovered in Ref. , although the necessary restrictions on matrices $`C`$ and $`C^{\prime }`$ have not been recognized there. In the present paper, I generalize the results of Ref. by deriving canonical forms of two arbitrary complex antisymmetric matrices $`C`$ and $`C^{\prime }`$ in a common canonical basis. These results are not restricted by any conditions on the matrices $`C`$ and $`C^{\prime }`$. Let us begin by recalling the notion of the Jordan form (see e.g. ) of an arbitrary complex matrix. Focusing our attention on the matrix $`C^+C^{\prime }`$, the vectors defining its Jordan basis can be arranged in columns of the matrix $`W`$, and one has $$\underset{n}{\sum }\left(C^+C^{\prime }\right)_{mn}W_{ni}=\underset{j}{\sum }W_{mj}D_{ji},$$ (9) where the matrix $`D`$ is block diagonal (composed of the Jordan blocks).
One can attribute the number of the block $`I_i`$, the length of the block $`L_i`$, and the number within the block $`k_i`$, to every index $`i`$ that numbers the Jordan basis vectors. In this notation, $`D`$ has the form: $$D_{ji}=\delta _{I_jI_i}D_{k_jk_i}^{I_i},$$ (10) where in every block the matrix $`D_{kk^{\prime }}^I`$ reads $$D_{kk^{\prime }}^I=D_I\delta _{kk^{\prime }}+\delta _{k,k^{\prime }-1},$$ (11) i.e., it has a common complex number $`D_I`$ on the main diagonal and ones just above the main diagonal. Basis vectors belonging to a given block $`I`$ form the so-called Jordan series of length $`L`$. The series starts with the basis vector called the series head, and ends with an eigenvector of $`C^+C^{\prime }`$. The whole series is uniquely determined by the series head, because the remaining basis vectors in the series can be obtained by a repeated action of $`C^+C^{\prime }-D_I`$ on the series head. The basis vectors in a given series are not unique, because a linear combination of these vectors may give another valid series head, and leads to the same Jordan canonical form.
Explicitly, this transformation reads $$W_{mk^{\prime }}^{\prime }=\underset{k=1}{\overset{L}{\sum }}W_{mk}\alpha _{kk^{\prime }}=\underset{k=1}{\overset{k^{\prime }}{\sum }}W_{mk}\alpha _{k^{\prime }-k+1},$$ (12) where the transformation matrix $`\alpha _{kk^{\prime }}`$ depends on $`L`$ arbitrary complex numbers $`\alpha _k`$ (only $`\alpha _1`$ must not vanish), and has the following explicit structure: $$\alpha _{kk^{\prime }}=\left(\begin{array}{ccccc}\alpha _1\hfill & \alpha _2\hfill & \alpha _3\mathrm{}\hfill & \alpha _{L-1}\hfill & \alpha _L\hfill \\ 0\hfill & \alpha _1\hfill & \alpha _2\mathrm{}\hfill & \alpha _{L-2}\hfill & \alpha _{L-1}\hfill \\ 0\hfill & 0\hfill & \alpha _1\mathrm{}\hfill & \alpha _{L-3}\hfill & \alpha _{L-2}\hfill \\ \multicolumn{5}{c}{}\\ 0\hfill & 0\hfill & 0\mathrm{}\hfill & \alpha _1\hfill & \alpha _2\hfill \\ 0\hfill & 0\hfill & 0\mathrm{}\hfill & 0\hfill & \alpha _1\hfill \end{array}\right).$$ (13) It is easy to check that matrices having this structure form a group. It is also easy to check that the vectors $`W_{mk}^{\prime }`$ form a Jordan series, similarly as the vectors $`W_{mk}`$ do, and that they can replace the vectors $`W_{mk}`$ in the Jordan basis, giving the same matrix $`D`$ in Eq. (9). According to the Jordan construction, the whole space in which the matrix $`C^+C^{\prime }`$ acts splits into subspaces spanned by the Jordan series. The number of eigenvectors of $`C^+C^{\prime }`$ equals the number of different series, or the number of Jordan blocks, and is in general smaller than the dimension of the matrix $`C^+C^{\prime }`$. Some matrices (hermitian or not) can be fully diagonalized, i.e., they have numbers of eigenvectors equal to their dimensions. This corresponds to the case when all the Jordan series have length 1. One calls two blocks degenerate, or two series degenerate, if they have the same diagonal number $`D_I`$ and the same length $`L`$.
The latter condition is very important, because only degenerate series defined in such a way can be mixed; this is an analogue of the possibility to mix degenerate eigenvectors of a matrix which can be fully diagonalized. If two series have different lengths then vectors of the longer series cannot be admixed to those of the shorter series, even if the series have the same diagonal number $`D_I`$. If the matrix can be fully diagonalized, then all the series have length 1, the number of eigenvectors equals the dimension of $`C^+C^{\prime }`$, and transformation (12) reduces to the possibility of arbitrarily normalizing every eigenvector. After these necessary preliminaries, let us proceed with presenting the main results of the present paper. Multiplying the eigen-equation (9) by $`W^{-1}`$ from the left-hand and right-hand sides, and then transposing, we obtain that $$C^{\prime }\left(C^+W^{-1T}\right)=W^{-1T}D^T,$$ (14) which multiplied by $`C^+`$ from the left-hand side gives $$\left(C^+C^{\prime }\right)\left(C^+W^{-1T}\right)=\left(C^+W^{-1T}\right)D^T.$$ (15) One can see now that the matrix $`C^+C^{\prime }`$ has another equivalent set of Jordan series, i.e., $$\underset{n}{\sum }\left(C^+C^{\prime }\right)_{mn}V_{ni}=\underset{j}{\sum }V_{mj}D_{ji},$$ (16) where the new matrix of basis vectors $`V`$ is given by $$V=C^+W^{-1T}J.$$ (17) In every Jordan block, the matrix $`J`$ has ones on the skew diagonal and zeros otherwise, i.e., $`J_{kk^{\prime }}`$=$`\delta _{k,L-k^{\prime }+1}`$. Hence, when an arbitrary matrix is multiplied by $`J`$ from the right-hand (left-hand) side, the order of its columns (rows) is flipped. In particular, one obtains that $`D^T`$=$`JDJ`$. We can now analyze cases of different degeneracies of the Jordan blocks. The arguments given below closely follow the proofs presented in Ref. , only with the degeneracies of eigenvalues replaced by the degeneracies of the Jordan blocks.
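The conclusion that these arguments will establish — the Jordan blocks, and hence the eigenvalues, of $`C^+C^{\prime }`$ appearing in degenerate pairs — can be previewed numerically. The following sketch is our own illustration and is not part of the original derivation:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_antisymmetric(n):
    """A random complex antisymmetric matrix, C^T = -C."""
    A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return A - A.T

n = 6
C = random_antisymmetric(n)
Cp = random_antisymmetric(n)            # plays the role of C'
w = np.linalg.eigvals(C.conj().T @ Cp)  # spectrum of C^+ C'
# For generic C and C' the matrix C^+ C' is diagonalizable and every
# eigenvalue appears (at least) twice, reflecting the pairing of blocks.
```

For matrices of odd dimension the spectrum in addition contains a zero eigenvalue, in line with the discussion below.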
Let us first suppose that $`C^+C^{\prime }`$ has a non-degenerate Jordan block. Then, in this block the basis vectors $`V`$ must be connected with the basis vectors $`W`$ by transformation (12), i.e., $$\left(C^+W^{-1T}J\right)_{mk^{\prime }}=\underset{k=1}{\overset{L}{\sum }}W_{mk}\alpha _{kk^{\prime }}.$$ (18) Multiplying Eq. (18) by $`W^{-1}`$ from the left-hand side, and by $`J`$ from the right-hand side, one obtains $$\left(W^{-1}C^+W^{-1T}\right)_{kk^{\prime }}=\left(\alpha J\right)_{kk^{\prime }}.$$ (19) However, the matrix on the left-hand side of this equation is antisymmetric, while that on the right-hand side is symmetric; therefore, the matrix $`\alpha _{kk^{\prime }}`$ must vanish. This requires that, in a non-degenerate Jordan block, all vectors $`V`$ vanish, which contradicts Eq. (14), unless the block has length $`L`$=1 and $`D_I`$=0. Therefore, the matrix $`C^+C^{\prime }`$ cannot have non-degenerate Jordan blocks apart from the subspace of $`L`$=1 eigenvectors with all eigenvalues equal to zero. One can set this subspace aside, and assume from now on that $`C^+C^{\prime }`$ is non-singular and has an even dimension. In this case, the matrix $`C^+C^{\prime }`$ cannot have any non-degenerate Jordan block, and hence the Jordan blocks must appear in degenerate pairs. (In odd dimensions, $`C^+C^{\prime }`$ must have at least one null eigenvalue, which can be separated, and the remaining matrix can be treated in the even dimension). In the present considerations, it is enough to consider only pairs of degenerate blocks; had higher degeneracies of the Jordan blocks occurred, one could have considered one pair after another, and at each step one could reduce the dimension of the problem. This is possible here, and has not been possible when considering degenerate eigenvalues in Ref. , because the whole space can be separated into the subspaces corresponding to the Jordan blocks, while it cannot be separated into subspaces corresponding to the eigenvalues.
Let us now consider a pair of degenerate Jordan blocks, each block having length $`L`$ and the common diagonal element $`D_I`$=$`D_{\stackrel{~}{I}}`$. One can adopt here the standard notation that originally pertains to the canonical pairs, namely, we denote the indices of the two degenerate blocks by $`I`$ and $`\stackrel{~}{I}`$. Similarly, indices inside these two blocks are denoted by $`k`$=1, 2,…,$`L`$ and $`\stackrel{~}{k}`$=1, 2,…,$`L`$, respectively. Note that vectors in these two blocks form series, i.e., they are arranged in a specific order; therefore a vector at a given position must be associated with the vector at the same position in the second block. Since for the matrix $`C^+C^{\prime }`$ two equivalent Jordan bases exist, $`W`$ and $`V`$, vectors in series $`V`$ must be linear combinations of those in series $`W`$. In the pair of degenerate Jordan blocks, this leads to the following relations between the two series: $`\left(C^+W^{-1T}J\right)_{mk^{\prime }}`$ $`=`$ $`{\displaystyle \underset{k=1}{\overset{L}{\sum }}}W_{mk}\alpha _{kk^{\prime }}+{\displaystyle \underset{\stackrel{~}{k}=1}{\overset{L}{\sum }}}W_{m\stackrel{~}{k}}\beta _{\stackrel{~}{k}k^{\prime }},`$ (21) $`\left(C^+W^{-1T}J\right)_{m\stackrel{~}{k}^{\prime }}`$ $`=`$ $`{\displaystyle \underset{k=1}{\overset{L}{\sum }}}W_{mk}\gamma _{k\stackrel{~}{k}^{\prime }}+{\displaystyle \underset{\stackrel{~}{k}=1}{\overset{L}{\sum }}}W_{m\stackrel{~}{k}}ϵ_{\stackrel{~}{k}\stackrel{~}{k}^{\prime }},`$ (22) All four matrices $`\alpha `$, $`\beta `$, $`\gamma `$, and $`ϵ`$ have the same structure (13). One may now proceed with multiplying Eqs. (21) and (22) from the left-hand side either by $`W_{km}^{-1}`$ or by $`W_{\stackrel{~}{k}m}^{-1}`$, and from the right-hand side by $`J`$. Since all the matrices $`\alpha J`$, $`\beta J`$, $`\gamma J`$, and $`ϵJ`$ are symmetric, one then obtains that $`\alpha `$=$`ϵ`$=0 and $`\gamma `$=$`-\beta `$.
Therefore, the canonical form of the $`C^+`$ matrix reads $$\left(W^{-1}C^+W^{-1T}\right)_{ji}=s_{I_j}C_{k_jk_i}^{I_j+}\delta _{I_j\stackrel{~}{I}_i},$$ (23) where the symmetric matrix $`C^I`$=$`C^{\stackrel{~}{I}}`$ occupies the off-diagonal part in every pair of the degenerate Jordan blocks, and $$C^I=s_I\beta ^{\ast }J,$$ (24) for $`\beta `$ having form (13). Following the standard notation, we have defined the phase factors $`s_I`$=$`-s_{\stackrel{~}{I}}`$ in such a way that the skew-diagonal matrix elements of $`C^I`$ (that are all equal one to another) are real and positive, i.e., $`C_{k,L-k+1}^I`$$`>`$0. Since the canonical basis of $`C^+`$ is the same as the Jordan basis of $`C^+C^{\prime }`$, the matrix $`C^{\prime }`$ must in the very same basis assume an analogous canonical form: $$\left(W^TC^{\prime }W\right)_{ji}=s_{I_i}^{\ast }C_{k_jk_i}^{\prime I_i}\delta _{I_j\stackrel{~}{I}_i},$$ (25) where the symmetric matrix $`C^{\prime I}`$=$`C^{\prime \stackrel{~}{I}}`$ reads $$C^{\prime I}=s_I^{\ast }J\beta ^{\prime }$$ (26) and $`\beta ^{\prime }`$ also has form (13). Finally, in order to satisfy Eq. (9), the matrices $`\beta `$ and $`\beta ^{\prime }`$ must obey the following condition: $$\beta \beta ^{\prime }=D.$$ (27) This leaves us still some freedom in the choice of the canonical basis, because any solution of Eq. (27) gives one valid canonical form. Two obvious choices are, for example, $`\beta `$=$`D`$ and $`\beta ^{\prime }`$=$`I`$ or $`\beta `$=$`\sqrt{D}`$ and $`\beta ^{\prime }`$=$`\sqrt{D}`$, where any one of the possible branches of the matrix square root can be taken. Equations (23) and (25) complete the proof of the canonical forms of two arbitrary complex antisymmetric matrices $`C`$ and $`C^{\prime }`$. Both these matrices can be simultaneously transformed by the matrix $`W`$ (in general non-unitary) into block-diagonal forms with non-zero elements only between pairs of degenerate Jordan blocks. Needless to say, whenever the matrix $`C^+C^{\prime }`$ can be fully diagonalized, which was the case in Ref.
, both matrices $`C`$ and $`C^{\prime }`$ acquire in the canonical basis the standard canonical forms analogous to Eq. (6), i.e., $$\left(W^{-1}C^+W^{-1T}\right)_{ji}=s_jc_j\delta _{j\stackrel{~}{ı}},$$ (28) and $$\left(W^TC^{\prime }W\right)_{ji}=s_i^{\prime }c_i^{\prime }\delta _{j\stackrel{~}{ı}},$$ (29) where $`c_ic_i^{\prime }`$=$`D_i`$. In Ref. it was noticed that an incorrect conjecture had been formulated in Ref. , namely, that the simple forms of Eqs. (28) and (29) can always be achieved. In the present study we have seen that these simple forms occur only when matrix $`C^+C^{\prime }`$ can be fully diagonalized. In fact, this is the case which occurs most often in applications. Therefore, let us now discuss conditions for the full diagonalization of $`C^+C^{\prime }`$. In the applications given in Ref. , the full diagonalization of matrix $`C^+C^{\prime }`$ was secured by using a model in which the matrices $`C`$ were time-even, $$C^+=U_TC^TU_T^T,$$ (30) and the Hermitian and time-even matrices $`\stackrel{~}{C}`$ defined by $$\stackrel{~}{C}=U_TC$$ (31) were positive definite. In these equations, $`U_T`$ is a unitary and antisymmetric matrix, $`U_T^+`$=$`U_T^{-1}`$=$`-U_T^{*}`$. The positive definiteness of the matrices $`\stackrel{~}{C}`$ was there guaranteed by a special form of the matrices $`C`$. In that study, the GCM states were constructed within the SCEM model , and therefore the matrices $`C`$ had the form shown in Eq. (2.13) of . Consequently, the corresponding $`\stackrel{~}{C}`$ matrices were all equal to exponentials of Hermitian matrices, and hence were trivially positive definite. In the general presentation of the present paper, conditions (30) and (31) can be formulated as follows: If there exists a unitary antisymmetric matrix $`U_T`$ such that Eq. (30) holds for $`C`$ and $`C^{\prime }`$, and at the same time at least one of $`\stackrel{~}{C}`$ or $`\stackrel{~}{C}^{\prime }`$ is positive definite, then matrix $`C^+C^{\prime }`$ can be fully diagonalized, and the simple canonical forms (28) and (29) exist. 
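Before turning to the proof, note that the core of this sufficient condition is easy to verify numerically. The sketch below (an illustration, not part of the original derivation) uses random positive-definite Hermitian matrices to stand in for the time-even matrices $`\stackrel{~}{C}`$ and $`\stackrel{~}{C}^{\prime }`$, and checks that their product is diagonalizable with real, positive eigenvalues, because it is similar to the Hermitian matrix $`\stackrel{~}{C}^{1/2}\stackrel{~}{C}^{\prime }\stackrel{~}{C}^{1/2}`$:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_pd_hermitian(n):
    # a @ a^+ is positive semidefinite; adding the identity makes it positive definite
    a = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return a @ a.conj().T + np.eye(n)

n = 4
ct = random_pd_hermitian(n)    # stands in for the time-even matrix C~
ctp = random_pd_hermitian(n)   # stands in for C~'

# Hermitian square root of ct via its eigendecomposition
w, v = np.linalg.eigh(ct)
s = v @ np.diag(np.sqrt(w)) @ v.conj().T

# ct @ ctp = s (s @ ctp @ s) s^{-1} is similar to the Hermitian matrix s @ ctp @ s,
# hence diagonalizable, with real positive eigenvalues
eigvals = np.linalg.eigvals(ct @ ctp)
assert np.allclose(np.sort(eigvals.real), np.sort(np.linalg.eigvalsh(s @ ctp @ s)))
assert np.allclose(eigvals.imag, 0, atol=1e-6)
assert (eigvals.real > 0).all()
```

The similarity argument is exactly why the Hermitian square root, and hence positive definiteness, is needed.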
The proof of this statement has been given in Ref. (Appendix C), and will not be repeated here. The positive definiteness of $`\stackrel{~}{C}`$ is a required condition, because the Hermitian square root of $`\stackrel{~}{C}`$ must exist. Unfortunately, this condition cannot be relaxed, i.e., if neither $`\stackrel{~}{C}`$ nor $`\stackrel{~}{C}^{\prime }`$ is positive definite, it may happen that matrix $`C^+C^{\prime }`$ cannot be fully diagonalized. An example of such a situation is provided by the following two 4$`\times `$4 matrices: $$C=\left(\begin{array}{cc}0& A\\ -A^T& 0\end{array}\right)\text{ and }C^{\prime }=\left(\begin{array}{cc}0& A^{\prime }\\ -A^{\prime T}& 0\end{array}\right),$$ (32) where the two-dimensional matrices $`A`$ and $`A^{\prime }`$ read $$A=\left(\begin{array}{cc}1& a\\ a^{*}& 0\end{array}\right)\text{ and }A^{\prime }=\left(\begin{array}{cc}0& 1\\ 1& 0\end{array}\right).$$ (33) For the standard time-reversal matrix $`U_T`$ given by $$U_T=\left(\begin{array}{cc}0& -I\\ I& 0\end{array}\right)\text{ one has }\stackrel{~}{C}=\left(\begin{array}{cc}A^T& 0\\ 0& A\end{array}\right),$$ (34) and $`\stackrel{~}{C}^{\prime }`$ has the same form. Neither $`\stackrel{~}{C}`$ (for $`a\ne 0`$) nor $`\stackrel{~}{C}^{\prime }`$ is positive definite, and the $`C^+C^{\prime }`$=$`\stackrel{~}{C}\stackrel{~}{C}^{\prime }`$ matrix, $$C^+C^{\prime }=\left(\begin{array}{cc}(AA^{\prime })^{*}& 0\\ 0& AA^{\prime }\end{array}\right)\text{ for }AA^{\prime }=\left(\begin{array}{cc}a& 1\\ 0& a^{*}\end{array}\right),$$ (35) cannot be fully diagonalized unless $`a\ne a^{*}`$. However, for any small but non-zero imaginary part of $`a`$, matrix $`C^+C^{\prime }`$ can be fully diagonalized. Therefore, this example also shows that the positive definiteness of the (time-even) matrices $`\stackrel{~}{C}`$ or $`\stackrel{~}{C}^{\prime }`$ is only a sufficient condition for the full diagonalization of $`C^+C^{\prime }`$=$`\stackrel{~}{C}\stackrel{~}{C}^{\prime }`$; it is not a necessary one. 
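The transition between the two regimes can be seen directly on the lower block $`AA^{\prime }`$ of $`C^+C^{\prime }`$. The following short numerical check (an illustration; the tolerances are illustrative choices, not taken from the text) computes the determinant of the eigenvector matrix of $`AA^{\prime }`$, which vanishes exactly when the matrix is defective:

```python
import numpy as np

def eigvec_det(a):
    # the lower block AA' of C+C' in the example: eigenvalues a and conj(a)
    m = np.array([[a, 1.0], [0.0, np.conj(a)]])
    _, vecs = np.linalg.eig(m)
    return abs(np.linalg.det(vecs))

# real a: a genuine Jordan block, the two eigenvectors coincide
assert eigvec_det(0.5) < 1e-8
# sizable imaginary part: comfortably diagonalizable
assert eigvec_det(0.5 + 0.1j) > 0.05
# tiny imaginary part: diagonalizable in principle, nearly defective in practice
assert eigvec_det(0.5 + 1e-6j) < 1e-3
```

The last case shows quantitatively why numerical diagonalization becomes ill-conditioned as the imaginary part of $`a`$ shrinks.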
Moreover, it is clear that matrix $`C^+C^{\prime }`$ cannot be diagonalized for $`a`$=$`a^{*}`$, because in the limit of $`\mathrm{Im}a\to 0`$ two eigenvectors of $`C^+C^{\prime }`$ become parallel. This illustrates the difficulty of diagonalizing $`C^+C^{\prime }`$ numerically for small values of $`\mathrm{Im}a`$; the task is then bound to become ill-conditioned. In the GCM, matrices $`C`$ are most often obtained from solutions of the Hartree-Fock-Bogoliubov (HFB) or Hartree-Fock+BCS equations for time-even states. In these cases, matrices $`\stackrel{~}{C}`$ are diagonal in the HFB or BCS canonical bases (composed of pairs of time-reversed states), and their eigenvalues are equal to $`v_m/u_m`$=$`c_m`$, where $`v_m`$ and $`u_m`$ are the standard quasiparticle amplitudes of Eq. (5). Here, the canonical pairs are defined by time reversal, and therefore the eigenvalues $`c_m`$ can, in principle, have arbitrary signs. However, in the BCS method (with a constant gap parameter $`\mathrm{\Delta }`$) all these quasiparticle amplitudes are positive, and hence all the resulting $`\stackrel{~}{C}`$ matrices are positive definite, thus fulfilling the sufficient condition for the full diagonalization of $`C^+C^{\prime }`$=$`\stackrel{~}{C}\stackrel{~}{C}^{\prime }`$. In fact, quasiparticle amplitudes of different signs rarely occur in nuclear physics applications, cf. Ref. . This is so because typical pairing forces couple time-reversed states and are, in general, attractive. This shows that the Jordan structures discussed here cannot be expected to be encountered frequently; most often one will deal with the standard canonical forms of Eqs. (28) and (29), in which the only non-zero matrix elements are adjacent to the main diagonal. In summary, I have shown how to extend the results of Ref. in order to construct a canonical basis in which two arbitrary complex antisymmetric matrices $`C`$ and $`C^{\prime }`$ simultaneously acquire canonical forms. 
This construction completes the generalization of the classic Bloch-Messiah-Zumino theorem to the case of non-diagonal matrix elements calculated between fermion-pair condensates. The critical reading of the manuscript by S.G. Rohoziński is gratefully acknowledged. This research was supported by the Polish Committee for Scientific Research, Contract No. 2 P03B 040 14.
# Isogeny classes of abelian varieties with no principal polarizations ## 1. Introduction A natural question to ask of an isogeny class $`𝒞`$ of abelian varieties over a field $`k`$ is whether or not it contains a principally polarized variety. If $`k`$ is algebraically closed then $`𝒞`$ will certainly contain a principally polarized variety, and if $`k`$ is finite then every $`𝒞`$ that satisfies some relatively weak conditions will contain a principally polarized variety (see ); for example, it is enough for the varieties in $`𝒞`$ to be simple and odd-dimensional. In this paper we show that for a large class of fields, including all number fields and function fields over finite fields, it is very easy to construct isogeny classes of abelian varieties that contain no principally polarized varieties. We also provide a framework for considering the more general problem of determining which finite group schemes occur as the kernels of polarizations of varieties in a given isogeny class. Our construction of isogeny classes containing no principally polarized varieties is very straightforward, but to describe it we must introduce some terminology. If $`\ell `$ is a finite extension of a field $`k`$ and if $`E`$ is an elliptic curve over $`\ell `$, we let $`\mathrm{Res}_{\ell /k}E`$ denote the restriction of scalars of $`E`$ from $`\ell `$ to $`k`$ (see Section 1.3 of ). If $`E`$ is an elliptic curve over $`k`$, we define the *reduced restriction of scalars* of $`E`$ from $`\ell `$ to $`k`$ to be the kernel of the trace map from $`\mathrm{Res}_{\ell /k}E`$ to $`E`$. Let $`p`$ be an odd prime. We will say that an elliptic curve $`E`$ over $`k`$ is *$`p`$-isolated* if it has no complex multiplication over $`k`$ and if it has no $`p`$-isogeny to another elliptic curve over $`k`$. We will say that a field $`k`$ is *$`p`$-admissible* if there is a $`p`$-isolated elliptic curve over $`k`$ and if there is a Galois extension of $`k`$ of degree $`p`$. 
One can show, for example, that every number field is $`p`$-admissible; that every function field over a finite field of characteristic not $`p`$ is $`p`$-admissible; and that no finite field or algebraically closed field is $`p`$-admissible. ###### Theorem 1.1. Let $`p`$ be an odd prime number and let $`k`$ be a $`p`$-admissible field. Let $`E`$ be a $`p`$-isolated elliptic curve over $`k`$, let $`\ell `$ be a degree-$`p`$ Galois extension of $`k`$, and let $`X`$ be the reduced restriction of scalars of $`E`$ from $`\ell `$ to $`k`$. Then $`X`$ is simple, and every polarization of every abelian variety isogenous to $`X`$ has degree divisible by $`p^2`$. We provide an elementary proof of this theorem in Part 2 of this paper. In Section 2.1 we prove some basic results about polarizations and endomorphism rings of Galois twists of abelian varieties; we apply these general results in Section 2.2 to prove several results about reduced restrictions of scalars of $`p`$-isolated elliptic curves. We use the results of Section 2.2 to prove Theorem 1.1 in Section 2.3. In Part 3 we prove a very general theorem that sheds additional light on the proof of Theorem 1.1. In Section 3.1 we associate to every isogeny class $`𝒞`$ of abelian varieties over a field $`k`$ a two-torsion group $`{}_{2}(𝒞)`$ and a finite set $`S_𝒞\subset {}_{2}(𝒞)`$, and we prove in Section 3.2 that the set $`S_𝒞`$ determines the set of kernels of polarizations of varieties in $`𝒞`$ up to Jordan-Hölder isomorphism. \[Margin note: No — the proof of this statement is incorrect.\] 
Then in Section 3.3 we revisit the proof of Theorem 1.1 and show how it can be interpreted in terms of the group $`{}_{2}(𝒞)`$ and the set $`S_𝒞`$. Theorem 1.1 was inspired by a construction of Silverberg and Zarhin (see and ); they too construct twists of a power of an elliptic curve such that every polarization of every abelian variety isogenous to the twist has degree divisible by a given prime. Their original construction is limited to base fields of positive characteristic that have nonabelian Galois extensions of a certain type, but more recently they have produced a new construction that works over an arbitrary number field (see ). Acknowledgments. The author thanks Daniel Goldstein, Bob Guralnick, Alice Silverberg, and Yuri Zarhin for helpful conversations and correspondence. Conventions and notation. We consider varieties to be schemes over some specified base field; it follows that if $`U`$ and $`V`$ are varieties over a field $`k`$, then what we call a *morphism* from $`U`$ to $`V`$ others might call a *$`k`$-morphism* from $`U`$ to $`V`$. If $`U`$ is a variety over a field $`k`$ and $`\ell `$ is an extension field of $`k`$, then we let $`U_{\ell }`$ denote the $`\ell `$-variety $`U\times _{\mathrm{Spec}k}\mathrm{Spec}\ell `$. 
If $`\alpha :U\to V`$ is a morphism of varieties over $`k`$, we let $`\alpha _{\ell }`$ denote the induced morphism from $`U_{\ell }`$ to $`V_{\ell }`$. If $`X`$ is an abelian variety, we let $`\widehat{X}`$ denote its dual variety, and if $`\alpha :X\to Y`$ is a morphism of abelian varieties, we let $`\widehat{\alpha }`$ denote the dual morphism $`\widehat{Y}\to \widehat{X}`$. If $`G`$ is a group scheme over $`k`$ and $`n`$ is an integer, we denote by $`G\left[n\right]`$ the $`n`$-torsion subscheme of $`G`$. ## 2. Isogeny classes containing no principally polarized varieties ### §2.1. Polarizations and endomorphisms of Galois twists of abelian varieties In this section we prove some simple general results about Galois twists of abelian varieties that we will need in our proof of Theorem 1.1. Suppose $`k`$ is a field, $`\ell `$ is a Galois extension of $`k`$ with Galois group $`G`$, and $`Y`$ is an abelian variety over $`k`$. 
Suppose that $`X`$ is an $`\ell /k`$-twist of $`Y`$ and that $`f:Y_{\ell }\to X_{\ell }`$ is an isomorphism. Then $`X`$ corresponds (as in Section III.1.3 of ) to the element of $`H^1(G,\mathrm{Aut}Y_{\ell })`$ represented by the cocycle $`\sigma \mapsto a_\sigma :=f^{-1}f^\sigma `$. Our first proposition tells us when an endomorphism $`\alpha `$ of $`Y`$ gives rise to an endomorphism of $`X`$. ###### Proposition 2.1. The endomorphism $`f\alpha _{\ell }f^{-1}`$ of $`X_{\ell }`$ descends to an endomorphism of $`X`$ if and only if we have $`a_\sigma \alpha _{\ell }=\alpha _{\ell }a_\sigma `$ for all $`\sigma `$ in $`G`$. ###### Proof. 
Let $`\beta `$ be the endomorphism $`f\alpha _{\ell }f^{-1}`$ of $`X_{\ell }`$. Then $`\beta `$ will descend to $`X`$ if and only if for all $`\sigma `$ in $`G`$ we have $`\beta ^\sigma =\beta `$, which is $$\left(f\alpha _{\ell }f^{-1}\right)^\sigma =f\alpha _{\ell }f^{-1}.$$ By multiplying this equality by $`f^\sigma `$ on the right and by $`f^{-1}`$ on the left, and by using the fact that $`\alpha _{\ell }=\alpha _{\ell }^\sigma `$, we see that $`\beta `$ descends to $`X`$ if and only if for all $`\sigma `$ we have $$f^{-1}f^\sigma \alpha _{\ell }=\alpha _{\ell }f^{-1}f^\sigma ,$$ if and only if $`a_\sigma \alpha _{\ell }=\alpha _{\ell }a_\sigma `$ for all $`\sigma `$ in $`G`$. ∎ Now suppose $`\lambda `$ is a polarization of $`Y`$, and let $`x\mapsto x^{\dagger }`$ denote the Rosati involution on $`\mathrm{End}Y`$ corresponding to $`\lambda `$, so that $`x^{\dagger }=\lambda ^{-1}\widehat{x}\lambda `$. Our second proposition tells us when the polarization $`\lambda `$ gives rise to a polarization of $`X`$. ###### Proposition 2.2. 
The polarization $`\widehat{f^{-1}}\lambda _{\ell }f^{-1}`$ of $`X_{\ell }`$ descends to a polarization of $`X`$ if and only if we have $`a_\sigma ^{\dagger }a_\sigma =1`$ for all $`\sigma `$ in $`G`$. ###### Proof. Let $`\mu `$ be the polarization $`\widehat{f^{-1}}\lambda _{\ell }f^{-1}`$ of $`X_{\ell }`$. 
Then $`\mu `$ will descend to $`X`$ if and only if for all $`\sigma `$ in $`G`$ we have $`\mu ^\sigma =\mu `$, which is $$\left(\widehat{f^{-1}}\lambda _{\ell }f^{-1}\right)^\sigma =\widehat{f^{-1}}\lambda _{\ell }f^{-1}.$$ By multiplying this equality by $`f^\sigma `$ on the right and by $`\lambda _{\ell }^{-1}\widehat{f^\sigma }`$ on the left, and by using the fact that $`\lambda _{\ell }=\lambda _{\ell }^\sigma `$, we see that $`\mu `$ descends to $`X`$ if and only if for all $`\sigma `$ we have $$\lambda _{\ell }^{-1}\widehat{\left(f^{-1}f^\sigma \right)}\lambda _{\ell }\left(f^{-1}f^\sigma \right)=1,$$ if and only if $`a_\sigma ^{\dagger }a_\sigma =1`$ for all $`\sigma `$ in $`G`$. ∎ ### §2.2. Reduced restrictions of scalars of $`p`$-isolated elliptic curves Let $`k`$ be a $`p`$-admissible field, let $`E`$ be a $`p`$-isolated elliptic curve over $`k`$, and let $`\ell `$ be a Galois extension of $`k`$ of degree $`p`$. Let $`X`$ be the reduced restriction of scalars of $`E`$ from $`\ell `$ to $`k`$. In this section we calculate the endomorphism ring of $`X`$, a restriction on the degrees of the polarizations of $`X`$, and the Galois module structure of the $`p`$-torsion of $`X`$. These results are the building blocks of our proof of Theorem 1.1. ###### Lemma 2.3. The elliptic curve $`E_{\ell }`$ has no complex multiplication. ###### Proof. 
The endomorphism algebra $`A=\left(\mathrm{End}E_{\ell }\right)\otimes 𝐐`$ is either $`𝐐`$, an imaginary quadratic field, or a quaternion algebra over $`𝐐`$. The action of $`\mathrm{Gal}\left(\ell /k\right)`$ on $`\mathrm{End}E_{\ell }`$ gives us a homomorphism $`\mathrm{Gal}\left(\ell /k\right)\to \mathrm{Aut}\left(A/𝐐\right)`$, and since $`\mathrm{End}E=𝐙`$, this homomorphism must be nontrivial if $`A\ne 𝐐`$. Suppose $`A`$ were a quaternion algebra. Then the image of a generator of $`\mathrm{Gal}\left(\ell /k\right)`$ in $`\mathrm{Aut}\left(A/𝐐\right)`$ would be a nontrivial automorphism. 
Since every automorphism of a quaternion algebra is inner, this automorphism would have to be given by conjugation by a non-central element $`s`$ of $`A`$. But then the $`2`$-dimensional sub-algebra $`𝐐\left(s\right)`$ would be fixed by the action of $`\mathrm{Gal}\left(\ell /k\right)`$ on $`A`$, contradicting the fact that $`\mathrm{End}E=𝐙`$. Therefore $`A`$ is not a quaternion algebra. Suppose $`A`$ were a quadratic field. Then $`\mathrm{Aut}\left(A/𝐐\right)`$ would be a cyclic group of order $`2`$, contradicting the existence of a nontrivial homomorphism $`\mathrm{Gal}\left(\ell /k\right)\to \mathrm{Aut}\left(A/𝐐\right)`$. Thus $`A`$ must be $`𝐐`$, and $`\mathrm{End}E_{\ell }`$ must be $`𝐙`$. 
∎ Let $`\sigma `$ be a generator of $`\mathrm{Gal}\left(\ell /k\right)`$ and let $`R`$ be the restriction of scalars of $`E`$ from $`\ell `$ to $`k`$. Then $`R`$ is the $`\ell /k`$-twist of $`E^p`$ given by the element of $`H^1(\mathrm{Gal}\left(\ell /k\right),\mathrm{Aut}E_{\ell }^p)`$ represented by the cocycle that sends $`\sigma `$ to the automorphism $`\xi `$ of $`E_{\ell }^p`$ that cyclically shifts the factors. The kernel $`S`$ of the trace map $`E^p\to E`$ is stable under $`\xi `$, and the reduced restriction of scalars $`X`$ of $`E`$ is the $`\ell /k`$-twist of $`S`$ given by sending $`\sigma `$ to the restriction of $`\xi `$ to $`S`$. 
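To make this concrete, here is a small numerical sketch (an illustration, taking $`p=5`$) that realizes the restriction of the cyclic shift to the trace-zero sublattice as an integer matrix, and checks that its $`p`$th power is the identity while its characteristic polynomial is the $`p`$th cyclotomic polynomial $`x^4+x^3+x^2+x+1`$:

```python
import numpy as np

p = 5
# cyclic shift xi on the p factors: (x_1, ..., x_p) -> (x_p, x_1, ..., x_{p-1})
xi = np.roll(np.eye(p, dtype=int), 1, axis=0)
# embed Z^{p-1} as the trace-zero sublattice: last coordinate is minus the sum
embed = np.vstack([np.eye(p - 1, dtype=int), -np.ones((1, p - 1), dtype=int)])
# project back onto the first p - 1 coordinates
proj = np.hstack([np.eye(p - 1, dtype=int), np.zeros((p - 1, 1), dtype=int)])
zeta = proj @ xi @ embed

# zeta^p is the identity on the trace-zero sublattice
assert np.array_equal(np.linalg.matrix_power(zeta, p), np.eye(p - 1, dtype=int))
# characteristic polynomial coefficients [1, 1, 1, 1, 1]: the 5th cyclotomic polynomial
assert np.allclose(np.poly(zeta), np.ones(p))
```

Since the characteristic polynomial is irreducible over $`𝐐`$, it is also the minimal polynomial in this case.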
The projection map from $`E^p`$ onto its first $`p-1`$ factors gives an isomorphism from $`S`$ to $`E^{p-1}`$; under this isomorphism, the restriction of $`\xi `$ to $`S`$ is given by the $`\left(p-1\right)\times \left(p-1\right)`$ matrix $$\zeta =\left[\begin{array}{ccccc}-1& -1& \cdots & -1& -1\\ 1& 0& \cdots & 0& 0\\ 0& 1& \cdots & 0& 0\\ \vdots & \vdots & \ddots & \vdots & \vdots \\ 0& 0& \cdots & 1& 0\end{array}\right],$$ where we identify the ring $`M_{p-1}\left(𝐙\right)`$ of $`\left(p-1\right)\times \left(p-1\right)`$ integer matrices with the endomorphism ring of $`E_{\ell }^{p-1}`$. In other words, $`X`$ is the $`\ell /k`$-twist of $`E^{p-1}`$ given by the element of $`H^1(\mathrm{Gal}\left(\ell /k\right),\mathrm{Aut}E_{\ell }^{p-1})`$ represented by the cocycle that sends $`\sigma `$ to $`\zeta `$. Note that the minimal polynomial of the endomorphism $`\zeta `$ is the $`p`$th cyclotomic polynomial. ###### Lemma 2.4. The abelian variety $`X`$ is simple over $`k`$, and its endomorphism ring is isomorphic to the ring of integers of the $`p`$th cyclotomic field. ###### Proof. 
Since $`X_\ell \cong E_\ell^{p-1}`$ and $`\mathrm{End}E_\ell =𝐙=\mathrm{End}E`$, every endomorphism of $`X`$ comes from an endomorphism of $`E^{p-1}`$. According to Proposition 2.1, the only endomorphisms of $`E^{p-1}`$ that give rise to elements of $`\mathrm{End}X`$ are the endomorphisms that commute with the element $`\zeta`$ of $`\mathrm{End}E_\ell^{p-1}`$ defined by the matrix above.
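As a concrete sanity check of the claim that the minimal polynomial of $`\zeta`$ is the $`p`$th cyclotomic polynomial $`\Phi_p`$, one can verify for small primes that the characteristic polynomial of the $`(p-1)\times (p-1)`$ matrix above is $`\Phi_p`$; since $`\Phi_p`$ has degree $`p-1`$, the characteristic and minimal polynomials coincide here. The following sketch (the helper `zeta_matrix` is ours, not from the paper) is not part of the argument:

```python
import sympy as sp

x = sp.symbols('x')

def zeta_matrix(p):
    """(p-1)x(p-1) matrix with -1's across the first row and 1's on the subdiagonal."""
    n = p - 1
    M = sp.zeros(n, n)
    for j in range(n):
        M[0, j] = -1
    for i in range(1, n):
        M[i, i - 1] = 1
    return M

for p in [3, 5, 7, 11]:
    Z = zeta_matrix(p)
    # characteristic polynomial of ζ equals the p-th cyclotomic polynomial Φ_p,
    # which therefore is also its minimal polynomial (both have degree p-1)
    assert sp.expand(Z.charpoly(x).as_expr() - sp.cyclotomic_poly(p, x)) == 0
    # ζ is a unit in M_{p-1}(Z): its determinant is (-1)^(p-1) Φ_p(0) = 1
    assert Z.det() == 1
```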
Since $`𝐐(\zeta)`$ is a field of degree $`p-1`$ over $`𝐐`$, it is a maximal commutative subring of the matrix ring $`M_{p-1}(𝐐)`$, so the only elements of $`\mathrm{End}E^{p-1}`$ that commute with $`\zeta`$ are those that lie in $`𝐐(\zeta)`$. The intersection of $`𝐐(\zeta)`$ with $`\mathrm{End}E^{p-1}`$ is $`𝐙[\zeta ]`$, which is the ring of integers of the cyclotomic field $`𝐐(\zeta)`$. Thus $`\mathrm{End}X`$ is isomorphic to the ring of integers of the $`p`$th cyclotomic field. Finally, the fact that $`(\mathrm{End}X)\otimes 𝐐`$ is a field shows that $`X`$ is simple. ∎ ###### Lemma 2.5.
If $`\alpha \in 𝐐(\zeta)`$ is an endomorphism of $`X`$, then the degree of $`\alpha`$ is the square of the norm of $`\alpha`$ from $`𝐐(\zeta)`$ to $`𝐐`$. ###### Proof. It is easy to see that under the identification of $`\mathrm{End}E^{p-1}`$ with $`M_{p-1}(𝐙)`$, the degree function is the square of the determinant function. The lemma then follows from the fact that the determinant function from $`M_{p-1}(𝐐)`$ to $`𝐐`$, restricted to the maximal subfield $`𝐐(\zeta)`$ of $`M_{p-1}(𝐐)`$, is the field norm from $`𝐐(\zeta)`$ to $`𝐐`$. ∎ ###### Lemma 2.6. The abelian variety $`X`$ has a polarization $`\lambda`$ of degree $`p^2`$. ###### Proof.
Let $`\mu`$ be the product principal polarization of $`E^{p-1}`$, let $`b`$ be the endomorphism of $`E^{p-1}`$ defined by the matrix
$$\left[\begin{array}{cccc}2&1&\cdots &1\\ 1&2&\cdots &1\\ \vdots &\vdots &\ddots &\vdots \\ 1&1&\cdots &2\end{array}\right]$$
with $`2`$'s on the diagonal and $`1`$'s elsewhere, and let $`\overline{\lambda}=\mu b`$. The Rosati involution associated to $`\mu`$ is the matrix transpose $`x\mapsto x^t`$, and the Rosati involution associated to $`\overline{\lambda}`$ is the matrix transpose conjugated by $`b`$ — that is, $`x\mapsto b^{-1}x^tb`$.
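The two facts about $`b`$ on which the rest of the proof turns — that $`b^{-1}\zeta^tb\zeta`$ is the identity and that $`\det b=p`$ — are finite matrix computations, so they can be spot-checked numerically for small primes. A sketch with illustrative helper names (not from the paper):

```python
import sympy as sp

def zeta_matrix(p):
    """(p-1)x(p-1) matrix with -1's across the first row and 1's on the subdiagonal."""
    n = p - 1
    M = sp.zeros(n, n)
    for j in range(n):
        M[0, j] = -1
    for i in range(1, n):
        M[i, i - 1] = 1
    return M

def b_matrix(p):
    """(p-1)x(p-1) matrix with 2's on the diagonal and 1's elsewhere, i.e. I + J."""
    n = p - 1
    return sp.eye(n) + sp.ones(n, n)

for p in [3, 5, 7]:
    b = b_matrix(p)
    Z = zeta_matrix(p)
    # b = I + J, and the all-ones matrix J has eigenvalues p-1 (once) and 0,
    # so det b = 1^(p-2) * p = p
    assert b.det() == p
    # ζ^t b ζ = b, equivalently b^{-1} ζ^t b ζ is the identity
    assert Z.T * b * Z == b
```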
One checks that $`b^{-1}\zeta^tb\zeta`$ is the identity matrix, so by Proposition 2.2 the polarization $`\overline{\lambda}_\ell`$ of $`E_\ell^{p-1}`$ descends to give a polarization $`\lambda`$ of $`X`$. Since $`\mu`$ is a principal polarization and $`\lambda_\ell =\mu_\ell b_\ell`$, the degree of $`\lambda`$ is the degree of $`b`$, which is the square of the determinant of $`b`$. An easy calculation shows that $`\det b=p`$, which proves the lemma. ∎ ###### Lemma 2.7. Suppose $`\mu`$ is a polarization of $`X`$. Then there is an integer $`n`$ such that $`\mathrm{deg}\mu =p^2n^4`$. ###### Proof.
Let $`K`$ be the field $`(\mathrm{End}X)\otimes 𝐐`$ and let $`\zeta`$ be the endomorphism of $`X`$ defined above, so that $`\zeta`$ is a primitive $`p`$th root of unity and $`K=𝐐(\zeta)`$. Let $`x\mapsto x^{*}`$ be the Rosati involution associated to the polarization $`\lambda`$ of Lemma 2.6. Then for every $`x\in K`$, the element $`x^{*}`$ is simply the complex conjugate of $`x`$. Suppose $`\mu`$ is a polarization of $`X`$. Then the element $`\lambda^{-1}\mu`$ of the field $`K`$ is fixed by the Rosati involution (see §21, Application 3, pp. 208–210 of ), and is therefore an element of the maximal real subfield $`K^+`$ of $`K`$. By Lemma 2.5, the degree of an element of $`K`$ is equal to the square of its norm to $`𝐐`$, so we find that
$$\mathrm{deg}\mu =\mathrm{deg}\lambda \cdot \mathrm{deg}\left(\lambda^{-1}\mu \right)=p^2\left(N_{K/𝐐}\left(\lambda^{-1}\mu \right)\right)^2=p^2\left(N_{K^+/𝐐}\left(\lambda^{-1}\mu \right)\right)^4.$$
Let $`n=N_{K^+/𝐐}\left(\lambda^{-1}\mu \right)`$.
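Lemma 2.5's determinant–norm identity can be made concrete on the element $`\zeta -1`$: its determinant as an integer matrix equals $`N_{𝐐(\zeta)/𝐐}(\zeta -1)=\Phi_p(1)=p`$, so its degree as an isogeny is $`p^2`$, the square of the norm. A small numerical sketch (helper names are illustrative, not the paper's):

```python
import sympy as sp

x = sp.symbols('x')

def zeta_matrix(p):
    """Matrix of the endomorphism ζ: -1's across the first row, 1's on the subdiagonal."""
    n = p - 1
    M = sp.zeros(n, n)
    for j in range(n):
        M[0, j] = -1
    for i in range(1, n):
        M[i, i - 1] = 1
    return M

for p in [3, 5, 7]:
    alpha = zeta_matrix(p) - sp.eye(p - 1)        # the endomorphism ζ - 1
    norm = sp.cyclotomic_poly(p, x).subs(x, 1)    # N(ζ-1) = Φ_p(1) = p
    assert alpha.det() == norm == p               # determinant equals the field norm
    assert alpha.det() ** 2 == p ** 2             # deg(ζ-1) = det² = norm² = p²
```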
Since $`\mathrm{deg}\mu`$ is an integer and $`n`$ is rational, we see that $`n`$ is an integer, and the lemma is proved. ∎ We end this section by describing the Galois module structure of $`X[p]`$. First note that $`p`$ is not equal to the characteristic of the base field $`k`$ because $`E`$ has no $`p`$-isogenies. Thus, the group scheme structures of $`E[p]`$ and $`X[p]`$ are completely captured by the Galois module structures of the sets of points of these schemes over a separable closure $`k^{\text{sep}}`$ of $`k`$. Furthermore, $`E[p](k^{\text{sep}})`$ is a simple $`\mathrm{Gal}(k^{\text{sep}}/k)`$-module because $`E`$ has no $`p`$-isogenies to another elliptic curve over $`k`$. ###### Lemma 2.8.
The sequence of modules
$$0\subset \left(\zeta -1\right)^{p-2}X[p](k^{\text{sep}})\subset \left(\zeta -1\right)^{p-3}X[p](k^{\text{sep}})\subset \cdots \subset X[p](k^{\text{sep}})$$
is a composition series for the $`\mathrm{Gal}(k^{\text{sep}}/k)`$-module $`X[p](k^{\text{sep}})`$, and each composition factor is isomorphic to
$`E[p](k^{\text{sep}})`$. ###### Proof. Multiplication by $`\left(\zeta -1\right)^{p-i-2}`$ gives an isomorphism from the quotient $`\left(\zeta -1\right)^iX[p](k^{\text{sep}})/\left(\zeta -1\right)^{i+1}X[p](k^{\text{sep}})`$ to $`\left(\zeta -1\right)^{p-2}X[p](k^{\text{sep}})`$, which is the kernel of $`\zeta -1`$ acting on $`X[p](k^{\text{sep}})`$. This kernel is the image of $`E[p](k^{\text{sep}})`$ under the diagonal embedding of $`E_\ell`$ into $`E_\ell^{p-1}`$. ∎ ###### Lemma 2.9. If $`\phi :X\to Y`$ is an isogeny, then the only simple $`\mathrm{Gal}(k^{\text{sep}}/k)`$-module that occurs as a composition factor of the $`p`$-power-torsion part of the kernel of $`\phi`$ is $`E[p](k^{\text{sep}})`$. ###### Proof. Immediate from Lemma 2.8. ∎ ### §2.3.
Proof of Theorem 1.1 For every finite group scheme $`G`$ over $`k`$, let us define the *$`E[p]`$-rank* of $`G`$ to be the multiplicity of the simple Galois module $`E[p](k^{\text{sep}})`$ as a composition factor of the $`p`$-power-torsion of $`G`$. We will denote the $`E[p]`$-rank of $`G`$ by $`\mathrm{rank}_{E[p]}(G)`$. The $`E[p]`$-rank is an additive function on exact sequences of finite group schemes. Suppose $`\phi :X\to Y`$ is an isogeny and $`\nu`$ is a polarization of $`Y`$.
Then the map $`\mu :X\to \widehat{X}`$ given by $`\widehat{\phi}\nu \phi`$ is a polarization of $`X`$, so we have
$$\mathrm{rank}_{E[p]}\left(\mathrm{ker}\mu \right)=\mathrm{rank}_{E[p]}(\mathrm{ker}\widehat{\phi})+\mathrm{rank}_{E[p]}\left(\mathrm{ker}\nu \right)+\mathrm{rank}_{E[p]}\left(\mathrm{ker}\phi \right).$$
Now, $`\mathrm{ker}\widehat{\phi}`$ is the Cartier dual of $`\mathrm{ker}\phi`$, and $`E[p]`$ is its own Cartier dual, so
$`\mathrm{rank}_{E[p]}(\mathrm{ker}\widehat{\phi})=\mathrm{rank}_{E[p]}\left(\mathrm{ker}\phi \right)`$. Thus the parity of $`\mathrm{rank}_{E[p]}\left(\mathrm{ker}\nu \right)`$ is equal to that of $`\mathrm{rank}_{E[p]}\left(\mathrm{ker}\mu \right)`$, which is odd by Lemmas 2.7 and 2.9. Therefore $`E[p]`$ appears as a composition factor of the $`p`$-power-torsion of $`\mathrm{ker}\nu`$, so $`p^2`$ divides the order of $`\mathrm{ker}\nu`$, and hence $`p^2`$ divides the degree of $`\nu`$. ∎ ## 3.
Polarizations up to Jordan-Hölder isomorphism In this part of the paper we associate to every isogeny class $`𝒞`$ of abelian varieties over a field $`k`$ a two-torsion group $`{}_2(𝒞)`$ and a finite set $`S_𝒞\subset {}_2(𝒞)`$, and we show that the set $`S_𝒞`$ determines the set of kernels of polarizations of varieties in $`𝒞`$ up to Jordan-Hölder isomorphism. Then we revisit the proof of Theorem 1.1 and show how it can be interpreted in terms of the group $`{}_2(𝒞)`$ and the set $`S_𝒞`$. ### §3.1. Statement of results For every isogeny class $`𝒞`$ of abelian varieties over a field $`k`$, we let $`\mathrm{Ker}_𝒞`$ be the category whose objects are finite commutative group schemes over $`k`$ that can be embedded (as closed sub-group-schemes) in some variety in the isogeny class $`𝒞`$ and whose morphisms are morphisms of group schemes.
We see that the objects in $`\mathrm{Ker}_𝒞`$ are those group schemes that can be written $`\mathrm{ker}\phi`$ for some isogeny $`\phi :X\to Y`$ of elements of $`𝒞`$. The *Grothendieck group* $`G\left(\mathrm{Ker}_𝒞\right)`$ of $`\mathrm{Ker}_𝒞`$ is defined to be the quotient of the free abelian group on the isomorphism classes of objects in $`\mathrm{Ker}_𝒞`$ by the subgroup generated by the expressions $`X-X^{\prime }-X^{\prime \prime }`$ for all exact sequences $`0\to X^{\prime }\to X\to X^{\prime \prime }\to 0`$ in $`\mathrm{Ker}_𝒞`$.
If $`X`$ is an object of $`\mathrm{Ker}_𝒞`$ we let $`\left[X\right]`$ denote its class in $`G\left(\mathrm{Ker}_𝒞\right)`$, and we say that two objects $`X`$ and $`Y`$ are *Jordan-Hölder isomorphic* to one another if $`\left[X\right]=\left[Y\right]`$. The group $`G\left(\mathrm{Ker}_𝒞\right)`$ is a free abelian group on the simple objects of $`\mathrm{Ker}_𝒞`$. An element of $`G\left(\mathrm{Ker}_𝒞\right)`$ is said to be *effective* if it is a sum of positive multiples of simple objects. Let us call an element $`P`$ of $`G\left(\mathrm{Ker}_𝒞\right)`$ *attainable* if there is a polarization $`\lambda`$ of a variety in $`𝒞`$ such that $`P=\left[\mathrm{ker}\lambda \right]`$.
Our goal in this section will be to identify the attainable elements of $`G\left(\mathrm{Ker}_𝒞\right)`$. To do so we must first define several groups for every isogeny class of abelian varieties. The first two groups will be subgroups of $`G\left(\mathrm{Ker}_𝒞\right)`$, and the others will be defined solely in terms of the endomorphism rings of the varieties in the isogeny class. Let $`Z\left(𝒞\right)`$ denote the subgroup of $`G\left(\mathrm{Ker}_𝒞\right)`$ generated by the elements of the form $`\left[G\right]`$, where $`G\in \mathrm{Ker}_𝒞`$ is a group scheme whose rank is a square and for which there exists a non-degenerate alternating pairing $`G\times G\to 𝐆_m`$.<sup>1</sup> <sup>1</sup>The existence of a non-degenerate alternating pairing *implies* that the rank of $`G`$ is a square, except in characteristic $`2`$; the unique simple local-local group scheme in characteristic $`2`$, which has rank $`2`$, has a non-degenerate alternating pairing.
Cartier duality on $`\mathrm{Ker}_𝒞`$ defines an involution $`P\mapsto \overline{P}`$ of $`G\left(\mathrm{Ker}_𝒞\right)`$, and $`Z\left(𝒞\right)`$ is stable under this involution. Let $`B\left(𝒞\right)`$ be the subgroup $`\{P+\overline{P}:P\in G\left(\mathrm{Ker}_𝒞\right)\}`$ of $`G\left(\mathrm{Ker}_𝒞\right)`$; it is not hard to see that $`B\left(𝒞\right)\subset Z\left(𝒞\right)`$. Now let us define the groups that depend on the endomorphism rings of the varieties in $`𝒞`$.
If $`\textcolor[rgb]{1,0,0}{X}`$ and $`\textcolor[rgb]{1,0,0}{Y}`$ are two varieties in $`\textcolor[rgb]{1,0,0}{𝒞}`$ then $`\textcolor[rgb]{1,0,0}{\left(}\textcolor[rgb]{1,0,0}{\mathrm{End}}\textcolor[rgb]{1,0,0}{X}\textcolor[rgb]{1,0,0}{\right)}\textcolor[rgb]{1,0,0}{}\textcolor[rgb]{1,0,0}{𝐐}\textcolor[rgb]{1,0,0}{}\textcolor[rgb]{1,0,0}{\left(}\textcolor[rgb]{1,0,0}{\mathrm{End}}\textcolor[rgb]{1,0,0}{Y}\textcolor[rgb]{1,0,0}{\right)}\textcolor[rgb]{1,0,0}{}\textcolor[rgb]{1,0,0}{𝐐}`$, so we may define $`\textcolor[rgb]{1,0,0}{\mathrm{End}}^\textcolor[rgb]{1,0,0}{0}\textcolor[rgb]{1,0,0}{𝒞}`$ to be $`\textcolor[rgb]{1,0,0}{\left(}\textcolor[rgb]{1,0,0}{\mathrm{End}}\textcolor[rgb]{1,0,0}{X}\textcolor[rgb]{1,0,0}{\right)}\textcolor[rgb]{1,0,0}{}\textcolor[rgb]{1,0,0}{𝐐}`$ for any $`\textcolor[rgb]{1,0,0}{X}`$ in $`\textcolor[rgb]{1,0,0}{𝒞}`$. We may write $`\textcolor[rgb]{1,0,0}{𝒞}\textcolor[rgb]{1,0,0}{}\textcolor[rgb]{1,0,0}{𝒞}_\textcolor[rgb]{1,0,0}{1}^{\textcolor[rgb]{1,0,0}{n}_\textcolor[rgb]{1,0,0}{1}}\textcolor[rgb]{1,0,0}{\times }\textcolor[rgb]{1,0,0}{\mathrm{}}\textcolor[rgb]{1,0,0}{\times }\textcolor[rgb]{1,0,0}{𝒞}_\textcolor[rgb]{1,0,0}{r}^{\textcolor[rgb]{1,0,0}{n}_\textcolor[rgb]{1,0,0}{r}}`$ for some distinct isogeny classes $`\textcolor[rgb]{1,0,0}{𝒞}_\textcolor[rgb]{1,0,0}{i}`$ of simple varieties and some integers $`\textcolor[rgb]{1,0,0}{n}_\textcolor[rgb]{1,0,0}{i}`$; by this we mean that every $`\textcolor[rgb]{1,0,0}{X}`$ in $`\textcolor[rgb]{1,0,0}{𝒞}`$ is isogenous to a product $`\textcolor[rgb]{1,0,0}{X}_\textcolor[rgb]{1,0,0}{1}^{\textcolor[rgb]{1,0,0}{n}_\textcolor[rgb]{1,0,0}{1}}\textcolor[rgb]{1,0,0}{\times }\textcolor[rgb]{1,0,0}{\mathrm{}}\textcolor[rgb]{1,0,0}{\times }\textcolor[rgb]{1,0,0}{X}_\textcolor[rgb]{1,0,0}{r}^{\textcolor[rgb]{1,0,0}{n}_\textcolor[rgb]{1,0,0}{r}}`$ where $`\textcolor[rgb]{1,0,0}{X}_\textcolor[rgb]{1,0,0}{i}\textcolor[rgb]{1,0,0}{}\textcolor[rgb]{1,0,0}{𝒞}_\textcolor[rgb]{1,0,0}{i}`$. 
Let $`\textcolor[rgb]{1,0,0}{A}\textcolor[rgb]{1,0,0}{=}\textcolor[rgb]{1,0,0}{\mathrm{End}}^\textcolor[rgb]{1,0,0}{0}\textcolor[rgb]{1,0,0}{𝒞}`$. Then $$\textcolor[rgb]{1,0,0}{A}\textcolor[rgb]{1,0,0}{}\textcolor[rgb]{1,0,0}{M}_{\textcolor[rgb]{1,0,0}{n}_\textcolor[rgb]{1,0,0}{1}}\textcolor[rgb]{1,0,0}{\left(}\textcolor[rgb]{1,0,0}{D}_\textcolor[rgb]{1,0,0}{1}\textcolor[rgb]{1,0,0}{\right)}\textcolor[rgb]{1,0,0}{\times }\textcolor[rgb]{1,0,0}{\mathrm{}}\textcolor[rgb]{1,0,0}{\times }\textcolor[rgb]{1,0,0}{M}_{\textcolor[rgb]{1,0,0}{n}_\textcolor[rgb]{1,0,0}{r}}\textcolor[rgb]{1,0,0}{\left(}\textcolor[rgb]{1,0,0}{D}_\textcolor[rgb]{1,0,0}{r}\textcolor[rgb]{1,0,0}{\right)}\textcolor[rgb]{1,0,0}{,}$$ where $`\textcolor[rgb]{1,0,0}{D}_\textcolor[rgb]{1,0,0}{i}\textcolor[rgb]{1,0,0}{=}\textcolor[rgb]{1,0,0}{\mathrm{End}}^\textcolor[rgb]{1,0,0}{0}\textcolor[rgb]{1,0,0}{𝒞}_\textcolor[rgb]{1,0,0}{i}`$ and where $`\textcolor[rgb]{1,0,0}{M}_{\textcolor[rgb]{1,0,0}{n}_\textcolor[rgb]{1,0,0}{i}}\textcolor[rgb]{1,0,0}{\left(}\textcolor[rgb]{1,0,0}{D}_\textcolor[rgb]{1,0,0}{i}\textcolor[rgb]{1,0,0}{\right)}`$ denotes an $`\textcolor[rgb]{1,0,0}{n}_\textcolor[rgb]{1,0,0}{i}`$-by-$`\textcolor[rgb]{1,0,0}{n}_\textcolor[rgb]{1,0,0}{i}`$ matrix algebra over $`\textcolor[rgb]{1,0,0}{D}_\textcolor[rgb]{1,0,0}{i}`$. Each $`\textcolor[rgb]{1,0,0}{D}_\textcolor[rgb]{1,0,0}{i}`$ is a division algebra over its center $`\textcolor[rgb]{1,0,0}{K}_\textcolor[rgb]{1,0,0}{i}`$, which is a number field, and the center $`\textcolor[rgb]{1,0,0}{K}`$ of $`\textcolor[rgb]{1,0,0}{A}`$ is the product of the $`\textcolor[rgb]{1,0,0}{K}_\textcolor[rgb]{1,0,0}{i}`$. Only certain kinds of division algebras can occur as the endomorphism algebras of simple isogeny classes, and they are classified into four types (see Theorem 2 (p. 201) of ). 
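Two familiar examples may help orient the reader (an aside we add here; both facts are standard for elliptic curves over finite fields):

```latex
% Example (Type IV): an ordinary elliptic curve E over a finite field has
\[
  D = \operatorname{End}^0 E = K, \qquad K \text{ imaginary quadratic},\ K^+ = \mathbf{Q}.
\]
% Example (Type III): a supersingular elliptic curve over \overline{\mathbf{F}}_p has
\[
  D = \operatorname{End}^0 E = \text{the quaternion algebra over } \mathbf{Q}
      \text{ ramified exactly at } p \text{ and } \infty .
\]
```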
We will define three subgroups $`\textcolor[rgb]{1,0,0}{R}_\textcolor[rgb]{1,0,0}{0}\textcolor[rgb]{1,0,0}{\left(}\textcolor[rgb]{1,0,0}{A}\textcolor[rgb]{1,0,0}{\right)}`$, $`\textcolor[rgb]{1,0,0}{R}_\textcolor[rgb]{1,0,0}{1}\textcolor[rgb]{1,0,0}{\left(}\textcolor[rgb]{1,0,0}{A}\textcolor[rgb]{1,0,0}{\right)}`$, and $`\textcolor[rgb]{1,0,0}{R}_\textcolor[rgb]{1,0,0}{2}\textcolor[rgb]{1,0,0}{\left(}\textcolor[rgb]{1,0,0}{A}\textcolor[rgb]{1,0,0}{\right)}`$ of $`\textcolor[rgb]{1,0,0}{K}^{\textcolor[rgb]{1,0,0}{}}`$ by defining these groups first for the four possible types of division algebras $`\textcolor[rgb]{1,0,0}{D}`$ and by then setting $$\textcolor[rgb]{1,0,0}{R}_\textcolor[rgb]{1,0,0}{i}\textcolor[rgb]{1,0,0}{\left(}\textcolor[rgb]{1,0,0}{A}\textcolor[rgb]{1,0,0}{\right)}\textcolor[rgb]{1,0,0}{=}\textcolor[rgb]{1,0,0}{R}_\textcolor[rgb]{1,0,0}{i}\textcolor[rgb]{1,0,0}{\left(}\textcolor[rgb]{1,0,0}{D}_\textcolor[rgb]{1,0,0}{1}\textcolor[rgb]{1,0,0}{\right)}\textcolor[rgb]{1,0,0}{\times }\textcolor[rgb]{1,0,0}{\mathrm{}}\textcolor[rgb]{1,0,0}{\times }\textcolor[rgb]{1,0,0}{R}_\textcolor[rgb]{1,0,0}{i}\textcolor[rgb]{1,0,0}{\left(}\textcolor[rgb]{1,0,0}{D}_\textcolor[rgb]{1,0,0}{r}\textcolor[rgb]{1,0,0}{\right)}\textcolor[rgb]{1,0,0}{.}$$ The group $`\textcolor[rgb]{1,0,0}{R}_\textcolor[rgb]{1,0,0}{1}\textcolor[rgb]{1,0,0}{\left(}\textcolor[rgb]{1,0,0}{A}\textcolor[rgb]{1,0,0}{\right)}`$ will be a subgroup of $`\textcolor[rgb]{1,0,0}{R}_\textcolor[rgb]{1,0,0}{0}\textcolor[rgb]{1,0,0}{\left(}\textcolor[rgb]{1,0,0}{A}\textcolor[rgb]{1,0,0}{\right)}`$, and $`\textcolor[rgb]{1,0,0}{R}_\textcolor[rgb]{1,0,0}{2}\textcolor[rgb]{1,0,0}{\left(}\textcolor[rgb]{1,0,0}{A}\textcolor[rgb]{1,0,0}{\right)}`$ will be a subgroup of finite index in $`\textcolor[rgb]{1,0,0}{R}_\textcolor[rgb]{1,0,0}{1}\textcolor[rgb]{1,0,0}{\left(}\textcolor[rgb]{1,0,0}{A}\textcolor[rgb]{1,0,0}{\right)}`$. 
Type I: For this type, $`\textcolor[rgb]{1,0,0}{D}\textcolor[rgb]{1,0,0}{=}\textcolor[rgb]{1,0,0}{K}`$ is a totally real number field. We let $`\textcolor[rgb]{1,0,0}{R}_\textcolor[rgb]{1,0,0}{0}\textcolor[rgb]{1,0,0}{\left(}\textcolor[rgb]{1,0,0}{D}\textcolor[rgb]{1,0,0}{\right)}\textcolor[rgb]{1,0,0}{=}\textcolor[rgb]{1,0,0}{K}^{\textcolor[rgb]{1,0,0}{}}`$, we let $`\textcolor[rgb]{1,0,0}{R}_\textcolor[rgb]{1,0,0}{1}\textcolor[rgb]{1,0,0}{\left(}\textcolor[rgb]{1,0,0}{D}\textcolor[rgb]{1,0,0}{\right)}`$ be the multiplicative group of totally positive elements of $`\textcolor[rgb]{1,0,0}{K}`$, and we let $`\textcolor[rgb]{1,0,0}{R}_\textcolor[rgb]{1,0,0}{2}\textcolor[rgb]{1,0,0}{\left(}\textcolor[rgb]{1,0,0}{D}\textcolor[rgb]{1,0,0}{\right)}\textcolor[rgb]{1,0,0}{=}\textcolor[rgb]{1,0,0}{R}_\textcolor[rgb]{1,0,0}{1}\textcolor[rgb]{1,0,0}{\left(}\textcolor[rgb]{1,0,0}{D}\textcolor[rgb]{1,0,0}{\right)}`$.

Type II: For this type, $`\textcolor[rgb]{1,0,0}{K}`$ is a totally real number field and $`\textcolor[rgb]{1,0,0}{D}`$ is a quaternion algebra over $`\textcolor[rgb]{1,0,0}{K}`$ that is split at every infinite prime of $`\textcolor[rgb]{1,0,0}{K}`$.
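Both Type I and Type II take the group R_1(D) to be the totally positive elements of the totally real field K. A minimal numeric sketch (our own illustration, fixing K = Q(sqrt(2)) as an assumed example) of testing total positivity under the two real embeddings:

```python
import math

def embeddings(a, b, d=2):
    """Images of a + b*sqrt(d) under the two real embeddings of Q(sqrt(d))."""
    r = math.sqrt(d)
    return (a + b * r, a - b * r)

def is_totally_positive(a, b, d=2):
    """True iff a + b*sqrt(d) is totally positive, i.e. lies in R_1(D)."""
    return all(x > 0 for x in embeddings(a, b, d))

# 3 + sqrt(2) is totally positive; 1 + sqrt(2) is not, because its
# conjugate 1 - sqrt(2) is negative.
```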
We let $`\textcolor[rgb]{1,0,0}{R}_\textcolor[rgb]{1,0,0}{0}\textcolor[rgb]{1,0,0}{\left(}\textcolor[rgb]{1,0,0}{D}\textcolor[rgb]{1,0,0}{\right)}\textcolor[rgb]{1,0,0}{=}\textcolor[rgb]{1,0,0}{K}^{\textcolor[rgb]{1,0,0}{}}`$, we let $`\textcolor[rgb]{1,0,0}{R}_\textcolor[rgb]{1,0,0}{1}\textcolor[rgb]{1,0,0}{\left(}\textcolor[rgb]{1,0,0}{D}\textcolor[rgb]{1,0,0}{\right)}`$ be the multiplicative group of totally positive elements of $`\textcolor[rgb]{1,0,0}{K}`$, and we let $`\textcolor[rgb]{1,0,0}{R}_\textcolor[rgb]{1,0,0}{2}\textcolor[rgb]{1,0,0}{\left(}\textcolor[rgb]{1,0,0}{D}\textcolor[rgb]{1,0,0}{\right)}`$ be the subgroup of $`\textcolor[rgb]{1,0,0}{R}_\textcolor[rgb]{1,0,0}{1}\textcolor[rgb]{1,0,0}{\left(}\textcolor[rgb]{1,0,0}{D}\textcolor[rgb]{1,0,0}{\right)}`$ consisting of those elements that are squares in $`\textcolor[rgb]{1,0,0}{K}_\textcolor[rgb]{1,0,0}{𝔭}`$ for all primes $`\textcolor[rgb]{1,0,0}{𝔭}`$ of $`\textcolor[rgb]{1,0,0}{K}`$ for which the local Brauer invariant $`\textcolor[rgb]{1,0,0}{\mathrm{inv}}_\textcolor[rgb]{1,0,0}{𝔭}\textcolor[rgb]{1,0,0}{D}`$ of $`\textcolor[rgb]{1,0,0}{D}`$ at $`\textcolor[rgb]{1,0,0}{𝔭}`$ is nonzero.

Type III: For this type, $`\textcolor[rgb]{1,0,0}{K}`$ is a totally real number field and $`\textcolor[rgb]{1,0,0}{D}`$ is a quaternion algebra over $`\textcolor[rgb]{1,0,0}{K}`$ that is ramified at every infinite prime of $`\textcolor[rgb]{1,0,0}{K}`$.
We let $`\textcolor[rgb]{1,0,0}{R}_\textcolor[rgb]{1,0,0}{0}\textcolor[rgb]{1,0,0}{\left(}\textcolor[rgb]{1,0,0}{D}\textcolor[rgb]{1,0,0}{\right)}`$ be the multiplicative group of totally positive elements of $`\textcolor[rgb]{1,0,0}{K}`$, we let $`\textcolor[rgb]{1,0,0}{R}_\textcolor[rgb]{1,0,0}{1}\textcolor[rgb]{1,0,0}{\left(}\textcolor[rgb]{1,0,0}{D}\textcolor[rgb]{1,0,0}{\right)}`$ be the group of squares of elements of $`\textcolor[rgb]{1,0,0}{R}_\textcolor[rgb]{1,0,0}{0}\textcolor[rgb]{1,0,0}{\left(}\textcolor[rgb]{1,0,0}{D}\textcolor[rgb]{1,0,0}{\right)}`$, and we let $`\textcolor[rgb]{1,0,0}{R}_\textcolor[rgb]{1,0,0}{2}\textcolor[rgb]{1,0,0}{\left(}\textcolor[rgb]{1,0,0}{D}\textcolor[rgb]{1,0,0}{\right)}\textcolor[rgb]{1,0,0}{=}\textcolor[rgb]{1,0,0}{R}_\textcolor[rgb]{1,0,0}{1}\textcolor[rgb]{1,0,0}{\left(}\textcolor[rgb]{1,0,0}{D}\textcolor[rgb]{1,0,0}{\right)}`$.

Type IV: For this type, $`\textcolor[rgb]{1,0,0}{K}`$ is a CM-field with maximal real subfield $`\textcolor[rgb]{1,0,0}{K}^\textcolor[rgb]{1,0,0}{+}`$ and $`\textcolor[rgb]{1,0,0}{D}`$ is a division algebra over $`\textcolor[rgb]{1,0,0}{K}`$ such that if $`\textcolor[rgb]{1,0,0}{\sigma }`$ is the nontrivial automorphism of $`\textcolor[rgb]{1,0,0}{K}`$ over $`\textcolor[rgb]{1,0,0}{K}^\textcolor[rgb]{1,0,0}{+}`$ then $`\textcolor[rgb]{1,0,0}{\mathrm{inv}}_\textcolor[rgb]{1,0,0}{𝔭}\textcolor[rgb]{1,0,0}{D}\textcolor[rgb]{1,0,0}{=}\textcolor[rgb]{1,0,0}{0}`$ for every prime $`\textcolor[rgb]{1,0,0}{𝔭}`$ of $`\textcolor[rgb]{1,0,0}{K}`$ fixed by $`\textcolor[rgb]{1,0,0}{\sigma }`$ and $`\textcolor[rgb]{1,0,0}{\mathrm{inv}}_\textcolor[rgb]{1,0,0}{𝔭}\textcolor[rgb]{1,0,0}{D}\textcolor[rgb]{1,0,0}{+}\textcolor[rgb]{1,0,0}{\mathrm{inv}}_{\textcolor[rgb]{1,0,0}{𝔭}^\textcolor[rgb]{1,0,0}{\sigma }}\textcolor[rgb]{1,0,0}{D}\textcolor[rgb]{1,0,0}{=}\textcolor[rgb]{1,0,0}{0}`$ for every prime $`\textcolor[rgb]{1,0,0}{𝔭}`$ of $`\textcolor[rgb]{1,0,0}{K}`$.
We let $`\textcolor[rgb]{1,0,0}{R}_\textcolor[rgb]{1,0,0}{0}\textcolor[rgb]{1,0,0}{\left(}\textcolor[rgb]{1,0,0}{D}\textcolor[rgb]{1,0,0}{\right)}\textcolor[rgb]{1,0,0}{=}\textcolor[rgb]{1,0,0}{K}^{\textcolor[rgb]{1,0,0}{}}`$, we let $`\textcolor[rgb]{1,0,0}{R}_\textcolor[rgb]{1,0,0}{1}\textcolor[rgb]{1,0,0}{\left(}\textcolor[rgb]{1,0,0}{D}\textcolor[rgb]{1,0,0}{\right)}`$ be the multiplicative group of totally positive elements of $`\textcolor[rgb]{1,0,0}{K}^\textcolor[rgb]{1,0,0}{+}`$, and we let $`\textcolor[rgb]{1,0,0}{R}_\textcolor[rgb]{1,0,0}{2}\textcolor[rgb]{1,0,0}{\left(}\textcolor[rgb]{1,0,0}{D}\textcolor[rgb]{1,0,0}{\right)}\textcolor[rgb]{1,0,0}{=}\textcolor[rgb]{1,0,0}{R}_\textcolor[rgb]{1,0,0}{1}\textcolor[rgb]{1,0,0}{\left(}\textcolor[rgb]{1,0,0}{D}\textcolor[rgb]{1,0,0}{\right)}`$. An involution $`\textcolor[rgb]{1,0,0}{x}\textcolor[rgb]{1,0,0}{}\textcolor[rgb]{1,0,0}{x}^{\textcolor[rgb]{1,0,0}{}}`$ of $`\textcolor[rgb]{1,0,0}{A}`$ is *positive* if $`\textcolor[rgb]{1,0,0}{\mathrm{Trd}}\textcolor[rgb]{1,0,0}{\left(}\textcolor[rgb]{1,0,0}{x}\textcolor[rgb]{1,0,0}{x}^{\textcolor[rgb]{1,0,0}{}}\textcolor[rgb]{1,0,0}{\right)}`$ is a totally positive element of $`\textcolor[rgb]{1,0,0}{K}`$ for every element $`\textcolor[rgb]{1,0,0}{x}`$ of $`\textcolor[rgb]{1,0,0}{A}^{\textcolor[rgb]{1,0,0}{}}`$, where $`\textcolor[rgb]{1,0,0}{\mathrm{Trd}}`$ is the reduced trace map from $`\textcolor[rgb]{1,0,0}{A}`$ to $`\textcolor[rgb]{1,0,0}{K}`$. If $`\textcolor[rgb]{1,0,0}{\alpha }\textcolor[rgb]{1,0,0}{}\textcolor[rgb]{1,0,0}{A}`$ is fixed by a positive involution then $`\textcolor[rgb]{1,0,0}{𝐐}\textcolor[rgb]{1,0,0}{\left(}\textcolor[rgb]{1,0,0}{\alpha }\textcolor[rgb]{1,0,0}{\right)}`$ is a product of totally real number fields. 
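For a concrete instance of a positive involution (our illustration; the transpose example is standard):

```latex
% The transpose involution on A = M_n(\mathbf{Q}) is positive: for x \neq 0,
\[
  \operatorname{Trd}(x\,x^{t}) = \operatorname{tr}(x\,x^{t})
    = \sum_{i,j} x_{ij}^{2} > 0 .
\]
% A matrix \alpha fixed by the transpose is symmetric, hence diagonalizable
% over \mathbf{R} with real eigenvalues; this is why \mathbf{Q}(\alpha) is a
% product of totally real number fields.
```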
If $`\textcolor[rgb]{1,0,0}{x}\textcolor[rgb]{1,0,0}{}\textcolor[rgb]{1,0,0}{x}^{\textcolor[rgb]{1,0,0}{}}`$ is a positive involution of $`\textcolor[rgb]{1,0,0}{A}`$, we let $`\textcolor[rgb]{1,0,0}{A}_{\textcolor[rgb]{1,0,0}{}}`$ denote the set of elements of $`\textcolor[rgb]{1,0,0}{A}^{\textcolor[rgb]{1,0,0}{}}`$ that are fixed by $`\textcolor[rgb]{1,0,0}{}`$ and that are totally positive. The following lemma, whose proof we will give in Section 3.2, motivates the definitions of the groups $`\textcolor[rgb]{1,0,0}{R}_\textcolor[rgb]{1,0,0}{i}\textcolor[rgb]{1,0,0}{\left(}\textcolor[rgb]{1,0,0}{A}\textcolor[rgb]{1,0,0}{\right)}`$.

###### Lemma 3.1.

Let $`\textcolor[rgb]{1,0,0}{A}\textcolor[rgb]{1,0,0}{=}\textcolor[rgb]{1,0,0}{\mathrm{End}}^\textcolor[rgb]{1,0,0}{0}\textcolor[rgb]{1,0,0}{𝒞}`$ for an isogeny class $`\textcolor[rgb]{1,0,0}{𝒞}`$ of abelian varieties over $`\textcolor[rgb]{1,0,0}{k}`$. Then $`\textcolor[rgb]{1,0,0}{R}_\textcolor[rgb]{1,0,0}{0}\textcolor[rgb]{1,0,0}{(}\textcolor[rgb]{1,0,0}{A}\textcolor[rgb]{1,0,0}{)}\textcolor[rgb]{1,0,0}{=}\textcolor[rgb]{1,0,0}{\mathrm{Nrd}}\textcolor[rgb]{1,0,0}{(}\textcolor[rgb]{1,0,0}{A}^{\textcolor[rgb]{1,0,0}{}}\textcolor[rgb]{1,0,0}{)}`$, where $`\textcolor[rgb]{1,0,0}{\mathrm{Nrd}}`$ is the reduced norm map from $`\textcolor[rgb]{1,0,0}{A}`$ to its center $`\textcolor[rgb]{1,0,0}{K}`$. Suppose $`\textcolor[rgb]{1,0,0}{x}\textcolor[rgb]{1,0,0}{}\textcolor[rgb]{1,0,0}{x}^{\textcolor[rgb]{1,0,0}{}}`$ is a positive involution of $`\textcolor[rgb]{1,0,0}{A}`$.
Then $`\textcolor[rgb]{1,0,0}{R}_\textcolor[rgb]{1,0,0}{2}\textcolor[rgb]{1,0,0}{(}\textcolor[rgb]{1,0,0}{A}\textcolor[rgb]{1,0,0}{)}\textcolor[rgb]{1,0,0}{}\textcolor[rgb]{1,0,0}{\mathrm{Nrd}}\textcolor[rgb]{1,0,0}{(}\textcolor[rgb]{1,0,0}{A}_{\textcolor[rgb]{1,0,0}{}}\textcolor[rgb]{1,0,0}{)}\textcolor[rgb]{1,0,0}{}\textcolor[rgb]{1,0,0}{R}_\textcolor[rgb]{1,0,0}{1}\textcolor[rgb]{1,0,0}{(}\textcolor[rgb]{1,0,0}{A}\textcolor[rgb]{1,0,0}{)}\textcolor[rgb]{1,0,0}{.}`$ Remark. Note that if $`\textcolor[rgb]{1,0,0}{A}`$ is built up out of simple $`\textcolor[rgb]{1,0,0}{D}`$ that are of type I, III, and IV then $`\textcolor[rgb]{1,0,0}{R}_\textcolor[rgb]{1,0,0}{2}\textcolor[rgb]{1,0,0}{\left(}\textcolor[rgb]{1,0,0}{A}\textcolor[rgb]{1,0,0}{\right)}\textcolor[rgb]{1,0,0}{=}\textcolor[rgb]{1,0,0}{R}_\textcolor[rgb]{1,0,0}{1}\textcolor[rgb]{1,0,0}{\left(}\textcolor[rgb]{1,0,0}{A}\textcolor[rgb]{1,0,0}{\right)}`$ and $`\textcolor[rgb]{1,0,0}{\mathrm{Nrd}}\textcolor[rgb]{1,0,0}{\left(}\textcolor[rgb]{1,0,0}{A}_{\textcolor[rgb]{1,0,0}{}}\textcolor[rgb]{1,0,0}{\right)}`$ is a subgroup of $`\textcolor[rgb]{1,0,0}{K}^{\textcolor[rgb]{1,0,0}{}}`$, but if a $`\textcolor[rgb]{1,0,0}{D}`$ of type II occurs in $`\textcolor[rgb]{1,0,0}{A}`$ then Lemma 3.1 only allows us to say that $`\textcolor[rgb]{1,0,0}{\mathrm{Nrd}}\textcolor[rgb]{1,0,0}{\left(}\textcolor[rgb]{1,0,0}{A}_{\textcolor[rgb]{1,0,0}{}}\textcolor[rgb]{1,0,0}{\right)}`$ lies between two subgroups of $`\textcolor[rgb]{1,0,0}{K}^{\textcolor[rgb]{1,0,0}{}}`$. We cannot expect to do much better than this; one can find examples of $`\textcolor[rgb]{1,0,0}{D}`$ of type II for which $`\textcolor[rgb]{1,0,0}{\mathrm{Nrd}}\textcolor[rgb]{1,0,0}{\left(}\textcolor[rgb]{1,0,0}{D}_{\textcolor[rgb]{1,0,0}{}}\textcolor[rgb]{1,0,0}{\right)}`$ is not a group. Let $`\textcolor[rgb]{1,0,0}{A}\textcolor[rgb]{1,0,0}{=}\textcolor[rgb]{1,0,0}{\mathrm{End}}^\textcolor[rgb]{1,0,0}{0}\textcolor[rgb]{1,0,0}{𝒞}`$. 
We define a homomorphism $`\textcolor[rgb]{1,0,0}{\mathrm{Prin}}`$ from the multiplicative group $`\textcolor[rgb]{1,0,0}{A}^{\textcolor[rgb]{1,0,0}{}}`$ to $`\textcolor[rgb]{1,0,0}{G}\textcolor[rgb]{1,0,0}{\left(}\textcolor[rgb]{1,0,0}{\mathrm{Ker}}_\textcolor[rgb]{1,0,0}{𝒞}\textcolor[rgb]{1,0,0}{\right)}`$ as follows: Suppose $`\textcolor[rgb]{1,0,0}{\alpha }\textcolor[rgb]{1,0,0}{}\textcolor[rgb]{1,0,0}{A}^{\textcolor[rgb]{1,0,0}{}}`$ is given. We pick a variety $`\textcolor[rgb]{1,0,0}{X}`$ in $`\textcolor[rgb]{1,0,0}{𝒞}`$ and choose an endomorphism $`\textcolor[rgb]{1,0,0}{\beta }`$ of $`\textcolor[rgb]{1,0,0}{X}`$ and an integer $`\textcolor[rgb]{1,0,0}{n}`$ such that $`\textcolor[rgb]{1,0,0}{\alpha }\textcolor[rgb]{1,0,0}{=}\textcolor[rgb]{1,0,0}{\beta }\textcolor[rgb]{1,0,0}{/}\textcolor[rgb]{1,0,0}{n}`$. Then we set $`\textcolor[rgb]{1,0,0}{\mathrm{Prin}}\textcolor[rgb]{1,0,0}{\left(}\textcolor[rgb]{1,0,0}{\alpha }\textcolor[rgb]{1,0,0}{\right)}\textcolor[rgb]{1,0,0}{=}\textcolor[rgb]{1,0,0}{\left[}\textcolor[rgb]{1,0,0}{\mathrm{ker}}\textcolor[rgb]{1,0,0}{\beta }\textcolor[rgb]{1,0,0}{\right]}\textcolor[rgb]{1,0,0}{}\textcolor[rgb]{1,0,0}{\left[}\textcolor[rgb]{1,0,0}{\mathrm{ker}}\textcolor[rgb]{1,0,0}{n}\textcolor[rgb]{1,0,0}{\right]}`$. We leave it to the reader to show that the value $`\textcolor[rgb]{1,0,0}{\left[}\textcolor[rgb]{1,0,0}{\mathrm{ker}}\textcolor[rgb]{1,0,0}{\beta }\textcolor[rgb]{1,0,0}{\right]}\textcolor[rgb]{1,0,0}{}\textcolor[rgb]{1,0,0}{\left[}\textcolor[rgb]{1,0,0}{\mathrm{ker}}\textcolor[rgb]{1,0,0}{n}\textcolor[rgb]{1,0,0}{\right]}`$ does not depend on the choice of $`\textcolor[rgb]{1,0,0}{X}`$, $`\textcolor[rgb]{1,0,0}{\beta }`$, and $`\textcolor[rgb]{1,0,0}{n}`$. 
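To make the definition concrete, here are the simplest values of $`\mathrm{Prin}`$ (our illustration, writing $`X[n]`$ for the kernel of multiplication by $`n`$ on $`X`$):

```latex
% For a positive integer n, write \alpha = n = \beta/1 with \beta = n:
\[
  \operatorname{Prin}(n) = [\ker n] - [\ker 1] = [X[n]] .
\]
% For \alpha = 1/n, write \alpha = \beta/n with \beta = 1:
\[
  \operatorname{Prin}(1/n) = [\ker 1] - [\ker n] = -[X[n]] ,
\]
% so \operatorname{Prin}(n) + \operatorname{Prin}(1/n) = 0, as a homomorphism
% to G(\mathrm{Ker}_{\mathcal{C}}) requires.
```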
Note that Lemma 3.1 states in part that $`\textcolor[rgb]{1,0,0}{R}_\textcolor[rgb]{1,0,0}{0}\textcolor[rgb]{1,0,0}{\left(}\textcolor[rgb]{1,0,0}{A}\textcolor[rgb]{1,0,0}{\right)}\textcolor[rgb]{1,0,0}{=}\textcolor[rgb]{1,0,0}{\mathrm{Nrd}}\textcolor[rgb]{1,0,0}{\left(}\textcolor[rgb]{1,0,0}{A}^{\textcolor[rgb]{1,0,0}{}}\textcolor[rgb]{1,0,0}{\right)}`$. We can use this identity, together with the homomorphism $`\textcolor[rgb]{1,0,0}{\mathrm{Prin}}`$, to define a homomorphism $`\textcolor[rgb]{1,0,0}{\mathrm{\Phi }}`$ from $`\textcolor[rgb]{1,0,0}{R}_\textcolor[rgb]{1,0,0}{0}\textcolor[rgb]{1,0,0}{\left(}\textcolor[rgb]{1,0,0}{A}\textcolor[rgb]{1,0,0}{\right)}`$ to $`\textcolor[rgb]{1,0,0}{G}\textcolor[rgb]{1,0,0}{\left(}\textcolor[rgb]{1,0,0}{\mathrm{Ker}}_\textcolor[rgb]{1,0,0}{𝒞}\textcolor[rgb]{1,0,0}{\right)}`$. Suppose $`\textcolor[rgb]{1,0,0}{a}\textcolor[rgb]{1,0,0}{}\textcolor[rgb]{1,0,0}{R}_\textcolor[rgb]{1,0,0}{0}\textcolor[rgb]{1,0,0}{\left(}\textcolor[rgb]{1,0,0}{A}\textcolor[rgb]{1,0,0}{\right)}`$ is given, and suppose $`\textcolor[rgb]{1,0,0}{\alpha }`$ and $`\textcolor[rgb]{1,0,0}{\alpha }^{\textcolor[rgb]{1,0,0}{}}`$ are elements of $`\textcolor[rgb]{1,0,0}{A}^{\textcolor[rgb]{1,0,0}{}}`$ with $`\textcolor[rgb]{1,0,0}{\mathrm{Nrd}}\textcolor[rgb]{1,0,0}{\left(}\textcolor[rgb]{1,0,0}{\alpha }\textcolor[rgb]{1,0,0}{\right)}\textcolor[rgb]{1,0,0}{=}\textcolor[rgb]{1,0,0}{\mathrm{Nrd}}\textcolor[rgb]{1,0,0}{\left(}\textcolor[rgb]{1,0,0}{\alpha }^{\textcolor[rgb]{1,0,0}{}}\textcolor[rgb]{1,0,0}{\right)}\textcolor[rgb]{1,0,0}{=}\textcolor[rgb]{1,0,0}{a}`$. Then, by an easy consequence of Wang’s Theorem (see , Theorem 1.14, p. 38, and the comments on p. 
39), we see that $`\textcolor[rgb]{1,0,0}{\alpha }^{\textcolor[rgb]{1,0,0}{}}\textcolor[rgb]{1,0,0}{\alpha }^\textcolor[rgb]{1,0,0}{}\textcolor[rgb]{1,0,0}{1}`$ lies in the commutator subgroup of $`\textcolor[rgb]{1,0,0}{A}^{\textcolor[rgb]{1,0,0}{}}`$, so that $`\textcolor[rgb]{1,0,0}{\mathrm{Prin}}\textcolor[rgb]{1,0,0}{\left(}\textcolor[rgb]{1,0,0}{\alpha }^{\textcolor[rgb]{1,0,0}{}}\textcolor[rgb]{1,0,0}{\alpha }^\textcolor[rgb]{1,0,0}{}\textcolor[rgb]{1,0,0}{1}\textcolor[rgb]{1,0,0}{\right)}\textcolor[rgb]{1,0,0}{=}\textcolor[rgb]{1,0,0}{0}`$. Thus we may define $`\textcolor[rgb]{1,0,0}{\mathrm{\Phi }}`$ by taking $`\textcolor[rgb]{1,0,0}{\mathrm{\Phi }}\textcolor[rgb]{1,0,0}{\left(}\textcolor[rgb]{1,0,0}{a}\textcolor[rgb]{1,0,0}{\right)}\textcolor[rgb]{1,0,0}{=}\textcolor[rgb]{1,0,0}{\mathrm{Prin}}\textcolor[rgb]{1,0,0}{\left(}\textcolor[rgb]{1,0,0}{\alpha }\textcolor[rgb]{1,0,0}{\right)}`$ for any choice of $`\textcolor[rgb]{1,0,0}{\alpha }\textcolor[rgb]{1,0,0}{}\textcolor[rgb]{1,0,0}{A}^{\textcolor[rgb]{1,0,0}{}}`$ such that $`\textcolor[rgb]{1,0,0}{\mathrm{Nrd}}\textcolor[rgb]{1,0,0}{\left(}\textcolor[rgb]{1,0,0}{\alpha }\textcolor[rgb]{1,0,0}{\right)}\textcolor[rgb]{1,0,0}{=}\textcolor[rgb]{1,0,0}{a}`$. 
Finally, we let $`\textcolor[rgb]{1,0,0}{}_\textcolor[rgb]{1,0,0}{1}\textcolor[rgb]{1,0,0}{\left(}\textcolor[rgb]{1,0,0}{𝒞}\textcolor[rgb]{1,0,0}{\right)}`$ be the quotient of $`\textcolor[rgb]{1,0,0}{Z}\textcolor[rgb]{1,0,0}{\left(}\textcolor[rgb]{1,0,0}{𝒞}\textcolor[rgb]{1,0,0}{\right)}`$ by the subgroup generated by $`\textcolor[rgb]{1,0,0}{B}\textcolor[rgb]{1,0,0}{(}\textcolor[rgb]{1,0,0}{𝒞}\textcolor[rgb]{1,0,0}{)}`$ and $`\textcolor[rgb]{1,0,0}{\mathrm{\Phi }}\textcolor[rgb]{1,0,0}{(}\textcolor[rgb]{1,0,0}{R}_\textcolor[rgb]{1,0,0}{1}\textcolor[rgb]{1,0,0}{(}\textcolor[rgb]{1,0,0}{A}\textcolor[rgb]{1,0,0}{)}\textcolor[rgb]{1,0,0}{)}`$, and we let $`\textcolor[rgb]{1,0,0}{}_\textcolor[rgb]{1,0,0}{2}\textcolor[rgb]{1,0,0}{(}\textcolor[rgb]{1,0,0}{𝒞}\textcolor[rgb]{1,0,0}{)}`$ be the quotient of $`\textcolor[rgb]{1,0,0}{Z}\textcolor[rgb]{1,0,0}{(}\textcolor[rgb]{1,0,0}{𝒞}\textcolor[rgb]{1,0,0}{)}`$ by the subgroup generated by $`\textcolor[rgb]{1,0,0}{B}\textcolor[rgb]{1,0,0}{\left(}\textcolor[rgb]{1,0,0}{𝒞}\textcolor[rgb]{1,0,0}{\right)}`$ and $`\textcolor[rgb]{1,0,0}{\mathrm{\Phi }}\textcolor[rgb]{1,0,0}{\left(}\textcolor[rgb]{1,0,0}{R}_\textcolor[rgb]{1,0,0}{2}\textcolor[rgb]{1,0,0}{\left(}\textcolor[rgb]{1,0,0}{A}\textcolor[rgb]{1,0,0}{\right)}\textcolor[rgb]{1,0,0}{\right)}`$. 
Since $`\textcolor[rgb]{1,0,0}{2}\textcolor[rgb]{1,0,0}{Z}\textcolor[rgb]{1,0,0}{\left(}\textcolor[rgb]{1,0,0}{𝒞}\textcolor[rgb]{1,0,0}{\right)}`$ is contained in $`\textcolor[rgb]{1,0,0}{B}\textcolor[rgb]{1,0,0}{\left(}\textcolor[rgb]{1,0,0}{𝒞}\textcolor[rgb]{1,0,0}{\right)}`$, the groups $`\textcolor[rgb]{1,0,0}{}_\textcolor[rgb]{1,0,0}{1}\textcolor[rgb]{1,0,0}{\left(}\textcolor[rgb]{1,0,0}{𝒞}\textcolor[rgb]{1,0,0}{\right)}`$ and $`\textcolor[rgb]{1,0,0}{}_\textcolor[rgb]{1,0,0}{2}\textcolor[rgb]{1,0,0}{\left(}\textcolor[rgb]{1,0,0}{𝒞}\textcolor[rgb]{1,0,0}{\right)}`$ are $`\textcolor[rgb]{1,0,0}{2}`$-torsion, and since $`\textcolor[rgb]{1,0,0}{R}_\textcolor[rgb]{1,0,0}{2}\textcolor[rgb]{1,0,0}{\left(}\textcolor[rgb]{1,0,0}{A}\textcolor[rgb]{1,0,0}{\right)}`$ is a subgroup of finite index in $`\textcolor[rgb]{1,0,0}{R}_\textcolor[rgb]{1,0,0}{1}\textcolor[rgb]{1,0,0}{\left(}\textcolor[rgb]{1,0,0}{A}\textcolor[rgb]{1,0,0}{\right)}`$, the natural surjection $`\textcolor[rgb]{1,0,0}{}_\textcolor[rgb]{1,0,0}{2}\textcolor[rgb]{1,0,0}{\left(}\textcolor[rgb]{1,0,0}{𝒞}\textcolor[rgb]{1,0,0}{\right)}\textcolor[rgb]{1,0,0}{}\textcolor[rgb]{1,0,0}{}_\textcolor[rgb]{1,0,0}{1}\textcolor[rgb]{1,0,0}{\left(}\textcolor[rgb]{1,0,0}{𝒞}\textcolor[rgb]{1,0,0}{\right)}`$ has a finite kernel.

###### Theorem 3.2.

*(Margin note: This result is incorrect. With the definition of $`\textcolor[rgb]{1,0,0}{S}_\textcolor[rgb]{1,0,0}{𝒞}`$ given below, the “only if” implication holds but the “if” implication does not. The error is indicated below.)*
There is a finite subset $`\textcolor[rgb]{1,0,0}{S}_\textcolor[rgb]{1,0,0}{𝒞}`$ of the group $`\textcolor[rgb]{1,0,0}{}_\textcolor[rgb]{1,0,0}{2}\textcolor[rgb]{1,0,0}{(}\textcolor[rgb]{1,0,0}{𝒞}\textcolor[rgb]{1,0,0}{)}`$ such that an element of $`\textcolor[rgb]{1,0,0}{G}\textcolor[rgb]{1,0,0}{(}\textcolor[rgb]{1,0,0}{\mathrm{Ker}}_\textcolor[rgb]{1,0,0}{𝒞}\textcolor[rgb]{1,0,0}{)}`$ is attainable if and only if it is an effective element of $`\textcolor[rgb]{1,0,0}{Z}\textcolor[rgb]{1,0,0}{(}\textcolor[rgb]{1,0,0}{𝒞}\textcolor[rgb]{1,0,0}{)}`$ whose image in $`\textcolor[rgb]{1,0,0}{}_\textcolor[rgb]{1,0,0}{2}\textcolor[rgb]{1,0,0}{(}\textcolor[rgb]{1,0,0}{𝒞}\textcolor[rgb]{1,0,0}{)}`$ lies in $`\textcolor[rgb]{1,0,0}{S}_\textcolor[rgb]{1,0,0}{𝒞}`$. Furthermore, the image of $`\textcolor[rgb]{1,0,0}{S}_\textcolor[rgb]{1,0,0}{𝒞}`$ in $`\textcolor[rgb]{1,0,0}{}_\textcolor[rgb]{1,0,0}{1}\textcolor[rgb]{1,0,0}{(}\textcolor[rgb]{1,0,0}{𝒞}\textcolor[rgb]{1,0,0}{)}`$ consists of a single element $`\textcolor[rgb]{1,0,0}{I}_\textcolor[rgb]{1,0,0}{𝒞}`$. This theorem must surely seem quite removed from the down-to-earth question of whether there exists a principally-polarized variety in a given isogeny class. However, we will show in Section 3.3 that our proof of Theorem 1.1 can be viewed as an argument showing that there is a homomorphism $`\textcolor[rgb]{1,0,0}{}_\textcolor[rgb]{1,0,0}{1}\textcolor[rgb]{1,0,0}{\left(}\textcolor[rgb]{1,0,0}{𝒞}\textcolor[rgb]{1,0,0}{\right)}\textcolor[rgb]{1,0,0}{}\textcolor[rgb]{1,0,0}{𝐙}\textcolor[rgb]{1,0,0}{/}\textcolor[rgb]{1,0,0}{2}\textcolor[rgb]{1,0,0}{𝐙}`$ such that the image of $`\textcolor[rgb]{1,0,0}{I}_\textcolor[rgb]{1,0,0}{𝒞}`$ is nonzero. Also, in certain cases the abstract objects in Theorem 3.2 can be computed. 
For example, if $`\textcolor[rgb]{1,0,0}{𝒞}`$ is an isogeny class over a finite field then the group $`\textcolor[rgb]{1,0,0}{}_\textcolor[rgb]{1,0,0}{1}\textcolor[rgb]{1,0,0}{\left(}\textcolor[rgb]{1,0,0}{𝒞}\textcolor[rgb]{1,0,0}{\right)}`$ (which in this case is equal to $`\textcolor[rgb]{1,0,0}{}_\textcolor[rgb]{1,0,0}{2}\textcolor[rgb]{1,0,0}{\left(}\textcolor[rgb]{1,0,0}{𝒞}\textcolor[rgb]{1,0,0}{\right)}`$) can be computed, and one can even compute a subgroup of order at most $`\textcolor[rgb]{1,0,0}{2}`$ that contains $`\textcolor[rgb]{1,0,0}{I}_\textcolor[rgb]{1,0,0}{𝒞}`$ (see ). If in addition the varieties in $`\textcolor[rgb]{1,0,0}{𝒞}`$ are ordinary, the element $`\textcolor[rgb]{1,0,0}{I}_\textcolor[rgb]{1,0,0}{𝒞}`$ itself can be calculated (see ).

### §3.2. Proofs of Lemma 3.1 and Theorem 3.2

In this section we prove the results stated in the preceding section.

###### Proof of Lemma 3.1.

Clearly it will suffice to prove the lemma in the case where $`\textcolor[rgb]{1,0,0}{A}\textcolor[rgb]{1,0,0}{=}\textcolor[rgb]{1,0,0}{M}_\textcolor[rgb]{1,0,0}{n}\textcolor[rgb]{1,0,0}{\left(}\textcolor[rgb]{1,0,0}{D}\textcolor[rgb]{1,0,0}{\right)}`$ for some integer $`\textcolor[rgb]{1,0,0}{n}`$ and division algebra $`\textcolor[rgb]{1,0,0}{D}`$ of one of the four types listed in the preceding section. For each of these types of algebras, the Hasse-Schilling-Maass theorem (, Theorem 33.15, p. 289) shows that $`\textcolor[rgb]{1,0,0}{R}_\textcolor[rgb]{1,0,0}{0}\textcolor[rgb]{1,0,0}{\left(}\textcolor[rgb]{1,0,0}{A}\textcolor[rgb]{1,0,0}{\right)}\textcolor[rgb]{1,0,0}{=}\textcolor[rgb]{1,0,0}{\mathrm{Nrd}}\textcolor[rgb]{1,0,0}{\left(}\textcolor[rgb]{1,0,0}{A}^{\textcolor[rgb]{1,0,0}{}}\textcolor[rgb]{1,0,0}{\right)}`$, so we need only prove the second statement of the lemma.
If $`\textcolor[rgb]{1,0,0}{D}\textcolor[rgb]{1,0,0}{/}\textcolor[rgb]{1,0,0}{K}`$ is of Type III then the statement we are to prove is Theorem 4.7 of , while if $`\textcolor[rgb]{1,0,0}{D}\textcolor[rgb]{1,0,0}{/}\textcolor[rgb]{1,0,0}{K}`$ is of Type IV then the statement we are to prove is Theorem 4.1 of . Suppose $`\textcolor[rgb]{1,0,0}{D}\textcolor[rgb]{1,0,0}{=}\textcolor[rgb]{1,0,0}{K}`$ is of Type I. We must show that $`\textcolor[rgb]{1,0,0}{\mathrm{Nrd}}\textcolor[rgb]{1,0,0}{\left(}\textcolor[rgb]{1,0,0}{A}_{\textcolor[rgb]{1,0,0}{}}\textcolor[rgb]{1,0,0}{\right)}`$ is the set of totally positive elements of $`\textcolor[rgb]{1,0,0}{K}`$. It is clear that $`\textcolor[rgb]{1,0,0}{\mathrm{Nrd}}\textcolor[rgb]{1,0,0}{\left(}\textcolor[rgb]{1,0,0}{\alpha }\textcolor[rgb]{1,0,0}{\right)}`$ is totally positive if $`\textcolor[rgb]{1,0,0}{\alpha }\textcolor[rgb]{1,0,0}{}\textcolor[rgb]{1,0,0}{A}_{\textcolor[rgb]{1,0,0}{}}`$, so all we must show is that every totally positive element of $`\textcolor[rgb]{1,0,0}{K}`$ is the reduced norm of some element of $`\textcolor[rgb]{1,0,0}{A}_{\textcolor[rgb]{1,0,0}{}}`$. Let $`\textcolor[rgb]{1,0,0}{x}\textcolor[rgb]{1,0,0}{}\textcolor[rgb]{1,0,0}{x}^{\textcolor[rgb]{1,0,0}{}}`$ be the transpose on $`\textcolor[rgb]{1,0,0}{M}_\textcolor[rgb]{1,0,0}{d}\textcolor[rgb]{1,0,0}{\left(}\textcolor[rgb]{1,0,0}{K}\textcolor[rgb]{1,0,0}{\right)}`$. Then by Theorem 8.7.4 (pp. 301–302) and Theorem 7.6.3 (p. 
259) of , there is an isomorphism $`\textcolor[rgb]{1,0,0}{i}\textcolor[rgb]{1,0,0}{:}\textcolor[rgb]{1,0,0}{A}\textcolor[rgb]{1,0,0}{}\textcolor[rgb]{1,0,0}{M}_\textcolor[rgb]{1,0,0}{d}\textcolor[rgb]{1,0,0}{\left(}\textcolor[rgb]{1,0,0}{K}\textcolor[rgb]{1,0,0}{\right)}`$ and a diagonal matrix $`\textcolor[rgb]{1,0,0}{\alpha }\textcolor[rgb]{1,0,0}{}\textcolor[rgb]{1,0,0}{M}_\textcolor[rgb]{1,0,0}{d}\textcolor[rgb]{1,0,0}{\left(}\textcolor[rgb]{1,0,0}{K}\textcolor[rgb]{1,0,0}{\right)}`$ such that the isomorphism $`\textcolor[rgb]{1,0,0}{i}`$ takes the involution $`\textcolor[rgb]{1,0,0}{x}\textcolor[rgb]{1,0,0}{}\textcolor[rgb]{1,0,0}{x}^{\textcolor[rgb]{1,0,0}{}}`$ to the involution $`\textcolor[rgb]{1,0,0}{\eta }`$ of $`\textcolor[rgb]{1,0,0}{M}_\textcolor[rgb]{1,0,0}{d}\textcolor[rgb]{1,0,0}{\left(}\textcolor[rgb]{1,0,0}{K}\textcolor[rgb]{1,0,0}{\right)}`$ defined by $`\textcolor[rgb]{1,0,0}{\eta }\textcolor[rgb]{1,0,0}{\left(}\textcolor[rgb]{1,0,0}{x}\textcolor[rgb]{1,0,0}{\right)}\textcolor[rgb]{1,0,0}{=}\textcolor[rgb]{1,0,0}{\alpha }\textcolor[rgb]{1,0,0}{x}^{\textcolor[rgb]{1,0,0}{}}\textcolor[rgb]{1,0,0}{\alpha }^\textcolor[rgb]{1,0,0}{}\textcolor[rgb]{1,0,0}{1}`$. Now suppose we are given a totally positive $`\textcolor[rgb]{1,0,0}{b}`$ in $`\textcolor[rgb]{1,0,0}{K}`$. Let $`\textcolor[rgb]{1,0,0}{\beta }\textcolor[rgb]{1,0,0}{}\textcolor[rgb]{1,0,0}{M}_\textcolor[rgb]{1,0,0}{d}\textcolor[rgb]{1,0,0}{\left(}\textcolor[rgb]{1,0,0}{K}\textcolor[rgb]{1,0,0}{\right)}`$ be the diagonal matrix with $`\textcolor[rgb]{1,0,0}{b}`$ in the upper left corner and $`\textcolor[rgb]{1,0,0}{1}`$’s elsewhere. Then $`\textcolor[rgb]{1,0,0}{\beta }`$ is totally positive and fixed by $`\textcolor[rgb]{1,0,0}{\eta }`$, and its determinant is $`\textcolor[rgb]{1,0,0}{b}`$. 
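For instance, in the smallest nontrivial case $`d=2`$ the matrix used here is (our spelled-out illustration):

```latex
% For d = 2 and b totally positive in K:
\[
  \beta = \begin{pmatrix} b & 0 \\ 0 & 1 \end{pmatrix}, \qquad
  \eta(\beta) = \alpha\,\beta^{t}\,\alpha^{-1} = \beta ,
\]
% since diagonal matrices commute with the diagonal matrix \alpha; the
% eigenvalues b and 1 of \beta are totally positive, and
% \det\beta = \operatorname{Nrd}(\beta) = b.
```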
Therefore $`\textcolor[rgb]{1,0,0}{i}^\textcolor[rgb]{1,0,0}{}\textcolor[rgb]{1,0,0}{1}\textcolor[rgb]{1,0,0}{\left(}\textcolor[rgb]{1,0,0}{\beta }\textcolor[rgb]{1,0,0}{\right)}`$ is an element of $`\textcolor[rgb]{1,0,0}{A}_{\textcolor[rgb]{1,0,0}{}}`$ with reduced norm equal to $`\textcolor[rgb]{1,0,0}{b}`$. Suppose $`\textcolor[rgb]{1,0,0}{D}\textcolor[rgb]{1,0,0}{/}\textcolor[rgb]{1,0,0}{K}`$ is of Type II. It is clear that the reduced norm of an element of $`\textcolor[rgb]{1,0,0}{A}_{\textcolor[rgb]{1,0,0}{}}`$ is totally positive, so we have $`\textcolor[rgb]{1,0,0}{\mathrm{Nrd}}\textcolor[rgb]{1,0,0}{\left(}\textcolor[rgb]{1,0,0}{A}_{\textcolor[rgb]{1,0,0}{}}\textcolor[rgb]{1,0,0}{\right)}\textcolor[rgb]{1,0,0}{}\textcolor[rgb]{1,0,0}{R}_\textcolor[rgb]{1,0,0}{1}\textcolor[rgb]{1,0,0}{\left(}\textcolor[rgb]{1,0,0}{A}\textcolor[rgb]{1,0,0}{\right)}`$, and we must prove that $`\textcolor[rgb]{1,0,0}{R}_\textcolor[rgb]{1,0,0}{2}\textcolor[rgb]{1,0,0}{\left(}\textcolor[rgb]{1,0,0}{A}\textcolor[rgb]{1,0,0}{\right)}\textcolor[rgb]{1,0,0}{}\textcolor[rgb]{1,0,0}{\mathrm{Nrd}}\textcolor[rgb]{1,0,0}{\left(}\textcolor[rgb]{1,0,0}{A}_{\textcolor[rgb]{1,0,0}{}}\textcolor[rgb]{1,0,0}{\right)}`$. Let $`\textcolor[rgb]{1,0,0}{x}\textcolor[rgb]{1,0,0}{}\textcolor[rgb]{1,0,0}{x}^{\textcolor[rgb]{1,0,0}{}}`$ be the conjugate transpose involution on $`\textcolor[rgb]{1,0,0}{M}_\textcolor[rgb]{1,0,0}{d}\textcolor[rgb]{1,0,0}{\left(}\textcolor[rgb]{1,0,0}{D}\textcolor[rgb]{1,0,0}{\right)}`$, where “conjugation” on $`\textcolor[rgb]{1,0,0}{D}`$ is the standard involution $`\textcolor[rgb]{1,0,0}{x}\textcolor[rgb]{1,0,0}{}\textcolor[rgb]{1,0,0}{\mathrm{Trd}}_{\textcolor[rgb]{1,0,0}{D}\textcolor[rgb]{1,0,0}{/}\textcolor[rgb]{1,0,0}{K}}\textcolor[rgb]{1,0,0}{x}\textcolor[rgb]{1,0,0}{}\textcolor[rgb]{1,0,0}{x}`$. Then again by Theorem 8.7.4 (pp. 301–302) and Theorem 7.6.3 (p.
259) of , there is an isomorphism $`\textcolor[rgb]{1,0,0}{i}\textcolor[rgb]{1,0,0}{:}\textcolor[rgb]{1,0,0}{A}\textcolor[rgb]{1,0,0}{}\textcolor[rgb]{1,0,0}{M}_\textcolor[rgb]{1,0,0}{d}\textcolor[rgb]{1,0,0}{\left(}\textcolor[rgb]{1,0,0}{D}\textcolor[rgb]{1,0,0}{\right)}`$ and a diagonal matrix $`\textcolor[rgb]{1,0,0}{\alpha }\textcolor[rgb]{1,0,0}{}\textcolor[rgb]{1,0,0}{M}_\textcolor[rgb]{1,0,0}{d}\textcolor[rgb]{1,0,0}{\left(}\textcolor[rgb]{1,0,0}{D}\textcolor[rgb]{1,0,0}{\right)}`$ with $`\textcolor[rgb]{1,0,0}{\alpha }^{\textcolor[rgb]{1,0,0}{}}\textcolor[rgb]{1,0,0}{=}\textcolor[rgb]{1,0,0}{}\textcolor[rgb]{1,0,0}{\alpha }`$ such that the isomorphism $`\textcolor[rgb]{1,0,0}{i}`$ takes the involution $`\textcolor[rgb]{1,0,0}{x}\textcolor[rgb]{1,0,0}{}\textcolor[rgb]{1,0,0}{x}^{\textcolor[rgb]{1,0,0}{}}`$ to the involution $`\textcolor[rgb]{1,0,0}{\eta }`$ of $`\textcolor[rgb]{1,0,0}{M}_\textcolor[rgb]{1,0,0}{d}\textcolor[rgb]{1,0,0}{\left(}\textcolor[rgb]{1,0,0}{D}\textcolor[rgb]{1,0,0}{\right)}`$ defined by $`\textcolor[rgb]{1,0,0}{\eta }\textcolor[rgb]{1,0,0}{\left(}\textcolor[rgb]{1,0,0}{x}\textcolor[rgb]{1,0,0}{\right)}\textcolor[rgb]{1,0,0}{=}\textcolor[rgb]{1,0,0}{\alpha }\textcolor[rgb]{1,0,0}{x}^{\textcolor[rgb]{1,0,0}{}}\textcolor[rgb]{1,0,0}{\alpha }^\textcolor[rgb]{1,0,0}{}\textcolor[rgb]{1,0,0}{1}`$; furthermore, as is argued on pp. 194–195 of , the entries of the diagonal matrix $`\textcolor[rgb]{1,0,0}{\alpha }^\textcolor[rgb]{1,0,0}{2}`$ are totally negative elements of $`\textcolor[rgb]{1,0,0}{K}`$. 
Let $`\alpha _1,\dots ,\alpha _n`$ denote the diagonal entries of $`\alpha `$ and let $`c_1=\alpha _1^2`$, so that $`\alpha _1`$ satisfies the polynomial $`x^2-c_1`$. Since the field $`K\left(\alpha _1\right)`$ splits the quaternion algebra $`D`$, the element $`c_1`$ of $`K`$ must be a nonsquare in $`K_𝔭`$ for every prime $`𝔭`$ of $`K`$ for which $`\mathrm{inv}_𝔭D\ne 0`$. Suppose $`b`$ is an element of $`R_2\left(A\right)`$.
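The local condition used above — that an element of $`K`$ be a nonsquare in the completion $`K_𝔭`$ — is easy to test numerically in the simplest case $`K=𝐐`$ with $`𝔭`$ an odd prime: a $`p`$-adic unit is a square in $`𝐐_p`$ exactly when it is a quadratic residue mod $`p`$. The sketch below assumes this special case; the function name is ours.

```python
# Minimal sketch for K = Q: decide whether an integer u prime to an odd
# prime p is a square in the local field Q_p.  By Hensel's lemma this
# reduces to a quadratic-residue test mod p (Euler's criterion).
def is_square_in_Qp(u: int, p: int) -> bool:
    assert p % 2 == 1 and u % p != 0, "u must be a unit at an odd prime p"
    return pow(u, (p - 1) // 2, p) == 1

# For example, 2 is a square in Q_7 (3^2 = 2 mod 7) but 3 is not:
print(is_square_in_Qp(2, 7), is_square_in_Qp(3, 7))  # True False
```

A candidate $`c_1`$ can thus be screened against each finite prime where the quaternion algebra ramifies, which is the role the nonsquare condition plays here.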
We claim that there exists an element $`\beta `$ of $`D`$ such that (1) $`\beta +\beta ^{}=0`$ and (2) $`\beta \beta ^{}=-bc_1`$ and (3) $`\beta \alpha _1^{-1}`$ is totally positive. To see that such a $`\beta `$ exists, we first note that $`bc_1`$ is a nonsquare in $`K_𝔭`$ for every prime $`𝔭`$ of $`K`$ for which $`\mathrm{inv}_𝔭D\ne 0`$, so the field $`K\left(\sqrt{bc_1}\right)`$ splits $`D`$. This shows that there is an element $`\beta _0`$ of $`D`$ such that $`\beta _0^2-bc_1=0`$, so that $`\beta _0`$ satisfies (1) and (2).
If we choose a $`K`$-basis for the trace-$`0`$ elements of $`D`$, then the set of $`\beta `$ satisfying (1) and (2) is a level set of a homogeneous ternary quadratic form $`Q`$ that is indefinite at every infinite prime of $`K`$, and we have just shown that there are $`K`$-points in this level set. Condition (3) is simply a linear inequality at each of the infinite primes of $`K`$, and the inequality can be satisfied locally at each infinite prime by points on the level set because the form $`Q`$ is indefinite, so by weak approximation we see that there do exist $`\beta `$’s satisfying (1), (2), and (3). Choose such a $`\beta \in D`$, and let $`\gamma `$ be the diagonal matrix with $`\beta \alpha _1^{-1}`$ in the upper left corner and $`1`$’s elsewhere on the diagonal.
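The reduced-norm bookkeeping behind this choice of $`\gamma `$ can be sanity-checked numerically. The following sketch is our own illustration, taking $`K=𝐐`$ and $`D`$ the Hamilton quaternions $`(-1,-1)`$ over $`𝐐`$, with trace-zero elements chosen arbitrarily rather than produced by the weak-approximation argument.

```python
# Quaternions over Q as 4-tuples (t, x, y, z) = t + xi + yj + zk, with
# i^2 = j^2 = k^2 = -1.  The standard involution is conj(q) = Trd(q) - q,
# and the reduced norm is Nrd(q) = q * conj(q), which is a scalar.
def qmul(p, q):
    a, b, c, d = p
    e, f, g, h = q
    return (a*e - b*f - c*g - d*h,
            a*f + b*e + c*h - d*g,
            a*g - b*h + c*e + d*f,
            a*h + b*g - c*f + d*e)

def conj(q):
    a, b, c, d = q
    return (a, -b, -c, -d)

def nrd(q):
    return qmul(q, conj(q))[0]

beta       = (0, 2, 3, 6)    # trace zero: beta + conj(beta) = 0
alpha1     = (0, 1, 0, 0)    # trace zero, alpha1^2 = -1 (totally negative)
alpha1_inv = (0, -1, 0, 0)   # alpha1^{-1} = conj(alpha1) / Nrd(alpha1)

entry = qmul(beta, alpha1_inv)             # the upper-left entry of gamma
print(nrd(beta), nrd(alpha1), nrd(entry))  # 49 1 49
```

Multiplicativity of the reduced norm gives $`\mathrm{Nrd}(\beta \alpha _1^{-1})=\mathrm{Nrd}(\beta )/\mathrm{Nrd}(\alpha _1)`$, which is exactly the identity invoked in computing the reduced norm of $`\gamma `$.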
A computation shows that $`\gamma `$ is fixed by the involution $`\eta `$, that the reduced norm of $`\gamma `$ is $`\mathrm{Nrd}_{D/K}\left(\beta \right)/\mathrm{Nrd}_{D/K}\left(\alpha _1\right)=b`$, and that $`\gamma `$ is totally positive. Thus $`i^{-1}\left(\gamma \right)`$ is an element of $`A_{}`$ with reduced norm equal to $`b`$. ∎

###### Proof of Theorem 3.2.

Suppose $`\lambda :X\to \widehat{X}`$ and $`\mu :Y\to \widehat{Y}`$ are polarizations of varieties in $`𝒞`$.
First note that the ranks of $`\mathrm{ker}\lambda `$ and $`\mathrm{ker}\mu `$ are squares, and that there are non-degenerate alternating pairings from these groups to the multiplicative group (see §23 of ), so that $`\left[\mathrm{ker}\lambda \right]`$ and $`\left[\mathrm{ker}\mu \right]`$ lie in $`Z\left(𝒞\right)`$. Let $`\phi :X\to Y`$ be an isogeny, and let $`\nu `$ be the polarization $`\widehat{\phi }\mu \phi `$ of $`X`$, where $`\widehat{\phi }:\widehat{Y}\to \widehat{X}`$ is the dual isogeny of $`\phi `$. Let $`n`$ be any positive integer such that $`\mathrm{ker}\lambda \subseteq \mathrm{ker}\left(n\nu \right)`$ as group schemes. Then there is an isogeny $`\alpha :\widehat{X}\to \widehat{X}`$ such that $`n\nu =\alpha \lambda `$.
A polarization is equal to its own dual isogeny, so we can equate the right-hand side of the last equality with its dual to get $`n\nu =\lambda \widehat{\alpha }`$. Using Application III (pp. 208–210) of §21 of (see especially the final paragraph) and the fact that $`n\nu `$ and $`\lambda `$ are polarizations, we find that $`\widehat{\alpha }\in \mathrm{End}X`$ is fixed by the Rosati involution associated to $`\lambda `$ and is totally positive. The equality $`n\widehat{\phi }\mu \phi =\lambda \widehat{\alpha }`$ translates into the equality $$\left[\mathrm{ker}n\right]+\left[\mathrm{ker}\widehat{\phi }\right]+\left[\mathrm{ker}\mu \right]+\left[\mathrm{ker}\phi \right]=\left[\mathrm{ker}\lambda \right]+\left[\mathrm{ker}\widehat{\alpha }\right]$$ in $`G\left(\mathrm{Ker}_𝒞\right)`$. Now, $`\left[\mathrm{ker}\widehat{\phi }\right]+\left[\mathrm{ker}\phi \right]`$ is an element of $`B\left(𝒞\right)`$, and $`\widehat{\alpha }`$ and $`n`$ are totally positive elements of $`\mathrm{End}^0𝒞`$ that are fixed by the Rosati involution, so $`\left[\mathrm{ker}n\right]`$ and $`\left[\mathrm{ker}\widehat{\alpha }\right]`$ lie in $`\mathrm{\Phi }\left(R_1\left(\mathrm{End}^0𝒞\right)\right)`$. It follows that the images of $`\left[\mathrm{ker}\mu \right]`$ and $`\left[\mathrm{ker}\lambda \right]`$ in $`_1\left(𝒞\right)`$ are equal.
Thus, we may define $`I_𝒞`$ to be the image in $`_1\left(𝒞\right)`$ of the kernel of any polarization of any variety in $`𝒞`$. Let $`S_𝒞`$ be the image in $`_2\left(𝒞\right)`$ of the subset $$\{\left[\mathrm{ker}\lambda \right]:\lambda \text{ is a polarization of some }X\text{ in }𝒞\}$$ of $`G\left(\mathrm{Ker}_𝒞\right)`$. margin: Note that this choice of $`S_𝒞`$ is the only one that will make the “only if” implication of Theorem 3.2 hold. We see that $`S_𝒞`$ is a subset of the preimage of $`I_𝒞`$ under the natural reduction map $`_2\left(𝒞\right)\to _1\left(𝒞\right)`$. Since this reduction map has a finite kernel, the set $`S_𝒞`$ is finite.
To complete the proof of Theorem 3.2, we must show that if $`P`$ is an effective element of $`G\left(\mathrm{Ker}_𝒞\right)`$ whose image in $`_2\left(𝒞\right)`$ lies in $`S_𝒞`$, then $`P`$ is attainable. So suppose $`P`$ is an effective element of $`G\left(\mathrm{Ker}_𝒞\right)`$ whose image in $`_2\left(𝒞\right)`$ is equal to the image of $`\left[\mathrm{ker}\lambda \right]`$ for a polarization $`\lambda :X\to \widehat{X}`$ of a variety in $`𝒞`$.
Then there is an element $`Q`$ of $`G\left(\mathrm{Ker}_𝒞\right)`$ and an element $`a`$ of $`R_2\left(\mathrm{End}^0𝒞\right)`$ such that $`P+Q+\overline{Q}=\left[\mathrm{ker}\lambda \right]+\mathrm{\Phi }\left(a\right)`$ in $`G\left(\mathrm{Ker}_𝒞\right)`$. Lemma 3.1 shows that there is an $`\alpha \in \mathrm{End}^0𝒞`$ that is totally positive and fixed by the Rosati involution associated to $`\lambda `$ such that $`\mathrm{Nrd}\alpha =a`$. Choose an integer $`n`$ so that $`n^2\alpha `$ is an actual endomorphism of $`X`$ and such that $`Q+\left[\mathrm{ker}n\right]`$ is effective.
Replacing $`\alpha `$ with $`n^2\alpha `$ and $`Q`$ with $`Q+\left[\mathrm{ker}n\right]`$, we see that we have $`P+Q+\overline{Q}=\left[\mathrm{ker}\lambda \right]+\left[\mathrm{ker}\alpha \right]`$. Since $`\alpha `$ is fixed by the Rosati involution associated to $`\lambda `$ and is totally positive, the composite map $`\nu =\lambda \alpha `$ is also a polarization of $`X`$, and we have $`P+Q+\overline{Q}=\left[\mathrm{ker}\nu \right]`$. Let $`G=\mathrm{ker}\nu `$ and let $`e:G\times G\to 𝐆_m`$ be the non-degenerate alternating pairing on $`G`$ whose existence is shown in §23 of . Let $`H`$ be a simple element of $`\mathrm{Ker}_𝒞`$ that occurs in $`Q`$. Proposition 5.2 of shows that there is an embedding of $`H`$ into $`G`$ such that the pairing $`e`$ restricted to $`H\times H`$ is the trivial pairing. margin: Here is the error in the proof. The cited proposition assumes the base field is finite, but Theorem 3.2 allows for arbitrary base fields. The highlighted statement is false, for example, over $`𝐐`$. Let $`\phi `$ be the natural isogeny from $`X`$ to $`Y=X/H`$. Then the Corollary to Theorem 2 (p. 231) of §23 of shows that there is a polarization $`\nu ^{}`$ of $`Y`$ such that $`\nu =\widehat{\phi }\nu ^{}\phi `$.
In $`G\left(\mathrm{Ker}_𝒞\right)`$ this gives us the equality $`\left[\mathrm{ker}\nu \right]=\left[H\right]+\left[\mathrm{ker}\nu ^{}\right]+\overline{\left[H\right]}.`$ If we replace $`Q`$ by $`Q-\left[H\right]`$ and $`\nu `$ by $`\nu ^{}`$, we will again have the equality $`P+Q+\overline{Q}=\left[\mathrm{ker}\nu \right]`$, but we will have decreased the number of simple group schemes that occur in $`Q`$. By applying this argument repeatedly, we can finally obtain the equality $`P=\left[\mathrm{ker}\nu \right]`$ for a polarization $`\nu `$ of a variety in $`𝒞`$. This shows that $`P`$ is attainable. ∎

### §3.3. Theorem 1.1 revisited

In this section we show how our proof of Theorem 1.1 can be understood in terms of Theorem 3.2. Let $`k`$ be a $`p`$-admissible field, let $`E`$ be a $`p`$-isolated elliptic curve over $`k`$, let $`\mathrm{}`$ be a degree-$`p`$ Galois extension of $`k`$, let $`X`$ be the reduced restriction of scalars of $`E`$ from $`\mathrm{}`$ to $`k`$, and let $`𝒞`$ be the isogeny class of $`X`$. Then $`A=\mathrm{End}^0𝒞`$ is the cyclotomic field $`K=𝐐\left(\zeta _p\right)`$, so $`R_0\left(A\right)`$ is $`K^{}`$ and $`R_1\left(A\right)`$ and $`R_2\left(A\right)`$ are both equal to the multiplicative group of totally positive elements of the maximal real subfield $`K^+`$ of $`K`$.
Note that the $`E\left[p\right]`$-rank defines a homomorphism $`Z\left(𝒞\right)/B\left(𝒞\right)\to 𝐙/2𝐙`$. Lemma 2.5 shows that the degree of an element of $`R_1\left(A\right)`$ is equal to the fourth power of its norm from $`K^+`$ to $`𝐐`$. Since Lemma 2.8 shows that $`E\left[p\right]`$ is the only simple $`p`$-torsion group scheme that occurs in $`\mathrm{Ker}_𝒞`$, and since the rank of $`E\left[p\right]`$ is $`p^2`$, the $`E\left[p\right]`$-rank of every element of $`\mathrm{\Phi }\left(R_1\left(A\right)\right)`$ is even.
Thus, the $`E\left[p\right]`$-rank gives us a homomorphism from $`_1\left(𝒞\right)`$ to $`𝐙/2𝐙`$. Lemma 2.6 shows that the image of $`I_𝒞`$ under this homomorphism is nonzero, so $`I_𝒞`$ itself is nonzero. Then Theorem 3.2 shows that the trivial group scheme is not attainable in $`𝒞`$, so $`𝒞`$ contains no principally polarized varieties. We leave it to the reader to use the methods of this paper to prove the following generalization of Theorem 1.1:

###### Theorem 3.3.

margin: A hypothesis is missing here: We should also assume that $`p`$ is not equal to the characteristic of $`k`$. The proof of this theorem (with the added hypothesis) does not use Theorem 3.2 — it is based on the ideas used in §2 — and still carries through.

Let $`\mathrm{}/k`$ be a Galois extension of odd prime degree $`p`$, and let $`Y`$ and $`Z`$ be abelian varieties over $`k`$ such that

1. $`Y\left[p\right]\left(k^{\text{sep}}\right)`$ is a simple $`\mathrm{Gal}\left(k^{\text{sep}}/k\right)`$-module;
2. the simple module $`Y\left[p\right]\left(k^{\text{sep}}\right)`$ does not occur in $`Z\left[p\right]\left(k^{\text{sep}}\right)`$; and
3. $`\mathrm{End}Y_{\mathrm{}}=𝐙`$.

Let $`X`$ be the kernel of the trace map from $`\mathrm{Res}_{\mathrm{}/k}Y`$ to $`Y`$. Then $`X`$ is simple, and every polarization of every abelian variety isogenous to $`X\times Z`$ has degree divisible by $`p^{2\mathrm{dim}Y}`$.
# Sonic analog of gravitational black holes in Bose-Einstein condensates

## Abstract

It is shown that, in dilute-gas Bose-Einstein condensates, there exist both dynamically stable and unstable configurations which, in the hydrodynamic limit, exhibit a behavior resembling that of gravitational black holes. The dynamical instabilities involve creation of quasiparticle pairs in positive and negative energy states, as in the well-known suggested mechanism for black hole evaporation. We propose a scheme to generate a stable sonic black hole in a ring trap.

Many investigations of dilute gas Bose-Einstein condensates are directed towards experimentally creating nontrivial configurations of the semiclassical mean field, or to predicting the properties of such configurations in the presence of quantum fluctuations. Such problems are hardly peculiar to condensates, but ultracold dilute gases are so easy to manipulate and control, both experimentally and theoretically, that they may allow us to analyze less amenable systems by analogy. As an essay in such an application of condensates, in this paper we discuss the theoretical framework and propose an experiment to create the analog of a black hole in the laboratory and simulate its radiative instabilities. The hydrodynamic analog of an event horizon was suggested originally by Unruh as a more accessible phenomenon which might shed some light on the Hawking effect (thermal radiation from black holes, stationary insofar as backreaction is negligible) and, in particular, on the role of ultrahigh frequencies. An event horizon for sound waves appears in principle wherever there is a closed surface through which a fluid flows inwards at the speed of sound, the flow being subsonic on one side of the surface and supersonic on the other.
There is a close analogy between sound propagation on a background hydrodynamic flow and field propagation in a curved spacetime; and although hydrodynamics is only a long-wavelength effective theory for physical (super)fluids, so also field theory in curved spacetime is to be considered a long-wavelength approximation to quantum gravity. Determining whether and how sonic black holes radiate sound, in a full calculation beyond the hydrodynamic approximation or in an actual experiment, can thus offer some suggestions about black hole radiance and its sensitivity to high-frequency physics. The basic challenge of our proposal is to keep the trapped Bose-Einstein gas sufficiently cold and well isolated to maintain a locally supersonic flow long enough to observe its intrinsic dynamics. Detecting thermal phonons radiating from the horizons would obviously be a difficult additional problem, since such radiation would be indistinguishable from many other possible heating effects. This further difficulty does not arise in our proposal, however, because the black-hole radiation we predict is, unlike Hawking radiation, not quasistationary, but grows exponentially under appropriate conditions. It should therefore be observable in the next generation of atom traps. A Bose-Einstein condensate is the ground state of a second-quantized many-body Hamiltonian for $`N`$ interacting bosons trapped by an external potential $`V_{\mathrm{ext}}(𝐱)`$. At zero temperature, when the number of atoms is large and the atomic interactions are sufficiently small, almost all the atoms are in the same single-particle quantum state $`\mathrm{\Psi }(𝐱,t)`$, even if the system is slightly perturbed.
The evolution of $`\mathrm{\Psi }`$ is then given by the well-known Gross-Pitaevskii equation $$i\hbar \partial _t\mathrm{\Psi }=\left(-\frac{\hbar ^2}{2m}\nabla ^2+V_{\mathrm{ext}}+\frac{4\pi a\hbar ^2}{m}|\mathrm{\Psi }|^2\right)\mathrm{\Psi },$$ where $`m`$ is the mass of the atoms, $`a`$ is the scattering length, and we normalize to the total number of atoms $`\int d^3𝐱\,|\mathrm{\Psi }(𝐱,t)|^2=N`$. Our purposes do not require solving the Gross-Pitaevskii equation with some given external potential $`V_{\mathrm{ext}}(𝐱)`$; our concern is the propagation of small collective perturbations of the condensate, around a background stationary state $`\mathrm{\Psi }_s(𝐱,t)=\sqrt{\rho (𝐱)}e^{i\vartheta (𝐱)}e^{-i\mu t/\hbar },`$ where $`\mu `$ is the chemical potential. Thus it is only necessary that it be possible, in any external potential that can be generated, to create a condensate in this state. Many realistic techniques for “quantum state engineering,” to create designer potentials and bring condensates into specific states, have been proposed, and even implemented successfully; our simulations indicate that currently known techniques should suffice to generate the condensate states that we propose. Perturbations about the stationary state $`\mathrm{\Psi }_s(𝐱,t)`$ obey the Bogoliubov system of two coupled second order differential equations. Within the regime of validity of the hydrodynamic (Thomas-Fermi) approximation, these two equations for the density perturbation $`\varrho `$ and the phase perturbation $`\varphi `$, in terms of the local speed of sound $`c(𝐱)\equiv \frac{\hbar }{m}\sqrt{4\pi a\rho (𝐱)}`$ and the background stationary velocity $`𝐯\equiv \frac{\hbar }{m}\nabla \vartheta `$, read $$\dot{\varrho }=-\nabla \cdot \left(\frac{m}{4\pi a\hbar }c^2\nabla \varphi +𝐯\varrho \right),\qquad \dot{\varphi }=-𝐯\cdot \nabla \varphi -\frac{4\pi a\hbar }{m}\varrho .$$ Furthermore, low frequency perturbations are essentially just waves of (zero) sound. 
Indeed, the Bogoliubov equations may be reduced to a single second order equation for the condensate phase perturbation $`\varphi `$. This differential equation has the form of a relativistic wave equation $`\partial _\mu (\sqrt{-g}\,g^{\mu \nu }\partial _\nu \varphi )=0`$, with $`g\equiv \mathrm{det}\,g_{\mu \nu }`$, in an effective curved spacetime with the metric $`g_{\mu \nu }`$ being entirely determined by the local speed of sound $`c`$ and the background stationary velocity $`𝐯`$. Up to a conformal factor, this effective metric has the form $$(g_{\mu \nu })=\left(\begin{array}{cc}-(c^2-𝐯^2)& -𝐯^\mathrm{T}\\ -𝐯& \mathrm{𝟏}\end{array}\right).$$ This class of metrics can possess event horizons. For instance, if an effective sink for atoms is generated at the center of a spherical trap (such as by an atom laser out-coupling technique), and if the radial potential profile is suitably arranged, we can produce densities $`\rho (r)`$ and flow velocities $`𝐯(𝐱)=v(r)𝐫/r`$ such that the quantity $`c^2-𝐯^2`$ vanishes at a radius $`r=r_h`$, being negative inside and positive outside. The sphere at radius $`r_h`$ is a sonic event horizon completely analogous to those appearing in gravitational black holes, in the sense that sonic perturbations cannot propagate through this surface in the outward direction. The physical mechanism of the sonic black hole is quite simple: inside the horizon, the background flow speed $`v`$ is larger than the local speed of sound $`c`$, and so sound waves are dragged inwards. In fact there are two conditions which must hold for this dragged sound picture to be accurate. Wavelengths larger than the black hole itself will of course not be dragged in, but merely diffracted around it. And perturbations must have wavelengths $`\lambda \gg 2\pi \xi ,\,2\pi \xi /\sqrt{|1-v/c|}`$, where $`\xi (𝐱)\equiv \hbar /[mc(𝐱)]`$ is the local healing length. Otherwise they do not behave as sound waves since they lie outside the regime of validity of the hydrodynamic approximation. 
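To get a feeling for the scales at which these conditions operate, the sound speed and healing length can be evaluated for typical trap parameters. The numbers below (for a <sup>87</sup>Rb condensate) are illustrative assumptions, not values taken from the text:

```python
import math

hbar = 1.054571817e-34  # J s
m = 1.443e-25           # kg, mass of a 87Rb atom (illustrative assumption)
a = 5.3e-9              # m, s-wave scattering length (illustrative assumption)
rho = 1e20              # m^-3, typical condensate density (illustrative assumption)

# Local speed of sound c = (hbar/m) * sqrt(4*pi*a*rho)
c = (hbar / m) * math.sqrt(4 * math.pi * a * rho)

# Healing length xi = hbar/(m c); the sound-wave description requires
# wavelengths much longer than 2*pi*xi
xi = hbar / (m * c)

print(f"c  = {c * 1e3:.2f} mm/s")  # of order a few mm/s
print(f"xi = {xi * 1e6:.2f} um")   # a fraction of a micron
```

Sound speeds of millimetres per second and healing lengths well below a micron are what make the two wavelength windows above nontrivial but realizable in practice.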
These short-wavelength modes must be described by the full Bogoliubov equations, which allow signals to propagate faster than the local sound speed, and thus permit escape from sonic black holes. Even if such an intermediate range of wavelengths does exist, the modes outside it may still affect the stability of the black hole as discussed below. As it stands, this description is incomplete. The condensate flows continually inwards and therefore at $`r=0`$ there must be a sink that takes atoms out of the condensate. Otherwise, the continuity equation $`\nabla \cdot (\rho 𝐯)=0`$, which must hold for stationary configurations, will be violated. We have analyzed several specific systems which may be suitable theoretical models for future experiments, and have found that the qualitative behavior is analogous in all of them. Black holes which require atom sinks are both theoretically and experimentally more involved, however; moreover, maintaining a steady transonic flow into a sink may require either a very large condensate or some means of replenishment. We will therefore discuss here an alternative configuration which may be experimentally more accessible and whose description is particularly simple: a condensate in a very thin ring that effectively behaves as a periodic one-dimensional system. Under conditions that we will discuss, the supersonic region in a ring may be bounded by two horizons: a black hole horizon through which phonons cannot exit, and a “white hole” horizon through which they cannot enter. In a sufficiently tight ring-shaped external potential of radius $`R`$, motion in radial ($`r`$) and axial ($`z`$) cylindrical coordinates is effectively frozen. 
We can then write the wave function as $`\mathrm{\Psi }(z,r,\theta ,\tau )=f(z,r)\mathrm{\Phi }(\theta ,\tau )`$ and normalize $`\mathrm{\Phi }`$ to the number of atoms in the condensate $`\int _0^{2\pi }d\theta \,|\mathrm{\Phi }(\theta )|^2=N`$, where with the azimuthal coordinate $`\theta `$ we have introduced the dimensionless time $`\tau =\frac{\hbar }{mR^2}t`$. The Gross-Pitaevskii equation thus becomes effectively one-dimensional: $$i\partial _\tau \mathrm{\Phi }=\left(-\frac{1}{2}\partial _\theta ^2+𝒱_{\mathrm{ext}}+\frac{𝒰}{N}|\mathrm{\Phi }|^2\right)\mathrm{\Phi },$$ (1) where $`𝒰\equiv 4\pi aNR^2\int dz\,dr\,r|f(z,r)|^4`$ and $`𝒱_{\mathrm{ext}}(\theta )`$ is the dimensionless effective potential (in which we have already included the chemical potential) that results from the dimensional reduction. The stationary solution can then be written as $`\mathrm{\Phi }_s(\theta ,\tau )=\sqrt{\rho (\theta )}e^{i\int ^\theta d\theta ^{\prime }v(\theta ^{\prime })}`$ and the local dimensionless angular speed of sound as $`c(\theta )=\sqrt{𝒰\rho (\theta )/N}`$. Periodic boundary conditions around the ring require the “winding number” $`w\equiv \frac{1}{2\pi }\int _0^{2\pi }d\theta \,v(\theta )`$ to be an integer. The qualitative behavior of horizons in a ring is well represented by the two-parameter family of condensate densities $$\rho (\theta )=\frac{N}{2\pi }(1+b\mathrm{cos}\theta ),$$ where $`b\in [0,1]`$. Continuity, $`\partial _\theta (\rho v)=0`$, then determines the dimensionless flow-velocity field $$v(\theta )=\frac{𝒰w\sqrt{1-b^2}}{2\pi c(\theta )^2},$$ which depends on $`w`$ as a third discrete independent parameter. Requiring that $`\mathrm{\Phi }_s(\theta ,\tau )`$ be a stationary solution to the Gross-Pitaevskii equation then determines how the trapping potential must be modulated as a function of $`\theta `$. All the properties of the condensate, including whether and where it has sonic horizons, and whether or not they are stable, are thus functions of $`𝒰`$, $`b`$ and $`w`$. 
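The horizon structure of this family is easy to explore numerically. The sketch below evaluates the local value of $`c^2-v^2`$ on a grid and locates its sign changes; the parameter values are arbitrary illustrative choices, not taken from the text:

```python
import numpy as np

# Illustrative parameters (arbitrary choices): winding number w, density
# modulation b, and dimensionless interaction strength U.
w, b, U = 7, 0.6, 150.0

theta = np.linspace(-np.pi, np.pi, 20000)
c2 = U * (1 + b * np.cos(theta)) / (2 * np.pi)    # local sound speed squared
v = U * w * np.sqrt(1 - b**2) / (2 * np.pi * c2)  # flow speed fixed by continuity

disc = c2 - v**2
# Horizons sit where c^2 - v^2 changes sign
crossings = theta[:-1][np.diff(np.sign(disc)) != 0]
print(crossings)                    # two horizons, near +-1.41 rad for these values
print(disc[theta.size // 2] > 0)    # subsonic near theta = 0 (density maximum)
print(disc[0] < 0)                  # supersonic near theta = pi (density minimum)
```

The flow is subsonic around the density maximum and supersonic around the minimum, so a black-hole and a white-hole horizon appear in symmetric pairs, as described in the text.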
For instance, if we require that the horizons be located at $`\theta _h=\pm \pi /2`$, which imposes the relation $`𝒰=2\pi w^2(1-b^2)`$, then we must have $`c^2-v^2`$ positive for $`\theta \in (-\pi /2,\pi /2)`$, zero at $`\theta _h=\pm \pi /2`$, and negative otherwise, provided that $`𝒰<2\pi w^2`$. The further requirement that perturbations with wavelengths shorter than the sizes of the inner and outer regions are indeed phononic implies $`𝒰\gg 2\pi `$, which in turn requires $`w\gg 1`$ and $`1-b\gg 1/w^2`$. In fact, detailed analysis shows that $`w\gtrsim 5`$ is sufficient. A black hole solution should also be stable over sufficiently long time scales in order to be physically realizable. Since stability must be checked for perturbations on all wavelengths, the full Bogoliubov spectrum must be determined. For large black holes within infinite condensates, this Bogoliubov problem may be solved using WKB methods that closely resemble those used for solving relativistic field theories in true black hole spacetimes. The results are also qualitatively similar to those we have found for black holes in finite traps, where we have resorted to numerical methods because, in these cases, WKB techniques may fail for just those modes which threaten to be unstable. Our numerical approach for our three-parameter family of black/white holes in the ring-shaped condensate has been to write the Bogoliubov equations in discrete Fourier space, and then truncate the resulting infinite-dimensional eigenvalue problem. 
Writing the wave function as $`\mathrm{\Phi }=\mathrm{\Phi }_s+\phi e^{i\int ^\theta d\theta ^{\prime }v(\theta ^{\prime })}`$, decomposing the perturbation $`\phi `$ in discrete modes $$\phi (\theta ,\tau )=\sum _{\omega ,n}\left[e^{-i\omega \tau }e^{in\theta }A_{\omega ,n}u_{\omega ,n}(\theta )+e^{i\omega ^{\ast }\tau }e^{-in\theta }A_{\omega ,n}^{\dagger }v_{\omega ,n}^{\ast }(\theta )\right],\qquad (3)$$ and substituting into the Gross-Pitaevskii equation, we obtain the following equation for the modes $`u_{\omega ,n}`$ and $`v_{\omega ,n}`$: $$\omega \left(\begin{array}{c}u_{\omega ,n}\\ v_{\omega ,n}\end{array}\right)=\sum _p\left(\begin{array}{cc}h_{np}^+& f_{np}\\ -f_{np}& -h_{np}^{-}\end{array}\right)\left(\begin{array}{c}u_{\omega ,p}\\ v_{\omega ,p}\end{array}\right).$$ In this equation, $$f_{np}=\frac{𝒰}{2\pi }\left(\delta _{n,p}+\frac{b}{2}\delta _{n,p+1}+\frac{b}{2}\delta _{n,p-1}\right),$$ $$h_{np}^{\pm }=\frac{1}{2}(n+p)w\sqrt{1-b^2}\,\alpha _{n-p}\pm \left(f_{np}+\frac{4n^2-1}{8}\delta _{n,p}+\frac{1-b^2}{8}\beta _{n-p}\right),$$ $$\alpha _i=\sum _{j\geq |i|,\,i+j\ \mathrm{even}}^{\infty }\left(-\frac{b}{2}\right)^j\left(\begin{array}{c}j\\ (i+j)/2\end{array}\right),\qquad \beta _i=\sum _{j\geq |i|,\,i+j\ \mathrm{even}}^{\infty }\left(-\frac{b}{2}\right)^j\left(\begin{array}{c}j\\ (i+j)/2\end{array}\right)(j+1).$$ Eliminating Fourier components above a sufficiently high cutoff $`Q`$ has negligible effect on possible instabilities, which can be shown to occur at relatively long wavelengths. 
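The coefficients $`\alpha _i`$ (and $`\beta _i`$, with the extra factor $`j+1`$) are recognizable as the Fourier coefficients of $`(1+b\mathrm{cos}\theta )^{-1}`$ (respectively $`(1+b\mathrm{cos}\theta )^{-2}`$), the profiles that enter the flow velocity and its square. A quick numerical cross-check of this identification (assuming a $`(-b/2)^j`$ sign inside the sums, which is our reading of the garbled original) is:

```python
import numpy as np
from math import comb

def alpha(i, b, jmax=200):
    # alpha_i = sum over j >= |i| with i+j even of (-b/2)^j * C(j, (i+j)/2)
    # (the (-b/2)^j sign convention is an assumption, not taken verbatim)
    return sum((-b / 2) ** j * comb(j, (i + j) // 2)
               for j in range(abs(i), jmax) if (i + j) % 2 == 0)

b = 0.5
# Discrete Fourier coefficient of 1/(1 + b cos(theta)) on a uniform periodic grid
theta = np.linspace(0, 2 * np.pi, 4096, endpoint=False)
for i in range(4):
    num = np.mean(np.exp(-1j * i * theta) / (1 + b * np.cos(theta)))
    print(i, alpha(i, b), num.real)  # the two columns agree
```

The agreement follows from expanding $`1/(1+b\mathrm{cos}\theta )`$ as a geometric series and collecting powers of $`e^{\pm i\theta }`$ with the binomial theorem.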
The numerical solution to this eigenvalue equation, together with the normalization condition $`\int d\theta \,(u_{\omega ,n}^{\ast }u_{\omega ^{\prime },n^{\prime }}-v_{\omega ,n}^{\ast }v_{\omega ^{\prime },n^{\prime }})=\delta _{nn^{\prime }}\delta _{\omega \omega ^{\prime }}`$, provides the allowed frequencies. Real negative eigenfrequencies for modes of positive norm are always present, which means that black hole configurations are energetically unstable, as expected. This feature is inherent in supersonic flow, since the speed of sound is also the Landau critical velocity. In a sufficiently cold and dilute condensate, however, the time scale for dissipation may in principle be made very long, and so these energetic instabilities need not be problematic. More serious are dynamical instabilities, which occur for modes with complex eigenfrequencies and are genuine physical phenomena. For sufficiently high values of the cutoff (e.g., $`Q\gtrsim 25`$ in our calculations), the complex eigenfrequencies obtained from the truncated eigenvalue problem become independent of the cutoff within the numerical error. The existence and rapidity of dynamical instabilities depend sensitively on $`(𝒰,b,w)`$. For instance, see Fig. 1 for a contour plot of the maximum of the absolute values of the imaginary parts of all eigenfrequencies for $`w=7`$, showing that the regions of instability are long, thin fingers in the $`(𝒰,b)`$ plane. Not shown in the figure is the important fact that the size of the imaginary parts, which gives the rate of the instabilities, increases, starting from zero, quite rapidly with $`b`$, although they remain small as compared with the real parts. The stability diagram of Fig. 1 suggests a strategy for creating a sonic black hole from an initial stable state. Within the upper subsonic region, the vertical axis $`b=0`$ corresponds to a homogeneous persistent current in a ring, which can in principle be created using different techniques. 
Gradually changing $`𝒰`$ and $`b`$, it is possible to move from such an initial state to a black/white hole state, along a path lying almost entirely within the stable region, and passing only briefly through instabilities where they are sufficiently small to cause no difficulty. Indeed, we have simulated this process of adiabatic creation of a sonic black/white hole by solving numerically (using the split operator method) the time-dependent Gross-Pitaevskii equation (1) that provides the evolution of the condensate when the parameters of the trapping potential change so as to move the condensate state along various paths in parameter space. One of these paths is shown in Fig. 1 (light-grey solid line): we start with a current at $`w=7`$, $`b=0`$, and sufficiently high $`𝒰`$; we then increase $`b`$ adiabatically keeping $`𝒰`$ fixed until an appropriate value is reached; finally, keeping $`b`$ constant, we decrease $`𝒰`$ adiabatically (which can be physically implemented by decreasing the radius of the ring trap), until we meet the dashed contour for black holes of comfortable size. Our simulations confirm that the small instabilities which briefly appear in the process of creation do not disrupt the adiabatic evolution. The final quantum state of the condensate, obtained by this procedure, indeed represents a stable black/white hole. We have further checked the stability of this final configuration by numerically solving the Gross-Pitaevskii equation (1) for very long periods of time (as compared with any characteristic time scale of the condensate) and for fixed values of the trap parameters. This evolution reflects the fact that no complex frequencies are present, as predicted from the mode analysis, and that the final state is stationary. 
Once the black/white hole has been created, one could further change the parameters $`(𝒰,b)`$ so as to move between the unstable “fingers” into a stable region of higher $`b`$ (a deeper hole); or one could deliberately enter an unstable region. In the latter case, the black hole should disappear in an explosion of phonons, which may be easy to detect experimentally. Such an event might be related to the evaporation process suggested for real black holes in the sense that pairs of quasiparticles are created near the horizon in both positive and negative energy modes. The Hermiticity of the Bogoliubov Hamiltonian implies that eigenmodes with complex frequencies appear always in dual pairs, whose frequencies are complex conjugate. In the language of second quantization, the linearized Hamiltonian for each such pair has the form $$H=\sum _n\left(\omega A_{\omega ^{\ast },n}^{\dagger }A_{\omega ,n}+\omega ^{\ast }A_{\omega ,n}^{\dagger }A_{\omega ^{\ast },n}\right),$$ and the only nonvanishing commutators among these operators are $`[A_{\omega ,n},A_{\omega ^{\ast },n^{\prime }}^{\dagger }]=\delta _{nn^{\prime }}`$. It is then clear that none of these operators is actually a harmonic oscillator creation or annihilation operator in the usual sense. However, the linear combinations (note that $`A_{\omega ^{\ast },n}\neq A_{\omega ,n}^{\dagger }`$) $$a_n=\frac{1}{\sqrt{2}}(A_{\omega ,n}+A_{\omega ^{\ast },n}),\qquad b_n=\frac{i}{\sqrt{2}}(A_{\omega ,n}^{\dagger }-A_{\omega ^{\ast },n}^{\dagger })$$ and their Hermitian conjugates are true annihilation and creation operators, with the standard commutation relations, and in terms of these the Bogoliubov Hamiltonian becomes $$H=\sum _n\left[\mathrm{Re}(\omega )(a_n^{\dagger }a_n-b_n^{\dagger }b_n)-\mathrm{Im}(\omega )(a_n^{\dagger }b_n^{\dagger }+a_nb_n)\right],$$ which obviously leads to self-amplifying creation of positive and negative frequency pairs. Evaporation through an exponentially self-amplifying instability is not equivalent, however, to the usual kind of Hawking radiation; this issue will be discussed in detail elsewhere. 
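For a single mode pair the equivalence of the two forms of $`H`$ can be verified directly. Write $`P\equiv A_{\omega ,n}`$ and $`Q\equiv A_{\omega ^{\ast },n}`$, with $`[P,Q^{\dagger }]=[Q,P^{\dagger }]=1`$ as the only nonvanishing commutators, and take $`a_n=(P+Q)/\sqrt{2}`$, $`b_n=i(P^{\dagger }-Q^{\dagger })/\sqrt{2}`$ (our reading of the garbled combinations, chosen so that $`a_n`$ and $`b_n`$ obey canonical commutation relations). Inverting gives $`P=(a_n+ib_n^{\dagger })/\sqrt{2}`$ and $`Q=(a_n-ib_n^{\dagger })/\sqrt{2}`$, and therefore

```latex
\begin{aligned}
H_n &= \omega\,Q^{\dagger}P + \omega^{*}P^{\dagger}Q \\
    &= \tfrac{\omega}{2}\,(a_n^{\dagger}+i b_n)(a_n+i b_n^{\dagger})
     + \tfrac{\omega^{*}}{2}\,(a_n^{\dagger}-i b_n)(a_n-i b_n^{\dagger}) \\
    &= \mathrm{Re}(\omega)\,\bigl(a_n^{\dagger}a_n - b_n b_n^{\dagger}\bigr)
     - \mathrm{Im}(\omega)\,\bigl(a_n^{\dagger}b_n^{\dagger} + b_n a_n\bigr) \\
    &= \mathrm{Re}(\omega)\,\bigl(a_n^{\dagger}a_n - b_n^{\dagger}b_n\bigr)
     - \mathrm{Im}(\omega)\,\bigl(a_n^{\dagger}b_n^{\dagger} + a_n b_n\bigr)
     - \mathrm{Re}(\omega),
\end{aligned}
```

which, up to the irrelevant constant $`-\mathrm{Re}(\omega )`$, is the self-amplifying pair-creation Hamiltonian: the $`a_n^{\dagger }b_n^{\dagger }`$ term creates quanta in positive and negative energy modes simultaneously at a rate set by $`\mathrm{Im}(\omega )`$.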
Trapped bosons at ultralow temperature can provide an analog to a black-hole spacetime. Similar analogs have been proposed in other contexts, such as superfluid helium, solid state physics, and optics; but the outstanding recent experimental progress in cooling, manipulating and controlling atoms makes Bose-Einstein condensates an especially powerful tool for this kind of investigation. We have analyzed in detail the case of a condensate in a ring trap, and proposed a realistic scheme for adiabatically creating stable sonic black/white holes. We thank the Austrian Science Foundation and the European Union TMR networks ERBFMRX–CT96–0002 and ERB–FMRX–CT96–0087. Note added.— Further details as well as the study of cigar shaped condensates with atom sinks at the center will appear in Ref. .
# Some low energy effects of a light stabilized radion in the Randall-Sundrum model Uma Mahanta, Mehta Research Institute, Chhatnag Road, Jhusi, Allahabad-211019, India and Subhendu Rakshit, Department of Physics, University of Calcutta, Calcutta-700009, India Abstract In this paper we study some of the low energy effects of a light stabilized radion field in the Randall-Sundrum scenario. We find that the NLC 500 with its projected precision level will be able to probe the radion contribution to $`\kappa _v`$ and $`\lambda _v`$ for values of $`\langle \varphi \rangle `$ up to 500 GeV. On the other hand the BNL E821 experiment will be able to test the radion contribution to $`a_\mu `$ for $`\langle \varphi \rangle `$ = 1 TeV and $`m_\varphi \lesssim m_\mu `$. We have also shown that the higgs-radion mixing induces a 2.6% correction in the WWh coupling. Finally, by comparing the radionstrahlung process with the higgsstrahlung process we have found that the LEPI bound of 60 GeV on the higgs mass based on the $`Z\to hl\overline{l}`$ decay mode suggests a lower bound of about 35 GeV on the radion mass. PACS numbers: 11.10Kk, 04.50+h, 12.10 Dm. Key words: Theories in Extra dimension, Precision tests, Radion phenomenology. Recently Goldberger and Wise have shown that the separation between the branes in the Randall-Sundrum (R-S) scenario can be stabilized by a bulk scalar field. They also showed that if the large value of $`kr_c\approx 12`$ that is necessary for solving the hierarchy problem arises from a small bulk scalar mass term then the modulus potential is nearly flat near its minimum. The R-S scenario therefore predicts a modulus field which is much lighter than the Kaluza-Klein excitations of the bulk fields, which usually lie in the TeV range. The modulus field is therefore expected to be the first experimental signal of the R-S model. Its couplings to ordinary fields on the visible brane are suppressed by the TeV scale and completely determined by general covariance. 
Although the radion vev is more or less well determined in the R-S scenario, its mass is an almost free parameter. It turns out to be much lighter than 1 TeV only if the mass of the bulk scalar field that stabilizes the modulus is smaller than 1 TeV. It is the purpose of this paper to investigate what bounds can be derived on the radion vev and particularly its mass from precision tests and direct searches at $`e^+e^-`$ colliders. The couplings of the modulus field $`\varphi `$ to ordinary matter on the visible brane are given by $$L_I=\frac{T_\mu ^\mu }{\langle \varphi \rangle }\stackrel{~}{\varphi },$$ where $`\varphi `$ is expanded about its vev as $`\varphi =\stackrel{~}{\varphi }+\langle \varphi \rangle `$. In the context of the SM, ignoring the contribution of the higgs sector, we have $`T_\mu ^\mu =\sum _fm_f\overline{f}f+2m_z^2Z^\mu Z_\mu +2m_w^2W^\mu W_\mu `$. The couplings of $`\stackrel{~}{\varphi }`$ to SM fields are therefore suppressed by $`\langle \varphi \rangle `$ which is in the TeV range. Since conformal invariance is broken explicitly by the mass of the SM fields, the effects of $`\stackrel{~}{\varphi }`$ on low energy phenomenology (i.e., much below the Kaluza-Klein excitations) can become appreciable only if the SM fields involved are quite heavy and lie in the few hundred GeV range. In discussing the low energy phenomenology of the radion we shall consider particularly those processes that involve the couplings of $`\stackrel{~}{\varphi }`$ to the heavy SM fields e.g. Z, W and t. Loop induced virtual effects Let us first consider the effects of $`\stackrel{~}{\varphi }`$ through loop induced radiative corrections which can be tested through high precision tests. In this category we shall discuss the effect of $`\stackrel{~}{\varphi }`$ on anomalous $`W^+W^-\gamma `$, $`W^+W^-Z`$ couplings and the renormalization of $`Zt_R\overline{t}_R`$ coupling. Besides we shall also consider the radion contribution to the magnetic moment anomaly of the muon. 
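Since each coupling is simply the field's mass divided by $`\langle \varphi \rangle `$, the relative importance of the various SM fields is immediate. A trivial numerical illustration (the masses are round PDG-style values, and $`\langle \varphi \rangle `$ = 1 TeV is assumed, both inserted here for illustration):

```python
# Dimensionless radion coupling strength ~ m / <phi> for each SM field
# (masses in GeV are illustrative round numbers, not taken from the text)
vev = 1000.0  # <phi> = 1 TeV, in GeV
masses = {"top": 175.0, "Z": 91.2, "W": 80.4, "muon": 0.1057}

couplings = {name: m / vev for name, m in masses.items()}
for name, g in couplings.items():
    print(f"{name:5s} {g:.2e}")
```

The top, Z and W couple at the 10% level while the muon couples at the 10<sup>-4</sup> level, which is why the heavy fields dominate the loop effects considered below, and why the muon becomes relevant only through the extreme precision of $`a_\mu `$ measurements.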
Unless stated otherwise we shall assume that $`\langle \varphi \rangle `$ = 1 TeV. i) Anomalous magnetic moment of W boson: The anomalous magnetic moment of the W boson is given by the operator $$L_{WWV}=ie\kappa _vW_\mu ^+W_\nu ^-V^{\mu \nu },$$ where $`V^{\mu \nu }=\partial ^\mu V^\nu -\partial ^\nu V^\mu `$ and V=A, Z. $`\kappa _v`$ is the anomalous magnetic moment of the W boson. The value of $`\kappa _v`$ arising from SM interactions at one loop is of the order of $`6\times 10^{-3}`$. The radion contribution to $`\kappa _v`$ arises from only one diagram. After pulling out one momentum corresponding to the incoming $`\gamma `$ or Z the remaining scalar integration can be evaluated by naive power counting. For $`m_\varphi \ll m_w`$ the integral gets most of its contribution from loop momenta of order $`m_W`$ and is given by $`I(m_\varphi ^2,m_w^2)\approx \frac{1}{16\pi ^2m_w^2}f(\frac{m_\varphi ^2}{m_w^2})`$ where $`f(\mu ^2)=\int _0^1dx\,x^2\,\frac{x^2-x+2+\frac{\mu ^2}{2}(1-x)}{x^2+\mu ^2(1-x)}`$ and $`\mu =\frac{m_\varphi }{m_w}`$. For $`m_\varphi \ll m_w`$ the radion contribution to $`\kappa _v`$ is therefore given by $`\kappa _v^{\stackrel{~}{\varphi }}\approx \frac{1}{16\pi ^2}\frac{m_w^2}{\langle \varphi \rangle ^2}f(\frac{m_\varphi ^2}{m_w^2})\approx 3.7\times 10^{-4}`$. The anomalous magnetic moment of the W boson arising from radion exchange is therefore roughly one order of magnitude smaller than that from SM processes. The size of the above radiative correction however decreases inversely as $`\langle \varphi \rangle ^2`$. Thus for $`\langle \varphi \rangle =`$ 500 GeV the value of $`\kappa _v^{\stackrel{~}{\varphi }}`$ increases to $`1.5\times 10^{-3}`$. However the dependence of $`\kappa _v^{\stackrel{~}{\varphi }}`$ on $`\frac{m_\varphi ^2}{m_w^2}`$ is almost flat over the entire range of the variable and therefore no useful bound on $`m_\varphi `$ can be obtained. ii) Electric quadrupole moment of W boson: The electric quadrupole moment of the W boson is given by the operator $$L_{WWV}=ie\frac{\lambda _v}{m_w^2}W_{\lambda \mu }^+W_\nu ^{-\mu }V^{\nu \lambda }.$$ 
In the SM $`\lambda _v`$ is zero at tree level. As in the case of $`\kappa _v`$ the radion contribution to $`\lambda _v`$ arises from only one diagram. To evaluate the electric quadrupole moment of the W boson arising from radion exchange we need to extract three momenta corresponding to the three external gauge bosons outside the loop integral. The remaining integral can be shown to converge as $`\frac{1}{m_w^4}g(\frac{m_\varphi ^2}{m_w^2})`$. The radion contribution to $`\lambda _v`$ is therefore given by $`\lambda _v\approx \frac{1}{16\pi ^2}\frac{m_w^2}{\langle \varphi \rangle ^2}g(\frac{m_\varphi ^2}{m_w^2})\approx (1.6-3.2)\times 10^{-4}`$ where we have taken $`g(x)`$ to lie between 1 and 2. At one loop order SM processes give rise to a $`\lambda _v`$ that is of order $`5\times 10^{-3}`$. The modulus contribution to $`\lambda _v`$ is therefore suppressed relative to the SM contribution by one order of magnitude. The sensitivity reach of different colliders for measuring $`\kappa _v`$ and $`\lambda _v`$ has been estimated by several working groups. These studies indicate that LEPII with a design luminosity of 500 pb<sup>-1</sup> will be able to measure $`\kappa _v`$ and $`\lambda _v`$ with a precision of a few times $`10^{-2}`$. On the other hand NLC 500 with a design luminosity of 10 fb<sup>-1</sup> will be able to reach precision levels of about $`10^{-3}`$ which will enable it to probe the radion contributions for $`\langle \varphi \rangle `$ up to 500 GeV. If $`\langle \varphi \rangle `$ lies in the few TeV range then NLC 500 will not be able to probe the radion contribution to $`\kappa _v`$ and $`\lambda _v`$. However we have seen that increasing the center of mass energy from 200 GeV to 500 GeV enhances the sensitivity reach for measuring $`\kappa _v`$ and $`\lambda _v`$ by one order of magnitude. Hence a multi-TeV $`e^+e^-`$ collider with a very high luminosity may be able to probe the small radion contribution to $`\kappa _v`$ and $`\lambda _v`$ for $`\langle \varphi \rangle `$ = 1 TeV. 
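The two $`\kappa _v`$ numbers quoted earlier are tied together by the $`1/\langle \varphi \rangle ^2`$ scaling, which can be applied directly as a consistency check (taking the quoted value at $`\langle \varphi \rangle `$ = 1 TeV as input):

```python
# kappa_v scales as 1/<phi>^2; rescale the quoted 1 TeV value to 500 GeV
kappa_1tev = 3.7e-4  # value quoted in the text for <phi> = 1 TeV
kappa_500 = kappa_1tev * (1000.0 / 500.0) ** 2
print(kappa_500)     # ~1.5e-3, matching the number quoted for <phi> = 500 GeV
```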
iii) Effect of $`\stackrel{~}{\varphi }`$ on the renormalization of $`Zt\overline{t}`$ couplings: The radion contribution to the renormalization of $`Zt\overline{t}`$ couplings is important because the top quark is by far the heaviest known particle and has the strongest coupling to the radion. The stabilized modulus of Goldberger and Wise is expected to be lighter than 1 TeV and therefore it would contribute to the renormalization of the Zt$`\overline{t}`$ coupling from $`m_t`$ to 1 TeV. There are two distinct vertex correction diagrams due to $`\stackrel{~}{\varphi }`$ exchange. The dominant diagram involves the exchange of $`\stackrel{~}{\varphi }`$ between the outgoing $`t_R`$ and $`\overline{t}_R`$. The vertex renormalization constant corresponding to this diagram is given by $`Z_v^{-1}=1-\frac{1}{32\pi ^2}\mathrm{ln}\frac{q^2}{m_t^2}\frac{g_L^t}{g_R^t}(\frac{m_t}{\langle \varphi \rangle })^2`$ where q is the momentum of the incoming Z boson, $`g_L^t=\frac{1}{2}-\frac{2}{3}\mathrm{sin}^2\theta _w`$ and $`g_R^t=-\frac{2}{3}\mathrm{sin}^2\theta _w`$. In the second diagram Z goes into virtual $`\stackrel{~}{\varphi }`$ and $`Z`$ which then exchanges a top quark to produce a $`t_R`$, $`\overline{t}_R`$ pair in the final state. This diagram involves a chirality flip through a mass insertion on the internal top quark line. The renormalization constant corresponding to this diagram does not involve a leading log term and therefore it does not contribute to the renormalization of $`Zt\overline{t}`$ couplings. The self energy correction of Z due to $`\stackrel{~}{\varphi }`$ produces only a mass renormalization of Z but no wavefunction renormalization. The wavefunction renormalization of $`t_R`$ arising from $`\stackrel{~}{\varphi }`$ exchange is given by $`Z_{t_R}=1+\frac{1}{32\pi ^2}(\frac{m_t}{\langle \varphi \rangle })^2\mathrm{ln}\frac{q^2}{m_t^2}`$. 
The effect of a light radion on the renormalization of $`g_R^t`$ from $`m_t^2`$ to $`q^2=(1000\,\mathrm{GeV})^2`$ is therefore given by $$\frac{[g_R^t(q)]_{RS}}{[g_R^t(q)]_{SM}}=1+\left(1-\frac{g_L^t}{g_R^t}\right)\left(\frac{m_t}{\langle \varphi \rangle }\right)^2\frac{1}{32\pi ^2}\mathrm{ln}\frac{q^2}{m_t^2}.$$ $`(4)`$ where we have assumed that $`g_R^t(m_t)_{RS}=g_R^t(m_t)_{SM}=g_R^t(m_t)_{expt}`$. The presence of an extra light field ($`\stackrel{~}{\varphi }`$) in the RS scenario causes the renormalization of $`g_R^t`$ to deviate from that in the SM by .1% for $`\langle \varphi \rangle `$ = 1 TeV. The splitting increases to .4% for $`\langle \varphi \rangle `$ = 500 GeV. This effect is of the same magnitude as the radiative corrections due to ordinary QED processes and it might be possible to detect this effect at a multi-TeV scale $`e^+e^-`$ collider. Note that the splitting between $`[g_R^t(q)]_{RS}`$ and $`[g_R^t(q)]_{SM}`$ increases logarithmically with the high energy scale at which they are compared. iv) The muon magnetic moment anomaly: Although the muon coupling to the radion is small the muon magnetic moment anomaly is an extremely well measured quantity. The present experimental value of $`a_\mu `$ is given by $$a_\mu ^{exp}=(116592.30\pm .8)\times 10^{-8}$$ $`(5)`$ The extremely high precision with which $`a_\mu `$ can be measured compensates for the low radion coupling strength to the muon. The radion contribution to the muon anomaly arises from only one diagram and is given by $$a_\mu ^{\stackrel{~}{\varphi }}=\frac{1}{4\pi ^2}\left(\frac{m_\mu }{\langle \varphi \rangle }\right)^2\int _0^1dx\,\frac{x^2(2-x)}{x^2+r(1-x)}.$$ $`(6)`$ where $`r=(\frac{m_{\stackrel{~}{\varphi }}}{m_\mu })^2`$ is a free parameter. An upper bound on $`a_\mu ^{\stackrel{~}{\varphi }}`$ can be obtained by setting r=0. In this limit we get $`a_\mu ^{\stackrel{~}{\varphi }}\approx 4.4\times 10^{-10}`$. This value should be compared with the present experimental precision of $`10^{-8}`$ in measuring $`a_\mu `$. The BNL experiment (E821) hopes to lower the error down to $`4\times 10^{-10}`$. 
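The estimate above is easy to reproduce by direct numerical integration of Eq. (6). The sketch below assumes $`m_\mu \approx 0.1057`$ GeV and the massless-radion limit r = 0; the small difference from the quoted 4.4×10<sup>-10</sup> presumably reflects rounding of the inputs:

```python
import numpy as np

def a_mu_radion(r, vev_gev=1000.0, m_mu=0.1057):
    """Radion contribution to the muon anomaly; r = (m_radion/m_mu)^2."""
    x = np.linspace(1e-8, 1.0, 200001)
    integrand = x**2 * (2 - x) / (x**2 + r * (1 - x))
    dx = x[1] - x[0]
    integral = np.sum((integrand[:-1] + integrand[1:]) * dx / 2)  # trapezoid rule
    return (1 / (4 * np.pi**2)) * (m_mu / vev_gev) ** 2 * integral

print(a_mu_radion(0.0))    # ~4.2e-10 for r = 0 (integral -> 3/2)
print(a_mu_radion(100.0))  # a heavier radion gives a smaller contribution
```

At r = 0 the integrand reduces to 2 − x, so the integral is exactly 3/2, and the result sits right at the projected BNL sensitivity.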
At that level it will be able to probe the small radion contribution to $`a_\mu `$. As $`\frac{m_\varphi ^2}{m_\mu ^2}`$ becomes much greater than one the value of $`a_\mu ^{\stackrel{~}{\varphi }}`$ decreases from the above estimate. However for all $`m_\varphi \lesssim m_w`$ the dependence of $`a_\mu ^{\stackrel{~}{\varphi }}`$ on $`m_\varphi `$ is almost flat. So even with the BNL precision neither a sharp nor a useful bound can be put on $`m_\varphi `$. It is worth noting that the bound from muon anomaly experiments is relevant only for a very light radion ($`m_\varphi \lesssim m_\mu `$). For such a low radion mass the stabilization of the modulus is very weak. v) Radiative mixing between the radion and the higgs: Since the couplings of the radion and the higgs boson to SM particles are similar in structure they can mix through loop corrections. The dominant correction comes from the top loop because of a color factor and its large Yukawa coupling. It can be shown that this contribution is given by $$\delta m_\varphi ^2\approx -\frac{3N_c}{4\pi ^2}\frac{m_t^2}{v\langle \varphi \rangle }m_t^2\mathrm{ln}\frac{m_t^2}{\mu ^2}$$ $`(7)`$ where $`\mu `$ is the renormalization scale. For $`\mu =m_h=m_\varphi \approx 100`$ GeV the mixing angle is given by $`\theta \approx \frac{\delta m_\varphi ^2}{m_h^2}\approx `$ -.11. This mixing between the radion and the higgs boson will shift the coupling of the physical higgs boson to W and Z. For example the WWh coupling is now given by $$L_{wwh}=[gm_w+2m_w\frac{m_w}{\langle \varphi \rangle }\theta ]WWh=gm_w[1-.026]WWh.$$ $`(8)`$ which implies a 2.6% correction to the WWh coupling. The shift in the Z coupling due to $`\varphi `$–$`h`$ mixing is also of the same order. Needless to say, probing this shift is a challenging task but it should be reachable at NLC 500 with a high luminosity. Direct production processes Let us now consider the effects arising from direct production of $`\stackrel{~}{\varphi }`$. Prominent among them is radionstrahlung at an $`e^+e^-`$ collider. 
It involves the radiation of $`\stackrel{~}{\varphi }`$ from an on shell (LEP I) or off shell (LEP II or NLC) Z boson. This process is similar to higgsstrahlung which is used for the higgs search at $`e^+e^-`$ colliders. At LEPII the higgs boson is produced by the process $`e^+e^-\to Z^{\ast }\to h+Z`$. It can be shown that the momentum of the outgoing Z boson is given by $`p_h^z=\frac{1}{2\sqrt{s}}[s-(m_h-m_z)^2]^{\frac{1}{2}}[s-(m_h+m_z)^2]^{\frac{1}{2}}`$. Hence the momentum of the outgoing Z boson for a 50 GeV higgs production at $`\sqrt{s}`$ = 160 GeV will be given by $`p_h^z`$ = 37 GeV. On the other hand for a 20 GeV radion emission its momentum will be given by $`p_\varphi ^z`$ = 51 GeV. So by imposing the cut $`p_z>45`$ GeV it should be possible to suppress the SM higgs production for $`m_h\gtrsim 50`$ GeV while still allowing a considerable amount of radion production with a lower mass. The $`\stackrel{~}{\varphi }ZZ`$ coupling is suppressed relative to the hZZ coupling by a factor of $`(\frac{\mathrm{sin}2\theta _w}{e}\frac{m_z}{\langle \varphi \rangle })`$. Hence for a given $`\sqrt{s}`$ the production cross section for $`\stackrel{~}{\varphi }`$ will be comparable to that of the higgs (h) only if $`m_{\stackrel{~}{\varphi }}`$ is light but $`m_h`$ is close to its kinematic limit so that its production is phase space suppressed. We find that at LEPII for a c.m. energy of 189 GeV, $`\sigma _{\stackrel{~}{\varphi }z}\approx `$ .089 pb for $`m_{\stackrel{~}{\varphi }}=20`$ GeV and $`\langle \varphi \rangle `$ = 1 TeV, but $`\sigma _{hz}=.078`$ pb for $`m_h=97`$ GeV. For $`m_{\stackrel{~}{\varphi }}=20`$ GeV and $`m_h=97`$ GeV the branching ratios for $`\stackrel{~}{\varphi }\to b\overline{b}`$ and $`h\to b\overline{b}`$ are both roughly equal to one. Hence in this case the number of $`q\overline{q}b\overline{b}`$ events for example arising from $`e^+e^-\to hZ`$ will be roughly equal to that arising from $`e^+e^-\to \stackrel{~}{\varphi }Z`$. 
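The recoil momenta underlying the proposed cut follow from simple two-body kinematics; a quick check (m<sub>Z</sub> ≈ 91.2 GeV assumed):

```python
import math

def p_recoil(sqrt_s, m_x, m_z=91.2):
    """Momentum (GeV) of the Z recoiling against a scalar of mass m_x."""
    s = sqrt_s**2
    return math.sqrt((s - (m_x - m_z)**2) * (s - (m_x + m_z)**2)) / (2 * sqrt_s)

print(p_recoil(160.0, 50.0))  # ~36-37 GeV for a 50 GeV higgs
print(p_recoil(160.0, 20.0))  # ~51 GeV for a 20 GeV radion
```

The two momenta are well separated, which is what makes the $`p_z>45`$ GeV cut effective at rejecting a heavier higgs while retaining a light radion signal.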
Let us consider the extreme scenario where the Higgs is quite heavy, so that it is kinematically inaccessible at LEPII, but $`m_{\stackrel{~}{\varphi }}`$ lies in the 20 GeV range. In this case one can derive a lower bound on $`m_{\stackrel{~}{\varphi }}`$ from the condition $`S\geq 2\sqrt{S+B}`$, where S is the signal and B is the background. The L3 collaboration at LEPII has placed a lower limit of 96 GeV on the Higgs mass at 95% CL, based on 176 pb<sup>-1</sup> of data collected at $`\sqrt{s}=189`$ GeV. If we make the somewhat unrealistic assumption that the cut efficiencies and background estimates are the same for the radion, then this would imply a lower bound of about 20 GeV on the radion mass. It should however be noted that the optimal cuts necessary to enhance the signal-to-background ratio for a 20 GeV radion will be quite different from those for a 97 GeV Higgs. Therefore, to obtain a realistic bound on $`m_{\stackrel{~}{\varphi }}`$ from LEPII, the different working groups will have to retune their cuts appropriately so that they correspond to light ($`m_\varphi \sim 20`$ GeV) radion detection. Finally, let us consider the implications of LEPI data for the radion mass. At LEPI the Higgs scalar has been searched for via $`Z\to hZ^{*}`$, with the $`Z^{*}`$ going into $`l\overline{l}`$. LEPI has placed a lower bound of about 60 GeV on $`m_h`$ by looking for this decay mode. The branching ratio for this decay mode at $`m_h`$= 60 GeV is around $`1.4\times 10^{-6}`$. The smaller radion coupling to the Z boson implies that the same branching ratio is attained for $`Z\to \stackrel{~}{\varphi }l\overline{l}`$ at $`m_\varphi =35`$ GeV. We therefore think that the present collider data on direct Higgs searches do not rule out the existence of a light radion above 35 GeV. We would like to point out that these values are indications of the likely bounds on the radion mass and should not be considered realistic collider bounds. 
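The significance condition $`S\geq 2\sqrt{S+B}`$ can be inverted in closed form to give the minimum observable signal for a given background, $`S_{min}=2+2\sqrt{1+B}`$; a small sketch (the background value is purely illustrative):

```python
import math

# Solving the significance condition S >= 2*sqrt(S+B) for the minimum signal:
# squaring gives S^2 = 4S + 4B, whose positive root is S_min = 2 + 2*sqrt(1+B).
def s_min(B):
    return 2.0 + 2.0 * math.sqrt(1.0 + B)

B = 100.0                       # hypothetical background count
S = s_min(B)
print(S, 2 * math.sqrt(S + B))  # the two sides coincide at the threshold
```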
More dedicated searches and detailed analyses are necessary to arrive at realistic collider bounds. Conclusions In this paper we have studied some low energy effects of a light stabilized radion in the R-S scenario. We have found that the radion contribution to the anomalous magnetic moment and electric quadrupole moment of the W boson falls beyond the projected sensitivity reach of NLC 500 if $`\varphi `$ = 1 TeV. However, the planned BNL experiment might be able to reach a sensitivity level just adequate for probing the radion contribution to $`a_\mu `$ if the radion mass is around 100 MeV. We have also shown that Higgs-radion mixing through loop corrections induces a shift in the WWh coupling of the order of 2.6%. The size of the radiative corrections arising from radion exchange decreases inversely as $`\varphi ^2`$; therefore, with decreasing $`\varphi `$ the radiative corrections increase rapidly in size and approach the search limits. The dependence of the radiative corrections on the radion mass, however, does not produce any useful bound on it. We have also shown that the presence of a light radion field will cause the renormalization of the $`Zt_R\overline{t}_R`$ coupling to deviate from that in the SM by about 0.1%. Finally, by comparing the radion production cross section at LEPII with that of the Higgs boson, we find that the lower bound of 96 GeV on the Higgs mass suggests a bound of about 20 GeV on the radion mass. The LEPI bound of 60 GeV on $`m_h`$ implies a slightly stronger bound of 35 GeV on $`m_\varphi `$. Acknowledgement We would like to thank Dr. Sandip Trivedi, Dr. Sunanda Banerjee and Dr. Anindya Datta for several helpful discussions during the WHEPP6 working group activity. S. Rakshit would like to thank the Council for Scientific and Industrial Research, India for support during this work. References 1. W.D. Goldberger and M. B. Wise: Phys. Rev. Lett. 83, 4922 (1999); Phys. Rev. D 60, 107505 (1999). 2. L. Randall and R. 
Sundrum: Phys. Rev. Lett. 83, 3370 (1999); Phys. Rev. Lett. 83, 4690 (1999). 3. W. D. Goldberger and M. B. Wise, hep-ph/9911457; C. Csaki, M. Graesser, L. Randall and J. Terning, hep-ph/9911406. 4. K. Hagiwara, R. Peccei, D. Zeppenfeld and K. Hikasa, Nucl. Phys. B 282, 253 (1987). 5. Tests of alternative models at a 500 GeV NLC, ENSLAPP-A-365/92, published in “$`e^+e^{-}`$ collisions at 500 GeV: the physics potential”, ed. P. Zerwas, DESY, 1992. 6. C. Caso et al. (Particle Data Group): Review of Particle Physics, Euro. Phys. J. C3, 1 (1998). 7. E821 BNL collaboration (B. C. Roberts et al.), published in the proceedings of the XXVIII International Conference on High Energy Physics, Warsaw, Poland, 1996 (River Edge, NJ: World Scientific, 1997), p. 1035. 8. Search for the SM Higgs boson in $`e^+e^{-}`$ interactions at $`\sqrt{s}`$= 189 GeV, L3 collaboration, CERN-EP/99-080.
# Extended X-ray emission from FRIIs and RL quasars ## 1. Introduction It is well known that the synchrotron radio emission from the extended lobes of strong radio galaxies and radio-loud (RL) quasars samples ultra-relativistic electrons. It is customary to estimate the average magnetic field intensity via the minimum energy argument by making use of the emitted radio flux in the 10 MHz – 100 GHz band (source rest frame). Since the critical frequency emitted in the synchrotron process is $`\nu (MHz)\simeq B(\mu G)(\gamma /10^3)^2`$ and typically $`B(\mu G)>10`$, it follows that only electrons with Lorentz factor $`\gamma >10^3`$ are taken into account. These electrons are also responsible for producing X-rays via the inverse Compton (IC) scattering of the cosmic microwave background (CMB) photons \[$`ϵ(keV)\simeq (\gamma /10^3)^2`$\]. When these X-rays are detected, the number of ultra-relativistic electrons is fixed and from the radio flux one can uniquely determine the average magnetic field intensity. Up to now, because of the weakness and diffuse nature of the predicted X-ray emission, detection of X-ray fluxes due to this process has been possible for a few sources only, notably Fornax A (Feigelson et al. 1995; Kaneda et al. 1995) and Cen B (Tashiro et al. 1998); the derived magnetic field intensities are lower than the classical equipartition values by factors of 1.5–2. Brunetti, Setti & Comastri (1997) have pointed out that sizeable X-ray fluxes can also be emitted by the IC scattering of relativistic electrons in the radio lobes of FRIIs with the IR photons from a quasar, and associated circumnuclear dusty/molecular torus, hidden in the galaxy’s nucleus. Since the IR emission peaks at $`50-100\mu m`$, electrons at lower energies ($`\gamma <500`$) are involved in this process. Of course there is no reason why these lower-energy particles should not be present; on the contrary, one would expect them on physical grounds based on acceleration and loss mechanisms. 
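The two scalings above fix which part of the electron spectrum each band samples; a quick order-of-magnitude sketch (the field value is an assumption consistent with the quoted $`B(\mu G)>10`$, not a number from the paper):

```python
import math

# Order-of-magnitude check of which electron energies the radio band and the
# IC/CMB X-rays sample, using the scalings quoted in the text:
# nu(MHz) ~ B(muG)*(gamma/1e3)^2  and  eps(keV) ~ (gamma/1e3)^2.
B_muG = 10.0  # assumed lobe field, illustrative only

def gamma_from_radio(nu_MHz, B_muG):
    """Lorentz factor of electrons emitting synchrotron at nu_MHz in field B_muG."""
    return 1e3 * math.sqrt(nu_MHz / B_muG)

def gamma_from_ic_cmb(eps_keV):
    """Lorentz factor of electrons upscattering CMB photons to eps_keV."""
    return 1e3 * math.sqrt(eps_keV)

# 10 MHz -- 100 GHz radio band (source rest frame):
g_lo = gamma_from_radio(10.0, B_muG)   # ~1e3
g_hi = gamma_from_radio(1e5, B_muG)    # 100 GHz = 1e5 MHz -> ~1e5
# 1 keV IC/CMB X-rays sample the same gamma ~ 1e3 electrons:
g_x = gamma_from_ic_cmb(1.0)
print(g_lo, g_hi, g_x)
```

This makes explicit why the $`\gamma <500`$ electrons invoked for IC scattering of IR photons are invisible to the radio band.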
In order to estimate the size of the expected X-ray flux, for a given quasar IR emission, one may work out the equipartition by extrapolating downward the electron spectrum derived from the synchrotron emission to a minimum energy ($`\gamma _{min}`$) limited from below by possible Coulomb losses. The equipartition fields ($`B_{eq}`$) so derived are stronger than the classical one by factors from 1.5 to 3 (see also Setti, Brunetti & Comastri 1999). We have shown that the IC scattering of the IR nuclear photons may easily account for a large fraction of the extended X-ray emission of several powerful FRIIs at large redshifts ($`z\sim 1`$) detected by ROSAT in the 0.1–2.4 keV interval. Morphologically there are two important aspects that should be mentioned: firstly, for obvious geometrical reasons, the X-rays from the IC scattering of the quasar photons are more concentrated toward the nuclear region than those from the IC scattering of the CMB photons and, secondly, given two symmetrical radio lobes, the X-ray emission from the far lobe can be much larger than that from the near one, depending on the orientation of the radio axis with respect to the line of sight, due to the enhanced efficiency of head-on scatterings (Brunetti et al. 1997). It should also be mentioned that, while the X-rays from the IC scattering of the CMB photons must have a spectral slope coincident with that of the synchrotron radio emission, the X-ray spectrum associated with the IC scattering of the nuclear photons may or may not have the synchrotron slope, simply because a different portion of the primary electron spectrum is being sampled (see also Brunetti 2000). Direct evidence of extended X-ray emission from the IC scattering of the IR photons from a hidden quasar has been gathered by Brunetti et al. (1999) making use of ROSAT HRI observations of the powerful, double-lobed radio galaxy 3C 219. 
The residual X-ray distribution after subtraction of the absorbed, unresolved nuclear source is remarkably coincident with the radio structure. The central extended ($`\sim `$ 100 kpc) X-ray emission, somewhat stronger on the counter-jet side as expected in our model, can be accounted for by assuming a magnetic field $`\sim 3`$ times weaker than our equipartition value ($`B_{eq}\simeq 10\mu G`$, $`\gamma _{min}=50`$). Of course this estimate depends on the assumed IR power of the hidden quasar, which we have derived by two, albeit indirect, approaches since 3C 219 has not been detected by IRAS and not observed by ISO. Observations with Chandra, scheduled for the fall of 2000, will likely provide a check of our model. ## 2. IC X-rays from the radio quasars 3C 215 and 3C 334 Extended X-ray emission around RL quasars has recently been discovered from the analysis of ROSAT HRI observations (Crawford et al. 1999; Hardcastle & Worrall 1999). The origin of the extended and rather weak component (5–15% of the total intensity) is usually ascribed to thermal emission from the surrounding intracluster medium (ICM), although a significant contribution from the IC scattering of the quasar’s photons predicted by our model cannot be excluded (Crawford et al. 1999). In order to test the IC hypothesis we have carried out a detailed analysis of the spatial profiles of the sources in the Crawford et al. (1999) sample. The data retrieved from the public archive have been analyzed following a procedure similar to that in Crawford et al. (1999), but the azimuthal distribution has been investigated with the aim of checking for a possible correlation with the radio structure. Accordingly, the source counts have been subdivided into four quadrants centered on the quasar (X-ray source peak) and so oriented that two opposite quadrants are aligned with the radio axis defined by the direction of the innermost radio lobes. 
We have then compared the source counts in the two quadrants along the radio axis with those collected in the perpendicular direction. We find evidence of X-ray extension spatially correlated with the radio axis in two quasars: 3C 215 and 3C 334. For 3C 48, 254 and 273 our analysis is inconclusive, since their radio angular sizes are smaller than, or comparable to, the HRI PSF. For 3C 215 (Fig. 1) the effect is strong: a KS test rejects at the 99.9% level the hypothesis that the count distributions along (filled dots) and perpendicular to (open squares) the radio axis are extracted from the same population. In the case of 3C 334 (not shown here), although the count distribution along the radio axis systematically exceeds that in the perpendicular direction, the effect is statistically marginal (the same KS test gives $`92\%`$). In order to check whether the elongation on a $`10`$ arcsec scale could be due to an intrinsic elongation of the HRI PSF and/or to an insufficient correction of the wobbling, we have analyzed several isolated stars (companion at least 6 mag fainter) with similar count statistics extracted from the RASSDWARF catalogue (Huensch et al. 1998): no evidence of asymmetric distributions has been found. Moreover, the count profiles of 3C 215 and 334 in the direction perpendicular to the radio axis are consistent with those of spatially unresolved sources. The X-ray fluxes (0.1–2.4 keV, spectral index $`\alpha =1`$) associated with the extended structures have been estimated by subtracting the counts within the quadrants in the direction perpendicular to the radio axis from those within the quadrants aligned with the radio axis: the luminosity of 3C 215 is $`1.5\times 10^{45}`$ erg s<sup>-1</sup>, of which $`2.2\times 10^{44}`$ erg s<sup>-1</sup> is in the extended component, while for 3C 334 one has $`10^{45}`$ erg s<sup>-1</sup> and $`10^{44}`$ erg s<sup>-1</sup>, respectively \[$`H_o=75`$ km/s/Mpc, $`q_o=0`$\]. 
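The quadrant comparison above amounts to a two-sample KS test on the radial count distributions. A self-contained sketch of that statistic (the count samples below are invented for illustration; the real analysis uses the HRI counts):

```python
import math

# Sketch (pure Python, invented data) of the two-sample KS comparison described
# in the text: counts binned along vs. perpendicular to the radio axis.
def ks_statistic(a, b):
    """Maximum distance between the empirical CDFs of samples a and b."""
    a, b = sorted(a), sorted(b)
    grid = sorted(set(a) | set(b))
    cdf = lambda s, x: sum(1 for v in s if v <= x) / len(s)
    return max(abs(cdf(a, x) - cdf(b, x)) for x in grid)

def ks_critical(n, m, alpha):
    """Asymptotic critical value: reject equality of the parents if D exceeds it."""
    c = math.sqrt(-0.5 * math.log(alpha / 2.0))
    return c * math.sqrt((n + m) / (n * m))

# Invented radial positions (arcsec) of detected counts in the two quadrant pairs:
along = [1, 2, 2, 3, 4, 5, 6, 8, 9, 12] * 10   # broader radial distribution
perp  = [1, 1, 2, 2, 2, 3, 3, 3, 4, 4] * 10    # PSF-like, more concentrated
D = ks_statistic(along, perp)
print(D > ks_critical(len(along), len(perp), 0.001))
```

A `True` result at $`\alpha =0.001`$ corresponds to the 99.9%-level rejection quoted for 3C 215.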
Knowledge of the quasar IR radiation is of crucial importance for the computation of the expected IC X-ray fluxes. Unfortunately no FIR data are available for 3C 215, while 3C 334 has been observed by IRAS, with a 60 $`\mu m`$ luminosity of $`10^{46}`$ erg s<sup>-1</sup> (van Bemmel et al. 1998). By adopting typical quasar SEDs we estimate a $`1-100\mu m`$ luminosity of $`10^{46}`$ erg s<sup>-1</sup> and $`4\times 10^{46}`$ erg s<sup>-1</sup> for 3C 215 and 3C 334, respectively. Following the model of Brunetti et al. (1997) we find that the magnetic field intensities required to fully account for the X-ray fluxes of the extended components are $`\sim 5`$ and $`\sim 3`$ times smaller than $`B_{eq}`$ for 3C 215 and 334, respectively. In each source $`B_{eq}`$ has been calculated by extrapolating the power-law electron spectrum derived from the low-frequency radio spectrum downward to $`\gamma _{min}=50`$. It should be pointed out that by applying the standard equipartition we would have obtained factors of $`2.6`$ (3C 215) and $`2`$ (3C 334) below the corresponding equipartition fields, but this would be conceptually wrong. ## 3. Conclusions There is supportive observational evidence of extended X-ray emission from the IC scattering of quasar IR photons with relativistic electrons in the lobes of powerful radio galaxies. Besides being a confirmation of the FRII–quasar unification, this may provide an important tool for the diagnostics of the relativistic plasma at particle energies not sampled by radio observations. The unavoidable presence of lower-energy particles implies stronger magnetic fields than derived by standard equipartition formulae and, consequently, a larger pressure inside the lobes. Moreover, accounting for the extended X-ray emission in sources for which the quasar radiation can be constrained indicates magnetic field strengths lower than equipartition. 
Therefore, confirmation of our IC model by Chandra and XMM satellites may provide important clues on the physics and evolution of radio sources. ### Acknowledgments. It is a pleasure to thank C.S. Crawford, A.C. Fabian and I. Lehmann for informative discussions concerning 3C 215 and 3C 334. ## References Brunetti G., 2000, APh 13, 105 Brunetti G., Setti G., Comastri A., 1997, A&A 325, 898 Brunetti G., et al., 1999, A&A 342, 57 Crawford C.S., et al., 1999, MNRAS 308, 1159 Feigelson E.D., et al., 1995, ApJL 449, 149 Hardcastle M.J., Worrall D.M., 1999, MNRAS 309, 969 Huensch M., Schmitt J.H.M.M., Voges W., 1998, A&AS 132, 155 Kaneda H., et al., 1995, ApJL 453, 13 Setti G., Brunetti G., Comastri A., 1999, in ’Diffuse Thermal and Relativistic Plasma in Galaxy Clusters’, eds. H.Böhringer, L.Feretti, P.Schuecker, MPE Report 271, p.55 Tashiro M., et al., 1998, ApJ 499, 713 van Bemmel I.M., Barthel P.D., Yun M.S., 1998, A&A 334, 799
# Hysteresis effect due to the exchange Coulomb interaction in short-period superlattices in tilted magnetic fields ## Abstract We calculate the ground state of a two-dimensional electron gas in a short-period lateral potential in a magnetic field, with the Coulomb electron-electron interaction included in the Hartree-Fock approximation. For a sufficiently short period the dominant Coulomb effects are determined by the exchange interaction. We find numerical solutions of the self-consistent equations that have hysteresis properties when the magnetic field is tilted and increased, such that the perpendicular component is always constant. This behavior is a result of the interplay of the exchange interaction with the energy dispersion and the spin splitting. We suggest that hysteresis effects of this type could be observable in magneto-transport and magnetization experiments on quantum-wire and quantum-dot superlattices. A well known manifestation of the Coulomb exchange interaction in a two-dimensional electron gas (2DEG) in a perpendicular magnetic field is the enhancement of the Zeeman splitting for odd-integer filling factors, observable in magnetotransport experiments on GaAs systems. The same mechanism leads to the enhancement of the Landau gaps for even-integer filling factors, which can be identified in more recent magnetization measurements. In the presence of a periodic potential the Landau levels become periodic Landau bands, and calculations based on the Hartree-Fock approximation (HFA) show an enhancement of the energy dispersion of the bands intersected by the Fermi level. Such an effect has been indirectly observed in the magnetoresistance of short-period superlattices as an abrupt onset of the spin splitting of the Shubnikov-de Haas peaks, occurring only for a sufficiently strong magnetic field. 
In other words, when the magnetic field increases the system makes a first-order phase transition from spin-unpolarized to spin-polarized states. This effect has also been discussed in other forms, for narrow quantum wires, and for edge states. In the spirit of the HFA, the Coulomb interaction can be split into a direct and an exchange component. The direct (classical) interaction is repulsive (i. e. positive) and long ranged, while the exchange (quantum mechanical) part is attractive (i. e. negative) and short ranged. The direct component is usually much larger than the exchange one. In our system this is decided by the two lengths involved, the superlattice (or modulation) period $`a`$, and the magnetic length $`\ell =\sqrt{\mathrm{\hbar }/eB_0}`$ determined by the perpendicular magnetic field $`B_0`$. For long periods, $`a\gg \ell `$, the screening (direct) effects are strong: the width of the Landau bands is typically much smaller than the amplitude of the periodic potential, except when a gap is eventually present at the Fermi level. For periods of the order of $`\ell `$ the situation becomes the opposite: the screening effect is weak, the exchange interaction is the dominant Coulomb manifestation, and the energy dispersion of the Landau bands may exceed the amplitude of the periodic potential if the latter is small enough. In a recent paper we have studied the numerical solutions of the Hartree-Fock equations in the presence of short-period potentials. After preparing the solution for a fixed potential we change the potential amplitude by a small amount and find a new, perturbed solution; then we change the amplitude again, and repeat the scheme. In this way, by increasing and then decreasing the amplitude, we obtain a hysteretic evolution of the ground state due to the combined effects of the external potential and of the exchange interaction on the energy dispersion of the Landau bands. 
In the present paper we consider a fixed modulation amplitude, but a tilted magnetic field, such that we include in the problem, self-consistently, the Zeeman splitting of the Landau bands. We hereby intend to suggest further experiments that can identify strong effects of the Coulomb exchange interaction. The material parameters are those for GaAs: effective mass $`m_{\mathrm{eff}}=0.067m_e`$, dielectric constant $`\kappa =12.4`$, bare g-factor $`g=0.44`$, and electron concentration $`n_s=2.4\times 10^{11}`$ cm<sup>-2</sup>. We fix the component of the magnetic field perpendicular to the 2DEG, $`B_0`$, which determines our filling factors, while the bare Zeeman splitting is given by the total field $`B=B_0/\mathrm{cos}\varphi `$, where $`\varphi `$ is the tilt angle. We first consider a periodic potential varying only along one spatial direction, $`V\mathrm{cos}Kx`$, where $`K=2\pi /a`$, and solve for the eigenstates of the Hamiltonian within the thermodynamic HFA. We choose the Landau gauge for the vector potential and diagonalize the Hamiltonian in the Landau basis $`\psi _{nX_0}(x,y)=L_y^{-1/2}e^{iX_0y/\ell ^2}f_n(x-X_0)|\sigma \rangle `$, where $`X_0`$ is the so-called center coordinate, $`L_y`$ is the linear dimension of the 2DEG, $`f_n(x-X_0)`$ are shifted oscillator wave functions, and $`\sigma =\pm 1`$ is the spin projection. We begin our calculations with $`\varphi =0`$, and find the numerical HFA eigenstates by an iterative method, starting from the noninteracting solution. Then we increase $`\varphi `$ and find a new solution starting from the previous one. In Fig. 1(a) we show a typical energy spectrum, i. e. the Landau bands $`E_{n\sigma X_0}`$, $`n=0,1,2,\dots `$, within the first Brillouin zone, for a small tilt angle, $`\varphi <\varphi _1`$. Here $`B_0=4.1`$ T, and the parameters of the external potential are $`a=40`$ nm and $`V=9`$ meV. 
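For reference, the noninteracting Landau bands that seed the iteration can be written down in first-order perturbation theory for the $`V\mathrm{cos}Kx`$ modulation: $`E_n(X_0)=\mathrm{\hbar }\omega _c(n+1/2)+Ve^{-u/2}L_n(u)\mathrm{cos}KX_0`$ with $`u=K^2\ell ^2/2`$ and $`L_n`$ a Laguerre polynomial. This standard result is stated here as an assumption, since the text does not write it out; a sketch with the paper's parameters:

```python
import math

# Noninteracting starting point for the HF iteration (standard first-order
# result for a V*cos(Kx) modulation, assumed here; not written out in the text):
# E_n(X0) = hbar*w_c*(n + 1/2) + V * exp(-u/2) * L_n(u) * cos(K*X0),
# with u = (K*l)^2 / 2, l the magnetic length.
def laguerre(n, x):
    """Laguerre polynomial L_n(x) via the standard three-term recurrence."""
    L0, L1 = 1.0, 1.0 - x
    if n == 0:
        return L0
    for k in range(1, n):
        L0, L1 = L1, ((2 * k + 1 - x) * L1 - k * L0) / (k + 1)
    return L1

def landau_band(n, X0, V=9.0, a=40.0, B0=4.1):
    """Band energy in meV; lengths in nm, B0 in tesla, GaAs effective mass."""
    hbar_wc = 1.7279 * B0        # hbar*e*B/m* in meV per tesla for m* = 0.067 m_e
    l2 = 658.2 / B0              # magnetic length squared hbar/(e*B), in nm^2
    K = 2 * math.pi / a
    u = K**2 * l2 / 2.0
    return hbar_wc * (n + 0.5) + V * math.exp(-u / 2) * laguerre(n, u) \
           * math.cos(K * X0)

print([round(landau_band(n, 0.0), 2) for n in range(3)])
```

In the self-consistent HFA these bands are further deformed by the exchange energy, which is the effect the paper focuses on.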
In a simplified view, the exchange interaction contributes a negative amount of energy to the occupied states, which enhances the energy dispersion in the vicinity of the Fermi level. The classical Hartree (positive) energy is small in our case, but it would increase with increasing modulation period and would rapidly flatten the energy bands. Also, the spin splitting is almost suppressed for $`\varphi <\varphi _1`$. However, for a sufficiently high field, when $`\varphi =\varphi _1`$, the difference in population of the spin-up and spin-down bands exceeds a critical value, and the spin gap is abruptly amplified by the exchange energy. The spin-up states become self-consistently more populated and lower in energy. The energy spectrum becomes like in Fig. 1(b), and keeps this structure when $`\varphi `$ further increases. Then, we decrease $`\varphi `$ step by step. For low temperatures we find for $`\varphi =0`$ the solution with a large spin gap, similar to Fig. 1(b), while for higher temperatures we may find a transition to the spin-unpolarized state, Fig. 1(a), at $`\varphi _2<\varphi _1`$. We show in Fig. 2 the spin polarization, $`(n_{\uparrow }-n_{\downarrow })/(n_{\uparrow }+n_{\downarrow })`$, for two temperatures, when $`\varphi _2>0`$. We consider here the temperature only as an effective parameter, that may also include the effects of a certain disorder, inherent in any real system. Clearly, in the presence of disorder similar results will appear at lower temperatures. We have explicitly included disorder in a transport calculation, by assuming Gaussian spectral functions, $`\rho _{n\sigma }(E)=(\mathrm{\Gamma }\sqrt{\pi /2})^{-1}\mathrm{exp}[-2(E-E_{n\sigma })^2/\mathrm{\Gamma }^2]`$, where $`\mathrm{\Gamma }`$ is the Landau level broadening. We have calculated the conductivity tensor $`\sigma _{\alpha \beta }`$, $`\alpha ,\beta =x,y`$, using the standard Kubo formalism for the modulated 2DEG. In Fig. 
3 we show the hysteresis loops for the longitudinal resistivities $`\rho _{xx,yy}=\sigma _{yy,xx}/(\sigma _{xx}\sigma _{yy}+\sigma _{xy}^2)`$. In our regime $`\sigma _{xy}^2\gg \sigma _{xx}\sigma _{yy}`$, such that $`\rho _{xx,yy}`$ are in fact proportional to $`\sigma _{yy,xx}`$. Also, the conductivity in the $`y`$ direction is dominated by the quasi-free net electron motion along the equipotential lines of the modulation, with group velocity $`<v_y>=(eB_0)^{-1}dE_{nX_0}/dX_0`$, known as band conductivity, while the conductivity in the $`x`$ direction is related to inter-band scattering processes (scattering conductivity). The band and the scattering conductivities are inversely and directly proportional, respectively, to a power of the density of states at the Fermi level (DOSF). In the transition between spin-unpolarized and spin-polarized states the Fermi level touches the minima of the band $`E_1`$, where the DOSF has a van Hove singularity. Therefore, for that situation the band conductivity, and thus $`\rho _{xx}`$, have a minimum, whereas the scattering conductivity, and thus $`\rho _{yy}`$, have a maximum, see Fig. 3. Similar hysteresis effects can be found in a 2DEG that is modulated in two perpendicular spatial directions. However, in this case the picture is further complicated by the presence of the Hofstadter gaps and their interplay with the spin gaps. Then, since the dispersion of the Landau bands is essential for the hysteresis, another complication with the two-dimensional potential is that for an asymmetric unit cell the behavior of the system may not be the same when the magnetic field is tilted towards the $`x`$ or towards the $`y`$ axis of the plane, reflecting the anisotropy of the Brillouin zones. These details are not addressed in this paper. 
In principle, the effects discussed in the present paper should also occur in narrow quantum wires or dots, and not necessarily only in periodic systems, as long as Landau bands with both flat and steep regions are generated, as in Fig. 1. In wide wires or dots the electrostatic screening is expected to dominate the exchange interaction, just like in long-period superlattices, and thus the Landau bands are smooth, except at the edges. Rijkels and Bauer have also predicted hysteresis effects in the edge channels of quantum wires when a small chemical-potential difference between the spin-up and spin-down channels can be controlled. To our knowledge an experimental confirmation has not been reported yet. Instead, several groups have built short-period superlattices for transport and other experiments, which can also be used to check our predictions. In conclusion, we have found a hysteresis property of the numerical solution of the thermodynamic HFA, with its physical origin in the exchange effects of the Coulomb interaction in the quantum Hall regime, in the presence of a short-period potential, when the Zeeman splitting is changed by tilting the magnetic field with respect to the 2DEG. We suggest that such effects could be observed, e. g., in magnetization or magnetotransport measurements. A. M. was supported by a NATO fellowship at the Science Institute, University of Iceland. The research was partly supported by the Icelandic Natural Science Foundation, and the University of Iceland Research Fund.
# MOLECULAR DYNAMICS STUDY OF THE GLASS TRANSITION IN CONFINED WATER P. Gallo and M. Rovere Dipartimento di Fisica, Università di Roma Tre, and Istituto Nazionale per la Fisica della Materia, Unità di Ricerca Roma Tre, Via della Vasca Navale 84, I-00146 Roma, Italy Abstract. A molecular dynamics simulation of SPC/E water confined in a silica pore is presented. The pore has been constructed to reproduce the average properties of a pore of Vycor glass. Due to the confinement and to the presence of a strongly hydrophilic surface, the dynamic behaviour of the liquid appears to be strongly dependent on the hydration level. The approach of confined water to the glass transition is investigated on lowering hydration and on supercooling, in the framework of Mode Coupling Theory. At higher hydrations two quite distinct subsets of water molecules are detectable. Those belonging to the first layer close to the substrate suffer a severe slowing down, while the remaining ones display a scenario typical of supercooled liquids approaching the kinetic glass transition. 1. INTRODUCTION The study of the modification of the properties of confined water with respect to the bulk is highly interesting, since in many technological and biological applications water is confined in porous media. Several experimental studies and computer simulations give evidence that the perturbation of the substrate and the geometrical confinement change the properties at freezing, the mobility and the dynamical behaviour of water. Particularly attractive is the study of confined water in the supercooled region. It is well known that below $`235`$ K the crystallization process driven by homogeneous nucleation prevents the observation of a transition to a glass phase of water. Experimentally forbidden regions of the phase diagram of bulk water could become accessible through the study of confined water. 
Molecular dynamics (MD) can be a suitable tool for exploring the approach of water to vitrification. In recent years MD simulations of the Simple Point Charge/Extended (SPC/E) water model potential in the supercooled phase found a kinetic glass transition at a critical temperature $`T_C`$, as defined in the Mode Coupling Theory (MCT), with $`T_C\simeq T_S`$, where $`T_S`$ is the singular temperature of water, $`T_S=228`$ K, or, for SPC/E, $`49`$ degrees below the temperature of maximum density. In many experimental studies on confined or interfacial water a slowing down of the dynamics of the liquid with respect to bulk water has been found with inelastic neutron scattering and NMR spectroscopy. It has been inferred from experiments that confining water could be equivalent to supercooling the bulk. Reduced self-diffusion coefficients of water in contact with solid hydrophilic surfaces, when compared to bulk water, are found in some of the computer simulation studies, but it is still difficult to find general trends in the dynamical properties of confined water and there are no systematic studies in the supercooled region. Among the different systems studied experimentally, water confined in porous Vycor glass is one of the most interesting, with relevance to catalytic processes and enzymatic activity. Vycor is a porous silica glass with a quite well characterized structure of cylindrical pores and a quite sharp distribution of pore sizes around the average value of $`40\pm 5\AA `$. Several experiments on water-in-Vycor have been performed. We present here results obtained from MD simulations of the single-particle dynamics of SPC/E water confined in a cylindrical silica cavity modeled to represent the average properties of Vycor pores. The SPC/E potential used for the water molecules models a single molecule as a rigid set of interaction sites with an OH distance of $`0.1`$ nm and an $`HOH`$ angle equal to the tetrahedral angle $`109.47^{\circ }`$. 
The Coulomb charges are placed on the atoms. Oxygen atoms additionally interact via a Lennard-Jones potential. SPC/E is particularly suitable for the study of dynamics in the supercooled region, since it has been explicitly parametrized to reproduce not only the density but also the experimental value of the self-diffusion constant at ambient conditions, and moreover it is able to reproduce the temperature of maximum density of water. In our study we will focus on the dynamics of confined water at room temperature, where both on lowering the hydration and on supercooling we observe a splitting of the time scales in the relaxation laws. 2. SLOW DYNAMICS OF WATER CONFINED IN VYCOR In our simulation we build a cubic cell of silica glass of side $`71\AA `$ with a cylindrical cavity of diameter $`40\AA `$, as described in detail in previous works. The inner surface of the cylinder is then corrugated by removing all the silicon atoms bonded to fewer than four oxygens. The oxygen dangling bonds are then saturated with acidic hydrogens, in analogy with the experimental preparation of the sample of Vycor before hydration. Water molecules interact with the substrate atoms through an empirical potential model. The periodic boundary conditions are applied along the axis of the cylinder. The molecular dynamics calculations have been performed for different numbers of SPC/E water molecules, introduced in the pore, corresponding to different levels of hydration: $`N_W=2600`$ corresponding to a $`96\%`$ hydration level of the pore, $`N_W=2000`$ ($`74\%`$), $`N_W=1500`$ ($`56\%`$), $`N_W=1000`$ ($`37\%`$) and $`N_W=500`$ ($`19\%`$). We will discuss here the results obtained for two temperatures: $`T=298`$ K for all the hydrations and $`T=240`$ K for $`N_W=1500`$. A more detailed report will be published elsewhere. As a first result we show in Fig. 1 the density profiles along the pore radius. 
Already at lower hydrations the presence of a layer of water molecules wetting the substrate surface is observed. At nearly full hydration two layers of water with higher than bulk density are evident. There is a strong tendency of water molecules close to the surface to form hydrogen bonds (HB) with the atoms of the substrate . In the MCT description of a liquid approaching the glass transition the dynamic behaviour is governed by the “cage effect”: each molecule is trapped in the transient cage formed by its nearest neighbours. Signatures of MCT behaviour can be found in the intermediate scattering function, ISF $`F_S(Q,t)`$, which will be shown below for our system. We expect to observe a diversification of the relaxation times resulting in the development of a shoulder in the relaxation laws. In the framework of MCT the long time tail of the ISF is predicted to have a stretched exponential behaviour when the system approaches the glass transition. In Fig.2 we present the ISF for the water oxygens at the oxygen-oxygen peak of the structure factor. The ISF are displayed at room temperature for the different hydrations. Upon decreasing hydration level the development of the shoulder in the relaxation laws is evident. All the tails of the correlators of Fig.2 are highly non-exponential. Nonetheless these ISF could not be fitted to the same formula used for bulk supercooled water , $$F_S(Q,t)=\left[1-A(Q)\right]e^{-\left(t/\tau _s\right)^2}+A(Q)e^{-\left(t/\tau _l\right)^\beta }$$ (1) where $`A(Q)=e^{-a^2Q^2/3}`$ is the Lamb-Mössbauer factor (the analogue of the Debye-Waller factor for the single particle) arising from the cage effect, $`\tau _s`$ and $`\tau _l`$ are, respectively, the short and the long relaxation times and $`\beta `$ is the Kohlrausch exponent. Due to the strong hydrophilicity of the pore we observe a diversification of dynamic behaviour as we proceed from the pore surface to the center of the pore. 
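Eq. (1) is straightforward to code as a fitting model; a minimal sketch, where `isf_model` is an illustrative name and `a` is the localization length entering the Lamb-Mössbauer factor:

```python
import numpy as np

def isf_model(t, q, a, tau_s, tau_l, beta):
    """Eq. (1): short-time Gaussian decay plus a stretched-exponential
    (Kohlrausch) alpha relaxation, weighted by the Lamb-Moessbauer
    factor A(Q) = exp(-a^2 q^2 / 3) arising from the cage effect."""
    A = np.exp(-a**2 * q**2 / 3.0)
    return (1.0 - A) * np.exp(-(t / tau_s)**2) + A * np.exp(-(t / tau_l)**beta)
```

Fed to a standard least-squares routine (e.g. `scipy.optimize.curve_fit`) with `t` in ps, this yields the $`\tau _s`$, $`\tau _l`$ and $`\beta `$ values discussed below.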
The function $`F_S(Q,t)`$ is therefore split into the contribution coming from the two layers of water molecules closest to the pore surface (outer shells) and the contribution coming from all the remaining ones (inner shells), as displayed in Fig.1. The shell contribution is given only by the particles that move in the selected shell. We find that the inner shell contribution can be perfectly fitted with eq.1, while the outer shell contribution decays to zero over a much longer timescale, so that water molecules there already behave as a glass. The double step relaxation observed for the lowest hydration deserves some comment. In fact for $`N_W=500`$ the surface coverage is not complete and patches of water molecules are visible along the pore surface. The MCT is able to account also for the dynamics of clusters in a frozen environment and this might be the reason for the development of the diversification of the relaxation law. From our analysis of the ISF it turns out that the behaviour of molecules belonging to the first hydration layers changes as complete surface coverage is achieved. In Fig.3 we show an example of the shell analysis for T=240 K and $`N_W=1500`$. The top curve is the contribution to the total ISF coming from the molecules in the two shells closest to the substrate. The bottom curve is the contribution from the molecules in the remaining shells. The central one is the total ISF. The dashed line is the fit to eq.1. From the fit we obtain $`\beta =0.62`$, $`\tau _l=11`$ $`ps`$, $`\tau _s=0.16`$ $`ps`$. The $`\tau _l`$ is similar to that of bulk water at the same temperature, while the $`\beta `$ is much lower. For the inner shells a bump around $`0.7ps`$ is also observed that could possibly be related to the existence of the Boson Peak feature in the $`S(q,\omega )`$ . This bump also appears in Fig.2 for the lowest hydration level investigated, namely $`19`$%. 
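The shell decomposition described above — counting only the particles that move in the selected radial shell — can be sketched for the self part of the ISF. The strict "stays in the shell for the whole window" criterion and the isotropic sinc average are our assumptions about the exact procedure:

```python
import numpy as np

def shell_isf(traj, q, r_lo, r_hi):
    """Self intermediate scattering function restricted to one radial shell.

    traj : (n_frames, n_particles, 3) coordinates; the pore axis is z.
    Only particles whose radial distance stays within [r_lo, r_hi) over the
    whole window contribute."""
    r = np.hypot(traj[..., 0], traj[..., 1])            # (n_frames, n_particles)
    in_shell = np.all((r >= r_lo) & (r < r_hi), axis=0)
    sub = traj[:, in_shell, :]
    if sub.shape[1] == 0:
        return np.full(traj.shape[0], np.nan)
    dr = np.linalg.norm(sub - sub[0], axis=-1)          # |r(t) - r(0)|
    # Isotropic average sin(q*dr)/(q*dr); np.sinc(x) = sin(pi*x)/(pi*x).
    return np.sinc(q * dr / np.pi).mean(axis=1)
```

Summing the outer- and inner-shell contributions, weighted by the number of particles in each subset, recovers the total ISF shown as the central curve of Fig.3.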
In Fig.4 the $`\tau _l`$ and the $`\beta `$ values extracted from the fits to eq.1 are plotted as a function of $`Q`$ for T=240 K and $`N_W=1500`$. The $`\beta `$ value reaches a plateau and the $`\tau _l`$ values show a $`Q^2`$ dependence. Both these behaviours are found in glass formers undergoing a kinetic glass transition; in particular the $`Q^2`$ behaviour has been observed for example in glycerol close to the glass transition . However other behaviours of the relaxation times as a function of $`Q`$ have also been found .

3. CONCLUDING REMARKS

We report some of the results achieved in the computer simulation of the dynamics of water molecules confined in a silica pore. The substrate is carefully modeled to reproduce the main features of the hydrophilic cavity of Vycor glass. In the water density profile we observe at high hydration levels a double layer structure which strongly influences the dynamics of the molecules. An analogy between supercooled bulk water and confined water as a function of the hydration level of the pore is possible, in the sense that upon decreasing the hydration level a glassy behaviour appears already at ambient temperature. Nonetheless the approach of the confined liquid to the kinetic glass transition is rather different from that of its bulk phase. In particular, two quite distinct subsets of water molecules are detectable for confined water. The subset that is in contact with the surface has a higher density than the bulk and is already a glass with low mobility even at ambient temperature. The inner subset displays, like a supercooled liquid, a two step relaxation behaviour. The shape of the ISF long time tail, also called the late part of the $`\alpha `$-relaxation region, can be perfectly fitted to a stretched exponential function. The $`\beta `$ and $`\tau _l`$ behaviours are consistent with the values extracted for other glass formers in the literature. 
More simulations on this system as a function of temperature and hydration are in progress for a full MCT test and a complete comparison with the bulk. The relation between the two types of subsets and the so called free and bound water found in several experiments on confined water at freezing is worth exploring in the future.

Acknowledgments

The authors wish to thank M.A. Ricci and E. Spohr for their contribution to this work.
# The power spectral properties of the Z source GX 340+0

## 1. Introduction

GX 340+0 is a bright low-mass X-ray binary (LMXB) and a Z source (Hasinger & van der Klis 1989). The Z-shaped track traced out by Z sources in the X-ray color-color diagram or hardness-intensity diagram (HID) is divided into three branches: the horizontal branch (HB), the normal branch (NB), and the flaring branch (FB). The power spectral properties and the HID of GX 340+0 were previously described by van Paradijs et al. (1988) and Kuulkers & van der Klis (1996) using data obtained with the EXOSAT satellite, by Penninx et al. (1991) using data obtained with the Ginga satellite, and by Jonker et al. (1998) using data obtained with the Rossi X-ray Timing Explorer (RXTE) satellite. An extra branch trailing the FB in the HID has been described by Penninx et al. (1991) and Jonker et al. (1998). When the source is on the HB or on the upper part of the NB, quasi-periodic oscillations (QPOs) occur with frequencies varying from 20–50 Hz: the horizontal branch quasi-periodic oscillations or HBOs (Penninx et al. 1991; Kuulkers & van der Klis 1996; Jonker et al. 1998). Second harmonics of these HBOs were detected by Kuulkers & van der Klis (1996) and Jonker et al. (1998) in the frequency ranges 73–76 Hz and 38–69 Hz, respectively. In the middle of the NB, van Paradijs et al. (1988) found normal branch oscillations (NBOs) with a frequency of 5.6 Hz. Recently, Jonker et al. (1998) discovered twin kHz QPOs in GX 340+0. These QPOs have now been seen in all six originally identified Z sources (Sco X–1, van der Klis et al. 1996; Cyg X–2, Wijnands et al. 1998a; GX 17+2, Wijnands et al. 1997b; GX 349+2, Zhang et al. 1998; GX 340+0, Jonker et al. 1998; GX 5–1, Wijnands et al. 1998b; see van der Klis 1997, 1999 for reviews), but not in Cir X–1, which combines Z source and atoll source characteristics (Oosterbroek et al. 1995; Shirey, Bradt, and Levine 1999; see also Psaltis, Belloni, & van der Klis 1999). 
In the other class of LMXBs, the atoll sources (Hasinger & van der Klis 1989), kHz QPOs are observed as well (see van der Klis 1997, 1999 for reviews). Recently, HBO-like features have also been identified in a number of atoll sources (4U 1728–34, Strohmayer et al. 1996, Ford & van der Klis 1998, Di Salvo et al. 1999; GX13+1, Homan et al. 1998; 4U 1735–44, Wijnands et al. 1998c; 4U 1705–44, Ford, van der Klis, & Kaaret 1998; 4U 1915–05, Boirin et al. 1999; 4U 0614+09, van Straaten et al. 1999; see Psaltis, Belloni, & van der Klis 1999 for a summary). Furthermore, at the highest inferred mass accretion rates, QPOs with frequencies near 6 Hz have been discovered in the atoll sources 4U 1820–30 (Wijnands, van der Klis, & Rijkhorst 1999c), and XTE J1806–246 (Wijnands & van der Klis 1998e, 1999b; Revnivtsev, Borozdin, & Emelyanov 1999), which might have a similar origin to the Z source NBOs. At low mass accretion rates the power spectra of black hole candidates, atoll, and Z sources show similar characteristics (van der Klis 1994a,b). Wijnands & van der Klis (1999a) found that the break frequency of the broken power law which describes the broad-band power spectrum correlates well with the frequency of peaked noise components (and sometimes narrow QPO peaks) observed in atoll sources (including the millisecond X-ray pulsar SAX J1808.4–3658; Wijnands & van der Klis 1998d, Chakrabarty & Morgan 1998), and black hole candidates. The Z sources followed a slightly different correlation. In a similar analysis, Psaltis, Belloni, & van der Klis (1999) have pointed out correlations between the frequencies of some of these QPOs and other noise components in atoll sources, Z sources, and black hole candidates, which suggests these phenomena may be closely related across these various source types, or at least depend on a third phenomenon in the same manner. 
Because of these correlations, models describing the kHz QPOs which also predict QPOs or noise components in the low-frequency part of the power spectrum can be tested by investigating this low-frequency part. In this paper, we study the full power spectral range of the bright LMXB and Z source GX 340+0 in order to further investigate the similarities between the atoll sources and the Z sources, and to help constrain models concerning the formation of the different QPOs. We report on the discovery of two new components in the power spectra of GX 340+0 with frequencies less than 40 Hz when the source is on the left part of the HB. We also discuss the properties of the NBO, and those of the kHz QPOs.

## 2. Observations and analysis

The Z source GX 340+0 was observed 24 times in 1997 and 1998 with the proportional counter array (PCA; Jahoda et al. 1996) on board the RXTE satellite (Bradt, Rothschild & Swank 1993). A log of the observations is presented in Table 1. Part of these data (observations 1, 9–18) was used by Jonker et al. (1998) in the discovery of the kHz QPOs in GX 340+0. The total amount of good data obtained was ∼390 ksec. During ∼19% of the time only 3 or 4 of the 5 PCA detectors were active. The data were obtained in various modes, of which the Standard 1 and Standard 2 modes were always active. The Standard 1 mode has a time resolution of 1/8 s in one energy band (2–60 keV). The Standard 2 mode has a time resolution of 16 s and the effective 2–60 keV PCA energy range is covered by 129 energy channels. In addition, high time resolution data (with a resolution of 244 $`\mu `$s or better for the 2–5.0 keV band and with a resolution of 122 $`\mu `$s or better for the 5.0–60 keV range) were obtained for all observations. 
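From binned count data like these, segment-averaged power spectra are produced by Fourier transforming fixed-length stretches and applying the Leahy normalization, under which pure Poisson noise has a mean power of 2. A minimal sketch following this standard practice (not a quoted pipeline); with the default `dt` of 1/512 s and 16 s segments the Nyquist frequency is 256 Hz, as in the low-frequency analysis below:

```python
import numpy as np

def leahy_power_spectra(counts, dt=1/512.0, seg_len=16.0):
    """Average Leahy-normalized power spectrum from equal-length segments.

    counts : 1-D array of counts per time bin of width dt (seconds)."""
    n = int(round(seg_len / dt))                 # time bins per segment
    n_seg = len(counts) // n
    segs = counts[:n_seg * n].reshape(n_seg, n)
    powers = []
    for seg in segs:
        n_ph = seg.sum()                         # photons in this segment
        if n_ph == 0:
            continue
        ft = np.fft.rfft(seg)
        powers.append(2.0 * np.abs(ft[1:])**2 / n_ph)   # drop the DC bin
    freqs = np.fft.rfftfreq(n, d=dt)[1:]
    return freqs, np.mean(powers, axis=0)
```

The lowest frequency bin is 1/16 Hz, matching the 1/16–256 Hz range quoted in the text.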
For all observations except observation 1, which had only 4 broad energy bands, and observation 22, for which technical problems with the data occurred, we computed power spectra in five broad energy bands (2–5.0, 5.0–6.4, 6.4–8.6, 8.6–13.0, 13.0–60 keV) with a Nyquist frequency of 256 Hz, dividing the data into segments of 16 s each. We also computed power spectra for all observations using 16 s data segments in one combined broad energy band ranging from 5.0–60 keV with a Nyquist frequency of 4096 Hz. To characterize the properties of the low-frequency part (1/16–256 Hz) of the power spectrum we experimented with several fit functions (see Section 3), but finally settled on a fit function that consisted of the sum of a constant to represent the Poisson noise, one to four Lorentzians describing the QPOs, an exponentially cut-off power law component, $`P\propto \nu ^{-\alpha }e^{-\nu /\nu _{cut}}`$, to describe the low frequency noise (LFN), and a power law component to represent the very low frequency noise (VLFN) when the source was on the NB. To describe the high frequency part (128 to 4096 Hz or 256 to 4096 Hz) of the power spectrum we used a fit function which consisted of the sum of a constant and a broad sinusoid to represent the dead-time modified Poisson noise (Zhang et al. 1995), one or two Lorentzian peaks to represent the kHz QPOs, and sometimes a power law to fit the lowest frequency part ($`<`$ 150 Hz). The PCA setting for the very large event window (Zhang et al. 1995; van der Klis et al. 1997) was set to 55 $`\mu `$s. Therefore, its effect on the Poisson noise was small and it could be incorporated into the broad sinusoid. The errors on the fit parameters were determined using $`\mathrm{\Delta }\chi ^2`$=1.0 ($`1\sigma `$ single parameter). The 95% confidence upper limits were determined using $`\mathrm{\Delta }\chi ^2`$=2.71. We used the Standard 2 data to compute hardnesses and intensities from the three detectors that were always active. 
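The low-frequency fit function described above (Poisson level plus QPO Lorentzians, a cut-off power law for the LFN, and an optional VLFN power law) can be written down as a model for a least-squares fitter. A hedged sketch: the Lorentzian normalization convention here is one common choice, not necessarily the one used by the authors:

```python
import numpy as np

def lorentzian(nu, nu0, fwhm, norm):
    """Lorentzian profile centred at nu0 with full width fwhm."""
    return norm * (fwhm / (2.0 * np.pi)) / ((nu - nu0)**2 + (fwhm / 2.0)**2)

def low_freq_model(nu, c, qpos, lfn, vlfn=None):
    """Sum of Poisson level, QPO Lorentzians, cut-off power law (LFN),
    and an optional power law (VLFN), as in the fit function of Section 2.

    qpos : list of (nu0, fwhm, norm) tuples
    lfn  : (norm, alpha, nu_cut)  ->  norm * nu**(-alpha) * exp(-nu/nu_cut)
    vlfn : optional (norm, gamma) ->  norm * nu**(-gamma)"""
    p = np.full_like(nu, c, dtype=float)
    for nu0, fwhm, norm in qpos:
        p += lorentzian(nu, nu0, fwhm, norm)
    n_l, alpha, nu_cut = lfn
    p += n_l * nu**(-alpha) * np.exp(-nu / nu_cut)
    if vlfn is not None:
        n_v, gamma = vlfn
        p += n_v * nu**(-gamma)
    return p
```

Confidence intervals on the best-fit parameters then follow from the $`\mathrm{\Delta }\chi ^2`$ criteria quoted in the text.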
Figure 1 shows three HIDs: one (A) for observations 1 and 9–18 combined (data set A), one (B) for observations 2–8 combined (data set B), and one (C) for observations 19–24 combined (data set C). The observations were subdivided in this way because the hard vertex, defined as the HB–NB intersection, is at higher intensities in data set C than in data set A. The hard vertex of data set B falls at an intermediate intensity level. We assigned a value to each power spectrum according to the position of the source along the Z track using the $`\mathrm{S}_\mathrm{z}`$ parameterization (Dieters & van der Klis 1999, Wijnands et al. 1997) applied to each HID separately. In this parameterization, the hard vertex is assigned the value $`\mathrm{S}_\mathrm{z}`$ = 1.0 and the soft vertex (defined as the NB–FB intersection) is assigned $`\mathrm{S}_\mathrm{z}`$ = 2.0. Thus, the distance between the hard and soft vertex defines the length scale along each branch. Since for HID C we only observed part of the Z track, we used the position of the soft vertex of HID A in HID C. From the fact that the soft vertex of HID B was consistent with that of HID A, we conclude that the error introduced by this is small. The shifts in the position of the hard vertex prevented us from selecting the power spectra according to their position in an HID of all data combined. We selected the power spectra according to the $`\mathrm{S}_\mathrm{z}`$ value in each of the three separate Z tracks, since Jonker et al. (1998) showed that for GX 340+0 the frequency of the HBO is better correlated with the position of the source relative to the instantaneous Z track than with its position in terms of coordinates in the HID. The power spectra corresponding to each $`\mathrm{S}_\mathrm{z}`$ interval were averaged. However, employing this method yielded artificially broadened HBO peaks, and sometimes the HBO profile even displayed double peaks. 
The reason for this is that in a typical $`\mathrm{S}_\mathrm{z}`$ selection interval of 0.05 the dispersion in HBO frequencies well exceeds the statistical one, as shown in Figure 2. While the relation between $`\mathrm{S}_\mathrm{z}`$ and HBO frequency is roughly linear, the spread is large. For this reason, when the HBO was detectable in the 5.0–60 keV power spectra, we selected those power spectra according to HBO frequency rather than on $`\mathrm{S}_\mathrm{z}`$ value. In practice, this was possible for all data on the HB. To determine the energy dependence of the components, the 2–5.0, 5.0–6.4, 6.4–8.6, 8.6–13.0, 13.0–60 keV power spectra were selected according to the frequency of the HBO peak in the 2–60 keV power spectrum, when detectable. The HBO frequency selection proceeded as follows. For each observation we constructed a dynamical power spectrum using the 5–60 keV or 2–60 keV data (see above), showing the time evolution of the power spectra (see Fig. 3). Using this method, we were able to trace the HBO frequency in each observation as a function of time. We determined the maximum power in 0.5 Hz bins over a range of 2 Hz around the manually identified QPO frequency for each power spectrum, and adopted the frequency at which this maximum occurred as the HBO frequency in that power spectrum. This was done for each observation in which the HBO could be detected. The 18–52 Hz frequency range over which the HBO was detected was divided in 16 selection bins with widths of 2 or 4 Hz, depending on the signal to noise level. For each selection interval the power spectra were averaged, and a mean $`\mathrm{S}_\mathrm{z}`$ value was determined. The HBO selection criteria were applied to all data along the HB and near the hard vertex. When the source was near the hard vertex, on the NB, or the FB, we selected the power spectra according to the $`\mathrm{S}_\mathrm{z}`$ value. 
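The HBO-tracking recipe just described (the maximum power in 0.5 Hz bins within 2 Hz of the manually identified frequency) can be sketched as follows; `track_qpo` is an illustrative name:

```python
import numpy as np

def track_qpo(freqs, power, guess, search=2.0, bin_width=0.5):
    """Pick the QPO frequency in one power spectrum: rebin the power in
    bin_width-wide bins within +/- search Hz of the current guess and
    return the centre of the bin with the maximum mean power."""
    lo, hi = guess - search, guess + search
    edges = np.arange(lo, hi + bin_width / 2, bin_width)
    best_p, best_f = -np.inf, guess
    for a, b in zip(edges[:-1], edges[1:]):
        sel = (freqs >= a) & (freqs < b)
        if not sel.any():
            continue
        p = power[sel].mean()
        if p > best_p:
            best_p, best_f = p, 0.5 * (a + b)
    return best_f
```

Applied spectrum by spectrum, with each result seeding the guess for the next, this traces the HBO frequency through a dynamical power spectrum like Fig. 3.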
An overlap between the two methods occurred for data near the hard vertex; both selection methods yielded the same results for the fit parameters to well within the statistical errors (see Section 3). Separately, for each set of observations (A, B, and C) we also determined the kHz QPO properties according to the $`\mathrm{S}_\mathrm{z}`$ method.

## 3. Results

Using the fit function described by Jonker et al. (1998), which consisted of two Lorentzians to describe the HBO and the second harmonic of the HBO, and a cut-off power law to describe the LFN noise component, we obtained poor fits. Compared with Jonker et al. (1998) we combined more data, resulting in a higher signal to noise ratio. First we included a peaked noise component (called the sub-HBO component) at frequencies below the HBO, since a similar component was found by van der Klis et al. (1997) in Sco X–1. This improved the $`\chi _{red}^2`$ of the fit. Remaining problems were that the frequency of the second harmonic was not equal to twice the HBO frequency (similar problems fitting the power spectra on the HB of Cyg X–2 were reported by Kuulkers, Wijnands, & van der Klis 1999), and that the frequency of the sub-HBO component varied erratically along the HB. Inspecting the fit showed that both the fit to the high frequency tail of the HBO and the fit to its second harmonic did not represent the data points very well. Including an additional component in the fit function representing the high frequency tail of the HBO (called the shoulder component after Belloni et al., who used this name) resulted in a better fit to the HBO peak, a centroid frequency of the HBO second harmonic more nearly equal to twice the HBO frequency, and a more consistent behavior of the frequency of the sub-HBO component (which sometimes apparently fitted the shoulder when no shoulder component was present in the fit function). 
We also experimented with several other fit function components to describe the average power spectra, which were used by other authors to describe the power spectra of other LMXBs. Using a fit function composed of a broken power law, to fit the LFN component, and several Lorentzians to fit the QPOs, after Wijnands et al. (1999a), resulted in significantly higher $`\chi _{red}^2`$ values than when the fit function described in Section 2 was used ($`\chi _{red}^2`$= 1.66 for 205 degrees of freedom (d.o.f.) versus $`\chi _{red}^2`$= 1.28 with 204 d.o.f.). We also fitted the power spectra using the same fit function as described in Section 2 but with the frequency of the sub-HBO component fixed at 0 Hz, in order to test whether or not an extra LFN-like component centred around 0 Hz was a good representation of the extra sub-HBO component. Finally, we tested a fit function composed of two cut-off power laws, one describing the LFN and one describing either the sub-HBO component or the shoulder component, and three Lorentzians, describing the HBO, its second harmonic, and either the sub-HBO or the shoulder component when not fitted with the cut-off power law. But in all cases the $`\chi _{red}^2`$ values obtained using these fit functions were significantly higher (for the 24–26 Hz selection range values of 1.52 for 205 d.o.f., 1.62 for 204 d.o.f., and 2.00 for 205 d.o.f. were obtained, respectively). Settling on the fit function already described in Section 2, we applied an F-test (Bevington & Robinson 1992) to the $`\chi ^2`$ of the fits with and without the extra Lorentzian components to test their significance. We derived a significance of more than 8 $`\sigma `$ for the sub-HBO component, and a significance of more than 6.5 $`\sigma `$ for the shoulder component, in the average selected power spectrum corresponding to HBO frequencies of 24 to 26 Hz. In Figure 4 we show the contribution of all the components used to obtain the best fit to this power spectrum. 
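The F-test used here compares the $`\chi ^2`$ drop gained by adding fit components with the reduced $`\chi ^2`$ of the fuller fit. A minimal sketch of the statistic, following Bevington & Robinson (1992); the significance then follows from the F(extra, dof) distribution (e.g. via `scipy.stats.f.sf`), which is omitted here to keep the example dependency-free:

```python
def f_statistic(chi2_full, dof_full, chi2_reduced, dof_reduced):
    """F statistic for adding fit components: the chi^2 drop per added
    parameter, divided by the reduced chi^2 of the fuller (full) model.
    Large values mean the extra components are significant."""
    extra = dof_reduced - dof_full            # number of parameters added
    return ((chi2_reduced - chi2_full) / extra) / (chi2_full / dof_full)
```

For example, a fit whose $`\chi ^2`$ drops from 230 to 200 when three Lorentzian parameters are added (204 to 201 d.o.f., say) gives a large F value, and the corresponding tail probability is converted into the Gaussian-equivalent $`\sigma `$ values quoted in the text.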
The properties of all the components used in describing the low-frequency part of the average power spectra along the HB are given in Fig. 5 as a function of $`\mathrm{S}_\mathrm{z}`$. When the HBO frequency was higher than 32 Hz, the sub-HBO and shoulder components were not significant. We therefore decided to exclude these two components from the fit function in the HBO frequency selections of 32 Hz and higher. Where this affected the parameters determined for the remaining components in the fit function, we note it explicitly. Splitting the total counts into separate photon energy bands reduced the signal to noise in each band, and therefore these effects were more important in the fits performed to determine the energy dependence of the parameters.

### 3.1. The LFN component

The fractional rms amplitude of the LFN decreased as a function of $`\mathrm{S}_\mathrm{z}`$ (Fig 5 A), with values ranging from 10% to 2.2% (5.0–60 keV). Upper limits on the LFN component were calculated by fixing the power law index at 0.0. The power law index of the LFN component increased from ∼0 at $`\mathrm{S}_\mathrm{z}`$= 0.5 to ∼0.4 around $`\mathrm{S}_\mathrm{z}`$= 0.9; when the source moved on to the NB the index of the power law decreased to values slightly below 0.0 (Fig 5 B). The cut-off frequency of the LFN component increased as a function of $`\mathrm{S}_\mathrm{z}`$. For $`\mathrm{S}_\mathrm{z}>1.0`$ the cut-off frequency could not be determined with high accuracy (Fig 5 C). The LFN fractional rms amplitude depended strongly on photon energy all across the selected frequency range. The rms amplitude increased from 5% at 2–5.0 keV to more than 15% at 13.0–60 keV ($`\mathrm{S}_\mathrm{z}`$=0.48). The power law index, $`\alpha `$, of the LFN component was higher at lower photon energies (changing from 0.3–0.5 along the HB at 2–5.0 keV) than at higher photon energies (changing from −0.2–0.2 along the HB at 13.0–60 keV). 
The cut-off frequency of the LFN component did not change as a function of photon energy.

### 3.2. The HBO component

The fractional rms amplitude of the HBO decreased as a function of $`\mathrm{S}_\mathrm{z}`$ (Fig 5 G), with values ranging from 10% to 1.7% over the detected range (5.0–60 keV). Upper limits on the HBO component were determined using a fixed FWHM of 15 Hz. The frequency of the HBO increased as a function of $`\mathrm{S}_\mathrm{z}`$, but for $`\mathrm{S}_\mathrm{z}>`$1.0 it was consistent with being constant around 50 Hz (Fig 5 I). The ratio of the rms amplitudes of the LFN and the HBO component, of interest in beat frequency models (see Shibazaki & Lamb 1987), decreased from ∼1 at an $`\mathrm{S}_\mathrm{z}`$ value of 0.48 to ∼0.6 at $`\mathrm{S}_\mathrm{z}`$ values of 0.8–1.0. The ratio increased again to a value of ∼0.9 at $`\mathrm{S}_\mathrm{z}=`$1.05, when the source was on the NB. The HBO rms amplitude depended strongly on photon energy all across the selected frequency range. The rms amplitude increased from 5% at 2–5.0 keV to ∼16% at 13.0–60 keV (at $`\mathrm{S}_\mathrm{z}`$=0.48) (see Figure 6 \[dots\] for the HBO energy dependence in the 26–28 Hz range). The increase in fractional rms amplitude of the HBO towards higher photon energies became smaller as the frequency of the HBO increased. At the highest HBO frequencies the HBO is relatively stronger in the 8.6–13.0 keV band than in the 13.0–60 keV band. The ratio between the fractional rms amplitude as a function of photon energy of the HBO at lower frequencies and that at higher frequencies is consistent with a straight line with a positive slope. The exact fit parameters depend on the HBO frequencies at which the ratios were taken. 
This behavior was also present in the absolute rms amplitude (fractional rms amplitude $`\times `$ $`\mathrm{I}_\mathrm{x}`$, where $`\mathrm{I}_\mathrm{x}`$ is the count rate), see Fig. 7. So, this behavior is caused by actual changes in the QPO spectrum, not by changes in the time-averaged spectrum by which the QPO spectrum is divided to calculate the fractional rms spectrum of the HBO. The FWHM and the frequency of the HBO were the same in each energy band.

### 3.3. The second harmonic of the HBO

The rms amplitude of the second harmonic of the HBO decreased as a function of $`\mathrm{S}_\mathrm{z}`$ (Fig 5 M) from 5.2% to 3.6% (5.0–60 keV). Upper limits on the second harmonic of the HBO were derived using a fixed FWHM of 25 Hz. The frequency of the second harmonic of the HBO was consistent with being twice the HBO frequency when the sub-HBO and the shoulder component were strong enough to be measured (see Fig 5 O, and Fig. 8). When these two extra components could not be determined significantly, due to the limited signal to noise, so that we omitted them from the fit function (as explained above), the frequency of the second harmonic of the HBO was clearly less than twice the HBO frequency (see Fig. 8). The rms amplitude of the second harmonic of the HBO was also energy dependent. Its rms amplitude increased from less than 4% in the 2–5.0 keV band to more than 9% in the 8.6–13 keV band. The FWHM of the second harmonic varied erratically in the range of 10–50 Hz. This is not necessarily a property of the second harmonic itself, since the HBO shoulder component, when not significant by itself, was omitted from the fit function. This may have influenced the fit to the FWHM of the second harmonic when it was weak. Its frequency was consistent with being the same in each energy band.

### 3.4. The sub-HBO component

The centroid frequency (Fig 5 F) and FWHM (Fig 5 E) of the Lorentzian at sub-HBO frequencies increased from 9.3$`\pm `$0.3 Hz to 13.6$`\pm `$1.0 Hz and from 7.1$`\pm `$0.6 Hz to 18$`\pm `$3 Hz, respectively, as the source moved up the HB from $`\mathrm{S}_\mathrm{z}=`$ 0.48 to 0.67. The rms amplitude of this component did not show a clear relation with $`\mathrm{S}_\mathrm{z}`$; its value was consistent with being constant around 5% (Fig 5 D). Upper limits on the sub-HBO component were determined using a fixed FWHM of 15 Hz. The frequency of the sub-HBO component is close to half the frequency of the HBO component. The fact that the ratio between the HBO frequency and the sub-HBO frequency is not exactly 2 but ∼2.2 may be accounted for by the complexity of the data and therefore of its description. We detected the sub-HBO component in the three highest energy bands that we defined, over an $`\mathrm{S}_\mathrm{z}`$ range from 0.48–0.65. Its rms amplitude is higher in the highest energy band (∼7% in 13.0–60 keV, and less than 5% in 6.4–8.6 keV) and decreased as a function of $`\mathrm{S}_\mathrm{z}`$, while the FWHM and the frequency increased from 6–12 Hz, and 9–15 Hz, respectively.

### 3.5. The HBO shoulder component

At an $`\mathrm{S}_\mathrm{z}`$ value of 0.48 (the left-most part of the HB) the frequency of the shoulder component was higher than the frequency of the HBO, and the frequency separation between them was largest (Fig 5 L and Fig. 8). Both the frequency of the shoulder and the HBO increased when the source moved along the HB, but the frequency difference decreased. The FWHM of the shoulder component increased from 7$`\pm `$2 Hz to 17$`\pm `$3 Hz and then decreased again to 8.6$`\pm `$1.0 Hz as the frequency of the HBO peak increased from 25.7$`\pm `$0.7 Hz to 30.9$`\pm `$0.4 Hz (Fig 5 K). From an $`\mathrm{S}_\mathrm{z}`$ value of 0.61 to 0.67 the frequency was consistent with being constant at a value of 32 Hz. 
The rms amplitude was consistent with being constant around 4% (5.0–60 keV) over the total range where this component could be detected (Fig 5 J), but the data are also consistent with an increase of fractional rms amplitude with increasing HBO frequency. Upper limits on the HBO shoulder component were determined using a fixed FWHM of 7 Hz. In the various energy bands the HBO shoulder component was detected seven times in total; once in the 2–5.0 keV band, three times in the 6.4–8.6 keV band, and three times in the 8.6–13.0 keV band, with rms amplitudes increasing from ∼3% in the 2–5.0 keV band to ∼6% in the 8.6–13.0 keV band, and a FWHM of ∼10 Hz. Upper limits of the order of 3%–4%, and of 5%–7%, were derived in the two lowest and three highest energy bands considered, respectively. These are comparable with or higher than the rms amplitudes of this component determined in the 5.0–60 keV band.

### 3.6. The NBO component

The NBOs were not observed when the source was on the HB, with an upper limit of 0.5% just before the hard vertex (for an $`\mathrm{S}_\mathrm{z}`$ value of 0.96). They were detected along the entire NB and they evolved into a broad noise component on the FB. The properties are listed in Table 2. The rms amplitude of the NBO gradually increased while the source moved from the upper NB to the middle part of the NB, where the rms amplitude is highest. On the lower part of the NB the NBO rms amplitude gradually decreased. Upper limits on the NBO component were determined using a fixed FWHM of 5 Hz. As the NBO got stronger towards the middle of the NB the profile of the NBO became detectably asymmetric (see Fig. 9); between $`\mathrm{S}_\mathrm{z}`$=1.25 and 1.54 the NBO was fitted using two Lorentzians. 
The FWHM of the NBO as a function of the position along the NB first decreased from ∼10 Hz at $`\mathrm{S}_\mathrm{z}`$=1.038 to around 2.5 Hz on the middle part of the NB ($`\mathrm{S}_\mathrm{z}`$ values from 1.25–1.54), and then increased again to ∼5 Hz on the lowest part of the NB. Because the NBO profiles had to be fitted using two Lorentzians in part of the data, the behavior of the NBO frequency as a function of $`\mathrm{S}_\mathrm{z}`$ is not unambiguously determined either. We therefore averaged the frequencies of these two Lorentzians with weights proportional to the inverse square of the FWHM. The FWHM-weighted average of the two centroid frequencies of the two Lorentzians used to describe the NBO was consistent with a small increase as a function of $`\mathrm{S}_\mathrm{z}`$, from $`5.27\pm 0.07`$ Hz at $`\mathrm{S}_\mathrm{z}`$=1.25 to $`5.83\pm 0.18`$ Hz at $`\mathrm{S}_\mathrm{z}`$=1.54. We combined all power spectra with an $`\mathrm{S}_\mathrm{z}`$ between 1.0 and 1.9 in order to investigate the energy dependence of the NBO. The rms amplitude of the NBO increased as a function of photon energy (see Figure 6 \[squares\]).

### 3.7. kHz QPOs

Using the HBO frequency selection method in all data combined, the frequency of the kHz QPO peaks increased from $`197_{-70}^{+26}`$ Hz to $`565_{-14}^{+9}`$ Hz and from $`535_{-48}^{+85}`$ Hz to $`840\pm 21`$ Hz for the lower and upper peak, respectively, while the frequency of the HBO increased from $`20.55\pm 0.02`$ Hz to $`48.15\pm 0.08`$ Hz. Using the $`\mathrm{S}_\mathrm{z}`$ selection method on the three data sets we defined in Section 2 (Figure 1), we found that the relation between the kHz QPO and the HBO frequencies is consistent with being the same in all three data sets (Fig. 10, upper panel). The same relation was found when we combined all data and selected the power spectra according to the HBO frequency. Upper limits on the kHz QPOs were determined with the FWHM fixed at 150 Hz. 
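The 1/FWHM² weighting of the two NBO Lorentzian centroids described above can be sketched as follows (`fwhm_weighted_frequency` is an illustrative name):

```python
import numpy as np

def fwhm_weighted_frequency(nu, fwhm):
    """Average of Lorentzian centroid frequencies weighted by 1/FWHM^2,
    so that the narrower (better-defined) component counts more."""
    w = 1.0 / np.asarray(fwhm, dtype=float)**2
    return float(np.sum(w * np.asarray(nu, dtype=float)) / np.sum(w))
```

With equal widths this reduces to the plain arithmetic mean; a narrow component dominates otherwise.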
When only one of the two kHz QPO peaks was detected the upper limit on the other peak was determined by fixing the frequency at the frequency of the detected peak plus or minus the mean difference frequency between the two peaks, depending on whether the lower or the upper peak was detected. The properties of the kHz QPOs as determined in all data combined when selected according to the HBO frequency are listed in Table 3. The kHz QPO peak separation was consistent with being constant at 339$`\pm `$8 Hz over the observed kHz QPO range (Fig. 10 lower panel), but a decrease towards higher upper peak frequencies similar to that found in Sco X–1 (van der Klis et al. 1997), 4U 1608–52 (Méndez et al. 1998), 4U 1735–44 (Ford et al. 1998), 4U 1702–429 (Markwardt, Strohmayer, & Swank 1999), and 4U 1728–34 (Méndez & van der Klis 1999) cannot be excluded. The FWHM of neither the lower nor the higher frequency kHz QPO peak showed a clear relation with frequency. The rms amplitude of the lower and upper kHz QPO peak decreased from 3.1% to 1.1%, and from 4.2% to 1.2%, respectively when the HBO frequency increased from 20.55 to 48.15 Hz. ## 4. Discussion In the present work we combined all RXTE data presently available for the Z source GX 340+0 using our new selection method based on the frequency of the HBO peak. This allowed us to distinguish two new components in the low-frequency part of the power spectrum. These two extra components were strongest when the source was at the lowest count rates on the HB (see Fig. 1), between $`\mathrm{S}_\mathrm{z}=`$ 0.48–0.73, i.e., at the lowest inferred $`\dot{M}`$. The frequency of one of these components, the sub-HBO component, is close to half the frequency of the HBO component. The frequency ratio was consistent with being constant when the frequency of the sub-HBO changed from 9 to 14 Hz. A similar feature at sub-HBO frequencies has been reported by van der Klis et al. (1997) in Sco X–1. 
Since the frequency of this component is close to twice the predicted Lense-Thirring (LT) precession frequency for rapidly rotating neutron stars (Stella & Vietri 1998), we shall discuss the properties of this component within this framework. The other component we discovered, the HBO shoulder component, was used to describe the strong excess in power in the HBO profile towards higher frequencies. If this shoulder component is related to the HBO and not to a completely different mechanism which by chance results in frequencies close to the frequency of the HBO, it can be used to constrain the formation models of the HBO peak. We demonstrated that both the HBO and the NBO have a similar asymmetric profile. In the NBO this was previously noted by Priedhorsky et al. (1986) in Sco X–1. We shall consider the hypothesis that the formation of this shoulder is a common feature of the two different QPO phenomena, even if the two peaks themselves perhaps occur due to completely different physical reasons. Our results on the kHz QPOs based on more extensive data sets at three different epochs and using the new HBO selection method are consistent with those of Jonker et al. (1998). We discuss the properties of the kHz QPOs within the framework of precessing Keplerian flows (Stella & Vietri 1999), the sonic point model (Miller, Lamb, & Psaltis 1998), and the transition layer model described by Osherovich & Titarchuk (1999), and Titarchuk, Osherovich, & Kuznetsov (1999). ### 4.1. Comparison with other observations In various LMXBs, QPOs have been found whose profiles are clearly not symmetric. Belloni et al. (1997) showed that for the black hole candidate (BHC) GS 1124–68 the QPO profiles are asymmetric, with a high frequency shoulder. Dieters et al. (1998) reported that the 2.67 Hz QPO of the BHC 4U 1630–47 was also asymmetric with a high frequency shoulder. In the Z source Sco X–1 the NBO profile was also found to be asymmetric (Priedhorsky et al. 1986). 
It is clear that asymmetric shapes of the QPO profiles are frequently observed in LMXBs and are not restricted to either the black hole candidates or the neutron star systems. In the BHCs GS 1124–68 (Belloni et al. 1997) and XTE J1550–564 (Homan et al. 1999) several QPOs were discovered which seem to be harmonically related in the same way as we report for GX 340+0, i.e. the third harmonic is not detected, while the first, the second and the fourth harmonic are. If this implies that these QPOs are the same, models involving the magnetic field of the neutron star for their origin could be ruled out. The time lag properties of the harmonic components of the QPOs in XTE J1550–564 are complex and quite distinctive (Wijnands, Homan, & van der Klis 1999). In GX 340+0 no time lags of the harmonic components could be measured, but the time lags measured in the HBO in the similar Z source GX 5–1 (Vaughan et al. 1994) are quite different. In order to study in more detail the relationship found by Wijnands & van der Klis (1999) between the QPOs and the noise break frequency in the power spectrum of LMXBs, we fitted the LFN component using a broken power law. To determine the value for the break frequency we fixed the parameters of all other components to the values found when using a cut-off power law to describe the LFN. Wijnands & van der Klis (1999a) reported that the Z sources did not fall on the relation between the break and QPO frequency established by atoll sources and black hole candidates. They suggested that the Z source LFN is not similar to the atoll HFN but the noise component found in Sco X–1 at sub-HBO frequencies is. By using the centroid frequency of that peaked noise component as the break frequency instead of the LFN break frequency, the HBO frequencies did fall on the reported relation. 
On the other hand, we find that using the sub-HBO frequency instead of the HBO frequency together with the LFN break frequency, the Z source GX 340+0 also falls exactly on the relation. Therefore, the suggestion made by Wijnands & van der Klis (1999a) that the strong band-limited noise in atoll and Z sources has a different origin is only one of the two possible solutions to the observed discrepancy. Our proposed alternative solution is that the Z and atoll noise components are the same, but that it is the sub-HBO in Z sources which corresponds to the QPO in atoll sources. An argument in favour of the noise components in Z and atoll sources being the same is that the cut-off frequency of the LFN component increased as a function of $`\mathrm{S}_\mathrm{z}`$, in a fashion similar to the frequency associated with the atoll high-frequency noise (van der Klis 1995, Ford & van der Klis 1998, van Straaten et al. 1999). Following Psaltis, Belloni, & van der Klis (1999) we plotted the sub-HBO frequency against the frequency of the lower-frequency kHz QPO. The sub-HBO does not fall on the relation found by Psaltis, Belloni, & van der Klis (1999) between the frequency of the HBO and the lower-frequency kHz QPO frequency. Instead the data points fall between the two branches defined by the HBO-like QPO frequencies vs. the lower kHz QPO frequency at high frequencies (see Psaltis, Belloni, & van der Klis 1999). ### 4.2. HBO – kHz QPO relations #### 4.2.1 Lense-Thirring precession frequency Stella & Vietri (1998) recently considered the possibility that the HBO is formed due to the LT precession of the nodal points of slightly tilted orbits in the inner accretion disk, but as they already mentioned the Z sources GX 5–1 and GX 17+2 did not seem to fit in this scheme. For reasonable values of I/M, the neutron star moment of inertia divided by its mass, the observed frequencies were larger by a factor of $`\sim `$2 than the predicted ones. Jonker et al.
(1998) showed that for GX 340+0 the predicted frequency is too small by a factor of 3, if one assumes that the higher frequency peak of the kHz QPOs reflects the Keplerian frequency of matter in orbit around the neutron star, and that the mean peak separation reflects the neutron star spin frequency. Using the same assumptions, Psaltis et al. (1999) also concluded that a simple LT precession frequency model is unable to explain the formation of HBOs in Z sources. The detailed calculations of Morsink & Stella (1999) worsen the situation even further, since they lower the predicted LT frequencies. They find that the LT precession frequencies are approximately a factor of two too low to explain the noise components at frequencies $`\sim `$20–35 Hz observed in atoll sources (4U 1735–44, Wijnands & van der Klis 1998c; 4U 1728–34, Strohmayer et al. 1996, Ford & van der Klis 1998). Stella & Vietri (1998) already put forward the suggestion that a modulation can be produced at twice the LT precession frequency if the modulation is produced by the two points where the inclined orbit intersects the disk plane (although they initially used this for explaining the discrepancy of a factor of two between the predicted and the observed LT precession frequencies for the Z sources). The sub-HBO peaked noise component we discovered could be harmonically related to the HBO component. If the sub-HBO is the second harmonic of the fundamental LT precession frequency, as needed to explain the frequencies in the framework of the LT precession model where the neutron star spin frequency is approximately equal to the frequency of the kHz QPO peak separation, the HBO must be the fourth and the harmonic of the HBO the eighth harmonic component, whereas the sixth and the odd harmonics must be much weaker. This poses strong (geometrical) constraints on the LT precession process.
On the other hand, if the HBO frequency is twice the LT precession frequency, which implies a neutron star spin frequency of $``$900 Hz (see Morsink & Stella 1999), the frequency of the sub-HBO component is the LT precession frequency, and the frequency of the second harmonic of the HBO is four times the LT precession frequency. In that case only even harmonics and the LT precession frequency are observed. #### 4.2.2 Magnetospheric beat frequency and radial-flow models In this section, we discuss our findings concerning the QPOs and the LFN component in terms of the magnetic beat frequency model where the QPOs are described by harmonic series (e.g. Shibazaki & Lamb 1987). If the sub-HBO frequency is proven not to be harmonically related to the HBO, the sub-HBO peak might be explained as an effect of fluctuations entering the magnetospheric boundary layer periodically. Such an effect will be strongest at low HBO frequencies since its power density will be proportional to the power density of the LFN (Shibazaki & Lamb 1987). If it is the fundamental frequency and the HBO its first overtone then the magnetospheric beat frequency model proposed to explain the HBO formation (Alpar & Shaham 1985; Lamb et al. 1985) is not strongly constrained. Within the beat frequency model the high frequency shoulder of the HBO peak can be explained as a sign of radial drift of the blobs as they spiral in after crossing the magnetospheric boundary layer (Shibazaki & Lamb 1987). Shibazaki & Lamb (1987) describe another mechanism which may produce a high frequency shoulder. Interference between the LFN and the QPO caused by a non uniform phase distribution of the blobs will also cause the QPO to become asymmetric. This effect will be strongest when the LFN and the QPO components overlap, as observed. Finally, an asymmetric initial distribution of frequencies of the blobs when entering the magnetospheric boundary layer may also form an asymmetric HBO peak. 
The changes in the power-law index of the LFN as a function of photon energy can be explained by varying the width or the steepness of the lifetime distribution of the blobs entering the magnetic boundary layer (Shibazaki & Lamb 1987). The flattening, towards higher HBO frequencies, of the increase of both the fractional and absolute rms amplitude of the HBO with photon energy (Fig. 7) also constrains the detailed physical interactions occurring in the boundary layer. Fortner et al. (1989) proposed that the NBO is caused by oscillations in the electron scattering optical depth at the critical Eddington mass accretion rate. How a high frequency shoulder can be produced within this model is not clear. Both the HBO and the NBO shoulder components were detected when the rms amplitude of the HBO and the NBO was highest. In the case of the NBO, this may be a result of the higher signal-to-noise. Since the rms amplitude of the NBO shoulder component is consistent with being $`\sim `$2/3 of the NBO rms amplitude (see Table 2), combining more observations should increase the range over which this shoulder component is detected, if this ratio is constant along $`\mathrm{S}_\mathrm{z}`$. In the case of the HBO the two components seem to merge. While the fractional rms amplitude of the HBO shoulder component increased, that of the HBO decreased. When the fractional rms amplitudes were comparable, the HBO was fitted with one Lorentzian. The rms amplitude of both shoulder components increased with photon energy in a similar way as the rms amplitudes of the NBO and the HBO. So, the formation of these shoulder components seems to be a common feature of both QPO-forming mechanisms. #### 4.2.3 Radial oscillations in a viscous layer In Sco X–1, Titarchuk, Osherovich, & Kuznetsov (1999) interpreted the extra noise component in the power spectra (van der Klis et al. 1997) as due to radial oscillations in a viscous boundary layer (Titarchuk & Osherovich 1999).
If the noise component in Sco X–1 is the sub-HBO component in GX 340+0, the model of Titarchuk & Osherovich (1999) can be applied to the frequencies and dependencies we found for the sub-HBO component in GX 340+0. Fitting our data to the relation between the frequency of the extra noise component and the Keplerian frequency, using the parameters and parametrization given by Titarchuk, Osherovich, & Kuznetsov (1999), we obtained a value of $`C_N=15`$ for GX 340+0. This value is much larger than the value obtained for Sco X–1 (9.76). According to Titarchuk & Osherovich (1999) a higher $`C_N`$ value implies a higher viscosity for the same Reynolds number. ### 4.3. KHz QPOs and their peak separation Recently, Stella & Vietri (1999) have put forward a model in which the formation of the lower kHz QPO is due to the relativistic periastron precession (apsidal motion) of matter in (near) Keplerian orbits. The frequency of the upper kHz QPO peaks is the Keplerian frequency of this material. The peak separation is then equal to the radial frequency of matter in a nearly circular Keplerian orbit, and is predicted to decrease as the Keplerian frequency increases and approaches the predicted frequency at the marginally stable circular orbit. This model can explain the decrease in peak separation as observed in various sources (see Section 3.7). Beat frequency models stating that the upper kHz QPO peak is formed by Keplerian motion at a preferred radius in the disk (e.g. the sonic point radius, Miller, Lamb, & Psaltis 1998), whereas the lower kHz QPO peak is formed at the frequency of the beat between the neutron star spin frequency and this Keplerian frequency, cannot in their original form explain the decrease in peak separation in these sources. A relatively small extension of the model (Lamb, Miller, & Psaltis 1998) can, however, produce the observed decrease in peak separation.
Osherovich & Titarchuk (1999) developed a model in which the kHz QPOs arise due to radial oscillations of blobs of accreting material at the magnetospheric boundary. In their model the lower kHz QPO frequency is identified with the Keplerian frequency. Besides this QPO, two eigenmodes are identified whose frequencies coincide with the upper kHz QPO peak frequency and the frequency of the HBO component in the power spectra of Sco X–1 (Titarchuk & Osherovich 1999). Interpreting our findings within this framework did not result in stringent constraints on the model. We found that the peak separation is consistent with being constant (Fig. 10 A and B), but neither a decrease towards higher $`\dot{M}`$, as in Sco X–1, 4U 1608–52, 4U 1735–44, 4U 1702–429, and 4U 1728–34, nor a decrease towards lower $`\dot{M}`$, as predicted by Stella & Vietri (1999), can be ruled out. If the model of Stella & Vietri turns out to be the right one, the mass of the neutron star is most likely in the range of 1.8 to 2.2 $`M_{\odot }`$ (see Fig. 10 B). This is in agreement with the mass of Cyg X–2 derived by Orosz & Kuulkers (1999), and with the masses of the neutron stars derived when interpreting the highest observed kHz QPO frequencies as due to motion at or near the marginally stable orbit (Kaaret, Ford, & Chen 1997; Zhang, Strohmayer, & Swank 1997). This work was supported in part by the Netherlands Organization for Scientific Research (NWO) grant 614-51-002. This research has made use of data obtained through the High Energy Astrophysics Science Archive Research Center Online Service, provided by the NASA/Goddard Space Flight Center. This work was supported by NWO Spinoza grant 08-0 to E.P.J. van den Heuvel. MM is a fellow of the Consejo Nacional de Investigaciones Científicas y Técnicas de la República Argentina.
Support for this work was provided by NASA through the Chandra Postdoctoral Fellowship grant number PF9-10010 awarded by the Chandra X-ray Center, which is operated by the Smithsonian Astrophysical Observatory for NASA under contract NAS8-39073.
# Primordial Adiabatic Fluctuations from Cosmic Defects ## I Introduction Cosmology is entering a crucial stage, where a growing body of high-precision data will allow us to determine a number of cosmological parameters, and to identify the mechanism that produced the “seeds” for the structures we observe today . There are currently two classes of models that could be responsible for these—topological defect and inflationary models. The main difference between them is related to causality. Initial conditions for the defect network are set up on a Cauchy surface that is part of the standard history of the universe. Hence, there will not be any correlations between quantities defined at any two spacetime points whose backward light cones do not intersect on that surface. Inflation pushes this surface to much earlier times, and if the inflationary epoch is long enough there will be essentially no causality constraints. This can also be seen by noting that inflation can be defined as an epoch when the comoving Hubble length decreases. It starts out very large, and perturbations can be generated causally. Then inflation forces this length to decrease enough so that, even though it grows again after inflation ends, it’s never as large (by today) as the pre-inflationary era value. Once primordial fluctuations are produced they can simply freeze in comoving coordinates and let the Hubble length shrink and then (for small enough scales) grow past them. As a step towards identifying the specific model that operated in the early universe, one would like to determine which of the two mechanisms above was involved. The presence of super-horizon perturbations might seem a good enough test, but this is not the case: in defect models (as well as open or $`\mathrm{\Lambda }`$-models) significant contributions are generated after the epoch of last scattering due to the integrated Sachs-Wolfe effect. 
The presence of the ‘Doppler peaks’ on small angular scales is also not ideal: Turok has shown that a causal scaling source can be constructed so as to mimic inflation and reproduce its contribution to the CMB anisotropies. This source is constructed “by hand”, and there is no attempt to provide a framework in which it could be realized. In any case, it shows that inflationary predictions are not as unique as one might think. We should also mention, however, that a nice argument due to Liddle (see also ) shows that the existence of adiabatic perturbations on scales much larger than the Hubble radius implies that either inflation occurred in the past, the perturbations were there as initial conditions, or causality (or Lorentz invariance) is violated. On the other hand, it is also possible to construct “designer inflation” models that would have no secondary Doppler peaks, although these suffer from analogous caveats and they would still be identifiable by other means . Finally, there are Gaussianity tests. There have been recent claims of a non-Gaussian component in the CMB (but see also ). Defects will generally produce non-Gaussian fluctuations on small enough scales , whereas the simplest inflationary models produce Gaussian ones. It’s possible to build inflationary models that produce, eg. non-Gaussianity with a chi-squared distribution , but if one found non-Gaussianity in the form of line discontinuities, then it is hard to see how cosmic strings could fail to be involved. This discussion shows that although defect and inflationary models have of course a number of distinguishing characteristics, there is a greater overlap between them than most people would care to admit. It is also easy to obtain models where both defects and inflation generate density fluctuations . The aim of this letter is to present a further example of this overlap. 
We discuss a model where the primordial fluctuations are generated by a defect network, but are nevertheless very similar to a standard inflationary model. The only difference between these models and the standard inflationary scenario will be a small non-Gaussian component. A detailed discussion will be presented in a forthcoming publication . ## II The model Our model follows the recent work on so-called ‘varying speed of light’ theories , and more particularly the spirit of ‘two-metric’ theories , having two natural speed parameters, say $`c_\varphi `$ and $`c`$; the first is relevant for the dynamics of the scalar field which will produce topological defects, while the second is the ordinary speed of light that is relevant for gravity and all standard model interactions. We assume that $`c_\varphi \gg c`$ so that the correlation length of the network of topological defects will be much greater than the horizon size. We could, in analogy with , define our effective theory by means of an action, and postulate a relation between the two metrics. However, this is not needed for the basic point we’re discussing, so we leave it for a future publication . We concentrate on the case of cosmic strings, whose dynamics and evolution are better known than those of other defects although much of what we will discuss will apply to them as well. Note that $`c_\varphi `$ could either be a constant (say $`[g_\varphi ]_{00}=(c_\varphi ^2/c^2)g_{00}`$) or, as in one could set up a model such that the two speeds are equal at very early and at recent times, and between these two epochs there is a period, limited by two phase transitions, where $`c_\varphi \gg c`$. As will become clear below, the basic mechanism will work in both cases, although the observational constraints on it will be different for each specific realization.
The string network evolution is qualitatively analogous to the standard case , and in particular a “scaling” solution will be reached after a relatively short transient period. The long-string characteristic length (or “correlation length”) $`L`$ will evolve as $`L=\gamma c_\varphi t`$, with $`\gamma =𝒪(1)`$, while the string RMS velocity will obey $`v_\varphi =\beta c_\varphi `$, with $`\beta <1`$. Note, however, that there are some differences relative to the standard scenario. The first one is obvious: if $`c_\varphi \gg c`$, the string network will be outside the horizon, measured in the usual way. Hence these defects will induce fluctuations when they are well outside the horizon, thus avoiding causality constraints. Note that compensation now acts outside the ‘$`c_\varphi `$-horizon’. We expect the effect of gravitational back-reaction to be much stronger than in the standard case . The general effect of the back-reaction is to reduce the scaling density and velocity of the network relative to the standard value . Thus we should expect fewer defects per “$`c_\varphi `$-horizon” than in the standard case. However, despite this strong back-reaction, strings will still move relativistically. It can be shown that although back-reaction can slow strings down by a measurable amount, only friction forces can force the network into a strongly non-relativistic regime. Thus we expect $`v_\varphi `$ to be somewhat lower than $`c_\varphi `$, but still larger than $`c`$. Only in the case of monopoles, which are point-like, would one expect the defect velocities to drop below $`c`$ due to graviton radiation . This does not happen for extended objects, since their tension naturally tends to make the dynamics take place with a characteristic speed $`c_\varphi `$ . This is actually crucial: if the network were completely frozen while it was outside the horizon (as in standard scenarios ) then no significant perturbations would be generated.
A third important aspect is that the symmetry-breaking scale, say $`\mathrm{\Sigma }`$, which produces the defects, can be significantly lower than the GUT scale, as density perturbations can grow for a longer time than usual. The earlier the defects are formed, the lighter they could be. Proper normalization of the model will produce a further constraint on $`\mathrm{\Sigma }`$. Finally, in the case where $`c_\varphi `$ is a time-varying quantity which only departs from $`c`$ for a limited period, the defects will become frozen and start to fall inside the horizon after the second phase transition. Here we require that the defects are sufficiently outside the horizon and are relativistic when density fluctuations on the observable scales are generated. This will introduce additional constraints on model parameters, notably on the epochs at which the phase transitions take place. ## III Cosmological consequences In the synchronous gauge, the linear evolution equations for radiation and cold dark matter perturbations, $`\delta _r`$ and $`\delta _m`$, in a flat universe with zero cosmological constant are $`\ddot{\delta }_m+{\displaystyle \frac{\dot{a}}{a}}\dot{\delta }_m-{\displaystyle \frac{3}{2}}\left({\displaystyle \frac{\dot{a}}{a}}\right)^2\left({\displaystyle \frac{a\delta _m+2a_{eq}\delta _r}{a+a_{eq}}}\right)=4\pi G\mathrm{\Theta }_+,`$ (1) $`\ddot{\delta }_r-{\displaystyle \frac{1}{3}}\nabla ^2\delta _r-{\displaystyle \frac{4}{3}}\ddot{\delta }_m=0,`$ (2) where $`\mathrm{\Theta }_{\alpha \beta }`$ is the energy-momentum tensor of the external source, $`\mathrm{\Theta }_+=\mathrm{\Theta }_{00}+\mathrm{\Theta }_{ii}`$, $`a`$ is the scale factor, “eq” denotes the epoch of radiation-matter equality, and a dot represents a derivative with respect to conformal time. We will consider the growth of super-horizon perturbations with $`ck\eta \ll 1`$. Then eqn.
(1) becomes: $$\ddot{\delta }_m+\frac{\dot{a}}{a}\dot{\delta }_m-\frac{1}{2}\left(\frac{\dot{a}}{a}\right)^2\left(\frac{3a+8a_{eq}}{a+a_{eq}}\right)\delta _m=4\pi G\mathrm{\Theta }_+,$$ (3) and $`\delta _r=4\delta _m/3`$. Its solution, with initial conditions $`\delta _m=0`$, $`\dot{\delta }_m=0`$, can be written as $$\delta _m^S(𝐱,\eta )=4\pi G\int _{\eta _i}^\eta 𝑑\eta ^{\prime }\int d^3x^{\prime }𝒢(X;\eta ,\eta ^{\prime })\mathrm{\Theta }_+(𝐱^{\prime },\eta ^{\prime }),$$ (4) $$𝒢(X;\eta ,\eta ^{\prime })=\frac{1}{2\pi ^2}\int _0^{\mathrm{\infty }}\stackrel{~}{𝒢}(k;\eta ,\eta ^{\prime })\frac{\mathrm{sin}kX}{kX}k^2𝑑k.$$ (5) Here $`X=|𝐱-𝐱^{\prime }|`$ and ‘S’ indicates that these are the ‘subsequent’ fluctuations, according to the notation of , to be distinguished from ‘initial’ ones. We are interested in computing the inhomogeneities at late times in the matter era. When $`\eta _0\gg \eta _{eq}`$, the Green functions are dominated by the growing mode, $`a_0/a_{eq}`$, so the function we would like to solve for is $$T(k;\eta )=\underset{\eta _0/\eta _{eq}\rightarrow \mathrm{\infty }}{lim}\frac{a_{eq}}{a_0}\stackrel{~}{𝒢}(k,\eta _0,\eta ).$$ (6) Consider the growth of super-horizon perturbations, for which the transfer function can be written $$T(0;\eta )=\frac{\eta _{eq}}{10(3-2\sqrt{2})\eta }.$$ (7) Linear perturbations induced by defects are the sum of initial and subsequent perturbations: $`\delta _m(k;\eta _0)`$ $`=`$ $`\delta _m^I(k;\eta _0)+\delta _m^S(k;\eta _0)`$ (8) $`=`$ $`4\pi G(1+z_{eq}){\displaystyle \int _{\eta _i}^{\eta _0}}𝑑\eta T_c(k;\eta )\stackrel{~}{\mathrm{\Theta }}_+(k;\eta ),`$ (10) where $`\eta _i`$ is the epoch of defect formation. The transfer function for the subsequent perturbations, those generated actively, was obtained in eqn. (7) for super-horizon perturbations with $`ck\eta _0\ll 1`$. To include compensation for the initial perturbations, $`\delta _m^I`$, we make the substitution $`T_c(k;\eta )=\left(1+(k_c/k)^2\right)^{-1}T(k;\eta )`$, where $`k_c\simeq (c_\varphi \eta )^{-1}`$ is a long-wavelength cut-off at the compensation scale.
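The dominance of the growing mode invoked in eq. (6) can be checked directly from the homogeneous part of eq. (3): deep in the matter era $`a\gg a_{eq}`$ and $`a\propto \eta ^2`$, so $`\dot{a}/a=2/\eta `$ and the equation reduces to an Euler equation,

```latex
\ddot{\delta}_m + \frac{2}{\eta}\,\dot{\delta}_m - \frac{6}{\eta^{2}}\,\delta_m = 0
\quad\Longrightarrow\quad
\delta_m \propto \eta^{p}, \qquad p^{2} + p - 6 = 0
\quad\Longrightarrow\quad p = 2 \;\;\text{or}\;\; p = -3,
```

so the growing mode scales as $`\delta _m\propto \eta ^2\propto a`$, which is precisely the $`a_0/a_{eq}`$ growth entering eq. (6). The compensation factor in $`T_c`$ then simply suppresses this super-horizon response on scales beyond the compensation length, $`k\lesssim k_c`$.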
This results from the fact that defect perturbations cannot propagate with a velocity greater than $`c_\varphi `$. For $`(c_\varphi \eta _0)^{-1}\ll k\ll (c_\varphi \eta _i)^{-1}`$ the analytic expression for the power spectrum of density perturbations induced by defects is $$P(k)=16\pi ^2G^2(1+z_{\mathrm{eq}})^2\int _0^{\mathrm{\infty }}𝑑\eta \mathcal{F}(k,\eta )|T_c(k,\eta )|^2,$$ (11) where $`\mathcal{F}(k,\eta )`$ is the structure function which can be obtained directly from the unequal-time correlators . It can be shown that for a scaling network $`\mathcal{F}(k,\eta )=\mathcal{F}(k\eta )`$ which, combined with the above relations, gives $$P(k)\propto \int _0^{\mathrm{\infty }}𝑑\eta 𝒮(k\eta )/\eta ^2\propto k$$ (12) where the function $`𝒮`$ is just the structure function, $`\mathcal{F}`$, times the compensation cut-off function. Up until now we only considered the spectrum of primordial (i.e., generated at very early times) fluctuations induced by cosmic defects. In our model a Harrison-Zel’dovich spectrum is predicted just as in the simplest inflationary models. The final processed spectrum will also be the same as for the simplest inflationary models. We investigate the Gaussianity of the string-induced fluctuations as in . The conclusions can easily be extended to other defect models. In the standard cosmic string scenario the structure function $`\mathcal{F}(k,\eta )`$ has a turn-over scale at the network correlation length, $`k_\xi \simeq 20(c_\varphi \eta )^{-1}`$ . At a particular time, perturbations induced on scales larger than the correlation length are generated by many string elements and are expected to have a nearly Gaussian distribution. On the other hand, perturbations induced on smaller scales are very non-Gaussian because they can be either very large within the regions where a string has passed by or else very small outside these.
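The k-scaling in eq. (12) can be made explicit with the substitution $`u=k\eta `$ (so $`d\eta =du/k`$ and $`\eta =u/k`$):

```latex
P(k) \;\propto\; \int_0^{\infty} \frac{\mathcal{S}(k\eta)}{\eta^{2}}\,d\eta
\;=\; \int_0^{\infty} \frac{\mathcal{S}(u)}{(u/k)^{2}}\,\frac{du}{k}
\;=\; k \int_0^{\infty} \frac{\mathcal{S}(u)}{u^{2}}\,du \;\propto\; k,
```

provided the $`u`$-integral converges (the compensation cut-off built into $`𝒮`$ suppresses the $`u\rightarrow 0`$ end); this is the Harrison-Zel’dovich form quoted above. Whether the fluctuations laid down this way are Gaussian depends instead on how each scale compares with the string correlation length at the time of generation.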
This allows us to roughly divide the power spectrum of cosmic-string-seeded density perturbations into a nearly Gaussian component generated when the string correlation length was smaller than the scale under consideration, and a strongly skewed non-Gaussian component generated when the string correlation length was larger (we call these the ‘Gaussian’ and ‘non-Gaussian’ contributions respectively). The ratio of these two components may be easily computed by splitting the structure function in (11) into two parts: a Gaussian part $`\mathcal{F}_\mathrm{g}(k,\eta )=\mathcal{F}(k,\eta )`$ for $`k<k_\xi `$ ($`\mathcal{F}_\mathrm{g}=0`$ for $`k>k_\xi `$) and a non-Gaussian part $`\mathcal{F}_{\mathrm{ng}}(k,\eta )=\mathcal{F}(k,\eta )`$ for $`k>k_\xi `$ ($`\mathcal{F}_{\mathrm{ng}}=0`$ for $`k<k_\xi `$). We can then integrate (11) with this Gaussian/non-Gaussian split to compute the relative contributions to the total power spectrum. The final result will depend on the choice of compensation scale $`k_c`$. If we take the maximum compensation scale allowed by causality ($`k_c\simeq 2(c_\varphi \eta )^{-1}`$) the non-Gaussian contribution to the total power spectrum will be less than $`5\%`$. In any case, the non-Gaussian contribution will always be smaller than the Gaussian one if, as expected, the compensation scale is larger than or equal to the correlation length of the string network ($`k_c\le k_\xi `$). Departures from a Gaussian distribution are scale-independent and analogous to those of standard defect models on large scales. By allowing for a characteristic velocity for the scalar field $`c_\varphi `$ much larger than the velocity of light (and gravity), we were able to construct a model with primordial, adiabatic ($`\delta _r=4\delta _m/3`$), nearly Gaussian fluctuations whose primordial spectrum is of the Harrison-Zel’dovich form. This is almost indistinguishable from the simplest inflationary models (as far as structure formation is concerned) except for the small non-Gaussian component which could be detected with future CMB experiments.
The $`C_l`$ spectrum and the polarization curves of the CMBR predicted by this model should also be identical to the ones predicted in the simplest inflationary models as the perturbations in the CMB are not generated ‘directly’ by the defects. ## IV Discussion and conclusions We presented further evidence of the non-negligible overlap between topological defect and inflationary structure formation models. The key ingredient is having the speed of the defect-producing scalar field much larger than the speed of gravity and standard model particles. This provides a ‘violation of causality’, as required by . The only distinguishing characteristic of this model, by comparison with the simplest inflationary models, will be a small non-Gaussian signal. Admittedly our model could be considered “unnatural” in the context of our present theoretical prejudices, and the same can certainly be said about other examples such as “mimic inflation” and “designer inflation” . Be that as it may, however, the fact that these examples can be constructed (and one wonders how many more are possible) highlights the fact that extracting robust predictions from cosmological observations is a much more difficult and subtle task than many experimentalists (and theorists) believe. ###### Acknowledgements. We thank Paul Shellard for useful discussions and comments. C.M. is funded by FCT (Portugal) under ‘Programa PRAXIS XXI’, grant no. PRAXIS XXI/BPD/11769/97. We thank CAUP for the facilities provided.
# The Merger Rate to Redshift One from Kinematic Pairs: Caltech Faint Galaxy Redshift Survey XI ## 1 Introduction Merging is a fundamental mode of stellar mass addition to galaxies. Moreover, merging brings in new gas and creates gravitational disturbances that enhance star formation or fuel a nuclear black hole. The general process of substructure infall may be the rate fixing process for the buildup of a galaxy’s stars and consequently may largely regulate its luminosity history. Gravitational forces on relatively large scales dominate merger dynamics which allows direct observation of the mechanism, although with the considerable complication that dark matter dominates the mass. N-body simulations (Toomre & Toomre, 1972; Barnes & Hernquist, 1992) give the detailed orbital evolution, morphological disturbances and eventual outcomes of the encounters of pairs of galaxies. The purpose of this paper is to estimate the rate of mass gain per galaxy due to mergers over the redshift zero to one interval. Our primary statistic is the fractional luminosity in close kinematic pairs, which is readily related to n-body simulations and sidesteps morphological interpretation. This approach provides a clear sample definition which is closely connected to the large scale dynamics of merging. In common with all merger estimates it requires an estimate of the fraction of the pairs that will merge and a mean time to merger. The number of kinematic pairs is proportional to the volume integral at small scales of the product of two-point correlation function, $`\xi `$, and the luminosity function (LF). The high luminosity galaxies appear to be evolving purely in luminosity (Lilly et al., 1995; Lin et al., 1999), which can be easily compensated. The measured evolution of $`\xi `$ suggests that the density of physical pairs should not vary much with redshift, $`(1+z)^{0\pm 1}`$ (LeFèvre et al., 1996; Carlberg et al., 1997, 1999). 
This inference is in notable contrast with the pair counts or morphological typing approaches to merger estimation (Zepf & Koo, 1989; Carlberg, Pritchet & Infante, 1994; Yee & Ellingson, 1995; Patton et al., 1997; LeFèvre et al., 1999), which suggest that the merger rate by number varies as $`(1+z)^{3\pm 1}`$. HST photometric pairs, with no redshift information, lead to a dependence of $`(1+z)^{1.2\pm 0.4}`$ (Neuschaefer et al., 1997). In the next section we combine the Caltech Faint Galaxy Redshift Survey (CFGRS) and the Canadian Network for Observational Cosmology field galaxy survey (CNOC2), from which we construct evolution-compensated, volume-limited subsamples. In Section 3 we measure the fractional luminosity in 50 and 100$`h^{-1}`$ kpc companions as a function of redshift. The CNOC2 sample is used in Section 4 to relate this wide pair sample to a close pair sample which is more securely converted into a mass merger rate. Section 5 discusses our conclusions. We use $`H_0=100h\mathrm{km}\mathrm{s}^{-1}\mathrm{Mpc}^{-1}`$, $`\mathrm{\Omega }_M=0.2`$ in open and flat cosmologies. ## 2 The CFGRS and CNOC2 Volume-Limited Samples The CFGRS sample of the HDF plus flanking fields is discussed in detail elsewhere (Hogg et al., 1999; Cohen et al., 2000). We use the high-coverage subsample lying within a 240 arcsecond radius circle, with a center located at 12<sup>h</sup> 36<sup>m</sup> 50<sup>s</sup>, +62° 12′ 55<sup>′′</sup> (J2000). The computed magnitude selection function, $`s(m_R)`$ (in Cousins R), is accurately approximated as a constant 90% spectroscopic completeness for $`m_R<22.8`$ mag with a linear decline to 19% at $`m_R<23.4`$ mag, our sample limit. The magnitude weight is $`1/s(m_R)`$. The CFGRS k-corrections and evolution compensation are here approximated as $`k(z)=Kz`$ mag from the tables of Poggianti (1997). For galaxies that Cohen et al. 
(2000) classify as “E” (emission), $`K=1.0`$; “A” types have $`K=2.0`$, and all types have $`K=1.7`$. The CNOC2 selection weights and k-corrections are discussed in Yee et al. (2000). The evolution of the luminosity function is approximated as a uniform $`M_{\ast }(z)=M_{\ast }-Qz`$, with $`Q\simeq 1`$ (Lin et al., 1999), which we use over the entire CNOC2-CFGRS redshift range. The kinematic pair fraction is directly proportional to the mean density of the sample and is therefore sensitively dependent on correcting to a complete and uniform sample (Patton et al. 2000a). The most straightforward approach is to impose a strict volume limit. For our primary sample we will limit the CFGRS and the CNOC2 samples at $`M_R^{k,e}=-19.8+5\mathrm{log}h`$ mag, which yields volume-limited samples of about 300 CFGRS galaxies between redshift 0.3 to 1.1 and 3000 CNOC2 galaxies between 0.1 to 0.5. The volume density of the sample is approximately constant at $`1.2\times 10^{-2}h^3`$ Mpc<sup>-3</sup> over the entire redshift range for $`\mathrm{\Omega }_\mathrm{\Lambda }=0.8`$ but rises roughly as $`(1+z)^{0.8}`$ for $`\mathrm{\Omega }_\mathrm{\Lambda }=0`$. Both the CFGRS and the CNOC2 surveys are multiply masked, which minimizes the effects of slit crowding; however, there is still a measurable pair selection function. The CNOC2 catalogue has about a 20% deficiency of close angular pairs. We model the measured angular pair selection weight as $`w(\theta )=[1+a_s\mathrm{tanh}(\theta /\theta _s)]^{-1},`$ where $`[a_s,\theta _s]`$ is $`[0.5,5^{\prime \prime }]`$ for the CFGRS sample and $`[0.3,10^{\prime \prime }]`$ for the CNOC2 sample, with typical pair corrections being 10%. ## 3 The Pair Luminosity Fraction The preferred choice of pair statistic depends on the application (Patton et al. 2000a). Here we are primarily interested in the impact of merging on galaxy mass increase, for which the k-corrected, evolution-compensated R luminosity is a stand-in. 
The rate of merging per galaxy depends on the density of galaxies in the near neighborhood and their velocity distribution. As a practical redshift-space estimator, we compute the fractional luminosity in close kinematic pairs, $$f_L(z|\mathrm{\Delta }v^{\mathrm{max}},r_p^{\mathrm{max}},M_R^{k,e})=\frac{\sum _j\sum _{i\ne j,\mathrm{\Delta }v_{ij}<\mathrm{\Delta }v^{\mathrm{max}},r_{P,ij}<r_P^{\mathrm{max}}}w_jw_iw(\theta _{ij})L_i}{\sum _jw_jL_j},$$ (1) where the weights, $`w_i`$, allow for the magnitude selection function. Note that the $`ij`$ and $`ji`$ pairs are both counted. The ratio has the benefit of being fairly stable for different luminosity limits, self-normalizing for luminosity evolution, and identical to a mass ratio for a fixed $`M/L`$ population. For an unperturbed pair luminosity function it is mathematically identical to the $`N_c`$ of Patton et al. (2000a), although constructed out of somewhat different quantities. The two parameters, $`\mathrm{\Delta }v^{\mathrm{max}}`$ and $`r_p^{\mathrm{max}}`$, are chosen on the basis of merger dynamics and the characteristics of the sample. The rate of mass increase per galaxy is calculated from this statistical estimator using a knowledge of merger dynamics and the measured correlations and kinematics of galaxy pairs in the sample. The mean fractional pair luminosity, based on 18 CFGRS pairs and 91 CNOC2 pairs with $`\mathrm{\Delta }v\le 1000\text{ km s}^{-1}`$ and $`5\le r_p\le 50h^{-1}\mathrm{kpc}`$, is displayed in Figure 1. These kinematic separation parameters are larger than is suitable for reliably identifying “soon-to-merge” pairs. However, they provide a statistically robust connection to those pairs and take into account the lower velocity precision and sample size of the CFGRS relative to CNOC2. The errors are computed from the pair counts, $`n_p^{1/2}`$. 
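Equation (1) can be written as a direct double sum over the catalog. The sketch below shows the shape of that computation under simplifying assumptions: the angular pair weight $`w(\theta )`$ is set to unity, and the field names and the tiny test catalog are hypothetical, not survey data.

```python
import math

def f_L(galaxies, dv_max=1000.0, rp_max=50.0):
    """Fractional luminosity in close kinematic pairs (Eq. 1).
    Each galaxy is a dict with 'L' (luminosity), 'w' (magnitude weight
    1/s(m_R)), 'v' (velocity, km/s) and 'x', 'y' (projected position,
    h^-1 kpc). The angular pair weight w(theta) is taken as 1 here."""
    num, den = 0.0, 0.0
    for gj in galaxies:
        den += gj['w'] * gj['L']
        for gi in galaxies:
            if gi is gj:
                continue
            rp = math.hypot(gj['x'] - gi['x'], gj['y'] - gi['y'])
            if abs(gj['v'] - gi['v']) < dv_max and rp < rp_max:
                # both ij and ji orderings are counted, as in the text
                num += gj['w'] * gi['w'] * gi['L']
    return num / den

cat = [
    {'L': 1.0, 'w': 1.0, 'v': 30000.0, 'x': 0.0, 'y': 0.0},
    {'L': 1.0, 'w': 1.0, 'v': 30200.0, 'x': 20.0, 'y': 0.0},   # close pair
    {'L': 1.0, 'w': 1.0, 'v': 45000.0, 'x': 500.0, 'y': 0.0},  # isolated
]
# two of three equal-luminosity, equal-weight galaxies sit in a pair,
# so the estimator returns 2/3
frac = f_L(cat)
```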
The measurements of $`f_L(z)`$ in Figure 1 are fit to $`f_L(\mathrm{\Delta }v,r_p,M_R^{k,e})\propto (1+z)^{m_L}`$, finding $`[f_L,m_L]`$ of $`[0.14\pm 0.07,0\pm 1.4]`$ for $`r_p\le 50h^{-1}\mathrm{kpc}`$ pairs and $`[0.37\pm 0.7,0.1\pm 0.5]`$ for $`r_p\le 100h^{-1}\mathrm{kpc}`$, both for $`\mathrm{\Omega }_M=0.2,\mathrm{\Omega }_\mathrm{\Lambda }=0.8`$. The increase of $`f_L`$ with $`r_p`$ is consistent with a $`\gamma =1.8`$ two-point correlation function. If $`\mathrm{\Omega }_M=0.3,\mathrm{\Omega }_\mathrm{\Lambda }=0.7`$, then $`m_L`$ at 100$`h^{-1}`$ kpc rises by about 0.05, whereas if $`\mathrm{\Omega }_M=0.2,\mathrm{\Omega }_\mathrm{\Lambda }=0.0`$, then $`m_L=0.50`$. The increase is largely a result of the rise in the implied co-moving sample density over this redshift range. The merger probability of a kinematic pair depends sensitively on the pairwise velocity dispersion, $`\sigma _{12}`$, of galaxies. The model pairwise velocity distribution is computed as the convolution of the correlation function with the distribution of random velocities. The infall velocities are negligible at these small separations and we will assume that the peculiar velocities are drawn from a Gaussian distribution. The measured fraction of the CNOC2 pair sample with velocities smaller than some $`\mathrm{\Delta }v`$, normalized to the value at 1000 km s<sup>-1</sup>, is displayed in Figure 2. The 50$`h^{-1}`$ kpc wide pairs limited at $`-19.5`$ mag are plotted as open squares; the 20$`h^{-1}`$ kpc pairs limited at $`-18.5`$ and $`-19.5`$ mag are plotted as octagons and diamonds, respectively. The upper curve assumes that $`\sigma _{12}`$ is 200 km s<sup>-1</sup> and the lower one 300 km s<sup>-1</sup>, which approximately span the data. ## 4 Merger Rate Estimation The merger rate is best estimated from very close kinematic pairs, 20$`h^{-1}`$ kpc or less, about half of which are physically close and have significant morphological disturbance (Patton et al. 2000a). 
However, the fraction of galaxies in such close pairs is about one percent, giving poor statistics. Since the number of pairs increases smoothly as $`r^{3-\gamma }`$, where $`\gamma `$ is the slope of the small-scale correlation function, we can use pairs at somewhat larger separations as statistically representative of the close pairs; however, we prefer to stay within the radius of virialized material around a galaxy over our redshift range, which is no larger than about 100$`h^{-1}`$ kpc. The mass accretion rate from major mergers is therefore estimated as $$\dot{\mathcal{M}}=\frac{1}{2}f_L(\mathrm{\Delta }v,r_p,z)C_{zs}(\mathrm{\Delta }v,\gamma )F(v<v_{mg})MT_{mg}^{-1}(z,r_p),$$ (2) where the factor of one half allows for the double counting of pairs, $`C_{zs}(\mathrm{\Delta }v,\gamma )`$ converts from redshift-space to real-space pairs, $`F`$ gives the fraction of the pairs that will merge in the next $`T_{mg}`$ (the “last orbit” in-spiral time from $`r_p`$), and $`M`$ is the mean incoming mass as estimated assuming a constant M/L. For the relatively massive galaxies considered here the dynamical friction is so strong that the process is closer to violent relaxation, with little timescale dependence on the masses. The measured ratio of the numbers of 50 and 100$`h^{-1}`$ kpc pairs to 20$`h^{-1}`$ kpc pairs in the CNOC2 sample is $`3.8\pm 1.0`$ and $`9.4\pm 3.0`$, respectively, in accord with the expectation of a growth as $`r_p^{3-\gamma }`$ with the inner cutoff of 5$`h^{-1}`$ kpc. Not all kinematic pairs are close in physical space. The relation between the kinematic pairs closer than $`r_p`$ and $`\mathrm{\Delta }v`$ and pairs with a 3D physical distance $`r_p`$ is readily evaluated by integrating the velocity-convolved correlation function over velocity and projected radius and ratioing to the 3D integral of the correlation function. 
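The quoted consistency of the measured pair ratios with $`r_p^{3-\gamma }`$ growth can be checked directly. Assuming, as stated above, a cumulative pair count $`N(<r_p)\propto r_p^{3-\gamma }`$ with $`\gamma =1.8`$ and an inner cutoff of 5$`h^{-1}`$ kpc (everything else here is just arithmetic on those stated numbers):

```python
def pair_ratio(r_outer, r_ref=20.0, r_min=5.0, gamma=1.8):
    """Ratio of cumulative pair counts N(<r_outer)/N(<r_ref) for
    N(<r) proportional to r**(3-gamma) - r_min**(3-gamma),
    with all radii in h^-1 kpc."""
    p = 3.0 - gamma
    return (r_outer ** p - r_min ** p) / (r_ref ** p - r_min ** p)

r50 = pair_ratio(50.0)    # ~3.5, cf. the measured 3.8 +/- 1.0
r100 = pair_ratio(100.0)  # ~8.3, cf. the measured 9.4 +/- 3.0
```

Both predictions land within the quoted error bars, which is the sense in which the measured ratios "accord with" the power-law expectation.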
We find that $`C_{zs}=0.54`$ for $`\mathrm{\Delta }v=1000\text{ km s}^{-1}`$ and $`\gamma =1.8`$. There is support for this value on the basis of morphological classification, as tested in Patton et al. (1999), where about half of the kinematic pairs exhibited strong tidal features. The fraction of physically close pairs that are at sufficiently low velocity to merge is a key part of the rate calculation. It is clear that many galaxies will have close encounters which do not lead to immediate mergers, although mergers could of course occur on subsequent orbital passages. The key quantity that we need is the ratio of the critical velocity to merge, $`v_{mg}`$, to $`\sigma _{12}`$. The timescale for close pairs to merge is much shorter than the time over which morphological disturbances are clearly evident, by nearly an order of magnitude (Barnes & Hernquist, 1992; Mihos & Hernquist, 1996; Dubinski, Mihos & Hernquist, 1999). This is one of our reasons for preferring kinematic pairs as a merger estimator. The simulation results indicate that the time to merge is, on the average, roughly that of a “half-circle” orbit, which at $`r_p`$ of 20$`h^{-1}`$ kpc at a velocity of 200 km s<sup>-1</sup> is close to 0.3 Gyr. A straight-line orbit with instantaneous merging would merge in about 0.1 Gyr, although that is not likely to be representative. To compute the merger probability, $`F(<v_{mg})`$, we need to know the maximal velocity to merge, $`v_{mg}`$, at a physical separation of 20$`h^{-1}`$ kpc for a typical $`M_{\ast }`$ galaxy. A not very useful lower bound is fixed by the Keplerian escape velocity at 20$`h^{-1}`$ kpc, $`\sqrt{2}v_c`$, where the circular velocity $`v_c`$ is approximately 200 km s<sup>-1</sup>. An upper bound to $`v_{mg}`$ is the velocity that an object would have if it is captured into a galaxy’s extended dark halo at the virialization radius and orbits to 20$`h^{-1}`$ kpc with no dynamical friction. 
The virialization radius is approximately the radius where the mean interior overdensity is $`200\rho _c`$, implying $`r_{200}=v_c/(10H_0)`$, or about 200$`h^{-1}`$ kpc for our typical galaxy. The largest possible apogalactic velocity at $`r_{200}`$ is $`v_c`$, which leads to an undissipated velocity at 20$`h^{-1}`$ kpc of $`2.37v_c`$. Using $`\sigma _{12}=200(300)\text{ km s}^{-1}`$ at 20$`h^{-1}`$ kpc, we find that the fraction of all physical pairs that merge in one $`T_{mg}`$ is about 0.40(0.16). Therefore, we will normalize to a merger probability of 0.3, noting the 50% or so uncertainty. The absolute magnitude limit of $`-19.8+5\mathrm{log}h`$ mag corresponds to $`L\simeq 0.5L_{\ast }`$, which contains about 58% of the luminosity for the mean CNOC2 LF, $`M_{\ast }=-20.4`$ and $`\alpha =-1.2`$. To make our merger rate inclusive of major mergers we normalize to $`L\ge 0.2L_{\ast }`$, which includes 85% of the luminosity. Within the current statistical accuracy, the paired and field galaxies have identical LFs. On the basis of n-body experiments (Barnes & Hernquist, 1992), galaxies with masses greater than about $`0.2M_{\ast }`$ will merge in approximately one orbital time. On the basis of these considerations we find that the rate of mass accumulation of galaxies with luminosities of $`0.2L_{\ast }`$ and above is $$\dot{\mathcal{M}}=(0.02\pm 0.01)M_{\ast }(1+z)^{0.1\pm 0.5}\frac{F(v_{mg}/\sigma _{12})}{0.3}\frac{0.3\mathrm{Gyr}}{T_{mg}}\mathrm{Gyr}^{-1},$$ (3) where we have adopted the 100$`h^{-1}`$ kpc $`m_L`$ value for a flat, low-density cosmology and explicitly assumed that the velocity and timescale factors do not vary over this redshift range, as expected at these small scales in a low $`\mathrm{\Omega }`$ universe (Colin, Carlberg & Couchman, 1997). There is direct evidence that, once evolution is compensated, the luminous galaxies show no evolution in their circular velocities (Vogt et al., 1997; Mallen-Ornelas, Lilly, Crampton & Schade, 1999). 
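The luminosity fractions above the two cuts follow from integrating the luminosity-weighted Schechter function, i.e. $`\mathrm{\Gamma }(\alpha +2,x_0)/\mathrm{\Gamma }(\alpha +2)`$ with $`x=L/L_{\ast }`$. The sketch below shows the shape of that calculation with crude quadrature; the exact percentages depend on the adopted faint limit and normalization conventions, so this is not claimed to reproduce the quoted 58% and 85%.

```python
import math

def lum_fraction_above(x0, alpha=-1.2, x_lo=1e-4, x_hi=50.0, n=100000):
    """Fraction of the total luminosity density contributed by galaxies
    with L > x0 * L_star, for a Schechter function
    phi(x) dx ~ x**alpha * exp(-x) dx:
    integral_{x0} x**(alpha+1) e^-x dx / integral_{x_lo} x**(alpha+1) e^-x dx.
    The faint cut x_lo regularizes the (integrable) divergence at L -> 0."""
    def g(x):
        return x ** (alpha + 1.0) * math.exp(-x)
    def integral(a, b):
        h = (b - a) / n
        return h * (0.5 * (g(a) + g(b)) + sum(g(a + i * h) for i in range(1, n)))
    return integral(x0, x_hi) / integral(x_lo, x_hi)

frac_05 = lum_fraction_above(0.5)  # luminosity fraction above 0.5 L*
frac_02 = lum_fraction_above(0.2)  # luminosity fraction above 0.2 L*
```

Lowering the cut from $`0.5L_{\ast }`$ to $`0.2L_{\ast }`$ adds a substantial slice of the luminosity density, which is why the rate is renormalized before being quoted as inclusive of major mergers.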
## 5 Discussion and Conclusions Our main observational result is that for galaxies with $`M_R^{k,e}\le -19.8+5\mathrm{log}h`$ mag, the fraction of galaxy luminosity in 50$`h^{-1}`$ kpc wide kinematic pairs is about 14%, with no noticeable redshift dependence over the redshift zero to one range. This implies an integrated mass accretion rate of about 2% of $`L_{\ast }`$ per Gyr per galaxy for merging galaxies having $`L\ge 0.2L_{\ast }`$. Our rate is uncertain at about the factor of two level due to uncertainty in the dynamical details of merging for our sample definitions. This merger rate implies a 15% mass increase in an $`M_{\ast }`$ galaxy since redshift one. If the correlations of lower luminosity galaxies are only somewhat weaker than these (Carlberg et al., 1998), then the same $`0.15M_{\ast }`$ merged-in mass causes a 50% mass increase in a 0.3$`M_{\ast }`$ galaxy. There are several issues that require further investigation. First, the rate of merging of similarly selected kinematic pairs should be studied in appropriately matched n-body experiments to better determine the orbital timescales. Second, the absence of a redshift dependence of $`\sigma _{12}`$ and $`v_{mg}`$ needs to be observationally checked. Third, the connection between close kinematic pairs and morphologically disturbed galaxies, which does conform to the kinematic pair predictions at low redshift (Patton et al. 2000a), needs to be better understood at high redshift. This research was supported by NSERC and NRC of Canada. HL and DWH acknowledge support provided by NASA through Hubble Fellowship grants HF-01110.01-98A and HF-01093.01-97A, respectively, awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., for NASA under contract NAS 5-26555.
# 1 Introduction ## 1 Introduction Turbulent boundary layer flow over a smooth flat plate outside a close vicinity of the plate tip contains two unambiguous elements: The viscous sublayer adjacent to the plate, where the velocity gradient is large and the viscous stress is comparable with the Reynolds stress, and the statistically uniform free stream. According to classical theory \[<sup>1</sup>\], the region intermediate between these two consists of two layers with different properties. The first, adjacent to the viscous sublayer is a universal, Reynolds-number-independent logarithmic layer. In the second, the “wake region”, there is a smooth transition from the universal logarithmic layer to the free stream. Our analysis of all available experiments \[<sup>2-4</sup>\] contradicts this classical theory. Indeed, in the clear-cut case of a smooth plate and low free stream turbulence, the intermediate structure does consist of two layers. However, the boundary between them is sharp. Most important, both layers are self-similar, substantially Reynolds-number-dependent, and described by different scaling laws. It is interesting to note (see the details below) that the same configuration of two self-similar layers with a sharp interface between them can be seen in all runs used in \[<sup>1</sup>\] for the illustration of the wake region model. We found it possible \[<sup>2-4</sup>\] to introduce a characteristic length scale $`\mathrm{\Lambda }`$ so that the average velocity distribution in the first intermediate layer coincides with Reynolds-number-dependent scaling law obtained previously for pipe flows, when the Reynolds number is chosen as $`U\mathrm{\Lambda }/\nu `$, with $`U`$ the free stream velocity and $`\nu `$ the fluid’s kinematic viscosity. The sharp boundary between the self-similar intermediate layers also defines a length scale $`\lambda `$. 
We show, by analysis of experimental data, that these two length scales $`\lambda `$ and $`\mathrm{\Lambda }`$ are close. ## 2 Background In a previous paper \[<sup>2</sup>\] we noted that when the turbulence level in the free stream is small, the intermediate structure between the viscous sublayer and the free stream consists of two self-similar layers: one adjacent to the viscous sublayer where the average velocity profile is described by the scaling law $$\varphi =A\eta ^\alpha ,$$ (2.1) and one adjacent to the free stream where $$\varphi =B\eta ^\beta .$$ (2.2) Here $$\varphi =\frac{u}{u_{\ast }},u_{\ast }=\sqrt{\frac{\tau }{\rho }},\eta =\frac{u_{\ast }y}{\nu },$$ $`u`$ is the average velocity, $`\tau `$ is the shear stress at the wall, $`\rho `$ and $`\nu `$ are the fluid density and kinematic viscosity; $`A,B`$, $`\alpha `$ and $`\beta `$ are Reynolds-number-dependent constants. Our processing of all experimental data available in the literature \[<sup>3,4</sup>\] confirmed these observations and showed that it is always possible to find a length scale $`\mathrm{\Lambda }`$ so that, setting $`Re=U\mathrm{\Lambda }/\nu `$, we can represent the scaling law (2.1) in the form $$\varphi =\left(\frac{1}{\sqrt{3}}\mathrm{ln}Re+\frac{5}{2}\right)\eta ^{\frac{3}{2\mathrm{ln}Re}}$$ (2.3) obtained by us earlier for pipe flows (see, e.g. \[<sup>5</sup>\]). This suggests that the structure of wall regions in all wall-bounded shear flows at large Reynolds numbers is identical, if the length scale and velocity scale are properly selected. The natural question is, however, what is the physical meaning of this length scale $`\mathrm{\Lambda }`$ in boundary layer flow? This question is of substantial importance and should be clarified for proper understanding of the identity of scaling laws for different wall-bounded shear flows. 
We note that the intermediate structure has another characteristic length scale $`\lambda `$ — the wall-region thickness determined by the sharp intersection $`\eta =\eta _{\ast }`$ of the two velocity distribution laws $`\varphi =A\eta ^\alpha `$ and $`\varphi =B\eta ^\beta `$ valid in the different layers. We have $$A\eta _{\ast }^\alpha =B\eta _{\ast }^\beta ,$$ (2.4) so that $$\eta _{\ast }=\left(\frac{A}{B}\right)^{\frac{1}{\beta -\alpha }},$$ (2.5) and the wall-region thickness $`\lambda `$ is determined by the relation $$\lambda =\left(\frac{A}{B}\right)^{\frac{1}{\beta -\alpha }}\frac{\nu }{u_{\ast }}.$$ (2.6) On the other hand, the characteristic length scale $`\mathrm{\Lambda }`$ is determined by the relation $$\mathrm{\Lambda }=Re\frac{\nu }{U}=\left(\frac{u_{\ast }}{U}\right)\left(\frac{\nu }{u_{\ast }}\right)Re$$ (2.7) so that the ratio of these two scales is $$\frac{\mathrm{\Lambda }}{\lambda }=\left(\frac{u_{\ast }}{U}Re\right)\frac{1}{\eta _{\ast }}=\left(\frac{u_{\ast }}{U}Re\right)\left(\frac{B}{A}\right)^{\frac{1}{\beta -\alpha }}.$$ (2.8) ## 3 Analysis of experimental data We analyzed the recent data of J.M. Österlund presented on the Internet (www.kth.se/$`\sim `$jens/zpg/). The data seem to us to be reliable, however much we disagree with their processing and interpretation in the paper by Österlund et al \[<sup>6</sup>\] (see \[<sup>4</sup>\]). All 70 runs presented on the Internet give the characteristic broken-line average velocity distribution in $`\mathrm{lg}\eta ,\mathrm{lg}\varphi `$ coordinates (see the examples in Figure 1; all the other cases are similar), so that the possibility of determining $`A,\alpha ,B,\beta `$ and $`\eta _{\ast }`$ accurately from these experimental data is unquestionable. These results are presented in Table 1 for all Österlund’s experiments where $`Re_\theta =U\theta /\nu >10,000`$. Here $`\theta `$ is the momentum thickness; the runs in Österlund’s experimental data are labelled by $`Re_\theta `$. 
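Equations (2.5)–(2.8) are easy to evaluate once the fitted constants are known. The sketch below uses made-up illustrative values for $`A,\alpha ,B,\beta `$ and for $`u_{\ast }/U`$ (not entries from Table 1), and checks the defining identity $`A\eta _{\ast }^\alpha =B\eta _{\ast }^\beta `$ at the intersection.

```python
def intersection(A, alpha, B, beta):
    """Eta_* from Eq. (2.5): intersection of phi = A*eta**alpha
    with phi = B*eta**beta, i.e. eta_* = (A/B)**(1/(beta - alpha))."""
    return (A / B) ** (1.0 / (beta - alpha))

def scale_ratio(A, alpha, B, beta, u_star_over_U, Re):
    """Lambda/lambda from Eq. (2.8):
    (u_*/U * Re) * (B/A)**(1/(beta - alpha))."""
    return u_star_over_U * Re * (B / A) ** (1.0 / (beta - alpha))

# illustrative (not measured) values
A, alpha, B, beta = 9.0, 0.14, 20.0, 0.02
eta_star = intersection(A, alpha, B, beta)
phi1 = A * eta_star ** alpha   # inner scaling law at eta_*
phi2 = B * eta_star ** beta    # outer scaling law at eta_*
ratio = scale_ratio(A, alpha, B, beta, 0.04, 1.0e4)
```

By construction the two forms in (2.8) agree, since $`(B/A)^{1/(\beta -\alpha )}=1/\eta _{\ast }`$.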
The effective Reynolds number $`Re`$ was obtained \[<sup>4</sup>\] by the formula $$\mathrm{ln}Re=\frac{1}{2}(\mathrm{ln}Re_1+\mathrm{ln}Re_2)$$ (3.9) where $`\mathrm{ln}Re_1`$ and $`\mathrm{ln}Re_2`$ are the solutions of the equations $$\frac{1}{\sqrt{3}}\mathrm{ln}Re_1+\frac{5}{2}=A,\frac{3}{2\mathrm{ln}Re_2}=\alpha $$ (3.10) and the values of $`A`$ and $`\alpha `$ were obtained by standard statistical processing of Österlund’s data. For $`Re_\theta >10,000`$ the difference $`\delta `$ between $`\mathrm{ln}Re_1`$ and $`\mathrm{ln}Re_2`$ does not exceed 3%, so that they coincide within experimental accuracy. According to (2.7) and (2.8), $$\mathrm{lg}\frac{\mathrm{\Lambda }}{\lambda }=(\mathrm{lg}Re-\mathrm{lg}\eta _{\ast })+\mathrm{lg}\frac{u_{\ast }}{U}.$$ (3.11) The data for $`u_{\ast }`$ and $`U`$ are presented by Österlund on the Internet for each run. In Figure 2 we present the values of $`\mathrm{lg}(\mathrm{\Lambda }/\lambda )`$ for all runs. The mean value of $`\mathrm{lg}(\mathrm{\Lambda }/\lambda )`$ is approximately 0.2, so that the characteristic length scale $`\mathrm{\Lambda }`$ is about 1.6 times the thickness of the wall region. If we take into account that $`\mathrm{\Lambda }`$ is calculated from the value of $`Re`$, and that $`\mathrm{ln}Re`$, not $`Re`$ itself, has been determined from experiment, the ratios $`\mathrm{\Lambda }/\lambda `$ as shown in Figure 2 are close to unity. We processed in \[<sup>3,4</sup>\] the data of 90 zero-pressure-gradient boundary layer experiments at low free stream turbulence performed by different authors during the last 25 years — all the experiments of this kind available to us. Without exception, all runs revealed identical configurations of the intermediate structure in the boundary layer: two adjacent self-similar layers separated by a sharp interface. 
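The two estimates of $`\mathrm{ln}Re`$ in (3.10) invert in closed form, so the averaging in (3.9) is a one-liner. The numbers below are a self-consistency check, not Table 1 entries: if $`A`$ and $`\alpha `$ come from the same scaling law (2.3), both inversions must return the same $`\mathrm{ln}Re`$.

```python
import math

def effective_log_re(A, alpha):
    """Invert Eqs. (3.10): (1/sqrt(3)) ln Re_1 + 5/2 = A and
    3/(2 ln Re_2) = alpha, then average as in Eq. (3.9)."""
    ln_re1 = math.sqrt(3.0) * (A - 2.5)
    ln_re2 = 1.5 / alpha
    return ln_re1, ln_re2, 0.5 * (ln_re1 + ln_re2)

# consistency check: generate A and alpha from (2.3) with ln Re = 10
ln_re = 10.0
A = ln_re / math.sqrt(3.0) + 2.5
alpha = 1.5 / ln_re
r1, r2, r_mean = effective_log_re(A, alpha)
```

On real data $`r1`$ and $`r2`$ differ slightly; the quoted 3% bound on their difference is what justifies averaging them.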
According to the classical model \[<sup>1</sup>\], the intermediate structure consists of the (universal) logarithmic layer and a non-self-similar “wake region” smoothly matching the logarithmic layer. It was natural to also process the very data used in \[<sup>1</sup>\] for the justification of the wake region model with the general procedure we used on the other data. The data presented in Figure 21 of \[<sup>1</sup>\] were scanned and replotted in $`\mathrm{lg}\eta ,\mathrm{lg}\varphi `$ coordinates as was done for all experimental data processed in \[<sup>3,4</sup>\]. Processing revealed the same broken-line structure, i.e. two adjacent self-similar layers (see Table 2 and Figure 3, where a typical example is presented). The difference between $`\mathrm{ln}Re_1`$ and $`\mathrm{ln}Re_2`$ determined from the wall layer data is small: this shows that the procedure is adequate. We conjecture that the values of $`\beta `$ are larger than in newer experiments because of a non-zero pressure gradient in all these runs (see \[<sup>1</sup>\]). The results of our processing fail to confirm the wake region model proposed in \[<sup>1</sup>\]. ## 4 Conclusion We have shown that one can find a length scale $`\mathrm{\Lambda }`$ so that, if the Reynolds number $`Re`$ in a zero-pressure-gradient boundary layer flow is defined by $`Re=U\mathrm{\Lambda }/\nu `$, with $`U`$ the free stream velocity and $`\nu `$ the kinematic viscosity, then the scaling law for the self-similar region adjacent to the viscous sublayer coincides with the scaling law for turbulent pipe flow. Using the recent experimental data of Österlund (www.kth.se/$`\sim `$jens/zpg/) we confirmed this fact and reached the important conclusion that $`\mathrm{\Lambda }`$ is roughly equal to the wall-region thickness. Our results are in disagreement with the classical model of the wake region in the boundary layer \[<sup>1</sup>\]. Acknowledgments. 
This work was supported in part by the Applied Mathematics subprogram of the U.S. Department of Energy under contract DE–AC03–76–SF00098, and in part by the National Science Foundation under grants DMS 94–16431 and DMS 97–32710. References 1. Coles, D. (1956). J. Fluid Mech. 1, pp.191–226. 2. Barenblatt, G.I., Chorin, A.J., Hald, O. and Prostokishin, V.M. (1997). Proceedings National Academy of Sciences USA 94, pp.7817–7819. 3. Barenblatt, G.I., Chorin, A.J., and Prostokishin, V.M. (2000). J. Fluid Mech., in press. 4. Barenblatt, G.I., Chorin, A.J., and Prostokishin, V.M. (2000). Analysis of experimental investigations of self-similar structures in zero-pressure-gradient boundary layers at large Reynolds numbers. Center for Pure & Applied Mathematics, UC Berkeley, CPAM-777. 5. Barenblatt, G.I., Chorin, A.J., and Prostokishin, V.M. (1997). Applied Mechanics Reviews, vol. 50, no. 7, pp.413–429. 6. Österlund, J.M., Johansson, A.V., Nagib, H.M. and Hites, M.H. (2000). Physics of Fluids, vol. 12, no. 1, pp.1–4.
# The Nature of Composite LINER/H II Galaxies, As Revealed from High-Resolution VLA Observations ## 1 Introduction LINER galaxies are a class of active galaxies characterized by the presence of strong, nuclear, low-ionization emission lines (Heckman 1980). Like Seyfert nuclei, LINERs emit much stronger optical low-ionization forbidden lines compared to H II nuclei, whose line emission is powered by photoionization from young massive stars, but LINERs have a characteristically lower ionization state than Seyferts. In a recent optical survey of nearly 500 nearby galaxies, Ho, Filippenko, & Sargent (1995, 1997a, 1997b) find that $`\sim `$20% of all galaxies brighter than $`B_T`$ = 12.5 mag display LINER-type spectra. An additional 13% of the objects show spectra intermediate between those of “pure” LINERs and “pure” H II nuclei. Ho, Filippenko, & Sargent (1993; see also Ho 1996) dubbed this class “transition objects,” and they hypothesized that these might be composite systems in which the optical signal of a weak active nucleus (the LINER component) has been spatially blended with that of circumnuclear star-forming regions (the H II component). Models have shown that photoionization by hot stars is able to reproduce the optical spectra of LINERs (Terlevich & Melnick 1985), especially those of the transition-type variety (Shields 1992; Filippenko & Terlevich 1992). Other researchers have advocated shock-wave heating as a viable excitation mechanism for LINERs (Fosbury et al. 1978; Heckman 1980; Dopita & Sutherland 1995). At the same time, an increasing body of evidence suggests that a significant fraction of LINERs do contain irrefutable AGN characteristics (see reviews by Filippenko 1996 and Ho 1999b). Radio surveys, in particular, have shown that many LINERs exhibit weak nuclear radio emission (Heckman 1980; Sadler, Jenkins, & Kotanyi 1989; Wrobel & Heeschen 1991; Slee et al. 1994; Falcke, Wilson, & Ho 1997; Van Dyk & Ho 1998). 
As discussed in detail by Ho (1999a), weak nuclear radio emission in early-type (elliptical and S0) galaxies is most likely related to accretion-driven energy production in these objects, but at a very low level compared to powerful radio galaxies, which are also usually hosted by early-type systems. Radio observations provide an especially attractive tool to further our understanding of the LINER phenomenon. Since radio frequencies are not plagued by dust obscuration and photoelectric absorption, high-resolution radio observations potentially offer the cleanest and most efficient method to isolate an AGN core. Although radio emission also arises from supernova remnants and H II regions, AGN cores can be identified through their high degree of central concentration, small angular size, flat spectral indices, and high brightness temperatures. Given that LINERs are so numerous, they may be the most common type of AGN known. Their large space densities could make a major impact on the faint end of the local AGN luminosity function, which in turn has ramifications for many astrophysical issues ranging from the cosmological evolution of AGN to the contribution of AGN to the cosmic X-ray background. We are therefore conducting high-resolution radio observations of selected LINER samples in order to investigate the interrelation between LINERs and classical AGN. For the present paper — the first in a series — we have concentrated on transition-type LINERs. We report on sensitive Very Large Array (VLA)<sup>1</sup><sup>1</sup>1The VLA is a facility of the National Radio Astronomy Observatory (NRAO), which is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. C-array observations of a sample of 25 such transition LINERs at 8.4 GHz. The resulting images, having angular resolution of about 2″.5, allow us to assess the strength and spatial distribution of the nuclear radio emission. 
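As a hedged illustration of why brightness temperature is listed among the AGN discriminants: for a source filling a Gaussian beam, the Rayleigh–Jeans relation $`T_b=S_\nu c^2/(2k\nu ^2\mathrm{\Omega }_{\mathrm{beam}})`$ shows that an arcsecond-scale beam yields only a weak lower limit. The numbers below (1 mJy at 8.4 GHz in a 2″.5 beam) are illustrative, not measurements from this survey.

```python
import math

def brightness_temperature(S_mJy, nu_GHz, bmaj_arcsec, bmin_arcsec):
    """Rayleigh-Jeans brightness temperature (K) of a source of flux
    density S filling a Gaussian beam, Omega = pi*bmaj*bmin/(4 ln 2)."""
    c = 2.99792458e8                     # speed of light, m/s
    k = 1.380649e-23                     # Boltzmann constant, J/K
    S = S_mJy * 1e-29                    # mJy -> W m^-2 Hz^-1
    nu = nu_GHz * 1e9                    # GHz -> Hz
    arcsec = math.pi / (180.0 * 3600.0)  # radians per arcsecond
    omega = (math.pi * (bmaj_arcsec * arcsec) * (bmin_arcsec * arcsec)
             / (4.0 * math.log(2.0)))    # beam solid angle, sr
    return S * c ** 2 / (2.0 * k * nu ** 2 * omega)

# a 1 mJy source unresolved by a 2.5" beam at 8.4 GHz implies T_b of
# only a few kelvin as a lower limit; milliarcsecond (VLBI) beams raise
# that limit by the beam-area ratio, ~10^6 or more, which is how
# nonthermal AGN cores (T_b >> 10^5 K) are distinguished.
tb = brightness_temperature(1.0, 8.4, 2.5, 2.5)
```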
For comparison, we have also observed a small sample of 12 pure H II nuclei. In the following discussion and in upcoming papers, we adopt a Hubble constant of H<sub>0</sub> = 75 km s<sup>-1</sup> Mpc<sup>-1</sup>. ## 2 Sample Selection The sample of 25 transition LINER/H II galaxies and 12 pure H II nuclei was extracted from the original magnitude-limited Palomar survey of 486 bright northern galaxies (Ho et al. 1995), following the classification criteria outlined in Ho et al. (1997a). The Palomar sample contains 65 transition nuclei (Ho et al. 1997a), from which we chose a subset of 25 that had little or no arcsec-scale-resolution radio continuum data published and that fell within the observation window assigned to us by the VLA. In addition, we selected for comparison a small sample of 12 pure H II nuclei (out of 206 such objects in the full Palomar sample). These were chosen to mimic the Hubble-type distribution of the sample of transition objects, again taking into consideration scheduling constraints. We are not aware of any strong biases introduced by our sample selection. Table 1 lists the 37 sample objects. In addition to the optical positions of the galaxies (columns 3 and 4), this table lists their distances (column 5) and Hubble types (column 6). The Hubble types and distances are taken from Ho et al. (1997a). The “reference” column (7), together with Table 2, supplies references to earlier radio studies; it is readily apparent that several of the sample objects are well studied, while others are not. We will discuss our observational results in light of these existing data in Section 4. The sample galaxies span a wide range of Hubble types, but the majority are spirals.
– TABLES 1/2: Sample of transition LINER galaxies and H II nuclei – ## 3 Observations and Data Reduction The resulting sample of 25 composite LINER/H II and 12 pure H II galaxies was observed with the X-band (8.4 GHz) system of the VLA in two observing sessions, on 1997 August 21 and August 25. The array was in its C configuration, yielding a typical resolution of 2.5″. Two bands of 50 MHz each were combined, yielding a total bandwidth of 100 MHz. Snapshot observations combining two scans of 8–9 minutes each, at different hour angles, were obtained in order to improve the shape of the synthesized beam. Secondary phase and amplitude calibrators were observed before and after each scan. The primary calibrator was 3C 286, with appropriate baseline constraints, and the adopted 8.4 GHz flux densities, as provided by the VLA staff, were 5.22 Jy and 5.20 Jy at IF1 and IF2, respectively. The calibration uncertainty is dominated by the uncertainty in the absolute flux density of the primary calibrator, which is a few percent. The array performed well: judging from the calibration sources, the antenna phase and amplitude calibration appeared stable to within a few percent. The radio data are of high quality, and there was no need for extensive flagging of discrepant points. Reduction of the data was performed using standard NRAO AIPS (version 15OCT98) image-processing routines. The AIPS task IMAGR was employed to Fourier transform the measured visibilities and obtain CLEAN (Högbom 1974; Clark 1980) maps of the sources. Full-resolution maps, having synthesized beams of roughly 2.5″, and tapered maps, having beams between about 5″ and 12″, were made. The tapered maps proved useful for detecting weak low surface brightness emission. As a rule of thumb, 1″ corresponds to 50–100 pc for the typical distances involved. Most images reach the theoretical noise level of $`0.04`$ mJy/beam (Perley, Schwab, & Bridle 1989) to within a factor of two.
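The 1″ ≈ 50–100 pc rule of thumb follows directly from the small-angle approximation. A quick check (the 10–20 Mpc distances are illustrative of the sample, not taken from Table 1):

```python
import math

ARCSEC_IN_RAD = math.pi / (180.0 * 3600.0)   # 1" ~= 4.848e-6 rad

def pc_per_arcsec(d_Mpc):
    """Linear scale subtended by 1 arcsec at distance d (small-angle approx.)."""
    # 1 Mpc = 1e6 pc, so the scale is ~4.848 pc per arcsec per Mpc of distance
    return d_Mpc * 1.0e6 * ARCSEC_IN_RAD

# For typical sample distances of ~10-20 Mpc:
scale_near = pc_per_arcsec(10.0)   # ~48 pc per arcsec
scale_far = pc_per_arcsec(20.0)    # ~97 pc per arcsec
```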
Self-calibration was employed in the analysis of two of the stronger sources (NGC 4552 and NGC 5354), leading to considerable improvements. ## 4 Results ### 4.1 Radio Maps Radio contour maps of the detected galaxies are presented in Figures 1–7, at the end of the paper; a non-detection should be taken to imply no correlated flux density in excess of 0.5 mJy on the arcsecond scale. The radio maps are overlaid on optical images taken from the Digitized Sky Survey (DSS). In a number of cases only the high-resolution map (typical beamsize 2.5″) or the low-resolution map (typical beamsize 7″–10″) is shown. Unrelated background radio sources were found in several fields; their number is entirely as expected given the size of the primary beam and the sensitivity of the observations. For sources at large distances from the field center, primary beam corrections were applied. Some of the background sources have been identified using NED (NASA/IPAC Extragalactic Database) and the Cambridge APM (Automatic Plate Measuring) facility. Where possible, we list them by name in the tables below. Previously uncatalogued background sources are designated by their sky orientation with respect to the field center of the relevant galaxy. All maps are contoured according to CLEV $`\times `$ (–3, 3, 6, …, MLEV) mJy/beam, where the levels MLEV are powers $`3\times 2^\mathrm{n}`$ up to a maximum of 96 and CLEV is the typical noise level of each radio map (see Table 3). Table 3 lists the various map parameters, with the following column headings: the source name (1), the applied taper (2), the resulting restoring beam (3), the position angle of the beam (4), the image noise level (5), and the relevant figure number (6). The H II nuclei appear at the bottom of the table.
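The contouring scheme described above, CLEV × (−3, 3, 6, 12, 24, 48, 96) mJy/beam, can be generated mechanically; a small sketch (the 0.04 mJy/beam value is the typical theoretical noise level quoted in Section 3):

```python
def contour_levels(clev, mlev_max=96):
    """Contour levels CLEV * (-3, 3, 6, 12, ..., MLEV) mJy/beam,
    where the positive multipliers are 3 * 2**n up to mlev_max."""
    multipliers = [3]
    while multipliers[-1] * 2 <= mlev_max:
        multipliers.append(multipliers[-1] * 2)
    return [-3 * clev] + [m * clev for m in multipliers]

# For a map at the typical theoretical noise level of 0.04 mJy/beam,
# the positive contours run 0.12, 0.24, 0.48, 0.96, 1.92, 3.84 mJy/beam,
# i.e. the lowest contour is drawn at 3 sigma.
lvls = contour_levels(0.04)
```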
– TABLE 3: Radio map parameters – Given that good phase stability was obtained, as judged from the VLA phase calibrators, the astrometric accuracy of our overlay procedure must be dominated by the DSS accuracy, which is known to be about 0.6″ (Véron-Cetty & Véron 1996). This is borne out by the images of several unresolved and slightly resolved nuclei (NGC 4552, NGC 5354, NGC 5838, NGC 5846), which appear accurate to within a few tenths of an arcsec. ### 4.2 Comments on Individual Galaxies With reference to the images in the Appendix, we proceed by discussing the results of our VLA imaging in the light of earlier studies of the sample objects. This discussion also deals with the detected background sources and includes important bibliographic information. We will frequently compare our measurements with sample-galaxy flux densities obtained within the VLA 1.4 GHz NVSS and FIRST surveys, having typical resolutions of 45″ and 5″, respectively (Condon et al. 1998a; Becker, White, & Helfand 1995), as well as with the Green Bank 20 cm and 6 cm surveys, having typical resolutions of ∼700″ and 3.5′, respectively (White & Becker 1992, WB92 hereafter; Becker, White, & Edwards 1991, BWE91 hereafter). IC 520 : No compact structure was seen in this object, not even on the tapered map. It was also not detected in the NVSS. However, Condon (1987) reports a detection at 1.49 GHz: at 0.8′ resolution, he measured a 3.1 mJy radio source. NGC 2541 : We did not detect compact structure, not even on the tapered map. This source was also not detected in the FIRST or NVSS surveys. However, Condon (1987) reports a possible detection at 1.49 GHz, 0.8′ resolution, of a 3.2 mJy source, slightly extended to the SW relative to the optical position of the galaxy. NGC 2985 : We detect a 1.9 mJy source. At 2.5″ resolution, the source is slightly extended in the N-S direction, along the orientation of the optical galaxy.
The tapered map shows a 2.7 mJy core and about 1 mJy of weak low surface brightness emission to the SW. The object was also detected in the NVSS: 44.1 mJy. At 1.49 GHz, 1.0′ resolution, Condon (1987) measured a total flux of 61.9 mJy: the radio emission of NGC 2985 must be extended over tens of arcseconds. NGC 3593 : On both the untapered and tapered maps we measure about 20 mJy of diffuse, 45″ E-W elongated emission, oriented along the major axis of the host of this H II nucleus. NVSS measured 87 mJy of emission, WB92 measured 132 mJy (hence the radio source exceeds one arcminute in size), whereas BWE91 measured 53 mJy. The 1.4 GHz, 5″ resolution map of Condon et al. (1990) shows an E-W oriented source of 63.4 mJy: these and our VLA data imply a steep-spectrum radio source. We also detect an unrelated 10.6 mJy source, 3′ to the N, extended towards the E. As judged from the APM survey, the POSS plates show an as yet unclassified nonstellar source at this position: the background source NGC 3593N is most likely a weak distant radio galaxy. NGC 3627, M 66 : This interacting galaxy belongs to the Leo triplet. The 8.4 GHz maps show a 2′ triple source aligned with the inner bar of the optical galaxy. The outer radio components are related to star formation in the disk of the galaxy (Urbanik, Gräve, & Klein 1985). We measure 3.9 mJy of integrated emission in the compact central component, while ∼25 mJy are distributed in the NW and SE components. From comparison with low-resolution surveys (NVSS: 324.9 mJy; WB92: 434 mJy; BWE91: 141 mJy) we infer that most of the radio emission of this steep-spectrum radio source is resolved out by our observations. The triple radio structure was also observed by Saikia et al. (1994), at 5 GHz with 2″ resolution. Combining our data with the Saikia et al. (1994) and Hummel et al. (1987) data, we conclude that the nuclear radio source in NGC 3627 must be of variable, flat-spectrum nature.
More recently, high-resolution (0.15″) 15 GHz observations detected an unresolved 1.4 mJy core (N. Nagar, private communication). NGC 3628 : Like NGC 3627, this galaxy also belongs to the Leo triplet. Our 8.4 GHz full-resolution map shows some extended emission to the N and an eastern extension that is roughly aligned with the projected disk of the galaxy. The nuclear region is clearly resolved. The total flux density on our low-resolution map is about 70 mJy, 70$`\%`$ of which is in ∼15″ diffuse emission. The NVSS measured 292 mJy whereas WB92, at lower resolution, detected 402 mJy, indicating resolved large-scale emission. At 1.4 GHz Condon et al. (1990) measured 203 mJy and 205 mJy at 5″ and 1.5″ resolution, respectively. These and our data imply a steep-spectrum radio source, in agreement with the results from the Effelsberg 100m telescope (Schlickeiser, Werner, & Wielebinski 1984). Carral, Turner, & Ho (1990) observed this object at 15 GHz using the VLA in A-array and measured a total flux of 23 mJy. These authors sampled the innermost part of the source and obtained a 4″ string of a dozen components aligned with the major axis of the galaxy, which they suggest could be star-forming regions in the disk. NGC 3675 : We detect about 1.5 mJy of low surface brightness emission on our tapered maps. NVSS detected a 48.9 mJy source, whereas FIRST measured 8.04 mJy: the source must be strongly resolved. 1.49 GHz VLA D-array observations (Condon 1987; Gioia & Fabbiano 1987) detected a ∼50 mJy source, oriented N-S, along the major axis of the optical galaxy. Condon, Frayer, & Broderick (1991) report a flux density of 27 mJy at 4.85 GHz and a steep spectral index between 4.8 and 1.4 GHz. NGC 3681 : We did not detect a radio source on the untapered or tapered maps. In contrast, NVSS measured a weak source of about 4.2 mJy – at much lower resolution.
NGC 3684 : No radio source was detected for this H II nucleus, either on the untapered or the tapered maps, but NVSS detected a 15.9 mJy source. The radio emission of NGC 3684 must be diffuse. We do, however, detect a partially resolved 18.1 mJy background source, 3′ to the SW, which APM identifies with a 19.5 mag stellar-like object. Further identification is still lacking. NGC 4013 : We detect a moderately resolved 1.1 mJy core and a small extension to the NE, parallel to the projected disk of this edge-on galaxy. There is an additional 3 mJy of weak disk emission. On the basis of the widely different NVSS and FIRST detections (40.5 vs. 11.9 mJy), this disk emission must extend over tens of arcseconds. Hummel, Beck, & Dettmar (1991) present 10″ resolution 5 GHz data of NGC 4013 (UGC 6963) which show both the prominent core and the extended disk emission. Recent high-resolution (0.15″) 15 GHz observations did not detect NGC 4013 above a 10$`\sigma `$ limit of 1 mJy (N. Nagar, private communication). We also detect a slightly resolved 3.8 mJy background source, 2′ to the far SE. This source is also clearly seen in the map of Condon (1987). The APM facility reveals a 19.6 mag stellar-like object, which as yet lacks further identification. NGC 4100 : We detect about 7 mJy of low surface brightness emission associated with this H II nucleus. The tapered map shows a slight extension to the SE along the major axis of the optical galaxy. NVSS detected a 50.3 mJy source and FIRST detected 17.8 mJy, indicating resolution effects. NGC 4217 : We detect about 22 mJy, distributed along ∼1.5′ of the projected galactic disk of this H II nucleus. From low-resolution observations (NVSS: 123 mJy; WB92: 139 mJy; BWE91: 40 mJy) we conclude that NGC 4217 harbours extended low surface brightness radio emission, which must have a steep radio spectrum. NGC 4245 : No radio source associated with this H II nucleus was detected on the untapered or tapered maps.
Neither NVSS nor FIRST detected this object at 1.4 GHz. NGC 4321, M 100 : We detect about 16 mJy of low surface brightness emission in the nuclear region of this well-known grand-design spiral in the Virgo cluster. The tapered map shows emission slightly extended to the NW. NVSS detected 87 mJy, while WB92 measured 323 mJy and BWE91 87 mJy. At 1.49 GHz, 0.9′ resolution, Condon (1987) measured 180 mJy of total flux in a ∼3′ $`\times `$ 2′ region. At 4.9 GHz, 1.5″ resolution, Collison et al. (1994) detect a ring-like structure coincident with our main feature. At 8.5 GHz, 0.2″ resolution, these authors detect two unresolved sources, having peaks of 0.37 mJy/beam (E) and 0.22 mJy/beam (W), respectively. The eastern radio source is coincident with their 4.9 GHz main component, while the one to the W is coincident with the optical nucleus and has a flat spectrum (Collison et al. 1994). We also detect an unresolved source, 1.5′ to the SE, with 1.3 mJy integrated flux, apparently located near the middle of the southern spiral arm of the galaxy. It is clearly visible on the NGC 4321 map. We have identified this object with SN 1979C (Weiler et al. 1981). Collison et al. (1994) measured 2 mJy for SN 1979C at 4.9 GHz. NGC 4369, Mrk 439 : We detect 3.8 mJy of low surface brightness disk emission associated with this H II nucleus. On the basis of the widely different NVSS and FIRST detections (24.3 vs. 4.67 mJy), this disk emission must extend over tens of arcsec. The Condon et al. (1990) 1.49 GHz image, at 15″ resolution, shows an 18.3 mJy source with extended emission to the E. NGC 4405, IC 788 : No compact emission was detected in the untapered or tapered maps. NVSS detected weak (4.5 mJy) emission for this H II galaxy. We did, however, detect a partially resolved 10.1 mJy source, 2′ SW of the NGC 4405 target position.
As judged from the APM, the POSS plates show a 19.2 mag stellar-like object at this position, which we subsequently identify as the $`z=1.929`$ QSO LBQS 1223+1626 (e.g., Hewett, Foltz, & Chaffee 1995). NGC 4414 : We detect ∼29 mJy of low surface brightness emission, distributed along ∼1.5′, in a structure aligned with the galaxy’s major axis. NVSS detected a 242 mJy source, while FIRST detected a double-peaked source with 44 and 64 mJy components. The Condon (1983) 1.465 GHz, 12.5″ resolution map and the Condon (1987) 1.49 GHz, 1.0′ resolution map show comparable N-S structure, measuring ∼0.22 Jy. Using this last value plus the 78 mJy measured at 4.85 GHz, 15″ resolution, Condon et al. (1991) obtain a steep spectral index. NGC 4424 : On the tapered map we detect a resolved ∼1 mJy core plus 1.5 mJy of weak low surface brightness emission to the E along the projected disk of this candidate merger galaxy (Kenney et al. 1996). NVSS detected a weak 4.5 mJy source. Our full-resolution data show the core of this H II nucleus to be somewhat resolved. This is borne out by our recent high-resolution (0.25″) 8.4 GHz observations, which did not detect NGC 4424 above a 3$`\sigma `$ limit of 0.2 mJy. We also detect an unrelated, slightly resolved 13.3 mJy source, 2.5′ to the SE. The APM identifies it with a 20.2 mag stellar-like object. Further properties of this background source remain as yet unknown. NGC 4470 : We detect ∼3 mJy of weak low surface brightness emission, extending 30″ along the galaxy major axis. NVSS detected 17.1 mJy associated with this H II nucleus. We also detect an unrelated, resolved 16.0 mJy radio source, 1.5′ to the NE of NGC 4470, which is not identified on the POSS plate (APM). We identify this object with the radio source TXS 1227+081 (Douglas et al. 1996), which as yet has no optical identification.
NGC 4552, M 89 : We detect about 77 mJy in a strong core plus a possible jet-like structure to the NE. NVSS detected a 103 mJy source. Since BWE91 measured 64 mJy at 5 GHz, the radio source in NGC 4552 must have a relatively flat radio spectrum, in agreement with the findings of Condon et al. (1991). The radio source appears to be variable (Wrobel & Heeschen 1984; Ekers, Fanti, & Miley 1983; Sramek 1975a, 1975b; Ekers & Ekers 1973). Our recent subarcsec-resolution 8.4 GHz images have confirmed this object’s compactness. NGC 4643 : No compact radio structure was detected on the untapered or the tapered maps. NVSS also did not detect this source. However, we have detected several very weak sources (∼1 mJy) on the tapered maps, some of which may be due to weak diffuse disk emission. NGC 4710 : We detect about 6 mJy of weak disk emission associated with this H II nucleus. NVSS measured a 19.3 mJy source. At 1.49 GHz, Condon et al. (1990) measure 17.2 mJy and 14.7 mJy at 15″ and 5″ resolution, respectively. The data imply a resolved steep-spectrum radio source. We also detect a 4.8 mJy resolved background source, 2.5′ to the E of NGC 4710. No identification could be found on the POSS. NGC 4713 : We detect ∼1.2 mJy of weak low surface brightness disk emission on our tapered maps. NVSS measured a 46.9 mJy source: our observations must have resolved out much of the flux. NGC 4800 : We detect about 1.2 mJy of weak low surface brightness emission, stretching ∼30″ along the galaxy major axis. NVSS detected a 23.5 mJy source and FIRST 12.2 mJy. Therefore, our observations are resolving out much of the emission of this H II nucleus. NGC 4826, M 64 : We detect about 21 mJy of low surface brightness emission. The central region on the full-resolution map (not presented here) shows a double-peaked component, which can be compared to the complex inner triple structure of the 15 and 5 GHz, 2″ resolution maps of Turner & Ho (1994).
Combining the data indicates that this is a steep-spectrum source. NVSS detected a 101 mJy source and BWE91 detected a 56 mJy source. The 1.49 GHz, 1.0′ resolution data of Condon et al. (1998b) and Condon (1987) show a ∼100 mJy source, as do the data of Gioia & Fabbiano (1987) at 40″ resolution. NGC 4845 : On our tapered maps we detect a 2 mJy core and some emission to the N, perpendicular to the galaxy major axis, plus 10 mJy of ∼20″ elongated disk emission. NVSS detected a 43.8 mJy source associated with this H II nucleus. The data of Condon et al. (1990) imply resolution effects on the 10 arcsec scale. NGC 5012 : We detect some very weak (∼1 mJy) features near the target phase center. NVSS detected a 31.4 mJy source, but as FIRST did not detect it, we must be dealing with extended low surface brightness emission. NGC 5354 : We detect an unresolved 11.7 mJy core. NVSS and FIRST detected an 8.4 and 8.0 mJy source, respectively, implying that the NGC 5354 radio source is unresolved at a resolution of 5″. The 4.85 GHz, 15″ resolution maps of Condon et al. (1991) show 7 mJy of emission, implying that the nuclear radio source has an inverted spectrum. Our recent subarcsec-resolution 8.4 GHz images have confirmed this object’s compactness. The paired galaxy NGC 5353 is also detected, 1.25′ to the S, with a 26.7 mJy core. Condon et al. (1991) measure 29 mJy at 4.85 GHz, 15″ resolution, implying that NGC 5353 also hosts a flat-spectrum radio source. NGC 5656 : We detect ∼1 mJy of weak low surface brightness emission, stretching over ∼25″ along the major axis of the galaxy. NVSS detected a 22 mJy source. NGC 5678 : We detect about 8.5 mJy of low surface brightness emission stretching over ∼1.2′ along the major axis of the galaxy. The tapered map shows what could be a double-peaked source, similar to the 6″ resolution, 1.4 GHz map of Condon (1983).
NVSS detected a 112 mJy source, while BWE91 measured 68 mJy. The NVSS and BWE91 results, combined with the 109 mJy reported at 1.4 GHz by Condon (1983), imply a steep-spectrum source. NGC 5838 : We have detected a slightly resolved 2.2 mJy source. NVSS detected 3.0 mJy, while Wrobel & Heeschen (1991) report a 2 mJy source (5 GHz, 5″ resolution): the nuclear radio source in NGC 5838 must have a flat radio spectrum. Our recent subarcsec-resolution 8.4 GHz images have confirmed this object’s compactness. There is also an unrelated, unresolved 1.3 mJy source, about 1.5′ to the S of NGC 5838. The POSS plates reveal an 18.4 mag star-like object at that position, for which as yet no redshift is available. NGC 5846 : We have detected a partially resolved 7.1 mJy source. NVSS measured a 22 mJy source. 1.4 GHz VLA observations by Möllenhoff, Hummel, & Bender (1992) measured an unresolved core of 9 mJy plus 10 mJy of additional diffuse emission: NGC 5846 must possess a compact flat-spectrum core component. Our recent subarcsec-resolution 8.4 GHz images have confirmed this object’s compactness. NGC 5879 : No radio structure was detected on the untapered or the tapered maps. NVSS detected a 21 mJy source, which hence must arise in an extended low surface brightness region. However, we did detect a 292.1 mJy, slightly resolved background source, about 2′ NE of our target. It shows up only on the POSS red plate as a noise-like source. We have identified this source as QSO 1508+5714, at a redshift of 4.301 (Hook et al. 1995). It is included in the WENSS (Rengelink et al. 1997) and in BWE91 (282 mJy) as well as in WB92 (149 mJy). Patnaik et al. (1992) observed the source with the VLA A-array at 8.5 GHz and measured 153 mJy: 1508+5714 must be a variable flat-spectrum quasar. NGC 5921 : We have detected a very weak 0.5 mJy core and some additional weak extended emission.
At 1.49 GHz and 0.9′ resolution, Condon (1987) measured a 20.8 mJy source with an additional 2.8 mJy eastern component which we do not detect at 8.4 GHz. All this emission must be of low surface brightness nature. NGC 6384 : In line with the NVSS non-detection, we do not detect (compact) radio emission from NGC 6384. However, Condon (1987), at 1.49 GHz and 1.2′ resolution, found the radio source, at ∼35 mJy, to be very extended. NGC 6482 : We have not detected this source at 8.4 GHz, and neither did NVSS. We did, however, detect a slightly resolved 23.6 mJy background source, about 3′ to the S of NGC 6482. On the full-resolution map this source is slightly extended. The POSS plates show a 20.9 mag stellar object. We have identified this source with the radio source 1749+2302 included in GB6, with a 26 mJy flux density. NGC 6503 : We do not detect radio emission from this galaxy on the full-resolution or the tapered maps. 1.4 GHz data (NVSS and Condon 1987) indicate a NW-SE extended ∼40 mJy source, which consequently must be diffuse. We also detect a background source, about 6′ to the SW. Owing to the large distance from the phase center, which implies a large and uncertain primary beam correction, we cannot assess the flux density of this source. The source appears as a 16.9 mag stellar-like object on the POSS plates. We have identified this object with the BL Lacertae source 1749+701, at redshift 0.77 (Hughes, Aller, & Aller 1992). This object has been extensively observed at other radio frequencies and is included in the WENSS (Rengelink et al. 1997) as well as in BWE91 and WB92. The only other 8.5 GHz measurement of this source comes from Patnaik et al. (1992), who measure 558 mJy with the VLA A-array. Table 4 summarizes the properties of the background sources, including both the previously identified and the new identifications. Tabulated 8.4 GHz fluxes have been corrected for primary beam attenuation.
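The primary-beam corrections mentioned here are standard: the antenna response falls off with distance from the pointing center, so the apparent flux of an off-axis source must be divided by the attenuation factor. A hedged sketch, modeling the VLA primary beam as a Gaussian with FWHM ≈ 45′/ν[GHz] (both the Gaussian shape and the 45′/ν scaling are common approximations to the true beam, not the exact correction applied by AIPS):

```python
import math

def primary_beam_attenuation(r_arcmin, nu_GHz):
    """Approximate primary-beam response at radius r from the pointing
    center, for a Gaussian beam of FWHM ~ 45'/nu[GHz] (~5.4' at 8.4 GHz)."""
    fwhm = 45.0 / nu_GHz
    return math.exp(-4.0 * math.log(2.0) * (r_arcmin / fwhm) ** 2)

def corrected_flux(S_apparent_mJy, r_arcmin, nu_GHz=8.4):
    """Primary-beam-corrected flux density of an off-axis source."""
    return S_apparent_mJy / primary_beam_attenuation(r_arcmin, nu_GHz)

# A background source 3' from the field center at 8.4 GHz is attenuated
# to roughly 40% of its true flux; at 6' (the offset of the NGC 6503
# field source) the correction factor exceeds 30, which is why the flux
# density of that source cannot be reliably assessed.
```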
– TABLE 4: Field/background sources – ### 4.3 Radio Source Parameters For each galaxy with an 8.4 GHz detection, we list in Table 5 the radio source parameters, measured both from the full-resolution and the tapered maps. Peak and integrated values (columns 3 and 6), as well as sky positions of two-dimensional Gaussian fits (columns 4 and 5), are tabulated. These values were obtained using the AIPS task IMFIT. As we are often dealing with resolved and/or asymmetric emission, such measurements are often inaccurate and must be regarded as lower limits. In such cases, we have estimated the integrated 8.4 GHz flux density from inspection of the map and the relevant clean-component file and have given it the prefix “∼” (column 6). Also, given that we have fitted a single Gaussian to the brightest source components, the accuracy of the radio peak positions is judged to be ≲1″ (see images). For NGC 3627, the quoted peak and integrated values are those of the central core. In parentheses we include the total flux of the three components (see Section 4.2). – TABLE 5: Radio parameters of detected sources – From a quick comparison with the integrated NVSS 1.4 GHz flux densities, it is found that in many cases the high-resolution observations must have resolved out a substantial fraction of the emission. This can be quantified by examination of the spectral-index values. Table 6 lists spectral indices $`\alpha _{1.4}^{8.4}`$ (defined by $`S_\nu \nu ^\alpha `$) (column 5), obtained by combining the NVSS flux densities (column 2) and the integrated 8.4 GHz flux densities (column 3) taken from Table 5. Because the NVSS beam (45″) exceeds the 8.4 GHz beam (7″–10″) by a substantial factor, these spectral-index values should be considered upper limits. It indeed appears that in most of these cases the radio emission was documented to be extended over tens of arcseconds.
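The tabulated spectral indices and luminosities follow from two standard relations: with the convention $`S_\nu \nu ^\alpha `$, the two-point index is $`\alpha =\mathrm{log}(S_{1.4}/S_{8.4})/\mathrm{log}(8.4/1.4)`$, and the monochromatic luminosity is $`L_\nu =4\pi d^2S_\nu `$. A minimal sketch; the NGC 4552 fluxes are taken from Section 4.2, while the 100 mJy/17 Mpc luminosity example is purely illustrative:

```python
import math

MPC_IN_CM = 3.0857e24    # one megaparsec in centimeters
MJY_IN_CGS = 1.0e-26     # 1 mJy in erg s^-1 cm^-2 Hz^-1

def spectral_index(S_low_mJy, S_high_mJy, nu_low_GHz=1.4, nu_high_GHz=8.4):
    """Two-point spectral index alpha for S_nu ~ nu**(-alpha):
    steep sources have alpha ~ 0.7-1, flat sources alpha <~ 0.5."""
    return math.log(S_low_mJy / S_high_mJy) / math.log(nu_high_GHz / nu_low_GHz)

def radio_luminosity(S_mJy, d_Mpc):
    """Monochromatic luminosity L_nu = 4 pi d^2 S_nu in erg s^-1 Hz^-1."""
    return 4.0 * math.pi * (d_Mpc * MPC_IN_CM) ** 2 * S_mJy * MJY_IN_CGS

# NGC 4552: 103 mJy (NVSS, 1.4 GHz) vs. 77 mJy integrated at 8.4 GHz
alpha = spectral_index(103.0, 77.0)   # ~0.16, i.e. a flat spectrum
# A 100 mJy source at an illustrative distance of 17 Mpc:
L = radio_luminosity(100.0, 17.0)     # ~3.5e28 erg s^-1 Hz^-1
```

Because the NVSS flux enters the numerator and may include extended emission resolved out at 8.4 GHz, an overestimated $`S_{1.4}`$ biases $`\alpha `$ upward, which is exactly why the table values are upper limits.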
We also compile calculated NVSS radio luminosities (column 4) for the 37 galaxies under consideration. Again, for NGC 3627 the quoted values refer to the compact core, except when in parentheses, in which case they refer to all three source components. – TABLE 6: summary – Most sample galaxies display steep spectral indices, as commonly found for star-forming late-type galaxies (Condon 1992). The following objects have flat ($`\alpha \lesssim 0.6`$) spectra: NGC 4552, NGC 5354, NGC 5838, and NGC 5846, and the H II nucleus in NGC 4424. We will return to this issue in the next Section. ## 5 Discussion Our VLA imaging observations have yielded X-band detections of 27 of the 37 sample galaxies. These 27 include NGC 4643 and NGC 5012, which were only marginally detected. The non-detections are equally distributed between the transition LINERs and the H II nuclei. It appears that non-detection at X-band, at a resolution of 2.5″–12″, correlates with 1.4 GHz (NVSS) weakness and/or low surface brightness. Table 6 indicates that the NVSS radio luminosity distributions of the two subsamples are comparable. The integrated 1.4 GHz radio luminosities imply that the sample sources span the usual luminosity range for nearby galaxies (10<sup>26</sup> – 10<sup>30.5</sup> erg s<sup>-1</sup> Hz<sup>-1</sup>; see, e.g., Condon 1987, 1992). With the exception of NGC 4424 (see Section 4.2), none of the H II nuclei display compact nuclear emission and/or a flat radio spectrum. This is in strong contrast to the transition LINER galaxies. Purely on the basis of radio morphology, the transition LINER objects can be divided into two categories. The first is made up of galaxies displaying extended, steep-spectrum, low surface brightness radio emission, usually tracing the optical isophotes of the host galaxy. The second category comprises objects that, in addition to extended emission, also show compact nuclear radio emission.
The first category displays radio morphologies that are consistent with being due to large-scale star formation; in several cases the radio morphology is seen to trace the H II regions in the host. However, we stress that the presence of a very weak nuclear component in these objects may still be masked by the more dominant radio emission from star-forming regions. The second category, which we will refer to in the following as AGN candidates, comprises NGC 3627, NGC 4552, NGC 5354, NGC 5838, and NGC 5846. The last four of these are characterized by a flat radio spectral index between 1.4 and 8.4 GHz (see end of Section 4.3). Judging from high-resolution observations which isolate the nuclear emission, the compact nucleus of NGC 3627 also displays a flat radio spectrum (see Section 4.2). These AGN candidates are on average more luminous than the non-AGN and, as we shall see below, their hosts are early- rather than late-type galaxies. While at first sight the H II nucleus NGC 4424 also belongs in this AGN-candidate class, the radio image (Fig. 3d) shows it to be resolved. Higher-resolution observations recently carried out by us indeed resolve all the nuclear emission in NGC 4424 (see Section 4.2), in contrast to the AGN candidates from the transition LINER sample. As such, there is a clear separation between the non-AGN and AGN-candidate classes. Apparently, galaxies with composite LINER and star-formation spectra separate into objects with and without a clear signature of an AGN, that is to say, a compact, flat-spectrum radio source. The prime question to address, of course, is whether the optical spectra of these classes differ in any way. Deferring the full analysis of this issue to another paper, we note here that the class of AGN candidates indeed displays somewhat stronger \[N II\] and \[S II\] lines in their optical spectra.
As we will demonstrate in the forthcoming paper in this series, the behavior of the so-called $`u`$-parameter, which measures the radio/far-IR ratio, also supports the weak-AGN classification. However, as in the case of Seyfert galaxies, which show a ∼30% incidence rate of compact radio cores (e.g., Norris et al. 1990), there is not a one-to-one correspondence between compact radio emission and nuclear activity. We conclude that the radio properties permit us to isolate AGN candidates among the sample of transition LINERs, although the absence of a radio core cannot be used to argue against an object being an AGN. These conclusions strongly support the hypothesis which resulted from the VLA observations of the transition LINER NGC 7331 by Cowan et al. (1994). Inspection of Table 6 readily shows that these AGN candidates are hosted by rather early-type galaxies (E–Sb). The relevant radio core luminosities L$`_{8.4\mathrm{GHz}}`$ range from ∼10<sup>26.2</sup> to 10<sup>28.5</sup> erg s<sup>-1</sup> Hz<sup>-1</sup>, which is several orders of magnitude less than the core luminosities of FR I or FR II type (Fanaroff & Riley 1974) radio galaxies and quasars, and in the range of the weakest radio cores in Seyfert galaxies (Giuricin et al. 1996). This, then, implies that weak LINER AGN preferentially occur in bulge-dominated hosts, not necessarily just in ellipticals, consistent with statistical results from other lines of evidence (Ho et al. 1997b; Ho 1999b). This is in agreement with the case of Seyfert galaxies, for which the radio core emission was also found to be stronger in early-type hosts (Giuricin et al. 1996). As such, these weak AGN differ from the more powerful FR I radio galaxies, which are commonly associated with elliptical galaxies, often brightest cluster galaxies (Zirbel & Baum 1995). They exceed the very weak nuclear source in the nearby transition LINER NGC 7331 (Cowan et al.
1994) by a factor of $`15`$, and are comparable in strength to the well-known variable radio core of the LINER nucleus in M 81 (e.g., Ho et al. 1999). Hence, it must be concluded that at least some weak LINER AGN reveal themselves by low-power radio cores. This supports the analysis of Ho (1999a), who proposed that the weak nuclear radio sources (radio power 10<sup>26</sup> – 10<sup>29</sup> erg s<sup>-1</sup> Hz<sup>-1</sup>) in nearby elliptical and S0 galaxies are the low-luminosity counterparts of more powerful AGN. Finally, it is intriguing that the transition LINER in the elliptical galaxy NGC 6482 remained undetected in our observations (as well as in the NVSS). Given that NGC 6482 is the most distant object in our sample, our non-detection would still allow a $`10^{27}`$ erg s<sup>-1</sup> Hz<sup>-1</sup> compact radio source to reside in this object. Full analysis of our sample, including optical emission-line and infrared luminosities, will be presented in forthcoming papers. Addressing the physical origin of the findings presented here will be a challenge for future work. ## 6 Conclusions A sample of composite LINER/H II galaxies and pure H II nuclei was studied using the VLA (C-array) at 8.4 GHz. On the basis of their radio morphological properties, the composite sources can be divided into objects with low surface brightness emission confined mainly to the plane of the galaxy and objects displaying compact nuclear components. The former objects are similar in radio morphology to the H II nuclei in our sample, whereas the latter show morphologies consistent with AGN. All five of the LINER AGN are hosted by bulge-dominated galaxies (Hubble type E–Sb), and four of them show flat spectral indices between 8.4 and 1.4 GHz. In terms of radio luminosity, the present LINER AGN populate the range of low core radio luminosities, which implies that classical AGN of low luminosity exist in a wide range of galaxy types. ###### Acknowledgements. M. E. F. 
is supported by grant PRAXIS XXI/BD/15830/98 from the Fundação para a Ciência e Tecnologia, Ministério da Ciência e Tecnologia, Portugal. P. D. B. acknowledges a visitor’s grant from the Space Telescope Science Institute, where a large part of this paper could be written. L. C. H. is partly funded by NASA grant NAG 5-3556, and by NASA grants GO-06837.01-95A and AR-07527.02-96A from the Space Telescope Science Institute (operated by AURA, Inc., under NASA contract NAS5-26555). We want to thank Neil Nagar for providing us with some of his preliminary results. This research was supported in part by the European Commission TMR Programme, Research Network Contract ERBFMRXCT96-0034 “CERES.” We made extensive use of the APM (Automatic Plate Measuring) Facility, run by the Institute of Astronomy in Cambridge, the STScI DSS (Digitized Sky Survey), produced under US government grant NAGW – 2166, and NED (NASA/IPAC Extragalactic Database), which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with NASA.
# The Specific Heat of a Trapped Fermi Gas: an Analytical Approach ## Trapped ultra-cold Fermi gases have received a lot of attention in the last few years. This was partly triggered by the first observations of Bose-Einstein condensation in 1995. Since then, the field has vastly expanded. The trapping and cooling of fermions is in a much less developed state than that of bosons, but it will probably not be long before quantum degeneracy is achieved. On the theoretical side, Butts and Rokhsar have calculated the specific heat numerically and provided analytical results for the spatial and momentum distribution of a non-self-interacting spin polarised Fermi gas at $`T=0`$ in a harmonic trap using the semi-classical (Thomas-Fermi) approximation. Their results are valid for large particle numbers. In addition, Schneider and Wallis made a numerical study of a similar gas and focused on the effects of small particle numbers. The present paper complements that of Butts and Rokhsar in that we provide an analytical expression for the specific heat of a non-self-interacting spin polarised Fermi gas in the context of a semi-classical approximation. We then compare it to the exact result given by a numerical calculation. As we shall see, the semi-classical approximation produces extremely good results for $`kT≳\mathrm{}w_{x,y,z}`$, where $`w_{x,y,z}`$ are the frequencies of the trap in the three spatial directions, for particle numbers as low as $`N=1000`$. This high level of accuracy is perhaps unexpected. We will discuss this below and offer an explanation. The characteristic temperature for quantum degeneracy, the Fermi temperature, is given by $`kT_F=\mathrm{}w(6N)^{\frac{1}{3}}`$ for an isotropic trap. The condition we impose is therefore compatible with a quantum degenerate regime. It is also worth noting that this condition holds in current experiments with trapped fermions. 
The effects of interactions have been dealt with in several works and in particular the possibility of a BCS transition has been studied in the case of <sup>6</sup>Li. Considering the gas to be non-interacting is a very good approximation for a dilute neutral atomic gas. The only case where interactions could be important is trapped <sup>6</sup>Li with two hyper-fine spin states. <sup>6</sup>Li has an anomalously large s-wave scattering length. At least two different states are needed for s-wave scattering as it is forbidden for particles in the same state. However, the results of ref. are for $`kT≪\mathrm{}w`$, lying outside the range of temperatures concerned in the present work. Consider a gas of fermions in a harmonic potential. Neglecting interactions between the particles, the energy levels of each particle are $`E_{n_1n_2n_3}`$ $`=`$ $`(n_1+{\displaystyle \frac{1}{2}})\mathrm{}w_1+(n_2+{\displaystyle \frac{1}{2}})\mathrm{}w_2+(n_3+{\displaystyle \frac{1}{2}})\mathrm{}w_3,`$ (1) $`n_i`$ $`=`$ $`0,1,2,\mathrm{},`$ (2) where $`w_{1,2,3}`$ are the frequencies of the trap. We will use grand–canonical statistics throughout. This makes the calculations much easier and is justified as the difference between canonical and grand-canonical results is minute for the particle numbers we consider. The number of particles in the system is $$N=\underset{n}{}[e^{\beta (E_n-\mu )}+1]^{-1}.$$ (3) The internal energy of the system is given by $$U=\underset{n}{}E_n[e^{\beta (E_n-\mu )}+1]^{-1}.$$ (4) $`\mu `$ is the chemical potential and the sums in (2) and (3) are over all particle states. 
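The grand-canonical sums for $`N`$ and $`U`$ above are straightforward to evaluate numerically. The sketch below is our own illustration, not from the paper: it works in units $`\mathrm{}=w=k_B=1`$ for an isotropic trap, exploits the exact degeneracy $`(k+1)(k+2)/2`$ of the oscillator level $`E_k=k+3/2`$, and finds the chemical potential at fixed particle number by bisection. The cutoff `kmax` and the bisection bracket are arbitrary choices.

```python
import math

def occupation(E, mu, beta):
    # Fermi-Dirac factor [e^{beta(E - mu)} + 1]^{-1}, guarded against overflow
    y = beta * (E - mu)
    if y > 700.0:
        return 0.0
    return 1.0 / (math.exp(y) + 1.0)

def particle_number(mu, beta, kmax=300):
    # N for an isotropic trap: level k has energy k + 3/2 and
    # degeneracy (k + 1)(k + 2)/2
    return sum((k + 1) * (k + 2) // 2 * occupation(k + 1.5, mu, beta)
               for k in range(kmax))

def internal_energy(mu, beta, kmax=300):
    # U: the same sum weighted by the level energy
    return sum((k + 1) * (k + 2) // 2 * (k + 1.5) * occupation(k + 1.5, mu, beta)
               for k in range(kmax))

def chemical_potential(N, beta, lo=-50.0, hi=500.0, iters=100):
    # Solve particle_number(mu) = N by bisection; N(mu) is monotone in mu
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if particle_number(mid, beta) < N:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

At low temperature the chemical potential found this way approaches the Fermi energy: for $`N=10`$ (closed shells $`k=0,1,2`$) it lands between the last filled level at 3.5 and the first empty one at 4.5, and the internal energy approaches the ground-state value 30.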
Here, for simplicity of expression, it is convenient to make the substitutions $`w_1`$ $`=`$ $`w,`$ (5) $`w_2`$ $`=`$ $`\lambda w,`$ (6) $`w_3`$ $`=`$ $`\lambda ^{}w,`$ (7) $`x`$ $`=`$ $`\beta \mathrm{}w,`$ (8) $`\mu `$ $`=`$ $`\mathrm{}w\left({\displaystyle \frac{1+\lambda +\lambda ^{}}{2}}-ϵ\right),`$ (9) where $`x`$, $`ϵ`$, $`\lambda `$ and $`\lambda ^{}`$ are newly defined dimensionless variables. The internal energy becomes $$U=\frac{1}{2}\mathrm{}w(1+\lambda +\lambda ^{})N+\frac{1}{2}\mathrm{}wu,$$ (10) where $`u`$ $`=`$ $`2{\displaystyle \underset{n_1=0}{\overset{\mathrm{}}{}}}{\displaystyle \underset{n_2=0}{\overset{\mathrm{}}{}}}{\displaystyle \underset{n_3=0}{\overset{\mathrm{}}{}}}k[e^{x(k+ϵ)}+1]^{-1},`$ (11) $`k`$ $`=`$ $`n_1+\lambda n_2+\lambda ^{}n_3.`$ (12) Only the second term contributes to the specific heat since $`N`$ is held fixed. We have $$C=\left(\frac{∂U}{∂T}\right)_{N,w}=\frac{1}{2}\mathrm{}w\left(\frac{∂u}{∂T}\right)_{N,w}.$$ (13) Note that the partial derivative at constant particle number and trap frequency is the natural way of defining the specific heat for a gas in the situation concerned. After a change of variables from $`T`$ to $`x`$ this becomes $$\frac{C}{k_B}=-\frac{1}{2}x^2\left(\frac{∂u}{∂x}\right)_{N,w},$$ (14) where $`k_B`$ is Boltzmann’s constant. The simplest approximation consists in replacing the triple sums in (6) by a triple integral. Thus, $$u≈v=2_0^+\mathrm{}_0^+\mathrm{}_0^+\mathrm{}k[e^{x(k+ϵ)}+1]^{-1}𝑑y_1𝑑y_2𝑑y_3$$ (15) and $`k`$ is now the real function $`k=y_1+\lambda y_2+\lambda ^{}y_3`$. Changing the integrating variable to $`k`$ we have $$v=\frac{1}{\lambda \lambda ^{}}_0^+\mathrm{}k^3[e^{x(k+ϵ)}+1]^{-1}𝑑k.$$ (16) Note that for the isotropic case we could have transformed the triple sum in (6) into a single sum and only then convert it to an integral, yielding $$v=\frac{1}{\lambda \lambda ^{}}_0^+\mathrm{}(k^3+3k^2+2k)[e^{x(k+ϵ)}+1]^{-1}𝑑k,$$ (17) which, at first sight, might be thought to be a more accurate expression. 
For the anisotropic case, though not trivial, we could likewise have a density of states expanded to three terms. We will comment on this shortly. To solve the integral in (10) we Taylor expand the integrand in powers of $`e^{-x(k+ϵ)}`$ and integrate each resulting term. Here we have to consider two different cases: $`ϵ>0`$ and $`ϵ<0`$. For $`ϵ>0`$ this yields $$v=\frac{6}{\lambda \lambda ^{}x^4}\underset{n=1}{\overset{\mathrm{}}{}}(-1)^{n+1}\frac{e^{-nxϵ}}{n^4}.$$ (18) From (8) and (12) and retaining the fact that $`v`$ is an approximated version of $`u`$, we have $$\frac{C}{k_B}≈\frac{12}{\lambda \lambda ^{}x^3}A_4+\frac{3}{\lambda \lambda ^{}x^2}\left(\frac{∂(ϵx)}{∂x}\right)_{N,w}A_3,$$ (19) where $`A_i`$ denotes the sum $$A_i=\underset{n=1}{\overset{\mathrm{}}{}}(-1)^{n+1}\frac{e^{-nx|ϵ|}}{n^i}.$$ (20) The use of the absolute value of $`ϵ`$ is redundant in the case we are considering ($`ϵ>0`$) but it is useful when we consider the $`ϵ<0`$ case. To obtain an expression for the partial derivative in (13) we use the equality $$\left(\frac{∂N}{∂x}\right)_{N,w}=0,$$ (21) with $`N`$, the number of particles, being $$N≈\frac{A_3}{\lambda \lambda ^{}x^3}.$$ (22) Expression (16) is obtained using the same process as was used to obtain (13). We then have $$\left(\frac{∂(ϵx)}{∂x}\right)_{N,w}≈-\frac{3}{x}\frac{A_3}{A_2}$$ (23) and $$\frac{C}{k_B}≈\frac{3}{\lambda \lambda ^{}x^3}\left(4A_4-3\frac{A_3^2}{A_2}\right).$$ (24) Finally we note that the sums $`A_i`$ can be put in terms of polylogarithms, yielding $$\lambda \lambda ^{}\frac{C}{k_B}≈3x^{-3}\left[4Li_4(e^{-xϵ})-\frac{1}{2}Li_4(e^{-2xϵ})-3\frac{[Li_3(e^{-xϵ})-\frac{1}{4}Li_3(e^{-2xϵ})]^2}{Li_2(e^{-xϵ})-\frac{1}{2}Li_2(e^{-2xϵ})}\right].$$ (25) For $`ϵ<0`$, the procedure is similar but slightly more complicated. The integral of expression (10) must be divided into two parts, according to whether the exponential is greater or smaller than $`1`$, as the Taylor expansion of the integrand is different for the two cases. 
The final result for $`ϵ<0`$ is $$\lambda \lambda ^{}\frac{C}{k_B}≈-12A_4x^{-3}+21\zeta (4)x^{-3}+6\zeta (2)ϵ^2x^{-1}+\frac{1}{2}ϵ^4x+\frac{9}{x^3}\frac{(A_3-\zeta (2)ϵx-\frac{1}{6}ϵ^3x^3)^2}{A_2-\zeta (2)-\frac{1}{2}ϵ^2x^2},$$ (26) where $`\zeta `$ is the Riemann zeta function. Putting the sums $`A_i`$ in terms of polylogarithms we have $`\lambda \lambda ^{}{\displaystyle \frac{C}{k_B}}`$ $`≈`$ $`[-12Li_4(e^{-xϵ})+{\displaystyle \frac{3}{2}}Li_4(e^{-2xϵ})+21\zeta (4)]x^{-3}+6\zeta (2)ϵ^2x^{-1}+{\displaystyle \frac{1}{2}}ϵ^4x+`$ (28) $`{\displaystyle \frac{9}{x^3}}{\displaystyle \frac{[Li_3(e^{-xϵ})-\frac{1}{4}Li_3(e^{-2xϵ})-\zeta (2)ϵx-\frac{1}{6}ϵ^3x^3]^2}{Li_2(e^{-xϵ})-\frac{1}{2}Li_2(e^{-2xϵ})-\zeta (2)-\frac{1}{2}ϵ^2x^2}}.`$ Note that these analytical expressions are not an approximation over the Thomas-Fermi result. They are the exact Thomas-Fermi result, which, of course, is in itself an approximation. We computed the specific heat numerically using an exact expression composed of several sums, which is easily obtained by inserting (6) in (8) and using (15) to get an expression for $`(∂(ϵx)/∂x)_{N,w}`$. This is then compared to the values for the specific heat given by (18) and (20). The accuracy of the Thomas-Fermi approximation is extremely good for $`x`$ of order 1 or less, as can be seen in figures 1 and 2. As an example, for $`N=1000`$ and $`x_1=x_2=x_3<2.5`$ the error in $`C`$ is always less than $`1\%`$ and if we take $`x_i<1`$, the error is always less than $`0.1\%`$. For larger values of $`x`$ the approximation rapidly deteriorates. The results for a higher number of particles are even more accurate, as expected from this kind of approximation. Given that for this approximation we have replaced triple sums by triple integrals, such a level of accuracy is quite surprising. Note that the expression inside the sums in (6) falls off exponentially. Therefore the first few terms are very important and expression (9) should differ significantly from (6). And indeed it does. 
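The $`ϵ>0`$ series manipulations admit a cheap sanity check: the alternating series for $`v`$ can be compared directly with the defining Thomas-Fermi integral. The sketch below is our own illustration, not part of the paper's calculation; it takes $`\lambda =\lambda ^{}=1`$, and the quadrature grid and series tolerance are arbitrary choices.

```python
import math

def v_integral(x, eps, kmax=80.0, steps=40000):
    # Direct quadrature of v = int_0^inf k^3 / (e^{x(k+eps)} + 1) dk
    # (lambda = lambda' = 1), composite trapezoid rule on [0, kmax]
    h = kmax / steps
    total = 0.0
    for i in range(steps + 1):
        k = i * h
        weight = 0.5 if i in (0, steps) else 1.0
        total += weight * k**3 / (math.exp(x * (k + eps)) + 1.0)
    return total * h

def v_series(x, eps, tol=1e-15):
    # Resummed form: v = (6 / x^4) * sum_{n>=1} (-1)^{n+1} e^{-n x eps} / n^4,
    # valid for eps > 0; the alternating series is truncated once terms
    # fall below tol
    s, n = 0.0, 1
    while True:
        term = (-1) ** (n + 1) * math.exp(-n * x * eps) / n**4
        s += term
        if abs(term) < tol:
            return 6.0 * s / x**4
        n += 1
```

For moderate $`x`$ and $`ϵ`$ the two evaluations agree to the accuracy of the quadrature, which is a direct check that the term-by-term integration behind the series is sound.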
The reason our results are so accurate is that the value of $`ϵ`$ used in (18) and (20) is not the same as the one used in (6). For our Thomas-Fermi approximation we took $`ϵ`$ from expression (16), i.e., an approximate $`ϵ`$. So, the insertion of an approximate $`ϵ`$ in an approximate expression leads to a fortunate cancellation of errors. We have tested this by inserting the exact value of $`ϵ`$ into (18) and (20). This leads to much worse results. Also, we have tried to further improve our approximation by considering the first two terms of the cubic polynomial in (11) instead of only the first term and doing the same for the number of particles, obtaining an additional term in (16) as well as additional terms in (18) and (20). To our surprise, this only increases the error. We can only assume that the very high level of accuracy of the Thomas-Fermi approximation is a happy coincidence. In fact, if we take the exact value of $`ϵ`$ and insert it into expressions (18) and (20) and into the equivalent expressions obtained using $`k^3+3k^2`$ in (11) instead of only $`k^3`$, then the latter yield the most accurate result, though still not a very good one. ###### Acknowledgements. J.N. acknowledges financial support by the Portuguese Foundation for Science and Technology under grant PraxisXXI/BD/5660/95.
# A comparison between broad histogram and multicanonical methods ## I Introduction Development of tools for optimization of computer simulations is a field of great interest and activity. Cluster updating algorithms, probability reweighting procedures and, more recently, methods that directly obtain the spectral degeneracy $`g(E)`$ are a few examples of very successful approaches (for reviews of these methods see, for instance, and references therein). The Multicanonical (MUCA) and Broad Histogram (BHM) methods belong to the latter category. The Entropic Sampling Method (ESM) was proven to be an equivalent formulation of MUCA. From the knowledge of $`g(E)`$, these methods allow us to obtain any thermodynamical quantity of interest for the system under study, such as the canonical average $$Q_T=\frac{\underset{E}{}g(E)Q(E)\mathrm{exp}(-E/T)}{_Eg(E)\mathrm{exp}(-E/T)}$$ (1) of some macroscopic quantity $`Q`$ (magnetization, density, correlations, etc). Both sums run over all allowed energies (for continuous spectra, they must be replaced by integrals), $`T`$ is the fixed temperature, and the Boltzmann constant was set to unity. The degeneracy function $`g(E)`$ simply counts the number of states with energy $`E`$ (which must be interpreted as a density within a narrow window $`\mathrm{d}E`$ for continuous spectra). Also, $$Q(E)=\frac{\underset{S[E]}{}Q_S}{g(E)}$$ (2) is the microcanonical, fixed-$`E`$ average of $`Q`$. The sum runs uniformly over all states $`S`$ with energy $`E`$ (or, again, within the small window $`\mathrm{d}E`$, for continuous spectra). Note that neither $`g(E)`$ nor $`Q(E)`$ depend on the particular environment the system is actually interacting with, for instance the canonical heat bath represented by the exponential Boltzmann factors in eq. (1). 
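Once $`g(E)`$ and $`Q(E)`$ are tabulated, eq. (1) is a one-liner. The sketch below is our own illustration (the function and variable names are ours): it evaluates the canonical average with the Boltzmann weights rescaled by the maximum log-weight, so that large systems with huge $`g(E)`$ values do not overflow.

```python
import math

def canonical_average(energies, g, Q, T):
    # Eq. (1): <Q>_T = sum_E g(E) Q(E) e^{-E/T} / sum_E g(E) e^{-E/T}, k_B = 1
    logw = [math.log(gE) - E / T for gE, E in zip(g, energies)]
    m = max(logw)                        # rescale for numerical stability
    w = [math.exp(lw - m) for lw in logw]
    return sum(wi * qi for wi, qi in zip(w, Q)) / sum(w)
```

As a closed-form check, for a two-level system with energies 0 and 1, unit degeneracies and $`Q=(0,1)`$, the average is $`e^{-1/T}/(1+e^{-1/T})`$, and at high temperature it tends to 1/2.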
Thus, these methods go far beyond the canonical ensemble: once $`g(E)`$ and $`Q(E)`$ have been determined for a given system, one can study its behavior under different environments, or different ensembles, using the same $`g(E)`$ and $`Q(E)`$. Accordingly, as an additional advantage, only one computer run is enough to evaluate the quantities of interest in a large range of temperatures and other parameters. MUCA was introduced in 1991 by Berg and Neuhaus. The basic idea of the method is to sample microconfigurations by performing a biased random walk (RW) in the configuration space, leading to an unbiased random walk (i.e. uniform distribution) along the energy axis. Thus, the visiting probability for each energy level $`E`$ is inversely proportional to $`g(E)`$. By tuning the acceptance probability of movements in order to get a uniform distribution of visits along the energy axis, one is able to get $`g(E)`$ at the end. MUCA has proven to be very useful and efficient in obtaining results for many different problems, such as first-order phase transitions, the confinement/deconfinement phase transition in SU(3) gauge theory, relaxation paths, conformational studies of peptides, the helix-coil transition and protein folding, evolutionary problems, and phase equilibrium in binary lipid bilayers (for reviews of the method see ). BHM was introduced three years ago by de Oliveira et al. It is based on an exact relation between the spectral degeneracy, $`g(E)`$, and the microcanonical averages of some special macroscopic quantities. A remarkable feature of BHM is its generality, since these macroscopic quantities can be averaged by different procedures. Because it is not restricted to a rule like a biased random walk, more suitable dynamics can be adopted for each different application. 
BHM has been applied to a variety of magnetic systems such as the 2D and 3D Ising models (also with external fields and next-nearest-neighbor interactions), the 2D and 3D XY and Heisenberg models, and the Ising spin glass, with accurate results obtained in a very efficient way. Theoretically, the method can be applied to any statistical system. Another distinguishing feature of BHM is that its numerical results do not rely on the number of visits $`H(E)`$ to each energy level, a quantity which is updated by one unit for each new visited state. On the contrary, BHM is based on microcanonical averages of macroscopic quantities. Each visited state contributes a macroscopic update to the measured quantities. Thus the numerical accuracy is much better than that obtained within all other methods, and increasingly so for larger systems, for the same computer effort. Besides the practical points described above, the most important feature of BHM is the following conceptual one. All reweighting methods depend on the final distribution of visits $`H(E)`$ along the energy axis. Histogram methods adopt a canonical dynamics, getting $`H_{T_0}(E)`$ for some fixed temperature $`T_0`$; a new distribution $`H_T(E)`$ is then analytically inferred for another (not simulated) temperature $`T`$. Following the same reasoning, one can also obtain $`g(E)`$. Multicanonical approaches, on the other hand, tune appropriate dynamics in order to obtain a flat distribution $`H(E)`$. In both cases, the actually implemented transition probabilities from energy level $`E`$ to another value $`E^{}`$ are crucial. In other words, in both cases the results depend on the comparison of $`H(E)`$ with the neighboring $`H(E^{})`$. All those reweighting methods are, thus, extremely sensitive to the particular dynamic rule adopted during the computer run, i.e. to the prescribed transition probabilities from $`E`$ to $`E^{}`$. BHM is not a reweighting method. 
It does not perform any reweighting on the distribution of visits $`H(E)`$. It needs only the knowledge of the microcanonical, fixed-$`E`$ averages of some particular macroscopic quantities. The possible transitions from the current energy $`E`$ to other values are exactly taken into account within these quantities (see section III), instead of performing a numerical measurement of the corresponding probabilities during the computer run. Thus, the only important role played by the actually implemented dynamic rule is to provide good statistics within each energy level, separately: the relative weight of $`H(E)`$ as compared to $`H(E^{})`$, i.e. the relative visitation frequency for different energy levels, is completely irrelevant. One can even decide to sample more states inside a particularly important region of the energy axis (near the critical point, for instance), instead of a flat distribution. In short, any dynamic rule can be adopted within BHM; the only constraint is to sample with uniform probability the various states belonging to the same energy level, not the relative probabilities concerning different energies. In this paper we have used the formulation of MUCA given by Lee, called Entropic Sampling, which from now on we call ESM. We present a comparison between ESM/MUCA and BHM, focusing on both accuracy and the use of CPU time. We choose to start our study with the same example used in the original ESM paper by Lee: the $`4\times 4\times 4`$ simple cubic Ising model, for which the exact energy spectrum is known. Our results show that BHM gives more accurate results than ESM/MUCA with the same number of Monte Carlo steps. Although each Monte Carlo step in BHM takes more CPU time, since further macroscopic averages are measured, the overall CPU time for the same accuracy is smaller, at least for this model. 
Also, BHM can be applied to larger lattices without the problems faced by ESM/MUCA, as we show in our simulations of the Ising model on a $`32\times 32`$ lattice. This paper is structured as follows: in sections II and III we review the implementation of ESM/MUCA and BHM (including a detailed description of the distinct dynamics adopted in this work). In section IV our numerical tests are presented and discussed. Conclusions are in section V. ## II The Multicanonical Method The idea of the Multicanonical method is to obtain the spectral degeneracy of a given system using a biased RW in the configuration space. The transition probability between two states $`X_{\mathrm{old}}`$ and $`X_{\mathrm{new}}`$ is given by $$\tau (X_{\mathrm{old}},X_{\mathrm{new}})=e^{-[S(E(X_{\mathrm{new}}))-S(E(X_{\mathrm{old}}))]}=\frac{g(E_{\mathrm{old}})}{g(E_{\mathrm{new}})}$$ (3) where $`S(E(X))=\mathrm{ln}g(E)`$ is the entropy and $`E(X)`$ is the energy of state $`X`$. The transitional probability (3) satisfies a detailed balance equation and leads to a distribution of probabilities where a state is sampled with probability proportional to $`1/g(E)`$. The successive visitations along the energy axis follow a uniform distribution. However, $`g(E)`$ is not known a priori. In order to obtain $`g(E)`$, Lee proposes the following algorithm: Step 1: Start with $`S(E)=0`$ for all states; Step 2: Perform a few unbiased RW steps in the configuration space and store $`T(E)`$, the number of tossed movements to each energy $`E`$ (in this stage, $`T(E)=H(E)`$ because all movements are accepted); Step 3: Update $`S(E)`$ according to $$S(E)=\{\begin{array}{cc}S(E)+\mathrm{ln}T(E)\hfill & \text{, if }T(E)≠0\hfill \\ S(E)\hfill & \text{, otherwise.}\hfill \end{array}$$ (4) Step 4: Perform a much longer MC run using the transitional probability given by eq. (3), storing $`T(E)`$. Step 5: Repeat 3 and 4. This is considered one iteration. 
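Steps 1–5 above translate into very little code. The sketch below is our own minimal illustration (not the authors' implementation): it runs entropic sampling for a small 2D Ising model with periodic boundaries, counting visits as $`T(E)`$ and applying the acceptance rule of eq. (3) and the update of eq. (4). Lattice size, iteration counts and the random seed are arbitrary choices, and the final estimate is normalized so that the ground-state entropy equals $`\mathrm{ln}2`$.

```python
import math
import random

def entropic_sampling(L=4, iterations=40, steps_per_iter=10000, seed=7):
    rng = random.Random(seed)
    s = [[rng.choice((-1, 1)) for _ in range(L)] for _ in range(L)]
    E = -sum(s[i][j] * (s[(i + 1) % L][j] + s[i][(j + 1) % L])
             for i in range(L) for j in range(L))
    S = {}                      # running estimate of S(E) = ln g(E); absent key -> 0
    for _ in range(iterations):
        T = {}                  # visits to each energy during this iteration
        for _ in range(steps_per_iter):
            i, j = rng.randrange(L), rng.randrange(L)
            nn = (s[(i + 1) % L][j] + s[(i - 1) % L][j]
                  + s[i][(j + 1) % L] + s[i][(j - 1) % L])
            dE = 2 * s[i][j] * nn
            # eq. (3): accept with probability min(1, e^{S(E) - S(E + dE)})
            if rng.random() < math.exp(min(0.0, S.get(E, 0.0) - S.get(E + dE, 0.0))):
                s[i][j] = -s[i][j]
                E += dE
            T[E] = T.get(E, 0) + 1
        for e, t in T.items():  # eq. (4): S(E) <- S(E) + ln T(E) where visited
            S[e] = S.get(e, 0.0) + math.log(t)
    shift = math.log(2.0) - S[min(S)]   # normalize: two ground states
    return {e: v + shift for e, v in S.items()}
```

For $`L=4`$ the walk quickly spreads over the whole spectrum ($`E=-32,\mathrm{},32`$ in steps of 4, with $`E=\pm 28`$ unreachable), and the resulting $`S(E)`$ is largest near $`E=0`$, where $`g(E)`$ peaks.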
This implementation is known to be quite sensitive to the lengths of the MC runs in steps 2 and 4. In section IV we study, in two examples, how the accuracy depends on the total number of iterations and the size of each one. ## III The Broad Histogram Method BHM enables us to directly calculate the energy spectrum $`g(E)`$, without any need for a particular choice of the dynamics to be used. Many distinct dynamic rules could be used, and indeed some have already been tested. Within BHM, the energy degeneracy is calculated through the following steps (alternatively, other quantities could replace $`E`$): Step 1: Choice of a reversible protocol of allowed movements in the state space. Reversible means simply that for each allowed movement $`X_{\mathrm{old}}→X_{\mathrm{new}}`$ the back movement $`X_{\mathrm{new}}→X_{\mathrm{old}}`$ is also allowed. It is important to note that these movements are virtual, since they are not actually performed. In this work we take single-spin flips as the protocol of movements; Step 2: For a configuration $`X`$, compute $`N(X,\mathrm{\Delta }E)`$, the number of possible movements that change the energy $`E(X)`$ by a given amount $`\mathrm{\Delta }E`$. Therefore $`g(E)N(E,\mathrm{\Delta }E)`$ is the total number of movements between energy levels $`E`$ and $`E+\mathrm{\Delta }E`$, according to the definition (2) of microcanonical averages; Step 3: Since the total number of possible movements from level $`E+\mathrm{\Delta }E`$ to level $`E`$ is equal to the total number of possible movements from level $`E`$ to level $`E+\mathrm{\Delta }E`$ (step 1, above), we can write down the equation $$g(E)N(E,\mathrm{\Delta }E)=g(E+\mathrm{\Delta }E)N(E+\mathrm{\Delta }E,-\mathrm{\Delta }E).$$ (5) The relation above is exact for any statistical model and energy spectrum. 
It can be rewritten as $$\mathrm{ln}g(E+\mathrm{\Delta }E)-\mathrm{ln}g(E)=\mathrm{ln}\frac{N(E,\mathrm{\Delta }E)}{N(E+\mathrm{\Delta }E,-\mathrm{\Delta }E)}$$ (6) This equation can be easily solved for all values of $`E`$, after $`N(E,\mathrm{\Delta }E)`$ is obtained by any procedure, determining $`g(E)`$ along the whole energy axis. In cases where $`\mathrm{\Delta }E`$ can assume more than one value, eq. (5) becomes an overdetermined system of equations. However, the spectral degeneracy can be obtained without needing to solve all equations simultaneously, since the spectral degeneracy is the same for all values of $`\mathrm{\Delta }E`$. The exact Broad Histogram relation (5) is independent of the procedure by which $`N(E,\mathrm{\Delta }E)`$ is obtained. Therefore, virtually any procedure can be adopted in this task, for instance, an unbiased energy RW, a microcanonical simulation, or a mixture of both. Even the juxtaposition of histograms obtained through canonical simulations at different temperatures, a completely unphysical procedure, could be used, as in some of the results presented in , and explicitly used in , where BHM is re-formulated under a transition matrix approach. Here, we are going to introduce an alternative procedure, referred to as Entropic Sampling-based Dynamics for BHM (ESDYN, hereafter). First, one implements ESM, as described in section II, in order to perform the visitation in the configuration space. Additionally, for each visited state $`X`$, we store the values of $`N(X,\mathrm{\Delta }E)`$ cumulatively into $`E`$-histograms. Therefore, at the end we have two choices for the determination of the spectral degeneracy, either by using the entropy accumulated through $`T(E)`$ (that is the traditional ESM/MUCA) or by using the accumulated $`N(E,\mathrm{\Delta }E)`$ and the BHM relation, eq. (5). Because of this special implementation, we can guarantee that exactly the same states are visited for both methods. 
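Relation (5) and the reconstruction (6) can also be verified by brute force on a system small enough to enumerate. The sketch below is our own illustration: it runs over all $`2^N`$ states of a periodic 1D Ising ring, accumulates $`g(E)`$ and the summed move counts $`_XN(X,\mathrm{\Delta }E)`$ per level, and rebuilds $`\mathrm{ln}g(E)`$ from eq. (6) alone, anchored at the ground state. Because enumeration gives the microcanonical averages exactly, the reconstruction here is exact; in a simulation the averages, and hence the rebuilt spectrum, carry statistical errors.

```python
import itertools
import math
from collections import defaultdict

def bhm_enumeration(N=8):
    g = defaultdict(int)       # g[E]: number of states at energy E
    M = defaultdict(int)       # M[(E, dE)]: sum over states at E of N(X, dE)
    for spins in itertools.product((-1, 1), repeat=N):
        E = -sum(spins[i] * spins[(i + 1) % N] for i in range(N))
        g[E] += 1
        for i in range(N):
            # single-spin flip of site i changes E by 2 s_i (s_{i-1} + s_{i+1})
            dE = 2 * spins[i] * (spins[i - 1] + spins[(i + 1) % N])
            M[(E, dE)] += 1
    levels = sorted(g)
    lng = {levels[0]: math.log(g[levels[0]])}   # anchor: ln g at the ground state
    for E0, E1 in zip(levels, levels[1:]):
        dE = E1 - E0
        # eq. (6) with microcanonical averages <N> = M / g
        lng[E1] = (lng[E0] + math.log(M[(E0, dE)] / g[E0])
                   - math.log(M[(E1, -dE)] / g[E1]))
    return g, M, lng
```

For the 8-spin ring the exact degeneracies are $`g=\{-8:2,-4:56,0:140,4:56,8:2\}`$, and the totals of forward and backward moves between adjacent levels balance exactly, as eq. (5) demands.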
Hence, any difference in the performances reported in this work must be credited to the methods themselves and not to purely statistical factors. The dynamic rule originally used in order to test BHM prescribes an acceptance probability $`p=N(E+\mathrm{\Delta }E,-\mathrm{\Delta }E)/N(E,\mathrm{\Delta }E)`$. Both the numerator and the denominator are read from the currently accumulated histograms, and thus $`p`$ varies during the simulation. Wang has proposed a new approach: instead of using the dynamically updated values of $`N(E,\mathrm{\Delta }E)`$ as in , the transitional probabilities follow a previously obtained (from a canonical simulation, for example) distribution $`N_{\mathrm{fixed}}(E,\mathrm{\Delta }E)`$, kept fixed during the simulation. An alternative and simpler derivation of Wang’s dynamics can be obtained by using the BHM relation (5) itself. From this, we readily obtain that Wang’s dynamics is the same as using the transitional probability $`p=g_{\mathrm{fixed}}(E)/g_{\mathrm{fixed}}(E+\mathrm{\Delta }E)`$ with approximated values $`g_{\mathrm{fixed}}(E)`$ kept fixed during the simulation. We refer to this dynamics as approximated Wang’s dynamics, since $`N_{\mathrm{fixed}}(E,\mathrm{\Delta }E)`$ (or $`g_{\mathrm{fixed}}(E)`$) is actually only an approximation of the real $`N(E,\mathrm{\Delta }E)`$ (or $`g(E)`$). We will use the results obtained by ESM or BHM with ESDYN as inputs to the approximate Wang’s dynamics. These dynamics will be called AWANG1 and AWANG2, respectively. For comparison, we also implemented a dynamics that uses the ESM probabilities taken from the exact values of $`g(E)`$. It is worth noticing that, as pointed out in the previous paragraph, this is equivalent to Wang’s proposal with exact values for $`N_{\mathrm{fixed}}(E,\mathrm{\Delta }E)`$. We refer to this dynamics as WANG. 
## IV Numerical Tests We start our comparison with the smallest system, since it is also present in the ESM original paper by Lee . The partition function for the $`4\times 4\times 4`$ simple cubic Ising model is exactly known . It is given by the polynomial function $$Z(\beta )=\underset{n=0}{\overset{96}{}}C(n)u^n$$ (7) where $`u=\mathrm{exp}(4\beta )`$, $`\beta =1/T`$, $`T`$ is the temperature. The energy spectrum is written in terms of the coefficients $`C(n)`$ as $`g(E)=g(2n)=C(n)`$ for $`n=0`$ to $`n=96`$. For this model, only the first $`49`$ coefficients are necessary by symmetry. The other $`48`$ coefficients are mirror images of the first ones. Our results will be expressed in terms of $`S(E)=\mathrm{ln}g(E)`$. In order to compare the entropies as obtained by both ESM and BHM, we normalize the entropy such that $`g(96)=1`$, i.e. $`S(96)=0`$. This point corresponds to the center of the energy spectrum, or, alternatively, to infinite temperature. Of course, the error relative to the exact value vanishes for $`E=96`$. In fig. (1), we compare the normalized (with respect to its exact value) entropy as function of $`E`$ obtained by ESM and BHM (with four different dynamics). In AWANG1 and AWANG2 dynamics we use as $`N_{\mathrm{fixed}}(E,\mathrm{\Delta }E)`$ the results obtained by ESM and BHM with ESDYN, respectively. BHM with the entropic sampling dynamics or using the exact relation proposed by Wang present errors within the same order of magnitude, while pure ESM gives the worst results, as clearly seen in the inset. It is also clear that AWANG1 and AWANG2 results are worse than BHM with ESDYN. The methods can be better compared by the ratio of their relative errors rather than their absolute values. We define the relative errors in the entropy, for a given energy $`E`$, as $$ϵ(E)=\left|\frac{S(E)S(E)_{\mathrm{exact}}}{S(E)_{\mathrm{exact}}}\right|$$ (8) In fig. 
(2) we show the ratio between the BHM relative errors (obtained by the ESDYN, AWANG1 and AWANG2 dynamics) and the ESM ones. The inset shows the ratio between the errors from BHM with WANG and the ESM ones. In its worst performance, the error obtained by BHM with ESDYN is roughly one third of the one obtained by ESM. BHM with Wang’s exact dynamics gives slightly better results since it uses the exact $`g(E)`$ as input in order to get $`g(E)`$ as output. BHM with ESDYN gives results that are on average $`11`$ times more accurate than ESM, for this number of iterations. However, BHM with the AWANG1 and AWANG2 dynamics gives relatively poorer results. Therefore, we have shown that the ES dynamics is a powerful approach (among other possibilities) to obtain with great accuracy the microcanonical averages $`N(E,\mathrm{\Delta }E)`$ needed by BHM. Let us stress that the results for $`g(E)`$ obtained with AWANG2 are worse than the input they use, namely the values of $`g(E)`$ obtained as output of BHM with ESDYN. In order to obtain good results with Wang’s dynamics, we need a rather good estimate of $`g(E)`$ as input, and not just a crude estimation as claimed in . In fig. (3), we show the time evolution of the mean error as a function of the number of Monte Carlo steps (MCS). One iteration in ESM corresponds to performing many RW steps, according to eq. (3), storing the number of visits to each energy and, after this fixed number of RW steps, updating the entropy according to eq. (4). As we pointed out before, the ESM performance is quite sensitive to the choice of the number of RW steps before each entropy update. For a fixed number of MCS ($`10^6`$), we plot the time evolution for both ESM and BHM, but with different numbers of iterations. As one can see in fig. (3), the best results for ESM correspond to the smallest number of iterations since, in this case, more RW steps are performed and, consequently, we have better statistics for the determination of the entropy. 
It is worth noticing that the ESM/MUCA errors seem to stabilize after a few iterations. We believe this occurs because the number of visits to each energy is not sufficient to provide good statistics for determining the entropy. The only way to improve the accuracy of the spectral degeneracy is to perform more RW steps between entropy updates (this does not increase the CPU time if the total number of MCS is kept constant). Conversely, for BHM the error decreases monotonically because macroscopic quantities are stored, yielding good statistics even if some states are not frequently visited. After the very first steps, the errors decay as $`t^{-1/2}`$, as expected. The accuracy of BHM is also remarkable: we reach the accuracy of the best ESM/MUCA performance with roughly 30 times fewer MCS. Of course, accuracy is not the only factor relevant to the efficiency of a computational method. BHM with the ES dynamics has one additional step compared to the traditional implementation of ESM, namely the storage of the macroscopic quantities $`N(X,\mathrm{\Delta }E)`$, so we need to know the cost of this additional step. In table I, we show the CPU time (in seconds) spent by both implementations on a 433 MHz DEC Alpha to obtain the results shown in fig. (1). For direct comparison, we also present the CPU time relative to that of ESM. All dynamics tested within BHM take roughly the same CPU time. As one can see, BHM uses twice as much CPU time as ESM. However, for the same number of steps, and the best strategy for ESM, BHM is at least 10 times more accurate. Considering both accuracy and CPU usage, we argue that BHM is more efficient than ESM. Up to now, we have tested both approaches on a very small lattice. Nevertheless, we can show that BHM is even more efficient for larger systems, as expected from the macroscopic character of the quantities $`N(X,\mathrm{\Delta }E)`$.
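A sketch of how BHM turns the microcanonical averages into $`S(E)`$ may make the macroscopic character of the estimate concrete. The code below assumes the broad-histogram relation g(E)⟨N(E,+ΔE)⟩ = g(E+ΔE)⟨N(E+ΔE,−ΔE)⟩ (with ΔE the single-move energy step) and hypothetical dictionaries `n_up`, `n_dn` holding the measured averages; it is an illustration, not the implementation used in the paper.

```python
import math

def bhm_entropy(levels, n_up, n_dn):
    """Reconstruct S(E) = ln g(E) from the microcanonical averages of the
    macroscopic counts N(E, +dE) and N(E, -dE), integrating the
    broad-histogram relation g(E) <N(E,+dE)> = g(E+dE) <N(E+dE,-dE)>
    along the energy axis (S of the first level is set to zero)."""
    S = {levels[0]: 0.0}
    for a, b in zip(levels, levels[1:]):
        S[b] = S[a] + math.log(n_up[a]) - math.log(n_dn[b])
    return S
```

For $`n`$ independent two-state spins with $`E`$ counting the "up" spins, the exact averages are ⟨N(k,+1)⟩ = n−k and ⟨N(k,−1)⟩ = k, and the relation reproduces the binomial degeneracies exactly; no visit histogram appears anywhere in the estimate.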
For $`L^d`$ Ising spins on a lattice, for instance, even restricting the allowed movements to single-spin flips, the total number of movements starting from $`X`$ is just $`L^d`$. Thus, being a finite fraction of this number, $`N(X,\mathrm{\Delta }E)`$ is a macroscopic quantity (this holds along the whole energy axis, except at the ground state, where $`g(E)`$ presents a macroscopic jump relative to the neighboring energy levels). In fig. (4) we present results for the time evolution of the mean error for a $`32\times 32`$ square lattice Ising model (a lattice 16 times larger than the one in the previous results). The exact solution for this system is also known . Here, the accuracy of BHM is two orders of magnitude higher than that of ESM. Again the errors decrease as $`t^{-1/2}`$ for BHM with ESDYN, while the ESM mean errors seem to stabilize. In summary, BHM with ESDYN can obtain accurate results for lattices much larger than those considered the limit for ESM.

## V Conclusions

The Multicanonical and the Broad Histogram methods are completely distinct from canonical Monte Carlo methods, since they focus on the determination of the energy spectrum degeneracy $`g(E)`$. This quantity is independent of thermodynamic concepts and depends only on the particular system under study; it does not depend on the interactions of the system with the environment. Thus, once one has determined $`g(E)`$, the effects of different environments can be studied using always the same data for $`g(E)`$. Different temperatures, for instance, can be studied without the need for a new computer run for each $`T`$. The goal of this paper is to discuss the conceptual differences between the Multicanonical and Broad Histogram frameworks, and to compare both methods concerning accuracy and speed. We obtained the energy spectrum of the Ising model on $`4\times 4\times 4`$ and $`32\times 32`$ lattices, for which the exact results are known and which therefore provide a good basis for comparison.
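As an illustration of the point that a single determination of $`g(E)`$ serves every temperature, the sketch below computes canonical averages directly from $`S(E)=\mathrm{ln}g(E)`$; the function name and the log-sum-exp stabilization are our own choices, not taken from the paper.

```python
import math

def canonical_stats(S, beta):
    """Canonical <E> and C_v obtained from the spectrum degeneracy alone:
    Boltzmann weight w(E) proportional to exp(S(E) - beta*E).  No new
    simulation is needed per temperature, only a reweighting of S(E)."""
    logw = {E: S[E] - beta * E for E in S}
    m = max(logw.values())                       # log-sum-exp for stability
    Z = sum(math.exp(lw - m) for lw in logw.values())
    E1 = sum(E * math.exp(logw[E] - m) for E in S) / Z
    E2 = sum(E * E * math.exp(logw[E] - m) for E in S) / Z
    return E1, beta * beta * (E2 - E1 * E1)      # <E>, C_v = beta^2 Var(E)
```

For the toy binomial spectrum $`S(k)=\mathrm{ln}\binom{n}{k}`$ with $`E=k`$, this reproduces the closed form ⟨E⟩ = n/(e^β + 1) at any β from one table of degeneracies.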
Our findings show that a combination of the Broad Histogram method and the Entropic Sampling random-walk dynamics (BHM with ESDYN) gives very accurate results and, in addition, needs far fewer Monte Carlo steps to reach the same accuracy as the pure Entropic Sampling method. This advantage of the Broad Histogram method grows with the system size, and it does not present the limitations of the Multicanonical or Entropic Sampling methods concerning large systems. The reason for the better performance is that BHM uses the microcanonical averages $`N(E,\mathrm{\Delta }E)`$ of the macroscopic quantity $`N(X,\mathrm{\Delta }E)`$, the number of potential movements which could be performed starting from the current state $`X`$ and leading to an energy variation $`\mathrm{\Delta }E`$. In this way, each newly visited state contributes a macroscopic value to the averages measured during the computer simulation. Because these are macroscopic quantities, the larger the system, the more accurate the results for these averages. Conversely, Histogram and Multicanonical approaches rely exclusively on $`H(E)`$, the number of visits to each energy; each new averaging state contributes only one more count to the measured averages, i.e., $`H(E)\to H(E)+1`$, independent of the system size. From a conceptual point of view, BHM is also completely distinct from the other methods, which are based on the final distribution of visits $`H(E)`$. Instead, it is based on the determination of the microcanonical, fixed-$`E`$ averages $`N(E,\mathrm{\Delta }E)`$ , concerning each energy level separately. Thus, the relative frequency of visits to distinct energy levels (i.e., the comparison between $`H(E)`$ and $`H(E^{\prime })`$, which is sensitive to the particular dynamic rule one adopts) does not matter. The only requirement on the dynamics is that it provide a uniform sampling probability for the states belonging to the same energy level.
The transition probabilities from one level to the others are irrelevant.

## VI Acknowledgments

This work has been partially supported by the Brazilian agencies CAPES, CNPq and FAPERJ. The authors acknowledge D. C. Marcucci and J. S. Sá Martins for suggestions, discussions and critical readings of the manuscript.

FIGURE CAPTIONS * Entropies (normalized by their exact values) for the $`4\times 4\times 4`$ Ising ferromagnet, obtained by ESM/MUCA and BHM with $`100`$ iterations of $`10^6`$ Monte Carlo steps each. The inset shows a detailed view of the first quarter of the whole spectrum. ESM/MUCA gives the largest errors. The AWANG1 and AWANG2 dynamics also give worse results than BHM with either the ES or the WANG dynamics. * Ratio between the relative errors of BHM and those of ESM/MUCA. The horizontal lines are the mean relative errors. BHM is on average more than ten times more accurate than ESM/MUCA for this number of iterations. The exact Wang dynamics is, as expected, slightly more accurate than ESDYN (as shown in the inset), since it uses the exact $`g(E)`$ as input in order to get $`g(E)`$ as output. AWANG1 and AWANG2 give the worst results among the four distinct dynamics used here to test BHM (see text). * Time evolution of the mean error for BHM with ESDYN (a) and ESM/MUCA (b). In both cases exactly the same averaging states are visited: thus, the differences are due to the methods themselves, not to statistics. As described in the text, the entropic sampling dynamics is quite sensitive to the number of Monte Carlo steps between iterations, each iteration corresponding to an update of the entropy. Here, we present the results for $`N=10,100,1000`$ iterations, i.e. $`10^5,10^4`$ and $`10^3`$ Monte Carlo steps, respectively, between iterations. The more Monte Carlo steps, the smaller the error for the entropic sampling. Conversely, BHM does not seem to depend on the computational strategy adopted.
Moreover, in the ESM case the errors seem to stabilize after some steps. On the contrary, for BHM, better accuracy is simply a matter of increasing the computer time, since the errors decay as $`t^{-1/2}`$. The results are averaged over ten realizations. * Time evolution of the mean error for BHM with ESDYN (a) and ESM/MUCA (b) for the $`32\times 32`$ square lattice Ising model. Now we consider $`10^7`$ MC steps. Again we present the results for $`N=100,1000`$ iterations, i.e. $`10^5`$ and $`10^4`$ Monte Carlo steps, respectively, between iterations. The results are averaged over ten realizations. The ratio between the accuracies of BHM and ESM/MUCA is even higher for large systems.
no-problem/0002/hep-th0002072.html
ar5iv
text
# Opening up extra dimensions at ultra-large scales

## Abstract

The standard picture of viable higher-dimensional theories is that direct manifestations of extra dimensions occur at short distances only, whereas long-distance physics is described by effective four-dimensional theories. We show that this is not necessarily true in models with infinite extra dimensions. As an example, we consider a five-dimensional scenario with three 3-branes in which gravity is five-dimensional both at short and very long distance scales, with conventional four-dimensional gravity operating at intermediate length scales. A phenomenologically acceptable range of validity of four-dimensional gravity extending from microscopic to cosmological scales is obtained without strong fine-tuning of parameters. preprint: DTP/00/13, hep-th/0002072 Our world is often thought to have more than four fundamental space-time dimensions. This point of view is strongly supported by string/M-theory, and higher dimensional theories are currently being developed in various directions. The standard lore is that in phenomenologically viable models, extra dimensions open up at short distances only, whereas above a certain length scale, and all the way up to infinite distances, physics is described by effective four-dimensional theories. In this letter we show that the latter picture is not universal: there are models in which extra dimensions open up both at short and very long distances. Namely, gravity may become higher-dimensional in both of these extremes. At intermediate length scales, physics is described by conventional four-dimensional laws, so the phenomenology of these models is still acceptable. The starting point for our discussion is the observation that extra dimensions may be infinite, with the usual matter residing on a 3-brane embedded in higher-dimensional space.
In the original Randall–Sundrum (RS) model , gravity is effectively four-dimensional at large scales due to the existence of a graviton bound state localized near the 3-brane in five dimensions. We point out that in other five-dimensional models with an infinite extra dimension (in particular, in the model introduced in Ref.), localization of the graviton may be incomplete, and it is this property that leads to the restoration of the five-dimensional form of gravity at very long distances. The latter feature is manifest both in the case of static sources, where the four-dimensional Newton’s gravity law changes to its five-dimensional counterpart above a certain length scale, and in the dynamics of gravitational waves, which disappear into the fifth dimension, thus violating four-dimensional energy conservation, after travelling a long enough distance. It is likely that these phenomena are not peculiar to five-dimensional theories and occur in a number of models with more than one extra dimension. It has been found recently that exotic large-distance effects may also appear in models with compact extra dimensions, due to the possible presence of very light Kaluza–Klein states. This interesting scenario is, however, considerably different from ours: in compact models, extra dimensions show up at large distances somewhat indirectly, through the spectrum of the corresponding four-dimensional effective theory: four-dimensional energy is conserved, and so on. In our case, physics at ultra-large distances is intrinsically five-dimensional. As our concrete example, let us consider the five-dimensional model of Ref. . The model contains one brane with tension $`\sigma >0`$ and two branes with equal tensions $`-\sigma /2`$ placed at equal distances to the right and to the left of the positive tension brane in the fifth direction.
The two negative tension branes are introduced for simplicity, to have $`Z_2`$ symmetry, $`z\to -z`$, in analogy to the RS model (hereafter $`z`$ denotes the fifth coordinate). We assume that conventional matter resides on the positive tension brane, and in what follows we will be interested in gravitational interactions of this matter. The $`Z_2`$ symmetry enables us to consider explicitly only the region to the right of the positive tension brane. The physical set-up (for $`z>0`$) is as follows: the bulk cosmological constant between the branes, $`\mathrm{\Lambda }`$, is negative as in the RS model but, in contrast to that model, vanishes to the right of the negative tension brane. With appropriately tuned $`\mathrm{\Lambda }`$, there exists a solution to the five-dimensional Einstein equations for which both positive and negative tension branes are at rest at $`z=0`$ and $`z=z_c`$ respectively, $`z_c`$ being an arbitrary constant. The metric of this solution is $$ds^2=a^2(z)\eta _{\mu \nu }dx^\mu dx^\nu -dz^2$$ (1) where $$a(z)=\{\begin{array}{cc}e^{-kz}\hfill & 0<z<z_c\hfill \\ e^{-kz_c}\equiv a_{}\hfill & z>z_c\hfill \end{array}$$ (2) The constant $`k`$ is related to $`\sigma `$ and $`\mathrm{\Lambda }`$ as follows: $`\sigma =\frac{3k}{4\pi G_5}`$, $`\mathrm{\Lambda }=-\sigma k`$, where $`G_5`$ is the five-dimensional Newton constant. The four-dimensional hypersurfaces $`z=const.`$ are flat, the five-dimensional space-time is flat to the right of the negative-tension brane and anti-de Sitter between the branes. The spacetime to the left of the positive tension brane is of course a mirror image of this set-up. This background has two length scales, $`k^{-1}`$ and $`\zeta _c\equiv k^{-1}e^{kz_c}`$. We will consider the case of large enough $`z_c`$, in which the two scales are well separated, $`\zeta _c\gg k^{-1}`$.
We will see that gravity in this model is effectively four-dimensional at distances $`r`$ in the interval $`k^{-1}\ll r\ll \zeta _c(k\zeta _c)^2`$, and is five-dimensional both at short distances, $`r\ll k^{-1}`$ (this situation is exactly the same as in the RS model), and at long distances, $`r\gg \zeta _c(k\zeta _c)^2`$. In the latter régime of very long distances the five-dimensional gravitational constant gets effectively renormalized and no longer coincides with $`G_5`$. To find the gravity law experienced by matter residing on the positive tension brane, let us study gravitational perturbations about the background metric (1). We will work in the Gaussian Normal (GN) gauge, $`g_{zz}=-1`$, $`g_{z\mu }=0`$. The linearized theory is described by the metric $$ds^2=a^2(z)\eta _{\mu \nu }dx^\mu dx^\nu +h_{\mu \nu }(x,z)dx^\mu dx^\nu -dz^2$$ (3) There are two types of linearized excitations in this model. One of them is a four-dimensional scalar, the radion, which corresponds to the relative motion of the branes (see, e.g., Refs. ). The wave function of the radion is localized between the branes, and its interactions with matter on the positive tension brane result in a scalar force of the Brans–Dicke type. If the distance between the branes is not stabilized by one or another mechanism, the radion has zero four-dimensional mass and gives rise to the usual $`1/r`$ potential in an effective four-dimensional theory. In the particular model under discussion, the radion excitation has been studied in Ref. . It is conceivable that the distance between the branes may be stabilized (cf. Ref. ), in which case the interactions due to the radion switch off at large distances. In this letter we are interested in the other types of excitation, those that leave the branes at rest, namely the five-dimensional gravitons. When the radion is disregarded, there exists a frame which is GN with respect to both branes simultaneously.
In this frame, the transverse-traceless gauge can be chosen, $`h_\mu ^\mu =0`$, $`h_{\mu ,\nu }^\nu =0`$, and the linearized Einstein equations take one and the same simple form for all components of $`h_{\mu \nu }`$, $$\{\begin{array}{cc}h^{\prime \prime }-4k^2h-\frac{1}{a^2}\mathrm{\Box }^{(4)}h=0\hfill & 0<z<z_c\hfill \\ h^{\prime \prime }-\frac{1}{a_{}^2}\mathrm{\Box }^{(4)}h=0\hfill & z>z_c\hfill \end{array}$$ (4) The Israel junction conditions on the branes are $$\{\begin{array}{cc}h^{\prime }+2kh=0\hfill & \text{at }z=0\hfill \\ \left[h^{\prime }\right]-2kh=0\hfill & \text{at }z=z_c\hfill \end{array}$$ (5) where $`\left[h^{\prime }\right]`$ is the discontinuity of the $`z`$-derivative of the metric perturbation at $`z_c`$, and four-dimensional indices are omitted. A general perturbation is a superposition of modes, $`h=\psi (z)e^{ip_\mu x^\mu }`$ with $`p^2=m^2`$, where $`\psi `$ obeys the following set of equations in the bulk, $$\{\begin{array}{cc}\psi ^{\prime \prime }-4k^2\psi +\frac{m^2}{a^2}\psi =0\hfill & 0<z<z_c\hfill \\ \psi ^{\prime \prime }+\frac{m^2}{a_{}^2}\psi =0\hfill & z>z_c\hfill \end{array}$$ (6) with the junction conditions (5) (replacing $`h`$ by $`\psi `$). It is straightforward to check that there are no negative modes, i.e., normalizable solutions to these equations with $`m^2<0`$. There are no normalizable solutions with $`m^2\geq 0`$ either, so the spectrum is continuous, beginning at $`m^2=0`$. To write the modes explicitly, it is convenient to introduce a new coordinate between the branes, $`\zeta =\frac{1}{k}e^{kz}`$, in terms of which the background metric is conformally flat. Then the modes have the following form, $$\psi _m=\{\begin{array}{cc}C_m\left[N_1\left(\frac{m}{k}\right)J_2(m\zeta )-J_1\left(\frac{m}{k}\right)N_2(m\zeta )\right]\hfill & 0<z<z_c\hfill \\ A_m\mathrm{cos}\left(\frac{m}{a_{}}(z-z_c)\right)+B_m\mathrm{sin}\left(\frac{m}{a_{}}(z-z_c)\right)\hfill & z>z_c\hfill \end{array}$$ (7) where $`N`$ and $`J`$ are the Bessel functions.
The constants $`A_m,B_m`$ and $`C_m`$ obey two relations due to the junction conditions at the negative tension brane. Explicitly, $`A_m`$ $`=`$ $`C_m\left[N_1\left({\displaystyle \frac{m}{k}}\right)J_2(m\zeta _c)-J_1\left({\displaystyle \frac{m}{k}}\right)N_2(m\zeta _c)\right]`$ (9) $`B_m`$ $`=`$ $`C_m\left[N_1\left({\displaystyle \frac{m}{k}}\right)J_1(m\zeta _c)-J_1\left({\displaystyle \frac{m}{k}}\right)N_1(m\zeta _c)\right]`$ (10) The remaining overall constant $`C_m`$ is obtained from the normalization condition. The latter is determined by the explicit form of Eq. (6) and reads $$\int \psi _m(z)\psi _{m^{\prime }}(z)\frac{dz}{a^2(z)}=\delta (m-m^{\prime })$$ (11) One makes use of the asymptotic behaviour of $`\psi _m`$ at $`z\to \mathrm{\infty }`$ and finds $$\frac{\pi }{a_{}}(|A_m|^2+|B_m|^2)=1$$ (12) which fixes $`C_m`$ from (9) and (10). It is instructive to consider two limiting cases. At $`m\zeta _c\gg 1`$ we obtain, by making use of the asymptotics of the Bessel functions, $$C_m^2=\frac{m}{2k}\left[J_1^2\left(\frac{m}{k}\right)+N_1^2\left(\frac{m}{k}\right)\right]^{-1}$$ (13) which coincides, as one might expect, with the normalization factor for the massive modes in the RS model. In the opposite case $`m\zeta _c\ll 1`$ (notice that this automatically implies $`m/k\ll 1`$), the expansion of the Bessel functions in Eqs. (9) and (10) yields $$C_m^2=\frac{\pi }{(k\zeta _c)^3}\left(1+\frac{4}{(m\zeta _c)^2(k\zeta _c)^4}\right)^{-1}$$ (14) It is now straightforward to calculate the static gravitational potential between two unit masses placed on the positive-tension brane at a distance $`r`$ from each other. This potential is generated by the exchange of the massive modes (cf. Refs. ; recall that the zero mode here is not localized) $$V(r)=G_5\int _0^{\mathrm{\infty }}dm\frac{e^{-mr}}{r}\psi _m^2(z=0)$$ (15) It is convenient to divide this integral into two parts, $$V(r)=G_5\int _0^{\zeta _c^{-1}}dm\frac{e^{-mr}}{r}\psi _m^2(0)+G_5\int _{\zeta _c^{-1}}^{\mathrm{\infty }}dm\frac{e^{-mr}}{r}\psi _m^2(0)$$ (16) At $`r\gg k^{-1}`$, the second term in Eq.
(16) is small and is similar to the contribution of the continuum modes to the gravitational potential in the RS model. It gives short distance corrections to Newton’s law, $$\mathrm{\Delta }V_{short}(r)\sim \frac{G_5}{kr^3}=\frac{G_N}{r}\frac{1}{k^2r^2}$$ (17) where $`G_N=G_5k`$ is the four-dimensional Newton constant. Of greater interest is the first term in Eq. (16), which dominates at $`r\gg k^{-1}`$. Substituting the normalization factor (14) into this term, we find $$V(r)=\frac{G_5}{r}\int _0^{\zeta _c^{-1}}dm\frac{\pi }{(k\zeta _c)^3}\left(1+\frac{4}{(m\zeta _c)^2(k\zeta _c)^4}\right)^{-1}\frac{4k^2}{\pi ^2m^2}e^{-mr}$$ (18) This integral is always saturated at $`m\lesssim r_c^{-1}\ll \zeta _c^{-1}`$, where $$r_c=\zeta _c(k\zeta _c)^2=k^{-1}e^{3kz_c}$$ (19) Therefore, we can extend the integration to infinity and obtain $`V(r)`$ $`=`$ $`{\displaystyle \frac{G_N}{r}}{\displaystyle \frac{2}{\pi }}{\displaystyle \int _0^{\mathrm{\infty }}}𝑑x{\displaystyle \frac{e^{-\frac{2r}{r_c}x}}{x^2+1}}`$ (20) $`=`$ $`{\displaystyle \frac{2G_N}{\pi r}}\left[\text{ci}(2r/r_c)\mathrm{sin}(2r/r_c)-\text{si}(2r/r_c)\mathrm{cos}(2r/r_c)\right]`$ (21) where $`x=mr_c/2`$, and $`\text{ci/si}(t)=-\int _t^{\mathrm{\infty }}\frac{\mathrm{cos}/\mathrm{sin}(u)}{u}𝑑u`$ are the cosine and sine integrals. We see that $`V(r)`$ behaves in a peculiar way. At $`r\ll r_c`$, the exponential factor in Eq. (20) can be set equal to one and the four-dimensional Newton law is restored, $`V(r)=G_N/r`$. Hence, at intermediate distances, $`k^{-1}\ll r\ll r_c`$, the collection of continuous modes with $`m\lesssim r_c^{-1}`$ has the same effect as the graviton bound state in the RS model. However, in the opposite case, $`r\gg r_c`$, we find $$V(r)=\frac{G_Nr_c}{\pi r^2}$$ (22) which has the form of “Newton’s law” of five-dimensional gravity with a renormalized gravitational constant. It is clear from Eq. (20) that at intermediate distances, $`k^{-1}\ll r\ll r_c`$, the four-dimensional Newtonian potential obtains not only short distance corrections, Eq.
(17), but also long distance ones, $`V(r)=G_N/r+\mathrm{\Delta }V_{short}(r)+\mathrm{\Delta }V_{long}(r)`$. The long distance corrections are suppressed by $`r/r_c`$, the leading term being $$\mathrm{\Delta }V_{long}(r)=\frac{G_N}{r}\frac{r}{r_c}\frac{4}{\pi }\left(\mathrm{ln}\frac{2r}{r_c}+𝐂-1\right)$$ (23) where $`𝐂`$ is the Euler constant. The two types of corrections, Eqs. (17) and (23), are comparable at roughly $`r\sim \zeta _c`$. At larger $`r`$, deviations from the four-dimensional Newton law are predominantly due to the long-distance effects. In our scenario, the approximate four-dimensional gravity law is valid over a finite range of distances. Without strong fine-tuning, however, this range is large, as required by phenomenology. Indeed, the exponential factor in Eq. (19) leads to a very large $`r_c`$ even for microscopic separations, $`z_c`$, between the branes. As an example, for $`k\sim M_{Pl}`$ we only require $`z_c\sim 50l_{Pl}`$ to have $`r_c\sim 10^{28}`$ cm, the present horizon size of the Universe; i.e., with mild assumptions about $`z_c`$, the four-dimensional description of gravity is valid from the Planck to cosmological scales (in this example, long distance corrections to Newton’s gravity law dominate over short distance ones at $`r\gtrsim \zeta _c\sim 10^{-13}`$ cm). An interesting consequence of the incomplete localization of the graviton is the dissipation into the fifth dimension of gravitational waves propagating over large distances. Let us consider gravitational waves generated by a periodic pointlike source on the brane, $`T(x,z)=T(𝐱)e^{i\omega t}\delta (z)`$, where the four-dimensional indices are again omitted.
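Before turning to gravitational waves, note that the crossover of the static potential, Eq. (20), is easy to verify numerically. The sketch below evaluates $`rV(r)/G_N`$ by Simpson quadrature after the substitution x = tan θ, which maps the half-line onto [0, π/2]; the quadrature choice and function names are our own, not taken from the paper.

```python
import math

def V_times_r(r_over_rc, n=100001):
    """r*V(r)/G_N = (2/pi) * Integral_0^inf exp(-2(r/r_c)x)/(1+x^2) dx.
    Substituting x = tan(theta) turns dx/(1+x^2) into d(theta), so the
    integrand becomes exp(-a*tan(theta)) on the finite interval [0, pi/2]."""
    a = 2.0 * r_over_rc

    def f(theta):
        # exp underflows harmlessly to 0.0 near theta = pi/2
        return math.exp(-a * math.tan(theta))

    # composite Simpson rule (n odd => an even number of sub-intervals)
    h = (math.pi / 2.0) / (n - 1)
    total = f(0.0) + f(math.pi / 2.0)
    for i in range(1, n - 1):
        total += (4.0 if i % 2 else 2.0) * f(i * h)
    return (2.0 / math.pi) * (h / 3.0) * total
```

For $`r\ll r_c`$ the result approaches 1 (the four-dimensional law $`V=G_N/r`$), while for $`r\gg r_c`$ it approaches $`r_c/(\pi r)`$, reproducing the five-dimensional form of Eq. (22); the interpolation between the two régimes is monotonic.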
The gravitational field on the brane is given by the convolution of the source, $`T(𝐱)`$, with the Green’s function $$G(𝐱-𝐱^{\prime };\omega )=8\pi G_5\int d(t-t^{\prime })G(x,x^{\prime };z=z^{\prime }=0)e^{i\omega (t-t^{\prime })}$$ (24) Here the five-dimensional gravitational constant is included for convenience of comparison with four-dimensional formulae, and $`G(x,x^{\prime };z,z^{\prime })`$ is the retarded Green’s function of the linearized Einstein equations. The latter is constructed from the full set of eigenmodes in the usual way: $$G(x,x^{\prime };z,z^{\prime })=\int _0^{\mathrm{\infty }}dm\psi _m(z)\psi _m(z^{\prime })\frac{1}{(2\pi )^4}\int d^4p\frac{e^{ipx}}{m^2-p^2-iϵp^0}$$ (25) After substitution of (25) into (24) and simplifications we get $$G(𝐱-𝐱^{\prime };\omega )=\frac{2G_5}{r}\int _0^{\mathrm{\infty }}dm\psi _m^2(0)e^{ip_\omega r}$$ (26) where $`r=|𝐱-𝐱^{\prime }|`$, $`p_\omega =\sqrt{\omega ^2-m^2}`$ when $`m<\omega `$ and $`p_\omega =i\sqrt{m^2-\omega ^2}`$ when $`m>\omega `$. We see that the gravitational field on the brane has the form of a superposition of massive four-dimensional modes. Only modes with $`m<\omega `$ are actually radiated; the other ones fall off exponentially away from the source. Thus, as long as we are interested in gravitational waves, we can integrate in (26) only up to $`m=\omega `$. Let us study the case $`r_c^{-1}\ll \omega \ll k`$. Then, the main contribution to (26) is given by modes with $`m\sim r_c^{-1}`$, whereas modes with larger masses give rise to corrections suppressed by $`\omega /k`$ (cf. (16), (17)). In this region of $`m`$, the eigenfunctions $`\psi _m`$ are given by the explicit expressions (7), (14) and $`p_\omega `$ is approximated by $`\omega -\frac{m^2}{2\omega }`$. In this way we get (cf. (20)) $$G(𝐱-𝐱^{\prime };\omega )=\frac{2G_N}{r}e^{i\omega r}\frac{2}{\pi }\int _0^{\mathrm{\infty }}dx\frac{e^{-i\frac{2r}{\omega r_c^2}x^2}}{1+x^2}$$ (27) where again $`x=mr_c/2`$ and we extended the integration to infinity.
This integral is expressed in terms of the error function: $$G(𝐱-𝐱^{\prime };\omega )=\frac{2G_N}{r}e^{i\omega r}\left(1-\mathrm{erf}(\beta )\right)e^{\beta ^2}$$ (28) where $`\beta =e^{i\frac{\pi }{4}}\sqrt{\frac{2r}{\omega r_c^2}}`$. When $`\beta \ll 1`$ (that is, $`r\ll \omega r_c^2`$) one has $`\mathrm{erf}(\beta )\approx 0`$, and we obtain the usual $`1/r`$-dependence of the gravity wave amplitude on the distance to the source. In the opposite case $`\beta \gg 1`$ ($`r\gg \omega r_c^2`$), we make use of $`\mathrm{erf}(\beta )\simeq 1-\frac{1}{\sqrt{\pi }\beta }e^{-\beta ^2}`$ to obtain $$G(𝐱-𝐱^{\prime };\omega )=\frac{2G_N}{r^{3/2}}\sqrt{\frac{\omega r_c^2}{2\pi }}e^{i\omega r-i\frac{\pi }{4}}$$ (29) that is, the amplitude is proportional to $`r^{-3/2}`$. Thus, the gravitational waves dissipate into the fifth dimension and, from the point of view of a four-dimensional observer on the brane, the energy of gravity waves is not conserved. This effect becomes considerable after the wave travels a distance of order $`r\sim \omega r_c^2`$ from the source. Note that at $`\omega \gg r_c^{-1}`$ this distance is much larger than the distance $`r_c`$ at which the violation of four-dimensional Newton’s law is appreciable. This difference between the two distance scales can in fact be seen to be a relativistic effect. The collection of five-dimensional graviton states with $`m\sim r_c^{-1}`$ may be viewed as an RS bound state which becomes metastable in our model. The width of this metastable state is of order $`\mathrm{\Gamma }=\mathrm{\Delta }m\sim m\sim r_c^{-1}`$. In its own reference frame, the graviton disappears into the fifth dimension on a time scale $`\tau \sim \mathrm{\Gamma }^{-1}\sim r_c`$. This time scale determines the distance at which the four-dimensional gravity of static sources gets modified. On the other hand, when the graviton moves in four-dimensional space-time with momentum $`p\sim \omega `$, there is an additional gamma-factor of order $`\gamma \sim \omega /m\sim \omega r_c`$.
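The chain of estimates in this paragraph can be collected in one line (a sketch restating the scales just quoted; the name $`d_{\mathrm{diss}}`$ for the dissipation distance is ours):

```latex
\Gamma \sim \Delta m \sim m \sim r_c^{-1}
\;\Rightarrow\;
\tau \sim \Gamma^{-1} \sim r_c ,
\qquad
\gamma \sim \frac{\omega}{m} \sim \omega r_c ,
\qquad
d_{\mathrm{diss}} \sim \gamma\,\tau \sim \omega r_c^{2} .
```

Thus a graviton of frequency ω survives on the brane out to distances of order ωr_c², consistent with the damping scale found from Eq. (28).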
The graviton therefore remains effectively four-dimensional during a time of order $`\gamma \tau \sim \omega r_c^2`$, due to the relativistic time delay. Because of this property, the leaking of gravity waves into the extra dimension is negligible at relatively short wavelengths. However, the corresponding time scale becomes of order $`r_c`$ for wavelengths of the same order. This is another manifestation of our observation that extra dimensions open up at the length scale $`r_c`$. Finally, we point out that in more complicated higher-dimensional models, the long distance properties of gravitational interactions may be even more intriguing. Indeed, let us modify the model discussed throughout this paper by compactifying the fifth dimension to a very large radius, $`z_{}\gg z_c`$. Then the mass spectrum of Kaluza–Klein gravitons becomes discrete, and the graviton zero mode reappears. However, the spacing between the masses may be tiny, depending on $`z_{}`$. With appropriately chosen $`z_{}`$, the five-dimensional gravity law, Eq. (22), will itself be valid in a finite interval of distances, and four-dimensional gravity will again be restored at the largest scales, well above $`r_c`$. We conclude that higher-dimensional theories provide, somewhat unexpectedly, valid alternatives to four-dimensional gravity at large distances. With hindsight, it is perhaps not surprising that this happens, since at the very largest scales the anti-de Sitter sandwich becomes very slim and spacetime is nearly flat; from the perspective of our putative universe, the four-dimensional central wall, this conclusion is not so transparent, however. It would be worthwhile to explore such models in the context of cosmology and astrophysics. The long-distance phenomena in our Universe may become a window onto microscopic extra dimensions! We would like to thank Victor Berezin, Sergei Dubovsky, Dmitry Gorbunov, Maxim Libanov and Sergei Troitsky for useful discussions.
We would also like to thank Ian Kogan for drawing our attention to Ref. . R.G. and V.R. acknowledge the hospitality of the Isaac Newton Institute for Mathematical Sciences, where this work was begun. R.G. was supported in part by the Royal Society, and V.R. and S.S. by the Russian Foundation for Basic Research, grant 990218410.
no-problem/0002/cond-mat0002212.html
ar5iv
text
# A POSSIBLE CRYPTO-SUPERCONDUCTING STRUCTURE IN A SUPERCONDUCTING FERROMAGNET

## I INTRODUCTION

It has long been demonstrated that, while an antiferromagnetic order can coexist with a uniform superconducting order (in a Meissner state), a long-range ferromagnetic order cannot. However, it was later shown that a non-uniform ferromagnetic order could coexist with superconductivity. This non-uniform ferromagnetic order manifests itself in the form of a spiral structure, or a domain-like structure, which reduces its effectiveness in suppressing superconductivity. At the same time, it was also shown that superconductivity suppresses the ferromagnetic order. Many ternary compounds, comprised of both superconducting and ferromagnetic sublattices, indeed exhibit these fascinating magnetic structures in the superconducting state. For instance, ErRh<sub>4</sub>B<sub>4</sub> superconducts below its superconducting transition temperature ($`T_s`$) of 8.7 K and orders ferromagnetically below its magnetic transition temperature ($`T_m`$) of 0.8 K, before returning to a normal state below a re-entrant temperature ($`T_r`$) of 0.7 K. A spiral ferromagnetic structure was found to coexist with the superconducting state in this compound between $`T_m`$ and $`T_r`$. On the other hand, a domain-like ferromagnetic structure was observed between $`T_m`$ and $`T_r`$ of HoMo<sub>6</sub>S<sub>8</sub> and HoMo<sub>6</sub>Se<sub>8</sub>, with ($`T_s`$, $`T_m`$, $`T_r`$) of (1.8 K, 0.7 K, 0.65 K) and (5.5 K, 0.53 K, 0 K), respectively. A spontaneous vortex lattice state was also proposed for the case in which the internal field of the ferromagnet ($`4\pi M`$) is greater than the lower critical field ($`H_{c1}`$) but lower than the upper critical field ($`H_{c2}`$). The compounds previously studied always have a $`T_s`$ higher than $`T_m`$ and have been known as ferromagnetic superconductors .
In these ferromagnetic superconductors, superconductivity is the dominant state and the ferromagnetic state is modified with a non-uniform structure to fit the superconducting state. Recently, several ruthenate-cuprates have been reported to undergo a transition at $`T_m`$ to a ferromagnetic state that coexists with a superconducting state at a lower transition temperature $`T_s`$. They are RuSr<sub>2</sub>GdCu<sub>2</sub>O<sub>8</sub> (Ru-1212) and RuSr<sub>2</sub>(R<sub>0.7</sub>Ce<sub>0.3</sub>)<sub>2</sub>Cu<sub>2</sub>O<sub>10</sub> (Ru-1222) , in which R = Gd or Eu, with ($`T_m`$, $`T_s`$) of (133 K, 14–47 K) for Ru-1212, (180 K, 42 K) for Ru-1222 when R = Gd, and (122 K, 32 K) for Ru-1222 when R = Eu. They have therefore been called superconducting ferromagnets . The ferromagnetic state below $`T_m`$ has been firmly established to be bulk and uniform down to a scale of $`\sim 20`$ Å across the samples, as examined by magnetization, muon-relaxation, and Mössbauer measurements. However, the cited evidence for superconductivity rests entirely on the zero resistivity ($`\rho `$) observed and a diamagnetic shift of the $`dc`$ magnetic susceptibility ($`\chi _{dc}`$) measured in the zero-field-cooled (ZFC) mode. No diamagnetic shift has been detected in the $`\chi _{dc}`$ of these samples when measured in the field-cooled (FC) mode. It is known that a diamagnetic shift in the ZFC-$`\chi _{dc}`$, which is a measure of the superconducting shielding effect and consistent with the zero-$`\rho `$ observed, is not proof of the existence of the Meissner state, a conventional criterion for the existence of bulk superconductivity. To determine the possibility of a nearly perfect pinning in Ru-1212 that reduces the size of the diamagnetic shift in the FC-$`\chi _{dc}`$, we have recently determined the reversible magnetization ($`M_r`$) of the sample as a function of field ($`H`$) .
The results showed that a Meissner state exists in no more than a few percent of the bulk sample, if it exists at all. Several possibilities have been proposed to account for the absence of a bulk Meissner state in Ru-1212: the possible formation of a spontaneous vortex state in the bulk superconducting ferromagnet Ru-1212; the possible presence of a spatially non-uniform superconducting structure in the chemically uniform Ru-1212; and the possible appearance of a filamentary superconductivity associated with a crypto-superconducting fine structure in a chemically uniform superconducting ferromagnet or with an impurity phase in this otherwise non-superconducting ferromagnet. In this paper, we further examine these possibilities proposed for the absence of the bulk Meissner effect in superconducting ferromagnets. ## II EXPERIMENTAL Ru-1212 samples investigated here were either the same ones previously studied or freshly prepared by thoroughly reacting a mixture of RuO<sub>2</sub>, SrCO<sub>3</sub>, Gd<sub>2</sub>O<sub>3</sub>, and CuO, with a cation composition of Ru:Sr:Gd:Cu = 1:2:1:2, following steps previously reported, including the prolonged annealing at 1060 C . The structure was determined by powder X-ray diffraction (XRD), using the Rigaku DMAX-IIIB diffractometer; the resistivity by the standard four-lead technique, employing the Linear Research Model LR-700 Bridge; and both the $`dc`$ and $`ac`$ magnetizations by the Quantum Design SQUID magnetometer. Powder samples were prepared by pulverizing the pellets after prolonged annealing and selecting particles of different sizes by using sieves of different meshes. ## III RESULTS AND DISCUSSION The samples investigated are pure Ru-1212 within the XRD resolution of a few percent. They display a tetragonal structure with a space group $`P4/mmm`$. The lattice parameters are $`a=3.8375(8)`$ Å and $`c=11.560(2)`$ Å in good agreement with values reported previously . 
The scanning electron microscope results show grains of 1–5 $`\mu `$m in size and the scanning microprobe data show a uniform composition across the sample to a scale of 1–2 $`\mu `$m. The temperature dependence of $`\chi _{dc}`$ is shown in Fig. 1 and was measured at 5 Oe during both the ZFC and FC modes. A strong diamagnetic shift at $`25`$ K is detected in the ZFC-$`\chi _{dc}`$, but not in the FC-$`\chi _{dc}`$, similar to previous studies . It should be noted that such a diamagnetic shift in ZFC-$`\chi _{dc}`$ was sometimes absent from some samples that have the same XRD pattern. The temperature dependence of $`\rho `$ is exhibited in Fig. 2. A drastic $`\rho `$-drop starting at $`45`$ K and reaching zero at $`30`$ K ($`T_s`$) is observed. In the presence of a magnetic field, $`T_s`$ is suppressed to $`8`$ K at 7 T, characteristic of a superconducting transition below $`T_s`$. To determine if the absence of a diamagnetic shift in the FC-$`\chi `$ in the Ru-1212 sample is associated with a possible strong pinning, we determined $`M_r`$ as a function of $`H`$ from negative to positive $`H`$ in very fine $`H`$-steps, especially in the low-$`H`$ region. Since the surface pinning in cuprate superconductors is small, $`M_r`$ can be taken as $`(1/2)(M_++M_{})`$, where $`M_+`$ and $`M_{}`$ are magnetizations measured during the increasing and the decreasing of the field, respectively. The results of $`M_r(H)`$ and $`dM_r(H)/dH`$ are summarized in Fig. 3. $`M_r`$ does not exhibit any clear deviation from the almost linear $`M_r`$-$`H`$ relation; nor does $`dM_r(H)/dH`$ display the large initial increase expected of a Meissner state as $`H`$ passes over $`H_{c1}`$, even for $`H`$ as small as $`\pm 0.25`$ Oe. The results demonstrate unambiguously the absence of a bulk Meissner state in the Ru-1212 samples investigated down to 0 Oe. 
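The reversible-magnetization construction used above is simple enough to sketch numerically. The following minimal example uses synthetic hysteresis branches (the field grid, slopes, and offsets are illustrative assumptions, not the measured data) to form $`M_r=(1/2)(M_++M_{})`$ and its field derivative:

```python
import numpy as np

# Synthetic hysteresis branches standing in for a measured M(H) loop;
# the +/-0.5 offsets mimic the irreversible (pinning) contribution.
H = np.linspace(-10.0, 10.0, 401)      # applied field (Oe)
M_plus = 0.8 * H - 0.5                 # increasing-field branch (illustrative)
M_minus = 0.8 * H + 0.5                # decreasing-field branch (illustrative)

# Since surface pinning is small in cuprates, the reversible magnetization
# is taken as M_r = (1/2)(M_+ + M_-); a bulk Meissner state would show up
# as a large diamagnetic feature in dM_r/dH near H = 0.
M_r = 0.5 * (M_plus + M_minus)
dMr_dH = np.gradient(M_r, H)
```

For this toy loop the derivative is featureless and constant, mirroring the nearly linear $`M_r`$-$`H`$ relation and flat $`dM_r/dH`$ reported above for Ru-1212.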
The slight initial increase in $`dM_r/dH`$ with $`H`$ enables us to estimate that no more than a few percent of the bulk sample is in the true Meissner state, if the state exists at all. In a superconducting ferromagnet with a $`T_m>T_s`$, a spontaneous magnetization $`M`$ occurs below $`T_m`$. If the internal field associated with the moments $`4\pi M`$ is greater than $`H_{c1}`$ but smaller than $`H_{c2}`$, a spontaneous vortex lattice has been proposed to form below $`T_s`$, similar to that predicted for a ferromagnetic superconductor where $`T_s>T_m`$. Since $`4\pi M`$ and $`H_{c1}`$ are temperature-dependent, there may exist two scenarios for the superconducting ferromagnet. When its $`4\pi M`$ is always greater than $`H_{c1}`$ below $`T_m`$, the compound will undergo a paramagnetic-to-ferromagnetic transition at $`T_m`$ and then enter the spontaneous vortex state at and below $`T_s`$, as shown in Fig. 4a. This was proposed to be a possible case for Ru-1212 . When the growth of $`H_{c1}`$ outpaces that of $`4\pi M`$ below a certain temperature $`T_{so}`$, as shown in Fig. 4b, the compound first undergoes a paramagnetic-to-ferromagnetic transition at $`T_m`$ upon cooling and then enters the spontaneous vortex state at $`T_s`$, before it recovers the Meissner state at $`T_{so}`$. Such a case has been suggested to take place in Ru-1222 , although the Meissner state below $`T_{so}`$ was not evident in their FC-$`\chi _{dc}`$. Another possible scenario is a triplet superconducting pairing, in which $`T_m`$ may coincide with $`T_s`$ . In this scenario, the compound will directly enter the spontaneous state at $`T_s`$ from its paramagnetic state when $`4\pi M>H_{c1}`$ (Fig. 4c) or recover its Meissner state if $`4\pi M`$ becomes smaller than $`H_{c1}`$ below $`T_{so}`$ (Fig. 4d). We conjecture that the superconducting Sr<sub>2</sub>RuO<sub>4</sub> may fall into this third category. 
In the above discussions, a bulk superconducting state is assumed to exist in the superconducting ferromagnet below $`T_s`$. Its existence is usually best demonstrated by the detection of a specific heat anomaly ($`\mathrm{\Delta }C_p`$) at $`T_s`$. A $`\mathrm{\Delta }C_p`$ was indeed reported in Ru-1212 near $`T_s`$, with a magnitude close to that of an underdoped superconducting cuprate. Unfortunately, it was also reported that while the magnitude of $`\mathrm{\Delta }C_p`$ is suppressed by an externally applied $`H`$, as expected of a bulk superconducting transition, the peak temperature of $`\mathrm{\Delta }C_p`$ increases with $`H`$, in strong contrast to the $`T_s`$-suppression by $`H`$ observed. This suggests that the detected $`\mathrm{\Delta }C_p`$ may be of a magnetic origin. It is known that Sr<sub>2</sub>GdRuO<sub>6</sub>, which forms easily in the Ru-1212 matrix, is an antiferromagnet with a Neel temperature close to $`T_s`$ at $`30`$ K and a transition drastically suppressed by $`H`$ . The presence of a few tenths of a percent of Sr<sub>2</sub>GdRuO<sub>6</sub> as an impurity in Ru-1212 can easily account for the $`\mathrm{\Delta }C_p`$-anomaly detected near $`T_s`$ . In view of the possible absence of $`\mathrm{\Delta }C_p`$, and thus of bulk superconductivity in Ru-1212, we decided to investigate both the chemical and the electrical uniformity of the Ru-1212 samples. As mentioned earlier, the XRD pattern of our samples shows only the Ru-1212 phase within our experimental resolution of a few percent. The microprobe scan indicates that the sample is chemically uniform to a scale of $``$ 1–2 $`\mu `$m in agreement with previous report . 
To determine the electrical uniformity of the sample in terms of its superconducting properties, we examined the superconducting shielding effect of Ru-1212 powder samples of different particle sizes by measuring the $`ac`$ magnetization ($`M_{ac}`$) as a function of the $`ac`$ field ($`H_{ac}`$) in fine steps down to $`10^{-4}`$ Oe in zero $`dc`$ field. Details of the experiment will be published elsewhere . Typical results of the $`ac`$ susceptibility $`\chi _{ac}\equiv M_{ac}/H_{ac}`$ at 2 K for samples of particle sizes down to 20 $`\mu `$m, all pulverized sequentially from the same bulk sample of $`6`$ mm, are displayed in Fig. 5. For the 6 mm bulk sample, $`4\pi \chi _{ac}`$ varies negligibly with $`H_{ac}<0.1`$ Oe, but decreases rapidly with $`H_{ac}>1`$ Oe. The $`ac`$ superconducting shielding effect as measured by $`4\pi \chi _{ac}`$ before the demagnetization factor correction is $`200`$%. As the particle size decreases, $`4\pi \chi _{ac}`$ decreases, but remains almost $`H_{ac}`$-independent until $`H_{ac}>1`$ Oe. To ensure that the $`4\pi \chi _{ac}`$-drop is not caused by degradation of the samples due to pulverization, we compared the $`M_r`$’s for both the bulk sample and a powdered sample with a particle size of 50 $`\mu `$m, as shown in the inset of Fig. 1. The same $`M_r`$ was detected for the two samples, suggesting that the powdered samples retain their integrity. For a uniform bulk superconductor, $`\chi _{ac}`$ is expected to be independent of the particle size, provided that the penetration depth $`\lambda \ll d`$, where $`d`$ is the size of the particle. However, the field $`H_{ac}^{*}`$, beyond which $`4\pi \chi _{ac}`$ starts to decrease, is determined by the product $`J_cd`$, where $`J_c`$ is the critical current density of the compound, and is expected to decrease with $`d`$, according to the Bean model. This is borne out by our results for a similar experiment on uniform bulk and powdered superconducting YBa<sub>2</sub>Cu<sub>3</sub>O<sub>7</sub> . 
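The Bean-model expectation invoked here can be stated compactly: the characteristic field $`H_{ac}^{*}`$ scales with the product of critical current density and sample size. The sketch below encodes only that scaling; the prefactor, the $`J_c`$ value, and the unit bookkeeping are placeholder assumptions, not measured quantities:

```python
# Bean critical-state scaling: the characteristic ac field H* above which
# the shielding signal degrades grows with the product J_c * d.
def bean_h_star(J_c, d_cm, prefactor=1.0):
    """Characteristic penetration field H* ~ prefactor * J_c * d (arb. units)."""
    return prefactor * J_c * d_cm

sizes_um = [20, 50, 6000]                      # particle sizes from the experiment (micrometers)
J_c = 1.0e3                                    # assumed critical current density (arb. units)
H_star = [bean_h_star(J_c, s * 1e-4) for s in sizes_um]
# For a uniform bulk superconductor H* should shrink strongly with particle
# size, unlike the nearly size-independent ~0.5 Oe observed for Ru-1212.
```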
Therefore, our observations on Ru-1212 are at variance with what is expected of a uniform bulk superconductor in two respects: the $`\chi _{ac}`$-decrease with $`d`$ and the almost $`d`$-independent $`H_{ac}^{*}\sim 0.5`$ Oe. The $`H_{ac}^{*}`$ predicted by the Bean model is also shown in Fig. 5 for comparison. These variances can be explained by assuming that Ru-1212 is a bulk superconductor with a penetration depth $`\lambda \sim 50`$ $`\mu `$m. However, the estimated $`\lambda \sim 50`$ $`\mu `$m seems too large for a cuprate superconductor, whose reported $`\lambda `$ values are usually about 20–100 times smaller. To explain the large $`\lambda `$, we assume that there may exist a sponge-like superconducting network uniformly distributed across the Ru-1212 sample to a scale of $`<20`$ $`\mu `$m. In other words, the superconducting order may have modified itself with such a non-uniform fine structure (crypto-superconductivity) in order to fit into the ferromagnetic state in the superconducting ferromagnet Ru-1212, in a way very similar to the proposed crypto-ferromagnetism in a ferromagnetic superconductor . For instance, superconductivity may exist between the ferromagnet-domain boundaries, where the magnetic field can be smaller than $`H_{c2}`$ and the magnetic scattering is suppressed. The formation of these fine ferromagnet-domains in the chemically uniform itinerant ferromagnet Ru-1212, driven by the electromagnetic interaction, appears not unreasonable in view of what has been observed in the closely related, highly correlated colossal magnetoresistance manganites . Such a superconducting structure is expected to display a very small condensation energy associated with the superconducting transition at $`T_s`$. This proposed structure is consistent with the zero $`\rho `$ observed and with our preliminary results suggesting a negligible condensation energy determined by the magnetization method . 
While the large $`\lambda `$ suggested by our $`\chi _{ac}`$-$`H_{ac}`$ results can be explained in terms of the possible appearance of crypto-superconductivity, we would also like to address the other possible cause, i.e. the presence of a minor superconducting impurity phase in the otherwise non-superconducting ferromagnet Ru-1212. RSr<sub>2</sub>Cu<sub>3</sub>O<sub>7</sub> (Cu-1212) with R = Y or a rare-earth element is known to form at ambient pressure by slight doping with a high valence element or under high pressure without doping . For example, the Cu-1212 with its chain-Cu partially replaced by Ru has been reported to form at ambient pressure and has a $`T_s`$ of $``$ 20–65 K. A small amount of the slightly Ru-doped Cu-1212 present in an otherwise non-superconducting Ru-1212 may thus account for the observation of the large $`\lambda `$, the zero $`\rho `$, and the absence of a bulk Meissner state. However, more than $`10`$% of the minor phase of Ru-doped Cu-1212 is required to be present in order to account for the observed percolative path of zero $`\rho `$ in Ru-1212. Although the similar structures of Ru-1212 and Cu-1212 make the XRD less decisive in differentiating the two phases, the difference between the Cu/Ru-ratios of the two phases (2/1 and 2.7/0.3 for the Ru-1212 and Ru-doped Cu-1212, respectively) can be easily detected by the microprobe for their separate presence. Unless the slightly Ru-doped Cu-1212 or other unknown superconducting impurity phase is finely dispersed to a scale below one micrometer or is deposited in the grain boundaries of the sample, our microprobe data suggest the unlikely presence of such superconducting impurities, in agreement with previous reports . In conclusion, we have measured the electrical and magnetic properties of both bulk and powdered Ru-1212 of different particle sizes in various $`dc`$ and $`ac`$ fields. 
The XRD and microprobe data show that the Ru-1212 samples examined are chemically pure to a few percent and uniform to a scale of 1–2 $`\mu `$m. The reversible magnetization as a function of field clearly demonstrates the absence of a bulk Meissner state in Ru-1212. The $`4\pi \chi _{ac}`$ decreases as the particle size decreases and remains constant initially as the $`ac`$ field increases, but decreases rapidly, independent of the particle size, as the $`ac`$ field increases to above $`0.5`$ Oe. Several possible scenarios have been considered and discussed. While a spontaneous vortex state may exist in a bulk superconducting ferromagnet Ru-1212, we also propose that the superconducting state in a superconducting ferromagnet is more likely to have modified itself to a sponge-like non-uniform superconducting fine (crypto-superconducting) structure to fit the ferromagnetic state in the chemically uniform Ru-1212. While the spontaneous vortex lattice shows a reduced condensation energy as the compound undergoes the superconducting transition, the crypto-superconducting structure is expected to display a negligible condensation energy. Further experiments to determine the condensation energy and the spontaneous vortex lattice in superconducting ferromagnets are needed to differentiate the above suggestions. ###### Acknowledgements. The work in Houston is supported in part by NSF Grant No. DMR-9804325, the T. L. L. Temple Foundation, the John J. and Rebecca Moores Endowment, and the State of Texas through the Texas Center for Superconductivity at the University of Houston; and at Lawrence Berkeley Laboratory by the Director, Office of Energy Research, Office of Basic Energy Sciences, Division of Materials Sciences of the U.S. Department of Energy under Contract No. DE-AC03-76SF00098.
# Kinematics of Molecular Hydrogen Emission from Planetary and Pre-planetary Nebulae ## 1. Introduction The presence of molecular hydrogen emission is now recognized as a reliable indicator of bipolar structure in planetary nebulae (Zuckerman & Gatley 1988; Kastner et al. 1994, 1996). While the polar lobes often display H<sub>2</sub>, the molecular emission is, with few exceptions, brightest toward the waists of bipolar planetaries. These molecule-rich regions of planetary nebulae (PNs) appear to be the remnants of circumstellar disks or tori formed during previous, asymptotic giant branch (AGB) or post-AGB phases of the central stars. Furthermore, the available evidence suggests that the onset of H<sub>2</sub> emission postdates the AGB stage but precedes the formation of the PN (Weintraub et al. 1998). This onset likely signals the beginning of a high-velocity, collimated, post-AGB wind, which shocks the previously ejected, “slow,” AGB wind and thereby produces the observed H<sub>2</sub> emission (Kastner et al. 1999). These observations make clear that further investigations of H<sub>2</sub> emission are important to our understanding of the origin of bipolarity in PNs. It is of particular interest to establish whether the spatially distinct waist and lobe H<sub>2</sub> emission regions are kinematically distinct as well and, furthermore, whether the kinematics bear evidence of the presence of circumstellar disks and/or high-velocity polar flows. To this end, we have undertaken a program of spectroscopic mapping of near-infrared H<sub>2</sub> emission from planetary and pre-planetary nebulae at high spectral resolution. First results from this program were presented in Weintraub et al. (1998), in which H<sub>2</sub> emission was detected from a pair of bipolar pre-planetary nebulae (PPNs), and in Kastner et al. (1999), where we described preliminary results for the seminal PPN RAFGL 2688. 
Here we present further analysis and interpretation of H<sub>2</sub> velocity mapping of RAFGL 2688, as well as H<sub>2</sub> velocity mapping results for the PPN RAFGL 618 and the bipolar planetary nebula NGC 2346 (see also Arias & Rosado, in this volume). ## 2. Observations Data presented here were obtained with the NOAO<sup>1</sup><sup>1</sup>1National Optical Astronomy Observatories is operated by Associated Universities for Research in Astronomy, Inc., for the National Science Foundation. Phoenix spectrometer on the 2.1 m telescope at Kitt Peak, AZ, in 1997 June (RAFGL 2688) and 1997 December (RAFGL 618, NGC 2346). Phoenix illuminates a $`256\times 1024`$ section of an Aladdin InSb detector array. The spectrograph slit was $`60^{\prime \prime }\times 1.4^{\prime \prime }`$ oriented approximately east-west. The velocity resolution was $`\sim 4`$ km s<sup>-1</sup> and the spatial resolution $`\sim 1.5^{\prime \prime }`$ at the time these spectra were obtained. A spectral image centered near the 2.121831 $`\mu `$m $`S(1)`$, $`v=1\rightarrow 0`$ transition of H<sub>2</sub> was obtained at each of 12 spatial positions as the slit was stepped from south to north across RAFGL 2688. The step size, $`1.0^{\prime \prime }`$, provided coverage of the entire H<sub>2</sub> emitting region with spatial sampling approximating the slit height. For RAFGL 618, whose bright H<sub>2</sub> emission regions are oriented almost perfectly east-west (Latter et al. 1995), parallel to the Phoenix slit, we obtained a single spectral image centered on the object. For NGC 2346 we obtained spectral images at selected positions near the waist of the nebula. Spectral images were reduced and wavelength calibrated as described in Weintraub et al. (1998). For the RAFGL 2688 data, the reduced spectral images were stacked in declination according to the commanded telescope offsets, to produce a (RA, dec, velocity) data cube of H<sub>2</sub> emission. ## 3. Results and Discussion ### 3.1. RAFGL 2688 Kastner et al. 
(1999) presented selected velocity planes from the RAFGL 2688 Phoenix data cube. The four principal “lobes” of H<sub>2</sub> emission seen in direct H<sub>2</sub> images (e.g., Sahai et al. 1998) are also apparent in these Phoenix data, with one pair oriented parallel to the polar axis (roughly N-S) and one perpendicular (roughly E-W). Each of these H<sub>2</sub> lobe pairs displays a velocity gradient, with the N and E lobes blueshifted by up to $`30`$ km s<sup>-1</sup> and the S and W lobes similarly redshifted. However, the N-S and E-W H<sub>2</sub> lobe pairs differ in their detailed kinematic signatures (Kastner et al.). The H<sub>2</sub> kinematic data for RAFGL 2688, like velocity maps obtained from radio molecular line emission, can be described in terms of a multipolar system of purely radially directed jets (Cox et al. 1997; Lucas et al., these proceedings). Given the constraints imposed by Phoenix and Hubble Space Telescope data, however, this model would require that the “equatorial” components located east and west of the central star are in fact directed well above and below the equatorial plane, respectively (Kastner et al. 1999). If one postulates instead that the E-W H<sub>2</sub> emission lobes are confined to the equatorial plane of the system — a hypothesis that appears to be dictated by certain details of the H<sub>2</sub> surface brightness distribution, as well as by simple symmetry arguments — then one must invoke a model combining radial expansion with a component of azimuthal (rotational) velocity along the equatorial plane (Kastner et al.). In a forthcoming paper we will compare these two alternative models in more detail. Here, we describe a specific formulation of the latter (expansion $`+`$ rotation) model that reproduces many of the salient features of the Phoenix data. To construct this empirical model of the H<sub>2</sub> kinematics of RAFGL 2688, we are guided by the basic results described above. 
That is, the polar lobes are characterized by velocity gradients in which the fastest moving material is found closest to the star, and the slowest moving material is found at the tips of the H<sub>2</sub> emission regions. For simplicity, we assume this behavior can be described by an inverse power law relationship between velocity and radius. For the equatorial plane H<sub>2</sub> emission, meanwhile, we assume a combination of azimuthal (rotation) and radial (expansion) velocity components, whose magnitudes we denote by $`v_r`$ and $`v_e`$, respectively. To constrain these model parameters, we compared model velocity field images with a velocity centroid image which we obtained from the Phoenix data cube. For the polar lobes, we find that the exponent of the inverse power law velocity-distance relationship is roughly $`0.7`$ and that the outflow velocities at the tips of the N and S lobes are $`\sim 20`$ km s<sup>-1</sup>. For the equatorial regions, good agreement between model and data is obtained for values of $`v_e`$ and $`v_r`$ that lie in the range $`5`$–$`10`$ km s<sup>-1</sup>, with the additional constraint $`v_e+v_r\simeq 15`$ km s<sup>-1</sup>. An example of the results for a representative model (with $`v_e=5`$ km s<sup>-1</sup> and $`v_r=10`$ km s<sup>-1</sup>) is displayed in Fig. 1. There is clear qualitative agreement between the model and observed velocity images for these parameter values, in the sense that the overall distribution of redshifted and blueshifted emission is captured by the model. Furthermore, this model reproduces specific details of the observed H<sub>2</sub> velocity distribution, such as the approximate magnitudes and positions of the velocity extrema in the four H<sub>2</sub> lobes. 
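The two ingredients of this empirical model can be written down in a few lines. In the sketch below, the power-law normalization, the projection geometry (expansion entering as $`\mathrm{sin}\varphi `$ and rotation as $`\mathrm{cos}\varphi `$ for an edge-on equatorial plane), and all parameter defaults are simplifying assumptions for illustration:

```python
import numpy as np

# Polar lobes: inverse power law in radius, normalized so the outflow
# speed equals v_tip at the lobe tip (r = r_tip).
def polar_velocity(r, r_tip=1.0, v_tip=20.0, alpha=0.7):
    """Outflow speed (km/s) at fractional radius r along a polar lobe."""
    return v_tip * (r / r_tip) ** (-alpha)

# Equatorial plane: radial expansion v_e plus azimuthal rotation v_r,
# projected onto the line of sight (assumed edge-on geometry).
def equatorial_los_velocity(phi, v_e=5.0, v_r=10.0):
    """Line-of-sight velocity (km/s) at azimuth phi (radians)."""
    return v_e * np.sin(phi) + v_r * np.cos(phi)
```

Material closer to the star moves faster along the lobes, while the equatorial line-of-sight pattern interleaves the expansion and rotation signatures that the velocity-centroid comparison constrains.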
While this model is by no means unique, the comparison of calculated and observed velocity fields provides further support for a component of azimuthal velocity along the equatorial plane of RAFGL 2688, and offers an indication of the magnitude of this “rotational” component relative to the components of radial expansion both parallel and perpendicular to the polar axis of the system. ### 3.2. RAFGL 618 The Phoenix spectral image obtained for RAFGL 618 is displayed in Fig. 2. Bright H<sub>2</sub> emission is detected along the entire polar axis of RAFGL 618. These data demonstrate further that very high velocity H<sub>2</sub> emission is present in this bipolar outflow. The highest velocity molecular material is found closest to the central star of RAFGL 618. We conclude that the velocity gradients along the polar axes of both RAFGL 2688 and RAFGL 618 trace rapid transitions from the “slow,” spherically symmetric winds of their AGB progenitors to faster, collimated, post-AGB winds (Kastner et al. 1999). ### 3.3. NGC 2346 In Fig. 3 we display an H<sub>2</sub> image of NGC 2346 obtained with the NOAO Cryogenic Optical Bench (Kastner et al. 1996) and we illustrate the slit positions used for Phoenix spectroscopic observations. Phoenix spectral images of NGC 2346 obtained at these positions are presented in Fig. 4. These images demonstrate that the H<sub>2</sub> emission from the bipolar NGC 2346 forms rings or ellipses in position-velocity space, an observation that reinforces our prior conclusion that ring-like planetaries which display H<sub>2</sub> are bipolar in structure (Kastner et al. 1994, 1996). The position-velocity ellipse represented in the spectral image obtained with the slit at $`0^{\prime \prime }`$ offset (leftmost panels) is noticeably tilted, with the largest redshifts found $`\sim 15^{\prime \prime }`$ to the east and the largest blueshifts $`\sim 15^{\prime \prime }`$ to the west of the central star. It is apparent from Fig. 
3 that this tilt is due to the orientation of the Phoenix slit with respect to the object. That is, to the east of the star the slit takes in portions of the rearward-facing (redshifted) south polar lobe of the nebula, whereas to the west the slit samples portions of the forward-facing (blueshifted) north polar lobe. Furthermore, the position-velocity ellipses in Fig. 4 contain two distinct kinematic components: a central ring associated with lower velocity material in the nebular waist and a pair of rings associated with higher velocity material in the bipolar outflow lobes. The central ring is centered at the systemic velocity of the nebula and is most apparent in the spectral images obtained near the position of the central star (i.e., in the four lefthand panels). The southern ring is primarily redshifted (righthand bottom panels) while the northern ring (righthand top panels) is primarily blueshifted. All three rings are present in the images obtained nearest the position of the central star (lefthand panels), whereas the images obtained further from the central star display emission from only a portion of the central ring and one of the outer rings. Hence Figs. 3 and 4 indicate that the H<sub>2</sub> emission from the nebula’s waist produces the inner position-velocity ring, while the outer rings arise from H<sub>2</sub> emission from the polar lobes. Because of the tilt of the slit with respect to the waist of the nebula, a given slit position samples both the waist region and one or both polar lobes, resulting in a superposition of these kinematic features in a given spectral image. In summary, the Phoenix spectral images of NGC 2346 provide strong evidence for distinct kinematic components in this nebula. These components consist of an equatorial ring or disk which is expanding at relatively low velocity ($`15`$ km s<sup>-1</sup> projected along our line of sight; Fig. 4, leftmost panels) and polar lobes that are expanding at larger velocities (Fig. 
4, rightmost panels). Put differently, the equatorial confinement that is apparent in the morphology of this classical bipolar PN has a direct kinematic counterpart. It is tempting, therefore, to conclude that the pinched waist of NGC 2346 has its roots in processes which we are now beginning to explore in objects such as RAFGL 2688. #### Acknowledgments. J.H.K. acknowledges support from a JPL Infrared Space Observatory grant to RIT. LeeAnn Henn (MIT) reduced many of the Phoenix spectral images used in this study. ## References Cox, P., et al. 1997, A&A, 321, 907 Kastner, J.H., Gatley, I., Merrill, K.M., Probst, R.P., & Weintraub, D.A. 1994, ApJ, 421, 600 Kastner, J.H., Weintraub, D.A., Gatley, I., Merrill, K.M., & Probst, R.P. 1996, ApJ, 462, 777 Kastner, J.H., Henn, L., Weintraub, D.A., & Gatley, I. 1999, in IAU Symp. 191, “Asymptotic Giant Branch Stars,” eds. T. LeBertre, A. Lebre, & C. Waelkens, p. 431 Latter, W. B., Kelly, D. M., Hora, J. L., & Deutsch, L. K. 1995, ApJS, 100, 159 Sahai, R., Hines, D., Kastner, J.H., Weintraub, D.A., Trauger, J.T., Rieke, M.J., Thompson, R.I., & Schneider, G. 1998, ApJ, 492, 163L Weintraub, D.A., Huard, T., Kastner, J.H., & Gatley, I. 1998, ApJ, 509, 728 Zuckerman, B., & Gatley, I. 1988, ApJ, 324, 501
# 1 Introduction ## 1 Introduction The solar neutrino anomaly is strongly established both from five solar neutrino experiments and from the theoretical predictions of the Standard Solar Model (SSM), confirmed by recent helioseismology observations . One of the possible solutions to the solar neutrino problem (SNP) is the resonant spin-flavor precession (RSFP) scenario, based on the presence of a non-zero neutrino transition magnetic moment $`\mu =\mu _{ij}`$, $`i\neq j`$, which is fully consistent with all currently available solar neutrino data. In the present work we consider the RSFP solution for various models of regular magnetic fields in the Sun, varying three parameters: two fundamental particle physics ones, (i) the neutrino mass difference $`\mathrm{\Delta }m^2`$ and (ii) the neutrino mixing $`\mathrm{sin}^22\theta `$, and (iii) the magnetic field parameter $`\mu B_{\perp }(r)`$, scaled by the factor $`\mu `$ with the normalized magnetic moment $`\mu /10^{-11}\mu _B=\mu _{11}`$. In our plots we put $`\mu _{11}=1`$, obeying most known constraints on an active-active neutrino transition magnetic moment, including the present laboratory (reactor) bound, $`\mu <1.8\times 10^{-10}\mu _B`$ . In contrast to the case of a Dirac neutrino, for a Majorana neutrino there is only one more stringent astrophysical constraint, coming from the cooling of red giants, $`\mu <3\times 10^{-12}\mu _B`$ . Nevertheless, even this exception does not prevent our analysis of the RSFP scenario for the Sun, since one can easily choose neighboring allowed $`B_{\perp }`$-regions with higher values of the magnetic field strength ($`B_{max}`$) if a more stringent bound on the neutrino magnetic moment is accepted. ## 2 Magnetic fields in the Sun Very little is known about magnetic fields in the Sun. The MHD models do not exclude the presence of a significant magnetic field (a few hundred kilogauss) at the bottom of the convective zone of the Sun. 
Moreover, modern dynamo theories forbid large-scale magnetic fields with strengths of more than 30 Gauss in the central part of the Sun . In the most realistic MHD model of regular magnetic fields in the convective zone, one finds the well-known toroidal field in both hemispheres of the Sun, which has visible traces in the form of magnetic field loops floating up to the active bipolar regions seen on the solar photosphere. The shape and topology of the toroidal fields are very complicated, and simplified profiles are used in different magnetic field scenarios to solve the SNP. In the present consideration, two kinds of magnetic field profiles were used: the first, a simple triangle profile of the magnetic field; the second, a ”smooth” profile (see Fig.1). We have varied the magnetic field amplitude from a few kilogauss up to $`300`$ kG . Thus, we suppose that the magnetic field has a large-scale structure and can be considered as a regular field, fixed during the total observation time of all neutrino experiments. 
## 3 Master equation In the case of non-zero vacuum mixing and a non-zero transition magnetic moment, the neutrino propagation in the solar medium with a magnetic field can be described by the Schrödinger-like $`4\times 4`$ evolution equation for two neutrino flavors $`\nu _e`$ and $`\nu _\mu `$ with two helicities $$i\left(\begin{array}{c}\dot{\nu }_{eL}\hfill \\ \dot{\stackrel{~}{\nu }}_{eR}\hfill \\ \dot{\nu }_{\mu L}\hfill \\ \dot{\stackrel{~}{\nu }}_{\mu R}\hfill \end{array}\right)=\left(\begin{array}{cccc}V_e-c_2\delta & 0& s_2\delta & \mu B_+(t)\\ 0& -V_e-c_2\delta & \mu B_{-}(t)& s_2\delta \\ s_2\delta & \mu B_+(t)& V_\mu +c_2\delta & 0\\ \mu B_{-}(t)& s_2\delta & 0& -V_\mu +c_2\delta \end{array}\right)\left(\begin{array}{c}\nu _{eL}\\ \stackrel{~}{\nu }_{eR}\\ \nu _{\mu L}\\ \stackrel{~}{\nu }_{\mu R}\end{array}\right),$$ (1) where $`c_2=\mathrm{cos}2\theta `$, $`s_2=\mathrm{sin}2\theta `$, and $`\delta =\mathrm{\Delta }m^2/4E`$ are the neutrino mixing parameters; $`\mu =\mu _{ij}`$, $`i\neq j`$, is the neutrino active-active transition magnetic moment; $`B_\pm =B_x\pm iB_y`$ are the regular magnetic field components perpendicular to the neutrino trajectory in the Sun; and $`V_e(t)=G_F\sqrt{2}(\rho (t)/m_p)(Y_e-Y_n/2)`$ and $`V_\mu (t)=-G_F\sqrt{2}(\rho (t)/m_p)(Y_n/2)`$ are the matter potentials for $`\nu _{eL}`$ and $`\nu _{\mu L}`$ in the Sun, given by the abundances of the electron ($`Y_e=m_pN_e(t)/\rho (t)`$) and neutron ($`Y_n=m_pN_n(t)/\rho (t)`$) components and the SSM density profile $`\rho (t)=`$ $`250g/cm^3`$ $`\mathrm{exp}(-10.54t)`$, with $`t`$ measured in units of the solar radius . Using the experimental data and their errors, one can find common regions of the neutrino mixing and magnetic field parameters that satisfy all experiments simultaneously. These regions are called ”allowed”. All presented plots, except in some noted cases, are made at 95% C.L. ($`2\sigma `$). 
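As a rough numerical illustration (a sketch, not the paper's computation), one can estimate where the $`\nu _{eL}\to \stackrel{~}{\nu }_{\mu R}`$ resonance sits by inverting the exponential density profile. The conversion constant $`K`$, the round-number abundances, and the decaying form of the profile are assumptions of this sketch:

```python
import numpy as np

# Resonance condition for nu_eL -> anti-nu_muR, read off the diagonal of
# the evolution equation: sqrt(2) G_F (rho/m_p)(Y_e - Y_n) = dm2 cos(2theta)/(2E).
# K approximates sqrt(2) G_F / m_p in eV per (g/cm^3); YE, YN are
# illustrative constants rather than SSM radial profiles.
RHO0, SCALE = 250.0, 10.54            # rho(t) = RHO0 * exp(-SCALE * t), g/cm^3
YE, YN = 0.7, 0.3
K = 7.6e-14                           # eV per (g/cm^3), approximate

def potential(t):
    """Effective potential V_e + V_mu (eV) at fractional solar radius t."""
    return K * RHO0 * np.exp(-SCALE * t) * (YE - YN)

def resonance_radius(dm2, E, cos2t=1.0):
    """Fractional radius where V(t) = dm2 * cos2t / (2E); dm2 in eV^2, E in eV."""
    target = dm2 * cos2t / (2.0 * E)
    return -np.log(target / (K * RHO0 * (YE - YN))) / SCALE
```

With these numbers, an RSFP-range $`\mathrm{\Delta }m^2\sim 10^{-8}`$ eV<sup>2</sup> at $`E=10`$ MeV puts the resonance near $`0.9`$ of the solar radius (the convective zone), while an MSW-range $`\mathrm{\Delta }m^2\sim 10^{-5}`$ eV<sup>2</sup> pushes it inside $`0.3`$, consistent with the discussion in the next section.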
## 4 Results and discussion We have found the allowed $`\mathrm{\Delta }m^2`$, $`\mu B_{\perp }`$, $`\mathrm{sin}^22\theta `$ – regions for the RSFP solution to the SNP . These regions are fitted to the total rates (Figures 2-7) of all operating solar neutrino experiments, namely Homestake, Kamiokande, SAGE, GALLEX, and SuperKamiokande : $$\begin{array}{cc}\text{Experiment}& \text{ratio DATA/BP98}\\ \text{Homestake}& 0.33\pm 0.032\\ \text{GALLEX+SAGE}& 0.568\pm 0.076\\ \text{SuperKamiokande}& 0.470\pm 0.008\pm 0.013\end{array}$$ Here BP98 denotes the theoretical predictions of the SSM . All allowed regions have a squared mass difference $`\mathrm{\Delta }m^2`$ from $`5\cdot 10^{-9}\mathrm{eV}^2`$ to $`2\cdot 10^{-8}\mathrm{eV}^2`$ (Figures 2-5). In this case neutrinos undergo resonant spin-flavor conversions in the convective zone. If neutrinos had a much bigger mass difference (up to the MSW solution, $`\mathrm{\Delta }m_{MSW}^2\simeq 10^{-5}\mathrm{eV}^2`$), the resonant points would lie deep in the Sun's core (at less than $`0.3`$ of the solar radius), while no significant magnetic fields are expected there . The allowed regions form discrete areas over a wide range of magnetic field amplitudes $`B_{max}`$, from $`30`$ kG to $`300`$ kG, for a fixed neutrino transition magnetic moment $`\mu `$ ($`\mu _{11}=1`$) and for the regular magnetic field profiles considered here. The allowed regions are periodic in the magnetic field amplitude $`B_{max}`$. This result can be explained in the framework of the simplest model: a constant magnetic field in the convective zone . 
In this case the probability of $`\nu _{eL}\to \stackrel{~}{\nu }_{\mu R}`$ conversion is $$P=\frac{(2\mu B_{\perp })^2}{(V-\mathrm{\Delta }\mathrm{cos}2\theta )^2+(2\mu B_{\perp })^2}\mathrm{sin}^2\left(\sqrt{(V-\mathrm{\Delta }\mathrm{cos}2\theta )^2+(2\mu B_{\perp })^2}\frac{\mathrm{\Delta }r}{2}\right),$$ (2) where $`\mathrm{\Delta }=\mathrm{\Delta }m^2/2E`$, $`V=V_e+V_\mu `$, and $`\mathrm{\Delta }r`$ is the effective width of the magnetic field region in the convective zone. As one can see in Fig.1, the effective width of the triangle profile is a little larger than that of the ”smooth” profile. Hence both the periodicity of the allowed regions (Fig.2 and Fig.3) and their dependence on the magnetic field profile can be explained from Eq.(2) . Let us emphasize that the first allowed regions (at the minimal magnetic field values, $`B_{max}\simeq 50`$ kG), both for the triangle (Fig.2) and the ”smooth” (Fig.3) profiles, are very similar, because for small magnetic fields the probabilities depend only weakly on the shape of the profile. The allowed solutions depend very significantly on the neutrino mixing angle (Fig.4 and Fig.5). There are two trends. The first is the disappearance of allowed regions as the mixing angle increases. The upper limit on the mixing angle is $`\mathrm{sin}^22\theta \lesssim 0.05`$. One concludes that the RSFP solution is possible only for small mixing angles . The second is the disappearance of allowed regions, in the case of non-zero mixing, for large magnetic field amplitudes, while the small magnetic field case is still alive. The non-observation of electron antineutrinos in the capture reaction $`\overline{\nu }_ep`$ $`\to `$ $`ne^+`$ in the SuperKamiokande (SK) detector ($`\mathrm{\Phi }_{\overline{\nu }}<\mathrm{\hspace{0.33em}0.035}\mathrm{\Phi }_{\nu _B}`$, $`E_\nu >`$ $`8.3\mathrm{MeV}`$ ) gives an additional limit on the neutrino mixing angle. The antineutrino SK limit bounds the mixing ($`s_2^2=\mathrm{sin}^22\theta \lesssim 0.2`$) at 99.9% C.L. ($`\pm 3\sigma `$) (Fig.6). 
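A minimal numerical check of Eq. (2) can be written directly from the formula. The function below is a sketch under the assumption that all arguments are given in consistent natural units; the function and argument names are ours.

```python
import math

def p_conversion(mu_B, V, Delta, cos2theta, dr):
    """Spin-flavor conversion probability of Eq. (2) for a constant
    field of effective width dr (all arguments in matched units)."""
    detuning = V - Delta * cos2theta
    omega = math.sqrt(detuning ** 2 + (2.0 * mu_B) ** 2)
    if omega == 0.0:
        return 0.0
    depth = (2.0 * mu_B) ** 2 / omega ** 2   # oscillation depth (amplitude)
    return depth * math.sin(omega * dr / 2.0) ** 2
```

On resonance (V = Delta*cos2theta) the prefactor equals one and full conversion occurs whenever the sine argument reaches pi/2; the sine factor is what makes the conversion probability oscillate with the product of the field strength and the effective width, consistent with the periodicity of the allowed regions in the field amplitude discussed above.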
This limit also turns out to be sensitive to the magnetic field parameter $`\mu B_{\perp }`$. The bound is more stringent for large magnetic fields than for small ones. All the results mentioned above are in good agreement with . A new allowed region appears if one additionally assumes that the true event rate is 1.3 times the Homestake data \[5, (Homestake)\]. In this case ($`f_{Cl}=1.3`$ ) the squared mass difference is $`\mathrm{\Delta }m^2\simeq 10^{-7}\mathrm{eV}^2`$ and the mixing is $`\mathrm{sin}^22\theta \simeq 0.01`$ (Fig.7). To conclude, the RSFP solution to the SNP gives a good fit to the rates observed in all operating neutrino experiments. This fit is not worse than those of other solutions, such as the MSW or vacuum oscillation solutions . Since a sizable neutrino transition moment is not forbidden, and given the existence of efficient solar magnetic fields, the RSFP solution remains one of the possible solutions to the SNP. ## Acknowledgments The authors thank the RFBR, grants 97-02-16501 and 99-02-26124; T.I.R. and V.B.S. thank the INTAS grant 96-0659. ## Figure captions Figure 1. Magnetic field profiles. Figure 2. Ratio DATA/BP98 for all operating experiments (Homestake, SAGE, GALLEX, SuperKamiokande; see the Table above) on the plane of $`\mathrm{lg}\mathrm{\Delta }m^2`$ (eV<sup>2</sup>) versus $`B_{max}/100`$ kG, for the triangle magnetic field profile and zero mixing angle ($`\mathrm{sin}^22\theta =0`$), at 95% C.L. Figure 3. Same as Fig.2 for the ”smooth” profile and $`\mathrm{sin}^22\theta =0`$. Figure 4. Same as Fig.2 for the triangle profile and $`\mathrm{sin}^22\theta =0.05`$. Figure 5. Same as Fig.2 for the ”smooth” profile and $`\mathrm{sin}^22\theta =0.05`$. Figure 6. Same as Fig.2 for the ”smooth” profile and $`\mathrm{sin}^22\theta =0.2`$. The cross-lined zone is the region excluded by the SK antineutrino bound . Figure 7. Same as Fig.2 for the triangle profile, $`\mathrm{sin}^22\theta =0.01`$, and the increased Homestake flux $`f_{Cl}=1.3`$.
# Search and Discovery Tools for Astronomical On-line Resources and Services ## 1 Introduction How to help users find their way through the jungle of information services is a question which has been raised since the early development of the WWW (see e.g., Egret jungle (1994)), when it became clear that a big centralized system was not the efficient way to go. Obviously the World Wide Web is a very powerful medium for the development of distributed resources: on the one hand the WWW provides a common medium for all information providers – the language is flexible enough that it does not impose unbearable constraints on existing databases – on the other hand the distributed hypertextual approach opens the way to navigation and links between services (provided a minimum of coordination can be achieved). Let us note that it has already been widely demonstrated that a coordinating spirit is not out of reach in a small community such as astronomy, largely sheltered from commercial influence. Searching for a resource (either already visited, or unknown but expected), or browsing lists of existing services in order to discover new tools of interest, implies a need for query strategies that cannot generally be managed at the level of a single data provider. There is a need for road-guides pointing to the most useful resources, or to compilations or databases where information about these resources can be found. Such guides have been made in the past, and are of very practical help for the novice as well as the trained user, for example: Andernach et al. AHM94 (1994), Egret & Heck waw (1995), Egret & Albrecht amp2 (1995), Heck eppa (1997), Grothkopf librarians (1995), Andernach ALDIA (1999). 
In the present paper our aim is to address the questions related to the collection, integration and interfacing of the wealth of astronomical Internet resources, and also to describe some strategies that have to be developed for building the cooperative tools which will be essential in the research environment of the decade to come. ## 2 Compilations of astronomical Internet resources At a first level, the user looking for new sources of information can consult compilations of existing resources. Examples of such databases, or yellow-page services, are given in this section. ### 2.1 The StarPages *Star\*s Family* is the generic name for a collection of directories, dictionaries and databases which has been described in detail by Heck (1995a ), who has been building up their contents for more than twenty-five years. These very exhaustive data sets are carefully updated and validated, thus constituting a gold mine for professional and amateur astronomers, and more generally all those who are curious about space-related activities and want to locate existing resources. The Star\*s Family of products can be queried on-line from the CDS Web site (Strasbourg, France) under the generic name of StarPages<sup>1</sup><sup>1</sup>1http://cdsweb.u-strasbg.fr/starpages.html. It includes the following databases:
- a directory of astronomy, space sciences, and related organizations (Heck et al. starworlds (1994)); it includes URLs of Web sites when available, as well as e-mail addresses; unlike most of the services mentioned in the present paper, it is not restricted to describing on-line resources, but also lists directory entries for organizations which do not provide any on-line information.
- individual Web pages, essentially of astronomers and related space scientists (Heck 1995b ).
- a very comprehensive dictionary of abbreviations, acronyms, contractions, and symbols used in astronomy and space sciences (Heck 1995b ). 
All three databases are associated with a query engine based on character string searches. Filters prevent extraction of too large subsets of the database. ### 2.2 AstroWeb *AstroWeb* (Jackson et al. astroweb (1994)) is a collection of pointers to astronomically relevant information resources available on the Internet. The browse mode of AstroWeb opens a window on the efforts currently under way – in some cases, unfortunately, in a rather disorganized fashion – to make astronomically related, and hopefully pertinent, information available on-line through the World Wide Web. AstroWeb is maintained by a small consortium of individuals located at CDS, STScI, MSSSO, NRAO, and Vilspa. The master database is currently hosted at CDS<sup>2</sup><sup>2</sup>2http://cdsweb.u-strasbg.fr/astroweb.html (after having been for a long time at STScI), and all the above-mentioned sites, as well as the Institute of Astronomy, Cambridge, host a mirror copy with a customized presentation of the same data. Each URL is checked by a robot on a daily basis to verify that all referenced resources are still alive. The resource descriptions are usually submitted by the person or organization responsible for the resource, but are checked and eventually modified by one of the consortium members. The search engine is a WAIS search index. The index is constructed from the resource descriptions, and also includes all the words contained in the referenced home page. This latter feature is quite powerful for bringing new names of projects, topics, and research groups to the index very quickly. Table 1 lists the resources present in the AstroWeb database in December 1999. ## 3 Current status of on-line astronomy resources Following the classification scheme adopted by AstroWeb, we will outline in this section the current status of the main categories of on-line astronomy resources, pointing to meta-resources (i.e. organized lists of resources) when they are available. 
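The daily aliveness check described above can be sketched as a small robot that issues a lightweight request to each registered URL. This is a hypothetical reimplementation, not the actual AstroWeb robot; the function name, the use of HTTP HEAD requests, and the timeout value are our own choices.

```python
from urllib.request import Request, urlopen

def check_alive(urls, timeout=10):
    """Return {url: True/False} by issuing a HEAD request to each URL.
    Any network error, bad status (>= 400), or malformed URL counts as dead."""
    status = {}
    for url in urls:
        try:
            req = Request(url, method="HEAD")
            with urlopen(req, timeout=timeout) as resp:
                status[url] = resp.getcode() < 400
        except (ValueError, OSError):
            status[url] = False
    return status
```

A real robot of this kind would typically also retry transient failures and only flag a resource as dead after several consecutive failed days, to avoid dropping resources during short outages.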
### 3.1 Organizations Most of the active astronomical organizations (institutes, astronomy departments, etc.) now have home pages on the Internet. StarWorlds<sup>3</sup><sup>3</sup>3http://cdsweb.u-strasbg.fr/starworlds.html is currently the most comprehensive searchable directory of such resources ; it can be queried by names, keywords, or character strings. For browsing lists sorted by alphabetical order or by country, see AstroWeb (Section 2.2). National or international organizations also maintain useful lists. ### 3.2 Observational projects and missions It is now difficult to envisage an observational project without a web site. As they are more dynamic and often involve multiple organizations or institutions, the best way to find them may be to use one of the powerful commercial search engines that routinely index millions of web pages on the Internet. The indexing system of AstroWeb may also be helpful, especially when it is important to limit the investigation domain to astronomy, or to keep track of new emerging projects. ### 3.3 Data and information systems Astronomy data and information centers are becoming increasingly interconnected, with both explicit links to other relevant resources and automatic cross-links that may be invoked transparently to the end-user. Section 5 describes current efforts to provide interoperability within astrophysics (*Astrobrowse*) and across the space sciences (*ISAIA*). ### 3.4 Bibliographic resources Here also a virtual network is being organized, as exemplified by the *Urania*<sup>4</sup><sup>4</sup>4http://www.aas.org/Urania/ initiative, or by the coordinated efforts to create links between ADS and other services (Kurtz et al. ADS (2000)). Note that many of the bibliographical resources are electronic journals for which a subscription may be required. 
### 3.5 People-related resources Some databases (RGO E-mail directory<sup>5</sup><sup>5</sup>5http://star-www.rl.ac.uk/astrolist/astrosearch.html, StarHeads<sup>6</sup><sup>6</sup>6http://cdsweb.u-strasbg.fr/starheads.hml) follow the development of electronic mail addresses and personal Web pages. Directories from national or international societies (e.g., AAS, EAS, IAU) are also generally very carefully kept up to date. The database of meetings and conferences maintained by CFHT<sup>7</sup><sup>7</sup>7http://cadcwww.dao.nrc.ca/meetings/meetings.html is very complete and well organized. Astronomical societies also maintain their own lists. ### 3.6 Astronomical software The Astronomical Software and Documentation Service (ASDS<sup>8</sup><sup>8</sup>8http://asds.stsci.edu/) is a network service that allows users to locate existing astronomical software, associated technical documentation, and information about telescopes and astronomical instrumentation (Payne et al. ASDS96 (1996)). ASDS originated as a service devoted entirely to astronomical software packages and their associated on-line documentation and was originally called the Astronomical Software Directory Service. Much code is rewritten these days, not because anyone has found a fundamentally better way to solve the problem, but because developers simply don’t know who has already done it, whether the code runs on the system they have available, or where to get it if it does. That is the problem that ASDS was intended to solve. In 1998 the scope of ASDS was expanded to include astronomical observing sites and their associated telescope and instrument manuals, taken from a listing maintained at CFHT. The service was renamed at this point. ### 3.7 Educational resources Education and public outreach have always been a strong concern in astronomy, but the importance of this activity is growing at a higher rate, with the advent of the World Wide Web. 
It is difficult to give general rules for such a wide field, going far beyond the limits of astronomical institutions. Let us just say that we expect to see in the future an increasing rôle of educational institutions (planetariums, or outreach departments of big societies or institutions), for conveying general astronomy knowledge, or news about recent discoveries, to the general public. The yellow-page services mentioned above do keep lists of the most important education services. ## 4 Towards a global index of astronomical resources In the following we will focus on Internet resources that actually provide data, of any kind, as opposed to those describing or documenting an institution or a research project, without giving access to any data set or archive. One main trend is certainly the increase of interconnections between distributed on-line services, the ‘Weaving of the Astronomy Web’ (which was the title of a Conference organized in Strasbourg by Egret & Heck waw (1995)). More generally, with the development of the Internet, and of a large number of on-line services giving access to data or information, it is clear that tools giving coordinated access to distributed services are needed. This is, for instance, the concern expressed by NASA through the Astrobrowse project (Heikkila et al. astrobrowse-2 (1999)). In this section we will first describe a tool for managing a “metadata” dictionary of astronomy information services (GLU); then we will show how the existence of such a metadatabase can be used for building efficient search and discovery tools. ### 4.1 The CDS GLU The CDS (Centre de Données astronomiques de Strasbourg) has recently developed a tool for managing remote links in a context of distributed heterogeneous services (GLU<sup>9</sup><sup>9</sup>9http://simbad.u-strasbg.fr/glu/glu.htx, Générateur de Liens Uniformes, i.e. Uniform Link Generator; Fernique et al. glu (1998)). 
First developed for ensuring efficient interoperability of the several services existing at CDS (VizieR, Simbad, Aladin, bibliography, etc.; see Genova et al. CDS (2000)), this tool has also been designed for maintaining addresses (URLs) of distributed services (ADS, NED, etc.). A key element of the system is the “GLU dictionary” maintained by the data providers contributing to the system, and distributed to all sites of a given domain. This dictionary contains knowledge about the participating services (URLs, syntax and semantics of input fields, descriptions, etc.), so that it is possible to generate automatically a correct query for submission to a remote database. The service provider (data center, archive manager, or webmaster of an astronomical institute) can use GLU for coding a query, taking benefit of the easy update of the system: knowing which service to call, and which answer to expect from this service, the programmer does not have to worry about the precise address of the remote service at a given time, nor of the detailed syntax of the query (expected format of the equatorial coordinates, etc.). ### 4.2 New search and discovery tools The example of GLU demonstrates the usefulness of storing into a database the knowledge about information services (their address, purpose, domain of coverage, query syntax, etc.). In a second step, such a database can be queried when the challenge is to provide information about whom is providing what, for a given object, region of the sky, or domain of interest. Several projects are working toward providing general solutions. #### 4.2.1 Astrobrowse *Astrobrowse* is a project that began within the United States astrophysics community, primarily within NASA data centers, for developing a user agent which significantly streamlines the process of locating astronomical data on the web. Several prototype implementations are already available<sup>10</sup><sup>10</sup>10http://heasarc.gsfc.nasa.gov/ab/. 
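The idea behind such a dictionary can be sketched as follows: each record holds a service's base address and query syntax, and client code asks the dictionary to generate the full query URL, so that callers never hard-code a remote service's address or parameter format. The entries, tags, URLs, and placeholder syntax below are invented for illustration and do not reproduce the actual GLU record format.

```python
from urllib.parse import urlencode

# Toy dictionary: real GLU records also carry descriptions, result
# formats, and mirror sites; these URLs and names are placeholders.
GLU_DICT = {
    "simbad.object": {
        "base": "http://simbad.example.org/sim-id",
        "params": {"ident": "%OBJECT%"},
    },
    "ads.author": {
        "base": "http://ads.example.org/abstracts",
        "params": {"author": "%AUTHOR%"},
    },
}

def resolve(tag, **values):
    """Expand %NAME% placeholders with user values and build the query URL."""
    entry = GLU_DICT[tag]
    params = {}
    for key, template in entry["params"].items():
        for name, value in values.items():
            template = template.replace("%" + name.upper() + "%", value)
        params[key] = template
    return entry["base"] + "?" + urlencode(params)
```

When a service moves or changes its input syntax, only its dictionary record needs updating; every page and program built on the dictionary picks up the change automatically, which is the maintenance benefit described above.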
With any of these prototypes, a user can already query thousands of resources without having to deal with out-of-date URLs, or spend time figuring out how to use each resource’s unique input formats. Given a user’s selection of web-based astronomical databases and an object name or coordinates, Astrobrowse will send queries to all databases identified as containing potentially relevant data. It provides links to these resources and allows the user to browse results from each query. Astrobrowse does not recognize, however, when a query yields a null result, nor does it integrate query results into a common format to enable intercomparison. #### 4.2.2 AstroGLU Consider the following scenario: we have a data item $`I`$ (for example an author’s name, the position or name of an astronomical object, a bibliographical reference, etc.), and we would like to know more about it, but we do not know a priori which service $`S`$ to contact, and what are the different data types $`D`$ which can be requested. This scenario is typical of a scientist exploring new domains as part of a research procedure. The GLU dictionary can actually be used for helping to solve this question: the dictionary can be considered as a reference directory, storing the knowledge about all services accepting data item $`I`$ as input, for retrieving data $`D_1`$ or $`D_2`$. For example, we can easily obtain from such a dictionary the list of all services accepting an author’s name as input; information which can be accessed, in return, may be an abstract (service ADS), a preprint (LANL/astro- ph), the author’s address (RGO e-mail directory) or personal Web page (StarHeads), etc. Based on such a system, it becomes possible to create automatically a simple interface guiding the user towards any of the services described in the dictionary. This idea has been developed as a prototype tool, under the name of AstroGLU<sup>11</sup><sup>11</sup>11http://simbad.u-strasbg.fr/glu/cgi-bin/astroglu.pl (Egret et al. 
astroglu (1998)). The aim of this tool is to help users find their way among the several dozen (for the moment) possible actions or services. A number of compromises have to be made between providing the user with the full information (which would be too abundant and thus unusable) and preparing digest lists (which implies hiding some auxiliary information and making somewhat subjective selections). A resulting issue is that the system puts on the same footing services which have very different quantitative or qualitative characteristics. AstroGLU has no efficient way yet to provide the user with a hierarchy of services, as a gastronomic guide would do for restaurants. This may become a necessity in the future, as more and more services become (and remain) available. ## 5 Towards an integration of distributed data and information services To go further, one needs to be able to integrate the results of queries provided by heterogeneous services. This is the aim of the ISAIA (Integrated System for Archival Information Access) project<sup>12</sup><sup>12</sup>12http://heasarc.gsfc.nasa.gov/isaia/ (Hanisch 2000a , 2000b ). The key objective of the project is to develop an interdisciplinary data location and integration service for the space sciences. Building upon existing data services and communications protocols, this service will allow users to transparently query a large variety of distributed heterogeneous Web-based resources (catalogs, data, computational resources, bibliographic references, etc.) from a single interface. The service will collect responses from various resources and integrate them in a seamless fashion for display and manipulation by the user. Because the scope of ISAIA is intended to span the space sciences – astrophysics, planetary science, solar physics, and space physics – it is necessary to find a way to standardize the descriptions of the data attributes that are needed in order to formulate queries. 
The ISAIA approach is based on the concept of *profiles*. Profiles map generic concepts and terms onto mission- or dataset-specific attributes. Users may make general queries across multiple disciplines by using the generic terms of the highest-level profile, or make more specific queries within subdisciplines using terms from more detailed subprofiles. The profiles play three critical and interconnected roles:
1. They identify appropriate resources (catalogs, mission datasets, bibliographic databases): the *resource profile*
2. They enable generic queries to be mapped unambiguously onto resource-specific queries: the *query profile*
3. They enable query responses to be tagged by content type and integrated into a common presentation format: the *response profile*

The resource, query, and response profiles are all aspects of a common database of resource attributes. Current plans call for these profiles to be expressed using XML (eXtensible Markup Language, an emerging standard which allows embedding of logical markup tags within a document) and to be maintained as a distributed database using the CDS GLU facility. The profile concept is critical to a distributed data service, where one cannot expect data providers to modify their internal systems or services to accommodate some externally imposed standard. The profiles act as a thin, lightweight interface between the distributed service and the existing specific services. Ideally the service-specific profile implementations are maintained in a fully distributed fashion, with each data or service provider running a GLU daemon in which that site’s services are fully described and updated as necessary. Static services, or services with insufficient staff resources to maintain a local GLU implementation, can still be included, however, as long as their profiles are included elsewhere in the distributed resource database. 
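A toy version of the query-profile role can be sketched as a small XML fragment plus a translation step that maps generic query terms onto resource-specific parameter names. The element and attribute names below are invented for illustration; the actual ISAIA profile schema may well differ.

```python
import xml.etree.ElementTree as ET

# Hypothetical profile fragment for one resource.
PROFILE_XML = """
<profile resource="ExampleArchive">
  <map generic="object_name" specific="TARGET"/>
  <map generic="sky_position" specific="POS"/>
</profile>
"""

def to_specific_query(profile_xml, generic_query):
    """Translate generic query terms into the resource-specific parameter
    names declared by the profile; terms the resource cannot understand
    are simply dropped."""
    root = ET.fromstring(profile_xml)
    mapping = {m.get("generic"): m.get("specific") for m in root.iter("map")}
    return {mapping[k]: v for k, v in generic_query.items() if k in mapping}
```

In a full system a matching response profile would perform the inverse mapping, tagging each returned field with its generic content type so that answers from different resources can be merged into one presentation.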
The profile concept is not unique to space science, but would apply equally well to any distributed data service in which a common user interface is desired to locate information in related yet traditionally separate disciplines. ## 6 Information clustering and advanced user interfaces A major challenge in current information systems research is to find efficient ways for users to visualize the contents of, and understand the correlations within, large databases. The technologies being developed are likely to be applicable to astronomical information systems. For example, information retrieval by means of “semantic road maps” was first detailed in Doyle (doyle (1961)), using a powerful spatial metaphor which lends itself quite well to modern distributed computing environments such as the Web. The Kohonen self-organizing feature map (SOM; Kohonen origk (1982)) method is an effective means towards this end: a visual information retrieval user interface. ### 6.1 Interfacing datasets with a Self-organizing Map The Kohonen map is, at heart, $`k`$-means clustering with the additional constraint that the cluster centers be located on a regular grid (or some other topographic structure) and, furthermore, that their location on the grid be monotonically related to pairwise proximity (Murtagh & Hernández-Pajares mhp (1995)). A regular grid is quite convenient as an output representation space, since it maps naturally onto a visual user interface. In a web context, it can easily be made interactive and responsive. Fig. 1 shows an example of such a visual and interactive user interface map, in the context of a set of journal articles described by their keywords. Color is related to the density of document clusters located at regularly spaced nodes of the map, and some of these nodes/clusters are annotated. The map is installed on the Web as a clickable image map, with CGI programs accessing lists of documents and – through further links – in many cases, the full documents. 
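The description above (k-means with a grid constraint and a proximity-preserving layout) can be condensed into a minimal SOM training loop. This is a generic textbook-style sketch, not the code behind the maps described here; the grid size, decay schedules, and learning rate are arbitrary illustrative choices.

```python
import numpy as np

def train_som(data, grid=(8, 8), iters=2000, seed=0):
    """Minimal rectangular-grid SOM.
    Each iteration pulls the best-matching unit and its grid neighbors
    toward a randomly drawn sample; the neighborhood radius and learning
    rate both shrink over time, freezing the topographic ordering."""
    rng = np.random.default_rng(seed)
    h, w = grid
    dim = data.shape[1]
    weights = rng.random((h, w, dim))
    # (h, w, 2) array of each unit's grid coordinates
    coords = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"), axis=-1)
    for t in range(iters):
        x = data[rng.integers(len(data))]
        # best-matching unit: closest weight vector to the sample
        d = np.linalg.norm(weights - x, axis=-1)
        bmu = np.unravel_index(np.argmin(d), d.shape)
        # shrinking neighborhood radius and learning rate
        sigma = max(h, w) / 2.0 * np.exp(-t / (iters / 4.0))
        lr = 0.5 * np.exp(-t / (iters / 2.0))
        g = np.exp(-np.sum((coords - np.array(bmu)) ** 2, axis=-1) / (2.0 * sigma ** 2))
        weights += lr * g[..., None] * (x - weights)
    return weights
```

After training, each document is assigned to the node whose weight vector best matches its keyword vector; the grid of nodes then serves directly as the clickable image map described in the text.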
In the example shown, the user has queried a node and the results are seen in the right-hand panel. Such maps are maintained for (currently) 12,000 articles from the Astrophysical Journal, 7000 from Astronomy and Astrophysics, over 2000 astronomical catalogs, and other data holdings. More information on the design of this visual interface and its user assessment can be found in Poinçot et al. (poin1 (1998, 2000)). ### 6.2 Hyperlink clustering Guillaume & Murtagh (guill (2000)) have recently developed a Java-based visualization tool for hyperlink-based data, expressed in XML, consisting of astronomers, astronomical object names, article titles, and possibly other objects (images, tables, etc.). Through weighting, the various types of links can be prioritized. An iterative refinement algorithm was developed to map the nodes (objects) onto a regular grid of cells which, as for the Kohonen SOM map, are clickable and provide access to the data represented by the cluster. Fig. 2 shows an example for an astronomer (Prof. Jean Heyvaerts, Strasbourg Astronomical Observatory). These new cluster-based visual user interfaces are not computationally demanding. In general they cannot be created in real time, but they are scalable in the sense that many tens of thousands of documents or other objects can easily be handled. Document management (see e.g. Cartia<sup>13</sup><sup>13</sup>13http://www.cartia.com/) is less the motivation here than the interactive user interface itself. Further information on these visual user interfaces can be found in Guillaume (guill2 (2000)) and Poinçot (poin3 (1999)). ### 6.3 Future developments for advanced interfaces Two directions of development are planned for the near future. Firstly, visual user interfaces need to be coupled together. A comprehensive “master” map is one possibility, but this has the disadvantage of centralized control and/or configuration control. 
Another possibility is to develop a protocol such that a map can refer a user to other maps in appropriate circumstances. Such a protocol was developed a number of years ago by P. Francis at NTT Software Labs in Tokyo, in a system called Ingrid<sup>14</sup><sup>14</sup>14http://www.ingrid.org/ (see Guillaume guill2 (2000)). However this work has since been reoriented. Modern middleware tools may offer the following solution: to define an information-sharing bus which will connect distributed information maps. It will be interesting to look at the advantages of CORBA (Common Object Request Broker Architecture) or, more likely, EJB (Enterprise Java Beans), for ensuring this interoperability infrastructure (Lunney & McCaughey corba (2000)). A second development path is to note the clustering which is at the core of these visual user interfaces, and to ask whether it can be further enhanced to facilitate the construction of query and response agents. It is clear to anyone who uses Internet search engines such as AltaVista, Lycos, etc. that clustering of results is very desirable. A good example of such clustering of search results in practice is the Ask Jeeves search engine<sup>15</sup><sup>15</sup>15http://www.askjeeves.com/. The query interface, additionally, is a natural language one, another plus. ## 7 Conclusion The on-line “Virtual Observatory” is currently under construction, with on-line archives and services potentially giving access to a huge quantity of scientific information: its services will allow astronomers to select the information of interest for their research, and to access original data, observatory archives and results published in journals. The search and discovery tools currently in development will be of vital importance in making all the observational data and information available to the widest scientific community. ###### Acknowledgements. 
CDS acknowledges the support of INSU-CNRS, the Centre National d’Etudes Spatiales (CNES), and Université Louis Pasteur.
# Does hardcore interaction change absorbing type critical phenomena? ## Abstract It has been generally believed that hardcore interaction is irrelevant to absorbing type critical phenomena because the particle density is so low near an absorbing phase transition. We study the effect of hardcore interaction on the $`N`$ species branching annihilating random walks with two offspring and report that hardcore interaction drastically changes the absorbing type critical phenomena in a nontrivial way. Through a Langevin equation type approach, we predict analytically the values of the scaling exponents, $`\nu _{\perp }=2,z=2,\alpha =1/2,\beta =2`$ in one dimension for all $`N>1`$. Direct numerical simulations confirm our prediction. When the diffusion coefficients for different species are not identical, $`\nu _{\perp }`$ and $`\beta `$ vary continuously with the ratios between the coefficients. The study of nonequilibrium systems with trapped (absorbing) states has been very active in recent years . Models displaying absorbing phase transitions describe a wide range of phenomena, in particular epidemic spreading, catalytic chemical reactions, and transport in disordered media. Furthermore, the absorbing transition is one of the simplest and most natural extensions of the well-established equilibrium phase transition to non-equilibrium systems, which are still poorly understood. The concept of universality, which plays a key role in equilibrium critical phenomena, has been shown to apply also to nonequilibrium absorbing transitions. Critical behavior near an absorbing transition is determined by properties such as dimensionality and symmetry, and is not affected by details of the system. Finding a new universality class is difficult, and only a few classes of absorbing transitions are known . 
Hardcore interaction between particles or kinks has been believed to be irrelevant to absorbing type critical phenomena, because the particle density is so low near an absorbing transition that the probability of multiple occupation of a site should be too small to be significant. This conventional belief underlies the recent successes of field theoretical techniques using bosonic type operators. However, it is well known that hardcore interaction does change the asymptotic decay behavior of the particle density in many multi-species diffusion-reaction models near an annihilation fixed point. Since many absorbing transition models can be mapped onto diffusion-reaction ones, it is natural to ask whether the hardcore constraint changes the absorbing type universality classes in multi-species models. Despite recent efforts using a fermionic formulation incorporating hardcore interactions, the effect of hardcore interactions is barely understood both analytically and numerically. In this Letter, we study the $`N`$-species branching annihilating random walks with two offspring ($`N`$-BAW(2)), introduced recently by Cardy and Täuber. The model was solved exactly for all $`N>1`$, using renormalization group techniques in a bosonic type formulation which ignores hardcore interactions. We employ a Langevin equation type approach incorporating hardcore interactions and obtain analytically the values of the critical exponents associated with the absorbing transition. It turns out that hardcore interaction drastically changes the universality class in a nontrivial way, and the critical exponents vary continuously with the ratio of diffusion constants of different species. Direct numerical simulations confirm our predictions. The $`N`$-BAW(2) model is a classical stochastic system consisting of $`N`$ species of particles, $`A_i`$ $`(i=1,\mathrm{\dots },N)`$. Each particle diffuses on a $`d`$-dimensional lattice with two competing dynamic processes: pair annihilation and branching.
Pair annihilation is allowed only between identical particles $`(A_i+A_i\to \mathrm{\varnothing })`$. In the branching process, a particle $`A_i`$ creates two identical particles in its neighborhood $`(A_i\to A_i+2A_j)`$, with rate $`\sigma `$ for $`i=j`$ and rate $`\sigma ^{}/(N-1)`$ for $`i\ne j`$. For $`N=1`$, this model exhibits an absorbing transition of directed Ising type ($`Z_2`$ symmetry) at a finite branching rate. The $`N`$-species generalization imposes the permutation symmetry $`S_N`$ between species. As in the Potts type generalization of the absorbing transition models, this model for $`N>1`$ is always active except at the annihilation fixed point of zero branching rate. Critical properties near the annihilation fixed point have been explored exactly by Cardy and Täuber for $`N>1`$ in the framework of bosonic field theory. The upper critical dimension is $`d_c=2`$. Using a perturbation expansion, they showed that the branching process associated with $`\sigma `$ is irrelevant. For $`\sigma =0`$, it was found that the models for all $`N>1`$ are active for $`\sigma ^{}\ne 0`$ and their scaling behavior near the annihilation fixed point ($`\sigma ^{}=\sigma _c^{}=0`$) forms a new universality class independent of $`N`$. For $`d<d_c`$, the critical behavior is characterized by the exponents $$\nu _{}=1/d,z=2,\alpha =d/2,\beta =1.$$ (1) Here, the exponents are defined as $`\xi \sim \mathrm{\Delta }^{-\nu _{}},\tau \sim \xi ^z,`$ (2) $`\rho (t)\sim t^{-\alpha },\rho _s\sim \mathrm{\Delta }^\beta ,`$ (3) where $`\mathrm{\Delta }=\sigma ^{}-\sigma _c^{}`$, $`\xi `$ is the correlation length, $`\tau `$ the characteristic time, $`\rho (t)`$ the particle density at time $`t`$, and $`\rho _s`$ the steady-state particle density. Even in the presence of hardcore interaction, the scaling exponents $`\alpha `$ and $`z`$ should follow from the simple random walk exponents; $`z=2`$ and $`\alpha =d/z`$ for $`d<d_c`$.
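As a quick consistency check, the exponent set of Eq. (1) satisfies the scaling relation $`\beta =\nu _{}z\alpha `$ that follows from the definitions in Eqs. (2)–(3); a minimal sketch using exact rational arithmetic (the function names are ours):

```python
from fractions import Fraction

def bosonic_exponents(d):
    """Exponent set of Eq. (1) (no hardcore interaction), valid for d < d_c = 2."""
    d = Fraction(d)
    return {"nu": 1 / d, "z": Fraction(2), "alpha": d / 2, "beta": Fraction(1)}

def satisfies_scaling_relation(e):
    """beta = nu * z * alpha, i.e. beta = nu * d when alpha = d / z."""
    return e["beta"] == e["nu"] * e["z"] * e["alpha"]

# check in d = 1 and in a fractional dimension below d_c
for d in (1, Fraction(3, 2)):
    assert satisfies_scaling_relation(bosonic_exponents(d))
```

The hardcore dynamic-branching set quoted later, in Eq. (9), passes the same check in one dimension: 2 = 2 · 2 · (1/2).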
Near the annihilation fixed point, elementary scaling theory ensures $`\beta =\nu _{}z\alpha `$, which leads to $`\beta =\nu _{}d`$. We determine the value of $`\nu _{}`$ through a Langevin equation type approach. The particle density can change through branching processes and pair annihilation processes. If we start with a configuration of very low particle density, the particle density initially grows by branching processes, $`(A_i\to A_i+2A_j)`$. In this growth regime, a newly created pair of offspring and its parent are far more likely to annihilate against each other than with other particles in the system. The dynamics of such three-particle configurations, or “triplets”, governs the growth behavior of the particle density, and inter-triplet interactions can be ignored. The growth of the particle density is finally capped by pair annihilation processes of independent particles, and the system reaches a steady state. We focus on the growth regime dominated by triplet dynamics, from which $`\nu _{}`$ can be evaluated. We only consider the case of $`\sigma =0`$, where a newly created pair is always dissimilar to its parent. The survival probability $`S(t)`$ of a triplet of the same species decays much faster ($`\sim t^{-3/2}`$) than that of different species, so the branching process associated with $`\sigma `$ is irrelevant. Near $`\sigma ^{}=0`$, the time evolution of the particle density of the $`i`$-th species, $`\rho _i(t)`$, is written as $$\frac{d\rho _i}{dt}=2\frac{\sigma ^{}}{N-1}\underset{j\ne i}{\sum }\left[\rho _j(t)-\int _0^tL_{ij}(t-t^{})\rho _j(t^{})𝑑t^{}\right],$$ (4) where $`L_{ij}(t-t^{})dt`$ is the probability that an $`i`$-th species pair, created by a $`j`$-th species particle at time $`t^{}`$, annihilates in the interval between $`t`$ and $`t+dt`$. The two terms on the right-hand side represent the creation and annihilation of a triplet, respectively. The pair annihilation contribution from independent particles is $`𝒪(\rho ^2)`$, which is ignored in the growth regime.
The kernel $`L_{ij}(t)`$ is simply related to the survival probability $`S_{ij}(t)`$ of the triplet $`(A_j+2A_i)`$ by $`L_{ij}(t)=-dS_{ij}(t)/dt`$. To keep only the lowest order of $`\sigma ^{}`$ in Eq. (4), we evaluate $`S_{ij}(t)`$ at $`\sigma ^{}=0`$. When hardcore interaction is not present, a pair of $`A_i`$’s does not see its parent $`A_j`$, and so the two annihilate freely via random walks. In that case, it is well known that $`S(t)=S_{ij}(t)`$ decays asymptotically as $`S(t)\sim t^{-\delta }`$ with $`\delta =1-d/2`$ for $`d<2`$ and remains finite ($`\delta =0`$) for $`d>2`$, irrespective of their diffusion constants. However, with hardcore interaction, the pair annihilation process changes significantly due to an effective bias in the diffusive behavior, generated by the parent particle $`A_j`$. The motion of $`A_i`$ near $`A_j`$ picks up a convective component with velocity proportional to $`t^{-1/2}`$, so the convective displacement is of the same order as the diffusive displacement $`t^{1/2}`$. In this case, the competition between convection and diffusion becomes nontrivial and the scaling exponent $`\delta `$ depends continuously on the parameters of the system. We calculate the survival probability $`S(t)`$ of a triplet in one dimension. With hardcore interaction, $`S(t)`$ depends crucially on where the two offspring are created with respect to their parent. When the two offspring are separated by their parent (static branching), they have no chance to meet each other, and the survival probability never decays ($`\delta =0`$). When the two offspring are placed both to the left or both to the right of the parent particle with equal probability (dynamic branching), $`S(t)`$ decays with a nontrivial scaling exponent. Consider three random walkers on a line, labeled $`A`$, $`B`$ and $`C`$; $`A`$ is a parent particle that created two offspring, $`B`$ and $`C`$, to the right side of $`A`$.
The two offspring $`B`$ and $`C`$ are of the same species, which differs from that of the parent $`A`$. Hardcore repulsion is present between $`A`$ and $`B`$. $`B`$ and $`C`$ annihilate instantaneously upon collision. The calculation of $`S(t)`$ belongs to the class of problems known as “capture processes”. Let the coordinates of the walkers be $`x_A,x_B`$ and $`x_C`$, and their diffusion coefficients $`D_A,D_B`$ and $`D_C`$, respectively. In our case, $`D_B=D_C`$. It is useful to introduce the scaled coordinates $`y_i=x_i/\sqrt{D_i}`$, where $`i=A,B,C`$. Then we can map this triplet system to a single walker with isotropic diffusion in the three-dimensional space $`(y_A,y_B,y_C)`$. The walker survives inside the wedge bounded by two planes: a reflecting plane $`P_r`$ given by $`\sqrt{D_A}y_A=\sqrt{D_B}y_B`$ and an absorbing plane $`P_a`$ given by $`\sqrt{D_B}y_B=\sqrt{D_C}y_C`$. The survival probability $`S(t)`$ of an isotropic random walker in a $`d`$-dimensional cone with absorbing boundary is known. In particular, $`S(t)`$ in a wedge with an opening angle $`\mathrm{\Theta }`$ asymptotically decays as $`t^{-\pi /(2\mathrm{\Theta })}`$. In our case, one of the boundary planes, $`P_r`$, is not absorbing but reflecting. The probability of finding the walker at $`P_r`$ is nonzero and there is no net flux across this plane. Using this fact, one can easily show that our system is equivalent to a system in a wedge bounded by two absorbing planes with twice the opening angle. We find that the survival probability of the triplet decays with the exponent $$\delta =\frac{\pi }{4\mathrm{\Theta }}=\left[\frac{4}{\pi }\mathrm{cos}^{-1}\left(\frac{1}{\sqrt{2(1+r)}}\right)\right]^{-1},$$ (5) where $`\mathrm{\Theta }`$ is the opening angle of the wedge and $`r=D_A/D_B`$. The exponent $`\delta `$ monotonically decreases from 1 to 1/2 as the diffusivity ratio $`r`$ varies from 0 to $`\mathrm{\infty }`$. At $`r=1`$ (the same diffusivity for all walkers), $`\delta =3/4`$.
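Equation (5) is straightforward to evaluate numerically; the sketch below (function name is ours) checks the limits and the $`r=1`$ value quoted in the text:

```python
import math

def delta(r):
    """Triplet survival exponent of Eq. (5); r = D_A / D_B is the ratio of
    the parent's diffusion coefficient to that of the two offspring."""
    theta = math.acos(1.0 / math.sqrt(2.0 * (1.0 + r)))  # wedge opening angle
    return math.pi / (4.0 * theta)

assert abs(delta(1.0) - 0.75) < 1e-12   # equal diffusivities: delta = 3/4
assert abs(delta(1e-9) - 1.0) < 1e-4    # r -> 0 limit: delta -> 1
assert abs(delta(1e12) - 0.5) < 1e-3    # r -> infinity limit: delta -> 1/2
assert delta(0.1) > delta(1.0) > delta(10.0)   # monotonic decrease with r
```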
First, we consider the case in which the diffusion coefficients are identical for all species. The $`N`$ coupled Langevin equations, Eq. (4), can be simplified in terms of the total particle density, $`\rho (t)=\sum _i\rho _i(t)`$, as $$\frac{d\rho }{dt}=2\sigma ^{}\rho (t)-2\sigma ^{}\int _0^tL(t-t^{})\rho (t^{})𝑑t^{},$$ (6) where $`L(t)=L_{ij}(t)`$ is independent of $`i`$ and $`j`$. Taking the Laplace transform, we find $$s\stackrel{~}{\rho }(s)-\rho (0)=2\sigma ^{}(1-\stackrel{~}{L}(s))\stackrel{~}{\rho }(s)=2\sigma ^{}s\stackrel{~}{S}(s)\stackrel{~}{\rho }(s),$$ (7) where $`\stackrel{~}{\rho }(s)=\int _0^{\mathrm{\infty }}\rho (t)e^{-st}𝑑t`$, and similarly $`\stackrel{~}{L}(s)`$ and $`\stackrel{~}{S}(s)`$ are the Laplace transforms of $`L(t)`$ and $`S(t)`$, respectively. With $`S(t)\sim t^{-\delta }`$, one can show that $`\stackrel{~}{S}(s)\sim s^{\delta -1}`$ for $`\delta >0`$. The function $`\stackrel{~}{\rho }(s)`$ has a pole on the positive real axis at $`s_o\sim (\sigma ^{})^{1/(1-\delta )}`$. When the initial density $`\rho (0)`$ is small, the density $`\rho (t)`$ increases exponentially as $`\mathrm{exp}(s_ot)`$. Using the definition of the characteristic time $`\tau `$ (Eq. (3)), we find $$\tau =(\sigma ^{})^{-\nu _{}z}=1/s_o=(\sigma ^{})^{-1/(1-\delta )}.$$ (8) With $`\delta =3/4`$ for the dynamic branching model, we arrive at $`\nu _{}z=1/(1-\delta )=4`$. Therefore we predict that the critical exponents for the dynamic branching $`N`$-BAW(2) model with hardcore interaction in one dimension are $$\nu _{}=2,z=2,\alpha =1/2,\beta =2,$$ (9) which should be valid for all $`N>1`$. For the static branching $`N`$-BAW(2) model, $`\delta =0`$ and $`\nu _{}=\beta =1/2`$. Without hardcore interactions, the branching method does not matter and $`\delta =1/2`$, which leads to Eq. (1). We check the above predictions for the $`N`$-BAW(2) model by direct numerical simulations for $`N=2,3`$ and 4. We start with a pair of particles.
With probability $`p`$, a randomly chosen particle ($`A_i`$) creates two offspring ($`2A_j`$) on two nearest-neighboring sites (dynamic/static branching). The branching probability $`p`$ is distributed as $`\gamma p`$ for $`i=j`$ and $`(1-\gamma )p/(N-1)`$ for $`i\ne j`$. Otherwise, the particle hops to a nearest-neighboring site. Two particles of the same species at a site annihilate instantaneously. In models with hardcore interactions, branching/hopping attempts are rejected when two particles of different species try to occupy the same site. The critical probability is $`p_c=0`$ for all models considered here. We measure the total particle density $`\rho _s`$ in the steady state, averaged over $`5\times 10^2`$–$`5\times 10^4`$ independent samples, for several values of $`\mathrm{\Delta }=p-p_c`$ ($`0.001`$–$`0.05`$) and lattice sizes $`L`$ ($`2^5`$–$`2^{11}`$). We set $`\gamma =1/2`$. Using the finite-size scaling form $$\rho _s(\mathrm{\Delta },L)=L^{-\beta /\nu _{}}F(\mathrm{\Delta }L^{1/\nu _{}}),$$ (10) the value of $`\nu _{}`$ is determined by “collapsing” the $`\rho _s`$ data with $`\beta /\nu _{}=1`$ (Fig. 1). The numerical data show that $`\nu _{}`$ does not depend on $`N`$ in any of the models, as expected. We find $`\nu _{}=1.00(5)`$ for models without hardcore interactions, which agrees with the result of Cardy and Täuber. With hardcore interactions, we find $`\nu _{}=1.9(1)`$ for the dynamic branching models and $`\nu _{}=0.50(3)`$ for the static branching models, which confirm our predictions within statistical errors. When the diffusion coefficients are not identical for different species, $`S_{ij}(t)`$ decays with an exponent $`\delta `$ that depends on the diffusivity ratio $`r=D_j/D_i`$. Instead of a single Langevin equation, we are then forced to deal with the $`N`$ coupled Langevin equations. The solution of the system of equations is difficult in general, but the equations become quite simple for $`N=2`$.
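A minimal, seeded Monte Carlo sketch of the simulation just described (our own toy implementation; the parameter values, the same-species initial pair, and the transactional rejection rule are illustrative assumptions):

```python
import random

def simulate_baw(L=64, N=2, p=0.05, gamma=0.5, steps=20000, seed=1):
    """Toy 1D N-BAW(2) dynamics with hardcore constraint and dynamic branching.
    Sites hold at most one particle; returns the final particle density."""
    rng = random.Random(seed)
    occ = {0: 0, 1: 0}  # site -> species; assumed initial same-species pair
    for _ in range(steps):
        if not occ:
            break
        x = rng.choice(sorted(occ))
        sp = occ[x]
        if rng.random() < p:                 # branching A_i -> A_i + 2 A_j
            if N == 1 or rng.random() < gamma:
                new = sp                     # i = j with probability gamma
            else:
                new = rng.choice([s for s in range(N) if s != sp])
            d = rng.choice((-1, 1))          # dynamic branching: both offspring
            t1, t2 = (x + d) % L, (x + 2 * d) % L   # land on the same side
            if any(t in occ and occ[t] != new for t in (t1, t2)):
                continue                     # hardcore rejection of the attempt
            for t in (t1, t2):
                if t in occ:
                    del occ[t]               # same-species pair annihilates
                else:
                    occ[t] = new
        else:                                # diffusion
            t = (x + rng.choice((-1, 1))) % L
            if t not in occ:
                del occ[x]
                occ[t] = sp
            elif occ[t] == sp:               # A_i + A_i -> 0
                del occ[t]
                del occ[x]
            # different species at t: hardcore rejection, nothing happens
    return len(occ) / L

rho_final = simulate_baw()  # density after the run, for these toy parameters
```

Particle number changes only in pairs (the parent survives a branching event), so the parity of the particle count is conserved, as in the model.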
The Laplace-transformed coupled equations for $`N=2`$ become $`s\stackrel{~}{\rho }_1(s)-\rho _1(0)`$ $`=`$ $`2\sigma ^{}s\stackrel{~}{S}_{12}(s)\stackrel{~}{\rho }_2(s),`$ (11) $`s\stackrel{~}{\rho }_2(s)-\rho _2(0)`$ $`=`$ $`2\sigma ^{}s\stackrel{~}{S}_{21}(s)\stackrel{~}{\rho }_1(s).`$ (12) We take $`\rho _2(0)=0`$ as the initial condition and solve the equations for $`\stackrel{~}{\rho }_1(s)`$: $$s\stackrel{~}{\rho }_1(s)-\rho _1(0)=4(\sigma ^{})^2s\stackrel{~}{S}_{12}\stackrel{~}{S}_{21}\stackrel{~}{\rho }_1(s).$$ (13) Note that $`S_{12}(t)`$ decays with exponent $`\delta (r)`$, with $`r=D_2/D_1`$, and $`S_{21}(t)`$ with $`\delta (1/r)`$; see Eq. (5). From the position of the pole of $`\stackrel{~}{\rho }_1(s)`$, we arrive at $$\nu _{}(r)=\frac{1}{2-\delta (r)-\delta (1/r)}.$$ (14) The exponent $`\delta (r)`$ ranges from $`1/2`$ to 1, but $`\delta (r)+\delta (1/r)`$ varies only slightly with $`r`$: it ranges from $`3/2`$ to about 1.5255, so $`\nu _{}(r)`$ varies only within a few percent. Due to rather large statistical errors (about 10%), we could not confirm the $`r`$ dependence of $`\nu _{}`$ numerically. However, it is clear from our derivation that $`\nu _{}`$ should vary continuously with the diffusivity ratio. Although we were not able to obtain a similar expression for $`\nu _{}`$ for general $`N`$, we expect that $`\nu _{}`$ varies continuously, but only slightly, with $`r`$ for all $`N>1`$. In summary, we showed that hardcore interaction in the $`N`$-BAW(2) model changes its universality class in a nontrivial way. Details of the branching method (static/dynamic branching) and the diffusivity ratios between different species drastically change the absorbing type critical phenomena. We find that, for all $`N>1`$, the dynamic branching models with hardcore interaction form a new universality class, different from that of the models without hardcore interaction. In particular, the scaling exponents vary continuously with the diffusivity ratios.
The static branching models with hardcore interaction form yet another new universality class. Numerical simulations confirm most of our predictions, but large scale simulations are necessary to measure the diffusivity ratio dependence of the scaling exponents. The present analytic method to study the effect of hardcore interaction can be applied to a wide range of multi-species diffusion-reaction models near the annihilation fixed point. Our analysis implies that many multi-species models with hardcore interaction may exhibit a nontrivial absorbing phase transition with continuously varying exponents. We thank the NEST group at Inha University for many useful discussions. This work was supported by the Korea Research Foundation for the 21st Century. J.L. acknowledges support from Creative Research Initiatives of the Korean Ministry of Science and Technology.
# Adsorption of benzene on Si(100) from first principles \[ ## Abstract Adsorption of benzene on the Si(100) surface is studied from first principles. We find that the most stable configuration is a tetra-$`\sigma `$-bonded structure characterized by one C-C double bond and four C-Si bonds. A similar structure, obtained by rotating the benzene molecule by 90°, lies slightly higher in energy. However, rather narrow wells on the potential energy surface characterize these adsorption configurations. A benzene molecule impinging on the Si surface is most likely to be adsorbed in one of three different di-$`\sigma `$-bonded, metastable structures, characterized by two C-Si bonds, and eventually converts into the lowest-energy configurations. These results are consistent with recent experiments. \] Adsorption of benzene on the Si(100) surface is a topic of great current interest, both because it represents a prototype system for the study of molecular adsorption (and desorption) of hydrocarbons on semiconductor surfaces, and because it is considered a promising precursor for technologically relevant processes, such as the growth of Si-C and CVD diamond thin films on Si surfaces. However, despite many experimental and theoretical investigations, the adsorption mechanism is not yet well understood. In particular, at present there is no consensus about the lowest-energy structure of benzene on Si(100): results obtained from surface science experimental techniques, semiempirical methods, and first-principles approaches provide a number of different predictions. Benzene is known from experiments to adsorb exclusively on top of the Si(100) surface dimer rows, thus avoiding energetically disfavored structures with unsaturated, isolated Si dangling bonds. Even so, since the size of the benzene molecule is comparable to the spacing between two adjacent dimers on the same row, many different bonding configurations are possible.
Among the structures proposed in the literature as the lowest-energy configurations, the 1,4-cyclohexadiene-like (“butterfly”) configuration, in which the benzene molecule is di-$`\sigma `$ bonded to the two dangling bonds of the same Si surface dimer, is supported by thermal desorption and angle-resolved photoelectron spectroscopy, STM, vibrational IR spectroscopy and near-edge X-ray absorption fine structure techniques, and first-principles cluster calculations. Instead, other STM experiments suggest the 1,3-cyclohexadiene-like (“tilted”) structure. Finally, semiempirical calculations, STM and IR spectroscopy experiments favor a tetra-$`\sigma `$-bonded configuration where benzene is bonded to two adjacent surface dimers. Another open issue concerns the occurrence and nature of metastable adsorption states. In fact, the results of STM and IR spectroscopy support the hypothesis that benzene is initially chemisorbed in a metastable, “butterfly”-like state, and then slowly converts (within minutes) to a lower-energy final state, which is a “tilted” structure according to Ref. , or a tetra-$`\sigma `$-bonded one according to Ref. . Moreover, recent IR experiments suggest that, at room temperature, benzene is predominantly adsorbed in the “butterfly” configuration, while the existence of a less stable structure, consistent with a tetra-$`\sigma `$-bonded configuration, is proposed. Previous theoretical calculations on benzene on Si(100) have been restricted to semiempirical or ab initio cluster-model methods. In the latter approach the Si surface is modeled with a cluster of Si atoms, thus considerably reducing the cost of a first-principles calculation. However, the effects of such an approximation can be relevant. It is well known, for instance, that the characteristic buckling of the Si dimers on the clean Si(100) surface can only be obtained by using models with a slab geometry and periodic boundary conditions. 
As shown in the following, the details of the surface reconstruction (i.e. buckling and periodicity of the surface dimers) are crucial ingredients in determining the adsorption structure of benzene. Moreover, the convergence of different properties, such as the binding energies of adsorbed molecules, is rather slow as a function of the cluster size. In order to overcome these limitations and to clarify the open issues discussed above, we have performed a full ab initio study of benzene adsorption on Si(100). Total-energy calculations and molecular dynamics (MD) simulations have been carried out within the Car-Parrinello approach in the framework of density functional theory, in the local spin density approximation. Tests have also been performed using gradient corrections in the BLYP implementation. The calculations have been carried out considering the $`\mathrm{\Gamma }`$-point only of the Brillouin zone, and using norm-conserving pseudopotentials, with $`s`$ and $`p`$ nonlocality for C and Si. Wavefunctions were expanded in plane waves with an energy cutoff of 35 Ry. The Si(100) surface is modeled with a periodically repeated slab of 5 Si layers and a vacuum region of 7 Å (tests have also been carried out with a vacuum region of 10 Å, without any significant change in the results). A monolayer of hydrogen atoms is used to saturate the dangling bonds on the lower surface of the slab. We have used a supercell with $`p(\sqrt{8}\times \sqrt{8})R45^{\circ }`$ surface periodicity, corresponding to 8 Si atoms/layer; however, in order to check finite-size effects, the geometry optimizations have been repeated using a larger $`p(4\times 4)`$ supercell with 16 atoms/layer. Structural relaxations of the ionic coordinates are performed using the method of direct inversion in the iterative subspace (DIIS). During ionic relaxations and MD simulations the lowest Si layer and the saturation hydrogens are kept fixed.
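As a schematic illustration of iterative structural relaxation (a toy one-dimensional analogue, not the plane-wave DFT/DIIS machinery used in the paper), one can relax a single bond length on a Lennard-Jones pair potential:

```python
def lj_force(r, eps=1.0, sigma=1.0):
    """Minus the derivative of V(r) = 4*eps*((sigma/r)**12 - (sigma/r)**6)."""
    return 24.0 * eps * (2.0 * (sigma / r) ** 13 - (sigma / r) ** 7) / sigma

def relax(r0=1.5, step=1e-3, tol=1e-10, max_iter=100000):
    """Steepest-descent relaxation: move the bond length along the force
    until the residual force is below tol."""
    r = r0
    for _ in range(max_iter):
        f = lj_force(r)
        if abs(f) < tol:
            break
        r += step * f
    return r

r_min = relax()
assert abs(r_min - 2.0 ** (1.0 / 6.0)) < 1e-6  # known LJ minimum at 2^(1/6) sigma
```

Real geometry optimizers (DIIS, conjugate gradients) converge far faster, but the stopping criterion — a vanishing residual force on every degree of freedom — is the same idea.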
We verified that, by starting with the unreconstructed, clean Si(100) surface, the structural optimization procedure correctly produces asymmetric surface dimers, with a dimer bond length and buckling angle in good agreement with previous, highly converged ab initio calculations. We have considered the different surface periodicities of the dimer reconstruction which may occur on the Si(100) surface, i.e. $`(2\times 1)`$, $`p(2\times 2)`$ and $`c(4\times 2)`$. A single benzene molecule is added on top of the slab and the system is then fully relaxed towards the minimum energy configuration. To better explore the complex potential energy surface of this system, in most cases the optimization procedure was repeated using a simulated-annealing strategy and also starting from different initial configurations. We find that the lowest-energy configurations are given by two tetra-$`\sigma `$-bonded structures, characterized by the presence of one C-C double bond, which we refer to as “tight bridge” (TiB) and “twisted bridge” (TwB) (see Fig. 1). TwB is similar to TiB but with the benzene molecule rotated by 90° with respect to the Si surface, and is slightly higher in energy (see Table I). This result is in agreement with the findings of Ref. and turns out to be independent of the size of the supercell used in the simulation and of the different reconstructions of the Si(100) surface. It remains true also when using BLYP gradient corrections, as can be seen in Table I. We also find, at somewhat higher energies, three different, metastable “butterfly” structures, characterized by two C-Si bonds, which are shown in Fig. 1. One of them (“standard butterfly”, SB) is the well-known configuration with the benzene molecule adsorbed on top of a single Si dimer. The others (“tilted-bridge butterfly”, TB, and “diagonal-bridge butterfly”, DB), which bridge two adjacent surface dimers, have not been reported in any previous study.
The Si(100) reconstruction crucially affects the occurrence and energetic ordering of the three “butterfly” structures. In fact, in the $`(2\times 1)`$ reconstruction (with parallel buckled dimers), SB and TB are the most stable (almost isoenergetic) “butterfly” configurations, while DB is considerably less favored; in contrast, with reconstructions involving alternating buckled Si dimers, such as the $`p(2\times 2)`$ and the $`c(4\times 2)`$, SB and DB are the lowest-energy configurations, while the binding energy of TB is significantly smaller. This clearly happens because the two C-Si bonds of the TB structure are more easily created when the benzene molecule is adsorbed onto Si(100) $`(2\times 1)`$, while the formation of the DB structure is favored by the presence of alternating buckled Si dimers. The other configurations proposed in the literature, that is the “tilted” (T) and the “pedestal” (P) ones, lie higher in energy for all the Si(100) reconstructions considered (see Table I). In particular, the P structure is only found to be stable in the $`(2\times 1)`$ reconstruction; however, even in this case, a MD simulation performed at 300 K shows that the structure converts very rapidly (in less than 1 ps) into a DB structure. Although the P structure has four C-Si bonds, it is energetically disfavored because it involves the presence of two radical centers. Inspection of the C-C distances for the various stable structures reveals the existence of two kinds of bonds: a long (“single”) one and a short (“double”) one, of length 1.49–1.59 and 1.34–1.36 Å, respectively. These values should be compared with the C-C bond length in the isolated benzene molecule, 1.39 Å. One double bond characterizes the TiB and TwB structures, while two double bonds are found in the “butterfly” structures. In contrast, in the P configuration all the C-C bonds are single ones.
These conclusions are confirmed by a more quantitative analysis of the electronic orbitals, which we performed by using both the notion of Mayer bond order and the method of localized Wannier functions. In the three “butterfly” configurations (SB, TB, DB), the bond angles (119°–122°) at the C atoms not involved in the Si-C bonds are close to that (120°) of the isolated benzene molecule, while those (103°–113°) at the 4-fold coordinated C atoms are closer to the ideal tetrahedron angle (109.5°). This clearly indicates $`sp^2`$ and $`sp^3`$ hybridization, respectively. After benzene chemisorption, although the Si-Si dimers are preserved, the Si dimer buckling angle is almost reduced to zero, with the exception of the TB and DB structures. In the lowest-energy TiB structure the angle between the double bond and the Si(100) surface is 45°, in good agreement with the experimental estimate, 43°. The structural parameters do not change appreciably when a larger $`p(4\times 4)`$ surface supercell is used. Use of BLYP gradient corrections makes bond lengths about 1–2% longer, while binding energies are significantly reduced (see Table I). Moreover, in the $`(2\times 1)`$ reconstruction, the P configuration is no longer stable and, among the three “butterfly” structures, BLYP favors SB, while the binding energy of DB is even smaller than that of the T structure. Note, however, that TiB and TwB remain the lowest-energy configurations. According to the results of some experiments and theoretical calculations, adsorbed benzene predominantly forms a “butterfly” (SB) configuration, while the TiB one (and perhaps TwB) appears in detectable amounts only on relatively long timescales, thus indicating the existence of an energy barrier between the two structures.
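The geometric criteria above can be phrased as a crude classifier; the thresholds below are simply our reading of the bond lengths and angles quoted in the text, not part of the authors' analysis:

```python
def hybridization(angle_deg):
    """Assign sp2/sp3 character to a carbon center by proximity of its bond
    angle to the ideal planar (120 deg) and tetrahedral (109.47 deg) values."""
    ideal = {"sp2": 120.0, "sp3": 109.47}
    return min(ideal, key=lambda h: abs(ideal[h] - angle_deg))

def cc_bond_type(length_ang):
    """Single vs double C-C bond from the length ranges quoted in the text
    (single: 1.49-1.59 A, double: 1.34-1.36 A; aromatic benzene: 1.39 A)."""
    return "double" if length_ang < 1.45 else "single"

assert all(hybridization(a) == "sp2" for a in (119, 122))  # C not bonded to Si
assert all(hybridization(a) == "sp3" for a in (103, 113))  # 4-fold coordinated C
assert cc_bond_type(1.35) == "double" and cc_bond_type(1.54) == "single"
```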
In order to identify possible metastable states occurring in the early stages of adsorption, we have tried to find, in the simplest way, the most probable structure of a benzene molecule impinging on the Si(100) surface. If we place the molecule at some distance from the surface we observe that, regardless of the initial position and orientation of the molecule, after full relaxation the final structure is almost invariably one of the three “butterfly” configurations. This happens because the dimers are tilted, favoring the formation of the di-bonded “butterfly” structures rather than the tetra-bonded ones. The specific “butterfly” configuration which is actually formed depends critically on the type of reconstruction of the Si surface, as already discussed above. On the contrary, there are only very few initial positions which lead to the low-energy TiB and TwB configurations. We have tried to characterize the energy barrier which must be overcome to relax from the “butterfly” configurations to the lower-energy TiB and TwB structures. To this aim we started with the benzene molecule in the SB configuration. Let C<sub>d</sub> be one of the C atoms involved in the Si-C bonds. Many calculations have been performed in which the ionic coordinates of both the molecule and the substrate were optimized under the constraint that the $`x,y`$ coordinates of the two C<sub>d</sub> atoms are held fixed. A particular pathway, connecting the SB to the DB structure, is shown in Fig. 2, where the reaction coordinate is defined as the distance between the C<sub>d</sub>-C<sub>d</sub> axis of the initial configuration and that of the displaced structure. The pronounced energy minimum corresponds to the occurrence, during the transformation, of the lowest-energy TiB structure. Note however that this is characterized by a very narrow well. From Fig.
2 a lower bound of 0.5 eV can be inferred for the energy barrier, to be compared with the experimental estimates of 0.9–1.0 eV. A similar calculation for the TB$`\to `$TwB transition gives a smaller value of about 0.4 eV. As a consequence the conversion from TB to TwB is expected to be somewhat faster than that from SB to TiB. A large fraction of experiments on benzene on Si(100) is based on STM techniques. However, different interpretations of similar STM images have led to contradictory conclusions about the adsorption sites and geometry of the adsorbed molecules. For each of the structures reported in Table I we have produced “theoretical” STM images to be compared with the experimental ones, following the recipe of Ref. Charge density iso-surfaces have been obtained by including electron states in an energy range down to 2 eV below the highest occupied state, which corresponds to typical STM bias voltages. The simulated images are obtained by viewing these iso-surfaces at typical tip-surface distances (a few Å above the benzene molecule). Our computed STM image for the TiB structure exhibits a density maximum above one of the two Si dimers involved in bonding with benzene, while the TwB configuration produces a similar image rotated by 90°. These images resemble those obtained by Lopinski et al. The theoretical STM image for the SB structure is characterized by a bright two-lobe protrusion centered symmetrically above a single Si dimer unit and oriented orthogonal to the dimer axis, in qualitative agreement with the experimental findings. Instead, the STM images of the TB and DB structures are quite different from that of SB. In fact the TB image is qualitatively similar to that of TwB (and the experimental STM resolution could be insufficient to distinguish between the two configurations), while DB gives rise to a much fainter feature, diagonally bridging two Si dimers, which is probably hardly visible in experiments.
These observations could explain why the DB and TB structures have not been detected in STM experiments. The T configuration produces an asymmetric (with respect to the Si dimers) image, appearing as a bright region (placed between two Si dimers) adjacent to a dark region. Finally, the P structure is characterized by two spots corresponding to the dangling bonds of benzene; this result supports the conjecture which rules out the presence of a significant fraction of benzene molecules adsorbed in the P structure, because of the absence of such spots in the STM images. We have also computed the vibrational spectra for a representative “butterfly” structure, SB, and for the lowest-energy TiB configuration, by performing Car-Parrinello MD simulations at room temperature. Our results for TiB show slightly better quantitative agreement with the experimental results than those for the SB structure, although the main features of the spectra are similar in the two structures. Let C$`^{\prime \prime }`$ (C$`^{\prime }`$) denote a C atom which shares a double (single) bond with another C atom. The C$`^{\prime }`$-H and C$`^{\prime \prime }`$-H frequencies (2880 and 3010 cm<sup>-1</sup>) are in agreement with the $`sp^3`$ and $`sp^2`$ stretching modes observed in recent IR spectroscopy experiments (2945 and 3044 cm<sup>-1</sup>) and semiempirical cluster calculations. Note that the C-H vibrations of the isolated benzene molecule are characterized by a single frequency of 3040 cm<sup>-1</sup>. For the C$`^{\prime }`$-C$`^{\prime \prime }`$ and C$`^{\prime \prime }`$-C$`^{\prime \prime }`$ frequencies we find 1230 and 1520 cm<sup>-1</sup>, respectively, to be compared with the EELS experimental values, 1170 and 1625 cm<sup>-1</sup>. The C-H bending modes are found at 900 and 1100 cm<sup>-1</sup>, whereas experimentally they are at 910 and 1075 cm<sup>-1</sup>.
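Vibrational frequencies are obtained from MD trajectories by Fourier analysis of the atomic velocities (equivalently, of the velocity autocorrelation function); a naive, self-contained O(n²) sketch on a synthetic signal:

```python
import cmath, math

def power_spectrum(v, dt):
    """Discrete Fourier power spectrum of a velocity trace v sampled every dt;
    peaks mark the vibrational frequencies (naive O(n^2) DFT, real input)."""
    n = len(v)
    spec = [abs(sum(v[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))) ** 2 for k in range(n // 2)]
    freqs = [k / (n * dt) for k in range(n // 2)]
    return freqs, spec

# synthetic check: a pure cosine placed exactly on bin 8 of a 128-sample trace
n, dt = 128, 0.5
v = [math.cos(2.0 * math.pi * 8 * t / n) for t in range(n)]
freqs, spec = power_spectrum(v, dt)
assert spec.index(max(spec)) == 8  # peak recovered at the injected frequency
```

In practice one would use an FFT and average the spectra of all atomic velocity components over a long, well-thermalized trajectory.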
In conclusion, using state-of-the-art ab initio simulations, we have shown that a tetra-$`\sigma `$ bonded structure is the most stable configuration for benzene adsorbed on Si(100). However, this structure and a very similar one, lying only slightly higher in energy, correspond to very narrow wells in the potential energy surface for a benzene molecule impinging on the surface. Therefore it is more likely for the molecule to be adsorbed into one of three different, metastable “butterfly” configurations, and eventually convert into the lowest-energy structures. Our study provides detailed information about structural, electronic, and vibrational properties of the system, and allows a critical comparison with results obtained from different experimental techniques and previous theoretical calculations. We thank M. Boero and A. Vittadini for useful discussions. This work is partially supported by INFM through the Parallel Computing Initiative.
# Steady-state mode I cracks in a viscoelastic triangular lattice ## I Introduction In the previous paper (Pechenik, et al., 2000), we showed how one could extend the methodology originally devised by Slepyan (1981) and obtain a closed form solution for mode III cracks propagating steadily in a square lattice in the presence of dissipation in the form of a Kelvin viscosity. This approach allowed us to determine the effect of dissipation on the crack velocity as a function of driving. Similarly, we found the critical velocity (and the associated critical time) at which other bonds in the lattice have displacements larger than the assumed breaking criterion, signaling an instability in the steadily-propagating crack and the onset of more complex spatio-temporal behavior. In this second paper, we extend this analysis to the more interesting (and more experimentally relevant) case of mode I cracks. We now work on a triangular lattice and assume that there are central force ideally brittle viscoelastic springs connecting particles at the lattice sites. Again, we use ideas borrowed from Kulamekhtova (1984) to solve this problem exactly on an infinite lattice. We also solve the same system numerically on a lattice of finite transverse extent so as to provide an independent check on some of the results. Perhaps the most interesting finding reported here concerns the aforementioned onset of additional bond-breaking at large enough velocity. At small dissipation, diagonal bonds that are offset from the assumed crack line in the vertical direction are the most “dangerous”. The critical velocity at which one of these bonds exceeds critical displacement is more or less independent of the dissipation at low values thereof and eventually rises sharply as the system becomes increasingly damped. This is similar to the behavior obtained for mode III. 
At moderate to large damping, however, horizontal bonds become more relevant and eventually give rise to a critical velocity which decreases with increasing damping. At the end, we will comment on the possible relevance of our results for trying to make sense of recent experiments (Fineberg, et al., 1991, 1992; Sharon, et al., 1995, 1996) and simulations on instabilities during dynamic fracture. ## II Mode I on a triangular lattice Our model consists of central force springs between mass points located on a triangular lattice. We introduce unit vectors in the lattice directions as follows: $`\widehat{d}_1`$ points in the direction of the $`x`$-axis and each of the subsequent unit vectors $`\widehat{d}_i`$ with $`2\le i\le 6`$ rotated by an additional $`\pi /3`$ in the counter-clockwise direction. This means that $`\widehat{d}_{i+3}=-\widehat{d}_i`$. We will always assume that the steady-state crack breaks bonds between the rows with $`y=0`$ and $`y=\sqrt{3}/2`$, as it moves in the direction of the $`x`$-axis. As already discussed in our previous paper on mode III cracks, each bond is taken to be linear until a critical displacement at which point it completely breaks. Also, dissipation is incorporated as a Kelvin viscosity term. 
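The lattice geometry is easy to check numerically. The following short Python sketch (the tuple array `d` is our own notation, not the paper's) constructs the six unit vectors and verifies the identity $`\widehat{d}_{i+3}=-\widehat{d}_i`$:

```python
import math

# Unit vectors d_1..d_6 of the triangular lattice: d[0] along +x, each
# subsequent vector rotated by a further pi/3 counter-clockwise.
d = [(math.cos(i * math.pi / 3), math.sin(i * math.pi / 3)) for i in range(6)]

# Opposite directions: d_{i+3} = -d_i (indices taken mod 6).
for i in range(6):
    dx, dy = d[i]
    ox, oy = d[(i + 3) % 6]
    assert abs(ox + dx) < 1e-12 and abs(oy + dy) < 1e-12
```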
Given these assumptions, the equation describing the motion of the masses in the lattice is $$(1+\eta \frac{\partial }{\partial t})\underset{i=1}{\overset{6}{\sum }}Q_i(t,\stackrel{}{x})\widehat{d}_i-\frac{\partial ^2\stackrel{}{u}(t,\stackrel{}{x})}{\partial t^2}=\stackrel{}{\sigma }(t,\stackrel{}{x}).$$ (1) Here, $`\stackrel{}{x}=(x,y)`$ is a lattice vector in the plane and $`Q_i(t,\stackrel{}{x})`$ is the elongation of bond $`i`$ emanating from the specific lattice site, in the approximation that this elongation is much smaller than the unstretched bond length, $$Q_i(t,\stackrel{}{x})=\left(\stackrel{}{u}(t,\stackrel{}{x}+\widehat{d}_i)-\stackrel{}{u}(t,\stackrel{}{x})\right)\cdot \widehat{d}_i.$$ (2) Finally, the driving term $`\stackrel{}{\sigma }(t,\stackrel{}{x})`$ consists of forces which compensate for the forces from broken bonds (which are included on the left-hand side) as well as any external forces acting on the crack boundaries. Next we suppose that the crack is moving with a constant speed $`v`$. This allows us to change variables to $`\tau =x-vt`$, $`\stackrel{}{z}=(\tau ,y)`$, which gives $$(1-\eta v\frac{\partial }{\partial \tau })\underset{i=1}{\overset{6}{\sum }}Q_i(\stackrel{}{z})\widehat{d}_i-v^2\frac{\partial ^2\stackrel{}{u}(\stackrel{}{z})}{\partial \tau ^2}=\stackrel{}{\sigma }(\stackrel{}{z}).$$ (3) Because of the form of $`Q_i(\stackrel{}{z})`$, we have the obvious symmetries $$Q_2(\stackrel{}{z})=Q_5(\stackrel{}{z}+\widehat{d}_2),Q_3(\stackrel{}{z})=Q_6(\stackrel{}{z}+\widehat{d}_3).$$ (4) We will impose one additional symmetry condition on the $`Q_i`$. As the crack propagates, it alternately breaks bonds in the $`\widehat{d}_2`$ and $`\widehat{d}_3`$ directions. Let us define our coordinates such that at $`\tau =0`$ and $`t=0`$, (so that $`x=0`$), the crack breaks a bond in the direction of $`\widehat{d}_2`$. This means that for $`\tau >0`$ this bond ($`Q_2(\stackrel{}{z}_0)`$ with $`\stackrel{}{z}_0=(\tau ,0)`$) is unbroken and for $`\tau <0`$ it is always broken. 
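The elongation variable of Eq. (2) is just the relative displacement of the bond's two end sites projected onto the bond direction. A minimal illustration (the helper `elongation` and the uniform-strain field are hypothetical, for illustration only):

```python
def elongation(u, x, d_i):
    """Bond elongation Q_i at site x along unit vector d_i (cf. Eq. 2):
    project the relative displacement of the two end sites onto d_i."""
    ux1, uy1 = u((x[0] + d_i[0], x[1] + d_i[1]))
    ux0, uy0 = u(x)
    return (ux1 - ux0) * d_i[0] + (uy1 - uy0) * d_i[1]

# A uniform strain field u = (eps*x, 0) stretches a horizontal bond
# (direction (1, 0)) by exactly eps.
eps = 1e-3
u = lambda x: (eps * x[0], 0.0)
assert abs(elongation(u, (0.0, 0.0), (1.0, 0.0)) - eps) < 1e-15
```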
If we shift one lattice spacing to the left, the $`\widehat{d}_2`$ bond at $`x=-1`$ will break at $`t=-1/v`$ (here also $`\tau =0`$ in $`Q_2(\stackrel{}{z}_0)`$). It is clear that the bond in the $`\widehat{d}_3`$ direction at $`x=0`$ breaks at a time-point between these two events; we will assume that this time is exactly at the midpoint, $`t=-1/(2v)`$ or equivalently $`\tau =1/2`$ in $`Q_3(\stackrel{}{z}_0)`$. This means that for $`\tau >1/2`$ this bond is unbroken and for $`\tau <1/2`$ it is always broken. We will assume an even more strict symmetry condition for these bonds, namely $$Q_2(\stackrel{}{z}_0)=Q_3(x-v(t-\frac{1}{2v}),y=0)=Q_3(\stackrel{}{z}_0+\frac{1}{2}\widehat{d}_1)$$ (5) which will need to be checked from our solution later. Let us denote $`Q_2(\stackrel{}{z}_0)\equiv Q(\tau )`$. Then, we can write down an explicit expression for the forces $`\stackrel{}{\sigma }(z)`$ on the right hand side of Eq. 3. For the row with $`y=0`$ $`\stackrel{}{\sigma }(\stackrel{}{z}_0)`$ $`=`$ $`\theta (-\tau )\left(\stackrel{}{\sigma }_e(\stackrel{}{z}_0)-(1-\eta v{\displaystyle \frac{\partial }{\partial \tau }})Q_2(\stackrel{}{z}_0)\widehat{d}_2\right)`$ $`+\theta (-\tau +{\displaystyle \frac{1}{2}})\left(\stackrel{}{\sigma }_e^{\prime }(\stackrel{}{z}_0)-(1-\eta v{\displaystyle \frac{\partial }{\partial \tau }})Q_3(\stackrel{}{z}_0)\widehat{d}_3\right)`$ $`=`$ $`\theta (-\tau )\left(P_0(\tau )-(1-\eta v{\displaystyle \frac{\partial }{\partial \tau }})Q(\tau )\right)\widehat{d}_2`$ $`+\theta (-\tau +{\displaystyle \frac{1}{2}})\left(P_0(\tau -{\displaystyle \frac{1}{2}})-(1-\eta v{\displaystyle \frac{\partial }{\partial \tau }})Q(\tau -{\displaystyle \frac{1}{2}})\right)\widehat{d}_3.`$ (7) Here $`\stackrel{}{\sigma }_e`$ and $`\stackrel{}{\sigma }_e^{\prime }`$ are the assumed external forces. In the second form, these have been taken to match the vectorial nature and also the symmetry of the bond-breaking terms. The (scalar) $`P_0`$ will be specified later. 
Note that the second theta-function in (II) reflects the already mentioned fact that the bond from point $`\stackrel{}{z}_0`$ in the $`\widehat{d}_3`$ direction breaks earlier (by a time interval $`1/2v`$) than the bond from the same point in the $`\widehat{d}_2`$ direction. Let us write down the corresponding force for the row $`y=\sqrt{3}/2`$. For the point $`\stackrel{}{x}=\widehat{d}_2`$, the bond in the $`\widehat{d}_5`$ direction is physically the same as the $`\widehat{d}_2`$ bond at $`\stackrel{}{x}=(0,0)`$ and therefore breaks at $`\tau =0`$ in $`Q_5(\stackrel{}{z}_0+\widehat{d}_2)=Q_2(\stackrel{}{z}_0)`$. Now, the $`\widehat{d}_6`$ bond at $`\stackrel{}{x}=\widehat{d_2}`$ breaks later, at $`\tau =-1/2`$ in $`Q_6(\stackrel{}{z}_0+\widehat{d}_2)`$. Thus we have $`\stackrel{}{\sigma }(\stackrel{}{z}_0+\widehat{d}_2)`$ $`=`$ $`\theta (-\tau )\left(\stackrel{}{\sigma }_e(\stackrel{}{z}_0+\widehat{d}_2)-(1-\eta v{\displaystyle \frac{\partial }{\partial \tau }})Q_5(\stackrel{}{z}_0+\widehat{d}_2)\widehat{d}_5\right)`$ $`+\theta (-\tau -{\displaystyle \frac{1}{2}})\left(\stackrel{}{\sigma }_e^{\prime }(\stackrel{}{z}_0+\widehat{d}_2)-(1-\eta v{\displaystyle \frac{\partial }{\partial \tau }})Q_6(\stackrel{}{z}_0+\widehat{d}_2)\widehat{d}_6\right)`$ $`=`$ $`\theta (-\tau )\left(P_0(\tau )-(1-\eta v{\displaystyle \frac{\partial }{\partial \tau }})Q(\tau )\right)\widehat{d}_5+`$ $`+\theta (-\tau -{\displaystyle \frac{1}{2}})\left(P_0(\tau +{\displaystyle \frac{1}{2}})-(1-\eta v{\displaystyle \frac{\partial }{\partial \tau }})Q(\tau +{\displaystyle \frac{1}{2}})\right)\widehat{d}_6.`$ (9) where we used the symmetry in Eq. 5 to write $`Q_6(\stackrel{}{z}_0+\widehat{d}_2)=Q_3(\stackrel{}{z}_0+\widehat{d}_2-\widehat{d}_3)=Q_2(\stackrel{}{z}_0+\widehat{d}_2-\widehat{d}_3-\widehat{d}_1/2)=Q(\tau +1/2)`$. 
Finally, we can rewrite all these forces in a more compact form, $`\stackrel{}{\sigma }(\stackrel{}{z}_0)`$ $`=`$ $`N(\tau )\widehat{d}_5+N(\tau -{\displaystyle \frac{1}{2}})\widehat{d}_6,`$ (10) $`\stackrel{}{\sigma }(\stackrel{}{z}_0+\widehat{d}_2)`$ $`=`$ $`N(\tau )\widehat{d}_2+N(\tau +{\displaystyle \frac{1}{2}})\widehat{d}_3,`$ (11) where $$N(\tau )=\theta (-\tau )\left(-P_0(\tau )+(1-\eta v\frac{\partial }{\partial \tau })Q(\tau )\right).$$ (12) Next, we proceed by doing a Fourier transformation of Eq. (3), over the continuous variable $`\tau `$ and the discrete variable $`y=n\sqrt{3}/2`$, with integer $`n`$. In detail, $`u^F(\tau ,s)`$ $`=`$ $`{\displaystyle \underset{n=-\mathrm{\infty }}{\overset{+\mathrm{\infty }}{\sum }}}u(\tau ,{\displaystyle \frac{\sqrt{3}}{2}}n)e^{i\frac{\sqrt{3}}{2}sn},`$ (13a) $`u(\tau ,y)`$ $`=`$ $`{\displaystyle \frac{\sqrt{3}}{4\pi }}{\displaystyle \int _{-\frac{2\pi }{\sqrt{3}}}^{\frac{2\pi }{\sqrt{3}}}}u^F(\tau ,s)e^{-isy}ds;`$ (13b) $`u^{FF}(q,s)`$ $`=`$ $`{\displaystyle \int _{-\mathrm{\infty }}^{+\mathrm{\infty }}}u^F(\tau ,s)e^{iq\tau }d\tau ,`$ (14a) $`u^F(\tau ,s)`$ $`=`$ $`{\displaystyle \frac{1}{2\pi }}{\displaystyle \int _{-\mathrm{\infty }}^{+\mathrm{\infty }}}u^{FF}(q,s)e^{-iq\tau }dq.`$ (14b) We thereby obtain $$-v^2q^2\stackrel{}{u}^{FF}-(1+i\eta vq)\underset{i}{\sum }Q_i^{FF}\widehat{d}_i=\stackrel{}{\sigma }^{FF},$$ (15) with $$Q_i^{FF}=(e^{i\stackrel{}{r}\cdot \widehat{d}_i}-1)\stackrel{}{u}^{FF}\cdot \widehat{d}_i$$ (16) and $`\stackrel{}{r}=(q,s)`$. To find $`\stackrel{}{\sigma }^{FF}`$ we first transform Eqs. (10, 11) over $`\tau `$ $`\stackrel{}{\sigma }^F(q,n=0)`$ $`=`$ $`N(q)\widehat{d}_5+e^{i\frac{q}{2}}N(q)\widehat{d}_6,`$ (17a) $`\stackrel{}{\sigma }^F(q,n=1)e^{-i\frac{q}{2}}`$ $`=`$ $`N(q)\widehat{d}_2+e^{-i\frac{q}{2}}N(q)\widehat{d}_3,`$ (17b) with $$N(q)=-P_0^{-}+(1+i\eta vq)Q^{-}-\eta vQ(0).$$ (18) The superscript “$`-`$” refers to the part of the Fourier transform which is analytic in the lower half plane of the variable $`q`$; our notation follows that in our previous mode III paper. 
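A quick numerical sanity check of the transform pair (13a), (13b): integrating $`e^{i(\sqrt{3}/2)s(n-m)}`$ over the zone $`|s|\le 2\pi /\sqrt{3}`$ with the prefactor $`\sqrt{3}/4\pi `$ should give the Kronecker delta $`\delta _{nm}`$. A midpoint-rule sketch (the point count and tolerances are arbitrary choices of this sketch):

```python
import cmath, math

# (sqrt(3)/(4*pi)) * Integral_{-2pi/sqrt(3)}^{2pi/sqrt(3)}
#     exp(i*(sqrt(3)/2)*s*(n-m)) ds  should equal  delta_{nm}.
L = 2 * math.pi / math.sqrt(3)
M = 20000  # number of midpoint quadrature nodes

def overlap(n, m):
    h = 2 * L / M
    total = 0j
    for k in range(M):
        s = -L + (k + 0.5) * h
        total += cmath.exp(1j * (math.sqrt(3) / 2) * s * (n - m)) * h
    return (math.sqrt(3) / (4 * math.pi)) * total

assert abs(overlap(2, 2) - 1) < 1e-9   # n = m: unity
assert abs(overlap(2, 0)) < 1e-9       # n != m: zero
```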
Finally, using (13a) we get $$\stackrel{}{\sigma }^{FF}=N(q)\widehat{d}_5+e^{i\frac{q}{2}}N(q)\widehat{d}_6+\left(e^{i\frac{q}{2}}N(q)\widehat{d}_2+N(q)\widehat{d}_3\right)e^{is\frac{\sqrt{3}}{2}}.$$ (19) This then completes the derivation of the basic equation that needs to be solved to determine the steady-state crack field. ## III Wiener-Hopf Solution We wish to use the Wiener-Hopf technique to solve the equation derived above for the elongation field $`Q`$. To start, we note that Eq. (15) can be written in matrix form $$𝐌\stackrel{}{u}^{FF}=\stackrel{}{\sigma }^{FF}$$ (20) with the 2x2 matrix $$𝐌=\left(\begin{array}{cccc}𝐀_\mathrm{𝟏}& 𝐀_\mathrm{𝟑}\hfill & & \\ 𝐀_\mathrm{𝟑}& 𝐀_\mathrm{𝟐}\hfill & & \end{array}\right)$$ (21) where $`A_1`$ $`=`$ $`-v^2q^2+(1+i\eta vq)\left(4\mathrm{sin}^2{\displaystyle \frac{\stackrel{}{r}\cdot \widehat{d}_1}{2}}+\mathrm{sin}^2{\displaystyle \frac{\stackrel{}{r}\cdot \widehat{d}_2}{2}}+\mathrm{sin}^2{\displaystyle \frac{\stackrel{}{r}\cdot \widehat{d}_3}{2}}\right),`$ (22a) $`A_2`$ $`=`$ $`-v^2q^2+3(1+i\eta vq)\left(\mathrm{sin}^2{\displaystyle \frac{\stackrel{}{r}\cdot \widehat{d}_2}{2}}+\mathrm{sin}^2{\displaystyle \frac{\stackrel{}{r}\cdot \widehat{d}_3}{2}}\right),`$ (22b) $`A_3`$ $`=`$ $`\sqrt{3}(1+i\eta vq)\left(\mathrm{sin}^2{\displaystyle \frac{\stackrel{}{r}\cdot \widehat{d}_2}{2}}-\mathrm{sin}^2{\displaystyle \frac{\stackrel{}{r}\cdot \widehat{d}_3}{2}}\right),`$ (22c) and $$\stackrel{}{\sigma }^{FF}=-\frac{N(q)}{2}\left(\begin{array}{cccc}(1-e^{i\frac{q}{2}})(1+e^{i\frac{\sqrt{3}}{2}s})& & & \\ \sqrt{3}(1+e^{i\frac{q}{2}})(1-e^{i\frac{\sqrt{3}}{2}s})& & & \end{array}\right).$$ (23) The solution of (20) is therefore $$\stackrel{}{u}^{FF}=\frac{1}{det𝐌}\left(\begin{array}{cccc}det𝐌_1& & & \\ det𝐌_2& & & \end{array}\right)$$ (24) where $$𝐌_1=\left(\begin{array}{cccc}\sigma _x^{FF}& \sigma _y^{FF}\hfill & & \\ A_3& A_2\hfill & & \end{array}\right),𝐌_2=\left(\begin{array}{cccc}A_1& A_3\hfill & & \\ \sigma _x^{FF}& \sigma _y^{FF}\hfill & & \end{array}\right).$$ (25) Now, we 
want to extract from the solution (24) an equation for the variable we are interested in, namely $`Q^F(q)`$. From the definition of $`Q`$ we have $$Q^F(q)=Q_2^F(q,n=0)=\frac{\sqrt{3}}{4\pi }\int _{-\frac{2\pi }{\sqrt{3}}}^{\frac{2\pi }{\sqrt{3}}}(e^{i\stackrel{}{r}\cdot \widehat{d}_2}-1)\stackrel{}{u}^{FF}\cdot \widehat{d}_2ds.$$ (26) From the basic solution above, we can evaluate $`\stackrel{}{u}^{FF}\cdot \widehat{d}_2`$. $$\stackrel{}{u}^{FF}\cdot \widehat{d}_2=\frac{\left|\begin{array}{cccc}-d_{2x}A_3+d_{2y}A_1& -d_{2x}A_2+d_{2y}A_3\hfill & & \\ \sigma _x^{FF}& \sigma _y^{FF}\hfill & & \end{array}\right|}{|𝐌|}.$$ (27) Before proceeding to evaluate the integral, it is worthwhile to pause and check the basic symmetry conditions. We find that $$Q_3^F(q,n=0)=\frac{\sqrt{3}}{4\pi }\int _{-\frac{2\pi }{\sqrt{3}}}^{\frac{2\pi }{\sqrt{3}}}(e^{i\stackrel{}{r}\cdot \widehat{d}_3}-1)\stackrel{}{u}^{FF}\cdot \widehat{d}_3ds,$$ (28) where the new dot product is given by $$\stackrel{}{u}^{FF}\cdot \widehat{d}_3=\frac{\left|\begin{array}{cccc}d_{2x}A_3+d_{2y}A_1& d_{2x}A_2+d_{2y}A_3\hfill & & \\ \sigma _x^{FF}& \sigma _y^{FF}\hfill & & \end{array}\right|}{|𝐌|}.$$ (29) Now, we can change variables of integration in (28) from $`s`$ to $`-s`$. Under this transformation $`A_1|_{s\to -s}=A_1`$, $`A_2|_{s\to -s}=A_2`$, $`A_3|_{s\to -s}=-A_3`$, $`\sigma _x^{FF}|_{s\to -s}=e^{-i\sqrt{3}s/2}\sigma _x^{FF}`$, $`\sigma _y^{FF}|_{s\to -s}=-e^{-i\sqrt{3}s/2}\sigma _y^{FF}`$, which gives that $`det𝐌|_{s\to -s}=det𝐌`$ and $`\stackrel{}{u}^{FF}\cdot \widehat{d}_3|_{s\to -s}=-e^{-i\sqrt{3}s/2}\stackrel{}{u}^{FF}\cdot \widehat{d}_2`$. Therefore (28) changes into $$Q_3^F(q,n=0)=\frac{\sqrt{3}}{4\pi }\int _{-\frac{2\pi }{\sqrt{3}}}^{\frac{2\pi }{\sqrt{3}}}(e^{-iqd_{2x}+isd_{2y}}-1)(-e^{-id_{2y}s})\stackrel{}{u}^{FF}\cdot \widehat{d}_2ds=e^{i\frac{q}{2}}Q_2^F(q,n=0).$$ (30) This is exactly the symmetry condition (5); hence, our solution is consistent with the assumed symmetry. 
We also note here that a similar derivation gives $$Q_2(\tau ,y)=Q_3(\tau +\frac{1}{2},-y)=Q_6(\tau ,-y+\frac{\sqrt{3}}{2}).$$ (31) More generally, we can see what these symmetries mean for the displacement field $`\stackrel{}{u}(z)`$. Using $`det𝐌_1|_{s\to -s}=e^{-i\sqrt{3}s/2}det𝐌_1`$, $`det𝐌_2|_{s\to -s}=-e^{-i\sqrt{3}s/2}det𝐌_2`$, we obtain $`u_x^{FF}|_{s\to -s}=e^{-i\sqrt{3}s/2}u_x^{FF}`$ and $`u_y^{FF}|_{s\to -s}=-e^{-i\sqrt{3}s/2}u_y^{FF}`$. These give $`u_x(\tau ,y)=u_x(\tau ,-y+\sqrt{3}/2)`$ and $`u_y(\tau ,y)=-u_y(\tau ,-y+\sqrt{3}/2)`$. We now proceed to calculate the integral in (26). The explicit form of the integrand in (26) is $$\frac{N(q)}{|𝐌|}\left|\begin{array}{cccc}3\left(\frac{v^2q^2}{2}-(1+i\eta vq)(2\mathrm{sin}^2\frac{q}{2}+\mathrm{sin}^2\frac{\stackrel{}{r}\cdot \widehat{d}_3}{2})\right)& \frac{v^2q^2}{2}+3(1+i\eta vq)\mathrm{sin}^2\frac{\stackrel{}{r}\cdot \widehat{d}_3}{2}\hfill & & \\ (\mathrm{cos}\stackrel{}{r}\cdot \widehat{d}_2-1)-(\mathrm{cos}\frac{\sqrt{3}}{2}s-\mathrm{cos}\frac{q}{2})& (\mathrm{cos}\stackrel{}{r}\cdot \widehat{d}_2-1)+(\mathrm{cos}\frac{\sqrt{3}}{2}s-\mathrm{cos}\frac{q}{2})\hfill & & \end{array}\right|.$$ (32) As $`|𝐌|`$ is an even function of $`s`$, only the even part of the other determinant contributes to the integral. Then, by the change of variable to $`w=e^{i\sqrt{3}s/2}`$ we can rewrite (26) as an integral over the unit circle in the complex $`w`$ plane. 
After some tedious algebra, we find that we can write $$det𝐌=3\xi ^2+\alpha \xi +\beta $$ (33) where $`\xi `$ $`=`$ $`(1+i\eta vq){\displaystyle \frac{1}{2}}\left({\displaystyle \frac{1}{w}}+w\right),`$ (34a) $`\alpha `$ $`=`$ $`-2\mathrm{cos}{\displaystyle \frac{q}{2}}\left(-2v^2q^2+3(1+i\eta vq)(1+2\mathrm{sin}^2{\displaystyle \frac{q}{2}})\right),`$ (34b) $`\beta `$ $`=`$ $`3(1+i\eta vq)^2(1+3\mathrm{sin}^2{\displaystyle \frac{q}{2}})-4(1+i\eta vq)v^2q^2(1+\mathrm{sin}^2{\displaystyle \frac{q}{2}})+v^4q^4.`$ (34c) Similarly, the even part of the determinant in the numerator becomes $$\frac{F(\xi )}{1+i\eta vq},$$ (35) with $`F(\xi )`$ $`=`$ $`3\left[(\xi -(1+iq\eta v)\mathrm{cos}{\displaystyle \frac{q}{2}})^2+2(1+iq\eta v)\mathrm{sin}^2{\displaystyle \frac{q}{2}}(1+\mathrm{cos}{\displaystyle \frac{q}{2}})(1+iq\eta v-\xi )\right]`$ (36) $`-v^2q^2\left[(1+iq\eta v-\xi )(1+\mathrm{cos}{\displaystyle \frac{q}{2}})-\xi \mathrm{cos}{\displaystyle \frac{q}{2}}+1+i\eta vq\right].`$ We write down here also the odd part of this determinant for future reference, $$G(w)=i(\frac{1}{w}-w)\mathrm{sin}\frac{q}{2}(v^2q^2-3(1+iq\eta v)\mathrm{sin}^2\frac{q}{2}).$$ (37) Inserting (33) and (35) into (32) we can see that (26) becomes $$Q^F(q)=\frac{N(q)}{2\pi i(1+i\eta vq)}\underset{|w|=1}{\oint }\frac{F(\xi )}{3\xi ^2+\alpha \xi +\beta }\frac{dw}{w}$$ (38) with integration around the unit circle in the counter-clockwise direction. The integrand in Eq. 
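The reduction of $`det𝐌`$ to the quadratic $`3\xi ^2+\alpha \xi +\beta `$ can be spot-checked numerically. The sketch below states its assumed sign conventions explicitly in the comments ($`A_1=-v^2q^2+\mathrm{}`$, etc.); under those conventions the identity holds to machine precision at an arbitrary test point:

```python
import cmath, math

# Assumed forms (a sketch, not the definitive conventions):
#   A1 = -v^2 q^2 + E*(4 sin^2(r.d1/2) + sin^2(r.d2/2) + sin^2(r.d3/2))
#   A2 = -v^2 q^2 + 3E*(sin^2(r.d2/2) + sin^2(r.d3/2))
#   A3 = sqrt(3)*E*(sin^2(r.d2/2) - sin^2(r.d3/2)),  E = 1 + i*eta*v*q
v, eta, q, s = 0.3, 0.5, 1.7, 0.9
E = 1 + 1j * eta * v * q
rd1 = q                               # r.d_1
rd2 = q / 2 + math.sqrt(3) / 2 * s    # r.d_2
rd3 = -q / 2 + math.sqrt(3) / 2 * s   # r.d_3
sin2 = lambda x: cmath.sin(x / 2) ** 2

A1 = -v**2 * q**2 + E * (4 * sin2(rd1) + sin2(rd2) + sin2(rd3))
A2 = -v**2 * q**2 + 3 * E * (sin2(rd2) + sin2(rd3))
A3 = math.sqrt(3) * E * (sin2(rd2) - sin2(rd3))
detM = A1 * A2 - A3**2

xi = E * cmath.cos(math.sqrt(3) * s / 2)   # xi = E*(w + 1/w)/2 on |w| = 1
s1 = sin2(q)                               # sin^2(q/2)
alpha = -2 * cmath.cos(q / 2) * (-2 * v**2 * q**2 + 3 * E * (1 + 2 * s1))
beta = 3 * E**2 * (1 + 3 * s1) - 4 * E * v**2 * q**2 * (1 + s1) + v**4 * q**4
assert abs(detM - (3 * xi**2 + alpha * xi + beta)) < 1e-10
```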
(38) has three poles inside the unit circle; i.e., $`w=0`$ and one $`w`$ for each of the two roots of the expression (33), $$\xi _{1,2}=C\pm \sqrt{C^2-D},$$ (39) with $`C`$ $`=`$ $`\mathrm{cos}{\displaystyle \frac{q}{2}}\left(-{\displaystyle \frac{2}{3}}v^2q^2+(1+i\eta vq)(1+2\mathrm{sin}^2{\displaystyle \frac{q}{2}})\right),`$ (40a) $`D`$ $`=`$ $`(1+i\eta vq)^2(1+3\mathrm{sin}^2{\displaystyle \frac{q}{2}})-{\displaystyle \frac{4}{3}}(1+i\eta vq)v^2q^2(1+\mathrm{sin}^2{\displaystyle \frac{q}{2}})+{\displaystyle \frac{1}{3}}v^4q^4.`$ (40b) Then calculating the residues at these poles we get $$Q^F(q)=\frac{N(q)}{1+i\eta vq}(1-S)$$ (41) where $$S(q)=\frac{F(\xi _1)\sqrt{\xi _2^2-(1+i\eta vq)^2}-F(\xi _2)\sqrt{\xi _1^2-(1+i\eta vq)^2}}{3(\xi _1-\xi _2)\sqrt{\xi _1^2-(1+i\eta vq)^2}\sqrt{\xi _2^2-(1+i\eta vq)^2}}.$$ (42) Note that we are supposed to take those branches of the square roots in (42) which ensure that $$|w_{1,2}|<1$$ (43) with $`w_{1,2}`$ defined by $$w_{1,2}=\frac{\xi _{1,2}-\sqrt{\xi _{1,2}^2-(1+i\eta vq)^2}}{1+i\eta vq}$$ (44) If we now substitute $`N(q)`$ in Eq. (41) into Eq. (18), we get exactly the same equation as was obtained for the mode III case in the first paper (Pechenik, et al., 2000) $$Q^++SQ^{-}+\frac{\eta v}{1+i\eta vq}(1-S)Q(0)=-\frac{(1-S)P_0^{-}}{1+i\eta vq}$$ (45) with $`Q`$ having the same meaning as the corresponding elongation field in (Pechenik, et al., 2000), namely the elongation of bonds between rows where the crack propagates. Therefore, we do not need to repeat any of the discussion from our previous paper regarding the choice of $`P_0^{-}`$ and the factorization into pieces analytic in the upper and lower half planes. Instead, we can just write down the final answer. 
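The branch condition (43) is conveniently implemented by computing both roots of the quadratic $`(1+i\eta vq)w^2-2\xi w+(1+i\eta vq)=0`$ (equivalent to (34a); its two roots multiply to one) and keeping the one inside the unit circle. A minimal sketch:

```python
import cmath

def w_branch(xi, E):
    """Root w of E*w**2 - 2*xi*w + E = 0 (i.e. xi = E*(w + 1/w)/2) lying
    inside the unit circle; since the roots multiply to 1, for generic
    complex xi exactly one of them does."""
    r = cmath.sqrt(xi**2 - E**2)
    w1 = (xi - r) / E
    w2 = (xi + r) / E
    return w1 if abs(w1) < 1 else w2

xi, E = 2.0 + 0.3j, 1 + 0.2j
w = w_branch(xi, E)
assert abs(w) < 1
# The selected root satisfies the defining quadratic.
assert abs(E * w**2 - 2 * xi * w + E) < 1e-12
```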
For the relationship between the velocity and the driving displacement, we have $$\frac{\mathrm{\Delta }}{\mathrm{\Delta }_G}=\sqrt{\frac{1+\varphi \eta v}{A\varphi }}\mathrm{exp}\frac{1}{4\pi i}\int _{-\mathrm{\infty }}^{+\mathrm{\infty }}\frac{\mathrm{ln}K(\chi )}{\chi (1+i\eta v\chi )}d\chi $$ (46) where for this mode I case $$A=\frac{1}{\sqrt{2}v^2}\left(\frac{\sqrt{\frac{3}{8}-v^2}}{2}-\frac{(\frac{3}{4}-v^2)^2}{\sqrt{3}\sqrt{\frac{9}{8}-v^2}}\right)$$ (47) and $$K(q)=\frac{S^2(q)(q^2+\varphi ^2)}{A^2q^2\varphi ^2}.$$ (48) This expression for $`A`$ contains the transverse speed $`c_t=\sqrt{3/8}`$ and longitudinal speed $`c_l=\sqrt{9/8}`$. The zero of $`A`$ gives the limiting speed of crack propagation $`c_R=\frac{\sqrt{3-\sqrt{3}}}{2}=0.563`$, which is as expected equal to the Rayleigh wave speed. In our previous paper, we discussed the details of how to evaluate this equation so as to obtain numerical results for the crack velocity response curve. These methods can be applied to this case as well and hence we do not repeat any of the details here. The results of these calculations are presented in Figs. 1 and 2. In fact, these findings are quite similar to the analogous mode III results. Later, we will show that the upper limiting value of $`\mathrm{\Delta }/\mathrm{\Delta }_G`$ for an arrested (i.e. non-moving) crack is around $`1.94`$. As was the case for mode III, the $`\mathrm{\Delta }`$-$`v`$ curves in Figures 1, 2 approach this value as $`v`$ goes to 0 at any value of the damping $`\eta `$. Slepyan’s original calculations, done for the dissipationless limit $`\eta =0`$, show the same asymptotic value. ## IV consistency of the steady-state solution In this section we calculate the bond displacements in the vicinity of the crack. This is crucial, as we need to check whether all bonds assumed to be unbroken in the derivation in fact have elongations less than unity, the value at which bonds break. 
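The quoted limiting speed can be recovered numerically by bisecting for the zero of $`A(v)`$. The sketch below assumes the sign pattern $`A\propto \sqrt{3/8-v^2}/2-(3/4-v^2)^2/(\sqrt{3}\sqrt{9/8-v^2})`$, under which the zero is exactly $`\sqrt{3-\sqrt{3}}/2\approx 0.563`$:

```python
import math

def A(v):
    # Kernel prefactor of Eq. (47); its zero sets the limiting crack
    # speed (the Rayleigh wave speed). Sign pattern is an assumption
    # of this sketch.
    return (1 / (math.sqrt(2) * v**2)) * (
        math.sqrt(3 / 8 - v**2) / 2
        - (3 / 4 - v**2) ** 2 / (math.sqrt(3) * math.sqrt(9 / 8 - v**2))
    )

# Bisect for the zero of A between v = 0.5 and v = 0.58.
lo, hi = 0.5, 0.58
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if A(lo) * A(mid) <= 0:
        hi = mid
    else:
        lo = mid
cR = 0.5 * (lo + hi)
assert abs(cR - math.sqrt(3 - math.sqrt(3)) / 2) < 1e-9
```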
Let us start with bonds along the crack path, where the elongation is just determined by transforming $`Q^F(q)`$ back to physical space. This part follows exactly the analogous mode III calculation, and we easily obtain for $`Q^\pm `$ $`Q^+`$ $`=`$ $`{\displaystyle \frac{Q(0)}{(iq+0)(1+iq\eta v)}}{\displaystyle \frac{S^+}{S^+|_{q=\frac{i}{\eta v}}}}-{\displaystyle \frac{\eta vQ(0)}{1+iq\eta v}},`$ (49a) $`Q^{-}`$ $`=`$ $`{\displaystyle \frac{Q(0)}{S^+|_{q=\frac{i}{\eta v}}S^{-}}}{\displaystyle \frac{1}{(iq+0)(1+iq\eta v)}}+{\displaystyle \frac{\eta vQ(0)}{1+iq\eta v}},`$ (49b) and for $`Q(\tau )`$ $`Q(\tau )`$ $`=`$ $`Q(0){\displaystyle \int _{-\mathrm{\infty }}^{+\mathrm{\infty }}}{\displaystyle \frac{\sqrt{A\varphi }e^{iq\tau }}{\sqrt{(iq-0)(iq-\varphi )}(1+iq\eta v)}}{\displaystyle \frac{(K^+)^{\frac{1}{2}}}{S^+|_{q=\frac{i}{\eta v}}}}{\displaystyle \frac{dq}{2\pi }},\text{ for }\tau >0`$ (50a) $`Q(\tau )`$ $`=`$ $`Q(0){\displaystyle \int _{-\mathrm{\infty }}^{+\mathrm{\infty }}}{\displaystyle \frac{e^{iq\tau }}{\sqrt{A\varphi }(iq+0)(1+iq\eta v)}}\left({\displaystyle \frac{iq+\varphi }{iq+0}}\right)^{\frac{1}{2}}{\displaystyle \frac{(K^{-})^{\frac{1}{2}}}{S^+|_{q=\frac{i}{\eta v}}}}{\displaystyle \frac{dq}{2\pi }}`$ (50b) $`+Q(0)e^{\frac{\tau }{\eta v}},\text{ for }\tau <0.`$ The numerical evaluation of these expressions follows the same methodology as described for mode III. Typical results are shown in Fig. 3. We note that the elongation along the crack line is rather similar to the same object in the mode III case. In that case, however, this function also determined the horizontal bond elongations by simple subtraction. This is no longer true for mode I, since the vectorial nature of the problem requires that we take a different component of the displacement (different from the one which goes into $`Q`$) to evaluate this elongation. 
In detail, we now have for $`Q_h(q)\equiv Q_1^F(q,n=0)={\displaystyle \frac{\sqrt{3}}{4\pi }}{\displaystyle \int _{-\frac{2\pi }{\sqrt{3}}}^{\frac{2\pi }{\sqrt{3}}}}(e^{i\stackrel{}{r}\cdot \widehat{d}_1}-1)\stackrel{}{u}^{FF}\cdot \widehat{d}_1ds=`$ (51) $`(e^{iq}-1){\displaystyle \frac{\sqrt{3}}{4\pi }}{\displaystyle \int _{-\frac{2\pi }{\sqrt{3}}}^{\frac{2\pi }{\sqrt{3}}}}{\displaystyle \frac{det𝐌_1}{det𝐌}}ds`$ (52) where these matrices were defined in the last section. We now proceed as before to change variables to $`w`$ and re-write this expression in terms of the auxiliary variable $`\xi `$ defined in Eq. (34a). $$det𝐌_1=\frac{N(q)H(\xi )}{1+i\eta vq}+\text{term odd in }s$$ (53) where $`H(\xi )`$ $`=`$ $`{\displaystyle \frac{3}{2}}(1-e^{i\frac{q}{2}})(1+i\eta vq+\xi )(1+i\eta vq-\xi \mathrm{cos}{\displaystyle \frac{q}{2}}-{\displaystyle \frac{v^2q^2}{3}})`$ (54) $`-{\displaystyle \frac{3i}{2}}\mathrm{sin}{\displaystyle \frac{q}{2}}(1+e^{i\frac{q}{2}})(\xi ^2-(1+i\eta vq)^2)`$ As before, the integrand in (52) has three poles. Dropping the irrelevant odd term and performing the integral leads to the expression $$Q_h(q)=\frac{N(q)}{1+i\eta vq}(1-e^{iq})\psi _h(q)$$ (55) with $$\psi _h(q)=\frac{1-e^{i\frac{q}{2}}}{2}\frac{H(\xi _1)\sqrt{\xi _2^2-(1+i\eta vq)^2}-H(\xi _2)\sqrt{\xi _1^2-(1+i\eta vq)^2}}{3(\xi _1-\xi _2)\sqrt{\xi _1^2-(1+i\eta vq)^2}\sqrt{\xi _2^2-(1+i\eta vq)^2}}.$$ (56) Branches of the square root satisfy the same conditions (43), (44), as earlier. The function $`N(q)`$ can be found from Eq. (41) $$N(q)=\frac{(1+i\eta vq)Q^F(q)}{1-S}.$$ (57) Our final answer is $$Q_h(q)=\frac{Q(0)}{S^+|_{q=\frac{i}{\eta v}}S^{-}}\frac{(1-e^{iq})\psi _h(q)}{(iq+0)(1+iq\eta v)}.$$ (58) For $`q`$ close to $`0`$, the function $`(1-e^{iq})\psi _h(q)`$ behaves as $`q`$, giving a divergence $`1/q^{1/2}`$ for $`Q_h(q)`$. This is similar to the divergence in $`Q^+(q)`$. Thus, the numerical calculation of $`Q_h(\tau )`$ is similar to the calculation of $`Q(\tau )`$ for $`\tau >0`$. 
Figure 4 displays $`Q_h(\tau )`$ for several values of $`\eta `$ and $`v`$. Generally speaking, the function $`Q_h(\tau )`$ has maxima in two different places, one somewhere in the vicinity of $`-1`$ and a second for $`\tau >0`$. We also need to find the bond elongation between the layers with $`n=1`$ and $`n=2`$ and between the layers with $`n=-1`$ and $`n=0`$. Due to the symmetry (31) it is sufficient to consider only $`Q_2^F(q,n=\pm 1)`$, which we will denote as $`Q_\pm (q)`$. We can derive for it an expression similar to that in Eq. (38) with the major difference being that in this case the odd parts of the explicit determinant in Eq. (32) do not cancel out because of the additional factor $`w^{\pm 1}`$. $`Q_\pm (q)`$ $`=`$ $`{\displaystyle \frac{N(q)}{2\pi i}}{\displaystyle \underset{|w|=1}{\oint }}w^{\pm 1}{\displaystyle \frac{\frac{F(\xi )}{1+i\eta vq}+G(w)}{3\xi ^2+\alpha \xi +\beta }}{\displaystyle \frac{dw}{w}}`$ (59) $`=`$ $`{\displaystyle \frac{N(q)}{2\pi i}}{\displaystyle \underset{|w|=1}{\oint }}{\displaystyle \frac{\frac{F(\xi )}{1+i\eta vq}\pm G(w)}{3\xi ^2+\alpha \xi +\beta }}dw.`$ To derive the second equality, we used a change of variables $`w\to 1/w`$, along with the symmetries of $`F`$ and $`G`$. Again the integration in (59) is in the counter-clockwise direction. In this case, the integrand in the second form of the integral has just two poles inside the unit circle, at $`w_{1,2}`$. After the by now familiar tedious calculations, we find $$Q_\pm (q)=\frac{N(q)}{1+i\eta vq}\psi _\pm (q)$$ (60) where $`\psi _\pm (q)={\displaystyle \frac{F(\xi _1)w_1}{3(\xi _1-\xi _2)\sqrt{\xi _1^2-(1+i\eta vq)^2}}}+{\displaystyle \frac{F(\xi _2)w_2}{3(\xi _1-\xi _2)\sqrt{\xi _2^2-(1+i\eta vq)^2}}}`$ $`\pm {\displaystyle \frac{g(q)(w_1-w_2)}{3(\xi _1-\xi _2)}}`$ (61) with $$g(q)=2i\mathrm{sin}\frac{q}{2}(v^2q^2-3(1+iq\eta v)\mathrm{sin}^2\frac{q}{2})$$ (62) and $`N(q)`$ is again given by (57). 
Finally we obtain $$Q_\pm (q)=\frac{Q(0)}{S^+|_{q=\frac{i}{\eta v}}S^{-}}\frac{\psi _\pm (q)}{(iq+0)(1+iq\eta v)}$$ (63) Just as was the case for $`Q_h(q)`$ and $`Q^+(q)`$, $`Q_\pm (q)`$ behaves as $`1/q^{1/2}`$ near $`0`$. Thus numerical calculations can be performed just as in the previous cases. Figure 5 shows several curves $`Q_\pm (\tau )`$ for differing parameters. Given the elongations of these “vulnerable” bonds, we can investigate the critical speed at which one of these bonds should be broken. Figure 6 shows the results of our calculations of this critical speed. In Figure 7 we plot $`\tau _{cr}`$ at which these bonds break; this will allow us to identify the spatial and time coordinates of breaking and make contact with our finite-lattice calculations later. We found that the maximum value of $`Q_{-}(\tau )`$ always reaches 1 before $`Q_+(\tau )`$ and that this is the most dangerous bond for small dissipation. This curve turns around at an $`\eta `$ of around $`1.2`$, so that $`Q_{-}(\tau )`$ is always subcritical for larger $`\eta `$. This is a result of the fact that the maximum value of $`Q_{-}(\tau )`$ is, surprisingly enough, not monotonic with velocity, but reaches a maximum and then decreases with increasing velocity. The maximum extension of this type of bond occurs, for large velocity, at some distance from the crack surface. For small $`\eta `$, where the maximum $`Q_{-}(\tau )`$ does exceed the critical extension, the decrease of the maximum $`Q_{-}`$ with $`v`$ restabilizes the bond beyond some velocity. The dominant threshold for $`\eta `$ above about $`1.1`$, then, comes from the horizontal bond breaking. Note that as $`\eta `$ increases, the critical velocity decreases. Also, for small $`\eta `$, a crossover occurs as the relative importance of the two different maxima in the horizontal bond elongation reverses; this region is in any case irrelevant, as the next-layer vertical bond breaks at a lower velocity. 
Whenever the horizontal bond dominates, the maximum at $`\tau \approx -1`$ is the governing one. As we see from Figure 7, even for large $`\eta `$ this critical $`\tau `$ stays near $`-1`$, which means that the horizontal bond always breaks near the tip of the crack, contrary to the mode III case, where the point of breaking drifts backward with increasing $`\eta `$. Many of these features are special to the mode I problem, and have no analog in the mode III calculation. For mode III, the horizontal bond breaking always dominates. Furthermore, the maximum off-crack surface bond extension is a strictly increasing function of the velocity. This points to the possibility that the dynamics of mode I may be much richer than the mode III dynamics. ## V Finite lattice model. For our mode III calculations, it proved interesting to compare the exact results derived for an infinite lattice with the numerical determination of crack propagation properties in lattices of small transverse size (Kessler and Levine, 1998; Kessler, 2000). We now discuss similar calculations for our mode I model. In addition to providing details regarding the size needed to attain answers relevant to the macroscopic limit (a question of direct relevance for direct molecular dynamics simulations (Abraham, et al., 1994; Zhou, et al, 1997; Gumbsch, et al., 1997; Holland and Marder, 1997, 1998; Omeltchenko, et al., 1997), for example), finite lattice results can be used as a rough check on some of the predictions obtained above. Given the complexity of the analysis, having such a check is quite useful. ### V.1 Arrested crack Let us start with trying to find arrested cracks, expected to exist in some range of the driving $`\mathrm{\Delta }`$ around the Griffith’s displacement. This involves looking for a solution of Eq. (1) with $`\stackrel{}{u}(x,y)`$ independent of time and with the forces on the right hand side arising solely from the broken bonds. 
If we have a system with a finite number of rows, the natural boundary condition requires that the top and bottom rows have fixed vertical displacements of $`\mathrm{\Delta }`$ and $`-\mathrm{\Delta }`$ respectively. Since the entire system is linear, we can choose to use $`\mathrm{\Delta }=1`$ and rescale the breaking criterion accordingly. Since we are doing a numerical calculation, we must introduce artificial boundaries in the direction along the crack, $`\widehat{x}`$. What we do is cut off the range of points whose coordinates are variables and outside of this range impose fixed asymptotic displacements. On the cracked side, these are just $`\stackrel{}{u}=\pm \mathrm{\Delta }\widehat{y}`$ for positive and negative $`y`$. For the uncracked side, the asymptotic displacement involves constant strain. At each of the sites that has a variable displacement, we impose the two components of the (vectorial) equation of motion. We also impose the equations of motion at the boundaries of the system. This gives us more equations than we have variables. The coupling of sites which contain variable displacements with the ones with fixed displacements gives rise to inhomogeneous terms. Combining these observations, we can write our system in the schematic form $`Mu-b=0`$, with a non-square matrix $`M`$. The field $`u`$ is then determined by the requirement that the error be minimal. In this manner, the small errors introduced at the boundary by having a fixed box size are prevented from causing any large errors (by modes which grow exponentially away from the edges) in the bulk of the lattice. The solution of this linear system determines the elongation of all bonds. In Figure 8 we show a typical lattice given by this solution. In fact we need to know the elongation of just two bonds right at the tip of the crack; the one which in the case of the moving crack would break next ($`Q_n`$) and the other which would have been the last one broken ($`Q_l`$). 
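The minimal-error solve of the overdetermined system $`Mu-b=0`$ can be sketched with the normal equations; the tiny system below is purely illustrative (a real calculation would use a sparse banded solver):

```python
def lstsq(M, b):
    """Minimal least-squares solve of an overdetermined M u = b via the
    normal equations M^T M u = M^T b (naive Gaussian elimination).
    A stand-in sketch for the minimal-error solve described in the text."""
    n = len(M[0])
    A = [[sum(M[k][i] * M[k][j] for k in range(len(M))) for j in range(n)]
         for i in range(n)]
    rhs = [sum(M[k][i] * b[k] for k in range(len(M))) for i in range(n)]
    for col in range(n):                  # forward elimination, no pivoting
        for row in range(col + 1, n):
            f = A[row][col] / A[col][col]
            for j in range(col, n):
                A[row][j] -= f * A[col][j]
            rhs[row] -= f * rhs[col]
    u = [0.0] * n
    for i in reversed(range(n)):          # back substitution
        u[i] = (rhs[i] - sum(A[i][j] * u[j] for j in range(i + 1, n))) / A[i][i]
    return u

# 3 equations, 2 unknowns; an exact solution exists here, so it is recovered.
u = lstsq([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]], [1.0, 2.0, 3.0])
assert abs(u[0] - 1.0) < 1e-12 and abs(u[1] - 2.0) < 1e-12
```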
Now, recall that we have scaled our displacement to equal unity and the only remaining displacement scale is the elongation at which the springs break, which we can call $`ϵ`$. The solution we have found is consistent as long as $`ϵ`$ is between the lower limit set by $`Q_n`$ and the upper limit set by $`Q_l`$. Since $`\mathrm{\Delta }_G=ϵ\sqrt{(2N+1)/3}`$, where $`2N`$ is the number of rows in the $`y`$-direction (excluding the boundary rows whose displacement is constrained), we directly obtain the upper and lower limits of the arrested crack band $$\frac{1}{\sqrt{\frac{1}{3}(2N+1)}Q_l}\le \mathrm{\Delta }/\mathrm{\Delta }_G\le \frac{1}{\sqrt{\frac{1}{3}(2N+1)}Q_n}$$ (64) The results for these thresholds as a function of lattice size are shown in Figs. 9, 10. In the limit of an infinite lattice, the lower limiting value $`\mathrm{\Delta }_{-}/\mathrm{\Delta }_G`$ approaches $`0.515`$, while the upper limiting value $`\mathrm{\Delta }_+/\mathrm{\Delta }_G`$ approaches $`1.94`$. ### V.2 Stable moving cracks We now turn to the moving crack problem. Again, we need to solve Eq. (1) with the forces on the right-hand side due to the broken bonds. Now, the displacement field $`\stackrel{}{u}(x,y,t)`$ is of course time dependent. Therefore, we need to introduce a time-step $`\mathrm{\Delta }t`$ so as to define the time-points at which we will obtain a numerical solution. Given some specific speed $`v`$ of the crack, we choose to divide half of the time interval that it takes for the crack to propagate one lattice spacing, $`1/(2v)`$, into $`n`$ equal time intervals; thus $`\mathrm{\Delta }t=1/(2vn)`$. Now we can discretize our system, obtaining equations at times $`t=0,\mathrm{\Delta }t,\mathrm{},(n-1)\mathrm{\Delta }t`$. 
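The temporal derivatives on this grid are handled with standard symmetric (central) differences. As a quick sanity check of those formulas, the sketch below (the test function and step size are arbitrary choices for illustration) verifies their accuracy on a known function:

```python
import math

# Sanity check of the symmetric (central) difference formulas used for
# the temporal derivatives: (u(t+dt) - u(t-dt)) / (2 dt) for du/dt and
# (u(t+dt) + u(t-dt) - 2 u(t)) / dt^2 for d2u/dt2.  The test function
# u(t) = sin(t) and the step size are illustrative choices.

def first_deriv(u, t, dt):
    return (u(t + dt) - u(t - dt)) / (2 * dt)

def second_deriv(u, t, dt):
    return (u(t + dt) + u(t - dt) - 2 * u(t)) / dt**2

t, dt = 0.7, 1e-3
d1 = first_deriv(math.sin, t, dt)    # approximates cos(0.7)
d2 = second_deriv(math.sin, t, dt)   # approximates -sin(0.7)

err1 = abs(d1 - math.cos(t))   # both errors scale as dt^2
err2 = abs(d2 + math.sin(t))
```

Both errors are of order $`\mathrm{\Delta }t^2`$, which is why the finite time-step introduces only the small numerical error noted below.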
We use a symmetric discretization of the first and second derivatives, $`(\stackrel{}{u}(t+\mathrm{\Delta }t)-\stackrel{}{u}(t-\mathrm{\Delta }t))/(2\mathrm{\Delta }t)`$ and $`(\stackrel{}{u}(t+\mathrm{\Delta }t)+\stackrel{}{u}(t-\mathrm{\Delta }t)-2\stackrel{}{u}(t))/\mathrm{\Delta }t^2`$ respectively. These equations depend on displacements $`\stackrel{}{u}`$ outside of this time interval, because of the temporal derivatives. These can be found via use of the assumed symmetries of the moving crack; in fact, it is easy to see that we thereby trade displacements outside of the modeled interval for displacements inside this interval, albeit at a different spatial location. The boundaries are treated the same way as described for the arrested crack; the displacements outside some range are replaced by asymptotic values for all of the time-points in our interval. Including the equations at these boundary points again gives us a system in which the number of equations exceeds the number of variables, and again a least-squared-error algorithm is used to find the required solution. Fig. 11 shows a snapshot of the lattice near the tip of the crack at one particular time for one set of parameters. We used these finite lattice calculations to provide an independent check on our analytic infinite lattice results. We were specifically interested in checking the critical speed estimates. The difficulty is that convergence in the parameter $`1/N`$ is fairly slow, and using large lattices is rather time-consuming and memory-demanding. In practice, we limited our calculations to $`N`$’s ranging from $`6`$ to $`24`$. It turns out that for small dissipation, where vulnerability of the next-row vertical bond determines the critical velocity, we can do a credible job of verifying that the infinite lattice results are consistent with the finite lattice ones. For example, in Fig. 
12, we show the critical speed as a function of lattice size for several different values of $`\eta `$. These numbers, if extrapolated to infinite $`N`$, are clearly consistent with the results given earlier in Fig. 6, especially considering that having a finite time-step does introduce a small numerical error on its own. We can also check the critical time $`\tau `$ at which the bond breaking occurs. From Fig. (7), we see that in the infinite lattice the break occurs for $`\tau \approx 0.5`$. For the finite lattice with $`n=5`$, this means that the bond $`a`$ highlighted in Fig. 11 should go above the breaking threshold at the first time-point of the modeled interval. This is exactly what occurs. A different strategy is necessary for studying the onset of horizontal bond breaking. This is due to the fact that quite large $`N`$’s are required in order for the results to quantitatively approach the infinite $`N`$ limit. This is another striking difference between the Mode I results and those of Mode III. For Mode III, the qualitative nature of the extraneous bond breaking was always similar to that of the infinite width limit, and the results were quantitatively accurate even for fairly small $`N`$’s. Here, however, the picture of which bonds are critical changes dramatically between finite $`N`$ and the infinite $`N`$ limit. In Fig. (13), we present the “phase diagram” for critical bonds for $`N=38`$, plotted together with the infinite $`N`$ result, and in Fig. (14), we compare the $`N=38`$ results to those for $`N=26`$. Since the steady-state code is difficult to run at these large values of $`N`$, the data in these two plots were obtained from time-dependent simulations, with only the central vertical bonds allowed to break. After running to times of $`t=100`$ to eliminate transients, the extensions of the dangerous bonds were checked to see if they exceeded criticality. Note that, in contradistinction to the “phase diagram” plotted in Fig. 
(6), we now plot $`\mathrm{\Delta }/\mathrm{\Delta }_G`$ on the vertical axis, since this is the input control parameter for the simulations. We see that the qualitative behavior of the “$`Q_{-}`$” bonds is similar to the infinite-$`N`$ result, with these bonds being below critical extension for both large $`\eta `$ and large $`\mathrm{\Delta }/\mathrm{\Delta }_G`$. The horizontal bonds also behave qualitatively like their infinite-$`N`$ counterparts, but the quantitative agreement is, as we noted above, significantly worse. In fact, for large $`\eta `$, the most dangerous bond is of “$`Q_+`$” type. However, the threshold at which the “$`Q_+`$” type bond becomes dominant is pushed to larger $`\eta `$ as the system size is increased, presumably going to infinity with $`N`$. Also, there is a small region of $`\eta `$ around 1.1 where the inconsistency is re-entrant, so that for intermediate values of $`\mathrm{\Delta }`$, no bonds off the crack surface are critical. Thus, to see dynamics qualitatively representative of the macroscopic system requires very large system sizes at moderate to large $`\eta `$. Presumably, this is connected to the process zone increasing in size with $`\eta `$. Again, it is worth emphasizing that this is a feature not present in Mode III, and leads to the conclusion that in molecular dynamics simulations, the width of the material should be taken very large to accurately study the micro-branching instability. ## VI Discussion In this work, we have discussed steady-state mode I crack propagation in a viscoelastic lattice model. Our primary method of analysis utilizes the Wiener-Hopf technique to write down closed form expressions for both the $`v`$–$`\mathrm{\Delta }`$ curves (for various values of the dissipation $`\eta `$) and for the bond elongation field. The latter enables us to define a limit of consistency for the solution, past which some bonds not along the crack path have elongations greater than the assumed breaking criterion. 
These results are new, and correspond to non-trivial extensions of the work of Kulamekhtova (1984) on the dissipationless limit and the work of Marder and Gross (1995) on small lattices. The most interesting results, in our opinion, concern the dependence of the critical velocity (where the aforementioned inconsistency sets in) on the amount of dissipation. For small dissipation, this threshold is relatively insensitive to $`\eta `$, as already suggested in direct numerical simulations. This threshold, occurring at roughly 0.73 of the Rayleigh speed, is in some ways reminiscent of the Yoffe (1951) branching criterion, which suggests that straight crack propagation will become unstable once the largest stress direction shifts away from being straight ahead. It is unfortunately hard to be more precise regarding this correspondence, since the crack dynamics in our model is fundamentally tied to the lattice scale, not the macroscopic scale; in fact, the latter is completely invisible in the Slepyan approach aside from providing a driving force in the form of a stress intensity factor. At larger $`\eta `$, the instability picture changes. Now, it is a horizontal bond breaking which signals the onset of more complex crack dynamics. Also, the threshold goes down with increasing dissipation. This was not the case for our mode III calculations. This instability has nothing to do with the Yoffe criterion, as it is strongly dissipation dependent and in any case is not associated with crack branching. Much of the recent theoretical work on mode I cracks has been motivated by experiments which show clearly that instabilities limit the range of steady-state crack propagation. These instabilities introduce more complex spatio-temporal dynamics to the fracture process, causing additional dissipation and leaving behind a roughened crack surface. 
It has been tempting to associate these results with the onset of inconsistencies in lattice models, although the details of this correspondence remain uncertain. First, most of the experimental work has been carried out in amorphous materials, making the idea of an ordered lattice model somewhat suspect. Next, the instabilities seen experimentally typically occur at smaller speeds than the ones seen in lattice systems with small dissipation. Finally, the experiments seem to show a typical frequency for micro-branching which is not connected in any obvious way to dynamics at a lattice scale. Notwithstanding all these issues, we remain optimistic that the study of this class of models will lead to insight into dynamic fracture. There are several intriguing possibilities that need to be investigated in the future. First, we have shown that for mode I cracks, increasing the dissipation (in the form of a Kelvin viscosity) eventually results in a decrease of the instability threshold. If our model is really applied at the atomic scale, it is hard to see why there should be a large linear dissipation; on the other hand, it is well-known that lattice models miss an essential non-linear dissipation mechanism, namely the creation of dislocations which remain pinned to the crack. Would inclusion of these effects also lower the threshold? On the other hand, applying the model on a large scale (possibly for a disordered system) would naturally require a large dissipation, and recent numerical simulations indicate that proper inclusion of thermal fluctuations might also push the model into better agreement with experiment (Pla, et al., 1998; Sander and Ghasias, 1999). Finally, the exact nature of the state which occurs past the instability onset has not been addressed in our work to date, and in fact cannot be addressed by the elegant but ultimately limited analytic methods utilized for the steady-state problem. 
Instead, we plan to study a generalized force law in which the sharp breaking criterion is replaced by an analytic nonlinear force versus displacement (Kessler and Levine, 1999). In this formulation, the inconsistency found here becomes a linear instability of the steady-state crack (Kessler and Levine, 2000) and one can use some of the methods developed in the field of nonequilibrium pattern formation to unravel the dynamics past onset. These studies, together with additional experimental data regarding the differences between brittle fracture in crystalline versus non-crystalline materials, will hopefully lead to a better understanding of dynamic fracture. ###### Acknowledgements. DAK acknowledges the support of the Israel Science Foundation. The work of HL and LP is supported in part by the NSF, grant no. DMR94-15460. DAK and LP thank Prof. A. Chorin and the Lawrence Berkeley National Laboratory for their hospitality during the initial phase of this work.
# NMR Quantum Computation: a Critical Evaluation ## I Introduction Nuclear Magnetic Resonance (NMR) Ernst:1987 ; Goldman:1988 ; Munowitz:1988 ; Emsley:1992 ; Hore:1995 ; Freeman:1997 is almost unique among potential quantum technologies in that it has already been used to build small quantum computers Cory:1996 ; Cory:1997 ; Gershenfeld:1997 ; Jones:1998a ; Jones:2000b . Although other techniques have been used to implement quantum logic gates, such as the ion trap controlled-not gate Monroe:1995 , NMR provided the first complete implementation of a quantum algorithm Jones:1998a . Since then progress has been extremely rapid, with demonstrations of Deutsch’s algorithm Jones:1998a ; Chuang:1998a , Grover’s quantum search Chuang:1998b ; Jones:1998b ; Jones:1998c , versions of the Deutsch-Jozsa algorithm with three Linden:1998a and five Marx:1999 qubits, and quantum counting Jones:1999a . In addition to being one of the most successful quantum computing technologies, liquid state NMR is also among the oldest. Although explicit experimental demonstrations of NMR quantum computation only date from 1996 Cory:1996 , many “conventional” NMR experiments, such as COSY Jeener:1971 ; Aue:1976 and INEPT Morris:1979 can in retrospect be reinterpreted as quantum computations. These and related experiments, based on coherence transfer sequences, have been in use all over the world for decades, and are regularly used to study complex biomolecules containing thousands of nuclear spins Cavanagh:1996 . Even older than these are simpler experiments such as selective population transfer Pachler:1973 , which corresponds to the implementation of a controlled-not gate, although in this case the gate was not applied to spins in superposition states. The rapid progress made in NMR quantum computation builds on this pre-existing experimental sophistication: decades of experience in manipulating nuclear spins in coherent states has inevitably resulted in a wide range of “tricks of the trade”. 
Given this success it may seem surprising that some early papers suggested that this approach might be limited to quantum computers containing about 10 qubits Warren:1997 ; Gershenfeld:1997b , while many more modern estimates are similar at 10–20 qubits. The explanation for this is simple: the highly developed nature of NMR experiments, in comparison with many other putative quantum technologies, means that the limits of the technique are fairly well known and understood. Thus unduly optimistic predictions about the power of NMR quantum computation may be easily debunked by experienced NMR spectroscopists. ## II Structure and scope It is conventional when assessing proposals for quantum computation technologies to consider how the technique can be used to implement five basic elements required to build a quantum computer, and then to discuss whether these techniques can be scaled up for use in computers with large numbers of qubits. This approach is not appropriate for NMR, as it is already clear that all the five basic elements can be constructed; indeed they have all been experimentally demonstrated. Thus, the only topic remaining to be addressed is that of scaling. It is, however, useful to consider each of the five basic elements in turn, and discuss their impact upon the practicality of building a large NMR quantum computer. The sections below are largely concerned with what might be called “conventional” liquid state NMR quantum computers, by which I mean computers implemented using standard techniques from liquid state NMR with spin-$`\frac{1}{2}`$ nuclei. I will, however, briefly discuss some relatively simple techniques, such as the use of optical pumping or the use of liquid crystal solvents, which may allow small extensions in the range accessible to NMR quantum computers without greatly altering the underlying physics. 
I will not consider solid-state NMR: although ultimately based on the same physical interactions the solid state NMR Hamiltonian Abragam:1961 ; Slichter:1990 ; Schmidt-Rohr:1994 is *much* more complicated than the liquid state form, and liquid and solid state NMR form two largely separate sub-fields. While some solid state NMR experiments could be considered as implementations of quantum simulations (*e.g.*, Zhang:1998 ), they have not as yet been used to build general purpose quantum computers. Similarly, while nuclei with spin quantum numbers greater than $`\frac{1}{2}`$ (quadrupolar nuclei) are of some theoretical interest Kessel:1999 ; Kessel:2000 , their use raises considerable experimental difficulties. Finally I will largely ignore proposed systems which use single isolated atomic nuclei in solid state devices, such as the proposal due to Kane Kane:1998 ; DiVincenzo:1998 ; although such systems are ultimately based upon NMR, they differ from current liquid state NMR implementations so profoundly that it is difficult to draw detailed parallels. ## III Initialization Initialization is the process of placing a quantum computer in some well defined initial state, typically $`|\mathrm{𝟎}=|000\mathrm{}0`$, prior to beginning the computation. In principle any initial state is as good as any other, but in practice $`|\mathrm{𝟎}`$ is the most widely chosen, both because it corresponds to the traditional starting point of many quantum algorithms and because it often corresponds to the system’s energetic ground state. This is indeed the case in NMR quantum computation, where the computational basis corresponds with the natural experimental basis. As initialization requires that the quantum computer be placed in the state $`|\mathrm{𝟎}`$, independent of its state before the beginning of the initialization process, it is clear that it cannot be achieved by any unitary process; thus initialization schemes must be quite different from quantum logic gates. 
As the desired initial state is usually an energetic ground state, initialization is typically achieved by cooling. This is not a practical approach within NMR, as the energy gaps involved are tiny compared with the Boltzmann factor at room temperature. The energy gap between nuclear spin levels in NMR experiments is principally determined by the Zeeman interaction between the nucleus and the applied magnetic field (with the exception of the quadrupolar interaction, which does not occur in spin-$`\frac{1}{2}`$ nuclei, all other nuclear spin interactions are small compared with the main Zeeman interaction). The Zeeman splitting, $`\mathrm{\Delta }E=h\nu =\mathrm{}\gamma B`$, is usually described in terms of the corresponding Larmor frequency, $`\nu `$, and is proportional to the magnetic field strength, $`B`$, and the gyromagnetic ratio, $`\gamma `$, which is an intrinsic property of the nucleus. For the magnetic field strengths typically used in NMR experiments ($`2.3`$–$`21.1\text{T}`$), the Larmor frequency of $`{}_{}{}^{1}\mathrm{H}`$ nuclei lies in the range $`100`$–$`900\text{MHz}`$, corresponding to an energy of $`0.4`$–$`3.7\mu \text{eV}`$. For all other nuclei (with the exception of the radioactive nucleus tritium, $`{}_{}{}^{3}\mathrm{H}`$), $`\gamma `$ is lower than for $`{}_{}{}^{1}\mathrm{H}`$, with a corresponding reduction in $`\mathrm{\Delta }E`$. These energies are much smaller than the Boltzmann energy at room temperature (about $`25\text{meV}`$), and so at thermal equilibrium the excess population in the lower Zeeman level is tiny, less than one part in $`10^4`$. For this reason conventional liquid state NMR was long ruled out as a practical technology for quantum computation. In 1996, however, it was realised that it is not strictly necessary to start quantum computations from a pure state: a pseudo-pure, or effective pure state will suffice Cory:1996 . 
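The numbers quoted above are easy to reproduce. The sketch below (the field strength of 11.7 T and temperature of 298 K are illustrative choices) computes the $`{}_{}{}^{1}\mathrm{H}`$ Larmor frequency and the equilibrium polarization, confirming the excess population is below one part in $`10^4`$:

```python
import math

# Back-of-envelope check of the Zeeman numbers quoted above.  The field
# strength (11.7 T) and temperature (298 K) are illustrative choices.
h = 6.626e-34       # Planck constant, J s
k = 1.381e-23       # Boltzmann constant, J/K
gamma_H = 2.675e8   # 1H gyromagnetic ratio, rad s^-1 T^-1

B = 11.7            # magnetic field, T
T = 298.0           # temperature, K

nu = gamma_H * B / (2 * math.pi)   # Larmor frequency, Hz (~500 MHz)
dE = h * nu                         # Zeeman splitting, J
# Equilibrium polarization = tanh(dE / 2kT) ~ dE / 2kT at high temperature
polarization = math.tanh(dE / (2 * k * T))
```

At these conditions the polarization works out to a few parts in $`10^5`$, in line with the "less than one part in $`10^4`$" statement above.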
### III.1 Pseudo-pure states A pseudo-pure state is a mixed quantum state corresponding to a mixture of the desired quantum state, $`|\psi `$ and the maximally mixed state, $`\mathrm{𝟏}/N`$, where $`N=2^n`$ is the dimension of the Hilbert space describing the system. To perform a quantum computation it suffices to form the pseudo-pure ground state, $$\rho =(1-ϵ)\mathrm{𝟏}/N+ϵ|\mathrm{𝟎}\mathrm{𝟎}|.$$ (1) An otherwise error-free quantum computer which begins a computation in such a state will end up in the state $$\rho ^{}=(1-ϵ)\mathrm{𝟏}/N+ϵ|\psi \psi |$$ (2) where $`|\psi `$ is the result of the computation (assumed for the moment to be an eigenstate in the natural basis). This result may be immediately deduced by noting that any quantum computation corresponds to some unitary evolution of the quantum state; such evolutions are linear and have no effect upon $`\mathrm{𝟏}`$. A quantum computer in this mixed state will return the correct answer with probability $`ϵ+(1-ϵ)/N`$, and a wrong answer with probability $`(N-1)(1-ϵ)/N`$ (note that the maximally mixed state itself contains a fraction $`1/N`$ corresponding to the ground state). Clearly it is possible to determine the desired answer by statistical analysis of the results of a sufficiently large number of repetitions of the computation, where the number of repetitions required depends on $`ϵ`$. Equivalently it is possible to use an ensemble of quantum computers, and determine the ensemble averaged result of the computation; as long as the ensemble is sufficiently large this process will unambiguously point to the desired answer. This latter approach is precisely that adopted for NMR quantum computation, and indeed for conventional NMR spectroscopy. 
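The linearity argument above is easy to verify directly on a single qubit. The sketch below (the Hadamard gate and $`ϵ=0.1`$ are illustrative choices) checks that a unitary maps a pseudo-pure state to another pseudo-pure state with the same $`ϵ`$:

```python
import math

# Single-qubit check that a unitary U maps the pseudo-pure state
# rho = (1-eps) I/2 + eps |0><0|  to  (1-eps) I/2 + eps U|0><0|U+.
# The gate (a Hadamard) and eps = 0.1 are illustrative choices.

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def dagger(A):
    n = len(A)
    return [[A[j][i].conjugate() for j in range(n)] for i in range(n)]

eps = 0.1
identity = [[1.0, 0.0], [0.0, 1.0]]
p0 = [[1.0, 0.0], [0.0, 0.0]]               # projector |0><0|
rho = [[(1 - eps) * identity[i][j] / 2 + eps * p0[i][j] for j in range(2)]
       for i in range(2)]

s = 1 / math.sqrt(2)
hadamard = [[s, s], [s, -s]]
rho_out = matmul(hadamard, matmul(rho, dagger(hadamard)))

# Expected form: (1-eps) I/2 + eps |+><+|,
# i.e. diagonal elements 1/2 and off-diagonal elements eps/2.
```

The maximally mixed component passes through untouched; only the $`ϵ`$-weighted pure component evolves.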
NMR transition frequencies are so low that it is essentially impossible to detect a single transition, and so it is necessary to use macroscopic samples, typically containing about $`10^{17}`$ molecules, with an excess population of about $`10^{13}`$ nuclei in the low energy state. It might seem that the small signal from the excess population would be swamped by a huge background, but this is not the case as the maximally mixed state gives rise to no overall signal. This is because the signals from different components in the maximally mixed state cancel each other out; the operators corresponding to the effective observables in NMR spectra are all traceless. Thus the NMR signal arises entirely from the small excess population, and the signal from an NMR quantum computer in the pseudo-pure state, equation (2), is identical to that from one in the pure state $`|\psi \psi |`$, except that the signal strength is reduced by a factor of $`ϵ`$. ### III.2 Assembling pseudo-pure states While pseudo-pure states offer the theoretical possibility of performing quantum computations with mixed states, this approach is useful only if some practical procedure for assembling such states can be devised. For the simplest possible quantum computer, comprising a single qubit, the process is trivial, as the thermal equilibrium density matrix has exactly the desired form, but with larger systems the situation is more complicated. For a system of $`n`$ spin-$`\frac{1}{2}`$ nuclei, all of the same nuclear species (a homonuclear spin system), the $`2^n`$ eigenstates will be distributed across an evenly spaced ladder of $`n+1`$ groups of energy levels, with the number of (nearly degenerate) states within each group given by Pascal’s triangle; the population of each state will be determined by the Boltzmann equation. If the system contains several different nuclear species (a heteronuclear system), the situation is similar but slightly more complicated. 
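The traceless-observable argument above admits a one-qubit numerical illustration. In the sketch below (using $`\sigma _z`$ as the observable, the state $`|\mathrm{𝟎}`$, and $`ϵ=0.05`$, all illustrative choices) the maximally mixed background contributes nothing, so the detected signal is just $`ϵ`$ times the pure-state signal:

```python
# Signal = Tr(rho O) for the traceless observable O = sigma_z = diag(1, -1).
# For the maximally mixed state I/2 the signal vanishes; for the pseudo-pure
# state (1-eps) I/2 + eps |0><0| it equals eps times the pure-state signal.
# eps = 0.05 is an illustrative choice.

def trace_product(rho, obs):
    """Tr(rho . obs) for 2x2 matrices stored as nested lists."""
    prod = [[sum(rho[i][k] * obs[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]
    return prod[0][0] + prod[1][1]

sigma_z = [[1.0, 0.0], [0.0, -1.0]]
mixed = [[0.5, 0.0], [0.0, 0.5]]    # maximally mixed state I/2
eps = 0.05
p0 = [[1.0, 0.0], [0.0, 0.0]]       # pure state |0><0|
pseudo = [[(1 - eps) * mixed[i][j] + eps * p0[i][j] for j in range(2)]
          for i in range(2)]

background = trace_product(mixed, sigma_z)   # 0: the mixed part is invisible
signal = trace_product(pseudo, sigma_z)      # eps * <0|sigma_z|0> = eps
```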
Normally NMR experiments are conducted in the high temperature limit (see below), and so the pattern of excess populations will simply be proportional to the energy of each state. Assembling a pseudo-pure state from such a complex mixture might seem difficult, but it is in fact a fairly conventional problem from the viewpoint of NMR Cory:1996 . The apparent problem is that reaching even a pseudo-pure state requires a non-unitary process, and the most obvious such process (relaxation to thermal equilibrium) leads to a state which is neither the desired state, nor unitarily related to it. Thus some additional non-unitary step is required. In fact there are two elements commonly used in NMR pulse sequences with non-unitary effects: magnetic field gradients and phase cycling. The first of these relies on the fact that the sample forms a macroscopic ensemble; by applying Hamiltonians which vary over the sample the *ensemble averaged* evolution can be non-unitary. This is most commonly achieved by momentarily destroying the spatial homogeneity of the main magnetic field (a $`B_0`$ field gradient pulse Keeler:1994 ), but similar effects can be achieved using spatially inhomogeneous RF fields (a $`B_1`$ field gradient pulse Zhang:1995 ). The second approach relies on combining the results of several subtly different NMR experiments by post-processing; as this is done by classical methods, such processing is not confined to unitary transformations. In conventional NMR this is referred to as phase cycling Bodenhausen:1984 , and plays a central role in many experiments, although for many purposes it has now been replaced by the use of gradient techniques. Both of these approaches have been used to assemble pseudo-pure states. The original approach of Cory *et al.* Cory:1996 , based on magnetic field gradients, is in many ways the most satisfying, but an alternative “temporal averaging” scheme based on phase cycling Knill:1998 has also proved extremely popular. 
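A toy model of the first mechanism (a $`B_0`$ gradient "crusher") may make the non-unitarity concrete: each spin packet in the sample evolves unitarily with its own position-dependent phase, but averaging over packets destroys the off-diagonal coherence. The initial state and packet count below are illustrative assumptions:

```python
import cmath, math

# Toy model of a B0 gradient crusher: each spin packet in the ensemble
# picks up its own phase phi -- a unitary evolution packet by packet --
# but the ensemble-averaged density matrix is no longer a unitary
# transform of the input: its coherence (off-diagonal element) is
# destroyed.  The initial state |+><+| and packet count are illustrative.

npackets = 360
rho = [[0.5, 0.5], [0.5, 0.5]]   # single qubit with full coherence

avg_coherence = 0.0 + 0.0j
for m in range(npackets):
    phi = 2 * math.pi * m / npackets       # phase at this packet's position
    # U = diag(1, e^{i phi}) maps rho[0][1] -> rho[0][1] * e^{-i phi}
    avg_coherence += rho[0][1] * cmath.exp(-1j * phi)
avg_coherence /= npackets

# The populations (diagonal elements) are untouched by this evolution,
# while the averaged coherence is driven to (numerically) zero.
```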
Recently Knill *et al.* have described a simple and general approach Knill:1999 for building pseudo-pure states in a system of any size; their scheme may be used with either gradient or phase cycling techniques. In addition to these schemes, another quite different approach, logical labeling, was suggested very early on by Gershenfeld and Chuang Gershenfeld:1997 . Their approach is based on the observation that while the thermal equilibrium spin density matrix for an $`n`$ spin system ($`n>1`$) does not have the desired form, equation (1), subsets of energy levels can be chosen which *do* have the correct pattern of populations; the computation is then performed within this subset of states. Although theoretically elegant this scheme appears to be more complex than the other approaches, and only two experimental implementations have been reported Vandersypen:1999 ; Dorai:1999 . ### III.3 Scaling the system up While pseudo-pure states provide a practical approach for building small NMR quantum computers it is not possible to simply scale this approach up to larger systems. The basic problem is the size of $`ϵ`$, or rather the manner in which $`ϵ`$ scales with $`n`$, the number of spins in the system. The exact value of $`ϵ`$ will vary with details of the scheme used to prepare the pseudo-pure state, and so it is more useful to consider an upper bound, which gives the maximum amount of pseudo-pure state which can possibly be extracted from thermal equilibrium. This is equal to the population difference between the lowest and highest energy states Warren:1997 , which for an $`n`$ spin homonuclear spin system is given by $$ϵ=\frac{2\mathrm{sinh}(nh\nu /2kT)}{2^n\mathrm{cosh}^n(h\nu /2kT)}$$ (3) where $`h\nu `$ is the Zeeman splitting. 
In the high temperature limit ($`h\nu \ll kT`$) this expression simplifies to $$ϵ\approx \frac{nh\nu /kT}{2^n}.$$ (4) Thus, in the high temperature limit the amount of pseudo-pure state which can be obtained decreases *exponentially* with the size of the spin system. To overcome this it is necessary to use an exponentially large sample, or some equivalent approach such as repeating the experiment an exponentially large number of times. It therefore seems that the pseudo-pure state approach does not scale. In passing it should be noted that this is by no means a feature unique to NMR quantum computation: it occurs for any ensemble quantum computation scheme in the high temperature limit. Physically this is because such a system has $`2^n`$ levels, and the population deviations must be distributed among them; hence the excess population in any one state is inevitably exponentially small. The exponentially small size of $`ϵ`$ has caused some authors to dismiss NMR quantum computation as a practical approach; while this point of view has merit it may, as discussed below, be over hasty. More recently, it has been suggested Braunstein:1998 that NMR might not be quantum mechanical at all! As NMR experiments are conducted in the high temperature limit, the density matrix is close to a maximally mixed state, and such high temperature states can always be decomposed as a mixture of product states (that is, states containing no entanglement between different nuclei). As NMR states can be described without invoking entanglement, they can therefore be modeled classically, although the classical models involved may be somewhat contrived. While this conclusion is clearly correct it has proved difficult to develop classical models which fully describe NMR experiments Schack:1999 , and the real significance of these observations remains unclear. 
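The exponential scaling is easy to see numerically. The sketch below evaluates the exact bound, equation (3), and its high-temperature limit, equation (4), for an illustrative $`h\nu /kT`$ of $`8\times 10^5`$ (roughly a 500 MHz spectrometer at room temperature):

```python
import math

# Maximum pseudo-pure fraction, equation (3), and its high-temperature
# limit, equation (4).  x = h*nu/kT = 8e-5 is an illustrative value
# (roughly a 500 MHz spectrometer at room temperature).

def eps_exact(n, x):
    return 2 * math.sinh(n * x / 2) / (2**n * math.cosh(x / 2)**n)

def eps_high_t(n, x):
    return n * x / 2**n

x = 8e-5
e10, e20 = eps_exact(10, x), eps_exact(20, x)

# The high-temperature approximation is excellent at this x ...
rel_err = abs(e10 - eps_high_t(10, x)) / e10
# ... and eps shrinks roughly by half per added spin as n grows.
```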
### III.4 High fields and low temperatures As the problem with using pseudo-pure states arises from operating in the high temperature limit, the most obvious solution is to use either low temperatures or high fields, so that this limit no longer applies. Unfortunately NMR lies so far into the high temperature regime that this approach is unlikely to lead to success. The critical fields and temperatures required are given by $`\mathrm{}\gamma B\approx kT`$; for $`{}_{}{}^{1}\mathrm{H}`$ nuclei and a magnetic field strength of $`21.1\text{T}`$ (the largest NMR magnet currently available) this corresponds to $`T\approx 0.043\text{K}`$. Reaching such temperatures is possible, but clearly the sample will no longer remain in the liquid state! At such low temperatures only solid state NMR is possible. Alternatively if the sample is held at room temperature then a magnet with a field strength $`B\approx 150000\text{T}`$ will be required; this lies far beyond anything which is likely to be achieved in the foreseeable future. The arguments above do not entirely rule out the possibility that some combination of high fields and low temperatures might one day be used to achieve reasonable polarizations, and thus interesting pseudo-pure states (in particular, it may be possible to generate high polarizations under one set of conditions and then observe them in another), but in the short term this does not seem a particularly sensible approach. There is, however, a wide range of alternatives. ### III.5 Optical pumping A more subtle approach to increasing spin polarization is to use techniques such as optical pumping; this has the effect of decreasing the apparent temperature of the spin system without affecting the rest of the sample. Optical pumping techniques are in fact quite widely used within NMR, but it is not yet clear whether they can be usefully applied within NMR quantum computation. 
The best known optical pumping process within NMR is the spin-exchange optical pumping of noble gas nuclei Walker:1997 , most notably the spin-$`\frac{1}{2}`$ nuclei $`{}_{}{}^{3}\mathrm{He}`$ and $`{}_{}{}^{129}\mathrm{Xe}`$ and the quadrupolar nucleus $`{}_{}{}^{131}\mathrm{Xe}`$. The process involves conventional optical pumping of the electron spin states of alkali metal atoms, followed by spin exchange in binary collision pairs (He) or van der Waals complexes (Xe). The resulting highly polarized noble gases have been used in a variety of NMR experiments, including NMR imaging Albert:1994 . Unfortunately noble gases are unsuitable for constructing NMR quantum computers as they consist of isolated atoms, that is systems containing only a single spin. It is in principle possible to transfer the polarization from the noble gas to more interesting species using a variety of cross polarization techniques Gaede:1995 ; Navon:1996 ; Pietrass:1999 , although it has so far proved difficult to obtain high transfer efficiency except when transferring polarization to surface species in microporous materials. The second common form of optical pumping in NMR is quite different: optical pumping in bulk semiconductors such as Si, GaAs and InP Lampel:1968 . This approach, which is related to the DNP schemes described below, is confined to solid state systems, and it is difficult to see how it might be used to improve current liquid state implementations of NMR quantum computers. ### III.6 Other approaches There are many other techniques which can be used to increase the initial polarization in NMR experiments: the low sensitivity of NMR is perhaps its biggest drawback in conventional spectroscopic studies, and seeking to improve sensitivity is a common research topic. Perhaps the most important technique for sensitivity enhancement is the nuclear Overhauser effect (NOE), which arises from the correlated relaxation of two or more nuclei. 
If the polarization of one nucleus is perturbed from its equilibrium value, cross relaxation will transfer some of this perturbation to other nearby nuclei. This technique is widely used, both to enhance the polarization of low sensitivity nuclei, and to probe internuclear distances Neuhaus:1989 . The maximum polarization gain which can be achieved, however, is proportional to $`\gamma _S/\gamma _I`$, where $`\gamma _S`$ and $`\gamma _I`$ are the gyromagnetic ratios of the sensitive and insensitive nuclei respectively, and so this method cannot be used to increase the polarization of $`{}_{}{}^{1}\mathrm{H}`$, which has the highest gyromagnetic ratio among stable nuclei. A better approach for $`{}_{}{}^{1}\mathrm{H}`$ nuclei is to use the original Overhauser effect Overhauser:1953 ; Overhauser:1953b , which transfers polarization from electrons to nuclei. Initially greeted with scepticism Abragam:1989 , Overhauser’s theoretical predictions were confirmed by Carver and Slichter Carver:1953 , who demonstrated huge Overhauser enhancements in the spectrum of metallic $`{}_{}{}^{7}\mathrm{Li}`$. The Overhauser effect, and related phenomena collectively known as Dynamic Nuclear Polarization or DNP Hall:1997 ; Wind:1985 ; Muller-Warmutht:1983 , can be used to generate quite large polarization enhancements in a range of solid systems containing unpaired electrons, and when combined with optical techniques for pumping the electron polarization Lampel:1968 ; Patel:1999 dramatic enhancements can be observed Iinuma:2000 . However the technique performs poorly in the liquid state. Another technique which gives enhanced polarizations is Chemically Induced Dynamic Nuclear Polarization, or CIDNP Hore:1993 . Despite the name, CIDNP is unrelated to DNP; instead the non-equilibrium polarizations arise from a spin sorting mechanism which takes place during chemical reactions.
This intriguing effect has proved a powerful tool for investigating a range of biomolecular systems Hore:1993 ; Lyon:1999 , but the polarizations achievable are unlikely to be useful for quantum computation. Yet another possible approach is a family of experiments using *para*-hydrogen induced polarization, or PHIP Natterer:1997 . When hydrogen molecules are cooled into their rotational ground state, the Pauli principle dictates that the nuclear spin wavefunction of the two $`{}_{}{}^{1}\mathrm{H}`$ nuclei must be antisymmetric, and so the two nuclei must have *opposite* spin states. If *para*-hydrogen is used in an addition reaction, for example adding $`\text{H}_2`$ across a carbon–carbon double bond, then the product of the reaction will also have non-equilibrium spin states; these can be converted into a greatly enhanced polarization by conventional NMR pulse sequences. As *para*-hydrogen has a pure nuclear spin state it should in principle be possible to produce completely polarized molecules; in practice the enhancements are usually somewhat smaller Natterer:1997 . Unlike many of the schemes discussed above, PHIP works well in the liquid state using fairly “normal” organic molecules, and so may prove useful in NMR quantum computation. Unfortunately the scheme only allows the production of molecules which are highly polarized at two sites, and while it is in principle possible to use two or more addition reactions within the same molecule to produce polarization at four or more sites, creating an entire spin system by PHIP addition reactions does not seem a plausible process. However, as discussed below, in some cases it may suffice to produce high polarizations at a single spin; in this case PHIP may indeed turn out to be a useful approach.
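The spin-state bookkeeping behind the PHIP argument can be sketched in a few lines. The singlet amplitudes and basis ordering below are the standard conventions, not anything specific to a particular PHIP experiment: the two $`{}_{}{}^{1}\mathrm{H}`$ nuclei of *para*-hydrogen occupy the antisymmetric singlet state, which is pure but carries no net longitudinal polarization until a pulse sequence converts it.

```python
# Minimal sketch: the para-hydrogen nuclear singlet (|01> - |10>)/sqrt(2).
# Basis ordering: |00>, |01>, |10>, |11>.
import math

s = [0.0, 1 / math.sqrt(2), -1 / math.sqrt(2), 0.0]

# Antisymmetric under exchange of the two spins (swap |01> <-> |10>)
swapped = [s[0], s[2], s[1], s[3]]
assert all(abs(a + b) < 1e-12 for a, b in zip(swapped, s))

# Normalized (a pure state), but with zero net longitudinal polarization:
# <Iz + Sz> = sum over basis states of (m_I + m_S) * |amplitude|^2
m = [(0.5, 0.5), (0.5, -0.5), (-0.5, 0.5), (-0.5, -0.5)]
norm = sum(a * a for a in s)
pol = sum((mi + ms) * a * a for (mi, ms), a in zip(m, s))
print(round(norm, 12), round(pol, 12))  # 1.0 0.0
```

The point of the PHIP pulse sequences is precisely to turn this hidden order into observable polarization.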
### III.7 Reinitialization

One important point, frequently neglected in discussions of this kind, is that the methods described above are essentially methods for initializing an entire quantum computer at the start of a computation. They do not obviously permit the selective reinitialization of individual qubits in the middle of a computation. This makes it difficult to implement effective error correction schemes, as discussed in section VI below.

### III.8 Computational solutions

In addition to the physical approaches outlined above, two computational approaches might allow the problem of low spin polarization to be bypassed. The first approach, due to Schulman and Vazirani Schulman:1998 , uses computational methods to purify mixed states. Their scheme works by concentrating the polarization from a large number of weakly polarized spins into a small number of spins which become strongly polarized; alternatively the calculation can be thought of as occurring inside a small effectively pure subspace within the large spin space. Unfortunately the size of the subspace which may be extracted is $`O(ϵ^2n)`$, and so an $`n`$ spin quantum computer requires a spin system with $`O(n/ϵ^2)`$ spins; for the values of $`ϵ`$ achievable by direct cooling the resulting overhead is enormous. If, however, the polarization can be substantially increased by other means then this might provide a good method for further purification. Another recent theoretical discovery may allow all these problems to be sidestepped, as it may be possible to perform many important quantum computations without starting from a pure (or even pseudo-pure) ground state.
It has been known for some time that some quantum computations can be performed with a starting state comprising one pure qubit together with a number of qubits in the maximally mixed state Knill:1998b ; more recently it has been shown that Shor’s quantum factoring algorithm can be performed in this way Parker:2000 (but see section VII below). If this proves to be a useful approach then it could simplify the problem of constructing an NMR quantum computer, as single qubit pure states may be substantially easier to reach than their multi qubit equivalents (for example, by using *para*-hydrogen based schemes).

## IV Gates

In comparison with initialization, building quantum gates is a relatively simple process for NMR quantum computers. Indeed, as described above, many conventional NMR pulse sequences used in the molecular sciences can be viewed as sequences of quantum gates. Any quantum gate can be built out of one qubit and two qubit gates Barenco:1995a , and early work concentrated on building these gates in two qubit NMR quantum computers (NMR systems containing two spins). Unlike many other proposed implementations of quantum computation, these gates cannot simply be transferred to larger spin systems without modification, and there was some initial concern that this transfer might prove difficult, involving an exponential increase in the complexity of the gate design. In fact, while the gates do need to be modified, the transfer can be done fairly simply, and approaches are known for doing this with only quadratic overhead. More recently some authors have moved on from simply imitating gates used in “conventional” quantum computers, and started designing gates which harness the full power of the NMR Hamiltonian directly.

### IV.1 One qubit gates

One qubit gates are easy to implement in NMR as they correspond to rotations of a single spin within its own subspace.
This is most simply described using the Bloch vector model Hore:1995 ; Bloch:1956 in which the state of a spin (or indeed any qubit) is represented by a point on the surface of the Bloch sphere. One qubit gates correspond to rotations on this sphere, and can be achieved using resonant radiofrequency (RF) fields; the power and length of the RF pulse determine the rotation angle, while the phase of the RF radiation determines the rotation axis within the $`xy`$ plane. Rotations around other axes can be achieved by composite rotations, implemented by applying several RF pulses in sequence Freeman:1997 . This description assumes that it is possible to apply RF pulses selectively to individual qubits. In more conventional proposals for implementing quantum computation such qubit selection is achieved using spatial localisation: each qubit is stored on some physical object with a well defined location. This approach is not possible within NMR for three reasons. Firstly, each qubit is not stored on a single spin, but rather in an ensemble of spins distributed throughout the sample. Secondly, as the sample is in the liquid state individual spins are in continual rapid motion. Thirdly, the wavelength of RF radiation (around 1 metre) is huge compared with the separation between spins, rendering conventional spatial localisation impossible (although this could in principle be overcome by using techniques from magnetic resonance imaging (MRI) combined with enormous field gradients Callaghan:1993 ; Zhang:1998b ). Instead of using spatial selection, NMR quantum computation relies on frequency selection to pick out individual qubits. Different nuclear species have different gyromagnetic ratios, and thus different resonance frequencies; therefore RF pulses on resonance with one spin will have little or no effect on other spins.
With two or more nuclei of the same type the situation is slightly more complex, but even these will have different resonance frequencies as different chemical environments will result in different chemical shift (shielding) interactions Hore:1995 . In this case it is necessary to use low power selective RF pulses (“soft” pulses) Freeman:1997 , or their multipulse equivalents, Dante sequences Morris:1978 , in order to excite one spin while leaving other spins with only slightly different resonance frequencies untouched (the excitation bandwidth of an RF pulse depends on its power). This approach works well in small spin systems, but will be difficult in large systems as discussed below.

### IV.2 Two qubit gates

Two qubit gates are slightly more complex, as they require some sort of interaction between pairs of spins. In liquid state NMR quantum computers this is provided by scalar spin–spin coupling Hore:1995 . The NMR Hamiltonian describing two weakly coupled spins is $$\mathcal{H}=\omega _II_z+\omega _SS_z+\pi J_{IS}2I_zS_z$$ (5) where, following NMR practice, the spin operators are described using product operators Sorensen:1983 ; Hore:2000 , and energies are written as multiples of $`\hbar `$. The one spin angular momentum operators $`I_z`$ and $`S_z`$ are simply scaled versions of the Pauli matrices (the use of I and S to describe two spin systems is traditional Solomon:1955 ); $`\omega _I`$ and $`\omega _S`$ are the resonance frequencies (Larmor frequencies) of the two spins, and $`J_{IS}`$ is the scalar spin–spin coupling (J-coupling), usually measured in Hertz. Note that this simplified Hamiltonian is only correct in the *weak coupling limit*, that is when $`|2\pi J|\ll |\omega _I-\omega _S|`$.
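Because every term in equation (5) is diagonal in the computational basis, its spectrum can be written down directly. The sketch below uses illustrative numbers (arbitrary Larmor frequencies, and a cytosine-like $`J\approx 7.2\text{Hz}`$; none of these values are taken from the text) to show the doublet structure that the gate constructions discussed next exploit:

```python
# Energy levels and transitions for H = wI*Iz + wS*Sz + pi*J*2IzSz.
# Working in Hz rather than rad/s, the diagonal eigenvalues are
# E(mI, mS) = nuI*mI + nuS*mS + J*mI*mS  with mI, mS = +/- 1/2.
nuI, nuS, J = 500.13e6, 125.77e6, 7.2   # illustrative values only

def E(mI, mS):
    return nuI * mI + nuS * mS + J * mI * mS

# Spin-I transitions (mI flips, mS is a spectator): a doublet centred on
# nuI and split by J -- picking out one line of the doublet is the
# "direct" route to a controlled gate.
lines_I = sorted(E(+0.5, mS) - E(-0.5, mS) for mS in (+0.5, -0.5))
splitting = lines_I[1] - lines_I[0]
print(f"doublet splitting = {splitting:.2f} Hz")
```

The two lines sit at $`\nu _I\pm J/2`$, so the printed splitting is just $`J`$.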
The most obvious way to implement a two qubit gate, such as the controlled-not gate, is to use soft pulse techniques to selectively excite one spin only when its neighbouring spin is in one of its two eigenstates Barenco:1995b ; these two transitions have different energies as they are split by spin–spin coupling. This direct approach, picking out one transition from the multiplet corresponding to transitions of a single spin, is in fact identical to the old selective population transfer experiment Pachler:1973 , and so this is the oldest method for building quantum logic gates. More recently this approach has been used to build two and three qubit gates directly Linden:1998a ; Dorai:1999 ; Arvind:1999 . This direct approach is, however, relatively unpopular. Instead, most authors Cory:1996 ; Cory:1997 ; Gershenfeld:1997 ; Jones:1998a ; Chuang:1998a ; Chuang:1998b ; Jones:1998b ; Jones:1998c ; Marx:1999 ; Jones:1999a ; Vandersypen:1999 ; Cory:1998a ; Cory:1998b ; Laflamme:1998 ; Nielsen:1998 ; Madi:1998 ; Jones:1998d ; Linden:1999a ; Linden:1999b ; Leung:1999 ; Linden:1999c ; Price:1999a ; Price:1999b ; Jones:1999b ; Collins:1999 ; Jones:2000c prefer to use multipulse NMR techniques Ernst:1987 ; Goldman:1988 ; Munowitz:1988 ; Emsley:1992 ; Freeman:1997 to sculpt the Hamiltonian, equation (5), into a more suitable form. This is analogous to the replacement of selective population transfer by the INEPT pulse sequence Morris:1979 . The basic idea behind Hamiltonian sculpting is to use spin echoes Freeman:1997 ; Hahn:1950 ; Bagguley:1992 to refocus specific interactions in the Hamiltonian, thus creating an effective Hamiltonian obtained by rescaling elements of the original Hamiltonian, or indeed completely deleting them. While this approach to implementing two qubit gates can be described in a variety of different ways, they are all fundamentally equivalent. 
The central feature is the implementation of a controlled phase gate Jones:1998d , such as $$\mathit{\varphi }=\left(\begin{array}{cccc}1& 0& 0& 0\\ 0& 1& 0& 0\\ 0& 0& 1& 0\\ 0& 0& 0& e^{i\varphi }\end{array}\right).$$ (6) This can be decomposed in product operator notation as $$\mathit{\varphi }=\mathrm{exp}\left[-i\times {\scriptscriptstyle \frac{1}{2}}\varphi \times \left(({\scriptscriptstyle \frac{1}{2}}E)+I_z+S_z-2I_zS_z\right)\right].$$ (7) The first term ($`{\scriptscriptstyle \frac{1}{2}}E`$) can be ignored as it simply corresponds to an (undetectable) global phase; the remaining three terms can be achieved by sculpting the Hamiltonian, equation (5), into the desired form. More recently a third approach for implementing two qubit gates has been suggested Jones:2000a . Once again this method relies on the use of controlled phase gates, but the phase shifts are generated not by using conventional dynamic phases, but instead by geometric phases Shapere:1989 , such as Berry’s phase Berry:1984 . Berry phases have been demonstrated in a wide variety of systems Shapere:1989 , including NMR Suter:1987 ; Goldman:1996 and the closely related technique of NQR Tycko:1987 ; Appelt:1994 ; Jones:1995 ; Jones:1997 . They can be used to implement controlled phase shift gates in NMR systems Jones:2000a , but it seems that this approach has few advantages for NMR quantum computation over the more conventional dynamic approach; the idea may, however, prove useful in other systems Ekert:2000a .

### IV.3 Gate times

It is not sufficient simply to show that a gate can in principle be implemented; it is also important to consider how long it takes to implement it. As discussed in section V below, what matters is not the absolute time taken, but how this compares with the natural decoherence time. It is, therefore, important to consider what factors limit the rate at which NMR quantum logic gates operate. One qubit gates are not only simple to build, they are also rapid to operate.
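The controlled-phase decomposition above is easy to verify numerically, since every operator involved is diagonal and the matrix exponential reduces to elementwise exponentials. The sign convention used here, $`\mathrm{exp}[-i(\varphi /2)(\frac{1}{2}E+I_z+S_z-2I_zS_z)]`$, is one of two equivalent choices and reproduces the gate of equation (6) up to a global phase:

```python
# Verify that the product-operator decomposition of the controlled phase
# gate reproduces diag(1, 1, 1, e^{i*phi}) up to a global phase.
import cmath

phi = 1.234  # arbitrary test phase

# Diagonal elements in the basis |00>, |01>, |10>, |11>
half_E = [0.5, 0.5, 0.5, 0.5]
Iz     = [0.5, 0.5, -0.5, -0.5]
Sz     = [0.5, -0.5, 0.5, -0.5]
IzSz2  = [0.5, -0.5, -0.5, 0.5]   # 2*Iz*Sz

gen = [e + i + s - d for e, i, s, d in zip(half_E, Iz, Sz, IzSz2)]
gate = [cmath.exp(-1j * (phi / 2) * g) for g in gen]

# Remove the (undetectable) global phase by referencing the |00> element
rel = [g / gate[0] for g in gate]
target = [1, 1, 1, cmath.exp(1j * phi)]
assert all(abs(a - b) < 1e-12 for a, b in zip(rel, target))
print("controlled-phase decomposition verified")
```

Dropping the $`\frac{1}{2}E`$ term changes only the global phase, exactly as the text states.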
As single qubit gates correspond to rotations, a reasonable measure is provided by the inverse of the time required to perform a $`2\pi `$ rotation. For a fully heteronuclear spin system (that is, a spin system containing only one spin of any given nuclear species) this rate is limited only by the available RF power and the breakdown voltages of the RF coils; typical values lie in the range $`10`$–$`100\text{kHz}`$. For a homonuclear spin system the situation is more complex as it is necessary to use selective excitation. In this case the gate rate is constrained by the frequency difference between the resonances which must be excited and those which must be left untouched. This can be seen either by considering a Fourier spectral model of excitation Ernst:1987 , or by analysis of the “jump and return” pulse sequence Plateau:1982 . Thus in homonuclear spin systems the one qubit gate rate is often below $`1000\text{Hz}`$. Two qubit gates are typically much slower. If they are implemented by direct selective excitation, then the situation is the same as for one qubit gates in homonuclear systems, except that the relevant frequency splitting is the scalar coupling between the spins, $`J_{IS}`$. A similar argument applies if the gates are implemented using multiple pulse techniques: the relevant rate is the inverse of the time required to achieve the “antiphase” condition, $`1/2J_{IS}`$. Scalar couplings are quite variable, but they are frequently less than $`10\text{Hz}`$; thus the gate times required for two qubit gates can be quite long. In our two qubit NMR quantum computer based on cytosine, a two qubit gate takes about $`70\text{ms}`$. If implemented using geometric phases Jones:2000a , the time required is even greater. In order to reduce these two qubit gate times it is necessary to increase the size of the spin–spin coupling constant.
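These two qubit gate-time estimates amount to evaluating the antiphase time $`1/2J_{IS}`$. A quick sketch with illustrative coupling constants (only the 7.2 Hz entry is meant to be cytosine-like; the others are round numbers):

```python
# Rough two-qubit gate times implied by the antiphase condition 1/(2*J).
# The coupling constants below are illustrative, not measurements.
for J in (7.2, 10.0, 200.0):          # Hz
    t_gate = 1.0 / (2.0 * J)          # seconds
    print(f"J = {J:6.1f} Hz  ->  gate time ~ {t_gate * 1e3:6.1f} ms")
```

A 7.2 Hz coupling gives about 69 ms, consistent with the ~70 ms figure quoted above; larger couplings shorten the gate proportionately, which is why increasing the effective coupling is attractive.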
In general this is not possible, as $`J_{IS}`$ is fixed by the chemical system chosen (it is, of course, possible to choose a new system with a larger coupling constant, but this approach is obviously quite limited). It is, however, possible to change the apparent size of $`J_{IS}`$ by partially reintroducing dipolar couplings. The scalar coupling is in fact a quite small effect in comparison with the principal spin–spin coupling interaction, dipolar coupling. This arises from the direct magnetic interaction between two magnetic dipoles, and in the high field approximation Ernst:1987 $$\mathcal{H}_D=\frac{\mu _0\gamma _I\gamma _S\hbar (1-3\mathrm{cos}^2\theta )}{8\pi r^3}\left(3I_zS_z-\mathbf{I}\cdot \mathbf{S}\right),$$ (8) where $`r`$ is the length of the internuclear vector $`𝒓`$, and $`\theta `$ is the angle between $`𝒓`$ and the magnetic field. In solid samples this results in coupling between one spin and all its neighbours, but in liquids and solutions rapid motion causes $`𝒓`$, and thus $`\theta `$, to fluctuate, so that $`\mathcal{H}_D`$ is replaced by its isotropic average, which is zero. For this reason dipolar coupling has little direct effect in liquid state spectra (it cannot be entirely neglected, as it is one of the main sources of spin relaxation). The scalar coupling is a correction to the dipolar coupling which arises from the Fermi contact interaction: valence electrons can interact with two or more nuclei and thus mediate an interaction between them. Like dipolar coupling, scalar coupling is anisotropic, but its isotropic average is non-zero; thus the relatively small isotropic scalar coupling is the dominant spin–spin interaction in the liquid state. Dipolar coupling is removed because isotropic tumbling reduces it to its isotropic average. If the motion is *anisotropic* then the average may become non-zero; this can occur if the anisotropy of the molecular magnetic susceptibility causes molecules to align slightly with the magnetic field, resulting in small residual dipolar couplings.
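The vanishing isotropic average is easy to confirm: averaging the angular factor of the dipolar Hamiltonian over the sphere, with the usual $`\mathrm{sin}\theta `$ weight, gives exactly zero (a simple numerical quadrature, for illustration only):

```python
# Isotropic average of the dipolar angular factor (1 - 3*cos^2(theta)).
# Average over the sphere: (1/2) * integral of f(theta)*sin(theta) dtheta
# from 0 to pi, evaluated here by a midpoint rule.
import math

N = 200_000
avg = sum((1 - 3 * math.cos(theta) ** 2) * math.sin(theta)
          for theta in (math.pi * (k + 0.5) / N for k in range(N)))
avg *= (math.pi / N) / 2   # quadrature step and spherical normalization

assert abs(avg) < 1e-6
print("isotropic average is zero to numerical precision")
```

Under anisotropic motion the orientations are no longer sampled uniformly, so the average, and hence a residual coupling, can survive.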
The effect is largest when the magnetic susceptibility is high, and this approach has proved helpful in investigating proteins bound to oligonucleotides Tjandra:1997 . Larger alignments, and thus larger residual couplings, can be observed using liquid crystalline solvents or cosolvents Bax:1997 , which align strongly in the magnetic field and then themselves act to order other dissolved molecules. This approach has been used Yannoni:1999 to implement Grover’s quantum search on a two qubit NMR quantum computer based on chloroform dissolved in a liquid crystal solvent. The effective Hamiltonian in this system is $$\mathcal{H}=\omega _II_z+\omega _SS_z+\pi (J_{IS}+2D)2I_zS_z$$ (9) where $`D`$ is the residual dipolar coupling Yannoni:1999 , allowing the gate rate to be increased by a factor of $`8`$. It is not yet clear how generally useful this approach will be.

### IV.4 Two qubit gates in larger systems

The description above is confined not only to one and two qubit gates, but in fact to such gates in two qubit computers. If these gates are to be used to build networks with three or more qubits it is necessary to consider how they will function in larger spin systems. The situation for one qubit gates is relatively straightforward, as these can still be implemented using selective RF pulses (although the problem of frequency selection, discussed below, becomes more serious). For two qubit gates, however, a more detailed analysis is necessary. For a general $`n`$ spin system, the Hamiltonian is $$\mathcal{H}=\underset{i}{\sum }\omega _iI_z^i+\underset{i<k}{\sum }\pi J_{ik}2I_z^iI_z^k,$$ (10) including a Larmor frequency term for each of the $`n`$ spins, and a total of $`n(n-1)/2`$ spin–spin coupling terms, connecting every pair of spins. In real systems some of the $`J_{ik}`$ coupling constants will be so small that they can be neglected, but it is still useful to consider the most general case.
Consider, for example, a system of three spins, conventionally called $`I`$, $`R`$ and $`S`$; in this case there will be three Larmor frequency terms, and three spin–spin couplings, $`J_{IR}`$, $`J_{IS}`$, and $`J_{RS}`$. It is in principle relatively simple to implement gates using the direct method. Each transition is split into four (under the influence of two couplings); if selective excitation is applied at just one of these four frequencies this corresponds to a doubly controlled three qubit gate, such as the Toffoli gate. Alternatively, by exciting two transitions, two qubit gates such as controlled-not can be achieved. In practice in large spin systems it will become difficult to pick out the right set of transition frequencies, and it is likely that some transitions will overlap. The more common approach is to use Hamiltonian sculpting to convert equation (10) into the desired form. Clearly this requires refocusing the $`n-2`$ additional Larmor frequency terms, as well as the $`(n-1)(n-2)/2`$ extra spin–spin couplings. This can be achieved using spin echoes Linden:1999a ; Jones:1999b . Unfortunately it is not possible to use just a single echo to refocus all the interactions; instead it is necessary to consider how the echo sequences interact with one another. The simplest method is to nest spin echoes applied to each spin within one another, but this naïve approach requires an exponentially large number of refocusing pulses. Fortunately this problem can be sidestepped by using efficient refocusing sequences Jones:1999b ; Leung:1999b , which allow refocusing to be achieved with a quadratic overhead. It is rare to find a large spin system where all the couplings have significant size. Coupling is a fairly local effect and so large coupling constants are only found between spins that are fairly close within the spin system; thus the coupling network is described by a non-complete graph Jones:1999b .
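A sketch of how such efficient refocusing sequences work, assuming the Hadamard-matrix construction (the details below are illustrative rather than a transcription of the cited sequences): each spin is assigned a pattern of ±1 inversions over equal time intervals, and a coupling survives only if the two spins share a pattern, since distinct Hadamard rows are orthogonal.

```python
# Sketch of efficient refocusing via Hadamard sign patterns (assumed
# scheme). A coupling between spins a and b accumulates in proportion to
# sum_t row_a[t]*row_b[t], which vanishes for distinct (orthogonal) rows.
def hadamard(n):
    """Sylvester construction; n must be a power of 2."""
    H = [[1]]
    while len(H) < n:
        H = [r + r for r in H] + [r + [-x for x in r] for r in H]
    return H

H = hadamard(4)
rows = {"I": H[1], "R": H[2], "S": H[1]}   # I and S share a row: J_IS kept

for a, b in (("I", "R"), ("R", "S"), ("I", "S")):
    acc = sum(x * y for x, y in zip(rows[a], rows[b]))
    print(f"J_{a}{b} scaled by {acc}/4")   # IR, RS refocused; IS retained
```

The number of intervals grows only polynomially with the number of spins (a separate trick handles the Larmor terms), and in a sparsely coupled system only the patterns for pairs that are actually coupled matter.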
This greatly simplifies the problem: not only does it reduce the number of couplings which have to be refocused, but it also simplifies the echo patterns needed to refocus the Larmor frequencies Linden:1999c ; Jones:1999b . Thus the overhead required for refocusing echoes is greatly reduced. It might seem that it would not be possible to implement all the desired logic gates in such a spin system, as some of the requisite spin–spin couplings are missing. In fact this is not a problem, as long as every pair of spins is connected by some chain of couplings. Quantum swap gates Madi:1998 ; Linden:1999b can be used to move quantum information along this chain; the overhead imposed by these swap gates is at worst linear in the size of the spin system. In most cases the advantages of using a partially coupled spin system significantly outweigh the disadvantages.

### IV.5 Multiqubit gates

As described above, NMR techniques can be used to implement conventional one and two qubit gates, and thus any desired quantum network. This approach, imitating existing theoretical models of quantum computers, was adopted by all the early papers on NMR quantum computation. It is not clear, however, that this is the best approach: it might be more sensible to consider what types of gates NMR systems are good at providing, and then seeing whether these are valuable in quantum computation. To take a simple example, the natural two qubit gate for NMR systems is the controlled phase shift, not the controlled-not gate, and in many quantum algorithms phase shifts are exactly what is needed. For example, NMR quantum computers can readily implement Grover’s quantum search without using an ancilla qubit to convert oracle calls into phase shifts Chuang:1998b ; Jones:1998b ; Jones:1998c . A more complex, but more interesting, example is the implementation of multiqubit gates.
As the NMR Hamiltonian, equation (10), contains terms connecting multiple pairs of qubits, it should be possible to use this Hamiltonian to build certain multiqubit gates directly. This was briefly discussed above, when considering the direct implementation of Toffoli gates in a multi-spin system, but Hamiltonian sculpting should allow the approach to be used more widely. To date this idea has received only brief analysis Price:1999b , but this is likely to be a productive source of gates in the next few years.

### IV.6 The problem of selective excitation

From the descriptions above it might seem that implementing quantum logic gates in NMR quantum computers is essentially solved, and that the solutions scale fairly well. This is almost, but not quite, true as one major problem remains: selective excitation. All the techniques described above rest on the assumption that it is possible to address individual qubits, so that interactions can be refocused. For NMR quantum computation (or indeed any NMR pulse sequence) this is achieved using frequency selection, rather than more conventional spatial localisation techniques, and this approach is only possible if the NMR transitions all have well separated frequencies; in particular it is simplest if the separation between any pair of Larmor frequencies is much greater than the width of the NMR multiplets, that is the sum of the spin–spin couplings. Note that in contrast with conventional NMR experiments it is not sufficient simply to selectively excite one spin; it is also essential that other spins remain *completely* unaffected. With small spin systems this is fairly easy to achieve, but with larger systems it can become quite difficult. For example, working at a $`{}_{}{}^{1}\mathrm{H}`$ frequency of about $`500\text{MHz}`$, the range of Larmor frequencies found for $`{}_{}{}^{1}\mathrm{H}`$ nuclei in simple organic compounds is only about $`5000\text{Hz}`$, and this limited frequency range can soon “fill up”.
Increasing the number of spins not only increases the number of NMR multiplets, but also increases the width of each multiplet by introducing more spin–spin couplings. With other nuclei the situation is similar, although usually less serious (the $`{}_{}{}^{1}\mathrm{H}`$ frequency range is unusually narrow). A partial solution is provided by turning to heteronuclear spin systems. As the NMR frequencies of different nuclei are very different, it is trivial to achieve nucleus selective, and thus spin selective, excitation. The comparative simplicity of heteronuclear NMR quantum computation explains its early, and enduring, popularity. Unfortunately this approach cannot be continued indefinitely, as the number of suitable nuclei is small, the obvious candidates being $`{}_{}{}^{1}\mathrm{H}`$, $`{}_{}{}^{13}\mathrm{C}`$, $`{}_{}{}^{15}\mathrm{N}`$, $`{}_{}{}^{19}\mathrm{F}`$ and $`{}_{}{}^{31}\mathrm{P}`$. It seems likely that the problem of selective excitation will prove a serious difficulty in constructing large NMR quantum computers. Although it is difficult to assess exactly what the limit will be, it is notable that the largest number of spins of one nuclear type used to date is six $`{}_{}{}^{1}\mathrm{H}`$ nuclei Linden:1999c ; one other paper has described computations involving four $`{}_{}{}^{13}\mathrm{C}`$ nuclei and three $`{}_{}{}^{1}\mathrm{H}`$ nuclei Knill:1999 , while all other authors have used at most three spins of any one nuclear type. Assuming that it is practical to address six spins of each of the five nuclei listed above, this suggests a limit of around 30 qubits imposed by the problem of selective excitation. Actually designing and synthesising such a spin system is another problem entirely.

## V Decoherence

In order to perform large quantum computations it is essential that errors be kept under control.
In practice this means that the decoherence time must be very long in comparison with the gate time, although methods of error correction (see section VI) allow this criterion to be slightly relaxed. The situation for NMR quantum computation might appear extremely good, as systems with very long relaxation times are known: for example, the spin–lattice relaxation time ($`T_1`$) of $`{}_{}{}^{129}\mathrm{Xe}`$ can be thousands of seconds Haake:1998 . As scalar spin–spin coupling constants can reach hundreds of Hertz, and dipolar couplings can be even larger, a naïve calculation suggests that it should be possible to implement about $`10^4`$–$`10^6`$ gates before decoherence becomes a serious problem. In fact this calculation is meaningless for two reasons. Firstly, these extremely long relaxation times are always spin–lattice relaxation times; the spin–spin relaxation times ($`T_2`$), which provide a better measure of the decoherence rate, are much shorter (typically below ten seconds). Secondly, the relaxation times and gate rates are taken from different spin systems, and it is not possible to simultaneously achieve them in a single molecule. The long relaxation times observed for $`{}_{}{}^{129}\mathrm{Xe}`$ gas arise precisely because Xe gas atoms have only weak interactions with their neighbours, and such systems are of little apparent use for quantum computation as they provide no mechanism for logic gates. As the scalar coupling is related to an underlying dipolar coupling, and dipolar coupling is one of the two principal sources of relaxation in spin-$`\frac{1}{2}`$ nuclei, spin systems suitable for constructing NMR quantum computers inevitably possess shorter relaxation times. One further feature of decoherence in NMR quantum computation deserves consideration. As NMR systems are ensemble quantum computers, the effect of decoherence is not simply to introduce errors, as occurs with more conventional designs.
Rather these errors must be averaged over the ensemble, and if, as is often the case, these errors are fairly random in character, the overall effect is that the error terms will largely cancel out. Thus the principal effect of decoherence is to reduce the apparent signal strength. This is clearly visible in some quantum counting experiments Jones:1999a ; Cummins:2000 , where decoherence appears as an exponential decay in signal. It is difficult to estimate a realistic limit to NMR quantum computation arising from decoherence; however, current experiments have been performed involving hundreds of logic gates Jones:1999a ; Cummins:2000 , and it seems likely that the limit is about one thousand gates.

## VI Quantum error correction

Quantum error correction, and its companion fault tolerant computation Shor:1995 ; Steane:1996 ; DiVincenzo:1996 ; Preskill:1998 ; Steane:1998 ; Steane:1999 , play a central role in considerations of the practicality of large quantum computers: while small quantum computers may handle errors fairly well, the fragility of highly entangled states renders large quantum computations highly vulnerable. Error correction tackles this by diagnosing these errors and correcting them, while fault tolerant computation provides methods for minimizing the spread of errors, and in particular reducing the impact of errors in error correction schemes. Simple examples of such schemes have been implemented on NMR quantum computers Cory:1998b ; Leung:1999 . The importance of these techniques can hardly be overstated; until their discovery many authors believed that it would be completely impossible to build a large quantum computer as it would be hopelessly error prone. Unfortunately this benefit comes at a price: error correction involves a substantial overhead as each logical qubit must be encoded using many ancilla qubits. While this overhead has been substantially reduced by new codes Steane:1999 it is still at least an order of magnitude.
More importantly, however, this assumes that it is possible to reuse ancilla qubits in order to repeatedly correct errors; in turn this requires the ability to reinitialize ancilla qubits at will. With current NMR quantum computers this cannot be achieved, and it is necessary to use a supply of fresh ancilla qubits at each stage. In this case the overhead becomes so large as to be completely impractical. ## VII Read-out Once a quantum computation has been performed it is necessary to use some read-out scheme in order to extract the result. As NMR quantum computation is implemented using an ensemble of spin systems, read-out involves ensemble measurements, and thus expectation values. This can have a number of profound consequences. In simple cases NMR read-out is little different from more conventional schemes. Suppose that a quantum computation ends with the answer qubits in eigenstates. In this case it is only necessary to determine whether each qubit is in state $`|0`$ or $`|1`$, which is equivalent to determining the expectation value of $`\sigma _z`$. This can be achieved either by exciting the corresponding spin with an RF pulse and observing the phase of the NMR signal Jones:1998a ; Jones:1999a , or by examining the multiplet structure in the NMR spectrum of a neighbouring spin, or by a combination of these techniques Chuang:1998a . A more interesting situation occurs when the algorithm ends with one or more answer qubits in superposition states. A conventional quantum computer will return one of the corresponding answers at random, while an ensemble quantum computer will return an ensemble average over the set of all possible answers Gershenfeld:1997 ; Jones:1999a . In most cases this result is not particularly useful, and it is necessary to recast the algorithm so that a single well defined result is obtained Gershenfeld:1997 ; Jones:1999a . In some special cases, however, ensemble measurements can be advantageous Jones:1999a . 
A similar situation arises when NMR techniques are used to implement phenomena such as quantum teleportation Nielsen:1998 ; Bennett:1993 . Traditional teleportation schemes use strong measurements to project an unknown quantum state into the Bell basis; classical results from these measurements are then used to determine which of a set of unitary transformations must be used to finish the protocol. In NMR teleportation, however, such projective measurements and classical readout cannot be used. Instead it is necessary to use conditional evolution to perform the final unitary transformation Nielsen:1998 . This lack of projective measurements may also have consequences for quantum computation using a single pure qubit Knill:1998b ; Parker:2000 . The model of Knill and Laflamme Knill:1998b assumes only ensemble measurements, and so accurately reflects the nature of NMR quantum computation, while the work of Parker and Plenio Parker:2000 assumes that projective measurements can be made. It remains to be seen how significant this difference actually is. ## VIII Conclusions It is useful to draw some overall conclusions about the potential usefulness of liquid state NMR as a technique for implementing *large* quantum computers. For *small* quantum computers liquid state NMR techniques are well ahead of the competition: indeed in most areas of experimental quantum information processing there quite simply is no competition! However, there are several serious difficulties with extending this approach to large systems, and it seems unlikely that any very large liquid state NMR quantum computer will ever be built. The issue most commonly raised is initialization: the pseudo-pure states used in NMR are far from pure, with typical polarizations below $`10^4`$, and, more seriously, an exponential fall off as the size of the computer is increased. This alone would appear to limit liquid state NMR quantum computers to about $`30`$ qubits. 
In fact this assessment is probably too pessimistic. There are many techniques for increasing signal strengths, and while none of them offer an immediate solution, several have potential. Furthermore, recent theoretical results suggest that pure states may be less important than previously believed. If low polarizations were the only difficulty preventing the construction of a large scale NMR quantum computer, then it seems highly likely that a solution would be found. Unfortunately there are other, more serious, problems. Constructing quantum logic gates in small systems is easy, indeed almost trivial; this simplicity explains the rapid initial progress in NMR quantum computation. Early concerns about a potential exponential increase in the complexity of implementing these gates have proved unfounded, with the discovery of methods for implementing two qubit gates in multi-spin systems with (at worst) quadratic overhead. Little thought has, however, been paid to the problem of selective excitation and the crowding of frequency space; in my personal opinion this is likely to be the first serious barrier to building NMR quantum computers with many more than ten qubits. The problem of decoherence is, of course, common to all potential implementations of quantum computers. The relatively long decoherence times of nuclear spins, which appear to make them good candidates as qubits, arise because they interact only weakly with their environment. Unfortunately these weak interactions are also manifested as low gate rates, and so the ratio of decoherence time to gate time is not as large as one might hope. Nevertheless NMR decoherence processes are fairly well behaved, and are not likely to prove an insuperable problem in systems of ten qubits. 
The difficulty of performing practical quantum error correction schemes, arising from the lack of selective reinitialization, is a serious problem for building *large* quantum computers; this, however, is only likely to become an issue if all other difficulties are solved. The inability to perform projective measurements, and the difficulty of dealing with ensemble averaged data, is probably not a serious problem in its own right. This property makes it difficult to use NMR experiments to test fundamental questions in quantum mechanics, but has few major implications for computation. Indeed in some cases, such as when considering the effects of decoherence, it can actually be an advantage. Unlike some other issues, however, this limitation appears to be inherent in the liquid state NMR experiment, and it is not at all clear how it can be bypassed. Finally, it is interesting to compare current liquid state NMR quantum computers with Kane’s radically different, but ultimately related, proposal for a solid state NMR quantum computer Kane:1998 . It is notable that Kane’s proposal keeps many of the advantages of NMR, but also manages to tackle some of the most serious difficulties described above. Firstly, Kane’s proposal uses external “gates” to modulate both the Larmor frequencies and the spin–spin coupling constants of nuclei. This should remove all the difficulties with implementing quantum logic gates. Secondly, the proposal includes a scheme for making projective measurements on single spins, thus also providing a simple initialization scheme. Even if, as seems likely, a large liquid state NMR quantum computer is never built, quantum computation will still owe much to NMR. By providing the first working quantum computers, no matter how small, NMR has reinvigorated the field. Tricks long known to NMR spectroscopists are now being applied in NMR quantum computations, and many of these will have applications in other technologies. 
NMR quantum computation has had, and still has, much to offer. ## Acknowledgements I thank Mark Bowdrey, Holly Cummins, Patrick Hayden, Peter Hore, Charles Lyon and Tanja Pietraß for helpful conversations. I am grateful to the Royal Society of London for a University Research Fellowship.
# TUM-HEP-366/00 LPT-Orsay/00-16 Final State Interactions and 𝜀'/𝜀: A Critical Look ## 1 Introduction One of the crucial issues for our understanding of CP violation is the question whether the size of the observed direct CP violation in $`K_L\pi \pi `$ decays, expressed in terms of the ratio $`\epsilon ^{}/\epsilon `$, can be described within the Standard Model. Experimentally the grand average, including the results from NA31 , E731 , KTeV and NA48 , reads $$\mathrm{Re}(\epsilon ^{}/\epsilon )=(21.2\pm 4.6)\times 10^4.$$ (1) There are different opinions on whether this result can be accommodated within the Standard Model. In the central value of the predicted $`\epsilon ^{}/\epsilon `$ is typically a factor of 2-3 below the data, with an estimated theoretical uncertainty of the order of $`60÷100`$ %. The result in Eq. (1), then, can only be accommodated if all relevant parameters are chosen simultaneously close to their extreme values. Higher values of $`\epsilon ^{}/\epsilon `$, compatible with the data within one standard deviation, have been found instead in . Recent reviews can be found in . More recently, following the previous work of Truong , it has been pointed out by Pallante and Pich that the inclusion of final state interactions substantially affects the estimates of $`\epsilon ^{}/\epsilon `$. The size of the effect is parametrized in terms of dispersive correction factors $`_I`$ multiplying the corresponding isospin amplitudes $`A_I`$. With $`_01.4`$ and $`_20.9`$, as found in their numerical evaluation, a substantial enhancement of $`\epsilon ^{}/\epsilon `$ was obtained. Indeed, by including these factors, the authors of Ref. find that the central value $`\epsilon ^{}/\epsilon =7.0\times 10^4`$ of gets increased to $`15\times 10^4`$, which is much closer to the experimental average (1). Similar results, although with a different dispersive approach, have been found in . 
The interesting paper of Pallante and Pich stimulated us to take a closer look at the whole procedure followed to obtain the correction factors $`_I`$. The phenomenological application of the Muskhelishvili-Omnès equation to the $`K\pi \pi `$ case requires some arguable assumptions (see Section 4). However, even within those assumptions, we found a major objection which unfortunately makes the calculation of Ref. questionable. On the basis of symmetry arguments only, one can show that there is a point, outside the physical cut, where the amplitudes induced by left-handed operators are expected to vanish. As demonstrated below, the authors of Ref. made an assumption on the value of the derivative of the amplitude at this point which is not justified. All their results rely on this assumption. We will show that, by replacing the latter condition with others, equally acceptable on physical grounds, it is possible to obtain very different values of the correction factors $`_I`$. Thus we conclude that the results found in are subject to substantial uncertainties. In spite of this, we found an interesting application of the dispersive analysis to non-perturbative calculations of weak amplitudes. For example, lattice calculations could provide a condition which replaces the arbitrary assumption on the derivative of the amplitude and fixes unambiguously the factors $`_I`$ (under the hypotheses inherent to the use of the Muskhelishvili-Omnès equation in $`K\pi \pi `$ decays). Our paper is organized as follows. In Section 2 we present general formulae for the Muskhelishvili-Omnès solution of dispersion relations applied to $`K\pi \pi `$ amplitudes, focusing on the assumptions needed to obtain the final result. Using these formulae we demonstrate in Section 3 that the evaluation of the dispersive factors $`_I`$ along the lines proposed in is ambiguous. 
In particular we stress that there is no justification for using the numerical values of $`_I`$ found in in conjunction with present lattice and large-$`N_C`$ calculations of hadronic matrix elements. Similar comments apply to . In Section 4 we briefly illustrate how modifications of the technique developed in could in principle improve the accuracy of the predictions for $`\epsilon ^{}/\epsilon `$ in conjunction with future lattice calculations. We conclude in Section 5. ## 2 General formulae In this section we recall the basic ingredients used to derive the solution of the Muskhelishvili-Omnès equation in the case of $`K\pi \pi `$ amplitudes. The starting point is the $`N`$-subtracted dispersion relation. We denote by $`A(s)`$ the decay amplitude of a kaon of mass $`m_K=\sqrt{s}`$ into two pions of mass $`m_\pi `$. The invariant mass of the two pions is taken as $`(p_1+p_2)^2=s`$, corresponding to the insertion of a weak operator carrying zero momentum. The following discussion applies as well to the case where $`A(s)`$ is the $`K\pi \pi `$ matrix element of a local operator renormalized at a fixed scale $`\mu `$. Following Ref. we assume that $`A(s)`$ is analytic in the cut $`s`$ plane, with the cut going from $`4m_\pi ^2`$ to $`\mathrm{}`$ <sup>1</sup><sup>1</sup>1 These analyticity properties of $`A(s)`$ are certainly not fully correct. However one can argue that Eq. (2) still holds to a reasonable accuracy in a region close to the physical point (see Section 4).. Further assuming that $`s^NA(s)0`$ for $`s\mathrm{}`$, we can write $$A(s)=P_{N1}(s,s_0)+\frac{(ss_0)^N}{\pi }_{4m_\pi ^2}^{\mathrm{}}𝑑z\frac{\mathrm{Im}A(z)}{(zs_0)^N(zsiϵ)},$$ (2) where $`P_{N1}(s,s_0)`$ is an $`N1`$ degree polynomial in $`s`$, the coefficients of which are to be fixed by $`N`$ independent conditions on the amplitude, and $`s_0`$ denotes the subtraction point (in general one could choose $`N`$ different subtraction points), which has to lie outside the physical cut. 
Writing the imaginary part of the amplitude as $$\mathrm{Im}A(s)=A(s)e^{i\delta (s)}\mathrm{sin}\delta (s),$$ (3) the dispersion relation (2) becomes a Muskhelishvili-Omnès equation . Here $`\delta (s)`$ is the strong phase of the amplitude, which in the elastic region is equal to the phase shift of the $`\pi \pi `$ scattering appropriate to a given isospin channel <sup>2</sup><sup>2</sup>2 Note that even in the elastic region $`\delta (s)`$ can be identified with the phase shift measured from $`\pi \pi `$ scattering only if the dependence of the latter on the kaon mass is negligible. We thank G. Colangelo for pointing this out to us.. The general solution of this equation is given by $$A(s)=Q_{N1}(s)O(s),$$ (4) where $$O(s)=\mathrm{exp}\left(\frac{1}{\pi }_{4m_\pi ^2}^{\mathrm{}}𝑑z\frac{\delta (z)}{(zsiϵ)}\right)$$ (5) and $`Q_{N1}(s)`$ is a $`N1`$ degree polynomial. Equation (4) shows explicitly that $`N`$ independent conditions on $`A(s)`$ and the knowledge of the phase on the whole cut are sufficient to fully determine the amplitude. Note that there is no longer any reference to the subtraction point $`s_0`$ of the original dispersion relation. We point out that, if we were able to obtain a number of conditions on $`A(s)`$ larger than $`N`$, we could reduce the sensitivity of the solution to the knowledge of $`\delta (s)`$ at large $`s`$. Indeed, we can always rewrite the solution (4) as $`A(s)`$ $`=`$ $`Q_{N1}(s)\mathrm{exp}\left({\displaystyle \underset{i=0}{\overset{M}{}}}c_is^i\right)`$ (6) $`\times \mathrm{exp}\left({\displaystyle \frac{s^{M+1}}{\pi }}{\displaystyle _{4m_\pi ^2}^{\mathrm{}}}𝑑z{\displaystyle \frac{\delta (z)}{z^{M+1}(zsiϵ)}}\right),`$ where the product of the first two terms is completely determined by $`N+M`$ independent conditions. If possible, this could be useful since, in practice, $`\delta (s)`$ can be extracted from the data only up to the inelastic threshold. 
To make contact with the work of Pallante and Pich , we now discuss two examples on how $`A(s)`$ is determined from a specific set of conditions in the case $`N=2`$. In this case, corresponding to an amplitude going at most linearly in $`s`$ for large $`s`$, two conditions on the amplitude are required: 1. In order to fix the coefficients of $`Q_1(s)`$, a possible set of conditions is given by the knowledge of the amplitudes at two different points $`s_1`$ and $`s_2`$, namely $$A(s_1)=A_1,A(s_2)=A_2.$$ (7) In this case, one finds $$A(s)=\frac{A_1O(s_2)(ss_2)+A_2O(s_1)(s_1s)}{(s_1s_2)O(s_1)O(s_2)}O(s).$$ (8) We stress that, although the r.h.s of this equation is written in terms of $`s_{1,2}`$, the physical amplitude is obviously independent of the choice of these points, provided that $`A_{1,2}`$ are given by Eq. (7). 2. A different set of conditions can be given by $$A(s_1)=0,A^{}(s_1)=C,$$ (9) namely by knowing the point $`s_1`$ where the amplitude vanishes, or assumes a specific value, and its first derivative in that point. With the conditions in Eq. (9) the solution is particularly simple and takes the form $$A(s)=C(ss_1)O(s_1)^1O(s).$$ (10) ## 3 Why we are not able to compute universal dispersive factors We now critically examine the procedure followed by Pallante and Pich to compute the so-called dispersive factors $`_I`$ for the $`I=0`$ and $`I=2`$ amplitudes. To this end, we start by summarizing the main steps of the procedure adopted in Ref. . On the basis of the lowest-order chiral Lagrangian, Pallante and Pich assume that the solution of the Muskhelishvili-Omnès equation is given by $$A_I(s)=C_I(sm_\pi ^2)\mathrm{\Omega }_I(s),$$ (11) where $$\mathrm{\Omega }_I(s)=\frac{O_I(s)}{O_I(m_\pi ^2)}e^{i\delta _I(s)}_I(s).$$ (12) Here $`O_I`$ is given by Eq. (5) with $`\delta `$ replaced by $`\delta _I`$, the experimental elastic $`\pi \pi (I)`$ phase shift, integrated up to a certain cutoff. This solution corresponds to the case in Eq. 
(10) with $`s_1=m_\pi ^2`$, i.e. the authors of Ref. impose $`A_I(m_\pi ^2)=0`$ and implicitly assume the knowledge of $`A_I^{}(m_\pi ^2)=C_I`$.<sup>3</sup><sup>3</sup>3 The need for these two assumptions is common also to Ref. , as pointed out in . In order to fulfill the latter condition, they assume without justification that the matrix element at the physical point, $$_I=\pi \pi (I)|_{\mathrm{eff}}|K|_{s=m_K^2},$$ (13) provided by non-perturbative methods such as lattice QCD or large $`N_C`$ estimates, corresponds to $`C_I(sm_\pi ^2)|_{s=m_K^2}`$. Their solution can thus be written as $$A_I(s)=_I\frac{(sm_\pi ^2)}{(m_K^2m_\pi ^2)}e^{i\delta _I(s)}_I(s)$$ (14) and the dispersive factors $`_I`$ are simply given by $$_I_I(m_K^2)=\left|\frac{A_I(m_K^2)}{_I}\right|=\left|\frac{O_I(m_K^2)}{O_I(m_\pi ^2)}\right|.$$ (15) We have the following comments on this approach: * One of the two conditions used to determine the solution is $`A_I(m_\pi ^2)=0`$. This is justified in Ref. on the basis of the lowest-order chiral realization of $`(8_L,1_R)`$ and $`(27_L,1_R)`$ operators. The zero at $`s=m_\pi ^2`$ of the amplitudes induced by these operators actually holds beyond the lowest order . Indeed, it is a consequence of the vanishing of these amplitudes in the $`SU(3)`$ limit, as follows from an old result of Cabibbo and Gell-Mann . In modern language, we can rephrase the result of Ref. by saying that the on-shell $`K2\pi `$ matrix element of any local $`(8_L,1_R)`$ operator invariant under $`CPS`$ symmetry vanishes in the absence of $`SU(3)`$ breaking. Therefore we think that this condition is well justified in the case of $`(8_L,1_R)`$ operators and, particularly, in the case of $`Q_6`$ considered in . * Our main criticism is related to the determination of the second condition on $`A_I(s)`$, namely the one on $`A_I^{}(m_\pi ^2)`$. Chiral perturbation theory alone cannot fix the value of this derivative, which must be known from some non-perturbative information. 
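Incidentally, the derivative condition of Eq. (9) is just the limiting case of fixing the amplitude at a second point. This short check (not spelled out in the text) sets $`A_1=0`$ in Eq. (8) and lets $`s_2`$ tend to $`s_1`$:

```latex
% With A_1 = 0 at s = s_1, Eq. (8) reduces to
A(s) = \frac{A_2\, O(s_1)\,(s_1-s)}{(s_1-s_2)\, O(s_1)\, O(s_2)}\, O(s)
     = \frac{A_2\,(s-s_1)}{(s_2-s_1)\, O(s_2)}\, O(s) \, .
% Letting s_2 \to s_1 with A'(s_1) = C, one has
% A_2 = A(s_2) \simeq C\,(s_2-s_1) and O(s_2) \to O(s_1), so that
A(s) \;\longrightarrow\; C\,(s-s_1)\, O(s_1)^{-1}\, O(s) \, ,
% which is exactly Eq. (10).
```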
In their procedure, Pallante and Pich implicitly assume that the matrix elements determined by non-perturbative methods such as lattice or large $`N_C`$ fix the value of $`A_I^{}(m_\pi ^2)`$ according to the condition $$A_I^{}(s=m_\pi ^2)=C_I=\frac{_I}{m_K^2m_\pi ^2}.$$ (16) This assumption, which may look reasonable on the basis of the lowest-order chiral Lagrangian, actually involves an ambiguity of the same order as the dispersive correction itself. To illustrate this point, we make a different choice of the initial conditions (while keeping the same $`_I`$): we use $`_I`$ to fix the value of the amplitude at threshold, via the relation $$A_I(s=4m_\pi ^2)=_I\frac{3m_\pi ^2}{m_K^2m_\pi ^2}.$$ (17) In other words, we assume that $`_I(sm_\pi ^2)/(m_K^2m_\pi ^2)`$ provides a good approximation to the real amplitude near $`s=4m_\pi ^2`$ instead of the point $`s=m_\pi ^2`$ as was implicitly employed in . One may argue that this is also a reasonable choice because strong-interaction phases vanish at threshold, but of course this condition is as arbitrary as the one adopted in . Using this condition and $`A(s=m_\pi ^2)=0`$, from Eq. (8) we find $$A_I(s)=_I\frac{(sm_\pi ^2)O_I(s)}{(m_K^2m_\pi ^2)O_I(4m_\pi ^2)}.$$ (18) With the same parameterization of the phase $`\delta _0(s)`$ as in Ref. and our choice (18), the dispersive factor for $`(8_L,1_R)`$ operators is $$_0^{\mathrm{threshold}}=\left|\frac{O_0(m_K^2)}{O_0(4m_\pi ^2)}\right|1.1$$ (19) instead of $$_0^{\mathrm{PP}}=\left|\frac{O_0(m_K^2)}{O_0(m_\pi ^2)}\right|1.4.$$ (20) With $`_0=1.1`$ (and $`_2=1.0`$) the central value $`\epsilon ^{}/\epsilon =7.0\times 10^4`$ of gets increased to $`9\times 10^4`$, instead of $`15\times 10^4`$ found in . While the first enhancement is well within the theoretical uncertainties quoted by the various analyses , the second one would have a considerable impact on the predicted value of $`\epsilon ^{}/\epsilon `$. 
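The dependence on the matching point can also be made concrete numerically. The self-contained sketch below evaluates the modulus of the Omnès function of Eq. (5) with a simple principal-value quadrature. The phase shift used is a toy saturating parameterization, not the experimental $`\delta _0(s)`$ used in the papers discussed, so the resulting numbers are purely illustrative; what the sketch does show reliably is that the ratios of Eqs. (19) and (20) differ, i.e. that the "dispersive factor" depends on where the non-perturbative input is matched.

```python
import math

M_PI, M_K = 0.1396, 0.4977   # pion and kaon masses in GeV (illustrative)
S_TH = 4.0 * M_PI**2         # two-pion threshold in the variable s

def delta0(z):
    """Toy I=0 pi-pi phase shift in radians: zero at threshold, rising
    and saturating at 1 rad.  NOT the experimental parameterization."""
    return 0.0 if z <= S_TH else 1.0 - math.exp(-8.0 * (z - S_TH))

def omnes_mod(s, cutoff=2.0, n=4000, eps=1e-3):
    """|O(s)| of Eq. (5) for real s, with the principal value
    approximated by a midpoint rule that skips a small window
    around z = s."""
    h = (cutoff - S_TH) / n
    pv = 0.0
    for i in range(n):
        z = S_TH + (i + 0.5) * h
        if abs(z - s) >= eps:
            pv += delta0(z) / (z - s) * h
    return math.exp(pv / math.pi)

# Eq. (20): matching the non-perturbative input at s = m_pi^2
R_PP = omnes_mod(M_K**2) / omnes_mod(M_PI**2)
# Eq. (19): matching at the two-pion threshold s = 4 m_pi^2 instead
R_thr = omnes_mod(M_K**2) / omnes_mod(S_TH)
```

With any positive, rising phase the threshold-matched ratio comes out smaller than the one matched at $`s=m_\pi ^2`$, mirroring the 1.1 versus 1.4 obtained in the text with the experimental phase.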
Other choices of the initial conditions, equally acceptable under similar assumptions, would lead to still different results for $`_0`$. This exercise illustrates that, unless one knows the value of $`s`$ at which a given non-perturbative method provides the correct value of the amplitude (and/or its derivative), it is impossible to unambiguously compute the appropriate correction factor due to final state interactions. * In the case of $`\mathrm{\Delta }I=3/2`$ matrix elements of $`(8_L,8_R)`$ operators, such as $`Q_8^{(3/2)}`$ considered in Ref. , the amplitude does not vanish at $`s=m_\pi ^2`$. Still the authors of claim that the correction factor is $`_2=|O_2(m_K^2)/O_2(m_\pi ^2)|0.9`$. We can reproduce this result assuming that the dispersion relation requires only one subtraction and that the non-perturbative calculations give the correct amplitude at $`s=m_\pi ^2`$. This condition, however, appears even more arbitrary than the one used to determine $`A^{}(m_\pi ^2)`$ in the case of $`(8_L,1_R)`$ operators. Indeed in this case the point $`s=m_\pi ^2`$ does not play any special rôle. It is therefore not surprising that in a different but consistent framework, such as the one discussed in , non-perturbative corrections to $`Q_8^{(3/2)}`$ are found to enhance, rather than suppress, lattice and large-$`N_C`$ estimates. Let us then summarize the main point we want to make. While the full amplitude clearly does not depend on the choice of the value of $`s`$ at which the conditions on the amplitude are placed, the dispersive correction factors are obviously dependent on this choice, as we explicitly demonstrated above. As presently we are not in a position to state precisely which value of $`s`$ the existing non-perturbative approaches, like lattice, $`1/N`$, etc., correspond to, it is not possible to determine uniquely the corresponding factors $`_I`$ to be incorporated in each of those methods. 
Before concluding this section, we comment on other attempts in the literature to include FSI effects in $`\epsilon ^{}/\epsilon `$. The approach of Ref. is somewhat similar to the one of Ref. . In this paper, however, the integral over the cut is divided into two parts: a low-energy region from threshold up to $`\mu ^2`$ and a high-energy region from $`\mu ^2`$ up to $`\mathrm{}`$. By identifying the scale $`\mu `$ with the renormalization scale of the inserted operator, the high-energy region is claimed to be estimated in perturbation theory. The results thus obtained are in qualitative agreement with those of Ref. and show a rough independence from the unphysical scale $`\mu `$. We stress, however, that the matching procedure adopted in this paper cannot be justified theoretically. Since we do not see a way of deriving consistently the formulae of Ref. from first principles, we are skeptical about the approach followed in this paper. Other recipes to include a posteriori FSI effects in non-perturbative calculations of weak amplitudes were presented in the recent literature . For these we have objections similar to those made to : it is not clear to us how to relate the theoretical values of the amplitudes, computed with some non-perturbative method, to the physical ones which include FSI. In cases where the results of non-perturbative calculations are unable to reproduce the experimental phases, it is not possible to extract unambiguously the absolute value of the amplitudes. Since corrections may be applied only if this issue is under control, we think these procedures for incorporating empirical correction factors to account for FSI do not help to improve the accuracy of the calculation. These attempts provide at best estimates of the theoretical uncertainties, as for instance discussed in . ## 4 Further developments In spite of the criticism discussed before, we believe the Truong-Pallante-Pich approach is interesting and deserves further investigation. 
For this reason, in this section we briefly discuss possible applications of this technique to improve the accuracy of theoretical predictions for $`K\pi \pi `$ matrix elements. We first want to state the assumptions needed to make this approach useful: 1. In general, in order to fix the number $`N`$ of subtractions in the dispersion relation, hence the degree of the polynomial $`Q_{N1}`$, one has to make an ansatz on the behaviour of the amplitude as $`s\mathrm{}`$. In the present case, however, Eq. (2) should be understood as an approximate equation, valid in a limited $`s`$ region close to the physical point. Indeed other sources of non-analyticity besides the cut starting at $`4m_\pi ^2`$ can appear in $`A(s)`$ due to the identification of $`s`$ with the kaon mass, which is a parameter of the underlying theory. Still the solution of Eq. (2) can give a good estimate of the true amplitude if these further sources of non-analyticity are well represented by a polynomial expansion in the considered region. This is for instance the case in an effective theory where the pions are the only dynamical fields, or in the quenched approximation of QCD. In this context the number of subtractions is no longer related to the behaviour of the amplitude at large $`s`$, but rather to the accuracy one needs to achieve. Large values of $`N`$ are obviously preferable, although in practice they are hardly attainable. Therefore, following Refs. , we assume that $`N=2`$ provides a reasonable approximation. 2. Above the inelastic threshold, $`\delta (s)`$ is no longer the $`\pi \pi `$ phase shift and cannot be directly extracted from the experimental data. In addition, even in the elastic region, the phase of $`A(s)`$ can be identified with the experimental $`\pi \pi `$ phase shift only assuming that the dependence of the $`\pi \pi `$ scattering amplitude on the kaon mass is negligible. 
Were we able to compute the weak amplitude for a large number of points, we could reduce the sensitivity of the solution to the value of $`\delta (s)`$ at large $`s`$, as shown in Eq. (6). As already stressed, this possibility is presently rather remote. In practice, the integral entering $`O(s)`$ has to be restricted to the elastic region, unless some further assumptions on the amplitudes of the other channels are made. These issues entail some uncertainties which have been partially discussed in Ref. . Under these assumptions, we now envisage a possible implementation of this approach in lattice simulations. It has been shown that, even in the presence of chiral symmetry breaking and of the Maiani-Testa theorem , lattice calculations can obtain the physical matrix elements at threshold, corresponding to $`s=4m_\pi ^2`$ . This information, once available, combined with the constraint $`A(s=m_\pi ^2)=0`$ and with the ansatz that the amplitude grows linearly in $`s`$, is sufficient to determine the physical result for the real kaon mass (see Eq. (8)) in the case of $`(8_L,1_R)`$ operators. Concerning $`\mathrm{\Delta }I=3/2`$ matrix elements of $`(8_L,8_R)`$ operators, one may argue that one condition is not enough to fully determine the amplitude. We stress, however, that in this case the extrapolation between the $`\pi \pi `$ threshold and the physical kaon mass is expected to be smoother, resulting in a correction factor which can be approximated by unity with high accuracy. The evaluation of the amplitude at $`s>4m_\pi ^2`$ is particularly difficult with lattice simulations since these are performed in the Euclidean space-time. Proposals, however, exist for overcoming this problem . We finally stress that the region $`s<4m_\pi ^2`$ is not accessible to lattice calculations, as well as to any other approach, if the particles ought to be on-shell and the inserted operator carries zero momentum. 
## 5 Conclusions We have critically discussed the issue of final state interaction effects in the evaluation of the ratio $`\epsilon ^{}/\epsilon `$. Whereas we cannot exclude possible substantial enhancements of $`\epsilon ^{}/\epsilon `$ through FSI over lattice and large-$`N_C`$ estimates, our analysis demonstrates that the present calculation of the dispersive factors $`_I`$ is subject to considerable uncertainties, which prevent one from drawing any definite conclusion on the importance of these effects. We summarize the main point of this paper. While the full amplitude clearly does not depend on the choice of the value of $`s`$ at which the conditions on the amplitude are placed, the dispersive correction factors are obviously dependent on this choice. As presently we are not in a position to state precisely which value of $`s`$ the existing non-perturbative calculations correspond to, it is not possible to determine uniquely the corresponding factors $`_I`$ to be incorporated in each of those methods. On the other hand, we have shown that the Muskhelishvili-Omnès approach could be useful to reduce, or at least to better estimate, the uncertainties on the physical amplitudes, once lattice results on the matrix elements at threshold become available. ## Acknowledgments We are indebted to G. Colangelo for critical comments and discussions about the validity of dispersion relations in this context. We also thank E. Pallante and A. Pich for correspondence. This work has been supported in part by the German Bundesministerium für Bildung und Forschung under contract 06TM874 and 05HT9WOA0. E.F. and G.I. acknowledge a partial support by the TMR Network under the EEC contract ERBFMRX-CT980169 (EURODA$`\mathrm{\Phi }`$NE). G.M. thanks the T31 group at the Technische Universität München where part of this work has been done. L.S. acknowledges a partial support by the M.U.R.S.T.
# 𝐵→𝑋_𝑠⁢𝜏⁺⁢𝜏⁻ in a CP spontaneously broken two Higgs doublet model ## Abstract The differential branching ratio, CP asymmetry and lepton polarization $`P_N`$ for $`BX_s\tau ^+\tau ^{}`$ in a CP spontaneously broken two Higgs doublet model are computed. It is shown that contributions of neutral Higgs bosons to the decay are quite significant when $`\mathrm{tan}\beta `$ is large and $`P_N`$ can reach five percent. The origin of the CP violation has been one of the main issues in high energy physics. The measurements of electric dipole moments of the neutron and electron and the matter-antimatter asymmetry in the universe indicate that one needs new sources of CP violation in addition to the CP violation coming from the CKM matrix, which has been one of the motivations for searching for new theoretical models beyond the standard model (SM). The minimal extension of the SM is to enlarge the Higgs sector of the SM. It has been shown that if one adheres to the natural flavor conservation (NFC) in the Higgs sector, then a minimum of three Higgs doublets is necessary in order to have spontaneous CP violation . However, the constraint can be evaded if one allows the real and imaginary parts of $`\varphi _1^+\varphi _2`$ to have different self-couplings and adds a linear term of Re($`\varphi _1^+\varphi _2`$) in the Higgs potential (see below Eq. (1) ). Then, one can construct a CP spontaneously broken (and $`Z_2`$-symmetry softly broken) two Higgs doublet model (2HDM), which is the minimal among the extensions of the SM that provide a new source of CP violation. Rare decays $`BX_sl^+l^{}(l=e,\mu )`$ have been extensively investigated in both the SM and beyond . The inclusive decay $`BX_s\tau ^+\tau ^{}`$ has also been investigated in the SM, the model II 2HDM and SUSY models with and without including the contributions of NHB . 
In this note we investigate the inclusive decay $`BX_s\tau ^+\tau ^{}`$, with emphasis on the CP violation effects, in a CP spontaneously broken 2HDM in which the up-type quarks get masses from Yukawa couplings to one Higgs doublet $`H_2`$ and the down-type quarks and leptons get masses from Yukawa couplings to the other Higgs doublet $`H_1`$. The contributions from exchanging neutral Higgs bosons are now enhanced roughly by a factor of $`tg^2\beta `$ and can compete with those from exchanging $`\gamma ,Z`$ when $`tg\beta `$ is large enough. We shall be interested in the large $`\mathrm{tan}\beta `$ limit in this note. Consider two complex $`Y=1`$ $`SU(2)_w`$ doublet scalar fields, $`\varphi _1`$ and $`\varphi _2`$. The Higgs potential which spontaneously breaks $`SU(2)\times U(1)`$ down to $`U(1)_{EM}`$ can be written in the following form : $`V(\varphi _1,\varphi _2)`$ $`=`$ $`{\displaystyle \sum _{i=1,2}}[m_i^2\varphi _i^+\varphi _i+\lambda _i(\varphi _i^+\varphi _i)^2]+m_3^2Re(\varphi _1^+\varphi _2)`$ (1) $`+\lambda _3[(\varphi _1^+\varphi _1)(\varphi _2^+\varphi _2)]+\lambda _4[\text{Re}(\varphi _1^+\varphi _2)]^2+\lambda _5[\text{Im}(\varphi _1^+\varphi _2)]^2`$ Hermiticity requires that all parameters are real, so that the potential is CP conserving. It is easy to see that the minimum of the potential is at $`<\varphi _1>=v_1`$ and $`<\varphi _2>=v_2e^{i\xi },`$ thus breaking $`SU(2)\times U(1)`$ down to $`U(1)_{EM}`$ and simultaneously breaking CP, as desired. In the case of large $`\mathrm{tan}\beta `$, if we neglect all terms proportional to $`c_\beta `$ and take $`s_\beta =1`$ in the mass-squared matrix of the neutral Higgs bosons, one finds that one of the Higgs boson masses is 0, which obviously conflicts with current experiments. So instead, below we will discuss a special case in which an analytical solution of the mass matrix can be obtained. 
Assuming $`4\lambda _1c_\beta ^2=\lambda _4-\lambda _5`$ and neglecting other terms proportional to $`c_\beta `$ in the mass-squared matrix, one obtains the masses of the neutral Higgs bosons $`m_{H_1^0}^2=2(\lambda _4-\lambda _5)v^2s_\psi ^2,m_{H_2^0}^2=2(\lambda _4-\lambda _5)v^2c_\psi ^2,m_{H_3^0}^2=4(\lambda _2+\lambda _3)v^2`$ (2) with the mixing angle $`\psi =\frac{\xi }{2}`$. Then it is straightforward to obtain the couplings of the neutral Higgs bosons to fermions $`H_1^0\overline{f}f:{\displaystyle \frac{igm_f}{2m_wc_\beta }}(s_{\xi /2}+ic_{\xi /2}\gamma _5);H_2^0\overline{f}f:{\displaystyle \frac{igm_f}{2m_wc_\beta }}(c_{\xi /2}+is_{\xi /2}\gamma _5),`$ (3) where $`f`$ represents down-type quarks and leptons. The coupling of $`H_3^0`$ to f is not enhanced by tan$`\beta `$ and will not be given explicitly. The couplings of the charged Higgs bosons to fermions are the same as those in the model II 2HDM. This is in contrast to model III, in which the couplings of the charged Higgs to fermions are quite different from those in model II. It is easy to see from Eq. (3) that the contribution coming from exchanging NHBs is proportional to $`\sqrt{2}G_Fs_{\xi /2}c_{\xi /2}m_f^2/\mathrm{cos}^2\beta `$, so that we have the constraint $`\sqrt{|\mathrm{sin}\xi |}\mathrm{tan}\beta <50`$ from the neutron EDM, and the constraint from the electron EDM is not stronger than this. The constraints on $`\mathrm{tan}\beta `$ due to effects arising from the charged Higgs are the same as those in model II and can be found in ref. . The transition rate for $`bs\tau ^+\tau ^{}`$ can be computed in the framework of the QCD corrected effective weak Hamiltonian $$H_{eff}=\frac{4G_F}{\sqrt{2}}V_{tb}V_{ts}^{*}(\sum _{i=1}^{10}C_i(\mu )O_i(\mu )+\sum _{i=1}^{10}C_{Q_i}(\mu )Q_i(\mu ))$$ (4) where the $`O_i(i=1,\mathrm{},10)`$ are the same as those given in ref. , and the $`Q_i`$’s come from exchanging the neutral Higgs bosons and are defined in Ref. . 
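As a quick numerical illustration of Eq. (2), the sketch below simply encodes the neutral Higgs mass formulas with the mixing angle ψ = ξ/2. The parameter values are made up for illustration (they are not fitted to anything), and the combination of quartic couplings is read as λ4 − λ5, which the positivity of the squared masses requires:

```python
import math

def neutral_higgs_masses(lam2, lam3, lam4, lam5, v, xi):
    """Neutral Higgs masses from Eq. (2), with mixing angle psi = xi/2.

    lam2..lam5 are quartic couplings of the potential in Eq. (1), v sets
    the vev scale, xi is the spontaneous CP-violating phase.
    Returns (m_H1, m_H2, m_H3); requires lam4 > lam5 and lam2 + lam3 > 0.
    """
    psi = xi / 2.0
    m1_sq = 2.0 * (lam4 - lam5) * v ** 2 * math.sin(psi) ** 2
    m2_sq = 2.0 * (lam4 - lam5) * v ** 2 * math.cos(psi) ** 2
    m3_sq = 4.0 * (lam2 + lam3) * v ** 2
    return tuple(math.sqrt(m) for m in (m1_sq, m2_sq, m3_sq))

# Illustrative inputs only (not physical fit values):
m1, m2, m3 = neutral_higgs_masses(0.2, 0.1, 0.5, 0.1, 246.0, 0.6)
# Consistency check implied by Eq. (2): m1^2 + m2^2 = 2 (lam4 - lam5) v^2,
# independent of the phase xi.
assert abs(m1 ** 2 + m2 ** 2 - 2.0 * (0.5 - 0.1) * 246.0 ** 2) < 1e-6
```

Note that the sum of the first two squared masses is independent of ξ, which the assertion checks.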
The coefficients $`C_i`$’s at $`\mu =m_W`$ have been given in ref. and the $`C_{Q_i}`$’s are (neglecting the $`O(tg\beta )`$ term) $`C_{Q_1}(m_W)`$ $`=`$ $`{\displaystyle \frac{m_bm_\tau tg^2\beta x_t}{2sin^2\theta _W}}\{{\displaystyle \sum _{i=H_1,H_2}}{\displaystyle \frac{A_i}{m_i^2}}(f_1B_i+f_2E_i)\},`$ $`C_{Q_2}(m_W)`$ $`=`$ $`{\displaystyle \frac{m_bm_\tau tg^2\beta x_t}{2sin^2\theta _W}}\{{\displaystyle \sum _{i=H_1,H_2}}{\displaystyle \frac{D_i}{m_i^2}}(f_1B_i+f_2E_i)\},`$ $`C_{Q_3}(m_W)`$ $`=`$ $`{\displaystyle \frac{m_be^2}{m_\tau g_s^2}}(C_{Q_1}(m_W)+C_{Q_2}(m_W)),`$ $`C_{Q_4}(m_W)`$ $`=`$ $`{\displaystyle \frac{m_be^2}{m_\tau g_s^2}}(C_{Q_1}(m_W)-C_{Q_2}(m_W)),`$ $`C_{Q_i}(m_W)`$ $`=`$ $`0,i=5,\mathrm{},10`$ (5) where $`A_{H_1}`$ $`=`$ $`s_{\xi /2},D_{H_1}=ic_{\xi /2},A_{H_2}=c_{\xi /2},D_{H_2}=is_{\xi /2},`$ $`B_{H_1}`$ $`=`$ $`{\displaystyle \frac{ic_{\xi /2}-s_{\xi /2}}{2}},B_{H_2}={\displaystyle \frac{c_{\xi /2}+is_{\xi /2}}{2}}`$ $`E_{H_1}`$ $`=`$ $`{\displaystyle \frac{1}{2}}(s_{\xi /2}c_1+c_{\xi /2}c_2),E_{H_2}={\displaystyle \frac{1}{2}}(c_{\xi /2}c_1+s_{\xi /2}c_2)`$ $`c_1`$ $`=`$ $`x_{H^\pm }+{\displaystyle \frac{c_\xi }{2s_{\xi /2}^2}}x_{H_1}(c_\xi +is_\xi )+{\displaystyle \frac{1}{2s_{\xi /2}^2}}x_{H_1}`$ $`c_2`$ $`=`$ $`i[x_{H^\pm }+{\displaystyle \frac{c_{\xi /2}x_{H_1}}{s_{\xi /2}}}(s_\xi -ic_\xi )].`$ (6) The QCD corrections to the coefficients $`C_i`$ and $`C_{Q_i}`$ can be incorporated in the standard way by using the renormalization group equations . The explicit expressions for the invariant dilepton mass distribution, the CP asymmetries $`A_{CP}^i`$ (i=1,2) in the branching ratio and in the forward-backward (F-B) asymmetry, and the normal lepton polarization $`P_N`$ for $`BX_s\tau ^+\tau ^{}`$ can be found in ref. . 
The following parameters have been used in the numerical calculations: $`m_t=175GeV,m_b=5.0GeV,m_c=1.6GeV,m_\tau =1.77GeV,\eta =1.724,`$ $`m_{H_1}=100GeV,m_{H^\pm }=200GeV.`$ From the numerical results, the following conclusions can be drawn. A. The contributions of NHBs to the differential branching ratio $`d\mathrm{\Gamma }/ds`$ are significant when $`\mathrm{tan}\beta `$ is not smaller than 30 and the masses of the NHBs are in the reasonable region. For tan$`\beta `$=30, $`d\mathrm{\Gamma }/ds`$ is enhanced by a factor of 6 compared to the SM. B. The direct CP violation in the branching ratio, $`A_{CP}^1`$, is of order $`10^{-4}`$, which is hard to measure. C. The direct CP violation in the F-B asymmetry, $`A_{CP}^2`$, can reach a few percent, is strongly dependent on the CP-violating phase $`\xi `$, and comes mainly from exchanging NHBs, as expected. D. The CP-violating polarization $`P_N`$ is also strongly dependent on the CP-violating phase $`\xi `$ and can be as large as 5% for some values of $`\xi `$, which should be within the luminosity reach of the coming B factories; it too comes mainly from NHB contributions in most of the range of $`\xi `$. So it is possible to discriminate this model from the other 2HDMs by measuring CP-violating observables such as $`A_{CP}^2`$ and $`P_N`$, if nature chooses large $`\mathrm{tan}\beta `$. This research was supported in part by the National Natural Science Foundation of China and the postdoctoral foundation of China. S.H. Zhu gratefully acknowledges the support of the K.C. Wong Education Foundation, Hong Kong.
no-problem/0002/cond-mat0002269.html
# The interplay between shell effects and electron correlations in quantum dots ## I Introduction The advances in nanofabrication of the last years have opened the way to building 2D quantum dots (QDs) and quantum dot molecules (QDMs) – artificial mesoscopic semiconductor structures of selectable shape and size – as containers for a controllable, fixed number of electrons . Recently, depending on the strength and shape of the effective confining potential, the formation of spin density waves (SDWs) and Wigner crystals in QDs and QDMs has been predicted by different groups using different theoretical approaches. Hirose and Wingreen argue that SDWs are reproducible artefacts of spin density functional calculations. For a 2D parabolic confining potential the accordance of the spin configuration with Hund’s Rule has been predicted by Koskinen, Manninen, and Reimann and questioned by Yannouleas and Landman. All these effects are governed by the intriguing interplay between shell effects, the pure Coulomb repulsion, and the fermionic repulsion due to the Pauli exclusion principle, and depend strongly on the values of the interaction parameters in the Hamiltonian commonly assumed for single QDs, $$H=\sum _{i=1}^{N}\left(\frac{𝐩_i^2}{2m^{*}}+\frac{m^{*}\omega _0^2}{2}𝐱_i^2\right)+\sum _{i<j=1}^{N}\frac{e^2}{\kappa |𝐱_i-𝐱_j|}$$ (1) where $`\kappa `$ is the dielectric constant, $`m^{*}`$ is the effective mass, and $`\omega _0`$ defines the strength of the confining potential. Apart from the interesting physical questions that arise for quantum dots, the reliable prediction of their properties is an ultimate test of modern methods in quantum chemistry. Because the confining potential is very shallow compared to that of atoms, long-range electron interactions and correlations play an important role in QDs and QDMs. Therefore it is misleading to call them artificial atoms and molecules. 
Well-established and very elaborate methods of quantum chemistry might fail to describe them properly. Hartree-Fock and spin density functional methods use single Slater determinants or sums of them to approximate the many-body wave function. In spin-density functional methods the approximation of the functional for the exchange-correlation energy adds another source of uncertainty and systematic errors to this approach. The Path Integral Monte Carlo method (PIMC) used in this paper samples the full many-body wave function instead. In contrast to density functional methods, with PIMC it is possible to study the temperature dependent properties of QDs. The reason why PIMC is not yet a standard method of quantum chemistry is its numerical limitation due to the fermion sign problem. The rapidly increasing power of modern computers pushes back this limitation. In Sec. II we briefly summarize our implementation of PIMC and comment on how to limit the numerical deficiencies due to the fermion sign problem. We apply PIMC to calculate the electron density and two-particle correlation functions for quantum dots with up to 12 electrons. To compare to various experimental as well as other theoretical studies we use different dielectric constants $`\kappa `$ and strengths of the confining parabolic potentials. The calculated addition energies are in very good agreement with the experimental findings of Tarucha et al. . For $`N=6`$ we investigate the temperature dependence of the Wigner crystallization (WC). 
## II Numerical method For a system of $`N`$ electrons with position eigenkets $`\stackrel{}{x}_i,s_i`$ ($`s_i=\pm \frac{1}{2}`$ for spin-up and spin-down electrons) in an external potential the Feynman path integral can be written as $`Z`$ $`=`$ $`{\displaystyle \int \left[\prod _{\gamma =1}^{M}\prod _{i=1}^{N}\mathrm{d}\stackrel{}{x}_i(\gamma )\right]\prod _{\delta =1}^{M}det(A(\delta ,\delta +1))}`$ (2) $`\times `$ $`\mathrm{exp}\left(-{\displaystyle \frac{\beta }{M}}{\displaystyle \sum _{\alpha =1}^{M}}V(\stackrel{}{x}_1(\alpha ),\mathrm{},\stackrel{}{x}_N(\alpha ))\right)+𝒪\left({\displaystyle \frac{\beta ^3}{M^2}}\right)`$ (3) with $`(A(\alpha ,\alpha +1))_{i,j}`$ (4) $`=\{\begin{array}{cc}\langle \stackrel{}{x}_i(\alpha )|\mathrm{exp}\left(-\frac{\beta }{M}\frac{𝐩^2}{2m}\right)|\stackrel{}{x}_j(\alpha +1)\rangle :\hfill & \hfill s_i=s_j\\ 0:\hfill & \hfill s_i\ne s_j\end{array}`$ (7) and the boundary condition $`\stackrel{}{x}_j(M+1)=\stackrel{}{x}_j(1)`$. $`M`$ is the number of so-called timeslices of the Feynman paths. In the limit $`M\to \mathrm{\infty }`$ Eq. (2) becomes exact. For quantum dots the space dimension is d=2 and the (2NM)-dimensional integral given in (2) can be evaluated by standard Metropolis Monte Carlo techniques. Due to the determinant the integrand is not always positive, and the expectation value of an observable $`X(𝐱)`$ depending only on position operators has to be calculated using $$\langle X\rangle =\frac{\sum _{g=1}^{G}X_g\mathrm{sign}(W_g)}{\sum _{g=1}^{G}\mathrm{sign}(W_g)}$$ (8) where $`X_g`$ is the value of the observable $`X`$ and $`W_g`$ is the value of the integrand in (2) in the g’th Monte Carlo step. Eq. (8) reveals a severe problem connected with the path integral for fermions, which is commonly denoted as the fermion sign problem (see e.g. ). 
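The sign-weighted estimator of Eq. (8) is easy to state in code. The sketch below is a toy illustration: the chain of (X_g, W_g) pairs is fabricated by hand, not produced by an actual path-integral walk.

```python
def sign_weighted_average(samples):
    """Eq. (8): <X> = sum_g X_g sign(W_g) / sum_g sign(W_g).

    `samples` is an iterable of (X_g, W_g) pairs, where W_g is the
    (possibly negative) value of the integrand at Monte Carlo step g.
    """
    num = 0.0
    den = 0.0
    for x_g, w_g in samples:
        s = 1.0 if w_g >= 0.0 else -1.0
        num += x_g * s
        den += s
    if den == 0.0:
        raise ZeroDivisionError("average sign vanished; the estimator is undefined")
    return num / den

# Toy chain: X_g = 1 on 90 positive-weight steps, X_g = 3 on 10 negative ones.
chain = [(1.0, +1.0)] * 90 + [(3.0, -1.0)] * 10
print(sign_weighted_average(chain))  # (90 - 30) / (90 - 10) = 0.75
```

The denominator is the average sign times the number of steps; as it shrinks, the estimator's variance blows up, which is exactly the sign problem discussed next.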
It can be shown that the ratio between integrands with positive sign ($`W^+`$) and negative sign ($`W^{-}`$) is approximately given by $$\frac{W^+-W^{-}}{W^++W^{-}}\approx \mathrm{exp}(-\beta (E_F-E_B))$$ (9) where $`E_F`$ and $`E_B`$ are the ground state energies of the Fermi system and the corresponding Bose system. It is now obvious that the statistical error in (8) grows rapidly for small temperatures $`T`$. Moreover the energy difference $`(E_F-E_B)`$ will grow with increasing system size, causing an increase of the statistical error. Within PIMC the calculation of the kinetic energy expectation value is another critical task. This is merely due to the fact that the Monte Carlo calculation is usually done in position space and that the discretization of the paths allows a number of different approaches to calculate the expectation value of a momentum dependent operator. A number of different energy estimators have been discussed in the past . To avoid these difficulties we developed a procedure which allows the calculation of all energy expectation values from the knowledge of the pair correlation functions $$\mathrm{\Gamma }_{i,j}(r)=\langle \delta (r-|\stackrel{}{x}_i-\stackrel{}{x}_j|)\rangle $$ (10) and the radial density functions per electron $$\rho _i(r)=\frac{1}{2\pi r}\langle \delta (r-|\stackrel{}{x}_i|)\rangle =\frac{1}{2\pi r}\varrho (r),$$ (11) where $`\varrho `$ is the probability of finding electron $`i`$ at distance $`r`$ from the center. 
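Eq. (9) directly controls the cost of the method: the statistical error of a sign-weighted average scales roughly as one over the average sign times √G, so the number of samples G needed for a fixed relative error grows like exp(2β(E_F − E_B)). The scaling argument is standard; the concrete numbers in this back-of-the-envelope sketch are purely illustrative.

```python
import math

def average_sign(beta, delta_e):
    """Eq. (9): <sign> ~ exp(-beta * (E_F - E_B))."""
    return math.exp(-beta * delta_e)

def samples_needed(beta, delta_e, target_rel_err):
    """Heuristic cost estimate: error ~ 1 / (sqrt(G) * <sign>), hence
    G ~ exp(2 * beta * delta_e) / target_rel_err**2."""
    s = average_sign(beta, delta_e)
    return 1.0 / (target_rel_err ** 2 * s ** 2)

# Doubling beta (halving T) at fixed delta_e multiplies the cost by exp(2*beta*dE):
g_warm = samples_needed(beta=1.0, delta_e=2.0, target_rel_err=0.01)
g_cold = samples_needed(beta=2.0, delta_e=2.0, target_rel_err=0.01)
print(g_cold / g_warm)  # exp(4), about 54.6
```

This exponential growth in G with β and with system size (through E_F − E_B) is why the calculations below are limited to moderate electron numbers and temperatures of order 10 K.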
Due to the particle symmetry we have $$\mathrm{\Gamma }_{i,j}(r)=\{\begin{array}{cc}\hfill \mathrm{\Gamma }^{\uparrow \uparrow }(r):& s_i=s_j=+\frac{1}{2}\hfill \\ \hfill \mathrm{\Gamma }^{\downarrow \downarrow }(r):& s_i=s_j=-\frac{1}{2}\hfill \\ \hfill \mathrm{\Gamma }^{\uparrow \downarrow }(r):& s_i\ne s_j\hfill \end{array}$$ (12) and $$\rho _i(r)=\{\begin{array}{cc}\hfill \rho ^{\uparrow }(r):& s_i=+\frac{1}{2}\hfill \\ \hfill \rho ^{\downarrow }(r):& s_i=-\frac{1}{2}\hfill \end{array}.$$ (13) Utilizing the hypervirial theorem of Hirschfelder , the energy can be written as a sum of ten parts $`E`$ $`=`$ $`E_{\mathrm{kin}}^{\uparrow }+E_{\mathrm{kin}}^{\downarrow }+E_{\mathrm{kin}}^{\uparrow \uparrow }+E_{\mathrm{kin}}^{\downarrow \downarrow }+E_{\mathrm{kin}}^{\uparrow \downarrow }`$ (15) $`+E_{\mathrm{pot}}^{\uparrow }+E_{\mathrm{pot}}^{\downarrow }+E_{\mathrm{pot}}^{\uparrow \uparrow }+E_{\mathrm{pot}}^{\downarrow \downarrow }+E_{\mathrm{pot}}^{\uparrow \downarrow }`$ $`=`$ $`{\displaystyle \frac{N^{\uparrow }}{2}}{\displaystyle \int _0^{\mathrm{\infty }}}dr\varrho ^{\uparrow }(r)r\partial _rV_1(r)+{\displaystyle \frac{N^{\downarrow }}{2}}{\displaystyle \int _0^{\mathrm{\infty }}}dr\varrho ^{\downarrow }(r)r\partial _rV_1(r)`$ (23) $`+{\displaystyle \frac{N^{\uparrow }(N^{\uparrow }-1)}{4}}{\displaystyle \int _0^{\mathrm{\infty }}}dr\mathrm{\Gamma }^{\uparrow \uparrow }(r)r{\displaystyle \frac{\partial V_2(r)}{\partial r}}`$ $`+{\displaystyle \frac{N^{\downarrow }(N^{\downarrow }-1)}{4}}{\displaystyle \int _0^{\mathrm{\infty }}}dr\mathrm{\Gamma }^{\downarrow \downarrow }(r)r{\displaystyle \frac{\partial V_2(r)}{\partial r}}`$ $`+{\displaystyle \frac{N^{\uparrow }N^{\downarrow }}{2}}{\displaystyle \int _0^{\mathrm{\infty }}}dr\mathrm{\Gamma }^{\uparrow \downarrow }(r)r{\displaystyle \frac{\partial V_2(r)}{\partial r}}`$ $`+{\displaystyle \frac{N^{\uparrow }}{2}}{\displaystyle \int _0^{\mathrm{\infty }}}dr\varrho ^{\uparrow }(r)V_1(r)+{\displaystyle \frac{N^{\downarrow }}{2}}{\displaystyle \int _0^{\mathrm{\infty }}}dr\varrho ^{\downarrow }(r)V_1(r)`$ $`+{\displaystyle \frac{N^{\uparrow }(N^{\uparrow }-1)}{2}}{\displaystyle \int dr\mathrm{\Gamma }^{\uparrow \uparrow }(r)V_2(r)}`$ $`+{\displaystyle \frac{N^{\downarrow }(N^{\downarrow }-1)}{2}}{\displaystyle \int dr\mathrm{\Gamma }^{\downarrow \downarrow }(r)V_2(r)}`$ $`+N^{\uparrow }N^{\downarrow }{\displaystyle \int dr\mathrm{\Gamma }^{\uparrow \downarrow }(r)V_2(r)}`$ While in density functional approaches the calculation of the kinetic energy and the exchange-correlation energy is a major topic and subject to ongoing discussion, within the path integral approach these energies are included in a natural way. 
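To illustrate how the potential-energy pieces of Eq. (15) are obtained from the sampled histograms, the sketch below evaluates the opposite-spin Coulomb term (the last line of Eq. (15)) by simple trapezoidal quadrature. Everything here is illustrative: the pair correlation function is a made-up normalised Gaussian rather than PIMC output, and effective atomic units with V2(r) = e²/(κr), e = 1 are assumed.

```python
import numpy as np

def trapezoid(y, x):
    """Plain trapezoidal rule, kept explicit to make the sketch self-contained."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def coulomb_pair_energy(r, gamma, n_up, n_down, kappa):
    """Last line of Eq. (15): E = N_up * N_down * integral dr Gamma(r) V2(r),
    with V2(r) = e^2 / (kappa * r) in effective atomic units (e = 1).
    `gamma` is the opposite-spin pair correlation function on the grid `r`,
    assumed normalised so that integral dr Gamma(r) = 1.
    """
    v2 = 1.0 / (kappa * r)
    return n_up * n_down * trapezoid(gamma * v2, r)

# Made-up Gamma(r): a narrow Gaussian centred at r0.  For a sharp peak the
# integral approaches N_up * N_down / (kappa * r0).
r = np.linspace(0.5, 4.0, 4001)
r0, width = 2.0, 0.05
gamma = np.exp(-0.5 * ((r - r0) / width) ** 2)
gamma /= trapezoid(gamma, r)  # normalise
e_ud = coulomb_pair_energy(r, gamma, n_up=3, n_down=3, kappa=1.0)
print(round(e_ud, 3))  # close to 3 * 3 / 2.0 = 4.5
```

The kinetic terms follow the same pattern, with the integrand r ∂V/∂r in place of V, so the whole energy decomposition reduces to one-dimensional quadratures over the measured histograms.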
However, the systematic error arising from the limited number of timeslices $`M`$ and the statistical error of the Monte Carlo calculation have to be controlled carefully. We checked our algorithm extensively using eight non-interacting fermions in a parabolic trap as a test system. We found that at low temperatures, where the ratio of signs is around 0.99, convergence can only be achieved obeying the following rules: 1) The determinants have to be calculated very accurately using a more costly algorithm with pivoting. 2) The completely uncorrelated generation of the Monte Carlo steps is essential, i.e. the coordinate to be moved should be chosen randomly. Moving the particle coordinates using always the same sequence produces inaccurate results. 3) A good random number generator with a completely uncorrelated sequence in all significant bits of a 64 bit real number should be applied. We therefore developed a 53 bit random number generator of Marsaglia-Zaman type instead of using one of the standard 24 or 32 bit random number generators coming with standard system libraries. 4) Further, to improve the convergence a number of different Monte Carlo steps can be applied, i.e. moving single time slices, moving complete particle paths and parts of a path. Our Fortran code is completely parallelized using MPI and Lapack. ## III Results To compare our PIMC calculations to experimental data we calculated the addition energies $$\mathrm{\Delta }E=E_{N+1}-2E_N+E_{N-1}$$ (24) of a QD with up to 11 electrons using the material constants $`m^{*}=0.067m`$ and $`\kappa `$ = 12.9 for GaAs, as given by Hirose and Wingreen . It is assumed that these parameters mimic the experimental setup of Tarucha et al. reasonably well. The strength of the harmonic potential is fixed at $`\hbar \omega _0`$ = 3.0 meV. The resulting effective atomic units are $`E_H^{*}`$ = 10.955 meV for the Hartree energy and $`a_0^{*}`$ = 10.1886 nm for the Bohr radius. 
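The addition energy of Eq. (24) is just a second difference of the total energies with respect to electron number; a minimal sketch (with toy energy values rather than the computed PIMC ones):

```python
def addition_energies(total_energies):
    """Eq. (24): Delta E(N) = E_{N+1} - 2 E_N + E_{N-1}.

    `total_energies` lists E(N) for consecutive electron numbers N;
    the result lists Delta E for the interior N values, in order.
    """
    e = total_energies
    return [e[i + 1] - 2.0 * e[i] + e[i - 1] for i in range(1, len(e) - 1)]

# Toy check: for a quadratic E(N) = N^2 the second difference is constant (= 2),
# i.e. a featureless charging background with no shell structure.
print(addition_energies([float(n * n) for n in range(1, 7)]))  # [2.0, 2.0, 2.0, 2.0]
```

Shell closures show up as peaks of ΔE on top of this smooth charging background, which is what Fig. 1 compares between PIMC, DFT and experiment.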
The Boltzmann constant is $`k_B=7.8661\times 10^{-3}E_H^{*}/\mathrm{K}`$. We performed PIMC simulations for quantum dots with different spin configurations at a fixed temperature of 10 K. Due to the fermion sign problem, the number of Monte Carlo steps necessary to push the statistical error of the total energy, which has been calculated properly from 25 uncorrelated subsequences of MC steps, into the range of 0.1 percent is extremely high. The number of Monte Carlo steps ranged between 2.5 billion steps per particle coordinate for $`N\le 6`$ and about 10 billion steps for $`N=12`$. Fig. 1 displays the addition energies for quantum dots with up to 11 electrons. The circles indicate the results from our path integral calculations at 10 K, the squares are results of spin density functional calculations of Hirose and Wingreen , and the triangles are the experimental results of Tarucha et al. . Both theoretical calculations reproduce the general $`N`$-dependence of the addition energies in great detail. Tarucha et al. give an estimate of the electron temperature in their experiments of $`T=0.2`$ K. For computational reasons our PIMC calculations are performed at 10 K, and it cannot be expected that the absolute energy values agree with the experimental results as well as the 0 K DFT calculations do. However, it should be noted that PIMC correctly predicts the drop in the addition energy from $`N=7`$ to $`N=8`$ while the DFT calculations fail at this point. The inset in Fig. 1 displays the total spins of the spin configurations with lowest energy as found in DFT and PIMC at 10 K. In DFT calculations (0 K) the spin configuration of the ground state is determined by Hund’s rule for up to 22 electrons. In contrast, in our PIMC calculation at 10 K the total spin is not always in accordance with Hund’s rule. For $`N=4`$ we checked the temperature dependence of the spin configuration. 
At 5 K the energy of the spin 0 configuration is 0.01 $`E_H^{*}`$ higher than the spin 1 energy, indicating a temperature dependence of the favored spin configuration. As an important fact we note that the $`N`$-dependence of the addition energies is not affected by the actual spin configuration. The situation is quite similar to that in transition metal clusters, with extremely small energy differences between states with significantly different magnetic moments . As can be inferred from Fig. 2(a), the radial spin densities are significantly different for the two spin configurations. The total potential energy for the spin 1 configuration is about 0.07 meV lower than that of the spin 0 configuration. At 10 K this is overcompensated by an 0.27 meV higher kinetic energy (see Tab. I). Although the kinetic and potential energies for different total spins differ significantly, the total energies are almost equal. Similar situations are found for larger $`N`$. For convenience and easy comparison we determined the value of the dimensionless density parameter $`r_s`$, which is sometimes used to characterize quantum dots (see e.g. ), to be $`r_s`$=4.19 for $`N=4`$. The energies given in Tab. I correspond to the integrals in Eq. 15. The total kinetic energy, which is the sum of all $`E_{\mathrm{kin}}^x`$ terms, is always positive while some of the addends might be negative. Tab. I reveals that larger total spins result in larger kinetic energies. The total potential energy is almost unchanged; the larger contribution from the trap potential is compensated by a smaller contribution from the Coulomb repulsion. We note that the ratio $`E_\mathrm{W}`$ between the kinetic energy and the total energy is considerably larger for $`N=4`$ than for $`N=9`$, reflecting the looser binding of the smaller system. Next we consider the dependence of the Wigner crystallization on the temperature and the choice of the material constants. 
The localization of the electrons in space is commonly referred to as Wigner crystallization. For quantum dots the occurrence of well separated humps in the radial electron density and the pair correlation functions has been interpreted as WC. However, it is a nontrivial task to find a general parameter identifying whether an electron system is crystallized or not. From a solid state physics point of view the electrons should have a low mobility, i.e. a small kinetic energy, and should not interchange their lattice positions. For fermions the localization of single electrons does not make any sense, and, as stated above, even the decomposition of the many body wave function in sums of determinants of single particle wave functions is probably too rough an approximation for QDs. These facts limit the analogies between crystallization in solids and electron systems and make the term crystallization itself somewhat misleading. We therefore view Wigner crystals as states of the many body wave function with a relatively low kinetic energy. First we consider the strength of the Wigner crystallization depending on the choice of the interaction parameters. Fig. 3 displays the radial densities and pair correlation functions for six electrons with $`S`$=0, $`\hbar \omega `$=5 meV and $`\kappa `$=3.0, 6.0, and 12.9. Of course, for stronger electron repulsions (small $`\kappa `$) the electron distributions are broadened. The qualitative picture of the distributions is largely the same. For all $`\kappa `$ shell effects, indicated by off-center maxima of the radial density, occur. However, only for $`\kappa `$=3 and 6 do we observe a maximum at $`r`$=0. From our point of view it cannot definitely be decided from this figure whether a system is Wigner crystallized or not. 
As a parameter reflecting the strength of the WC we employ the ratio between the kinetic and the total energies, $`E_\mathrm{W}=E_{\mathrm{kin}}/E_{\mathrm{tot}}`$, which is 0.07, 0.10 and 0.14 for $`\kappa `$=3.0, 6.0 and 12.9, respectively. Although the radial distribution function for $`\kappa =`$12.9 is quite narrow, the relative mobility of the electrons indicated by $`E_\mathrm{W}`$ is twice as large as for $`\kappa =`$3. The underlying physical process can be understood intuitively. Due to the stronger electron-electron repulsions the electrons are fixed in an energetically favorable geometric configuration; as a consequence the relative kinetic energy is reduced and the difference between the pair correlation functions of equal and opposite spin almost vanishes (see Fig. 3). It is an interesting and, to our knowledge, open question whether the crystallization of electrons can be viewed as a phase transition. We therefore consider next the temperature dependent properties of a quantum dot with N=6, S=0, $`\kappa `$=3 and $`\hbar \omega `$=5 meV. The results for temperatures between 10 and 150 K are presented in Tab. II. Most notably, $`E_\mathrm{W}`$ increases relatively smoothly from 0.07 at 10 K to 0.22 at 150 K. Within our numerical accuracy the caloric curve does not show any evidence of a phase transition. The transition from a crystallized state to an electron fluid seems to be gradual. Of course, from our calculations we cannot exclude that a phase transition exists for larger $`N`$ or different interaction parameters. Fig. 4 displays the radial electron densities and the total electron pair correlation function for different temperatures. Up to 60 K the radial density shows clear geometric structure effects with two maxima, while at 150 K only a smooth curve resembling a simple Gaussian remains. 
## IV Conclusion In conclusion, we have found that despite the notorious fermion sign problem PIMC is capable of answering interesting questions for strongly correlated electron systems like QDs and QDMs. For QDs, PIMC correctly reproduces the experimental addition energies. Our temperature dependent calculations give new insights into the process of WC. For the 2-dimensional QDs a ratio $`E_\mathrm{W}=E_{\mathrm{kin}}/E_{\mathrm{tot}}`$ below 0.1 seems to indicate WC, both for the $`\kappa `$-dependent and the temperature dependent calculations. However, regarding this aspect a firmer classification parameter, e.g. similar to the Lindemann criterion, is desirable. A comparison to other QMC methods seems to be in order here. Even our most complicated simulations took less than two hours on a Cray T3E with 62 processors. Taking advantage of modern computer power and optimized software, the brute force PIMC applied here is able to produce very precise results. Different algorithms have been published to improve the performance of path integral methods. A recent one is the Multi-Level Blocking method published by Mak et al. . Simulations using this algorithm are expected to converge better than our direct treatment, since the fermion sign problem is partially avoided. The published results of the treatment of quantum dots by Egger et al. do not confirm this expectation. Obviously the advantages of the new method are more than compensated by other numerical problems, maybe due to the energy estimator used . Nevertheless, a combination of the Multi-Level Blocking method and our technique might result in a very powerful tool. ## V Acknowledgments We wish to thank the Regionales Rechenzentrum Niedersachsen and the Konrad Zuse Institut Berlin for their excellent computer support and E. R. Hilf for stimulating discussions.
no-problem/0002/astro-ph0002305.html
# On dust-correlated Galactic emission in the Tenerife data ## 1 Introduction A small but significant correlation of existing Cosmic Microwave Background (CMB) data with maps of Galactic dust emission has been detected. It is important to understand the origin, and hence the characteristics, of any Galactic emission present in CMB maps with structure on angular scales relevant to CMB measurements. With such information, we can attempt to remove these contaminating emissions from the data, to be left with a pure CMB map. Cross-correlating the COBE DMR maps with DIRBE far-infrared maps, Kogut et al (1996a,b) discovered that statistically significant correlations did exist at each DMR frequency, but that the frequency dependence was inconsistent with vibrational dust emission alone and strongly suggestive of additional dust-correlated free-free emission ($`\beta _{ff}=-2.15`$). A number of recent microwave observations have also shown that these correlations are not strongly dependent on angular scale. Different experiments ( DMR: Kogut et al. 1996b; 19GHz: de Oliveira-Costa et al. 1998; Saskatoon: de Oliveira-Costa et al. 1997; MAX5: Lim et al. 1996; OVRO: Leitch et al. 1997), together give, with $`95\%`$ confidence, $`-3.6<\beta _{radio}<-1.3`$, consistent with free-free emission over the frequency range 15-50 GHz. However, if the source of this correlated emission is indeed free-free, a similar correlation should exist between $`H_\alpha `$ and dust maps. Several authors have found that these data sets are only marginally correlated (McCullough 1997 and Kogut 1997). Thus, most of the correlated emission appears to come from another source. Draine and Lazarian (1998a,b) suggest that the correlated emission could originate from spinning dust grains. 
This model predicts a microwave emission spectrum that peaks at low microwave frequencies, the exact location of the peak depending on the size distribution of dust grains, and has a spectral index between -3.3 and -4, over the frequency range 15-50 GHz (Draine and Lazarian 1998b, Kogut 1999). The spectral indices obtained from various experiments do not seem to conform too well to the spinning dust model alone. However, recently de Oliveira-Costa et al (1999) (hereafter DOC99) claim to have found evidence of the spinning dust origin of DIRBE-correlated emission using the Tenerife 10 and 15GHz data. They find a rising spectrum from 10GHz to 15GHz, which is indicative of a spinning dust origin, as opposed to a free-free origin, for the dust-correlated Galactic foreground at these frequencies. In this paper, we use the most recent Tenerife data to show that the correlated emission is not necessarily indicative of spinning dust. ## 2 The data and Galactic templates Tenerife is a double differencing experiment that takes data by drift scanning in right-ascension at each declination. Data are taken every $`1^{}`$ in RA and at intervals of $`2.5^{}`$ in Dec. We used the Tenerife 10GHz and 15GHz data in the region, $`32.5^{}<`$Dec$`<42.5^{}`$, and $`0^{}<`$RA$`<360^{}`$, with point sources subtracted and baselines removed. The Tenerife experiment has beams of FWHM $`4.9^{}`$ and $`5.2^{}`$ for the 10 and 15GHz experiments respectively, with a differencing angle of $`8.1^{}`$ between east and west positions on the sky to make up the final triple beam (Jones et al (1998)). Figure 1 shows the patch of sky covered by the experiment in Galactic projection. Figure 2 shows the dust data and the Tenerife data at all the five declinations where data have been taken. We have extended the analysis presented in Gutierrez et al (1999) to also cover a region outside the high latitude range ($`b>40^{}`$) that they use to constrain CMB fluctuations. 
Therefore, the data we have used are different from those used in DOC99, as they did not extend the coverage of the raw data processing outside this range. This is one of the reasons we would expect differences between our results and those of DOC99. A great advantage of using the Tenerife data to constrain the spectral index of the dust-correlated component lies in the fact that similar angular scales and the same patch of the sky can be used. This is not the case when the correlations from different experiments are compared. Therefore, we have not used the two further declinations covered by the 15GHz data set (Decs $`30^{\circ }`$ and $`45^{\circ }`$), but not by the 10GHz data, avoiding any effect from features present in these declination strips. However, we have redone the calculations using these extra declinations and there was no significant difference from the results presented in this paper. Looking at Figure 2, a difference between the two sides of the RA range is apparent at almost all declinations. The rise (drop, because of the double differencing) of the Galactic signal towards low Galactic latitude seen in the convolved dust maps on both sides of the RA range is only followed on one side by the Tenerife data. The Tenerife Galactic signal shows a stronger Galactic centre / anti-centre difference and a variation with declination which differs from that of the dust template, with a particularly strong feature at Dec $`42.5^{\circ }`$. Also note that the noisier 10GHz data show a wider spread toward low Galactic latitude, which does not appear systematically Galactic, at least when compared to the Galactic dust template. This may be due to a systematic error in the Tenerife data that is not fully represented by the error bars. The Galactic templates used are the destriped DIRBE+IRAS $`100\mu `$m template of dust emission (Schlegel, Finkbeiner and Davis 1998), the 408MHz survey of synchrotron emission (Haslam et al. 
1981), and the 1420MHz survey of synchrotron emission (Reich $`\&`$ Reich, 1988, hereafter $`R\&R`$). We used the cleaned and destriped versions of the synchrotron emission maps by Lawson, Mayer, Osborne & Parkinson (1987). ## 3 Correlation analysis and results ### 3.1 The method Assuming that the microwave data consist of a superposition of CMB, noise and Galactic components, we can write $$y=Xa+x_{CMB}+n.$$ Here $`y`$ is a data vector of $`N`$ pixels, $`X`$ is an $`N\times M`$ element matrix containing $`M`$ foreground templates convolved with the Tenerife beam, $`a`$ is a vector of size $`M`$ that represents the levels at which these foreground templates are present in the Tenerife data, $`n`$ is the noise in the data and $`x_{CMB}`$ is due to the CMB convolved with the beam. The noise and CMB are treated as uncorrelated. The minimum variance estimate of $`a`$ is $$\widehat{a}=[X^TC^{-1}X]^{-1}X^TC^{-1}y,$$ with errors given by $`\mathrm{\Delta }\widehat{a}_i=\mathrm{\Sigma }_{ii}^{1/2}`$ where $`\mathrm{\Sigma }`$ is given as $$\mathrm{\Sigma }=<\widehat{a}^2>-<\widehat{a}>^2=[X^TC^{-1}X]^{-1}.$$ $`C`$ above is the total covariance matrix (the sum of the noise covariance matrix and the CMB covariance matrix). The noise covariance matrix of the Tenerife data is taken to be diagonal, and the CMB covariance matrix is obtained through Monte-Carlo simulations with 10000 CMB realizations (this can of course be found analytically as well, as in DOC99). In all the above equations $`X`$ and $`y`$ are actually deviations of the corresponding quantities from the weighted mean with weights<sup>1</sup><sup>1</sup>1We have tried a variety of different weighting schemes, including uniform weights, to check our method and all results found were within the $`68\%`$ confidence limits of those presented in the paper. given by $`C^{-1}`$ (this differs from DOC99, who use the actual $`X`$ and $`y`$ in the analysis).
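For a single template the minimum-variance estimate above reduces to a scalar weighted fit; a minimal sketch (toy amplitudes and noise levels invented for illustration, diagonal covariance assumed, and the multi-template case would replace the scalars by the matrix expressions above):

```python
import random

def gls_fit(y, X, var):
    """Minimum-variance amplitude of one template X in data y, for a
    diagonal covariance with per-pixel variances `var`:
    a_hat = (X^T C^-1 y) / (X^T C^-1 X), error = (X^T C^-1 X)^(-1/2).
    (The paper additionally subtracts C^-1-weighted means from X and y.)"""
    sxy = sum(x * yy / v for x, yy, v in zip(X, y, var))
    sxx = sum(x * x / v for x, v in zip(X, var))
    return sxy / sxx, (1.0 / sxx) ** 0.5

# Toy check: data = 150 * template + white noise of rms 25.
rng = random.Random(0)
X = [rng.gauss(0.0, 1.0) for _ in range(1000)]
y = [150.0 * x + rng.gauss(0.0, 25.0) for x in X]
a_hat, err = gls_fit(y, X, [25.0**2] * 1000)
assert abs(a_hat - 150.0) < 5 * err  # input amplitude recovered
```

The error here scales as the noise rms divided by the square root of the number of pixels, which is why the significance of the correlations drops as the Galactic cut removes the high-signal pixels.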
It should be noted that there may be other components of emission present in the data, so the errors we consider here are only lower limits. Also, the templates that we are using may not be perfect and we do not account for possible uncorrelated structure in $`C`$. If there are other non-Gaussian components present in the data then the minimum variance estimate of $`a`$ will be incorrect. ### 3.2 Results Listed in Table 1 are the correlation coefficients obtained for the Tenerife data sets and the $`100\mu m`$ DIRBE template, the Haslam map, and the $`R\&R`$ map, with each Galactic template taken individually. Table 2 shows the correlation coefficients obtained when fits are done for two templates jointly. The DIRBE and Haslam correlations correspond to joint $`100\mu m`$-Haslam fits, whereas the $`R\&R`$ values correspond to joint $`100\mu m`$-$`R\&R`$ fits. $`\mathrm{\Delta }T=(\widehat{a}\pm \mathrm{\Delta }\widehat{a})\sigma _{Gal}`$ is the amplitude of fluctuations in temperature in the Tenerife data that results from the correlation ($`\sigma _{Gal}`$ being the $`rms`$ deviation of the template map). The analysis is done for different Galactic cuts. Only positive Galactic latitudes are used in these tables as the Tenerife data do not extend below a $`b`$ of $`32^{}`$. From Table 1, we see that correlations with each of the three templates are detected with high significance. However, these correlations fall off in significance towards higher Galactic latitudes. For comparison, the rms of the 10GHz data (after subtracting noise) for $`b>20^{}`$ is $`181\pm 9\mu K`$ (the error is due to a $`5\%`$ calibration uncertainty). If the templates are not significantly correlated with each other, one would conclude from Table 1 that 40% of the signal at 10 GHz is Galactic emission.
It should be noted that the correlation between the DIRBE and Haslam Galactic templates is significant for all Galactic cuts, although if the dust and free-free emissions (or spinning dust) are 100% correlated then the Galactic emission would still only account for 50% of the total emission (assuming that all of the Galactic foregrounds present at this frequency are present in either the Haslam or DIRBE templates). We also see that the DIRBE-correlated component has slightly higher significance than the Haslam or $`R\&R`$-correlated components near the Galactic plane for both the data sets. Away from the Galactic plane, at 10 GHz, the component correlated to the Haslam map becomes dominant (for $`b>20^{}`$) whereas at 15 GHz the DIRBE component remains dominant. However, there is almost no significant correlated emission beyond this Galactic cut. ### 3.3 Discussion It is seen from Table 1 that individual correlations with dust do not show a rising (spinning dust) spectrum with significance between 10 and 15 GHz for any Galactic cut. From the joint $`100\mu m`$-Haslam correlations (Table 2), we see that there appears to be some evidence for a rising spectrum of dust-correlated emission only for the $`b>20^{}`$ cut (as reported by DOC99). Spectral indices inferred from the correlation coefficients are listed in Table 3. Errors are small close to the Galactic plane. However, the values do not consistently agree with the spectral indices of any one of the three Galactic emission components that could be expected to be present as contaminating emissions in CMB data: -2.8 for synchrotron, -2.1 for free-free, and a positive value for spinning dust (for example +1 if the peak is at 20 GHz).
From Table 3 we see that the value of the spectral index inferred from the DIRBE-correlated component at 10 and 15 GHz is negative for most Galactic cuts (the correlations for $`b>30^{}`$ and $`b>40^{}`$ are not significant, so it was not possible to infer a spectral index<sup>2</sup><sup>2</sup>2 No significant correlations were obtained even for the $`b>25^{}`$ cut, indicating that only a small region near $`b\sim 20^{}`$ Galactic latitude is correlated.). From the positive value of the spectral index for $`b>20^{}`$, DOC99 conclude that spinning dust is responsible for producing the major part of the DIRBE-correlated emission. However, since the spectral indices inferred for all other Galactic cuts are negative, a specific model of the spatial distribution of spinning dust would be required to explain the data. On the other hand, it could also be that, in the $`b`$-range where the Galactic signal in the Tenerife data drops, the estimation of the spectral index becomes more sensitive to systematic effects we have not accounted for. To decide this, we focus on the $`b>20^{}`$ range and divide the data further. It should be noted that these spectral indices were found using the values for $`\widehat{a}`$ and not $`\mathrm{\Delta }T`$, as the beam sizes of the 10 and 15GHz data sets are slightly different, giving rise to a lower expected $`\mathrm{\Delta }T`$ (although the same relative level of contamination, and hence $`\widehat{a}`$) at 15 GHz than at 10 GHz, which would systematically bias the values obtained. When we examine the spectral indices obtained from the component correlated to the Haslam map at the two frequencies, we find that the values from the individual analysis are less negative than expected for synchrotron emission (Table 3). Joint correlation of the $`100\mu `$m dust and Haslam maps with the data results in spectral indices that are steeper than expected for synchrotron.
This seems to imply that most of the emission in the 15GHz data that is correlated with the Haslam map is also correlated with the DIRBE maps (this is expected as the region that correlates lies close to the Galactic plane), leading to a much lower value of $`\widehat{a}`$ for the synchrotron-correlated component than expected at 15GHz (and hence a much steeper spectrum). This may simply be because dust-correlated emission is more significant in the 15GHz data. Similarly, since the Haslam-correlated emission is more significant at 10 GHz, the joint correlation method gives a lower value of $`\widehat{a}`$ for the dust-correlated component and a correspondingly higher value for the synchrotron-correlated component. Correcting for this effect might therefore resolve the systematic discrepancy. However, since we see this discrepancy with the synchrotron spectral index, we should be wary of the dust template correlation as well. If we now consider the DIRBE template to be an exact predictor for either spinning dust or free-free emission (so that all the features present in this template are present in the Tenerife data, as we are assuming when performing the correlation analysis), then the level of ‘contamination’ $`a`$ will be independent of the region tested. This means that each value of $`a`$ in Table 1 should be the same for the 10 and 15GHz data respectively. We can therefore take the variance of $`a`$ across the different regions analysed as a measure of the error in this method, and find that the weighted mean and rms at 10 GHz and 15 GHz are 157 (20) and 122 (10) $`\mu `$K / (MJy sr<sup>-1</sup>) respectively. This would correspond to a spectral index of $`-0.6_{-0.5}^{+0.5}`$. ### 3.4 Further study of the $`b>20^{}`$ cut region We now divide the data into halves in RA, which gives two regions, one towards the Galactic centre and one towards the Galactic anticentre, at each declination.
Repeating our analysis on these data, we quantify the visual impression from Figure 2. Although the rise of the Galactic plane signal is equally strong in both regions of the dust template, we only find a significant correlation with the 10 and 15GHz data in the Galactic centre region, in spite of the fact that the noise errors in both regions are comparable. The results of this analysis are listed in Table 4. It is seen that of the five declination stripes in the centre region only the three at higher declination are significantly correlated. The anticentre regions are found to be uncorrelated, or even significantly anticorrelated in two stripes, which again demonstrates that the structure is genuinely uncorrelated, rather than that our sensitivity is reduced by dividing the data. Again, if we take the weighted mean and rms in the value of $`a`$ between declinations, we get 197 (66) and 157 (91) $`\mu `$K / (MJy sr<sup>-1</sup>) for the centre regions at 10 and 15 GHz respectively, giving a spectral index of $`-0.6_{-2.8}^{+2.2}`$. The variation between declinations is high and the spectral index thus obtained is consistent with both free-free and spinning dust models. We perform yet another split of the data in an attempt to identify the region where the correlation mainly comes from. It was found that the 20 pixels at lowest Galactic latitude of all declinations taken together (100 pixels) gave significant correlations, while the remaining pixels (750) gave no correlation (see Table 5). Again the spectral index for dust-correlated emission is small and negative, while that for emission correlated to the Haslam map is steeper than expected. Since it is clear that the different declinations correlate differently, correlating them together is incorrect. Hence, shown also in Table 5 are the values obtained for the same analysis on the 20 pixels at lowest Galactic latitude of each declination.
Even though the signal-to-noise is much lower here, it is clear that the Galactic plane signal correlates with a free-free-like spectral index, while the rest of the pixels show a significant anti-correlation with dust at 10 GHz for some declinations. The Galaxy correlates positively while the remaining regions correlate negatively or insignificantly. It is clear from analysing such splits that the correlation coefficients obtained when taking all declinations together, as in Tables 1 & 2 ($`b>20^{}`$) and in Tables 4 & 5, and which seem to indicate an almost flat or less negative spectral index between 10 and 15 GHz, are composed of regions that correlate with a negative spectral index and regions that do not correlate significantly at all. The anti-correlation, which occurs mostly in the 10GHz data, lowers the combined best fit $`a`$ value at that frequency, and hence raises the spectral index of dust-correlated emission. Note here that the high values of $`a`$ obtained need not be in contradiction with the level of free-free allowed by H<sub>α</sub>, as the detections are all close to the Galactic plane, where the level of free-free is expected to be high. We still see a large variance between the three declinations which correlate significantly. If we take the weighted mean and rms of the highest three declinations as indicative of a composite correlation, we get 313 (172) and 219 (83) $`\mu `$K / (MJy sr<sup>-1</sup>) at 10 GHz and 15 GHz respectively, giving a spectral index of $`-0.9_{-2.2}^{+2.8}`$, again consistent with both free-free and spinning dust. Note that this value was obtained from the weighted average of three pairs of numbers shown in Table 5, where each pair indicated a negative spectral index. The spectral index obtained simultaneously for the component correlated to the Haslam map is consistent within 1 $`\sigma `$ with the synchrotron spectral index.
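The spectral indices quoted in this section follow from the ratio of the correlation amplitudes at the two frequencies; a sketch of the calculation with simple first-order (symmetric) error propagation, noting that the asymmetric limits quoted in the text come from a more careful treatment:

```python
import math

def spectral_index(a10, s10, a15, s15, nu10=10.0, nu15=15.0):
    """beta such that a(nu) ~ nu**beta, from amplitudes a +/- s at two
    frequencies, with first-order error propagation."""
    lr = math.log(nu15 / nu10)
    beta = math.log(a15 / a10) / lr
    sigma = math.hypot(s10 / a10, s15 / a15) / lr
    return beta, sigma

# Weighted means and rms quoted above: 313(172) and 219(83) muK/(MJy sr^-1).
beta, sigma = spectral_index(313.0, 172.0, 219.0, 83.0)
assert abs(beta - (-0.88)) < 0.01   # magnitude 0.9, as quoted in the text
```

The large fractional errors on the amplitudes translate directly into the wide index range quoted, which is why both free-free and spinning dust remain consistent with this split of the data.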
Given that the dust templates correlate with the data only in a small part of the Tenerife patch above $`b=20^{}`$, we further test how physically meaningful the detection is. This is important because the spectral index of a component in this method is inferred by assuming that the difference in the correlation amplitude between the frequencies is solely due to the change of emissivity of a single physical component and is not due to a spurious change in the strength of the correlation. ## 4 Sky rotations Another way to test for systematic errors in the method is to take random samples from the Galactic templates themselves as ‘Monte Carlo’ simulations. The advantage is that we can test against chance correlations with typical structure in the templates without having to understand the templates to a degree that would allow us to model this structure. Since the rise of the signal towards the Galactic plane clearly is a strong feature in the data and high Galactic latitude correlations are at best weak, we correlate to rotated / flipped maps in the northern and southern hemispheres, which all have the same orientation relative to the Galactic plane. Figure 3 shows $`\widehat{a}`$ and its significance for 10 and 15 GHz, for the $`b=20^{}`$ Galactic cut, the Galactic centre region of all declination strips taken together, and the spectral indices derived from these values. We find that the real patch does not correlate most significantly at either frequency. Given the number of patches that correlate more significantly, we conclude that the real correlation is at most typical and only due to the Galactic plane signal. Further, we see that the real 15 GHz correlation, although less significant than the neighbouring points, has a higher value of $`\widehat{a}`$. It seems that the real dust patch, although it does not represent the observed 15 GHz structure particularly well, happens to produce a high correlation amplitude.
Indeed the real dust patch shows no strong features (low intensity rms), which explains the weak correlation and results in an overestimate of $`\widehat{a}`$ at 15 GHz. An interesting point is that the mean spectral index, which could be interpreted as the spectral index derived from correlations with typical Galactic plane dust signals (correlations about as significant as the real correlation), has a value of $`-1.5\pm 1.2`$, well consistent with free-free emission. The $`rms`$ variation in the value of the correlation coefficient at 10 and 15 GHz can be taken as an estimate of the error in $`\widehat{a}`$ for the real patch. The error estimates obtained in this way range from about $`20\%`$ for $`a`$ at both 10 and 15 GHz for the $`b>0`$ Galactic cut, to $`64\%`$ and $`30\%`$ at 10 and 15 GHz respectively for the Galactic centre regions in the $`b>20`$ Galactic cut. As expected, the errors from sky rotation increase with decreasing signal and size of the region and are very large for the individual declinations. Note that the lower Galactic cuts give the highest correlation with the highest significance for the real patch, showing that the correlations there are not just due to aligned structure of the typical Galactic plane. The error estimates presented in this section are similar whether we include or exclude the rotated patches near the Galactic centre and anti-centre, where significantly different structure could be expected, though in the first case errors increase slightly for both 10 and 15 GHz. Henceforth we use results corresponding to the second case. Since we have strong indication that the correlations between dust and the Tenerife data result only from the Galactic plane signal, which will be similarly present in many physical components of Galactic emission, we further investigate this particular feature of spatial distribution by modelling it in the next section.
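The rotation test itself can be sketched as follows: the same latitude strip, rotated in Galactic longitude, provides ‘Monte Carlo’ patches whose correlation amplitudes measure the chance-alignment error (toy data and a uniform-weight fit are used here for simplicity; the real analysis uses the minimum-variance fit of Sec. 3.1):

```python
import random, statistics

def amplitude(x, y):
    """Least-squares amplitude of template x in data y (uniform weights)."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    xc = [v - mx for v in x]
    num = sum(a * (b - my) for a, b in zip(xc, y))
    return num / sum(a * a for a in xc)

def rotation_null_test(y, patches):
    """patches[0] is the real template patch; the rest are the same
    strip rotated in longitude.  Returns the real amplitude and the
    rms of the rotated amplitudes (the chance-alignment error)."""
    a_real = amplitude(patches[0], y)
    a_rot = [amplitude(p, y) for p in patches[1:]]
    return a_real, statistics.pstdev(a_rot)

# Toy check: data built from the real patch plus noise.
rng = random.Random(1)
patches = [[rng.gauss(0, 1) for _ in range(200)] for _ in range(36)]
y = [100.0 * p + rng.gauss(0, 20.0) for p in patches[0]]
a_real, rms = rotation_null_test(y, patches)
assert abs(a_real - 100.0) < 10.0   # real correlation recovered
assert a_real > 3.0 * rms           # well above chance alignments
```

In the paper's situation the real patch is *not* an outlier of the rotated distribution, which is what motivates interpreting the detection as a generic Galactic-plane alignment.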
## 5 Galaxy modelling A problem with the interpretation of the correlation results at different frequencies purely as a spectral dependence is that it neglects the possibility of variations in the shape of the Galaxy due to different components at different frequencies, and the resulting change of spatial alignment. For example, this method for obtaining the spectral index of correlated emission does not take into account the varying extent of the Galaxy in the different data sets and template maps, so that the errors that we are quoting in the above tables might be too small and could be systematically wrong. If the Galaxy has a different spatial extent at 10 GHz, 15 GHz and 3 THz ($`100\mu `$m), it will result in a higher or lower value for the overall correlation coefficient than is actually the case. We investigate this effect by modelling the spatial distribution of Galactic emission below. A significant correlation between the Tenerife data and the DIRBE dust template only occurs close to the Galactic plane where there is a rise in the overall Galactic signal. This rise may be frequency dependent if different physical components are present (for example free-free and synchrotron), so that care must be taken when comparing correlation results between different frequencies. We take a simple model of the Galaxy $$S(b)=S_0\mathrm{exp}\left(-\frac{4\mathrm{ln}2\,b^2}{FWHM^2}\right)$$ where $`b`$ is the Galactic latitude, $`S_0`$ is the amplitude and $`FWHM`$ is the full width half maximum of the Gaussian model for the Galactic plane. The value of $`S_0`$ is arbitrary as it will only scale the estimate for $`a`$ for the various correlation results and not affect the relative value between models with different width. Each model is then convolved with the appropriate experimental beam, and correlated with the data, while varying the FWHM.
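A minimal evaluation of this Gaussian plane model; the normalisation and FWHM below are arbitrary example values, since the amplitude only sets the scale of $`\widehat{a}`$:

```python
import math

def galaxy_model(b_deg, s0=1.0, fwhm_deg=15.0):
    """Gaussian model of the Galactic-plane signal vs. latitude b.
    The 15-degree FWHM is an arbitrary example value."""
    return s0 * math.exp(-4.0 * math.log(2.0) * b_deg**2 / fwhm_deg**2)

# Sanity checks: peak at b = 0 and half-power at b = FWHM/2.
assert galaxy_model(0.0) == 1.0
assert abs(galaxy_model(7.5) - 0.5) < 1e-12
```

Scanning the FWHM after convolving such profiles with the experimental beam is what produces the best-fit widths shown in Figure 4.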
Figure 4 shows the widths of the Galactic models which correlated best with the data and the Galactic templates for the two declination strips in the centre region which do show convincing correlation. Two strips showed no significant correlation between the data and the templates, and Dec $`=42.5^{}`$ has the strong feature at 10 GHz, which is also incompatible with the templates. It can be seen that the most significant correlation between the dust/Haslam template and the model occurs with Galactic models of different sizes (FWHM), which are also different from the sizes obtained for the 10 and 15GHz data. However, there seems to be a clear trend of decreasing width with increasing frequency, compatible with more synchrotron emission at 10 GHz and more dust-correlated structure at 15 GHz. The combined fit to all declination stripes simultaneously also shows this expected trend of increasing Galactic width with wavelength. If this systematic change is real, we might be able to fit for two distinct components with different width and spectral index. However, although our fits of a single Gaussian component are all acceptable, the quality of the data is not good enough to fit for two such components, if present, independently of templates. In Figure 4 the points for the dust correlations lie closer to the 15 GHz points than to the 10 GHz points, implying a better match of the Galactic shape for 15 GHz. When correlating the dust template to the data, the significance of any correlation will drop and $`\widehat{a}`$ will change systematically, because the structure at each frequency is different. When correcting for this mismatch, assuming that the correlated component we are looking for has identical appearance at any frequency, we find that the correlation amplitudes for both 10 and 15 GHz shift upwards.
However, the 10 GHz amplitude, because of the larger difference in size with dust, gets a larger shift, which results in a drop of the spectral index for the dust-correlated emission by typically about 0.2. Taking this effect of mismatched structure into account for the Haslam correlations results in a less steep spectral index, which is more consistent with synchrotron emission. When we subtract the best fitting Galactic models from the data and the templates, no correlations remain between the residuals, neither between the data and the templates nor between the templates, showing again that any correlation is only due to the Galactic plane signal, which we are able to model. Figure 5 shows these best fit models for the 10 and 15GHz data, for Dec 42.5, and the residuals. The fit achieved by the single component Gaussian model is indeed good, and in fact better than the joint correlation fit, which we show for comparison. Further, we find that since the joint fit to the 10 GHz data is rather poor in terms of goodness of fit, synchrotron and free-free components with the expected spectral indices (-2.8 and -2.1) can be fitted with a similar goodness of fit, based on the reduced $`\chi ^2`$. When the same is done for the other declinations, the goodness of fit is generally better for both the joint fit and the model fitting. Further, in the other declinations we also generally find equally acceptable fits for a dust-correlated component which is constrained to give a free-free spectral index. ## 6 Template errors Another as-yet unaccounted-for source of error is an intrinsic error in the dust template itself, which can be instrument noise or data reduction errors due to point source subtraction etc., but also systematic variations in the dust emissivity, which might not be followed in the same way by the components at 10 and 15 GHz. Note that the DIRBE maps at different frequencies are correlated with each other to no more than $`95\%`$.
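Before quantifying this for the real template, the direction of the effect of intrinsic template noise on the fitted amplitude (a classical regression-dilution bias) can be illustrated with a toy Monte Carlo; the 30% template error used here is exaggerated for visibility, and all other numbers are invented:

```python
import random

def fit_amplitude(x, y):
    """Least-squares amplitude of template x in data y."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    xc = [v - mx for v in x]
    return sum(a * (b - my) for a, b in zip(xc, y)) / sum(a * a for a in xc)

rng = random.Random(2)
n, a_true = 2000, 150.0
x_true = [rng.gauss(0, 1) for _ in range(n)]           # 'true' template
y = [a_true * x + rng.gauss(0, 30.0) for x in x_true]  # data

# Noise in the template itself biases the fitted amplitude low, by
# roughly 1/(1 + sigma_template^2 / var(template)).
x_noisy = [x + rng.gauss(0, 0.3) for x in x_true]
a_clean = fit_amplitude(x_true, y)
a_noisy = fit_amplitude(x_noisy, y)
assert a_noisy < a_clean   # systematic drop in the fitted a
```

Turned around, an assumed intrinsic template error implies that the true amplitudes are higher than the fitted ones, and by different factors at 10 and 15 GHz, which is the correction discussed for the real data below.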
If we assume that there is a $`5\%`$ Gaussian error in the flux at $`100\mu `$m due to these errors, we can model this effect to quantify any change that would occur in the correlation. To do this we added a $`5\%`$ error in the flux (on the Tenerife angular scales) to the maps and calculated the new values of $`a`$. We performed 300 Monte-Carlo simulations and found that there was a systematic drop in $`a`$ for the 10 and 15 GHz data by factors of $`1.6`$ and $`1.4`$ respectively, when the centre regions of all declinations were taken together for the $`b>20^{}`$ cut (similar values are obtained when the entire $`b>20^{}`$ region is taken). Turning this effect around for the real data, assuming that there is an intrinsic $`5\%`$ error present in the template map, we need to increase the values of $`a`$ given in the tables by these factors. The value of $`a`$ at 10 GHz has to be increased relative to 15 GHz, which results in a significantly steeper spectral index, again more consistent with free-free emission. However, it should be noted that we do not have error maps for the templates at present and thus in this current paper we have not attempted a quantitative evaluation of this effect. Presumably template errors, given that we have an idea of the likely causes, will be greater towards the Galactic plane. Since the only correlations we have to report come from pixels close to the Galactic centre, this will be an important source of systematic error. ## 7 Conclusions Spinning dust emission can be identified and discriminated from free-free emission if indications for a peak in the emission spectrum can be found at low microwave frequencies, which are probed by the Tenerife experiment. A rising spectrum between 10 and 15 GHz can be taken as a prediction of the spinning dust hypothesis, although the exact location of the emission peak depends on details of the model.
Tentative evidence for a turnover in the spectrum of the Galactic dust-correlated microwave component between 10 and 15 GHz has been presented by DOC99. We however find that the spectral index of dust-correlated emission is negative for all Galactic cuts except for the $`b>20^{}`$ cut. The Galactic signal, and with it the significance of the correlation, decreases with increasing Galactic latitude, and no correlations are detected in the higher Galactic latitude regions of $`b>30^{}`$. The variance in the value of $`\widehat{a}`$ between regions with different Galactic cuts is rather large. We further find that the correlation detected in the $`b>20^{}`$ region comes only from a small number of pixels at low Galactic latitude and towards the Galactic centre, where signal from the Galactic plane is present. This correlation shows a free-free like spectral index, whereas the rest of the region was found to be uncorrelated, or even significantly anticorrelated. Using sky rotations we show that the correlation we see at $`b>20^{}`$ is only an alignment of structure due to the rise of the Galactic plane signal. Employing a simple model for this structure we were able to demonstrate that the spatial distribution of Galactic emission is in fact different in the templates and in the data, giving rise to a systematic error. We were also able to show that this simple model of the Galaxy fits the data generally better than the templates. Further, modelling a Galactic free-free component, which correlates with the dust template, generally yields an equally acceptable fit to the data. Another significant systematic effect arises due to intrinsic errors in the templates, and all these effects cause a misleading increase in the inferred spectral index of the dust-correlated component between 10 and 15 GHz. A comparison of our results with other experiments is presented in Figure 7. 
Here the data points for the Tenerife experiment, obtained by taking a weighted average of all detections (all regions of all Galactic cuts, joint analyses) with errors taken from the sky rotations, correspond to values of $`180\pm 47`$ and $`123\pm 16`$ $`\mu `$K / (MJy sr<sup>-1</sup>) at 10 and 15 GHz respectively. Note that the 10GHz point on the plot is significantly higher than that plotted in DOC99 and the value quoted in our Table 2. This is because this region consists of parts that are correlated as well as parts that do not correlate or even anticorrelate, as in the case of the 10GHz data. Here, since we are focussing only on the regions that correlate (these regions are the same for both the 10 and 15 GHz data and have been found to lie close to the Galactic plane), the value is significantly higher. Note that the high values of $`a`$ that we get need not be in contradiction with the typical level of free-free allowed by $`H_\alpha `$ maps of other regions, as our detections are close to the Galactic plane, where the level of Galactic emission is expected to be high. The spectral index for dust-correlated emission as deduced from the 10 and 15 GHz points is less negative (by about $`1\sigma `$) than expected for free-free emission. The spectral index deduced simultaneously for synchrotron-correlated emission is systematically steeper than expected. This could be attributed to the effects that we have identified, which have all been shown to influence the correlation in the same direction. Note also that in this plot only the Tenerife and COBE DMR points, at different frequencies, each represent data which probe the same angular scales and were taken with the same sky coverage.
Inferring a spectral index by comparing different experiments assumes that the Galactic component is traced by a given fixed template and does not depend on parameters which vary over the sky, but our analysis of the Tenerife data shows large variations of the Galactic correlation with sky region. With the present analysis of Tenerife data we are not able to make a firm claim about the origin of the dust-correlated component, since we do not find convincing support for a spinning dust component. This does not rule out the hypothesis, however, since environmental conditions or grain sizes, which affect the position of the spectral peak, could change systematically with location on the sky, particularly in the transition region between low and high Galactic latitude. The spatial variation in the correlation amplitude and the possibility of the presence of some spinning dust emission along with free-free emission need to be dealt with. The separation task is difficult for two reasons. First, neither the frequency dependence nor the spatial distribution of any of the Galactic components at low microwave frequencies is particularly well known. Further, we would expect these components to be correlated, since we find their templates to be correlated, at least at low Galactic latitudes. The combination of the results from the other experiments, shown in Figure 7, might not be strongly constraining but, nevertheless, does not give conclusive evidence for spinning dust emission either, without an expectation of the amplitude of free-free emission based on dust-correlated H<sub>α</sub> emission. In order to make more reliable inferences, a pixel by pixel separation of components would have to be performed, using the Maximum Entropy method for example. In a forthcoming paper we shall perform such a separation incorporating information about spinning dust.
Also, including other data at frequencies lower than 10 GHz and adding other templates of Galactic emission such as the $`H_\alpha `$ maps from the Wisconsin H-Alpha mapper (Tufte, Reynolds and Haffner 1998) would be useful. ## Acknowledgements We wish to thank all the people involved in taking the Tenerife data set, and Juan Francisco Macias-Perez for useful comments. PM acknowledges financial support from the Cambridge Commonwealth Trust. AWJ acknowledges King’s College, Cambridge for support in the form of a Research Fellowship. RK acknowledges support from an EU Marie Curie Fellowship.
# Fractal index, central charge and fractons ## Abstract We introduce the notion of fractal index associated with the universal class $`h`$ of particles or quasiparticles, termed fractons, which obey specific fractal statistics. A connection between fractons and conformal field theory (CFT) quasiparticles is established taking into account the central charge $`c[\nu ]`$ and the particle-hole duality $`\nu \leftrightarrow \frac{1}{\nu }`$, for integer values $`\nu `$ of the statistical parameter. In this way, we derive the Fermi velocity in terms of the central charge as $`v\sim \frac{c[\nu ]}{\nu +1}`$. The Hausdorff dimension $`h`$ which labels the universal classes of particles and the conformal anomaly are therefore related. Following another route, we also establish a connection between the Rogers dilogarithm function, the Farey series of rational numbers and the Hausdorff dimension. We consider the conformal field theory (CFT) quasiparticles (edge excitations) in connection with the concept of fractons introduced in. These excitations have been considered at the edge of quantum Hall systems, which in the fractional regime assume the form of a chiral Luttinger liquid. Beyond this, conformal field theories have been exploited in a variety of contexts, including statistical mechanics at the critical point, field theories, string theory, and in various branches of mathematics. In this Letter, we suppose that the fractal statistics obeyed by fractons are shared by CFT-quasiparticles. Thus, the central charge, a model-dependent constant, is related to the universal class $`h`$ of the fractons.
We define the fractal index associated with these classes as $$i_f[h]=\frac{6}{\pi ^2}\int _{\mathrm{\infty }(T=0)}^{1(T=\mathrm{\infty })}\frac{d\xi }{\xi }\mathrm{ln}\left\{\mathrm{\Theta }[𝒴(\xi )]\right\}$$ (1) and after the change of variable $`\xi =x^{-1}`$, we obtain $$i_f[h]=-\frac{6}{\pi ^2}\int _0^1\frac{dx}{x}\mathrm{ln}\left\{\mathrm{\Theta }[𝒴(x^{-1})]\right\}$$ (2) where in Eq.$`(\text{1})`$ $`\mathrm{\Theta }[𝒴]={\displaystyle \frac{𝒴[\xi ]-2}{𝒴[\xi ]-1}}`$ (3) is the single-particle partition function of the universal class $`h`$ and $`\xi =\mathrm{exp}\left\{(ϵ-\mu )/KT\right\}`$ has the usual definition. The function $`𝒴[\xi ]`$ satisfies the equation $`\xi =\left\{𝒴[\xi ]-1\right\}^{h-1}\left\{𝒴[\xi ]-2\right\}^{2-h}.`$ (4) We note here that the general solution of the algebraic equation derived from this last one is of the form $`𝒴_h[\xi ]=f[\xi ]+\stackrel{~}{h}`$ or $`𝒴_{\stackrel{~}{h}}[\xi ]=g[\xi ]+h,`$ where $`\stackrel{~}{h}=3-h`$ is a duality symmetry between the classes ( this means that fermions ($`h=1`$) and bosons ($`h=2`$) are dual objects; as a result we have a fractal supersymmetry, since for the particle with spin $`s`$ within the class $`h`$, its dual $`s+\frac{1}{2}`$ is within the class $`\stackrel{~}{h}`$ ). The functions $`f[\xi ]`$ and $`g[\xi ]`$, at least for third- and fourth-degree algebraic equations, differ by plus and minus signs in some terms of their expressions. The particles within each class $`h`$ satisfy a specific fractal statistics ( in fact, we have here fractal functions as discussed in ) $`n`$ $`=`$ $`\xi {\displaystyle \frac{\partial }{\partial \xi }}\mathrm{ln}\mathrm{\Theta }[𝒴]`$ (5) $`=`$ $`{\displaystyle \frac{1}{𝒴[\xi ]-h}}`$ (6) and the fractal parameter ( this parameter describes the properties of the path, a fractal curve, of the quantum-mechanical particle )
(or Hausdorff dimension) $`h`$, defined in the interval $`1`$$`<`$$`h`$$`<`$$`2`$, is related to the spin-statistics relation $`\nu =2s`$ through the fractal spectrum $`h-1=1-\nu ,\mathrm{\hspace{0.33em}\hspace{0.33em}}0<\nu <1;\mathrm{\hspace{0.33em}\hspace{0.33em}}h-1=\nu -1,\mathrm{\hspace{0.33em}\hspace{0.33em}}1<\nu <2;`$ (7) $`etc.`$ (8) For $`h=1`$ we have fermions, with $`𝒴[\xi ]=\xi +2`$, $`\mathrm{\Theta }[1]=\frac{\xi }{\xi +1}`$ and $`i_f[1]=\frac{6}{\pi ^2}\int _{\mathrm{\infty }}^1\frac{d\xi }{\xi }\mathrm{ln}\left\{\frac{\xi }{\xi +1}\right\}=\frac{1}{2}`$. For $`h=2`$ we have bosons, with $`𝒴[\xi ]=\xi +1`$, $`\mathrm{\Theta }[2]=\frac{\xi -1}{\xi }`$ and $`i_f[2]=\frac{6}{\pi ^2}\int _{\mathrm{\infty }}^1\frac{d\xi }{\xi }\mathrm{ln}\left\{\frac{\xi -1}{\xi }\right\}=1`$. On the other hand, for the universal class $`h=\frac{3}{2}`$, we have fractons with $`𝒴[\xi ]=\frac{3}{2}+\sqrt{\frac{1}{4}+\xi ^2}`$, $`\mathrm{\Theta }\left[\frac{3}{2}\right]=\frac{\sqrt{1+4\xi ^2}-1}{\sqrt{1+4\xi ^2}+1}`$ and $`i_f\left[\frac{3}{2}\right]=\frac{6}{\pi ^2}\int _{\mathrm{\infty }}^1\frac{d\xi }{\xi }\mathrm{ln}\left\{\frac{\sqrt{1+4\xi ^2}-1}{\sqrt{1+4\xi ^2}+1}\right\}=\frac{3}{5}`$. The distribution function for each class $`h`$ above, as we can check, is given by $`n[1]`$ $`=`$ $`{\displaystyle \frac{1}{\xi +1}},`$ (9) $`n[2]`$ $`=`$ $`{\displaystyle \frac{1}{\xi -1}},`$ (10) $`n\left[{\displaystyle \frac{3}{2}}\right]`$ $`=`$ $`{\displaystyle \frac{1}{\sqrt{\frac{1}{4}+\xi ^2}}},`$ (11) i.e. we have the Fermi-Dirac distribution, the Bose-Einstein distribution and the fracton distribution of the universal class $`h=\frac{3}{2}`$, respectively. Thus, our formulation generalizes in a natural way the fermionic and bosonic distributions for particles assuming rational or irrational values for the spin quantum number $`s`$. In this way, our approach can be understood as a quantum-geometrical description of the statistical laws of Nature.
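These closed forms can be cross-checked numerically. The sketch below (Python, an illustration rather than part of the original paper) solves Eq. (4) for $`𝒴[\xi ]`$ by bisection, evaluates the distribution $`n=1/(𝒴-h)`$ of Eqs. (5)-(6), and computes the fractal-index integral by Gauss-Legendre quadrature after the substitution $`\xi =x^{-1}`$; the sign convention assumes the integral runs from $`\xi =\mathrm{\infty }`$ down to $`\xi =1`$, with the minus signs of Eqs. (1)-(4) restored.

```python
import numpy as np

def solve_Y(h, xi):
    """Solve xi = (Y-1)**(h-1) * (Y-2)**(2-h) for Y > 2 by bisection (Eq. 4)."""
    lo, hi = 2.0 + 1e-12, xi + 3.0   # Y ~ xi + (3 - h) at large xi, so xi + 3 bounds it
    target = np.log(xi)
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        val = (h - 1.0) * np.log(mid - 1.0) + (2.0 - h) * np.log(mid - 2.0)
        if val < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def occupation(h, xi):
    """Fractal distribution n = 1/(Y[xi] - h), Eqs. (5)-(6)."""
    return 1.0 / (solve_Y(h, xi) - h)

def fractal_index(h, nodes=400):
    """i_f[h] = -(6/pi^2) int_0^1 (dx/x) ln Theta[Y(1/x)], Theta = (Y-2)/(Y-1)."""
    x, w = np.polynomial.legendre.leggauss(nodes)
    x, w = 0.5 * (x + 1.0), 0.5 * w      # map (-1, 1) -> (0, 1)
    acc = 0.0
    for xk, wk in zip(x, w):
        Y = solve_Y(h, 1.0 / xk)
        acc -= wk * np.log((Y - 2.0) / (Y - 1.0)) / xk
    return 6.0 / np.pi**2 * acc
```

With these conventions one recovers $`i_f[1]=1/2`$, $`i_f[3/2]=3/5`$ and $`i_f[2]=1`$, and the occupation reduces to the Fermi-Dirac, fracton and Bose-Einstein forms quoted above.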
This means that Eq.(5) captures the observation about the fractal characteristic of the quantum-mechanical path, which reflects the Heisenberg uncertainty principle. The fractal index as defined has a connection with the central charge or conformal anomaly $`c[\nu ]`$, a dimensionless number which characterizes conformal field theories in two dimensions. In this way, we verify that the conformal anomaly is associated with universality classes, i.e. universal classes $`h`$ of particles. Now, we consider the particle-hole duality $`\nu \rightarrow \frac{1}{\nu }`$ for integer value $`\nu `$ of the statistical parameter in connection with the universal class $`h`$. For bosons and fermions, we have $`\{0,2,4,6,\mathrm{\dots }\}_{h=2}`$ and $`\{1,3,5,7,\mathrm{\dots }\}_{h=1}`$, such that the central charge for $`\nu `$ even is defined by $`c[\nu ]=i_f[h,\nu ]-i_f[h,{\displaystyle \frac{1}{\nu }}]`$ (12) and for $`\nu `$ odd is defined by $`c[\nu ]=2\times i_f[h,\nu ]-i_f[h,{\displaystyle \frac{1}{\nu }}],`$ (13) where $`i_f[h,\nu ]`$ means the fractal index of the universal class $`h`$ which contains the statistical parameter $`\nu =2s`$, or the particles with distinct spin values which obey a specific fractal statistics.
We assume that the fractal index $`i_f[h,\mathrm{\infty }]=0`$ ( the class $`h`$ is undetermined ) and we obtain, for example, the results $`c[0]=i_f[2,0]-i_f[h,\mathrm{\infty }]=1;`$ (14) $`c[1]=2\times i_f[1,1]-i_f[1,1]={\displaystyle \frac{1}{2}};`$ (15) $`c[2]=i_f[2,2]-i_f[{\displaystyle \frac{3}{2}},{\displaystyle \frac{1}{2}}]=1-{\displaystyle \frac{3}{5}}={\displaystyle \frac{2}{5}};`$ (16) $`c[3]=2\times i_f[1,3]-i_f[{\displaystyle \frac{5}{3}},{\displaystyle \frac{1}{3}}]=1-0.656=0.344;`$ (17) $`etc,`$ (18) where the fractal index for $`h=\frac{5}{3}`$ is obtained from $`i_f\left[{\displaystyle \frac{5}{3}}\right]={\displaystyle \frac{6}{\pi ^2}}{\displaystyle \int _{\mathrm{\infty }}^1}{\displaystyle \frac{d\xi }{\xi }}`$ (19) $`\times \mathrm{ln}\left\{{\displaystyle \frac{\sqrt[3]{\frac{1}{27}+\frac{\xi ^3}{2}+\frac{1}{18}\sqrt{12\xi ^3+81\xi ^6}}+\frac{1}{9\sqrt[3]{\frac{1}{27}+\frac{\xi ^3}{2}+\frac{1}{18}\sqrt{12\xi ^3+81\xi ^6}}}-\frac{2}{3}}{\sqrt[3]{\frac{1}{27}+\frac{\xi ^3}{2}+\frac{1}{18}\sqrt{12\xi ^3+81\xi ^6}}+\frac{1}{9\sqrt[3]{\frac{1}{27}+\frac{\xi ^3}{2}+\frac{1}{18}\sqrt{12\xi ^3+81\xi ^6}}}+\frac{1}{3}}}\right\}`$ (20) $`=0.656`$ (21) and for its dual we have $`i_f\left[{\displaystyle \frac{4}{3}}\right]={\displaystyle \frac{6}{\pi ^2}}{\displaystyle \int _{\mathrm{\infty }}^1}{\displaystyle \frac{d\xi }{\xi }}`$ (22) $`\times \mathrm{ln}\left\{{\displaystyle \frac{\sqrt[3]{-\frac{1}{27}+\frac{\xi ^3}{2}+\frac{1}{18}\sqrt{-12\xi ^3+81\xi ^6}}+\frac{1}{9\sqrt[3]{-\frac{1}{27}+\frac{\xi ^3}{2}+\frac{1}{18}\sqrt{-12\xi ^3+81\xi ^6}}}-\frac{1}{3}}{\sqrt[3]{-\frac{1}{27}+\frac{\xi ^3}{2}+\frac{1}{18}\sqrt{-12\xi ^3+81\xi ^6}}+\frac{1}{9\sqrt[3]{-\frac{1}{27}+\frac{\xi ^3}{2}+\frac{1}{18}\sqrt{-12\xi ^3+81\xi ^6}}}+\frac{2}{3}}}\right\}`$ (23) $`=0.56.`$ (24) From Table I we can observe the correlation between the classes $`h`$ of particles and their fractal index, so our approach manifests a robust consistency in accordance with the unitary $`c[\nu ]`$$`<`$$`1`$ representations.
Table I | $`h`$ | $`i_f[h]`$ | Denomination | $`\nu `$ | $`s`$ | $`c[\nu ]=i_f[h,\nu ]`$ | | --- | --- | --- | --- | --- | --- | | $`2`$ | $`1`$ | bosons | $`0`$ | $`0`$ | $`1`$ | | $`\mathrm{\dots }`$ | $`\mathrm{\dots }`$ | fractons | $`\mathrm{\dots }`$ | $`\mathrm{\dots }`$ | $`\mathrm{\dots }`$ | | $`\frac{5}{3}`$ | $`0.656`$ | fractons | $`\frac{1}{3}`$ | $`\frac{1}{6}`$ | $`0.656`$ | | $`\mathrm{\dots }`$ | $`\mathrm{\dots }`$ | fractons | $`\mathrm{\dots }`$ | $`\mathrm{\dots }`$ | $`\mathrm{\dots }`$ | | $`\frac{3}{2}`$ | $`0.6`$ | fractons | $`\frac{1}{2}`$ | $`\frac{1}{4}`$ | $`0.6`$ | | $`\mathrm{\dots }`$ | $`\mathrm{\dots }`$ | fractons | $`\mathrm{\dots }`$ | $`\mathrm{\dots }`$ | $`\mathrm{\dots }`$ | | $`\frac{4}{3}`$ | $`0.56`$ | fractons | $`\frac{2}{3}`$ | $`\frac{1}{3}`$ | $`0.56`$ | | $`\mathrm{\dots }`$ | $`\mathrm{\dots }`$ | fractons | $`\mathrm{\dots }`$ | $`\mathrm{\dots }`$ | $`\mathrm{\dots }`$ | | $`1`$ | $`0.5`$ | fermions | $`1`$ | $`\frac{1}{2}`$ | $`0.5`$ | Therefore, since $`h`$ is defined within the interval $`1`$$`<`$$`h`$$`<`$$`2`$, the corresponding fractal index lies in the interval $`0.5`$$`<`$$`i_f[h]`$$`<`$$`1`$. However, the central charge $`c[\nu ]`$ can assume values less than $`0.5`$. Thus, we distinguish two concepts of central charge: one is related to the universal classes $`h`$ and the other is related to the particles which belong to these classes. For the statistical parameter in the interval $`0`$$`<`$$`\nu `$$`<`$$`1`$ (the first elements of each class $`h`$), $`c[\nu ]=i_f[h,\nu ]`$; otherwise we obtain different values. Alternatively, the central charge $`c[\nu ]`$ can be obtained using the Rogers dilogarithm function, i.e. $$c[\nu ]=\frac{L[x^\nu ]}{L[1]},$$ (25) with $`x^\nu =1-x`$, $`\nu =0,1,2,3,etc.`$ and $$L[x]=-\frac{1}{2}\int _0^x\left\{\frac{\mathrm{ln}(1-y)}{y}+\frac{\mathrm{ln}y}{1-y}\right\}dy,\mathrm{\hspace{0.33em}}0<x<1.$$ (26) Thus, taking into account Eqs.(12,13), we can extract the sequence of fractal indexes ( Tables II and III ).
Table II | $`h`$ | $`\nu `$ | $`s`$ | $`i_f[h]=c[\nu ]`$ | $`h`$ | $`\nu `$ | $`s`$ | $`c[\nu ]`$ | | --- | --- | --- | --- | --- | --- | --- | --- | | $`2`$ | $`0`$ | $`0`$ | $`1`$ | $`2`$ | $`0`$ | $`0`$ | $`1`$ | | $`\frac{39}{20}`$ | $`\frac{1}{20}`$ | $`\frac{1}{40}`$ | $`0.858`$ | $`2`$ | $`20`$ | $`10`$ | $`0.142`$ | | $`\frac{37}{19}`$ | $`\frac{1}{19}`$ | $`\frac{1}{38}`$ | $`0.854`$ | $`1`$ | $`19`$ | $`\frac{19}{2}`$ | $`0.146`$ | | $`\frac{35}{18}`$ | $`\frac{1}{18}`$ | $`\frac{1}{36}`$ | $`0.849`$ | $`2`$ | $`18`$ | $`9`$ | $`0.151`$ | | $`\frac{33}{17}`$ | $`\frac{1}{17}`$ | $`\frac{1}{34}`$ | $`0.845`$ | $`1`$ | $`17`$ | $`\frac{17}{2}`$ | $`0.155`$ | | $`\frac{31}{16}`$ | $`\frac{1}{16}`$ | $`\frac{1}{32}`$ | $`0.84`$ | $`2`$ | $`16`$ | $`8`$ | $`0.16`$ | | $`\frac{29}{15}`$ | $`\frac{1}{15}`$ | $`\frac{1}{30}`$ | $`0.834`$ | $`1`$ | $`15`$ | $`\frac{15}{2}`$ | $`0.166`$ | | $`\frac{27}{14}`$ | $`\frac{1}{14}`$ | $`\frac{1}{28}`$ | $`0.829`$ | $`2`$ | $`14`$ | $`7`$ | $`0.171`$ | | $`\frac{25}{13}`$ | $`\frac{1}{13}`$ | $`\frac{1}{26}`$ | $`0.822`$ | $`1`$ | $`13`$ | $`\frac{13}{2}`$ | $`0.178`$ | | $`\frac{23}{12}`$ | $`\frac{1}{12}`$ | $`\frac{1}{24}`$ | $`0.814`$ | $`2`$ | $`12`$ | $`6`$ | $`0.186`$ | | $`\frac{21}{11}`$ | $`\frac{1}{11}`$ | $`\frac{1}{22}`$ | $`0.806`$ | $`1`$ | $`11`$ | $`\frac{11}{2}`$ | $`0.194`$ | Table III | $`h`$ | $`\nu `$ | $`s`$ | $`i_f[h]=c[\nu ]`$ | $`h`$ | $`\nu `$ | $`s`$ | $`c[\nu ]`$ | | --- | --- | --- | --- | --- | --- | --- | --- | | $`\frac{19}{10}`$ | $`\frac{1}{10}`$ | $`\frac{1}{20}`$ | $`0.797`$ | $`2`$ | $`10`$ | $`5`$ | $`0.203`$ | | $`\frac{17}{9}`$ | $`\frac{1}{9}`$ | $`\frac{1}{18}`$ | $`0.786`$ | $`1`$ | $`9`$ | $`\frac{9}{2}`$ | $`0.214`$ | | $`\frac{15}{8}`$ | $`\frac{1}{8}`$ | $`\frac{1}{16}`$ | $`0.774`$ | $`2`$ | $`8`$ | $`4`$ | $`0.226`$ | | $`\frac{13}{7}`$ | $`\frac{1}{7}`$ | $`\frac{1}{14}`$ | $`0.759`$ | $`1`$ | $`7`$ | $`\frac{7}{2}`$ | $`0.241`$ | | $`\frac{11}{6}`$ | $`\frac{1}{6}`$ | 
$`\frac{1}{12}`$ | $`0.742`$ | $`2`$ | $`6`$ | $`3`$ | $`0.258`$ | | $`\frac{9}{5}`$ | $`\frac{1}{5}`$ | $`\frac{1}{10}`$ | $`0.721`$ | $`1`$ | $`5`$ | $`\frac{5}{2}`$ | $`0.279`$ | | $`\frac{7}{4}`$ | $`\frac{1}{4}`$ | $`\frac{1}{8}`$ | $`0.693`$ | $`2`$ | $`4`$ | $`2`$ | $`0.307`$ | | $`\frac{5}{3}`$ | $`\frac{1}{3}`$ | $`\frac{1}{6}`$ | $`0.656`$ | $`1`$ | $`3`$ | $`\frac{3}{2}`$ | $`0.344`$ | | $`\frac{3}{2}`$ | $`\frac{1}{2}`$ | $`\frac{1}{4}`$ | $`0.6`$ | $`2`$ | $`2`$ | $`1`$ | $`0.4`$ | | $`1`$ | $`1`$ | $`\frac{1}{2}`$ | $`0.5`$ | $`1`$ | $`1`$ | $`\frac{1}{2}`$ | $`0.5`$ | On the one way, we can estimate the fractal index for the dual classes of $`h`$ with rational values, considering a fitting of the graphics $`i_f[h]\times h`$ and $`c[\nu ]\times \nu `$, plus the observation that the $`i_f[h,\nu ]`$ diminishes in the sequence $`i_f[h,\nu ]`$ $`=`$ $`i_f[{\displaystyle \frac{3}{2}},{\displaystyle \frac{1}{2}}],i_f[{\displaystyle \frac{4}{3}},{\displaystyle \frac{2}{3}}],i_f[{\displaystyle \frac{5}{4}},{\displaystyle \frac{3}{4}}],`$ (33) $`i_f[{\displaystyle \frac{6}{5}},{\displaystyle \frac{4}{5}}],i_f[{\displaystyle \frac{7}{6}},{\displaystyle \frac{5}{6}}],i_f[{\displaystyle \frac{8}{7}},{\displaystyle \frac{6}{7}}],`$ $`i_f[{\displaystyle \frac{9}{8}},{\displaystyle \frac{7}{8}}],i_f[{\displaystyle \frac{10}{9}},{\displaystyle \frac{8}{9}}],i_f[{\displaystyle \frac{11}{10}},{\displaystyle \frac{9}{10}}],`$ $`i_f[{\displaystyle \frac{12}{11}},{\displaystyle \frac{10}{11}}],i_f[{\displaystyle \frac{13}{12}},{\displaystyle \frac{11}{12}}],i_f[{\displaystyle \frac{14}{13}},{\displaystyle \frac{12}{13}}],`$ $`i_f[{\displaystyle \frac{15}{14}},{\displaystyle \frac{13}{14}}],i_f[{\displaystyle \frac{16}{15}},{\displaystyle \frac{14}{15}}],i_f[{\displaystyle \frac{17}{16}},{\displaystyle \frac{15}{16}}],`$ $`i_f[{\displaystyle \frac{19}{18}},{\displaystyle \frac{17}{18}}],i_f[{\displaystyle \frac{20}{19}},{\displaystyle \frac{18}{19}}],i_f[{\displaystyle 
\frac{21}{20}},{\displaystyle \frac{19}{20}}],`$ $`i_f[1,1].`$ This way, we observe that our formulation for the universal class $`h`$ of particles with any value of spin $`s`$ establishes a connection between the Hausdorff dimension $`h`$ and the central charge $`c[\nu ]`$, in a manner unsuspected till now. Besides this, we have obtained a connection between $`h`$ and the Rogers dilogarithm function, through the fractal index defined in terms of the partition function associated with the universal class $`h`$ of particles. Thus, considering Eqs.(12, 13) and Eq.(25), we have $`{\displaystyle \frac{L[x^\nu ]}{L[1]}}`$ $`=`$ $`i_f[h,\nu ]-i_f[h,{\displaystyle \frac{1}{\nu }}],\nu =0,2,4,etc.`$ (34) $`{\displaystyle \frac{L[x^\nu ]}{L[1]}}`$ $`=`$ $`2\times i_f[h,\nu ]-i_f[h,{\displaystyle \frac{1}{\nu }}],\nu =1,3,5,etc.`$ (35) Also, in we have established a connection between the fractal parameter $`h`$ and the Farey series of rational numbers; therefore, once the classes $`h`$ satisfy all the properties of these series, we have an infinite collection of them. In this sense, we clearly establish a connection between number theory and the Rogers dilogarithm function. Given that the fractal parameter is an irreducible fraction $`h=\frac{p}{q}`$, the classes satisfy the properties P1. If $`h_1=\frac{p_1}{q_1}`$ and $`h_2=\frac{p_2}{q_2}`$ are two consecutive fractions $`\frac{p_1}{q_1}`$$`>`$$`\frac{p_2}{q_2}`$, then $`|p_2q_1-q_2p_1|=1`$. P2. If $`\frac{p_1}{q_1}`$, $`\frac{p_2}{q_2}`$, $`\frac{p_3}{q_3}`$ are three consecutive fractions $`\frac{p_1}{q_1}`$$`>`$$`\frac{p_2}{q_2}`$$`>`$$`\frac{p_3}{q_3}`$, then $`\frac{p_2}{q_2}=\frac{p_1+p_3}{q_1+q_3}`$. P3. If $`\frac{p_1}{q_1}`$ and $`\frac{p_2}{q_2}`$ are consecutive fractions in the same sequence, then among all fractions between the two, $`\frac{p_1+p_2}{q_1+q_2}`$ is the unique reduced fraction with the smallest denominator.
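The dilogarithm route of Eq. (25) can be checked directly: for each $`\nu `$ one solves $`x^\nu =1-x`$ numerically and evaluates the Rogers dilogarithm, here via the identity $`L(x)=\mathrm{Li}_2(x)+\frac{1}{2}\mathrm{ln}x\mathrm{ln}(1-x)`$, whose derivative is exactly the integrand of Eq. (26). A Python sketch (illustrative, not from the paper):

```python
import math

def rogers_L(x):
    """Rogers dilogarithm L(x) = Li2(x) + (1/2) ln(x) ln(1-x), for 0 < x < 1."""
    li2 = sum(x**k / k**2 for k in range(1, 400))   # Li2 by its power series
    return li2 + 0.5 * math.log(x) * math.log(1.0 - x)

def central_charge(nu):
    """c[nu] = L(x**nu)/L(1), where x solves x**nu = 1 - x (Eq. 25)."""
    if nu == 0:
        return 1.0          # x -> 0 and x**0 = 1, so c[0] = L(1)/L(1)
    lo, hi = 0.0, 1.0       # bisection for the root of x**nu + x - 1 = 0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if mid**nu + mid < 1.0:
            lo = mid
        else:
            hi = mid
    x = 0.5 * (lo + hi)
    return rogers_L(1.0 - x) / (math.pi**2 / 6.0)   # x**nu = 1 - x by construction
```

This reproduces $`c[1]=1/2`$, $`c[2]=2/5`$ and $`c[3]=0.344`$; interestingly, evaluating the same formula at $`\nu =1/2`$ returns $`3/5`$, the fractal index of the class $`h=3/2`$ ($`L(1/\varphi )=\pi ^2/10`$ at the golden ratio $`\varphi `$).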
For example, consider the Farey series of order 6, denoted by the $`\nu `$ sequence $`(h,\nu )`$ $`=`$ $`({\displaystyle \frac{11}{6}},{\displaystyle \frac{1}{6}})({\displaystyle \frac{9}{5}},{\displaystyle \frac{1}{5}})({\displaystyle \frac{7}{4}},{\displaystyle \frac{1}{4}})({\displaystyle \frac{5}{3}},{\displaystyle \frac{1}{3}})`$ (38) $`({\displaystyle \frac{8}{5}},{\displaystyle \frac{2}{5}})({\displaystyle \frac{3}{2}},{\displaystyle \frac{1}{2}})({\displaystyle \frac{7}{5}},{\displaystyle \frac{3}{5}})({\displaystyle \frac{4}{3}},{\displaystyle \frac{2}{3}})`$ $`({\displaystyle \frac{5}{4}},{\displaystyle \frac{3}{4}})({\displaystyle \frac{6}{5}},{\displaystyle \frac{4}{5}})({\displaystyle \frac{7}{6}},{\displaystyle \frac{5}{6}})\mathrm{\dots }.`$ Using the fractal spectrum ( Eq.7 ), we can obtain other sequences which satisfy the Farey properties, and for the classes $`h={\displaystyle \frac{11}{6}},{\displaystyle \frac{9}{5}},{\displaystyle \frac{7}{4}},{\displaystyle \frac{5}{3}},{\displaystyle \frac{8}{5}},{\displaystyle \frac{3}{2}},{\displaystyle \frac{7}{5}},{\displaystyle \frac{4}{3}},{\displaystyle \frac{5}{4}},{\displaystyle \frac{6}{5}},{\displaystyle \frac{7}{6}},\mathrm{\dots },`$ ( note that these have dual classes, $`\stackrel{~}{h}=3-h`$ ) we can calculate the fractal index taking into account the Rogers dilogarithm function or the partition function associated with each $`h`$.
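Properties P1 and P2 can be verified mechanically for the order-6 sequence of Eq. (38), together with the pairing $`h=2-\nu `$ from the fractal spectrum (Eq. 7). A small illustrative Python check:

```python
from fractions import Fraction

# nu-values of the order-6 sequence in Eq. (38); the paired classes are h = 2 - nu
nus = [Fraction(p, q) for p, q in
       [(1, 6), (1, 5), (1, 4), (1, 3), (2, 5), (1, 2),
        (3, 5), (2, 3), (3, 4), (4, 5), (5, 6)]]
hs = [2 - nu for nu in nus]

def neighbour_ok(a, b):
    # P1: consecutive Farey fractions satisfy |p2*q1 - q2*p1| = 1
    return abs(b.numerator * a.denominator - b.denominator * a.numerator) == 1

def mediant_ok(a, b, c):
    # P2: the middle of three consecutive terms equals the (reduced) mediant
    return b == Fraction(a.numerator + c.numerator, a.denominator + c.denominator)

p1_holds = all(neighbour_ok(a, b) for a, b in zip(nus, nus[1:]))
p2_holds = all(mediant_ok(nus[i - 1], nus[i], nus[i + 1])
               for i in range(1, len(nus) - 1))
```

Both properties hold for the $`\nu `$ sequence, and, since $`h=2-\nu `$ only reflects the fractions, the descending $`h`$ sequence inherits them as well.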
Now, in we also considered free fractons, and an equation of state at low temperatures was obtained: $$P=\frac{h\rho ^2}{2\gamma }+\gamma (KT)^2𝒞_1(h),$$ (39) where $`\gamma =\frac{m(\nu +1)}{4\pi \mathrm{\hbar }^2}`$ ( $`\mathrm{\hbar }`$ is the Planck constant ), $`\rho `$ is the particle density and $$𝒞_1(h)=\int _{1(T=0)}^{\mathrm{\infty }(T=\mathrm{\infty })}\frac{d𝒴}{(𝒴-1)(𝒴-2)}\mathrm{ln}\left\{\frac{𝒴-1}{𝒴-2}\right\}=\frac{\pi ^2}{6}.$$ (40) Thus, for the fracton systems we obtain the specific heat $`C`$ as $$\frac{C}{L^2}=\frac{m}{4\pi \mathrm{\hbar }^2}K^2T(\nu +1)\frac{\pi ^2}{3}.$$ (41) On the other hand, the specific heat of a conformal field theory is given by $$\frac{C}{L}=\frac{1}{2\pi \mathrm{\hbar }v}K^2T\frac{\pi ^2}{3}c[\nu ].$$ (42) Comparing the expressions, we obtain the Fermi velocity as $$v\propto \frac{c[\nu ]}{\nu +1},$$ (43) so for $`\nu =0`$, $`c[0]=1`$, $`v\propto 1`$; for $`\nu =1`$, $`c[1]=\frac{1}{2}`$, $`v\propto 0.25`$; for $`\nu =2`$, $`c[2]=\frac{2}{5}`$, $`v\propto 0.133`$; for $`\nu =3`$, $`c[3]=0.344`$, $`v\propto 0.086`$; etc. We observe that fractons are objects defined in 2+1 dimensions ( see for more details ). In summary, we have obtained a connection between fractons and CFT-quasiparticles. This was implemented with the notion of the fractal index associated with the universal class $`h`$ of the fractons. In this way, fractons and CFT-quasiparticles satisfy a specific fractal statistics. We have also obtained an expression for the Fermi velocity in terms of the conformal anomaly and the statistical parameter. A connection between the Rogers dilogarithm function, the Farey series of rational numbers and the Hausdorff dimension $`h`$ was also established. The idea of fractons as quasiparticles has been explored in the contexts of the fractional quantum Hall effect, Luttinger liquids and high-$`T_c`$ superconductivity. Finally, a connection between fractal statistics and black hole entropy was also exploited in, and a fractal-deformed Heisenberg algebra for each class of fractons was introduced in. ###### Acknowledgements.
We would like to thank an anonymous referee for the comments.
no-problem/0002/cond-mat0002395.html
ar5iv
text
# Magnetic Field Effect on the Supercurrent of an SNS junction ## Abstract In this paper we study the effect of a Zeeman field on the supercurrent of a mesoscopic SNS junction. It is shown that the supercurrent suppression is due to a redistribution of current-carrying states in energy space. A dramatic consequence is that (part of) the suppressed supercurrent can be recovered with a suitable non-equilibrium distribution of quasiparticles. PACS numbers: 74.50.+r, 74.80.Fp, 85.25.-j Recently, there has been a revival of interest in the proximity effect which occurs when normal metals (N) are placed in contact with superconductors (S). This is due to the availability of submicron fabrication technology and low temperatures. One of the main lessons learnt in the study of these hybrid structures is that, at sufficiently low temperatures, due to the suppression of inelastic scattering etc., it is essential to properly understand the contribution from individual energies. This applies to, e.g., the conductance between N and S in contact, as well as the supercurrent in an SNS junction. For the latter systems, it is in particular useful to understand the supercurrent being carried at each individual energy, i.e., the current-carrying density of states $`N_J(ϵ)`$. This quantity is the ordinary density of states weighted by the current that the states carry. The supercurrent is the integral over the energy $`ϵ`$ of the product of $`N_J(ϵ)`$ and the occupation number $`n(ϵ)`$. Here we consider again an SNS junction but with an applied magnetic field. It is well-known that a magnetic field in general suppresses the supercurrent. This can arise from two completely different mechanisms. First, it can be due to the coupling of the magnetic field to superconductivity via the vector potential. I shall not discuss this effect here. This effect can be suppressed in geometries where the area perpendicular to the magnetic field is sufficiently small.
The second mechanism is due to the Zeeman energy. Since pairing is singlet in s-wave superconductors, a physical picture commonly used is that the Zeeman field is a pair-breaking perturbation and hence the Cooper pair amplitude decays faster in space in the presence of the field. In a dirty normal metal with diffusion coefficient $`D`$, this decay length is of order $`\sqrt{D/h}`$ for an energy splitting $`h`$. For an SNS junction, the supercurrent thus decreases with field due to a reduction of coupling between the two superconductors. This, however, is not entirely the full picture. As will be shown below, the main effect of the Zeeman splitting is to shift the current-carrying density of states in energy space. This is analogous to the behavior of the ordinary density of states under $`h`$ . The pairing correlation between the two superconductors remains long-ranged at appropriate energies. The supercurrent decreases (c.f., however, below) because of this mentioned shift and the associated change in the occupation of the states (see below for details). A dramatic consequence of the above is that, under a suitable non-equilibrium distribution of quasiparticles, one can recover the suppressed supercurrent. I shall demonstrate this using an experimental arrangement studied in ref . A suitable applied voltage can recover (partly) the supercurrent suppressed by a magnetic field. Consider thus a quasi-one dimensional dirty metal wire (N’) of length $`L`$ connecting two superconductors. We shall always assume that the junction is in the dirty limit. The supercurrent-carrying density of states can be obtained via the angular-averaged retarded green’s function matrix $`\widehat{g}`$. For the present purposes it is sufficient to consider the retarded component and I shall leave out the usual superscript $`R`$. $`\widehat{g}`$ obeys the normalization condition $`\widehat{g}^2=-\pi ^2\widehat{1}`$ and the Usadel equation.
The latter reads, for position $`x`$ within N’ ($`0<x<L`$) and a magnetic field $`B`$ along the $`\widehat{z}`$ direction, $$[ϵ\widehat{\tau }_3+h\widehat{\sigma }_3,\widehat{g}]+\frac{D}{\pi }\partial _x(\widehat{g}\partial _x\widehat{g})=0$$ (1) Here $`ϵ`$ is the energy with respect to the Fermi level, and $`h=\mu _eB`$. In order to avoid confusions I shall pretend that electrons have a positive magnetic moment and thus identify the directions of the magnetic moment and spin, with up (down) being the states with lower (higher) Zeeman energy. $`\widehat{g}`$ at the boundaries $`x=0`$ and $`L`$ are given by its corresponding values for the equilibrium superconductors. I shall assume that the magnetic field is perfectly screened in S. In this case the boundary conditions at the superconductors are given by $$\widehat{g}=\pi \frac{ϵ\widehat{\tau }_3-\widehat{\mathrm{\Delta }}}{\sqrt{|\mathrm{\Delta }|^2-ϵ^2}}.$$ (2) with suitable gap matrices $`\widehat{\mathrm{\Delta }}`$ reflecting the phase difference $`\chi `$ between the two superconductors. Eqn (1) can be simplified by noting that it is block-diagonal, since the pairing is singlet. In the usual $`4\times 4`$ notation, the elements associated with the 1st and 4th rows and columns are decoupled from those of the 2nd and 3rd (as already noted in, e.g., ref ). Moreover, the matrix equations for these submatrices have the same structure as that in zero field except $`ϵ\rightarrow ϵ\pm h`$, corresponding to magnetic moment parallel and antiparallel with the external field. It is then convenient to introduce separately the current-carrying density of states for each spin direction: $$N_J^\sigma (ϵ)=<\widehat{p}_xN^\sigma (\widehat{p},ϵ,x)>$$ (3) where $`N^\sigma (\widehat{p},ϵ,x)`$ is the density of states for spin $`\sigma `$ ( $`=\mathrm{\uparrow }`$ or $`\mathrm{\downarrow }`$), $`\widehat{p}`$ the direction of momentum. Here I have chosen to label the states with the spin direction of the particles. Note that, e.g., the up-spin particles are associated with the down-spin holes (c.f.
above). For a given spin $`\sigma `$, $`N_J^\sigma (ϵ)`$ is related to the appropriate sub-matrix of the green’s function $`\widehat{g}`$ via formulas analogous to those in zero field (see, e.g. ). The total (number) current density at $`T=0`$ is given by the integration of $`N_J^\sigma `$ over the occupied (negative) energy states: $$J_s=v_f\int _{-\mathrm{\infty }}^0dϵ[N_J^{\mathrm{\uparrow }}(ϵ)+N_J^{\mathrm{\downarrow }}(ϵ)]$$ (4) We shall also introduce $`N_J^{av}(ϵ)=\frac{1}{2}[N_J^{\mathrm{\uparrow }}(ϵ)+N_J^{\mathrm{\downarrow }}(ϵ)]`$ as the spin-averaged current-carrying density of states. $`J_s`$ is related to $`N_J^{av}(ϵ)`$ by $$J_s=2v_f\int _{-\mathrm{\infty }}^0dϵ[N_J^{av}(ϵ)],$$ (5) exactly the same formula as in zero field. Here $`v_f`$ is the Fermi velocity. We shall confine ourselves to the case of long junctions ( $`E_D<<|\mathrm{\Delta }|`$ ), and for definiteness choose $`|\mathrm{\Delta }|=100E_D`$. Here $`E_D\equiv D/L^2`$ is the Thouless energy associated with N’. The behavior of the current-carrying density of states $`N_J(ϵ)`$ in zero field has already been studied in detail in . I shall only mention some of the more relevant features below. $`N_J(ϵ)`$ vanishes for all energies at $`\chi =0`$. A typical case for other phase differences $`0<\chi <\pi `$ (we shall always restrict ourselves to this range; the other cases can be obtained by symmetries) is as shown by the full line in Fig 1. $`N_J`$ is odd in the energy variable $`ϵ`$. Its major feature consists of a positive peak (labelled by $`+`$ in Fig 1) at energies of several times the Thouless energy $`E_D`$ below the fermi energy, and a corresponding negative peak above (labelled by $`-`$ in Fig. 1). Also seen are the small undulations as a function of energy for larger energies. This oscillatory behavior is a result of the difference in wave-vectors for the participating particles and holes undergoing Andreev reflection at given $`ϵ`$.
For one dimension and in the clean case the pairing amplitude $`f`$ (the off-diagonal elements of $`\widehat{g}`$ in particle-hole space) oscillates as $`e^{\pm 2i(ϵ/v_f)x}`$. In the present dirty three-dimensional case the same physics results in, e.g., $`f\propto \mathrm{exp}\pm [(1-i)\sqrt{ϵ/D}x]`$ for the linearized Usadel equation (i.e. the limit of small pairing amplitudes). At a fixed position, the pairing amplitude and hence the coupling between the two superconductors oscillates as a function of energy. $`N_J(ϵ)`$ also vanishes for $`|ϵ|`$ below a few times $`E_D`$, where the ordinary density of states also vanishes. Both the magnitude of this ‘minigap’ and the position of the peaks mentioned above decrease with increasing phase difference, vanishing as $`\chi \rightarrow \pi `$ (where $`N_J`$ itself also vanishes for all $`ϵ`$’s). The behavior of $`N_J^\sigma (ϵ)`$ under a finite field is also shown in Fig 1, where I have chosen an intermediate $`h`$ ( $`E_D<<h<<|\mathrm{\Delta }|`$) for clarity. As mentioned, it is convenient to discuss the current-carrying density of states separately for each spin direction in the presence of $`h`$. For magnetic moment along the applied field, the current-carrying density of states is roughly (c.f. below) that of zero field except shifted in energy by $`h`$, i.e. $`N_J^{\mathrm{\uparrow }}(ϵ)\approx N_J^{h=0}(ϵ+h)`$. Correspondingly $`N_J^{\mathrm{\downarrow }}`$ for magnetic moment pointing in the opposite direction is shifted up in energy. At fields $`h`$ not too small compared with $`\mathrm{\Delta }`$, there is a correction to this picture because, if we assume perfect screening of the magnetic field inside the superconductor as we are doing, the replacement $`ϵ\rightarrow ϵ\pm h`$ in the Usadel equation does not apply for the boundary condition (2) at $`x=0`$ and $`L`$. This correction is negligible if $`h<<|\mathrm{\Delta }|`$ and increases with increasing $`h`$.
In this picture, the reason that the supercurrent at finite $`h`$ is suppressed (in general) from that of zero field is not pair-breaking, at least for $`h<<|\mathrm{\Delta }|`$. Rather, it is because at a finite field of order $`E_D`$ or larger, some of the states that have positive contributions to $`J_s`$ were originally occupied at $`h=0`$ but are now empty ($`+`$), whereas some which were originally empty are now occupied ($`-`$) and contribute a negative current. By the above reasoning, the presence of the magnetic field has a non-trivial effect on the current-phase relationship. An example for a small $`h<<|\mathrm{\Delta }|`$ is shown in Fig 2. In zero field $`I(\chi )`$ is roughly like a sine function except for a small tilt towards $`\chi =\pi `$. When $`h`$ increases from zero, one sees that $`I_s`$ first starts to decrease for $`\chi `$ near $`\pi `$ while $`I_s`$ at smaller $`\chi `$ is unaffected. Only at larger $`h`$ would $`I_s`$ begin to be suppressed there. This can be readily understood by considering the behavior of $`N_J`$ under $`h`$ discussed before. Recall that at zero field $`N_J`$ has a minigap of order $`E_D`$ but $`\chi `$ dependent, being smallest when $`\chi `$ is near $`\pi `$. Thus when the field $`h`$ is increased from $`0`$, $`I_s`$ at larger $`\chi `$ would be suppressed first, since at these $`\chi `$’s a smaller $`h`$ is needed to shift the antipeak $`-`$ (the peak $`+`$) to below (above) the fermi level. At higher $`h`$ and near $`\chi =\pi `$ the current also oscillates with $`\chi `$. (See, in particular, $`h=6`$ in Fig. 2, where $`I_s`$ becomes negative for $`\chi `$ slightly less than $`\pi `$, vanishing again at $`\chi =\pi `$). These features are due to the undulatory structure of $`N_J^\sigma `$ as a function of $`ϵ`$ (see Fig 1). At these higher fields the weaker bumps and troughs of $`N_J^\sigma `$ (not labelled) cross the fermi level successively. They do so at fields which are $`\chi `$ dependent. Their amplitudes also depend on $`\chi `$.
Since the major effect of the magnetic field is not a suppression of $`N_J^\sigma `$ (except for $`h\gtrsim |\mathrm{\Delta }|`$) but a redistribution in energy space, the supercurrent suppression by the magnetic field can, to a certain extent, be recovered by a suitable distribution of quasiparticles. Here we consider the case of a ‘controllable Josephson Junction’, studied experimentally and theoretically in . The device configuration is shown schematically in the inset of Fig 3. The two superconductors S, in general with a phase difference $`\chi `$, are at zero voltage. Equal but opposite voltages $`V`$ are applied on the normal (N) reservoirs. The N and S reservoirs are connected by quasi-one dimensional normal wires as shown. We are interested in the effect of the voltage $`V`$ on the supercurrent $`I_s`$ between the S reservoirs. At $`V=0`$, the distribution function $`n(ϵ)`$ is of the usual equilibrium form and is given by $`1`$ for $`ϵ<0`$ and $`0`$ otherwise ($`T=0`$), as shown by the dotted lines in Fig 3. Under a finite $`V`$, $`n(ϵ)`$ for $`|ϵ|<eV`$ becomes $`1/2`$ (full line in Fig 3), i.e. its effect is to transfer half of the quasiparticles from $`-eV<ϵ<0`$ to the region $`0<ϵ<eV`$. In zero field such a non-equilibrium distribution in general leads to a decrease of the supercurrent between the S reservoirs (see ), since usually this corresponds to decreasing (increasing) the occupation of states which contribute a positive (negative) current \[ $`+`$ ($`-`$) in Fig 1\] (full line in Fig 4; see ref for the oscillations at higher $`V`$’s). The effect of $`V`$ on $`I_s`$ for finite $`h`$ is also shown in Fig 4. At the fields chosen ($`>>E_D`$), $`I_s`$ at zero voltage has essentially decreased to zero (see also below). As claimed, for a given $`h`$, a suitable choice of $`V`$ may ‘enhance’ the supercurrent. For the present case where $`E_D<<h<<|\mathrm{\Delta }|`$, this enhancement is particularly spectacular for $`eV\sim h`$.
These features can be understood by examining the spin-averaged current-carrying density of states $`N_J^{av}(ϵ)`$, also shown in Fig 3. $`N_J^{av}(ϵ)`$ is odd in energy. The behavior of $`N_J^{av}`$ follows directly from $`N_J^\sigma `$ in Fig 1. For a given $`h\ne 0`$, when $`V`$ is increased from zero, we have a transfer of the particles as described before from the states near the antipeak just below the fermi level (due to $`-`$) to the peak above (arising from $`+`$). The supercurrent thus increases. The strongest enhancement of the supercurrent occurs at $`eV\sim h`$, since the region where this transfer occurs just covers the antipeak below the fermi level. For $`E_D<<h<<|\mathrm{\Delta }|`$, roughly half of the zero-field supercurrent can be recovered at $`eV\sim h`$. This is because at this voltage, the contributions from the region $`-eV<ϵ<eV`$ cancel among themselves and we are left with the integral over states with $`ϵ<-eV`$. The integral of $`N_J^{av}`$ over the energy in this region is roughly half that of zero field (compare Fig 3 and Fig 1). The above statements are not quantitatively precise due to: (i) $`N_J^\sigma `$ are not simple shifts of $`N_J`$ in energy, and (ii) the oscillatory structures of $`N_J^\sigma `$, both mentioned before. For $`V=0`$, as can be seen in Fig 4, the supercurrent does not simply decay monotonically with increasing $`h`$ but rather shows a damped oscillation. Such a behavior has already been pointed out before , and was explained in terms of the oscillation of the pair-amplitude with $`h`$ at a given distance (the separation between the two superconductors). The present work provides a slightly different but closely connected perspective. $`I_s`$ oscillates with $`h`$ because $`N_J^\sigma `$ does so as a function of energy. The field $`h`$ shifts $`N_J^\sigma `$ in energy space. The undulatory behavior results when regions of alternating signs of $`N_J^\sigma `$ shift through the fermi level.
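The recovery mechanism can be illustrated with a deliberately crude toy model (this is not the paper's Usadel calculation; the smooth peak shape below is an arbitrary stand-in that ignores the minigap and the undulations). Take an assumed odd zero-field $`N_J^{h=0}(ϵ)`$ with a single positive peak below the fermi level, shift it by $`\pm h`$ for the two spin directions, and integrate against the two-step distribution $`n(ϵ)`$:

```python
import numpy as np

eps = np.linspace(-20.0, 20.0, 200001)       # energy grid, in units of a few E_D
d_eps = eps[1] - eps[0]

def n_j0(e):
    # assumed toy zero-field N_J: odd in energy, positive peak below E_F
    return -e * np.exp(-e**2 / 2.0)

def supercurrent(h, eV):
    # spin-averaged N_J from the two Zeeman-shifted copies, Eq. (5)-style integral
    n_av = 0.5 * (n_j0(eps + h) + n_j0(eps - h))
    # T = 0 two-step distribution: 1 below -eV, 1/2 in between, 0 above eV
    occ = np.where(eps < -eV, 1.0, np.where(eps < eV, 0.5, 0.0))
    return 2.0 * np.sum(n_av * occ) * d_eps

j_zero_field = supercurrent(h=0.0, eV=0.0)   # full supercurrent of the toy model
j_suppressed = supercurrent(h=4.0, eV=0.0)   # Zeeman shift: peaks pushed past E_F
j_recovered  = supercurrent(h=4.0, eV=4.0)   # eV ~ h recovers about half
```

In this toy model the field-suppressed supercurrent is essentially zero at $`V=0`$, while setting $`eV=h`$ recovers half of the zero-field value, as argued in the text; the sign oscillations of the real $`N_J^\sigma `$ are of course not captured by a single smooth peak.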
Some of the physics discussed in this paper, such as the effect of $`h`$ on the current-phase relationship, is applicable beyond the dirty limit. I shall, however, defer these topics to a future study. There has recently been strong interest in the physics of superconductors in contact with a ferromagnetic material. Many papers have simply modelled the ferromagnet F with a Stoner field (cf., however, ). Within this model the Stoner field is formally equivalent to the Zeeman field here. A much-discussed topic is the effect of this field $`h`$ on the number of conduction channels . This effect is important only when $`h`$ is comparable to the Fermi energy of N’ and has not been included in the present calculations. Part of the paper was written using the facilities of the Chinese University of Hong Kong. I thank Profs. P. M. Hui and H. Q. Lin for their help.
no-problem/0002/cond-mat0002054.html
## 1 Introduction It is not difficult to find in the animal kingdom species that live and work in sexual pairs, but sometimes have extra-pair relations. Biologists believe that these pairs are formed in order to better take care of the pups, and that the extra-pair relations have the genetic purpose of maximizing the variability of the offspring or of producing some fitness benefit for them . The Scandinavian great reed warbler is one of the species that present these extra-pair matings. However, independent of its origin (social or genetic), true monogamy seems to be rare in Nature. The Penna model for biological ageing is a Monte Carlo simulation technique based on the mutation accumulation hypothesis. It has successfully reproduced many different characteristics of living species, such as the catastrophic senescence of the Pacific salmon , the inheritance of longevity and the self-organization of female menopause . The extra-pair relations mentioned above have also been studied through this model . Martins and Penna have obtained that the offspring generated by extra-pair relations are genetically stronger and present a higher survival probability than those generated by social relations. In this paper we are interested in using the Penna model to study true monogamy, which is rarely found in Nature. One example is the California mouse. In this species a female is not able to sustain one to three pups alone. The pups are born at the coldest time of the year and depend on the parents’ body heat to survive. According to the biologist David Gubernick, as cited in Science , the situation is so dramatic that if the male leaves or is taken away, the female abandons or kills the pups. However, he also points out that other species of mice that live in the same environment are promiscuous. That is, the reason for true monogamy is still an open question under study.
We have adopted the strategy of considering monogamy as a genetic trait, exclusively related to paternal care. Our assumption that male fidelity is genetically transmitted is analogous to the recent findings that the gene Mest regulates maternal care . In the next section we explain the Penna model and how fidelity is introduced. In section 3 we present our results and in section 4 the conclusions. ## 2 The Sexual Penna model and Fidelity We will now describe the sexual version of the Penna model; details and applications can be found, for instance, in references . The genome of each individual is represented by two bit-strings of 32 bits that are read in parallel; that is, there are 32 positions to be read, with two bits corresponding to each position. One time-step corresponds to reading one position of all the genomes. In this way, each individual can live at most for 32 time-steps (“years”). Genetic diseases are represented by bits 1. If an individual has two bits 1 (homozygous) at the third position, for instance, it starts to suffer from a genetic disease at its third year of life. If it is a homozygous position with two bits zero, no disease appears at that age. If the individual is heterozygous at some position, it will get sick only if that position is a dominant one. The number of dominant genes and their randomly chosen positions are defined at the beginning of the simulation; they are the same for all individuals and remain constant. When the number of accumulated diseases of any individual reaches a threshold $`T`$, the individual dies. The individuals may also be killed due to a lack of space and food, according to the logistic Verhulst factor $`V=1-N(t)/N_{max}`$, where $`N(t)`$ is the current population size and $`N_{max}`$ the carrying capacity of the environment.
At every time step and for each individual a random number between zero and 1 is generated and compared with $`V`$: if this number is greater than $`V`$, the individual dies independently of its age or number of accumulated diseases. If a female succeeds in surviving until the minimum reproduction age $`R`$, it generates, with probability $`p`$, $`b`$ offspring every year. The female randomly chooses a male to mate with, whose age must also be greater than or equal to $`R`$. The offspring genome is constructed from the parents’ ones; first the strings of the mother are randomly crossed, and a female gamete is produced. $`M_m`$ deleterious mutations are then randomly introduced. The same process occurs with the father’s genome (with $`M_f`$ mutations), and the union of the two remaining gametes forms the new genome. Deleterious mutation means that if the randomly chosen bit of the parent genome is equal to 1, it remains 1 in the offspring genome, but if it is equal to zero in the parent genome, it is set to 1 in the baby genome. It is well known that, due to the dynamics of the model, the bits 1 accumulate, after many generations, in the final part of the genomes, that is, at positions read after the minimum reproduction age $`R`$. For this reason ageing appears: the survival probabilities decrease with age. The sex of the baby is randomly chosen, each one with probability $`50\%`$. Let’s see now how fidelity is introduced. We assume that if a female reproduces this year, she spends the two following years without reproducing. So we consider two time steps as the parental care period. Remembering that in our simulations the female chooses the male, if the male is a faithful one, he will refuse, during this period, to mate with any female that eventually chooses him as a partner. The non-faithful male accepts any invitation, but his offspring still under parental care pay the price for the abandonment: they have an extra probability $`P_d`$ of dying.
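The death rules just described can be sketched in a few lines. This is an illustrative reimplementation, not the original code: genomes are stored as 32-bit integers, and the dominant positions below are an arbitrary example of the six randomly chosen ones.

```python
import random

BITS = 32                          # genome length = maximum lifespan in "years"
T = 3                              # threshold of accumulated diseases
DOMINANT = {2, 5, 9, 13, 21, 30}   # 6 dominant positions (chosen once; example values)

def active_diseases(genome_a, genome_b, age):
    """Count diseases switched on up to a given age.

    The two bit-strings are read in parallel; position i is a disease if it
    is homozygous 1, or heterozygous at a dominant position."""
    count = 0
    for pos in range(min(age, BITS)):
        a = (genome_a >> pos) & 1
        b = (genome_b >> pos) & 1
        if (a and b) or (a != b and pos in DOMINANT):
            count += 1
    return count

def survives_step(genome_a, genome_b, age, N, N_max, rng=random):
    """One time-step check: genetic death, then the Verhulst random killing."""
    if active_diseases(genome_a, genome_b, age) >= T:
        return False               # too many active diseases
    V = 1.0 - N / N_max            # logistic Verhulst survival probability
    return rng.random() < V
```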
The male offspring of a faithful father will also be faithful, with probability $`P_f`$. This means that if the father is faithful and $`P_f=1`$, the male offspring will necessarily be faithful. $`P_f`$ is also the probability of a non-faithful male having a non-faithful offspring. ## 3 Results We start our simulations with half of the males faithful and half non-faithful. In Fig.1 we show the final percentages (after many generations) of faithful males as a function of the offspring death probability $`P_d`$, for the cases where the male offspring inherits the father’s fidelity state with probability $`P_f=1`$ (full line) and with probability $`P_f=0.8`$ (dashed line). This last case means that the offspring of a faithful father has a $`20\%`$ probability of being non-faithful, and vice-versa. From this figure we can see that as the death probability of the abandoned pups increases, the percentage of faithful fathers increases. From the solid curve it is easy to notice that there is a compromise between the lower reproduction rate of the faithful males and the death probability of the already born offspring abandoned by the father: if $`P_d<0.3`$, a high reproduction rate dominates and after many generations the faithful males disappear from the population. However, for $`P_d=1`$ the opposite occurs, since there is a strong selection pressure against the non-faithful males to guarantee the survival of the already born offspring. From the dashed curve ($`P_f=0.8`$) it can be seen that for $`P_d=0`$ a high percentage (greater than $`20\%`$) of faithful males remains in the final population. The reason is that for $`P_d=0`$ there is no selection pressure. There is a probability that non-faithful males, which have a high reproduction rate, generate faithful offspring; these offspring are introduced into the population and, without any pressure, remain there.
At this point ($`P_f=0.8`$ and $`P_d=0`$) we have computed which percentages of the faithful males descend from faithful and from non-faithful fathers. We have obtained that, of the $`26.95\%`$ of faithful males that remain in the population, $`9.28\%`$ descend from faithful fathers and $`17.67\%`$ from non-faithful ones. In Fig.2a we present the time evolution of the populations for $`P_f=1`$, and in Fig.2b for $`P_f=0.8`$. The insets show the final population sizes as a function of $`P_d`$. From Fig.2a it can be seen that the population sizes decrease until $`P_d=0.4`$ and then increase for increasing values of $`P_d`$, stabilizing around the same population size as for $`P_d=0.3`$. For $`P_f=0.8`$ (Fig.2b) the population sizes decrease until $`P_d=0.7`$, and then stabilize around the same final size for increasing values of $`P_d`$. Fig.3 shows the survival rates for $`P_f=1`$ and $`P_d=0.0`$ (circles), 0.5 (squares) and 0.9 (triangles). It can be noticed that for $`P_d=0.5`$ the child mortality is greater, since $`P_d`$ is already large and nearly $`50\%`$ of the males (see Fig.1, solid curve) are not faithful. The results obtained for $`P_f=0.8`$ are similar. The survival rate is defined, for a stable population, as the ratio $$S(a)=N(a+1)/N(a),$$ where $`N(a)`$ is the number of individuals of age $`a`$. A stable population means that the number of individuals of any given age $`a`$ is constant in time. It is important to emphasize that all curves presented here correspond to already stable situations. To obtain each of them we simulated 20 different populations (samples) during 800,000 time steps, and averaged the final results.
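The survival rate defined above is easy to compute from a census of the stable population; a minimal sketch (the age counts below are made-up values):

```python
def survival_rates(age_counts):
    """S(a) = N(a+1)/N(a) for a stable population.

    age_counts maps age -> number of individuals of that age; ages whose
    successor is missing (or whose count is zero) are skipped."""
    return {a: age_counts[a + 1] / age_counts[a]
            for a in sorted(age_counts)
            if a + 1 in age_counts and age_counts[a] > 0}
```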
The parameters of the simulations are: Initial population = 20,000 individuals (half for each sex); Maximum population size $`N_{max}=200,000`$; Limit number of allowed diseases $`T=3`$; Minimum reproduction age $`R=10`$; Probability to give birth $`p=0.5`$; Number of offspring $`b=2`$; Number of mutations at birth $`M_m=M_f=1`$; Number of dominant positions = 6 (in 32). ## 4 Conclusions We have used the Penna bit-string model for biological ageing to study the problem of true monogamy, rarely found in Nature. In our simulations a female that gives birth necessarily waits two time steps before giving birth again. We call this period the parental care period. A faithful father also cannot reproduce during this period, but a non-faithful one can accept any female that randomly chooses him to mate, abandoning the pups already born. The abandoned pups have, as a consequence, an extra probability of dying. In this way there is a competition between the reproduction rate and the death probability of the already born pups. We show that, depending on this death probability, nature may prefer a lower reproduction rate to guarantee the survival of those babies already born. We consider paternal fidelity an expression of paternal care, and so admit it as a genetic trait to be transmitted to the male offspring. Acknowledgements: to P.M.C. de Oliveira and D. Stauffer for important discussions and a critical reading of the manuscript; to CNPq, CAPES and FAPERJ for financial support. References V. Morell, Science 281, 1983 (1998). T.J.P. Penna, J.Stat.Phys. 78, 1629 (1995). K.W. Wachter and C.E. Finch, Between Zeus and the Salmon: The Biodemography of Longevity, National Academy Press, Washington D.C.; T.J.P. Penna, S. Moss de Oliveira and D. Stauffer, Phys.Rev. E52, 3309 (1995). P.M.C. de Oliveira, S. Moss de Oliveira, A.T. Bernardes and D. Stauffer, Lancet 352, 911 (1998). S. Moss de Oliveira, P.M.C. de Oliveira and D.
Stauffer, Evolution, Money, War and Computers, Teubner, Stuttgart-Leipzig (1999). S.G.F. Martins and T.J.P. Penna, Int.J.Mod.Phys. C9, 491 (1998). L. Lefebvre, S. Viville, S. C. Barton, F. Ishino, E.B. Keverne and M.A. Surani, Nature Genetics 20, 163 (1998). R.S. Bridges, Nature Genetics 20, 108 (1998). A.T. Bernardes, Annual Reviews of Computational Physics IV, edited by D. Stauffer, World Scientific, Singapore (1996). J.S. Sa Martins and S. Moss de Oliveira, Int.J.Mod.Phys. C9, 421 (1998). Figure Captions Fig.1 - Final percentages of faithful males in the population as a function of the death probability of the abandoned pups. The solid line corresponds to the cases where the offspring fidelity state is the same as the father’s. The dashed line corresponds to the cases where the offspring inherits the father’s fidelity state with probability $`80\%`$. Fig.2a - Time evolution of the populations (linear-log scale) for $`P_f=1`$ and different offspring death probabilities $`P_d`$. The inset shows the final population sizes as a function of $`P_d`$. For $`0.6≤P_d≤1`$ the final sizes are all very close to that for $`P_d=0.3`$. Fig.2b - The same as Fig.2a for $`P_f=0.8`$. Fig.3 - Survival rates as a function of age for $`P_f=1`$ and different values of $`P_d`$; circles correspond to $`P_d=0.1`$, squares to 0.5 and triangles to 0.9. A higher child mortality can be noticed for $`P_d=0.5`$.
no-problem/0002/astro-ph0002101.html
# Red Companions to a z=2.15 Radio Loud Quasar ## 1 Introduction The selection of high redshift galaxies on the basis of their colours has been a growth industry over the last 5 years. Much of this work has concentrated on the selection of high (z$`>`$3) redshift objects through ‘dropout’ techniques. Such methods have had considerable success (eg. Steidel et al. 1999). However, many of these techniques are reliant on emission in the rest-frame ultraviolet. The UV emission from a galaxy can easily be dominated by a small burst of star formation, or alternatively obscured by a relatively small amount of dust. A population of older quiescent galaxies might thus coexist with the UV selected high redshift objects. Studies of the stellar populations in moderate redshift radio galaxies provide some support for this idea. A number of authors (eg. Stockton et al. (1995), Spinrad et al. (1997)) have shown that several radio galaxies have ages $`>`$3–5 Gyr at z$`∼`$1.5, indicating that they must have formed at z$`>`$5. These results have even been used (Dunlop et al. 1996) to argue that $`\mathrm{\Omega }`$ must be significantly less than 1. Old galaxies at moderate redshift, passively evolving from z$`>`$5 to z=1.5–2.5, would appear as red objects, with R-K’ colours $`>`$4.5. There has been considerable interest in such red objects. Much of this work has centred on red objects found in the fields of known high redshift AGN (eg. Hu & Ridgway (1994), Yamada et al. (1997)). A large survey of the environments of z=1–2 quasars (Hall et al., 1998) finds that such associations are quite common. The present paper attempts to push such studies above z=2. The alternative approach, to study red objects in the field, is also an active area with several surveys dedicated to or capable of finding such objects. See eg. Cohen et al. (1999) or Rigopoulou et al. (in preparation). Red objects need not be old, though.
An alternative explanation is that they are heavily obscured, and may contain either a reddened AGN or a massive starburst (eg. Dey et al., 1999; Egami et al., 1996). In this context it is interesting to note that several of the objects found in recent deep submm surveys have been identified with very red objects (Smail et al., 1999; Clements et al., in preparation). Finding emission from the putative galaxies responsible for metal and damped-Ly$`\alpha `$ absorption line systems has been the goal of numerous observational programmes. At low redshift there has been considerable success in identifying the galaxies responsible for MgII absorption systems (Bergeron & Boisse, 1991; Steidel et al., 1997). At higher redshifts, interest has mostly focussed on the damped-Ly$`\alpha `$ absorption systems. Searches for line emission from such objects (eg. Bunker et al. 1999; Wolfe et al. 1992) have met with varying success (Leibundgut & Robertson, 1999). Fewer observers have looked in the continuum, but there have been some successes there as well. For example, Aragon-Salamanca et al. (1996) found close companions to 2 out of 10 quasars with damped absorbers in a K band survey. As yet there has been no spectroscopic confirmation of these identifications, but the broad characteristics of these objects, and the small fraction of damped absorbers detected, are consistent with plausible models for the evolution of the galaxies responsible (Mathlin et al., in preparation). Meanwhile, Aragon-Salamanca et al. (1994) looked for counterparts to multiple CIV absorbers lying at z$`∼`$1.6, also using K band observations. They found an excess of K band objects near to the quasars, consistent with their being responsible for the CIV absorption. Once again, there is no spectroscopic confirmation of the assumed redshifts. The present paper presents the first results of a programme aimed at finding quiescent objects at high redshift (z$`∼`$2–2.5) using optical/IR colour selection techniques.
Among the targets observed in an initial test programme was the radio loud quasar 1550-2655, selected as an example radio loud object. The rest of the paper is organised as follows. Section 2 describes our observations, data analysis and presents the results. Section 3 discusses these results and examines three possible origins for the red objects we have found to be associated with 1550-2655. Finally we draw our conclusions. We assume $`\mathrm{\Omega }_M`$ = 1, $`\mathrm{\Lambda }`$=0 and H<sub>0</sub>=100 kms<sup>-1</sup>Mpc<sup>-1</sup> throughout this paper. ## 2 Observations and Results As part of a programme to examine the role of quiescent galaxies at z=2–2.5, we observed the field surrounding the radio loud quasar 1550-2655. This object lies at a redshift of 2.15 and shows signs of associated Ly$`\alpha `$ absorption (Jauncey et al., 1984). Its spectrum also contains a CIV absorber at z=2.09. Observations were made at the 3.5m ESO NTT, and data reduction used standard IRAF and Eclipse routines. The optical observations, in R band, were conducted in service mode on 20 August 1997 using the SUSI imager. This provides high resolution images, with a pixel size of 0.13”. A total integration time of 3600s was obtained on the source. This integration time was broken up into 12 subintegrations of 300s each, whose relative positions were shifted by up to 40” in a semi-random jitter pattern. These images were bias subtracted, flat fielded using a sky flat made on the twilight sky, then aligned and median combined to produce the final image. A residual gradient going from left to right across the image was apparent in the final data. This was removed by subtracting a low-order polynomial fitted to each horizontal row of pixels once detected objects had been masked off. The final R band image is shown in Figure 1a. Seeing was measured to be marginally subarcsecond on the final image. 
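The align-and-combine step of the reduction can be sketched as follows. This is a simplified stand-in for the IRAF processing described above, assuming the jitter offsets have already been measured and rounded to whole pixels:

```python
import numpy as np

def stack_jittered(frames, offsets):
    """Median-combine jittered exposures after shifting them to a common grid.

    frames: bias-subtracted, flat-fielded 2-D arrays of equal shape;
    offsets: (dy, dx) integer pixel offset of each frame relative to the first.
    The median combination rejects cosmic rays and other transient artifacts."""
    aligned = [np.roll(frame, (-dy, -dx), axis=(0, 1))
               for frame, (dy, dx) in zip(frames, offsets)]
    return np.median(np.stack(aligned), axis=0)
```

A real reduction would also handle sub-pixel shifts and edge wrap-around, but the combination logic is the same.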
The infrared data were obtained using the K’ filter on the SOFI infrared imager on 12 July 1998. The 3600s of integration were obtained in 60 one minute sub-integrations which themselves were the result of six 10s integrations. The 60 one minute sub-integrations were shifted relative to one another in a random 15” size dithering pattern to allow for sky background determination and subtraction. The flat field was obtained using a standard lamp ON $``$ lamp OFF dome flat. The data were reduced using the Eclipse package by Nick Devillard (1997). The algorithms used for reducing dithered infrared data in this package are detailed in Devillard (1999). In summary, the package allows for flat fielding with the preprepared flat, and conducts sky subtraction using a running average of 10 offset images. It then identifies sources on each image and uses a correlation technique to calculate the offsets between them. The separate subintegrations are then offset and combined to produce the final image. Seeing was measured in the final image to be $`∼`$0.9”. Photometric calibration used the Landolt standard (Landolt, 1992) PG1633+099C in R band, and the faint IR standards P499E and S875C at K’ (Casali & Hawarden, 1992). Galactic extinction was corrected using values obtained from the NASA Extragalactic Database (NED). After data reduction and flux calibration, the SUSI image, which has a resolution of 0.13”/pixel, was rebinned to match the 0.292”/pixel SOFI resolution, and the images were aligned. The final matched images were 67 by 88 arcseconds in size. The main limiting factor on this size was the small SUSI field of view and the dithering scheme used for the optical observations. We then used SExtractor to select objects detected at K’ and to extract their photometric properties in matched apertures in the two passbands. To qualify for detection, an object had to have a 1.5$`\sigma `$ significance flux in 10 connected pixels in the K’ band image (ie.
$`∼`$5$`\sigma `$ significance overall). This matched catalogue can then easily be searched for objects with specific colour criteria. We detected a total of 75 objects in K’ down to a limiting magnitude of $`∼20.5`$. The catalogue was then searched for candidate red objects, with R-K’$`>`$4.5. We found five such objects in the catalogue, details of which are given in Table 1. Their positions are also shown in Figure 1, which shows both the R and K’ band images of the quasar field. ### 2.1 A Red Quasar Companion Comparison of the R and K’ images of the quasar itself shows what would appear to be a red companion object — apparent as an extension in K, but absent in the R band image. The reality of this object was investigated by subtracting off the unresolved quasar contribution. This was achieved by selecting a star, with no close companions, in the observed field and using this as a PSF model. The central value of the PSF image was scaled to match that in the quasar image, and then the two images were subtracted. The companion was clearly visible in the K’-band PSF subtracted image, but was entirely absent in the R-band PSF subtracted image. The companion is marginally resolved, having a size of roughly 1.5 × 1 arcseconds, and is situated $`∼`$2 arcseconds from the quasar. R and K’ magnitudes were extracted from the PSF subtracted image, indicating that the quasar companion is also red. Its details are included in Table 1. ## 3 Discussion Until we can obtain spectroscopy for these objects, it is difficult to assess their importance or role in this system. There are three possible origins for these red objects: (1) they are associated with the quasar, lying at z=2.15; (2) they are associated with the CIV absorber at z=2.09; (3) they are foreground objects, unrelated to the quasar or CIV system. We will assess each of these alternatives in turn.
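Before turning to these alternatives, note that the PSF subtraction of Section 2.1 amounts to a few lines of array arithmetic. The sketch below is a minimal version, assuming background-subtracted cutouts of equal shape centred on the quasar and on the PSF star, with the quasar dominating its own peak pixel:

```python
import numpy as np

def subtract_scaled_psf(target_cut, psf_cut):
    """Scale an empirical PSF (a clean field star) to the target's peak
    value and subtract it; residual flux reveals any close companion."""
    scale = target_cut.max() / psf_cut.max()
    return target_cut - scale * psf_cut
```

In practice the scaling can also be done by fitting rather than by the peak pixel alone, but the principle is the same.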
### 3.1 Association with the Quasar The density of red objects in this field is surprisingly high: $`4\pm 1.5`$ arcmin<sup>-2</sup>, as opposed to the supposed global value of $`1\pm 0.3`$ arcmin<sup>-2</sup> (Cohen et al. 1999). The density of red objects near to the quasar is significantly higher, with three of the red objects as well as the companion lying in a 0.25 arcmin<sup>2</sup> region near the quasar. This certainly suggests that there is some connection between the red objects and the AGN (or CIV system). Other studies have found a similar connection between red objects and AGN, especially radio loud AGN. Perhaps the largest survey to date is that of Hall et al. (1999) who obtained images of 31 z=1–2 radio loud quasars and found a significant excess of red galaxies around them. Interestingly they found two radial dependencies for this excess, one of which lies close to the quasar ($`<`$40”) and another more distant (40”–100”). This is perhaps reflected in the present study, with two red objects in the second class, further from the quasar, and the rest, including the close companion, within 40”. In this interpretation, the close companion would be $`∼`$26 kpc from the quasar, and would be about 20 × 15 kpc in size. Hydrogen in this object might be responsible for the associated absorption seen in the quasar spectrum. If we take the redshift of the companion galaxies to be the same as the quasar, then the implied absolute magnitudes would be $`∼`$-24 to -25, ie. about L<sup>∗</sup> (using the luminosity function of Mobasher et al., 1993 and converting to the assumed cosmology). This calculation of course ignores K-corrections, but these are expected to be quite low in the K-band. Cowie et al. (1994) calculate K-corrections at z$`∼`$2.1 to be less than 1 for all morphological classes from E to Irr. We also note that these results are not dissimilar to those of Francis et al.
(1997) who uncovered a group of similarly red objects at z=2.38 associated with a cluster of quasar absorption line systems. K’ magnitudes for these objects are similar to, or brighter than, those of the objects discussed here. It is interesting to note that the Francis et al. objects are all Ly$`\alpha `$ emitters. If the present red objects have similar properties, then such line emission would make redshift determination much easier. ### 3.2 Association with the CIV Absorber Many of the same comments regarding direct association with the quasar can be made regarding association with the CIV absorber: there is an unusually high density of red objects in this region suggesting some connection between them. Of particular interest here is the closeness of the quasar companion to the quasar: only $`∼`$2” away, or 26 kpc at the redshift of the CIV absorber, and with a size similar to that given above. To date little is known about the nature of CIV absorbers, so the possible identification of the galaxy responsible for one is rather interesting. Previous work looking for emission lines from objects associated with damped and CIV absorbers (Mannucci et al., 1998) suggests that galaxies are more likely to cluster with absorbers than with quasars. If correct, this would suggest that the objects found here are more likely to be associated with the absorber than the quasar. There has been a long and largely unsuccessful history of searches for emission from absorption line systems in quasars at large redshift. These have largely concentrated on emission lines, whether Ly$`\alpha `$ (eg. Leibundgut & Robertson, 1999), H$`\alpha `$ (eg. Bunker et al., 1999), or others, though work in the infrared continuum has perhaps shown greater success (see eg. Aragon-Salamanca et al., 1996, 1994). If the quasar companion in the present study is indeed responsible for the CIV absorption, then we may have an explanation for the failures.
The object is both red and quite close to the quasar. Detection of the companion would require both good seeing (conditions for our own observations were sub-arcsecond) and observations in the near-IR as well as the ability to subtract off the quasar contribution. Sensitive infrared detectors have only recently become available at most observatories, while subarcsecond seeing is only rarely achieved. We might thus have been lucky in being able to detect the companion. New instruments, such as UFTI at UKIRT, which combines adaptive optics correction (regularly 0.5”) with a superb infrared imager, can regularly make such observations. This will hopefully allow us to make significant advances in our understanding of high redshift absorption line systems. ### 3.3 Foreground Contaminants The possibility that the red objects are at an entirely different redshift to the quasar and absorber must still be considered while we do not have confirming redshift spectra. In this context it is salutary to note the lesson of the first VRO discovered (Hu & Ridgway, 1994), known as HR10. This was found in the field of a z=3.79 quasar, but was later shown to have a redshift of 1.44 (Graham & Dey, 1996). However, the number density of red objects in their field was 0.9 arcmin<sup>-2</sup>, which matches the field density of red objects discussed by Cohen et al. (1999), and is lower than that found here. ## 4 Conclusions At present there are several deficiencies in our data. Firstly, we have only obtained limits on the objects’ R band magnitudes. We must detect them and measure, rather than limit, their R band magnitudes before we can properly determine their colours. Secondly, we must obtain spectra for the objects so that we can actually determine, rather than speculate on, their redshift.
However, the results presented here suggest that a larger survey of quasar environments, both with and without absorbers, using infrared imagers with adaptive optics correction might shed new light on galaxy populations at large redshift. Acknowledgments This paper is based on observations made at the European Southern Observatory, Chile. It is a pleasure to thank Nick Devillard for his excellent Eclipse data reduction pipeline, and E. Bertin for SExtractor. This research has made use of the NASA/IPAC Extragalactic Database (NED) which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration. I would like to thank Amanda Baker and Garry Mathlin for useful discussions, and the anonymous referee for helpful comments on an earlier version. This work was supported in part by an ESO fellowship, EU TMR Network programme FMRX-CT96-0068 and by a PPARC postdoctoral grant.
no-problem/0002/astro-ph0002294.html
# Studying the Pulsation of Mira Variables in the Ultraviolet ## 1 Introduction At some point in their lives, many if not most stars go through an unstable phase which leads to pulsation. There are many classes of these pulsating stars. Perhaps the most famous are the Cepheid variables, which are popular mostly because of their well-defined relationship between stellar luminosity and pulsation period (typically 5–50 days) that makes these stars very useful as distance indicators. Mira variables are another important class of stellar pulsators, having long periods of 150–500 days and luminosities that vary by as much as 6–7 magnitudes from minimum to maximum. Miras are asymptotic giant branch (AGB) stars with masses similar to that of the Sun. They have very massive, slow, cool winds, which produce a complex circumstellar environment. Observations of molecular CO lines show that Miras are often surrounded by molecular envelopes thousands of AU in diameter. These observations yield estimates of the wind termination velocity and total mass loss rate, which are typically of order 5 km s<sup>-1</sup> and $`10^{-7}`$ M yr<sup>-1</sup>, respectively (Young, 1995). The circumstellar envelopes are rich sites for dust formation and are often found to be sources of SiO, OH, and H<sub>2</sub>O maser emission (Diamond et al., 1994; Lopez et al., 1997). The massive winds of Miras are believed to be driven by a combination of shocks induced by stellar pulsation and dust formation (Bowen, 1988). The shocks lift a substantial amount of material up to 1–2 stellar radii above the surface of the star. Radiation pressure on dust formed in this material then pushes it away from the star. The pulsation-induced shocks not only assist in generating the massive winds of Miras, but they also determine the atmospheric structure of these stars to a large extent.
Thus, understanding the nature of the shocks and measuring their properties is essential to understanding the physics of pulsation and mass loss from pulsating stars. The ultraviolet spectral regime is an ideal place to study radiation from the shocks. Many UV emission lines are generated from immediately behind the shocks, which are potentially very useful diagnostics for various characteristics of the shocks. Foremost among these lines are the strong Mg II h & k lines at 2800 Å. A large number of UV spectra of Miras have been taken by the International Ultraviolet Explorer (IUE) over the years, and some of the basic characteristics of the Mg II h & k lines have been noted. It is known, for example, that the Mg II lines are not visible throughout part of the pulsation cycle. They typically appear at about pulsation phase $`\varphi =0.1`$, well after optical maximum ($`\varphi =0.0`$). The Mg II fluxes peak around $`\varphi =0.3`$–$`0.45`$ and then decrease until becoming undetectable at about $`\varphi =0.7`$ (Brugel, Willson, & Cadmus, 1986; Luttermoser, 1996). For a set of LW-HI observations of S Car and R Car, Bookbinder, Brugel, & Brown (1990) showed that the Mg II h & k lines are blueshifted relative to the stellar rest frame by as much as 100 km s<sup>-1</sup>, and the h line is significantly stronger than the k line. Both of these properties are very difficult to explain, as the shock speeds should be much lower than 100 km s<sup>-1</sup>, and for other astronomical targets the k line is almost always found to be stronger than the h line (e.g. Robinson & Carpenter, 1995). Clearly the unusual behavior of the Mg II lines of Miras should be looked at in more detail to understand the pulsation process. In this paper, we utilize the extensive IUE data sets that exist for 5 Miras to fully characterize the behavior of the ultraviolet emission of these stars.
Many pulsation cycles are sampled for each star, allowing us to see how the UV emission lines behave from one cycle to the next. ## 2 The IUE Archival Data We have searched the IUE database for Miras that have been extensively observed by the satellite. In this paper, we are only interested in Miras without companion stars that may be contaminating the UV spectrum. We are particularly interested in high resolution spectra taken with IUE’s long-wavelength cameras (i.e. LW-HI spectra). With these spectra, fully resolved profiles of detected emission lines (especially Mg II) can be analyzed. However, we also include low resolution, long-wavelength camera spectra (i.e. LW-LO spectra) in our analysis, which can at least be used to measure accurate fluxes of the Mg II lines and the background. We only consider large aperture data to ensure that our measured fluxes are accurate. Except for the Lyman-$`\alpha `$ line, which is blended with geocoronal emission, no emission lines are typically seen in Mira spectra taken with IUE’s short-wavelength SWP camera, so we are not interested in those data. Table 1 lists the five Miras with the most extensive sets of IUE observations. (Actually, L<sup>2</sup> Pup is a semi-regular variable star rather than a Mira, but it is a long-period pulsating star similar to Miras and it has a very large IUE dataset so we include it in our analysis.) The table gives the position, distance, center-of-mass radial velocity ($`V_{rad}`$), and pulsation period of each star. The distances were taken from the Hipparcos catalog (Perryman et al., 1997). The center-of-mass radial velocities of these stars were taken from the SIMBAD database. However, deriving true systemic center-of-mass velocities is difficult for the pulsating Miras, because different spectral features observed for these stars exhibit different velocities, and these velocities are often found to vary during the pulsation cycle. 
The velocities in Table 1 are based on measurements of optical absorption lines of neutral atoms, which provide plausible values for $`V_{rad}`$. Other potential measures, such as various optical emission lines, molecular radio emission lines, and maser lines, are generally blueshifted relative to the neutral atomic absorption lines by roughly 5–10 km s<sup>-1</sup>. These features are probably formed either behind an outward moving shock or in the outer regions of the stellar atmosphere where the massive wind of the star is being accelerated (Joy, 1954; Merrill, 1960; Wallerstein, 1975; Gillet, 1988). Note that infrared CO emission lines, which can also be used to estimate stellar center-of-mass radial velocities, generally suggest velocities that are blueshifted relative to the optical velocities by $`5`$ km s<sup>-1</sup> (Hinkle, 1978; Hinkle & Barnbaum, 1996; Hinkle, Lebzelter, & Scharlach, 1997). In order to determine how UV emission varies during the pulsation cycle it is necessary to derive accurate pulsation phases, which requires knowledge of the pulsation period and a zero-phase date. All of the Miras in Table 1 have been monitored for many decades, and their average periods are well known. However, the periods of Miras can sometimes differ from this average. Therefore, in order to derive the most accurate phases, we sought to determine the average period and zero-phase of our Miras during IUE’s lifetime only (1978–1996). The American Association of Variable Star Observers (AAVSO) has a long-term program to monitor hundreds of Miras using observations from amateur astronomers around the world (Mattei & Foster, 1997). Using AAVSO data obtained from the World Wide Web, we derived the pulsation periods listed in Table 1. 
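The phase computation described above reduces to folding the observation date on the period, with zero phase at optical maximum. A minimal sketch (the epoch and period below are placeholder values; the actual numbers come from fits to the AAVSO light curves):

```python
# Pulsation phase from a zero-phase epoch (optical maximum) and a period.
# The ephemeris values used in the example are hypothetical placeholders.

def pulsation_phase(jd, jd_max, period_days):
    """Pulsation phase in [0, 1) for Julian date jd, with phi = 0.0 at maximum."""
    return ((jd - jd_max) / period_days) % 1.0

# Hypothetical example: a 309-day pulsator with an optical maximum at JD 2446000.0.
phi = pulsation_phase(2446123.5, 2446000.0, 309.0)  # ~0.4, near the Mg II flux peak
```

Note that the Python modulo keeps the result in [0, 1) even for dates before the reference maximum.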
The derived periods of S Car, R Car, and R Leo are within a day of the accepted long-term average values (see, e.g., Hirshfeld & Sinnott, 1985), but the derived period of L<sup>2</sup> Pup is 4 days shorter and that of T Cep is 11 days longer. For these two stars, the accuracy of the computed pulsation phases is improved significantly with the use of the derived periods in Table 1. As mentioned above, L<sup>2</sup> Pup is a semi-regular variable rather than a traditional Mira, but it is similar to Miras in other respects so it is included in our sample. Because of the irregularity of its period, the accuracy of the pulsation phases we derive is limited to about $`\pm 0.2`$. The pulsation of the other stars on our list is far more regular, and we estimate that the phases we use for these stars are accurate to within about $`\pm 0.05`$. Table 1 lists the number of LW-HI and LW-LO spectra available for each star. In each case, the available data provide reasonably good coverage of at least 2–3 different pulsation cycles. The data were extracted from the IUE Final Archive. The spectra in the archive were processed using the NEWSIPS software, which became available near the end of the IUE’s 18-year lifetime in the mid-1990s. This software corrects the fixed pattern noise problem that plagued spectra processed with the older IUESIPS software, and improves signal-to-noise by up to a factor of 2 for high resolution spectra. The NEWSIPS software is described in detail by Nichols & Linsky (1996). ## 3 Data Analysis ### 3.1 The LW-LO Spectra Figure 1 illustrates the behavior of the near UV spectrum of the Mira R Car during a typical pulsation cycle in 1986–1987. The primary spectral feature visible in the LW-LO spectra is the Mg II h & k feature at 2800 Å, which appears shortly after optical maximum, achieves a maximum flux around $`\varphi =0.4`$–$`0.5`$, and then declines. The Fe II UV1 multiplet lines at 2620 Å are also visible when Mg II is at its brightest.
We developed a semi-automated routine to measure the Mg II fluxes from the LW-LO spectra of our stars. For each spectrum, the underlying background flux is estimated using the flux in the surrounding spectral region. The reasonableness of this computed background is verified visually. After subtracting the background from the spectrum, the Mg II flux is then computed by direct integration. The error vector provided by the NEWSIPS software then allows us to estimate the 1 $`\sigma `$ uncertainty for this flux. Three spectra from R Leo show substantial background emission at an unexpected phase, which we believe indicates contamination by scattered solar light, so these spectra are not used. In Figure 2, we plot the Mg II fluxes as a function of pulsation phase for all the stars in our sample. We use thick symbols to indicate Mg II lines with 3 or more pixels that have been flagged by NEWSIPS as being potentially inaccurate, usually due to overexposure. These fluxes should be considered more uncertain than the plotted 1 $`\sigma `$ error bars would suggest. In general, however, these fluxes are not wildly discrepant from fluxes of better exposed lines measured closely in time. Dashed lines connect points that are within the same pulsation cycle, excluding cases where the points are separated by over 0.4 phase. Figure 2 shows that the pulsation cycles of each star are all consistent with the general rule of a flux maximum around $`\varphi =0.2`$–$`0.5`$. However, there are substantial flux differences from one cycle to the next. The R Car data set shows the most extreme such variability, with the Mg II fluxes reaching only about $`1\times 10^{-13}`$ ergs cm<sup>-2</sup> s<sup>-1</sup> in one cycle and approaching $`1\times 10^{-10}`$ ergs cm<sup>-2</sup> s<sup>-1</sup> during another. In contrast, the Mg II fluxes of S Car differ by less than an order of magnitude from one cycle to the next, this despite the fact that S Car has one of the largest IUE LW-LO databases.
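The direct-integration measurement described above (background estimate from the surrounding region, subtraction, trapezoidal integration, and a 1 σ error from the NEWSIPS error vector) can be sketched as follows; the window boundaries and the median background estimator are illustrative assumptions, not the paper's exact choices:

```python
import numpy as np

def line_flux(wave, flux, err, line_win, bg_win):
    """Background-subtracted line flux by direct integration (sketch).
    wave is in Angstroms; flux and err are per-Angstrom flux densities.
    line_win and bg_win are (min, max) wavelength windows (assumed choices)."""
    in_line = (wave >= line_win[0]) & (wave <= line_win[1])
    in_bg = (wave >= bg_win[0]) & (wave <= bg_win[1]) & ~in_line
    background = np.median(flux[in_bg])      # flat background from surrounding region
    net = flux[in_line] - background
    w = wave[in_line]
    total = np.sum(0.5 * (net[1:] + net[:-1]) * np.diff(w))   # trapezoidal rule
    dlam = np.mean(np.diff(w))
    sigma = dlam * np.sqrt(np.sum(err[in_line] ** 2))         # 1-sigma from error vector
    return total, sigma
```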
The other relatively short period pulsator in our sample, L<sup>2</sup> Pup, seems to behave similarly. The data points show more scatter for L<sup>2</sup> Pup, but this is probably due to inaccuracies in the estimated pulsation phases caused by the irregular period of this star (see §2). A comparison of L<sup>2</sup> Pup’s Mg II and optical variability has been presented by Brugel, Willson, & Bowen (1990). The phase of maximum Mg II flux differs somewhat for the five stars, with S Car appearing to have the earliest maximum and R Leo the latest. One of the brighter Mg II cycles of R Leo is particularly interesting for showing substantial Mg II flux very near optical maximum at $`\varphi =0.04`$ (observations LWP21679 and LWP21680). We generally do not detect significant Mg II flux this close to $`\varphi =0.0`$ for any of our stars (excluding L<sup>2</sup> Pup because of its uncertain phases), but apparently there can be exceptions to this rule. By inspecting the AAVSO light curve we confirmed that the strong Mg II lines had in fact been observed very near optical maximum. ### 3.2 The LW-HI Spectra #### 3.2.1 The Mg II h & k Lines Turning our attention to the LW-HI spectra, in Figures 3–7 we display the observed Mg II h & k line profiles for the five stars in our sample. The k line is shown as a solid line and the h line is represented by a dotted line. The spectra are shown on a velocity scale in the stellar rest frame, assuming the stellar center-of-mass velocities listed in Table 1, and assuming rest wavelengths in air of 2802.705 Å and 2795.528 Å for the h and k lines, respectively. The date, time, and pulsation phase of each observation are indicated in the figures. Cosmic ray hits, which are apparent in the original spectra as very narrow, bright spectral features not observed in other spectra, have been identified and removed manually. 
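The velocity scale used for Figures 3–7 follows from the Doppler shift relative to the quoted rest wavelengths, corrected by the center-of-mass velocity from Table 1. A minimal sketch (non-relativistic formula, adequate at these speeds; the function name is ours):

```python
C_KMS = 299792.458  # speed of light in km/s

# Rest wavelengths in air quoted above for the Mg II doublet, in Angstroms.
MGII_H, MGII_K = 2802.705, 2795.528

def stellar_rest_velocity(wave, rest_wave, v_star):
    """Map an observed wavelength onto a velocity scale in the stellar rest
    frame. v_star is the center-of-mass radial velocity (Table 1), in km/s."""
    return C_KMS * (wave - rest_wave) / rest_wave - v_star
```

For example, an h-line feature observed near 2801.77 Å for a star at rest would sit near −100 km s<sup>-1</sup>, the magnitude of blueshift reported for these stars.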
One S Car spectrum with wildly discrepant properties (LWP12197) was removed from the analysis and is therefore not shown in Figure 5. Many of the S Car spectra were taken with the star deliberately offset within IUE’s large aperture. This results in an inaccurate wavelength calibration. Fortunately, there is a set of nine observations taken on 1991 March 9–13 with different offsets (see Fig. 5). After measuring the centroids of the Mg II h line (see below), we were able to see from these data how the offsets affected the h line centroid and we used this information to correct the wavelengths of all the S Car spectra obtained with aperture offsets. The spectra in Figure 5 are the corrected spectra. In almost every spectrum in Figures 3–7, the k line is contaminated by substantial absorption centered at a stellar rest frame velocity of about $`-70`$ km s<sup>-1</sup>. This absorption is due to lines of Fe I and Mn I with rest wavelengths in air of 2795.005 Å and 2794.816 Å, respectively (Luttermoser et al., 1989; Luttermoser & Mahar, 1998). Because the mutilation of the k line by these overlying neutral absorbers is generally severe, we focus most of our attention on the more pristine h line. Another concern is interstellar absorption, which should affect both the h and k lines at the same velocity. The one star whose spectrum we know will not be affected by the ISM is S Car, because its extremely large $`+289`$ km s<sup>-1</sup> center-of-mass velocity will shift the Mg II lines well away from any ISM absorption. The h line of L<sup>2</sup> Pup shows absorption at about $`-40`$ km s<sup>-1</sup> (see Fig. 7). The ISM velocity predicted for this line of sight by the local ISM flow vector of Lallement et al. (1995) is $`-43`$ km s<sup>-1</sup> after shifting to the stellar rest frame.
Thus, it seems likely that this is indeed interstellar absorption, although for lines of sight as long as those towards the stars in our sample, ISM components could potentially be present with significantly different velocities than that predicted by the local flow vector. The h lines of R Leo, R Car, and T Cep do not appear to be contaminated by any obvious ISM absorption features. For R Leo and T Cep, the ISM velocities predicted by the local flow vector ($`-4`$ and $`+14`$ km s<sup>-1</sup>, respectively) suggest that the ISM absorption may lie outside of the observed Mg II emission (see Figs. 3 and 6). For the R Car line of sight, the local flow vector predicts ISM absorption at $`-30`$ km s<sup>-1</sup>, which falls just within the red side of the h line. No obvious absorption is seen, however, suggesting that the ISM column density for this particular line of sight may be quite low. This is not necessarily unusual, as other lines of sight with distances of $`100`$ pc have been found to have very low H I column densities of $`10^{18}`$ cm<sup>-2</sup> (Gry et al., 1995; Piskunov et al., 1997). We will assume that the Mg II h line profiles of R Leo, R Car, and T Cep are at most only mildly affected by ISM absorption, meaning that for these three stars and S Car we can be reasonably confident that any substantial line profile differences we might find are due to intrinsic differences in the stellar spectra. The Mg II h line profiles of these four stars often appear to be slightly asymmetric, with a red side that is steeper than the blue side. Nevertheless, the profiles can be represented reasonably well as a Gaussian, and can therefore be quantified with Gaussian fits. We developed a semi-automated procedure to fit all Mg II h lines detected with enough signal-to-noise (S/N) for a meaningful fit to be performed. The quality of all fits was verified visually.
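A Gaussian parameterization of the kind described above yields the integrated flux, centroid velocity, and FWHM directly from the fit parameters. A sketch using scipy's least-squares fitter; the initial guesses and function names are our assumptions, not the paper's implementation:

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(v, amp, v0, sigma):
    return amp * np.exp(-0.5 * ((v - v0) / sigma) ** 2)

def fit_h_line(vel, flux):
    """Fit a single Gaussian to an emission profile on a velocity scale and
    return (integrated flux, centroid velocity, FWHM). A sketch of the
    fitting step described above."""
    p0 = (flux.max(), vel[np.argmax(flux)], 30.0)   # assumed initial guesses
    (amp, v0, sigma), _ = curve_fit(gaussian, vel, flux, p0=p0)
    sigma = abs(sigma)
    area = amp * sigma * np.sqrt(2.0 * np.pi)        # area under the Gaussian
    fwhm = 2.0 * np.sqrt(2.0 * np.log(2.0)) * sigma  # FWHM = 2.3548 sigma
    return area, v0, fwhm
```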
For spectra without strong Mg II h lines, we estimated a line flux by direct integration as we did for the LW-LO spectra. For L<sup>2</sup> Pup, we measured h line fluxes for all the spectra in this manner, since we could not accurately fit Gaussians to the ISM-contaminated line profiles observed for this star. In Figures 8–10, we plot versus pulsation phase the Mg II h line fluxes, centroid velocities, and widths measured from the LW-HI spectra of the stars in our sample. As in Figure 2, dotted lines connect points within the same pulsation cycle, and thick data points indicate h lines with three or more pixels flagged as being overexposed. The 1 $`\sigma `$ uncertainties shown in the figure were estimated using the procedures outlined by Lenz & Ayres (1992). The flux variations seen in Figure 8 are essentially the same as those seen in the LW-LO data (see Fig. 2). Note once again how much the Mg II fluxes vary from one cycle to the next for R Car. Bookbinder et al. (1990) found that the Mg II lines in selected observations of S Car and R Car were substantially blueshifted. Figure 9 demonstrates that this behavior is common to all the Mg II lines observed by IUE for all the Miras in our sample. Furthermore, Figure 9 also reveals a strong correlation between line velocity and pulsation phase, in which the magnitude of the line blueshifts decreases with pulsation phase. For example, between $`\varphi =0.2`$ and $`\varphi =0.6`$ the line velocities of S Car change from about $`-100`$ km s<sup>-1</sup> to $`-50`$ km s<sup>-1</sup>, and those of T Cep change from $`-50`$ km s<sup>-1</sup> to $`-30`$ km s<sup>-1</sup>. Note that even though we are using Gaussians to fit the Mg II profiles, this does not imply that Mg II h & k are optically thin or that the large blueshifts of these lines actually represent gas velocities (see §3.2.2).
This velocity behavior exactly mimics the behavior of the optical Ca II H & K lines of Miras, which also have blueshifted velocities that typically change from about $`-100`$ km s<sup>-1</sup> to $`-40`$ km s<sup>-1</sup> between $`\varphi =0.2`$ and $`\varphi =0.6`$ (Merrill, 1952; Buscombe & Merrill, 1952). Thus, opacity effects and atmospheric flow fields appear to induce the same line behavior in both the Mg II and Ca II lines. Furthermore, many Cepheid variables appear to exhibit similar behavior, suggesting that these phase-dependent velocity variations may be typical for stellar pulsators in general (Merrill, 1960). The data in Figure 9 suggest a possible correlation between pulsation period and line velocity, with the shortest period Mira in our sample (S Car) having the largest blueshifts, and the longest period star (T Cep) having the smallest. The R Car data suggest another possible correlation. For the pulsation cycle with the largest Mg II fluxes, the Mg II h lines of R Car are more blueshifted than they are during the weaker cycles. Unfortunately, our sample of stars and the number of well-sampled pulsation cycles per star is very small, making it difficult to truly establish these correlations. Figure 10 demonstrates that a well-defined phase dependence also exists for the line widths, which are quantified in Figure 10 as full-widths-at-half-maxima (FWHM). The line width behavior is very similar for all the stars, with a decrease from about 70 km s<sup>-1</sup> to 40 km s<sup>-1</sup> between $`\varphi =0.2`$ and $`\varphi =0.6`$. #### 3.2.2 Other Lines in the IUE LW-HI Spectra The Mg II h & k lines are by far the brightest lines that appear in the LW-HI spectra of Miras, but they are not the only lines present. During pulsation cycles that produce strong Mg II lines, other lines also appear out of the background noise of the LW-HI spectra.
The largest Mg II fluxes observed for any star in our sample were observed during the 1989–90 pulsation cycle of R Car, during which Mg II h line fluxes reached up to $`5\times 10^{-11}`$ ergs cm<sup>-2</sup> s<sup>-1</sup> (see Figs. 4 and 8). Figure 11 shows three sections of an LW-HI spectrum taken during this period. These sections contain all of the obvious real emission features apparent in the full spectrum. The expected line locations of several multiplets are indicated in the figure, which accounts for most of the observed emission features. The spectrum is dominated by several multiplets of Fe II lines (UV1, UV32, UV60, UV62, and UV63). The Mg II h & k (UV1) lines are of course apparent, as perhaps are two much weaker Mg II lines of the UV3 multiplet. The intersystem Al II\] (UV1) line at 2669.155 Å is easily visible. Luttermoser & Mahar (1998) observed the feature at 2823 Å in an HST/GHRS spectrum of the Mira R Hya, and identified it as an Fe I (UV44) 2823.276 Å line which is fluoresced by the Mg II k line. A couple of other emission features in Figure 11 can also be identified with Fe I (UV44) lines. The lines seen in Figure 11 have previously been observed in many IUE and HST/GHRS spectra of red giant stars (Judge & Jordan, 1991; Carpenter, Robinson, & Judge, 1995; Robinson, Carpenter, & Brown, 1998). The Al II\] line and most of the Fe II lines are formed by collisional excitation, as are the Mg II h & k lines, but Judge, Jordan, & Feldman (1992) find that the UV62 and UV63 multiplet lines of Fe II are excited by a combination of collisional excitation and photoexcitation by photospheric emission at optical wavelengths. The flux behavior of all the lines in Figure 11 parallels that of Mg II, increasing to a maximum near $`\varphi =0.3`$–$`0.4`$ and then decreasing. Thus, the Fe II and Al II\] lines presumably originate in atmospheric locations similar to Mg II h & k.
Note that many of the lines apparent in Figure 11 were only observed during this one particularly bright cycle of R Car, presumably being too weak to be detected during weaker cycles on R Car and the other stars. For a selection of the brightest emission lines, we used an automated fitting procedure like that described in §3.2.1 to fit Gaussians to the lines of all the stars in our sample, whenever the lines are clearly detected. The lines measured in this manner are listed in Table 2. For L<sup>2</sup> Pup, no observed cycle was bright enough to detect any of these lines, and for S Car only the brightest Fe II line at 2625.667 Å could ever be clearly detected. For the observations in which the lines are observed, we find that line flux ratios among the various Fe II lines are always similar to those seen in Figure 11, as are Mg II/Fe II flux ratios. For example, the (Mg II $`\lambda `$2803)/(Fe II $`\lambda `$2626) ratio is always about 20. The Fe II and Mg II/Fe II flux ratios found for the Miras are somewhat different from those observed in red giants, but not radically so. However, the Fe II line profiles are very different. For normal red giant stars, most of the Fe II lines are clearly opacity broadened and have profiles very different from that of a simple Gaussian (Judge & Jordan, 1991; Carpenter et al., 1995). In contrast, the Fe II lines of Miras are very narrow, Gaussian-shaped emission features, with widths at or near the IUE’s instrumental resolution of $`0.2`$ Å. This is why we could accurately fit the lines with single Gaussians. The exception is the broader Fe II 2599.394 Å line, which should have the highest opacity of all the Fe II lines. The widths and velocities of the lines listed in Table 2 do not appear to exhibit any substantial phase dependent behavior, in contrast to Mg II h & k. 
As an example, in Figure 12 we plot the velocities of the Fe II 2625.667 Å line as a function of pulsation phase for the four Miras in which this line was occasionally detected. Only S Car has a hint of phase dependence, with some evidence for a small increase in velocity with phase. Much of the scatter seen in Figure 12 could be due to uncertainties in target centering, which can induce systematic velocity errors of $`\pm 5`$ km s<sup>-1</sup> (Wood & Ayres, 1995). Since there is little if any phase dependence in the velocities of the lines listed in Table 2, we compute a weighted average and standard deviation for all the measurements of all the lines (Bevington & Robinson, 1992), and in Table 2 these velocities are listed for each Mira in our sample. For R Leo, R Car, and S Car, all of the lines are blueshifted by 5–15 km s<sup>-1</sup>. The Fe II 2599.394 Å line is once again an exception, showing significantly larger blueshifts. The lines of T Cep behave differently, as they do not show systematic blueshifts relative to the star. The Mg II lines of T Cep are also not as blueshifted as the other stars (see §3.2.1 and Fig. 9). It is difficult to identify a reason for this difference in behavior, but perhaps the center-of-mass velocity we are assuming for this star is off by $`10`$ km s<sup>-1</sup>. The difficulties in defining center-of-mass velocities for Miras have already been discussed in §2. The large blueshifts observed for the Mg II lines are unlikely to be direct measurements of outflow velocities. The shock velocities present in Miras are expected to be of order 10–20 km s<sup>-1</sup>, although this issue is still a matter of debate (Bowen, 1988; Gillet, 1988; Hinkle et al., 1997). Thus, the 5–15 km s<sup>-1</sup> blueshifts seen for most of the Fe II and Al II\] lines of R Leo, R Car, and S Car are more likely to be measuring the true outflow velocities of shocked material.
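The weighted average cited above from Bevington & Robinson is conventionally the inverse-variance weighted mean; a minimal sketch, assuming that convention:

```python
import numpy as np

def weighted_mean_velocity(velocities, errors):
    """Inverse-variance weighted mean and its 1-sigma uncertainty, the
    standard formulae from Bevington & Robinson (assumed convention for the
    'weighted average and standard deviation' used in the text)."""
    v = np.asarray(velocities, dtype=float)
    w = 1.0 / np.asarray(errors, dtype=float) ** 2   # weights = 1 / sigma^2
    mean = np.sum(w * v) / np.sum(w)
    sigma = 1.0 / np.sqrt(np.sum(w))
    return mean, sigma
```

With equal errors this reduces to the ordinary mean, with the uncertainty shrinking as the square root of the number of measurements.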
The larger widths and blueshifts of the Mg II lines are probably due to opacity effects, similar to the findings of Judge et al. (1993) for non-Mira M-type giants. The similar behavior of the Fe II $`\lambda `$2599 line suggests that this line is influenced by similar opacity effects, which is reasonable since this line should have the highest opacity of any of the Fe II lines. The other Fe II lines will have substantially lower optical depths than Mg II h & k and Fe II $`\lambda `$2599, and some may even be optically thin. The Al II\] $`\lambda `$2669 line should also have very low opacity, since it is a semi-forbidden transition. Thus, the Fe II and Al II\] line centroids should be more indicative of the true outflow velocities of the shocked material, and the line widths (which are actually unresolved) should be more indicative of turbulent velocities within the shocked material. Interpreting the difference between the behavior of the Mg II lines and that of the less opaque lines could be very important for understanding the structure of the shocks propagating through Mira atmospheres. In future work, we hope to explore possible reasons why the Mg II lines are broader and more blueshifted than the less optically thick lines. ## 4 Summary We have compiled IUE observations of 5 Mira variables with substantial IUE data sets in order to study the properties of emission lines seen in the UV spectra of these stars, which are believed to be formed behind outwardly propagating shocks in the atmospheres of these pulsating stars. Our findings are summarized as follows: We confirm the phase-dependent Mg II flux behavior previously reported for Mira variables (e.g. Brugel et al., 1986), which is observed for all the pulsation cycles that we study: the Mg II flux rises after optical maximum, peaks near $`\varphi =0.2`$–$`0.5`$, and then decreases. For some Miras (e.g.
R Car) the amount of Mg II flux produced during a pulsation cycle can vary by 2–3 orders of magnitude from one cycle to the next, while for others (e.g. S Car) the flux behavior is more consistent. The Mg II k lines are almost always contaminated with circumstellar absorption lines of Fe I and Mn I, making analysis of the line profile very difficult. The Mg II h line is always blueshifted, with the magnitude of the blueshift decreasing with pulsation phase. The blueshifts vary somewhat from star to star and cycle to cycle, but typical velocity changes are from $`-70`$ km s<sup>-1</sup> to $`-40`$ km s<sup>-1</sup> from $`\varphi =0.2`$ to $`\varphi =0.6`$. Note, however, that these line shifts do not represent the actual gas velocities at the formation depths of these lines, because of the high opacity of Mg II h & k. These velocity variations are very similar to those of the optical Ca II H & K lines. The width of the Mg II h line decreases from about 70 km s<sup>-1</sup> to 40 km s<sup>-1</sup> between $`\varphi =0.2`$ and $`\varphi =0.6`$. In addition to the Mg II lines, other lines of Fe II, Fe I, and Al II\] are also observed in IUE LW-HI spectra. The fluxes of these lines show the same phase-dependent behavior as the Mg II lines. Unlike Mg II, these other emission lines tend to be very narrow and do not show phase-dependent velocity and width variations. Except for Fe II $`\lambda `$2599, the Fe II and Al II\] lines of most of the Miras show blueshifts of 5–15 km s<sup>-1</sup>, which may indicate the flow velocity of the shocked material. In contrast, the lines of T Cep do not show any significant line shifts, although we speculate that perhaps this is due to an uncertain center-of-mass velocity for this star. We would like to thank the referee, Dr. D. Luttermoser, for many useful comments on the manuscript. MK is a member of the Chandra X-ray Center, which is operated under contract NAS8-39073, and is partially supported by NASA.
In this research, we have used, and acknowledge with thanks, data from the AAVSO International Database, based on observations submitted to the AAVSO by variable star observers worldwide.
# The CLEO III Upgrade ## 1 Introduction The CLEO physics program is a broad effort that emphasizes the study of $`b`$ and $`c`$ quark decays, $`\tau `$ lepton decays and two photon mediated reactions. Data are collected at or slightly below the $`\mathrm{\Upsilon }(4S)`$ resonance ($`E_{beam}\approx 5.3`$ GeV) produced by $`e^+e^-`$ collisions at the symmetric, one ring Cornell Electron Storage Ring (CESR). Important elements of the CLEO physics program include studies of rare B decays, determination of CKM matrix elements, and possible observation of time-independent CP violation. Examples include a measurement of the ratio of CKM matrix elements $`V_{td}/V_{ts}`$ by measuring $`\mathrm{\Gamma }(B\to \rho \gamma )/\mathrm{\Gamma }(B\to K^*\gamma )`$, a measurement of $`V_{ub}`$ by measuring $`\mathrm{\Gamma }(B\to \rho (\omega )l\nu )`$, and possible observation of $`CP`$ violation in $`B\to K\pi `$ decays. These examples, and others, involve rare reactions whose final states include pions and kaons. Large integrated luminosity and efficient charged hadron identification are necessary to reduce backgrounds, as well as statistical and systematic errors. The CLEO III upgrade comprises two major efforts: an enhanced CLEO detector and an upgraded CESR. To substantially enlarge data sets, CESR is modified to boost its instantaneous luminosity to $`L\approx 2\times 10^{33}\mathrm{cm}^{-2}\mathrm{s}^{-1}`$. To utilize the boost in integrated luminosity, key CLEO sub-detectors have been designed and installed to reduce physics backgrounds present in many CLEO analyses. CLEO III contains a new silicon tracking device, a new drift chamber and a novel dedicated particle identification device which replaces a time-of-flight counter. The trigger and data acquisition systems are upgraded to accommodate the higher data taking rate. There are minor modifications to the rest of the CLEO sub-detectors.
## 2 CESR Improvements To increase the threshold for longitudinal beam instability, a past limitation to higher luminosity, and to provide adequate power to the larger amperage beams, CESR has completely replaced all of the room temperature 5-cell copper cavities in its RF system with 4 single cell superconducting cavities. These new cavities are designed to produce an accelerating gradient of $`10`$MV/m, allowing the bunch length in CESR to be reduced to $`13`$mm, to transmit $`325`$kW to the beam, and to raise the threshold for longitudinal instability above $`1`$A total beam current. Installation of new interaction region (IR) optics is scheduled for the Spring of 2000 to reduce the amplitude of the vertical betatron function $`\beta `$ at the interaction point (IP) to $`\beta _v^{*}=13`$mm. These optics comprise two pairs of magnet assemblies, each consisting of a permanent quadrupole magnet and two superconducting (SC) quadrupoles. Each permanent neodymium iron boron magnet provides vertical focusing and is positioned just $`337`$mm from the IP. Most of the focusing is provided by the four identical SC magnets that can produce gradients up to $`48.4`$T/m at $`1225`$A. The entire set of IR optics is contained within $`\pm 2.55`$m of the IP. The proximity of the IR optics to the IP necessitates the installation of a new tracking system for CLEO since these new magnets intrude into the volume of CLEO’s former configuration. After the upgrade, CESR will collide beams of nine trains each, with each train composed of 5 bunches separated by $`14`$ns. The peak total beam current will be $`1`$A. Opposing beams will continue to circulate in a pretzel orbit scheme and will collide at the IP with a small horizontal crossing angle $`\theta _C=2.7`$mrad. The expected luminosity is $`L\approx 2\times 10^{33}\mathrm{cm}^{-2}\mathrm{s}^{-1}`$. ## 3 Particle Identification A fast ring imaging Cherenkov detector (RICH) is CLEO’s technology choice for particle identification (PID).
The general scheme is shown in figure 1 where a charged hadron exits the IP, traverses a solid LiF radiator and produces Cherenkov photons in the familiar cone pattern. The Cherenkov cone expands in an uninstrumented volume outside the radiator before being intercepted by a thin multiwire proportional chamber (MWPC) filled with a photon conversion gas of triethylamine (TEA) and methane. The photoelectrons are amplified near the MWPC anode wires and the resultant charge is then capacitively coupled to cathode pads at the rear of the MWPC which are then read out by front end electronics. The Cherenkov angle is inferred from the pattern of hit cathode pads. The actual implementation of this scheme is shown in figure 2 where a quasi-cylinder of lithium fluoride (LiF) is surrounded at slightly larger radius by a quasi-cylinder of multi-wire proportional counters (MWPCs) with the same azimuthal symmetry. The device covers 80% of the solid angle. The design goal for the RICH detector alone is to provide at least $`3\sigma `$ $`\pi /K`$ Cherenkov angle separation at momentum $`p=2.8`$GeV/c, the maximum possible for $`B`$ decay daughters at CLEO. Combining a RICH measurement with a current $`2\sigma `$ $`dE/dx`$ measurement from the drift chamber for particle momentum exceeding $`p=2.2`$GeV/c, the overall CLEO design goal is to provide $`>3.6\sigma `$ $`\pi /K`$ separation at high momentum. This corresponds to a Cherenkov angle resolution of 4 mrad per charged track, equivalent to an angular resolution of 14 mrad/photoelectron and 12 photoelectrons per track. The radiator material is lithium fluoride (LiF) in the form of tiles with planar dimensions $`17\mathrm{cm}\times 17.5\mathrm{cm}`$ and mean thickness $`10`$mm. The tiles are arrayed in a regular pattern to form a quasi-cylinder of $`1.6`$m diameter and 30-fold azimuthal symmetry.
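The 3σ π/K goal at p = 2.8 GeV/c can be sanity-checked from the standard Cherenkov relation cos θ<sub>C</sub> = 1/(nβ). With an assumed LiF refractive index n ≈ 1.5 in the TEA-sensitive band (a value not quoted in the text), the π/K angular split comes out near 13 mrad, just above 3σ for the 4 mrad per-track resolution:

```python
import math

# PDG charged pion and kaon masses, GeV/c^2.
M_PION, M_KAON = 0.13957, 0.49368

def cherenkov_angle(p, m, n=1.5):
    """Cherenkov angle (rad) for momentum p (GeV/c) and mass m in a medium of
    refractive index n. n = 1.5 is an assumed LiF value near 150 nm."""
    beta = p / math.hypot(p, m)        # beta = p / E
    return math.acos(1.0 / (n * beta))

def pi_k_separation(p, sigma_track=0.004):
    """pi/K separation in units of the 4 mrad per-track resolution quoted above."""
    return (cherenkov_angle(p, M_PION) - cherenkov_angle(p, M_KAON)) / sigma_track
```

The separation grows rapidly at lower momenta, which is why 2.8 GeV/c, the kinematic endpoint for B decay daughters, is the design-driving point.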
LiF is selected to accommodate the narrow wavelength $`\lambda `$ region $`(135\mathrm{nm}<\lambda <165\mathrm{nm})`$ where the TEA-methane quantum efficiency is usable, to minimize the total radiation thickness in front of CLEO’s calorimeter, and to minimize the chromatic variation in the radiator index of refraction $`n(\lambda )`$, which directly affects the Cherenkov angle resolution. The detected Cherenkov photon patterns form images in the MWPCs of conic sections distorted by refractive effects. Roughly half the Cherenkov cone is trapped inside a radiator crystal by total internal reflection (TIR). Indeed, if all radiator crystals were parallelepipeds, then charged tracks traveling at angle $`\theta =90^{\circ}`$ with respect to the beamline would have all of their Cherenkov photons trapped inside a radiator crystal by TIR. To prevent this, all radiator crystals within $`\theta =90^{\circ}\pm 22^{\circ}`$ of the beamline have their top surface cut in a sawtooth pattern so that Cherenkov photons can strike the upper crystal surface at incidence angles below the critical value for TIR. A cross-sectional view of one of the 30 MWPCs is shown in the upper portion of figure 2. Each MWPC has a $`250\times 20\mathrm{cm}^2`$ rectangular footprint with 70 Au-plated tungsten anode wires of $`20\mu `$m diameter running longitudinally. Its $`2`$mm thick front window is built from 8 rectangular ($`30\times 19\mathrm{cm}^2`$) $`\mathrm{CaF}_2`$ crystals that have $`100\mu `$m Ag traces applied to them. The MWPC cathode is a printed circuit board with $`7.5\times 8\mathrm{mm}^2`$ Au-plated Cu pads. The MWPC is run at a typical gain $`g\approx 4\times 10^4`$. The RICH front end readout electronics is based on a dedicated 64-channel VLSI chip comprising a preamplifier, a shaper with a programmable peaking time, sample and hold circuitry, and an analog output multiplexer. Through a via, each of the 230,400 MWPC pads is connected to a preamplifier input channel.
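The TIR trapping discussed above is governed by the critical angle $`\mathrm{arcsin}(1/n)`$ of the radiator. A short sketch, assuming an illustrative index $`n\approx 1.5`$ for LiF in the VUV band (the true $`n(\lambda )`$ is chromatic, and this single value is an assumption, not a figure from the text):

```python
import math

# Assumed single-valued refractive index for LiF in the 135-165 nm
# band; the real index varies with wavelength.
n_lif = 1.5

# Photons striking the crystal surface beyond this internal angle
# are totally internally reflected and never reach the MWPC.
theta_crit_deg = math.degrees(math.asin(1.0 / n_lif))
print(f"TIR critical angle ~ {theta_crit_deg:.1f} deg")  # ~41.8 deg
```

The sawtooth cut works by tilting the exit surface so the photon's local incidence angle drops below this critical value.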
The low noise front end has a linear response for input signals up to $`3\times 10^5`$ electrons with a measured equivalent noise charge ENC, for detector capacitance $`C`$, of $`\mathrm{ENC}=130e^{-}+(9e^{-}/\mathrm{pF})C\approx 150e^{-}`$. The front end output is a differential current transmitted serially to off-detector data acquisition boards. A beam test performed with 100 GeV muons incident on planar and sawtooth radiator crystals imaged by 2 MWPCs yields, for the sawtooth crystal, a mean number of photoelectrons per track $`N_{PE}=13.5`$, a Cherenkov angle resolution per photoelectron $`\sigma _{PE}=10.2`$mrad, and an overall Cherenkov angle resolution per track $`\sigma _{TRK}=4.5`$mrad. Similar results are obtained for the planar crystal. For the CLEO detector itself, which has improved tracking and larger acceptance, we expect $`\sigma _{TRK}=2.9`$mrad, satisfying our design goal. ## 4 Central Drift Chamber The CLEO III central wire tracker is a 47 layer drift chamber with wire layers extending radially from $`132`$mm to $`790`$mm. Outer cathode strips at $`797`$mm radius provide a final z-position measurement. The first 16 wire layers are axial and the remaining 31 layers are small angle ($`25`$mrad) stereo. The tungsten sense wires are $`20\mu `$m in diameter and the aluminum field wires are $`110\mu `$m in diameter. With a 3:1 field:sense wire ratio, the 9796 drift cells are nearly square with $`14`$mm width. The average chamber radiation thickness is $`X/X_0=3.3\%`$ for normally incident particles. The drift gas is a 60:40 mixture of helium-propane, selected for its long radiation length ($`X_0>569`$m) and its favorable drift and ionization properties. The mechanical structure of the chamber accommodates the close positioning of the final focus quadrupoles by having a stepped endplate to hold and position the axial wire layers. See figure 3. A conical endplate holds and positions the stereo wires. The wire tension is resisted solely by outer skins that hold the cathode strips.
An inner tube provides a gas seal only. Sense wire signals are read by preamplifiers that drive differential analog current signals to special purpose “time-charge-trigger” (TQT) front end boards that provide signals for timing, charge measurement and the trigger. Signals received by the TQT boards are split into parallel timing/charge and trigger paths. The boards have independent thresholds for the discriminated timing and trigger signals to permit a low trigger rate without degrading timing resolution. The outputs of the timing and charge-to-time circuits are multiplexed onto the same channel of a LeCroy 1877S Fastbus multi-hit TDC. The discriminated trigger signal is sent to the trigger system. ## 5 Silicon Vertex Detector CLEO III includes a 4-layer barrel-style silicon strip vertex detector with double-sided sensor layers located at radii between $`2.5`$cm and $`10.1`$cm and covering 93% of the solid angle. To simplify overall ladder production, all 447 of the $`300\mu `$m thick sensors are identical and contain no coupling capacitors or bias resistors. Ladder lengths range from $`16`$cm at the inner radius to $`53.3`$cm at the outer radius. The n-strips measure the $`r\varphi `$ coordinate and have a $`50\mu `$m pitch, while the p-strips measure the z-coordinate and have a $`100\mu `$m pitch. Considerable care has been taken to minimize sensor capacitance. Approximately $`125,000`$ strips in total are read out. The vertex detector mechanical design minimizes material in the fiducial region by placing the detector’s major mechanical support structure and readout electronics outside this volume. Ladders are mechanically stiffened by gluing V-shaped CVD diamond beams to them and are supported at their ends by copper end cones through kinematic mounts. A flexible kapton circuit electrically connects the sensors to the front end electronics mounted on BeO hybrids that are attached directly to the copper cones for efficient cooling.
A thin carbon fiber tube joins the opposite copper end cones. A signal-to-noise ratio $`S/N>15`$:1 for all sensor layers is the overall detector readout goal. The front end electronics is built from 3 different chips mounted on double-sided hybrids. For detector biasing and decoupling, a special 128 channel $`R/C`$ chip is used with $`R=150\mathrm{M}\mathrm{\Omega }`$ and $`C=150`$pF. Sensor signals are amplified and shaped by a 128 channel multi-stage preamplifier/shaper with a $`125`$fF feedback capacitor, 200 mV/mip gain, and a $`2\mu `$s shaping time. The circuit includes a variable gain stage to accommodate the range in ladder capacitance due to different ladder lengths. The measured equivalent noise charge ENC for the preamplifier/shaper is, for sensor capacitance $`C`$, $`\mathrm{ENC}\approx 150e^{-}+(5.5e^{-}/\mathrm{pF})C`$. The final front end chip is a 128 channel “SVX\_CLEO” digitizer/sparsifier based on the FNAL/LBL SVX\_II(b). This chip incorporates a comparator and an 8 bit ADC for each channel. The digitized results can then be sparsified using a chip-wide threshold before being read out serially by an off-detector VME-based data board. ## 6 Trigger System The CLEO III trigger system has a single hardware level with a $`2\mu `$s trigger decision time. Since this is much longer than the $`14`$ns bunch-to-bunch spacing, a trigger pipeline is used with all trigger processing done in $`42`$ns steps. The level 1 trigger information originates from just 2 detector subsystems: the central drift chamber and the calorimeter. The tracking trigger is built from an “axial” trigger, derived from hits on the 16 axial layers of the drift chamber, and a “stereo” trigger, derived from hits on the chamber’s 31 stereo layers. The calorimeter trigger is derived from energy deposition in the 7800 thallium-doped CsI calorimeter crystals.
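The pipeline timing quoted above has a convenient structure: each processing step spans an integer number of bunch crossings, and the decision time corresponds to a fixed pipeline depth. A minimal sketch of that arithmetic; the depth is an inference from the quoted figures, not a number stated in the text:

```python
# Trigger timing figures quoted in the text.
bunch_spacing_ns = 14.0  # bunch-to-bunch spacing
pipeline_step_ns = 42.0  # trigger processing step
decision_time_us = 2.0   # level 1 decision time

# Each pipeline step covers an integer number of bunch crossings,
# so every crossing is examined without dead time.
crossings_per_step = pipeline_step_ns / bunch_spacing_ns
# Approximate pipeline depth implied by the decision time
# (an inference, not a quoted figure).
depth = decision_time_us * 1000.0 / pipeline_step_ns
print(crossings_per_step, round(depth))  # 3.0 crossings/step, ~48 stages
```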
The relatively small number of axial wires (1696) permits the full set to be examined for all possible hit patterns caused by tracks with transverse momentum $`p_{}>200`$MeV/c. Hit patterns are compared against look-up tables for valid tracks using field programmable gate arrays (FPGAs); up to two hits from a valid track may be missing from each of the inner and outer sets of 8 wires. Additional circuitry generates track count and topological information to correlate axial and stereo tracks. The relatively large number of stereo wires (8384) makes the wire-by-wire hit examination used for the axial layers impractical. Instead, stereo wires are grouped into $`4\times 4`$ arrays and then patterns of hit arrays are compared against those for valid tracks via FPGA lookup logic. Not all arrays along an arc need be hit to satisfy the logic, which accommodates tracks with transverse momentum $`p_{}>250`$MeV/c. The calorimeter trigger has been substantially upgraded to increase its efficiency, particularly at low energies. Previously, the light output from 16 crystals was summed and shaped, and then compared against low- and high-level energy thresholds. However, energy deposited in the calorimeter associated with a track is typically shared among several crystals. If these crystals lie on the border between adjacent groups of 16 crystals, then it is possible for the separate summed signals to fall below one or both of the thresholds. To eliminate this effect, additional ‘tiling’ circuitry is used to sum sets of 16 crystals into $`2\times 2`$ arrays or ‘tiles’ of 64 crystals whose total signal is then compared against three thresholds. The tiles overlap so that the problem of energy sharing between crystals is eliminated. Other circuitry is used to properly rescale, if necessary, the apparent transverse shower size produced by the tiling circuitry.
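The overlapping-tile idea above can be sketched as summing every adjacent $`2\times 2`$ neighborhood of 16-crystal group sums, so energy split across a group boundary is recombined before the threshold test. The grid shape, energies, and threshold below are illustrative assumptions, not detector values:

```python
# Each entry is the summed signal of one 16-crystal group (arbitrary
# units); the grid shape here is illustrative, not the real geometry.
groups = [
    [0, 3, 0, 0],
    [0, 5, 4, 0],
    [0, 0, 0, 0],
]

def tiles(grid):
    """Overlapping 2x2 tiles: every adjacent 2x2 block of group sums
    is totaled, so energy shared across a group border is recombined."""
    rows, cols = len(grid), len(grid[0])
    return [
        grid[r][c] + grid[r][c + 1] + grid[r + 1][c] + grid[r + 1][c + 1]
        for r in range(rows - 1)
        for c in range(cols - 1)
    ]

threshold = 10  # illustrative energy threshold

# A shower split 3+5+4 across neighboring groups fails a per-group
# threshold, but the tile containing all three groups passes.
assert max(g for row in groups for g in row) < threshold
assert max(tiles(groups)) >= threshold
```

This is why tiling recovers efficiency at low energies: no single group need exceed the threshold as long as one tile does.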
## 7 Conclusion The superconducting RF cavities, the RICH detector and the central drift chamber are all installed, and CESR and CLEO are now conducting an engineering run scheduled from November 1999 to February 2000. The major sub-detector not quite complete is the silicon vertex detector, due for installation in February 2000, after which physics running commences. The final focus quadrupoles for CESR are due in Spring 2000 and their installation is the final element of the complete CLEO/CESR upgrade. ## 8 Acknowledgments The generous assistance of K. Ecklund, V. Fadeyev, R. Mountain and M. Selen in preparing this talk is appreciated.