no-problem/0003/astro-ph0003029.html
ar5iv
text
# A CORRECTION TO THE PP REACTION Robert L. Kurucz Harvard-Smithsonian Center for Astrophysics, 60 Garden St, Cambridge, MA 02138 ABSTRACT These descriptive comments are made to encourage detailed three-body, relativistic, quantum collision calculations for the pp reaction. In stars, coulomb barrier tunneling, as in the pp reaction, is not a two-body process. Tunneling is mediated by an energetic electron that interacts with the colliding particles. The presence of such an electron lowers the potential barrier and increases the probability of tunneling by orders of magnitude. The solar luminosity can be maintained with a central temperature near 10<sup>7</sup> K where the neutrino production rates correspond to the observed rates. Current stellar interior and evolutionary models need substantial revision. Subject headings: neutrinos — nuclear reactions — stars: interiors — sun: interior As the pp reaction p + p → d + e<sup>+</sup> + $`\nu `$ can be produced in laboratory accelerators, there is no question that it is a real reaction in which two protons move toward each other with enough energy to quantum-mechanically tunnel through their repulsive coulomb potential and combine. The proton-proton reaction, coulomb-barrier tunneling, and statistical Debye-Hückel electron shielding are discussed in basic texts. However, the conditions in a dense plasma at the center of a star differ substantially from those in an accelerator. At the center of the sun the temperature is of the order of 15 × 10<sup>6</sup> K, the proton density is of the order of 10<sup>26</sup> protons cm<sup>-3</sup>, and the electron density is of the order of 10<sup>26</sup> electrons cm<sup>-3</sup>. Typical velocities are 500 km s<sup>-1</sup> for protons and 20,000 km s<sup>-1</sup> for electrons. Slowly moving electrons tend to cluster around slowly moving protons. The electron cluster reduces the proton's effective charge by a small amount near the proton and cuts it off completely at a radius of about 10<sup>-9</sup> cm. Neither fast electrons nor fast protons are aware of a slow proton until they penetrate the shielding electron cluster, at which point they are immediately attracted or repelled by the coulomb potential. As the electrons typically move 40 times faster than the protons, the electron-proton collision frequency must be about 40 times the proton-proton collision frequency. A colliding fast proton decelerates from 2000 or 3000 km s<sup>-1</sup> to 0 km s<sup>-1</sup> relative velocity by the time it reaches a separation of 10<sup>-10</sup> cm, which is only 90% of the distance to the target proton. Unless they tunnel, protons are always far apart on a nuclear scale because the nuclear interaction radius is on the order of 10<sup>-13</sup> cm. A proton-proton collision is a slow process. An electron-proton collision is much faster. A colliding fast electron passes through the shielding electron cluster at, say, 100,000 km s<sup>-1</sup> and is immediately accelerated toward the central proton. In some collisions the electron passes near the proton, through the volume inaccessible in a proton-proton collision. A proton can suffer both a proton and an electron collision simultaneously. Such collisions may be infrequent, but they are more probable than tunneling, and they determine the pp reaction rate. When a fast electron penetrates the electron cluster during a proton-proton collision it is attracted by both protons. 
If the electron approaches from a polar direction with respect to the proton-proton axis, it helps to pull the protons apart and it prevents tunneling. If the electron approaches equatorially, it shields the protons from each other and accelerates them toward each other. Part of the kinetic energy of the electron contributes to the pp reaction. When the electron leaves, the two protons are closer than they would have been on their own and the tunneling probability has greatly increased. The reaction p + p + e → d + e + e<sup>+</sup> + $`\nu `$ requires lower proton energies than the reaction p + p → d + e<sup>+</sup> + $`\nu `$. A solar central temperature of, say, 10 × 10<sup>6</sup> K produces the same energy and neutrino yield as 15 × 10<sup>6</sup> K for the pp reaction without the electron boost. At 10 × 10<sup>6</sup> K the pp side chain reactions are much slower than at 15 × 10<sup>6</sup> K and produce the low neutrino rates that are actually observed. These descriptive comments are made to encourage detailed three-body, relativistic, quantum collision calculations for the pp reaction. Until such calculations become available, the problem can be investigated with solar evolutionary models by making ad hoc increases in the pp reaction rate until the model yields the observed neutrino flux. Beyond the pp reaction there is much more work. The reactions d + p, <sup>3</sup>He + <sup>3</sup>He, <sup>3</sup>He + <sup>4</sup>He, <sup>7</sup>Be + p, and <sup>7</sup>Li + p are coulomb barrier reactions and also have to be recalculated. The Be and B neutrinos do not come from coulomb barrier reactions so they are not directly affected. Light element burning occurs at lower temperatures than have been assumed.
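As a rough, back-of-the-envelope illustration of the sensitivity being invoked here (this sketch is not from the paper, and it deliberately uses the standard two-body WKB result that the paper argues is incomplete), the Gamow factor $`\mathrm{exp}(-2\pi \eta )`$ with $`\eta =Z_1Z_2e^2/(\mathrm{\hbar }v)`$ shows how steeply barrier penetration depends on the relative energy and on any assumed reduction of the effective charge. The temperatures, the `z_eff` knob and all constants below are illustrative assumptions, not quantities taken from the text.

```python
import numpy as np

# Textbook two-body Gamow factor exp(-2*pi*eta), eta = Z1*Z2*e^2/(hbar*v).
# This is NOT the three-body calculation advocated above; it only illustrates
# how steeply barrier penetration depends on the relative energy and on an
# assumed effective-charge reduction (z_eff is a purely illustrative knob).
ALPHA = 1.0 / 137.036            # fine-structure constant e^2/(hbar*c)
MU_C2_KEV = 938272.0 / 2.0       # reduced mass of the p-p pair, m_p c^2 / 2, in keV
K_B_KEV = 8.617e-8               # Boltzmann constant in keV/K

def gamow_factor(e_kev, z_eff=1.0):
    """Penetration probability for two protons at relative kinetic energy e_kev (keV)."""
    beta = np.sqrt(2.0 * e_kev / MU_C2_KEV)     # v/c, non-relativistic
    eta = (z_eff ** 2) * ALPHA / beta           # Sommerfeld parameter
    return np.exp(-2.0 * np.pi * eta)

for t6 in (10.0, 15.0):                         # central temperatures in units of 10^6 K
    kt = K_B_KEV * t6 * 1.0e6                   # typical thermal energy kT in keV
    print(f"T = {t6:4.1f}e6 K  kT = {kt:5.3f} keV  "
          f"P(kT) = {gamow_factor(kt):.2e}  "
          f"P(kT, z_eff=0.9) = {gamow_factor(kt, 0.9):.2e}")
```

With these illustrative numbers the penetration probability at the thermal energy changes by roughly two orders of magnitude between 10 × 10<sup>6</sup> K and 15 × 10<sup>6</sup> K, and even a modest effective-charge reduction produces a boost of a similar scale, which is the kind of order-of-magnitude lever the text appeals to.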
no-problem/0003/cond-mat0003472.html
ar5iv
text
# Discontinuous transitions in double exchange materials ## I Introduction. Doped manganites show many unusual features, the most striking being the colossal magnetoresistance (CMR) in the ferromagnetic (FM) phase. In addition, the manganites have a rich phase diagram as a function of band filling, temperature and chemical composition. The broad features of these phase diagrams can be understood in terms of the double exchange model (DEM), although Jahn-Teller deformations and orbital degeneracy may also play a role. A remarkable property of these compounds is the existence of inhomogeneities in the spin and charge distributions in a large range of dopings, compositions and temperatures. At band fillings where CMR effects are present, $`x\sim 0.2`$–$`0.5`$, these compounds can be broadly classified into those with a high Curie temperature and a metallic paramagnetic phase, and those with lower Curie temperatures and an insulating magnetic phase. The DEM is a simplification of the FM Kondo lattice, where the FM coupling between core spins and conduction electrons is due to Hund’s rule. When this coupling is larger than the width of the conduction band, the model can be reduced to the double exchange model with weak inter-atomic antiferromagnetic (AFM) interactions. Early investigations showed a rich phase diagram, with AFM, canted and FM phases, depending on doping and the strength of the AFM couplings. More recent studies have shown that the competition between the double exchange and the AFM couplings leads to phase separation into AFM and FM regions, suppressing the existence of canted phases. In addition, the double exchange mechanism alone induces a change in the order of the FM transition, which becomes first order and leads to phase separation at low dopings. Note, however, that a detailed study of the nature of the transition at finite temperatures is still lacking, despite its obvious relevance to the experiments. The purpose of this work is to investigate systematically the phase diagram of the DEM with weak AFM interactions. We find, in addition to the previously discussed transitions, a PM-FM first order transition near half filling, if the double exchange mechanism is sufficiently reduced by the AFM interactions. This transition does not involve a significant change in electronic density, so that domain formation is not suppressed by electrostatic effects. The model is described in the next section, and the method of calculation is introduced in the following section. The main results are presented in section IV, and the main conclusions are discussed in section V. ## II The model. We study a cubic lattice with one orbital per site. At each site there is also a classical spin. The coupling between the conduction electron and this spin is assumed to be infinite, so that the electronic state with spin antiparallel to the core spin can be neglected. Finally, we include an AFM coupling between nearest neighbor core spins. The Hamiltonian is: $$\mathcal{H}=\sum_{\langle ij\rangle }𝒯(𝑺_i,𝑺_j)\,c_i^{\dagger }c_j+\sum_{\langle i,j\rangle }\tilde{J}_{\mathrm{AF}}S^2\,𝑺_i\cdot 𝑺_j$$ (1) where $`S=3/2`$ is the value of the spin of the core, Mn<sup>3+</sup>, and $`𝑺`$ stands for a unit vector oriented parallel to the core spin, which we assume to be classical. In the following, we will use $`J_{\mathrm{AF}}=\tilde{J}_{\mathrm{AF}}S^2`$. Calculations show that the quantum nature of the core spins does not induce significant effects. 
The function $`𝒯(𝑺_i,𝑺_j)=t[\mathrm{cos}\frac{\theta _i}{2}\mathrm{cos}\frac{\theta _j}{2}+\mathrm{sin}\frac{\theta _i}{2}\mathrm{sin}\frac{\theta _j}{2}\mathrm{e}^{\mathrm{i}(\phi _i-\phi _j)}]`$ stands for the overlap of two spin 1/2 spinors oriented along the directions defined by $`𝑺_i`$ and $`𝑺_j`$, whose polar and azimuthal angles are denoted by $`\theta `$ and $`\phi `$, respectively. We study materials of composition La<sub>1-x</sub>$`M`$<sub>x</sub>MnO<sub>3</sub>, where $`M`$ is a divalent ion, and $`x\lesssim 0.5`$. In this composition range, the probability of finding two carriers in neighboring sites (two contiguous Mn<sup>4+</sup> ions) is small, so that a carrier in a given ion has all the $`e_g`$ orbitals in the next ions available. Then, the anisotropies associated with the differences between the two inequivalent $`e_g`$ orbitals should not play a major role. On the other hand, if $`x\gtrsim 0.5`$, we expect a significant dependence of the hopping elements on the occupancy of orbitals in the nearest ions. In this regime, the equivalence of the two $`e_g`$ orbitals in a cubic lattice can be broken, leading to orbital ordering (see, however). We will show that the main features of the PM-FM phase transition, for $`x\lesssim 0.5`$, can be understood without including orbital ordering effects. Moreover, in this doping range, anisotropic manganites show similar features, which suggests the existence of a common description for the transition. We will also neglect the coupling to the lattice. As mentioned below, magnetic couplings suffice to describe a number of discontinuous transitions in the regime where CMR effects are observed. These transitions modify substantially the coupling between the conduction electrons and the magnetic excitations. Thus, they offer a simple explanation for the anomalous transport properties of these compounds. Couplings to additional modes, like optical or acoustical phonons, and dynamical Jahn-Teller distortions will enhance further the tendency towards first order phase transitions discussed here. We consider that a detailed understanding of the role of the magnetic interactions is required before adding more complexity to the model. ## III Method. At finite temperatures, the thermal disorder in the orientation of the core spins induces off-diagonal disorder in the dynamics of the conduction electrons. The calculation of the partition function requires an average over core spin textures, weighted by a Boltzmann factor which depends on the energy of the conduction electrons propagating within each texture. We have simplified this calculation by replacing the distribution of spin textures by the one induced by an effective field acting on the core spins, which is optimized so as to minimize the free energy. The electronic energy includes accurately the effects of the core spin disorder on the electrons. Our calculation is a mean field approximation to the thermal fluctuations of the core spins, retaining, however, the complexity of a system of electrons with off-diagonal disorder. This approximation can be justified by noting that the conduction electrons induce long range interactions between the core spins, which always favor a FM ground state. In general, our method is well suited for problems of electrons interacting with classical fields. In more mathematical terms, we have used the variational formulation of the Weiss Mean-Field method to compute the free energy of the system. 
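As a quick numerical check (not part of the original text) of the hopping amplitude $`𝒯(𝑺_i,𝑺_j)`$ defined above, the short sketch below evaluates it for a few relative spin orientations: parallel core spins give $`|𝒯|=t`$, antiparallel spins give $`𝒯=0`$, and a relative angle $`\gamma `$ gives $`|𝒯|=t\mathrm{cos}(\gamma /2)`$, which is the essence of double exchange.

```python
import numpy as np

def hopping(theta_i, phi_i, theta_j, phi_j, t=1.0):
    """Double-exchange hopping amplitude T(S_i, S_j): overlap of two spin-1/2
    spinors aligned with the classical core spins S_i and S_j."""
    return t * (np.cos(theta_i / 2) * np.cos(theta_j / 2)
                + np.sin(theta_i / 2) * np.sin(theta_j / 2)
                * np.exp(1j * (phi_i - phi_j)))

print(abs(hopping(0.0, 0.0, 0.0, 0.0)))       # parallel spins   -> 1.0 (= t)
print(abs(hopping(0.0, 0.0, np.pi, 0.0)))     # antiparallel     -> 0.0
gamma = 2.0 * np.pi / 3.0                     # generic relative angle
print(abs(hopping(0.0, 0.0, gamma, 0.0)), np.cos(gamma / 2))   # both ~ 0.5
```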
We first trace out the fermion operators in (1), thus obtaining the effective Hamiltonian for the spins, $`\mathcal{H}^{\mathrm{eff}}(\{𝑺\})=J_{\mathrm{AF}}{\displaystyle \sum_{\langle i,j\rangle }}𝑺_i\cdot 𝑺_j`$ (2) $`-k_\mathrm{B}TV{\displaystyle \int dE\,g(E;\{𝑺\})\,\mathrm{log}\left[1+\mathrm{e}^{-\frac{E-\mu }{k_\mathrm{B}T}}\right]},`$ (3) where $`g(E;\{𝑺\})`$ is the fermionic density of states and $`V`$ the volume of the system. The Mean-Field procedure consists in comparing the system under study with a set of simpler reference models, whose Hamiltonian $`\mathcal{H}_0`$ depends on external parameters. We choose $$\mathcal{H}_0=-\sum_i𝒉_i\cdot 𝑺_i.$$ (4) The variational method follows from the inequality $$\mathcal{F}\le \mathcal{F}_0+\langle \mathcal{H}^{\mathrm{eff}}-\mathcal{H}_0\rangle _0,$$ (5) where $`\mathcal{F}_0`$ is the free energy of the system with Hamiltonian (4), and the expectation values $`\langle \cdots \rangle _0`$ are calculated with the Hamiltonian $`\mathcal{H}_0`$. The mean-fields $`\{𝒉\}`$ are chosen to minimize the right-hand side of (5). The calculation of the right-hand side of (5) requires the average of the density of states (see Eq.(3)) over spin configurations straightforwardly generated according to the Boltzmann weight associated with the Hamiltonian $`\mathcal{H}_0`$ and temperature $`T`$. The key point is that $`g(E;\{𝑺\})`$ can be numerically calculated on very large lattices without further approximations using the method of moments (complemented with a standard truncation procedure). We have extracted the spin-averaged density of states on a $`64\times 64\times 64`$ lattice (for these sizes, we estimate that finite size effects are negligible). For simplicity of the analysis, we have restricted ourselves to four families of fields: uniform, $`𝒉_i=𝒉`$, giving rise to FM ordered textures; $`𝒉_i=(-1)^{z_i}𝒉`$, originating A-AFM order, i.e., textures that are FM within planes and AFM between planes; $`𝒉_i=(-1)^{x_i+y_i}𝒉`$, producing C-AFM order, that is, textures that are FM within lines and AFM between lines; and staggered, $`𝒉_i=(-1)^{x_i+y_i+z_i}𝒉`$, which originates G-AFM order, i.e., completely AFM textures. We have chosen fields of this kind since they produce the expected kinds of order, although this is not a limitation of the method. Once the spin-averaged density of states is obtained, it is straightforward to obtain the values of the mean-field that minimize the right-hand side of Eq.(5), and the corresponding value of the density of fermions. Expressing the right-hand side of Eq.(5) as a function of the magnetization (or staggered magnetizations), we obtain the Landau expansion of the free energy in the order parameter. It is finally worth mentioning when our calculation and the Dynamical Mean Field Approximation are expected to yield the same results. It is clear that the key point is the calculation of the density of states in Eq.(3). For this problem of classical variables, the dynamical Mean-Field is known to yield the same density of states as the CPA approximation. Under the hypothesis of spatially uncorrelated fluctuations of the spins, which holds in any Mean-Field approximation, the CPA becomes exact on the Bethe lattice with large coordination number. However, one cannot conclude that with our calculation we would get the same results on the Bethe lattice, since one has still to specify the probability distribution for the spins to be used in the CPA calculation of the average density of states. In Refs. the calculation is done by identifying an effective Heisenberg-like mean field, which becomes exact when the magnetization is very small. 
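To make the density-of-states step concrete, here is a minimal, self-contained sketch of one standard variant of the moment method (Chebyshev moments with a stochastic trace and a Jackson damping kernel, i.e. the kernel polynomial method). The paper does not specify this particular variant or these parameters; the lattice size, moment order, number of random vectors and the band bound are illustrative choices, and the spin texture used is the fully disordered ($`M=0`$) limit rather than one drawn from the Boltzmann weight of $`\mathcal{H}_0`$.

```python
import numpy as np

rng = np.random.default_rng(1)
L = 8                      # small cubic lattice, 8**3 = 512 sites (the paper uses 64**3)
N = L ** 3
t = 1.0

# One fully disordered core-spin texture (the M = 0 limit); in the actual scheme
# textures are drawn from the Boltzmann weight of the reference Hamiltonian H_0.
theta = np.arccos(rng.uniform(-1.0, 1.0, N))
phi = rng.uniform(0.0, 2.0 * np.pi, N)

def idx(x, y, z):
    return (x % L) + L * ((y % L) + L * (z % L))

def hop(i, j):
    return t * (np.cos(theta[i] / 2) * np.cos(theta[j] / 2)
                + np.sin(theta[i] / 2) * np.sin(theta[j] / 2)
                * np.exp(1j * (phi[i] - phi[j])))

# Tight-binding Hamiltonian in this texture (dense here only for simplicity).
H = np.zeros((N, N), dtype=complex)
for x in range(L):
    for y in range(L):
        for z in range(L):
            i = idx(x, y, z)
            for j in (idx(x + 1, y, z), idx(x, y + 1, z), idx(x, y, z + 1)):
                a = hop(i, j)
                H[i, j] += a
                H[j, i] += np.conj(a)

# Chebyshev moments of the density of states, estimated with a stochastic trace.
Emax = 6.5 * t             # bound on |E| so that the rescaled spectrum lies in (-1, 1)
Ht = H / Emax
M = 100                    # truncation order of the moment expansion
R = 8                      # random vectors in the stochastic trace
mu = np.zeros(M)
for _ in range(R):
    v = np.exp(2j * np.pi * rng.random(N)) / np.sqrt(N)   # random-phase vector, |v| = 1
    t0, t1 = v, Ht @ v
    mu[0] += np.vdot(v, t0).real
    mu[1] += np.vdot(v, t1).real
    for m in range(2, M):
        t0, t1 = t1, 2.0 * (Ht @ t1) - t0
        mu[m] += np.vdot(v, t1).real
mu /= R

# Jackson kernel: a standard truncation procedure that damps Gibbs oscillations.
m_arr = np.arange(M)
g_jackson = ((M - m_arr + 1) * np.cos(np.pi * m_arr / (M + 1))
             + np.sin(np.pi * m_arr / (M + 1)) / np.tan(np.pi / (M + 1))) / (M + 1)

# Reconstruct the per-site spin-averaged density of states g(E), with E = Emax * x.
x = np.linspace(-0.995, 0.995, 401)
Tmx = np.cos(np.outer(m_arr, np.arccos(x)))               # Chebyshev polynomials T_m(x)
series = g_jackson[0] * mu[0] * Tmx[0] \
       + 2.0 * (g_jackson[1:, None] * mu[1:, None] * Tmx[1:]).sum(axis=0)
g_dos = series / (np.pi * np.sqrt(1.0 - x ** 2) * Emax)
print("normalization check, integral of g(E) dE ~", np.trapz(g_dos, Emax * x))
```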
Then, the distribution of spin orientations is equivalent to the one generated by an effective magnetic field. In this limit, our ansatz should reproduce the calculations reported in, when implemented in a Bethe lattice. In order to study first order transitions, one must consider solutions at finite magnetizations. Then, the optimal Boltzmann weights need not coincide with the effective field ansatz made here. Detailed DMFA calculations for the double exchange model, however, show that the differences between the optimal DMFA distribution and that obtained with an effective field are small throughout the entire range of magnetizations. Thus, the scheme used in this work includes the same physical processes as the DMFA, but it is also able to describe effects related to the topology of the three-dimensional lattice, like those associated with the Berry phase, which arises from the existence of closed loops. Furthermore, the present scheme allows us to study the relative stability of phases, like the A and C antiferromagnetic phases described below, which can only be defined in a cubic lattice. ## IV Results. The model, Eq.(1), contains two dimensionless parameters, the doping $`x`$, and the ratio $`J_{\mathrm{AF}}/t`$. The range of values of $`x`$ is $`0\le x\le 1`$, and the Hamiltonian has electron-hole symmetry around $`x=0.5`$. The zero temperature phase diagram, shown in Fig. 1, is calculated by minimizing the effective Hamiltonian at fixed chemical potential and zero temperature (we take the limit of zero temperature in Eq.(3), obtaining the grand-canonical Hamiltonian), within the four Mean-Field ansätze previously defined. At zero $`J_{\mathrm{AF}}/t`$, only the ferromagnetic phase is found, and the system is stable at all compositions. When $`J_{\mathrm{AF}}/t`$ is finite, there is a value of the chemical potential for which the empty system with a perfect G-type AFM spin ordering has the same grand-canonical energy as a system with a perfect FM spin ordering and a finite value of $`x`$. At this value of the chemical potential the system is unstable against phase separation, as shown in Fig. 1. Notice that the phase-separation region can never reach $`x=0.5`$, due to the hole-particle symmetry. For larger values of $`J_{\mathrm{AF}}`$ a small region of A-type AFM order is found for $`x\sim 0.25`$, and a much larger region of C-type AFM order for $`x`$ close to half-filling. Finally, a G-type AFM region is eventually reached by further increasing $`J_{\mathrm{AF}}/t`$. However, this is not a saturated antiferromagnetic phase, since the mean-field that minimizes the grand-canonical energy has a finite $`𝒉/T`$ when $`T`$ tends to zero (notice that one cannot have a continuously varying value of $`x`$ in a perfect AFM configuration). Let us now discuss the phase diagrams at non-zero temperatures for the different values of $`J_{\mathrm{AF}}/t`$ shown in Fig. 2. For $`J_{\mathrm{AF}}=0`$, we obtain a maximum transition temperature of $`T=400`$ K for a width of the conduction band $`W=12t\approx 2`$ eV, which is consistent with a density of states of $`\rho (E_\mathrm{F})=0.85`$ eV<sup>-1</sup> calculated in for La<sub>1/3</sub>Ca<sub>2/3</sub>MnO<sub>3</sub> (see also). Note that the bandwidth calculated in this way is probably an overestimate, as it does not include renormalization effects due to lattice vibrations. There is some controversy regarding the value of $`J_{\mathrm{AF}}`$. 
The reported value of $`J_{\mathrm{AF}}`$ for the undoped compound, LaMnO<sub>3</sub>, is $`\tilde{J}_{\mathrm{AF}}\approx 0.58`$ meV, so that $`J_{\mathrm{AF}}\approx 0.005t`$, although calculations give higher values. In the doped compounds, there is an additional contribution of order $`J_{\mathrm{AF}}\sim t^2/U_{\mathrm{ex}}`$, where $`U_{\mathrm{ex}}\approx 1`$–$`2`$ eV is the level splitting induced by the intra-atomic Hund’s coupling. Thus, $`J_{\mathrm{AF}}\approx 0.01t`$–$`0.08t`$, although higher values have been suggested. Our results show four types of first order transitions: i) In the pure DEM ($`J_{\mathrm{AF}}=0`$) the magnetic transition becomes discontinuous at sufficiently low densities, in agreement with the analysis presented in. The phase coexistence region shrinks to zero and the critical temperatures vanish as $`x`$ goes to zero, as expected. ii) The competition between antiferromagnetism and ferromagnetism when $`J_{\mathrm{AF}}\ne 0`$ leads to a discontinuous transition which prevents the formation of canted phases, as reported in. This transition also takes place at low dopings. iii) At moderate to high dopings, the FM-PM transition becomes discontinuous if the AFM couplings are sufficiently large. The onset for first order transitions at $`x=1/2`$ is $`J_{\mathrm{AF}}/t\approx 0.06`$. Unlike the previous two cases, this transition takes place between phases of similar electronic density. First and second order transition lines are separated by tricritical points. iv) In an interval of $`J_{\mathrm{AF}}/t`$, which depends on the doping level, we also find phase transitions between the PM and the A-AFM and C-AFM phases, which are of second order (see Fig. 3). At low temperatures there appear FM, C-AFM, A-AFM, and G-AFM phases separated by first order transitions with their associated phase separation regions, as shown in Fig. 1. As we see, the DEM complemented with AFM superexchange interactions between the localized spins gives rise to a very rich magnetic phase diagram that contains first and second order transitions between phases with different magnetic order. In order to set a common frame for comparison with standard approximations, we note that the free energy of the system is made up of an entropy term, due to the thermal fluctuations of the core spins, an almost temperature independent contribution from the electrons, and another temperature independent term due to the direct AFM coupling between the core spins. For instance, in the PM-FM case, we can write: $`\mathcal{F}=3J_{\mathrm{AF}}M^2+E_{\mathrm{elec}}(M)-TS(M)`$ where $`S(M)`$ is the entropy of a spin in an effective magnetic field producing magnetization $`M`$. We can expand: $`S(M)=-(\frac{3}{2}M^2+\frac{9}{20}M^4+\frac{99}{350}M^6+\cdots )`$ and $`E_{\mathrm{elec}}(M)=c_1M^2+c_2M^4+c_3M^6+\cdots `$ where $`c_1,c_2`$ and $`c_3`$ are functions of the band filling, and $`c_1`$ is always negative ($`c_1`$, $`c_2`$ and $`c_3`$ are obtained by fitting the numerical results for $`E_{\mathrm{elec}}`$). If there is a continuous transition, the critical temperature is given by $`T_\mathrm{C}=(2|c_1|-6J_{\mathrm{AF}})/3`$. The transition becomes discontinuous when the quartic term in $`\mathcal{F}(M)`$ is negative. This happens if $`c_2<0`$ and $`T<(20/9)|c_2|`$. Thus, if $`J_{\mathrm{AF}}>|c_1|/3-10|c_2|/9`$ and $`c_2<0`$, the transition becomes of first order. A tricritical point appears at the boundary between the first and second order transition lines. The fact that $`c_2<0`$ is due to the energetics of the electrons in the disordered spin background. 
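The tricritical criterion just derived is easy to explore numerically. In the sketch below the coefficients $`c_1`$, $`c_2`$, $`c_3`$ and $`J_{\mathrm{AF}}`$ are made-up illustrative numbers (in the paper they come from fits to the computed electronic energy); the code only shows how the quoted formulas for $`T_\mathrm{C}`$ and for the first-order condition play out, and how the minimum of $`\mathcal{F}(M)`$ jumps to a finite magnetization when the quartic term is negative.

```python
import numpy as np

# Landau free energy per site from the text (units of t, k_B = 1):
#   F(M) = 3*J_AF*M**2 + E_elec(M) - T*S(M),
#   S(M) = -(3/2*M**2 + 9/20*M**4 + 99/350*M**6 + ...),
#   E_elec(M) = c1*M**2 + c2*M**4 + c3*M**6.
# The coefficients below are made-up illustrative numbers, NOT the paper's fits;
# they are chosen only so that c1 < 0 and c2 < 0.
c1, c2, c3 = -0.10, -0.03, 0.02
J_AF = 0.02

def free_energy(M, T):
    S = -(1.5 * M ** 2 + 0.45 * M ** 4 + (99.0 / 350.0) * M ** 6)
    E_elec = c1 * M ** 2 + c2 * M ** 4 + c3 * M ** 6
    return 3.0 * J_AF * M ** 2 + E_elec - T * S

# Quoted continuous-transition temperature and first-order criterion.
T_C = (2.0 * abs(c1) - 6.0 * J_AF) / 3.0
first_order = (c2 < 0.0) and (T_C < (20.0 / 9.0) * abs(c2))
print(f"T_C = {T_C:.4f} t;  quartic term negative at T_C -> first order: {first_order}")

# Locate the minimum of F(M) around T_C: for a first order transition the
# minimum jumps from M = 0 to a finite M instead of growing continuously.
M = np.linspace(0.0, 1.0, 2001)
for T in (1.05 * T_C, 1.00 * T_C, 0.95 * T_C):
    print(f"T = {T:.4f} t  ->  M at the minimum of F: {M[np.argmin(free_energy(M, T))]:.3f}")
```

With these illustrative numbers the global minimum already sits at a finite magnetization slightly above $`T_\mathrm{C}`$, so the transition is first order and occurs above the temperature at which the quadratic coefficient changes sign.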
In a fully polarized system, $`M=1`$, the electrons propagate in a perfect lattice. If $`M=0`$ the spins are completely disordered, and our results reduce to those reported in. Standard approximations to the phase diagram of the DEM use the virtual crystal approximation, in which the cubic density of states is scaled by the average value $`\langle 𝒯(𝑺_i,𝑺_j)\rangle `$, defined in Eq.(1). This approximation suffices to describe the main features of the phase diagram when $`J_{\mathrm{AF}}=0`$, but it overestimates the kinetic energy of the electrons moving in the disordered spin background. The effect is more pronounced near half filling, where the electronic contribution is the largest, and $`c_2`$ is positive in the virtual crystal scheme. As our calculation takes fully into account the propagation of the electrons in a disordered environment, we think that the existence of a first order PM-FM transition when $`T_\mathrm{C}`$ is suppressed is a robust feature of the model. At zero temperature, our calculation leads to a richer phase diagram than that calculated within the Dynamical Mean Field Approximation. As mentioned in the preceding section, our method coincides with this approximation when implemented in a Bethe lattice. The topology of a cubic lattice allows for the possibility of A-AFM and C-AFM phases. We have developed an exact Monte Carlo algorithm to study the DEM. This approach is based on a Path Integral formulation that allows us to simulate lattices much larger than in a usual Hamiltonian formulation. Full details of the method will be given elsewhere. The first data of the Monte Carlo computation confirm the robustness of the present results. Simulations in the parameter region depicted in Fig. 3 show very clear evidence for a first order transition in lattices of up to $`12\times 12\times 12`$ sites. In Fig. 4 we show data on an $`L=8`$ lattice at half-filling at several temperatures. Note the large region of metastability marked by the vertical lines. It is also clear that fluctuations lower the transition temperatures from their mean-field values, as also happens in the three-dimensional Heisenberg model. Turning again to the Mean-Field approach, let us recall that while a continuous transition is changed into a smooth crossover in an applied field, a first order transition survives until a critical field is reached. The transition takes place between two phases with finite, but different, magnetization, in a similar way to the liquid-vapor transition. The PM-FM line of first order transitions for dopings close to $`x=0.5`$ ends in a critical point, $`(T_\mathrm{c},H_\mathrm{c})`$. For $`J_{\mathrm{AF}}=0.08t`$, the critical field varies from $`H_\mathrm{c}=0.00075t\approx 2.2`$ T at $`x=0.5`$ to $`H_\mathrm{c}=0.0002t\approx 0.6`$ T at $`x=0.3`$, while $`T_\mathrm{c}\approx T_\mathrm{C}`$, where $`T_\mathrm{C}`$ is the Curie temperature at zero field, shown in Fig. 3. ## V Conclusions. We have shown that the phase diagram of double exchange systems is richer than previously anticipated, and differs substantially from that of more conventional itinerant ferromagnets. We have described first order transitions which are either intrinsic to the double exchange mechanism, or driven by the competition between it and the AFM couplings. 
In particular, we find that, in the doping range relevant for CMR effects, AFM interactions of reasonable magnitude change the PM-FM transition from continuous to first order. The existence of such a transition has been argued, on phenomenological grounds, in order to explain the observed data in a variety (but not all) of doped manganites in the filling range $`x\sim 0.3`$–$`0.5`$. The generic phase diagram that we obtain is consistent with a number of observations: i) Materials with a high transition temperature (low AFM couplings) have a continuous PM-FM transition, with no evidence for inhomogeneities or hysteretic effects. The paramagnetic state shows metallic behavior. ii) The PM-FM transition in materials with low transition temperature (significant AFM couplings) is discontinuous. Near $`T_\mathrm{C}`$ inhomogeneities and hysteretic behavior are observed. The transport properties in the paramagnetic phase are anomalous. iii) Substitution of a trivalent rare earth for another one with smaller ionic radius (i.e., compositional changes that do not modify the doping level) diminishes the Mn–O–Mn bond angle, reducing the conduction bandwidth, $`W=12t`$. Assuming that the AFM coupling, $`J_{\mathrm{AF}}`$, does not change significantly, the ratio $`J_{\mathrm{AF}}/t`$ increases; therefore, the doping level $`y`$ in series of the type $`(\mathrm{RE}_{1-y}\mathrm{RE}^{\prime }_y)_{1-x}\mathrm{AE}_x\mathrm{MnO}_3`$ might be traded for $`J_{\mathrm{AF}}/t`$. The top panel of Fig. 3 shows the experimental magnetic phase diagram of (La<sub>1-y</sub>Tb<sub>y</sub>)<sub>2/3</sub>Ca<sub>1/3</sub>MnO<sub>3</sub>, as taken from Ref. . We note the similarities with the phase diagrams of the DEM in the plane $`(J_{\mathrm{AF}}/t,T/t)`$ at fixed $`x`$. The A-AFM and C-AFM phases at intermediate $`J_{\mathrm{AF}}/t`$ could become spin-glass-like phases in the presence of disorder. iv) The first order PM-FM transition reported here survives in the presence of an applied field. A critical field is required to suppress it (hysteretic effects in an applied field have been reported in ). ## VI Acknowledgements. We are thankful to L. Brey, J. Fontcuberta, G. Gómez-Santos, C. Simon, and J.M. De Teresa, and especially to R. Ibarra, for helpful conversations. V. M.-M. is an MEC fellow. We acknowledge financial support from grants PB96-0875, AEN97-1680, AEN97-1693, AEN99-0990 (MEC, Spain) and 07N/0045/98 (C. Madrid).
no-problem/0003/math0003184.html
ar5iv
text
# Untitled Document Life and work of the Mathemagician Srinivasa Ramanujan K. Srinivasa Rao The Institute of Mathematical Sciences, Chennai 600 113 . (E-mail : rao@imsc.ernet.in) Introduction Srinivasa Ramanujan, hailed as one of the greatest mathematicians of this century, left behind an incredibly vast and formidable amount of original work, which has greatly influenced the development and growth of some of the best research work in mathematics of this century. He was born at Erode, on Dec. 22, 1887. There were no portents to indicate that he would, in a short life-span of 32 years 4 months and 4 days, become comparable to the all-time great Euler, Gauss and Jacobi, for natural genius. There are two aspects of interest to biographers and mathematicians regarding Ramanujan: his life and his work. Mathematicians, who are interested in his work, have to contend with not only his publications in journals which are precise and profound, but also with his Notebooks which are a treasure house of intriguing results stated without proofs and lacking perspective with contemporary mathematical work. Those who attempt to write biographic articles on Ramanujan have to surmount the time barrier to reconstruct a story from all the indirect information accessible and to them, Hardy on Ramanujan is akin to Boswell on Samuel Johnson. The challenge to the mathematicians who work on any of his thousands of recorded results, which are still shrouded in mystery, is to prove the same with what was accessible to Ramanujan in those days in the form of books and publications. While the individual writer’s perception of Ramanujan will depend upon his/her background and imagination, the task of the mathematician is perhaps unenviable, in comparison. Anyone who ever heard of Srinivasa Ramanujan and reads the compelling rags-to-intellectual-riches story of Ramanujan contained in the two Notices, one by G.H. Hardy and the other by Dewan Bahadur R. Ramachandra Rao and P.V. Seshu Iyer, published in the Collected papers of Srinivasa Ramanujan , would be moved by the achievements of the unorthodox mathematical genius under adverse circumstances. The lack of formal education, lack of appreciation and a job, in the beginning of his career and ill health during the last few years of his life, did not prevent him from being creative in Mathematics. This is indeed something not easy to comprehend, for often one would buckle under similar trying circumstances. In these lectures, I will present an account of his romantic life, provide a few glimpses into his mathematics and relate the increasing interest in his work and its relevance even today. Formal education Ramanujan’s father, Mr. K. Srinivasa Iyengar, was an accountant to a cloth merchant in Kumbakonam. His mother was Komalattammal and Erode was her parental home. He was the first of three sons to his parents. Very little is known about his father and not even a photograph of his seems to be available. His mother was convinced of the greatness of Ramanujan and she zealously protected and projected his interests all through his life. She is portrayed as a shrewd, cultured lady and her photograph is available in some books on Ramanujan. Ramanujan was sent to Kangeyam Primary School in Kumbakonam at the age of seven. During his school days, he impressed his classmates, senior students and teachers with his extraordinary intuition and astounding proficiency in several branches of mathematics - viz. arithmetic, algebra, geometry, number theory and trigonometry. 
In later years a friend of his, C.V. Rajagopalachari, recounted the following incident (, p.83) which happened when Ramanujan was in his third form: In an arithmetic class on division, the teacher said that if three bananas were given to three boys, each boy would get a banana. The teacher generalised this idea and said that any number divided by itself would give one. Ramanujan asked: Sir, if no banana is distributed to no student, will everyone still get a banana? Another friend who took private tuition from Ramanujan also recalled that Ramanujan used to ask about the value of zero divided by zero, and then answer that it can be anything, since the zero of the denominator may be several times the zero of the numerator and vice versa, and that the value cannot be determined. He stood first in the Tanjore District Primary Examinations held in November 1897, and this entitled him to a half-fee concession in the Town High School at Kumbakonam, where he studied from 1898 to 1903, until he passed the Matriculation Examination of the University of Madras (1904). At the age of 12, Ramanujan is said to have worked out the properties of arithmetical, geometrical and harmonic progressions. Once a senior school student posed to Ramanujan, who was in the fourth year at school, the following problem: If $`\sqrt{x}+y=7`$ and $`x+\sqrt{y}=11`$, what are the values of $`x`$ and $`y`$? Ramanujan’s immediate reply to this question – which was expected to be tackled only by a sixth year student – that $`x=9`$ and $`y=4`$, won for him a friend who in later years took him to the Collector of Nellore<sup>1</sup>. <sup>1</sup>If one does not guess this answer, the result can be obtained by setting $`x=m^2,y=n^2`$, then taking the difference between the two simultaneous equations and factorising to get $`(m-n)(m+n-1)=4`$, which has integer solutions only for $`m=3,n=2`$ and hence $`x=9,y=4`$. The senior mathematics teacher of the school, Ganapathy Subbier, had such confidence in Ramanujan’s ability that year after year he entrusted Ramanujan with the task of preparing a conflict-free time-table for the school, which had about 1500 students and 30 or more teachers. Ramanujan won prizes for his outstanding performance in mathematics and mastered Loney’s Trigonometry, Part II, in his fourth year at school. He won many prizes in his second, fourth and sixth years at High School. To augment the family income, Ramanujan’s mother took in a couple of students from Tirunelveli and Tiruchirapalli as boarders. Noticing Ramanujan’s precocity in mathematics, these undergraduate students are purported to have given him an elementary introduction to all branches of mathematics. In 1903, through these friends from the Kumbakonam Government College, Ramanujan obtained G.S. Carr’s A Synopsis of Elementary Results, a book on Pure Mathematics, which contained propositions, formulae and methods of analysis with abridged demonstrations, published in 1886. Carr presented in this book 4865 formulae \[7, p.3\], without proofs, in algebra, trigonometry, analytical geometry and calculus. This book is similar to modern day compilations like the Table of Integrals, Series, and Products, by I.S. Gradshteyn and I.M. Ryzhik (Academic Press, New York, 1994). Prof. P.V. Seshu Aiyar and Mr. R. Ramachandra Rao, in their biographies of Ramanujan, state that: It was this book which awakened his genius. He set himself to establish the formulae given therein. 
As he was without the aid of other books, each solution was a piece of research so far as he was concerned. It is the considered opinion of many (cf. Kanigel , p.57) that in proving one formula, he discovered many others and thus, Ramanujan laid for himself a foundation for higher mathematics. Also, at about this time, he started noting his results in Notebooks. The first public recognition of his extraordinary prowess came when he was awarded a special prize – the Sri K. Ranganatha Rao Prize e– at the annual prize distribution ceremony of the Town High School, in 1904, for proficiency in mathematics. Ramanujan passed his Matriculation Examination in 1904 and joined the Government Arts College in Kumbakonam. As a result of his success in a competitive examination in Mathematics and English composition, he secured the Junior Subrahmanyam Scholarship. In the F.A. (First Examination in Arts) Class, Ramanujan had to study English, Sanskrit, Mathematics, Physiology and the History of Rome and Greece. Partly due to his pre-occupation with researches into mathematics, he neglected the study of other subjects. He went to his mathematics lecturer with a number of original and very ingenious results in finite and infinite series. Prof. P.V. Seshu Aiyar exhorted him but advised him not to neglect the study of other subjects. Unfortunately, he did not pass in English and Physiology and hence was not promoted to the senior F.A. class in January 1905. He lost his scholarship. His mother, who played a domineering role in his life, tried to persuade the Principal of the Government Arts College to take note of Ramanujan’s extraordinary mathematical ability and appealed for a continuance of the scholarship, but to no avail. Ramanujan’ s failure to get promoted to the senior F.A. class marked the beginning of a very trying period in his life. It is not clear what he did in 1905, when he discontinued his studies and spent some months in (the present day) Andhra Pradesh region, when he set out from Kumbakonam, for the first time. He joined Pachaiyappa’ s College in Madras, in the F.A. class again, in 1906. One of his classmates, T. Devaraja Mudaliar, (, p.63 and p.65) recalls that the Chief Professor of Mathematics, P. Singaravelu Mudaliar, considered an acquisition by Pachaiyappa’ s College since he had the reputation of being a very successful teacher for the B.A. class, waited for Ramanujan’ s assistance to solve difficult problems in mathematical journals. He also recalls that a junior mathematics teacher of the F.A. class, Prof. N. Ramanujachari, allowed Ramanujan to go to the board to show the solutions to the difficult problems in algebra or trigonometry using fewer steps than the ones used by him. Senior students of the B.A. Class also sought Ramanujan’ s help in mathematics . Ramanujan who was a strict vegetarian should have abhorred the dissection of the frog in the Physiology classes. Once, to a question on the digestive system, he is supposed to have provided a skimpy answer which he concluded with : Sir, this is my undigested product of the Digestion chapter. Please excuse me . Another classmate of his at Pachaiyappa s College recalls that Ramanujan rarely got more than 10 contempt and got something more, say 15 % to 20 % in Greek and Roman History, but managed to get about 25 % in English. However, Ramanujan considered the problems given in$`\mathrm{}`$ textbooks in Geometry, Algebra, and Trigonometry to be mental sums. 
In 1906, while studying at Pachaiyappa’ s College, Ramanujan lived with his grandmother in a house in a lane in George Town, Madras. After about three months, Ramanujan fell ill and discontinued his studies. However, he appeared privately for the F.A. examination in 1907. Though he secured a centum in mathematics, he failed to secure pass marks in other subjects. This marked the end of his formal education. Formative years It was during the period, 1907 - 12, that Ramanujan was frantically in search of a benefactor and started making contacts with those who could help him in his quest for a job to eke out a livelihood. He continued to stay in Madras after his formal education came to an end in 1907. According to Hardy: The years between 18 and 25 are the critical years in a mathematician’s career. During his five unfortunate years (1907-1912) his genius was misdirected, side-tracked and to a certain extent distorted. (Hardy ). Despite the pecuniary circumstances and the stresses and strains of day-to-day existence, Ramanujan started noting down his mathematical results in Notebooks. By 1909, his Notebooks were precious to Ramanujan. For, one (F.A.) classmate of his, states that Ramanujan fell ill in 1909, while living in George Town, Madras, and on a Doctor’ s advise, when he was being sent to the home of his parents in Kumbakonam, Ramanujan entrusted him with his Notebooks for safe keeping and stated: If I die, please hand them over to Prof. Singaravelu Mudaliar or to the British Professor – Edward B Ross – Madras Christian College . Another college mate of Ramanujan has stated that during his collegiate years, Ramanujan taught him the method of constructing Magic Squares, the subject of the first chapter of his Notebooks. The interest in this subject dates from his school days and is disconnected from the subject matter of the remainder of the Notebooks. Probably Ramanujan’ s expertise in preparing the conflict free time tables for his School inspired him to a study of these Magic Squares. Ramanujan’ s investigations in continued fractions and divergent series started during this period. His betrothal to nine year old Janaki was in 1908 and his wedding took place near Karur, in 1909. Robert Kanigel , in his biography on Ramanujan, constructs a vivid account of this marriage arranged by his mother Komalattammal, not approved by his father, and dramatizes the foreboding of the impending disaster through the omens preceding the wedding, which was on the brink of being called off due to the late arrival of the bridegroom’ s party. During this period he tutored a few students in mathematics and even sought employment as a tutor in mathematics. Disappointed at the lack of recognition, during this trying period, Ramanujan had bemoaned to a friend that he was probably destined to die in poverty like Galileo! Fortunately, this was not to be. In 1910, Ramanujan sought the patronage of Prof. V. Ramaswamy Iyer – the founder of Indian Mathematical Society – who was at Salem and asked for a clerical job in his office. The only recommendation Ramanujan had was his Notebooks which by then contained several results on Magic Squares, prime numbers, infinite series, divergent series, Bernoulli numbers, Riemann zeta function, hypergeometric series, partitions, continued fractions, elliptic functions, modular equations, etc. A scrutiny of the entries in the Notebooks was sufficient to convince Prof. 
Ramaswamy Iyer that Ramanujan was a gifted mathematician and he had no mind to smother his (Ramanuja’n s) genius by an appointment in the lowest rungs of the revenue department . So, he sent Ramanujan back to Madras with a letter of introduction to Prof. P.V. Seshu Aiyar, then at the Presidency College, Madras. Prof. Seshu Aiyar, who had known Ramanujan as a student at the Government Arts College, Kumbakonam, when he himself was employed there as a lecturer of mathematics, was meeting him after a gap of four years and was greatly impressed with the contents of the well-sized Notebooks. So he gave Ramanujan a note of recommendation to that true lover of mathematics, Dewan Bahadur R. Ramachandra Rao, who was then the District Collector at Nellore. The turning point With the help of a friend, R. Krishna Rao , who was a nephew of Dewan Bahadur Ramachandra Rao, Ramanujan went to Tirukkoilur in December 1910. This was a turning point in Ramanujan’s life. Ramachandra Rao states that in the plentitude of my mathematical wisdom, I condescended to permit Ramanujan to walk into my presence . At that time, Ramanujan appeared to Ramachandra Rao as a short uncouth figure, stout, unshaved, not over-clean, with one conspicuous feature - shining eyes - walked in, with a frayed Notebook under his arm $`\mathrm{}`$. He was miserably poor. He had run away from Kumbakonam to get leisure in Madras to pursue his studies. He never craved for any distinction. He wanted leisure, in other words, simple food to be provided for him without exertion on his part and that he should be allowed to dream on. Though Ramachandra Rao gave him a patient hearing, he took a few days to look into the Notebooks of Ramanujan. At their fourth meeting, when Ramanujan confronted Ramachandra Rao with a letter from Prof. Saldhana of Bombay appreciating the genuineness of his work, Ramachandra Rao started to feel that Ramanujan’s work must be examined in depth by eminent mathematicians. Ramachandra Rao himself states that Ramanujan led him step-by-step to elliptic integrals and hypergeometric series and at last to his theory of divergent series not yet announced to the world and this converted him into a benefactor who undertook to underwrite Ramanujan’s expenses at Madras for some time. Prof. Seshu Aiyar also communicated the earliest contributions of Ramanujan to the Journal of the Indian Mathematical Society (I.M.S.) in the form of questions. These appeared in 1911 and in his brief and illustrious career Ramanujan proposed in all 59 questions or solutions to questions in this journal. The first fifteen page article entitled: Some properties of Bernoulli numbers appeared in the same 1911 volume of the journal of the I.M.S. In it Ramanujan stated eight theorems embodying arithmetical properties of the Bernoulli numbers, indicating proofs for three of them; two theorems are stated as corollaries of two others, while three theorems are stated as mere conjectures. Prof. Seshu Iyer states : Ramanujan’s methods were so terse and novel and his presentation was so lacking in clearness and precision, that the ordinary reader, unaccustomed to such intellectual gymnastics, could hardly follow him. Ramanujan lived in a small house, called ‘ Summer Hous’e , in Sami Pillai Street, Triplicane, Madras, accepting reluctantly a monthly financial assistance from the collector of Nellore for about a year. Later he declined this help and from Jan. 12 to Feb. 21, 1912, he worked as a clerk in the Accountant General s Office, on a salary of Rs.25/- per month. 
Not satisfied with this job, Ramanujan applied for and secured a post in the Accounts Section (Class III, Grade IV clerk on a salray of Rs.30/- per month) in the Madras Port Trust, with the help of Mr. S. Narayana Iyer, the Manager of Port Trust, who was the treasurer of the IMS and a friend of Profs. V. Ramaswamy Aiyar and P.V. Seshu Aiyar. Mr. Narayana Aiyer was a good mathematician and was a great source of support to Ramanujan. He was not only instrumental in Ramanujan being offered a job in the Madras Port Trust, but also in securing for Ramanujan the life-long support of Sir Francis Spring. When Ramanujan was living in No. 580, Pycrofts Road, Triplicane, Madras, he used to meet Mr. Narayana Iyer and work out Mathematics on two big slates. Narayana Aiyer’s son N. Subbanarayanan relates the role his father played in the career of Ramanujan \[18, p. 112\]: My father, being a fairly good mathematician himself, was unable to capture the strides of Ramanujan’s discoveries. He used to tell him, “ When I am not able to understand your steps, I do not know how other mathematicians of a critical nature will accept your genius. You must descend to my level and write at least ten steps between the two steps of yours”. Sri Ramanujan ud to say, “ When it is so simple and clearto me, why should I write more steps ?” But somehow my father slowly got him round, cajoled him and made him write some more, though it used to be a mighty task of boredom to him. Dewan Bahadur Ramachandra Rao wrote to Sir Francis Spring, Chairman of Madras Port Trust, about Ramanujan. He also induced Prof. C.L.T. Griffith of the Engineering College, Madras to take interest in Ramanujan and Prof. Griffith in turn wrote in November 1912, to Sir Francis Spring, the Chairman of Madras Port Trust about the very poor accountant who was a most remarkable mathematician and asking him to keep Ramanujan happily employed until something can be done to make use of his extraordinary gifts. As stated before, these efforts resulted in Ramanujan’ s entry into Port Trust, on March 1, 1912, as a Clerk in the Accounts Department. This may well be considered as the turning point in his career prospects. He held this clerical post for 14 months. His wife joined him during this period and Ramanujan shifted his residence to Saiva Muthiah Mudali Street in George Town. This period also marked the beginning of the appreciation of his scholarship and researches in mathematics. Prof. Griffith wrote to Prof. M.J.M. Hill, of University College, University of London, on Ramanujan’ s work and he received a reply in December 1912. Unfortunately, Prof. Hill could not find time to study the results. He observed that the book which will be most useful to him is Bromwic’h s Theory of Infinite Series, published by Cambridge University Press (or Macmillan) and gave advice as to how Ramanujan could get his papers published. In a sequel to this reply, dated 7 December 1912, Prof. Hill wrote to Prof. Griffith : Mr. Ramanujan is evidently a man with a taste for Mathematics, and with some ability, but he has got on the wrong lines. He does not understand the precautions which have to be taken in dealing with divergent series, otherwise he could not have obtained the erroneous results you send me, viz. 
$`1+2+3+\cdots +\infty =-1/12`$, $`1^2+2^2+3^2+\cdots +\infty ^2=0`$, $`1^3+2^3+3^3+\cdots +\infty ^3=1/240`$. The sums of $`n`$ terms of these series are: $$n(n+1)/2,\qquad n(n+1/2)(n+1)/3,\qquad [n(n+1)/2]^2$$ and they all tend to $`\infty `$ as $`n`$ tends to $`\infty `$. I do think you can do no better for him than to get him a copy of the book I recommended, Bromwich’s Theory of Infinite Series, published by Macmillan and Co., who have branches in Calcutta and Bombay. Price 15/- net. It is not as though Ramanujan was not aware of the apparently absurd-looking nature of the results on divergent series. Ramanujan, in his second letter to Hardy, wrote: I have got theorems on divergent series, theorems to calculate the convergent values corresponding to the divergent series, viz.: $`1-2+3-4+\cdots =1/4`$, $`1-1!+2!-3!+\cdots =0.596`$, $`1+2+3+4+\cdots =-1/12`$, $`1^3+2^3+3^3+\cdots +\infty ^3=1/24`$. Theorems to calculate such values for any given series (say, $`1-1^1+2^2-3^3+4^4-5^5+\cdots `$), and the meaning of such values. I have also dealt with such questions: when to use, where to use, and how to use such values, where do they fail and where do they not? Hill failed to discern the origin of the results of Ramanujan: the three sums of the integers, their squares and their cubes are indeed the values of $`\zeta (-n)`$, for $`n=1,2,3`$, respectively<sup>2</sup>. <sup>2</sup>The Riemann zeta function is defined as $`\zeta (s)=\sum_{n=1}^{\infty }1/n^s`$, $`\mathrm{Re}\,s>1`$. The $`\zeta `$ function has a unique analytic continuation to the point $`s=-1`$, where we get $`\zeta (-1)=-1/12`$, which is what Ramanujan writes as $`1+2+3+\cdots +\infty =-\frac{1}{12}`$. This result is used in the zeta function regularization method by string theorists in recent times. Ramanujan published two short notes, one On question 330 of Professor Sanjana and another a Note on a set of simultaneous equations, in the IMS journal, in 1912. When Ramanujan approached Prof. Seshu Aiyar with some theorems on Prime Numbers, his attention was drawn to G.H. Hardy’s Tract on Orders of Infinity. In it, Ramanujan observed that (\[III\], p.xxii): no definite expression has yet been found for the number of prime numbers less than any given number. Ramanujan told Prof. Seshu Aiyar that he had discovered the required result. This made Prof. Seshu Aiyar suggest communication of this and other results to Mr. G.H. Hardy – a Fellow of the Royal Society and Cayley Lecturer in Mathematics at Cambridge – a world famous mathematician, who was ten years Ramanujan’s senior. The years of fruition The life of Ramanujan, in the words of C.P. Snow, is an admirable story, and one which showers credit on nearly everyone. Ramanujan’s first letter to Prof. Hardy, dated 16th January 1913, is a historic letter. It contained the bare statements of about 120 theorems, mostly formal identities from the Notebooks. This collection obviously represented what Ramanujan himself considered were results of importance. Ramanujan wrote: Dear Sir, I beg to introduce myself to you as a clerk in the Accounts Department of the Port Trust Office at Madras on a salary of £20 per annum. I am now about 23 years of age. I have had no University education but I have undergone the ordinary school course. After leaving school I have been employing the spare time at my disposal to work at Mathematics. 
I have not trodden through the conventional regular course which is followed in a University course, but I am striking out a new path for myself. I have made a special investigation of divergent series in general and the results I get are termed by the local mathematicians as ‘startling’ $`\mathrm{}`$. I would request you to go through the enclosed papers. Being poor, if you are convinced that there is anything of value I would like to have my theorems published. I have not given the actual investigations nor the expressions that I get but I have indicated the lines on which I proceed. Being inexperienced I would very highly value any advice you may give me. Requesting to be excused for the trouble I give you, I remain, Dear Sir, Yours truly, (sd) S. Ramanujan. Prof. Hardy, the professional mathematician, who was aware that he was the first really competent person who had the chance to see some of his work, found some of the series formulae intriguing, some of the integral formulae (which were classical and known) vaguely familiar and he could prove some integral formulae with effort but these were to him the least impressive. However, some of Ramanujan’ s formulae wereon a different level and obviously both difficul and deep, which even Hardy had never seen anything in the least like them before and whic he has state ‘defeated me completely’. The following is a record of Hardy’ s reaction to this historic letter of Ramanujan, in the words of C.P. Snow : Hardy gave the manuscript a perfunctory glance, and went on reading the morning paper. It occurred to him that the first page was a little out of the ordinary for a cranky correspondent. It seemed to consist of some theorems, very strange-looking theorems, without any argument. Hardy then decided that the man must be a fraud, and duly went about the day according to his habits, giving a lecture, playing a game of tennis. But there was something nagging at the back of his mind. Anyone who could fake such theorems, right or wrong must be a fraud of genius. Was it more or less likely that there should be a fraud of genius or an unknown Indian mathematician of genius ? He went that evening after dinner to argue it out with his collaborator, J.E. Littlewood, whom Hardy always insisted was a better mathematician than himself. They soon had no doubt of the answer. Hardy was seeing the work of someone whom, for natural genius, he could not touch who, in natural genius, though of course not in achievement, as Hardy said later, belonged to the class of Euler and Gauss. Hardy made up his mind that Ramanujan should be brought to Cambridge and provided with the necessary education and contact with western mathematicians of the highest class. So, Hardy, wrote to the Secretary of the Indian students, in the India Office, London, suggesting that some means be found to get Ramanujan to Cambridge and he in turn wrote, in February 1913, to Mr. Arthur Davies, the Secretary to the Advisory Committee for Indian students in Madras conveying the desire of the tutors at Trinity to get Ramanujan to Cambridge. Sir Francis Spring, the Chairman and Mr. S. Narayana Iyer, the Manager of Madras Port Trust gave Ramanujan every possible encouragement. Dr. Gilbert T. Walker, F.R.S., Director General of Observatories, Simla, and Head of the Indian Meteorological Department, paid a visit to the harbour in Madras on February 25, 1913 and Sir Francis Spring drew his attention to the work of Ramanujan and his Notebooks. Dr. 
Walker, a good mathematician and a Senior Wrangler, was a former Fellow of Trinity College, Cambridge, as well as a lecturer and he said that in his opinion Mr. Hardy would be the most competent to arrive at a judgement of the true value of the work of Ramanujan. Since by then Hardy’ s reply had arrived (on Feb. 8, 1913), Gilbert Walker wrote to Mr. Francis Dewsbury, the Registrar of the University of Madras, commending the work of Ramanujan to be comparable in originality with that of a Mathematics Fellow in a Cambridge college, though lacking in the precision and completeness necessary for establishing the universal validity of the results. He wrote that it was perfectly clear to him that the university would be justified in enabling S. Ramanujan for a few years at least to spend the whole of his time on mathematics without any anxiety as to his livelihood. He also wanted the University to correspond with Mr. Hardy, Fellow of Trinity College, Cambridge, since Ramanujan was already in correspondence with Hardy, assuring Mr. Hardy of the University s interest in Ramanujan. The recommendation of Dr. Walker was accepted by the Board of Studies in Mathematics of the University of Madras. Then the Vice Chancellor of the University got the approval of the Syndicate overcoming the legal hurdle of awarding a research scholarship to Ramanujan who did not have the required qualification of a Master s Degree. As a measure of precaution, the consent of the Chancellor of the University (Lord Pentland, the Governor of Madras) was obtained to grant Ramanujan a special research scholarship of Rs.75/- per month for two years with the condition that Ramanujan should submit quarterly reports on his work . The Madras Port Trust granted Ramanujan two years leave (on loss of pay) to enable him to accept this scholarship from May 1913, as the first research scholar of the University of Madras. Thus began Ramanujan’ s carer as a professional mathematician. In quick succession, Ramanujan received in the next three months, four long letters from Hardy in which the latter wrote plainly about what had been proved or claimed to have been proved by Ramanujan. He clearly communicated his genuine anxiety to see what can be done to give you (Ramanujan) a better chance of making the best use of your obvious mathematical gifts. At last Ramanujan had found a sympathetic friend in Hardy and was willing to place unreservedly in his hands all that he had. Ramanujan wrote again to Hardy on 27th February 1913 and sent him more formulae and explanations. On 17th April 1913, Ramanujan wrote to Hardy about his having secured the scholarship, of £60 per annum, of the University of Madras, for two years. Ramanujan took up residence at Hanumantharayan Koil Lane in Triplicane around this time and had access to books on mathematics in the University library. His wife Janaki and his mother came to live with him. Ramanujan was initially reluctant to go abroad because of his own caste prejudices<sup>3</sup><sup>3</sup>3Crossing the oceans was considered a sacrilege by the Hindu Brahmins and often people did so were, on their return to India, treated as outcastes. All relationships with the even their families were shunned! in those days which were compounded by the extremely orthodox views of his mother to whom he was greatly attached. At the beginning of 1914, Mr. E.H. 
Neville, a young mathematician and a Fellow of Trinity College, Cambridge, was in Madras as a visiting lecturer to give a series of lectures on Differential Geometry to Mathematics Honours students of the University of Madras. Mr. Hardy entrusted him with the mission of persuading Ramanujan to visit Cambridge. Mr. Neville met Ramanujan and saw his priceless notebooks. This was sufficient to convince him of Ramanujan’ s uncommon ability and to make him take over the initiative to overcome all the difficulties in arranging for Ramanujan’ s visit to Cambridge. Prof. Richard Littlehails, who was a Professor of Mathematics with the observatory in Madras introduced Neville to everyone who carried weight in the University or in the civil administration. Neville, in turn, explained to them the importance of Ramanujan’ s stay in Cambridge, and urged them to be generous in their support. In a letter , dated 28th January 1914, to Mr. Dewsbury, the Registrar of the University of Madras, Mr. Neville wrote about the importance of securing to Ramanujan a training in the refinements of modern methods and a contact with men who know what range of ideas have been explored and what have not and prophesied that Ramanujan would respond to such a stimulus and that his name will become one of the greatest in the history of mathematics, and the University and city of Madras will be proud to have assisted in his passage from obscurity to fame. The very next day, Prof. Littlehails also wrote to Mr. Dewsbury that Ramanujan be granted by this University a scholarship of about £250 (Sterling) together with a grant of about £100 in order to enable him to proceed to Cambridge. Ramanujan is a man of most remarkable mathematical ability, amounting I might say to genius, whose light is metaphorically hidden under a bushel in Madras. The proposals regarding the scholarship to be granted to Ramanujan by the University of Madras were approved. To the lasting credit of the University of Madras, the Syndicate decided within a week to set aside Rs.10,000/- to offer Ramanujan a scholarship of £250 a year plus £100 for a passage by ship and for initial outfit<sup>4</sup><sup>4</sup>4The second class fare between London and Bombay was £32, in 1914, or about Rs. 480 – British Passenger Liners of the Five Oceans, By C.R. Vernon Gibbs (London: Putnam, 1963, p.63). (, p.397).. At the instance of Professors Neville and Littlehails, Sir Francis Spring wrote to the personal Secretary (Mr. C.B. Cotterell) to the Governor (Lord Pentland) of Madras, persuading His Excellency to speedily approve the University s sanction. Government sanction too was granted within a wek. This offer of the University of Madras was made to Ramanujan in February 1914. He sent his wife and mother back to Kumbakonam, changed the traditional hair- style of a brahmin, viz. a tuft, and got his hair trimmed in European style and left Madras by s.s. Nevasa on 17th March 1914. Prior to his departure, he arranged with the University for £60 a year to be sent to his parents in Kumbakonam, out of his annual scholarship amount. Mr. Arthur Davies and Prof. Littlehails attended to all the details regarding Ramanujan’ s passage to England. Except for the first three days when he was sea-sick, Ramanujan enjoyed the voyage and reached London through the Channel and the Thames on 14th April 1914. He was received by Mr. E.H. Neville and his brother at the docks and stayed at Cromwell Road for a few days before going to Cambridge on the 18th evening. 
He remained for a few days in Mr. Neville’ s house before moving to th college premises for stay, which even though costlier than lodging houses, was more convenient for him and the professors. Ramanujan wrote to his friend that Mr. Hardy, Mr. Neville and others here are unassuming, kind and obliging. As soon as I came here, Mr. Hardy paid £20 to the college for my entrance and other fees and made arrangements to give me a scholarship of £40 a year. Ramanujan was admitted by Mr. Hardy to Trinity College which supplemented his scholarship with the award of an exhibition of £60 a year, to augment the £250 a year scholarship awarded by the University of Madras. Though Ramanujan had access only to Carr s Synopsiss– and perhaps, to a few other books<sup>5</sup><sup>5</sup>5From the article of Mr. Narayana Iyer’s son \[Ref. 18, p.112\], Ramanujan had access to a book on Jacobis elliptic functions. Unfortunately, it is not possible to ascertain, from the records of the Library of the University of Madras, what books were available for reference to Ramanujan. – still, in the words of the historian J.R. Newman , he arrived in England abreast and often ahead of contemporary mathematical knowledge. Thus, in a lone mighty sweep, he had succeeded in recreating in his field, through his own unaided powers, a rich half century of European mathematics. One may doubt whether so prodigious a feat had ever before been accomplished in the history of thought. To Mr. Hardy Ramanujan’ s friend, philosopher an discoverer: The limitation of his knowledge was as startling as its profundity. Here was a man who could work out modular equations, and theorems of complex multiplications, to orders unheard of, whose mastery of continued fractions was, on the formal side at any rate, beyond that of any mathematician in the world, who had found for himself the functional equation of the zeta-function, and the dominant terms of many of the most famous problems in the analytic theory of numbers, and he had never heard of a doubly periodic function or of Cauchy s theorem, and had indeed but the vaguest idea of what a function of a complex variable was. His ideas of what constituted a mathematical proof were of the most shadowy description. All his results, new or old, right or wrong, had been arrived at by a process of mingled argument, intuition and induction, of which he was entirely unable to give a coherent account. With such a natural genius, Hardy collaborated and tried to teach, as he wrote, the things of which it was impossible that he should remain in ignorance. It was impossible to allow him to go through life supposing that all the zeroes of the zeta function were real. So I had to try to teach him, and in a measure I succeeded, though I obviously learnt from him much more than he learnt from me . Hardy did not attempt to convert Ramanujan into a mathematician of the modern school but enabled him to go on producing original ideas in his classical mould with rigorous proofs for the theorems he discovered. The period of Ramanujan’ s stay in England almost overlapped with the years in which World War I took place. One of the lecturers went to war<sup>6</sup><sup>6</sup>6Ramanujan was perhaps referring to the departure of Mr. J.E. Littlewood. wrote Ramanujan to a friend in India and Ramanujan felt that the other professors $`\mathrm{}`$ lost their interest owing to the $`\mathrm{}`$ war. One of the professors had remarked that Ramanujan was in England at the most unfortunate time. 
There were about 700 students before the war, but this number was reduced to 150 by November 1915. Initially Ramanujan asked for and obtained some South Indian food items (like tamarind, coconut oil, etc.) by post parcel from his home, as well as from a company in London but by January 1915, he wrote to a friend of his in India that now as well as in the future I am not in need of anything as I gained control over my taste and can live on mere rice with a little salt and lemon juice for an indefinite time. His difficulty of getting proper food was alleviated by the availability of good milk and fruits. Being a vegetarian he had no option but to cook for himself. He was attending a lecture by Mr. Berry at the University on elliptic integrals. Mr. Berry was working out some formulae on the black-board and a glance at Ramanujan’ s face, alight with excitement, caused him to ask Ramanujan whether he was following the lecture and whether he had anything to say. At this Ramanujan went to the black-board and much to everyone s surprise wrote down some of the results which were yet to be proved. This anecdote was recalled by Dr. P.C. Mahalanobis , the eminent Indian statistician, who joined King’ s College, Cambridge, in October 1913, and took a mathematics course by Prof. Hardy. The following is another anecdote about Ramanujan from Dr. Mahalanobis : I was fortunate in forming a good friendship with Ramanujan very soon. It came about in a somewhat strange way. One day, soon after his arrival, I went to see Ramanujan in his room in Trinity College. It had turned quite cold. Ramanujan was sitting near the fire<sup>7</sup><sup>7</sup>7Ramanujan’s room had electricity and he was provided with a gas stove.. I asked him whether he was quite warm at night. He said that he was feeling the cold though he was sleeping with his overcoat on and was also wrapping himself up in a shawl. I went to his bedroom to see whether he had enough blankets. I found that his bed had a number of blankets but all tucked in tightly, with a bed cover spread over them. He did not know that he should turn back the blankets and get into the bed. The bed cover was loose; he was sleeping under that linen cover with his overcoat and shawl. I showed him how to get under the blankets. He was extremely touched. I believe this was the reason why he was so kind to me. Ramanujan wrote a few articles soon after he reached Cambridge and in June 1914, Hardy presented some of the results from Ramanujan’ s Notebooks at a meeting of the London Mathematical Society. However, in January 1915, Ramanujan wrote to a friend in India that his notebook is sleeping in a corner for these four or five months. Ramanujan was more interested in getting new results (and partly due to the ongoing war), he decided to publish the old results worked out in his Notebooks after the war. After about a year and a half at Cambridge, Hardy wrote to the Registrar of the University of Madras, that Ramanujan is beyond question the best Indian mathematician of modern times. He will always be rather eccentric in his choice of subjects and methods of dealing with them. But of his extraordinary gifts there can be no questions; in some ways he is the most remarkable mathematician I have ever known. Hardy’ s letter and official report to the University, as well as an appeal by Sir Francis Spring to the University to continue the assistance extended by it to Ramanujan, made the University (in December 1915) extend the scholarship up to March 1919. 
Honours During his five year stay in Cambridge, Ramanujan published 21 research papers containing theorems on definite integrals, modular equations, Riemann’ s zeta function, infinite series, summation of series, analytic number theory, asymptotic formulae, modular functions, partitions and combinatorial analysis. His paper entitled Highly Composite Numbers which appeared in the Journal of the London Mathematical Society, in 1915, is 62 pages long and contains 269 equations. This is his longest paper. The London Mathematical Society had some financial difficulties at that time and Ramanujan was requested to reduce the length of his paper to save printing expenses. Five of these 21 research papers were in collaboration with Hardy. Ramanujan also published 5 short notes in the Records of Proceedings at meetings of the London Mathematical Society and six more in the journal of the Indian Mathematical Society. Ramanujan was awarded the B.A. degree by research in March 1916 for his work on Highly composite numbers and published as a long paper. Ramanujan’ s dissertation bore the same title and included six other papers. Ramanujan was registered as a research student in June 1914 and the prerequisite of a diploma or a certificate, as well as the domiciliary requirement of six terms must have been relaxed in his extraordinary case. It is unfortunate that a copy of this dissertation is not to be found in the records of the University . According to Hardy , this work of Ramanujan is a very peculiar one, standing somewhat apart from the main channels of mathematical research. But there can be no question as to the extraordinary insight and ingenuity which he has shown in treating it, nor any doubt that the memoir is one of the most remarkable published in England for many years. Ramanujan’ s designated tutor who monitored his progress at Trinity College, Cambridge, was E.W. Barnes, who considered Ramanujan as perhaps the most brilliant of all the top Trinity students, which included Littlewood . Hardy was immensely satisfied with the progress of Ramanujan and wrote so to the Registrar of the University of Madras supporting an extension of Ramanujan’ s two-year scholarship $`\mathrm{}`$ until, as I confidently expect, he is elected to a Fellowship at the College. Such an election I should expect in October 1917 . Later, in June 1916, in an official report on the progress of Ramanujan’s work in England to the University’s Registrar, he wrote: $`cdots`$ it is already safe to say that Mr. Ramanujan has justified abundantly all the hopes that were based upon his work in India, and has shown that he possesses powers as remarkable in their way as those of any living mathematicians. $`\mathrm{}`$ I have said enough, I hope, to give some idea of his astonishing individuality and power. India has produced many talented mathematicians in recent years, a number of whom have come to Cambridge and attained high academical distinction. They will be the first to recognize that Mr. Ramanujan’s work is of a different category . In spite of the war which was raging, which deprived Ramanujan of the center stage which he would otherwise have held with his brilliant research work in the midst of his peers, the confidence he kindled in Hardy was enough to win for him recognition and laurels very soon, but, unfortunately, the first signs of illness appeared in Ramanujan in the spring of 1917. 
Thanks to the unstinted efforts of Hardy, who did his best to get Ramanujan due recognition, he was elected a Fellow of the Royal Society of London in February 1918. The Records of the Royal Society, dated December 18, 1917, include the following certificate for the candidature of Ramanujan (then a Research student in Mathematics at Trinity College, Cambridge) for election to the Fellowship of the Royal Society<sup>8</sup><sup>8</sup>8A copy of this documjent is an exhibit in the Ramanujan Museum in Royapuram, Madras.: Qualifications (Not to exceed 250 words): Distinguished as pure mathematician, particularly for his investigations in elliptic functions and the theory of numbers. Author of the following papers, amongst others: ‘Modular equations and approximations to $`\pi `$’, Quarterly Journal, vol. 45; ‘New expressions of Riemann’s function $`\zeta (s)`$ and $`\chi (t)`$’ ,ibid, vol. 46; ‘Highly composite numbers’, Proc. London. Math. Soc., vol. 14; ‘On certain arithmetical functions’, Trans. Camb. Phil. Soc., vol. 22; ‘On the expression of a number in the form a $`x^2+bey^2+cz^2+dt^2`$, Proc. Camb. Phil. Soc. , vol. 19. Joint author with G.H. Hardy, F.R.S., of the following papers: ‘Une formulae asymptotique pour le nombre des partitions de n’, Comptes Rendus , 2 Jan. 1917; ‘Asymptotic Formulae for the distribution of numbers of various types’, Proc. London Math. Soc., vol. 16; ‘The normal number of prime factors of a number n’, Quarterly Journal, vol. 47; ‘Asymptotic Formulae in Combinatory Analysis’,Proc. London Math. Soc., (awaiting publication). being desirous of admission into the ROYAL SOCIETY OF LONDON, we the undersigned propose and recommend him as deserving that honour, and as likely to become a useful and valuable Member. This nomination was proposed by G.H. Hardy and seconded by P.A. MacMahon. The signatories with ‘Personal knowledge’ of Ramanujan were, besides Hardy and MacMahon, J.H. Grace, Joseph Larmor, T.J.I’ A. Bromwich E.W. Hobson<sup>9</sup><sup>9</sup>9Note that E.W. Hobson and H.F. Baker, who had not replied to letters written by Ramanujan from India, being signatories., H.F. Baker, J.E.Littlewood and J.W. Nicholson. Besides these 9 signatures were the signatures of E.T. Whittaker, A.R. Forsyth and A.N. Whitehead, under those who knew him from General Knowledge. This certificate on a printed form of the Royal Society has been filled by hand (and the hand writing appears to be that of Mr. Hardy), delivered at the Apartments of the Society on the 18th Dec. 1917 and read to the Society on the 24th January 1918. As a consequence, Ramanujan was, awarded on Feb. 28, 1918, the Fellowship of Royal Society, London, and the citation read: Srinivasa Ramanujan, Trinity College, Cambridge. Research student in Mathematics Distinguished as a pure mathematician particularly for his investigations in elliptic functions and the theory of numbers. In recent times<sup>10</sup><sup>10</sup>10Private communication by e-mail from Prof. Dalitz, Oxford University, dated March 29, 1996., I came to know from Prof. R.H. Dalitz, F.R.S., that the signature of Ramanujan is not in the book of the Royal Society. According to Prof. Dalitz: The book is indexed, so it is just not there. The reason undoubtedly is that he was ill in that period and could not go to the Royal Society to sign it. There are other examples of well-known F.R.S s who somehow didn t get their signature into the book. 
That means that he did not ever attend any meeting of the Royal Society; if he had, they would have brought out the book and not let him go until he had signed. Of course, it was also war-time, which meant that there were as few meetings as possible. Ramanujan was elected to a Trinity College Fellowship, in October 1918, which was a Prize Fellowship worth £250 a year for six years with no duties or conditions. These awards acted as great incentives to Ramanujan who discovered some of the most beautiful theorems in mathematics, subsequently. Hardy s letter to the Registrar of the University of Madras, Mr. Dewsbury, dated Nov. 26, 1918 struck a hopeful note: There is at last, I am profoundly glad to say, a quite definite change for the better. I think we may now hope that he has turned the corner, and is on the road to recovery. His temperature has ceased to be irregular, and he has gained nearly a stone<sup>11</sup><sup>11</sup>11One stone weight is equal to 14 pounds. in weight. The consensus of medical opinion is that he has been suffering from some obscure source of blood poisoning, which has now dried up; and that it is reasonable to expect him to recover his health completely and if all goes well fairly rapidly. Ramanujan’ s symptoms were predominantly night-time fever, loss of weight leading to his emaciated looks and these caused depressions which once drove him to the limit of attempting suicide<sup>12</sup><sup>12</sup>12A story which was recounted many years after his death, by the Astrophysicist Dr. S. Chandrasekhar, Nobel Laureate, as told to him by Prof. Hardy, and reproduced in Ch. 5 of Ref. 7.. These symptoms made the doctors consider various diagnosis, at different times: gastric ulcer, malaria, tuberculosis, cancer of the liver, etc. In recent times, with hind sight, vitamin $`B_{12}`$ deficiency (something unknown to the world at that time) has been diagnosed as a possibility . The recovery alluded to by Hardy in his letter to Dewsbury was obviously the reason why Ramanujan was persuaded to return to India, with the hope that he would soon recover and return to take up the Trinity College Fellowship awarded to him for five years. The beginning of the end After completing nearly five years at Cambridge, early in 1919, when Ramanujan appeared to have recovered sufficiently to withstand the rigours of a long voyage to India, he left England on 27th February 1919 by s.s. Nagoya. Four weeks later on 27th March he arrived at Bombay and soon after at Madras, thin, pale and emaciated, but with a scientific standing and reputation such as no Indian has enjoyed before. Professor Hardy who expressed this view also hoped that India will regard him as the treasure he is. He urged the University of Madras to make a permanent provision for him to enable him to continue his research work. Again the University rose to the occasion by granting Ramanujan £250 a year as an allowance for five years, commencing from April 1919. He was sent back to India by Hardy with the fond hope that the warmer climate would help complete his recovery from a tubercular tendency. Most unfortunately his precarious health did not improve, on his return to India. Fevers relapsed and in addition, his wife recalled that he suffered severe bouts of stomach pain too . Ramanujan was subject to fits of depression, had a premonition of his death and was a difficult patient. He spent 3 months in Madras, 2 months in Kodumudi and 4 months at Kumbakonam. 
When his condition showed signs of further deterioration, after great persuasion, Ramanujan was brought to Madras for expert medical treatment, in January 1920. Despite all the tender attention he could get from his wife who nursed him throughout this period, and the best medical attention from the doctors, his untimely end came on 26th April 1920, at Chetput, Madras, when Ramanujan was 32 years, 4 months and 4 days old. His wife lived with him, after she came of age, only for a year before his departure to England, and looked after him during his illness after his return. Even during those months of prolonged illness he kept on working, though in a reclining position, at a furious pace and kept jotting down his results on sheets of paper. In his last and only letter to Hardy, written after his return to India, in January 1920, Ramanujan communicated his original work on what he called ‘mock’ theta functions. From the available evidence and retrospective diagnosis, Young makes out the case for “hepatic amoebiasis”, a tropical disease contracted by Ramanujan in 1906, as the cause of his terminal illness. His reason as to why this was not recognized at that time is best recounted in his own words: Hepatic amoebiasis was regarded in 1918 as a tropical disease (‘tropical liver abscess’), and this would have had important implications for successful diagnosis, especially in provincial medical centers. Furthermore, the specialists called in were experts in either tuberculosis or gastric medicine. Another major difficulty is that a patient with this disease would not, unless specifically asked, recall as relevant that he had had two episodes of dysentery 11 and 8 years before. Finally, there is the very good reason that, because of the great variability in physical findings, the diagnosis was difficult in 1918 and remains so today: hepatic amoebiasis presents a severe challenge to the diagnostic skills \[and\] should be considered in any patient with fever and an abnormal abdominal examination coming from an endemic area. Hardy, who was unaware that the end was to come so soon, was shocked when it came prematurely. He was of the view that a mathematician is often comparatively old at 30. For, in his roll-call of mathematicians, Hardy wrote (, p.71): Galois died at twenty-one, Abel at twenty-seven, Ramanujan at thirty-three, Riemann at forty. I do not know an instance of a major mathematical advance initiated by a man past fifty<sup>13</sup><sup>13</sup>13A few examples which can be cited which explode ‘The Myth of the Young Mathematician’ are: Newton’s Principia was written when he was in his mid 40s; Euler, despite his blindness, produced his three volumes on integral calculus when he was in his 60s; Gauss at 34 proposed his theory of analytic functions; and in more recent times, Cartan, Poincaré, Siegel, Kolmogorov and Erdös exhibited creativity in mathematics in their later years. (Ref. Susan Landau, Notices of the AMS, vol. 44 (1997) p. 1284.). Human qualities In figure, he (Ramanujan) was a little below medium height (5ft. 5in.) and stout until emaciated by disease; he had a big head, with long black hair brushed sideways above a big forehead; his face was square, he was clean shaven, and his complexion, never really dark, grew paler during his life in England; his ears were small, his nose broad, and always his shining eyes were the conspicuous feature that Ramachandra Rao observed in 1910.
He walked stiffly, with head erect and toes out-turned; if he was not talking as he walked, his arms were held clear of the body, with hands open and palms downwards. But when he talked, whether he was walking or standing, sitting or lying down, his slender fingers were for ever alive, as eloquent as his countenance. The above physical description of Ramanujan was recorded by Prof. E.H. Neville. Ramanujan had only one passion in life – mathematics. He devoted all his time to this subject and its development. Quoting Prof. Neville again, Ramanujan had an instinctive perfection of manners that made him a delightful guest or companion. Success and fame left his natural simplicity quite untouched. To his friends he was devoted beyond measure, and he devised curiously personal ways of showing his gratitude and expressing his affection. The wonderful mathematician was indeed a loveable man. This is in complete accord with the views of Hardy on Ramanujan: $`\mathrm{}`$ the picture I want to present to you is that of a man who had his peculiarities like other distinguished men, but a man in whose society one could take pleasure, with whom one could drink tea and discuss politics or mathematics; the picture in short, not of a wonder from the East, or an inspired idiot, or a psychological fraud, but of a rational human being who happened to be a great mathematician. The integrity of Ramanujan is transparent from the following statement of Hardy: All of Ramanujan’s manuscripts passed through my hands, and I edited them very carefully for publication. The earlier ones I wrote completely. I had no share of any kind in the results, except of course when I was actually a collaborator, or when explicit acknowledgement was made. Ramanujan was almost absurdly scrupulous in his desire to acknowledge the slightest help. In a letter to a friend of Ramanujan, in September 1917, Hardy wrote: He has been seriously ill but is now a good deal better. It is very difficult to get him to take proper care of himself; if he would only do so we should have every hope that he would be quite well again before very long. In this letter Hardy referred to his discovering that Ramanujan was not writing to his people nor apparently hearing from them. He was very reserved about it and it appeared to us that there must have been some quarrel. He expressed his anxiety regarding the trouble which might have arisen and wanted it to be cleared away. Ramanujan was shy by temperament and contemplative by nature. He was a man with a great sense of humour. In the words of Neville: He had a fund of stories, and such was his enjoyment in telling them that in his great days his irrepressible laughter often swallowed the climax of his narrative. On learning, after his return to India, that the Government and the University of Madras were insisting on his going to Thanjavur, he punned on the word Thanjavur – by breaking it into three parts, ‘Than’, ‘savu’ and ‘ur’ in Tamil – and quipped that they wanted him to go to ‘Than-savu-ur’, meaning the town of his death! Later, when he was shifted to Chetput, he punned on this word, ‘Chet’-‘put’, and said that he was being taken to a place where everything will be ‘very quick’. He also did not like the name of the building ‘Crynant’ where he was to stay in Chetput, stating that the ‘Cry’ in the word did not augur well, and got himself shifted to another building, ‘Gometra’ (which is the home where he breathed his last on April 26, 1920).
Ramanujan was very affectionate towards his brothers and his mother, in particular. His wife recollected that he knew astrology and made astrological predictions to some extent and that he knew he would not live beyond 34 years. Sometimes, he is supposed to have made predictions for others also. He told her , after his return from England, that he felt very happy when the Editor of The Hindu, Mr.Kasturiranga Iyengar, went to his room and partook the pongal<sup>14</sup><sup>14</sup>14A South Indian delicacy prepared with rice, greengram, ghee, pepper, jeera and cashew nuts. prepared and served by him. In later years, Janakiammal told several who visited her that Ramanujan was confident his mathematics would provide her with funds, even after his death. Some friends of Ramanujan have remembered that Ramanujan could foresee events in visions; that being an ardent devotee of Lord Narasimha he saw drops of blood in dreams (which was considered as a sign of the Lord’ s grace) and that after seeing such drops, scrolls containing the most complicated mathematics used to unfold before him, and these he set down on paper on waking only a fraction of what was thus shown to him. Ramanujan’ s maternal grandmother was a staunch devotee of Goddess Namagiri of Namakkal. Ramanujan himself was known to his friends to be a devotee of the Goddess of Namakkal and he used to say that the Goddess appeared in his dreams and inspired him to come forth with new formulae<sup>15</sup><sup>15</sup>15This was probably his way of explaining away his incomparable intuition and success, to those who could not comprehend his ability to churn out continuously new results but who persisted in questioning him as to how he arrived at those results!. Prof. K. Ananda Rao was at King s College, when Ramanujan was at Trinity College, and he recalled , in 1962, that: In his nature he was simple, entirely free from affectation, with no trace whatever of his being self-conscious of his abilities. He was quite sociable, very polite and considerate to others. Ramanujan never forgot that as a first born he had to shoulder the responsibility of taking care of his parents. He was compassionate. Accepting the University’s offer of a scholarship, he wrote to Mr. Francis Dewsbury, the Registrar of the University of Madras, in a letter dated 11th January 1919, from a nursing home in Putney: I feel, however, that after my return to India, which I expect to happen as soon as arrangements can be made, the total amount of money to which I shall be entitled will be much more than I shall require. I should hope that, after my expenses in England have been paid, 50 a year will be paid to my parents and that the surplus, after my necessary expenses are met, should be used for some educational purpose, such in particular as the reduction of school-fees for poor boys and orphans and provision of books in schools. No doubt it will be possible to make an arrangement about this after my return. I feel very sorry that, as I have not been well, I have not been able to do so much mathematics during the last two years as before. I hope that I shall soon be able to do more and will certainly do my best to deserve the help that has been given to me. Ramanujan concluded a letter to Mr. Narayana Iyer, in November 1915, with the following words of gratitude: I am ever indebted to you and Sir Francis Spring for your zealous interest in my case from the very beginning of acquaintance. 
I would like to coclude this lecture with the following assessments of Ramanujan and his work (Bruce Berndt ): $``$ In notes left by B.M. Wilson, he tells us how George Polya was captivated by Ramanujan’s formulas. One day in 1951 while Polya was visiting Oxford, he borrowed from Hardy his copy of Ramanujan’s notebooks. A couple of days later, Polya returned them in almost a state of panic explaining that however long he kept them, he would have to keep attempting to verify the formulae therein and never again would have time to establish another original result of his own. Neville began a broadcast in Hindustani, in 1941, with the declaration : $``$ Srinivasa Ramanujan was a mathematician so great that his name transcends jealousies, the one superlatively great mathematician whom India has produced in the last thousand years. Commenting on the quality of the theorem’s in the ‘Lost’ Notebook, Richard Askey says: $``$ Try to imagine the quality of Ramanujan’s mind, one which drove him to work unceasingly while deathly ill, and one great enough to grow deeper while his body became weaker. I stand in awe of his achievements; understanding is beyond me. We would admire any mathematician whose life’s work is half of what Ramanujan found in the last year while he was dying. $``$ Paul Erdös has passed on to us Hardy’s personal ratings of mathematicians: Suppose that we rate mathematicians on the basis of pure talent on a scale from 0 to 100, Hardy gave himself a score of 25, Littlewood 30, Hilbert 80, and Ramanujan 100. (Berndt ). References 1. Ramanujan: Twelve Lectures on subjects suggested by his life and work, G.H. Hardy, Chelsea, New York, 1940. 2. Collected Papers by Srinivasa Ramanujan, edited by G.H. Hardy, P.V. Seshu Aiyar and B.M. Wilson, Chelsea, New York, 1962; first published by Cambridge Univ. Press, 1927. 3. Ramanujan: Letters and Reminiscences, Memorial Number, Vol.I, ed. P.K. Srinivasan, Muthialpet High School, Madras, 1968. 4. K.S. Viswanatha Sastri, in , P.89-93. 5. N. Govindarajan, in P.104- 105. 6. See P.94, 95, 120, 121. 7. Srinivasa Ramanujan: A Mathematical Genius, K. Srinivasa Rao, East West Books (Madras) Pvt. Ltd., 1998. 8. The Man Who Knew Infinity: A Life of the Genius Ramanujan, Robert Kanigel, Charles Scribner’s Sons, New York (1991); Indian edition: Rupa & Co. (1994). 9. Ramanujan : The Man and the Mathematician, S.R. Ranganathan, Asia Publishing House, 1967. 10. K. Chengalvarayan, in p.64 (MD2). 11. C.R. KrishnaswamiAyyar, in p. 69 (MF63). 12. T. Srinivasa Raghavacharya, in p. 75 (MK2). 13. R. Radhakrishna Ayyar, in p. 74 (MJ91). 14. N. Hari Rao, in p.120-123. 15. V. Ramaswamy Iyer, in p.129. 16. R. Krishna Rao, cousin of the mother of Prof. K. Ananda Rao. 17. R. Ramachandra Rao, in p.126-127. 18. P.V. Seshu Aiyer in p. 125. 19. C.L.T. Griffith to Sir Francis Spring, in p.50. 20. M.J.M. Hill to C.L.T. Griffith, in p.53. 21. Ramanujan: Letters and Commentary, ed. By Bruce C. Berndt and Robert A. Rankin, American Mathematical Society and London Mathematical Society (1995); also Indian Edition with a Preface, Additions to the Indian Edition and Errata, by K. Srinivasa Rao, Affiliated East West Press Pvt. Ltd. (1997). 22. Ramanujan’ s second letter to G.H. Hardy, in ref. 2, p. xxvii. 23. Ref. , p. 17. 24. C.P. Snow in his Forward to G.H. Hardy’sA Mathematician’s Apology, Cambridge University Press (1976), p.30. 25. According to C.P. Snow, Hardy was not the first eminent to be sent the Ramanujan manuscripts. 
There had been two before him, both English, both of the highest professional standard. They had each returned the manuscripts without comment. I don’t think history relates what they said, if anything, when Ramanujan became famous. As for their identity, Snow adds that: out of chivalry Hardy concealed this in all that he said or wrote about Ramanujan. (p.33 of ). However, the names are given by A. Nandy (in Alternative Sciences, Allied Publishers, New Delhi, 1980) who claims the two to be H.F. Baker and E.W. Hobson (also see p.3). 26. C.P. Snow in his Rectorial Address delivered before the University of St. Andrews, Scotland, on 13th April 1962. 27. Ref. , p. 9. 28. Ref. , p. 157-158. 29. Ref. , p. 55. 30. Refer Bruce C. Berndt and Robert A. Rankin: Ramanujan: Letters and Commentary, ref. , for these and other letters referred to. 31. E.H. Neville, in ref. , p. 138-141. 32. E.H. Neville to Dewsbury, ref. , p. 59-60. 33. Littlehailes to Dewsbury, in ref. , p. 61-66. 34. Sir Francis Spring to C. B. Cotterell, in ref. , p. 64-65. 35. Letter 2 to R. Krishna Rao, in ref. , p. 4-7. 36. Srinivasa Ramanujan, J.R. Newman, in Mathematics in the Modern World, W.H. Freeman & Co. (1968) 73-76. 37. Ref. , p. xxx. 38. Ref. , p. 226. 39. Letter 4 to R. Krishna Rao, in ref. , p. 12-19. 40. Letter 1 to S.M. Subramanian, in ref. , p. 20. 41. P.C. Mahalanobis, in ref. , p. 145-148. Also, in Ramanujan: The Man and the Mathematician, S.R. Ranganathan, Asia Publishing House, 1967, p. 81 (MN1). 42. G.H. Hardy to Dewsbury, in ref. , p. 76-77. 43. Ref. , p. 137. 44. Ref. , p. 499. 45. Ref. , p. 233. 46. Ref. , p. 199. 47. R.A. Rankin, Ramanujan as a patient, Proc. Indian Acad. Sci., Math. Sci. vol. 93 (1984) 79-100. 48. G.H. Hardy to Dewsbury, in ref. , p. 76-77. 49. Janaki Ramanujan in . 50. G.H. Hardy: A Mathematician’s Apology (with a Foreword by C.P. Snow), Cambridge Univ. Press (1976), first published in 1967. 51. G.H. Hardy to Subramanian, in ref. , p. 68-75. 52. Ref. , p. 93 (N22). 53. Janaki Ramanujan, in ref. , p. 159-161 (in Tamil), p. 171-172 (English translation). 54. Reminiscences of Janaki Ramanujan, in ref. , p. 89-91 (MT1-MT7). 55. T.K. Rajagopalan, in ref. , p. 167 and in ref. , p. 87; R. Srinivasan, in ref. , p. 165-166; R. Radhakrishna Ayyar, in ref. 9, p. 73. 56. K. Ananda Rao, in ref. , p. 143-144. 57. Copy of Ramanujan’s letter to the Registrar, University of Madras, in ref. , plate 6, between pages 104-105. Also reproduced in ref. 2, p. xix. 58. Letter to Narayana Iyer, in ref. , p. 32-33. 59. D.A.B. Young, Ramanujan’s illness, Current Science vol. 67 (1994) p. 967-972. 60. Ramanujan’s Notebooks, Part I, Bruce C. Berndt, Springer-Verlag (1985).
# References COLBY 00-02 IUHET 419 January 2000 SEARCHING FOR LORENTZ VIOLATION IN THE GROUND STATE OF HYDROGEN <sup>1</sup><sup>1</sup>1Presented by R.B. at Orbis Scientiae 1999, Ft. Lauderdale, Florida, December 1999 Robert Bluhm<sup>a</sup>, V. Alan Kostelecký<sup>b</sup>, and Neil Russell<sup>c</sup> <sup>a</sup>Physics Department Colby College Waterville, ME 04901 USA <sup>b</sup>Physics Department Indiana University Bloomington, IN 47405 USA <sup>c</sup>Physics Department Northern Michigan University Marquette, MI 49855 USA INTRODUCTION The hydrogen atom has a rich history as a testing ground of fundamental physics where small differences between theory and experiment have led to major advances . With the advent of optical high-resolution spectroscopy and tunable dye lasers, new tests of quantum electrodynamics in hydrogen have become possible. The two-photon 1S-2S transition is especially suitable for high-precision tests and metrology because of its small natural linewidth of only $`1.3`$ Hz. This transition has been measured in a cold atomic beam of hydrogen with a precision of $`3.4`$ parts in $`10^{14}`$. It has also been observed in trapped hydrogen with a precision of about one part in $`10^{12}`$. As experimental techniques advance, the measurement of the line center to one part in $`10^3`$ becomes plausible with an ultimate resolution of one part in $`10^{18}`$, making new tests of fundamental theory possible. The recent production of antihydrogen in experiments ushers in a new era for testing fundamental physics by allowing direct high-precision comparisons of hydrogen and antihydrogen . Since the CPT theorem predicts that all local relativistic quantum field theories of point particles are invariant under the combined operations of charge conjugation C, parity reversal P, and time reversal T , comparisons of the 1S-2S transition in hydrogen and antihydrogen should provide a new high-precision test of CPT. Indeed, two future experiments at CERN are aimed at making high-resolution spectroscopic comparisons of the 1S-2S transitions in spin-polarized hydrogen and antihydrogen confined within a magnetic trap. The comparisons of the 1S-2S transition should have relative figures of merit comparable to that of the neutral meson system, which places a bound on the mass difference between the $`K_0`$ and $`\overline{K}_0`$ at less than 2 parts in $`10^{18}`$ . In this proceedings, we first review a recent theoretical analysis we made of CPT and Lorentz tests in hydrogen and antihydrogen, which was published in Ref. . This included investigations of on-going experiments in hydrogen as well as the proposed experiments at CERN comparing hydrogen and antihydrogen. We showed that these experiments can provide tests of both CPT-preserving and CPT-violating Lorentz symmetry. In addition to examining comparisons of 1S-2S transitions, we suggested other possible experimental signatures that are sensitive to CPT or Lorentz breaking, including measurements of the Zeeman hyperfine levels in the ground state of hydrogen. Some of these measurements are currently being made and preliminary results are presented for the first time in Walsworth’s talk . THEORETICAL FRAMEWORK Our analysis uses a theoretical framework that describes CPT- and Lorentz-violating effects in an extension of the SU(3)$`\times `$SU(2)$`\times `$U(1) standard model and quantum electrodynamics (QED) . 
The framework originates from the idea of spontaneous CPT and Lorentz breaking in a more fundamental theory such as string theory. Within this framework, possible violations of CPT and Lorentz symmetry are included which maintain desirable features of quantum field theory, including gauge invariance, power-counting renormalizability, and microcausality. The model is highly constrained, and only a small number of terms are possible. These terms are controlled by parameters that can be bounded by experiments. This framework has been used to analyze neutral-meson experiments, baryogenesis, photon properties, Penning-trap experiments, atomic clock comparisons, muon experiments, and experiments in spin-polarized matter. To investigate experiments in hydrogen and antihydrogen, it suffices to work in the context of the QED extension. The modified Dirac equation for a four-component spinor field $`\psi `$ describing electrons and positrons of mass $`m_e`$ and charge $`q=-|e|`$ in a Coulomb potential $`A^\mu `$ is $$\left(i\gamma ^\mu D_\mu -m_e-a_\mu ^e\gamma ^\mu -b_\mu ^e\gamma _5\gamma ^\mu -\frac{1}{2}H_{\mu \nu }^e\sigma ^{\mu \nu }+ic_{\mu \nu }^e\gamma ^\mu D^\nu +id_{\mu \nu }^e\gamma _5\gamma ^\mu D^\nu \right)\psi =0.$$ (1) Here, natural units with $`\hbar =c=1`$ are used, $`iD_\mu \equiv i\partial _\mu -qA_\mu `$, and $`A^\mu =(|e|/4\pi r,0)`$. The two terms involving the effective coupling constants $`a_\mu ^e`$ and $`b_\mu ^e`$ violate CPT, while the three terms involving $`H_{\mu \nu }^e`$, $`c_{\mu \nu }^e`$, and $`d_{\mu \nu }^e`$ preserve CPT. All five of these terms break Lorentz invariance. Since no CPT or Lorentz violation has been observed, these parameters are assumed to be small. Free protons are also described by a modified Dirac equation involving the corresponding parameters $`a_\mu ^p`$, $`b_\mu ^p`$, $`H_{\mu \nu }^p`$, $`c_{\mu \nu }^p`$, and $`d_{\mu \nu }^p`$. A perturbative treatment in the context of relativistic quantum mechanics is used to examine the bound states of hydrogen and antihydrogen. In this approach, the unperturbed hamiltonian $`\widehat{H}_0`$ and its energy eigenfunctions are the same for hydrogen and antihydrogen. All of the perturbations in free hydrogen described by conventional quantum electrodynamics are identical for both systems. However, the interaction hamiltonians for hydrogen and antihydrogen including the effects of possible CPT- and Lorentz-breaking are not the same. These are obtained in several steps, involving charge conjugation to obtain the Dirac equation for antihydrogen, a field redefinition to eliminate additional time derivatives in the Dirac equation, and the use of standard relativistic two-fermion techniques. EXPERIMENTS WITH FREE HYDROGEN We first consider free hydrogen and antihydrogen in the absence of external trapping potentials. Using a description in terms of the basis states $`|m_J,m_I\rangle `$, with $`J=1/2`$ and $`I=1/2`$ describing the uncoupled atomic and nuclear angular momenta, the leading-order energy corrections can be computed. The energy shifts at the 1S and 2S levels are found to be the same. For hydrogen they are given by $`\mathrm{\Delta }E^H(m_J=\pm \frac{1}{2},m_I=\pm \frac{1}{2})=(a_0^e+a_0^p-c_{00}^em_e-c_{00}^pm_p)`$ $`+{\displaystyle \frac{m_J}{|m_J|}}(-b_3^e+d_{30}^em_e+H_{12}^e)+{\displaystyle \frac{m_I}{|m_I|}}(-b_3^p+d_{30}^pm_p+H_{12}^p),`$ (2) where $`m_e`$ and $`m_p`$ are the electron and proton masses, respectively.
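Although the formulas below do not use it explicitly, it may help to note that the spin-dependent pieces of (2) involve each particle species only through one effective combination. With the shorthand (introduced here purely for orientation; it is not a quantity defined elsewhere in this section) $$\tilde{b}_3^w\equiv b_3^w-d_{30}^wm_w-H_{12}^w,\qquad w=e,p,$$ the spin-dependent part of (2) reads $`-\frac{m_J}{|m_J|}\tilde{b}_3^e-\frac{m_I}{|m_I|}\tilde{b}_3^p`$, and the same two combinations $`\tilde{b}_3^e`$ and $`\tilde{b}_3^p`$ control the hyperfine shifts and maser frequency discussed below.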
The corresponding energy corrections for the 1S and 2S states of antihydrogen $`\mathrm{\Delta }E^{\overline{H}}`$ are obtained from these by letting $`a_\mu \rightarrow -a_\mu `$, $`d_{\mu \nu }\rightarrow -d_{\mu \nu }`$, and $`H_{\mu \nu }\rightarrow -H_{\mu \nu }`$ for both the electron-positron and proton-antiproton coefficients. The hyperfine interaction couples the electron and proton or positron and antiproton spins. The appropriate basis states are then $`|F,m_F\rangle `$ which are linear combinations of the states $`|m_J,m_I\rangle `$. The selection rules for the two-photon 1S-2S transition are $`\mathrm{\Delta }F=0`$ and $`\mathrm{\Delta }m_F=0`$. These selection rules require that the 1S-2S transitions in free hydrogen and antihydrogen occur between states of the same spin configurations. As a result, the leading-order energy shifts are equal, and there are no observable leading-order shifts in frequency in either hydrogen or antihydrogen. There are, however, subleading-order shifts in the 1S-2S frequencies. These are due to small relativistic corrections of order $`\alpha ^2`$ times the CPT- or Lorentz-breaking parameters which are different at the 1S and 2S levels. For example, the term proportional to $`b_3^e`$ results in a frequency shift in the $`m_F=1\rightarrow m_F^{\prime }=1`$ transition relative to that of the $`m_F=0\rightarrow m_F^{\prime }=0`$ line (which remains unshifted) equal to $`\delta \nu _{1S-2S}^H\approx \alpha ^2b_3^e/8\pi `$. However, electron bounds obtained in $`g-2`$ experiments suggest that $`b_3^e`$ is sufficiently small so that $`\delta \nu _{1S-2S}^H`$ would be below the expected 1S-2S line resolution. EXPERIMENTS WITH TRAPPED HYDROGEN The experiments to be performed at CERN will use trapped hydrogen and antihydrogen in a magnetic field $`B`$. We use the conventional labels $`|a_n\rangle `$, $`|b_n\rangle `$, $`|c_n\rangle `$, and $`|d_n\rangle `$ in order of increasing energy to denote the four S-state hyperfine levels of hydrogen with principal quantum number $`n`$. The $`|b_n\rangle `$ and $`|d_n\rangle `$ states have proton and electron spins that are aligned, while the remaining two states have mixed spin configurations given by $$|c_n\rangle =\mathrm{sin}\theta _n|-\frac{1}{2},\frac{1}{2}\rangle +\mathrm{cos}\theta _n|\frac{1}{2},-\frac{1}{2}\rangle ,$$ (3) $$|a_n\rangle =\mathrm{cos}\theta _n|-\frac{1}{2},\frac{1}{2}\rangle -\mathrm{sin}\theta _n|\frac{1}{2},-\frac{1}{2}\rangle .$$ (4) The mixing angles depend on $`n`$ and obey $`\mathrm{tan}2\theta _n\approx (51\mathrm{mT})/n^3B`$. The states $`|c_n\rangle `$ and $`|d_n\rangle `$ are low-field seeking states that remain confined in the trap. However, collisional effects lead to a loss of population over time of the $`|c_n\rangle `$ states. One possible measurement would therefore be to compare the frequencies $`\nu _d^H`$ and $`\nu _d^{\overline{H}}`$ for transitions between $`|d_n\rangle `$ states at the 1S and 2S levels. These measurements are particularly attractive because the 1S-2S $`|d_1\rangle \rightarrow |d_2\rangle `$ transitions are field-independent for small values of $`B`$. However, since the spin configurations of the 1S $`|d_1\rangle `$ and 2S $`|d_2\rangle `$ states are the same, we find no observable frequency shifts to leading order in this case, i.e., $`\delta \nu _d^H=\delta \nu _d^{\overline{H}}\approx 0`$. An alternative experiment would look at transitions involving the mixed states $`|c_n\rangle `$ and $`|a_n\rangle `$. Here, the $`n`$ dependence in the hyperfine splitting leads to a difference in the amount of spin mixing at the 1S and 2S levels.
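The size of this mixing difference is easy to estimate from the formula for $`\mathrm{tan}2\theta _n`$ quoted above. The following minimal sketch is our own illustration; the 0.2 T field value is an arbitrary example, not a number taken from the proposed experiments.

    # Evaluate cos(2*theta_n) from tan(2*theta_n) ~ (51 mT)/(n^3 B) and compare
    # the 1S (n=1) and 2S (n=2) spin-mixing factors.  Illustrative only.
    import math

    def cos2theta(n, B_tesla, scale_tesla=0.051):
        """cos(2*theta_n) for principal quantum number n and bias field B."""
        return math.cos(math.atan(scale_tesla / (n**3 * B_tesla)))

    B = 0.2  # tesla, arbitrary example value
    k1, k2 = cos2theta(1, B), cos2theta(2, B)
    print(k1, k2, k2 - k1)   # roughly 0.97, 0.9995 and a difference of about 0.03

The difference vanishes both as $`B\rightarrow 0`$ (both angles approach 45 degrees) and as $`B`$ becomes large (both states become nearly pure $`|\frac{1}{2},-\frac{1}{2}\rangle `$); it is this nonzero difference at intermediate fields that produces the frequency shift discussed next.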
This gives rise to a nonzero frequency shift in 1S-2S transitions between $`|c_n\rangle `$ hyperfine states: $$\delta \nu _c^H\propto (\mathrm{cos}2\theta _2-\mathrm{cos}2\theta _1)(b_3^e-b_3^p-d_{30}^em_e+d_{30}^pm_p-H_{12}^e+H_{12}^p),$$ (5) The corresponding transition for antihydrogen can be computed as well. The hyperfine states in antihydrogen in the same magnetic fields have opposite spin assignments for the positron and antiproton compared to those of the electron and proton in hydrogen. The resulting shift $`\delta \nu _c^{\overline{H}}`$ for antihydrogen is the same as for hydrogen except that the signs of $`b_3^e`$ and $`b_3^p`$ are changed. Two possible experimental signatures for CPT and Lorentz breaking follow from these results. The first involves looking for sidereal time variations in the frequencies $`\nu _c^H`$ and $`\nu _c^{\overline{H}}`$. The second involves measuring the instantaneous 1S-2S frequency difference in hydrogen and antihydrogen in the same magnetic trapping fields. In either case, the strength of the signal would depend on the difference in the amount of spin mixing at the 1S and 2S levels. The optimal experiment would be one that maximizes the 1S-2S spin-mixing difference, which is controlled by the magnetic field $`B`$. Since the 1S-2S $`|c_1\rangle \rightarrow |c_2\rangle `$ transition in hydrogen and antihydrogen is field dependent, these experiments would need to overcome line broadening effects due to field inhomogeneities in the trap. EXPERIMENTS ON THE GROUND-STATE HYPERFINE LEVELS The best tests of CPT and Lorentz symmetry in atomic systems are those that have the sharpest frequency resolutions. It is therefore natural to consider other transitions in hydrogen and antihydrogen besides the 1S-2S transition that can be measured with high precision. One candidate set involves measurements of the ground-state hyperfine levels in hydrogen and antihydrogen. For example, hydrogen maser transitions between $`F=0`$ and $`F^{\prime }=1`$ hyperfine states can be measured with accuracies of less than $`1`$ mHz. High-resolution radio-frequency measurements can also be made on transitions between Zeeman hyperfine levels in a magnetic field. To examine these types of experiments, we compute the energy shifts of the four hydrogen ground-state hyperfine levels in a magnetic field. The spin-dependent contributions to the energy are $$\mathrm{\Delta }E_a^H\approx \widehat{\kappa }(b_3^e-b_3^p-d_{30}^em_e+d_{30}^pm_p-H_{12}^e+H_{12}^p),$$ (6) $$\mathrm{\Delta }E_b^H\approx b_3^e+b_3^p-d_{30}^em_e-d_{30}^pm_p-H_{12}^e-H_{12}^p,$$ (7) $$\mathrm{\Delta }E_c^H\approx -\mathrm{\Delta }E_a^H,$$ (8) $$\mathrm{\Delta }E_d^H\approx -\mathrm{\Delta }E_b^H,$$ (9) where $`\widehat{\kappa }\equiv \mathrm{cos}2\theta _1`$. In a very weak or zero magnetic field $`\widehat{\kappa }\approx 0`$ and the energies of the states $`|a_1\rangle `$ and $`|c_1\rangle `$ are unshifted while the states $`|b_1\rangle `$ and $`|d_1\rangle `$ acquire equal and opposite shifts. The degeneracy of the three $`F=1`$ levels is therefore lifted. A conventional hydrogen maser operates on the field-independent transition $`|c_1\rangle \rightarrow |a_1\rangle `$ in the presence of a small ($`B<10^{-6}`$ T) magnetic field. Since $`\widehat{\kappa }<10^{-4}`$ in this case, the leading-order effects due to CPT and Lorentz violation are suppressed. However, the frequencies of the Zeeman hyperfine transitions between $`F=1`$ levels are affected by CPT and Lorentz violation and have unsuppressed corrections.
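The relations (6)–(9) can be traced back to (2)–(4) by a short calculation, sketched here for convenience (the notation $`\mathrm{\Delta }E(m_J,m_I)`$ for the spin-dependent part of (2) is introduced only for this step). At this order the perturbation is diagonal in $`m_J`$ and $`m_I`$, so there are no cross terms, and the spin-dependent shift is odd under reversing both spin projections, $`\mathrm{\Delta }E(-\frac{1}{2},\frac{1}{2})=-\mathrm{\Delta }E(\frac{1}{2},-\frac{1}{2})`$. Hence, from (3) and (4), $$\mathrm{\Delta }E_a^H=\mathrm{cos}^2\theta _1\mathrm{\Delta }E(-\frac{1}{2},\frac{1}{2})+\mathrm{sin}^2\theta _1\mathrm{\Delta }E(\frac{1}{2},-\frac{1}{2})=-\widehat{\kappa }\mathrm{\Delta }E(\frac{1}{2},-\frac{1}{2}),$$ $$\mathrm{\Delta }E_c^H=\mathrm{sin}^2\theta _1\mathrm{\Delta }E(-\frac{1}{2},\frac{1}{2})+\mathrm{cos}^2\theta _1\mathrm{\Delta }E(\frac{1}{2},-\frac{1}{2})=+\widehat{\kappa }\mathrm{\Delta }E(\frac{1}{2},-\frac{1}{2})=-\mathrm{\Delta }E_a^H,$$ while the stretched states give $`\mathrm{\Delta }E_b^H=\mathrm{\Delta }E(-\frac{1}{2},-\frac{1}{2})=-\mathrm{\Delta }E(\frac{1}{2},\frac{1}{2})=-\mathrm{\Delta }E_d^H`$. Inserting the spin-dependent part of (2), $`\mathrm{\Delta }E(\frac{1}{2},-\frac{1}{2})=(-b_3^e+d_{30}^em_e+H_{12}^e)-(-b_3^p+d_{30}^pm_p+H_{12}^p)`$, reproduces (6)–(9).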
For example, the correction to the $`|c_1\rangle \rightarrow |d_1\rangle `$ transition frequency in a very weak field is given by $$\delta \nu _{cd}^{\mathrm{H}\mathrm{maser}}\approx (-b_3^e-b_3^p+d_{30}^em_e+d_{30}^pm_p+H_{12}^e+H_{12}^p)/2\pi .$$ (10) A signature of CPT and Lorentz violation would thus be sidereal time variations in the frequency $`\nu _{cd}^{\mathrm{H}\mathrm{maser}}`$. The transition $`|c_1\rangle \rightarrow |d_1\rangle `$ in a hydrogen maser is field-dependent, and one would expect field broadening to limit the resolution of frequency measurements. However, as described by Walsworth, it is possible to perform a double-resonance experiment in which variations of the $`|c_1\rangle \rightarrow |d_1\rangle `$ transition are determined by monitoring their effect on the usual $`|a_1\rangle \rightarrow |c_1\rangle `$ maser line. This then permits a search for sidereal variations in the frequency $`\nu _{cd}^{\mathrm{H}\mathrm{maser}}`$. Walsworth’s group at the Harvard-Smithsonian Center has begun this experiment, and their preliminary results indicate that the sidereal variations in $`\nu _{cd}^{\mathrm{H}\mathrm{maser}}`$ can be bounded at a level of approximately 0.7 mHz. This corresponds to a bound on the combination of parameters in $`\delta \nu _{cd}^{\mathrm{H}\mathrm{maser}}`$ in Eq. (10) at a level of $`10^{-27}`$ GeV. Defining a figure of merit as the ratio of the amplitude of the sidereal variations of the energy relative to the energy itself, i.e., $`r_{\mathrm{hf}}^H\equiv (\mathrm{\Delta }E_{\mathrm{hf}})_{\mathrm{sidereal}}/E_{\mathrm{hf}}`$, one obtains from the results of Walsworth’s experiment the value $$r_{\mathrm{hf}}^H<10^{-27}.$$ (11) This now gives one of the sharpest bounds on CPT and Lorentz violation for protons and electrons. In principle, measurements of this kind can also be made on the Zeeman hyperfine levels in antihydrogen. Since only in a direct comparison of matter and antimatter can the CPT-violating effects be isolated, it is hoped that the technical obstacles of performing radio-frequency spectroscopy in trapped antihydrogen can be overcome. As an alternative to measurements in a very weak magnetic field, which might be hard to maintain in a trapping environment, one could perform a comparison of $`|c_1\rangle \rightarrow |d_1\rangle `$ transitions in hydrogen and antihydrogen at the field-independent transition point $`B\approx 0.65`$ T. At this field strength, the electron and proton spins in the $`|c_1\rangle `$ state are highly polarized with $`m_J=\frac{1}{2}`$ and $`m_I=-\frac{1}{2}`$. The transition $`|c_1\rangle \rightarrow |d_1\rangle `$ is effectively a proton spin-flip transition. The instantaneous difference in this transition for hydrogen and antihydrogen is found to be $`\mathrm{\Delta }\nu _{cd}\approx 2b_3^p/\pi `$. A measurement of this difference would provide a direct, clean, and accurate test of CPT for the proton. CONCLUSIONS In summary, we find that by using a general framework we are able to analyze proposed tests of CPT in hydrogen and antihydrogen. We find that in addition to testing CPT, these experiments will also test Lorentz symmetry. Our analysis shows that in comparisons of 1S-2S transitions in hydrogen and antihydrogen, control of the spin mixing at the 1S and 2S levels is an essential feature in designing an effective test of CPT and Lorentz symmetry. We also find that high-resolution radio frequency experiments in hydrogen or antihydrogen offer the possibility of new and precise tests of CPT and Lorentz symmetry. One very recent experiment using a double-resonance technique in a hydrogen maser has obtained a new CPT and Lorentz bound at the level of $`10^{-27}`$ for electrons and protons.
ACKNOWLEDGMENTS This work was supported in part by the National Science Foundation under grant number PHY-9801869. REFERENCES
CANONICAL TRANSFORMATIONS AND SOLDERING R. Banerjee <sup>1</sup><sup>1</sup>1e-mail address: rabin@boson.bose.res.in S. N. Bose National Centre for Basic Sciences, Block JD, Sector III, Salt Lake, Calcutta 700091, India. and Subir Ghosh <sup>2</sup><sup>2</sup>2e-mail address: subir@boson.bose.res.in ; sghosh@isical.ac.in P. A. M. U., Indian Statistical Institute, 203 B. T. Road, Calcutta 700035, India. Abstract: We show that the recently developed soldering formalism in the Lagrangian approach and canonical transformations in the Hamiltonian approach are complementary. The examples of gauged chiral bosons in two dimensions and self-dual models in three dimensions are discussed in detail. The concept of soldering has proved extremely useful in different contexts. The soldering formalism essentially combines two distinct Lagrangians manifesting dual aspects of some symmetry (like left-right chirality or self and anti-self duality etc.) to yield a new Lagrangian which is devoid of, or rather hides, that symmetry. The quantum interference effects, whether constructive or destructive, among the dual aspects of symmetry are thereby captured through this mechanism. Alternatively, in the Hamiltonian formulation, Canonical Transformations (CT) can sometimes be used to decompose a composite Hamiltonian into two distinct pieces. A familiar example is the decomposition of the Hamiltonian of a particle in two dimensions, moving in a constant magnetic field and quadratic potential, into two pieces corresponding to the Hamiltonians of two one dimensional oscillators, rotating in a clockwise and an anti-clockwise direction, respectively. It appears, therefore, that the soldering formalism, which fuses the symmetries, and the CT, which decouples them, are complementary to each other. In the present paper, we shall elaborate on these notions by considering a particular symmetry, namely chirality. This is known to play a pivotal role in discussing different aspects of two dimensional field theories with left movers (leftons) and right movers (rightons), ideas which have also been used in the string theory context. Consider, e.g., two such Lagrangians $`L_+(q_+,\dot{q}_+)`$ and $`L_-(q_-,\dot{q}_-)`$ that permit a soldering. The basic variables transform identically under a transformation $$\delta q_+=\delta q_-=\alpha .$$ (1) The main idea is to show that although $`L_\pm `$ are not invariant under these transformations, it is possible to devise a modified Lagrangian $$L(q_\pm ,\dot{q}_\pm ,\eta )=L_+(q_+,\dot{q}_+)+L_-(q_-,\dot{q}_-)+\mathrm{\Delta }(q_\pm ,\dot{q}_\pm ,\eta ),$$ (2) that will be invariant. The new (external) field $`\eta `$ is the soldering field which can be eliminated in favour of the original variables by using the equations of motion. An explicit form for $`\mathrm{\Delta }`$ is obtained such that $`L`$ remains invariant under the soldering transformation. The soldered Lagrangian, incidentally, no longer depends on $`q_\pm `$, but on their difference $`q_+-q_-=q`$. Hence, this Lagrangian is manifestly invariant under the transformations (1). The Hamiltonian obtained from the soldered Lagrangian by a formal Legendre transformation is denoted in terms of its canonical pairs by $`H(q,p)`$. Performing a CT into a new canonical set $`(Q,P)`$, the Hamiltonian is changed to $`H(Q,P)`$.
For systems containing the dual aspects of some symmetry, this $`H(Q,P)`$ actually decomposes into distinct pieces, $$H(Q,P)=H_1(q_1,p_1)+H_2(q_2,p_2),$$ (3) where the new set $`(Q,P)`$ consistis of independent canonical pairs $`(q_1,p_1)`$ and $`(q_2,p_2)`$. Indeed, as we will show later, the matching of the degrees of freedom count between the original and final system is crucial. It is now possible to identify these pieces with $`H_\pm `$ obtained from the original $`L_\pm `$, thereby establishing a connection between the soldering formulation and CT. Before going to field theory in two dimensions, let us first consider quantum mechanics in these dimensions where the basic ideas are illuminated in a simple way. A very familiar example, alluded earlier, is provided by the quantum mechanical model , $$L=\frac{m}{2}\dot{x}_i^2+\frac{B}{2}ϵ_{ji}x_j\dot{x}_i\frac{K}{2}x_i^2;i=1,2$$ (4) which describes the planar motion of a unit charged particle in a constant magnetic field $`B`$ and a prescribed electric field. The Hamiltonian is given by $$H=p^i\dot{x}_iL=\frac{1}{2m}(p_i+\frac{B}{2}ϵ_{ij}x_j)^2+\frac{K}{2}x_i^2,$$ (5) with $`p^i=\frac{L}{\dot{x}_i}`$. Performing a CT , $$p_+=\sqrt{\frac{w_+}{2m\mathrm{\Omega }}}p_1+\sqrt{\frac{w_+m\mathrm{\Omega }}{2}}x_2;p_{}=\sqrt{\frac{w_{}}{2m\mathrm{\Omega }}}p_1\sqrt{\frac{w_{}m\mathrm{\Omega }}{2}}x_2,$$ $$x_+=\sqrt{\frac{m\mathrm{\Omega }}{2\omega _+}}x_1\sqrt{\frac{1}{2w_+m\mathrm{\Omega }}}p_2;x_{}=\sqrt{\frac{m\mathrm{\Omega }}{2\omega _{}}}x_1+\sqrt{\frac{1}{2w_{}m\mathrm{\Omega }}}p_2,$$ (6) where $$w_\pm =\mathrm{\Omega }\pm \frac{B}{2m},\mathrm{\Omega }=\sqrt{\frac{B^2}{4m^2}+\frac{K}{m}}$$ (7) the Hamiltonian takes the form $$H=H_++H_{}=\frac{1}{2}[p_+^2+w_+^2x_+^2]+\frac{1}{2}[p_{}^2+w_{}^2x_{}^2].$$ (8) This corresponds to the Hamiltonian of two decoupled Harmonic Oscillators with independent canonical pairs $`(x_+,p_+)`$ and $`(x_{},p_{})`$, respectively. The above analysis can be understood strictly in the Lagrangian formalism by following our soldering prescription . The Hamiltonians $`H_\pm `$, in (8) can be derived from the following Lagrangians respectively, $$L_+=\frac{1}{2}(w_+ϵ_{ij}x_i\dot{x}_jw_+^2x_i^2);L_{}=\frac{1}{2}(w_{}ϵ_{ij}y_i\dot{y}_jw_{}^2y_i^2),$$ (9) which have a non-trivial algebra, following from their symplectic structure, $$\{x_i,x_j\}=\frac{1}{w_+}ϵ_{ij};\{y_i,y_j\}=\frac{1}{w_{}}ϵ_{ij}.$$ These characterise one dimensional oscillators rotating in the clockwise and anti-clockwise sense with frequencies $`w_+`$ and $`w_{}`$ respectively. Hence $`L_+`$ and $`L_{}`$ can be soldered as shown in . In fact, $`L_\pm `$ mimic the left and right movers ,(or leftons and rightons), which one usually associates with chiral field theory models in two dimensional space time. The basic steps of soldering are just recapitulated. Consider the transformations, $$\delta x_i=\delta y_i=\eta _i.$$ (10) It is possible to construct a modified Lagrangian, $$L=L_+(x_i)+L_{}(y_i)+W_i[J_i^+(x_i)+J_i^{}(y_i)]\frac{1}{2}(w_+^2+w_{}^2)W_i^2,$$ (11) with $$J_{\pm i}(z_i)=w_\pm (\pm \dot{z}_i+w_\pm ϵ_{ij}z_j);z_i=x_i,y_i,$$ which is invariant under the transformations (10) together with $`\delta W_i=ϵ_{ij}\eta _j`$. 
Eliminating the soldering field $`W_i`$ from (11) we obtain the final Lagrangian in terms of the difference of original variables, $$L=\frac{1}{2}\dot{X}_i^2+\frac{1}{2}(w_{}w_+)ϵ_{ij}X_i\dot{X}_j\frac{1}{2}w_+w_{}X_i^2;X_i=\sqrt{\frac{w_+w_{}}{w_+^2+w_{}^2}}(x_iy_i).$$ (12) With the identifications, $$w_{}w_+=\frac{B}{m},w_{}w_+=\frac{K}{m}$$ which follow from (7), the above Lagrangian goes over to (4). This shows the dual roles of soldering and CT complementing each other. It is however essential that the oscillators must have the left-right symmetry (as in (9) to effect the soldering. Observe that if $`w_+=w_{}`$, then (12) just reduces to the Lagrangian of a two-dimensional oscillator. Physically speaking, two one dimensional chiral oscillators moving in opposite directions have been combined to yield a conventional planar oscillator. If, however, $`w_+w_{}`$, the left and right oscillators do not cancel so that a net rotational motion survives. This is the origin of the generation of the ”magnetic field” effect in (12). Let us now consider two dimensional field theory. An explicit one loop computation of the two dimensional chiral fermion determinant in the presence of an external abelian gauge field, yields , in the bosonised language, the following results, $$W_\pm =\frac{1}{4\pi }d^2x(_+\varphi _{}\varphi +2eA_\pm _{}\varphi +ae^2A_+A_{}).$$ (13) where we have introduced light cone variables, $$A_\pm =\frac{1}{\sqrt{2}}(A_0\pm A_1)=A^{};_\pm =\frac{1}{\sqrt{2}}(_0\pm _1)=^{},$$ and $`a`$ is a parameter manifesting bosonization or regularization ambiguities. Note that our regularization preserves Bose symmetry , so that the same factor $`a`$ appears in either expression. The soldering of $`W_+(\varphi )`$ with $`W_{}(\rho )`$ is easily done by exploiting the relevant chiral symmetries. Consider the transformation, $$\delta \varphi =\delta \rho =\alpha ;\delta A_\pm =0.$$ (14) Introducing the soldering fields $`B_\pm `$, it is possible to verify that the modified effective action, $$W[\varphi ,\rho ]=W_+[\varphi ]+W_{}[\rho ]d^2x[B_{}J_+(\varphi )+B_+J_{}(\rho )]+\frac{1}{2\pi }d^2xB_+B_{},$$ (15) with the currents, $$J_\pm (\eta )=\frac{1}{2\pi }(_\pm \eta +eA_\pm );\eta =\varphi ,\rho ,$$ is invariant under the symmetry including (14) and $`\delta B_\pm =_\pm \alpha `$. Eliminating $`B_\pm `$, by using the equations of motion, the soldered effective action is given by $$W[\theta ]=\frac{1}{4\pi }d^2x(_+\theta _{}\theta +2eA_+_{}\theta 2eA_{}_+\theta +2(a1)e^2A_+A_{}),$$ (16) where $`\theta =\varphi \rho `$. Conventional gauge invariance is restored for $`a=1`$. Thus with this particular value, we see how the individual components pertaining to the left and right chiral effective actions are soldered to yield the gauge invariant result for the vector effective action. This is an example of a constructive interference. If we had included the conventional Maxwell (gauge field) term, then this analysis shows how the two massless modes of the chiral models $`(a=1)`$ are fused to yield the single massive mode of the vector Schwinger model . The above analysis has an exact analogue in the Hamiltonian formulation based on CTs. 
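For clarity, the identifications quoted below Eq. (12) follow directly from Eq. (7): the difference of the two chiral frequencies equals $`B/m`$, while their product equals $`K/m`$. A minimal symbolic check (our addition, using sympy) is:

```python
# Check (our addition) of the identifications below Eq. (12): with
# w_pm = Omega +/- B/(2m) and Omega = sqrt(B^2/(4m^2) + K/m) from Eq. (7),
# the difference of the chiral frequencies is B/m and their product is K/m.
import sympy as sp

B, K, m = sp.symbols('B K m', positive=True)
Omega = sp.sqrt(B**2 / (4 * m**2) + K / m)
w_plus = Omega + B / (2 * m)
w_minus = Omega - B / (2 * m)

assert sp.simplify(w_plus - w_minus - B / m) == 0   # difference -> magnetic-field term in (12)
assert sp.simplify(w_plus * w_minus - K / m) == 0   # product    -> potential term in (12)
print("w_+ - w_- = B/m and w_+ w_- = K/m verified")
```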
The gauge invariant $`(a=1)`$ Lagrangian for the vector theory (16) is reexpressed as, $$W=\frac{1}{4\pi }d^2x;=\frac{1}{2}(\dot{\theta }^2\theta ^2)+\sqrt{2}e[A_+(\dot{\theta }\theta ^{})A_{}(\dot{\theta }+\theta ^{})].$$ (17) The corresponding Hamiltonian is $$=\frac{1}{2}[(\pi eA_++eA_{})^2+\theta ^2]+e(A_++A_{})\theta ^{},$$ (18) where we have scaled $`\sqrt{2}ee`$ and $`\pi =\frac{}{\dot{\theta }}`$. The Hamiltonian in (18), under the following CT, $$\theta ^{}=\frac{1}{2}(\theta _1^{}+\pi _1+\theta _2^{}+\pi _2),$$ $$\pi =\frac{1}{2}(\theta _1^{}\pi _1+\theta _2^{}+\pi _2),$$ (19) where $`(\theta _1,\pi _1)`$ and $`(\theta _2,\pi _2)`$ are the new canonical pairs, gets decoupled and is given by $$=[(\frac{1}{2}\theta _1^{}\frac{1}{2}\pi _1)^22eA_+(\frac{1}{2}\theta _1^{}\frac{1}{2}\pi _1)+\frac{e^2}{2}(A_+^2A_+A_{})]$$ $$+[(\frac{1}{2}\theta _2^{}+\frac{1}{2}\pi _2)^2+2eA_{}(\frac{1}{2}\theta _2^{}+\frac{1}{2}\pi _2)+\frac{e^2}{2}(A_{}^2A_+A_{})]=_R+_L,$$ (20) where the independent pieces are $$_R=[(\psi _R^{})^22eA_+\psi _R^{}+\frac{e^2}{2}(A_+^2A_+A_{})],$$ (21) $$_L=[(\psi _L^{})^2+2eA_{}\psi _L^{}+\frac{e^2}{2}(A_{}^2A_+A_{})].$$ (22) Here we have identified $$\psi _L^{}\frac{1}{2}\theta _2^{}+\frac{1}{2}\pi _2,\psi _R^{}\frac{1}{2}\theta _1^{}\frac{1}{2}\pi _1,$$ which corresponds to the basic algebra, $$\{\psi _L(x),\psi _L(y)\}=\frac{1}{2}\delta ^{}(xy),\{\psi _R(x),\psi _R(y)\}=\frac{1}{2}\delta ^{}(xy).$$ (23) This is nothing but the well known left and right chiral boson algebra . It is seen that the original Hamiltonian has decoupled into two distinct pieces which are identified with the left and right gauged chiral boson Hamiltonians , associated with a gauged lefton and righton, respectively. A word about the degree of freedom count may be useful. The CT has decomposed the original boson to two (left and right) chiral bosons. A single degree of freedom in configuration space is thus shown to consist of two times half a degree of freedom, also in configuration space. On the other hand, we now apply the above CTs to the original Lagrangians $`_+(\varphi )`$ and $`_{}(\rho )`$, defined from (13), that were being soldered, $$_{}(\rho )=\frac{1}{2}(\dot{\rho }^2\rho ^2)+eA_{}(\dot{\rho }+\rho ^{})+\frac{e^2}{2}A_+A_{},$$ $$_+(\varphi )=\frac{1}{2}(\dot{\varphi }^2\varphi ^2)+eA_+(\dot{\varphi }\varphi ^{})+\frac{e^2}{2}A_+A_{}.$$ (24) Again we have scaled $`\sqrt{2}ee`$. 
The corresponding Hamiltonians are $$_{}(\rho )=\frac{1}{2}(\pi ^\rho eA_{})^2+\frac{1}{2}\rho ^2eA_{}\rho ^{}\frac{e^2}{2}A_+A_{},$$ $$_+(\varphi )=\frac{1}{2}(\pi ^\varphi eA_+)^2+\frac{1}{2}\varphi ^2+eA_+\varphi ^{}\frac{e^2}{2}A_+A_{}.$$ (25) Applying CTs similar to (19), $$\varphi ^{}=\frac{1}{2}(\varphi _1^{}+\pi _1^\varphi +\varphi _2^{}+\pi _2^\varphi );\pi ^\varphi =\frac{1}{2}(\varphi _1^{}\pi _1^\varphi +\varphi _2^{}+\pi _2^\varphi ),$$ $$\rho ^{}=\frac{1}{2}(\rho _1^{}+\pi _1^\rho +\rho _2^{}+\pi _2^\rho );\pi ^\rho =\frac{1}{2}(\rho _1^{}\pi _1^\rho +\rho _2^{}+\pi _2^\rho ),$$ on each of the Hamiltonians $`_\pm `$ we find, $$_+=[(\eta _R^{})^22eA_+\eta _R^{}+\frac{e^2}{2}(A_+^2A_+A_{})]+(\eta _L^{})^2,$$ (26) $$_{}=[(\chi _L^{})^22eA_{}\chi _L^{}+\frac{e^2}{2}(A_{}^2A_+A_{})]+(\chi _R^{})^2.$$ (27) The fields are identified as $$\eta _R^{}\frac{1}{2}\varphi _1^{}\frac{1}{2}\pi _1^\varphi ,\eta _L^{}\frac{1}{2}\varphi _2^{}+\frac{1}{2}\pi _2^\varphi ;\chi _R^{}\frac{1}{2}\rho _1^{}\frac{1}{2}\pi _1^\rho ,\chi _L^{}\frac{1}{2}\rho _2^{}+\frac{1}{2}\pi _2^\rho .$$ (28) Note that $`\eta _L`$, $`\chi _L`$ satisfy the lefton algebra while $`\eta _R`$, $`\chi _R`$ satisfy the righton algebra given in (23). Both $`_+`$ and $`_{}`$ have split up into two components such that there is a free chiral boson and an interacting one of opposite chirality. Ignoring the free chiral component, the interacting ones exactly match the gauged chiral components in (21,22), with the identification $`\eta _R\psi _R,\chi _L\psi _L`$. This is the central result of our paper, showing how the soldering mechanism in the Lagrangian formalism and the CT in the Hamiltonian formalism are connected. The degree of freedom count exactly parallels the analysis given earlier. A more direct contact between the soldering formulation and CT is also possible in this context. The Lagrangians (24) that were originally soldered have, in their interaction, only one chiral component. Thus, as far as the interactions are concerned, the effective degree of freedom is only half, the other half being free. This is more clearly seen in the structure of the Hamiltonians in (26, 27). Thus when the Lagrangians (24) were soldered to yield (17), the single degree of freedom associated with the interaction was revealed, while the free part did not manifest itself. There is nothing wrong with this, since the contribution from the free Lagrangian can always be absorbed in the normalisation of the path integral. However, it is possible to directly start from gauged chiral boson Lagrangians, whose degree of freedom is exactly half. The extra half degree of freedom associated with the free part is non-existent. These Lagrangians are $$_R=\dot{\varphi }\varphi ^{}\varphi ^22e\varphi ^{}(A_0+A_1)\frac{e^2}{2}(A_0+A_1)^2+\frac{e^2}{2}A_\mu A^\mu ,$$ (29) $$_L=\dot{\rho }\rho ^{}\rho ^2+2e\rho ^{}(A_0A_1)\frac{e^2}{2}(A_0A_1)^2+\frac{e^2}{2}A_\mu A^\mu ,$$ (30) which precisely correspond to the Hamiltonians $`_R`$ and $`_L`$ given in (21,22). We now show that the soldering of $`_R`$ with $`_L`$ yields (17). 
Taking the variations (14) we find, $$\delta _R=2J_R\alpha ^{},\delta _L=2J_L\alpha ^{},$$ (31) where the currents are $$J_R=(\dot{\varphi }+\varphi ^{}+e(A_0+A_1)),J_L=(\dot{\psi }\psi ^{}+e(A_0A_1)).$$ (32) Introducing the soldering field $`B`$, which transformed as, $$\delta B=2\alpha ^{}$$ (33) it is easy to check that the Lagrangian $$=_R+_L+B(J_R+J_L)\frac{1}{2}B^2,$$ (34) is invariant under the combined transformations (14) and (33). Eliminating $`B`$ in favour of the other variables yields the soldered Lagrangian. This is exactly (17) with the basic field defined as $`\theta =\varphi \rho `$. As a final illustration, which would be a field theoretic extension of the model in (4), consider the self-dual models in 2+1-dimensions , $$_\pm (h)=\frac{1}{2}h_\mu h^\mu \pm \frac{1}{2m_\pm }ϵ_{\mu \nu \lambda }h^\mu ^\nu h^\lambda ;h=f,g.$$ (35) A straightforward analysis yields the following field algebras and Hamiltonians (with appropriate renaming of variables), $$\{f_1(x),f_2(y)\}=m_+\delta (xy),$$ $$_+=\frac{1}{2}f_i^2+\frac{1}{2m_+^2}(ϵ_{ij}_if_j)^2=\frac{1}{2}(\pi _f^2+m_+^2f^2)+\frac{1}{2m_+^2}(ϵ_{ij}_if_j)^2;$$ and, $$\{g_1(x),g_2(y)\}=m_{}\delta (xy),$$ $$_{}=\frac{1}{2}g_i^2+\frac{1}{2m_{}^2}(ϵ_{ij}_ig_j)^2=\frac{1}{2}(\pi _g^2+m_{}^2g^2)+\frac{1}{2m_{}^2}(ϵ_{ij}_ig_j)^2.$$ (36) Following the usual steps of soldering of the Lagrangians $`_+(f)`$ and $`_{}(g)`$ in (35) we get the soldered Lagrangian as , $$=\frac{1}{2}A_\mu A^\mu \frac{\theta }{2m^2}ϵ_{\mu \nu \lambda }^\mu A^\nu A^\lambda \frac{1}{4m^2}A_{\mu \nu }A^{\mu \nu },$$ (37) with $$A_{\mu \nu }=_\mu A_\nu _\nu A_\mu ;A_\mu f_\mu g_\mu ;m_+m_{}=\theta ;m_+m_{}=m^2.$$ Going over to the Hamiltonian formulation, the first step is to obtain the canonical Hamiltonian, $$=\mathrm{\Pi }^i\dot{A}_i=\frac{m^2}{2}\mathrm{\Pi }_i\mathrm{\Pi }_i+(\frac{1}{2}+\frac{\theta ^2}{8m^2})A_iA_i\frac{\theta }{2}ϵ_{ij}\mathrm{\Pi }_iA_j+\frac{1}{4m^2}A_{ij}A_{ij}.$$ (38) where $`\mathrm{\Pi }^i=\frac{}{\dot{A}_i}`$. Due to the presence of spatial derivatives, it is problematic to decouple the $`A_{ij}A_{ij}`$ term. This may be contrasted with the Maxwell theory where such a decoupling in terms of Harmonic Oscillators in the momentum space is possible only after a proper choice of gauge, (in particular the Coulomb gauge) . Since the present theory is not a gauge theory, the above mechanism fails. We thus work in the approximation where the term $`A_{ij}A_{ij}`$ can be neglected. In other words, we are looking at the long wave length limit and keep the smallest number of derivatives. Going over to a new set of independent canonical variables, $$\{A_{}(x),\mathrm{\Pi }_{}(y)\}=\delta (xy);\{A_+(x),\mathrm{\Pi }_+(y)\}=\delta (xy),$$ (39) by the following CT, $$A_\pm =\frac{1}{4}\sqrt{\frac{m_++m_{}}{m_{}}}(\mathrm{\Pi }_1\frac{m_++m_{}}{2m_+m_{}}A_2);$$ $$\mathrm{\Pi }_\pm =4\sqrt{\frac{m_{}}{m_++m_{}}}(\frac{m_+m_{}}{m_++m_{}}\mathrm{\Pi }_2\frac{1}{2}A_1),$$ (40) the Hamiltonian decouples into $$=\frac{1}{2}[(\mathrm{\Pi }_{}^2+m_{}^2A_{}^2)+(\mathrm{\Pi }_+^2+m_+^2A_+^2)].$$ (41) Each of the pieces is now mapped to the previously obtained Hamiltonians of the self and anti-self dual models in the long wavelength limit, using the following identifications, $$g_2\mathrm{\Pi }_{},g_1A_{};f_1\mathrm{\Pi }_+,f_2A_+.$$ The soldering formalism, as elaborated here, is applicable only for Lagrangians manifesting dual aspects of some symmetry. Exploiting this feature, it is possible to combine these Lagrangians to yield a new Lagrangian. 
Canonical Transformations, on the other hand, can be performed on any Hamiltonian. However, the ability of the Canonical Transformation to decouple the Hamiltonian into distinct and independent pieces is essentially tied to the dual aspects of the symmetry. The roles of the two mechanisms are therefore complementary, as has been amply illustrated here. Apart from this, the canonical transformations given here provide an alternative way of gauging chiral bosons without the need for any ad hoc insertion of constraints. Since the study of gauged chiral bosons has been revived, such an approach might be useful.
# Quark and Lepton Masses from a 𝑼⁢(𝟏)×𝒁_𝟐 Flavor Symmetry ## I Introduction Experimental data on the quark and lepton sector masses and mixings may provide a clue to the nature of new physics beyond the Standard Model (SM). Masses and mixings are experimentally accessible, but as far as the SM is concerned, these parameters can be adjusted at will without destroying the consistency of the theory. Therefore any relationships between them must come from theoretical ideas beyond those already contained in the SM, and the experimental data can guide us in narrowing down the choices and freedom in these ideas. The recent data on the mixing of neutrinos has rekindled interest in models of fermion masses and mixings since it supplements the existing data from the quark and charged lepton sectors. The neutrino observations have some intriguing features that one might hope to explain. First of all, the neutrinos are very light in comparison to the other fermions. This suggests that a heavy mass scale may be involved that is providing a small dimensionless number that is responsible for the small neutrino masses. Secondly, the atmospheric neutrino data indicates that there is a large mixing angle involved. This is in contrast to the small mixing (Cabibbo-Kobayashi-Maskawa of CKM) angles of the quark sector. Since in grand unified theories (GUTs) the quarks and leptons are unified in representations of the larger gauge theory, this dichotomy of small CKM angles with large mixing in the lepton sector provides hints as to how the fermion masses might arise. In fact the quark and charged lepton sectors show large hierarchies of masses. This seems to indicate that there might be a flavor symmetry whose spontaneous breaking might result in the generation of naturally small contributions resulting in the hierarchical pattern of masses. One hope is that such a flavor symmetry can be implemented to understand the masses and mixings detailed above as well as the long-standing evidence for solar neutrino oscillations. In this paper we show solutions to the quark and lepton masses and mixings based on a $`U(1)\times Z_2`$ Abelian flavor symmetry are possible. A particular solution (with nontrivial $`Z_2`$ charges) we detail is entirely consistent with all the phenomenological constraints, and one can implement the seesaw mechanism to explain the light neutrino masses. The paper is organized as follows. In Section II we briefly review the approach of supersymmetric Abelian flavor (or horizontal) symmetries, and present the phenomenological requirements that must be satisfied in both the lepton and quark sectors. In Section III we discuss how a discrete symmetry can be used to overcome the constraints implied by a $`U(1)`$ symmetry, and show how the discrete symmetry can suppress an entry to the extent that it has no impact on the leading order predictions for the masses and mixing angles. In Sections IV and V we review how the light neutrino mass matrix is independent of the horizontal charges of the singlet neutrinos for the case where the symmetry is $`U(1)`$. In Section V we also generalize the derivation of the light neutrino mass matrix for the case where the horizontal symmetry is $`U(1)\times Z_2`$. In Section VI we present a solution for the quark sector that satisfies all the phenomenological requirements. Finally in Section VII we present our conclusions. 
## II Flavor Symmetries The hierarchical structure of the fermion mass matrices strongly suggests that there is a spontaneously broken family symmetry responsible for the suppression of Yukawa couplings. In this paper we employ supersymmetric Abelian horizontal symmetries. These flavor symmetries allow the fermion mass and mixing hierarchies to be naturally generated from nonrenormalizable terms in the effective low-energy theory. The idea is quite simple and easily implemented. There is some field $`\mathrm{\Phi }`$ which is charged under a $`U(1)`$ family symmetry, and without loss of generality, we can assume that its charge is -1. There are terms contributing to effective Yukawa couplings for the quarks, $`Q_i\overline{d}_jH_d\left({\displaystyle \frac{<\mathrm{\Phi }>}{\mathrm{\Lambda }_L}}\right)^{m_{ij}}+Q_i\overline{u}_jH_u\left({\displaystyle \frac{<\mathrm{\Phi }>}{\mathrm{\Lambda }_L}}\right)^{n_{ij}},`$ (1) and the integer exponents $`m_{ij}`$ and $`n_{ij}`$ are easily calculated in terms of the horizontal symmetry charges of the quark and Higgs fields. For example, if we choose to have the Higgs fields to be uncharged, then the exponent $`m_{ij}`$ is just the sum of the horizontal charge of the fields $`Q_i`$ and $`\overline{d}_j`$. The hierarchy is generated from terms in the superpotential that carry integer charges $`m_{ij},n_{ij}0`$. If we call the small breaking parameter $`\lambda `$, then the generated terms for say the down quark Yukawa matrix will be of order $`\lambda ^{m_{ij}}`$. The holomorphy of the superpotential forbids terms from arising with $`m_{ij},n_{ij}<0`$. A nice analysis of the possible approaches to explaining the neutrino masses and mixings using $`U(1)`$ symmetries only is given in Ref. . In the remainder of this section we outline the experimental constraints that a solution employing the above idea must satisfy. ### A Phenomenological requirements for leptons The first phenomenological constraints we consider involve the charged leptons whose masses are the most precisely measured parameters of the quark-lepton sector. For the experimental values for the masses, we require that $`{\displaystyle \frac{m_\mu }{m_\tau }}\lambda ^2,{\displaystyle \frac{m_e}{m_\mu }}\lambda ^3,`$ (2) where the small parameter is identified as the Cabibbo angle, i.e. $`\lambda 0.22`$. The remaining constraints on leptons involve the neutrino masses and mixings. The most interesting aspect of the neutrino data is that the atmospheric neutrino mixing appears to be large, perhaps even maximal. It is then hard to understand a hierarchical pattern for the neutrino masses, since large mixing should result when the neutrino masses are of roughly the same order of magnitude. The Super-Kamiokande data suggest that $`\mathrm{\Delta }m_{23}^22.2\times 10^3\mathrm{eV}^2,\mathrm{sin}^22\theta _{23}^\nu 1,`$ (3) where the subscripts indicate the generations of neutrinos involved in the mixing (here we assume the mixing is between $`\nu _\mu `$ and $`\nu _\tau `$). The solar neutrino flux can be explained by one of three distinct solutions. Two of these involve matter-enhanced oscillation (MSW), while the third involves vacuum oscillations (VO). The two MSW solutions are differentiated by the size of the mixing angle, so one is usually called the small mixing angle (SMA) solution, and the other is called the large mixing angle (LMA) solution. The values required for the mixing parameters in each of these three cases are shown in the table below. 
$`\begin{array}{ccc}& \mathrm{\Delta }m_{1x}^2[eV^2]& \mathrm{sin}^22\theta _{1x}\\ \mathrm{MSW}(\mathrm{SMA})& 5\times 10^{-6}& 6\times 10^{-3}\\ \mathrm{MSW}(\mathrm{LMA})& 2\times 10^{-5}& 0.8\\ \mathrm{VO}& 8\times 10^{-11}& 0.8\end{array}`$ (4) For example, consider the small mixing angle (SMA) solution of the solar neutrino problem. Combining the solar neutrino data with the atmospheric neutrino data, one then requires the following $`{\displaystyle \frac{\mathrm{\Delta }m_{1x}^2}{\mathrm{\Delta }m_{23}^2}}\sim \lambda ^4,\mathrm{sin}\theta _{12}^\nu \sim \lambda ^2,\mathrm{sin}\theta _{23}^\nu \sim \lambda ^0,`$ (5) when the small parameter is taken to be the Cabibbo angle. As explained by Grossman, Nir, Shadmi and Tanimoto, one can accommodate the phenomenological constraints on the neutrino masses and mixings as well as the charged lepton masses by postulating that there is a $`U(1)\times Z_2`$ horizontal symmetry. We show how such a solution can be obtained in the seesaw mechanism in Section V. ### B Phenomenological requirements for quarks Again taking the expansion parameter to be the Cabibbo angle, $`\lambda =|V_{us}|`$, the experimental constraints $`|V_{us}|=0.2196\pm 0.0023,|V_{cb}|=0.0395\pm 0.0017,\left|{\displaystyle \frac{V_{ub}}{V_{cb}}}\right|=0.08\pm 0.02,`$ (6) on the CKM matrix can be identified in terms of powers of $`\lambda `$ as follows: $`|V_{us}|\sim \lambda ,|V_{cb}|\sim \lambda ^2,|V_{ub}|\sim \lambda ^3`$ to $`\lambda ^4,\left|{\displaystyle \frac{V_{ub}}{V_{cb}}}\right|\sim \lambda `$ to $`\lambda ^2.`$ (7) The constraint on $`|V_{ub}/V_{cb}|`$ can be expressed in a stronger way at the 90% confidence level as lying between $`0.25\lambda `$ and $`0.5\lambda `$. One also has a constraint on the CKM elements from $`B_d^0\overline{B}_d^0`$ mixing, $`|V_{tb}^{}V_{td}|=0.0084\pm 0.0018,`$ (8) which implies that $`|V_{td}|\sim \lambda ^3.`$ (9) It has been argued that $`|V_{ub}|`$ is more accurately given as $`\lambda ^4`$ and that taking it to be $`\lambda ^3`$ (as we will do) requires an unnatural cancellation. However, in our opinion, requiring $`|V_{ub}|\sim \lambda ^4`$ is too restrictive for two reasons. Firstly, the fine-tuning required is much less if one uses an expansion parameter $`\lambda `$ somewhat less than 0.22, say 0.18. Secondly, since there are four parameters in the CKM matrix we are trying to predict, it is not unnatural that one of these would appear mildly fine-tuned, given $`\lambda \sim 1/5`$. One can appreciate the nature of the cancellation in terms of the unitarity of the CKM matrix. This constraint requires $`V_{ud}V_{ub}^{}+V_{cd}V_{cb}^{}+V_{td}V_{tb}^{}=0,`$ (10) so to leading order in $`\lambda `$ one has the relation (we use the notation $`a\simeq b`$ to indicate that $`a`$ and $`b`$ are equal to leading order in the small parameter $`\lambda `$, while $`a\sim b`$ indicates that $`a`$ and $`b`$ are of the same order in $`\lambda `$) $`V_{ub}^{}+V_{td}+V_{cd}V_{cb}^{}\simeq 0.`$ (11) Since $`|V_{cd}|\sim \lambda `$, $`|V_{cb}|\sim \lambda ^2`$ and $`|V_{td}|\sim \lambda ^3`$, unitarity implies that to leading order one might expect $`|V_{ub}|\sim \lambda ^3`$, whereas the experimental data yields a value somewhat suppressed. (Equation (11) represents the familiar unitarity triangle, and the cancellation can be reinterpreted in terms of the size of the CP asymmetry angle $`\beta `$. Moreover, the unitarity triangle makes it clear how to interpret the three mixing angles and one CP-violating phase of the CKM matrix in terms of the four CKM elements in Eq. (11). The amount of CP violation is proportional to the size of the triangle.) 
One can show that with $`U(1)`$ or $`Z_2`$ as components of the horizontal symmetry, one can suppress $`|V_{ub}|`$ (or $`|V_{td}|`$) relative to $`\lambda ^3`$ only by even powers of $`\lambda `$, so $`|V_{ub}|\lambda ^4`$ is not possibleIn fact, the first (unsuccessful) solution in Section VI predicts $`|V_{td}|\lambda ^5`$ and the relation $`|V_{us}||V_{ub}/V_{cb}|`$.. The additional suppression one might desire to attribute to an additional power of $`\lambda `$ must in fact be resulting from a mild cancellation. There is a universal scaling factor associated with the renormalization group evolution of the CKM angles $`|V_{ub}|`$ and $`|V_{cb}|`$ from the high (grand unified) scale to the electroweak scale, but this scaling is not enough to change the expectations for the relevant exponents of $`\lambda `$. The mass ratios should satisfy $`{\displaystyle \frac{m_c}{m_t}}\lambda ^4,{\displaystyle \frac{m_u}{m_c}}\lambda ^4,{\displaystyle \frac{m_s}{m_b}}\lambda ^2,{\displaystyle \frac{m_d}{m_s}}\lambda ^2.`$ (12) To compare the predictions of flavor symmetries to these phenomenological constraints, one has to relate the CKM elements to the entries in the Yukawa matrices. The Yukawa matrices $`𝐔`$ and $`𝐃`$ can be diagonalized by biunitary transformations $`𝐔^{\mathrm{𝐝𝐢𝐚𝐠}}`$ $`=`$ $`V_u^L𝐔V_u^R,`$ (13) $`𝐃^{\mathrm{𝐝𝐢𝐚𝐠}}`$ $`=`$ $`V_d^L𝐃V_d^R.`$ (14) The CKM matrix is then given by $$VV_u^LV_d^L.$$ (15) The left-handed transformation matrices $`V_u^L`$ and $`V_d^L`$ can be defined in terms of three successive rotations in the (2,3), (1,3) and (1,2) sectors. These rotation angles of the transformation matrices can be expressed in terms of the elements of the Yukawa matrices as follows $`s_{12}^u`$ $`=`$ $`{\displaystyle \frac{u_{12}}{\stackrel{~}{u}_{22}}}+{\displaystyle \frac{u_{11}u_{21}^{}}{|\stackrel{~}{u}_{22}^{}|^2}}{\displaystyle \frac{u_{13}(u_{32}+u_{23}^{}u_{22})}{\stackrel{~}{u}_{22}}}{\displaystyle \frac{u_{11}u_{31}^{}(u_{23}^{}+u_{32}u_{22}^{})}{|\stackrel{~}{u}_{22}^{}|^2}},`$ (16) $`s_{13}^u`$ $`=`$ $`u_{13}+u_{11}u_{31}^{}+u_{12}(u_{32}^{}+u_{22}^{}u_{23})+u_{11}u_{21}^{}(u_{23}+u_{22}u_{32}^{}),`$ (17) $`s_{23}^u`$ $`=`$ $`u_{23}+u_{22}u_{32}^{},`$ (18) where $`u_{ij}`$ is the $`i,j`$th component of the up quark Yukawa matrix, $`𝐔/(𝐔)_{33}`$, and $`\stackrel{~}{u}_{22}=u_{22}u_{33}u_{23}u_{32}`$. There are corresponding expressions for the $`s_{ij}^d`$ in terms of the components of the down quark Yukawa matrix, $`𝐃`$ (which are slightly more complicated due to the fact that the (2,3) sector mixing in $`V_d^R`$ might be of order one). Clearly contributions to the CKM matrix elements can come from a number of terms. We shall be interested in what follows in determining the leading order contribution(s) to the CKM angles and the fermion masses. ## III Texture Zeros The procedure of adopting a $`U(1)`$ horizontal symmetry introduces nontrivial relationships between the parameters in the quark and lepton mass matrices. This results because previously undetermined entries in the matrices are described in terms of a few parameters. For example, the quark (up and down) mass matrices are described by nine parameters, namely the $`U(1)`$ charges of the fields $`Q_L`$, $`\overline{u}_R`$, and $`\overline{d}_R`$. Relationships between the masses and mixing angles are then obtained. The motivation for including texture zeros in mass matrices was to derive more relationships between the masses and mixings. 
The earliest of these was the relationship between the Cabibbo angle and the first and second generation down quark masses, $`V_{us}\sqrt{m_d/m_s}`$. The texture zeros responsible for this relation can be obtained in models where there is an additional discrete symmetry that forbids their occurrence, for example. Furthermore, these relationships might also include Clebsch-Gordon factors that allow one to obtain phenomenologically acceptable relationships: one of the earliest and most successful of these was the Georgi-Jarlskog model, which was shown later to be successful in the case of electroweak scale supersymmetry (with the experimental data available at that time). The guiding principle for the case where the Yukawa matrices are symmetric is this: the mass hierarchy is of order $`\lambda ^4`$ in the up quark matrices, and is of order $`\lambda ^2`$ in the down quark matrices (c.f. Eq. (12)). So the dominant contribution to the Cabibbo matrix should come from the down quark matrices (If the down quark matrix is symmetric and the 1-1 component is suppressed, then one has the relation $`|V_{us}|\sqrt{m_d/m_s}`$), while the dominant contribution to $`|V_{cb}|\lambda ^2`$ should come from the diagonalization of the up quark matrices. Furthermore the relation $`|V_{ub}/V_{cb}|\sqrt{m_u/m_c}`$ follow from suitable texture zero patterns). When the theory is supersymmetric, one can obtain zero entries in the mass matrices that are sometimes called holomorphic or supersymmetric zeros. They arise because the superpotential must be holomorphic, so entries that would get a contribution from $`\mathrm{\Phi }^{}`$ are absent. In terms of the mass matrices, this simply means that there are no entries with the small parameter $`\lambda `$ raised to a negative power. Rather entries, for which the quantum numbers would seem to require a negative exponent, are simply zero. In this paper we want to introduce another concept that we will call a texture zero by flavor suppression. The horizontal symmetry we are considering here does not by itself give us any information on the order one coefficients of the entries in the mass matrices. When certain entries are suppressed because of the discrete symmetry, it can result that the entry is sufficiently suppressed that it does not affect the leading order of the phenomenology. Equivalently this entry could be replaced with an exact zero, and the leading order expectation for the masses and mixing angles would remain the same. Consider a simple $`2\times 2`$ example of quark mass matrices where there is a horizontal $`U(1)`$ symmetry, and where the phenomenological constraints (listed below) are motivated by the (2,3) sector of the quark mass matrices. We require that the mixing angle be $`\lambda ^2`$ and the quark mass ratios satisfy $`m_c/m_t\lambda ^4`$ and $`m_s/m_b\lambda ^2`$. This can be obtained by assuming the charges $`Q_L:2,0`$, $`\overline{u}_R:2,0`$, $`\overline{d}_R:0,0`$: $`𝐔\left(\begin{array}{cc}\lambda ^4& \lambda ^2\\ \lambda ^2& \lambda ^0\end{array}\right),𝐃\left(\begin{array}{cc}\lambda ^2& \lambda ^2\\ \lambda ^0& \lambda ^0\end{array}\right).`$ (19) This is a unique solution of $`U(1)`$ charges, and thereby determines already relationships between the mixings and masses of the first generation. The procedure for determining the exponents of $`\lambda `$ in a model with a $`U(1)`$ solution, clearly leads to the relations between exponents, $`n_{ii}+n_{jj}=n_{ij}+n_{ji},`$ (20) for all $`i,j=1,2,3`$. 
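The counting above can also be explored numerically. The following sketch (our addition) builds up- and down-type Yukawa matrices as random order one coefficients times powers of $`\lambda `$, diagonalizes them as in Eqs. (13)-(15), and prints the resulting mass ratios and CKM elements; the exponent matrices used here are an assumed illustrative set of the hierarchical type discussed in the text, not a derived prediction.

```python
# Numerical sketch (our addition): Yukawa matrices as random O(1) coefficients
# times powers of lambda, diagonalized as in Eqs. (13)-(15).  The exponents
# below are an assumed illustrative hierarchy, not a prediction of the model.
import numpy as np

rng = np.random.default_rng(0)
lam = 0.22

def yukawa(powers):
    coeff = rng.uniform(0.5, 1.5, size=(3, 3)) * rng.choice([-1.0, 1.0], size=(3, 3))
    return coeff * lam ** np.array(powers, dtype=float)

U = yukawa([[8, 5, 3], [7, 4, 2], [5, 2, 0]])   # assumed up-type exponents
D = yukawa([[4, 3, 3], [3, 2, 2], [1, 0, 0]])   # assumed down-type exponents

def left_rotation(Y):
    # Y = V_L diag(y) V_R^dagger; order the singular values lightest first
    u, s, _ = np.linalg.svd(Y)
    order = np.argsort(s)
    return u[:, order], s[order]

VuL, yu = left_rotation(U)
VdL, yd = left_rotation(D)
V = VuL.conj().T @ VdL                           # CKM matrix, Eq. (15)

print("m_u/m_c, m_c/m_t:", yu[0] / yu[1], yu[1] / yu[2])
print("m_d/m_s, m_s/m_b:", yd[0] / yd[1], yd[1] / yd[2])
print("|V_us|, |V_cb|, |V_ub|, |V_td|:",
      *(abs(x) for x in (V[0, 1], V[1, 2], V[0, 2], V[2, 0])))
```

Up to the random order one factors, the printed ratios come out at the orders quoted in Eqs. (7) and (12).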
Turning now to the case of a $`U(1)\times Z_2`$ symmetry, suppose the charges are $`Q_L:(2,1),(0,1)`$, $`\overline{u}_R:(1,0),(0,1)`$, $`\overline{d}_R:(0,1),(0,1)`$, and assume a common symmetry breaking parameter $`\lambda `$. We still achieve the hierarchies for $`𝐔`$ and $`𝐃`$ shown in Eq. (19). So there are additional charge assignments that can be made that satisfy the phenomenological constraints. However there is something more that adding a discrete symmetry can do. Notice that in Eq. (19), the mixing $`V_{cb}`$ arises from contributions from diagonalizing $`𝐔`$ and from diagonalizing $`𝐃`$, since both of these contributions are of order $`\lambda ^2`$. We can however find stronger relationships if we can suppress one of these contributions. For example, if the mixing contribution from the (2,3) block of the up quark matrix is suppressed and the $`(𝐃)_{22}`$ entry is suppressed, then one has that the mixing angle is the same power of $`\lambda `$ as the square root of the mass ratio $`|V_{cb}|\sqrt{m_c/m_t}`$. (If the up quark matrix is known to be exactly symmetric, then one even knows that the order one coefficient of the leading contributions in $`\lambda `$ is the same, $`|V_{cb}|\sqrt{m_c/m_t}`$.) Just this kind of suppression of the element $`(𝐔)_{23}`$ can be obtained by employing a discrete symmetry. So if the exponent of $`\lambda `$ in $`(𝐔)_{23}`$ is sufficiently large that it plays no role in determining the phenomenological predictions (to leading order), then we say it is a texture zero. Returning to our example, take the $`U(1)\times Z_2`$ charges to be $`Q_L:(3,1),(0,0)`$, $`\overline{u}_R:(1,1),(0,0)`$, $`\overline{d}_R:(1,0),(0,1)`$. Then one obtains the Yukawa matrices $`𝐔\left(\begin{array}{cc}\lambda ^4& \lambda ^4\\ \lambda ^2& \lambda ^0\end{array}\right),𝐃\left(\begin{array}{cc}\lambda ^5& \lambda ^3\\ \lambda ^1& \lambda ^1\end{array}\right),`$ (21) for which the phenomenological predictions in terms of powers of $`\lambda `$ are the same, but the mixing comes at leading order from diagonalizing $`𝐃`$. We see that the relations, Eq. (20), need not necessarily hold if one adds a $`Z_2`$ symmetry. Since it is the off-diagonal entries that are suppressed in $`𝐔`$, we can define the texture pattern in the following schematic way, $`𝐔\left(\begin{array}{cc}X& 0\\ 0& X\end{array}\right),𝐃\left(\begin{array}{cc}0& X\\ X& X\end{array}\right),`$ (22) where $`X`$ denotes a phenomenologically relevant entry, and the $`0`$ entries are suppressed sufficiently that the exponent that appears there is irrelevant. Even though the entries in the first column of the up and down quark matrices are not suppressed by the discrete symmetry, we denote these as zeros because they do not contribute at leading order to either the phenomenologically relevant (left-handed) mixing angles or the quark masses. We leave the categorization of what patterns of texture zeros one can employ in $`3\times 3`$ matrices to a future work. ## IV Neutrino Masses Assume that the lepton fields have charges under a $`U(1)`$ family symmetry $`\begin{array}{ccccccccc}\overline{e}_{R1}& \overline{e}_{R2}& \overline{e}_{R3}& \mathrm{}_{L1}& \mathrm{}_{L2}& \mathrm{}_{L3}& \overline{\nu }_{R1}^c& \overline{\nu }_{R2}^c& \overline{\nu }_{R3}^c\\ E_1& E_2& E_3& L_1& L_2& L_3& 𝒩_1& 𝒩_2& 𝒩_3\end{array}.`$ (25) We assume here that the quantum numbers satisfy the hierarchies $`E_1E_2E_30`$, $`L_1L_2L_30`$, and $`𝒩_1𝒩_2𝒩_30`$. 
In the lepton sector, the effective Yukawa couplings come from nonrenormalizable terms, giving $`\mathrm{}_{Li}\overline{e}_{Rj}H_d\lambda ^{p_{ij}}+{\displaystyle \frac{1}{M_R}}\mathrm{}_{Li}\mathrm{}_{Lj}H_uH_u\lambda ^{q_{ij}},`$ (26) where $`M_R`$ is the relevant high mass scale at which the light neutrino masses are generated by the effective (nonrenormalizable) operator in the second term. There are two cases we can consider: (1) the mechanism that gives rise to the light neutrino masses generates all possible contributions to the nonrenormalizable terms $`\mathrm{}_{Li}\mathrm{}_{Lj}H_uH_u`$. In this case the exponent $`q_{ij}`$ that makes the second term an invariant under the horizontal symmetry (before symmetry breaking) is just $`q_{ij}=L_i+L_j`$. So the light neutrino mass matrix is simply $`m_\nu \left(\begin{array}{ccc}\lambda ^{2L_1}& \lambda ^{L_1+L_2}& \lambda ^{L_1+L_3}\\ \lambda ^{L_1+L_2}& \lambda ^{2L_2}& \lambda ^{L_2+L_3}\\ \lambda ^{L_1+L_3}& \lambda ^{L_2+L_3}& \lambda ^{2L_3}\end{array}\right){\displaystyle \frac{v_2^2}{\mathrm{\Lambda }_L}},`$ (27) where $`v_2`$ is the electroweak scale vev of $`H_u`$ (and $`v_1`$ is the vev of $`H_d`$). (2) The horizontal symmetry can play a role in the generation of the second term in which case it is not necessarily the case that the exponent $`q_{ij}`$ is given by a naive inspection of the charges $`L_i`$ and $`L_j`$. The seesaw mechanism for the generation of the light neutrino masses is such an example, and we explore it further in the next section. In particular we show that with the appropriate assignment of heavy neutrino charges, $`𝒩_i`$, one can enhance the $`(m_\nu )_{33}`$ entry of the light neutrino mass matrix. ## V Neutrino Seesaw The neutrino seesaw mechanism is a popular way to explain the lightness of the observed neutrino masses. It follows naturally from the group theory structure of the Standard Model (SM). We have left-handed neutrino doublets in the SM we can supplement by adding a right-handed neutrino singlet. In fact we have three generations, so the resulting masses will involve mass matrices. The neutrino doublets can pair up with the singlets to form a Dirac mass matrix, $`m_D`$, coming from the breakdown of the electroweak symmetry. The neutrino singlets can pair up with themselves to form a $`3\times 3`$ Majorana matrix, $`M_N`$. This mass matrix is expected to be superheavy; it is not generated by the electroweak symmetry breaking. The group theory dictates that the neutrino doublets cannot pair up with each other. So the result is a $`6\times 6`$ mass matrix of the form $`\left(\begin{array}{cc}0& m_D^T\\ m_D& M_N\end{array}\right),`$ (28) and upon diagonalization of the $`3\times 3`$ light neutrino mass matrix is given by the seesaw formula $`m_\nu =m_D(M_N)^1m_D^T.`$ (29) In the rest of this section we will explore the implications of assuming the mass matrix entries are governed by an Abelian horizontal symmetry. A simple result emerges for the case where the symmetry is $`U(1)`$, and some interesting enhancements (or suppressions) can occur when the symmetry is extended to $`U(1)\times Z_2`$ which have some phenomenological usefulness. 
### A $`𝑼\mathbf{(}\mathrm{𝟏}\mathbf{)}`$ Given lepton doublet charges $`L_i`$ and right-handed neutrino charges $`𝒩_i`$ one has the following pattern for the neutrino Dirac mass matrix $`m_D\left(\begin{array}{ccc}\lambda ^{L_1+𝒩_1}& \lambda ^{L_1+𝒩_2}& \lambda ^{L_1+𝒩_3}\\ \lambda ^{L_2+𝒩_1}& \lambda ^{L_2+𝒩_2}& \lambda ^{L_2+𝒩_3}\\ \lambda ^{L_3+𝒩_1}& \lambda ^{L_3+𝒩_2}& \lambda ^{L_3+𝒩_3}\end{array}\right)v_2,`$ (30) and the following pattern for the Majorana mass matrix $`M_N\left(\begin{array}{ccc}\lambda ^{2𝒩_1}& \lambda ^{𝒩_1+𝒩_2}& \lambda ^{𝒩_1+𝒩_3}\\ \lambda ^{𝒩_1+𝒩_2}& \lambda ^{2𝒩_2}& \lambda ^{𝒩_2+𝒩_3}\\ \lambda ^{𝒩_1+𝒩_3}& \lambda ^{𝒩_2+𝒩_3}& \lambda ^{2𝒩_3}\end{array}\right)\mathrm{\Lambda }_L.`$ (31) Then one obtains the same form for the light neutrino mass matrix via the see-saw formula $`m_\nu =m_D(M_N)^1m_D^T`$ that was presented in the previous section in Eq. (27). It is easy to see that if one wants to have a hierarchy in light neutrino masses $`m_{\nu _\mu }/m_{\nu _\tau }\lambda ^2`$, and large mixing in the (2,3) generation in the lepton sector, one cannot rely on a $`U(1)`$ symmetry alone. From Eq. (27) one sees that $`L_2=L_3+1`$ is required to give the proper mass ratio. This then immediately prevents the large mixing from arising in the neutrino sector, because the mixing is necessarily of order $`\lambda `$. However we must still examine the diagonalization of the charged lepton matrix. Let the $`U(1)`$ charges of the charged lepton singlets be $`E_i`$, then the charged lepton matrix is $`m_\mathrm{}^\pm \left(\begin{array}{ccc}\lambda ^{L_1+E_1}& \lambda ^{L_1+E_2}& \lambda ^{L_1+E_3}\\ \lambda ^{L_2+E_1}& \lambda ^{L_2+E_2}& \lambda ^{L_2+E_3}\\ \lambda ^{L_3+E_1}& \lambda ^{L_3+E_2}& \lambda ^{L_3+E_3}\end{array}\right)v_1.`$ (32) The relevant mixing comes from the right hand side of this matrix, $`\lambda ^{L_2+E_3}/\lambda ^{L_3+E_3}`$. So the mixing parameter here is also order $`\lambda `$, since $`L_2=L_3+1`$. Therefore the atmospheric neutrino mixing is necessarily of the order $`\mathrm{sin}\theta _{23}^\nu \lambda `$ in contradiction to the order one mixing observed (c.f. Eq. (3)). ### B $`𝑼\mathbf{(}\mathrm{𝟏}\mathbf{)}\mathbf{\times }𝒁_\mathrm{𝟐}`$ We consider next the changes that can occur when a discrete symmetry is utilized. This avenue has been exploited to understand the large mixing in the atmospheric neutrino oscillations together with a hierarchy in the neutrino masses, $`m_{\nu _\mu }/m_{\nu _\tau }<<1`$. It can also lead to an enhancement in the production of a matter/antimatter symmetry in the early universe, if the heavy neutrinos are assumed to be decaying asymmetrically. In the rest of this section we explain in detail how to implement the discrete symmetry with the seesaw mechanism so that the $`m_{\nu _\mu }/m_{\nu _\tau }\lambda ^2`$, and large mixing results. Take the following $`U(1)\times Z_2`$ charges for the lepton fields $`\begin{array}{ccccccccc}\overline{e}_{R1}& \overline{e}_{R2}& \overline{e}_{R3}& \mathrm{}_{L1}& \mathrm{}_{L2}& \mathrm{}_{L3}& \overline{\nu }_{R1}& \overline{\nu }_{R2}& \overline{\nu }_{R3}\\ (E_1,0)& (E_2,0)& (E_3,0)& (L_1,0)& (L_2,0)& (L_31,1)& (𝒩_1,0)& (𝒩_2,0)& (𝒩_31,1)\end{array}.`$ (35) Assume the symmetry breaking is characterized by the single expansion parameter $`\lambda `$. The formulas given above for the heavy neutrino mass matrix, $`M_N`$, the neutrino Dirac mass matrix, $`m_D`$, and the resulting light neutrino mass matrix, $`m_\nu `$ are modified. 
With the above assignments one finds that $`M_N\left(\begin{array}{ccc}\lambda ^{2𝒩_1}& \lambda ^{𝒩_1+𝒩_2}& \lambda ^{𝒩_1+𝒩_3}\\ \lambda ^{𝒩_1+𝒩_2}& \lambda ^{2𝒩_2}& \lambda ^{𝒩_2+𝒩_3}\\ \lambda ^{𝒩_1+𝒩_3}& \lambda ^{𝒩_2+𝒩_3}& \lambda ^{2𝒩_32}\end{array}\right)\mathrm{\Lambda }_L,`$ (36) so that $`(M_N)^1\left(\begin{array}{ccc}\lambda ^{2𝒩_1}& \lambda ^{𝒩_1𝒩_2}& \lambda ^{𝒩_1𝒩_3+2}\\ \lambda ^{𝒩_1𝒩_2}& \lambda ^{2𝒩_2}& \lambda ^{𝒩_2𝒩_3+2}\\ \lambda ^{𝒩_1𝒩_3+2}& \lambda ^{𝒩_2𝒩_3+2}& \lambda ^{2𝒩_3+2}\end{array}\right)\mathrm{\Lambda }_L^1.`$ (37) So the effect of the discrete symmetry in our case is to enhance the 3-3 entry of the $`M_N`$ matrix, and thereby alter the results for the third row and the third column on the inverse matrix, $`(M_N)^1`$. With our charge assignments, one also has an enhanced entry in the 3-3 component of the neutrino Dirac mass matrix, $`m_D\left(\begin{array}{ccc}\lambda ^{L_1+𝒩_1}& \lambda ^{L_1+𝒩_2}& \lambda ^{L_1+𝒩_3}\\ \lambda ^{L_2+𝒩_1}& \lambda ^{L_2+𝒩_2}& \lambda ^{L_2+𝒩_3}\\ \lambda ^{L_3+𝒩_1}& \lambda ^{L_3+𝒩_2}& \lambda ^{L_3+𝒩_32}\end{array}\right)v_2,`$ (38) It is easy to see then that the light neutrino mass matrix $`m_\nu `$ is modified so that only the 3-3 entry is enhanced as desired, $`m_\nu \left(\begin{array}{ccc}\lambda ^{2L_1}& \lambda ^{L_1+L_2}& \lambda ^{L_1+L_3}\\ \lambda ^{L_1+L_2}& \lambda ^{2L_2}& \lambda ^{L_2+L_3}\\ \lambda ^{L_1+L_3}& \lambda ^{L_2+L_3}& \lambda ^{2L_32}\end{array}\right){\displaystyle \frac{v_2^2}{\mathrm{\Lambda }_L}}.`$ (39) A phenomenologically viable solution has been presented in Ref. : taking $`L_1=3`$, $`L_2=L_3=1`$, $`E_1=5`$, $`E_2=4`$, and $`E_3=2`$ yields mass matrices of the form $`m_\nu \left(\begin{array}{ccc}\lambda ^6& \lambda ^4& \lambda ^4\\ \lambda ^4& \lambda ^2& \lambda ^2\\ \lambda ^4& \lambda ^2& 1\end{array}\right){\displaystyle \frac{v_2^2}{\mathrm{\Lambda }_L}},m_\mathrm{}^\pm \left(\begin{array}{ccc}\lambda ^8& \lambda ^7& \lambda ^5\\ \lambda ^6& \lambda ^5& \lambda ^3\\ \lambda ^6& \lambda ^5& \lambda ^3\end{array}\right)v_1,`$ (40) which give the correct orders of magnitude for the small mixing angle MSW solution $`{\displaystyle \frac{\mathrm{\Delta }m_{12}^2}{\mathrm{\Delta }m_{23}^2}}\lambda ^4,\mathrm{sin}\theta _{12}^\nu \lambda ^2,\mathrm{sin}\theta _{23}^\nu \lambda ^0,`$ (41) and the correct orders of magnitude for the charged lepton mass ratios, Eq. (2). It also gives $`\mathrm{sin}\theta _{13}^\nu \lambda ^2`$. In fact it has been shown that one can obtain a VO solution as well by a proper quantum number assignment to the lepton fields. Without introducing more theoretical input (e.g. grand unified theories) to relate the quark and lepton charges, we cannot say more about which of the solutions is the correct one. We have shown here that the lepton sector phenomenology and the neutrino seesaw mechanism is consistent with a $`U(1)\times Z_2`$ symmetry, and in fact a hierarchy in the neutrino parameters $`m_{\nu _\mu }<<m_{\nu _\tau }`$ requires the extra $`Z_2`$. Furthermore we have shown in detail how to implement the neutrino mass enhancement in the seesaw mechanism. In the next section, we show that the quark sector phenomenological constraints also admit a solution consistent with Eqs. (7) and (12), and with a $`U(1)\times Z_2`$ symmetry. ## VI A $`𝑼\mathbf{(}\mathrm{𝟏}\mathbf{)}\mathbf{\times }𝒁_\mathrm{𝟐}`$ solution Our solution to the quark Yukawa matrices is based upon the Elwood-Irges-Ramond (EIR) solution obtained with a $`U(1)`$ flavor symmetry. 
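The independence of the light neutrino texture from the singlet charges can be checked numerically. In the sketch below (our addition), each entry carries a random order one coefficient times $`\lambda `$ raised to the sum of the $`U(1)`$ charges, with one extra power of $`\lambda `$ whenever the $`Z_2`$ parities of the bilinear do not match; this last rule is our reading of how the single expansion parameter implements the discrete symmetry, and it reproduces the matrices (36)-(38) quoted above for the charges with $`L_1=3`$, $`L_2=L_3=1`$.

```python
# Seesaw power counting (our addition): random O(1) coefficients times lambda to
# the power set by the U(1) x Z_2 charges (one extra power when the Z_2 parities
# of a bilinear do not match -- our reading of the single expansion parameter).
import numpy as np

rng = np.random.default_rng(1)
lam = 0.22

def texture(left, right):
    M = np.empty((3, 3))
    for i, (qi, pi) in enumerate(left):
        for j, (qj, pj) in enumerate(right):
            M[i, j] = rng.uniform(0.5, 1.5) * rng.choice([-1.0, 1.0]) \
                      * lam ** (qi + qj + (pi + pj) % 2)
    return M

ell = [(3, 0), (1, 0), (0, 1)]   # lepton doublet charges of Eq. (35) with L = (3, 1, 1)
for N in ([(2, 0), (1, 0), (0, 1)], [(5, 0), (3, 0), (1, 1)]):   # two arbitrary singlet choices
    mD, MN = texture(ell, N), texture(N, N)
    m_nu = mD @ np.linalg.inv(MN) @ mD.T          # seesaw formula, Eq. (29)
    print(np.round(np.log(np.abs(m_nu)) / np.log(lam)).astype(int))
# up to occasional +-1 shifts from the random O(1) factors, both singlet choices
# give the exponent pattern of Eq. (40), [[6,4,4],[4,2,2],[4,2,0]].
```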
Here we show that one can impose a texture pattern by choosing quark fields to be charged under the new $`Z_2`$ symmetry. EIR obtained the Yukawa matrices $`𝐔\left(\begin{array}{ccc}\lambda ^8& \lambda ^5& \lambda ^3\\ \lambda ^7& \lambda ^4& \lambda ^2\\ \lambda ^5& \lambda ^2& \lambda ^0\end{array}\right),𝐃\left(\begin{array}{ccc}\lambda ^4& \lambda ^3& \lambda ^3\\ \lambda ^3& \lambda ^2& \lambda ^2\\ \lambda ^1& \lambda ^0& \lambda ^0\end{array}\right).`$ (42) The EIR solution was obtained by the $`U(1)`$ charges (after adding appropriate overall constants to each field which don’t affect the hierarchy pattern) $`\begin{array}{cccc}i=& 1& 2& 3\\ Q_L:& 4& 3& 1\\ \overline{u}_R:& 4& 1& 1\\ \overline{d}_R:& 0& 1& 1\end{array}.`$ (43) The CKM elements can be expressed in terms of the Yukawa matrix elements, $`|V_{us}|`$ $`=`$ $`\left({\displaystyle \frac{d_{12}}{\stackrel{~}{d}_{22}}}{\displaystyle \frac{d_{13}d_{32}}{\stackrel{~}{d}_{22}}}\right)\left({\displaystyle \frac{u_{12}}{\stackrel{~}{u}_{22}}}{\displaystyle \frac{u_{13}u_{32}}{\stackrel{~}{u}_{22}}}\right),`$ (44) $`|V_{cb}|`$ $`=`$ $`d_{23}+d_{22}d_{32}^{}u_{23},`$ (45) $`|V_{ub}|`$ $`=`$ $`(d_{13}+d_{12}d_{32}^{}u_{13})\left({\displaystyle \frac{u_{12}}{\stackrel{~}{u}_{22}}}{\displaystyle \frac{u_{13}u_{32}}{\stackrel{~}{u}_{22}}}\right)(d_{23}+d_{22}d_{32}^{}u_{23}),`$ (46) $`|V_{td}|`$ $`=`$ $`(d_{13}+d_{12}d_{32}^{}u_{13})+\left({\displaystyle \frac{d_{12}}{\stackrel{~}{d}_{22}}}{\displaystyle \frac{d_{13}d_{32}}{\stackrel{~}{d}_{22}}}\right)(d_{23}+d_{22}d_{32}^{}u_{23}),`$ (47) where it is understood that there are possible phases associated with each term on the right hand sides of the equations. From Eq. (46), one sees that $`|V_{ub}|`$ is receiving contributions in the EIR solution of order $`\lambda ^3`$ from both $`u_{13}`$ and $`d_{13}`$, as well as from the final term $`\left({\displaystyle \frac{u_{12}}{\stackrel{~}{u}_{22}}}{\displaystyle \frac{u_{13}u_{32}}{\stackrel{~}{u}_{22}}}\right)V_{cb}.`$ (48) Then the experimental value for $`|V_{ub}|`$ must arise from a partial cancellation of these three contributions. Tanimoto showed that a solution involving a $`U(1)\times Z_2`$ symmetry is not possible if the Cabibbo mixing, $`|V_{us}|`$, arises from the diagonalization of the down quark Yukawa matrix. The Cabibbo mixing in our scheme comes from diagonalizing the up quark matrix $`𝐔`$. Our first attempt has the following assignments for the quantum numbers of the quark fields $`\begin{array}{cccc}i=& 1& 2& 3\\ Q_L:& (4,0)& (2,1)& (0,1)\\ \overline{u}_R:& (4,0)& (1,0)& (0,1)\\ \overline{d}_R:& (0,0)& (0,1)& (0,1)\end{array},`$ (49) which is easily related to the EIR $`U(1)`$ assignment above by replacing the $`Z_2`$ charge with $`+1`$ for $`Q_L`$ and with $`1`$ for $`\overline{u}_R`$ and $`\overline{d}_R`$. One can always substitute $`01`$ for $`Z_2`$ charges without affecting the results. We obtain the following Yukawa matrices for the up and down quarks $`𝐔\left(\begin{array}{ccc}\lambda ^8& \lambda ^5& \lambda ^5\\ \lambda ^7& \lambda ^4& \lambda ^2\\ \lambda ^5& \lambda ^2& \lambda ^0\end{array}\right),𝐃\left(\begin{array}{ccc}\lambda ^4& \lambda ^5& \lambda ^5\\ \lambda ^3& \lambda ^2& \lambda ^2\\ \lambda ^1& \lambda ^0& \lambda ^0\end{array}\right).`$ (50) The mass matrices can be obtained by multiplying these Yukawa matrices by the relevant vev ($`v_1`$ for $`𝐃`$ and $`v_2`$ for $`𝐔`$). 
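The exponents in Eq. (50) follow from a simple power counting: each entry carries $`\lambda `$ raised to the sum of the $`U(1)`$ charges of the corresponding bilinear, plus one extra power when its $`Z_2`$ parities do not match (again our reading of how the common expansion parameter implements the discrete symmetry). A short sketch of this counting (our addition) is:

```python
# Power counting behind Eq. (50) (our addition): lambda to the sum of the U(1)
# charges, plus one extra power when the Z_2 parities of the bilinear differ.
Q  = [(4, 0), (2, 1), (0, 1)]   # Q_L charges from Eq. (49)
uR = [(4, 0), (1, 0), (0, 1)]   # u_R-bar charges
dR = [(0, 0), (0, 1), (0, 1)]   # d_R-bar charges

def exponents(left, right):
    return [[qi + qj + (pi + pj) % 2 for (qj, pj) in right] for (qi, pi) in left]

print("U exponents:", exponents(Q, uR))   # [[8, 5, 5], [7, 4, 2], [5, 2, 0]]
print("D exponents:", exponents(Q, dR))   # [[4, 5, 5], [3, 2, 2], [1, 0, 0]]
# The same function applied to the charges of Eq. (54) reproduces Eq. (55).
```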
The intergenerational hierarchy, $`m_b/m_t\sim \lambda ^3`$, is then accounted for either by $`\mathrm{tan}\beta =v_2/v_1`$ or by increasing the $`U(1)`$ charges for $`\overline{d}_R`$, or by both. It is straightforward to check that these matrices have the texture pattern $`𝐔\left(\begin{array}{ccc}X& X& 0\\ X& X& X\\ 0& X& X\end{array}\right),𝐃\left(\begin{array}{ccc}X& 0& 0\\ 0& X& X\\ 0& X& X\end{array}\right).`$ (51) We refer to this type of suppression as a 3-texture zero solution, since three entries are suppressed by the charge assignments in the discrete symmetry. Referring back to Eq. (46), one can see that the dominant contribution to $`|V_{ub}|`$ comes only from the third term and is of order $`\lambda ^3`$. This solution was motivated by the desire to derive that $`|V_{ub}|`$ is proportional to $`|V_{cb}|`$; this can be achieved if the first term in parentheses in Eq. (46) is suppressed, and this requires a texture zero in the (1,3) position of both $`𝐔`$ and $`𝐃`$. (Since the experimental data for $`|V_{cb}|=a\lambda ^2`$ already requires the order one coefficient to be less than one, $`a\simeq 0.6`$, it is more likely that this final term, which includes another order one coefficient we will call $`b`$, will give agreement with the experimental data, $`|V_{ub}|=ab\lambda ^3`$ with $`b`$ somewhat smaller than one. Furthermore, it is then consistent with the experimental data on $`|V_{ub}/V_{cb}|=0.08\pm 0.02=b\lambda `$ for $`b\simeq 0.4`$. So the combination of order one coefficients satisfies the required relation, $`ab\sim \lambda `$.) Consequently one only has a contribution from the final term ($`u_{13}\sim \lambda ^5`$ and $`d_{13}\sim \lambda ^5`$). But then $`|V_{td}|\sim \lambda ^5`$ is inconsistent with Eq. (9), and unitarity (Eq. (11)) requires the relation $`|V_{us}|\sim \left|{\displaystyle \frac{V_{ub}}{V_{cb}}}\right|,`$ (52) which is also not supported by the experimental data, Eq. (6). Clearly the 3-texture zero pattern in Eq. (51) is too restrictive. We can relax the problematic constraint, Eq. (52), by removing the texture zero in the (1,3) position of the up quark matrix. Consider the following texture zero pattern $`𝐔\left(\begin{array}{ccc}0& X& X\\ X& X& X\\ X& X& X\end{array}\right),𝐃\left(\begin{array}{ccc}X& 0& 0\\ 0& X& X\\ 0& X& X\end{array}\right),`$ (53) which follows from the charge assignment $`\begin{array}{cccc}i=& 1& 2& 3\\ Q_L:& (4,0)& (2,1)& (0,1)\\ \overline{u}_R:& (6,1)& (2,0)& (0,0)\\ \overline{d}_R:& (0,0)& (0,1)& (0,1)\end{array}`$ (54) We obtain the following Yukawa matrices for the up and down quarks $`𝐔\left(\begin{array}{ccc}\lambda ^{11}& \lambda ^6& \lambda ^4\\ \lambda ^8& \lambda ^5& \lambda ^3\\ \lambda ^6& \lambda ^3& \lambda ^1\end{array}\right),𝐃\left(\begin{array}{ccc}\lambda ^4& \lambda ^5& \lambda ^5\\ \lambda ^3& \lambda ^2& \lambda ^2\\ \lambda ^1& \lambda ^0& \lambda ^0\end{array}\right).`$ (55) As is clear from the texture pattern in Eq. (53), the Cabibbo angle arises in the up quark matrix $`𝐔`$. However, one avoids the uncomfortable relation $`|V_{us}|\simeq \sqrt{m_u/m_c}`$ because the matrix is not symmetric. All phenomenological constraints are satisfied by this solution, with $`|V_{ub}|`$ receiving contributions of order $`\lambda ^3`$ from only the $`u_{13}`$ term and the last term in Eq. (46). The matrices in Eqs. (40) and (55) show that nontrivial $`Z_2`$ charges can be assigned to the quark and lepton fields, and that all phenomenological requirements can be met. ## VII Conclusion We have shown that one can explain all the masses and mixings of the quarks and leptons with a $`U(1)\times Z_2`$ symmetry. 
The phenomenological requirements can be met if the Cabibbo mixing in the two light generations is generated in the up quark mass matrix. This runs counter to the bias of assuming that the Cabibbo mixing comes from the down quark mass matrix so that the relation $`|V_{us}|\simeq \sqrt{m_d/m_s}`$ is obtained. This prejudice should be abandoned in the context of these Abelian flavor symmetries, because the resulting Yukawa matrices need not be symmetric, and $`m_u/m_c\sim \lambda ^4`$ is a reasonable hierarchy even with the leading contribution to the Cabibbo angle $`|V_{us}|`$ coming from the up quark matrix. The advantages of employing $`U(1)\times Z_2`$ as a flavor symmetry are the following: * One can understand a mass hierarchy $`m_{\nu _\mu }/m_{\nu _\tau }\sim \lambda ^2`$ and large atmospheric neutrino mixing $`\mathrm{sin}\theta _{23}^\nu \sim 1`$ without invoking unnatural cancellations. * The discrete symmetry can be implemented consistently with the neutrino seesaw mechanism to give the necessary neutrino mass hierarchy. * The number of terms in the Yukawa matrices that contribute to each CKM mixing angle is reduced by the presence of texture zeros. For example, $`|V_{ub}|`$ arises from a single contribution in Eq. (46), since the other contributions are removed by flavor suppression. The EIR model has the Cabibbo angle, $`|V_{us}|\sim \lambda `$, arising at leading order from both the up and down quark matrices (cf. Eqs. (44) and (42)). Our solution suppresses the contribution from the down quark Yukawa matrix, and thus the leading contribution arises entirely in the up quark matrix. * The discrete $`Z_2`$ symmetry can enhance the lepton asymmetry generated by the decay of heavy right-handed neutrinos. This can be exploited to straightforwardly explain the baryon asymmetry of the Universe. A discrete flavor symmetry offers some attractive features for generating phenomenologically reasonable models. We leave to future work the question of whether these models can be embedded in a larger theoretical structure. ## Acknowledgments This work was supported in part by the U.S. Department of Energy under Grant No. DE-FG02-91ER40661.
# Striped superconductors in the extended Hubbard model ## Abstract We present a minimal model of a doped Mott insulator that simultaneously supports antiferromagnetic stripes and $`d`$-wave superconductivity. We explore the implications for the global phase diagram of the superconducting cuprates. At the unrestricted mean-field level, the various phases of the cuprates, including weak and strong pseudogap phases, and two different types of superconductivity in the underdoped and the overdoped regimes, find a natural interpretation. We argue that on the underdoped side, the superconductor is intrinsically inhomogeneous – striped coexistence of superconductivity and magnetism – and global phase coherence is achieved through Josephson-like coupling of the superconducting stripes. On the overdoped side, the state is overall homogeneous and the superconductivity is of the classical BCS type. Experimental evidence increasingly suggests that microscopic inhomogeneous “stripe” states are ubiquitous in the doped cuprates, as well as in other complex electronic materials. Nanoscale stripe morphologies have been inferred in YBCO and LSCO from neutron scattering and angle resolved photoemission experiments. In the superconducting phase, the stripes appear to coexist with superconductivity in a range of dopings without destroying the global phase coherence. The main issue that we address in this Letter is the nature of the superconducting state in the presence of stripes. Inhomogeneous quantum states in non-superconducting lattice models are no less common than they are in the experiments. Many lattice models possessing antiferromagnetic (AF) ground states at half-filling develop stripes at the unrestricted mean-field (MF) level when doped away from that filling. Exact solutions may lead to fluctuations that introduce dynamics into the MF solutions, but are expected to preserve the qualitative features. The lower the spatial dimension, the more important the quantum fluctuations are, and sometimes MF solutions do not reproduce the exact large-distance physics of the problem. However, they often pick up the low-lying manifold of excited states, which becomes relevant at low enough temperatures. Also, 3-dimensional coupling in real materials helps to reduce the effect of fluctuations. We consider here a minimal model with stripes to illustrate our conclusions. We employ the 2-dimensional one-band Hubbard Hamiltonian with hopping $`t`$ and on-site repulsion $`U`$. Superconductivity is introduced by including the nearest neighbor attraction $`V`$, which produces pairing predominantly in the $`d`$-wave channel close to half-filling. The effective minimal Hamiltonian is thus $`H=t{\displaystyle \underset{ij\sigma }{}}c_{i\sigma }^{}c_{j\sigma }^{}+U{\displaystyle \underset{i}{}}n_in_i+V{\displaystyle \underset{ij}{}}n_in_j,`$ (1) where the operator $`c_{i\sigma }^{}`$ ($`c_{j\sigma }^{}`$) creates (annihilates) an electron with spin $`\sigma `$ on the lattice site $`i`$, and $`n_i=c_i^{}c_i^{}+c_i^{}c_i^{}`$ represents the electron density on site $`i`$. For our computations, we use the unrestricted mean-field approximation to this Hamiltonian, $`H_{MF}=t{\displaystyle \underset{ij\sigma }{}}c_{i\sigma }^{}c_{j\sigma }^{}+U{\displaystyle \underset{i}{}}n_in_i+n_in_i`$ (2) $`+{\displaystyle \underset{ij}{}}c_i^{}c_j^{}\mathrm{\Delta }_{ij}^{}+\mathrm{H}.\mathrm{c}.,`$ (3) where $`\mathrm{\Delta }_{ij}=Vc_ic_j`$ is the MF superconducting order parameter. 
The direct Hartree terms in $`V`$ are neglected since the magnitude of the effective nearest neighbor attraction is expected to be much smaller than the on-site repulsion $`U`$. Hence, it should not affect the diagonal part of the Hamiltonian, which is responsible for the charge and spin order. Therefore, the effect of $`V`$ in our model is limited to the generation of superconducting correlations. We do not address the very important issue of the microscopic origin of the attraction $`V`$. Our goal is only to construct a minimal model that may help to qualitatively understand the rich phase diagram of the cuprates. Unless stated otherwise, standard parameter values used are $`U=4t`$ and $`V=0.9t`$. This choice of parameters allows us to clearly demonstrate our conclusions regarding the inhomogeneous superconducting phase. To self-consistently solve the MF equations we use an iterative scheme. This consists of two stages which are repeated until convergence is achieved: (1) Diagonalization of the MF Hamiltonian, and (2) Update of the MF parameters. An important feature of our approach is that all physical quantities are allowed to vary from one lattice site to another, e.g., $`n_in_i`$ and $`n_{i\sigma }n_{i+\alpha \sigma }`$. Generically, we are seeking inhomogeneous solutions whose typical correlation lengths $`\xi `$ involve several lattice spacings. Therefore, it is important that the simulated supercell size $`N_x\times N_y`$ (with periodic boundary conditions) is such that $`N_{x,y}>\xi _{x,y}`$. The Hamiltonian in Eq. (2) can be rewritten in the matrix form, $`H_{MF}=𝐜^{}\widehat{H}𝐜`$, with $`𝐜=(c_1^{},c_1^{},c_2^{},c_2^{},\mathrm{})^T`$, where $`\widehat{H}`$ is a $`(2N_xN_y)\times (2N_xN_y)`$ hermitian matrix. By applying a unitary transformation $`\alpha `$ ($`\alpha ^1=\alpha ^{}`$) the Hamiltonian matrix can be diagonalized as $`\widehat{H}=\alpha \widehat{D}\alpha ^1`$, with $`D_{nm}=\delta _{nm}E_n`$. The Hamiltonian can be diagonalized in the Bogoliubov quasiparticles, $`\gamma _n={\displaystyle \underset{m}{}}\alpha _{nm}^1c_m,`$ (4) with energies $`E_n`$. By reexpressing the original creation-annihilation operators in terms of the Bogoliubov quasiparticles, one can recompute the parameters of the MF Hamiltonian. For example, $`n_i`$ $`=`$ $`c_i^{}c_i^{}={\displaystyle \underset{nm}{}}\alpha _{i,n}^{}\gamma _n^{}\alpha _{i,m}\gamma _m`$ (5) $`=`$ $`{\displaystyle \underset{n}{}}|\alpha _{i,n}|^2n_F(E_n),`$ (6) where $`n_F(E_n)`$ is the Fermi-Dirac distribution function. Repeated until the convergence, the iterations produce the spatial profiles of the self-consistent density and order parameter. A typical zero-temperature MF inhomogeneous solution is shown in Fig. 1. In the lowest energy configuration, the spin density develops a soliton-like AF anti-phase domain boundary — a stripe — at which the AF order parameter changes sign. At the domain boundary, the electronic charge density is depleted. The width of the domain wall, $`\xi _{DW}`$, decreases with increasing on-site repulsion $`U`$. However, for values of $`U`$ that are not much larger than the hopping $`t`$, the charge per unit length of the optimal (the lowest energy) stripe remains the same and is close to unity near half-filling. This result, first demonstrated in this simple model by Schultz , is a direct consequence of doping-dependent nesting in the Hubbard model. 
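The iterative scheme just described can be illustrated with a minimal sketch. The code below is our own illustration, not the code used for the paper: it treats a small one-dimensional chain instead of the two-dimensional supercell, keeps the same kind of unrestricted Hartree (spin-resolved density) and bond-pairing mean fields, and alternates diagonalization and update with simple linear mixing. The parameter values $`t=1`$, $`U=4t`$, $`V=0.9t`$ follow the text; the chemical potential, the initial guesses, and the sign convention in the gap equation (chosen so that a positive $`V`$ acts as an attraction) are our own assumptions.

```python
import numpy as np

# Minimal illustrative Bogoliubov-de Gennes self-consistency on a 1D chain
# (a simplified stand-in for the 2D supercell calculation described in the text).
N, t, U, V, T = 24, 1.0, 4.0, 0.9, 0.01
mu = U / 2 - 0.3                      # slightly hole-doped away from half filling

def fermi(E, T):
    return 0.5 * (1.0 - np.tanh(E / (2.0 * T)))   # numerically stable Fermi function

# Nearest-neighbor hopping matrix with periodic boundary conditions.
hop = np.zeros((N, N))
for i in range(N):
    hop[i, (i + 1) % N] = hop[(i + 1) % N, i] = -t
bond = np.abs(hop) > 0                # mask of nearest-neighbor bonds

# Initial guesses: weak staggered (AF-like) spin densities and a small bond pairing.
n_up = 0.5 + 0.1 * (-1.0) ** np.arange(N)
n_dn = 0.5 - 0.1 * (-1.0) ** np.arange(N)
Delta = 0.1 * bond.astype(float)

for it in range(1000):
    # Spin-resolved single-particle blocks with the on-site Hartree shift.
    h_up = hop + np.diag(U * n_dn - mu)
    h_dn = hop + np.diag(U * n_up - mu)
    # BdG matrix in the (c_up, c_dn^dagger) Nambu basis.
    H = np.block([[h_up, Delta], [Delta.T, -h_dn]])
    E, W = np.linalg.eigh(H)
    u, v = W[:N, :], W[N:, :]                     # u_n(i), v_n(i) for all 2N eigenstates
    f = fermi(E, T)
    new_n_up = np.abs(u) ** 2 @ f
    new_n_dn = np.abs(v) ** 2 @ (1.0 - f)
    # Gap equation on the bonds; sign convention such that V > 0 (attraction)
    # can sustain a nonzero pairing amplitude.
    new_Delta = -V * ((v * f) @ u.T) * bond
    err = np.max(np.abs(new_Delta - Delta)) + np.max(np.abs(new_n_up - n_up))
    mix = 0.3                                     # linear mixing for stability
    n_up = (1 - mix) * n_up + mix * new_n_up
    n_dn = (1 - mix) * n_dn + mix * new_n_dn
    Delta = (1 - mix) * Delta + mix * new_Delta
    if err < 1e-8:
        break

print("iterations     :", it + 1)
print("charge density :", np.round(n_up + n_dn, 3))
print("spin density   :", np.round(n_up - n_dn, 3))
print("max |Delta_ij| :", round(float(np.abs(Delta).max()), 4))
```

In the real calculation all quantities vary over the full supercell and the converged solution develops the stripe profiles discussed below; the sketch only illustrates the diagonalize-and-update loop.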
The bond-centered stripes are favored relative to the site-centered ones, although the energy difference in our case is small due to the smooth charge distribution. For a different band structure the exact relation between the doping $`x`$ and the inter-stripe distance, $`L(x)`$, may change; however, any model whose ground state is AF at zero doping is expected to have AF stripes at finite doping, with incommensuration proportional to the doping, $`1/L(x)\propto x`$, near half-filling. For example, negative next-nearest neighbor hopping $`t^{}`$ (relevant in the hole-doped cuprates) modifies the stripe filling without compromising the stability of the stripe phase relative to the commensurate AF at the MF level. The stripe filling is a monotonically decreasing function of the magnitude of $`t^{}`$, with the magic filling 1/2 occurring when $`|t^{}|=0.35t`$. Experimentally, however, the value of the effective doping-dependent $`t^{}`$ is still not well defined. Among the consequences of the doping dependence of $`t^{}`$ are a doping-dependent stripe filling and the possibility of a transition from vertical to diagonal stripes as a function of doping. While we focus here on the model case $`t^{}=0`$, our conclusions remain qualitatively the same in the presence of a moderate negative $`t^{}`$, $`|t^{}|\lesssim 0.5t`$. For large enough doping levels, such that $`L(x)\lesssim \xi _{DW}`$, the AF stripes begin to overlap. In this regime the excitation spectrum is no longer fully gapped and mobile carriers appear. Further doping mostly changes the amplitude of the spin and charge density waves, only slowly modifying the incommensuration, $`1/L(x)`$. When the stripes are sufficiently close to melting, the AF aspect of the problem becomes unimportant, and the superconductivity is of the conventional BCS type. The superconducting order parameter $`\mathrm{\Delta }_{ij}^{d(s^{})}`$ is maximal on the stripes and is not smooth (even within the stripe) due to the presence of the AF background. This happens because the order parameter is sensitive to the spin density on sites $`i`$ and $`j`$. If $`i`$ belongs to the spin-down sublattice and the neighbor $`j`$ is on the spin-up sublattice, then the order parameter is large, and vice versa. Notice that, in addition to the dominant $`d`$-wave component, there is a small extended $`s`$-wave component generated on the stripe, which can be interpreted as a distortion of the $`d`$-wave at the level of about $`10\%`$. This happens because a symmetry of the lattice has been broken. The superconducting stripe is pinned by the AF phase domain boundary. Since the spin density on the domain boundary is small, a natural interpretation is that the superconductivity is suppressed in the regions of large AF order. Nevertheless, the width of a superconducting stripe, $`\xi _{SC}`$, is determined not only by the width of the AF stripe, but also increases rapidly with increasing magnitude of the nearest neighbor attraction, $`V`$. For our choice of parameters, $`\xi _{DW}\simeq 4`$ and $`\xi _{SC}\simeq 8`$ lattice sites. For dopings smaller than about $`10\%`$ (corresponding to $`L(x)>10`$ lattice sites) the stripes have negligible overlap. In this regime, the amplitude of the superconducting order parameter on the stripes no longer depends upon the stripe-stripe separation. For higher doping levels, an overlap between the superconducting order parameters on adjacent stripes is established.
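The decomposition of the bond order parameter into a dominant $`d`$-wave and a small extended $`s`$-wave part mentioned above amounts to taking antisymmetric and symmetric combinations of the four bond amplitudes around a site; a minimal sketch (the numerical values are made-up placeholders standing in for a converged solution) is:

```python
# Decompose the four bond pairing amplitudes around a lattice site into
# d_{x^2-y^2} and extended s-wave channels.  The values below are placeholders;
# in practice they would come from the converged mean-field Delta_ij.
def d_and_s_components(d_xp, d_xm, d_yp, d_ym):
    d_wave = 0.25 * (d_xp + d_xm - d_yp - d_ym)
    s_wave = 0.25 * (d_xp + d_xm + d_yp + d_ym)
    return d_wave, s_wave

# A site on a stripe, where the broken lattice symmetry leaves a small s' admixture:
d_wave, s_wave = d_and_s_components(0.110, 0.100, -0.085, -0.095)
print("d-wave:", d_wave, "  extended s-wave:", s_wave,
      "  |s'/d| =", round(abs(s_wave / d_wave), 2))
```

For these placeholder amplitudes the admixture comes out at the few-percent to ten-percent level, of the same order as the distortion quoted in the text.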
A central question is connected to the conducting properties of the resulting inhomogeneous state: Is it a global superconductor, a metal, an insulator, or some unusual anisotropic phase? To resolve this issue we use the concepts of charge stiffness $`D_c`$ , which measures the sensitivity of the ground state to changes in boundary conditions, and anomalous flux quantization , which provides a direct signature of the Meissner effect. These concepts, together with topological quantum numbers , are routinely employed to study the localization properties of models of interacting electrons. To compute $`D_c`$ one needs to determine how the energy of a system with a fixed number of particles, $`E`$, depends on the twist in the boundary conditions, $`\mathrm{\Theta }[0,2\pi )`$. The twist of the boundary conditions is independently applied along each spatial direction $`j=x,y`$ and implies that $`c_{N_j+1}=\mathrm{exp}(i\mathrm{\Theta })c_1`$. The special case of $`\mathrm{\Theta }=0`$ corresponds to strictly periodic boundary conditions. Textbook schematics of the energy dependence, $`E(\mathrm{\Theta })`$, are shown in Fig. 2A. The calculated many-particle spectrum $`E(\mathrm{\Theta })`$ for a system with a stripe separation of 17 lattice sites in our model is shown in Fig. 2B. The energy curves imply that the system is superconducting, however the superfluid stiffnesses along and across the stripes are drastically different. However, for a smaller stripe periodicity (Fig. 2C), due to substantial overlap between the stripes, superconductivity is almost as strong across the stripes as it is along the stripes. Arrays of superconductors separated by insulating regions, known as Josephson junction arrays, have non-trivial conducting properties. Depending on the relative strength of the coupling between the superconductors and the charging energy, such systems can be either superconductors or insulators . At the MF level, the charging aspect of the problem is absent, and hence we can expect a global superconductor at zero temperature no matter how weak the interchain coupling is. In a real system, for a sufficiently weak coupling the superconducting behavior will be suppressed. In the case of superconducting channels separated by AF insulators, the coupling is inversely related to $`L(x)`$. For very large $`L(x)`$ at low doping, $`L(x)\xi _{SC}`$, the overlap between the superconducting stripes, and hence the superconducting transition temperature $`T_c`$, is exponentially small. As the distance between the stripes decreases (larger doping), the overlap of the superconducting condensate wave functions should establish a phase coherent superconducting state. Indeed, this is qualitatively what we observe already at the mean-field level in the striped superconductors. For a large superconducting stripe overlap, the effective coupling between the stripes is non-exponential. In this regime, the experimentally measured superconducting transition temperature is proportional to incommensuration , which implies that the effective Josephson coupling scales as $`1/L(x)`$. A possible experimental test of the Josephson-coupled superconductor scenario proposed here (see also ) can be performed by measuring the in-plane Josephson plasmon resonance. The resonance should be present in the microwave-frequency range and is excitable by an in-plane electric field. From our zero-temperature analysis of the coexistence of AF stripes (ICAF) and superconductivity, a simple qualitative thermodynamic phase diagram emerges. 
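Before turning to the phase diagram, the twisted-boundary-condition diagnostic used above can be illustrated on a toy problem. The sketch below is our own and treats a non-interacting tight-binding ring rather than the interacting mean-field state studied in the paper; it computes $`E(\mathrm{\Theta })`$ and estimates the stiffness from the curvature at the minimum of the curve.

```python
import numpy as np

# E(Theta) for a tight-binding ring with the twist c_{N+1} = exp(i*Theta) c_1.
N, t, n_particles = 20, 1.0, 9

def ground_state_energy(theta):
    h = np.zeros((N, N), dtype=complex)
    for i in range(N):
        j = (i + 1) % N
        phase = np.exp(1j * theta) if j == 0 else 1.0  # twist on the boundary bond only
        h[i, j] = -t * phase
        h[j, i] = np.conj(h[i, j])
    eps = np.linalg.eigvalsh(h)
    return np.sum(eps[:n_particles])                   # fill the lowest levels

thetas = np.linspace(0.0, 2.0 * np.pi, 41)
energies = np.array([ground_state_energy(th) for th in thetas])

# Stiffness estimate ~ curvature of E(Theta) at its minimum (finite differences).
dth = thetas[1] - thetas[0]
i0 = np.argmin(energies)
D_c = (energies[(i0 + 1) % len(thetas)] - 2 * energies[i0]
       + energies[i0 - 1]) / dth ** 2
print("E(0) =", round(float(energies[0]), 4), "  stiffness estimate:", round(float(D_c), 4))
```

In the calculation described in the text the twist is applied independently along and across the stripes, and the contrast between the two curvatures is what signals the anisotropic superfluid stiffness.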
In the conjectured phase diagram, we utilize the finite-temperature AF/ICAF phase diagram of the Hubbard model together with the superconducting (SC) phase diagram of the $`t`$-$`V`$ model. The SC phase diagram is obtained in the homogeneous MF approximation, while the AF/ICAF phase boundary is constructed under the assumption of a second-order phase transition between the homogeneous and inhomogeneous states. For a suitable choice of parameters, for instance $`U=2t`$ and $`V=t`$, the SC and the AF/ICAF regions of the phase diagram intersect, as shown in Fig. 3. The boundary between the AF and ICAF phases corresponds to an infinite-period stripe modulation, and implies that the incommensuration is a decreasing function of temperature. The energy scale associated with the AF/ICAF region of the phase diagram is much larger than that of the SC part. Thus, one expects that only the SC phase boundary is modified when it passes through the AF/ICAF region. The central result of our work is that the superconductivity does not disappear in the region of the AF stripes, but rather becomes striped, with anisotropic superfluid stiffness. Based on familiar Josephson coupling physics, in the region of coexistence of superconductivity and stripes we can expect a part that is a globally coherent striped superconductor (SSC). The rest of the intersection region is covered by an exotic phase which, if it were perfectly orientationally ordered, would be a superconductor in one direction and a strongly-correlated insulator in the other. In reality, due to the meandering of the stripes and their break-up into finite segments, the state is likely to be highly inhomogeneous and neither an insulator nor a superconductor, but also not a simple metal. In agreement with the experimental attribution, we refer to this region as a “strange metal” (SM). The line separating the SM from the AF/ICAF region can, in the context of the experiments, be associated with the crossover to the strong pseudogap regime, and corresponds to the opening of the superconducting gap. The high-temperature AF/ICAF phase boundary marks the onset of the weak pseudogap. For very small dopings the MF stripe separation becomes so large that the superconducting aspects of the model become irrelevant and one crosses over to the regime governed predominantly by the physics of antiferromagnets. It should be emphasized that the phase diagram presented here is based on the (inhomogeneous) MF treatment of a 2-dimensional model. As such, it is susceptible to the quantum and thermal fluctuations that tend to destroy long-range order. For instance, the ICAF phase in Figure 3 is, in a real material, more likely to manifest itself as incommensurate AF fluctuations rather than as a pure phase. However, the effects of the third dimension and of impurity pinning may stabilize the MF phases at sufficiently low temperatures, revalidating the MF phase diagram. Within our model, we find that increasing the on-site repulsion leads to a suppression of superconductivity. For larger $`U`$, the pure $`d`$-wave superconducting region of the phase diagram (SC) shrinks, and the width of the superconducting stripes in the SSC region decreases. For example, changing $`U`$ from $`4t`$ to $`5t`$ (keeping $`V`$ fixed) leads to a five-fold reduction of the superconducting order parameter on the stripes. This is in contrast with the homogeneous mean-field results, where the $`d`$-wave superconductivity is independent of the magnitude of $`U`$.
On the other hand, large values of the on-site repulsion, $`U>4t`$, in the Hubbard model lead to diagonal stripes. Applying our model to diagonal stripes ($`U=5t`$, $`V=0.9t`$), we find that superconductivity vanishes completely at the optimal stripe filling. Perhaps it is not a coincidence that the insulating cuprates do in fact show diagonal stripes, as opposed to the lattice-aligned stripes in the superconducting cuprates. There is an analogy between the role played by doping in our model and the strength of the electron-phonon coupling, $`\lambda `$, in the McMillan criterion for the maximum achievable $`T_c`$ in conventional superconductors. In McMillan’s picture, increasing $`\lambda `$ favors superconductivity. However, increasing $`\lambda `$ too much induces a structural transition, and hence changes the reference ground state. In our model, increasing doping brings the superconducting stripes closer together, and hence enhances the global $`T_c`$. However, increasing doping too much causes a transition to the uniform state, with a subsequent monotonically decreasing $`T_c`$ as a function of doping. The topology of the phase diagram we have proposed appears to be relevant for the superconducting cuprates, such as LSCO and YBCO. The same simple model (for other parameter values) can produce other topologies as well. A non-trivial topology, which may be realized in a material with a weaker attractive coupling, is one in which the superconducting region is fully contained in the AF/ICAF region. Depending on the parameters, such a material may be a striped superconductor in a certain range of dopings. Also, similar physics may occur in the organic charge-transfer salts, which show AF, ICAF and superconductivity under pressure, which controls the interchain coupling. There, the intrinsically anisotropic coupling is also important for the stabilization of the mesoscopic inhomogeneities. In conclusion, we find that a simple one-band model with on-site repulsion and nearest-neighbor attraction, in an appropriate range of parameters, can simultaneously sustain both incommensurate antiferromagnetism and inhomogeneous superconductivity. Prompted by this finding and utilizing well-known antiferromagnetic and superconducting phase diagrams, we have constructed a generic phase diagram that captures many of the phases observed in the cuprate and organic superconductors. An experimental test of the Josephson-coupled superconductor proposed here can be performed by measuring the in-plane Josephson plasmon resonance. Although our simple model appears to capture much of the observed rich physics, it can be readily elaborated (with multiple electron bands, 3-dimensionality, long-range interactions, lattice coupling, etc.) for more quantitative comparisons with specific materials. We would like to thank L. Bulaevskii for suggesting to us the in-plane Josephson plasma experiment as a probe of the striped superconductivity. We acknowledge useful discussions with C.D. Batista, E. Fradkin, S. Kivelson, D. Morr, D. Pines, S. Trugman, and J. Zaanen. This work was supported by the U.S. DOE.
no-problem/0003/astro-ph0003325.html
ar5iv
text
# HS 0907+1902: a new 4.2 hr eclipsing dwarf nova ## 1 Introduction Cataclysmic variables (CVs) may be discovered by various means. Historically, most of them were found because of their cataclysmic nature, i.e. strong variability. This is especially valid for dwarf novae, which show quasi-regular outbursts in the visual of up to 8 magnitudes. With the advent of space-based X-ray telescopes, a new class of CVs was discovered, containing magnetic white dwarfs as accretors. The ROSAT and EUVE missions were extremely successful in discovering this type of CVs (e.g. Beuermann 1998). However, a large number of CVs are neither prominent X-ray sources, nor strongly variable. In non-magnetic CVs with a constantly high mass transfer rate – novalike variables – the accretion disc remains in a perpetual hot state, turning them into unspectacular blue objects. Similarly, dwarf novae with low outburst amplitudes or long outburst cycles are likely to slip the attention of sky patrols. As a result, the sample of known CVs Downes et al. (1997) is skewed by selection effects, and the actual space density of CVs is a matter of serious debate (e.g. de Kool 1992 and Patterson 1998). The Hamburg Schmidt objective prism survey (HQS, Hagen et al. 1995), originally aimed at the detection of a magnitude-limited sample of bright quasars (V=13–17.5), provides a rich source of CV candidates selected because of their spectroscopic properties. Up to date, only few CVs have been serendipitously identified from the HQS: HS 0551+7241 Dobrzycka et al. (1998); HS 1023+3900 Reimers et al. (1999); and HS 1804+6753 (=EX Dra) Billington et al. (1996); Fiedler et al. (1997). The latter two objects show the strength of the spectroscopic selection of CV candidates: HS 1023+3900 is a magnetic CV with a very low accretion rate and no X-ray emission, and HS 1804+6753 is a bright eclipsing dwarf nova with low-amplitude outbursts, both stars were unlikely to be detected with the “classic” selection mechanisms described above. We have initiated a search for new CVs selected from the HQS with follow-up observations of CV candidates detected also in the ROSAT Bright Source catalogue Voges et al. (1999). They were identified as possible CVs by Bade et al. Bade et al. (1998) because of the Balmer line emission seen in their HQS prism spectra. HS 0907+1902 ( = 1RXS J090950.6+184956) was independently confirmed spectroscopically as CV at the BAO (X. Jiang, private communication). Here we report on the first photometric and spectroscopic results for HS 0907+1902. ## 2 Observations and results ### 2.1 Photometry Several nights of differential photometry of HS 0907+1902 (Fig. 1) were obtained at Braeside Observatory, Arizona, with a 41 cm reflector equipped with a SITe 512 CCD camera (Table 1). $`B`$ and $`V`$ magnitudes of HS 0907+1902 were derived relative to the $`V=11.08`$ and $`BV=0.86`$ comparison star ( = Tycho2 1404-1852-1). The first night of the observation run on 2000 February 7 showed HS 0907+1902 at a magnitude of $`V13`$. As Bade et al. Bade et al. (1998) estimated $`B16.4`$ from the HQS prism spectra, this clearly indicated that we had detected the first dwarf nova outburst of HS 0907+1902. The light curves (Fig. 2) show deep eclipses ($`\mathrm{\Delta }B=3.0`$, $`\mathrm{\Delta }V=2.6`$, and $`\mathrm{\Delta }R=2.1`$) with a period of 4.2 h and low flickering activity. The mean $`V`$ magnitude increased by $`0.2`$ throughout the night, indicating that HS 0907+1902 was still on the rise to the maximum of the outburst. 
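The conversion from differential CCD photometry to apparent magnitudes used here is the standard flux-ratio relation; a small sketch follows (the count values are invented placeholders, not measurements from the Braeside frames, and only the comparison-star magnitude is taken from the text).

```python
import numpy as np

# Differential CCD photometry: target magnitudes relative to the comparison
# star Tycho2 1404-1852-1 (V = 11.08).
V_comp = 11.08

def target_magnitude(counts_target, counts_comp, m_comp=V_comp):
    """Apparent magnitude from the instrumental flux ratio."""
    return m_comp - 2.5 * np.log10(np.asarray(counts_target) / np.asarray(counts_comp))

counts_target = np.array([12000.0, 11800.0, 2100.0])   # last point: in eclipse (placeholder)
counts_comp = np.array([70000.0, 69500.0, 69800.0])
print(np.round(target_magnitude(counts_target, counts_comp), 2))
```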
No orbital modulation (hump) was detected. The outburst was independently discovered by observers of the Variable Star Network (VSNET) who reported the outburst of HS 0907+1902 on February 11 (vsnet-alert 4176)<sup>1</sup><sup>1</sup>1http://www.kusastro.kyoto-u.ac.jp/vsnet/Mail/vsnet-alert/msg04176.html. The $`B`$, $`V`$, and $`R`$ light curves obtained on February 11 cover one eclipse and are very similar to those of February 7, but the system was somewhat fainter ($`V13.8`$) and a decline by $`0.1`$ mag is observed during the night. Hence, the maximum of the outburst occurred between February 7 and 11. HS 0907+1902 was apparently in outburst during the epoch of the plates of the Hubble Space Telescope Guide Star Catalogue, which lists $`V=12.53`$. This value might be taken as the brightest outburst magnitude so far recorded. On February 15, the system appeared to be much fainter, and, due to poor weather conditions, we decided to obtain filterless photometry to maximise the time resolution. Flickering with an amplitude of $`0.4`$ mag was apparent, typical of dwarf novae in quiescence. Surprisingly, the light curve shows no strong orbital modulation, which is normally the signature of a bright spot where the accretion stream impacts the disc. One eclipse egress and one full eclipse were covered. On February 16, we obtained a $`V`$ band light curve near $`V16`$ (Fig. 3). Finally, an eclipse egress and one complete eclipse were covered on February 23 and 25, respectively, again in white light photometry, with the same out-of-eclipse magnitude as on February 15. HS 0907+1902 was, therefore, already in quiescence on the $`15^{\mathrm{th}}`$. Ephemeris. From the seven observed eclipses we derive the following ephemeris: $$\varphi _0=\mathrm{HJD}\mathrm{\hspace{0.17em}2451581.8263}(1)+0.175446(3)\times E$$ (1) where $`\varphi =0`$ is defined as the mid-eclipse phase, equivalent to the inferior conjunction of the secondary star. Errors in the last digit are given in brackets. Table 2 lists the eclipse timings. Eclipse shape. We observed three eclipses during the dwarf nova outburst. In order to compare the eclipse profiles, a linear fit was made to the out-of-eclipse light curves and subsequently used to detrend the light curves. The detrended eclipse profiles, folded with the above ephemeris, are displayed in Fig. 4. The wings of the eclipse profiles are perfectly symmetric with respect to $`\varphi =0.0`$, indicating an axisymmetrical brightness distribution in the accretion disc in HS 0907+1902. The brightness at eclipse minimum is higher than that observed during quiescence ($`V17.6`$, Fig. 3), so the accretion disc is not totally eclipsed. Interestingly, the centre of the eclipse profile is variable in shape: while the first eclipse (labelled ’a’) on February 7 has a round minimum, the second one (’b’) has a flat bottom, which hints to an increase in brightness of the non-eclipsed part of the accretion disc and could explain the observed increase of the overall brightness of the system. In the context of disc-outburst theory (e.g. Osaki 1996) this behaviour would be expected if the observed outburst is of inside-out nature. The width of the eclipse at half depth is $`\mathrm{\Delta }\varphi _{1/2}0.06`$ during outburst, which is somewhat lower than e.g. in IP Peg ($`\mathrm{\Delta }\varphi _{1/2}0.09`$). The measured eclipse duration implies $`i73^{}79^{}`$ for a mass ratio $`q=M_{\mathrm{wd}}/M_{\mathrm{sec}}`$ in the range $`1.253`$. 
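Orbital phases for any observation follow directly from the ephemeris of Eq. (1); a brief illustration (the HJD values below are invented placeholders, not the eclipse timings of Table 2) is:

```python
import numpy as np

# Orbital phase from the ephemeris of Eq. (1):
# HJD(phi = 0) = 2451581.8263 + 0.175446 * E
T0, P = 2451581.8263, 0.175446   # days

def orbital_phase(hjd):
    """Phase in [0, 1); phi = 0 corresponds to mid-eclipse."""
    return np.mod((np.asarray(hjd) - T0) / P, 1.0)

# Example with hypothetical time stamps:
hjd = np.array([2451590.615, 2451590.700, 2451590.790])
print(np.round(orbital_phase(hjd), 3))
```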
Unfortunately, during quiescence only two full eclipses were covered with data of satisfactory quality (Fig. 4), which leaves the following statements somewhat speculative. Taken at face value, the eclipse observed on February 15 broadly resembles that of Z Cha during quiescence Cook & Warner (1984): the eclipse minimum is followed by a steep jump in brightness, which is followed by a smooth egress to the out-of-eclipse level. In Z Cha, the final egress is due to the appearance of the bright spot, whereas the sudden jump from eclipse minimum corresponds to the egress of the white dwarf. The hypothetical white dwarf egress in HS 0907+1902 would result in a relatively short eclipse of the primary ($`\mathrm{\Delta }\varphi 0.04`$), which agrees with the conclusions on $`i`$ and $`q`$ obtained above from the high state eclipse profiles. However, the eclipse obtained on February 25 has a less structured shape, and higher S/N data are needed to decisively derive the binary parameters. ### 2.2 Spectroscopy On 1999 October 20, two identification spectra of HS 0907+1902 were obtained with the low resolution spectrograph (LRS, $`\lambda /\mathrm{\Delta }\lambda 520`$) on the 9.2 m Hobby-Eberly telescope (HET) on Mt. Fowlkes, Texas. An absolute flux calibration of these spectra was not possible due to thermal drift in the alignment of the 91 individual segments of the primary mirror. We, therefore, used the flux standard LDS749b to derive the instrumental response function and adjusted the spectra of HS 0907+1902 to the observed $`V`$ magnitude. One of the HET spectra is shown in Fig. 5. The spectrum of HS 0907+1902 is typical of a dwarf nova in quiescence, containing strong Balmer emission lines and weaker He I and Fe II lines. There is possibly some weak emission of He II$`\lambda `$ 4686, blended with He I$`\lambda `$ 4713 and with the C III/N III$`\lambda \lambda 46404650`$ complex. The double-peaked shape of the emission lines is typical for high-inclination dwarf novae. The equivalent widths of the most prominent lines are $`\mathrm{H}\alpha `$ = 90 Å, $`\mathrm{H}\beta `$ = 78 Å, $`\mathrm{H}\gamma `$ = 46 Å, He I$`\lambda `$ 5870 = 24 Å, He I$`\lambda `$ 4474 = 10 Å. The FWHM, corrected for the instrumental resolution, are $`\mathrm{H}\alpha `$ = 32 Å, $`\mathrm{H}\beta `$ = 28 Å, $`\mathrm{H}\gamma `$ = 26 Å, He I$`\lambda `$ 5870 = 33 Å, He I$`\lambda `$ 4474 = 26 Å. The red end of the HET spectrum of HS 0907+1902 shows signatures of a late-type secondary star, namely the broad absorption blends of TiO/CaOH ($`\lambda \lambda 61606320`$) and of TiO ($`\lambda \lambda 71907210`$). The flux increases for $`\lambda >7210`$ Å, as expected for the contribution of a late type star. Using a library of observed M-dwarf spectra, we derive a spectral type of M$`3\pm 1.5`$ for the secondary star in HS 0907+1902. This estimate agrees well with the observed spectral types of secondaries in CVs with similar $`P_{\mathrm{orb}}`$ Beuermann et al. (1998). Fig. 5 shows the M3-dwarf Gl352ab scaled according to the depth of the observed absorption features. From the adjusted M-star spectra of stars with the above range of spectral types, we measure an observed surface brightness of the flux difference in the $`\lambda \lambda 7500/7165`$ Å band of $`f_{\mathrm{TiO}}=(3.9\pm 1.7)\times 10^{16}`$$`\mathrm{erg}\mathrm{cm}^2\mathrm{s}^1\text{Å}^1`$. 
From Roche geometry and from Patterson’s (1984) mass-radius relation for main-sequence stars we estimate that $`M_{\mathrm{sec}}=0.42M_{\mathrm{\odot }}`$ and $`R_{\mathrm{sec}}=3.17\times 10^{10}`$ cm. Finally, applying the calibration for the $`F_{\mathrm{TiO}}`$ surface brightness of late-type stars of Beuermann & Weichhold (1999), we obtain a distance of $`d=320\pm 100`$ pc, corresponding to a distance modulus of $`m-M=7.5\pm 0.7`$. If we assume an outburst magnitude of $`V=12.5`$ (Sect. 2.1), we derive an absolute magnitude in outburst of $`M_V=3.3`$, where we applied a correction $`\mathrm{\Delta }M_V(i)=-1.6`$ for an assumed inclination of $`80^{\circ }`$. Warner’s (1987) $`M_V-P_{\mathrm{orb}}`$ relation gives $`M_V=4.6`$ for $`P_{\mathrm{orb}}=4.2`$ h; this can be taken as a hint that the true distance is on the lower side of our error range and that the spectral type of the secondary is rather $`M4`$. ## 3 Conclusion We have discovered a bright new eclipsing dwarf nova with an orbital period of 4.2 h. Eclipsing CVs offer the best means of deriving the system parameters, such as stellar masses, mass transfer rates, and the structure of the accretion disc. With its long orbital period, HS 0907+1902 is only the fourth deeply eclipsing dwarf nova above the $`2-3`$ h period gap, the other ones being IP Peg, HS 1804+6753 (=EX Dra), and BD Pav. With a quiescent and an outburst magnitude of $`V\simeq 16`$ and $`V\simeq 12.5`$, respectively, HS 0907+1902 is well suited for detailed follow-up studies. ###### Acknowledgements. BTG and DN were supported by DLR/BMBF grant 50 OR 9903 6. The HQS was supported by the Deutsche Forschungsgemeinschaft through grants Re 353/11 and Re 353/22. Braeside Observatory acknowledges the support of The Research Corporation, The National Science Foundation (AST-92-180002), and the Fund of Astrophysical Research. The Hobby-Eberly Telescope is operated by McDonald Observatory on behalf of The University of Texas at Austin, the Pennsylvania State University, Stanford University, Ludwig-Maximilians-Universität München, and Georg-August-Universität Göttingen. The Marcario Low Resolution Spectrograph is a joint project of the Hobby-Eberly Telescope partnership and the Instituto de Astronomía de la Universidad Nacional Autonoma de México. We thank the referee Dr. Osaki for helpful comments.
no-problem/0003/cond-mat0003241.html
ar5iv
text
# Data clustering and noise undressing of correlation matrices \[ ## Abstract We discuss a new approach to data clustering. We find that maximum likelihood leads naturally to a Hamiltonian of Potts variables which depends on the correlation matrix and whose low temperature behavior describes the correlation structure of the data. For random, uncorrelated data sets no correlation structure emerges. On the other hand, for data sets with a built-in cluster structure, the method is able to detect and recover that structure efficiently. Finally we apply the method to financial time series, where the low temperature behavior reveals a non-trivial clustering. \] Statistical mechanics typically addresses the question of how structures and order arising from interactions in extended systems are dressed, and eventually destroyed, by stochastic – so-called thermal – fluctuations. The inverse problem, unraveling the structure of correlations from stochastic fluctuations in large data sets, has recently been addressed using ideas of statistical mechanics. This is the case of data clustering problems, where the goal is to classify $`N`$ objects, defined by $`D`$-dimensional vectors $`\{\stackrel{}{\xi }_i\}_{i=1}^N`$, into equivalence classes. Generally the idea is to 1) postulate a cost function, which depends on the data sample, for each structure and 2) consider the cost function as a Hamiltonian and study its thermal properties. Structures are identified by configurations $`𝒮=\{s_i\}_{i=1}^N`$ of class indices, where $`s_i`$ is the equivalence class to which object $`i`$ belongs. Regarding $`s_i`$ as Potts spins, a Potts Hamiltonian $`H_q=-\sum_{i<j}J_{i,j}\delta _{s_i,s_j}`$ has recently been proposed as a cost function, with couplings $`J_{i,j}`$ decreasing with the distance $`d_{i,j}=|\stackrel{}{\xi }_i-\stackrel{}{\xi }_j|`$ between objects $`i`$ and $`j`$. The underlying structure of the data set emerges as the clustering of Potts variables at low temperatures. In this work we address the question of data clustering for time series. Rather than postulating the form of the Hamiltonian, we start from a statistical ansatz and invoke maximum likelihood and maximum entropy principles. In this way, the structure of the Hamiltonian arises naturally from the statistical ansatz, without the need for assumptions on its form. We study this Hamiltonian by the Monte Carlo method for artificial time series: if the time series are generated with some cluster structure $`𝒮^{}`$, we find a low temperature phase which is dominated by cluster configurations close to $`𝒮^{}`$. For random time series no low temperature phase is found. We also study the time series of the assets composing the S&P500 index, whose correlations have been the subject of much recent interest. Correlation matrices of financial time series are of great practical interest. Indeed, they are at the basis of risk minimization in modern portfolio theory, which states that, in order to reduce risk, the investment needs to be diversified (i.e. divided) over many uncorrelated assets. However, the measure of correlation in finite samples was recently found to be affected by considerable noise-dressing. Our aim is to address the problem of revealing the structure of the bare correlations hidden in a finite data set. Quite interestingly, our analysis of the S&P500 data set reveals a low temperature behavior dominated by a few clusters of correlated assets with scale-invariant properties.
The thermal average over the relevant cluster structures provides a good fit of the financial correlations, which allows us to estimate the noise-undressed correlation matrix. Finally, we discuss several generalizations of our approach to generic data clustering. The data $`\mathrm{\Xi }=\{\stackrel{}{\xi }_i\}_{i=1}^N`$ is composed of $`N`$ sets $`\stackrel{}{\xi }_i=\{\xi _i(d)\}_{d=1}^D`$ of $`D`$ measures. These are normalized to zero mean, $`\sum_d\xi _i(d)/D=0`$, and unit variance, $`\sum_d\xi _i^2(d)/D=1`$. We focus below on the case where $`\xi _i(d)`$ is the normalized daily return of asset $`i`$ of the S&P500 index on day $`d`$. For the time being, let us assume that the $`\xi _i(d)`$ are Gaussian variables. The reason is that we want to focus exclusively on pairwise correlations, and the Gaussian model is the only one which is completely specified at this level. We shall discuss below how to apply the method when the $`\xi _i(d)`$ are not Gaussian. The key quantity of interest is the matrix $$C_{i,j}(D)\equiv \frac{1}{D}\sum_{d=1}^{D}\xi _i(d)\xi _j(d).$$ (1) The spectral properties of this matrix, for uncorrelated time series, are known exactly. The spectrum of eigenvalues $`\lambda `$ extends over an interval of size $`N/D`$ around $`\lambda =1`$, as shown in Fig. 1a. The spectrum of eigenvalues of the S&P500 correlation matrix is also shown. The similarity of the two distributions for $`\lambda \simeq 1`$ suggests that significant noise-dressing due to finite $`D`$ occurs. The tail of the distribution ($`\lambda \gg 1`$) implies that some correlation is nevertheless present. The structure of correlation was analyzed both by the minimal spanning tree method and by the method of ref. in ref. . In order to explain this correlation, Noh proposed the ansatz $$\xi _i(d)=\frac{\sqrt{g_{s_i}}\eta _{s_i}(d)+ϵ_i(d)}{\sqrt{1+g_{s_i}}}.$$ (2) Here $`g_s>0`$, the $`s_i`$ are integer variables (so-called Potts spins), and $`\eta _s(d)`$ and $`ϵ_i(d)`$ are i.i.d. Gaussian variables with zero average and unit variance. In order to allow for totally uncorrelated sets, we allow $`s_i`$ to take all integer values up to $`N`$. In Eq. (2) the sets are correlated in clusters labeled by $`s`$. The $`s^{\mathrm{th}}`$ cluster is composed of $`n_s`$ sets with internal correlation $`c_s`$, where $$n_s=\sum_{i=1}^{N}\delta _{s_i,s},\qquad c_s=\sum_{i,j=1}^{N}C_{i,j}\delta _{s_i,s}\delta _{s_j,s}.$$ (3) The correlation matrix generated by Eq. (2) for $`D\to \infty `$ is $`C_{i,j}=(g_{s_i}\delta _{s_i,s_j}+\delta _{i,j})/(1+g_{s_i})`$. Its distribution of eigenvalues is simple: to each $`s`$ with $`n_s\ge 1`$ there corresponds one eigenvalue $`\lambda _{s,0}=(1+g_sn_s)/(1+g_s)`$ and $`n_s-1`$ eigenvalues $`\lambda _{s,1}=1/(1+g_s)`$. Hence, large eigenvalues correspond to groups of many ($`n_s\gg 1`$) sets. For finite $`D`$, we expect noise to lift the degeneracies between the $`\lambda _{s,1}`$ but to leave the structure of the large eigenvalues unchanged. In order to fit the data set $`\mathrm{\Xi }`$ with Eq. (2), let us compute the likelihood.
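Before doing so, the eigenvalue structure just described can be checked numerically with a short sketch (our own illustration, with an arbitrary cluster assignment and arbitrary couplings $`g_s`$), which builds the $`D\to \infty `$ correlation matrix of Eq. (2) and compares its spectrum with $`\lambda _{s,0}`$ and $`\lambda _{s,1}`$:

```python
import numpy as np

# Clusters: labels s_i for N = 8 objects and couplings g_s per cluster (arbitrary values).
s = np.array([1, 1, 1, 1, 2, 2, 2, 3])        # cluster 3 has a single member
g = {1: 2.0, 2: 0.5, 3: 1.0}

N = len(s)
C = np.zeros((N, N))
for i in range(N):
    for j in range(N):
        gi = g[s[i]]
        C[i, j] = (gi * (s[i] == s[j]) + (i == j)) / (1.0 + gi)

print("eigenvalues:", np.round(np.linalg.eigvalsh(C), 4))
for label in sorted(set(s.tolist())):
    n_s, g_s = int(np.sum(s == label)), g[label]
    lam0, lam1 = (1 + g_s * n_s) / (1 + g_s), 1.0 / (1 + g_s)
    print(f"cluster {label}: n_s={n_s}, lambda_0={lam0:.4f}, lambda_1={lam1:.4f}")
```

Each cluster with $`n_s>1`$ indeed contributes one large eigenvalue and $`n_s-1`$ degenerate small ones, while an isolated set contributes an eigenvalue equal to one.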
The likelihood is the probability $`P(\mathrm{\Xi }|𝒮,𝒢)`$ of observing the data $`\mathrm{\Xi }`$ as a realisation of Eq. (2) with structure $`𝒮`$ and parameters $`𝒢=\{g_s\}_{s=1}^N`$, and it reads $`P(\mathrm{\Xi }|𝒮,𝒢)=\left\langle \prod_{i=1}^{N}\prod_{d=1}^{D}\delta \left(\xi _i(d)-\frac{\sqrt{g_{s_i}}\eta _{s_i}(d)+ϵ_i(d)}{\sqrt{1+g_{s_i}}}\right)\right\rangle `$ where the average is over all the $`\eta `$ and $`ϵ`$ variables and $`\delta (x)`$ is Dirac’s delta function. Gaussian integration leads to $`P(\mathrm{\Xi }|𝒮,𝒢)\propto e^{-DH\{𝒮,𝒢\}}`$ with $`H\{𝒮,𝒢\}=\frac{1}{2}\sum_s\left[(1+g_s)\left(n_s-\frac{g_sc_s}{1+g_sn_s}\right)-n_s\mathrm{ln}(1+g_s)+\mathrm{ln}(1+g_sn_s)\right]`$. We fix the coupling strengths $`g_s`$ by likelihood maximization, $`\partial H/\partial g_s=0`$ for all $`s`$, which yields $$\widehat{g}_s=\frac{c_s-n_s}{n_s^2-c_s}$$ (4) for $`n_s>1`$, and $`\widehat{g}_s=0`$ for $`n_s\le 1`$. Note that for uncorrelated sets, $`C_{i,j}=\delta _{i,j}`$, we have $`c_s=n_s`$ for all $`s`$ and hence $`\widehat{g}_s=0`$. The coupling strength $`\widehat{g}_s`$ instead diverges for totally correlated sets ($`C_{i,j}=1`$), because $`c_s=n_s^2`$. Using Eq. (4) we find that the likelihood of structure $`𝒮`$ under ansatz (2) takes the form $`P(\mathrm{\Xi }|𝒮)\propto e^{-DH_c}`$, where $$H_c\{𝒮\}=\frac{1}{2}\sum_{s:n_s>0}\left[\mathrm{log}\frac{c_s}{n_s}+(n_s-1)\mathrm{log}\frac{n_s^2-c_s}{n_s^2-n_s}\right]$$ (5) The ground state $`𝒮_0`$ of $`H_c`$ yields the maximum likelihood fit with Eq. (2). This would probably take the ansatz (2) too seriously. In general, it is preferable to consider probabilistic solutions $`P\{𝒮\}`$ and, following ref. , we invoke the maximum entropy principle: among all distributions $`P\{𝒮\}`$ with the same average log-likelihood, we select the one which has maximal entropy. This, as usual, leads to the Gibbs distribution $`P\{𝒮\}\propto e^{-\beta H_c\{𝒮\}}`$, where the inverse temperature $`\beta `$ arises as a Lagrange multiplier. The Hamiltonian $`H_c`$ depends implicitly on the Potts spins $`s_i`$ through the cluster variables $`n_s`$ and $`c_s`$ of Eq. (3). Unlike the Potts Hamiltonian $`H_q`$, the dependence on $`\delta _{s_i,s_j}`$ is non-linear and it is modulated by $`C_{i,j}`$. For $`s_i\ne s_j`$ for all $`i\ne j`$ we have $`n_s=c_s=1`$ for all $`s`$ and hence $`H_c=0`$. This state is representative of the high temperature ($`\beta \to 0`$) limit. The low temperature physics of $`H_c`$ is instead non-trivially related to the correlation matrix $`C_{i,j}`$. Note that the ferromagnetic state $`s_i=1`$ for all $`i`$, which dominates as $`\beta \to \infty `$ in clustering methods based on Potts models, is in general not the ground state of $`H_c`$. Intuitively we expect that, if the model of Eq. (2) is reasonable, $`H_c`$ should have a well-defined ground state and a low temperature phase which is energetically dominated by this state. In these cases, as in ref. , we expect a thermal phase transition. In order to study the properties of $`H_c`$ we resort to the Monte Carlo (MC) method with the Metropolis algorithm. This, at equilibration, allows us to sample the Gibbs distribution $`P\{𝒮\}`$ and compute average quantities, such as the internal energy $`E_\beta =\langle H_c\rangle _\beta `$, where $`\langle \cdot \rangle _\beta `$ stands for the thermal average.
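To make the construction concrete, the sketch below (an illustration of ours, not the authors' code) evaluates the cluster variables $`n_s`$, $`c_s`$, the maximum-likelihood couplings $`\widehat{g}_s`$ of Eq. (4) and the cost function $`H_c\{𝒮\}`$ of Eq. (5) for a given configuration, and samples $`P\{𝒮\}\propto e^{-\beta H_c}`$ with a simple Metropolis scheme in which a randomly chosen spin is moved to a random label and the move is accepted with probability $`\mathrm{min}(1,e^{-\beta \mathrm{\Delta }H_c})`$. The small test matrix and the parameter values are arbitrary.

```python
import numpy as np

def cluster_hamiltonian(C, s):
    """H_c{S} of Eq. (5) and the maximum-likelihood couplings g_hat of Eq. (4).

    C : (N, N) empirical correlation matrix with unit diagonal.
    s : length-N integer array of Potts labels s_i.
    """
    H, g_hat = 0.0, {}
    for label in np.unique(s):
        idx = np.where(s == label)[0]
        n_s = len(idx)
        c_s = C[np.ix_(idx, idx)].sum()            # internal correlation, Eq. (3)
        if n_s > 1 and n_s < c_s < n_s ** 2:       # genuinely correlated cluster
            g_hat[int(label)] = (c_s - n_s) / (n_s ** 2 - c_s)
            H += 0.5 * (np.log(c_s / n_s)
                        + (n_s - 1) * np.log((n_s ** 2 - c_s) / (n_s ** 2 - n_s)))
        else:                                      # singleton or uncorrelated cluster
            g_hat[int(label)] = 0.0
    return H, g_hat

def metropolis(C, beta, n_steps=20000, seed=0):
    """Sample P{S} ~ exp(-beta * H_c{S}) by single-spin Metropolis updates."""
    rng = np.random.default_rng(seed)
    N = C.shape[0]
    s = np.arange(1, N + 1)                        # start from the all-singletons state
    H, _ = cluster_hamiltonian(C, s)
    for _ in range(n_steps):
        i = rng.integers(N)
        old = s[i]
        s[i] = rng.integers(1, N + 1)              # propose a new label for spin i
        H_new, _ = cluster_hamiltonian(C, s)
        if H_new <= H or rng.random() < np.exp(-beta * (H_new - H)):
            H = H_new                              # accept
        else:
            s[i] = old                             # reject
    return s, H

# Tiny test case: two correlated pairs plus one isolated series.
C = np.array([[1.0, 0.6, 0.0, 0.0, 0.0],
              [0.6, 1.0, 0.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.7, 0.0],
              [0.0, 0.0, 0.7, 1.0, 0.0],
              [0.0, 0.0, 0.0, 0.0, 1.0]])
print("H_c of the true partition:", round(cluster_hamiltonian(C, np.array([1, 1, 2, 2, 3]))[0], 3))
s_mc, H_mc = metropolis(C, beta=50.0)
print("Metropolis at beta = 50  :", s_mc, round(H_mc, 3))
```

In a real application $`C`$ would be the $`443\times 443`$ S&P500 correlation matrix, and one would also track $`E_\beta `$, $`\delta E_\beta ^2`$ and the autocorrelation $`\chi `$ of Eq. (6) along the run.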
In order to detect the occurrence of spontaneous magnetization – which occurs if the $`s_i`$ lock into energetically favourable configurations at low temperature – we measure the autocorrelation function $$\chi (t,\tau )=\frac{\sum_{i<j}\delta _{s_i(t),s_j(t)}\delta _{s_i(t+\tau ),s_j(t+\tau )}}{\sum_{i<j}\delta _{s_i(t),s_j(t)}}.$$ (6) This quantity tells us what fraction of the pairs of sites belonging to the same cluster at time $`t`$ are still found in the same cluster after $`\tau `$ MC steps. For $`t`$ large enough, $`\chi `$ becomes a function of $`\tau `$ only. This function decreases rapidly to a plateau value $`\chi _\beta =\langle \chi (t,\tau )\rangle _\beta `$ for $`t\gg \tau \gg 1`$. Clearly $`\chi _\beta \simeq 0`$ implies that no persistent structure is present whereas, at the other extreme, $`\chi _\beta =1`$ implies that all sites are locked in a persistent structure of clusters. We monitored these quantities for three different data sets, all composed of $`N=443`$ time series: 1) uncorrelated time series; 2) the time series of daily returns of the assets composing the S&P500 index; 3) correlated time series generated by Eq. (2) with given $`s_i=s_i^{}`$ and $`g_s=g_s^{}`$. The first and the third data sets serve to test the method in cases where we know the answer. Let us start with the truly uncorrelated time series with $`D=1599`$. We compute $`C_{i,j}`$ and study the corresponding Hamiltonian $`H_c`$ by the MC method. We do not expect any clustering to emerge in this case. Indeed, the internal energy $`E_\beta `$ stays very close to $`0`$ (see Fig. 2a) for all values of $`\beta `$ investigated, up to $`\beta =512`$. Correspondingly, no persistent cluster arises, i.e. $`\chi _\beta \simeq 0`$. The results change when turning to correlated data. Let us first discuss the S&P500 data for $`D=1599`$: as Fig. 2a shows, for $`\beta \gtrsim 20`$ the energy $`E_\beta `$ starts deviating significantly from zero. For $`\beta >20`$ persistent clusters are present: $`\chi _\beta `$ rises rapidly from zero and has a maximum at $`\beta \simeq 40`$ (see Fig. 2b). The energy fluctuations reported in the inset show a broad peak, marking the onset of an ordered low temperature phase. As $`\beta `$ increases the dynamics is significantly slowed down. At $`\beta \simeq 200`$ the energy reaches a minimal value $`E_\beta \simeq -0.11N`$ and does not decrease significantly upon further increasing $`\beta `$, at least up to $`\beta =4095`$. This energy is smaller than that of the ferromagnetic state ($`E_f=-0.086N`$), with all sets in the same cluster. In this range of temperatures the system visits only a few configurations. The statistical properties of the cluster configurations, as $`\beta `$ varies, are shown in Fig. 3. For small $`\beta `$ only small clusters survive thermal fluctuations. As $`\beta `$ increases a distribution of cluster sizes develops. At low temperatures the rank-order plot of $`n_s`$ reveals a broad distribution of clusters, with the largest aggregating more than $`190`$ sets. By a power-law fit of this distribution, we find that the number of clusters with more than $`n`$ sets decays as $`n^{-0.83}`$. The scatter plot of $`c_s`$ versus $`n_s`$ also reveals a non-trivial power-law dependence, $`c_s\sim n_s^{1.66}`$. This gives a statistical characterization of the dominant configurations of clusters at low energy. For $`D=400`$ we find two transitions, at $`\beta _1\simeq 20`$ and at $`\beta _2\simeq 80`$, which are signalled by a bending of the $`E_\beta `$ curve and by peaks in the $`\delta E_\beta ^2`$ vs $`\beta `$ plot. At the first temperature clusters start to appear.
For $`\beta <\beta _2`$ the largest cluster groups less than $`30`$ sets, and for $`\beta >\beta _2`$ larger clusters with $`n_s\simeq 100`$ appear. This hints at a time dependence of the correlations, which are averaged out in the $`D=1599`$ data set. For even shorter time series we found that sampling errors, acting like a temperature, destroy large clusters, and only relatively small clusters ($`n_s<40`$ for $`D=60`$) were found. We build a synthetic correlated data set of $`D=1599`$ points using Eq. (2) with a structure $`𝒮=𝒮^{}`$ and parameters $`𝒢=𝒢^{}`$. The structure $`𝒮^{}`$ is a typical low energy configuration for the S&P500 data set with $`D=1599`$. The parameters $`g_s^{}`$ were deduced from the $`n_s`$ and $`c_s`$ of this configuration via Eq. (4). The distribution of eigenvalues of $`C_{i,j}`$ is shown in Fig. 1a. This data set is useful for at least two reasons: first, it allows one to understand to what extent a structure of correlation put in by hand, with the form dictated by Eq. (2), can be correctly recovered. Second, it allows us to compare the results found for the S&P500 data with those of a time series of a similar nature whose correlations are described by Eq. (2). For $`\beta <150`$, the behaviors of $`E_\beta `$, $`\delta E_\beta ^2`$ and $`\chi _\beta `$ are similar to those found for the S&P500 data (see Fig. 2). A second, sharp peak in $`\delta E_\beta ^2`$ at $`\beta \simeq 170`$ signals a new clustering phase transition. Below this temperature, as shown by the plot of $`\chi _\beta `$ (Fig. 2b), the MC dynamics freezes into the original structure $`𝒮^{}`$. The overlap with the configuration $`𝒮^{}`$, defined as in Eq. (6) as the fraction of “bonds” $`s_i=s_j`$ for which $`s_i^{}=s_j^{}`$, quickly converges to $`1`$ (see Fig. 2b) for the synthetic time series, whereas it remains around $`60\%`$ for the S&P500 data set. This, on the one hand, means that the original structure $`𝒮^{}`$ can be recovered quite efficiently. On the other hand, it suggests that several cluster configurations compete at low temperatures in the S&P500 data set. Eq. (2) with a single cluster configuration ($`\beta \to \infty `$) is inadequate to capture the full complexity of the correlations in the S&P500 data set. Probabilistic clustering, where several cluster structures $`𝒮`$ are allowed with their Gibbs probability $`P\{𝒮\}`$ (and finite $`\beta `$), provides a much better approximation. At finite $`\beta `$ each set $`i`$ may belong to several clusters, and we can measure the corresponding coupling strength $`g_{s,i}(\beta )=\langle \widehat{g}_s\delta _{s,s_i}\rangle _\beta `$. Taking these as the parameters of the generalized model $$\xi _i(d)=\frac{\sum_s\sqrt{g_{s,i}(\beta )}\eta _s(d)+ϵ_i(d)}{\sqrt{1+\sum_sg_{s,i}(\beta )}},$$ (7) we can build an artificial time series $`\stackrel{}{\xi }_i`$ and compute the correlation matrix $`C_{i,j}^{(\beta )}(D)`$. Here $`\beta `$ is a free parameter which can be adjusted to “fit” the S&P500 correlation matrix. The eigenvalue spectra of the two matrices are compared in Fig. 1b for $`\beta =48`$. The value of $`\beta `$ was chosen by visual inspection as the one giving the best fit. The curves are remarkably close, suggesting that Eq. (7) provides a good statistical description of the correlations among assets. Fig. 1b also shows the noise-undressed matrix $`C_{i,j}^{(\beta )}(\infty )`$, which allows one to appreciate the effect of noise dressing. As expected, noise mainly affects the small eigenvalues.
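The construction of such a synthetic data set, and the noise-dressing introduced by a finite $`D`$, can be illustrated with the following sketch (the cluster structure and couplings below are arbitrary illustrative values, not the $`𝒮^{}`$ and $`g_s^{}`$ used in the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
# An arbitrary cluster structure S* and couplings g* (illustrative values only).
s_star = np.repeat([1, 2, 3], [10, 6, 4])      # N = 20 series in three clusters
g_star = {1: 1.5, 2: 0.8, 3: 0.3}
N, D = len(s_star), 1599

# Eq. (2): xi_i(d) = (sqrt(g_s) eta_s(d) + eps_i(d)) / sqrt(1 + g_s), with s = s_i.
eta = {lbl: rng.standard_normal(D) for lbl in g_star}
eps = rng.standard_normal((N, D))
xi = np.array([(np.sqrt(g_star[s_star[i]]) * eta[s_star[i]] + eps[i])
               / np.sqrt(1.0 + g_star[s_star[i]]) for i in range(N)])
xi = (xi - xi.mean(axis=1, keepdims=True)) / xi.std(axis=1, keepdims=True)

# Empirical (noise-dressed) correlation matrix, Eq. (1), and its D -> infinity limit.
C_D = xi @ xi.T / D
C_inf = np.array([[(g_star[s_star[i]] * (s_star[i] == s_star[j]) + (i == j))
                   / (1.0 + g_star[s_star[i]]) for j in range(N)] for i in range(N)])

print("dressed top eigenvalues  :", np.round(np.sort(np.linalg.eigvalsh(C_D))[-4:], 3))
print("undressed top eigenvalues:", np.round(np.sort(np.linalg.eigvalsh(C_inf))[-4:], 3))
```

The large eigenvalues of the dressed matrix track the undressed ones, while the small eigenvalues are smeared by the finite-sample noise, in line with the behaviour described in the text.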
The applicability of the method can be extended considerably to a generic data set $`\{\stackrel{}{x}_i\}_{i=1}^N`$. The $`\stackrel{}{x}_i`$ need not be time series. The distribution of $`x_i(d)`$ need not be Gaussian, and it does not even need to be the same across $`i`$. For example, $`x_i(d)`$ may be the measure of the $`d^{\mathrm{th}}`$ feature of the $`i^{\mathrm{th}}`$ object, or the concentration of species $`i`$ in the $`d^{\mathrm{th}}`$ sample of an experiment. The idea is to map the data set $`\stackrel{}{x}_i`$ into a Gaussian time series $`\stackrel{}{\xi }_i`$ to which we apply Eq. (2). The mapping results from requiring that the non-parametric cross-correlations are preserved, $`\tau _{i,j}^x=\tau _{i,j}^\xi `$. To do this in practice we compute Kendall’s $`\tau `$ for the $`\stackrel{}{x}_i`$ data sets: $`\tau _{i,j}^x=\langle \text{sign}[x_i(d)-x_i(d^{})]\text{sign}[x_j(d)-x_j(d^{})]\rangle _{d<d^{}}`$. We note that, for two infinite Gaussian time series with correlation $`c`$, we have $`\tau =\frac{2}{\pi }\left[\mathrm{tan}^{-1}\sqrt{\frac{1+c}{1-c}}-\mathrm{tan}^{-1}\sqrt{\frac{1-c}{1+c}}\right]=\frac{2}{\pi }\mathrm{arcsin}(c)`$. Inverting this relation, $`c=\mathrm{sin}(\pi \tau /2)`$, we find the correlation $`c=C_{i,j}`$ as a function of $`\tau =\tau _{i,j}^\xi =\tau _{i,j}^x`$. This allows us to build the Hamiltonian, which can then be studied. With respect to ref. , our approach does not need any assumption on the form of the Hamiltonian. As input, the method only needs the correlation matrix $`C_{i,j}`$ (or $`\tau _{i,j}`$). The range of the interactions is set by the correlations themselves. For small $`D`$, the local interaction of ref. may well be more efficient in capturing the structure of the data. Our method is most useful in cases where $`D\gg N\gg 1`$. These ideas can clearly be extended to models of correlations different from Eq. (2). I acknowledge R. Zecchina, R. Pastor-Satorras and L. Giada for interesting discussions and R. N. Mantegna for providing the S&P500 data.
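As a practical illustration of this mapping, the sketch below (our own, using a plain quadratic-time estimator of Kendall's $`\tau `$ and synthetic non-Gaussian data; a production implementation would use a faster estimator) computes $`\tau `$ for two dependent series and converts it to the equivalent Gaussian correlation $`c=\mathrm{sin}(\pi \tau /2)`$:

```python
import numpy as np
from itertools import combinations

def kendall_tau(x, y):
    """Plain O(D^2) Kendall's tau of two sequences."""
    s = 0.0
    pairs = list(combinations(range(len(x)), 2))
    for d, dp in pairs:
        s += np.sign(x[d] - x[dp]) * np.sign(y[d] - y[dp])
    return s / len(pairs)

def gaussian_correlation(tau):
    """Invert tau = (2/pi) * arcsin(c) for the equivalent Gaussian correlation."""
    return np.sin(0.5 * np.pi * tau)

# Example: strongly dependent but non-Gaussian series (exponentiated noise).
rng = np.random.default_rng(2)
z = rng.standard_normal(500)
x = np.exp(z + 0.3 * rng.standard_normal(500))
y = np.exp(z + 0.3 * rng.standard_normal(500))
tau = kendall_tau(x, y)
print("tau =", round(tau, 3), "  mapped Gaussian correlation c =", round(float(gaussian_correlation(tau)), 3))
```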
no-problem/0003/astro-ph0003035.html
ar5iv
text
# J-BAND INFRARED SPECTROSCOPY OF A SAMPLE OF BROWN DWARFS USING NIRSPEC ON KECK II 1footnote 11footnote 1 Data presented herein were obtained at the W.M. Keck Observatory, which is operated as a scientific partnership among the California Institute of Technology, the University of California and the National Aeronautics and Space Administration. The Observatory was made possible by the generous financial support of the W.M. Keck Foundation. ## 1 Introduction After eluding undisputed detection for many years, numerous brown dwarfs – objects with sub-stellar mass – are now known. While some candidates were discovered in small-scale surveys of young nearby clusters, such as the Pleiades and Hyades, or as companions to low-mass stars, the biggest breakthrough has come as a result of large-scale surveys such as the Deep Near-Infrared Sky (DENIS) survey (Delfosse et al. 1997), 2MASS, the 2-Micron All-Sky Survey (Skrutskie et al. 1997, Kirkpatrick et al. 1999) and the Sloan Digital Sky Survey (SDSS) (Strauss et al. 1999). Recently, using optical (CCD) spectroscopy, Kirkpatrick et al. (1999) have defined a new spectral class, L-dwarfs, in which the metallic oxides (such as TiO and VO) found in M stars lose their dominance to metallic hydrides (such as FeH and CrH). The temperature range for the L class is given by Kirkpatrick et al. (2000) from about 2000 K for L0 to about 1250 K for L8, whereas Martin et al. (1999) suggest a range from 2200-1600 K. Depending on age and model calculations, Kirkpatrick et al. (1999) argue that at least one third of the L-dwarf objects must be brown dwarfs and perhaps all are. Spectral classification is based on spectroscopy between 6500 and 10000 Å at a resolution of 9Å (R $``$1000). While use of this spectral region provides many important spectral diagnostics, it suffers from the fact that L- and T-dwarfs are extremely faint at these wavelengths and therefore long exposures on very large aperture telescopes are required to obtain spectra with good signal-to-noise ratios. Typical I-band magnitudes are about 19 or fainter (e.g. GD165B), but a gain of 3-6 magnitudes can be obtained in going to the near infrared. Using earlier generations of infrared instruments, previous observations of individual brown dwarf candidates have yielded a typical resolving power of about R=500-1000; see the observations of Geballe et al. (1996), Ruiz, Leggett and Allard (1997), Tinney, Delfosse and Forveille (1997), Kirkpatrick et al. (1999) and Strauss et al. (1999). These pioneering efforts were accomplished with instruments using an earlier generation of IR detector arrays, with at most 256 x 256 pixels. This resolution is sufficient to reveal the major differences that set apart the L-dwarfs and T-dwarfs from warmer stars, e.g., the presence of deep steam bands and strong methane bands in the L- and T-dwarfs respectively. Kirkpatrick et al. (1993) modeled a spectral sequence of M-dwarfs using spectroscopy from 0.6 - 1.5 microns and identified the major bands and atomic features. Jones et al. (1996) performed a similar analysis from 1.16 - 1.22 microns with a sample of M dwarfs which also included GD165B. An excellent review of model atmospheres of very low mass stars and brown dwarfs is given by Allard et al. (1997). In this paper we report observations using NIRSPEC, a new cryogenic infrared spectrograph on the Keck II telescope employing a 1024 x 1024 InSb array. 
A consistent set of J-band spectra with R$``$2500 is presented which, for the first time, allows a detailed comparison of the near-infrared features of the spectral sequence from early L-dwarfs to T-dwarfs. Our targets were selected from the list of L-dwarfs published by Kirkpatrick et al. (1999) and supplemented with new sources discovered more recently by the 2MASS (Kirkpatrick et al. , 2000). One of these objects is reported as being the closest known L-dwarf to date and another is likely the coolest L-dwarf discovered thus far. ## 2 Observations Table 1 lists the objects observed and provides a summary of their photometric properties and spectral classification based on the far-optical spectroscopy by Kirkpatrick et al. (1999). As part of the “first light” scientific commissioning of the NIRSPEC spectrograph at the W. M. Keck Observatory on Mauna Kea, Hawaii, near-infrared spectra of this sample were obtained. Kelu-1 (Ruiz et al. , 1997), was observed on April 29, 1999, but all of the other sources were observed on June 2, 1999. Since detailed descriptions of the design and performance of NIRSPEC are given elsewhere, (McLean et al. 1998, McLean et al. 2000), only a short summary is included here. Briefly, this cryogenic instrument is the world’s first facility-class infrared spectrograph employing the state-of-the-art 1024 x 1024 InSb array. For the highest spectral resolution work, a cross-dispersed echelle grating is used which yields R=25000 for a 0.43<sup>′′</sup> wide slit, corresponding to 3-pixels along the dispersion direction. A much lower resolution mode can be obtained simply by replacing the echelle grating with a flat mirror and using the cross-dispersion grating alone. The spectral resolution in this mode is R $``$ 2500 for a 2-pixel wide slit (corresponding to 0.38<sup>′′</sup> in this case). For the present study, the lower resolution mode was selected for speed and efficiency. The goal was to obtain a spectral sequence of L-dwarfs with good signal-to-noise ratios from about 1-2.5 $`\mu `$m. Only the J-band results, covering the interesting range from 1.135 – 1.362 $`\mu `$m are discussed in this short note. As shown in Table 1, the J magnitudes of the sample range from 12.8 to 16.3. All objects received the same total exposure time of 600 seconds. The observing strategy employed was to obtain a 300 s integration at each of two positions along the entrance slit separated by about 20<sup>′′</sup> , referred to as a nodded pair. Seeing conditions were generally very good for these measurements (0.3<sup>′′</sup>– 0.5<sup>′′</sup>) and a slit width of 0.38<sup>′′</sup> was used in all cases. To calibrate for absorption due to the Earth’s atmosphere, stars of spectral type A0 V to A2 V were observed as close to the same airmass as possible (typically within 0.05 airmasses, except for 2MASSW J1632+1904 for which the difference was 0.28) and also close in time. The J-band is sensitive to atmospheric extinction due mainly to water vapor absorption. A-type stars are essentially featureless in this region except for the Paschen Beta line at 1.2816 $`\mu `$m, which can be interpolated out. Immediately after the observation of each source, both neon and argon arc lamp spectra were obtained for wavelength calibration, and a white-light spectrum was recorded for flat-fielding. Reduction of the data followed the steps set out below. The first requirement is to place the raw data on a uniform grid of wavelength and position along the slit. 
Using custom software developed by one of us (James Larkin) the spatial distortions were corrected first. A spatial map was formed by using the sum of the nodded pair of standard star spectra with the assumption that the pair of spectra must be exactly a fixed number of pixels apart. Next, the arc lamp spectra were used to construct a spectral map to warp the raw data onto a uniform wavelength scale using a second order polynomial fit. Next, the A-type calibration star was reduced by forming the difference image, warping it with the spatial and spectral mapping routines, dividing by the normalized flat field lamp, shifting and co-adding the pair of spectra at the two slit positions and then extracting the resultant spectrum. Division with a blackbody spectrum for the temperature corresponding to the star’s published spectral class completed the reduction of the standard star. Finally, the Paschen Beta absorption line at 1.2816 $`\mu `$m was removed by interpolation from the reduced spectra before it was used for division into the corresponding object spectra. Similar steps were applied to the raw data frames of the target sources. After rectification and flat-field correction, each spectrum was extracted and divided by the fully-reduced spectrum of its associated calibration star. Finally, the nodded pair of reduced spectra were shifted, co-added together and extracted to give the resultant calibrated spectrum of each source. The results are shown in Figure 1. Note that this entire set of new infrared spectra for a sample of seven optically-faint, low-mass objects represents a total of only 70 minutes of on-source observing time, comparable to the exposure time per object needed with other instruments or at shorter wavelengths. ## 3 Results Clearly, the strongest atomic line transitions in this wavelength region are the pair of neutral potassium (K I) lines at 1.1690, 1.1770 $`\mu `$m and 1.2432, 1.2522 $`\mu `$m respectively. The first pair correspond to the multiplet designation 4p <sup>2</sup>P<sup>o</sup> \- 3d <sup>2</sup>D, and the second pair are from the 4p <sup>2</sup>P<sup>o</sup> \- 5s <sup>2</sup>S multiplet. The dominant molecular species in the L-dwarfs in this band are H<sub>2</sub>O and iron hydride (FeH), with methane (CH<sub>4</sub>) appearing in the T-dwarf. The strongest FeH bands are expected at 1.194, 1.21 and 1.237 $`\mu `$m approximately. A pair of sodium lines, the 3p <sup>2</sup>P<sup>o</sup> \- 4s <sup>2</sup>S multiplet, can just be detected buried in the water absorption at 1.138 and 1.141 $`\mu `$m and we report the detection of a weak rubidium line at 1.3233 $`\mu `$m (5p <sup>2</sup>P<sup>o</sup> \- 6s <sup>2</sup>S) as well as the cesium line at 1.3588 $`\mu `$m (6p <sup>2</sup>P<sup>o</sup> \- 7s <sup>2</sup>S) in the spectrum of 2MASSW J1507. Many other features are evident however, and even the smallest spectral structures are real and above the noise level. Distinctive patterns of lines around 1.25-1.33 $`\mu `$m repeat from object to object among the earlier spectral types but fade out in the later spectral classes. Comparison of the region from 1.16–1.22 $`\mu `$m in our L-dwarf spectra with the same region studied in M-dwarfs by Jones et al. 1996 (see their Fig. 2 for Gl 406) using CGS4 on UKIRT, reveals excellent detailed agreement. Although the spectra are still heavily blended even at R$``$2500, we have extracted equivalent widths and full widths at the base of each line for the four K I transitions. 
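The equivalent widths just mentioned, and the $`\mathrm{H}_2\mathrm{O}`$ flux-ratio index introduced in the next paragraph, follow from standard definitions; a sketch of how they can be measured on a one-dimensional spectrum (the wavelength grid, the synthetic Gaussian line and the continuum windows below are placeholders, not NIRSPEC data) is:

```python
import numpy as np

def equivalent_width(wave, flux, line_lo, line_hi, cont_lo, cont_hi):
    """EW in the same units as `wave`, using a linear pseudo-continuum."""
    cont_mask = ((wave > cont_lo[0]) & (wave < cont_lo[1])) | \
                ((wave > cont_hi[0]) & (wave < cont_hi[1]))
    coeffs = np.polyfit(wave[cont_mask], flux[cont_mask], 1)
    continuum = np.polyval(coeffs, wave)
    line = (wave > line_lo) & (wave < line_hi)
    dl = wave[1] - wave[0]                       # uniform wavelength step assumed
    return np.sum(1.0 - flux[line] / continuum[line]) * dl

def water_index(wave, flux):
    """Ratio of the flux near 1.33 um to that near 1.27 um."""
    f133 = np.median(flux[np.abs(wave - 1.33) < 0.005])
    f127 = np.median(flux[np.abs(wave - 1.27) < 0.005])
    return f133 / f127

# Synthetic spectrum with an absorption line at the 1.2522 um K I position.
wave = np.linspace(1.135, 1.362, 2048)
flux = 1.0 - 0.4 * np.exp(-0.5 * ((wave - 1.2522) / 0.002) ** 2)
ew_um = equivalent_width(wave, flux, 1.245, 1.259, (1.235, 1.243), (1.261, 1.269))
print("EW =", round(ew_um * 1e4, 1), "Angstrom;  H2O index =", round(water_index(wave, flux), 2))
```

On real, heavily blended spectra the choice of the pseudo-continuum windows dominates the uncertainty of such measurements, which is why the full widths at the base of the lines are quoted alongside the equivalent widths.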
We have also constructed an index to measure the strength of the H<sub>2</sub>O absorption using the ratio of the flux at 1.33 $`\mu `$m to that at 1.27 $`\mu `$m. The results are summarized in Table 2. The equivalent width of the K I lines changes very slowly from an average of about 7 Å across the L-dwarfs to about 12 Å in the T-dwarf SDSS1624, although the line depth decreases significantly. This behavior is due to line broadening. The full width at the base of these lines increases from about 40Å in the L-dwarfs to over 80 Å in the T-dwarf. The water index slowly strengthens from about 0.6 to 0.4 through the L-dwarfs as the absorption at 1.33 $`\mu `$m becomes deeper and then drops markedly to about 0.04 in the T-dwarf. ## 4 Discussion The variation in the J-band spectra of this sample of objects is quite remarkable and it is relatively easy to place the objects in a temperature sequence. The water band strengthens as the temperature decreases. FeH weakens and then disappears, the K I lines weaken and broaden and the continuum around 1.15 $`\mu `$m slowly drops relative to the continuum at 1.26 $`\mu `$m. As expected, the DENIS L5 source and the 2MASS L5 object exhibit almost identical spectral characteristics. By ordering the spectra according to the classifications given by Kirkpatrick et al. (1999, 2000), with the earliest spectral type (L2) at the top, the following trends are apparent in the J-band spectra. L2 (Kelu-1): strong K I and FeH lines are superimposed on a larger depression across the region, which is perhaps the result of residual oxide (either TiO and/or VO) absorption; VO is expected around 1.19 $`\mu `$m. L4 (GD165B): any residual oxide absorption has gone, effectively raising the continuum to produce a flatter spectrum, and making the K I and FeH features appear stronger although they are expected to decrease with decreasing temperature. The water absorption (steam) band at 1.30 $`\mu `$m is increasing in strength. Numerous small features from 1.25–1.30 $`\mu `$m closely match those in Kelu-1. L5 (2MASSW J1507-1627): all features present in the L4 class remain. The K I and FeH features are very slightly weaker, while the water band at 1.30 microns is deeper than before and there is a slight slope of the continuum towards the blue end. L5 (DENIS-P J1228-1547): this spectrum is almost the same as the previous one, confirming that they are indeed the same spectral class. L8 (2MASSW J1623+1904): at L8, the FeH features have disappeared and the depth of the K I lines are significantly weaker but there is evidence of broadening in their wings. There is a slight downward slope of the continuum towards the blue. The steam band is relatively stronger. L8/9 (2MASSW J1523+3014): Very similar to the previous L8, but the K I lines appear slightly broader and the water band is slightly deeper. The slope to the blue is a little stronger than in L8. Consequently, this object may be cooler than 2MASSW J1623+1904 as its designation suggests, but the difference is small. T (SDSS 1624+0029): A dramatic slope towards the blue appears, due to the onset of methane absorption in this wavelength region, and there is also a slope or “break” towards the red from about 1.26–1.31 $`\mu `$m before a deep water band sets in. The K I lines are still present but are now very broad. As an illustration of the density of molecular features and the problem of line blending, Figure 2 shows a model spectrum kindly provided by Peter Hauschildt (private communication). 
This sample spectrum is based on a model atmosphere code (AMES-Dusty, Allard & Hauschildt. in prep), with a self-consistent treatment for dust formation. The treatment of dust is complicated however, and theorists have yet to agree on the best approach. The parameters of this model are solar metallicity, log(g)=4.5 and T<sub>eff</sub>=2000 K and the model spectrum was smoothed from an original resolution of R=50,000. Qualitatively, the agreement with the NIRSPEC spectra is very good. Another useful framework for understanding these spectra is the molecular equilibrium calculations by Burrows and Sharp (1999). As their analysis shows, the main absorbers characteristic of M stars (e.g.TiO and VO) decline rapidly in importance with decreasing effective temperature. These molecules are expected to condense onto dust grains; TiO for instance forms perovskite (CaTiO<sub>3</sub>). The abundance of gaseous TiO begins to decrease around 2400 K and similarly, VO will become depleted near 1800 K. For iron, the first condensate to form is the metal itself, at about 2200 K, which can then form droplets and rain out of the atmosphere. We have carefully compared our spectra to the solar atlas and cannot make any conclusive identifications with iron lines, or any other metal lines (such as Mn and Al) among the dense forest of H<sub>2</sub>O transitions. Interestingly, Jones et al. 1996 noted the presence of Fe in earlier spectral types, such as the M6 dwarf GL406, at a comparable resolution. A significant amount of iron may have rained out. Since they are less refractory and survive in monatomic form for a greater temperature range, the neutral alkali metals (Na, K, Rb, Cs) are expected to remain after the true metals become depleted. In effect, as the temperature falls the atmospheres of cool sub-stellar objects become more transparent. The column density of potassium and sodium, for instance, is expected to increase to the point where the wings of the absorption lines become damped. This result explains the strength, broadening and temperature dependence of the K I lines seen in our spectra. According to Burrows and Sharp (1999), sodium and potassium should become depleted around 1500 - 1200 K, with sodium disappearing first and potassium forming into KCl below about 1200 K. If there is settling of refractory species however, at higher, deeper temperatures, then both atomic sodium and potassium are expected to persist to lower temperatures, at which point they should form their sulfides, not chlorides (see Burrows, Marley and Sharp 1999, and Lodders 1999). Figure 1 shows that the very strong K I lines persist, albeit with broad wings, well into the T-dwarf temperature range. Some features apparent in the new data are not yet explained by the existing models. For example, a broad, relatively strong feature is seen in our spectra at 1.22 $`\mu `$m. This feature remains through L5, but is gone in the L8 spectra. Although this is the same pattern as followed by FeH, this broad feature does not appear in the opacity plot of FeH kindly supplied by Adam Burrows (private communication), nor in the model spectrum provided by Peter Hauschildt. Finally, our results imply that any L- or T- dwarf object meeting the discovery parameters of the 2MASS and/or the SDSS can be observed spectroscopically with NIRSPEC on Keck at medium to high spectral resolution. 
The near-infrared region from 1.13 - 1.36 $`\mu `$m is quite rich in spectral features, most of which appear to be unresolved blends of molecular species, namely H<sub>2</sub>O and FeH in the L-dwarfs and CH<sub>4</sub> in the T-dwarfs. Evidently, even higher spectral resolution would help to constrain the models. It is a pleasure to acknowledge the hard work of past and present members of the NIRSPEC instrument team at UCLA: Maryanne Anglionto, Odvar Bendiksen, George Brims, Leah Buchholz, John Canfield, Kim Chim, Jonah Hare, Fred Lacayanga, Samuel B. Larson, Tim Liu, Nick Magnone, Gunnar Skulason, Michael Spencer, Jason Weiss, Woon Wong. In addition we thank director Fred Chaffee, CARA instrument specialist Thomas A. Bida, and the Observing Assistants at Keck Observatory, Joel Aycock, Gary Puniwai, Charles Sorenson, Ron Quick, and Wayne Wack, for their support. We are pleased to acknowledge the International Gemini Telescopes Project for the InSb detector used in these measurements. Finally, we gratefully acknowledge Adam Burrows and Peter Hauschildt for very helpful information and advice about the current model atmospheres of low-mass stars and sub-stellar objects.
# 𝑑-wave superconductivity and Pomeranchuk instability in the two-dimensional Hubbard model ## Abstract We present a systematic stability analysis for the two-dimensional Hubbard model, which is based on a new renormalization group method for interacting Fermi systems. The flow of effective interactions and susceptibilities confirms the expected existence of a $`d`$-wave pairing instability driven by antiferromagnetic spin fluctuations. More unexpectedly, we find that strong forward scattering interactions develop which may lead to a Pomeranchuk instability breaking the tetragonal symmetry of the Fermi surface. PACS: 71.10.Fd, 71.10.-w, 74.20.Mn The two-dimensional Hubbard model has attracted much interest as a promising prototype model for the electronic degrees of freedom in the copper-oxide planes of high-temperature superconductors, since it has an antiferromagnetically ordered ground state at half-filling and is expected to become a $`d`$-wave superconductor for slightly smaller electron concentrations . Although the Coulomb interaction in the cuprate superconductors is rather strong, the tendency towards antiferromagnetism and $`d`$-wave pairing is captured already by the 2D Hubbard model at weak coupling. Conventional perturbation theory breaks down for densities close to half-filling, where competing infrared divergences appear as a consequence of Fermi surface nesting and van Hove singularities . A controlled and unbiased treatment of these divergencies cannot be achieved by standard resummations of Feynman diagrams, but requires a renormalization group (RG) analysis which takes into account the particle-particle and particle-hole channels on an equal footing. Early RG studies of the two-dimensional Hubbard model started with simple but ingenious scaling approaches, very shortly after the discovery of high-$`T_c`$ superconductivity . These studies focussed on dominant scattering processes between van Hove points in k-space, for which a small number of running couplings could be defined and computed on 1-loop level. Spin-density and superconducting instabilities where identified from divergencies of the corresponding correlation functions. A major complication in two-dimensional systems compared to one dimension is that the effective interactions cannot be parameterized accurately by a small number of running couplings, even if irrelevant momentum and energy dependences are neglected, since the tangential momentum dependence of effective interactions along the Fermi surface is strong and important in the low-energy limit. This has been demonstrated in particular in a 1-loop RG study for a model system with two parallel flat Fermi surface pieces . Zanchi and Schulz have recently shown how modern functional renormalization group methods can be used to treat the full tangential momentum dependence of effective interactions for arbitrary curved Fermi surfaces. Most recently, Salmhofer has derived an improved version of this field theoretic approach. The resulting flow equations are particularly suitable for a concrete numerical evaluation. To compute physical instabilities, we have derived the corresponding flow equations for susceptibilities . In this letter we present results for the flow of susceptibilities as obtained by applying Salmhofer’s renormalization group method to the two-dimensional Hubbard model with nearest and next-nearest neighbor hopping on a square lattice. 
The expected existence of a $`d`$-wave pairing instability driven by antiferromagnetic spin fluctuations is thereby confirmed beyond doubt. More unexpectedly, we find that strong forward scattering interactions develop which may lead to a Pomeranchuk instability breaking the tetragonal symmetry of the Fermi surface. The one-band Hubbard model $$H=\underset{𝐢,𝐣}{}\underset{\sigma }{}t_{\mathrm{𝐢𝐣}}c_{𝐢\sigma }^{}c_{𝐣\sigma }+U\underset{𝐣}{}n_𝐣n_𝐣,$$ (1) describes tight-binding electrons with a local repulsion $`U>0`$. Here $`c_{𝐢\sigma }^{}`$ and $`c_{𝐢\sigma }`$ are creation and annihilation operators for fermions with spin projection $`\sigma \{,\}`$ on a lattice site $`𝐢`$, and $`n_{𝐣\sigma }=c_{𝐣\sigma }^{}c_{𝐣\sigma }`$. A hopping amplitude $`t`$ between nearest neighbors and an amplitude $`t^{}`$ between next-nearest neighbors on a square lattice leads to the dispersion relation $$ϵ_𝐤=2t(\mathrm{cos}k_x+\mathrm{cos}k_y)4t^{}\mathrm{cos}k_x\mathrm{cos}k_y$$ (2) for single-particle states. This dispersion relation has saddle points at $`𝐤=(0,\pi )`$ and $`(\pi ,0)`$, which generate logarithmic van Hove singularities in the non-interacting density of states at the energy $`ϵ_{vH}=4t^{}`$. For $`t^{}=0`$, $`ϵ_𝐤`$ has the nesting property $`ϵ_{𝐤+𝐐}=ϵ_𝐤`$ for $`𝐐=(\pi ,\pi )`$, which leads to an antiferromagnetic instability for arbitrarily small $`U>0`$ at half-filling . The RG equations are obtained as follows (for details, see Salmhofer and Ref. ). The infrared singularities are regularized by introducing an infrared cutoff $`\mathrm{\Lambda }>0`$ into the bare propagator such that contributions from momenta with $`|ϵ_𝐤\mu |<\mathrm{\Lambda }`$ are suppressed. All Green functions of the interacting system will then flow as a function of $`\mathrm{\Lambda }`$, and the true theory is recovered in the limit $`\mathrm{\Lambda }0`$. Salmhofer has recently pointed out that (amputated) Green functions obtained by expanding the effective action of the theory in powers of normal ordered monomials of fermion fields obey differential flow equations with a structure that is particularly convenient for a power counting analysis to arbitrary loop order. With the bare interaction as initial condition at the highest scale $`\mathrm{\Lambda }_0=\mathrm{max}|ϵ_𝐤\mu |`$, these flow equations determine the exact flow of the effective interactions as $`\mathrm{\Lambda }`$ sweeps over the entire Brillouin zone down to the Fermi surface. The effective low-energy theory can thus be computed directly from the microscopic model without introducing any ad hoc parameters. For a weak coupling stability analysis it is sufficient to truncate the exact hierarchy of flow equations at 1-loop level. The effective 2-particle interaction then reduces to the one-particle irreducible 2-particle vertex $`\mathrm{\Gamma }^\mathrm{\Lambda }`$, and its flow is determined exclusively by $`\mathrm{\Gamma }^\mathrm{\Lambda }`$ itself (no higher many-particle interactions enter). Flow equations for susceptibilities are obtained by considering the exact RG equations in the presence of suitable external fields, which leads to an additional 1-particle term in the bare interaction, and expanding everything in powers of the external fields to sufficiently high order . One cannot solve the flow equations with the full energy and momentum dependence of the vertex function, since $`\mathrm{\Gamma }^\mathrm{\Lambda }`$ has three independent energy and momentum variables. 
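The saddle points of the dispersion (2) and the resulting logarithmic van Hove peak in the non-interacting density of states are easy to visualize numerically; the short sketch below simply histograms $`ϵ_𝐤`$ over the Brillouin zone. It is meant only to illustrate the band structure entering the flow equations, not the renormalization group calculation itself, and the values of $`t`$ and $`t^{}`$ used here are placeholders.

```python
import numpy as np

def dispersion(kx, ky, t=1.0, tp=0.01):
    """Tight-binding dispersion of Eq. (2): nearest-neighbour hopping t
    and next-nearest-neighbour hopping t' (tp)."""
    return -2 * t * (np.cos(kx) + np.cos(ky)) - 4 * tp * np.cos(kx) * np.cos(ky)

def density_of_states(t=1.0, tp=0.01, n=400, bins=200):
    """Histogram estimate of the non-interacting density of states.
    The van Hove peak sits at the saddle-point energy eps(0, pi) = 4*t'."""
    k = np.linspace(-np.pi, np.pi, n, endpoint=False)
    kx, ky = np.meshgrid(k, k)
    eps = dispersion(kx, ky, t, tp).ravel()
    hist, edges = np.histogram(eps, bins=bins, density=True)
    centers = 0.5 * (edges[1:] + edges[:-1])
    return centers, hist

if __name__ == "__main__":
    t, tp = 1.0, 0.01
    e, rho = density_of_states(t, tp)
    print(f"DOS maximum near eps = {e[np.argmax(rho)]:.3f};"
          f" saddle-point energy 4*t' = {4 * tp:.3f}")
```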
The problem can however be much simplified by ignoring dependences which are irrelevant in the low energy limit, namely the energy dependence and the momentum dependence normal to the Fermi surface (for details, see Ref. ). This approximation is exact for the bare Hubbard vertex, and asymptotically exact in the low-energy regime. The remaining tangential momentum dependence is discretized for a numerical evaluation. Most of our results where obtained for a discretization with 16 points on the Fermi surface (yielding 880 ”running couplings”), and we have checked that increasing the number of points does not change our results too much. We have computed the flow of the vertex function for many different model parameters $`t^{}`$ and $`U`$ ($`t`$ just fixes the absolute energy scale) and densities close to half-filling. In all cases the vertex function develops a strong momentum dependence for small $`\mathrm{\Lambda }`$ with divergencies for several momenta at some critical scale $`\mathrm{\Lambda }_c>0`$, which vanishes exponentially for $`U0`$. To see which physical instability is associated with the diverging vertex function we have computed commensurate and incommensurate spin susceptibilities $`\chi _S(𝐪)`$ with $`𝐪=(\pi ,\pi )`$, $`𝐪=(\pi \delta ,\pi )`$ and $`𝐪=(1\delta )(\pi ,\pi )`$, where $`\delta `$ is a function of density , the commensurate charge susceptibility $`\chi _C(\pi ,\pi )`$, and singlet pair susceptibilities with form factors $$d(𝐤)=\{\begin{array}{cc}1\hfill & \text{(}s\text{-wave)}\hfill \\ \frac{1}{\sqrt{2}}(\mathrm{cos}k_x+\mathrm{cos}k_y)\hfill & \text{(extended }s\text{-wave)}\hfill \\ \frac{1}{\sqrt{2}}(\mathrm{cos}k_x\mathrm{cos}k_y)\hfill & \text{(}d\text{-wave }d_{x^2y^2}\text{)}\hfill \\ \mathrm{sin}k_x\mathrm{sin}k_y\hfill & \text{(}d\text{-wave }d_{xy}\text{)}.\hfill \end{array}$$ (3) Some of these susceptibilities diverge together with the vertex function at the scale $`\mathrm{\Lambda }_c`$. Depending on the choice of $`U`$, $`t^{}`$ and $`\mu `$, the strongest divergence is found for the commensurate or incommensurate spin susceptibility or for the pair susceptibility with $`d_{x^2y^2}`$ symmetry. In Fig. 1 we show a typical result for the flow of susceptibilities as a function of $`\mathrm{\Lambda }`$. Note the threshold at $`\mathrm{\Lambda }0.03t`$ below which the amplitudes for various scattering processes, especially umklapp scattering, renormalize only very slowly. The flow of the antiferromagnetic spin susceptibility is cut off at the same scale. The pairing susceptibility with $`d_{x^2y^2}`$-symmetry is obviously dominant here (note the logarithmic scale). Following the flow of the susceptibilities one can see that the $`d_{x^2y^2}`$-pairing correlations develop in the presence of pronounced but short-range antiferromagnetic spin-correlations, in agreement with earlier ideas on $`d`$-wave superconductivity in the Hubbard model . In Fig. 2 we show the $`(\mu ,U)`$ phase diagram for $`t^{}=0.01t`$ obtained by identifying the dominant instability from the flow for many different values of $`\mu `$ and $`U`$. For $`\mu =4t^{}`$ the Fermi surface touches the saddle points $`(0,\pi )`$ and $`(\pi ,0)`$, while $`\mu =4t^{}+0.01t`$ corresponds to half-filling. Note that for $`U0`$ the pairing instability always dominates, because the BCS channel dominates the flow in the limit $`\mathrm{\Lambda }0`$. 
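To make the tangential discretization and the form factors of Eq. (3) concrete, the following sketch places N points on the non-interacting Fermi surface by solving $`ϵ_𝐤=\mu `$ along rays from the zone center and evaluates the $`d_{x^2y^2}`$ form factor on them. This is only a schematic version of the patching idea; the actual parameterization behind the 16-point, 880-coupling calculation is more involved, and the values of $`\mu `$, $`t`$, $`t^{}`$ are illustrative.

```python
import numpy as np
from scipy.optimize import brentq

def dispersion(kx, ky, t=1.0, tp=0.01):
    return -2 * t * (np.cos(kx) + np.cos(ky)) - 4 * tp * np.cos(kx) * np.cos(ky)

def fermi_surface_points(mu, n_points=16, t=1.0, tp=0.01):
    """Find n_points Fermi-surface momenta by solving eps(k) = mu along
    rays of fixed angle phi measured from the zone center."""
    pts = []
    for phi in np.linspace(0.0, 2.0 * np.pi, n_points, endpoint=False):
        f = lambda r: dispersion(r * np.cos(phi), r * np.sin(phi), t, tp) - mu
        r_f = brentq(f, 1e-6, np.pi)     # radius where the ray crosses eps = mu
        pts.append((r_f * np.cos(phi), r_f * np.sin(phi)))
    return np.array(pts)

def d_x2y2(k):
    """d-wave (x^2 - y^2) form factor of Eq. (3)."""
    return (np.cos(k[:, 0]) - np.cos(k[:, 1])) / np.sqrt(2)

if __name__ == "__main__":
    kf = fermi_surface_points(mu=-0.5)
    print(d_x2y2(kf))   # changes sign four times around the Fermi surface
```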
A spin density wave is the leading instability for $`U0`$ only in the special case with perfect nesting, $`t^{}=0`$ and $`\mu =0`$ (see the $`(\mu ,U)`$ phase diagram computed from the 1-loop flow for $`t^{}=0`$ in Ref. ). How the critical energy scale $`\mathrm{\Lambda }_c`$ varies as a function of the chemical potential (i.e. as a function of density) is shown in Fig. 3 for an interaction strength $`U=1.5t`$. Obviously $`\mathrm{\Lambda }_c`$ is maximal for a chemical potential at the van Hove energy. Note that $`\mathrm{\Lambda }_c`$ must not be interpreted as a transition temperature for spin density wave formation or superconductivity, but rather as an energy scale where bound particle-hole or particle-particle pairs are formed. Since some of the forward scattering interactions grow strong for small $`\mathrm{\Lambda }`$, while the Fermi velocity is very small near the saddle points, the Fermi surface may be significantly deformed by interactions, especially for $`\mu ϵ_{vH}`$. Previous investigations of Fermi surface deformations within standard perturbation theory have yielded only very small shifts even for sizable interaction strengths , but in these studies the possibility of a spontaneous breaking of the point group symmetry of the square lattice has not been taken into account. To analyze systematically the stability of the Fermi surface shape, we define a susceptibility $$\kappa _{𝐤_F𝐤_F^{}}=\frac{\delta s_{𝐤_F}}{\delta \mu _{𝐤_F^{}}}$$ (4) which measures the size of Fermi surface shifts $`\delta s_{𝐤_F}`$ for small momentum dependent shifts of the chemical potential $`\delta \mu _{𝐤_F^{}}`$ at points $`𝐤_F^{}`$ on the Fermi surface. The matrix $`\kappa _{𝐤_F𝐤_F^{}}`$ defines a linear integral operator acting on functions of $`𝐤_F`$. A simple consideration in the spirit of phenomenological Fermi liquid theory shows that the corresponding inverse operator is given by $$(\kappa ^1)_{𝐤_F𝐤_F^{}}=v_{𝐤_F}\delta (𝐤_F𝐤_F^{})+2f_{𝐤_F𝐤_F^{}}^c$$ (5) where $`v_{𝐤_F}`$ is the Fermi velocity and $`f_{𝐤_F𝐤_F^{}}^c`$ is the Landau function in the charge (spin-symmetric) channel. It is now obvious that the matrix $`\kappa _{𝐤_F𝐤_F^{}}`$ is symmetric. The Fermi surface is stable, if all eigenvalues of $`\kappa `$ (or $`\kappa ^1`$) are positive. Note that Landau’s energy functional can be written as a quadratic form in $`\delta s_{𝐤_F}`$, with $`\kappa ^1`$ as kernel , and negative eigenvalues would imply that this energy can be lowered by a suitable deformation of the Fermi surface. In isotropic Fermi liquids such instabilities occur for strongly negative Landau parameters, as first pointed out by Pomeranchuk . We have computed the renormalization group flow of the eigenvalues and eigenvectors of the operator $`\kappa ^1`$ from the flow of the Landau function $`f_{𝐤_F𝐤_F^{}}^{c\mathrm{\Lambda }}`$, which is given directly by the vertex function in the forward scattering channel . For various choices of the model parameters we have always found a Fermi surface instability at a scale $`\mathrm{\Lambda }_c^P`$ above the scale $`\mathrm{\Lambda }_c`$ where the vertex function diverges. In all cases the instability favors a deformation of the Fermi surface which breaks the point group symmetry of the square lattice, as shown schematically in Fig. 4. The instability is mainly driven by a strong attractive interaction between particles (or holes) on opposite corners of the Fermi surface near the saddle points and a repulsive interaction between particles on neighboring corners. 
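The stability criterion behind Eqs. (4) and (5), namely that all eigenvalues of $`\kappa ^1=v_{𝐤_F}\delta +2f^c`$ must be positive, lends itself to a simple numerical test once the Fermi surface is discretized into patches. The sketch below builds $`\kappa ^1`$ on N patches with a model forward-scattering function chosen by hand (attraction between patches on opposite near-saddle-point corners, repulsion between neighbouring ones); these numbers are purely illustrative and are not the Landau function produced by the actual flow.

```python
import numpy as np

def kappa_inverse(v_f, f_c):
    """Discretized kappa^{-1} of Eq. (5): Fermi velocities on the diagonal
    plus twice the symmetric Landau function in the charge channel."""
    return np.diag(v_f) + 2.0 * f_c

def stability(v_f, f_c):
    """The Fermi surface is stable iff all eigenvalues of kappa^{-1} are positive."""
    evals, evecs = np.linalg.eigh(kappa_inverse(v_f, f_c))
    return evals, evecs

if __name__ == "__main__":
    n = 16
    phi = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)   # patch angles
    # Illustrative input, not the Landau function from the actual flow:
    # small Fermi velocities on the four near-saddle-point patches ...
    v_f = 0.5 - 0.4 * np.cos(4.0 * phi)
    # ... attraction between patches pi apart, repulsion between patches pi/2 apart
    dphi = phi[:, None] - phi[None, :]
    f_c = -0.3 * np.cos(2.0 * dphi)

    evals, evecs = stability(v_f, f_c)
    lowest = int(np.argmin(evals))
    verdict = "unstable" if evals[lowest] < 0 else "stable"
    print("lowest eigenvalue:", evals[lowest], "->", verdict)
    # the corresponding eigenvector varies like cos(2*phi): the surface expands
    # along one lattice axis and contracts along the other, breaking the
    # tetragonal symmetry as in Fig. 4
    print(np.round(evecs[:, lowest], 2))
```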
The above diagnosis of Pomeranchuk instabilities would be rigorous for a normal Fermi liquid with finite renormalized interactions in the infrared limit. In the present system, however, the vertex function diverges at a finite scale and possible Pomeranchuk type instabilities compete with magnetic and superconducting instabilities. Since we have no quantitative theory of the strong coupling physics near and below the scale $`\mathrm{\Lambda }_c`$, we can only list and discuss two possible scenarios: i) Energy gaps due to particle-particle or particle-hole binding may stop the flow of forward scattering interactions before a Pomeranchuk instability sets in. ii) The Pomeranchuk instability is not blocked by binding phenomena. In that case one would have a finite temperature phase transition with a spontaneous breaking of the (discrete) tetragonal symmetry of the square lattice, and subsequent continuous symmetry breaking associated with magnetic order or superconductivity in the ground state. Which of the two scenarios is realized depends on the choice of the model parameters. The Pomeranchuk instability occurs more easily if the Fermi surface is close to the saddle points of $`ϵ_𝐤`$. On the other hand, nesting raises the scale for particle-hole binding (leading ultimately to magnetic order). The best candidate is therefore the Hubbard model with a sizable $`t^{}`$ (reducing nesting) and $`\mu =ϵ_{vH}`$. We emphasize that the Pomeranchuk instability does not cut off the singularity in the Cooper channel since it does not break the reflection invariance. Hence, at sufficiently large doping away from half-filling, $`d`$-wave superconductivity will set in in any case, with an order parameter that may be slightly distorted away from perfect $`d`$-wave symmetry. The Pomeranchuk instability would also not destroy the umklapp scattering route to an insulating spin liquid discussed recently by Furukawa et al. . To our knowledge a Pomeranchuk instability has not yet been observed in numerical solutions of the two-dimensional Hubbard model. Of course this may be due to finite size limitations or too high temperatures in Monte Carlo simulations. It would thus be interesting to compute the Fermi surface susceptibility $`\kappa _{𝐤_F𝐤_F^{}}`$ numerically. In real systems a Pomeranchuk instability as in Fig. 4 may lead to an orthorhombic lattice distortion, as a consequence of the coupling of electronic and lattice degrees of freedom. High temperature superconductors indeed exhibit structural phase transitions between tetragonal and orthorhombic phases. It would be interesting to clarify whether a Pomeranchuk instability might drive (at least to a significant extent) the transition into the orthorhombic phase in these materials. In summary, we have shown that modern renormalization group methods can be used to establish the expected $`d`$-wave pairing instability in the two-dimensional Hubbard beyond doubt. Note that for small bare interactions and in a parameter regime where only particle-particle pairing fluctuations grow strong, the strong coupling problem associated with the formation of a superconducting state can be treated rigorously . Furthermore, we have pointed out that a Pomeranchuk instability breaking the tetragonal symmetry of the Fermi surface is likely to occur for a suitable choice of the model parameters, especially for a Fermi surface close to the saddle points in the absence of perfect nesting. 
Acknowledgments: We are very grateful to Maurice Rice, Manfred Salmhofer and Eugene Trubowitz for valuable discussions. This work has been supported by the Deutsche Forschungsgemeinschaft under Contract No. Me 1255/4 and within the Sonderforschungsbereich 341.
# Cepheid variables in the LMC cluster NGC 1866. I. New BVRI CCD photometry ## 1 Introduction and motivation for our study Cepheid variables are the most important local distance calibrators to lay the foundations of the extragalactic distance ladder. Cepheids in the Magellanic Clouds, and in particular the Large Magellanic Cloud, have played a crucial role in this effort ever since Henrietta Leavitt (Pickering, 1912) discovered the Cepheid PL relation in the SMC in the early years of the 20th century. Since LMC Cepheids all lie at virtually the same distance, as opposed to Galactic Cepheids which are found over a range of distances, and since the LMC is rich in Cepheids and contains Cepheid variables up to the largest pulsation periods ($`100`$ days), the slopes of the PL and PLC relations are best established in this satellite galaxy. Furthermore, the geometrical structure of the LMC is basically a simple tilted disk with the NE side closest to us (Caldwell and Coulson, 1986; Welch et al., 1987), which makes it possible to apply relatively accurate position-dependent distance corrections to LMC objects to refer distances to the LMC barycenter, and thus further reduce the dispersions in the intrinsic Cepheid PL and PLC relations. Unlike the slope, the zero point of the PL relation is much harder to calibrate. This step involves the need to determine distances to Cepheid variables with methods independent of the PL relation. The classic approach to solve this problem has been the use of Cepheids in Galactic clusters whose distances can be derived from ZAMS-fitting to their observed magnitude-color diagrams. However, recent HIPPARCOS parallax data on a number of open clusters have shown that the location of cluster main sequences seems to depend, in a stronger way than predicted by stellar evolution models, on the cluster age (e.g. van Leeuwen (1999)), and as long as this dependence (and perhaps others) is not well understood and calibrated, the cluster ZAMS-fitting method may not be the best way to derive the zero point of the Cepheid PL relation. On the other hand, the determination of the PL zero point, and thus of the distance to the LMC, from HIPPARCOS parallaxes of Galactic Cepheids (Feast and Catchpole, 1997), while of course very attractive as an independent, geometrical method, has proven to be a difficult subject, due to the crucial importance of the correct treatment of biases in these very low signal-to-noise data. The $`\pm 8`$ percent (statistical error only) uncertainty of the LMC distance obtained with this method (Pont, 1999) is still too large for a galaxy which is our closest neighbor in space and serves as a principal reference object for the extragalactic distance scale. For a long time, the best alternative to derive distances of Cepheid variables has been the Baade-Wesselink method (Wesselink, 1946) which utilizes the light-, color- and radial velocity variations of a Cepheid to infer its radius and distance in a way which is completely independent of other astrophysical distance scales. Over the years, the classic approach used by Wesselink has been improved in many respects (for a good review, see Gautschy (1987)). Perhaps the single most important improvement of the method has been the shift to infrared wavelengths, where the dependence of the surface brightness of a Cepheid on atmospheric parameters like gravity and microturbulence (Laney and Stobie, 1995), and on metallicity (Bell and Gustafsson, 1989) becomes very small as compared to optical wavelengths. 
This has opened up the possibility to derive the distances to Cepheid variables with an accuracy of $`4`$ percent when the $`V`$, $`VK`$ or $`K`$, $`JK`$ magnitude/color combinations are used to derive the Cepheid surface brightnesses in the $`V`$ or $`K`$ band. The surface brightnesses yield the angular diameters once the relation between surface brightness and angular diameter has been properly calibrated. Such an empirical calibration was first presented by Welch (1994) using precise, interferometrically measured angular diameters of giants and supergiants in the Cepheid color range, and has more recently been improved and extended by Fouqué and Gieren (1997). These authors have shown (Gieren, Fouqué and Gómez, 1997, 1998) that the infrared surface brightness (ISB) technique, as calibrated in Fouqué and Gieren (1997), yields Cepheid distances from either of its versions (using $`V`$, $`VK`$ or $`K`$, $`JK`$) which are accurate to about 4 percent if the underlying observational data are of excellent quality and the amplitude of the color variation exceeds $`0.25`$ mag. In $`VK`$, this is the case for most of even the shortest period Cepheids pulsating in the fundamental mode. The ISB technique is therefore an excellent tool to derive direct, accurate distances to Cepheid variables which are likely independent of the stellar metallicity. As an added benefit, they are also largely independent of the absorption corrections applied in the analyses (Welch, 1994; Gieren, Fouqué and Gómez, 1997). These latter properties make the method very attractive for an application to extragalactic Cepheids. Here, the obvious target to start with is the LMC for which the necessary data can be acquired with small-to medium-aperture telescopes. The ideal targets in the LMC for the potentially most accurate distance determination with Cepheids are the young, rich clusters. Due to the right combination of turnoff mass, richness and metallicity, a number of these clusters have been found to contain considerable numbers of Cepheids (Mateo, 1992) which allow a distance determination to these clusters, and thus to the LMC barycenter, more accurate than that achievable for individual field variables. It is therefore reasonable to expect that an application of the ISB technique to Cepheid-rich LMC globular clusters will produce the as yet most accurate distance determination to the LMC based on Cepheid variables, and allow the determination of the absolute PL zero point (in a given photometric band) with a smaller uncertainty than other methods. In the construction of the extragalactic distance ladder, and in order to take full advantage of the HST Key Project Cepheid observations in other galaxies, this is a crucial step, especially in view of the large dispersion among LMC distance moduli as currently obtained from a variety of objects and methods (for a comprehensive review of this subject, see Walker (1999)). We start our work on Cepheids in LMC globular clusters with NGC 1866. More than 20 Cepheids have been detected in this cluster (Welch et al. (1991); Storm et al. (1988); and references therein), making it the most Cepheid-rich cluster known in the LMC. Previous work on the variable stars in NGC 1866, and particularly on its Cepheids, has been reviewed in Welch et al. (1991) who were also the first to collect radial velocity data on a number of Cepheids in the cluster field. 
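The logic of the infrared surface brightness technique outlined above can be condensed into three steps: a $`(VK)_0`$ colour gives the $`V`$-band surface brightness, the surface brightness and $`V_0`$ give the angular diameter, and the integrated pulsational radial velocity gives the linear displacement of the photosphere; a linear fit of one against the other yields the distance and mean radius. The sketch below assumes the usual definition of the surface brightness parameter, $`F_V=4.22070.1V_00.5\mathrm{log}\varphi `$ with $`\varphi `$ in mas; the colour coefficients, the projection factor and all input arrays are placeholders rather than the Fouqué and Gieren (1997) calibration or real data.

```python
import numpy as np

def angular_diameter_mas(V0, VK0, a=3.95, b=-0.13):
    """Angular diameter from a linear surface-brightness-colour relation
    F_V = a + b*(V-K)_0 combined with F_V = 4.2207 - 0.1*V_0 - 0.5*log10(phi).
    The coefficients a and b are illustrative placeholders."""
    F_V = a + b * VK0
    return 10.0 ** (2.0 * (4.2207 - 0.1 * V0 - F_V))

def radius_displacement_rsun(hjd_days, vrad_kms, v_gamma_kms, p_factor=1.36):
    """Photospheric displacement Delta R(t) = -p * integral of (v_rad - v_gamma) dt,
    converted from km to solar radii; p_factor is a typical projection factor."""
    dv = -p_factor * (vrad_kms - v_gamma_kms)
    dr_km = np.concatenate(([0.0],
              np.cumsum(0.5 * (dv[1:] + dv[:-1]) * np.diff(hjd_days) * 86400.0)))
    return dr_km / 6.957e5

def distance_and_radius(theta_mas, delta_r_rsun):
    """Fit theta(t) = (9.301/d) * (R0 + Delta R(t)): the slope gives the
    distance in pc and the intercept the mean radius in solar radii
    (9.301 mas is the angular diameter of 1 R_sun seen from 1 pc)."""
    slope, intercept = np.polyfit(delta_r_rsun, theta_mas, 1)
    return 9.301 / slope, intercept / slope
```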
Recently, our group has reported results on a first Cepheid variable in NGC 1866, HV 12198, from our current study using the ISB technique (Gieren et al., 2000). In that paper, we also reviewed more recent ($`>1991`$) work on NGC 1866 Cepheids. The purpose of this paper is to present the new optical CCD photometry we have obtained for a number of NGC 1866 Cepheids, including HV 12198. A follow-up paper (Storm et al., 2000) will present infrared photometry and new radial velocity observations for these same variables. This material will then be used to discuss cluster membership and binarity for the Cepheids in the field of NGC 1866, and to derive their distances and radii with the ISB method. We emphasize that one of the important consequences of many Cepheids being in the same cluster is a sanity check that the actual precision of the technique can be assessed (without the usual fallback explanations of differential reddening, distance modulus, and/or metallicity). While we shall also study the physical and evolutionary properties of the Cepheids in NGC 1866, our principal goal is a truly accurate distance determination to the LMC, which will be further improved by distance determinations to other clusters we are currently working on, notably NGC 2031 and NGC 2136/37. ## 2 Photometry of selected Cepheids in NGC1866 CCD images in $`BVRI`$ (Cousins) filters of NGC 1866 were acquired during fours runs between November 1994 and January 1996. The first sequence of images was obtained using the CFCCD instrument on the 0.9 m telescope of CTIO which has a pixel scale of 0.58 arcsec/pixel, while the remainder of the data was obtained on the Swope 1.0 m reflector of Las Campanas Observatory, using the 1024 x 1024 TEK2 CCD which has a pixel scale of 0.6 arcsec/pixel and a field of view of 10 sq. arcmin. Most of the observing nights were photometric. Integration times of 240 sec in all of the filters produced a S/N for the Cepheids always high enough for photometry at the 0.01-0.02 mag level of accuracy (see below). On all the photometric nights, we observed large numbers of photometric standard stars from the list of Landolt (1992) to tie our observations of the Cepheids to the standard $`BVRI`$ system. From some 20 local comparison stars on our frames which were checked thoroughly for photometric constancy we established a final local comparison by taking a weighted sum of the intensities of this set of stars, in any of the four filters. The weighting was proportional to the intensities of the local comparison stars which ensured that stars with higher signal to noise ratios got a higher weight. All the photometry was done differentially with respect to this local reference system. Using this differential photometry procedure, we could obtain reliable Cepheid magnitudes on a number of non-photometric nights. Once the images were bias-subtracted and flatfielded, point spread function-photometry was done on them using the DAOPHOT (Stetson, 1987) and DoPHOT (Schecter, Mateo and Saha, 1993) reduction packages for photometry in crowded fields. We chose to do the automatic finding and photometry down to the faintest objects, and then to identify the Cepheids on the output lists. DAOPHOT was used by one of us (TJM) to reduce the 1994 CTIO images while DoPHOT was used to reduce all the images obtained at Las Campanas. This procedure yielded an independent check on the accuracy of our photometry, on an absolute scale, and the light curve plots (Figs. 
1-7) of the Cepheids show that there is excellent consistency (at the 0.01 mag level) between the magnitudes, in all filters, derived with both software packages. An example is shown in Fig. 8 for the $`V`$-band magnitudes of the variable HV 12199. We did photometry on the ”classical” Cepheid variables HV 12197, HV 12198, HV 12199, HV 12202, HV 12203 and HV 12204 in the NGC 1866 field (for a finder chart of these variables, see Arp and Thackeray (1967); Storm et al. (1988); Welch et al. (1991)) which i) seemed to be sufficiently uncrowded to allow photometry accurate enough for the distance and radius determinations we are going to perform with these data, and ii) have light- and color amplitudes large enough to permit a successful use of at least the $`V`$, $`VK`$ version of the ISB technique. The individual photometric observations for these six variables are listed in Tables 1-6. The first column gives the Heliocentric Julian Dates for the mid-points of a $`BVRI`$ (or $`VRI`$) sequence, column 2 the phases from maximum light (see below), and the other columns give the calibrated $`V`$ magnitudes and $`BV`$, $`VI`$ and $`VR`$ color indices, each with their respective standard deviations. To check on the capabilities of the DoPHOT routine to do photometry in very crowded fields, we also performed photometry on the Cepheid variable V7 (see Storm et al. (1988)) which is located in the central region of the cluster. These data are given in Table 7. As expected, the light curve (Fig. 7) is much noisier than that for the other variables, ruling this variable (and the other Cepheids close to the center of NGC 1866) out for a distance determination with the ISB technique. We used our new $`V`$ band data together with the literature $`V`$ data reported by Walker (1987) and Welch et al. (1991) to derive improved values for the pulsation periods of the Cepheids. These values differ only slightly from those given and discussed in Côté et al. (1991); the agreement is good to 2 - 5$`\times 10^5`$ days. We can confirm the variable period of HV 12198 as found by Côté et al. (1991), but the period change is so small that there is no significant difference between their period and ours, and consequently there is no significant effect on the phasing between the data presented here and those of Côté et al. (1991). The improved periods are listed in Table 8, together with modern epochs of maximum light (in $`V`$) of the variables derived from our new data. For the Cepheid V7, the uncertainty of the period is much larger as for the other variables since we only have data from a few epochs and therefore run into aliasing problems. The number of digits given in Table 8 is still significant, however, if we assume that we have adopted the correct period peak. In Figs. 1-7, we show the light- and color curves of the Cepheids, folded on the new periods derived in this paper. Overplotted on the new data are the $`BVI`$ data of Walker (1987), and the $`BV`$ data of Welch et al. (1991). The new data are more complete and slightly more accurate than the previous data sets. It is seen that for five of the Cepheid variables in NGC 1866, the composite $`V`$ light curves are now of excellent quality, exhibiting complete phase coverage and very low scatter. Only the two Cepheids closest to the cluster center, HV 12202 and V7, show enhanced scatter in their light curves which is clearly a result of increased contamination in the photometry due to severe crowding. 
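Refining a pulsation period from the combined $`V`$-band data amounts to folding the observations on trial periods and minimizing a dispersion statistic; the short sketch below uses a simple binned phase-dispersion measure. It is only meant to illustrate the folding procedure behind Figs. 1-7; the periods and epochs in Table 8 come from our own analysis, and the bin count and search window used here are arbitrary.

```python
import numpy as np

def phase_fold(hjd, period, epoch=0.0):
    """Pulsation phases (0..1) for a trial period and epoch of maximum light."""
    return np.mod((hjd - epoch) / period, 1.0)

def phase_dispersion(hjd, mag, period, n_bins=10):
    """Mean in-bin variance of the folded light curve, normalized by the total
    variance; small values mean the trial period phases the data coherently."""
    bins = np.minimum((phase_fold(hjd, period) * n_bins).astype(int), n_bins - 1)
    in_bin = [mag[bins == b] for b in range(n_bins)]
    return np.mean([np.var(m) for m in in_bin if m.size > 1]) / np.var(mag)

def refine_period(hjd, mag, p_guess, half_width=1.0e-3, n_trials=2001):
    """Scan trial periods around an initial guess and return the minimizer."""
    trials = np.linspace(p_guess - half_width, p_guess + half_width, n_trials)
    scores = [phase_dispersion(hjd, mag, p) for p in trials]
    return trials[int(np.argmin(scores))]
```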
For all of the variables, the agreement of the various data sets in $`V`$ is excellent. Offsets typically amounting to 0.05 mag between different data sets are visible, however, in the $`B`$ and $`I`$ bands. Our new $`B`$ band data have a tendency to be somewhat redder than the Walker and Welch et al. data, which among themselves also show minor systematic deviations as already discussed by Welch et al. (1991). The $`VI`$ data show the same trend, but for a few stars, like HV 12198 and HV 12199, the agreement between our and the Walker data is very good. It is likely that these small zero point differences have their origin in different amounts of contamination by nearby blue and red stars due to different pixel scales in the CCDs used by the different authors, or are simply a consequence of small systematic errors in the transformation to the standard system. In this context, we are very confident that our magnitudes match the $`BVRI`$ standard system very well. We find formal rms uncertainties for the transformation to the standard system of $`\pm 0.014`$ in $`B`$, $`\pm 0.015`$ in $`V`$, $`\pm 0.018`$ in $`R`$, and $`\pm 0.017`$ in $`I`$, and these small uncertainties are supported by the excellent agreement of the CTIO and LCO data sets in all of the filters which were reduced in a completely independent way. For the $`V`$, $`VK`$ infrared surface brightness distance and radius determination for which these data were principally obtained, it is comforting to see that the $`V`$ light curves of five variables are now very accurately defined, and that in this band there are evidently no zero point problems in the photometry. These five Cepheids are therefore excellent candidates for accurate distance determinations once their $`K`$ light and radial velocity curves become available. We are currently working on this. WPG and MG gratefully acknowledge substantial amounts of observing time granted to this project by the Las Campanas Observatory. They appreciate the expert support of the LCO staff which makes observing at Las Campanas a very pleasant experience. WPG also acknowledges financial support for this project received by CONICYT through Fondecyt grant 1971076, and by the Universidad de Concepción through research grant No. 97.011.021-1.0. TJM and TGB gratefully acknowledge observing time at CTIO and the strong support received from the staff. They acknowledge financial support from NATO grant 900494 and National Science Foundations grants AST 92-21595 (TJM), AST 95-28134 (TJM), AST 92-21405 (TGB), and AST 95-28372 (TGB).
# Gauge theory of self-similar system ## 1 Introduction Let us consider the one-dimensional random walker with coordinate $`x`$ at time $`t`$ – a characteristic example of the stochastic system carrying out Lévy flights . In the case of self-similar system, the corresponding probability distribution $`P(x,t)`$ is a homogeneous function satisfying to the condition $$P(x,t)=a^\alpha 𝒫(\kappa ),\kappa x/a$$ (1) where $`aa(t)`$ is a time dependent partial scale, $`\kappa `$ is a dimensionless coordinate, $`\alpha `$ is a self-similarity index. To analyze such a system, it is convenient to use the basic conception of Jackson’s derivative $`𝒟_q`$, whose properties are given in Appendix. Basic advantage of this derivative for analyzing self-similar system is that the Jackson’s derivative determines rate of the function variation with respect to dilatation $`q`$, but not to the shift $`\mathrm{d}x0`$ as in usual case. This work is devoted to studying a stochastic self-similar system on the basis of such a type representation. The paper is organized as follows. In Section 2 the governed equations for probability density and gauge potential are obtained starting from a dilatation invariant Lagrangian. Section 3 deals with the determination of the time dependencies for a characteristic scale and probability density. A steady state is shown to realize when the Jackson’s derivative of the gauge potential equals to zero. This represents a gauge condition, under which the probability distribution has the Tsallis’ form. With breaking the gauge condition, an automodel regime is observed at small time interval determined by the Tsallis’ parameter $`q>1`$. An exponential falling down happens at large time where the dilatation parameter and the partial scale tend to a constant values. Section 4 contains a short conclusions and in Section 5 basic properties of the Jackson’s derivative are adduced. ## 2 Basic equations In contrast to the simple case, when the scale $`a`$ does not depend on the time $`t`$, we study here a non-stationary self-similar system, for which the value $`a`$ and the dilatation factor $`q`$ are functions of $`t`$. As it is known from the theory of the gauged fields , in such a case the system is invariant with respect to transformations $`xqx`$, $`PQ_qP`$, $`Q_qq^\alpha `$ if the gradient terms $`𝒟_qP`$, $`P/t`$ are replaced by the elongated derivatives $`(𝒟_q+ϵ)P`$, $`(/t+E)P`$ with dilatational $`ϵ`$ and temporal $`E`$ components of the gauge potential (hereafter the time $`t`$ is measured in units of the probability relaxation time). In accordance with Eq.(33), it is easy to see that this elongated derivatives are invariant with respect to the non-stationary dilatation $`q=q(t)`$ determined by the follow transformations: $`xqx,PQ_qP;`$ (2) $`ϵϵ{\displaystyle \frac{𝒟_qQ_q}{Q_q}}(q1){\displaystyle \frac{𝒟_qQ_q}{Q_q}}{\displaystyle \frac{𝒟_qP}{P}},EE\dot{Q}`$ where point denotes the time derivative for brevity. Gauge invariant Lagrangian of the corresponding Euclidean field theory is supposed to take the form $$=\frac{1}{2}\left[(𝒟_q+ϵ)P\right]^2+\frac{1}{2}(𝒟_qϵ)^2$$ (3) where the first term is caused by the gauged dilatation, the second one is the field contribution. The respective dissipative function reads $$f=\frac{1}{2}\left[\left(\frac{}{t}+E\right)P\right]^2+\frac{\theta }{2}\left[\left(\frac{}{t}+E\right)ϵ\right]^2$$ (4) where $`\theta `$ is the relaxation times ratio of the gauge field and probability. 
As a result, the Euler equation $$𝒟_q\frac{}{(𝒟_q\mathrm{z})}\frac{}{\mathrm{z}}=\frac{f}{\dot{\mathrm{z}}},\mathrm{z}(P,ϵ)$$ (5) leads to the differential equations with partial derivatives and non-linear terms: $`\dot{P}+𝒟_q^2P`$ $`=`$ $`EP+ϵ^2P,`$ (6) $`\theta \dot{ϵ}+𝒟_q^2ϵP𝒟_qP`$ $`=`$ $`\theta EP+ϵP^2.`$ (7) The first terms in right-hand parts describe dissipative influence of external environment. Further, we will take into consideration conserved systems only, so that the time component of the gauge potential being inversely proportional to corresponding relaxation time will be put equal to zero $`(E=0)`$. ## 3 Solution of equations In the limits $`ϵ0`$, $`𝒟_q^2ϵ0`$, the obtained equations (6), (7) take the form $`\dot{P}`$ $`=`$ $`𝒟_q^2P,`$ (8) $`\theta \dot{ϵ}`$ $`=`$ $`P𝒟_qP.`$ (9) The first of them has the diffusion type but with inverted sign, so that self-similar system reveals running away kinetics that is inherent in hierarchical systems . However, such a behaviour realizes during short time interval $`t\theta 1`$ only. At usual time $`t1`$, we can use the condition $`\theta 1`$ of adiabatic approximation that will be used everywhere below and allow to neglect left-hand side of Eq.(9). As a result, the condition $`𝒟_qP0`$ holds true and the system passes to stationary homogeneous regime: $$P(x,t)=constP_{st}.$$ (10) In much more complicated limit $`ϵ(t)const0`$, equation (7) is reduced to the simplest form $$𝒟_qP=ϵP$$ (11) and we arrive at the static distribution (10) as before. To continue analysis, let us multiply Eq.(6) by factor $`𝒟_qP`$ and Eq.(7) by $`𝒟_qϵ`$. Then, after addition of the obtained results we find $$\frac{1}{2}(𝒟_qP)^2+\frac{1}{2}(𝒟_qϵ)^2=\frac{1}{2}(ϵP)^2+\frac{1}{2}C^2.$$ (12) Here we put $$(\dot{P}P𝒟_qϵ)𝒟_qP=0$$ (13) and fulfilled integration with constant $`C^2/2`$. As a result, under the assumption $$𝒟_qϵ=C,C>0$$ (14) and condition $`𝒟_qP0`$, the equation (13) arrives at the exponential time dependence $$P\mathrm{e}^{Ct}$$ (15) whereas the equation (12) is reduced to the form (11). The exponential falling-down is known to be not inherent in the self-similar systems and, as a consequence, we are need to put $`C=0`$ in Eqs.(12), (14). We arrive then at the gauge condition $$𝒟_qϵ=0,$$ (16) according to which the potential $`ϵ`$ can be a time dependent function but does not vary with the system dilatation. Then, equation (6) takes the form $`\dot{P}=0`$ meaning that the system is in steady-state, which probability distribution is obeyed to Eq.(11). Being accompanied with Eq.(31), this equation arrives at the condition $`[\alpha ]_q=ϵ`$. As is ascertained in Appendix, the Jackson’s q-number $`[\alpha ]_q`$ is reduced to the Tsallis’ q-logarithm if the steady-state probability $`P_{st}`$ and dilatation $`q`$ are connected via the follow relation: $$P_{st}^{q1}q^\alpha .$$ (17) Then, the above obtained condition gives the Tsallis’ distribution $$P_{st}=[1(q1)ϵ]^{\frac{1}{q1}}.$$ (18) With breaking gauge condition (16) the self-similar system gets into non-stationary state, whose behaviour is determined by Eq.(6). 
Accounting definitions (34) arrives this to the algebraic form with respect to the Jackson’s derivation: $$\dot{P}=\left(ϵ^2[\alpha ]_{qq}\right)P.$$ (19) Inserting here Eq.(1) and taking into consideration the condition $`[\alpha ]_q=ϵ`$ and relation $`\dot{P}=a^{(1+\alpha )}(\alpha 𝒫+\kappa 𝒫^{})\dot{a}`$ (hereafter prime denotes the usual derivation with respect to the argument $`\kappa `$) we obtain $$a^1\dot{a}(\alpha 𝒫+\kappa 𝒫^{})=[\delta \alpha ]_{qq}𝒫$$ (20) where the factor $`[\delta \alpha ]_{qq}`$ stands for the term determined by Eqs.(34). In the limit $`q\mathrm{}`$, one can see with accounting asymptotics (35) that system behaves in automodel manner if conditions $`aq=const`$, $`a^{3\alpha 4}\dot{a}=const\tau _0^1`$ and equation $$\kappa 𝒫^{}=(\tau _0\alpha )𝒫$$ (21) are implemented. Solution of the equation is $`𝒫\kappa ^{\tau _0\alpha }`$ and the time dependencies of the characteristic scale and the probability density read: $$a^{3(\alpha 1)}=\frac{t}{\tau },Px^{\tau _0\alpha }t^\tau ,\tau \frac{\tau _0}{3(\alpha 1)}\mathrm{at}q1,t<\tau .$$ (22) Within the opposite limit $`q1`$, a magnitude $`q`$ in Eq.(20) ought to put time independent and we arrive at the long-time dependencies: $$a\mathrm{exp}(t/\tau _0),Px^{\lambda _0\tau _0\alpha }\mathrm{exp}(\lambda _0t),\lambda _0\frac{\alpha 1}{q1}\mathrm{at}q1,t\lambda _0^1.$$ (23) The coincidence condition for the time limits $`\tau `$ and $`\lambda _0^1`$ in dependencies given by Eqs.(22), (23) leads to relation $$\tau =3(q1).$$ (24) At last, we consider the case with non-zeroth second Jackson’s derivative $`𝒟_q^2ϵ=[\epsilon ]_{qq}ϵ`$ determined by an index $`\epsilon `$. Here equation (7) gives the gauge potential $$ϵ=\frac{[\alpha ]_q}{1[\epsilon ]_{qq}P^2}$$ (25) that behaves in self-similar manner if the value $`[\epsilon ]_{qq}P^2`$ falls down with $`q`$-increase. Supposing this falling down in power form $`q^\gamma `$ with positive index $`\gamma 0`$, we obtain the needed dependence $`P(t)q^\tau (t)`$, following from Eqs.(22) and condition $`a(t)q(t)=const`$, if $`\gamma =2\tau 3(\epsilon 1)>0`$. As a result, the gauge potential index $`\epsilon `$ is limited by the condition $$\epsilon <1+\frac{2}{3}\tau =1+2(q1)$$ (26) where the second equality follows from Eq.(24). Under this conditions the equation (6) accompanied with approximated result $`ϵ[\alpha ]_q`$, following from Eq.(25), arrives at the above obtained time dependencies (22). The automodel regime is studied to be broken if the factor in right-hand side of Eq.(19) $$ϵ^2[\alpha ]_{qq}=[\alpha ]_q^2\left[(1[\epsilon ]_{qq}P_{st}^2)^21\right][\delta \alpha ]_{qq}\lambda $$ (27) becomes time independent (here the steady-state probability $`P_{st}`$ is determined by Eq.(18) and we take into account Eqs.(25), (34)). In such a case, the exponential decay (23) is characterized by the relaxation time $`\lambda ^1`$ instead of $`\lambda _0^1`$. 
Because this regime is inherent in small values of $`q`$, we can use the limit $`q1`$: $$\lambda =\lambda _0\alpha ^2\left\{\left[1\left(\epsilon ^2+\frac{\epsilon 1}{q1}\right)\mathrm{e}^{2ϵ}\right]^21\right\},\lambda _0\frac{\alpha 1}{q1}.$$ (28) Finally, under conditions $`q1`$, $`\epsilon ^2+(\epsilon 1)/(q1)\mathrm{e}^{2\alpha }`$ when $`\alpha =[\alpha ]_q=ϵ`$, we have $$\lambda =\lambda _0\left[12\alpha ^2\frac{q1}{\alpha 1}\left(\epsilon ^2+\frac{\epsilon 1}{q1}\right)\mathrm{e}^{2\alpha }\right].$$ (29) ## 4 Discussion The above offered formalism is based on the dilatation invariant Lagrangian (3) and dissipative function (4) to describe the conserved non-stationary self-similar stochastic system. Behaviour of such a system is determined by the probability density distribution (1) and the gauge potential $`ϵ`$ (the latter is reduced to the ratio of the microstate energy to temperature in usual case of thermodynamic systems). The non-linear differential equations with partial derivatives, Eqs.(6), (7) are obtained to analyze the system kinetics. It is occurred that under gauge condition (16), when the microstate energy is independent on dilatation, the system is in the steady-state characterized by the Tsallis’ distribution (18). With gauge breaking, the automodel regime (22) realizes for a time less than bounded magnitude (24) determined by the difference $`q1`$. For more values of time, when the dilatation becomes constant, the system passes to the usual exponential regime (23). Finally, we comment on limitations of our approach. The main of these is that the system under consideration is conserved, so that influence of external environment have been put equal to zero (the value $`E=0`$ in Eqs.(6), (7)). Accounting such type terms for hierarchical systems shows that the time component of the gauge potential is reduced to the linear differential operator $`E(/x)F(x)+(^2/^2x)D(x)`$ to be anti-dissipative in the physical meaning (here $`F(x)`$ is a drift force and $`D(x)`$ is a diffusion coefficient) . Due to these terms, the above found exponential regime will be suppressed and the self-similar system will behave in automodel manner during whole time interval. ## Acknowledgment I am grateful to Constantino Tsallis for sending his review , which studying inspired me to this work. ## 5 Appendix. Basic properties of the Jackson’s derivative The Jackson’s derivative is defined by equation $$𝒟_qf(x)\frac{f(qx)f(x)}{q1},q1$$ (30) that is reduced to the usual derivative in the limit $`q1`$. Apparently, for a homogeneous function with a self-similarity index $`\alpha `$ the Jackson’s derivative is reduced to Jackson’s q-number $`[\alpha ]_q`$: $$𝒟_qf(x)=[\alpha ]_qf(x),[\alpha ]_q\frac{q^\alpha 1}{q1}.$$ (31) It is easily to see that the value $`[\alpha ]_q\alpha `$ in the limit $`q1`$ and increases as $`q^{\alpha 1}`$ at $`q\mathrm{}`$ (we propose $`\alpha >1`$). On the other hand, the Tsallis’ q-logarithm function $`\mathrm{ln}_qx(x^{q1}1)/(q1)`$ can be represented in the form of the Jackson’s q-number with index $`\alpha =(q1)\mathrm{ln}x/\mathrm{ln}q`$. 
Accompanied Eq.(31) this relation and apparent equality $$\mathrm{ln}_q(xy)=\mathrm{ln}_qx+\mathrm{ln}_qy+(q1)(\mathrm{ln}_qx)(\mathrm{ln}_qy)$$ (32) lead to important rule for the Jackson’s derivative: $$𝒟_q\left[f(x)g(x)\right]=\left[𝒟_qf(x)\right]g(x)+f(x)\left[𝒟_qg(x)\right]+(q1)\left[𝒟_qf(x)\right]\left[𝒟_qg(x)\right].$$ (33) The Jackson’s derivative of the second order is determined as follows: $`𝒟_p𝒟_qf(x)=𝒟_p\{[\alpha ]_qf(x)\}=[\alpha ]_{pq}f(x),`$ (34) $`[\alpha ]_{pq}[\alpha ]_p[\alpha ]_q+[\delta \alpha ]_{pq},[\alpha ]_p{\displaystyle \frac{p^\alpha 1}{p1}},[\delta \alpha ]_{pq}{\displaystyle \frac{p^\alpha [(pq)^\alpha pq]}{(p1)(pq1)}}.`$ In proposition $`\alpha >1`$, the value $`[\delta \alpha ]_{pq}`$ has the follow asymptotics: $`[\delta \alpha ]_{pq}{\displaystyle \frac{\alpha 1}{p1}}\mathrm{at}p,q1,`$ (35) $`[\delta \alpha ]_{pq}p^{2(\alpha 1)}q^{(\alpha 1)}\mathrm{at}p,q\mathrm{}.`$
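The definitions collected in this appendix are simple enough to verify numerically. The sketch below checks the reduction (31) of the Jackson's derivative to the q-number for a homogeneous function, the modified Leibniz rule (33), and the $`q1`$ limit of the steady-state distribution (18); it is only a consistency check, with arbitrarily chosen test values.

```python
import numpy as np

def D_q(f, x, q):
    """Jackson's derivative of Eq. (30): (f(qx) - f(x)) / (q - 1)."""
    return (f(q * x) - f(x)) / (q - 1.0)

def q_number(alpha, q):
    """Jackson's q-number [alpha]_q = (q**alpha - 1) / (q - 1)."""
    return (q ** alpha - 1.0) / (q - 1.0)

def tsallis(eps, q):
    """Steady-state distribution of Eq. (18) for q > 1 (zero beyond the cutoff)."""
    base = np.clip(1.0 - (q - 1.0) * eps, 0.0, None)
    return base ** (1.0 / (q - 1.0))

if __name__ == "__main__":
    q, x, alpha = 1.7, 2.3, 2.5
    f = lambda y: y ** alpha                   # homogeneous of degree alpha
    g = np.sin

    # Eq. (31): D_q f = [alpha]_q f for a homogeneous function
    print(np.isclose(D_q(f, x, q), q_number(alpha, q) * f(x)))

    # Eq. (33): D_q(fg) = (D_q f) g + f (D_q g) + (q - 1)(D_q f)(D_q g)
    lhs = D_q(lambda y: f(y) * g(y), x, q)
    rhs = (D_q(f, x, q) * g(x) + f(x) * D_q(g, x, q)
           + (q - 1.0) * D_q(f, x, q) * D_q(g, x, q))
    print(np.isclose(lhs, rhs))

    # Eq. (18) reduces to exp(-eps) as q -> 1
    eps = np.linspace(0.0, 2.0, 5)
    print(np.max(np.abs(tsallis(eps, 1.001) - np.exp(-eps))))
```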
# SIMILARITY AND CONSIMILARITY OF ELEMENTS IN REAL CAYLEY-DICKSON ALGEBRAS Abstract. Similarity and consimilarity of elements in the real quaternion, octonion, and sedenion algebras, as well as in the general real Cayley-Dickson algebras are considered by solving the two fundamental equations $`ax=xb`$ and $`ax=\overline{x}b`$ in these algebras. Some consequences are also presented. AMS mathematics subject classifications: 17A05, 17A35. Key words: quaternions, octonions, sedenions, Cayley-Dickson algebras, equations, similarity, consimilarity. 1. Introduction We consider in the article how to establish the concepts of similarity and consimilarity for elements in the real quaternion, octonion and sedenion algebras, as well as in the $`2^n`$-dimensional real Cayley-Dickson algebras. This consideration is motivated by some recent work on eigenvalues and eigenvectors, as well as similarity of matrices over the real quaternion and octonion algebras(see and ). In order to establish a set of complete theory on eigenvalues and eigenvectors, as well as similarity of matrices over the quaternion, octonion, sedenion algebras, as well as the general real Cayley-Dickson algebras, one must first consider a basic problem— how to characterize similarity of elements in these algebras, which leads us to the work in this article. Throughout $`,𝒪,`$ and $`𝒮`$ denote the real quaternion, octonion, and sedenion algebras, respectively; $`𝒜_n`$ denotes the $`2^n`$-dimensional real Cayley-Dickson algebra, $``$ and $`𝒞`$ denote the real and complex number fields, respectively. It is well known that $`𝒜_0=`$, $`𝒜_1=𝒞`$, $`𝒜_2=`$, $`𝒜_3=𝒪`$, and $`𝒜_4=𝒮`$. As is well-known, the real quaternion algebra $``$ is a four-dimensional associative division algebra over the real number field $``$ with its basis $`1,i,j,k`$ satisfying the multiplication rules $`i^2=j^2=k^2=1,`$ $`ij=ji=k,`$ $`jk=kj=i,`$ $`ki=ik=j.`$ The elements in $``$ take the form $`a=a_0+a_1i+a_2j+a_3k,`$ where $`a_0`$$`a_3`$, which can simply be written as $`a=\mathrm{Re}a+\mathrm{Im}a,`$ where $`\mathrm{Re}a=a_0`$ and $`\mathrm{Im}a=a_1i+a_2j+a_3k.`$ The conjugate of $`a`$ is defined to be $`\overline{a}=a_0a_1ia_2ja_3k=\mathrm{Re}a\mathrm{Im}a,`$ which satisfies $`\overline{\overline{a}}=a,\overline{a+b}=\overline{a}+\overline{b}`$ and $`\overline{ab}=\overline{b}\overline{a}`$ for all $`a,b.`$ The norm of $`a`$ is defined to be $`|a|=\sqrt{a\overline{a}}=\sqrt{\overline{a}a}=\sqrt{a_0^2+a_1^2+a_2^2+a_3^2}.`$ Some basic operation properties on quaternions are listed below $$a^22(\mathrm{Re}a)a+|a|^2=0,(\mathrm{Im}a)^2=|\mathrm{Im}a|^2,$$ $`(1.1)`$ $$|ab|=|a||b|,$$ $`(1.2)`$ $$a^1=\frac{\overline{a}}{|a|^2},$$ $`(1.3)`$ $$\mathrm{Re}(ab)=\mathrm{Re}(ba).$$ $`(1.4)`$ As for octonions, they can be defined by the Cayley-Dickson process as an extension of quaternions as follows $$a=a^{}+a^{\prime \prime }e,$$ where $`a^{},a^{\prime \prime }`$, and the addition and multiplication for any $`a=a^{}+a^{\prime \prime }e,b=b^{}+b^{\prime \prime }e𝒪`$ are $$a+b=(a^{}+a^{\prime \prime }e)+(b^{}+b^{\prime \prime }e)=(a^{}+b^{})+(a^{\prime \prime }+b^{\prime \prime })e,$$ $`(1.5)`$ $$ab=(a^{}+a^{\prime \prime }e)(b^{}+b^{\prime \prime }e)=(a^{}b^{}\overline{b^{\prime \prime }}a^{\prime \prime })+(b^{\prime \prime }a^{}+a^{\prime \prime }\overline{b^{}})e.$$ $`(1.6)`$ In that case, $`𝒪`$ is spanned as an eight-dimensional non-associative but alternative division algebra over the real number field $``$ with its canonical basis as follows 
$$1,e_1=i,e_2=j,e_3=k,e_4=e,e_5=ie,e_6=je,e_7=ke.$$ $`(1.7)`$ The multiplication table for the basis can be derived from (1.6)( see, e.g., ), but we omit it here. In term of (1.7), all elements of $`𝒪`$ take the form $$a=a_0+a_1e_1+\mathrm{}+a_7e_7,$$ where $`a_0`$$`a_7`$, which can simply be written as $`a=\mathrm{Re}a+\mathrm{Im}a,`$ where $`\mathrm{Re}a=a_0.`$ The conjugate of $`a=a^{}+a^{\prime \prime }e`$ is defined to be $`\overline{a}=\overline{a^{}}a^{\prime \prime }e=\mathrm{Re}a\mathrm{Im}a,`$ which satisfies $`\overline{\overline{a}}=a,\overline{a+b}=\overline{a}+\overline{b}`$ and $`\overline{ab}=\overline{b}\overline{a}`$ for all $`a,b𝒪.`$ The norm of $`a`$ is defined to be $`|a|=\sqrt{a\overline{a}}=\sqrt{\overline{a}a}=\sqrt{a_0^2+a_1^2+\mathrm{}+a_7^2}.`$ Some basic operation properties used in the sequel on octonions are listed below $$a(ab)=a^2b,(ba)a=ba^2,$$ $`(1.8)`$ $$(ab)a=a(ba):=aba,(ab)a^1=a(ba^1):=aba^1,$$ $`(1.9)`$ $$a^1=\frac{\overline{a}}{|a|^2},$$ $`(1.10)`$ $$a^22(\mathrm{Re}a)a+|a|^2=0,(\mathrm{Im}a)^2=|\mathrm{Im}a|^2,$$ $`(1.11)`$ $$|ab|=|a||b|,$$ $`(1.12)`$ $$\mathrm{Re}(ab)=\mathrm{Re}(ba).$$ $`(1.13)`$ By the Cayley-Dickson process, the sedenion algebra $`𝒮`$ is an extension of the octonion algebra $`𝒪`$ with adding a new generator $`\epsilon `$ in it. The elements in $`𝒮`$ take the form $$a=a^{}+a^{\prime \prime }\epsilon ,$$ where $`a^{},a^{\prime \prime }𝒪`$, and the addition and multiplication for any $`a=a^{}+a^{\prime \prime }\epsilon ,b=b^{}+b^{\prime \prime }\epsilon 𝒮`$ are defined by $$a+b=(a^{}+a^{\prime \prime }\epsilon )+(b^{}+b^{\prime \prime }\epsilon )=(a^{}+b^{})+(a^{\prime \prime }+b^{\prime \prime })\epsilon ,$$ $`(1.14)`$ $$ab=(a^{}+a^{\prime \prime }\epsilon )(b^{}+b^{\prime \prime }\epsilon )=(a^{}b^{}\overline{b^{\prime \prime }}a^{\prime \prime })+(b^{\prime \prime }a^{}+a^{\prime \prime }\overline{b^{}})\epsilon .$$ $`(1.15)`$ In that case, $`𝒮`$ is spanned as a sixteen-dimensional nonassociative algebra over $``$ with its canonical basis as follows $$1,e_1,\mathrm{},e_7,e_8=\epsilon ,e_9=e_1\epsilon ,\mathrm{},e_{15}=e_7\epsilon ,$$ $`(1.16)`$ where $`1,e_1`$$`e_7`$ are the canonical basis of $`𝒪`$. The multiplication table for this basis can be found in . In term of (1.16), all elements of $`𝒮`$ can be written as $$a=a_0+a_1e_1+\mathrm{}+a_{15}e_{15},$$ where $`a_0`$$`a_{15}`$, or simply $`a=\mathrm{Re}a+\mathrm{Im}a,`$ where $`\mathrm{Re}a=a_0.`$ The conjugate of $`a`$ is defined to be $`\overline{a}=\overline{a^{}}a^{\prime \prime }\epsilon =\mathrm{Re}a\mathrm{Im}a,`$ which satisfies $`\overline{\overline{a}}=a`$ and $`\overline{a+b}=\overline{a}+\overline{b},\overline{ab}=\overline{b}\overline{a}`$ for all $`a,b𝒮.`$ The norm of $`a`$ is defined to be $`|a|=\sqrt{a\overline{a}}=\sqrt{\overline{a}a}=\sqrt{a_0^2+a_1^2+\mathrm{}+a_{15}^2}.`$ It is well known that $`𝒮`$ is a non-commutative, non-alternative, non-composition and non-division algebra, but it is a power-associative, flexible and quadratic algebra over $``$, namely, $$aa^2=a^2a,a(aa^2)=a^2a^2,$$ $`(1.17)`$ $$(ab)a=a(ba):=aba,$$ $`(1.18)`$ $$a^22(\mathrm{Re}a)a+|a|^2=0,(\mathrm{Im}a)^2=|\mathrm{Im}a|^2.$$ $`(1.19)`$ In addition $$a^1=\frac{\overline{a}}{|a|^2},$$ $`(1.20)`$ $$\mathrm{Re}(ab)=\mathrm{Re}(ba),$$ $`(1.21)`$ $$(ab)\overline{a}=a(b\overline{a}):=ab\overline{a},(ab)a^1=a(ba^1):=aba^1.$$ $`(1.22)`$ 2. 
Similarity and consimilarity of quaternions In this section, we fist solve two basic equations $`ax=xb`$ and $`ax=\overline{x}b`$, and then introduce the concepts of similarity and consimilarity for elements in $``$. In addition, we shall also present some interesting consequences. Theorem 2.1. Let $`a=a_0+a_1i+a_2j+a_3k`$ and $`b=b_0+b_1i+b_2j+b_3k`$ be two given quaternions. Then (a) The linear equation $$ax=xb$$ $`(2.1)`$ has a nonzero solution for $`x`$ if and only if $$a_0=b_0,and|\mathrm{Im}a|=|\mathrm{Im}b|.$$ $`(2.2)`$ (b) In that case, the general solution of Equation (2.1) is $$x=(\mathrm{Im}a)p+p(\mathrm{Im}b),$$ $`(2.3)`$ or equivalently $`x=app\overline{b},`$ where $`p`$ is arbitrary. (c) In particular, if $`b\overline{a},`$ i.e., $`\mathrm{Im}a+\mathrm{Im}b0,`$ then Equation (2.1) has a solution as follows $$x=\lambda _1(\mathrm{Im}a+\mathrm{Im}b)+\lambda _2[|\mathrm{Im}a||\mathrm{Im}b|(\mathrm{Im}a)(\mathrm{Im}b)],$$ $`(2.4)`$ where $`\lambda _1,\lambda _2`$ are arbitrary. If $`b=\overline{a},`$ then the general solution to Equation (2.1) can be written as $$x=x_1i+x_2j+x_3k,$$ $`(2.5)`$ where $`x_1,x_2`$ and $`x_3`$ satisfy $`a_1x_1+a_2x_2+a_3x_3=0.`$ Proof. Assume first that (2.1) has a nonzero solution $`x`$ . Then by (1.2) and (1.4) we find $$ax=xb|ax|=|xb||a||x|=|x||b||a|=|b|$$ and $$ax=xba=xbx^1\mathrm{Re}a=\mathrm{Re}(xbx^1)=\mathrm{Re}(bx^1x)=\mathrm{Re}b,$$ which are equivalent to (2.2). Conversely, substituting (2.3) into $`axxb`$, we obtain $`axxb`$ $`=`$ $`(a_0+\mathrm{Im}a)[(\mathrm{Im}a)p+p(\mathrm{Im}b)][(\mathrm{Im}a)p+p(\mathrm{Im}b)](b_0+\mathrm{Im}b)`$ $`=`$ $`(a_0b_0)x+(\mathrm{Im}a)^2p+(\mathrm{Im}a)p(\mathrm{Im}b)(\mathrm{Im}a)p(\mathrm{Im}b)p(\mathrm{Im}b)^2`$ $`=`$ $`(a_0b_0)x+(|\mathrm{Im}b|^2|\mathrm{Im}a|^2)p.`$ Under (2.2) the right-hand side of the above equality is zero. Thus (2.3) is a solution of (2.1). Next suppose that $`x_0`$ is any solution of (2.1) under (2.2), and let $$p=\frac{(\mathrm{Im}a)x_0}{2|\mathrm{Im}a|^2}=\frac{x_0(\mathrm{Im}b)}{2|\mathrm{Im}b|^2}.$$ Then (2.3) becomes $$x=\frac{(\mathrm{Im}a)^2x_0}{2|\mathrm{Im}a|^2}\frac{x_0(\mathrm{Im}b)^2}{2|\mathrm{Im}b|^2}=\frac{1}{2}x_0+\frac{1}{2}x_0=x_0,$$ which shows that any solution to (2.1) can be represented by (2.3). Thus (2.3) is the general solution of (2.1) under (2.2). If setting $`p=1`$ in (2.3), then we know that $`x_1=\mathrm{Im}a+\mathrm{Im}b`$ is a special solution to (2.1), and if setting $`p=\mathrm{Im}b`$ in (2.3), then we get $`x_2=|\mathrm{Im}a||\mathrm{Im}b|(\mathrm{Im}a)(\mathrm{Im}b)`$, another special solution to (2.1). Thus $`x=\lambda _1x_1+\lambda _1x_2`$, where $`\lambda _1,\lambda _2`$ are arbitrary, is also a solution to (2.1) under (2.2) and $`b\overline{a}.`$ If $`b=\overline{a},`$ then (2.1) is $`ax=x\overline{a}`$, namely $$2(\mathrm{Re}x)(\mathrm{Im}a)+(\mathrm{Im}a)(\mathrm{Im}x)+(\mathrm{Im}x)(\mathrm{Im}a)=0,$$ which is also equivalent to $$\mathrm{Re}x=0,anda_1x_1+a_2x_2+a_3x_3=0.$$ Thus we have (2.5). $`\mathrm{}`$ An equivalent statement for (2.1) to have a nonzero solution is that $`a`$ and $`b`$ are similar, which is written by $`ab`$. A direct consequence of Theorem 2.1 is given below. Corollary 2.2. Let $`a`$ given with $`a.`$ Then the equation $$ax=x(\mathrm{Re}a+|\mathrm{Im}a|i)$$ $`(2.6)`$ always has a nonzero solution, namely, $`a\mathrm{Re}a+|\mathrm{Im}a|i,`$ and the general solution to Equation (2.6) is $$x=(\mathrm{Im}a)p+|\mathrm{Im}a|pi,$$ $`(2.7)`$ where $`p`$ is arbitrary. 
In particular, if $`a𝒞,`$ then a solution of Equation (2.6) is $$x=\lambda _1[|\mathrm{Im}a|i+\mathrm{Im}a]+\lambda _2[|\mathrm{Im}a|(\mathrm{Im}a)i],$$ where $`\lambda _1,\lambda _2`$ are arbitrary. If $`a=a_0+a_1i`$ with $`a_1<0,`$ then the general solution to (2.6) is $$x=x_2j+x_3k,forallx_2,x_3.$$ Through (2.6), one can easily find out powers and $`n`$th roots of quaternions. This topic was previously examined in , and . Theorem 2.3. Let $`a=a_0+a_1i+a_2j+a_3k`$ and $`b=b_0+b_1i+b_2j+b_3k`$ be two given quaternions. Then the equation $$ax=\overline{x}b$$ $`(2.8)`$ has a nonzero solution for $`x`$ if and only if $$|a|=|b|.$$ $`(2.9)`$ In that case, if $`a+\overline{b}0,`$ then Equation (2.8) has a solution as follows $$x=\lambda (\overline{a}+b),$$ $`(2.10)`$ where $`\lambda `$ is arbitrary. If $`a+\overline{b}=0,`$ then the general solution to Equation (2.8) is can be written as $$x=x_0+x_1i+x_2j+x_3k,$$ $`(2.11)`$ where $`x_0`$$`x_3`$ satisfy $`a_0x_0a_1x_1a_2x_2a_3x_3=0.`$ Proof. Suppose first that (2.8) has a nonzero solution $`x`$. Then by (1.2) we get $$ax=xb|ax|=|xb||a||x|=|x||b||a|=|b|.$$ Conversely, we let $`x_1=\overline{a}+b`$. Then $`ax_1\overline{x}_1b`$ $`=`$ $`a(\overline{a}+b)(a+\overline{b})b`$ $`=`$ $`a\overline{a}+abab+\overline{b}b=|a|^2|b|^2.`$ Thus (2.10) is a solution to (2.8) under (2.9). If $`a+\overline{b}=0`$, then (2.8) is equivalent to $`ax+\overline{ax}=0`$, i.e., $`\mathrm{Re}(ax)=0`$. Thus we have (2.11). $`\mathrm{}`$ Based on the equation in (2.8) we can also extend the concept of consimilarity on complex matrices ( see, e.g., ) to quaternions. Two quaternions $`a`$ and $`b`$ are said to be consimilar if there is a nonzero $`p`$ such that $`a=\overline{p}bp^1`$. By Theorem 2.3, we immediately know that two quaternions are consimilar if and only if their norms are identical. Thus the consimilarity defined here is also an equivalence relation on quaternions. Corollary 2.4. Any quaternion $`a`$ with $`a`$ is consimilar to its norm, namely, $`a=\overline{p}|a|p^1,`$ where $`p=|a|+\overline{a}`$. Corollary 2.5. Let $`a,b`$ be given with $`a,b.`$ Then $`a|a|^1`$ and $`b|b|^1`$ are consimilar. An interesting consequence of Corollary 2.4 is given below. Corollary 2.6. Let $`a`$ be given with $`a`$. Then the quadratic equation $`x^2=a`$ has two quaternion solutions as follows $$x=\pm \frac{|a|^{\frac{1}{2}}(|a|+a)}{||a|+a|}=\pm (\lambda _0+\lambda _1a),$$ $`(2.12)`$ where $`\lambda _0=\frac{|a|^{\frac{3}{2}}}{||a|+a|}`$ and $`\lambda _1=\frac{|a|^{\frac{1}{2}}}{||a|+a|}.`$ Proof. By Corollary 2.4, we can write $`a`$ as $`a=|a|\overline{p}p^1`$, where $`p=|a|+\overline{a}.`$ Thus by (1.3), we have $$a=|a|\frac{\overline{p}^2}{|p|^2}=\left(|a|^{\frac{1}{2}}\frac{\overline{p}}{|p|}\right)^2,$$ which shows that the two quaternions in (2.12) are the solutions to $`x^2=a`$. $`\mathrm{}`$ The above result can also be restated that any quaternion $`a`$ with $`a`$ has two square roots as in (2.12). Based on the result in Corollary 2.6, solutions can also explicitly be derived for some other simple quadratic equations over $``$, such as, $`xax=b,x^2+bx+xb+c=0`$ and $`x^2+xb+c=0`$ with $`bc=cb`$. Based on the results in Theorem 2.1, we can also solve the quaternion equation $`\overline{x}ax=b`$, which was exmined previouly in . Theorem 2.7. Let $`a,b`$ be given with $`a,b`$. 
Then the equation $`\overline{x}ax=b`$ is solvable if and only if there is a $`\lambda `$ with $`\lambda >0`$ such that $$\mathrm{Re}a=\lambda \mathrm{Re}b,and|\mathrm{Im}a|=\lambda |\mathrm{Im}b|.$$ $`(2.13)`$ In that case, a solution to $`\overline{x}ax=b`$ can be written as $$x=\frac{(\mathrm{Im}a)p+p(\mathrm{Im}b)}{\sqrt{\lambda }|(\mathrm{Im}a)p+p(\mathrm{Im}b)|},$$ $`(2.14)`$ where $`p`$ is arbitrarily chosen such that $`(\mathrm{Im}a)p+p(\mathrm{Im}b)0`$. Proof. Notice that $`x^1=\overline{x}/|x|^2`$ for a nonzero quaternion $`x`$. Thus the equation $`\overline{x}ax=b`$ can equivalently be written as $$ax=\frac{x}{|x^2|}b.$$ $`(2.15)`$ If (2.15) is solvable for $`x`$, then $$\mathrm{Re}a=\frac{1}{|x|^2}\mathrm{Re}b,\mathrm{and}|\mathrm{Im}a|=\frac{1}{|x|^2}|\mathrm{Im}b|$$ by Theorem 2.1(a), which implies (2.13). Conversely if (2.13) holds, it is easy to verify that (2.14) satisfies (2.15). $`\mathrm{}`$ Without much effort, all the above results can be extended to the octonion algebra, which are given in the next section. 3. Similarity and consimilarity of octonions Theorem 3.1. Let $`a=a_0+a_1e_1+\mathrm{}+a_7e_7`$ and $`b=b_0+b_1e_1+\mathrm{}+b_7e_7`$ be two given octonions. Then (a) The linear equation $$ax=xb$$ $`(3.1)`$ has a nonzero solution for $`x𝒪`$ if and only if $$a_0=b_0,and|\mathrm{Im}a|=|\mathrm{Im}b|.$$ $`(3.2)`$ (b) In that case, if $`b\overline{a},`$ i.e., $`\mathrm{Im}a+\mathrm{Im}b0,`$ then the general solution of Equation (3.1) can be expressed as $$x=(\mathrm{Im}a)p+p(\mathrm{Im}b),$$ $`(3.3)`$ where $`p`$ is arbitrarily chosen in $`𝒜(a,b),`$ the subalgebra generated by $`a`$ and $`b`$. In particular, Equation (3.1) has a solution as follows $$x=\lambda _1(\mathrm{Im}a+\mathrm{Im}b)+\lambda _2[|\mathrm{Im}a||\mathrm{Im}b|(\mathrm{Im}a)(\mathrm{Im}b)],$$ $`(3.4)`$ where $`\lambda _1,\lambda _2`$ are arbitrary. (c) If $`b=\overline{a},`$ then the general solution of Equation (3.1) is $$x=x_1e_1+x_2e_2+\mathrm{}+x_7e_7,$$ $`(3.5)`$ where $`x_1`$$`x_7`$ satisfy $`a_1x_1+a_2x_2+\mathrm{}+a_7x_7=0.`$ Proof. Assume first that (3.1) has a nonzero solution $`x𝒪`$. Then by (1.8), (1.9) and (1.12), (1.13) we find $$ax=xb|ax|=|xb||a||x|=|x||b||a|=|b|$$ and $$ax=xba=xbx^1\mathrm{Re}a=\mathrm{Re}(xbx^1)=\mathrm{Re}(bx^1x)=\mathrm{Re}b,$$ which are equivalent to (3.2). Conversely, note that $`p𝒜(a,b)`$ in (3.3). The products of $`a,b`$ with $`p`$ are associative. Thus $`x`$ in (3.3) and $`a,b`$ satisfy $`axxb`$ $`=`$ $`(a_0+\mathrm{Im}a)[(\mathrm{Im}a)p+p(\mathrm{Im}b)][(\mathrm{Im}a)p+p(\mathrm{Im}b)](b_0+\mathrm{Im}b)`$ $`=`$ $`(a_0b_0)x+(\mathrm{Im}a)^2p+(\mathrm{Im}a)p(\mathrm{Im}b)(\mathrm{Im}a)p(\mathrm{Im}b)p(\mathrm{Im}b)^2`$ $`=`$ $`(a_0b_0)x+(|\mathrm{Im}b|^2|\mathrm{Im}a|^2)p,`$ Under (3.2) the right-hand sides of the above equality is zero. Thus (3.3) is a solution of (3.1). Next suppose that $`x_0`$ is any solution of (3.1) under (3.2), then it must belong to $`𝒜(a,b)`$ because (3.1) is linear. Now let $$p=\frac{(\mathrm{Im}a)x_0}{2|\mathrm{Im}a|^2}=\frac{x_0(\mathrm{Im}b)}{2|\mathrm{Im}b|^2},$$ in (3.3). Then $`p𝒜(a,b)`$ and (3.3) becomes $$x=\frac{(\mathrm{Im}a)^2x_0}{2|\mathrm{Im}a|^2}\frac{x_0(\mathrm{Im}b)^2}{2|\mathrm{Im}b|^2}=\frac{1}{2}x_0+\frac{1}{2}x_0=x_0,$$ which shows that any solution to (3.1) can be represented by (3.3). Thus (3.3) is the general solution of (3.1) under (3.2). 
If setting $`p=1`$ in (3.3), then we know that $`x_1=\mathrm{Im}a+\mathrm{Im}b`$ is a special solution to (3.1), and if setting $`p=\mathrm{Im}b`$ in (3.3), then we get $`x_2=|\mathrm{Im}a||\mathrm{Im}b|(\mathrm{Im}a)(\mathrm{Im}b)`$, another special solution to (3.1). Thus $`x=\lambda _1x_1+\lambda _1x_2`$, where $`\lambda _1,\lambda _2`$ are arbitrary, is also a solution to (3.1) under (3.2) and $`b\overline{a}.`$ If $`b=\overline{a},`$ then (3.1) is $`ax=x\overline{a}`$, namely $$2(\mathrm{Re}x)(\mathrm{Im}a)+(\mathrm{Im}a)(\mathrm{Im}x)+(\mathrm{Im}x)(\mathrm{Im}a)=0,$$ which is also equivalent to $$\mathrm{Re}x=0,\mathrm{and}a_1x_1+a_2x_2+\mathrm{}+a_7x_7=0.$$ Thus we have (3.5). $`\mathrm{}`$ We guess that (3.4) is equivalent to (3.3), but fail to give a satisfactory proof. Based on the equation (3.1), we can define similarity of octonions. Two octonions $`a`$ and $`b`$ are said to be similar if there is a nonzero $`p𝒪`$ such that $`a=pbp^1`$, which is written as $`ab`$. By Theorem 3.1, we immediately know that two octonions are similar if and only if $`\mathrm{Re}a=\mathrm{Re}b`$ and $`|\mathrm{Im}a|=|\mathrm{Im}b|`$. Thus the similarity defined here is also an equivalence relation on octonions. A direct consequence of Theorem 3.1 is given below. Corollary 3.2. For any $`a𝒪`$ with $`a𝒞,`$ the equation $$ax=x(\mathrm{Re}a+|\mathrm{Im}a|i)$$ $`(3.6)`$ always has a nonzero solution, namely, $`a\mathrm{Re}a+|\mathrm{Im}a|i,`$ and a solution to Equation (3.6) is $$x=\lambda _1(|\mathrm{Im}a|i+\mathrm{Im}a)+\lambda _2[|\mathrm{Im}a|(\mathrm{Im}a)i]forall\lambda _1,\lambda _2.$$ In particular, if $`a=a_0+a_1i`$ with $`a_1<0`$, then the general solution to (3.6) is $$x=x_2e_2+\mathrm{}+x_7e_7,forallx_2,\mathrm{},x_7.$$ The result in (3.6) can also be written as $`a=x(\mathrm{Re}a+|\mathrm{Im}a|i)x^1`$. Through it one can easily find out powers and $`n`$th roots of octonions. Theorem 3.3. Let $`a=a_0+a_1e_1+\mathrm{}+a_7e_7`$ and $`b=b_0+b_1e_1+\mathrm{}+b_7e_7`$ be two given octonions. Then the equation $$ax=\overline{x}b$$ $`(3.7)`$ has a nonzero solution for $`x𝒪`$ if and only if $$|a|=|b|.$$ $`(3.8)`$ In that case, if $`a+\overline{b}0,`$ then (3.7) has a solution as follows $$x=\lambda (\overline{a}+b),$$ $`(3.9)`$ where $`\lambda `$ is arbitrary. In particular, if $`a+\overline{b}=0,`$ i.e., $`a_0+b_0=0`$ and $`\mathrm{Im}a=\mathrm{Im}b,`$ then the general solution to (3.6) is $$x=x_0+x_1e_1+\mathrm{}+x_7e_7,$$ $`(3.10)`$ where $`x_0`$$`x_7`$ satisfy $`a_0x_0a_1x_1\mathrm{}a_7x_7=0.`$ Proof. Suppose first that (3.7) has a nonzero solution $`x𝒪`$. Then by (1.12) we get $$ax=xb|ax|=|xb||a||x|=|x||b||a|=|b|.$$ Conversely, we let $`x_1=\overline{a}+b`$. Then $$ax_1\overline{x_1}b=a(\overline{a}+b)(a+\overline{b})b=a\overline{a}+abab+\overline{b}b=|a|^2|b|^2.$$ Thus (3.9) is a solution to (3.7) under (3.8). If $`a+\overline{b}=0`$, then (3.7) is equivalent to $`ax+\overline{ax}=0`$, i.e., $`\mathrm{Re}(ax)=0`$. Thus we have (3.10). $`\mathrm{}`$ Based on the equation in (3.7) we can also define the consimilarity of octonions. Two octonions $`a`$ and $`b`$ are said to be consimilar if there is a nonzero $`p𝒪`$ such that $`a=\overline{p}bp^1`$. By Theorem 3.3, we immediately know that two octonions are consimilar if and only if their norms are identical. Thus the consimilarity defined here is also an equivalence relation on octonions. Corollary 3.4. Any octonion $`a`$ with $`a`$ is consimilar to its norm, namely, $`a=\overline{p}|a|p^1`$, where $`p=|a|+\overline{a}`$. Corollary 3.5. 
Let $`a,b𝒪`$ be given with $`a,b.`$ Then $`a|a|^1`$ and $`b|b|^1`$ are consimilar. Corollary 3.6. Let $`a𝒪`$ be given with $`a`$. Then the quadratic equation $`x^2=a`$ has two octonion solutions as follows $$x=\pm \frac{|a|^{\frac{1}{2}}(|a|+a)}{||a|+a|}=\pm (\lambda _0+\lambda _1a),$$ $`(3.11)`$ where $`\lambda _0=\frac{|a|^{\frac{3}{2}}}{||a|+a|}`$ and $`\lambda _1=\frac{|a|^{\frac{1}{2}}}{||a|+a|}.`$ Proof. Follows from Corollary 3.4. $`\mathrm{}`$ Based on the result in Corollary 3.6, solutions can also be found for some other quadratic equations over $`𝒪`$ , such as, $`xax=b,x^2+bx+xb+c=0`$ and $`x^2+xb+c=0`$ with $`c𝒜(b),`$ the subalgebra generated by $`b`$. Theorem 3.7. Let $`a,b𝒪`$ be given with $`a,b`$. Then the equation $`\overline{x}ax=b`$ is solvable if and only if there is a $`\lambda `$ with $`\lambda >0`$ such that $$\mathrm{Re}a=\lambda \mathrm{Re}b,and|\mathrm{Im}a|=\lambda |\mathrm{Im}b|.$$ $`(3.12)`$ In that case, a solution to $`\overline{x}ax=b`$ can be written as $$x=\frac{(\mathrm{Im}a)p+p(\mathrm{Im}b)}{\sqrt{\lambda }|(\mathrm{Im}a)p+p(\mathrm{Im}b)|},$$ $`(3.13)`$ where $`p𝒜(a,b)`$ is arbitrarily chosen such that $`(\mathrm{Im}a)p+p(\mathrm{Im}b)0`$. Proof. Notice that $`x^1=\overline{x}/|x|^2`$ for a nonzero quaternion $`x`$. Thus the equation $`\overline{x}ax=b`$ can equivalently be written as $$ax=\frac{x}{|x^2|}b.$$ $`(3.14)`$ If (3.14) is solvable for $`x`$, then $$\mathrm{Re}a=\frac{1}{|x|^2}\mathrm{Re}b,\mathrm{and}|\mathrm{Im}a|=\frac{1}{|x|^2}|\mathrm{Im}b|$$ by Theorem 3.1(a), which implies (3.12). Conversely if (3.12) holds, it is easy to verify that (3.13) satisfies (3.14). $`\mathrm{}`$ 4. Similarity and consimilarity of sedenions The results in the above two sections can partly be extended to the sedenion algebra $`𝒮`$. Theorem 4.1. Let $`a=a_0+a_1e_1+\mathrm{}+a_{15}e_{15}`$ and $`b=b_0+b_1e_1+\mathrm{}+b_{15}e_{15}`$ be two given sedenions. If $$a_0=b_0,and|\mathrm{Im}a|=|\mathrm{Im}b|,$$ $`(4.1)`$ and $`b\overline{a},`$ then the linear equation $$ax=xb$$ $`(4.2)`$ has a solution as follows $$x=\lambda (\mathrm{Im}a+\mathrm{Im}b),$$ $`(4.3)`$ where $`\lambda `$ is arbitrary. If $`b=\overline{a},`$ then the general solution to Equation (4.2) is $$x=x_1e_1+x_2e_2+\mathrm{}+x_{15}e_{15},$$ $`(4.4)`$ where $`x_1`$$`x_{15}`$ satisfy $`a_1x_1+a_2x_2+\mathrm{}+a_{15}x_{15}=0.`$ Proof. Let $`x_1=\mathrm{Im}a+\mathrm{Im}b`$. Then it is easy to verify that $`ax_1x_1b`$ $`=`$ $`(a_0+\mathrm{Im}a)(\mathrm{Im}a+\mathrm{Im}b)(\mathrm{Im}a+\mathrm{Im}b)(b_0+\mathrm{Im}b)`$ $`=`$ $`(a_0b_0)x_1+(\mathrm{Im}a)^2+(\mathrm{Im}a)(\mathrm{Im}b)(\mathrm{Im}a)(\mathrm{Im}b)(\mathrm{Im}b)^2`$ $`=`$ $`(a_0b_0)x_1+|\mathrm{Im}b|^2|\mathrm{Im}a|^2.`$ Under (4.1), the right-hand side of the above equality is zero. Thus (4.3) is a nonzero solution of (4.2). If $`b=\overline{a},`$ then (4.2) is $`ax=x\overline{a}`$, namely $$2(\mathrm{Re}x)(\mathrm{Im}a)+(\mathrm{Im}a)(\mathrm{Im}x)+(\mathrm{Im}x)(\mathrm{Im}a)=0,$$ which is also equivalent to $$\mathrm{Re}x=0,anda_1x_1+a_2x_2+\mathrm{}+a_{15}x_{15}=0.$$ Thus we have (4.4). $`\mathrm{}`$ Since $`𝒮`$ is not a division algebra, it may occur that there is a nonzero $`x𝒮`$ such that $`ax=0`$ and $`xb=0`$ for some $`a,b𝒮`$. Thus (4.1) is not a necessary condition for (4.2) to have a nonzero solution. Besides, since $`𝒮`$ is non-alternative, the sedenion $`x_2=|\mathrm{Im}a||\mathrm{Im}b|(\mathrm{Im}a)(\mathrm{Im}b)`$ is no longer a solution to (4.2) under (4.1). 
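The two obstructions just mentioned — the existence of zero divisors in $`𝒮`$ and the failure of the alternative laws — are easy to see numerically. The following sketch is ours, not part of the paper: it implements the doubling rule (1.14)–(1.15) for elements stored as nested pairs of reals, searches for nonzero products of the form $`(e_i\pm e_j)(e_k\pm e_l)`$ that vanish, and measures the deviation from $`a(ab)=(aa)b`$ for random sedenions. The tolerance and the restriction to two-term basis combinations are illustrative choices.

```python
# A minimal numerical sketch, assuming the doubling convention of (1.15).
import itertools
import random

def neg(x):  return -x if isinstance(x, float) else (neg(x[0]), neg(x[1]))
def conj(x): return x if isinstance(x, float) else (conj(x[0]), neg(x[1]))
def add(x, y):
    return x + y if isinstance(x, float) else (add(x[0], y[0]), add(x[1], y[1]))
def mul(x, y):
    # doubling rule: (a'+a''t)(b'+b''t) = (a'b' - conj(b'')a'') + (b''a' + a''conj(b'))t
    if isinstance(x, float):
        return x * y
    return (add(mul(x[0], y[0]), neg(mul(conj(y[1]), x[1]))),
            add(mul(y[1], x[0]), mul(x[1], conj(y[0]))))

def pack(c):     # 2^n real coefficients -> nested pairs (reals at the leaves)
    return float(c[0]) if len(c) == 1 else (pack(c[:len(c) // 2]), pack(c[len(c) // 2:]))
def unpack(x):
    return [x] if isinstance(x, float) else unpack(x[0]) + unpack(x[1])
def basis(k):    # sedenion basis element e_k in the ordering (1.16), e_0 = 1
    c = [0.0] * 16
    c[k] = 1.0
    return pack(c)
def is_zero(x, tol=1e-12):
    return all(abs(c) < tol for c in unpack(x))

E = [basis(k) for k in range(16)]

# sanity check of the sign convention: e_1 e_2 = e_3, as in the quaternion table
assert is_zero(add(mul(E[1], E[2]), neg(E[3])))

# zero divisors: nonzero products (e_i + s1*e_j)(e_k + s2*e_l) that vanish in S
hits = []
for (i, j), (k, l) in itertools.product(itertools.combinations(range(1, 16), 2), repeat=2):
    for s1, s2 in itertools.product((1.0, -1.0), repeat=2):
        u = add(E[i], neg(E[j]) if s1 < 0 else E[j])
        v = add(E[k], neg(E[l]) if s2 < 0 else E[l])
        if is_zero(mul(u, v)):
            hits.append((i, int(s1), j, k, int(s2), l))
print("vanishing products of two-term basis combinations:", len(hits))
print("one example (i, s1, j, k, s2, l):", hits[0] if hits else "none found")

# non-alternativity: a(ab) differs from (aa)b for generic sedenions
a = pack([random.uniform(-1, 1) for _ in range(16)])
b = pack([random.uniform(-1, 1) for _ in range(16)])
gap = max(abs(c) for c in unpack(add(mul(a, mul(a, b)), neg(mul(mul(a, a), b)))))
print("max component of a(ab) - (aa)b:", gap)   # nonzero, unlike in H or O
```

The same routine reproduces the quaternion and octonion products as the 4- and 8-dimensional special cases, which is the purpose of the sanity check on $`e_1e_2=e_3`$.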
Nevertheless, the similarity concept can reasonably be established for elements in $`𝒮`$, according to the result in Theorem 4.1 as follows: two sedenions $`a`$ and $`b`$ are said to be similar if $`\mathrm{Re}a=\mathrm{Re}b`$ and $`|\mathrm{Im}a|=|\mathrm{Im}b|`$. Clearly the similarity defined above is also an equivalence relation for elements in S. Moreover, we still have the following. Corollary 4.2. Let $`a𝒮`$ given with $`a𝒞`$. Then $`a`$ is similar to the complex number $`\mathrm{Re}a+|\mathrm{Im}a|i`$ and both of them satisfy the equality $$ax=x(\mathrm{Re}a+|\mathrm{Im}a|i),$$ $`(4.5)`$ where $$x=\lambda (|\mathrm{Im}a|i+\mathrm{Im}a),\lambda .$$ In particular, if $`a=a_0+a_1i`$ with $`a_1<0,`$ then the general solution to (4.5) is $$x=x_2e_2+\mathrm{}+x_{15}e_{15},forallx_2,\mathrm{},x_{15}.$$ Theorem 4.3. Let $`a=a_0+a_1e_1+\mathrm{}+a_{15}e_{15}`$ and $`b=b_0+b_1e_1+\mathrm{}+b_{15}e_{15}`$ be two given sedenions. If $$|a|=|b|,$$ $`(4.6)`$ and $`a+\overline{b}0,`$ then the equation $$ax=\overline{x}b$$ $`(4.7)`$ has a solution as follows $$x=\lambda (\overline{a}+b),$$ $`(4.8)`$ where $`\lambda `$ is arbitrary. If $`a+\overline{b}=0,`$ then the general solution to (4.7) is $$x=x_0+x_1e_1+\mathrm{}+x_{15}e_{15},$$ $`(4.9)`$ where $`x_0`$$`x_{15}`$ satisfy $`a_0x_0a_1x_1\mathrm{}a_{15}x_{15}=0.`$ Proof. Let $`x_1=\overline{a}+b`$. Then $$ax_1\overline{x_1}b=a(\overline{a}+b)(a+\overline{b})b=a\overline{a}+abab+\overline{b}b=|a|^2|b|^2.$$ Thus (4.8) is a solution to (4.7) under (4.6). If $`a+\overline{b}=0`$, then (4.7) is equivalent to $`ax+\overline{ax}=0`$, i.e., $`\mathrm{Re}(ax)=0`$. Thus we have (4.9). $`\mathrm{}`$ Since $`𝒮`$ is not a division algebra, it may occur that there is a nonzero $`x𝒮`$ such that $`ax=0`$ and $`\overline{x}b=0`$ for certain $`a,b𝒮`$. Thus (4.6) is not a necessary condition for (4.7) to have a nonzero solution. Nevertheless, we can reasonably introduce consimilarity concept for elements in $`𝒮`$ as follows: two sedenions $`a`$ and $`b`$ are said to be consimilar if $`|a|=|b|`$. In that case, there is, by Theorem 4.3, an $`x0`$ such that $`ax=\overline{x}b`$. Moreover, we still have the following two results. Corollary 4.4. Any sedenion $`a𝒮`$ with $`a`$ is consimilar to its norm $`|a|,`$ and both of them satisfies the following equality $$a=\overline{p}|a|p^1,$$ $`(4.10)`$ where $`p=|a|+\overline{a}`$. Proof. Follows from a direct verification. $`\mathrm{}`$ Theorem 4.5. Let $`a𝒮`$ given with $`a`$. Then the quadratic equation $`x^2=a`$ has two sedenion solutions as follows $$x=\pm \frac{|a|^{\frac{1}{2}}(|a|+a)}{||a|+a|}=\pm (\lambda _0+\lambda _1a),$$ where $`\lambda _0=\frac{|a|^{\frac{3}{2}}}{||a|+a|}`$ and $`\lambda _1=\frac{|a|^{\frac{1}{2}}}{||a|+a|}.`$ Correspondingly, solutions can also derived to the sedenion equations $`x^2+bx+xb+c=0`$ and $`x^2+xb+c=0`$ with $`c𝒜(b),`$ the subalgebra generated by $`b`$. But the solution of $`xax=b`$ can not be derived from Theorem 4.5, because $`xax=b`$, in general, is not equivalent to $`(ax)(ax)=ab`$ over $`𝒮`$. Remarks. As is well known(see, e.g., , , , , , ), the real Cayley-Dickson algebra $`𝒜_n`$ when $`n4`$ is inductively defined by adding a new generator $`\tau `$ in $`𝒜_{n1}`$. In that case, the elements in $`𝒜_n`$ take form $$a=a^{}+a^{\prime \prime }\tau ,$$ where $`a^{},a^{\prime \prime }𝒜_{n1}`$. 
The conjugate of $`a`$ is defined to be $$\overline{a}=\overline{a^{}}-a^{\prime \prime }\tau =\mathrm{Re}a-\mathrm{Im}a.$$ The norm of $`a`$ is defined to be $$|a|=\sqrt{a\overline{a}}=\sqrt{\overline{a}a}=\sqrt{|a^{}|^2+|a^{\prime \prime }|^2}.$$ The addition and multiplication for any $`a=a^{}+a^{\prime \prime }\tau ,b=b^{}+b^{\prime \prime }\tau ∈𝒜_n`$ are $$a+b=(a^{}+a^{\prime \prime }\tau )+(b^{}+b^{\prime \prime }\tau )=(a^{}+b^{})+(a^{\prime \prime }+b^{\prime \prime })\tau ,$$ $$ab=(a^{}+a^{\prime \prime }\tau )(b^{}+b^{\prime \prime }\tau )=(a^{}b^{}-\overline{b^{\prime \prime }}a^{\prime \prime })+(b^{\prime \prime }a^{}+a^{\prime \prime }\overline{b^{}})\tau .$$ In that case, $`𝒜_n`$ is spanned as a $`2^n`$-dimensional non-commutative, non-alternative, non-composition and non-division algebra over $``$, but it is still a power-associative, flexible and quadratic algebra over $``$ when $`n≥4`$. The algebraic properties of $`𝒜_n`$ with $`n≥4`$ are very similar to those of $`𝒜_4=𝒮`$. Therefore the results in Section 4 can directly be extended to elements in $`𝒜_n`$. For simplicity, we do not intend to list them here.
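As a closing illustration (ours, not part of the paper), the closed-form solutions derived above lend themselves to a quick numerical spot check from the same doubling rule. The sketch below verifies, on randomly generated elements and within a floating-point tolerance, the general quaternion solution (2.3) of $`ax=xb`$, the two special octonion solutions in (3.4), the relation $`ap=|a|\overline{p}`$ with $`p=|a|+\overline{a}`$ that underlies Corollaries 2.4, 3.4 and 4.4, and the square-root formula of Corollaries 2.6, 3.6 and Theorem 4.5. All random test values and tolerances are choices made only for the illustration.

```python
# A hedged verification sketch; helper names are ours, not the paper's notation.
import math
import random

def neg(x):  return -x if isinstance(x, float) else (neg(x[0]), neg(x[1]))
def conj(x): return x if isinstance(x, float) else (conj(x[0]), neg(x[1]))
def add(x, y):
    return x + y if isinstance(x, float) else (add(x[0], y[0]), add(x[1], y[1]))
def mul(x, y):   # the Cayley-Dickson doubling rule used throughout the paper
    if isinstance(x, float):
        return x * y
    return (add(mul(x[0], y[0]), neg(mul(conj(y[1]), x[1]))),
            add(mul(y[1], x[0]), mul(x[1], conj(y[0]))))
def pack(c):
    return float(c[0]) if len(c) == 1 else (pack(c[:len(c) // 2]), pack(c[len(c) // 2:]))
def unpack(x): return [x] if isinstance(x, float) else unpack(x[0]) + unpack(x[1])
def scal(t, x): return pack([t * c for c in unpack(x)])
def norm(x):    return math.sqrt(sum(c * c for c in unpack(x)))
def im(x):      return pack([0.0] + unpack(x)[1:])
def one(dim):   return pack([1.0] + [0.0] * (dim - 1))
def rnd(dim):   return pack([random.uniform(-1.0, 1.0) for _ in range(dim)])
def close(x, y, tol=1e-9): return norm(add(x, neg(y))) < tol

def partner(a, dim):
    # random b with Re b = Re a and |Im b| = |Im a|, i.e. condition (2.2)/(3.2)/(4.1)
    t = im(rnd(dim))
    return add(add(a, neg(im(a))), scal(norm(im(a)) / norm(t), t))

# Theorem 2.1(b): x = (Im a)p + p(Im b) solves ax = xb in H for every p
a, p = rnd(4), rnd(4)
b = partner(a, 4)
x = add(mul(im(a), p), mul(p, im(b)))
print("H, general solution (2.3):", close(mul(a, x), mul(x, b)))

# Theorem 3.1 / (3.4): the two special solutions x1, x2 in O
a = rnd(8)
b = partner(a, 8)
x1 = add(im(a), im(b))
x2 = add(scal(norm(im(a)) * norm(im(b)), one(8)), neg(mul(im(a), im(b))))
print("O, special solutions (3.4):",
      close(mul(a, x1), mul(x1, b)), close(mul(a, x2), mul(x2, b)))

# Corollaries 2.4, 3.4, 4.4: with p = |a| + conj(a) one has a p = |a| conj(p)
a = rnd(16)
p = add(scal(norm(a), one(16)), conj(a))
print("S, consimilarity a p = |a| conj(p):", close(mul(a, p), scal(norm(a), conj(p))))

# Corollaries 2.6, 3.6 and Theorem 4.5: x = |a|^(1/2)(|a| + a)/||a| + a| squares to a
q = add(scal(norm(a), one(16)), a)
x = scal(math.sqrt(norm(a)) / norm(q), q)
print("S, square root x x = a:", close(mul(x, x), a))
```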
no-problem/0003/astro-ph0003081.html
ar5iv
text
# Galactic Worm 123.4-1.5: A Mushroom-shaped HI Cloud ## 1 Introduction Heiles (1984) has identified atomic hydrogen (HI) gas filaments “crawling” away from the plane of the inner Galaxy. These so-called “worms” were proposed to be parts of larger HI shells blown by the energetic stellar winds or supernovae in stellar associations. Such open, or blown out, shells would serve as conduits for hot gas and radiation to escape into the galactic halo, as recently observed in the superbubble/chimney reported by Normandeau et al. 1996. Koo et al. (1992) later produced a catalogue of 118 Galactic worm candidates, defining as a worm any dusty, H I structure perpendicular to the Galactic Plane. The Canadian Galactic Plane Survey (CGPS) is currently mapping a 70 degree longitude segment of the northern Galaxy at high resolution in HI ( Taylor et al. 1999). One Galactic worm candidate, GW 123.4-1.5, is within the early regions surveyed. These observations reveal that GW 123-1.5 is an unusual, mushroom-shaped cloud, hundreds of pc in size, apparently unrelated to a conventional shell or chimney structure. In this paper we present the observations, derive the observed properties of the mushroom cloud, and discuss possible scenarios for its origin. ## 2 Observations The HI emission from GW 123.4-1.5 was observed using the Synthesis Telescope of the Dominion Radio Astrophysical Observatory. A complete synthesis of the field of view (2.5 at $`\lambda `$21 cm) was combined with short spacing information from the DRAO 26-m telescope (Higgs 1999) so that there are no gaps in the uv plane and all spatial scales are detected to a limiting resolution of $`1^{}\times 1.14^{}`$ (RA $`\times `$ DEC). Data cubes obtained for each of 10 fields were mosaiced together. Five northern fields within the mosaic, observed between 1995 and 1997, are part of the on-going CGPS and five southern fields were observed in June 1997 and July 1998 with identical parameters by extending the CGPS grid to more negative Galactic latitude. The complex gain calibration was applied from observations of strong unresolved sources before and after each 12-hour synthesis. The S7 region, with an adopted brightness temperature of 100 K, was used as the flux calibrator for the 26-m observations. Each cube contained 256 velocity channels with a channel spacing of 0.824 km s<sup>-1</sup> and an LSR velocity range of $`164.7`$ to 58 km s<sup>-1</sup>. Each field was then corrected for primary beam attenuation, resulting in a non-uniform noise distribution in the mosaiced images. The mosaic’s r.m.s. noise per channel ranges from a minimum of 2.9 K (at the field center locations) to 4.6 K. A map of HI emission integrated over the LSR velocity range $`31.1`$ $`\mathrm{km}\mathrm{s}^1`$ to $`43.5`$ $`\mathrm{km}\mathrm{s}^1`$, is shown in Figure 1a. The image shows the mushroom cloud extending vertically out of the galactic plane toward negative Galactic latitudes. The “stem” of the mushroom extends about 3 out of the plane into a mushroom “cap” that is about 2.5 wide. At the bottom outside edges of the cap, two “lobes” extend back toward the plane on either side of the stem. At the top of the cap, wispy filaments reach toward larger latitudes. We have not assumed apparently independent H I features are part of the cloud. 
For example, the ring at $`(l,b)=(123.2,6.2)`$ is atomic gas surrounding the HII region S184, which is at a distance of 2.2 $`\pm `$ 0.7 kpc (Fich, private communication), and it would be speculative to assume this region is a component of the mushroom cloud. ## 3 Derived Properties of the Mushroom Cloud The velocity-latitude diagram in Figure 1b, taken along longitude, -123.6 which cuts through the stem and central cap (see slice in Figure 1a), shows that the stem emerges from the ambient HI in the mid-plane at a velocity of about $`43`$ $`\mathrm{km}\mathrm{s}^1`$. Using the galactic rotation model of Brand & Blitz (1993 and presented in Burton 1988), the kinematic distance is 3.8 $`\pm `$ 1.2 kpc, placing the cloud in the inner edge of the Perseus arm. At this distance, the mushroom cloud extends from $`|z|70`$ pc (where it becomes distinct from the disk emission) to 420 pc, for a total projected length perpendicular to the plane of 350 pc, about three times larger than the estimate of Koo et al.(1992). The velocities along the stem become less negative as $`|z|`$ increases (Figure 1b; clearer when stepping through the cube). The stem overlaps with the cap starting at about $`|z|200`$ pc and $`v_{\mathrm{LSR}}=38`$ $`\mathrm{km}\mathrm{s}^1`$, then extends an additional $``$80 pc into the cap. The stem is typically 35-40 pc wide and appears to have a cavity as well as a velocity signature indicating either expansion, contraction or helical motion. The central portion of the cap has little or no velocity gradient with respect to $`|z|`$ (Figure 1b) or longitude (Figure 1d). However, Figure 1c shows that the two lobes of the cap that extend back toward the plane are blueshifted with respect to the central cap region by 5 $`\mathrm{km}\mathrm{s}^1`$. In contrast, the wispy diffuse southeast parts of the cap are redshifted with respect to the main body of the cap. The velocity characteristics, presented in Table 1, were measured for the stem component, the cap subcomponents (lobes, central region and wisps), and the entire mushroom cloud. Velocity characteristics were obtained from brightness temperature ($`\mathrm{T}_\mathrm{B}`$) versus velocity profiles averaged over visually selected regions of each component. The errors reflect the values measured for different baseline estimates and different box sizes surrounding the emission regions. The mass estimates in Table 1 were obtained from column densities, integrated over the total velocity range for that component and averaged over the component spatially. These were also corrected for an average “background” column density. The mean excess column density of the mushroom above the background, over the velocity range of the entire structure, is $`N_H=6\times 10^{20}`$ cm<sup>-2</sup>, roughly equal to the average background over the same velocity range. The total hydrogen mass of the cap is then $`10^5M_{}`$, and the average density of hydrogen atoms within the cap is $`0.2`$ cm<sup>-3</sup>. The density in the ambient medium surrounding the mushroom is less well determined, since the distance of the column over the same velocity interval is not known. However the column is not likely to exceed about 1 kpc (compared to about 200 pc for the mushroom), so the over-density between the mushroom and surrounding gas is not likely to exceed a factor of 5. 
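The cap mass of roughly $`10^5M_{}`$ quoted above follows directly from the excess column density and the cap's angular size. The short sketch below is ours, not the authors' analysis: it treats the cap, purely for illustration, as a face-on circular disk about 2.5 wide (the width given in Section 2) at the 3.8 kpc kinematic distance; the disk geometry is an assumption made only for this order-of-magnitude check.

```python
# Rough check of the quoted cap mass; the circular-disk geometry is assumed.
import math

PC_CM  = 3.086e18          # cm per parsec
M_H    = 1.673e-24         # g, hydrogen atom
M_SUN  = 1.989e33          # g

d_kpc   = 3.8              # kinematic distance from the text
cap_deg = 2.5              # angular width of the cap (Section 2)
N_H     = 6.0e20           # mean excess column density, cm^-2

cap_pc    = d_kpc * 1e3 * math.radians(cap_deg)      # linear width, ~165 pc
area_cm2  = math.pi * (0.5 * cap_pc * PC_CM) ** 2     # face-on disk area
mass_msun = N_H * M_H * area_cm2 / M_SUN

print(f"cap width ~ {cap_pc:.0f} pc")
print(f"cap mass  ~ {mass_msun:.1e} M_sun")           # ~1e5, as quoted
```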
Using half the visually determined velocity range as an estimate of the average internal motion of each component, the internal kinetic energy of the hydrogen gas in the cap is $`1\times 10^{50}`$ ergs (note: the velocity dispersion determined from the $`\mathrm{T}_\mathrm{B}`$ profile gives an energy estimate of a couple $`\times 10^{49}`$ ergs). The radial velocity difference between the base of the stem and the mid-region of the cap is about 7 $`\mathrm{km}\mathrm{s}^1`$ and the bulk kinetic energy of the cloud may be larger than the internal energy. Nevertheless, the internal kinetic energy represents a minimum value for the energy of the event that gave rise to the the mushroom cloud. The velocity gradient within the stem of the mushroom cloud could be produced by accelerated streaming motion along the stem if the stem is tilted at at constant angle along its length with respect to the line of sight. The same effect could be produced without acceleration if the stem is the result of an ejection of material with a velocity range of order 10 $`\mathrm{km}\mathrm{s}^1`$ (higher velocity gas would appear furthest from the base). Also, a gradient can be produced by a constant velocity flow with a monotonically changing inclination angle along the length of the stem. The data at hand may not allow us to resolve this ambiguity. Preliminary analysis of HI absorption from background continuum sources through the cap indicate hydrogen spin temperatures of 50 to 100 K. The cap contains dust emission which is visible in all 4 IRAS passbands, becoming more apparent with increasing wavelength. There is no obvious diffuse soft X-ray emission in the energy range 0.1-2.0 keV, associated with any component of the cloud (S. Snowden, private communication). X-ray emission from hot gas that may be interior to the mushroom would be attenuated via absorption by the neutral gas along the line of sight. Using the interstellar photoelectric absorption cross-section of Morrison and McCammon (1983) and a neutral hydrogen column density of $`6\times 10^{20}`$ cm<sup>-2</sup>, X-ray emission from gas with a temperature less than $`10^7`$ K will be absorbed. Thus the lack of observed X-rays appears to rule out the existence of interior gas hotter than this. ## 4 Discussion of Possible Scenarios The mushroom cloud shape and mass distribution of GW123.4-1.5 pose a challenge to conventional superbubble scenarios. In these models (e.g., see Mac Low, McCray, & Norman 1989; Tenorio-Tagle, Rozyczka, & Bodenheimer 1990) the lower part (the stem) retains the bulk of the mass, even though the upper part of the bubble (the cap) may expand to a large radius. This is not the case with GW123.4-1.5, where we estimate that the cap contains about four times the mass in the stem. Additionally the greatest cap to stem width ratio in the superbubble models is about 3:1 (Tenorio-Tagle, Rozyczka, & Bodenheimer 1990) while the mushroom cloud’s ratio is 6:1. Furthermore, the radius of the model stem is typically equal to $`2H`$, where $`H`$ is the exponential scale length of the local Gaussian density distribution; the stem is the cavity created by a blowout from a stratified atmosphere into a uniform low-density halo (at $``$500 pc). If the local scale length were equal to the global average of the HI disk $`H135`$ pc ( Dickey & Lockman 1990), then the width of the mushroom’s stem would be 500 pc rather than the $``$ 40 pc observed. 
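For completeness, the kinetic-energy figures at the start of this section are also a one-line computation. The sketch below is not the authors' analysis: it combines the cap mass of about $`10^5M_{}`$ with two illustrative velocities, roughly 10 $`\mathrm{km}\mathrm{s}^1`$ for half the velocity range and roughly 5 $`\mathrm{km}\mathrm{s}^1`$ for the dispersion-based estimate mentioned in the note. Since Table 1 is not reproduced here, both velocity values are assumptions chosen only to reproduce the quoted orders of magnitude.

```python
# Arithmetic behind the internal kinetic energy estimates; velocities assumed.
M_SUN = 1.989e33                  # g
m_cap = 1.0e5 * M_SUN             # hydrogen mass of the cap, from this section

for label, v_kms in [("half velocity range", 10.0), ("velocity dispersion", 5.0)]:
    e_kin = 0.5 * m_cap * (v_kms * 1.0e5) ** 2        # erg
    print(f"{label:>20s}: v = {v_kms:4.1f} km/s  ->  E_kin ~ {e_kin:.1e} erg")
# ~1e50 erg and a few times 1e49 erg, matching the values given in the text.
```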
The above discrepancies lead us to consider alternate scenarios in which a stem plus cap morphology can be realized. For example, an H I jet could be ejected from the disk and a wide lobe could be created where it has stalled, possibly falling back to the disk in a fountain-like manner. One possible origin for such an event is the passage of a High Velocity Cloud through the galactic disk and the subsequent emergence of gas on the other side (Tenorio-Tagle et al. 1987). However, here we consider the rise of buoyant gas. A rising fireball after a terrestrial nuclear explosion creates a structure strikingly similar to that of GW123.4-1.5. In the interstellar context, mushroom-shaped clouds can arise from the interaction between different components in a multiphase medium (e.g. Rosen & Bregman 1995, Avillez 1999), including a cloud-cloud collision (Miniati et al. 1997). For our model, we focus on the rise of buoyant hot gas resulting from a single supernova event. Jones (1973) has investigated the early evolution of a supernova remnant and found signs of buoyant rise. Here, we follow to late times the evolution of a remnant which does not have enough energy to blow out of the disk atmosphere. The initially pressure driven hot bubble stalls at radius smaller than the scale height of the medium<sup>1</sup><sup>1</sup>1The outer shock front continues to propagate up through the stratified atmosphere, but has little dynamic importance since the inner hot bubble has stalled and is no longer compressing matter into a thin shell behind the shock front.. Buoyancy forces lift the low density bubble out of the galactic plane and through the stratified atmosphere. In analogy with the nuclear fireball, the quasi-vacuum produced under the rising cap may pull in surrounding material, entraining it to form the stem. In this case, the stem width is not determined by the scale height of the medium and can be quite narrow. However the existence of cold neutral material in the cap must still be explained since the rise of a buoyant plume in pressure equilibrium does not physically move much material from the galactic disk to high latitudes. To examine the plausibility of the buoyant bubble scenario we have begun modeling a single 10<sup>51</sup> ergs explosion at 60 pc above the midplane using a modified Zeus 2-d code (e.g. Stone & Norman 1992). Our simulations include the effects of radiative cooling, heat conduction, and the vertical gravitational field. We also use artificially low quiescent gas density to reduce the influence of “numerical diffusion”, a computational artifact resulting from steep gradients. For example, we use a scale height 60 pc<sup>2</sup><sup>2</sup>2The stalling radius is fixed for a given energy input and ambient density in the plane. So for any given source the buoyancy condition, that the stalling radius is less than $`H`$, is more likely to exist if the z-distribution of the ambient gas is $`H135`$ pc . and a midplane number density of $`\mathrm{n}_\mathrm{o}=1\mathrm{atom}\mathrm{cm}^3`$ to create an ambient medium distribution which allows bubbles to form and evolve. Our preliminary low-resolution models are intentionally generic and detailed comparisons with GW123.4-1.5 will be left to a later paper. In both Gaussian and exponential atmospheres, a bubble forms interior to the blastwave and, rather than elongating vertically as in a conventional superbubble, the bubble rises buoyantly. 
Since the rise is supersonic relative to the cooled gas above (interior to the blastwave), the bubble accumulates cold ambient gas in a snow-plow mode along its shock front. Gas also flows upwards in a column following the bubble, contributing to its evolution into a mushroom-shaped cloud. By 8 Myr the interior bubble has the kinetic temperature distribution shown in Fig. Galactic Worm 123.4-1.5: A Mushroom-shaped HI Clouda. The models show a cold gas stem which is narrower than the primary bubble (stem) in conventional superbubble scenarios in which the stem width is dictated by the gas scale height. Our modeled stem is expected to become narrower and more obvious than shown here when a higher ambient density is used and cooling becomes more efficient. The coolest temperatures trace a curlyqued cap structure like that observed in GW 123.5-1.5. These structures are also evident in the column density map, plotted for gas $`<`$7500 K, in Fig. Galactic Worm 123.4-1.5: A Mushroom-shaped HI Cloudb. Similar to some models of supernova evolution (Slavin & Cox 1992) the hottest gas in our simulation decreases to fill a small volume which eventually collapses. Residing just inside the skin of the lobes are remnants of a warm gas envelope, formed by heat conduction, which had been pushed away from the bubble by the upward flow of cold, dense gas from the stem. Their temperature of a few $`\times 10^4`$ K is consistent with the lack of soft x-ray emission. The cool, H I by-product of the buoyant bubble should be observable for substantially longer than the cooling time ($`2\times 10^5`$ yr) of this hot gas. We have carried out a preliminary search in other wavebands for evidence of a possible energy source, but no obvious candidates can be identified. For example, there are no IRAS sources near the base with CO emission in the FCRAO database (between the midplane and - 3<sup>o</sup> galactic latitude) within the FWHM velocity range of the stem. The projected position of the H II emitting reflection nebula Sharpless 185 lies near the base of the stem. However, it is associated with the Be X-ray emitting star $`\gamma `$ Cass at a distance of only about two hundred pc (see Blouin et al. 1997).
no-problem/0003/cond-mat0003423.html
ar5iv
text
# Intertube coupling in ropes of single-wall carbon nanotubes ## Abstract We investigate the coupling between individual tubes in a rope of single-wall carbon nanotubes using four probe resistance measurements. By introducing defects through the controlled sputtering of the rope we generate a strong non-monotonic temperature dependence of the four terminal resistance. This behavior reflects the interplay between localization in the intentionally damaged tubes and coupling to undamaged tubes in the same rope. Using a simple model we obtain the coherence length and the coupling resistance. The coupling mechanism is argued to involve direct tunneling between tubes. The unique structural and electronic properties of carbon nanotubes make them interesting objects for basic science study as well as applications. The relation between their geometry and and electronic structure is of particular interest. Semiconducting or metallic behavior is possible depending on tube diameter and chirality . Based on their unique properties, several applications in electronics have been proposed and some, such as field effect transistors and diodes have already been demonstrated. While the electronic structure of individual tubes has been characterized using scanning tunneling spectroscopy and found to be in agreement with the theoretical predictions , the interaction between tubes in ropes has received much less attention. Some studies have concluded that the coupling between tubes must be weak , but few attempted to directly measure this interaction . Thus, most of the applications rely on single tubes bridging metal contacts . However, the extensive use of nanotubes in future nano-electronics would also require a knowledge of the tube-tube electronic coupling. Here, we present a novel approach that allows us to determine the electrical coupling between tubes in a rope using four terminal transport measurements. The ropes are self-assembled bundles of carbon nanotubes, in which the tubes line up parallel to each other. The tubes in our ropes have diameters close to 1.4 nm and form a regular triangular lattice with a lattice constanct of $`d_0`$ = 1.7 nm . Both, semiconducting and metallic tubes are present in a rope in a random distribution. In our experiment, the ropes are dispersed on an oxidized Si substrate and gold electrodes were subsequently fabricated on top of the ropes (inset Fig. 1). The key feature in our investigation involves a sputtering of the rope before deposition of the electrodes by an Ar<sup>+</sup> ion beam at an energy of 500 eV. The purpose of the sputtering is to introduce defects into the top nanometers of the rope. As will be shown later, this will enable us to vary the path taken by the electric current in a well defined manner. In order to estimate the extent of the sputter damage, a Monte Carlo simulation was performed . From our sputtering conditions, we estimate that the damage reaches about 6 ($`\pm `$2) nm deep into the rope and the damage density is about one defect per 1000 atoms, which gives a distance of 5-10 nm between defects along the tubes. This defect density is high enough to have a significant influence on the electrical properties of the tubes, while at the same time, it is low enough to preserve their structural integrity. The damage in the upper part of the rope is confined to the area directly underneath the gold contacts (note the shading in the inset of Fig. 
1), while the main part of the rope between the electrodes was not exposed to the ion beam and is thus undamaged. Electronic transport in the damaged metallic tubes is strongly affected by the defects, while contributions from semiconducting tubes are negligible at the low temperatures used in the experiments (the typical band gap forsemiconducting tubes of $``$ 1.4 nm diameter is $``$ 500 meV ). The resistances were measured using standard lock-in techniques while the sample was cooled in a <sup>4</sup>He continuous flow cryostat with a base temperature of 1.5 K. We fully characterized a total of 13 samples. Typical results of R vs. T curves are presented in Fig. 1. The two terminal (2t) resistances were found to increase with decreasing temperature. On the other hand, the four terminal (4t) measurements showed a pronounced resistance maximum at temperatures around 20 K in all of the samples we made. This behavior is caused by the damage introduced by sputtering. As a comparison, a 4t measurement of an undamaged rope is shown in the second inset in Fig. 1. In this case, we observe a decrease in resistance with decreasing temperature over the whole temperature range. Thus, the undamaged ropes show a metallic behavior as is expected for ropes consisting, at least in part, of metallic tubes . The linear dependence of the resistance on the temperature is attributed to phonon scattering . It is important to note the very different values of the resistance in the damaged and undamaged ropes, the latter being only of the order of 1 k$`\mathrm{\Omega }`$, while the former is in the range of several M$`\mathrm{\Omega }`$. Thus, the damage greatly increases the resistance, but it does not block the electrical transport. Obviously, the damaged, metallic tubes under the gold contacts carry most of the current, since only a few k$`\mathrm{\Omega }`$ is expected for undamaged nanotubes in the rope, and the semiconducting ones, damaged or undamaged, are insulating. The metallic tubes are one-dimensional systems with two degenerate modes at the Fermi energy , hence the resistance of a segment of length $`L`$ containing defects with an average distance $`L_0`$ is given by $`R=h/4e^2L/L_0`$ (neglecting any interference effects). The extent of the damaged areas along the direction of current transport, i. e. the width of the gold electrodes, is typically 200 nm. The rope segments between the contacts are undamaged and thus their contribution to the resistance is negligible. At room temperature, we find resistances of the ropes around 200 k$`\mathrm{\Omega }`$. This resistance corresponds to a mean free path of $`L_06nm`$, which is contistent with the mean distance between damaged sites obtained by the Monte carlo simulation. In Fig. 1, both 2t and 4t measurements show an increase in resistance when cooling the sample. This increase is caused by electronic localization in the damaged tube, an interference effect which increases the resistance by coherent backscattering of electrons by the defects. Localization occurs when the phase coherence length, $`L_\mathrm{\Phi }`$, exceeds the average distance between scatterers $`L_0`$ in the sample. There is strong localization if $`L_\mathrm{\Phi }`$ exceeds the localization length $`L_C=ML_0`$, with M being the number of modes in the system . In this case all modes are strongly affected by the localization and the resistance increases exponentially with decreasing temperature. 
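The mean-free-path estimate above is a direct application of the two-mode expression $`R=h/4e^2L/L_0`$. A minimal sketch of the arithmetic is given below; it is ours rather than the authors' analysis code, and the 200 nm damaged length and 200 k$`\mathrm{\Omega }`$ room-temperature resistance are simply the values quoted in the text.

```python
# Quick sketch: invert R = (h/4e^2)(L/L_0) for the mean free path L_0.
H_OVER_4E2 = 6.626e-34 / (4 * (1.602e-19) ** 2)     # h/4e^2 ~ 6.45 kOhm

L_nm, R_ohm = 200.0, 200.0e3                        # values quoted in the text
L0_nm = H_OVER_4E2 * L_nm / R_ohm

print(f"h/4e^2      = {H_OVER_4E2 / 1e3:.2f} kOhm")
print(f"implied L_0 = {L0_nm:.1f} nm")              # ~6 nm, as stated
```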
In view of the short mean free path $`L_0510nm`$ and M=2, the rapid increase in resistance at low temperatures is likely to be caused by a strong localization in the damaged metallic nanotubes in the rope. Below a sample-dependent temperature around 20 K, the resistance obtained by the 4t measurement starts to decrease. This effect cannot be caused by some gold-tube contact resistances, since these do not contribute in a 4t measurement. Any scattering mechanism (e. g. phonon scattering), which could possibly lead to such a behaviour, would show in both, the 4t and the 2t measurement. In general, the absence of a similar decrease in the 2t measurement proves, that this behavior can not be caused by a change in transport inside the actual current path, i. e. the damaged tube. We will now discuss a model how to understand our experiments and will extract information about tube-tube interactions. Key to the understanding of our experiments is the realization that disorder can switch the current path from one tube to another inside the rope. Normally, we expect the current to be carried by the tube that has the lowest contact resistances to the electrodes. This must be a tube at the surface of the rope in direct contact with the electrodes. In our experiments, the surface tubes (and all other tubes about 6 nm deep into the rope) were damaged during the sputter treatment and thus show high resistance already at room temperature. When the sample is cooled the resistance increases due to localization in the damaged tube, and eventually grows sufficiently high, so that the current switches its path to another, undamaged metallic tube deeper inside the rope. The tubes are only weakly coupled, so the coupling resistances are rather high and it is for this reason that the undamaged tubes do not carry the current at 300 K. Once, however, the resistance in the damaged surface tube grows higher than the coupling resistance, the current favors this ’new’ path and switches to the undamaged tube in the bulk of the rope. In a 4t measurement, we will then only detect the transport inside the bulk tube, which involves a resistance of a few k$`\mathrm{\Omega }`$, while in the 2t measurement both channels are highly resistive, one due to localization, and the other due to the high coupling resistances involved in changing the current path. We will now develop our model in order to be able to analyze the observations quantitatively. Consider a network of damaged and undamaged tubes with resistances $`R_d`$ and $`R_u`$ with the inter-tube coupling resistance $`R_t`$ and with contact resistances $`R_c`$ which connect the damaged tube at the surface to the gold electrodes. The inset in Fig. 2 shows how these resistances are connected in the 4t model. In order to calculate the total 2t and 4t resistances, we have to evaluate the individual resistances. $`R_d`$ is governed by strong localization and can therefore be described by $$R_d=\frac{L}{L_\mathrm{\Phi }}\frac{h}{2e^2}\frac{1}{2}\left(e^{\frac{2L_\mathrm{\Phi }}{ML_0}}1\right).$$ $`L_\mathrm{\Phi }`$ follows a power law dependence on temperature, $`L_\mathrm{\Phi }T^\alpha `$, and we will use this to describe $`R_d`$. The inter-tube coupling resistance $`R_t`$ will be taken to be independent of temperature (we will justify this later on). The coupling resistances are placed underneath the electrodes, i. e. 
connected to the damaged areas of the surface tube, since a change in current path only occurs, where transport inside the surface tube gets ’blocked’ by localization. The resistance of the undamaged tube $`R_u`$ is much smaller than $`R_t`$ and $`R_d`$ (cf. the second inset in Fig. 1) and since it is always connected to $`R_t`$, it does not play any significant role. The last resistance to be discussed is the contact resistance $`R_c`$ between the gold electrode and the surface tube. This resistance will only show up in 2t measurements but will not contribute to the 4t resistance. We will neglect this resistance for the moment and we will justify this below. Figure 2 shows experimental data from one of our samples, together with the fits based on our model. The results of both 4t and 2t measurements are well reproduced, underlining the validity of our simple model. First, we note that indeed no additional contact resistances $`R_c`$ are necessary to describe the 2t measurements. Second, from the fits in Fig. 2 we can extract $`L_\mathrm{\Phi }(T)`$ as a function of temperature as shown in the inset. The coherence length turns out to be about 200 nm at the lowest temperature, a value significantly lower than that reported by other groups , but the discrepancy is not surprising. We are dealing here with a disordered system. It is well known that disorder significantly enhances phase breaking processes . We find that the temperature dependence of $`L_\mathrm{\Phi }`$ can be described by $`L_\mathrm{\Phi }T^\alpha `$ with $`\alpha =0.330.5`$. $`\alpha =1/3`$ points to dephasing by disorder-enhanced electron-electron scattering , while $`\alpha =1/2`$ suggests electron-phonon scattering. Both processes seem to be involved, with the electron-electron scattering possibly becoming dominant at the lowest temperatures . Next we evaluate the coupling resistance $`R_t`$ between the tubes. $`R_t`$ is extracted from the data in a very simple manner and turns out to be the most reliable and stable parameter in the simulation, since it is only determined by the value of the resistance maximum in the 4t measurement. The temperature dependence of $`R_d`$ determines the shape and position of the maximum in temperature ($`R_d(T_{max})R_t`$). Since slight variations in sputter damage ($`L_0`$) significantly affect $`R_d`$ there is no strict correlation between $`R_t`$ and $`T_{max}`$, but in spite of this $`R_t`$ can be obtained from the value of the resistance maximum. Analyzing $`R_t`$ for our 13 samples, we obtained values ranging from 2 M$`\mathrm{\Omega }`$ to 140 M$`\mathrm{\Omega }`$. Which coupling mechanism can explain such an enormous range of values? Hopping processes are sometimes invoked to describe inter-tube transport . This involves transport by hopping through intermediate states (e. g. via other tubes). In our case, a single transfer would correspond to a resistance of about 2 M$`\mathrm{\Omega }`$, and thus the highest value of 140 M$`\mathrm{\Omega }`$ would need 70 transfers, barely imaginable with only 100 tubes in the rope at all. Moreover, the hopping processes are thermally activated and thus the coupling between tubes would eventually freeze out, leaving the rope insulating at the lowest temperatures, in contrast to the observation. So, the only explanation that fits our data seems to be a tunneling process between the tubes. In this process a small range of distances between the tubes leads to a large range of resistances due to the exponential dependence typical for tunneling. 
Furthermore, this process does not freeze out even at the lowest temperatures. We will now try to link the experimental findings for $`R_t`$ to the geometry of the rope, i. e. the distances between the bulk and the surface tubes. The cross section of a rope consists of typically 100 single tubes. A fraction of about 2/3 of the tubes is semiconducting, while the remaining 1/3 is said to be metallic . The tubes in the top part of the rope are damaged in the sputter treatment. When the resistance in the damaged surface tube has increased sufficiently (by localization) the current can switch via the coupling resistance into an undamaged metallic tube in the bulk, which involves tunneling over some distance d within the triangular lattice of the rope. Of course, the depth of the damage of the sputter treatment (about 6 nm) sets a lower limit for the distance d within which we can find an undamaged metallic tube. How can we describe the coupling resistance $`R_t`$ that is caused by the tunnel process ? For the coupling of one dimensional wave guides separated by a tunnel barrier we find $`R_t=h/4e^2v_F/v_{}1/Th/4e^2e^{2\kappa d}`$. The velocity perpendicular to the tube axis $`v_{}`$ can be approximated by the Fermi velocity $`v_F`$ when transport along the damaged tube is blocked by localization. The transmission T in the tunnel process is determined by the overlap of the wave functions of the tubes, with $`\kappa `$ being related to the barrier height. Given the linear dispersion relation for the metallic nanotubes $`ϵ(k)=\mathrm{}v_F(kk_F)`$ around the Fermi energy and the barrier height $`\mathrm{\Phi }`$, $`\kappa `$ is calculated as $`\kappa =\mathrm{\Phi }/(\mathrm{}v_F)`$. Since the electrons tunnel through the other tubes, which are mostly semiconducting, the barrier height is given by the conduction band edge of these semiconducting tubes. All nanotubes share the same graphene structure, hence their work function is expected to be nearly the same , and the Fermi level of the metallic tubes is expected to align midgap the semiconductingenergy gap. with an average band gap of 500 meV we find $`\mathrm{\Phi }=E_{gap}/2250meV`$. Using $`v_F=10^6m/s`$ , we obtain a penetration depth of $`1/2\kappa =1.25nm`$, comparable to the value given in reference . Figure 3 compares theoretical and experimental data, where the theoretical predictions result from evaluating the above formula for the discrete distances $`d`$ realized in the triangular lattice of the rope. (In fact, the tunnel distance is $`dd_0`$ if d is the distance between the centers of the involved tubes. We used $`\mathrm{\Phi }=225meV`$. Since $`k_BT\mathrm{\Phi }`$, the tunnel resistance is indeed temperature independent.) We find that all data points coincide with values allowed by the model. None of the resistances we found corresponds to a distance shorter than about 8 nm, which is caused by the depth of the sputter damage of about 6 nm leaving no undamaged metallic tube in a shorter distance of the surface. Thus, the theoretical assumption of direct tunneling yields a consistent picture for the electronic coupling in nanotube ropes. In conclusion, a sputter treatment of single-wall carbon nanotube ropes before making electrical contact resulted in damage and strong localization in the current carrying tubes at the surface of the rope. Below a sample specific threshold temperature the current tunnels into an undamaged, metallic tube in the bulk of the rope, leading to a dramatic reduction of the four terminal resistance. 
The value of the resistance maximum is related to the inter-tube coupling resistance between the involved tubes. Using a simple model, this inter-tube resistance is shown to be caused by direct tunneling between tubes.
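To put rough numbers to the tunnelling expression discussed above, the following short Python sketch (not part of the paper; the chosen tunnel distances are purely illustrative) evaluates $`R_t\approx (h/4e^2)e^{2\kappa (d-d_0)}`$ with the quoted barrier height and Fermi velocity, and shows that gaps of roughly 8–13 nm span the reported 2–140 M$`\mathrm{\Omega }`$ range.

```python
import numpy as np

h, hbar, e = 6.626e-34, 1.055e-34, 1.602e-19     # SI units

v_F = 1.0e6               # Fermi velocity [m/s], as quoted above
Phi = 0.25 * e            # barrier height ~ E_gap/2 ~ 250 meV [J]

kappa = Phi / (hbar * v_F)                 # decay constant of the evanescent wave
print("penetration depth 1/(2 kappa) = %.2f nm" % (1e9 / (2.0 * kappa)))

R_q = h / (4.0 * e**2)                     # resistance quantum h/4e^2 ~ 6.5 kOhm

# R_t ~ (h/4e^2) exp(2 kappa (d - d0)) for a few illustrative tunnel distances
# (hypothetical values, chosen only to span the reported 2-140 MOhm range)
for gap_nm in (8.0, 10.0, 12.0, 13.0):
    R_t = R_q * np.exp(2.0 * kappa * gap_nm * 1e-9)
    print("d - d0 = %4.1f nm  ->  R_t = %6.1f MOhm" % (gap_nm, R_t / 1e6))
```

The computed penetration depth comes out close to the ~1.25 nm quoted in the text (small differences reflect rounding of the constants), and the exponential dependence makes clear why a modest spread of tunnel distances covers two orders of magnitude in $`R_t`$.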
no-problem/0003/math0003191.html
ar5iv
text
# Zero divisors and 𝐿^𝑝⁢(𝐺), II ## 1. Introduction Let $`G`$ be a discrete group and let $`f`$ be a complex-valued function on $`G`$. We may represent $`f`$ as a formal sum $`_{gG}a_gg`$ where $`a_g`$ and $`f(g)=a_g`$. Thus $`L^{\mathrm{}}(G)`$ will consist of all formal sums $`_{gG}a_gg`$ such that $`sup_{gG}|a_g|<\mathrm{}`$, $`C_0(G)`$ will consist of those formal sums for which the set $`\{g|a_g|>ϵ\}`$ is finite for all $`ϵ>0`$, and for $`p1`$, $`L^p(G)`$ will consist of those formal sums for which $`_{gG}|a_g|^p<\mathrm{}`$. Then we have the following inclusions: $$GL^p(G)C_0(G)L^{\mathrm{}}(G).$$ For $`\alpha =_{gG}a_ggL^1(G)`$ and $`\beta =_{gG}b_ggL^p(G)`$, we define a multiplication $`L^1(G)\times L^p(G)L^p(G)`$ by (1.1) $$\alpha \beta =\underset{g,h}{}a_gb_hgh=\underset{gG}{}\left(\underset{hG}{}a_{gh^1}b_h\right)g.$$ In this paper we consider the following: ###### Problem 1.1. Let $`G`$ be a torsion free group and let $`1p\mathrm{}`$. If $`0\alpha G`$ and $`0\beta L^p(G)`$, is $`\alpha \beta 0`$? Some results on this problem are given in . In this sequel we shall obtain new results for the cases $`G=^d`$, the free abelian group of rank $`d`$, and $`G=F_k`$, the free group of rank $`k`$. Part of this work was carried out while the first author was at the Sonderforschungsbereich in Münster. He would like to thank Wolfgang Lück for organizing his visit to Münster, and the Sonderforschungsbereich for financial support. ## 2. Statement of Main Results Let $`0\alpha L^1(G)`$ and let $`1p`$. We shall say that $`\alpha `$ is a *$`p`$-zero divisor* if there exists $`\beta L^p(G)0`$ such that $`\alpha \beta =0`$. If $`\alpha \beta 0`$ for all $`\beta C_0(G)0`$, then we say that $`\alpha `$ is a *uniform nonzero divisor*. Let $`2d`$. It was shown in that there are $`p`$-zero divisors in $`^d`$ for $`p>\frac{2d}{d1}`$. In this paper we shall show that this is the best possible by proving ###### Theorem 2.1. Let $`2d`$, $`1p`$, let $`0\alpha ^d`$, and let $`0\beta L^p(^d)`$. If $`p\frac{2d}{d1}`$, then $`\alpha \beta 0`$. Let $`𝕋^d`$ denote the $`d`$-torus which, except in Section 4, we will view as the cube $`[\pi ,\pi ]^d`$ in $`^d`$ with opposite faces identified, and let $`𝔭:[\pi ,\pi ]^d𝕋^d`$ denote the natural surjection. For $`n^d`$ and $`t𝕋^d`$, let $`nt`$ indicate the dot product, which is well defined modulo $`2\pi `$. If $`\alpha =_{n^d}a_nnL^1(^d)`$, then for $`t𝕋^d`$ its Fourier transform $`\widehat{\alpha }:𝕋^d`$ is defined by $$\widehat{\alpha }(t)=\underset{n^d}{}a_ne^{i(nt)}$$ and we shall let $`Z(\alpha )=\{t𝕋^d\widehat{\alpha }(t)=0\}`$. We say that $`M`$ is a hyperplane in $`𝕋^d`$ if there exists a hyperplane $`N`$ in $`^d`$ such that $`M=𝔭([\pi ,\pi ]^dN)`$. We will prove the following theorem, which is an improvement over \[8, theorem 1\]. ###### Theorem 2.2. Let $`\alpha ^d`$. Then $`\alpha `$ is a uniform nonzero divisor if and only if $`Z(\alpha )`$ is contained in a finite union of hyperplanes in $`𝕋^d`$. Let $`V=𝔭\left((\pi ,\pi )^d\right)`$, let $`\alpha L^1(^d)`$, let $`E=Z(\alpha )V`$, and let $`U`$ be an open subset of $`(\pi ,\pi )^{d1}`$. Let $`\varphi :U(\pi ,\pi )`$ be a smooth map, and suppose $`\{𝔭(x,\varphi (x))xU\}E`$. If the Hessian matrix $$\left(\frac{^2\varphi }{x_ix_j}\right)$$ of $`\varphi `$ has constant rank $`d1\nu `$ on $`U`$ where $`0\nu d1`$, then we say that $`\varphi `$ has constant relative nullity $`\nu `$. 
We shall say that $`Z(\alpha )`$ has *constant relative nullity $`\nu `$* if every localization $`\varphi `$ of $`E`$ has constant relative nullity $`\nu `$ \[6, p. 64\]. We shall prove ###### Theorem 2.3. Let $`\alpha ^d`$, let $`1p`$, and let $`2d`$. Suppose that $`Z(\alpha )`$ is a smooth $`(d1)`$-dimensional submanifold of $`𝕋^d`$ with constant relative nullity $`\nu `$ such that $`0\nu d2`$. Then $`\alpha `$ is a $`p`$-zero divisor if and only if $`p>\frac{2(d\nu )}{d1\nu }`$. For $`k_0`$, let $`F_k`$ denote the free group on $`k`$ generators. It was proven in that if $`\alpha F_k0`$ and $`\beta L^2(F_k)0`$, then $`\alpha \beta 0`$. We will give an explicit example to show that if $`k2`$, then this result cannot be extended to $`L^p(F_k)`$ for any $`p>2`$. This is a bit surprising in view of Theorem 2.1. We will conclude this paper with some results about $`p`$-zero divisors for the free group case. ## 3. A characterization of $`p`$-zero divisors Let $`G`$ be a group, not necessarily discrete, and let $`L^p(G)`$ be the space of $`p`$-integrable functions on $`G`$ with respect to Haar measure. Let $`yG`$ and let $`fL^p(G)`$. The right translate of $`f`$ by $`y`$ will be denoted by $`f_y`$, where $`f_y(x)=f(xy^1)`$. Define $`T^p[f]`$ to be the closure in $`L^p(G)`$ of all linear combinations of right translates of $`f`$. A common problem is to determine when $`T^p[f]=L^p(G)`$; see for background. Now suppose that $`G`$ is also discrete. Given $`1p`$, we shall always let $`q`$ denote the conjugate index of $`p`$. Thus if $`p>1`$, then $`\frac{1}{p}+\frac{1}{q}=1`$, and if $`p=1`$ then $`q=\mathrm{}`$. Sometimes we shall require $`p=\mathrm{}`$, and then $`q=1`$. Let $`\alpha =_{gG}a_ggL^p(G)`$, $`\beta =_{gG}b_ggL^q(G)`$, and define a map $`,:L^p(G)\times L^q(G)`$ by $$\alpha ,\beta =\underset{gG}{}a_g\overline{b_g}.$$ Fix $`hL^q(G)`$. Then $`,h`$ is a continuous linear functional on $`L^p(G)`$ and if $`p\mathrm{}`$, then every continuous linear functional on $`L^p(G)`$ is of this form. We shall use the notation $`\stackrel{~}{\beta }`$ for $`_{gG}b_gg^1`$, $`\overline{\beta }`$ for $`_{gG}\overline{b_g}g`$, and $`\beta ^{}`$ for $`_{gG}\overline{b_g}g^1`$. Also the same formula in equation (1.1) gives a multiplication $`L^p(G)\times L^q(G)L^{\mathrm{}}(G)`$. Then we have the following elementary lemma, which roughly says that $`\alpha \beta =0`$ if and only if all the translates of $`\alpha `$ are perpendicular to $`\beta `$. ###### Lemma 3.1. Let $`1p`$ or $`p=\mathrm{}`$, let $`\alpha L^p(G)`$, and let $`\beta L^q(G)`$. Then $`\alpha \beta =0`$ if and only if $`(\stackrel{~}{\alpha })_y,\overline{\beta }=0`$ for all $`yG`$. ###### Proof. Write $`\alpha =_{gG}a_gg`$ and $`\beta =_{gG}b_gg`$. Then $$\alpha \beta =\underset{yG}{}(\underset{gG}{}a_{yg^1}b_g)y$$ and $`(\stackrel{~}{\alpha })_y,\overline{\beta }=_{gG}a_{yg^1}b_g`$. The result follows. ∎ The following proposition, which is a generalization of \[8, lemma 1\], characterizes $`p`$-zero divisors in terms of their right translates (the statement of \[8, lemma 1\] should have the additional condition that $`p1`$). ###### Proposition 3.2. Let $`\alpha L^1(G)`$ and let $`1<p`$ or $`p=\mathrm{}`$. Then $`\alpha `$ is a $`p`$-zero divisor if and only if $`T^q[\stackrel{~}{\alpha }]L^q(G)`$. ###### Proof. The Hahn-Banach theorem tells us that $`T^q[\stackrel{~}{\alpha }]L^q(G)`$ if and only if there exists a nonzero continuous linear functional on $`L^q(G)`$ which vanishes on $`T^q[\stackrel{~}{\alpha }]`$. 
The result now follows from Lemma 3.1. ∎ ###### Remark 3.3. If $`p=1`$ in the above Proposition 3.2, we would need to replace $`L^q(G)`$ with $`C_0(G)`$, and $`T^q[\stackrel{~}{\alpha }]`$ with the closure in $`C_0(G)`$ of all linear combinations of right translates of $`\stackrel{~}{\alpha }`$. ## 4. A key proposition In this section we prove a proposition that will enable us to prove Theorems 2.1, 2.2 and 2.3. Let $`1p`$, let $`y^d`$ and let $`fL^p(^d)`$. We shall use additive notation for the group operation in $`^d`$; thus the right translate of $`f`$ by $`y`$ is now given by $`f_y=f(xy)`$. We say that $`f`$ has linearly independent translates if and only if for all $`a_1,\mathrm{},a_m`$, not all zero, and for all distinct $`y_1,\mathrm{},y_m^d`$, $$\underset{i=1}{\overset{m}{}}a_if_{y_i}0.$$ For the rest of this section we shall view $`𝕋^d`$ as the unit cube $`[0,1]^d`$ with opposite faces identified. Let $`L^p(𝕋^d\times ^d)`$ denote the space of functions on $`𝕋^d\times ^d`$ which satisfy $$_{t𝕋^d}\underset{m^d}{}|f(t,m)|^pdt<\mathrm{}.$$ Then for $`\alpha =_{n^d}a_nn^d`$ and $`fL^p(𝕋^d\times ^d)`$, we define $`\alpha fL^p(𝕋^d\times ^d)`$ by $$(\alpha f)(t,m)=\underset{n^d}{}a_nf(t,mn),$$ and this yields an action of $`^d`$ on $`L^p(𝕋^d\times ^d)`$. ###### Lemma 4.1. Let $`\alpha ^d`$. Then there exists $`\beta L^p(^d)0`$ such that $`\alpha \beta =0`$ if and only if there exists $`fL^p(𝕋^d\times ^d)0`$ such that $`\alpha f=0`$. ###### Proof. Let $`\beta L^p(^d)0`$ such that $`\alpha \beta =0`$ and define a nonzero function $`fL^p(𝕋^d\times ^d)`$ by $`f(t,m)=\beta (m)`$. For $`n^d`$, set $`b_n=\beta (n)`$. Then (4.1) $$\begin{array}{cc}\hfill (\alpha f)(t,m)& =\underset{n^d}{}a_nf(t,mn)=\underset{n^d}{}a_n\beta (mn)\hfill \\ & =\underset{n^d}{}a_nb_{mn}=(\alpha \beta )(m)=0.\hfill \end{array}$$ Conversely suppose there exists $`fL^p(𝕋^d\times ^d)0`$ such that $`\alpha f=0`$. This means that $`(\alpha f)(t,n)=0`$ for all $`n`$, for all $`t`$ except on a set $`T_1𝕋^d`$ of measure zero. Also $`_{n^d}|f(t,n)|^p<\mathrm{}`$ for all $`t`$ except on a set $`T_2𝕋^d`$ of measure zero. Since $`f0`$, we may choose $`s𝕋^d(T_1T_2)`$ such that $`f(s,n)0`$ for some $`n`$. Now define $`\beta (n)=f(s,n)`$. Then $`\beta L^p(^d)0`$ and the calculation in equation (4.1) shows that $`\alpha \beta =0`$. ∎ For $`\alpha =_{n^d}a_nn^d`$ and $`fL^p(^d)`$, we define $`\alpha fL^p(^d)`$ by $$(\alpha f)(x)=\underset{n^d}{}a_nf(xn).$$ If $`\alpha 0`$ and $`\alpha f=0`$, then there is a dependency among the right translates of $`f`$, i.e. $`f`$ does not have linearly independent translates. We are now ready to prove ###### Proposition 4.2. Let $`\alpha ^d`$. Then $`\alpha `$ is a $`p`$-zero divisor if and only if there exists $`fL^p(^d)0`$ such that $`\alpha f=0`$. ###### Proof. Define a Banach space isomorphism $`\zeta :L^p(^d)L^p(𝕋^d\times ^d)`$ by $`(\zeta f)(t,n)=f(tn)`$ for $`fL^p(^d)`$. We want to show that this isomorphism commutes with the action of $`^d`$. Clearly it will be sufficient to show that $`\zeta `$ commutes with the action of $`^d`$. If $`m^d`$, then $`m\left((\zeta f)(t,n)\right)`$ $`=m(f(tn))`$ $`=f(tnm)`$ $`=(mf)(tn)`$ $`=(\zeta (mf))(t,n).`$ Thus the action of $`^d`$ commutes with $`\zeta `$. We deduce that for $`\alpha ^d`$, there exists $`fL^p(^d)0`$ such that $`\alpha f=0`$ if and only if there exists $`f^{}L^p(𝕋^d\times ^d)0`$ such that $`\alpha f^{}=0`$. The proposition now follows from Lemma 4.1. ∎ ###### Remark 4.3. 
Replacing $`L^p(^d)`$ by $`C_0(^d)`$ in the above arguments, we can also show that $`\alpha `$ is a uniform nonzero divisor if and only if $`\alpha f0`$ for all $`fC_0(^d)0`$. ## 5. Proofs of theorems 2.1, 2.2, and 2.3 The proof of Theorem 2.1 is obtained by combining \[11, theorem 3\] with Proposition 4.2. The proof of Theorem 2.2 is obtained by combining \[3, theorem 2.12\] with Remark 4.3. Before we prove Theorem 2.3, we will need to define the notion of a $`q`$-thin set. See for more information on this and other concepts used in this paragraph. Let $`G`$ be a locally compact abelian group and let $`X`$ be its character group. Let $`\beta L^{\mathrm{}}(G)`$ and let $`\widehat{\beta }`$ indicate the generalized Fourier transform of $`\beta `$. The key reason for using the generalized Fourier transform is that for $`\alpha L^1(G)`$, we have $`\widehat{\alpha \beta }=\widehat{\alpha }\widehat{\beta }`$ which tells us that $`\alpha \beta =0`$ if and only if $`\mathrm{supp}\widehat{\beta }Z(\alpha )`$. Let $`EX`$. We shall say that $`E`$ is $`q`$-thin if $`\beta C_0(G)L^p(G)`$ and $`\mathrm{supp}\widehat{\beta }E`$ implies $`\beta =0`$. Recall that $`p`$ is the conjugate index of $`q`$. The result of Edwards \[4, theorem 2.2\] says that if $`\alpha L^1(^d)`$ and $`Z(\alpha )`$ is $`q`$-thin, then $`T^q[\alpha ]=L^q(G)`$. Here our $`q`$ is used in place of Edwards’s $`p`$, and our $`p`$ is used in place of his $`p^{}`$. We are now ready to prove Theorem 2.3. Suppose $`Z(\alpha )`$ satisfies the hypothesis of the theorem. Let $`\beta L^p(^d)0`$ such that $`\alpha \beta =0`$ and $`p\frac{2(d\nu )}{d1\nu }`$. Since $`\frac{2(d\nu )}{d1\nu }>1`$ and increasing $`p`$ retains the property $`\beta L^p(^d)`$, we may assume that $`p>1`$. Then $`\stackrel{~}{\alpha }\stackrel{~}{\beta }=0`$ and using Proposition 3.2, we see that $`T^q[\alpha ]L^q(^d)`$. But \[4, theorem 2.2\] tells us that $`Z(\alpha )`$ is not $`q`$-thin, and this contradicts \[6, theorem 1\]. Conversely, let $`T`$ be a smooth, nonzero mass density on $`Z(\alpha )`$ vanishing near the boundary of $`Z(\alpha )`$. Using \[6, theorem 3\], we can construct $`\beta L^p(^d)0`$ for $`p>\frac{2(d\nu )}{d1\nu }`$ such that $`\widehat{\beta }=T`$. Then $`\mathrm{supp}\widehat{\beta }Z(\alpha )`$, that is $`\alpha \beta =0`$. An application of Proposition 4.2 completes the proof of Theorem 2.3. ## 6. Free groups and $`p`$-zero divisors Throughout this section, $`2k`$. In it was proven that if $`0\alpha F_k`$, then $`\alpha `$ is not a 2-zero divisor. In this section we will give explicit examples to show that this result cannot be extended to $`L^p(F_k)`$ for any $`p>2`$. We will conclude this section by giving sufficient conditions for elements of $`L_r^1(F_k)`$, the radial functions of $`L^1(F_k)`$ as defined below, to be $`p`$-zero divisors. Any element $`x`$ of $`F_k`$ has a unique expression as a finite product of generators and their inverses, which does not contain any two adjacent factors $`ww^1`$ or $`w^1w`$. The number of factors in $`x`$ is called the *length* of $`x`$ and is denoted by $`|x|`$. A function in $`L^{\mathrm{}}(F_k)`$ will be called radial if its value depends only on $`|x|`$. Let $`E_n=\{xF_k|x|=n\}`$, and let $`e_n`$ indicate the cardinality of $`E_n`$. Then $`e_n=2k(2k1)^{n1}`$ for $`n1`$, and $`e_0=1`$. Let $`\chi _n`$ denote the characteristic function of $`E_n`$, so as an element of $`F_k`$ we have $`\chi _n=_{|x|=n}x`$. Then every radial function has the form $`_{n=0}^{\mathrm{}}a_n\chi _n`$ where $`a_n`$. 
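As a quick, purely illustrative check of the sphere sizes $`e_n`$ just introduced, the following Python sketch (not part of the paper) enumerates reduced words in $`F_k`$ and compares the counts with $`2k(2k-1)^{n-1}`$.

```python
def sphere_sizes(k, n_max):
    """Count the reduced words of each length in the free group F_k by growing
    them letter by letter: a reduced word of length n+1 is a reduced word of
    length n followed by any generator except the inverse of its last letter."""
    gens = [(i, s) for i in range(k) for s in (+1, -1)]   # (generator index, sign)
    counts = [1]                       # e_0 = 1, the identity element
    frontier = [()]                    # words stored as tuples of (index, sign)
    for _ in range(n_max):
        frontier = [w + (g,) for w in frontier for g in gens
                    if not (w and w[-1] == (g[0], -g[1]))]
        counts.append(len(frontier))
    return counts

k = 2
for n, e_n in enumerate(sphere_sizes(k, 5)):
    formula = 1 if n == 0 else 2 * k * (2 * k - 1) ** (n - 1)
    print(n, e_n, formula)     # both columns read 1, 4, 12, 36, 108, 324
```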
Let $`L_r^p(F_k)`$ denote the radial functions contained in $`L^p(F_k)`$ and let $`(F_k)_r`$ denote the radial functions contained in $`F_k`$. Then $`L_r^p(F_k)`$ is the closure of $`(F_k)_r`$ in $`L^p(F_k)`$. Let $`\omega =\sqrt{2k1}`$. It was shown in \[5, chapter 3\] that $`\chi _1\chi _1`$ $`=\chi _2+2k\chi _0`$ $`\chi _1\chi _n`$ $`=\chi _{n+1}+\omega ^2\chi _{n1},n2,`$ hence $`L_r^1(F_k)`$ is a commutative algebra which is generated by $`\chi _0`$ and $`\chi _1`$. Later we will need the following elementary result. ###### Lemma 6.1. Let $`x,yF_k`$ with $`|x|=|y|`$, and let $`0m,n`$. Then $`\chi _mx,\chi _n=\chi _my,\chi _n`$. ###### Proof. We have $`\chi _mx,\chi _n=x,\chi _m^{}\chi _n=x,\chi _m\chi _n`$. By the above remarks, $`\chi _m\chi _n`$ is a sum of elements of the form $`\chi _r`$. Therefore we need only prove that $`x,\chi _r=y,\chi _r`$. But $`x,\chi _r=1`$ if $`m=r`$ and 0 if $`mr`$, and the result follows. ∎ Let $`\alpha `$ be a complex-valued function on $`F_k`$. Set $$a_n(\alpha )=\frac{1}{e_n}\underset{xE_n}{}\alpha (x)$$ and denote by $`P(\alpha )`$ the radial function $`_{n=0}^{\mathrm{}}a_n(\alpha )\chi _n`$. ###### Lemma 6.2. Let $`1p`$ or $`p=\mathrm{}`$, let $`\alpha L_r^1(F_k)`$, and let $`\beta L^p(F_k)`$. If $`\alpha \beta =0`$, then $`\alpha P(\beta )=0`$. ###### Proof. Let $`f,hF_k`$. It was shown in \[9, lemma 6.1\] that $`P(f)P(h)=P(P(f)h)`$. Write $`\beta =_{gF_k}b_gg`$. If $`p\mathrm{}`$ and $`0a_1,\mathrm{},a_n`$, then by Jensen’s inequality \[10, p. 189\] applied to the function $`x^p`$ for $`x>0`$, $$\left(\frac{a_1+\mathrm{}+a_n}{n}\right)^p\frac{a_1^p+\mathrm{}+a_n^p}{n},$$ consequently $$P(\beta )_p^p=\underset{n=0}{\overset{\mathrm{}}{}}e_n\left|\frac{1}{e_n}\underset{|g|=n}{}b_g\right|^p\underset{gF_k}{}|b_g|^p=\beta _p^p.$$ Therefore $`P`$ is a continuous map from $`L^p(F_k)`$ into $`L_r^p(F_k)`$ for $`p\mathrm{}`$. It is also continuous for $`p=\mathrm{}`$. The lemma follows because the map $`L^1(G)\times L^p(G)L^p(G)`$ is continuous; specifically $`\alpha \beta _p\alpha _1\beta _p`$. ∎ For $`n_0`$, define polynomials $`P_n`$ by $`P_0(z)=1,P_1(z)=z,P_2(z)=z^22k`$ and $`P_{n+1}(z)=zP_n(z)\omega ^2P_{n1}(z)\text{ for }n2.`$ Let $`\alpha =_{n=0}^{\mathrm{}}a_n\chi _nL_r^1(F_k)`$. In , Pytlik shows the following. 1. $`X=\{x+iy(\frac{x}{2k})^2+(\frac{y}{2k2})^21\}`$ is the spectrum of $`L_r^1(F_k)`$. 2. The Gelfand transform of $`\alpha `$ is given by $`\widehat{\alpha }(z)=_{n=0}^{\mathrm{}}a_nP_n(z)`$ for $`zX`$. Let $`Z(\alpha )=\{zX\widehat{\alpha }(z)=0\}`$. For $`zX`$ we define $`\varphi _zL_r^{\mathrm{}}(F_k)`$, the space of continuous linear functionals on $`L_r^1(F_k)`$ \[1, p. 34\], by $$\varphi _z=\underset{n=0}{\overset{\mathrm{}}{}}\frac{P_n(z)}{e_n}\chi _n.$$ We can now state ###### Lemma 6.3. Let $`\alpha L_r^1(F_k)`$ and let $`zX`$. Then $`\alpha \overline{\varphi _z}=0`$ if and only if $`zZ(\alpha )`$. ###### Proof. Let $`\beta L_r^1(F_k)`$ and write $`\beta =_{m=0}^{\mathrm{}}b_m\chi _m`$. Then $`\beta ,\overline{\varphi _z}`$ $`={\displaystyle \underset{m,n}{}}{\displaystyle \frac{b_mP_n(z)}{e_n}}\chi _m,\chi _n`$ $`={\displaystyle \underset{n}{}}b_nP_n(z)=\widehat{\beta }(z).`$ Applying this in the case $`\beta =\alpha \chi _n`$, we obtain $`\alpha \chi _n,\overline{\varphi _z}=\widehat{\alpha }(z)P_n(z)`$. Using Lemma 6.1, we deduce that if $`yF_k`$ and $`|y|=n`$, then $`\alpha y,\varphi _z=\widehat{\alpha }(z)P_n(z)/e_n`$. Since $`\alpha =\stackrel{~}{\alpha }`$, the result now follows from Lemma 3.1. 
∎ If $`\alpha L_r^1(F_k)`$, we shall say that $`\alpha \chi _n`$ is a radial translate of $`\alpha `$. We then set $`TR^1[\alpha ]`$ equal to the closure in $`L_r^1(F_k)`$ of the set of linear combinations of radial translates of $`\alpha `$. ###### Proposition 6.4. Let $`\alpha L_r^1(F_k)`$. Then $`\alpha \beta 0`$ for all $`\beta L^{\mathrm{}}(F_k)0`$ if and only if $`Z(\alpha )=\mathrm{}`$. ###### Proof. If $`zZ(\alpha )`$, then $`\varphi _zL^{\mathrm{}}(F_k)0`$ and $`\alpha \overline{\varphi _z}=0`$ by Lemma 6.3. Conversely suppose there exists $`\beta L^{\mathrm{}}(F_k)0`$ such that $`\alpha \beta =0`$. Then $`\beta _y0`$ for some $`yF_k`$, so replacing $`\beta `$ with $`\beta y^1`$, we may assume that $`P(\beta )0`$. If $`\gamma =\overline{\beta }`$, then $`\alpha \overline{\gamma }=0`$ and $`P(\gamma )0`$. Using Lemma 6.2 we see that $`\alpha \overline{P(\gamma )}=0`$, and we deduce from Lemma 3.1 that $`\alpha _y,P(\gamma )=0`$ for all $`yF_k`$. It follows that $`\alpha \chi _n,P(\gamma )=0`$ for all $`n_0`$, consequently $`TR^1[\alpha ]L_r^1(F_k)`$. Let $`J`$ be a maximal ideal in $`L_r^1(F_k)`$ which contains $`TR^1[\alpha ]`$. By Gelfand theory there exists $`zX`$ such that $`J=\{\delta L_r^1(F_k)\widehat{\delta }(z)=0\}`$, so $`zZ(\gamma )`$. ∎ We can now state ###### Example 6.5. Let $`k2`$. Then $`\chi _1`$ is a $`p`$-zero divisor for all $`p>2`$. ###### Proof. Since $`0Z(\chi _1)`$, we see from Lemma 6.3 that $`\chi _1\varphi _0=0`$. Of course $`\varphi _00`$. We now prove the stronger statement that $`\varphi _0L^p(F_k)`$ for all $`p>2`$. We have $$\varphi _0=\underset{n=0}{\overset{\mathrm{}}{}}\frac{P_n(0)}{e_n}\chi _n=\underset{n=0}{\overset{\mathrm{}}{}}\frac{(1)^n}{(2k1)^n}\chi _{2n}.$$ Therefore $`{\displaystyle \underset{gF_k}{}}|\varphi _0(g)|^p`$ $`=1+{\displaystyle \underset{n=1}{\overset{\mathrm{}}{}}}{\displaystyle \frac{e_{2n}}{(2k1)^{pn}}}=1+{\displaystyle \underset{n=1}{\overset{\mathrm{}}{}}}{\displaystyle \frac{2k(2k1)^{2n1}}{(2k1)^{pn}}}`$ $`=1+{\displaystyle \frac{2k}{2k1}}{\displaystyle \underset{n=1}{\overset{\mathrm{}}{}}}{\displaystyle \frac{1}{(2k1)^{n(p2)}}}`$ and the result follows. ∎ We can use the above result to prove that the nonsymmetric sum of generators in $`F_k`$ is a $`p`$-zero divisor for all $`p>2`$ in the case $`k`$ is even and $`k>2`$. Specifically we have ###### Example 6.6. Let $`k>3`$ and let $`\{x_1,\mathrm{},x_k\}`$ be a set of generators for $`F_k`$. If $`k`$ is even, then $`x_1+\mathrm{}+x_k`$ is a $`p`$-zero divisor for all $`p>2`$. To establish this, we need some results about free groups. ###### Lemma 6.7. Let $`0<n`$ and let $`F`$ be the free group on $`x_1,\mathrm{},x_n`$. Then no nontrivial word in the $`2n1`$ elements $`x_1^2,\mathrm{},x_n^2,x_1x_2,x_2x_3,\mathrm{},x_{n1}x_n`$ is the identity; in particular these $`2n1`$ elements generate a free group of rank $`2n1`$. ###### Proof. We shall use induction on $`n`$, so assume that the result is true with $`n1`$ in place of $`n`$. Let $`T`$ denote the Cayley graph of $`F`$ with respect to the generators $`x_1,\mathrm{},x_n`$. Thus the vertices of $`T`$ are the elements of $`F`$, and $`f,gF`$ are joined by an edge if and only if $`f=gx_i^{\pm 1}`$ for some $`i`$. Also $`F`$ acts by left multiplication on $`T`$. Suppose a nontrivial word in $`x_1^2,\mathrm{},x_n^2,x_1x_2,x_2x_3,\mathrm{},x_{n1}x_n`$ is the identity, and choose such a word $`w`$ with shortest possible length. 
Note that $`w`$ must involve $`x_n^2`$, because $`F`$ is the free product of the group generated by $`x_1,\mathrm{},x_{n1}`$ and the group generated by $`x_{n1}x_n`$. By conjugating and taking inverses if necessary, we may assume without loss of generality that $`w`$ ends with $`x_n^2`$. Write $`w=w_m\mathrm{}w_1`$, where $`w_1=x_n^2`$, and each of the $`w_i`$ are one of the above $`2n1`$ elements. Let us consider the path whose $`(2i+1)`$th vertex is $`w_i\mathrm{}w_11`$. Note that $`w1=1`$, but $`w_i\mathrm{}w_111`$ for $`0<i<m`$. Observe that the path of length 2 from $`x_n^2`$ to $`w_2x_n^2`$ cannot go through $`x_n`$ (just go through the $`2n1`$ possibilities for $`w_2`$, noting that $`w_2x_n^2`$). Now remove the edge joining $`x_n`$ and $`x_n^2`$. Since $`T`$ is a tree \[2, I.8.2 theorem\], the resulting graph will become two trees; one component $`T_1`$ containing 1 and the other component $`T_2`$ containing $`x_n^2`$. Since the length 2 path from $`x_n^2`$ to $`w_2x_n^2`$ did not go though $`x_n`$, for $`i1`$ the path $`w_i\mathrm{}w_2(w_11)`$ remains in $`T_2`$ at least until it passes through $`x_n^2`$ again. Also the path must pass through $`x_n^2`$ again in order to get back to 1. Since the paths $`w_i\mathrm{}(w_11)`$ all have even length (all the $`w_i`$ are words of length 2), it follows that $`w_l\mathrm{}w_11=x_n^2`$ for some $`l`$, where $`2l<m`$. We deduce that $`w_l\mathrm{}w_2=1`$, which contradicts the minimality of the length of $`w`$. ∎ ###### Corollary 6.8. Let $`n_1`$ and let $`F`$ be the free group on $`x_1,\mathrm{},x_n`$. Then no nontrivial word in the $`2n1`$ elements $`x_1^2,\mathrm{},x_n^2,x_1^1x_2,x_2^1x_3,\mathrm{},x_{n1}^1x_n`$ is the identity; in particular these $`2n1`$ elements generate a free group of rank $`2n1`$. ###### Proof. This follows immediately from Lemma 6.7: replace $`x_ix_{i+1}`$ with $`x_i^2x_ix_{i+1}`$ for all $`i<n`$. ∎ ###### Corollary 6.9. Let $`n_1`$ and let $`F`$ be the free group on $`x_1,\mathrm{},x_n,w`$. Then the elements $`wx_1,wx_1^1,\mathrm{},wx_n,wx_n^1`$ generate a free subgroup of rank $`2n`$. ###### Proof. The above elements generate the subgroup generated by $$x_1^2,\mathrm{},x_n^2,x_1^1x_2,x_2^1x_3,\mathrm{},x_{n1}^1x_n,wx_1.$$ The result follows from Corollary 6.8. ∎ ###### Proof of Example 6.6. Let $`G=F_k`$ and let $`F`$ be the free group on $`y_1,\mathrm{},y_k,w`$. By Corollary 6.9 there is a monomorphism $`\theta :GF`$ determined by the formula $$\theta (x_1)=wy_1,\theta (x_2)=wy_1^1,\mathrm{},\theta (x_k)=wy_{k/2}^1.$$ Note that $`\theta `$ induces a Banach space monomorphism $`L^p(G)L^p(F)`$. Set $`\alpha =wy_1+wy_1^1+\mathrm{}+wy_{k/2}+wy_{k/2}^1`$. Since $`y_1+y_1^1+\mathrm{}+y_{k/2}+y_{k/2}^1`$ is a $`p`$-zero divisor by Example 6.5, we see that $`\alpha `$ is a $`p`$-zero divisor, say $`\alpha \beta =0`$ where $`0\beta L^p(F)`$. Write $`F=_{tT}\theta (G)t`$ where $`T`$ is a right transversal for $`\theta (G)`$ in $`F`$. Then $`\beta =_{tT}\beta _tt`$ where $`\beta _tL^p(\theta (G))`$ for all $`t`$. Also $`\alpha \beta _t=0`$ for all $`t`$ and $`\beta _s0`$ for some $`sT`$. Define $`\gamma L^p(G)`$ by $`\theta (\gamma )=\beta _s`$. Then $`0\gamma L^p(G)`$ and $`(x_1+\mathrm{}+x_k)\gamma =0`$ as required. ∎ We conclude with some information on the existence of $`p`$-zero divisors in $`L_r^1(F_k)`$. Let $`\alpha L_r^1(F_k)`$ and define $`p(\alpha )`$ as follows. If $`Z(\alpha )(2k,2k)=\mathrm{}`$, then set $`p(\alpha )=\mathrm{}`$. 
If $`Z(\alpha )(2k,2k)\mathrm{}`$, then set $`m(\alpha )=\mathrm{min}\{|t|tZ(\alpha )(2k,2k)\}`$. If $`m(\alpha )[0,2\omega ]`$, then set $`p(\alpha )=2`$. Finally if $`m(\alpha )(2\omega ,2k)`$, then let $`p(\alpha )`$ be the positive root of the following equation in $`p`$: $$m(\alpha )=\sqrt{2k1}\left((2k1)^{\frac{1}{2}\frac{1}{p}}+(2k1)^{\frac{1}{p}\frac{1}{2}}\right).$$ We can now state ###### Proposition 6.10. Let $`\alpha L_r^1(F_k)`$. Then $`\alpha `$ is a $`p`$-zero divisor for all $`p>p(\alpha )`$. ###### Proof. Let $`t(2k,2k)`$ such that $`m(\alpha )=|t|`$ and suppose $`p>p(\alpha )`$. Since $`\varphi _t`$ is a positive definite function by \[9, lemma 6.1\], we can apply \[1, theorem 2(a)\] to deduce that $`\varphi _tL_r^p(F_k)`$. By Lemma 6.3 $`\alpha \varphi _t=0`$ and the result is proven. ∎
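As a numerical illustration of the radial-function computations in this section (in particular Example 6.5 and Lemma 6.3 at $`z=0`$), the following sketch — a check added here for $`k=2`$, not part of the paper's argument — builds the coefficients $`P_n(0)/e_n`$ of $`\varphi _0`$, verifies from the multiplication rules for the $`\chi _n`$ quoted above that $`\chi _1\varphi _0`$ vanishes, and evaluates the truncated sums $`_ne_n|P_n(0)/e_n|^p`$, which converge for $`p>2`$ but grow linearly with the cutoff at $`p=2`$.

```python
import numpy as np

k, N = 2, 40
omega2 = 2 * k - 1                     # omega^2 = 2k - 1

# P_n(0) from the recursion P_0 = 1, P_1 = z, P_2 = z^2 - 2k,
# P_{n+1} = z P_n - omega^2 P_{n-1} (n >= 2), evaluated at z = 0
P0 = np.zeros(N + 1)
P0[0], P0[1], P0[2] = 1.0, 0.0, -2.0 * k
for n in range(2, N):
    P0[n + 1] = -omega2 * P0[n - 1]    # the z * P_n term vanishes at z = 0

e = np.array([1.0] + [2 * k * (2 * k - 1) ** (n - 1) for n in range(1, N + 1)])
c = P0 / e                             # phi_0 = sum_n c_n chi_n

# chi_1 * phi_0: using chi_1 chi_0 = chi_1, chi_1 chi_1 = chi_2 + 2k chi_0 and
# chi_1 chi_n = chi_{n+1} + omega^2 chi_{n-1}, the coefficient of chi_m is
#   m = 0 : 2k c_1,  m = 1 : c_0 + omega^2 c_2,  m >= 2 : c_{m-1} + omega^2 c_{m+1}
coeffs = [2 * k * c[1], c[0] + omega2 * c[2]] + \
         [c[m - 1] + omega2 * c[m + 1] for m in range(2, N - 1)]
print("max |coefficient of chi_1 * phi_0| =", max(abs(x) for x in coeffs))   # ~ 0

# sum_g |phi_0(g)|^p = sum_n e_n |c_n|^p : finite for p > 2, divergent at p = 2
for p in (2.0, 2.5, 3.0):
    print("p = %.1f  truncated sum = %.3f" % (p, float(np.sum(e * np.abs(c) ** p))))
```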
no-problem/0003/hep-ph0003310.html
ar5iv
text
# CP Violation in the SM and Beyond in Hadronic B Decays ## Abstract Three different methods, using $`B_dJ/\psi K_S`$, $`J/\psi K_S\pi ^0`$, $`B_dK^{}\pi ^+,\pi ^+\pi ^{}`$ and $`B_uK^{}\pi ^0,\overline{K}^0\pi ^{},\pi ^{}\pi ^0`$, to extract hadronic model independent information about new physics are discussed in this talk. 1. Introduction In this talk I discuss three methods, using $`B_d`$ $`J/\psi K_S`$, $`J/\psi K_S\pi ^0`$ , $`B_dK^{}\pi ^+,\pi ^+\pi ^{}`$ and $`B_uK^{}\pi ^0,\overline{K}^0\pi ^{},\pi ^{}\pi ^0`$ , to extract hadronic model independent information about the SM and models beyond. The SM effective Hamiltonian respossible for hadronic decays is knownRef. . When going beyond the SM, there are new contributions. I will take three models beyond the SM for illustrations. Model i): R-parity violation model In R-parity violating supersymmetric (SUSY) models, there are new CP violating phases. Here I consider the effects due to, $`L=(\lambda _{ijk}^{\prime \prime }/2)U_R^{ci}D_R^{cj}D_R^{ck}`$, R-parity violating interaction. Exchange of $`\stackrel{~}{d}_i`$ squark can generate the following effective Hamiltonian at tree level, $`H_{eff}=(4G_F/\sqrt{2})`$ $`V_{fb}^{}V_{fq}`$ $`c^{f(q)}[O_1^{f(q)}(R)O_2^{f(q)}(R)]`$. Here $`O_1^{f(q)}(R)=\overline{f}\gamma _\mu Rf\overline{q}\gamma ^\mu Rb`$ and $`O_2^{f(q)}(R)`$ $`=\overline{f}_\alpha \gamma _\mu Rf_\beta \overline{q}_\beta \gamma ^\mu Rb_\alpha `$. The operators $`O_{1,2}(R)`$ have the opposite chirality as those of the tree operators $`O_{1,2}(L)`$ in the SM. The coefficients $`c^{f(q)}`$ with QCD corrections are given by $`(c_1c_2)(\sqrt{2}/(4G_FV_{fb}V_{fq}^{}))(\lambda _{fqi}^{\prime \prime }\lambda _{f3i}^{\prime \prime }/2m_{\stackrel{~}{d}^i}^2)`$. Here $`m_{\stackrel{~}{d}^i}`$ is the squark mass. $`B_u\pi ^{}\overline{K}^0`$ and $`B_uK^{}\overline{K}^0`$ data constrain $`|c^{c(q)}|`$ to be less than $`O(1)`$ . The new contributions can be larger than the SM ones. I will take the values to be 10% of the corresponding values for the SM with arbitrary phases $`\delta ^{f(q)}`$ for later discussions. Model ii): SUSY with large gluonic dipole interaction In SUSY models with R-parity conservation, potential large contributions to B decays may come from gluonic dipole interaction $`c_{11}^{new}`$ by exchanging gluino at loop level with left- and right-handed squark mixing. $`c_{11}^{new}`$ is constrained by experimental data from $`bs\gamma `$ which, however, still allows $`c_{11}^{new}`$ to be as large as three times of the SM contribution in magnitude with an arbitrary CP violating phase $`\delta _{dipole}`$ . I will take $`c_{11}^{new}`$ to be 3 times of the SM value with an arbitrary $`\delta _{dipole}`$. Model iii): Anomalous gauge boson couplings Anomalous gauge boson couplings can modify the Wilson Coefficients of the SM ones with the same CP violating source as that for the SM . The largest contribution may come from the $`WWZ`$ anomalous coupling $`\mathrm{\Delta }g_1^Z`$. LEP data constrain $`\mathrm{\Delta }g_1^Z`$ to be within $`0.113<\mathrm{\Delta }g_1^Z<0.126`$ at the 95% c.l.. The resulting Wilson Coefficients can be very different from those in the SM. 2. 
Test new physics and $`\mathrm{sin}2\beta `$ from $`BJ/\psi K_S,J/\psi K_S\pi ^0`$ The usual $`CP`$ violation measure for $`B`$ decays to CP eigenstates is $`\mathrm{Im}\xi =\mathrm{Im}\{(q/p)(A^{}\overline{A}/|A|^2)\}`$, where $`q/p=e^{2i\varphi _B}`$ is from $`B^0`$$`\overline{B}^0`$ mixing, while $`A`$, $`\overline{A}`$ are $`B`$, $`\overline{B}`$ decay amplitudes. For $`BJ/\psi K_S`$, the final state is $`P`$-wave hence $`CP`$ odd. Setting the weak phase in the decay amplitude to be $`2\varphi _0=Arg(A/\overline{A})`$, one has , $`\mathrm{Im}\xi (BJ/\psi K_S)=\mathrm{sin}(2\varphi _B+2\varphi _0)\mathrm{sin}2\beta _{J/\psi K_S}`$. For $`BJ/\psi K^{}J/\psi K_S\pi ^0`$, the final state has both $`P`$-wave ($`CP`$ odd) and $`S`$\- and $`D`$-wave ($`CP`$ even) components. If $`S`$\- and $`D`$-wave have a common weak phase $`\stackrel{~}{\varphi }_1`$ and P-wave has a weak phase $`\varphi _1`$ , $`\mathrm{Im}\xi (BJ/\psi K_S\pi ^0)`$ $`=`$ $`\mathrm{Im}\{e^{2i\varphi _B}[e^{2i\varphi _1}|P|^2e^{2i\stackrel{~}{\varphi }_1}(1|P|^2)]\}`$ (1) $``$ $`(12|P|^2)\mathrm{sin}2\beta _{J/\psi K_S\pi ^0},`$ where $`|P|^2`$ is the fraction of P-wave component. In the SM one has $`\varphi _B=\beta `$ and $`2\varphi _0=2\varphi _1(2\stackrel{~}{\varphi }_1)=Arg[\{V_{cb}V_{cs}^{}\{c_1+a(a^{})c_2\}\}/\{V_{cb}^{}V_{cs}\{c_1+a(a^{})c_2\}\}]=0`$, in the Wolfenstein phase convention. Here $`a`$ and $`a^{}`$ are parameters which indicate the relative contribution from $`O_2^{c(s)}(L)`$ compared with $`O_1^{c(s)}(L)`$ for the P-wave and (S-, D-) wave. In the factorization approximation $`1/a=1/a^{}=N_c`$ (the number of colors). Therefore $`\mathrm{sin}2\beta _{J/\psi K_S}=\mathrm{sin}2\beta _{J/\psi K_S\pi ^0}=\mathrm{sin}2\beta `$. $`|P|^2`$ has been measured with a small value $`0.16\pm 0.08\pm 0.04`$ by CLEO which implies that the measurement of $`\mathrm{sin}2\beta `$ using $`BJ/\psi K_S\pi ^0`$ is practical although there is a dilution factor of 30%. When one goes beyond the SM, $`\mathrm{sin}2\beta _{J/\psi K_S}=\mathrm{sin}2\beta _{J/\psi K_S\pi ^0}`$ is not necessarily true. Let us now analyze the possible values for $`\mathrm{\Delta }\mathrm{sin}2\beta \mathrm{sin}2\beta _{J/\psi K_S}\mathrm{sin}2\beta _{J/\psi K_S\pi ^0}`$. Because $`BJ/\psi K_S,J/\psi K^{}`$ are tree dominated processes, Models ii) and iii) would not change the SM predictions significantly. $`\mathrm{\Delta }\mathrm{sin}2\beta `$ is not sensitive to new physics in Models ii) and iii). However, for Model i), the contributions can be large. The weak phases are given by $`2\varphi _0=2\varphi _1(2\stackrel{~}{\varphi }_1)=Arg[\{V_{cb}V_{cs}^{}\{c_1+ac_2+()c^{c(s)}\{1a(a^{})\}\}\}/\{V_{cb}^{}V_{cs}\{c_1+ac_2+()c^{c(s)}\{1a(a^{})\}\}\}]`$. Taking the new contributions to be 10% of the SM ones, one obtains $`\varphi _0=\varphi _1\stackrel{~}{\varphi }_10.1\mathrm{sin}\delta ^{c(s)}`$. From this, $`\mathrm{\Delta }\mathrm{sin}2\beta 4((1|P|^2)/(12|P|^2))\mathrm{cos}2\varphi _B(0.1\mathrm{sin}\delta ^{c(s)})0.5\mathrm{cos}2\varphi _B\mathrm{sin}\delta ^{c(s)}`$. $`\varphi _B`$ may be different from the SM one due to new contributions. Using the central value $`\mathrm{sin}2\beta _{J/\psi K_S}=0.91`$ measured from CDF and ALEPH , $`\mathrm{\Delta }\mathrm{sin}2\beta 0.2\mathrm{sin}\delta ^{c(s)}`$ which can be as large as 0.2. Such a large difference can be measured at B factories. Information about new CP violating phase $`\delta ^{c(s)}`$ can be obtained. 3. 
Test new physics and rate differences between $`B_d\pi ^+K^{},\pi ^+\pi ^{}`$ I now show that hadronic model independent information about CP violation can be obtained using SU(3) analysis for rare hadronic B decays. The SM operators $`O_{1,2}`$, $`O_{36,11,12}`$, and $`O_{710}`$ for rare hadronic B decays transform under SU(3) symmetry as $`\overline{3}_a+\overline{3}_b+6+\overline{15}`$, $`\overline{3}`$, and $`\overline{3}_a+\overline{3}_b+6+\overline{15}`$, respectively. These properties enable one to write the decay amplitudes for $`BPP`$ in only a few SU(3) invariant amplitudes. When small annihilation contributions are neglected, one has $`A(B_d\pi ^+\pi ^{})=V_{ub}V_{ud}^{}T+V_{tb}V_{td}^{}P,`$ $`A(B_d\pi ^+K^{})=V_{ub}V_{us}^{}T+V_{tb}V_{ts}^{}P.`$ From above one obtains, $`\mathrm{\Delta }(\pi ^+\pi ^{})=\mathrm{\Delta }(\pi ^+K^{})`$. This non-trivial equality dose not depend on detailed models for hadronic physics and provides test for the SM . Including SU(3) breaking effect from factorization calculation, one has, $`\mathrm{\Delta }(\pi ^+\pi ^{})\frac{f_\pi ^2}{f_K^2}\mathrm{\Delta }(\pi ^+K^{})`$. Although there is correction, the relative sign is not changed. When going beyond the SM, there are new CP violating phases leading to violation of the equality above. For example Models i) and ii) can alter the equality significantly, while Model iii) can not because the CP violating source is the same as that in the SM. To illustrate how the situation is changed in Models i) and ii), I calculate the normalized asymmetry $`A_{norm}(PP)=`$$`\mathrm{\Delta }(PP)/\mathrm{\Gamma }(\pi ^+K^{})`$ using factorization approximation following Ref. . The new effects may come in such a way that only $`B_d\pi ^+K^{}`$ is changed but not $`B_d\pi ^+\pi ^{}`$. This scenario leads to maximal violation of the equality discussed here. The results are shown in Figure 1. The solid curve is the SM prediction for $`A_{norm}(\pi ^+K^{})`$ as a function of $`\gamma `$. For $`\gamma _{best}`$ $`A_{norm}(\pi ^+K^0)10\%`$. It is clear from Figure 1 that within the allowed range of the parameters, new physics effects can dramatically violate the equality discussed above. 4. Test new physics and SU(3) relation for $`B_u\pi ^{}\overline{K}^0,\pi ^0K^{},\pi ^0\pi ^{}`$ I now discuss another method which provides important information about new physics using $`B_u\pi ^{}\overline{K}^0,\pi ^0K^{},\pi ^0\pi ^{}`$. Using SU(3) relation and factorization estimate for the breaking effects, one obtains $`A(B_u\pi ^{}\overline{K}^0)+\sqrt{2}A(B_u\pi ^0K^{})`$ $`=`$ $`ϵA(B_u\pi ^{}\overline{K}^0)e^{i\mathrm{\Delta }\varphi }(e^{i\gamma }\delta _{EW}),`$ $`\delta _{EW}={\displaystyle \frac{3}{2}}{\displaystyle \frac{|V_{cb}||V_{cs}|}{|V_{ub}||V_{us}|}}{\displaystyle \frac{c_9+c_{10}}{c_1+c_2}},`$ $`ϵ=\sqrt{2}{\displaystyle \frac{|V_{us}|}{|V_{ud}|}}{\displaystyle \frac{f_K}{f_\pi }}{\displaystyle \frac{|A(\pi ^+\pi ^0)|}{|A(\pi ^+K^0)|}},`$ where $`\mathrm{\Delta }\varphi `$ is the difference of the final state rescattering phases for $`I=3/2,1/2`$ amplitudes. For $`f_K/f_\pi =1.22`$ and $`Br(B^\pm \pi ^\pm \pi ^0)=(0.54_{0.20}^{+0.21}\pm 0.15)\times 10^5`$ , one obtains $`ϵ=0.21\pm 0.06`$. 
Neglecting small tree contribution to $`B_u\pi ^{}\overline{K}^0`$, one obtains $`\mathrm{cos}\gamma =\delta _{EW}{\displaystyle \frac{(r_+^2+r_{}^2)/21ϵ^2(1\delta _{EW}^2)}{2ϵ(\mathrm{cos}\mathrm{\Delta }\varphi +ϵ\delta _{EW})}},r_+^2r_{}^2=4ϵ\mathrm{sin}\mathrm{\Delta }\varphi \mathrm{sin}\gamma ,`$ where $`r_\pm ^2=4Br(\pi ^0K^\pm )/[Br(\pi ^+K^0)+Br(\pi ^{}\overline{K}^0)]=1.33\pm 0.45`$ . It is interesting to note that although the above equation is complicated, bound on $`\mathrm{cos}\gamma `$ can be obtained . For $`\mathrm{\Delta }=(r_+^2+r_{}^2)/21ϵ^2(1\delta _{EW}^2)()>0`$, we have $`\mathrm{cos}\gamma ()\delta _{EW}{\displaystyle \frac{\mathrm{\Delta }}{2ϵ(1+ϵ\delta _{EW})}},\text{or}\mathrm{cos}\gamma ()\delta _{EW}{\displaystyle \frac{\mathrm{\Delta }}{2ϵ(1+ϵ\delta _{EW})}}.`$ (2) The bounds on $`\mathrm{cos}\gamma `$ as a function of $`\delta _{EW}`$ are shown in Fig. 2 by the solid curves for three representative cases: a) Central values for $`ϵ`$ and $`r_\pm ^2`$; b) Central values for $`ϵ`$ and $`1\sigma `$ upper bound $`r_\pm ^2=1.78`$; and c) Central value for $`ϵ`$ and $`1\sigma `$ lower bound $`r_\pm ^2=0.88`$. The bounds with $`|\mathrm{cos}\gamma |1`$ for a), b) and c) are indicated by the curves (a1, a2), (b) and (c1, c2), respectively. For cases a) and c) there are two allowed regions, the regions below (a1, c1) and the regions above (a2, c2). For case b) the allowed range is below (b). One also has $`(1\mathrm{cos}^2\gamma )[1({\displaystyle \frac{\mathrm{\Delta }}{2ϵ(\delta _{EW}\mathrm{cos}\gamma )}}ϵ\delta _{EW})^2]{\displaystyle \frac{(r_+^2r_{}^2)^2}{16ϵ^2}}=0.`$ (3) To have some idea about the details, I analyze the solutions of $`\mathrm{cos}\gamma `$ as a function of $`\delta _{EW}`$ for the three cases discussed earlier with a given value for the asymmetry $`A_{asy}=(r_+^2r_{}^2)/(r_+^2+r_{}^2)=15\%`$. In the SM for $`r_v=|V_{ub}|/|V_{cb}|=0.08`$ and $`|V_{us}|=0.2196`$, $`\delta _{EW}=0.81`$. The central values for $`r_\pm `$ and $`ϵ`$ prefers $`\mathrm{cos}\gamma <0`$ which is different from the result obtained in Ref. by fitting other data. The parameter $`\delta _{EW}`$ is sensitive to new physics in the tree and electroweak sectors. Model i) has large corrections to the tree level contributions. However, the contribution is proportional to the sum of the coefficients of operators $`O_{1,2}^{u(s)}(R)`$ which is zero in Model i). The above method does not provide information about new physics due to Model i). This method would not provide information about new physics due to Model ii) neither because the gluonic dipole interaction transforms as $`\overline{3}`$ which does not affect $`\delta _{EW}`$. Model iii) can have large effect on $`\delta _{EW}`$. In this model $`\delta _{EW}=0.81(1+4.33\mathrm{\Delta }g_1^Z)`$ which can vary in the range $`0.401.25`$. For case a), in the SM $`\mathrm{cos}\gamma <0.18`$ which is inconsistent with $`\mathrm{cos}\gamma _{best}0.5`$ from other fit . In Model iii) $`\mathrm{cos}\gamma `$ can be consistent with $`\mathrm{cos}\gamma _{best}`$. For case b), $`\mathrm{cos}\gamma `$ is less than zero in both the SM and Model iii). If this is indeed the case, other types of new physics is needed. For case c) $`\mathrm{cos}\gamma `$ can be close to $`\mathrm{cos}\gamma _{best}`$ for both the SM and Model iii). 5. 
Conclusion From the discussions in the previous sections, it is clear that using $`B_d\to J/\psi K_S`$, $`J/\psi K_S\pi ^0`$, $`B_d\to \pi ^+K^{-},\pi ^+\pi ^{-}`$ and $`B^{-}\to \pi ^0K^{-},\pi ^{-}\overline{K}^0,\pi ^0\pi ^{-}`$, important information about the Standard Model and models beyond, free from uncertainties in hadronic physics, can be obtained. These analyses should be carried out at B factories. Acknowledgements This work was partially supported by the National Science Council of R.O.C. under grant number NSC 89-2112-M-002-016. I thank Deshpande, Hou, Hsueh and Shi for collaborations on the materials presented in this talk.
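As a rough numerical cross-check of the bounds discussed in section 4 — a sketch added here, not part of the talk — the following Python fragment evaluates the first bound of equation (2) for $`\mathrm{\Delta }>0`$, taking $`ϵ=0.21`$, $`(r_+^2+r_{}^2)/2\simeq r_\pm ^2`$ at its central and $`+1\sigma `$ values, and $`\delta _{EW}=0.81`$ in the SM or $`\delta _{EW}=0.81(1+4.33\mathrm{\Delta }g_1^Z)`$ in Model iii).

```python
def cos_gamma_upper_bound(r2, eps, d_ew):
    """First bound of Eq. (2): for Delta > 0, one allowed region is
    cos(gamma) <= delta_EW - Delta / (2 eps (1 + eps delta_EW))."""
    delta = r2 - 1.0 - eps**2 * (1.0 - d_ew**2)
    return d_ew - delta / (2.0 * eps * (1.0 + eps * d_ew))

eps = 0.21      # from f_K/f_pi and the measured B -> pi pi / pi K branching ratios

# Cases a) and b) of the text, evaluated at the SM value delta_EW = 0.81
for label, r2 in (("a) central  (r2 = 1.33)", 1.33), ("b) +1 sigma (r2 = 1.78)", 1.78)):
    print("%s : cos(gamma) <= %+.2f" % (label, cos_gamma_upper_bound(r2, eps, 0.81)))

# Model iii): delta_EW = 0.81 (1 + 4.33 Delta g_1^Z) over the LEP-allowed range
for dg1z in (-0.113, 0.0, 0.126):
    d_ew = 0.81 * (1.0 + 4.33 * dg1z)
    print("Delta g_1^Z = %+.3f (delta_EW = %.2f): cos(gamma) <= %+.2f"
          % (dg1z, d_ew, cos_gamma_upper_bound(1.33, eps, d_ew)))
```

The output reproduces the $`\mathrm{cos}\gamma <0.18`$ quoted for case a) in the SM and shows the bound relaxing to about $`0.58`$ at the upper end of the Model iii) range, compatible with $`\mathrm{cos}\gamma _{best}0.5`$; the $`\mathrm{\Delta }<0`$ branch relevant to case c) requires the sign-reversed inequalities and is not tabulated here.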
no-problem/0003/astro-ph0003369.html
ar5iv
text
# Additional Information from Astrometric Gravitational Microlensing Observations ## 1. Introduction When a source is microlensed, it is split into two images. The flux sum of the individual images is greater than that of the unlensed source, and thus the source becomes brighter during the event. The sizes and brightnesses of the individual images change as the lens-source separation changes due to their transverse motion. Therefore, microlensing events can be detected either by photometrically monitoring the source brightness changes or by directly imaging the two separated images. However, direct imaging of the separate images is impossible with current instruments due to their limited resolution. As a result, current microlensing observations have been and are being carried out only by using the photometric method (Aubourg et al. 1993; Alcock et al. 1993; Udalski et al. 1993; Alard & Guibert 1997). However, if an event is astrometrically observed by using the planned high-precision interferometers on space-based platforms, e.g. the Space Interferometry Mission (SIM), and ground-based interferometers soon available on 8–10 m class telescopes, e.g. the Keck and the Very Large Telescope, one can measure the shift of the source star image centroid caused by microlensing. The astrometric centroid shift vector as measured with respect to the position of the unlensed source is related to the lens parameters by $$\delta \mathbf{\theta }_{c,0}=\frac{\theta _\mathrm{E}}{u^2+2}\left[\left(\frac{t-t_0}{t_\mathrm{E}}\right)\widehat{𝐱}+\beta \widehat{𝐲}\right],$$ (1) where $`\theta _\mathrm{E}`$ is the angular Einstein ring radius, $`t_\mathrm{E}`$ is the time required for the source to cross $`\theta _\mathrm{E}`$ (Einstein time scale), $`t_0`$ is the time of the closest lens-source approach (and thus the time of maximum amplification), $`\beta `$ is the separation at this moment (i.e. impact parameter), and $`u=[\beta ^2+(t-t_0)^2/t_\mathrm{E}^2]^{1/2}`$ is the instantaneous lens-source separation in units of $`\theta _\mathrm{E}`$. The notations $`\widehat{𝐱}`$ and $`\widehat{𝐲}`$ represent the unit vectors parallel and normal to the lens-source proper motion. If one defines $`x=\delta \theta _{c,x}`$ and $`y=\delta \theta _{c,y}-\beta \theta _\mathrm{E}/[2(\beta ^2+2)]`$, equation (1) becomes $$x^2+\frac{y^2}{q^2}=a^2,$$ (2) where $$a=\frac{\theta _\mathrm{E}}{2(\beta ^2+2)^{1/2}},$$ (3) and $$q=\frac{\beta }{(\beta ^2+2)^{1/2}}.$$ (4) Therefore, during the event the image centroid traces out an elliptical trajectory (hereafter the astrometric ellipse) with a semi-major axis $`a`$ and an axis ratio $`q`$. In Figure 1, we present astrometric ellipses for several example microlensing events with various lens-source impact parameters. The greatest importance of astrometric microlensing observation is that one can determine $`\theta _\mathrm{E}`$ from the observed astrometric ellipse (Høg, Novikov & Polarev 1995; Walker 1995; Paczyński 1998; Boden, Shao, & Van Buren 1998). This is because the size (i.e. semi-major axis) of the astrometric ellipse is directly proportional to $`\theta _\mathrm{E}`$ \[see equation (3)\]. Once $`\theta _\mathrm{E}`$ is determined, the lens proper motion is obtained as $`\mu =\theta _\mathrm{E}/t_\mathrm{E}`$ with the independently determined $`t_\mathrm{E}`$ from the light curve. While the photometrically determined $`t_\mathrm{E}`$ depends on the three physical lens parameters of the lens mass ($`M`$), location ($`D_{ol}`$), and transverse motion ($`v`$), the astrometrically determined $`\mu `$ depends only on the two parameters $`M`$ and $`D_{ol}`$.
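As a concrete illustration (a sketch with hypothetical event parameters, not taken from any observed event), the following Python fragment evaluates the centroid shift of equation (1), verifies numerically that it traces the ellipse of equations (2)–(4), and forms the proper motion $`\mu =\theta _\mathrm{E}/t_\mathrm{E}`$ that such a measurement would deliver.

```python
import numpy as np

# Hypothetical event parameters: Einstein radius [mas], impact parameter in
# units of theta_E, Einstein time scale [days], time of closest approach [days]
theta_E, beta, t_E, t0 = 1.0, 0.5, 40.0, 0.0

t = np.linspace(t0 - 10 * t_E, t0 + 10 * t_E, 2001)
tau = (t - t0) / t_E
u2 = tau**2 + beta**2                     # squared lens-source separation

# Centroid shift of Eq. (1): components along (x) and normal to (y) the motion
dth_x = theta_E * tau / (u2 + 2.0)
dth_y = theta_E * beta / (u2 + 2.0)

# The shifted coordinates trace the astrometric ellipse of Eqs. (2)-(4)
a = theta_E / (2.0 * np.sqrt(beta**2 + 2.0))
q = beta / np.sqrt(beta**2 + 2.0)
x = dth_x
y = dth_y - beta * theta_E / (2.0 * (beta**2 + 2.0))
print("max |x^2 + y^2/q^2 - a^2| =", np.abs(x**2 + y**2 / q**2 - a**2).max())  # ~ 0
print("semi-major axis a = %.4f mas, axis ratio q = %.4f" % (a, q))

# Reading a (hence theta_E) off the ellipse and t_E off the light curve gives
# the lens-source proper motion mu = theta_E / t_E
print("proper motion mu = %.4f mas/day" % (theta_E / t_E))
```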
Therefore, by measuring $`\mu `$ one can significantly better constrain the nature of lens matter. However, we note that to completely resolve the lens parameter degeneracy, it is still required to additionally determine the lens parallax (see more details in § 4). In this proceeding, we demonstrate that, besides this original usage, astrometric microlensing observations can additionally be used to obtain various other important information about lenses. First, we show that the lens brightness can be determined with astrometric observations, enabling one to know whether the event is caused by a bright star or a dark lens (§ 2). Second, we demonstrate that additional astrometric microlensing observations allow one to uniquely determine the binary lens parameters (§ 3). Finally, we propose two astrometric methods that can uniquely determine the lens parallax, with which one can completely break the lens parameter degeneracy along with the measured proper motion (§ 4). ## 2. Lens Brightness Determination If an event is caused by a bright lens (i.e. star), the centroid shift trajectory is distorted by the brightness of the lens. The lens brightness affects the centroid shift trajectory in two ways. First, the lens pulls the image centroid further toward the lens position. Second, the bright lens changes the reference point of the centroid shift measurements from the position of the unlensed source to a point between the source and the lens. By considering these two effects of the bright lens, the resulting centroid shift vector is computed as $$\delta \mathbf{\theta }_c=\frac{1+f_L+f_L[(u^2+2)-u(u^2+4)^{1/2}]}{(1+f_L)[1+f_Lu(u^2+4)^{1/2}/(u^2+2)]}\delta \mathbf{\theta }_{c,0},$$ (5) where $`f_L`$ is the flux ratio between the lens and the source star. In the left part of Figure 2, we present the trajectories of astrometric centroid shifts for events caused by bright lenses with various brightnesses. From the figure, one finds that the trajectories are also ellipses like those of dark lens events. From the viewpoint of identifying bright lenses from the distorted trajectories, this is bad news because one cannot identify whether the event is caused by a bright lens or not just from the shape of the trajectory (Jeong, Han, & Park 1999). One also finds that as the lens becomes brighter, the observed astrometric ellipse becomes rounder and smaller (measured by $`a`$). Fortunately, identification of bright lenses is possible by measuring the angular speed ($`\omega `$) of the image centroid motion around the unlensed source position (Han & Jeong 1999). In the left part of Figure 2, we present the position vectors (arrows with straight lines) of the image centroid at different times during events for both the dark and bright lens events. From the figure, one finds that the position vector at a given moment points in the same direction for both the dark and bright lens events, implying that $`\omega `$ is the same regardless of the lens brightness. The angular speed does not depend on the lens brightness because the lens always lies on the line connecting the two images, and thus the additional shift caused by the bright lens occurs along this line. As a result, although the amount of shift changes due to the lens brightness, the direction of the shift does not change.
Since the angular speed is related to the lensing parameters ($`\beta ,t_\mathrm{E},t_0`$) by $$\omega (t)=\frac{\beta t_\mathrm{E}}{(t-t_0)^2+\beta ^2t_\mathrm{E}^2},$$ (6) these parameters can be determined from the observed angular speed curve. Note that these parameters are the same regardless of the lens brightness because the angular speed curve is not affected by the lens brightness. By contrast, the impact parameter determined from the shape of the observed centroid shift trajectory differs from the true value because the shape of the astrometric ellipse for a bright lens event differs from that of a dark lens event. Then, if an event is caused by a bright lens, the impact parameter determined from the observed centroid shift trajectory, $`\beta `$, will differ from that determined from the angular speed curve, $`\beta _0`$. Therefore, by comparing $`\beta `$ and $`\beta _0`$, one can identify the bright lens and measure its flux. In the right part of Figure 2, we present $`\delta \beta =\beta -\beta _0`$ as a function of lens-source brightness difference in magnitudes. ## 3. Resolving Binary-Lens Parameter Degeneracy If an event is caused by a binary lens, the resulting light curve deviates from that of a single lens event. Detecting binary lens events is important because one can determine binary lens parameters such as the mass ratio ($`q`$) and separation ($`\ell `$). These parameters are determined by fitting model light curves to the observed one. For many cases of binary lens events, however, it is difficult to uniquely determine the binary lens parameters with the photometrically constructed light curves alone. In Figure 3, we illustrate this ambiguity of the photometric binary lens fit. In the left part of the figure, we present the observed light curve of the binary lens event OGLE-7 (dots with error bars, Udalski et al. 1994) and several example model light curves (solid curves) obtained from the fit to the observed light curve by Dominik (1999). In the right part of the figure, we also present the binary lens system geometries for the individual solutions responsible for the model light curves. The binary lens parameters ($`\ell `$, $`q`$, and $`t_\mathrm{E}`$) for each model are marked in the corresponding panel. From the figure, one finds that despite the dramatic differences in the binary lens parameters between different solutions, the resulting light curves fit the observed light curve very well, implying that unique determination of lens parameters is difficult by using the photometrically measured light curve alone. However, the binary lens parameter degeneracy can be lifted if events are additionally observed astrometrically (Han, Chun, & Chang 1999).
Resolving Parallax Degeneracy Although the astrometrically determined $`\mu `$ better constrains the lens parameters than the photometrically determined $`t_\mathrm{E}`$ does, $`\mu `$ still results from the combination of the lens mass and location, and thus the lens parameter degeneracy is not completely resolved. To completely resolve the lens parameter degeneracy, it is required to determine the transverse velocity projected on the source plane ($`\stackrel{~}{v}`$, hereafter simply the projected speed). Determination of $`\stackrel{~}{v}`$ is possible by measuring the lens parallax $`\mathrm{\Delta }u`$ from photometric observations of the source light variations from two different locations, one from the ground and the other from a helio-centric satellite (Gould 1994, 1995). Once both $`\mu `$ and $`\stackrel{~}{v}`$ are determined, the individual lens parameters are determined by $`M=\left({\displaystyle \frac{c^2}{4G}}\right)t_\mathrm{E}^2\stackrel{~}{v}\mu ,`$ (7) $`D_{ol}={\displaystyle \frac{D_{os}}{\mu D_{os}/\stackrel{~}{v}+1}},`$ (8) $`v=\left[\stackrel{~}{v}^{-1}+(\mu D_{os})^{-1}\right]^{-1}.`$ (9) However, the elegant idea of lens parallax measurements proposed to resolve the lens parameter degeneracy suffers from its own degeneracy. The parallax degeneracy is illustrated in Figure 5. In the upper panel, we present two light curves of an event observed from the Earth and the satellite. Presented in the middle panels are the two possible lens system geometries that can produce the light curves in the upper panel. From the figure, one finds that depending on whether the source trajectories as seen from the Earth and the satellite are on the same or opposite sides with respect to the lens, there can be two possible values of $`\mathrm{\Delta }u`$. Astrometric microlensing observations are useful in resolving the degeneracy in parallax determination (Han & Kim 2000). The first method is provided by simultaneous astrometric observations from the ground and the satellite instead of photometric observations. Note that the SIM will have a heliocentric orbit, and thus can be used for this purpose. In the lower panels of Figure 5, we present the two sets of astrometric ellipses as seen from the Earth and the satellite that are expected from the corresponding two sets of source trajectories in the middle panels. One finds that these two sets of astrometric ellipses have opposite orientations, and thus can be easily distinguished from one another. The parallax degeneracy can also be resolved if the event is astrometrically observed on one site and photometrically observed on a different site, instead of simultaneous astrometric observations from the ground and the satellite. This is possible because astrometric observations allow one to determine the lens-source proper motion vector $`\mathbf{\mu }`$. Then with the known Earth-satellite separation vector, which is parallel to $`\mathrm{\Delta }\mathbf{u}`$, one can uniquely determine the angle between $`\mathbf{\mu }`$ and $`\mathrm{\Delta }\mathbf{u}`$, allowing one to select the right solution of $`\mathrm{\Delta }u`$.
With additional information from astrometric observations one can resolve the ambiguity of the photometric binary lens fit and thus uniquely determine the binary lens parameters. 3. With application of the two proposed astrometric methods, the degeneracy in the photometric lens parallax determination can be resolved, allowing one to completely break the lens parameter degeneracy along with simultaneously determined lens proper motion. ## References Alard, C., & Guibert, J. 1997, A&A, 326, 1 Alcock, C., et al. 1993, Nature, 365, 621 Aubourg, E., et al. 1993, Nature, 365, 623 Boden, A. F., Shao, M., & Van Buren, D. 1998, ApJ, 502, 538 Dominik, M. 1999, A&A, 341, 943 Gould, A. 1994, ApJ, 421, L75 Gould, A. 1995, ApJ, 441, L21 Han, C., & Kim, H.-I. 2000, ApJ, 528, 687 Han, C., & Jeong, Y. 1999, MNRAS, 309, 404 Han, C., Chun, M.-S., & Chang, K. 1999, ApJ, 526, 405 Høg, E., Novikov, I. D., & Polarev, A. G. 1995, A&A, 294, 287 Jeong, Y., Han, C., & Park, S.-H. 1999, ApJ, 511, 569 Paczyński, B. 1998, ApJ, 494, L23 Udalski, A., et al. 1993, Acta Astron., 43, 289 Udalski, A., et al. 1994, ApJ, 436, L103 Walker, M. A. 1995, ApJ, 453, 37
no-problem/0003/astro-ph0003255.html
ar5iv
text
# Surface trapping and leakage of low-frequency 𝒈-modes in rotating early-type stars – I. Qualitative analysis ## 1 Introduction The self-excitation of global non-radial pulsation modes in a star is a prime example of positive feedback, whereby small oscillatory perturbations grow in amplitude via the efficient conversion of heat into vibrational energy by a suitable driving mechanism (see, e.g., Unno et al. 1989 for a comprehensive review of the topic). A fundamental ingredient in the feedback loop is that the oscillations must be trapped in some part of the stellar interior, so that energy does not leak from the system faster than it can be generated. Such trapping can occur when a pair of evanescent regions, where traveling waves cannot be supported, enclose a propagative region; waves are repeatedly reflected at the two evanescent boundaries, and the resulting superposition leads to a standing wave of the normal-mode type. For waves excited in stellar envelopes, it is common for the surface layers to serve as one of the evanescent regions required for the formation of a trapping zone. Ando & Osaki demonstrated that such a situation occurs in the Sun, where low-order $`p`$-modes are trapped beneath the photosphere, supporting a model first put forward by Ulrich to explain the five-minute solar oscillation . However, the trapping is only effective for modes with frequencies below some cut-off; higher-frequency modes cannot be reflected at the photosphere, and will leak through the stellar surface. This issue was addressed in detail by Ando & Osaki , who found that, although leakage does occur through the solar photosphere at frequencies above the cut-off, some waves can subsequently be reflected at the chromosphere-corona interface, and standing waves are able to form. More recently, Balmforth & Gough suggested that such coronal reflection can explain apparent observations of high-frequency chromospheric standing waves , although debate concerning this interpretation still continues . Pulsation in massive, early stars (types O and B) is qualitatively quite different from the solar case, due to the gross structural differences between the two stellar classes. However, it is still subject to the same wave trapping requirements, since the underlying physics remains the same. Shibahashi & Osaki , in their study of $`g`$-modes trapped within the hydrogen-burning shell of evolved massive stars, found that high-frequency (low-order) modes can tunnel through an evanescent region separating core and envelope, and thence escape from the star. A complementary situation was discussed by Osaki when studying pulsation in Cepheid-type stars; non-radial $`p`$-modes trapped within the envelope were able to tunnel through an evanescent region into the core, where they were damped rapidly without reflection at the centre. In both cases, the appropriate region of the star was modeled as an isolating oscillating unit with the inclusion of wave leakage at one boundary. The leakage was found to stabilize some modes which would otherwise have been self-excited, due to the associated loss of vibrational energy from the star. Shibahashi analyzed wave trapping in an idealized stellar model (corresponding to an evolved massive star) using an asymptotic method, and discussed in some depth these two cases; in addition, he considered the situation where low-frequency (high-order) $`g`$-modes are able to tunnel through an evanescent region in the envelope and thence escape through the stellar surface. 
More recently, however, relatively little attention has been paid to wave trapping issues at the surface of early-type stars; in particular, stability analyses, based on the new opacity calculations of Rogers & Iglesias and Seaton, have assumed that the Lagrangian pressure perturbation $`\delta p`$ tends to zero or some limiting value at the stellar surface. Such an assumption corresponds to the ab initio condition that waves incident from the interior are totally reflected at the stellar surface; the possibility of leakage is thereby disregarded, and no consideration of trapping issues is undertaken. This is the first in a short series of papers studying the surface trapping of low-frequency $`g`$-modes in early-type stars, in an attempt to re-open discussion of, and investigation into, this important area. Much of the work is conceptually developed from that of Ando & Osaki; however, in light of recent research into the influence of rotation on low-frequency modes, and due to the fact that significant rotation appears to be commonplace in O- and B-star populations, the theory is updated to include rotational effects. The current paper serves as an introduction, covering the more qualitative, general aspects of the study; subsequent papers will investigate various issues arising from this paper in greater depth. The following section reviews the pulsation equations appropriate for low-frequency $`g`$-modes in rotating stars, whilst section 3 derives the dispersion relation corresponding to these equations. The trapping of waves described by this dispersion relation is examined in section 4 with the aid of propagation diagrams, and the effect of rotation on the eigenfrequencies of individual modes is discussed in section 5. The findings are discussed in section 6, and summarized in section 7. ## 2 Pulsation equations The dynamics of pulsation in a rotating star differ from the non-rotating case due to the influence of the fictitious Coriolis and centrifugal forces, which arise as a consequence of the non-inertial nature of a rotating frame of reference. The centrifugal force breaks the equilibrium symmetry of the star, so that the level (equipotential) surfaces become oblate spheroids rather than the usual concentric spheres. Such a change in stellar configuration will manifest itself implicitly in the pulsation equations, through modifications to the equilibrium variables of state. However, in the case of uniform (solid-body) rotation, no explicit modification of the pulsation equations occurs due to this centrifugal distortion. In contrast, the Coriolis force enters the pulsation equations explicitly, through the introduction of a velocity-dependent term in the hydrodynamical momentum equation. This term can lead to the significant modification of individual pulsation modes, and is also responsible for the existence of new classes of wave-like solutions which are not found in non-rotating systems. Simultaneous treatment of both forces within a pulsation framework is fraught with difficulty. Some progress towards this goal has been made (e.g., Lee 1993; Lee & Baraffe 1995), but attempts remain frustrated by the fact that the centrifugal distortion cannot really be considered as an a posteriori modification to the structure of a given star, but must be treated self-consistently with the evolution of the star (see, e.g., Meynet & Maeder 1997). However, in a number of limiting cases, certain approximations can be made which simplify the problem significantly.
In the case of the low-frequency modes, the Coriolis force will dominate the centrifugal force, and the effects of the latter on the equilibrium configuration may be disregarded if the rotation is not too severe. Furthermore, the so-called ‘traditional approximation’ may be employed, whereby the horizontal component of the angular frequency vector of rotation $`𝛀`$ is neglected. This approximation is most appropriate for low-frequency pulsation modes in the outer regions of a star, and therefore can be considered useful in the present study. In combination with the Cowling and adiabatic approximations, where the perturbations to the gravitational potential and specific entropy, respectively, are neglected, the traditional approximation renders the pulsation equations separable in the spherical polar co-ordinates $`(r,\theta ,\varphi )`$. Solutions for the dependent variables $`\xi _r`$ and $`p^{\prime }`$, the radial fluid displacement and Eulerian pressure perturbation, respectively, may then be written in the form $$\xi _r=\xi _r(r)\mathrm{\Theta }_l^m(\mu ;\nu )\mathrm{exp}[\mathrm{i}(m\varphi +\omega t)],$$ (1) $$p^{\prime }=p^{\prime }(r)\mathrm{\Theta }_l^m(\mu ;\nu )\mathrm{exp}[\mathrm{i}(m\varphi +\omega t)],$$ (2) where $`\mu \equiv \mathrm{cos}\theta `$ is the normalized latitudinal distance from the equatorial plane, $`\omega `$ is the pulsation frequency in the co-rotating reference frame, and $`\mathrm{\Theta }_l^m(\mu ;\nu )`$ is a Hough function. These Hough functions are the eigensolutions of Laplace’s tidal equation, and form a one-parameter family in $`\nu \equiv 2\mathrm{\Omega }/\omega `$, where $`\mathrm{\Omega }\equiv |𝛀|`$ is the angular frequency of rotation. The integer indices $`l`$ and $`m`$, with $`l\ge 0`$ and $`|m|\le l`$, correspond to the harmonic degree and azimuthal order, respectively, of the associated Legendre polynomials $`P_l^m(\mu )`$ to which the Hough functions reduce in the non-rotating limit, so that $`\mathrm{\Theta }_l^m(\mu ;0)\propto P_l^m(\mu )`$. This indexing scheme, based on the one adopted by Lee & Saio, is less general than that of Lee & Saio, in that it does not encompass the Hough functions corresponding to Rossby and oscillatory convective modes (which do not have non-rotating counterparts); however, such modes are not considered herein, and the current scheme is sufficient. Note that $`\omega `$ is considered to be positive throughout the following discussion, and, therefore, prograde and retrograde modes correspond to negative and positive values of $`m`$, respectively. The radial dependence of the solutions (1, 2) is described by the eigenfunctions $`\xi _r(r)`$ and $`p^{\prime }(r)`$, which are governed by a pair of coupled first-order differential equations. In order to facilitate subsequent manipulation, it is useful to write these equations in the form $$\frac{1}{r^2}\frac{\mathrm{d}}{\mathrm{d}r}\left(r^2\xi _r\right)-\frac{g}{c_\mathrm{s}^2}\xi _r=\frac{1}{\omega ^2c_\mathrm{s}^2}\left(\frac{\lambda _{lm}c_\mathrm{s}^2}{r^2}-\omega ^2\right)\frac{p^{\prime }}{\rho }$$ (3) and $$\frac{1}{\rho }\frac{\mathrm{d}p^{\prime }}{\mathrm{d}r}+\frac{g}{c_\mathrm{s}^2}\frac{p^{\prime }}{\rho }=(\omega ^2-N^2)\xi _r,$$ (4) where $`\rho `$, $`c_\mathrm{s}`$, $`g`$ and $`N`$ are the local equilibrium values of the density, adiabatic sound speed, gravitational acceleration and Brunt-Väisälä frequency, respectively. Note that $`\xi _r`$ and $`p^{\prime }`$ are now taken to be functions of $`r`$ alone in both these and subsequent equations, unless explicitly stated.
The quantity $`\lambda _{lm}`$ appearing in equation (3), which arises as a separation constant when solutions of the form (1, 2) are sought, is the eigenvalue of Laplace’s tidal equation corresponding to the appropriate Hough function $`\mathrm{\Theta }_l^m(\mu ;\nu )`$. In the limit $`\nu =0`$, this eigenvalue is equal to $`l(l+1)`$, and equations (3, 4) are then identical to those appropriate for a non-rotating star (e.g., Unno et al. 1989, §15.1). The utility of the traditional approximation thus lies in the fact that much of the formalism of the non-rotating case may also be applied to rotating stars with the simple replacement of $`l(l+1)`$ by $`\lambda _{lm}`$, a result first found by Lee & Saio \[1987a\]. Global solution of equations (3, 4) must typically be approached numerically; however, an examination of the local character of the solutions suffices in the present qualitative context. This character is governed by the dispersion relation applicable to the equations, discussed in the following section. ## 3 Dispersion relation To derive a local dispersion relation for the pulsation equations (3, 4), it is useful first to place the equations in a canonical form similar to that introduced by Osaki for the non-rotating case. By defining the two new eigenfunctions, $$\stackrel{~}{\xi }=r^2\xi _r\mathrm{exp}\left(-\int _0^r\frac{g}{c_\mathrm{s}^2}\,dr\right),$$ (5) $$\stackrel{~}{\eta }=\frac{p^{\prime }}{\rho }\mathrm{exp}\left(-\int _0^r\frac{N^2}{g}\,dr\right),$$ (6) the left-hand sides of both pulsation equations may be written as a single derivative, and the canonical form is found as $$\frac{\mathrm{d}\stackrel{~}{\xi }}{\mathrm{d}r}=h(r)\frac{r^2}{c_\mathrm{s}^2\omega ^2}\left(\frac{\lambda _{lm}c_\mathrm{s}^2}{r^2}-\omega ^2\right)\stackrel{~}{\eta },$$ (7) $$\frac{\mathrm{d}\stackrel{~}{\eta }}{\mathrm{d}r}=\frac{1}{r^2h(r)}\left(\omega ^2-N^2\right)\stackrel{~}{\xi },$$ (8) where $$h(r)=\mathrm{exp}\left[\int _0^r\left(\frac{N^2}{g}-\frac{g}{c_\mathrm{s}^2}\right)dr\right].$$ (9) Note that $`h(r)`$ is always positive, so that the original eigenfunctions $`\xi _r`$ and $`p^{\prime }`$ everywhere share the same sign as $`\stackrel{~}{\xi }`$ and $`\stackrel{~}{\eta }`$, respectively. Qualitative solution of these canonical equations is accomplished using the same method as Osaki, namely, by assuming that the coefficients on the right-hand sides are independent of $`r`$. Such an assumption will be valid if the characteristic variation scale of the solutions is much smaller than that of the coefficients. Then, local solutions of the form $$\stackrel{~}{\xi },\stackrel{~}{\eta }\propto \mathrm{exp}(\mathrm{i}k_rr),$$ (10) lead to a dispersion relation for the radial wavenumber $`k_r`$, $$k_r^2=\frac{1}{c_\mathrm{s}^2\omega ^2}\left(\frac{\lambda _{lm}c_\mathrm{s}^2}{r^2}-\omega ^2\right)\left(N^2-\omega ^2\right).$$ (11) By introducing the effective transverse wavenumber $`k_{\mathrm{t}r}`$, defined by Bildsten et al. as $$k_{\mathrm{t}r}^2=\frac{\lambda _{lm}}{r^2},$$ (12) the dispersion relation may be re-written in the more useful form $$k_r^2c_\mathrm{s}^2\omega ^2=\left(k_{\mathrm{t}r}^2c_\mathrm{s}^2-\omega ^2\right)\left(N^2-\omega ^2\right).$$ (13) The value of $`k_r`$ for given $`\omega `$ and $`r`$, calculated using this expression, determines the local character of waves at the appropriate frequency and location within the star.
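To make the use of this dispersion relation concrete, the short sketch below (in Python) evaluates the sign of $`k_r^2`$ at a single point and classifies the local wave character in the manner used for the propagation diagrams of section 4; the numerical profile values it is fed are illustrative placeholders only, not taken from any stellar model discussed here.

```python
import numpy as np

# Minimal sketch: classify the local wave character from the dispersion
# relation k_r^2 c_s^2 omega^2 = (k_t^2 c_s^2 - omega^2)(N^2 - omega^2).
# All numerical values below are illustrative placeholders.

def k_t_squared(lam, r):
    """Effective transverse wavenumber squared, k_t^2 = lambda_lm / r^2 (eq. 12)."""
    return lam / r**2

def classify(omega, r, N, c_s, lam):
    """Return the local character of a wave of frequency omega at radius r."""
    term_t = k_t_squared(lam, r) * c_s**2 - omega**2
    term_g = N**2 - omega**2
    k_r_sq = term_t * term_g / (c_s**2 * omega**2)
    if k_r_sq <= 0.0:
        return "evanescent"
    # both factors positive -> g-mode character; both negative -> p-mode character
    return "g-propagative" if term_t > 0.0 else "p-propagative"

lam = 4 * (4 + 1)                 # non-rotating l = 4 value of lambda_lm
r, N, c_s = 3.0e9, 1.0e-3, 5.0e5  # toy radius, buoyancy frequency, sound speed
for omega in (1.0e-4, 2.0e-3):
    print(omega, classify(omega, r, N, c_s, lam))
```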
Inspection of equation (10) shows that real values ($`k_r^2>0`$) correspond to propagative regions, where the waves oscillate spatially, whilst imaginary values ($`k_r^2<0`$) correspond to evanescent regions, where the waves grow or decay exponentially in amplitude. The $`k_r^2=0`$ curves in the $`(r,\omega ^2)`$ plane, defined by the roots of the right-hand side of the dispersion relation, separate these two types of region, and therefore correspond to the reflective boundaries discussed in the introduction. These boundaries, of fundamental importance when trapping zones are considered, are examined in the following section with the aid of propagation diagrams. Note that this $`k_r^2=0`$ definition of the reflective boundaries formally violates the assumption used previously to derive the solutions (10); however, this violation will have little effect on the positions of the boundaries, and is not important at a qualitative level. The remainder of this section is left to a discussion of the effective transverse wavenumber $`k_{\mathrm{t}r}`$, since, as will be demonstrated subsequently, this quantity can be pivotal in determining the trapping conditions at the stellar surface. In the case of plane waves in an infinite, plane-parallel, stratified medium, $`k_{\mathrm{t}r}`$ may be regarded as a free parameter; however, in the case of a spherical configuration, it is constrained to assume values permitted by equation (12). These constraints arise from the transverse boundary conditions applicable to waves propagating in horizontal (i.e., non-radial) directions; in a non-rotating star, they are equivalent to the requirement that solutions are invariant under the periodic transformations $`\theta \to \theta +2\pi `$ and $`\varphi \to \varphi +2\pi `$, and lead to the familiar result (e.g., Unno et al. 1989) $$k_{\mathrm{t}r}^2=\frac{l(l+1)}{r^2}(\nu =0).$$ (14) When significant rotation is introduced, both of these requirements still hold, but an additional constraint in $`\theta `$ is introduced as a consequence of the variation of the Coriolis force with latitude. Such variation means that, for $`\nu >1`$, waves near the equator which are propagating in the latitudinal direction become evanescent when $`|\mu |>1/\nu `$; subsequent reflections lead to the trapping of these waves within the so-called ‘equatorial waveguide’. The resulting horizontally-standing waves, whose angular dependence is described by the Hough functions $`\mathrm{\Theta }_l^m(\mu ;\nu )`$, are oscillatory in latitude between the waveguide boundaries at $`\mu =\pm 1/\nu `$, and evanescent elsewhere. With increasing $`\nu `$, these boundaries converge towards the equator; for significant rotation, the constraints on $`k_{\mathrm{t}r}`$ therefore become dominated by the approximate requirement that an integer number of half-wavelengths in latitude fit between the waveguide boundaries. This requirement is manifest in the asymptotic expression for $`\lambda _{lm}`$ found by Bildsten et al., which, when substituted into equation (12), gives $$k_{\mathrm{t}r}^2=\frac{(2l_\mu -1)^2\nu ^2}{r^2}(\nu \gg 1),$$ (15) where $`l_\mu `$ is the number of latitudinal nodes exhibited by the appropriate Hough function between the waveguide boundaries. This latter quantity is independent of $`\nu `$ for prograde ($`m>0`$) and zonal ($`m=0`$) modes, whilst it increments by 2 as $`\nu `$ is increased beyond unity for retrograde ($`m<0`$) modes, due to the introduction of an additional pair of latitudinal nodes at $`\nu =1`$.
Furthermore, $`l_\mu =l-|m|`$ for $`\nu =0`$, since the associated Legendre polynomials $`P_l^m(\mu )`$ exhibit $`l-|m|`$ zeroes over $`-1<\mu <1`$, and $`\mathrm{\Theta }_l^m(\mu ;0)\propto P_l^m(\mu )`$. These properties mean that $`l_\mu `$ in the above asymptotic expression (15) may be written in terms of $`l`$ and $`m`$ as $$k_{\mathrm{t}r}^2=\frac{(2l-2|m|\pm 1)^2\nu ^2}{r^2}(\nu \gg 1),$$ (16) the plus sign being chosen for retrograde modes ($`m>0`$), and the minus sign for prograde and zonal modes ($`m\le 0`$). A comparison of this result with equation (14) indicates that the permitted values of $`k_{\mathrm{t}r}`$ for $`\nu \gg 1`$ can greatly exceed the corresponding ones in the non-rotating case, especially for small $`|m|`$. The notable exception to this discussion is the case of the prograde sectoral modes ($`m=-l`$), which in the limit $`\nu \gg 1`$ are transformed into equatorially-trapped Kelvin waves. Such Kelvin waves have an exponential latitudinal dependence at small $`\mu `$ described by $$\xi _r,p^{\prime }\propto \mathrm{exp}\left(-\frac{\mathrm{\Omega }^2\mu ^2r^2}{c_\mathrm{s}^2}\right),$$ (17) indicating that they should be considered evanescent in the latitudinal direction even at the equator. Therefore, the constraints on $`k_{\mathrm{t}r}`$ are dominated by the periodic boundary condition in $`\varphi `$; the $`\mathrm{exp}(\mathrm{i}m\varphi )`$ azimuthal dependence of the solutions (1, 2) then gives the transverse wavenumber for prograde sectoral modes as $$k_{\mathrm{t}r}^2=\frac{m^2}{r^2}(\nu \gg 1,m=-l),$$ (18) which can also be derived using the asymptotic expression for $`\lambda _{lm}`$ found by Bildsten et al. for these modes. ## 4 Wave trapping and leakage As was demonstrated in the preceding section, the character of waves within a star is determined by the local radial wavenumber, so that positive and negative values of $`k_r^2`$ can be identified with propagative and evanescent regions, respectively. An indispensable diagnostic tool for visualizing the location and extent of these regions, over a range of frequencies, is the propagation diagram introduced by Scuflaire, in which the $`(r,\omega ^2)`$ plane is divided into zones over which the sign of $`k_r^2`$ is constant. Figure 1 shows the propagation diagram for $`l=4`$ modes in a typical (non-rotating) early-type star; the logarithm of the temperature $`T`$ has been adopted as the abscissa, rather than the radius, to emphasize the outer regions of the star. Regions in the $`(\mathrm{log}T,\omega ^2)`$ plane where waves are propagative ($`k_r^2>0`$) are hatched, whilst evanescent regions ($`k_r^2<0`$) are blank. Values for the Brunt-Väisälä frequency $`N`$ and adiabatic sound speed $`c_\mathrm{s}`$ throughout the star, required for evaluating $`k_r^2`$ using the dispersion relation (13), have been taken from a $`7M_{\odot }`$ ZAMS stellar model, calculated by Loeffler, whose parameters are summarized in table 1; the model extends out to the photosphere at optical depth $`\tau =2/3`$, where the temperature $`T\approx T_{\mathrm{eff}}=21,000\mathrm{K}`$ corresponds to an early-B spectral type. Equation (14), which is appropriate in the non-rotating case, has been used to calculate $`k_{\mathrm{t}r}`$. In this figure, the type of hatching used to show propagative regions delineates between waves with $`p`$- and $`g`$-mode characters, the former occurring when both parenthetical terms on the right-hand side of the dispersion relation (13) are negative, and the latter when both terms are positive.
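The limiting forms of the transverse wavenumber collected above (eqs. 14, 16 and 18) are what enter diagrams such as fig. 1; gathered into one helper they might look as sketched below. This is only the asymptotic bookkeeping — the exact $`\lambda _{lm}`$ of equation (12) requires the full Hough-function calculation — and the sign convention (prograde modes have $`m<0`$) follows the text.

```python
# Sketch of the limiting forms of the effective transverse wavenumber k_t^2
# (eqs. 14, 16, 18); the general case would use the full Hough eigenvalue
# lambda_lm instead of these asymptotic expressions.

def k_t_squared_asymptotic(l, m, r, nu):
    """Asymptotic k_t^2; convention: omega > 0, prograde modes have m < 0."""
    if nu == 0.0:                    # non-rotating limit, eq. (14)
        return l * (l + 1) / r**2
    if nu <= 1.0:
        raise ValueError("asymptotic forms given only for nu = 0 or nu >> 1")
    if m == -l:                      # prograde sectoral (Kelvin) branch, eq. (18)
        return m**2 / r**2
    sign = 1 if m > 0 else -1        # retrograde (+) vs prograde/zonal (-), eq. (16)
    return (2*l - 2*abs(m) + sign)**2 * nu**2 / r**2

r = 1.0
print(k_t_squared_asymptotic(4, -1, r, 0.0))   # 20.0, i.e. l(l+1)
print(k_t_squared_asymptotic(4, -1, r, 5.0))   # (2*4 - 2 - 1)^2 * 25 = 625.0
print(k_t_squared_asymptotic(4, -4, r, 5.0))   # Kelvin branch: 16.0
```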
The division of the diagram into relatively distinct $`p`$- and $`g`$-mode propagation regions is characteristic of early-type stars. This division arises due to the fact that $`k_{\mathrm{t}r}^2c_\mathrm{s}^2`$ diverges at the origin (due to the $`1/r`$ dependence of $`k_{\mathrm{t}r}`$) and is relatively small at the surface, while $`N^2`$ is approximately zero in the convective core ($`\mathrm{log}T\gtrsim 7.3`$) and relatively large at the surface due to the steep stratification there. The prominent ‘well’ at $`\mathrm{log}T\approx 4.6`$ indicates the presence of a thin convective region ($`N^2<0`$) due to He ii ionization, whilst the smaller well at $`\mathrm{log}T\approx 5.3`$ is due to the metal opacity bump responsible for $`\kappa `$-mechanism pulsation in early-type stars. As mentioned in the previous section, the $`k_r^2=0`$ curves which separate propagative and evanescent regions correspond to the reflective boundaries required for wave trapping. Inspection of fig. 1 shows that, for $`g`$-modes with frequencies greater than the trapping cut-off $`\omega _t`$, where $`\omega _t^2\approx 1.9\times 10^{-9}\,\mathrm{rad}^2\,\mathrm{s}^{-2}`$ is shown in the figure as a horizontal dotted line, there exists an extensive trapping zone formed by a pair of reflective boundaries, one at the edge of the convective core and the other in the envelope at lower temperatures. In contrast, for modes with frequencies below $`\omega _t`$, waves are propagative even at the surface of the star, and the outer reflective boundary required for the formation of a trapping zone does not exist. Strictly speaking, the term ‘mode’ is not appropriate in such circumstances, since stationary waves will not be established by repeated complete reflection. However, all waves at frequencies below the cut-off are evanescent in the convective region at $`\mathrm{log}T\approx 4.6`$. This region, with a width of approximately 0.16 percent of the stellar radius, behaves like a partially-reflecting barrier to waves incident from the interior; some fraction of the waves will leak through the barrier and thence propagate unhindered to the surface, where they are lost from the star, whilst the remaining reflected fraction will contribute to the establishment of ‘somewhat-stationary’ waves interior to the barrier. Within the adiabatic approximation, these waves must decay exponentially in amplitude with time to compensate for the energy lost through leakage, but will still exhibit a discrete eigenfrequency spectrum. Shibahashi & Osaki, when considering a similar situation for high-frequency $`g`$-modes in evolved early-type stars, drew a useful analogy with virtual levels in the potential problem of quantum mechanics; therefore, it seems appropriate to refer to such partially-trapped waves as virtual modes. Whether virtual modes can actually be self-excited in a star depends on the balance between the input of vibrational energy from a suitable driving mechanism, and the loss of vibrational energy associated with the leakage; non-adiabatic calculations are required to answer such questions. The trapping cut-off frequency $`\omega _t`$, which separates the leaking virtual modes from the fully-trapped ‘traditional’ modes, is given by the smaller root of the dispersion relation (13) at the stellar surface, namely $$\omega _t^2=k_{\mathrm{t}r}^2c_\mathrm{s}^2|_{r=R},$$ (19) where $`R`$ is the stellar radius.
This expression demonstrates the pivotal rôle of the effective transverse wavenumber $`k_{\mathrm{t}r}`$, discussed at the end of the preceding section, in determining the trapping condition at the surface. In the non-rotating context, $`k_{\mathrm{t}r}`$ can be eliminated from this expression through use of equation (14) to give $$\omega _t^2=\frac{l(l+1)c_\mathrm{s}^2}{r^2}|_{r=R}$$ (20) for $`\mathrm{\Omega }=0`$. When the effects of rotation are included, the more general expression (12) for $`k_{\mathrm{t}r}`$ must be used in evaluating the sign of $`k_r^2`$ using the dispersion relation (13). However, propagation diagrams may be constructed and interpreted in exactly the same manner as the non-rotating case. Figure 2 shows the propagation diagram for the $`7M_{\odot }`$ stellar model considered previously, but with rotation included at an angular frequency $`\mathrm{\Omega }=8.04\times 10^{-5}\,\mathrm{rad}\,\mathrm{s}^{-1}`$, which is half of the critical rotation rate for the star, and corresponds to a period of 21.7 hours. The effects of the rotation on the equilibrium stellar structure have been neglected. Calculation of the eigenvalue $`\lambda _{lm}`$ in equation (12), for each frequency ordinate value in the $`(\mathrm{log}T,\omega ^2)`$ plane, was accomplished using Townsend’s implementation of the matrix eigenvalue method presented by Lee & Saio. This method corresponds to the spectral expansion of Hough functions in a truncated series of associated Legendre polynomials of the same azimuthal order $`m`$; 100 expansion terms were used throughout the calculations, a value deemed to provide sufficient accuracy since a similar calculation with 200 terms produced no numerical change in the results. An azimuthal order $`m=-1`$ was adopted, so fig. 2 should be taken as appropriate for modes with $`(l,m)=(4,-1)`$. Inspection of this figure shows that the trapping cut-off is significantly larger ($`\omega _t^2\approx 8.0\times 10^{-9}\,\mathrm{rad}^2\,\mathrm{s}^{-2}`$) than in the non-rotating case. This is a direct consequence of the influence of rotation on $`k_{\mathrm{t}r}`$; at low frequencies where $`\omega <2\mathrm{\Omega }`$, $`\nu >1`$, and $`k_{\mathrm{t}r}`$ can assume large values, as discussed in the preceding section. The appropriate expression for $`\omega _t`$ in rotating stars is given by $$\omega _t^2=\frac{\lambda _{lm}c_\mathrm{s}^2}{r^2}|_{r=R},$$ (21) although this should be regarded as formal, since it must be remembered that $`\lambda _{lm}`$ is itself a function of $`\omega `$ through its dependence on the parameter $`\nu `$. However, in the limit $`\mathrm{\Omega }^2\gg c_\mathrm{s}^2/r^2`$ (at the surface), this expression will have solutions corresponding to $`\nu \gg 1`$, and thus the asymptotic expressions (16,18) found previously may be used in the place of the general expression (19) for $`k_{\mathrm{t}r}`$. Solving the resulting equations for $`\omega _t`$ then gives $$\omega _t^2=\{\begin{array}{c}2\mathrm{\Omega }(2l-2|m|+1)c_\mathrm{s}/r\hfill \\ 2\mathrm{\Omega }(2l-2|m|-1)c_\mathrm{s}/r\hfill \\ m^2c_\mathrm{s}^2/r^2\hfill \end{array}|_{r=R}\begin{array}{c}(m>0)\hfill \\ (-l<m\le 0)\hfill \\ (m=-l)\hfill \end{array}$$ (22) for $`\mathrm{\Omega }^2\gg c_\mathrm{s}^2/r^2`$.
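Because $`\lambda _{lm}`$ depends on $`\nu =2\mathrm{\Omega }/\omega `$, equation (21) is an implicit relation for $`\omega _t`$. The sketch below solves it by bisection, using the $`\nu \gg 1`$ asymptotic form of $`\lambda _{lm}`$ as a stand-in for the full Hough eigenvalue; the angular frequency is the one adopted above, and the photospheric value of $`c_\mathrm{s}/r`$ is the one quoted in the next paragraph.

```python
# Sketch: solve the implicit cut-off relation of eq. (21) by bisection, with
# the nu >> 1 asymptotic lambda_lm ~ (2l - 2|m| -/+ 1)^2 nu^2 standing in for
# the full Hough eigenvalue. Parameter values follow the text.

def lam_asymptotic(nu, l, m):
    sign = 1 if m > 0 else -1
    return (2*l - 2*abs(m) + sign)**2 * nu**2

def cutoff_freq_sq(Omega, cs_over_r, l, m):
    F = lambda w: w**2 - lam_asymptotic(2.0*Omega/w, l, m) * cs_over_r**2
    lo, hi = 1.0e-8, 1.0e-2          # bracketing guesses in rad/s
    for _ in range(200):             # bisection on F(omega) = 0
        mid = 0.5 * (lo + hi)
        if F(lo) * F(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return (0.5 * (lo + hi))**2

Omega = 8.04e-5                      # rad/s, half the critical rate (see text)
cs_over_r = 9.76e-6                  # 1/s, photospheric c_s/r of the model
print(cutoff_freq_sq(Omega, cs_over_r, 4, -1))   # ~7.85e-9 rad^2 s^-2
```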
Applying the middle expression to the $`7M_{\odot }`$ model for $`\mathrm{\Omega }=8.04\times 10^{-5}\,\mathrm{rad}\,\mathrm{s}^{-1}`$, and $`c_\mathrm{s}/r=9.76\times 10^{-6}\,\mathrm{s}^{-1}`$ at the surface, leads to the asymptotic value $`\omega _t^2=7.85\times 10^{-9}\,\mathrm{rad}^2\,\mathrm{s}^{-2}`$ for $`(l,m)=(4,-1)`$ modes, which is in reasonably good agreement with the value $`\omega _t^2\approx 8.0\times 10^{-9}\,\mathrm{rad}^2\,\mathrm{s}^{-2}`$ shown in fig. 2. The above expressions, when compared with equation (20), demonstrate that the effect of rotation is to increase the trapping cut-off $`\omega _t`$ for all but the prograde sectoral modes; these latter modes will exhibit a smaller cut-off in rotating stars than in the non-rotating case, due to their transformation into Kelvin waves discussed previously. This result is interesting in light of anecdotal observational evidence favouring prograde sectoral modes as the source of periodic line-profile variations in rapidly-rotating early-type stars. If such evidence can be substantiated at a quantitative level, as has been done by Howarth et al. for the rapidly-rotating pulsators HD 93521 and HD 64760, then it can be suggested that the bias towards prograde sectoral modes is due to the suppression of other types of mode, which will have large values of $`\omega _t`$ at rapid rotation rates and therefore preferentially leak from the star without self-excitation. ## 5 Eigenfrequencies In addition to its influence on the trapping cut-off frequency $`\omega _t`$, rotation modifies the eigenfrequencies and eigenfunctions of individual modes through its influence on the positions of trapping boundaries; this can be anticipated from the appearance of $`\lambda _{lm}`$ in the pulsation equations (3, 4). To evaluate the modified eigenfrequencies at a qualitative level, the asymptotic technique developed by Shibahashi and Tassoul may be adapted using the traditional approximation to give expressions appropriate for rotating stars. Using such an approach, Lee & Saio \[1987a\] found that low-frequency $`g`$-modes trapped between the boundary of the convective core ($`r=r_c`$) and the surface of a rotating early-type star have eigenfrequencies $`\omega _n`$ given by $$\omega _n=\frac{2\sqrt{\lambda _{lm}}}{(n+\eta _e/2-1/6)\pi }\int _{r_c}^R\frac{|N|}{r}\,dr,$$ (23) where $`\eta _e`$ is the effective polytropic index at the surface, and $`n`$ is the radial order of the mode. This expression is not strictly appropriate in the current context, due to the fact that the outer reflecting boundary for trapped modes in figs. 1 and 2 occurs at $`r<R`$; furthermore, the presence of the convective region at $`\mathrm{log}T\approx 4.6`$ has been neglected. However, the form of the expression demonstrates that the frequencies of individual modes share the same $`\lambda _{lm}`$-dependence as the trapping cut-off $`\omega _t`$ in equation (21). It can therefore be suggested that a $`g`$-mode which is trapped in a non-rotating star will remain trapped once rotation is introduced, since the effect of rotation is to scale both sides of the trapping condition $`\omega _n>\omega _t`$ by an equal amount. Disregarding the possibility of avoided crossings (Aizenman, Smeyers & Weigert 1977; Lee & Saio 1989), a more general hypothesis may be put forward that, ceteris paribus, the set of radial orders $`\{n\}`$ of the $`g`$-modes which are trapped in a non-rotating star will remain invariant under the influence of rotation, with a similar result applying to the virtual modes.
The hypothesis can be supported with an analogy drawn to atomic energy levels under the influence of a magnetic field; even though the levels are distorted by the action of the Lorentz force (which, like the Coriolis force, can be expressed as a velocity cross-product), the set of discrete states which are bound is invariant under the action of the field. However, numerical calculations should be employed to test this hypothesis rigorously. ## 6 Discussion An important caveat regarding quantitative interpretation of the results presented previously is that atmospheric layers above the photosphere at $`\tau =2/3`$ have been disregarded. This is justifiable if $`k_{\mathrm{t}r}`$ is constant throughout these layers; however, such a situation is unlikely to be realized, since even in the case of isothermal trans-photospheric regions where the sound speed $`c_\mathrm{s}`$ is constant, $`k_{\mathrm{t}r}1/r`$ due to the spherical geometry. As a consequence, waves which are formally propagative at $`r=R`$ may leak out to some radius $`r>R`$ and thence be reflected back towards the interior, leading to complete wave trapping at frequencies below the cut-off $`\omega _t`$. To address properly this issue of trans-photospheric reflection, it is necessary to relocate the nominal outer boundary of the star to a radius at which it is guaranteed that no reflected, inward-propagating waves will occur. In the context of the linear and adiabatic approximations adopted herein, this guarantee can only be made of the outer boundary is located at infinity. However, once non-adiabatic effects are considered, it is possible that strong radiative or non-linear dissipation above the photosphere can lead to the effective absorption of all outward-propagating waves, with no reflection and subsequent trapping. Such a situation is analogous to the core-absorption of inwardly-propagating envelope $`p`$-modes found by Osaki , and can be treated by placing the outer boundary at the base of the dissipative region (which may be close to the photosphere). Similar arguments concerning the absence of trans-photospheric reflection can be made for systems with stellar winds. Owocki & Rybicki found that, for a line-absorption driven wind, any wave-like disturbance which reaches the sonic point $`r_s`$ can never propagate back to smaller radii, due to the non-linear interaction between the wave and underlying mean flow. In this case, the outer boundary can be located at $`r_s`$; however, quantitative treatments are problematical, since the pulsation equations must be revised to take the sub-sonic wind regions into account. In spite of these difficulties, the results of this work are valid on a phenomenological level, and may be of particular relevance to the 53 Per and slowly-pulsating B (SPB; Waelkens 1991) classes of variable stars, which are unstable to low-frequency $`g`$-mode pulsation due to the metal opacity bump at $`\mathrm{log}T5.3`$ (Dziembowski & Pamyatnykh 1993; Dziembowski, Moskalik & Pamyatnykh 1993); the observed periods of these stars are typically $`13`$ days, which is the same order of magnitude as the trapping cut-off depicted in figs. 1 and 2. 
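For orientation, the cut-off frequencies of figs. 1 and 2 translate into periods of (my conversion) $$P_t=\frac{2\pi }{\omega _t}\approx \frac{2\pi }{\sqrt{1.9\times 10^{-9}}\,\mathrm{s}^{-1}}\approx 1.7\,\mathrm{d}\;\text{(non-rotating)},\qquad P_t\approx \frac{2\pi }{\sqrt{8.0\times 10^{-9}}\,\mathrm{s}^{-1}}\approx 0.8\,\mathrm{d}\;\text{(rotating, }l=4,\;m=-1\text{)},$$ which is indeed of the same order as the 1–3 day periods quoted above.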
As indicated previously, the global self-excitation of a given pulsation mode in one of these stars (or, indeed, any other type of star) depends on the competitive interplay between excitation and damping mechanisms, which, respectively, pump energy into and remove energy from the pulsation at each point within the star; self-excitation will only occur if the net contributions from the former outweighs the net deductions from the latter. For the purposes of the following discussion, both of these generic energy-transfer processes may be classified as either 1. non-adiabatic, where the transfer arises through perturbations to the specific entropy, corresponding to the operation of a Carnot heat engine which converts between thermal and mechanical (wave) energy within a given region, or 1. advective, where the transfer arises through the non-zero divergence of the wave flux, corresponding to a net flow of mechanical energy through the boundaries of the region. The opacity mechanism operative in 53 Per and SPB stars is thus a non-adiabatic excitation mechanism (a), whilst the energy loss associated with wave leakage at frequencies below the trapping cut-off $`\omega _t`$ may be identified as an advective damping mechanism (b). In the stability calculations of Dziembowski et al. , who use the approach described by Dziembowski , the assumption is made that the Lagrangian pressure perturbation $`\delta p`$ tends to a limiting value at the surface; this corresponds to the ab initio restriction that all waves are evanescent at the surface, and thus completely trapped within the star. Hence, any contributions to advective damping arising from wave leakage are neglected, which might lead to incorrect results for the overstability of modes at frequencies below $`\omega _t`$. However, it must be stressed that the last point is somewhat formal, if modes are stabilized by non-adiabatic damping well before the frequency is low enough for leakage to occur; whilst advective damping might enhance the stability of the virtual modes, it will have little influence in determining which (trapped) modes are unstable in a star. In the case of the 53 Per and SPB stars, such a situation may arise due to the dominance of the opacity mechanism by radiative damping at lower frequencies. The latter is large for high-order (large-$`n`$) $`g`$-modes, whose eigenfunctions exhibit many radial nodes in the stellar envelope; the sub-adiabatic temperature gradient in the radiative parts of the envelope will lead to significant thermal diffusion between neighbouring oscillating elements, which tends to suppress the pulsation (see, for instance, equation 26.13 in Unno et al. 1989 plus their accompanying text). These issues will be examined further in the next paper in this series. In a contrasting situation, where non-adiabatic damping is less important at low frequencies, the overstability of virtual modes will be determined by the relative strengths of non-adiabatic excitation and advective damping. If the latter is dominant, then $`g`$-modes will exhibit an upper limit in their variability periods which corresponds to the trapping cut-off; in the more rapidly-rotating stars (see, e.g., Aerts et al. 1999), this upper limit will depend, amongst other things, on $`m`$ and the degree of rotation. In contrast, if non-adiabatic excitation dominates, then no upper period limit will be observed, since virtual modes will be excited in addition to trapped modes. 
Estimates of the strength of advective damping can be obtained using, for instance, the asymptotic approach presented by Shibahashi . However, as with the local analysis used in section 3, such an approach is only valid when the characteristic variation scale of eigenfunctions is much smaller than that of the underlying star. This restriction means that Shibahashi’s approach may lead to poor results for those virtual modes with frequencies close to $`\omega _t`$; therefore, an examination of the importance of leakage-originated advective damping is deferred to the following paper, where the pulsation equations are solved globally using a numerical approach which does not suffer from the restriction discussed. Whilst a proper treatment of trapping, even in cases without the trans-photospheric reflection described above, adds a certain level of complexity to theoretical studies, it does open the way for asteroseismological studies of the near-surface regions of early-type stars. For instance, if an upper period limit is observed as described, the inferred value of $`\omega _t`$ may be used, in tandem with equations (2022), to calculate a value for the acoustic timescale $$\tau _{\mathrm{a}cc}=r/c_\mathrm{s}$$ (24) in the region where the onset of wave leakage occurs. Since, for an ideal gas, the adiabatic sound speed $`c_\mathrm{s}`$ is a function of temperature $`T`$ alone, this timescale then gives an independent estimate of the temperature in the outer layers of the star. Conversely, observations of variability attributable to virtual modes can confirm the existence of sub-surface convective regions arising in ionization zones, predicted by evolutionary models of early-type stars. The degree of wave leakage associated with a virtual mode depends on the thickness of these regions (which form the partially-reflective barrier necessary for the existence of virtual modes); therefore, it might be possible to obtain estimates of the thickness through measurements of the leakage rate. ## 7 Conclusions The prime conclusion to be drawn from the work presented herein is that the complete trapping of low-frequency $`g`$-modes beneath the surface of early-type stars is not guaranteed. This is especially the case in rotating stars, where the trapping cut-off frequency $`\omega _t`$ can be significantly increased by the action of the Coriolis force for all but the prograde sectoral modes. The fact that the latter are more effectively trapped in rapidly-rotating stars than other types of modes may explain anecdotal observational evidence which points to their favoured excitation. As a consequence of the dependence of $`g`$-mode eigenfrequencies on the rotation rate, the hypothesis has been put forward that the set of radial orders $`\{n\}`$ of trapped $`g`$-modes is invariant under the influence of rotation. Stability analyses which contain the ab initio assumption of complete wave reflection at the stellar surface might be in error at frequencies below the cut-off $`\omega _t`$. More rigorous calculations can include the possibility of wave leakage, by adopting a more physically-realistic outer mechanical boundary condition. Such calculations will reveal to what extent advective damping associated with leaking virtual modes might suppress the self-excitation of these modes. These points may be of especial relevance to the 53 Per and SPB classes of variable stars. 
## Acknowledgements I would like to thank Ian Howarth for many useful conversations regarding wave trapping, and for reading and suggesting improvements to the manuscript. Also, thanks must go to Conny Aerts and Joris de Ridder for introducing me to SPB stars. Finally, I am indebted to Wolfgang Loeffler for the very generous provision of stellar structure models. All calculations have been performed on an Intel Linux workstation provided by Sycorax Ltd, and this work has been supported by the Particle Physics and Astronomy Research Council of the UK.
no-problem/0003/astro-ph0003322.html
ar5iv
text
# A MODEL FOR THE QUIESCENT PHASE OF THE RECURRENT NOVA U SCORPII ## 1. INTRODUCTION U Scorpii is one of the best observed recurrent novae, the outbursts of which were recorded in 1863, 1906, 1936, 1979, 1987, and the latest in 1999. In particular, the 1999 outburst was well observed from the rising phase to the cooling phase by many observers (e.g., Munari et al. (1999); Kahabka et al. (1999); Lépine et al. (1999)) including eclipses (Matsumoto, Kato, & Hachisu 2000). Based on Matsumoto et al.’s (2000) observation, Hachisu et al. (2000) have constructed a theoretical light-curve model for the 1999 outburst of U Sco and obtained various physical parameters of the recurrent nova. Their main results are summarized as follows: (1) A direct light-curve fitting of the 1999 outburst indicates a very massive white dwarf (WD) of $`M_{\mathrm{WD}}=1.37\pm 0.01M_{\odot }`$. (2) The envelope mass at the optical maximum is estimated to be $`\mathrm{\Delta }M\approx 3\times 10^{-6}M_{\odot }`$. (3) Therefore, the mass accretion rate of the WD is $`\dot{M}_{\mathrm{acc}}\approx 2.5\times 10^{-7}M_{\odot }`$ yr<sup>-1</sup> during the quiescent phase between 1987 and 1999. (4) An optically thick wind blows from the WD and plays a key role in determining the nova duration because it reduces the envelope mass (Kato & Hachisu (1994)). About 60% of the envelope mass is carried away in the wind, which forms an expanding shell as observed in T Pyx (e.g., Shara et al. (1989)). The residual 40% ($`1.2\times 10^{-6}M_{\odot }`$) is added to the helium layer of the WD. (5) As a result, the WD can grow in mass at an average rate of $`1\times 10^{-7}M_{\odot }`$ yr<sup>-1</sup>. The above physical picture is exactly the same as that proposed by Hachisu et al. (1999b) as a progenitor system of Type Ia supernovae (SNe Ia). However, the distance to U Sco is still controversial because the direct light-curve fitting results in a relatively short distance of $`\sim 6`$ kpc (Hachisu et al. (2000)), which is incompatible with the distance of $`\sim 14`$ kpc at the quiescent phase (e.g., Webbink et al. (1987); Warner (1995); Kahabka et al. (1999), for a summary). If the distance of $`\sim 14`$ kpc is the case, it could hardly be consistent with the results (1) to (5) mentioned above. Our purpose in this Letter is to construct a light-curve model for the quiescent phase and to rectify the distance to U Sco. Our numerical method to obtain light curves has been described both in Hachisu & Kato (1999) to explain the second peak of T CrB outbursts and in Hachisu et al. (2000) to reproduce the light curve for the 1999 outburst of U Sco. Therefore, we mention only new parts of our numerical method in §2. In §3, by directly fitting our theoretical light curve to the observations, we derive the distance to U Sco. Discussions follow in §4, especially in relation to the recently detected orbital-period change of U Sco and a systemic mass loss through the outer Lagrangian points. We also discuss the relation to a progenitor system of SNe Ia. ## 2. THEORETICAL LIGHT CURVES Our U Sco model is graphically shown in Figure 1. Schaefer (1990) and Schaefer & Ringwald (1995) observed eclipses of U Sco in the quiescent phase and determined the orbital period ($`P=1.23056`$ days) and the ephemeris (HJD 2,451,235.777 $`+`$ 1.23056$`E`$) at the epoch of mid-eclipse. Thus, the companion is a main-sequence star (MS) which expands to fill its Roche lobe after most of the central hydrogen is consumed. We call such a star “a slightly evolved” MS.
The inclination angle of the orbit ($`i80\mathrm{°}`$) is a parameter for fitting. We have assumed that (1) $`M_{\mathrm{WD}}=1.37M_{}`$, (2) the WD luminosity of $$L_{\mathrm{WD}}=\frac{1}{2}\frac{GM_{\mathrm{WD}}\dot{M}_{\mathrm{acc}}}{R_{\mathrm{WD}}}+L_{\mathrm{WD},0},$$ (1) where the first term is the accretion luminosity (e.g., Starrfield, Sparks, & Shaviv 1988) and the second term $`L_{\mathrm{WD},0}`$ is the intrinsic luminosity of the WD, and $`R_{\mathrm{WD}}=0.0032R_{}`$ the radius of the $`1.37M_{}`$ WD, and (3) a black-body photosphere of the WD. The accretion luminosity is $`1700L_{}`$ for a suggested mass accretion rate of $`\dot{M}_{\mathrm{acc}}2.5\times 10^7M_{}`$ yr<sup>-1</sup>. Here, we assume $`L_{\mathrm{WD},0}=0`$ because the nuclear luminosity is smaller than the accretion luminosity for this accretion rate, but we have examined other two cases of $`L_{\mathrm{WD},0}=2000`$ and 4000 $`L_{}`$ and found no significant differences in the distance as shown below. We do not consider the limb-darkening effect for simplicity. It is assumed that the companion star is synchronously rotating on a circular orbit and its surface fills the inner critical Roche lobe as shown in Figure 1. We neglect both the limb-darkening effect and the gravity-darkening effect of the companion star for simplicity. Here, we assume a 50% irradiation efficiency of the companion star ($`\eta _{\mathrm{ir},\mathrm{MS}}=0.5`$). We have examined the dependence of the distance on the irradiation efficiency (i.e., $`\eta _{\mathrm{ir},\mathrm{MS}}=0.25`$ and 1.0) but found no significant differences in the distance as shown below. The non-irradiated photospheric temperature $`T_{\mathrm{ph},\mathrm{MS}}`$ of the companion star is a parameter for fitting. The mass of the secondary is assumed to be $`M_{\mathrm{MS}}=1.5M_{}`$. The size of the accretion disk is a parameter for fitting and defined as $$R_{\mathrm{disk}}=\alpha R_1^{},$$ (2) where $`\alpha `$ is a numerical factor indicating the size of the accretion disk, and $`R_1^{}`$ the effective radius of the inner critical Roche lobe for the WD component (e.g., Eggleton 1983). We also assume that the accretion disk is axisymmetric and has a thickness given by $$h=\beta R_{\mathrm{disk}}\left(\frac{\varpi }{R_{\mathrm{disk}}}\right)^\nu ,$$ (3) where $`h`$ is the height of the surface from the equatorial plane, $`\varpi `$ the distance on the equatorial plane from the center of the WD, $`\nu `$ the power of the surface shape, and $`\beta `$ a numerical factor showing the degree of thickness and also a parameter for fitting. We adopt a $`\varpi `$-squared law ($`\nu =2`$) simply to mimic the flaring-up effect of the accretion disk rim (e.g., Schandl, Meyer-Hofmeister, & Meyer 1997), and have examined the dependence of the distance on the power ($`\nu =1.25`$ and 3.0) without finding any significant differences as shown below. The surface of the accretion disk also absorbs photons from the WD photosphere and reemits with a black-body spectrum at a local temperature. We assume a 50% irradiation efficiency of the companion star, i.e., $`\eta _{\mathrm{ir},\mathrm{DK}}=0.5`$ (e.g., Schandl et al. (1997)). We have examined other two cases of $`\eta _{\mathrm{ir},\mathrm{DK}}=0.25`$ and 1.0, and found no significant differences in the distance as shown below. The non-irradiated temperature of the disk surface is assumed to be determined by the viscous heating of the standard accretion disk model. 
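Before turning to the disk temperature, it is worth checking the accretion-luminosity figure quoted above; the snippet below (standard cgs constants, my rounding) evaluates the first term of equation (1) for the adopted $`M_{\mathrm{WD}}`$, $`\dot{M}_{\mathrm{acc}}`$ and $`R_{\mathrm{WD}}`$ and recovers roughly 1700 $`L_{\odot }`$.

```python
# Check of the accretion-luminosity term of eq. (1), G*M_WD*Mdot/(2*R_WD),
# for M_WD = 1.37 M_sun, Mdot_acc = 2.5e-7 M_sun/yr, R_WD = 0.0032 R_sun (cgs units).
G     = 6.674e-8        # cm^3 g^-1 s^-2
M_sun = 1.989e33        # g
R_sun = 6.96e10         # cm
L_sun = 3.85e33         # erg s^-1
yr    = 3.156e7         # s

M_wd  = 1.37 * M_sun
Mdot  = 2.5e-7 * M_sun / yr
R_wd  = 0.0032 * R_sun

L_acc = 0.5 * G * M_wd * Mdot / R_wd
print(L_acc / L_sun)    # ~1.7e3, i.e. roughly 1700 L_sun
```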
Then, the disk surface temperature is given by $$\sigma T_{\mathrm{ph},\mathrm{disk}}^4=\frac{3GM_{\mathrm{WD}}\dot{M}_{\mathrm{acc}}}{8\pi \varpi ^3}+\eta _{\mathrm{ir},\mathrm{DK}}\frac{L_{\mathrm{WD}}}{4\pi r^2}\mathrm{cos}\theta ,$$ (4) where $`r`$ the distance from the WD center, and $`\mathrm{cos}\theta `$ the incident angle of the surface (e.g., Schandl et al. (1997)). The temperature of the disk rim is assumed to be 3000 K. ## 3. RESULTS Figure 2 shows the observational points (open circles) by Schaefer (1990) together with our calculated $`B`$ light curve (thick solid line) for $`\dot{M}_{\mathrm{acc}}=2.5\times 10^7M_{}`$ yr<sup>-1</sup>. To fit our theoretical light curves with the observational points, we calculate $`B`$ light curves by changing the parameters of $`\alpha =0.5`$—1.0 by 0.1 step, $`\beta =0.05`$—0.50 by 0.05 step, $`T_{\mathrm{ph},\mathrm{MS}}=3500`$—8000 K by 100 K step, and $`i=75`$$`85\mathrm{°}`$ by $`1\mathrm{°}`$ step and seek for the best fit model. The best fit parameters obtained are shown in Figure 2 (see also Table 1). There are five different contributions to the $`B`$-light ($`L_B`$) in the system: the white dwarf ($`L_{B1}`$), the non-irradiated portions of the accretion disk ($`L_{B2}`$) and the donor star ($`L_{B3}`$), and the irradiated portions of the accretion disk ($`L_{B4}`$) and the donor star ($`L_{B5}`$). In order to show each contribution, we have added two light curves in Figure 2, that is, a non-irradiation case of the ACDK ($`\eta _{\mathrm{ir},\mathrm{DK}}=0`$, dash-dotted), and a non-irradiation case of the MS ($`\eta _{\mathrm{ir},\mathrm{MS}}=0`$, dashed). The light from the WD is completely blocked by the accretion disk rim, thus having no contribution, $`L_{B1}=0`$. The depth of the primary eclipse, 1.5 mag, means $`L_{B3}=0.25L_B`$ because the ACDK is completely occulted by the MS. The difference of 1 mag between the thick solid and dash-dotted lines indicates $`L_{B4}=0.60L_B`$. The difference of 0.1 mag between the thick solid and dashed lines indicates $`L_{B5}=0.10L_B`$. Thus, we obtain each contribution: $`L_{B1}=0`$, $`L_{B2}=0.05L_B`$, $`L_{B3}=0.25L_B`$, $`L_{B4}=0.60L_B`$, and $`L_{B5}=0.10L_B`$. Then we calculate the theoretical color index $`(BV)_c`$ for these best fit models. Here, we explain only the case of $`\dot{M}_{\mathrm{acc}}=2.5\times 10^7M_{}`$ yr<sup>-1</sup>. By fitting, we obtain the apparent distance modulus of $`m_{B,0}=16.71`$, which corresponds to the distance of $`d=22`$ kpc without absorption ($`A_B=0`$). On the other hand, we obtained a rather blue color index of $`(BV)_c=0.0`$ outside eclipses. Together with the observed color of $`(BV)_o=0.56`$ outside eclipses (Schaefer (1990); Schaefer & Ringwald (1995)), we derive a color excess of $`E(BV)=(BV)_o(BV)_c=0.56`$ Here, suffixes $`c`$ and $`o`$ represent the theoretically calculated and the observational values, respectively. Then, we expect an absorption of $`A_V=3.1E(BV)=1.8`$ and $`A_B=A_V+E(BV)=2.3`$. Thus, we are forced to have a rather short distance to U Sco of 7.5 kpc. In our case of $`\alpha =0.7`$ and $`\beta =0.30`$, the accretion disk is completely occulted at mid-eclipse. The color index of $`(BV)_c=0.53`$ at mid-eclipse indicates a spectral type of F8 for the cool component MS, which is in good agreement with the spectral type of F8$`\pm `$2 suggested by Johnston & Kulkarni (1992). Hanes (1985) also suggested that a spectral type nearer F7 is preferred. 
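Written out, the extinction-corrected distance just described follows from (my arithmetic, using the numbers of the fit) $$A_V=3.1\,E(B-V)\approx 1.7\text{–}1.8,\qquad A_B=A_V+E(B-V)\approx 2.3,\qquad d=10^{1+(m_{B,0}-A_B)/5}\,\mathrm{pc}=10^{1+(16.71-2.3)/5}\,\mathrm{pc}\approx 7.6\,\mathrm{kpc},$$ the small difference from the quoted 7.5 kpc reflecting rounding of $`A_B`$, while dropping $`A_B`$ altogether recovers the absorption-free value $`10^{1+16.71/5}\,\mathrm{pc}\approx 22`$ kpc.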
For other mass accretion rates of $`\dot{M}_{\mathrm{acc}}=`$ (0.1—5.0)$`\times 10^{-7}M_{\odot }`$ yr<sup>-1</sup>, we obtain similar short distances to U Sco, as summarized in Table 1. It should be noted that, although the luminosity of the model depends on our various assumptions of the irradiation efficiencies, the $`\varpi `$-power law of the disk, and the intrinsic luminosity of the WD, the derived distance to U Sco itself is almost independent of these assumptions, as seen from Table 2. Therefore, the relatively short distance to U Sco ($`\sim `$ 6—8 kpc) is a rather robust conclusion, at least, from the theoretical point of view. ## 4. DISCUSSION Matsumoto et al. (2000) observed a few eclipses during the 1999 outburst and, for the first time, detected a significant period-change of $`\dot{P}/P=(1.7\pm 0.7)\times 10^{-6}`$ yr<sup>-1</sup>. If we assume conservative mass transfer, this period change requires a mass transfer rate of $`\sim 10^{-6}M_{\odot }`$ yr<sup>-1</sup> in quiescence. Such a mass transfer for 12 years is too high to be compatible with the envelope mass on the white dwarf, thus implying a non-conservative mass transfer in U Sco. We have estimated the mass transfer rate for a non-conservative case by assuming that matter is escaping from the outer Lagrangian points and thus the specific angular momentum of the escaping matter is $`1.7a^2\mathrm{\Omega }_{\mathrm{orb}}`$ (Sawada et al. (1984); Hachisu et al. 1999a), where $`a`$ is the separation and $`\mathrm{\Omega }_{\mathrm{orb}}\equiv 2\pi /P`$. Then the mass transfer rate from the companion is $`\dot{M}_{\mathrm{MS}}=(5.5\pm 1.5)\times 10^{-7}M_{\odot }`$ yr<sup>-1</sup> for $`M_{\mathrm{MS}}=0.8`$—2.0 $`M_{\odot }`$ under the assumption that the WD receives matter at a rate of $`\dot{M}_{\mathrm{acc}}=2.5\times 10^{-7}M_{\odot }`$ yr<sup>-1</sup>. The residual ($`3\times 10^{-7}M_{\odot }`$ yr<sup>-1</sup>), which is escaping from the system, forms an excretion disk outside the orbit of the binary. Such an extended excretion disk/torus may cause a large color excess of $`E(B-V)=0.56`$. Kahabka et al. (1999) reported the hydrogen column density of (3.1—4.8)$`\times 10^{21}`$ cm<sup>-2</sup>, which is much larger than the Galactic absorption in the direction of U Sco (1.4$`\times 10^{21}`$ cm<sup>-2</sup>, Dickey & Lockman (1990)), indicating a substantial intrinsic absorption. It should also be noted here that Barlow et al. (1981) estimated the absorption toward U Sco in three ways: (1) the Galactic absorption in the direction of U Sco, $`E(B-V)\approx 0.24`$ and $`A_V\approx 0.7`$, (2) the line ratio of He II during the 1979 outburst ($`t\approx `$ 12 days after maximum), $`E(B-V)\approx 0.2`$ and $`A_V\approx 0.6`$, and (3) the Balmer line ratio during the 1979 outburst ($`t\approx `$ 33—34 days after maximum), $`E(B-V)\approx 0.35`$ and $`A_V\approx 1.1`$. The last one is significantly larger than the other two estimates. They suggested the breakdown of their case B approximation in high density regions. However, we may point out another possibility that the systemic mass outflow from the binary system has already begun at $`t\approx `$ 33 days and, as a result, the intrinsic absorption is gradually increasing. The mass of the companion star can be constrained from the mass transfer rate. Such a high transfer rate as $`\dot{M}_{\mathrm{MS}}\approx 5.5\times 10^{-7}M_{\odot }`$ yr<sup>-1</sup> strongly indicates a thermally unstable mass transfer (e.g., van den Heuvel et al. (1992)), which is realized when the mass ratio is larger than 1.0—1.1, i.e., $`q=M_{\mathrm{MS}}/M_{\mathrm{WD}}>`$ 1.0—1.1 for zero-age main-sequence stars (Webbink (1985)).
This may pose a requirement $`M_{\mathrm{MS}}1.4M_{}`$. We estimate the most likely companion mass of 1.4—1.6 $`M_{}`$ from equation (11) in Hachisu et al. (1999b). If the distance to U Sco is $``$ 6.0—8.0 kpc, it is located $``$ 2.3—3.0 kpc above the Galactic plane ($`b=22\mathrm{°}`$). The zero-age masses of the progenitor system to U Sco are rather massive (e.g., $`8.0M_{}+2.5M_{}`$ from Hachisu et al. 1999b) and it is unlikely that such massive stars were born in the halo. Some normal B-type main-sequence stars have been found in the halo (e.g., PG0009+036 is located $``$ 5 kpc below the Galactic disk, Schmidt et al. (1996)), which were ejected from the Galactic disk because of their relatively high moving velocities $``$100—200 km s<sup>-1</sup>. The radial velocity of U Sco is not known but it is suggested that the $`\gamma `$-velocity is $``$50—100 km s<sup>-1</sup> from the absorption line velocities (Johnston & Kulkarni (1992); Schaefer & Ringwald (1995)). If so, it seems likely that U Sco was ejected from the Galactic disk with a vertical velocity faster than $`20`$ km s<sup>-1</sup> and has reached at the present place within the main-sequence lifetimes of a $`3.0M_{}`$ star ($`3.5\times 10^8`$ yr). Now, we can understand the current evolutionary status and a further evolution of U Sco system. The white dwarf has a mass $`1.37\pm 0.01M_{}`$. It is very likely that the WD has reached such a large mass by mass accretion. In fact the WD is currently increasing the mass of the helium layer at a rate of $`\dot{M}_{\mathrm{He}}1.0\times 10^7M_{}`$ yr<sup>-1</sup> (Hachisu et al. (2000)). We then predict that the WD will evolve as follows. When the mass of the helium layer reaches a critical mass after many cycles of recurrent nova outbursts, a helium shell flash will occur. Its strength is as weak as those of AGB stars because of the high mass accretion rate (Nomoto (1982)). A part of the helium layer will be blown off in the wind, but virtually all of the helium layer will be burnt into carbon-oxygen and accumulates in the white dwarf (Kato & Hachisu (1999)). Therefore, the WD mass can grow until an SN Ia explosion is triggered (Nomoto et al. (1984)). We thank the anonymous referee for many critical comments to improve the manuscript. This research has been supported in part by the Grant-in-Aid for Scientific Research (07CE200, 08640321, 09640325, 11640226, 20283591) of the Japanese Ministry of Education, Science, Culture, and Sports. KM has been financially supported as a Research Fellow for Young Scientists by the Japan Society for the Promotion of Science.
no-problem/0003/cond-mat0003007.html
ar5iv
text
# Kondo screening and exhaustion in the periodic Anderson model ## I Introduction The periodic Anderson model (PAM) is one of the standard models for heavy fermion systemshewson . In its simplest form, it describes a system of localized electronic states which hybridize with an uncorrelated conduction band. Except for a few statements concerning ground state propertiesTSU97 , no exact solution of the PAM is known up to now. There have been many approximate calculations like QMCJar95 ; TJF97 ; TJF98 ; GE98 ; GE99 ; GE99pre2 ; HMS99 , perturbation theoryLM78 ; SC90a ; SC90b as well as other analytical approaches DS97 ; CS97 ; DS98 ; MW93 ; VGR92 ; VGR94 ; VTJK99pre ; PBJ99pre ; MNRR98 ; MN99a ; MN99pre . However, it is still far from being understood. Closely related to the PAM is the single-impurity Anderson model (SIAM)hewson . The SIAM is one of the best-understood models in theoretical physics; besides a broad range of approximate solutions even some exact calculations are possible (for a detailed review, see reference hewson, ). The main result of these calculations is the emergence of a new temperature scale $`T_K`$ (Kondo Temperature), which governs the low-temperature physics. For $`T<T_K`$, the magnetic moment of the impurity is screened by conduction electrons (Kondo screening). All thermodynamic properties can be expressed in terms of $`T_K`$. However, due to the periodicity of the localized states in the case of the PAM, new and more complicated physical properties will emerge. The most prominent example is the so-called exhaustion problem first mentioned by NozièresNoz98 . His argumentation was based on the Kondo lattice model (KLM), which can be derived from the PAM under the condition of half-filling for the localized states using the Schrieffer-Wolff transformationSW66 ; hewson . In the KLM, where the charge degrees of freedom of the localized states have been removed, Kondo screening manifests itself in the quenching of the magnetic moment of the localized spins by the conduction electrons. Reducing the number of conduction electrons below half-filling, however, the situation changes. The conduction band occupation is not sufficient to screen the localized spins completely. The nature of the ground state is not clear. As noted by NozièresNoz98 , the situation is different for the case of small and large $`J`$, respectively. Whereas for small $`J`$, the screening is a collective effect, in the limit $`J\mathrm{}`$, it can be understood as the formation of local Kondo singlets, in which one conduction electron and one $`S=\frac{1}{2}`$-spin form a bound singlet state. On removing conduction electrons, some of these local Kondo singlets (LKS) will be broken. The remaining “bachelor” spins can be described as a system of hard-core fermions, for which double occupancy is forbiddenNoz98 . The small-$`J`$ limit with its collective screening is more complicated. But nevertheless, following Nozières, this case can similarly be mapped onto an effective Hubbard modelNoz98 . Contrary to the above-described KLM, the localized states also have a charge degree of freedom in the PAM, and its physical properties are therefore more complicated. The Schrieffer-Wolff transformation is valid only in the small $`J=\frac{V^2}{U}`$\- limit ($`V`$ is the hybridization strength between localized and the conduction band states and $`U`$ is the on-site Coulomb interaction among the localized electrons (see section II). 
So, especially in the limit of strong hybridization between localized and conduction states, the KLM is not a priori justified as an effective model for the PAM. For the weakly hybridized case, the exhaustion in the PAM has recently been discussed by several authorsTJF97 ; TJF98 ; VTJK99pre ; PBJ99pre . They find that Nozières’ picture of the effective Hubbard model also holds in this case and explains the emergence of a new low-temperature scale. In this paper, we want to present a systematic investigation of the PAM. In the following chapter, we will introduce the theoretical approach, which combines the dynamical mean-field theory (DMFT)MV89 ; PJF95 ; GKKR96 with a modification of the iterated perturbation theory (IPT)KK96 ; PWN97 ; WPN98 ; MWPN99 . The results will be presented in section III. In the first part of the discussion, we will focus on the hybridization-strength dependence of the symmetric PAM, and in the second part, we reduce the number of conduction electrons and investigate the exhaustion problem, thereby extending the discussion of reference VTJK99pre, . ## II Theory The PAM is defined by its Hamiltonian $`H=`$ $`{\displaystyle \sum _{\stackrel{}{k},\sigma }}ϵ(\stackrel{}{k})s_{\stackrel{}{k},\sigma }^{\dagger }s_{\stackrel{}{k},\sigma }+{\displaystyle \sum _{i,\sigma }}e_ff_{i,\sigma }^{\dagger }f_{i,\sigma }+`$ (1) $`V{\displaystyle \sum _{i,\sigma }}(f_{i,\sigma }^{\dagger }s_{i,\sigma }+s_{i,\sigma }^{\dagger }f_{i,\sigma })+{\displaystyle \frac{1}{2}}U{\displaystyle \sum _{i,\sigma }}n_{i,\sigma }^{(f)}n_{i,-\sigma }^{(f)}`$ with $`s_{i,\sigma }`$ ($`f_{i,\sigma }`$) and $`s_{i,\sigma }^{\dagger }`$ ($`f_{i,\sigma }^{\dagger }`$) being the conduction band ($`f`$-level) electron annihilation and creation operators ($`n_{i,\sigma }^{(f)}=f_{i,\sigma }^{\dagger }f_{i,\sigma }`$). $`ϵ(\stackrel{}{k})`$ is the dispersion of a non-degenerate $`s`$-type conduction band and $`e_f`$ denotes the position of the $`f`$-level with respect to the center of gravity of the conduction band. The hybridization $`V`$ is taken as a $`\stackrel{}{k}`$-independent constant, and finally $`U`$ is the local Coulomb interaction between two $`f`$-electrons at the same lattice site. To obtain the $`f`$-electron Green function $`G_{ii\sigma }^{(f)}(E)=\langle \langle f_{i\sigma };f_{i\sigma }^{\dagger }\rangle \rangle `$, we apply a two-step procedure. The first of these, known as dynamical mean-field theory (DMFT)MV89 ; PJF95 ; GKKR96 , is a mapping of the PAM onto a simpler model, namely the single-impurity Anderson model. The second step of our procedure is to find an approximate solution of the SIAM using the modified perturbation theory (MPT)PWN97 ; WPN98 ; MWPN99 . The starting point of the DMFT is the assumption of a $`\stackrel{}{k}`$-independent, i.e. local self-energy $`\mathrm{\Sigma }_\sigma (E)`$. It can be shown that in this case, the self-energy of the PAM is equivalent to the self-energy of a properly defined impurity model (SIAM). The conduction band within this SIAM has to be determined by the following expression for the hybridization functionhewson $$\mathrm{\Delta }_\sigma (E)=E-e_f-\mathrm{\Sigma }_\sigma (E)-\left(G_{ii\sigma }^{(f)}(E)\right)^{-1}$$ (2) In the original SIAM, this function is given as $`\mathrm{\Delta }(E)=\sum _\stackrel{}{k}\frac{V^2}{E-ϵ(\stackrel{}{k})}`$. All information about the conduction band and its hybridization with the impurity is contained in $`\mathrm{\Delta }(E)`$. Therefore its knowledge is sufficient to define the electron bath of the SIAM also within the DMFT.
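As a brief aside: the non-interacting limit ($`U=0`$) of Hamiltonian (1) already contains the basic hybridization physics, since for each $`\stackrel{}{k}`$ one only has to diagonalize a 2×2 problem in the ($`s`$, $`f`$) basis, yielding two quasiparticle bands separated by a direct gap of $`2V`$ where $`ϵ(\stackrel{}{k})`$ crosses $`e_f`$. The short sketch below is illustrative only; the one-dimensional cosine dispersion and the choice $`e_f=0`$ are assumptions for the purpose of the illustration and are not taken from the paper.

```python
import numpy as np

# Non-interacting limit (U = 0) of the PAM Hamiltonian: two hybridized bands
# E_+(k), E_-(k) from the 2x2 matrix [[eps(k), V], [V, e_f]] at each k.
# eps(k) = -0.5*cos(k) (unit bandwidth, 1d) is an illustrative choice only.

V, e_f = 0.2, 0.0
k = np.linspace(-np.pi, np.pi, 400)
eps_k = -0.5 * np.cos(k)

root = np.sqrt(0.25 * (eps_k - e_f)**2 + V**2)
E_plus  = 0.5 * (eps_k + e_f) + root
E_minus = 0.5 * (eps_k + e_f) - root
# a direct hybridization gap of 2V opens where eps(k) crosses e_f
```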
From the self-energy $`\mathrm{\Sigma }_\sigma (E)`$ of the SIAM defined by $`\mathrm{\Delta }_\sigma (E)`$, the PAM $`f`$-electron Green function can directly be obtained as $$G_{ii\sigma }^{(f)}(E)=\frac{1}{N}\underset{\stackrel{}{k}}{}\frac{1}{Ee_f\frac{V^2}{Eϵ(\stackrel{}{k})}\mathrm{\Sigma }_\sigma (E)}$$ (3) Since $`G_{ii\sigma }^{(f)}(E)`$ enters expression (2), a self-consistent solution has to be found by iteration. As already noted above, the DMFT-procedure becomes exact for a local, i. e. $`\stackrel{}{k}`$-independent self-energy. It has been shown that for the limit of infinite dimensions, or equivalently in the limit of the lattice coordination number going to infinity, this is indeed exactly the caseMV89 ; Mue89 . Furthermore, solving the PAM in three dimensions using perturbation theory, it was shown that the results obtained in the local approximation compare qualitatively and quantitatively very well with those where the full $`\stackrel{}{k}`$-dependence has been consideredSC89b ; SC90a . Now the actual problem is shifted to solve the SIAM which is defined by $`\mathrm{\Delta }_\sigma (E)`$ (see equation (2)). For this task we apply the modified perturbation theory (MPT)PWN97 ; WPN98 ; MWPN99 which is based on the following ansatz for the electronic self-energyMR82 ; KK96 $$\mathrm{\Sigma }_\sigma (E)=Un_\sigma ^{(f)}+\frac{a_\sigma \mathrm{\Sigma }_\sigma ^{(\mathrm{SOC})}(E)}{1b_\sigma \mathrm{\Sigma }_\sigma ^{(\mathrm{SOC})}(E)}$$ (4) where $`\mathrm{\Sigma }_\sigma ^{(\mathrm{SOC})}(E)`$ denotes the second-order contribution to the second-order perturbation theory around the Hartree-Fock solution (SOPT-HF)Yam75 ; ZH83 ; SC90a . Please note that the ansatz (4) is $`\stackrel{}{k}`$-independent by construction, the basic assumption of the DMFT procedure is therefore already incorporated. Hence, the ansatz (4) together with the proposal to fit the parameters as will be described in detail below is the only approximation necessary to obtain the $`f`$-electron Green function. The coefficients $`a_\sigma `$ and $`b_\sigma `$ are determined so that the first four ($`n\{0,1,2,3\}`$) moment sum rules $$M_\sigma ^{(n)}=𝑑EE^nA_\sigma ^{(f)}(E)=[\underset{n\text{-fold commutator}}{\underset{}{[\mathrm{}[f_\sigma ,H]_{},\mathrm{},H]_{}}},f_\sigma ^{}]_+$$ (5) $$A_\sigma ^{(f)}(E)=\frac{1}{\pi }\mathrm{}G_{ii\sigma }^{(f)}(E+i0^+)$$ are fulfilledPWN97 ; PHWN98 . $`\mathrm{}x`$ denotes the imaginary part of $`x`$. Since the moments $`M_\sigma ^{(n)}`$ determine the high-energy expansion of the Green function, the compliance of the $`n=3`$ sum rule automatically leads to the correct behavior of $`G_{ii\sigma }^{(f)}(E)`$ up to the order $`\frac{1}{E^4}`$PHWN98 . Furthermore, the $`n=3`$ sum rule is directly related to the correct positions and spectral weights of the charge excitations in the strong-coupling limit $`U\mathrm{}`$HL67 ; PHWN98 . This is ensured by the occurrence of a higher-order correlation function called bandshift $`B_\sigma `$, which is discussed in detail in the context of the Hubbard model in reference PHWN98, , and in reference PWN97, for the effective SIAM within the dynamical mean-field theory. Since we use the perturbation theory around the Hartree-Fock solution to determine the $`\mathrm{\Sigma }_\sigma ^{(\mathrm{SOC})}(E)`$, another parameter enters the calculation, namely the chemical potential within the Hartree-Fock calculation. It is a priori not evident that this chemical potential should be identical to the chemical potential of the full problem. 
In fact, several other choices to determine this parameter seem possible. Within the iterated perturbation theory (IPT)KK96 ; VTJK99pre , which is away from half-filling also based on the ansatz (4), the Luttinger sumLW60 or equivalently the Friedel sum ruleFri56 ; Lan66 is used. This, however, limits the calculation to $`T=0`$ from the very beginning. We therefore define a different constraint, demanding that the impurity occupation number within the Hartree-Fock calculation is equivalent to the true occupation number. A detailed investigation of this choice and other possibilities is found in reference PWN97, . Investigating the well-known single-impurity Anderson model to test the quality of our methodMWPN99 , we found that the MPT fulfills the Friedel sum rule within numerical accuracy not only under symmetric parameter conditions but also in a broad range of parameters, especially when reducing the conduction electron density. But contrary to the IPT, the MPT is applicable also at finite temperatures. Summarizing the features of the MPT, it should be considered trustworthy for small $`U`$ since it is based on perturbation theory. But furthermore it is well justified in the strong coupling regime, where the main features – the charge excitations are correctly reproduced. This is clearly one step beyond similar methods which determine the parameters $`a_\sigma `$ and $`b_\sigma `$ with respect to the “atomic” limit $`V=0`$KK96 ; VTJK99pre ; TS99pre . To calculate the susceptibility, we apply an external magnetic field $`B_{\text{ext}}`$ which couples to the $`f`$\- and the $`s`$-electrons. The susceptibility $`\chi ^{\text{(tot)}}`$ is given as $`\frac{M}{B_{\text{ext}}}|_{B_{\text{ext}}=0}`$, where $`M`$ is the total magnetization of the system. Since the $`f`$\- and $`s`$-magnetization can be computed separately, the respective contribution of the $`f`$-($`s`$-) electrons to $`\chi ^{\text{(tot)}}`$ can be determined. With the above-described theory, the $`f`$-electron Green function and all quantities deriveable from it can be obtained. This includes several two-particle correlation functions as e.g. the $`f`$-electron double occupancy $`n_\sigma ^{(f)}n_\sigma ^{(f)}`$NolBd7 ; HerrmannDipl . However, for the discussion below, we will also be interested in two-particle correlation functions which are not readily obtained from $`G_{ii\sigma }^{(f)}(E)`$. To determine these, as e. g. the $`s`$-$`f`$ density correlation function $`n_\sigma ^{(f)}n_\sigma ^{}^{(s)}`$ we need a further approximation: We construct the following effective medium Hamiltonian: $$\begin{array}{cc}\hfill H^{\text{(eff)}}=& \underset{\stackrel{}{k},\sigma }{}ϵ(\stackrel{}{k})s_{\stackrel{}{k},\sigma }^{}s_{\stackrel{}{k},\sigma }+\underset{i,\sigma }{}\left(ϵ_f+\mathrm{\Sigma }_\sigma (E)\right)f_{i,\sigma }^{}f_{i,\sigma }\hfill \\ & +V\underset{i,\sigma }{}\left(f_{i,\sigma }^{}s_{i,\sigma }+s_{i,\sigma }^{}f_{i,\sigma }\right)\hfill \end{array}$$ (6) with $`\mathrm{\Sigma }_\sigma (E)`$ being the fully self-consistent solution of the DMFT-MPT scheme. Since the effective medium Hamiltonian is bilinear in fermion operators, all Green functions of interest can be evaluated exactly. By construction, the single-particle properties of model (6) are equivalent to the original model (1) solved within the DMFT-MPT scheme. 
Although we are aware of the fact, that using the effective Hamiltonian (6) must be seen as an approximation to the original model (1), it is in our opinion clearly one step beyond standard first-order perturbation theory for the latter, which would not reproduce the non-trivial results we will discuss below. Let us already point out here that these non-trivial results will always be accompanied by special features in quantities which were derived from the full Hamiltonian (1) using the DMFT-MPT (e. g. the susceptibility or $`n_\sigma ^{(f)}n_\sigma ^{(f)}`$ ). They seem therefore to be not effects due to the replacement of Hamiltonian (1) with (6). ## III Results and discussion ### III.1 The symmetric PAM The symmetric PAM is defined by complete particle-hole symmetry, i. e. $`e_f=\frac{U}{2}`$ and a particle-hole symmetric density of states of the conduction band with the chemical potential located at its center of gravity. In figure 1, the $`s`$\- and $`f`$-densities of states ($`s`$\- and $`f`$-DOS) with $`U=2`$ and $`V=0.2`$ are plotted for various temperatures. The energy scale is defined by the free, i.e. unhybridized conduction band of unit width and semielliptic shape centered at $`E=0`$. Within this energy scale, the temperatures will be given in $`\frac{K}{eV}`$. We have plotted the projections onto the $`f`$($`s`$)-states using solid (dotted) lines. The DOS consist of the charge excitations approximately located at $`e_f`$ and $`e_f+U`$ which are dominantly of $`f`$-character and the conduction band mostly of $`s`$-character which is slightly deformed due to the hybridization. For low temperatures, an additional feature appears in the vicinity of $`E=\mu =0`$, the Kondo resonancehewson . It is split by the coherence gap which originates from the coherent hybridization between $`f`$\- and $`s`$-states at all lattice sites. This coherence gap might be the theoretical equivalent of the experimentally seen “pseudo-gap” e.g. in $`SmB_6`$ABW79 or in the so-called Kondo insulators, e.g. $`Ce_3Bi_4Pt_3`$HCTFL90 or $`CeNiSe`$DKJWWLF97 . It can be understood as a level-repulsion between the conduction band states and the effective $`f`$-level located at the chemical potential. Since this effective $`f`$-level is clearly correlation-induced, the coherence gap is as well. The DOS obtained by our method compare very well with those calculated by QMC in $`d=1`$GE98 ; GE99 , $`d=2`$GE99pre2 and $`d=\mathrm{}`$Jar95 ; VTJK99pre . At least for $`d=1`$ and $`d=2`$, there is one qualitative difference, however. In the MPT, the Kondo resonance is of pure $`f`$-character, whereas in the cited papers, the conduction band also contributes to the resonance. Whether this is due to the maximum-entropy methodJG95 necessary to complement the QMC formalism, or an artefact of our method, remains an open question. However, physically relevant is only the total density of states ($`f`$ plus conduction band). The choice of projecting the DOS onto the $`f`$\- and $`s`$-states, i.e. onto the basis given by the $`V=0`$ solution of the problem is rather arbitrary. Therefore, the above-discussed differences seem to be of minor importance. Focusing on the Kondo screening problem, the question arises on how one can observe it. One possibility found in the literature is the definition of an effective magnetic moment $`T\chi (T)`$. This is motivated by the Curie law which, however, only holds for high temperatures. 
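To illustrate the $`T\chi (T)`$ criterion just mentioned with the simplest possible example, consider an isolated antiferromagnetic spin-1/2 dimer (a two-spin toy model, not the SIAM or the PAM): at high temperature $`T\chi `$ approaches the free Curie constant, while below the singlet-triplet gap it is suppressed towards zero. A minimal sketch, in units $`g\mu _B=k_B=1`$:

```python
import numpy as np

# Susceptibility of an antiferromagnetic spin-1/2 dimer (Bleaney-Bowers form),
# per dimer, with J the singlet-triplet gap; units g*mu_B = k_B = 1.
def chi_dimer(T, J=1.0):
    return (2.0 / T) / (3.0 + np.exp(J / T))

T = np.logspace(-2, 2, 200)
Tchi = T * chi_dimer(T)
# Tchi -> 1/2 (two free spins, Curie law) for T >> J and -> 0 for T << J,
# mimicking the suppression of the effective moment upon singlet formation.
```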
In the SIAM, there is a suppression of $`T\chi (T)`$ coinciding with the temperature scale $`T_K`$ which also governs the other low-energy properties of the system. In analogy to the SIAM, the behaviour of $`T\chi (T)`$ is often interpreted as indirect indicator for the onset of Kondo screening in the PAMJar95 ; TJF97 . Another criterion of Kondo screening, which we will focus on, might be found in spin-spin correlation functions. Let us look for example at the problem of two $`S=\frac{1}{2}`$ spins. Of the four possible states of this system, three are of triplet and one is of singlet nature. The spin-spin correlation function takes the value $`\stackrel{}{S}_a\stackrel{}{S}_b=\frac{1}{4}`$ for the triplet states and $`\stackrel{}{S}_a\stackrel{}{S}_b=\frac{3}{4}`$ for the singlet state ($`a`$ and $`b`$ denote the two spins). In the following, we will discuss only the on-site interband spin-spin correlation function $`\stackrel{}{S}_i^{(f)}\stackrel{}{S}_i^{(s)}`$. It is obvious that this function can be an indicator only for local singlet formation, as will be discussed in more detail below. In figure 2, the on-site interband spin-spin correlation function $`\stackrel{}{S}_i^{(f)}\stackrel{}{S}_i^{(s)}`$ as well as several on-site double occupancy correlation functions are plotted as function of the hybridization strength $`V`$ for $`T=0`$ (other parameters as in figure 1). With increasing $`V`$, the on-site spin-spin correlation function approaches the value $`\frac{3}{4}`$. In the same range of $`V`$, the interband double occupancy with parallel spin, $`n_{i\sigma }^{(f)}n_{i\sigma }^{(s)}`$ vanishes, whereas the respective correlation function with opposite spin indices, $`n_{i\sigma }^{(f)}n_{i\sigma }^{(s)}`$ stays almost constant. In analogy to the two-spin problem, these are clear indications of a local singlet correlation. A similar transition in the spin-spin correlation function has been seen in a PAM with next-neighbor hybridization using QMC, where it has been interpreted as singlet formationHMS99 . In the upper graphs of figures 3 and 4, the correlation functions of figure 2 are plotted as function of temperature for large ($`V=0.6`$) and small ($`V=0.2`$) hybridization strengths. Additionally, the lower graphs show the respective susceptibilities. In the large-$`V`$ case, the above described transition in the various correlation functions is clearly visible: $`\stackrel{}{S}_i^{(f)}\stackrel{}{S}_i^{(s)}\frac{3}{4}`$ and $`n_{i\sigma }^{(f)}n_{i\sigma }^{(s)}`$ shows a huge drop around $`T3000`$, being of very small value at low temperatures. In the same temperature range, the susceptibility vanishes. Both the $`f`$\- and $`s`$-contribution to $`\chi ^{\text{(tot)}}`$ disappear simultaneously. From these findings, we propose the occurrence of local Kondo singlet formation. With the term local Kondo singlet (LKS), we want to stress, that the singlet formation is predominantly a local process determined by the binding of one $`f`$\- and one $`s`$-electron at each lattice site. In the opposite case of collective Kondo screening as it could occur for small $`V`$, the local correlation functions discussed here need not show any particularities. Our proposal is further supported by the behavior of the conduction band double occupancy $`n_{i\sigma }^{(s)}n_{i\sigma }^{(s)}`$. In the large-$`V`$ region, where we propose the LKS formation, this correlation function is reduced compared to the small-$`V`$ case, as can be clearly seen in figure 2. 
This indicates a tendency towards localization of the conduction electrons, which is exactly what one would expect in the case of LKS formation. The unique temperature scale which we identify with the Kondo temperature of $`T_K\approx 1000K`$ seems to be very large. This cannot be due to the “Hartree-Fock-character” implied by the simple effective medium approach of equation (6) used to determine the higher correlation functions, since the same temperature scale appears in the susceptibility, which is determined from the full Hamiltonian (equation (1)) in the DMFT-MPT scheme. Whether $`T_K`$ is over-estimated due to the mean-field character of the DMFT or the use of the MPT, or whether it displays the true behavior of the system, remains a matter of speculation. However, comparing our method with other means of calculation (e.g. numerical renormalization group calculations (NRG)BHP98 ; PBJ99pre ), the DMFT-MPT seems to overestimate energy scales. We believe this is connected to the fact that the MPT, as any perturbative method, is unable to reproduce the exponential temperature scale typical for the Kondo physicsMWPN99 . A detailed comparison with complementary methods, e. g. NRG, is needed to shed further light on this. For small $`V`$ (figure 4), the situation is completely different. The systems with small and large $`V`$ behave similarly only for very low and very high temperatures. For very high temperatures, a Curie-like behavior is found as expected. For $`T=0`$, the susceptibility vanishes also in the small-$`V`$ case. This is compatible with the exact result by Tsunetsugu et al.TSU97 , who proved the singlet nature of the ground state of the symmetric PAM independent of the size of $`V`$ and showed the existence of a spin gap. However, for small hybridization the various correlation functions discussed above show none of the signatures which led us to the conclusion of local Kondo singlet formation in the large-$`V`$ case. The susceptibility shows two features. The one at the high temperature $`T_{\text{high}}\approx 4000K`$ corresponds to the delocalization of the $`f`$-electrons due to thermal excitations. As can be seen from the decomposition of $`\chi ^{\text{(tot)}}`$, only the $`f`$-contribution is responsible for this feature. The conduction electron contribution $`\chi ^{(s)}`$ still resembles, at and below $`T_{\text{high}}`$, the susceptibility of a system of uncorrelated electrons ($`\chi _{\text{free band}}`$). The delocalization of the $`f`$-electrons is visible in the increase of $`\langle n_{i\sigma }^{(f)}n_{i-\sigma }^{(f)}\rangle `$ for $`T>T_{\text{high}}`$. The Kondo screening sets in at the much lower Kondo temperature $`T_K\approx 100K`$. This corresponds to the temperature where the Kondo resonance appears in the DOS (see figure 1). As already mentioned, the local correlation functions show here only a weak signature of Kondo screening. The Kondo screening for small $`V`$ is a collective effect; the quantities accessible within our theory do not allow a more detailed investigation of this state. To summarize the discussion of the symmetric PAM, we find two different kinds of Kondo singlet formation depending on the hybridization strength. Whereas for small $`V`$ the singlet formation involves non-local or collective screening of the $`f`$-electrons by the conduction band electrons, in the large $`V`$ domain, the singlet formation is dominantly a local process. At every lattice site, one $`f`$\- and one $`s`$-electron couple to form a local Kondo singlet (LKS).
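As a quick numerical check of the two-spin reference values quoted above for the spin-spin correlation function (elementary spin algebra, independent of the PAM itself), the following sketch builds $`S_aS_b`$ on the two-spin Hilbert space and evaluates it in a singlet and in a triplet state:

```python
import numpy as np

# spin-1/2 operators
sx = np.array([[0, 1], [1, 0]]) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]]) / 2

SS = sum(np.kron(s, s) for s in (sx, sy, sz))   # S_a . S_b on the two-spin space

up, dn = np.array([1, 0]), np.array([0, 1])
singlet = (np.kron(up, dn) - np.kron(dn, up)) / np.sqrt(2)
triplet = np.kron(up, up)                       # one representative triplet state

print(np.real(singlet @ SS @ singlet))   # -0.75
print(np.real(triplet @ SS @ triplet))   # +0.25
```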
### III.2 Exhaustion problem One important question concerning the screening behavior is that of exhaustion: What happens if the number of conduction electrons $`n^{(s)}`$ is reduced (the number of $`f`$-electrons is fixed $`n^{(f)}=1`$)? This question was recently brought into attention by NozièresNoz98 . In this section, we want to present our results concerning “exhaustion”. In figure 5, the $`f`$-electron density of states ($`f`$-DOS) is plotted for $`T=0`$ and various $`n^{(s)}`$. The main change in the $`f`$-DOS is the shift of the charge excitations as well as the Kondo resonance towards lower energies. The shift of the charge excitations is due to an adjustment of $`e_f`$ which is necessary to keep the constraint $`n^{(f)}=1`$ with decreasing $`n^{(s)}`$. The Kondo resonance also moves towards lower energies as its position is pinned to the chemical potential $`\mu `$. The coherence gap stays in the vicinity of $`\mu `$, too. However, the relative position of the gap, the Kondo peak and $`\mu `$ changes. Whereas for $`n^{(s)}=1`$, the Kondo resonance and the gap are centered symmetrically around $`\mu `$, for $`n^{(s)}<1`$ the situation becomes asymmetric. The chemical potential moves into the lower half of the resonance. As a direct consequence, the shape of the resonance becomes asymmetric. This strongly resembles the behavior found for the SIAMMWPN99 ; hewson . The relative shift of $`\mu `$ and the Kondo resonance has also an important consequence for the coherence gap. For $`n^{(s)}<1`$ the system is metallic since $`\mu `$ is no longer located within the gap. With decreasing $`n^{(s)}`$, the gap moves further away from the chemical potential. The imaginary part of the self-energy $`\mathrm{}\mathrm{\Sigma }_\sigma (E)`$ at the gap increases thereby since it shows a “Fermi-liquid” $`E^2`$ dependence around $`E=\mu `$. For $`n^{(s)}`$ slightly below unity, the gap is still present in form of a “pseudo-gap”. However, for $`n^{(s)}`$ approaching zero, the gap closes completely. This resembles the behavior found in $`Ce_3Bi_4Pt_3`$ upon doping with lanthanumHCTFL90 . The undoped system is an insulator with a very small gap believed to be a prototypical Kondo insulator. On doping $`La`$, the system becomes a metallic heavy-fermion system. In reference HCTFL90, this is interpreted as due to the de-construction of lattice coherence by disorder. From the behavior of the PAM, one might also conclude that simply the change of the electron density due to doping would also be sufficient to explain this metal-insulator transition. In figure 5 a broadening of the upper charge excitation is observed for $`n^{(s)}0.2`$. Here, the upper charge excitation overlaps with the conduction band which results in a stronger hybridization. This could be prevented by using a larger value for $`U`$. However, the impact of this on the results discussed below is negligible. Let us now turn to the actual problem of exhaustion. In the case of the symmetric PAM ($`n^{(s)}=n^{(f)}=1`$), we have found a singlet ground state with a finite spin gap which is in agreement with QMC resultsBFGS87 and exact statements concerning the ground stateTSU97 . It is not clear how the system will react when the number of conduction electrons is reduced while $`n^{(f)}=1`$ is kept constant. One possibility would be a partial or complete breakdown of the screening since the conduction-band filling is not sufficient to screen the magnetic moments of all $`f`$-electrons. 
However, conduction-band mediated $`f`$-$`f`$ correlations could again lead to a singlet ground state via intersite singlet correlations. This scenario has been confirmed for the two-impurity Kondo problemALJ95 . Recently, the exhaustion problem has been discussed as the origin of a new low-energy scaleNoz98 ; TJF97 ; TJF98 ; VTJK99pre ; PBJ99pre ; TJPF99 . In one of these publicationsVTJK99pre , the iterated perturbation theory has been applied, which is very similar to the approximations used in this paper. There, the authors found a gap close to the chemical potential in the effective hybridization defined by $`\mathrm{}\mathrm{\Delta }_\sigma (E)`$ (see equation (2)). In figure 6, $`\mathrm{}\mathrm{\Delta }_\sigma (E)`$ as well as the conduction band density of states ($`s`$-DOS) are plotted in the vicinity of $`\mu `$ for various conduction band fillings. The parameters correspond to the DOS plotted in figure 5. In the $`s`$-DOS, the coherence gap but no Kondo resonance is visible. The latter appears only in the projection onto $`f`$-states ($`f`$-DOS) and is therefore of pure $`f`$-character. Again, the coherence gap shifts together with $`\mu `$ towards lower energies on reducing $`n^{(s)}`$. Let us focus on $`\mathrm{}\mathrm{\Delta }_\sigma (E)`$, the upper picture in figure 6. Independently of $`n^{(s)}`$ there exists a gap/dip close to the chemical potential. However, in the symmetric case ($`n^{(s)}=1`$) a sharp $`\delta `$-like peak appears exactly at $`E=\mu `$. So $`\mathrm{}\mathrm{\Delta }_\sigma (E=\mu )`$ is large in the symmetric case, but already for any small change in $`n^{(s)}`$, $`\mathrm{}\mathrm{\Delta }_\sigma (E=\mu )`$ becomes small since $`\mu `$ lies within the gap or dip. In reference VTJK99pre, , the authors interpret the dip which they find for $`n^{(s)}=0.4`$ in the effective hybridization as a sign of the exhaustion of the conduction electrons. However, this reduction of $`\mathrm{}\mathrm{\Delta }_\sigma (E)`$ around $`E=\mu `$, which we also find for $`n^{(s)}=0.4`$, continuously develops into a gap for $`n^{(s)}=0.9`$. In the interpretation of the cited work, this would imply that the exhaustion problem is stronger for $`n^{(s)}\lesssim 1`$ than for $`n^{(s)}=0.4`$ since the dip evolves into a true gap. The special case $`n^{(s)}=1`$ with its “preformed gap” surrounding the $`\delta `$-like peak in $`\mathrm{}\mathrm{\Delta }_\sigma (E)`$ would also need further considerations. In our opinion, without clearer justification, this gap/dip cannot be taken as a direct sign of exhaustion. In figure 7, we have plotted the zero-temperature susceptibility for small $`V=0.2`$ and large $`V=0.6`$ as a function of the conduction band occupation number $`n^{(s)}`$. In addition to that, the effective mass $`m^{}=1-\frac{\partial \mathrm{\Sigma }}{\partial E}|_{E=0}`$ is plotted in the inset of the upper panel. From these figures, one gains some insight into the stability of the LKS in the case of exhaustion ($`n^{(s)}<n^{(f)}=1`$). In the small-$`V`$ case, $`\chi ^{(tot)}`$ is roughly proportional to the value of the DOS at $`E=\mu `$ for a less than half-filled conduction band. Both the $`f`$\- and $`s`$-electrons contribute to $`\chi ^{(tot)}`$. Contrary to that, for the LKS-dominated system ($`V=0.6`$), only the $`f`$-electrons react to an external field. The conduction band contribution $`\chi ^{(s)}`$ is negligible for $`n^{(s)}<0.9`$. So we conclude that all conduction electrons are more or less bound into LKS states and are therefore unable to react to an external magnetic field.
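A brief aside on the effective mass quoted above: given the real part of the self-energy on a frequency grid, $`m^{}=1-\frac{\partial \mathrm{\Sigma }}{\partial E}|_{E=0}`$ can be estimated by a local linear fit around the chemical potential. The toy self-energy in the sketch below is purely illustrative and is not the MPT result.

```python
import numpy as np

def effective_mass(E, Sigma_re, window=0.05):
    """m* = 1 - d(Re Sigma)/dE at E = 0, from a linear fit of Re Sigma(E)
    on a few grid points around the chemical potential (E = 0)."""
    mask = np.abs(E) < window
    slope = np.polyfit(E[mask], Sigma_re[mask], 1)[0]
    return 1.0 - slope

# toy Fermi-liquid-like self-energy (illustrative only): Re Sigma = -2*E near E = 0
E = np.linspace(-0.2, 0.2, 401)
print(effective_mass(E, -2.0 * E))   # -> 3.0
```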
Another hint can be derived from the fact that $`\stackrel{}{S}_i^{(f)}\stackrel{}{S}_i^{(s)}`$ decreases linearly with $`n^{(s)}`$ (not plotted). This can be interpreted as a linear decrease of the number of LKS. I.e. all available conduction electrons form a LKS, and $`n^{(\text{bach})}=1n^{(s)}`$ $`f`$-electrons remain unpaired. These are the equivalent of the “bachelor spins” discussed by Nozières for the Kondo lattice modelNoz98 . Following his argumentation, they can be described as a system of $`n^{(\text{bach})}`$ spin-$`\frac{1}{2}`$ fermions. In the KLM these are “hard-core”-fermions since for the localized spins, double occupancy is strictly forbidden. In our case of the PAM, double occupancy is principally possible, only suppressed by the on-site repulsion $`U`$. Therefore, these bachelor fermions should show similarities to a non-degenerate Hubbard model with finite $`U`$. One feature of the Hubbard model is the Mott-Hubbard transitionGKKR96 ; gebhard . For an exactly half-filled system with sufficiently large $`U`$, the otherwise metallic system becomes insulating due to the electron correlations. Approaching the half-filled situation from lower carrier concentrations, the effective mass strongly increases, and diverges at half-filling. Bearing this in mind, the behavior of the effective mass shown in the inset of figure 7 can be understood. The most prominent feature is the increase of $`m^{}`$ for $`n^{(s)}0`$. The latter, however, implies that the above-discussed bachelor-fermion model approaches half-filling, since the number of unpaired $`f`$-electrons goes to unity. In terms of this model, the system is in the proximity of the Mott-Hubbard transition. Please note, that in the PAM with finite hybridization, $`n^{(s)}=0`$ is only possible for extreme parameter conditions, namely $`e_f\mathrm{}`$ and $`U\mathrm{}`$. Our method is not feasible for these parameters. Let us now discuss why the susceptibility is enlarged at $`n^{(s)}=0.9`$. Looking at the partial contributions $`\chi ^{(f)}`$ and $`\chi ^{(s)}`$ at $`n^{(s)}=0.9`$, we note that both take on large values. However, while $`\chi ^{(f)}`$ is positive, the conduction band contribution becomes negative (see lowest panel of figure 8). We believe this is due to the proximity of a magnetically ordered phase. Since we do not allow for antiferromagnetic ordering in our method, we obtain a paramagnetic system for the shown parameters. For larger values of $`U`$, however, we also see an onset of ferromagnetic order. The critical $`U_c`$ is lower for $`n^{(s)}0.9`$ than for other values of $`n^{(s)}`$. So the peak in the susceptibility is in our opinion due to the proximity of the ferromagnetic phase. The opposite signs of $`\chi ^{(f)}`$ and $`\chi ^{(s)}`$ result from the fact that in the ferromagnetic phase, the conduction band will be polarized antiparallel to the $`f`$-levels, as will be shown in a forthcoming publication. However, using quantum Monte Carlo methods, antiferromagnetic order was found for $`n^{(s)}1`$TJF97 . In principle, the investigation of an antiferromagnetic order is also possible within our method and is planned for future studies. The results of reference TJF97, suggest that for $`n^{(f)}=1`$ and $`n^{(s)}1`$ the antiferromagnetic phase will be stable against the ferromagnetic phase. Note that also in the weakly hybridized case, this increase in $`m^{}`$ is visible. 
This is similar to the findings in reference PBJ99pre, and can be understood in a similar fashion as in the large-$`V`$ case (a detailed discussion is given in reference TJF98, ). In figure 8, the temperature dependence of the susceptibility and the on-site interband spin-spin correlation function is plotted for $`V=0.6`$ and $`n^{(s)}\in \{0.4,0.8,0.9\}`$. As can be seen, the temperature where the LKS formation occurs is nearly independent of $`n^{(s)}`$. The LKS formation should not be confused with the potential new low-energy scale discussed in recent publicationsTJF98 ; PBJ99pre . In our results, the only possible indication for this energy scale could be the increase of $`m^{}`$. Complementary methods such as numerical renormalization theoryhewson ; BHP98 ; PBJ99pre should be used to gain more insight into this question. ## IV Conclusions In the present work, we have studied the periodic Anderson model (PAM) using the modified perturbation theory in the context of the dynamical mean-field theory. This approach is well motivated both in the large- and in the small-coupling regime. Furthermore, when applied to the single-impurity Anderson modelMWPN99 and the Hubbard modelPWN97 ; WPN98 ; PHWN98 , it shows good agreement with (numerically) exact methods. Since the method is fast and numerically stable, all parameter regions of the model can conveniently be investigated; this includes $`T=0`$ as well as finite temperatures. The density of states generally consists of the charge excitations of the localized level, the conduction band structure and, for low temperatures, an additional peak, the Kondo resonance. At least for symmetric parameters the latter is split by the coherence gap. The Kondo resonance is ascribed to the phenomenon of Kondo screening, meaning the quenching of the magnetic moment of the localized levels by the conduction electrons. In the case of the symmetric PAM, where the $`f`$-level and the conduction band are both half-filled and the two charge excitations lie symmetrically around the chemical potential, the ground state is always a singlet with a spin gap, which is in accordance with the assumption of complete Kondo screening. Investigating several on-site correlation functions and the susceptibility as a function of the hybridization strength $`V`$, we see a crossover between two qualitatively different regions. Whereas for small $`V`$, the Kondo screening is a collective effect, for intermediate and large $`V`$, the screening is a dominantly local process. At each lattice site a local Kondo singlet exists. It is built up by one conduction- and one $`f`$-electron spin. On reducing the number of conduction electrons, the LKS remain stable. However, due to the unavailability of $`s`$-electrons, a finite number of $`f`$-electrons are unpaired (bachelor fermions). Following a recent reasoning by NozièresNoz98 and othersTJF98 , the low-temperature physics of the system should be describable by an effective model which retains only these bachelor fermions. Our results are compatible with this proposal. In the susceptibility, we found indications of the proximity of magnetically ordered phases. These will be the subject of a forthcoming paper. ## Acknowledgments We wish to thank M. Potthoff for many helpful discussions. Financial support by the Volkswagen foundation is gratefully acknowledged. One of the authors (D. M.) would like to thank the Friedrich-Naumann foundation for supporting this work.
no-problem/0003/quant-ph0003127.html
ar5iv
text
# Reply to Comment by A. Moroz In his comment, Moroz questions the validity of the near band edge (effective mass) approximation to the total photon density of states (DOS) as a useful representation of the local density of states (LDOS) experienced by a single radiating atom or molecule located at a particular position $`\stackrel{}{r}`$ within a photonic crystal (PC). In this approximation, the band edge DOS takes the form: $$\rho (\omega )\approx \text{const}|\omega -\omega _\text{c}|^\eta $$ where $`\eta =-0.5`$ for a 1-d PC and $`\eta =0.5`$ for a 3-d PC. We reassert that this behaviour indeed applies to the LDOS as well as the DOS. However, the frequency range over which this behaviour is realized depends sensitively on $`\stackrel{}{r}`$. In particular, if $`\stackrel{}{r}`$ is chosen near a node of the electromagnetic field intensity $`\left|\stackrel{}{E}(\stackrel{}{r})\right|^2`$, then $`\omega `$ must be chosen very close to $`\omega _\text{c}`$ before the asymptotic behaviour is realized. The seemingly arbitrary exponents obtained by Moroz are simply an artifact of fitting the asymptotic form to numerical data for a frequency $`\omega `$ which is not sufficiently close to $`\omega _\text{c}`$ at certain positions $`\stackrel{}{r}`$. We consider precisely the example quoted by Moroz in his comment and assume that the LDOS has the asymptotic form: $$\rho (\omega ,\stackrel{}{r})=𝒦(\stackrel{}{r})|\omega _\text{c}-\omega |^\eta $$ Near the lower band edge of the first photonic band gap ($`\omega <\omega _\text{c}`$) we define $`u\equiv 1-\frac{\omega }{\omega _\text{c}}>0`$. In order to numerically estimate the exponent $`\eta `$, we write: $$y\equiv \mathrm{log}_{10}\rho =\eta (\mathrm{log}_{10}u+\mathrm{log}_{10}\omega _\text{c})+\mathrm{log}_{10}𝒦(\stackrel{}{r})$$ Using equations (4) and (7) of Moroz’s paper we plot (in Fig. 1a) $`y`$ as a function of $`z\equiv \mathrm{log}_{10}u`$ for 8 different positions $`\stackrel{}{r}`$ in the 1-d unit cell of the example quoted in the above comment. The asymptotic behaviour of $`dy/dz`$ for large negative values of $`z`$ ($`\omega \to \omega _\text{c}`$) yields the exponent $`\eta `$ (see Fig. 1b). In this model the lower band edge mode intensity vanishes at $`x\equiv |\stackrel{}{r}|=0.5`$ (center of air region) and has a maximum at $`x=0.0`$ (center of dielectric slab). For all cases the asymptotic behaviour ($`\omega \to \omega _\text{c}`$) yields the common exponent $`\eta =-0.5`$. However, arbitrary values of $`dy/dz`$, and hence $`\eta `$, may be erroneously inferred by choosing too large a value of $`|\omega -\omega _\text{c}|`$. This is particularly evident near the node of the field intensity. We conclude that although the LDOS is sensitive to the actual position $`\stackrel{}{r}`$, the exponent $`\eta `$ is indeed universal except on a set of measure zero, namely the field intensity nodes. The seemingly arbitrary exponents quoted by Moroz are somewhat misleading. On the other hand, inhomogeneous line broadening is a very important and relevant ingredient which must be incorporated into theoretical models which aim to interpret experiments involving a distribution of atoms in a PC.
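The fitting pitfall described above can be reproduced with a deliberately simple toy model (an illustration of the argument, not the actual LDOS of the 1-d structure considered in the Comment): an LDOS consisting of a weak band-edge singularity on top of a smooth background mimics a detector position near a field-intensity node, and a log-log fit over a window that is not close enough to the band edge returns a spurious exponent. The constants K and B below are arbitrary toy parameters.

```python
import numpy as np

# Toy LDOS near the lower band edge:  rho(u) = K * u**(-1/2) + B,
# with u = 1 - omega/omega_c.  Small K plays the role of a position near a
# field-intensity node; B is a smooth, non-singular background.
K, B = 0.02, 1.0

def rho(u):
    return K * u**(-0.5) + B

def fitted_exponent(u_min, u_max, npts=50):
    # slope of log10(rho) vs log10(u) over the chosen fit window
    u = np.logspace(np.log10(u_min), np.log10(u_max), npts)
    return np.polyfit(np.log10(u), np.log10(rho(u)), 1)[0]

print(fitted_exponent(1e-2, 1e-1))    # far from the edge: ~ -0.05, spurious
print(fitted_exponent(1e-10, 1e-9))   # asymptotic regime: -> -0.5
```

Only when the fit window is pushed deep into the asymptotic regime does the slope approach the true value of -1/2, which is precisely the point made above.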
no-problem/0003/quant-ph0003082.html
ar5iv
text
# Complete quantum teleportation with a Kerr nonlinearity ## Abstract We present a scheme for the quantum teleportation of the polarization state of a photon employing a cross-Kerr medium. The experimental feasibility of the scheme is discussed and we show that, using the recently demonstrated ultraslow light propagation in cold atomic media, our proposal can be realized with presently available technology. PACS numbers: 03.67Hk, 42.65.-k, 03.65.Bz, 42.50.Gy Quantum entanglement is a powerful resource at the basis of the extraordinary development of quantum information. Among the most fascinating examples of the possibilities offered by sharing quantum entanglement are quantum teleportation , quantum dense coding , entanglement swapping , quantum cryptography , and quantum computation . Quantum teleportation is the “reconstruction”, with $`100\%`$ success, of an unknown state given to one station (Alice), performed at another remote station (Bob), on the basis of two bits of classical information sent by Alice to Bob. Perfect teleportation is possible only if the two parties share a maximally entangled state. The most delicate part needed for the effective realization of teleportation is the Bell-state measurement, i.e. the discrimination between the four, maximally entangled, Bell states which has to be performed by Alice and whose result is communicated to Bob through the classical channel. There have been numerous proposals for its realization in different systems and recently successful, pioneering experiments have provided convincing experimental proof-of-principle of the correctness of the teleportation concept. These experiments differ by the degrees of freedom used as qubits and for the different ways in which the Bell-state measurement is performed. The Innsbruck experiment is the conceptually simplest one, since each qubit is represented by the polarization state of a single photon pulse. In this experiment, however, only two out of the four Bell states can be discriminated and therefore the success rate cannot be larger than $`50\%`$ . The Rome experiment employs the entanglement between the spatial and the polarization degrees of freedom of a photon and it is able to distinguish all the corresponding four Bell states completely. However in this scheme the state to be teleported is generated within the apparatus (it cannot come from the outside) and therefore the scheme cannot be used as a computational primitive in a larger quantum network for further information processing, as it has been recently proposed in Ref. . Finally the Caltech experiment is conceptually completely different since it implies the teleportation of the state of a continuous degree of freedom , the mode of an electromagnetic field, employing the entangled two-mode squeezed states at the output of a parametric amplifier. In this case, the Bell-state measurement is replaced by two homodyne measurements and a direct comparison with the original quantum teleportation scheme of Ref. cannot be made. Up to now, only coherent states of the electromagnetic field have been successfully teleported using this scheme. It is therefore desirable to have a scheme for a Bell-state measurement that can be used in the simplest case of the Innsbruck scheme. This would imply the possibility of realizing the first complete verification of the original quantum teleportation scheme and also of having a device useful for other quantum protocols, as quantum dense coding . 
What we need is a device able to discriminate among the four Bell states that can be realized with the polarization-entangled photon pairs produced in Type-II phase matched parametric down conversion , that is $`|\psi ^\pm `$ $`=`$ $`{\displaystyle \frac{|V_1,H_2\pm |H_1,V_2}{\sqrt{2}}}`$ (2) $`|\varphi ^\pm `$ $`=`$ $`{\displaystyle \frac{|V_1,V_2\pm |H_1,H_2}{\sqrt{2}}},`$ (3) where $`|H`$ and $`|V`$ denote the horizontally and vertically polarized one-photon states, respectively, and $`1,2`$ refer to two different spatial modes. It has been recently shown that it is impossible to perform a complete Bell measurement on two-mode polarization states using only linear passive elements (unless the two photons are entangled in more than one degree of freedom ), and for this reason schemes involving some effective nonlinearities, such as resonant atomic interactions , or the Kerr effect , have been proposed. In the present Letter we propose a scheme for a perfect Bell-state discrimination based on a nonlinear optical effect, the cross-phase modulation taking place in Kerr media. In this respect, our scheme is based on a $`\chi ^{(3)}`$ medium as the “Fock-filter” proposal of Ref. . However, our scheme is different and simpler and, above all, is feasible using available technology, since we shall show that the needed crossed-Kerr nonlinearity can be obtained using the recently demonstrated ultraslow light propagation , achieved via electromagnetic induced transparency (EIT) in ensembles of cold atoms. Our “Bell box” is described in Fig. 1 and can be divided into two parts: the left part is composed by three polarization rotators ($`R`$, $`R^{}`$) and by the “quantum phase gate” (QPG) which will be described below, and can be called “the disentangler”, since it realizes the unitary transformation changing each Bell state of Eqs. (Complete quantum teleportation with a Kerr nonlinearity) into one of the four factorized polarization states, i.e. $`|\psi ^+`$ $``$ $`|H_1,V_2`$ (5) $`|\psi ^{}`$ $``$ $`|V_1,V_2`$ (6) $`|\varphi ^+`$ $``$ $`|H_1,H_2`$ (7) $`|\varphi ^{}`$ $``$ $`|V_1,H_2.`$ (8) The right part of the scheme is composed by two polarizing beam-splitters (PBSs) and by four detectors with single-photon sensitivity, and simply serves the purpose of detecting the four states of the factorized polarization basis $`\{|e_1,|e_2,|e_3,|e_4\}`$ (9) $`=\{|H_1,H_2,|H_1,V_2,|V_1,H_2,|V_1,V_2\},`$ (10) where $`\{|e_i\}`$ are the tensor product of the single-photon polarization basis states, $$|H_i=\left(\begin{array}{c}1\\ 0\end{array}\right)_i|V_i=\left(\begin{array}{c}0\\ 1\end{array}\right)_i.$$ (11) Due to the one-to-one correspondence of Eqs. (1), it is clear that the detection of each Bell state corresponds to a different pair of detector clicks, so that they are unambiguously distinguishable. The disentangler, and in particular the QPG, is the most delicate part as concerns the experimental implementation, since it involves a two-qubit operation, i.e., an effective photon-photon interaction. In fact, if $`\stackrel{~}{R}_i`$ is a simple polarization rotation by $`\pi /4`$ radians for mode $`i`$ (and $`\stackrel{~}{R}_i^{}`$ its inverse), i.e., $`|H_i\left(|H_i+|V_i\right)/\sqrt{2}`$, $`|V_i\left(|V_i|H_i\right)/\sqrt{2}`$, we have $$R_1=\stackrel{~}{R}_1I_2,R_2=I_1\stackrel{~}{R}_2,$$ (12) which can be obtained using a $`\lambda /2`$ retardation plate at a $`\pi /8`$ angle. In Eq. (12) $`I_i`$ is the $`2\times 2`$ unit matrix for mode $`i`$. 
The general QPG $`P(\phi )`$ is a universal two-qubit gate as long as $`\phi 0`$ , and in the two-photon polarization basis (9) we are considering here, it can be written as $`|H_1,H_2`$ $``$ $`|H_1,H_2`$ (14) $`|H_1,V_2`$ $``$ $`|H_1,V_2`$ (15) $`|V_1,H_2`$ $``$ $`|V_1,H_2`$ (16) $`|V_1,V_2`$ $``$ $`e^{i\phi }|V_1,V_2.`$ (17) The experimental realization of this gate has been reported in Ref. , in the case when one qubit is given by the internal state of a trapped ion and the other qubit by its two lowest vibrational states, and recently in Ref. , where the two qubits are represented by two circular Rydberg states of a Rb atom and by the two lowest Fock states of a microwave cavity. In the optical case we are interested in, the QPG between two frequency-distinct cavity modes has been experimentally investigated in Ref. , using however weak coherent states instead of single photon pulses, demonstrating therefore only conditional quantum dynamics and not the full quantum transformation of Eqs. (12). As it can be easily checked, the QPG (12) can be realized using a crossed-Kerr interaction involving the vertically polarized modes only $$H_K=\mathrm{}\chi a_{V_1}^{}a_{V_1}a_{V_2}^{}a_{V_2},$$ (18) so that the conditional phase shift is $`\phi =\chi t_{int}`$, where $`t_{int}`$ is the interaction time within the Kerr medium. The disentangler of Fig. 1 realizes the transformation (1) when the conditional phase shift is $`\phi =\pi `$, as it can be checked in a straightforward way by writing the matrix form of the transformation $$R_1^{}R_2P(\pi )R_2^{}$$ (19) of Fig. 1 in the factorized polarization basis (9), which is just the matrix form of Eqs. (1) in the chosen basis. The proposed Bell box is therefore extremely simple and also robust against detector inefficiencies. This is due to the fact that in our scheme, only one photon at most impinges on each of the four detectors. First of all this means that only single photon sensitivity and not single photon resolution is needed, and in this case solid-state photomultipliers can provide up to $`90\%`$ efficiency . Moreover, this implies that the detection scheme is reliable, i.e., it always discriminates the correct Bell state, whenever it answers. In the case of detectors with the same efficiency $`\eta `$, our Bell box gives the (always correct) output with probability $`\eta ^2`$ and it does not give any output (only zero or one photon is detected) with probability $`1\eta ^2`$. As we have already remarked, the most difficult part for the experimental implementation of the scheme is the QPG with a conditional phase shift $`\phi =\pi `$. In fact, realizing the transformation (12) means having a large cross-phase modulation at the single photon level between two traveling-wave pulses, with negligible absorption, which is very demanding. For example, in the experiment of Ref. , a conditional phase-shift $`\phi =16^{}`$ has been measured, which however involved two frequency-distinct cavity modes in a high-finesse cavity. However, the recent demonstration of ultraslow light propagation in a cold gas of sodium atoms and with hot Rb atoms , opens the way for the realization of significant conditional phase shifts also between two traveling single photon pulses. In fact, the extremely slow group velocity is obtained as a consequence of EIT , which however, as originally suggested by Schmidt and Imamoğlu in Ref. , can also be used to achieve giant crossed-Kerr nonlinearities. In fact, Harris and Hau , developing the suggestions of Ref. 
, showed that when the ultraslow group velocity is the dominant feature of the problem, nonlinear optical processes between traveling pulses with low number of photons become feasible. In particular, in the limit of very small group velocity, and therefore with light pulses compressed to a spatial length much smaller than the medium length, they find a conditional phase shift per photon between two pulses (characterized by frequencies $`\omega _{24}`$ and $`\omega _p`$ in ) given by $`\phi =\gamma _{24}\mathrm{\Delta }\omega _{24}/\left(4\gamma _{24}^2+4\mathrm{\Delta }\omega _{24}^2\right)`$, accompanied by a two-photon absorption $`\gamma _{24}^2/\left(4\gamma _{24}^2+4\mathrm{\Delta }\omega _{24}^2\right)`$, where $`\mathrm{\Delta }\omega _{24}`$ is the detuning of one of the two pulses and $`\gamma _{24}`$ the associated linewidth (see Eq. (10) of Ref. ). For a sufficiently large detuning $`\mathrm{\Delta }\omega _{24}\gamma _{24}`$, two-photon absorption is negligible and we have just the desired result, i.e., a significant conditional phase shift between two traveling single photon pulses without appreciable absorption. Unfortunately, in this same limit, the phase shift becomes $`\phi \gamma _{24}/4\mathrm{\Delta }\omega _{24}`$ which cannot be too large and close to $`\pi `$, as we have assumed above in the Bell box scheme. This may be a problem because it is possible to see that if the phase $`\phi `$ of the QPG is not equal to $`\pi `$, the scheme of Fig. 1 is no longer perfect and it does not discriminate the four Bell states with $`100\%`$ success. However, it should be noted that this is not a theoretical limitation, but only a practical drawback of the specific scheme of Ref. . Furthermore, as mentioned above, the QPG represented by $`P(\phi )`$ is a universal two-qubit gate, capable of entangling and disentangling qubits as soon as $`\phi 0`$. Moreover, even though different from $`\pi `$, the conditional phase shift $`\phi `$ is a given and measurable property, and it is reasonable to expect that, using the knowledge of the actual value of $`\phi `$, it is possible to adapt and optimize the teleportation protocol in order to achieve a truly quantum teleportation (i.e., that cannot be achieved with only classical means), even in the presence of an imperfect Bell-state measurement. Optimization means that Bob has to suitably modify the four local unitary transformations he has to perform on the received qubit according to the Bell measurement result communicated by Alice. In the optimized protocol, Bob’s local unitary transformations will now depend on the phase $`\phi `$ of the QPG and will reduce to those of the original proposal in the ideal case of perfect Bell-state discrimination $`\phi =\pi `$. We expect that, $`\phi 0`$, the average fidelity of the teleported state will be always larger than $`2/3`$, as it must be for any truly quantum teleportation of a qubit state . Let us therefore consider a generic one-photon state $`|\psi _1=\alpha |H_1+\beta |V_1`$, which is given to Alice and has to be teleported to Bob, and let us assume that Alice and Bob share the Bell state $`|\psi ^+_{23}=(|H_2V_3+|V_2H_3)/\sqrt{2}`$, so that the input state for the teleportation process is $`|\psi _1|\psi ^+_{23}`$. Alice is provided with the “imperfect” Bell box with a QPG $`P(\phi )`$, so that the disentangler of Fig. 1 will now be described by the transformation $`R_1^{}R_2P(\phi )R_2^{}`$. 
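The disentangling action of the transformation $`R_1^{}R_2P(\phi )R_2^{}`$ can be checked numerically. The sketch below is our own check, written in the product basis $`\{|H_1H_2,|H_1V_2,|V_1H_2,|V_1V_2\}`$ using the $`\pi /4`$ polarization rotation and the QPG defined above: for $`\phi =\pi `$ each Bell state is mapped onto a distinct basis state, while for $`\phi \pi `$ the mapping is no longer perfect.

```python
import numpy as np

H, V = np.array([1.0, 0]), np.array([0, 1.0])
Rt = np.array([[1, -1], [1, 1]]) / np.sqrt(2)        # pi/4 polarization rotation
R1, R2 = np.kron(Rt, np.eye(2)), np.kron(np.eye(2), Rt)

def disentangler(phi):
    P = np.diag([1, 1, 1, np.exp(1j * phi)])          # QPG in the basis HH, HV, VH, VV
    return R1.conj().T @ R2 @ P @ R2.conj().T         # transformation (19), general phi

bell = {"psi+": (np.kron(V, H) + np.kron(H, V)) / np.sqrt(2),
        "psi-": (np.kron(V, H) - np.kron(H, V)) / np.sqrt(2),
        "phi+": (np.kron(V, V) + np.kron(H, H)) / np.sqrt(2),
        "phi-": (np.kron(V, V) - np.kron(H, H)) / np.sqrt(2)}

for phi in (np.pi, 0.6 * np.pi):
    print("phi =", round(phi, 3))
    for name, state in bell.items():
        # phi = pi: exactly one nonzero amplitude; phi != pi: several
        print(" ", name, np.round(disentangler(phi) @ state, 3))
```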
It is easy to check that when $`\phi \pi `$, the four Bell states are no longer completely disentangled and therefore no longer discriminated with $`100\%`$ success. Alice has to perform the Bell-state measurement on modes 1 and 2, and the resulting joint state of the three modes just before the photodetections is $$|\stackrel{~}{\psi }_{123}=\underset{i=1}{\overset{4}{}}|e_i_{12}\widehat{G}_i(\phi )|\psi _3,$$ (20) where $`|e_i_{12}`$ are the factorized basis states (9) and $`\widehat{G}_1(\phi )`$ $`=`$ $`{\displaystyle \frac{1}{2}}\left(\begin{array}{cc}0& ie^{i\phi /2}\mathrm{sin}\frac{\phi }{2}\\ 1& e^{i\phi /2}\mathrm{cos}\frac{\phi }{2}\end{array}\right)`$ (22) $`\widehat{G}_2(\phi )`$ $`=`$ $`{\displaystyle \frac{1}{2}}\left(\begin{array}{cc}1& e^{i\phi /2}\mathrm{cos}\frac{\phi }{2}\\ 0& ie^{i\phi /2}\mathrm{sin}\frac{\phi }{2}\end{array}\right)`$ (23) $`\widehat{G}_3(\phi )`$ $`=`$ $`{\displaystyle \frac{1}{2}}\left(\begin{array}{cc}0& ie^{i\phi /2}\mathrm{sin}\frac{\phi }{2}\\ 1& e^{i\phi /2}\mathrm{cos}\frac{\phi }{2}\end{array}\right)`$ (24) $`\widehat{G}_4(\phi )`$ $`=`$ $`{\displaystyle \frac{1}{2}}\left(\begin{array}{cc}1& e^{i\phi /2}\mathrm{cos}\frac{\phi }{2}\\ 0& ie^{i\phi /2}\mathrm{sin}\frac{\phi }{2}\end{array}\right).`$ (25) When the photons are detected, Alice sends the results through the classical channel to Bob. Bob is left with the photon of mode 3, and applies a local unitary transformation $`\widehat{U}_i(\phi )`$ in correspondence to the $`i`$-th result of the Bell-state measurement. As a consequence, the output state of the teleportation process is $$\rho _{out}=\underset{i=1}{\overset{4}{}}\widehat{U}_i(\phi )\widehat{G}_i(\phi )|\psi _3\psi |\widehat{G}_i(\phi )^{}\widehat{U}_i(\phi )^{}.$$ (26) Since the output state has to reproduce the unknown input state $`|\psi `$ as much as possible, it is evident that to optimize the local unitary transformations $`\widehat{U}_i(\phi )`$, one should “invert” $`\widehat{G}_i(\phi )`$. The best strategy is suggested by the use of the polar decomposition of the matrices $`\widehat{G}_i(\phi )`$, $$\widehat{G}_i(\phi )=\widehat{T}_i(\phi )\widehat{R}_i(\phi ),$$ (27) where $`\widehat{R}_i(\phi )=\sqrt{\widehat{G}_i(\phi )^{}\widehat{G}_i(\phi )}`$ is Hermitian and $`\widehat{T}_i(\phi )`$ unitary, so that Bob’s optimal local unitary transformations will be $$\widehat{U}_i(\phi )=\widehat{T}_i(\phi )^1=\widehat{R}_i(\phi )\widehat{G}_i(\phi )^1.$$ (28) Using Eqs. 
(20), (27) and (28), one finds the following Bob’s optimal unitary transformations $`\widehat{U}_1(\phi )`$ $`=`$ $`\left(\begin{array}{cc}i\mathrm{cos}\frac{\pi +\phi }{4}& \mathrm{sin}\frac{\pi +\phi }{4}\\ ie^{i\phi /2}\mathrm{sin}\frac{\pi +\phi }{4}& e^{i\phi /2}\mathrm{cos}\frac{\pi +\phi }{4}\end{array}\right)`$ (30) $`\widehat{U}_2(\phi )`$ $`=`$ $`\left(\begin{array}{cc}\mathrm{cos}\frac{\pi \phi }{4}& i\mathrm{sin}\frac{\pi \phi }{4}\\ e^{i\phi /2}\mathrm{sin}\frac{\pi \phi }{4}& ie^{i\phi /2}\mathrm{cos}\frac{\pi \phi }{4}\end{array}\right)`$ (31) $`\widehat{U}_3(\phi )`$ $`=`$ $`\left(\begin{array}{cc}i\mathrm{cos}\frac{\pi +\phi }{4}& \mathrm{sin}\frac{\pi +\phi }{4}\\ ie^{i\phi /2}\mathrm{sin}\frac{\pi +\phi }{4}& e^{i\phi /2}\mathrm{cos}\frac{\pi +\phi }{4}\end{array}\right)`$ (32) $`\widehat{U}_4(\phi )`$ $`=`$ $`\left(\begin{array}{cc}\mathrm{cos}\frac{\pi \phi }{4}& i\mathrm{sin}\frac{\pi \phi }{4}\\ e^{i\phi /2}\mathrm{sin}\frac{\pi \phi }{4}& ie^{i\phi /2}\mathrm{cos}\frac{\pi \phi }{4}\end{array}\right),`$ (33) which (once the conditional phase shift is known) can be easily implemented using appropriate birefringent plates and polarization rotators. It can be checked that, in the special case $`\phi =\pi `$, the above optimized teleportation protocol coincides with the original one , since one has $`\widehat{U}_1(\pi )=\sigma _x`$, $`\widehat{U}_2(\pi )=1`$, $`\widehat{U}_3(\pi )=i\sigma _y`$, and $`\widehat{U}_4(\pi )=\sigma _z`$. Finally, we have to check that the proposed teleportation protocol, even though no longer with $`100\%`$ success when $`\phi \pi `$, always implies the realization of a true quantum teleportation, that cannot be achieved with only classical means. This amounts to check that the average fidelity of the output state is larger than $`2/3`$ for $`0<\phi <2\pi `$. For pure qubit states, the average fidelity $`F_{av}`$ is defined as $$F_{av}=\frac{1}{4\pi }𝑑\mathrm{\Omega }\psi |\rho _{out}|\psi ,$$ (34) where the integral is over the Bloch sphere and $`|\psi `$ is the generic input state. Using Eqs. (26) and (28) one has $$\psi |\rho _{out}|\psi =\underset{i=1}{\overset{4}{}}\left|\psi |\widehat{R}_i(\phi )|\psi \right|^2,$$ (35) so that, using the explicit expressions for $`\widehat{R}_i(\phi )`$ that can be obtained from Eqs. (20), and performing the average over the Bloch sphere, one finally finds $$F_{av}(\phi )=\frac{2}{3}+\frac{1}{3}\mathrm{sin}\frac{\phi }{2},$$ (36) which is larger than the upper classical bound $`F_{av}=2/3`$ for $`0<\phi <2\pi `$, as expected. In conclusion, we have presented a physical implementation for the quantum teleportation of the polarization state of single photons, such as those produced in spontaneous parametric down-conversion, based on a crossed-Kerr nonlinearity. In the ideal case, the scheme provides a perfect Bell-state discrimination and it could be implemented using the giant nonlinearities already demonstrated in atomic gases exploiting EIT . Note added in proof. After submission, we have become aware of Ref. which shows that a conditional phase shift $`\phi `$ close to $`\pi `$ could be achieved at single photon level if both light pulses are subject to EIT and propagate with slow but equal group velocities. This fact makes us more confident on the feasibility of the proposed scheme.
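The optimization step and the fidelity formula can also be verified numerically. The matrices below are our reconstruction of the $`\widehat{G}_i(\phi )`$: they follow from applying $`R_1^{}R_2P(\phi )R_2^{}`$ to $`|\psi _1|\psi ^+_{23}`$ and projecting onto the basis states (9). Some minus signs appear to have been lost in the displayed equations above, so the reconstruction is fixed by the requirement, stated in the text, that at $`\phi =\pi `$ the corrections reduce to $`\sigma _x`$, the identity, $`i\sigma _y`$ and $`\sigma _z`$ (up to global phases). The sketch computes Bob's optimal unitary from the polar decomposition of Eqs. (27)-(28) and checks the average fidelity of Eq. (36) by Monte Carlo sampling of the Bloch sphere.

```python
import numpy as np
from numpy.linalg import eigh, inv

def G_matrices(phi):
    # Reconstructed G_i(phi) in the (|H>, |V>) basis of photon 3 (our derivation)
    a, b = (1 - np.exp(1j * phi)) / 4, (1 + np.exp(1j * phi)) / 4
    return [np.array([[0, a], [ 0.5, b]]), np.array([[ 0.5, b], [0, a]]),
            np.array([[0, a], [-0.5, b]]), np.array([[-0.5, b], [0, a]])]

def sqrtm_h(M):                                   # square root of a Hermitian matrix
    w, v = eigh(M)
    return v @ np.diag(np.sqrt(np.clip(w.real, 0, None))) @ v.conj().T

def bob_unitary(G):
    # polar decomposition G = T R, R = sqrt(G^dag G); optimal correction U = T^dag
    R = sqrtm_h(G.conj().T @ G)
    return (G @ inv(R)).conj().T

# at phi = pi the corrections are sigma_x, identity, i*sigma_y, sigma_z (up to phases)
for G in G_matrices(np.pi):
    print(np.round(bob_unitary(G), 6))

def F_average(phi, samples=20000, seed=1):
    # <psi|rho_out|psi> = sum_i |<psi|R_i|psi>|^2, averaged over the Bloch sphere
    rng = np.random.default_rng(seed)
    R = [sqrtm_h(G.conj().T @ G) for G in G_matrices(phi)]
    theta = np.arccos(rng.uniform(-1, 1, samples))
    chi = rng.uniform(0, 2 * np.pi, samples)
    psi = np.stack([np.cos(theta / 2), np.sin(theta / 2) * np.exp(1j * chi)], axis=1)
    return sum(np.abs(np.einsum('ni,ij,nj->n', psi.conj(), Ri, psi))**2 for Ri in R).mean()

for phi in (0.0, np.pi / 2, np.pi):
    print(round(phi, 3), F_average(phi), 2/3 + np.sin(phi / 2) / 3)
```

The sampled average reproduces $`F_{av}(\phi )=\frac{2}{3}+\frac{1}{3}\mathrm{sin}\frac{\phi }{2}`$ within statistical error, confirming that the optimized protocol beats the classical bound of 2/3 for all $`0<\phi <2\pi `$.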
# Two-magnon Raman scattering in insulating cuprates: Modifications of the effective Raman operator

## I Introduction

Raman scattering provides an important tool for examining the structure of antiferromagnetic materials. Even though optical processes in Mott insulators necessarily depend on the energy-bands in a complex way, the Fleury-Loudon-Elliott theory allows one to bypass that complexity and develop a theory for the line-shape of the Raman spectra entirely within the framework of pure spin models. This theory has been highly successful in many cases and is the primary reason why Raman scattering has become an investigative tool for this class of materials. However, in the case of the cuprates such as La<sub>2</sub>CuO<sub>4</sub>, the stoichiometric parent compounds of the high temperature superconducting materials, the Fleury-Loudon-Elliott theory runs into several difficulties. This has been a subject of intense debate, and many explanations have been proposed, ranging from the inadequacy of the theory to novel and exotic microscopic physics in these materials. The goal of this paper is to review the various explanations and to examine how far a simple modification to the effective “Raman Hamiltonian”, which allows two-magnon states with arbitrary total momentum, can help bridge the gap between theory and experiments. It is fair to say that Raman scattering provided some of the earliest accurate estimates for the antiferromagnetic exchange constant in the cuprates. This, by itself, is proof that the peak-frequency of the Raman scattering intensity matches reasonably with theoretical expectations. Furthermore, the lineshape of the spectra is reasonably universal from one material to another within the insulating cuprates, and although there are details in the shape whose dependence on the incident photon energy can be clearly recognized, the gross features of the lineshape are largely independent of such resonance effects. Thus, the discrepancies between theory and experiment only come in when a more detailed calculation of the lineshape is performed within the Fleury-Loudon-Elliott theory. The experimental spectrum is much broader, perhaps by about a factor of 3, and has a clear asymmetry extending towards high energies. The most notable discrepancy with the theory is that in the experiments there is comparable scattering intensity in $`A_{1g}`$ and $`B_{1g}`$ polarizations of incident and outgoing light, whereas theory predicts scattering predominantly in $`B_{1g}`$ geometry only. The fact that the main features of the spectra are so universal suggests that they are intrinsic and significant. The theoretical work focusing on these discrepancies can be grouped into the following categories: (i) Inaccuracies of numerical calculations: Even given a system well described by a nearest-neighbor Heisenberg model and an effective spin-Hamiltonian which describes the Raman scattering process, the calculation of the Raman scattering lineshape remains a challenging task. The spin-wave theory, which works well for higher spin and higher dimensional systems, need not be accurate for a 2D spin-half system. Improved calculations have involved higher-order spin-wave theory, series expansions, exact diagonalization of small systems and finite temperature Quantum Monte Carlo simulations. These calculations have established the first few moments of the spectra quite well.
The Quantum Monte Carlo calculation is perhaps the best in terms of getting the lineshape correct and suggests that the actual lineshape can be fairly different from spin-wave theory. It may be both broader than spin-wave theory and have some of the high energy asymmetry, but perhaps not as much as in the experiments. (ii) The Heisenberg model is not good enough for the cuprates: Other work has focused on extending the nearest-neighbor Heisenberg model in order to get better agreement with the experiments. For example, one could introduce second neighbor antiferromagnetic interactions to explain scattering in $`A_{1g}`$ geometries. A more radical proposal has been the possibility of substantial or dominant ring-exchange terms, which can dramatically broaden the spectra. The consistency of such an approach with other measurements (most notably neutron scattering) has not been shown. (iii) Lineshape depends on resonance: Chubukov and Frenkel and independently Kampf et al. have argued that the lineshape does depend on the incident photon energy, and these resonance features can also make the spectra appear broader and give enhanced scattering at higher frequencies. (iv) Other degrees of freedom, most notably phonons, are important: It has been argued by several authors that coupling between spins and phonons can lead to substantial broadening of the spectra. Calculations in this respect have included modeling phonons by substantial modulation of local coupling constants as well as by spin-wave theory. Again, the consistency of strong spin-phonon couplings with neutron scattering and other measurements has not been shown. In particular, the fact that neutron scattering measurements, especially the temperature dependent correlation length $`\xi (T)`$ and the spin-dynamics, agree remarkably well with the Heisenberg model does not leave room for such couplings. (v) Magnons are not good elementary excitations at short wavelengths: One of the most exciting suggestions from a physics point of view has been to invoke spinons and not magnons as elementary excitations of the system, at least at short wavelengths. Such an approach naturally leads to much broader spectra, and can be considered to be successful at a phenomenological level. The primary difficulty with this approach is that the existence of spinon-like excitations in two dimensions remains highly controversial. (vi) The need to go beyond the spin-subspace to describe the scattering process: The work of Shastry and Shraiman has presented a comprehensive theoretical framework for understanding the Fleury-Loudon-Elliott scheme for effective Raman Hamiltonians starting from an electronic Hamiltonian. However, the cuprate materials are far from the large-U limit where such a scheme can be rigorously shown to work, and thus multiple bands and detailed band-structure may play a role here. However, as noted before, the fact that the spectral features are reasonably universal over different families of materials suggests that a more generic explanation may be appropriate. In this paper, we primarily concern ourselves with the numerical calculation of the Raman scattering lineshape with a modified effective Hamiltonian. Such an approach does not alter the ground state properties and elementary excitations of the system, but only the way in which the Raman scattering process is described within the spin subspace. The basic idea is based on the work of Sawatzky and Lorenzana for optical absorption in the cuprates.
They argued that the optical absorption was assisted by phonons, whose role can be incorporated into the theory by simply assuming that they act as momentum sinks. Thus optical absorption could proceed through excitation of two magnons with arbitrary total momentum. Here we explore the analogous situation for Raman scattering aided by phonons. This immediately leads to scattering in both $`B_{1g}`$ and $`A_{1g}`$ geometries. Furthermore, the spectral features come closer to experiments. Given our finite-size numerical calculations, it is difficult to say whether these are now in complete agreement with experiments.

## II Phonon assisted scattering: the Single Site Operator (SSO)

Let us first examine the rationale behind the very successful Fleury-Loudon theory as formulated by Elliott. Although Raman scattering proceeds through virtual charge excitations, the scattering process can be described by an effective spin Hamiltonian, simply by incorporating the important symmetries of the problem. This is possible because the initial and final states both lie well below the charge-gap and thus the resulting excitation must be a pure spin excitation. Since light has a very long wavelength and the scattering involves the electric field and not the magnetic field, the effective Raman Hamiltonian must have zero total momentum and be a spin-singlet. It must be linear in the polarizations of incoming and outgoing electric field vectors and must be a scalar. If we further assume the dominance of nearest-neighbor superexchange, the effective Raman Hamiltonian is essentially fully determined apart from an overall multiplicative constant. It takes the Fleury-Loudon-Elliott form: $$\mathcal{H}_R=\sum_{\langle ij\rangle}(\vec{\epsilon}_{in}\cdot\widehat{r}_{ij})(\vec{\epsilon}_{out}\cdot\widehat{r}_{ij})\,\vec{S}_i\cdot\vec{S}_j,$$ (1) where the sum runs over the nearest-neighbor pairs, $`\vec{\epsilon}_{in}`$ and $`\vec{\epsilon}_{out}`$ are the incoming and outgoing electric field polarization vectors and $`\widehat{r}_{ij}`$ is a unit vector connecting the sites i and j. Thus in the $`B_{1g}`$ configuration, where the incoming and outgoing light are polarized in the plane of the copper-oxides at right angles to each other and at an angle of $`45`$ degrees from the $`x`$ and $`y`$ axes of the CuO<sub>2</sub> lattice, the effective scattering operator becomes: $$O_{B_{1g}}=\sum_{\langle ij\rangle,x}\vec{S}_i\cdot\vec{S}_j-\sum_{\langle ij\rangle,y}\vec{S}_i\cdot\vec{S}_j$$ (2) where the first sum is over the nearest neighbor bonds parallel to the x-axis, and the second sum is over the nearest neighbor bonds parallel to the y-axis. In contrast, the effective Hamiltonian vanishes in the $`B_{2g}`$ configuration, where the incoming and outgoing light are polarized in the plane of the copper-oxides at right angles to each other, with one being along the $`x`$ and the other along the $`y`$ axis. In the $`A_{1g}`$ configuration, the Fleury-Loudon-Elliott operator becomes $$O_{A_{1g}}=\sum_{\langle ij\rangle,x}\vec{S}_i\cdot\vec{S}_j+\sum_{\langle ij\rangle,y}\vec{S}_i\cdot\vec{S}_j,$$ (3) which is just the Heisenberg Hamiltonian and, thus, results in no scattering. Thus the theory predicts scattering in the $`B_{1g}`$ geometry only. The spectra obtained by treating this effective Hamiltonian in the two-magnon approximation for $`S\geq 1`$ provide a remarkably accurate description of the experiments in K<sub>2</sub>NiF<sub>4</sub> and other materials.
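As a concrete illustration of Eqs. (2) and (3), the sketch below (our own illustration, with all helper names our own; it is not the code used for the results quoted later) assembles $`O_{B_{1g}}`$ and $`O_{A_{1g}}`$ as sparse matrices for a small spin-1/2 cluster with periodic boundary conditions. The single-site operators of Eqs. (4) and (5) below are obtained by simply restricting the sums to the bonds attached to one chosen site.

```python
import numpy as np
import scipy.sparse as sp

# Spin-1/2 operators and the 2x2 identity
sx = sp.csr_matrix(np.array([[0.0, 0.5], [0.5, 0.0]]))
sy = sp.csr_matrix(np.array([[0.0, -0.5j], [0.5j, 0.0]]))
sz = sp.csr_matrix(np.array([[0.5, 0.0], [0.0, -0.5]]))
one = sp.identity(2, format="csr")

def site_op(op, site, n_sites):
    """Embed a single-site operator at `site` in the full 2**n_sites Hilbert space."""
    out = op if site == 0 else one
    for k in range(1, n_sites):
        out = sp.kron(out, op if k == site else one, format="csr")
    return out

def bond(i, j, n):
    """Heisenberg bond S_i . S_j as a sparse matrix."""
    return (site_op(sx, i, n) @ site_op(sx, j, n)
            + site_op(sy, i, n) @ site_op(sy, j, n)
            + site_op(sz, i, n) @ site_op(sz, j, n))

def raman_operators(Lx, Ly):
    """O_B1g and O_A1g of Eqs. (2) and (3) on an Lx x Ly torus."""
    n = Lx * Ly
    idx = lambda x, y: (x % Lx) + Lx * (y % Ly)
    Ox = sp.csr_matrix((2**n, 2**n), dtype=complex)
    Oy = sp.csr_matrix((2**n, 2**n), dtype=complex)
    for x in range(Lx):
        for y in range(Ly):
            Ox = Ox + bond(idx(x, y), idx(x + 1, y), n)   # bonds along x
            Oy = Oy + bond(idx(x, y), idx(x, y + 1), n)   # bonds along y
    return Ox - Oy, Ox + Oy

# Example: a 4 x 3 cluster (Hilbert-space dimension 2**12 = 4096)
O_B1g, O_A1g = raman_operators(4, 3)
```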
Numerical calculations for $`S=1/2`$, and their lack of agreement with the cuprate materials, will be discussed in the following sections. Here we will examine the possibility that phonons or impurities play an important role in the Raman scattering process, even though they do not much affect the system in the absence of incident light. One way to mimic the role of phonons follows from the work of Lorenzana and Sawatzky on optical absorption in antiferromagnets. The key effect of phonons, in their theory, is to act as a momentum sink, allowing absorption via two-magnon states of arbitrary total momentum. This theory has proved to be very successful in describing optical absorption in the quasi-1D material Sr<sub>2</sub>CuO<sub>3</sub>. We can incorporate this idea of phonons acting as momentum sinks in Raman scattering by modifying the Raman scattering operator. The most natural choice is to consider the following single site operator (SSO) for the $`B_{1g}`$ configuration: $$O_{B_{1g}}=\sum_{\langle j\rangle,x}\vec{S}_{\mathbf{0}}\cdot\vec{S}_j-\sum_{\langle j\rangle,y}\vec{S}_{\mathbf{0}}\cdot\vec{S}_j.$$ (4) And, for the $`A_{1g}`$ configuration: $$O_{A_{1g}}=\sum_{\langle j\rangle,x}\vec{S}_{\mathbf{0}}\cdot\vec{S}_j+\sum_{\langle j\rangle,y}\vec{S}_{\mathbf{0}}\cdot\vec{S}_j.$$ (5) Notice that the latter no longer commutes with the antiferromagnetic Heisenberg Hamiltonian, and can thus produce scattering in the $`A_{1g}`$ channel. In general, one would expect that by including two-magnon states at arbitrary momentum the two magnons will scatter less with each other in the final state, and thus lead to a broadening of the spectra. Whether this effect combined with quantum fluctuations can lead to spectra consistent with the experiments becomes a numerical issue.

## III Computational methods

Results shown in this paper will be based on exact diagonalization computations of Raman spectra. The antiferromagnetic Heisenberg model is used for systems of 16 and 26 sites. To obtain a ground state vector $`|\psi _0>`$ having energy $`E_0`$, a conjugate gradient method was used. Once that was accomplished, it was possible to compute zero-temperature Raman spectra using a variety of methods, which we will now discuss. Let us assume that our scattering operator is $`O`$; the equation for the scattering intensity $`I`$ at the shifted frequency $`\omega `$ has the form $$I(\omega )=-\frac{1}{\pi }\,\mathrm{Im}\left[\langle \psi _0|O^{\dagger}\frac{1}{\omega +E_0+i\epsilon -H}O|\psi _0\rangle \right],$$ (6) where $`H`$ is the Hamiltonian of the system and $`\epsilon `$ is a small real number introduced to allow computation. This equation can also be expressed in a Fermi’s golden rule form, $$I(\omega )=\sum_n|\langle \psi _n|O|\psi _0\rangle |^2\,\delta (\omega -(E_n-E_0)),$$ (7) where $`|\psi _n>`$ and $`E_n`$ are eigenvectors and eigenvalues of the system. There are many possible ways to perform this calculation using these two equation forms. One standard method is to use a continued fraction calculation on the first form. Dagotto describes how this calculation can be performed. More recent techniques relying on the second form of the scattering equation are simpler to implement, however. The first one we shall examine, which is sometimes called the spectral decoding technique, was first introduced by Loh and Campbell.
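Before describing the two iterative schemes used in this work, we note that for very small clusters Eq. (7) can be evaluated directly by full diagonalization. The sketch below is our own illustration (it is what makes the structure of the golden-rule form explicit, but it does not scale to the 16- and 26-site systems treated by the methods that follow); the delta functions are broadened into Lorentzians for display, as is also done for the spectral decoding results below.

```python
import numpy as np

def raman_spectrum_ed(H, O, omega, eta=0.05):
    """Raman intensity from Eq. (7) by full diagonalization, with the delta
    functions broadened into Lorentzians of half-width eta (small clusters only).
    H and O are dense matrices as NumPy arrays; omega is an array of shifts."""
    E, V = np.linalg.eigh(H)
    psi0 = V[:, 0]                                    # ground state |psi_0>
    weights = np.abs(V.conj().T @ (O @ psi0)) ** 2    # |<psi_n|O|psi_0>|^2
    shifts = E - E[0]                                 # E_n - E_0
    lorentz = (eta / np.pi) / ((omega[:, None] - shifts[None, :]) ** 2 + eta ** 2)
    return lorentz @ weights
```

It can be fed, for instance, the operators constructed in the previous sketch together with the corresponding Heisenberg Hamiltonian.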
Let us define a set of vectors $`|\varphi _n>`$ using the well-known Lanczos iteration technique: $$|\varphi _0\rangle =\frac{O|\psi _0\rangle }{\sqrt{\langle \psi _0|O^{\dagger}O|\psi _0\rangle }},$$ (8) $$|\varphi _1\rangle =H|\varphi _0\rangle -\frac{\langle \varphi _0|H|\varphi _0\rangle }{\langle \varphi _0|\varphi _0\rangle }|\varphi _0\rangle ,$$ (9) and $$|\varphi _{n+1}\rangle =H|\varphi _n\rangle -\frac{\langle \varphi _n|H|\varphi _n\rangle }{\langle \varphi _n|\varphi _n\rangle }|\varphi _n\rangle -\frac{\langle \varphi _n|\varphi _n\rangle }{\langle \varphi _{n-1}|\varphi _{n-1}\rangle }|\varphi _{n-1}\rangle .$$ (10) With this set of vectors defined, we now have a simple tridiagonal form for the Hamiltonian matrix that can be easily diagonalized. We can now say that the eigenvectors $`|\psi _n>`$ are related to the $`|\varphi _n>`$ by the relationship $$|\psi _n\rangle =\sum_mc_m^n|\varphi _m\rangle .$$ (11) It can now be shown that $$|\langle \psi _n|O|\psi _0\rangle |^2=|c_0^n|^2\,\langle \psi _0|O^{\dagger}O|\psi _0\rangle .$$ (12) The final spectrum can be displayed by replacing the Dirac delta functions in Eq. (7) with finite Lorentzians of an arbitrary width. Spectral decoding is a very useful technique, but it has some disadvantages. It relies on the Lanczos method for eigenvector computation above the ground state, and it is known that the Lanczos method can produce eigensolutions which are either incorrect or are duplicates of other solutions found previously. Techniques exist for checking the validity of solutions provided by the Lanczos method, which we shall call sorting, but they can be cumbersome. It would be preferable to use another technique where sorting is not necessary. For the spectra computed in this paper, the kernel polynomial method (KPM) was used. In KPM, a convergent approximation to the true spectrum is computed using Chebyshev polynomials. The delta function in Eq. (7) is replaced with a Chebyshev expansion of the delta function, and Gibbs damping factors are included to eliminate the Gibbs phenomenon. Calculations are performed using the operator $`X`$ instead of $`H`$, where $`X`$ is simply $`H`$ rescaled so that all energies lie between -1 and 1. Similarly, we use $`x`$ instead of $`\omega `$, where $`x`$ is $`\omega `$ rescaled to lie between 0 and 2. The final calculation involved is $$I(x)=\frac{1}{\pi \sqrt{1-(x-1)^2}}\left[g_0\mu _0+2\sum_mg_mT_m(x-1)\mu _m\right]$$ (13) where the $`T_m`$ are Chebyshev polynomials, the $`g_m`$ are Gibbs damping factors, and the moments $`\mu _m`$ are defined by $$\mu _m=\langle \psi _0|O^{\dagger}T_m(X)O|\psi _0\rangle .$$ (14) The $`T_m(X)`$ here are Chebyshev polynomials of the operator $`X`$. In practice, the moments are most easily calculated using Chebyshev recurrence relations. Using these relations, computing $`M`$ moments requires only $`\frac{M}{2}+1`$ matrix–vector operations. KPM results are equivalent to those of other methods mentioned above. KPM is used here because it is simpler to implement computationally than other methods for a given level of accuracy.

## IV Results

Now let us examine some of our results. Calculations for the 16 site model were performed on a 200MHz personal computer. The Néel state was used as a starting point for ground state calculations. Spin flip symmetry was used to reduce the final size of the Hilbert space to 6435. Memory requirements were minimal, and calculations were accomplished in minutes. The 26 site system spectra were computed on various machines with Alpha processors. Again, a spin flip symmetry was the only symmetry used, the Néel state was used as an initial approximation to the ground state, and the Hilbert space had a dimensionality of 5,200,300.
Several hundred megabytes of RAM were required, and the calculations were completed in several hours time. Fig. 1 shows some computed spectra in the $`B_{1g}`$ configuration for a 16 site Heisenberg model of the square lattice with periodic boundary conditions. If we examine the Fleury-Loudon-Elliott spectrum (the solid curve in part (a)) and the SSO spectrum (the solid curve in part (b)) we see that there is much more activity in the SSO spectrum, and that its greatest activity occurs more toward the peak determined by experiment for La<sub>2</sub>CuO<sub>4</sub> (dashed curves). In Fig. 2, we see the SSO spectrum broadened and shifted slightly so that its main peak is in the same location as the experimental curve. Here we see that the spectrum is beginning to resemble the experimentally determined one fairly closely, with its characteristic asymmetry. In the computed $`A_{1g}`$ spectrum from the single site operator, we find even more encouraging results. For a 16 site model, the SSO spectrum shown in Fig. 3 is peaked at almost exactly the same location (about 4.2J) as the experimentally-determined spectrum. The scaling used here is the same scaling used to match the peak heights for the $`B_{1g}`$ spectrum. It is interesting to note that this scaling, chosen independently of the $`A_{1g}`$ results, puts the peak at exactly the correct height. We find similarly encouraging results with a 26 site model. In Fig. 4, we again see the $`B_{1g}`$ spectra for the SSO (solid curve of part (a)) and the Fleury-Loudon-Elliott operator (solid curve, part (b)) compared with experiment for La<sub>2</sub>CuO<sub>4</sub> (dashed curves). Again we see that the SSO spectrum has more activity in better proportions than the Fleury-Loudon-Elliott curve does. The two-magnon peak (the maximum) is shifted slightly closer to what experiment shows, and there is more broadly distributed four-magnon activity in the SSO curve. In short, the asymmetry and line broadening seen in experiment is better suggested by the SSO spectrum than by the pure Fleury-Loudon-Elliott spectrum. If we again broaden the SSO spectrum and shift it slightly, in the same manner as for the 16 site model, we see in Fig. 5 that we have a better approximation of the experimentally-determined spectrum. In the $`A_{1g}`$ spectrum shown in Fig. 6, we see the same encouraging signs of extra activity from a larger model, as compared with Fig. 3. Lastly, it should be pointed out that the goodness of the fit does appear to improve with increased system size. It would be helpful to see these calculations performed for larger systems. The SSO lacks translational symmetry, unfortunately, which prohibits many reductions in Hilbert space size that would otherwise be possible. For the moment, exact diagonalization of larger systems is beyond the capabilities of the authors’ computing facilities. Only the Quantum Monte Carlo method can deal with substantially larger sizes and should prove specially informative. ## V Conclusion In this paper, we have presented numerical data from exact diagonalization studies that suggest that improved understanding of magnetic Raman scattering in the insulating cuprates can result from a modification of the Fleury-Loudon-Elliott Raman operator. This assumes that phonons participate in the Raman scattering process, acting as momentum sinks and allowing for Raman scattering from two-magnon states with arbitrary total momentum. 
This provides a natural explanation for comparable Raman scattering in $`A_{1g}`$ and $`B_{1g}`$ configurations, and leads to a broadening of the spectra. This is achieved without invoking substantial modulations of local exchange constants, which can strongly affect long-wavelength properties. Due to the limited system sizes, the results presented are not fully conclusive about how close this brings the theoretical results to the experiments. Quantum Monte Carlo simulations may prove helpful in this regard. Given the large number of experiments on insulating cuprates which can be modeled in terms of the square-lattice Heisenberg model with a single nearest-neighbor exchange constant $`J`$, it seems natural that this be regarded as a good model for this system unless clear evidence to the contrary emerges. Raman scattering by itself cannot be invoked to justify more elaborate terms, such as ring-exchange terms, for these systems. Raman scattering is also not the ideal ground for establishing the existence of spinons and other exotic excitations, although in the cuprates it definitely leaves room for such exotic physics. As more direct probes of quasiparticles, photoemission and neutron scattering can be more persuasive in this regard.

###### Acknowledgements.

This work was supported in part by the Campus Laboratory Collaboration of the University of California and by the National Science Foundation under grant number DMR-9986948. Computations were carried out at Lawrence Livermore National Laboratory.
# Charge Order in NaV2O5 studied by EPR ## Abstract We present angular dependent EPR measurements in NaV<sub>2</sub>O<sub>5</sub> at X-band frequencies in the temperature range $`4.2`$K $`T670`$ K. A detailed analysis in terms of the antisymmetric Dzyaloshinski-Moriya and the anisotropic exchange interactions yields the following scheme of charge order: On decreasing temperature a quarter-filled ladder with strong charge disproportions, existing for $`T100`$ K, is followed by zig-zag charge-order fluctuations which become long-range and static below $`T_{\mathrm{SP}}=34`$ K. The observation of an exponential decrease in magnetic susceptibility in $`\alpha `$’-NaV<sub>2</sub>O<sub>5</sub> below 34 K triggered many experimental and theoretical investigations. Following the determination of the crystal structure by Carpy and Galy as space group P$`2_1mmn`$ with linear chains of V<sup>4+</sup> ions (spin $`S=1/2`$) separated by non magnetic V<sup>5+</sup> chains, NaV<sub>2</sub>O<sub>5</sub>, analogous to CuGeO<sub>3</sub>, was classified as inorganic spin-Peierls system. Later on a re-investigation of the crystal structure showed that NaV<sub>2</sub>O<sub>5</sub> has to be considered a quarter-filled ladder system with only one vanadium site in the high-temperature phase described by space group P$`mmn`$. The direction of the ladders is given by the crystallographic $`b`$ axis. Each rung consists of two VO<sub>5</sub> pyramids along the $`a`$ axis, which share one corner at their base. Based on this structure a charge-order transition followed by different kinds of spin order were proposed . A recent investigation suggested a low-temperature phase consisting of modulated ladders with zig-zag charge order alternating with non modulated ladders with vanadium ions of intermediate valence V<sup>4.5+</sup> . Electron-paramagnetic resonance (EPR) is ideally suited for the investigation of the magnetic properties of vanadium systems, since the vanadium ions themselves (as V<sup>4+</sup> with electron configuration $`3d^1`$ in the present case) can be used as a microscopic probe of the spin system. In this paper we present angular dependent EPR measurements at X-band frequencies (9.48 GHz) within the temperature range 4.2 K $`<T<670`$ K. The experimental details concerning the crystal growth of NaV<sub>2</sub>O<sub>5</sub> and EPR measurements have been described in previous papers . For all temperatures the EPR spectrum consists of a single strongly exchange-narrowed Lorentzian line. The EPR intensity does not show any angular dependence within experimental error. Its temperature behavior follows that of a one dimensional antiferromagnetic Heisenberg chain at high temperatures $`T>100`$ K , whereas it decreases exponentially below the transition temperature $`T_{\mathrm{SP}}`$. Strong deviations from the model predictions were observed between 100 K and $`T_{\mathrm{SP}}`$ . The $`g`$-values are found as $`g_\mathrm{a}g_\mathrm{b}1.98`$ and $`g_\mathrm{c}1.94`$, which are typical for V<sup>4+</sup> in octahedral crystal symmetry , and increase only slightly below $`T_{\mathrm{SP}}`$. Here we confine ourselves to the discussion of the angular and temperature dependence of the resonance linewidth, which carries the information about the interactions of the vanadium spins with their local environment. Figure 1 shows the temperature dependence of the linewidth $`\mathrm{\Delta }H`$ with the external magnetic field applied along the three main axis of the crystal. 
Starting with an isotropic value $`\mathrm{\Delta }H=8`$ Oe at the transition temperature $`T_{\mathrm{SP}}=34`$ K, the linewidth increases monotonously with increasing temperature and develops a remarkable anisotropy with respect to the $`c`$ axis of the crystal. In the respective temperature regime the curvature of $`\mathrm{\Delta }H(T)`$ starts with a positive sign and changes to negative values above 100 K. Towards low temperatures $`T<T_{\mathrm{SP}}`$ the line broadens anisotropically again probably due to the unresolved hyperfine structure or due to a spin-glass transition . It has been reported by Yamada et al. that the line broadening in the high-temperature regime $`T>100`$ K must be due to the antisymmetric Dzyaloshinski-Moriya (DM) interaction, which is the only possible mechanism to explain the observed order of magnitude of some 100 Oe for the linewidth and its anisotropy. Estimations for anisotropic exchange and dipole-dipole interactions yield values which are 100 and 1000 times smaller, respectively. The angular dependence of the linewidth is shown in the upper inset of figure 1 at four different temperatures, where the $`b`$ axis was taken as rotation axis perpendicular to the static magnetic field. Following Yamada the data are well described by $$\mathrm{\Delta }H=A_{\mathrm{DM}}(1+\mathrm{cos}^2\vartheta )+\mathrm{\Delta }H_0$$ (1) where $`\vartheta `$ is the polar angle with respect to the $`c`$ axis. $`A_{\mathrm{DM}}`$ is proportional to the strength of the DM interaction and $`\mathrm{\Delta }H_0`$ the residual linewidth due to further relaxation mechanisms. Above 100 K the parameter $`A_{\mathrm{DM}}`$ strongly increases, as it is shown in the lower inset of figure 1 and the experimental ratios $`\mathrm{\Delta }H_\mathrm{c}/\mathrm{\Delta }H_\mathrm{a}`$ and $`\mathrm{\Delta }H_\mathrm{c}/\mathrm{\Delta }H_\mathrm{b}`$ nearly approximate the value of 2, underlining the dominant influence of the DM interaction. Below $`T=100`$ K the results strongly deviate from the high-temperature behavior. The positive curvature of the linewidth indicates a change of the dominant interaction. In the temperature range 34 K $`<T<60`$ K, the maximum linewidth is not found for the magnetic field applied in $`c`$ direction any more, but it appears with respect to the $`a`$ axis. Figure 2 nicely shows this cross-over regime. The angular dependence within the $`b`$-$`c`$ plane shown in the upper inset of figure 2 is of special interest, because here an additional modulation appears, which is well described by $$\mathrm{\Delta }H=A\mathrm{cos}(4\vartheta )+B\mathrm{cos}(2\vartheta )+\mathrm{\Delta }H_0$$ (2) The amplitude $`A`$ of the modulation is about 1 Oe, which is of the order of magnitude estimated for the anisotropic exchange interaction, and increases slightly with decreasing temperature, whereas the prefactor $`B`$, which can be considered to be due to the DM interaction, using $`\mathrm{cos}(2\vartheta )=2\mathrm{cos}^2\vartheta 1`$, as discussed below, strongly decreases. The angular dependence of the EPR linewidth contains important information about the local electronic distribution on the vanadium ladders. We start our discussion with respect to the high-temperature regime $`T>100`$ K, where the linewidth is determined by the DM interaction $$_{\mathrm{ij}}=𝕕_{𝕚𝕛}[𝕊_𝕚\times 𝕊_𝕛].$$ (3) The vanadium spins $`𝕊_𝕚`$ and $`𝕊_𝕛`$ are coupled by superexchange via an oxygen ion. 
The direction of the DM vector $`𝕕_{𝕚𝕛}`$ is determined by $$𝕕_{ij}=d_0[𝕟_{iO}\times 𝕟_{Oj}],$$ (4) where the space vectors $`𝕟_{𝕚𝕆}`$ and $`𝕟_{𝕆𝕛}`$ connect the spins i and j with the oxygen-bridge ion respectively. The maximum linewidth, which assigns the direction of the DM vector, is found for the magnetic field applied parallel to the crystallographic $`c`$ axis. Therefore the relevant oxygen-bridge vectors $`𝕟_{𝕚𝕆}`$ and $`𝕟_{𝕆𝕛}`$ must build an angle smaller than $`180^{}`$ within the $`a`$-$`b`$ plane. Moreover the existence of a non vanishing DM interaction requires a unit cell without inversion center, because otherwise the DM vectors cancel each other, as it is the case for a quarter-filled spin ladder, where each vanadium spin is equally distributed between two V<sup>5+</sup> ions on the rungs of the ladder (cf. Fig. 3a). The structure determined by Carpy and Galy fulfills the condition of asymmetry, because of the separation of the vanadium ladders in V<sup>4+</sup> and V<sup>5+</sup> chains. Here the DM interaction takes place only via one oxygen bridge between two neighboring V<sup>4+</sup> sites along the $`b`$ axis (cf. Fig. 3b). However, using the structural data for the vanadium and oxygen ions under consideration, we find that, with respect to the vanadium positions, the oxygen ions within the chains are farther displaced along the $`c`$ axis $`\mathrm{\Delta }c0.58\mathrm{\AA }`$ than along the $`a`$ axis $`\mathrm{\Delta }a0.28\mathrm{\AA }`$. This yields a dominant contribution of the DM vector pointing into the $`a`$ direction, which is not observed experimentally. We conclude that the EPR data strictly contradict both proposed high-temperature structures: the linear spin chain as well as the symmetric quarter-filled ladder . Recently, detailed optical investigations by Damascelli et al. revealed a local charge disproportion on each rung of the ladders, which also destroys the inversion symmetry yielding a finite DM interaction. Hence the charge distribution has to be taken into account more carefully (cf. Fig. 3c). Following the analysis of Damascelli, the center of the electronic distribution $`x_\mathrm{e}`$ on the rungs of the ladder can be determined. Considering each rung as an independent linear molecule of two V<sup>5+</sup> ions, the eigenstates of the additional electron in between are given by $`\psi _1=u\psi _\mathrm{l}+v\psi _\mathrm{r}`$, where the electron stays near the left-hand side and $`\psi _2=u\psi _\mathrm{r}v\psi _\mathrm{l}`$, where the electron favors the right-hand side. The atomic orbitals $`\psi _\mathrm{r}`$ and $`\psi _\mathrm{l}`$ of the right-hand and left-hand side are occupied according to the following weights: $$u=\frac{1}{\sqrt{2}}\sqrt{1+\frac{\mathrm{\Delta }}{E_{\mathrm{CT}}}},v=\frac{1}{\sqrt{2}}\sqrt{1\frac{\mathrm{\Delta }}{E_{\mathrm{CT}}}}$$ (5) The parameter $`\mathrm{\Delta }`$ describes the asymmetry of the onsite energy of the electron located in $`\psi _\mathrm{r}`$ with respect to $`\psi _\mathrm{l}`$. The charge-transfer energy $`E_{\mathrm{CT}}=\sqrt{\mathrm{\Delta }^2+4t^2}`$ is determined by the hopping integral $`t`$ and resembles the energy splitting between the two eigenstates. We choose the origin of the coordinate system $`x=0`$ in the left-hand ion, giving the position of the right-hand ion at $`x=R`$. 
If the electron favors, for example, the left-hand side, the center of the charge distribution $`x_\mathrm{e}`$ can be obtained from the eigenstate $`\psi _1`$ as $$x_\mathrm{e}=\frac{v^2R+(R/2)\,2uvS_{\mathrm{RL}}}{u^2+v^2+2uvS_{\mathrm{RL}}}\approx \frac{R}{2}\left(1-\frac{\mathrm{\Delta }}{E_{\mathrm{CT}}}\right)$$ (6) where $`S_{\mathrm{RL}}`$ is the overlap integral between $`\psi _\mathrm{r}`$ and $`\psi _\mathrm{l}`$. The last expression was obtained assuming $`S_{\mathrm{RL}}\ll 1`$, which is justified with respect to the distance $`R\approx 3.6\mathrm{\AA }`$ between both ions, and using equation 5. Taking the experimental values $`\mathrm{\Delta }\approx 0.8`$ eV and $`t\approx 0.3`$ eV, we calculate $`x_\mathrm{e}=0.1R\approx 0.36\mathrm{\AA }`$, yielding a shift of the electronic charge distribution by $`0.36\mathrm{\AA }`$ with respect to the center of the left-side V<sup>5+</sup> ion. If now neighboring rungs of the ladder locally obey the same charge distribution, we obtain an overall shift of $`\mathrm{\Delta }a\approx 0.64\mathrm{\AA }`$ of the vanadium spins with respect to the connecting oxygen ion. This yields a V-O-V bridge angle of about $`140^{\circ}`$ within the $`a`$-$`b`$ plane and therefore a dominant DM contribution in the $`c`$ direction. The EPR data provide strong experimental evidence for the appearance of charge disproportions in the quarter-filled spin ladder at high temperatures. The peculiarities observed in the cross-over regime (34 K $`<T<60`$ K) can be described in the framework of the competition of antisymmetric DM exchange and symmetric anisotropic exchange, which has been discussed by Yamada et al. for the one dimensional antiferromagnet KCuF<sub>3</sub> and more extensively for ferromagnetic Cu layers by Soos et al. In these papers the evolution of the linewidth with temperature is calculated for both interactions. In both cases the linewidth saturates at high temperatures: the antisymmetric DM exchange yields a strong temperature dependence of the linewidth with a negative curvature within the whole temperature regime. The anisotropic exchange gives rise only to a weak temperature dependence of the linewidth with a positive curvature at low temperatures changing to a negative one at high temperatures. Combining both contributions, we qualitatively obtain the temperature dependence of the linewidth observed in NaV<sub>2</sub>O<sub>5</sub> experimentally. Comparing these data to those of the one dimensional Heisenberg antiferromagnet KCuF<sub>3</sub> in more detail, we recognize an important difference. Whereas in KCuF<sub>3</sub>, in agreement with theoretical predictions, the DM interaction vanishes exactly at $`T=0`$, for NaV<sub>2</sub>O<sub>5</sub> the extrapolation of the high-temperature data suggests the DM interaction to disappear already at 100 K, as documented in the lower inset of figure 1. This indicates that already for temperatures $`T\lesssim 100`$ K the charge disproportions change to a symmetric zig-zag order of V<sup>4+</sup> and V<sup>5+</sup> ions in the ladder, where the DM vectors cancel each other (cf. Fig. 3d). This result is in good agreement with reports of charge-order fluctuations for $`T_{\mathrm{SP}}\leq T\lesssim 80`$ K from Raman spectroscopy and the onset of a static long-range zig-zag order below $`T_{\mathrm{SP}}`$ for ultrasonic and dielectric results. Finally, the $`\pi /2`$-periodic modulation of the angular dependence in the $`b`$-$`c`$ plane according to equation 2 can be understood in terms of the anisotropic exchange only. The DM interaction produces $`\pi `$-periodic modulations in any case.
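The numbers quoted in the charge-center estimate above follow directly from Eqs. (5) and (6); the few lines below (our own check, not part of the original analysis) reproduce them.

```python
import numpy as np

# Check of Eqs. (5)-(6) with the values quoted in the text:
# Delta = 0.8 eV, t = 0.3 eV, rung length R = 3.6 Angstrom, overlap S_RL neglected.
Delta, t, R = 0.8, 0.3, 3.6
E_CT = np.sqrt(Delta**2 + 4.0 * t**2)      # charge-transfer energy: 1.0 eV
v2 = 0.5 * (1.0 - Delta / E_CT)            # weight of the far orbital, Eq. (5): 0.1
x_e = v2 * R                               # charge center, Eq. (6): 0.36 Angstrom = 0.1 R
print(E_CT, v2, x_e)
```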
The anisotropic-exchange interaction yields $`\pi /2`$-periodic modulations only if its secular contributions are enhanced with respect to the non-secular contributions by spin diffusion, which is characteristic for low dimensional systems. As we will report in a more detailed subsequent publication, the fact that the $`\pi /2`$-periodic modulation is only observed in the $`b`$-$`c`$ plane, whereas it occurs in neither the $`a`$-$`b`$ nor the $`a`$-$`c`$ plane, means that all elements of the anisotropic exchange tensor vanish except $`J_{\mathrm{bc}}\neq 0`$. Under these conditions, the prefactors of equation 2 represent the anisotropic ($`A\propto J_{\mathrm{bc}}^2`$) and antisymmetric ($`B\propto A_{\mathrm{DM}}`$) exchange. From the lower inset of figure 2 we obtain that for 60 K $`>T>34`$ K the Dzyaloshinsky-Moriya interaction decreases more and more with decreasing temperature, whereas the anisotropic exchange slightly increases. The anisotropic exchange parameter $`J_{\mathrm{bc}}`$ is due to the coupling between electron spins of adjacent layers in the $`c`$ direction. This is a strong hint that the ordering below $`T_{\mathrm{SP}}`$ involves vanadium ions of neighboring layers. In conclusion, the EPR data confirm the structure of a quarter-filled spin ladder with strong local charge disproportions at high temperatures $`T>100`$ K, which result in a non-vanishing DM interaction (Fig. 3c). The direction of the DM vector is determined by the V-O-V bridge angle of about $`140^{\circ}`$ along the chains within the $`a`$-$`b`$ plane. The pronounced weakening of the DM interaction below 100 K but far above $`T_{\mathrm{SP}}=34`$ K indicates the onset of zig-zag charge-order fluctuations, which become long-range and static below $`T_{\mathrm{SP}}`$ (Fig. 3d). The zig-zag structure probably results from inter-layer couplings. This work was supported by the Bundesministerium für Bildung und Forschung (BMBF) under Contract No. EKM 13N6917/0 and by the Sonderforschungsbereich 484 of the Deutsche Forschungsgemeinschaft. M. V. E. was partially supported by RFFI Grant 00-02-17597.
# Dynamical models for sand ripples beneath surface waves ## I Introduction When a flat surface of sand is exposed to the flow of air or water, patterns known as ripples, dunes, sandwaves and draas are formed . Here we focus on the so-called vortex ripples (Fig. 1) which are created by oscillatory fluid flow, e.g., beneath surface waves. Ripples are of interest to coastal engineers since their properties determine the friction of the flow in the coastal region , the dissipation of surface waves and the net sediment transport over the ripples . More recently ripples have attracted the attention of physicists interested in non-equilibrium systems . The physics underlying sand ripple formation involves the interaction between the turbulent fluid flow and a granular medium, and is therefore extremely complex. A description of the pattern forming aspects is hindered by the strong nonlinearity of the fully developed ripples due to the subcritical nature of the initial bifurcation from a flat bed. Previous theoretical studies of this initial bifurcation have described the onset of ripple formation. Vortex ripple pattern formation occurs, however, far from equilibrium: typical wavelengths of fully developed ripples can be a factor five larger than those predicted by (weakly nonlinear) analysis . In this paper we will discuss the pattern forming aspects of fully developed vortex ripples. Many of the problems associated with the complicated underlying phenomenology can be circumvented by noting that the sizes of the ripples are the most relevant parameter for determination of their dynamics; further details of their shapes are not important. Dynamical equations for the evolution of the ripples can then be constructed once the mass exchange between ripples of certain sizes is known. We base our expression for this mass exchange on detailed numerical simulations of the flow and sand transport over vortex ripples (see below), hence going beyond a pure “toy-model” approach. As far as we are aware, the model presented here is the first to capture both instabilities and coarsening of fully developed vortex ripples. The outline of the paper is as follows. We start with a brief description of the main phenomenology of ripples in section II. Although the amplitude of the fluid oscillations determines the length of the ripples, a dimensional analysis (section II A) reveals that the most relevant dimensionless control parameter is the Shields parameter which characterizes stress at the sandy surface. We discuss our numerical simulations of the mass exchange between vortex ripples in section II B. Section III is devoted to the formulation of simple ripple models in one-dimensional geometries. The linear stability of these models is performed in section III B, and the coarsening and selection of the final ripple patterns starting from random initial conditions is discussed in section III D. In section IV we extend our model to two dimensions and discuss the impact of defect motion on the selection of the final two–dimensional pattern. ## II Vortex Ripples Following the much earlier work of Ayrton , the study of vortex ripples was taken up again by Bagnold in 1946 . In this seminal study, Bagnold distinguished between rolling grain ripples and vortex ripples. The former are generated when starting from an unstable flat bed and consist of small triangular ridges separated by a comparatively long stretch of flat bed. 
These rolling grain ripples grow and coarsen to become vortex ripples with no flat bed between them. Here the flow is dominated by separation bubbles (vortices) on the lee sides of the fully developed ripples. We will concentrate on these fully developed vortex ripples, since recent studies have confirmed that rolling grain ripples essentially constitute a transient. Many experiments have studied the average wavelength of fully developed ripples as a function of, e.g., the amplitude and frequency of the fluid motion. It appears that the (dimensional) length of the ripples ($`\lambda _{dim}`$) is proportional to the amplitude of the oscillatory flow ($`a`$), and roughly independent of its frequency. Estimates in the literature of the proportionality constant $`\lambda _{dim}/a`$ range from one to two, with a preference for values around 1.3 (see Fig. 8 in ). Recently the ripples have also been studied from the viewpoint of pattern formation. Both Scherer et al. and Stegner and Wesfreid studied a one-dimensional annular system in which the conservation of sand is guaranteed. Stegner and Wesfreid observed strong hysteresis when the driving amplitude of fully developed ripples was ramped up and down: an increase in the amplitude of the driving yielded larger ripples, while for a decrease, the ripples did not change length. Lofquist also observed hysteretic behavior, but in this case the ripples were initially stable for both an increase and a decrease of the driving amplitude. Hysteresis of the ripples was also observed in a recent set of field measurements.

### A Dimensional analysis and setup of the problem

Ripples are governed by a large number of dimensional parameters which characterize the fluid flow and the sand. We will show that while in general three dimensionless parameters (the density ratio of fluid and sand grains $`s`$, the settling velocity $`w_s`$ and the maximum Shields parameter $`\theta _{max}`$) characterize the system, for the case of interest here (sand/water systems in the regime where suspension is unimportant) the only free parameter is the Shields parameter. Ripple formation is driven by an oscillatory fluid motion with amplitude $`a`$ and angular frequency $`\omega `$. The Reynolds number $`Re`$ for this situation is $`a^2\omega /\nu `$, where $`\nu `$ is the fluid viscosity. For water in a typical experimental situation ($`a=5`$ cm, $`\omega =3`$ s<sup>-1</sup>), $`Re`$ is of order $`10^3`$, and the flow is turbulent. Therefore, large scale flow structures such as separation bubbles are independent of the Reynolds number and hence of viscosity. For turbulent flow, the roughness of the bed is of minor importance as long as the typical grain sizes are much smaller than $`a`$. The only relevant length scale is then $`a`$, which we use to define the non-dimensional ripple length as $`\lambda =\lambda _{dim}/a`$. The large scale flow is then completely specified by the boundary conditions, i.e., the shape of the ripples. The sand introduces four new dimensional parameters into the problem. These are, respectively, the density of water $`\rho _w`$ and sand $`\rho _s`$, the median diameter of the grains $`d`$ and gravity $`g`$. From these we form the following three non-dimensional parameters: $$s=\frac{\rho _s}{\rho _w},\qquad w_s=\frac{w_{s.dim}}{a\omega },\qquad \theta =\frac{\tau _{bed}}{\rho _w(s-1)gd}.$$ (1) The relative density of the grains $`s`$ has a value of $`2.65`$ for quartz sand in water.
The settling velocity $`w_s`$ characterizes the amount of sand kept in suspension; here we assume a regime where the settling velocity is large ($`w_s\gtrsim 0.15`$) such that suspension is not important. This leaves us with the last parameter, $`\theta `$, which is known as the Shields parameter. The Shields parameter expresses the ratio between the drag and gravitational forces on a single grain and depends on the shear stress $`\tau _{bed}(x,t)`$, which varies with time and along the profile of the ripple. Following earlier work, we propose to use the maximum shear stress on a flat bed, $`\tau _{max}`$, to characterize the flow. For laminar flow $`\tau _{max}`$ can be found exactly from the solution of Stokes’ second problem. For turbulent conditions, which prevail here, an analytical expression does not exist. We will follow coastal engineers in using an empirical relation for the maximum shear stress: $$\tau _{max}=0.02\rho _w\left(\frac{a}{k_N}\right)^{-0.25}(a\omega )^2,$$ (2) where $`k_N`$ is the roughness of the bed. Note that the instantaneous Shields parameter on a rippled bed $`\theta (x,t)`$ can be several times larger than $`\theta _{max}`$. The transport of sand $`q`$ takes place in a thin layer above the bed, the so-called bed load layer (for an introduction to sediment transport see chapter 7 in ref. ). The non-dimensionalized flux of sand $`\varphi \equiv q/\sqrt{g(s-1)d^3}`$ in the bed load layer is a function of the local Shields parameter and can be modeled as: $$\varphi =\alpha (\theta -\theta _c)^\beta .$$ (3) When $`\theta (x,t)`$ is smaller than a critical value $`\theta _c`$ for all $`x`$, which for turbulent boundary layers is approximately $`0.06`$, sand grains do not move and the ripple profile freezes. The constants $`\alpha `$ and $`\beta `$ have been determined empirically by Meyer-Peter and Müller to be approximately $`\alpha =8`$ and $`\beta =1.5`$, which are in good agreement with theoretical estimates. The formation and the dynamical properties of the ripples are mainly determined by the fluid flow, so the exact values of the constants $`\alpha `$ and $`\beta `$, together with the detailed form of Eq. (3), turn out to be relatively unimportant for the content of this work.

### B Numerical studies and mass transport

The computational model that we have developed to study the ripples calculates turbulent fluid flow over ripples based on the standard $`k`$-$`\omega `$ turbulence model. Once this flow is known, the sediment transport, which is governed by the shear stress on the bed, can be calculated from Eq. (3). In Fig. 2 we show some results for the flow and the non-dimensionalized shear stress $`\varphi `$ for $`\lambda =1.15`$. We see that there are two mechanisms that generate the shear stress on the bed, namely the converging flow on the “wind” side of the ripple (here left) and the separation bubble formed on the lee (right) side. Typically, these stresses are several times stronger than the stresses on a flat bed. The separation bubble, where the flow near the bed is directed opposite to the mean flow direction, is clearly visible. This bubble moves out into the trough of the ripple ($`\omega t=90^{\circ}`$), where it stays ($`\omega t=150^{\circ}`$) until it is thrown over the crest as the flow reverses. The shear stresses are uphill on both sides of the ripple, and consequently sand is transported from the trough toward the crest; the result is a steepening of the ripple profile. This steepening continues until the slopes of the ripple reach the angle of repose, when avalanches limit the growth of the ripple slopes.
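As an aside, the empirical relations (2) and (3) introduced above are straightforward to evaluate; the sketch below (our own illustration) computes the maximum Shields parameter and the corresponding bed-load flux for typical laboratory values. The bed roughness $`k_N=2.5d`$ used in the example is a common engineering choice and is our assumption here, not a value taken from the flow model of this paper.

```python
import numpy as np

def shields_max(a, omega, d, k_N, s=2.65, rho_w=1000.0, g=9.81):
    """Maximum Shields parameter on a flat bed, using the empirical stress of Eq. (2)."""
    tau_max = 0.02 * rho_w * (a / k_N) ** (-0.25) * (a * omega) ** 2
    return tau_max / (rho_w * (s - 1.0) * g * d)

def bedload_flux(theta, theta_c=0.06, alpha=8.0, beta=1.5):
    """Non-dimensional Meyer-Peter-Mueller bed-load flux of Eq. (3); zero below threshold."""
    return alpha * np.clip(theta - theta_c, 0.0, None) ** beta

# Example: a = 10 cm, omega = 3 s^-1, d = 0.2 mm quartz sand, k_N = 2.5 d (assumed).
d = 2.0e-4
theta_max = shields_max(a=0.10, omega=3.0, d=d, k_N=2.5 * d)
print(theta_max, bedload_flux(theta_max))   # theta_max is roughly 0.15 here
```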
As a consequence, most slopes of the fully developed ripples are close to the angle of repose. These fully developed ripples are thus approximately triangular, joined by smooth troughs, which is also evident from experiments. Ripples interact by exchanging sand with their neighbours over the troughs. The amount of this mass flow is closely connected to the extension and strength of the separation bubble. We have studied this mass transport as a function of the non-dimensional ripple size $`\lambda `$. In Fig. 3 we show the net sediment transport during the first half wave period, for short ($`\lambda =0.6`$), medium ($`\lambda =1.0`$) and large ripples ($`\lambda =2.0`$). For short ripples the separation bubble almost covers the space between the two ripple crests, but it is not very strong, giving rise to a small transport. For long ripples, the separation bubble does not reach over the trough, again giving only a small mass exchange between adjacent ripples. Most mass is exchanged for medium-sized ripples, where the separation bubble is both strong and reaches over the trough. We define $`f`$ as the amount of sand transported over the trough during the first half wave period: $$f(\lambda )=-\int_0^{\pi /\omega }\varphi (x_{tr},t)\,dt,$$ (4) where $`x_{tr}`$ is the position of the trough. The minus sign is simply related to the fact that the fluid and mass flows have opposite directions during each half period; here we wish to have a positive $`f(\lambda )`$. The rescaled mass exchange $`f(\lambda )/f(\lambda =1.0)`$ is shown in Fig. 4 for values of $`\theta _{max}`$ ranging from $`0.075`$ to $`0.75`$. The rescaled graphs of the mass exchange (i) collapse in good approximation, and (ii) have a single maximum around $`\lambda =1.0`$. In our model, developed in section III below, we will incorporate these two properties. If the critical Shields parameter had been zero, the rescaling factor $`f(\lambda =1.0)`$ would have been proportional to $`\theta _{max}^{1.5}`$. That this is almost, but not exactly, the case is seen in the inset in Fig. 4.

## III Discrete models for one-dimensional ripples

In this section we will introduce and study simple models for ripples in one-dimensional geometries. We assume that the angles of the ripple slopes are fixed, so that the only degrees of freedom are the lengths of their left and right slopes. Two different versions of the model will be described. In the simplest case we only take the total ripple sizes $`\lambda _i`$ into account (see Fig. 5). The ensuing “minimal model” is formulated in section III A, and is analyzed theoretically in section III B. A more refined model which takes the lengths of left and right slopes into account is presented in section III C (see Fig. 6), and numerical simulations of this model are presented in section III D.

### A Minimal model

In this model ripples are triangular and symmetric and characterized by their length $`\lambda _i`$. We will now determine the mass transfer between two ripples with lengths $`\lambda _1`$ and $`\lambda _2`$ (Fig. 5) from the information that we have for the mass transfer between equal ripples. When $`\lambda _1`$ and $`\lambda _2`$ are approximately equal, one expects the size and strength of the separation bubble emanating from the crest of ripple 1 to be independent of the size of ripple 2. This is our central assumption: the mass transport during a half period only depends on the size of the ripple that creates the separation bubble.
Let us denote the first half period of the driving, when the flow is from left to right, by a subscript $`I`$, and the second half by $`II`$. Under our assumption stated above, we obtain $`\mathrm{\Delta }m_I=f(\lambda _1)`$ and $`\mathrm{\Delta }m_{II}=-f(\lambda _2)`$, where $`\mathrm{\Delta }m_I`$ denotes the change in the mass of ripple 1 in the first half period. During each half period, the amount of mass transported is small in comparison to the mass of a single ripple. We therefore can neglect changes in the ripple shapes during a half period, and obtain the mass flow during a full period, $`\mathrm{\Delta }m`$, by simply adding up the half period mass-flows: $$\mathrm{\Delta }m=f(\lambda _1)-f(\lambda _2).$$ (5) Clearly Eq. (5) can be extended to the case of a row of ripples. Then the mass-flow to ripple $`i`$, $`\mathrm{\Delta }m_i`$, is due to interactions with both ripple $`i-1`$ and $`i+1`$: $`\mathrm{\Delta }m_i=2f(\lambda _i)-f(\lambda _{i+1})-f(\lambda _{i-1})`$. To close the equations we need to relate the mass flow to a change in the size of the ripples. Since the mass-flow is small, it is reasonable to assume that the change in ripple size is linear in the mass transport. The greatest simplification is obtained if we assume all ripples to be of nearly equal size, so that the ratio $`\mathrm{\Delta }m/\mathrm{\Delta }\lambda `$ is equal for all ripples. Taking the continuum time limit and rescaling time to absorb a proportionality constant we obtain: $$d\lambda _i/dt=-f(\lambda _{i-1})+2f(\lambda _i)-f(\lambda _{i+1}).$$ (6) The total length of a system of ripples evolving according to Eq. (6) is conserved, but the total mass is not; we will discuss this further in section III C. Finally, we supplement the model with an annihilation rule which removes ripples that have shrunk to size zero, and a creation rule, which adds ripples in the troughs between ripples of sizes larger than a certain length $`\lambda _{max}`$ that will be specified in the next section.

### B Equilibria and stability

There are three types of equilibria in the minimal model (6). (i) Homogeneous states where all $`\lambda `$’s are equal. (ii) “Period two” states, for which $`f(\lambda _i)=f(\lambda _{i+1})`$ but $`\lambda _i\neq \lambda _{i+1}`$ (see Fig. 16 in for similar states). (iii) More complicated equilibria constructed by arbitrary juxtapositions of ripples of lengths $`\lambda _a`$ or $`\lambda _b`$ when $`f(\lambda _a)=f(\lambda _b)`$. The linear stability of the homogeneous state follows from setting $`\lambda _i=\lambda _{eq}+\delta _i`$ and linearizing Eq. (6): $$d\delta _i/dt=-f^{\prime}(\lambda _{eq})(\delta _{i-1}-2\delta _i+\delta _{i+1}).$$ (7) This is the linear stability equation for the space-discretized diffusion equation, with diffusion coefficient $`-f^{\prime}(\lambda _{eq})`$, and the sign of $`f^{\prime}`$ will be important. As we demonstrated in section II B, the mass transport $`f`$ displays a single maximum as a function of $`\lambda `$ at a value that we will refer to as $`\lambda _{min}`$. When $`\lambda _{eq}`$ is larger (smaller) than $`\lambda _{min}`$, $`-f^{\prime}(\lambda _{eq})`$ is positive (negative) and the pattern is stable (unstable). Hence the smallest possible stable wavelength is $`\lambda _{min}`$, where $`f`$ has a maximum. This instability can be seen directly from the mass transport: when we inspect two unequal adjacent ripples with sizes larger than $`\lambda _{min}`$ we obtain from Eq.
(5) that mass will flow from the larger to the smaller ripple, hence leading to a stable equilibrium, while if their sizes are smaller than $`\lambda _{min}`$ mass flows from the smaller to the larger ripple, leading to an instability. An additional instability occurs for large ripples when their troughs lie outside the separation zone (see $`\lambda =2.0`$ in Fig. 3); in this case the flow creates new ripples in the troughs. This instability has been observed in experiments and also in our numerical studies. This instability is consistent with our models when we assume that $`f`$ is defined for arbitrarily small ripples. For a homogeneous pattern of large ripples where $`f(\lambda _{eq})<f(0)`$, infinitesimal ripples inserted between the large ripples will gain mass and grow, and the maximum value $`\lambda _{max}`$ where homogeneous patterns are stable is given by $`f(\lambda _{max})=f(0)`$. This is the motivation for having a creation rule in the model. The period-two and more complicated equilibrium states can be shown to be unstable in our framework. Thus our model illustrates an important consequence of the shape of the mass exchange function. There is a band of wavelengths for which ripple patterns are stable; outside this band, short wavelength instabilities occur.

### C Refined model

Both our numerical studies and experiments frequently display ripples that are asymmetric during their evolution (although, on average, they are not). We extend the minimal model from the previous section to allow for asymmetric ripples by characterizing the ripples by the lengths of both their left ($`l_i`$) and right ($`r_i`$) slopes; obviously $`\lambda _i=l_i+r_i`$ (see Fig. 6). In addition, such a model can be tuned so as to conserve mass. As before we assume that the lee side of the ripples determines the size of the separation bubbles. During the first half period the bubble takes $`\mathrm{\Delta }m_I`$ mass from the left slope of ripple $`i+1`$, and transports this mass to ripple $`i`$; the ratio between the mass deposited on the left and right slopes of ripple $`i`$ is given by a parameter $`\sigma `$ that we always fix at a value of $`0.5`$ (see Fig. 6). The mass flow in the first half period is therefore: $$\mathrm{\Delta }m_{li,I}=-f(2r_{i-1})+(1-\sigma )f(2r_i),$$ (8) $$\mathrm{\Delta }m_{ri,I}=\sigma f(2r_i).$$ (9) The mass-flow in the second half period follows by symmetry. Assuming that the mass transport is small, we can neglect the change in ripple size during one half cycle, and add the contributions from each half period. To obtain a closed set of dynamical equations we have to establish how the lengths $`l_i`$ and $`r_i`$ evolve under a certain mass flow. When an amount of mass $`M`$ is deposited on the right slope of ripple $`i`$, we incorporate this by an increase of $`l_i`$ and a subsequent decrease of $`l_{i+1}`$; the length $`r_i`$ itself does not change.
Assuming for simplicity the angle of repose to be $`45^{\circ }`$, we can calculate the volume of the slab of deposited sand and find that the change in the length of nearby ripples is: $`\mathrm{\Delta }l_i`$ $`=`$ $`\mathrm{\Delta }m_{ri}/(2r_i)`$ (10) $`\mathrm{\Delta }l_{i+1}`$ $`=`$ $`-\mathrm{\Delta }m_{ri}/(2r_i).`$ (11) Ignoring higher order effects we obtain the total change in the length as a function of the mass changes as: $`\mathrm{\Delta }r_i`$ $`=`$ $`-{\displaystyle \frac{\mathrm{\Delta }m_{li+1}}{2l_{i+1}}}+{\displaystyle \frac{\mathrm{\Delta }m_{li}}{2l_i}}`$ (12) $`\mathrm{\Delta }l_i`$ $`=`$ $`{\displaystyle \frac{\mathrm{\Delta }m_{ri}}{2r_i}}-{\displaystyle \frac{\mathrm{\Delta }m_{ri-1}}{2r_{i-1}}}.`$ (13) This relation together with the mass flow from (9) defines the refined model. This model has the same linear stability properties as the minimal model defined in Eq. (5). The total length of the system is conserved, and the total mass is approximately conserved. The masses which are ignored are associated with the small areas that are cross-hatched in Fig. 6. It is possible to formulate the model in a strictly mass-conserving manner, by updating the slope lengths when both removing and depositing mass, but this does not alter the model in any substantial way. In our numerical simulations two different forms of the mass transport function $`f(\lambda )`$ were used. Both functions have a maximum at $`\lambda =\lambda _{min}=1/2`$ and are zero at 0 and $`\lambda _{max}`$. The simplest function that satisfies these requirements is bi-linear, while a smooth function with a quadratic maximum that satisfies $`f(0)=f(\lambda _{max})=0`$ can be constructed as the sum of a linear function and a square root (see Fig. 7): $`f(z)={\displaystyle \frac{4z}{2-\lambda _{max}}}+{\displaystyle \frac{\lambda _{max}(\lambda _{max}-4)}{2(\lambda _{max}-2)^2}}+`$ (14) $`{\displaystyle \frac{\lambda _{max}}{2(\lambda _{max}-2)^2}}\sqrt{16(\lambda _{max}-2)z+(\lambda _{max}-4)^2}.`$ (15) This smooth function for $`\lambda _{max}=1.6`$ resembles the one found from the computational flow model in section II B. ### D Coarsening of fully developed ripples When ripples are grown experimentally from a flat bed, initially many small ripples are created. They subsequently coarsen and form a final regular steady state with a well-defined final wavelength (see for example Fig. 1 in ). Our model shows the same behavior for initial conditions of (disordered) unstable small wavelength patterns. An example of such evolution is shown in Fig. 8. A fast coarsening process is seen in the beginning ($`t<1`$), followed by a slower relaxation toward an equilibrium state. The important dynamical process leading to the equilibrium state is the annihilation of ripples, with each annihilation resulting in a longer average ripple length; creation does not play a role here. After the final annihilation, slow diffusive dynamics sets in. The stability analysis performed in section III B shows that a wide range of ripple wave lengths can be linearly stable, namely $`\lambda _{min}<\lambda <\lambda _{max}`$. We will show here that, starting from small ripples, the dynamics leads to the selection of a sharply defined final wavelength. We assume periodic boundary conditions in our simulations. The parameters entering the model are the length of the domain $`L`$ and the maximum ripple length $`\lambda _{max}`$. The initial conditions are disordered ripples with an average wavelength $`\lambda _0<\lambda _{min}`$.
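To make the dynamics above concrete, the following sketch (an editorial illustration, not the authors' code; the bilinear mass-exchange function, the forward-Euler time step and all numerical values are assumptions) integrates the minimal model of Eq. (6) together with the annihilation rule, starting from a disordered pattern with $`\lambda _0<\lambda _{min}`$:

```python
import random

LAMBDA_MIN = 0.5   # location of the maximum of f (assumed)
LAMBDA_MAX = 1.6   # upper zero of f (assumed)

def f(lam):
    """Bilinear mass-exchange function: 0 at 0, peak 1 at LAMBDA_MIN, 0 again at LAMBDA_MAX."""
    if lam <= 0.0 or lam >= LAMBDA_MAX:
        return 0.0
    if lam <= LAMBDA_MIN:
        return lam / LAMBDA_MIN
    return (LAMBDA_MAX - lam) / (LAMBDA_MAX - LAMBDA_MIN)

def step(lams, dt=1e-3):
    """One Euler step of d(lambda_i)/dt = -f(lambda_{i-1}) + 2 f(lambda_i) - f(lambda_{i+1})
    with periodic boundaries, followed by the annihilation rule."""
    n = len(lams)
    flows = [f(l) for l in lams]
    new = [lams[i] + dt * (-flows[i - 1] + 2.0 * flows[i] - flows[(i + 1) % n])
           for i in range(n)]
    return [l for l in new if l > 1e-9]        # remove ripples that have shrunk to zero

random.seed(0)
ripples = [0.3 + 0.1 * random.random() for _ in range(100)]   # disordered, below LAMBDA_MIN
for _ in range(200000):
    ripples = step(ripples)
print(len(ripples), sum(ripples) / len(ripples))   # fewer but longer ripples: coarsening
```

The update conserves the total length exactly (the increments telescope around the ring), while each annihilation reduces the number of ripples, so the average wavelength grows until all ripples sit inside the stable band; the creation rule, which only matters for very long ripples, is omitted from this sketch.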
We found that the final wavelength is quite independent upon the initial average wavelength $`\lambda _0`$ (when this is sufficiently small) and the initial degree of disorder. This result could not have been predicted a priori from the model equations, but is in good agreement with experimental evidence. The final wavelength does, however, depend on the shape of the interaction function and the value of $`\lambda _{max}`$. In Fig. 9, $`\lambda _{eq}`$ is plotted as a function of $`\lambda _{max}`$ for the two interaction functions. The final wavelength appears to be a nontrivial function of $`\lambda _{max}`$ for both interaction functions. The interaction function which resembles the one from the numerical flow calculations (the smooth function with $`\lambda _{max}=1.6`$) results in an equilibrium wavelength of $`\lambda _{eq}=1.28\pm 0.03`$, a result which is in good agreement with ripple lengths measured in experiments. ## IV Two-dimensional ripple patterns Unless one forces the ripple patterns to occur in a narrow channel or annulus, ripple patterns are two-dimensional, even though the alignment perpendicular to the flow yields quasi one-dimensional patterns. However, during the evolution toward the final state the pattern contains many defects . We have thus extended our model to study their role in the selection of the final state. In our two-dimensional model the individual rows of ripples are similar to those in Fig. 6 and are labeled by an indices $`i`$ and $`j`$, where $`j`$ is the new coordinate perpendicular to the driving direction. In the $`i`$-direction, the mass flow is given by expression (9). We then determine which ripples are neighbors in the $`j`$-direction, and impose an angle of repose in the $`j`$ direction by inducing a flow of mass between ripple $`j`$ and $`j+1`$ when their height difference is above a certain maximum. At a defect, such mass flow can nucleate a new ripple in the trough of an adjacent row. Finally, it is reasonable to expect that a $`j`$-flow is induced when the ripples are not aligned perpendicular to the main flow, and the simplest choice for such flow between ripple $`j`$ and $`j+1`$ is $`\mathrm{\Delta }m_{lj}=C_x(x_{j+1}x_j)`$. The coupling in the $`j`$-direction is thus diffusive, and basically acts to align the ripple crests perpendicular to the oscillatory motion. In the simulations which are presented, the value of $`C_x`$ has been fixed to $`0.08`$. The qualitative results are not sensitive to a change of this parameters. To study the motion of defects in this model, we initiate the system with two patches of nearby wavelengths $`\lambda _1`$ and $`\lambda _2`$ separated by two defects. The motion of these defects depends on the values of the wavelengths (Fig. 10), and we find that when these are larger than $`1.28\pm 0.02`$ the defect climbs in the direction of the lowest wavelength, otherwise it moves in the direction of the largest wavelength. Thus in a pattern with many defects, as encountered during the coarsening process, one expects that only regions with wavelengths larger than $`\lambda _{def}=1.28`$ will survive. We can therefore expect the final wavelength $`\lambda _{eq}`$ in a two-dimensional system to lie between $`\lambda _{def}`$ and $`\lambda _{max}`$. To check this we have performed simulations in large systems with initial conditions consisting of unstable ripple patterns of wavelength $`0.5`$. 
The system rapidly coarsens and evolves to a state where most of the wavelengths lie in the 1D stable regime $`\lambda _{min}<\lambda <\lambda _{max}`$. The dynamics then slows down dramatically and becomes dominated by defect climbing. In the final relaxed state of the system the peak of the distribution of ripple lengths lies between $`\lambda _{def}`$ and $`\lambda _{max}`$. Runs with different values of $`\lambda _{max}`$ have confirmed that the equilibrium wavelength in the two-dimensional model is systematically larger than in the one-dimensional case (see Fig. 9). ## V Discussion and conclusion By focusing on a realistic law for the mass exchange between adjacent ripples, simple models have been formulated that capture a number of phenomena observed in real ripples. First of all, our model predicts the existence of a finite band of stable ripple wavelengths $`\lambda _{min}<\lambda <\lambda _{max}`$, which is consistent with the hysteresis observed in experiments. The model predicts that the instabilities encountered once these boundaries are crossed are of short wavelength nature, in agreement with experiments on one- and two-dimensional sand patterns performed in Copenhagen. Coarsening which occurs in the intermediate stages of the development of vortex ripples is present in our models, and we predict that even though there is a finite band of stable ripple patterns, the dynamics selects a well-defined final wavelength. The exact value of the final wavelength depends on the details of the mass exchange function; however, for a function similar to the results from our simulations of the fluid and sand flow, we find an equilibrium wavelength of $`\lambda =1.28`$. The good collapse of the mass exchange function with the maximum shear stress $`\theta _{max}`$ indicates that the final wavelength should be independent of $`\theta _{max}`$ as long as suspension is not important. Following the picture of the mass exchange as in the models, it is clear that the maximum value of the mass exchange function sets a time scale for the evolution of the ripples. We have shown that this maximum value is approximately proportional to $`\theta _{max}^{1.5}`$. Thus the time scale of the evolution of the ripples can be expected to scale as $`\theta _{max}^{-1.5}`$. Finally we have demonstrated that the models can be applied to two-dimensional ripple patterns, and have found that defect motion renders the final wavelength of ripples in two dimensions larger than in one dimension. All these predictions are open to experimental verification. In particular, we are eager to see how consistent the mass exchange mechanism proposed here is with real data on ripple evolution. ## VI Acknowledgments It is a pleasure to acknowledge discussions with Markus Abel, Tomas Bohr, Jørgen Fredsøe, Jonas Lundbek Hansen, Nigel Marsh and Alexandre Stegner. M.v.H. acknowledges financial support from the EU under contract ERBFMBICT 972554. M.-L.C. thanks the Niels Bohr Institute for hospitality.
# Resonance Paramagnetic Relaxation and Alignment of Small Grains ## 1 Introduction Experiments to study the cosmic background radiation have stimulated renewed interest in diffuse galactic emission. Recent maps of the microwave sky brightness have revealed a component of the 10-100 GHz microwave continuum which is correlated with 100 $`\mu `$m thermal emission from interstellar dust (see review by Draine & Lazarian 1999a). Draine & Lazarian (1998a,b, henceforth DL98a,b) attributed this emission to electric dipole radiation from small ($`<10^{-7}`$ cm) rapidly rotating grains. Recent observations by de Oliveira-Costa et al. (1999) support this interpretation. The question now is whether these small grains are aligned and their emission polarized. One process that might produce alignment of the ultrasmall grains is the paramagnetic dissipation mechanism suggested by Davis and Greenstein (1951) to explain the polarization of starlight. The Davis-Greenstein mechanism is straightforward: the component of interstellar magnetic field perpendicular to the grain angular velocity varies in grain coordinates, resulting in time-dependent magnetization, energy dissipation, and a torque acting on the grain. As a result grains tend to rotate with angular momenta parallel to the interstellar magnetic field. Although recent research (Draine & Weingartner 1996, 1997, Lazarian & Draine 1999a,b) suggests that paramagnetic alignment may not be the dominant alignment mechanism for $`a\stackrel{>}{\sim }10^{-5}\mathrm{cm}`$ grains, it may be effective for small ($`a\stackrel{<}{\sim }5\times 10^{-6}`$ cm) grains. In the present paper we claim that the traditional picture of paramagnetic relaxation is incomplete, as it disregards the splitting of energy levels that arises within a rotating body. Unpaired electrons having spin parallel and antiparallel to the grain angular velocity have different energies resulting in the Barnett effect (Landau & Lifshitz 1960) – the spontaneous magnetization of a paramagnetic body rotating in field-free space. Therefore the implicit assumption in Davis & Greenstein (1951) – that the magnetization within a rotating grain in a static magnetic field is equivalent to the magnetization within a stationary grain in a rotating magnetic field – is clearly not exact. In what follows we show that a very important effect due to rotation has thus far been overlooked. This effect, which we term “resonance relaxation”, leads to energy dissipation – and grain alignment – which is much more rapid than the classical Davis-Greenstein estimate when the grain rotates very rapidly. ## 2 Davis-Greenstein Theory Paramagnetic dissipation in a stationary grain depends upon the imaginary part of the magnetic susceptibility $`\chi ^{\prime \prime }`$, which characterizes the phase delay between the grain magnetization and the rotating magnetic field. Due to this delay a grain rotating in a static magnetic field experiences a decelerating torque: the energy dissipated in the grain comes from rotational kinetic energy. The Davis-Greenstein alignment time scale is $$\tau _{\mathrm{DG}}\approx 3\times 10^2\mathrm{yr}\left(\frac{a}{10^{-7}\mathrm{cm}}\right)^2\left[\frac{10^{-13}\mathrm{s}}{K(\omega )}\right]\left(\frac{5\mu \mathrm{G}}{B_0}\right)^2$$ (1) where $`K(\omega )\equiv \chi ^{\prime \prime }(\omega )/\omega `$ (see Davis & Greenstein 1951).
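For orientation, Eq. (1) is easy to evaluate numerically; the short sketch below (an illustration added in editing, taking the normalizations of Eq. (1) at face value) gives the classical Davis-Greenstein time for an ultrasmall and for a larger grain:

```python
def tau_DG_yr(a_cm, K_s=1e-13, B_uG=5.0):
    """Davis-Greenstein alignment time, Eq. (1):
    tau_DG ~ 3e2 yr * (a / 1e-7 cm)^2 * (1e-13 s / K) * (5 microgauss / B)^2."""
    return 3e2 * (a_cm / 1e-7) ** 2 * (1e-13 / K_s) * (5.0 / B_uG) ** 2

print(tau_DG_yr(1e-7))   # ~3e2 yr for an ultrasmall (10 Angstrom) grain
print(tau_DG_yr(1e-5))   # ~3e6 yr for a 1000 Angstrom grain
```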
Following DL99b we estimate $$K(\omega )\approx \frac{\chi _0\tau _2}{[1+(\omega \tau _2)^2]^2}\approx 10^{-13}\mathrm{s}\frac{(20\mathrm{K}/T_d)}{[1+(\omega \tau _2)^2]^2}.$$ (2) Very small grains are expected to be paramagnetic both due to presence of free radicals, paramagnetic carbon rings (see Altshuler & Kozyrev 1964) and captured ions. The spin-spin coupling time $$\tau _2\approx \frac{\hbar }{3.8n_pg\mu _\mathrm{B}^2}\approx 2\times 10^{-9}\left(\frac{10^{21}\mathrm{cm}^{-3}}{n_p}\right)\mathrm{s}$$ (3) (see DL98b) where $`\mu _\mathrm{B}`$ is the Bohr magneton, and $`n_p\approx 10^{21}`$ cm<sup>-3</sup> is the concentration of unpaired electrons, greater than in coals (Tsvetkov et al 1993), but less than the concentration of free radicals envisaged by Greenberg (1982). Eq. (3) then predicts a cut-off frequency $`\nu _{\mathrm{cut}}=(2\pi \tau _2)^{-1}\approx 0.1`$ GHz. In the extreme case where $`\sim `$10% of the atoms are paramagnetic one can get $`\nu _{\mathrm{cut}}`$ as large as 1 GHz but hardly any higher. ## 3 Barnett effect and Bloch equations The Barnett effect states that a body rotating with angular velocity $`\mathbf{\omega }`$ develops a magnetization $$𝐌=\chi _0\frac{\hbar \mathbf{\omega }}{g\mu _B}\equiv \chi _0𝐇_{\mathrm{BE}}.$$ (4) where $`\chi _0`$ is the static susceptibility, and $`H_{\mathrm{BE}}`$ is the “Barnett-equivalent” field. The essence of the Barnett effect is easily understood: a rotating body can decrease its energy, while keeping its angular momentum constant, if some of the angular momentum is taken up by its unpaired spins. By flipping one spin of angular momentum $`\hbar /2`$, the system can reduce its rotational kinetic energy by $`\hbar \omega `$. Although the Barnett effect has been long known in physics (see Landau & Lifshitz 1960), its importance in the context of interstellar grains was only appreciated recently (Dolginov & Mytrophanov 1975; Purcell 1979; Lazarian & Roberge 1997; Lazarian & Draine 1997, 1999a). In the present paper we discuss a hitherto-unrecognized aspect of the Barnett effect, namely its influence on paramagnetic dissipation in a rapidly rotating grain. Rotation removes the spin degeneracy of the electron energy levels. The energy difference between electron spin parallel or antiparallel to $`\mathbf{\omega }`$ provides a level splitting corresponding to $`\hbar \omega =g\mu _\mathrm{B}H_{\mathrm{BE}}`$. Insofar as the energy levels and magnetization are concerned, rotation of the grain is analogous to application of the “Barnett equivalent” field. Now consider a (weak) static magnetic field $`𝐇_1`$ at an angle $`\theta `$ to $`\mathbf{\omega }`$. In grain coordinates, this appears like a static field $`H_1\mathrm{cos}\theta `$ plus a field $`H_1\mathrm{sin}\theta `$ rotating with frequency $`\omega `$. This rotating field can be resonantly absorbed, since the energy level splitting is exactly $`\hbar \omega `$. The Bloch equations (Bloch 1946) are useful for describing both resonant and nonresonant absorption (see Pake 1962). These phenomenological equations reflect the tendency of the magnetization $`𝐌`$ to precess and to tend exponentially towards its thermal equilibrium value $`𝐌_0`$. In a stationary grain $`𝐌_0=\chi _0𝐇`$, where $`\chi _0`$ is the static paramagnetic susceptibility and $`𝐇`$ is the external field. In the case of a rotating grain the magnetization arises due to the Barnett effect and is therefore given by eq. (4).
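The numbers quoted around Eqs. (3) and (4) can be checked directly; the following sketch (editorial, in cgs units, with $`g=2`$ assumed) evaluates the spin-spin time, the implied cut-off frequency, and the Barnett-equivalent field:

```python
import math

HBAR = 1.0546e-27        # erg s
MU_B = 9.274e-21         # Bohr magneton, erg/G
G_FACTOR = 2.0           # electron g-factor (assumed)

def tau_2(n_p):
    """Spin-spin coupling time, Eq. (3): hbar / (3.8 n_p g mu_B^2)."""
    return HBAR / (3.8 * n_p * G_FACTOR * MU_B ** 2)

def barnett_field(nu_hz):
    """Barnett-equivalent field, Eq. (4): H_BE = hbar * omega / (g mu_B)."""
    return HBAR * 2.0 * math.pi * nu_hz / (G_FACTOR * MU_B)

t2 = tau_2(1e21)                                                   # n_p = 1e21 spins per cm^3
print("tau_2   = %.1e s" % t2)                                     # ~2e-9 s
print("nu_cut  = %.2f GHz" % (1.0 / (2.0 * math.pi * t2) / 1e9))   # ~0.1 GHz
print("H_BE(20 GHz) = %.1f kG" % (barnett_field(20e9) / 1e3))      # ~7 kG
```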
In what follows we assume that the grain magnetization is directed along the $`z`$-axis and is dominated by the Barnett effect (the Barnett-equivalent field for a grain rotating at $`20`$ GHz is 7.2 kG, much greater than the internal field in a paramagnetic grain). Changes in $`𝐌`$ along this axis involve changes in the energy of the spin system, thus require spin-lattice interactions, and therefore occur on the spin-lattice relaxation time scale $`\tau _1`$ (Atherton 1973). Changes in $`𝐌`$ in the perpendicular direction only slightly perturb its direction but not its magnitude. The interactions within the electron spin system are sufficient to deflect the direction of magnetization and these perturbations relax on the spin-spin relaxation time $`\tau _2`$. Using $`\perp `$ for the $`x`$ and $`y`$ components, the Bloch equations in the presence of the interstellar magnetic field $`𝐇_1`$ are (see Morrish 1980) $$\left(\frac{d}{dt}\right)𝐌_{\perp }+[\mathbf{\omega }\times 𝐌]_{\perp }=\gamma g\left[𝐇_1\times 𝐌\right]_{\perp }-\frac{𝐌_{\perp }}{\tau _2},$$ (5) $$\left(\frac{d}{dt}\right)M_z=\gamma g\left[𝐇_1\times 𝐌\right]_z+\frac{M_0-M_z}{\tau _1},$$ (6) where $`d𝐌_{\perp }/dt`$ represents the motion of $`𝐌_{\perp }`$ in body coordinates and $`\gamma \equiv e/2m_ec=8.8\times 10^6`$ s<sup>-1</sup>G<sup>-1</sup>. Consider a reference frame $`(𝐢,𝐣,𝐤)`$ rotating with the grain at angular velocity $`\mathbf{\omega }`$. In this frame the stationary interstellar magnetic field is $$𝐇_1=[\widehat{𝐢}H_1\mathrm{cos}(\omega t)-\widehat{𝐣}H_1\mathrm{sin}(\omega t)]\mathrm{sin}\theta +\widehat{𝐤}H_1\mathrm{cos}\theta .$$ (7) As the magnetization in the $`z`$-direction $`\chi _0H_1\mathrm{cos}\theta `$ is much smaller than $`\chi _0H_{\mathrm{BE}}`$, it is disregarded in our treatment. For $`M_z`$ the stationary solution is $`M_z=\chi _0H_{\mathrm{BE}}/f_{\mathrm{st}}`$ where $`f_{\mathrm{st}}=1+\gamma ^2g^2\tau _1\tau _2H_1^2\mathrm{sin}^2\theta `$, while $`M_x`$ and $`M_y`$ oscillate with a $`\pi /2`$ lag with respect to the interstellar magnetic field. For instance, $`M_x=\chi _0\omega \tau _2H_1\mathrm{sin}(\omega t)/f_{\mathrm{st}}`$. Therefore $$\chi ^{\prime \prime }=\chi _0\frac{\omega \tau _2}{1+\gamma ^2g_s^2\tau _1\tau _2H_1^2\mathrm{sin}^2\theta },$$ (8) which coincides with the expression for $`\chi ^{\prime \prime }`$ for electron paramagnetic resonance in a stationary sample when the frequency of the oscillating field $`H_1`$ is equal to the resonance frequency. In our problem the only relevant frequency is the frequency of grain rotation. Therefore it is not accidental that the paramagnetic relaxation is “resonant” when the grain rotates in the external magnetic field. The term $`\gamma ^2g_s^2\tau _1\tau _2H_1^2\mathrm{sin}^2\theta `$ in the denominator allows for “saturation” when the energy dissipated in the spin system raises its temperature due to slow spin-lattice coupling; below we show that this term can be important for very small grains, for which $`\tau _1`$ can be large. An important difference between paramagnetic resonance in an external magnetic field and the resonance relaxation discussed here is that the “Barnett equivalent” magnetic field is different for species with different magnetic moments. Therefore, unlike paramagnetic resonance, resonance relaxation happens simultaneously to species with completely different magnetic moments and $`g`$-factors. For example, the conditions for electron spin resonance and nuclear magnetic resonance are satisfied simultaneously when a grain rotates in a static weak magnetic field.
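To see why the saturation term, rather than the $`[1+(\omega \tau _2)^2]^2`$ rolloff of Eq. (2), controls the high-frequency behaviour, the sketch below (editorial; the values of $`\tau _1`$, $`\tau _2`$ and $`H_1`$ are illustrative) compares the two forms of $`\chi ^{\prime \prime }`$:

```python
import math

GAMMA = 8.8e6     # e / (2 m_e c), in s^-1 G^-1
G_S = 2.0

def chi2_resonance(omega, chi0, tau1, tau2, H1, sin2theta):
    """Eq. (8): resonance-relaxation chi'', limited only by the saturation term."""
    return chi0 * omega * tau2 / (1.0 + GAMMA**2 * G_S**2 * tau1 * tau2 * H1**2 * sin2theta)

def chi2_classical(omega, chi0, tau2):
    """chi'' implied by Eq. (2): chi0 * omega * tau2 / [1 + (omega tau2)^2]^2."""
    return chi0 * omega * tau2 / (1.0 + (omega * tau2) ** 2) ** 2

tau1, tau2, H1 = 1e6, 2e-9, 5e-6      # seconds, seconds, gauss (illustrative)
for nu in (1e8, 1e9, 1e10, 3e10):
    w = 2.0 * math.pi * nu
    ratio = chi2_resonance(w, 1.0, tau1, tau2, H1, 2.0 / 3.0) / chi2_classical(w, 1.0, tau2)
    print("%5.1f GHz  resonance/classical = %.2e" % (nu / 1e9, ratio))
```

At 0.1 GHz the two forms are comparable, but by tens of GHz the classical susceptibility has fallen by the factor $`[1+(\omega \tau _2)^2]^2`$ while the resonance form is only reduced by the saturation factor of order ten, which is the sense in which resonance relaxation is much faster for rapidly rotating grains.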
## 4 Spin-Lattice Relaxation For a spin to flip in a rotating grain, it is necessary for the total energy in lattice vibrations to change by $`\hbar \omega `$. Because the density of states is finite, it may not be possible for the lattice vibrations to absorb an energy $`\hbar \omega `$. To estimate the lowest vibrational frequency $`\omega _{\mathrm{min}}`$ we note that the lowest frequency bending mode of the coronene molecule (C<sub>24</sub>H<sub>12</sub>) is $`\omega _{\mathrm{min}}=1.9\times 10^{13}\mathrm{s}^{-1}`$ (Cyvin 1982). Coronene has an effective radius $`a=4.3\mathrm{\AA }`$ for an assumed density $`\rho =1.5\mathrm{g}\mathrm{cm}^{-3}`$; if $`\omega _{\mathrm{min}}\propto a^{-1}`$, then $`\omega _{\mathrm{min}}=8.3\times 10^{12}a_{-7}^{-1}\mathrm{s}^{-1}`$, large compared to $`kT/\hbar `$. Thus for ultrasmall grains one cannot use spin-lattice relaxation times $`\tau _1`$ measured for macroscopic samples. We obtain an upper estimate for $`\tau _1`$ by appealing to the Raman scattering of phonons (see Waller 1932, Pake 1962): annihilation of a vibrational quantum $`\hbar \omega ^{}`$ and creation of a quantum $`\hbar (\omega -\omega ^{})`$. Elastic lattice vibrations of all frequencies participate and therefore the relaxation is present for small grains. After integrating over various modes of vibrations one gets the probability of the spin-lattice transition via Raman scattering (Al’tshuler & Kozyrev 1964) $$\tau _1^{-1}\approx K_2\left(kT/\hbar \right)^{m+1}\stackrel{~}{J}_m/\rho c_s^{10},$$ (9) where $`\rho `$ is the density, $`c_s`$ the sound speed, $`K_2`$ is a function that depends on the density of states, and $`\stackrel{~}{J}_m`$ ($`m=`$6 or 8) is an integral over the body’s phonon frequencies: $$\stackrel{~}{J}_m=\int _{T_l/T}^{\theta _D/T}\frac{x^me^x}{(e^x-1)^2}𝑑x,$$ where $`\theta _D`$ is the Debye temperature. The conventional treatment assumes the body to be infinite with the integration extending down to $`T_l=0`$. In this case, and for $`T\ll \theta _D/m`$, we have $`\stackrel{~}{J}_m\approx m!\zeta (m)`$, where $`\zeta `$ is the Riemann zeta function. For a grain of size $`a`$, $$T_l=\frac{\hbar \omega _{\mathrm{min}}}{k}\approx \frac{63\mathrm{K}}{a_{-7}}.$$ (10) For $`T\ll T_l/m`$ we have $`\stackrel{~}{J}_m\approx \left(T_l/T\right)^m\mathrm{exp}(-T_l/T)`$. The ratio of the Raman spin-lattice relaxation in a small grain at temperature $`T\ll T_l/m`$ to such relaxation in an infinite body at $`77\mathrm{K}\stackrel{<}{\sim }\theta _D/m`$ is then $$\frac{\tau _1(T)}{\tau _{1,\infty }(77\mathrm{K})}\approx \left(\frac{77\mathrm{K}}{T}\right)^{m+1}\left(\frac{T}{T_l}\right)^m\mathrm{exp}(T_l/T)m!\zeta (m).$$ (11) Data in Al’tshuler & Kozyrev (1964) suggests that ionic crystals have a spin-lattice relaxation time $`\tau _{1,\infty }(77\mathrm{K})\approx 10^{-6}\mathrm{s}`$. If due to the Raman process, then we would estimate that a macroscopic sample at $`T=4\mathrm{K}`$ would have $`\tau _1\approx 28\mathrm{s}`$ if $`m=6`$, or $`2\times 10^4\mathrm{s}`$ if $`m=8`$. From eq. (11), a grain with $`a=10^{-7}\mathrm{cm}`$ at $`T=4\mathrm{K}`$ would have $`\tau _1\approx 3.1\times 10^5\mathrm{s}`$ for $`m=6`$, or $`2.7\times 10^7\mathrm{s}`$ for $`m=8`$. Grains with $`a<10^{-7}\mathrm{cm}`$ would have even larger values of $`\tau _1`$. If we adopt $`\tau _2`$ from eq.
(3), then $$g_s^2\gamma ^2\tau _2\tau _1H_1^2\mathrm{sin}^2\theta =8\left(\frac{\tau _{1,grain}}{10^6\mathrm{s}}\right)\left(\frac{H_1}{5\mu \mathrm{G}}\right)^2\frac{\mathrm{sin}^2\theta }{2/3}$$ (12) so we see from (8) that saturation may be important for $`a<10^{-7}`$ cm grains even in the $`5\mu \mathrm{G}`$ fields in diffuse interstellar gas. How reliable is our above estimate for $`\tau _1`$? Our calculations were based on so-called Waller theory, which frequently overestimates $`\tau _1`$ by a factor up to $`10^8`$ (Pake 1962). If the dependence of the spin-lattice relaxation time on temperature is different from that given by Eq. (9) our estimates of $`\tau _1`$ at $`4K`$ would be very different. Ultimately we require laboratory measurements of $`\tau _1`$ in small particles of appropriate composition. ## 5 Grain alignment Paramagnetic alignment of grains with a given axis ratio depends on two parameters: the ratio $`T_\mathrm{d}/T_{\mathrm{rot}}`$ of grain vibrational and rotational temperatures, and the ratio of the alignment time $`\tau _{\mathrm{DG}}`$ to the rotational damping time $`t_\mathrm{d}`$. The rotational damping time $`t_\mathrm{d}`$ depends on various processes of damping and excitation (e.g. collisions with ions and neutrals, plasma drag, and emission of photons) discussed in DL98b. Assuming that the paramagnetic torque only marginally reduces $`\omega `$, we follow the analysis of DL98b to obtain $`\omega `$ as a function of $`a`$. In our calculations we assume that a grain spends most of its time between thermal spikes with a vibrational temperature $`T\approx 4\mathrm{K}`$ (cf. Rouan et al. 1992), since the time between photon absorptions is $`10^9`$ s, while the grain cools much more rapidly. Note that photons usually contribute marginally to the disorientation of the grain angular momentum $`𝐉`$ (see Fig. 5 in DL98b). Let $`\theta `$ be the angle between $`𝐉`$ and the interstellar magnetic field $`𝐁`$. Fig. 1 presents the measure of alignment $`\sigma =\frac{3}{2}\left(\langle \mathrm{cos}^2\theta \rangle -1/3\right)`$, for grains in the cold neutral medium. For our estimate we used standard formulae for paramagnetic alignment of angular momentum with $`𝐁`$ (Lazarian 1997, Roberge & Lazarian 1999), which for weak alignment provide $`\sigma \approx 2/15[1-(1+rt)/(1+r)]`$, where $`r\equiv \tau _{\mathrm{DG}}/t_\mathrm{d}`$ and $`t\equiv T/T_{rot}`$. The discontinuity at $`6\times 10^{-8}`$ cm is due to the assumption that smaller grains are planar, and larger grains are spherical. The degree of polarization of rotational electric dipole emission $`p\approx \sigma \mathrm{cos}^2\psi `$, where $`\psi `$ is the angle between $`𝐁`$ and the plane of the sky. $`p/\mathrm{cos}^2\psi `$ as a function of frequency is also shown in Fig. 1. The dipole rotational emission predicted in DL98a,b is sufficiently strong that polarization of a few percent may interfere with efforts to measure the polarization of the cosmic microwave background radiation. It is worth noting that the degree of microwave polarization is sensitive to the magnetic field intensity (through $`\tau _{\mathrm{DG}}`$). ## 6 Discussion We have discussed a new gyromagnetic effect – resonance relaxation – which is closely related to normal paramagnetic resonance, and arises naturally whenever a body rotates in a weak magnetic field. The standard assumption of the equivalence of relaxation when the magnetic field rotates about a grain or a grain rotates in a static magnetic field is incorrect; the difference is directly related to the spontaneous magnetization due to the Barnett effect.
Although present for all grains, resonance relaxation is most prominent for the smallest ones. When grains rotate very rapidly, as is the case for very small grains, the resonance relaxation effect ensures that $`\chi ^{\prime \prime }`$ does not plunge as the rotation frequency increases. As a result, we conclude that small grains (e.g. $`a10^7`$ cm) should be paramagnetically aligned. The degree of their aligment depends on the particular phase of the interstellar medium and on the efficiency of spin-lattice relaxation. The latter factor is unfortunately uncertain for very small grains for which the existing laboratory data is not applicable. If the ultrasmall grains are partially aligned, the implications are as follows: (1) The microwave radiation described in DL98ab will be polarized – by a few % – and could have dramatic consequences for experiments – such as MAP or PLANCK – designed to measure polarization of the cosmic microwave background. (2) If the grain body axes are aligned with $`𝐉`$, then absorption by these small grains will contribute to starlight polarization in the ultraviolet, and (3) the infrared emission following absorption of starlight photons by these small grains will also be polarized. However, the contribution to starlight polarization is expected to be small due to only partial alignment of the grain body axes with $`𝐉`$. The infrared emission will be even less polarized, due to disorientation of the grain axes (Lazarian & Roberge 1997) during the thermal spike following a photon absorption, i.e., while the infrared emission is taking place. This work was supported in part by NASA grant NAG5 7030 and in part by NSF grant AST-9619429. We thank … J. Mathis and J. Weingartner for valuable discussions.
# A Note on Knowledge-Based Programs and Specifications ## 1 Introduction Consider a simple program such as This program, denoted $`\mathrm{𝖯𝗀}_1`$ for future reference, describes an action that a process (or agent—I use the two words interchangeably here) should take, namely, setting $`y`$ to $`y+1`$, under certain conditions, namely, if $`x=0`$. One way to way to provide formal semantics for such a program is to assume that each agent is in some local state, which, among other things, describes the value of the variables of interest. For this simple program, we need to assume that the local state contains enough information to determine the truth of the test $`x=0`$. We can then associate with the program a protocol, that is, a function describing what action the agent should in each local state. Note that a program is a syntactic object, given by some program text, while a protocol is a function, a semantic object. Knowledge-based programs, introduced in (based on the knowledge-based protocols of ) are intended to provide a high-level framework for the design and specification of protocols. The idea is that, in knowledge-based programs, there are explicit tests for knowledge. Thus, a knowledge-based program might have the form where $`K(x=0)`$ should be read as “you know $`x=0`$”. We can informally view this knowledge-based program, denoted $`\mathrm{𝖯𝗀}_2`$, as saying “if you know that $`x=0`$, then set $`y`$ to $`y+1`$”. Roughly speaking, an agent knows $`\phi `$ if, in all situations consistent with the agent’s information, $`\phi `$ is true. Knowledge-based programs are an attempt to capture the intuition that what an agent does depends on what it knows. They have already met with some degree of success, having been used in papers such as both to help in the design of new protocols and to clarify the understanding of existing protocols. However, Sanders has pointed out what seems to be a counterintuitive property of knowledge-based programs. Roughly speaking, she claims that knowledge-based programs do not satisfy a certain monotonicity property: a knowledge-based program can satisfy a specification under a given initial condition, but fail to satisfy it if we strengthen the initial condition. On the other hand, standard programs (ones without tests for knowledge) do satisfy the monotonicity property. In this paper, I consider Sanders’ claim more carefully. I show that it depends critically on what it means for a program to satisfy a specification. There are two possible definitions, which agree for standard programs. If we use the one closest in spirit to the ideas presented in , the claim is false, although it is true for the definition used by Sanders. But, even in the case of Sanders’ definition, rather than being a defect of knowledge-base programs, this lack of monotonicity is actually a feature. In general, we do not want monotonicity. Moreover, once we allow a more general class of knowledge-based specifications, then standard programs do not satisfy the monotonicity property either. The rest of this paper is organized as follows: In the next section, there is an informal review of the semantics of standard and knowledge-based programs. In Section 3, I discuss standard and knowledge-based specifications. In Section 4, I consider the monotonicity property described by Sanders, and show in what sense it is and is not satisfied by knowledge-based programs. I give some examples in Section 5 showing why monotonicity is not always desirable. 
I conclude in Section 6 with some discussion of knowledge-based programs and specifications. ## 2 Standard and knowledge-based programs: an informal review Formal semantics for standard and knowledge-based programs are provided in . To keep the discussion in this paper at an informal level, I simplify things somewhat here, and review what I hope will be just enough of the details so that, together with the examples given here, the reader will be able to follow the main points; the interested reader should refer to for further discussion and all the formal details. Informally, we view a distributed system as consisting of a number of interacting agents. We assume that, at any given point in time, each agent in the system is in some local state. A global state is just a tuple consisting of each agent’s local state, together with the state of the environment, where the environment consists of everything that is relevant to the system that is not contained in the state of the processes. The agents’ local states typically change over time, as a result of actions that they perform. A run is a function from time to global states. Intuitively, a run is a complete description of what happens over time in one possible execution of the system. A point is a pair $`(r,m)`$ consisting of a run $`r`$ and a time $`m`$. At a point $`(r,m)`$, the system is in some global state $`r(m)`$. For simplicity, time here is taken to range over the natural numbers (so that time is viewed as discrete, rather than continuous). A system $``$ is a set of runs; intuitively, these runs describe all the possible executions of the system. For example, in a poker game, the runs could describe all the possible deals and bidding sequences. Of major interest in this paper are the systems that we can associate with a program. To do this, we must first associate a system with a joint protocol. As was said in the introduction, a protocol is a function from local states to actions. (This function may be nondeterministic, so that in a given local state, there is a set of actions that may be performed.) A joint protocol is just a set of protocols, one for each process. While the joint protocol describes what each process does, it does not give us enough information to generate a system. It does not tell us what the legal behaviors of the environment are, the effects of the actions, or the initial conditions. We specify these in the context. Formally, a context $`\gamma `$ is a tuple $`(P_e,𝒢_0,\tau ,\mathrm{\Psi })`$, where $`P_e`$ is a protocol for the environment, $`𝒢_0`$ is a set of initial global states, $`\tau `$ is a transition function, and $`\mathrm{\Psi }`$ is a set of admissible runs. The environment is viewed as running a protocol just like the agents; its protocol is used to capture features of the setting like “all messages are delivered within 5 rounds” or “messages may be lost”. Given a joint protocol $`P=(P_1,\mathrm{},P_n)`$ for the agents, an environment protocol $`P_e`$, and a global state $`(s_e,s_1,\mathrm{},s_n)`$, there is a set of possible joint actions $`(𝖺_e,𝖺_1,\mathrm{},𝖺_n)`$ that can be performed in this global state according to the protocols of the agents and the environment. (It is a set since the protocols may be nondeterministic.) The transition function $`\tau `$ describes how these joint actions change the global state by associating with each joint action a global state transformer, that is, a mapping from global states to global states. 
The set $`\mathrm{\Psi }`$ of admissible runs is used to characterize notions like fairness. For the simple programs considered in this paper, the transition function will be almost immediate from the description of the global states and $`\mathrm{\Psi }`$ will typically consist of all runs (so that it effectively plays no interesting role). What will change as we vary the context is the set of possible initial global states. A run $`r`$ is consistent with joint protocol $`P`$ in context $`\gamma `$ if (1) $`r(0)`$, the initial global state of $`r`$, is one of the initial global states in $`𝒢_0`$, (2) for all $`m`$, the transition from global state $`r(m)`$ to $`r(m+1)`$ is the result of applying $`\tau `$ to a joint action that can be performed by $`(P_e,P)`$ in the global state $`r(m)`$, and (3) $`r\mathrm{\Psi }`$. A system $``$ represents a joint protocol $`P`$ in context $`\gamma `$ if it consists of all runs consistent with $`P`$ in $`\gamma `$. Assuming that each test in a standard program run by process $`i`$ can be evaluated in each local state, we can derive a protocol from the program in an obvious way: to find out what process $`i`$ does in a local state $`\mathrm{}`$, we evaluate the tests in $`\mathrm{𝖯𝗀}`$ in $`\mathrm{}`$ and perform the appropriate action.<sup>1</sup><sup>1</sup>1Strictly speaking, to evaluate the tests, we need an interpretation that assigns truth values to formulas in each global state. For the programs considered here, the appropriate interpretation will be immediate from the description of the system, so I ignore interpretations here for ease of exposition. A run is consistent with $`\mathrm{𝖯𝗀}`$ in context $`\gamma `$ if it is consistent with the protocol derived from $`\mathrm{𝖯𝗀}`$. Similarly, A system represents $`\mathrm{𝖯𝗀}`$ in context $`\gamma `$ if it represents the protocol derived from $`\mathrm{𝖯𝗀}`$. We use $`𝐑(\mathrm{𝖯𝗀},\gamma )`$ to denote the system representing $`\mathrm{𝖯𝗀}`$ in context $`\gamma `$. ###### Example 2.1 : Consider the simple standard program $`\mathrm{𝖯𝗀}_1`$ in Figure 1 and suppose there is only one agent in the system. Further suppose the agent’s local state is a pair of natural numbers $`(a,b)`$, where $`a`$ is the current value of variable $`x`$ and $`b`$ is the current value of $`y`$. The protocol derived from $`\mathrm{𝖯𝗀}_1`$ increments the value of $`b`$ by 1 precisely if $`a=0`$. In this simple case, we can ignore the environment state, and just identify the global state of the system with the agent’s local state. Suppose we consider the context $`\gamma `$ where the initial states consist of all possible local states of the form $`(a,0)`$ for $`a0`$ and the transition function is such that the action $`y:=y+1`$ transforms $`(a,b)`$ to $`(a,b+1)`$. We ignore the environment protocol (or, equivalently, assume that $`P_e`$ performs the action no–op at each step) and assume $`\mathrm{\Psi }`$ consist of all runs. A run $`r`$ is then consistent with $`\mathrm{𝖯𝗀}_1`$ in context $`\gamma `$ if either (1) $`r(0)`$ is of the form $`(0,b)`$ and $`r(m)`$ is of the form $`(0,b+m)`$ for all $`m1`$, or (2) $`r(m)`$ is of the form $`(a,b)`$ for all $`m`$ and $`a>0`$. That is, either the $`x`$ component is originally 0, in which case the $`y`$ component is continually increased by 1, or else nothing happens. Now we turn to knowledge-based programs. Here the situation is somewhat more complicated. In a given context, a process can determine the truth of a test such as “$`x=0`$” by simply looking at its local state. 
However, in a knowledge-based program, there are tests for knowledge. According to the definition of knowledge in systems, an agent $`i`$ knows a fact $`\phi `$ at a given point $`(r,m)`$ in system $``$ if $`\phi `$ is true at all points in $``$ in which $`i`$ has the same local state as it does at $`(r,m)`$. Thus, $`i`$ knows $`\phi `$ at the point $`(r,m)`$ if $`\phi `$ holds at all points consistent with $`i`$’s information at $`(r,m)`$. The truth of a test for knowledge cannot in general be determined simply by looking at the local state in isolation. We need to look at the whole system. As a consequence, given a run, we cannot in general determine if it is consistent with a knowledge-based program in a given context. This is because we cannot tell how the tests for knowledge turn out without being given the other possible runs of the system; what a process knows at one point will depend in general on what other points are possible. This stands in sharp contrast to the situation for standard programs. This means it no longer makes sense to talk about a run being consistent with a knowledge-based program in a given context. However, notice that, given a system $``$, we can derive a protocol from a knowledge-based program $`\mathrm{𝖯𝗀}_{\mathrm{𝑘𝑏}}`$ for process $`i`$ by using $``$ to evaluate the knowledge tests in $`\mathrm{𝖯𝗀}_{\mathrm{𝑘𝑏}}`$. That is, a test such as $`K\phi `$ holds in a local state $`l`$ if $`\phi `$ holds at all points in $``$ where process $`i`$ has local state $`l`$. In general, different protocols can be derived from a given knowledge-based program, depending on what system we use to evaluate the tests. Let $`\mathrm{𝖯𝗀}_{\mathrm{𝑘𝑏}}^{}`$ denote the protocol derived from $`\mathrm{𝖯𝗀}_{\mathrm{𝑘𝑏}}`$ given system $``$. We say that a system $``$ represents a knowledge-based program $`\mathrm{𝖯𝗀}_{\mathrm{𝑘𝑏}}`$ in context $`\gamma `$ if $``$ represents the protocol $`\mathrm{𝖯𝗀}_{\mathrm{𝑘𝑏}}^{}`$. That is, $``$ represents $`\mathrm{𝖯𝗀}_{\mathrm{𝑘𝑏}}`$ if $`=𝐑(\mathrm{𝖯𝗀}_{\mathrm{𝑘𝑏}}^{},\gamma )`$. Thus, a system represents $`\mathrm{𝖯𝗀}_{\mathrm{𝑘𝑏}}`$ if it satisfies a certain fixed-point equation. This definition is somewhat subtle, and determining the system representing a given knowledge-based program may be nontrivial. Indeed, as shown in , in general, there may be no systems representing a knowledge-based program $`\mathrm{𝖯𝗀}_{\mathrm{𝑘𝑏}}`$ in a given context, only one, or more than one, since the fixed-point equation may have no solutions, one solution, or many solutions. Moreover, computing the solutions may be a difficult task, even if we have only finitely many possible global states. There are conditions sufficient to guarantee that there is exactly one system representing $`\mathrm{𝖯𝗀}_{\mathrm{𝑘𝑏}}`$, and these conditions are satisfied by many knowledge-based programs of interest, and, in particular, by the programs discussed in this paper. If $`\mathrm{𝖯𝗀}_{\mathrm{𝑘𝑏}}`$ has a unique system representing it in context $`\gamma `$, then we again denote this system $`𝐑(\mathrm{𝖯𝗀}_{\mathrm{𝑘𝑏}},\gamma )`$. ###### Example 2.2 : The knowledge-based program $`\mathrm{𝖯𝗀}_2`$ in Figure 2, with the test $`K(x=0)`$, is particularly simple to analyze. If we consider the context $`\gamma `$ discussed in Example 2.1, then whether or not $`x=0`$ holds is determined by the process’ local state. Thus, in context $`\gamma `$, $`x=0`$ holds iff $`K(x=0)`$ holds, and the knowledge-based program reduces to the standard program. 
On the other hand, consider the context $`\gamma ^{}`$ where the agent’s local state just consists just of the value of $`y`$, while the value of $`x`$ is part of the environment state. Again, we can identify the global state with a pair $`(a,b)`$, where $`a`$ is the current value of $`x`$ and $`b`$ is the current value of $`y`$, but now $`a`$ represents the environment’s state, while $`b`$ represents the agent’s state. We can again assume the environment performs the no–op action at each step, $`\mathrm{\Psi }`$ consists of all runs, the transition function is as in Example 2.1, and the initial states are all possible global states of the form $`(a,0)`$. In this context, there is a also unique system representing $`\mathrm{𝖯𝗀}_2`$: The agent never knows whether $`x=0`$, so there is a unique run corresponding to each initial state $`(a,0)`$, in which the global state is $`(a,0)`$ throughout the run. Finally, let $`\gamma ^{\prime \prime }`$ be identical to $`\gamma ^{}`$ except that the only initial state is $`(0,0)`$. Again, there will be a unique system representing $`\mathrm{𝖯𝗀}_2`$ in $`\gamma ^{\prime \prime }`$, but it is quite different from $`𝐑(\mathrm{𝖯𝗀}_2,\gamma ^{})`$. In $`𝐑(\mathrm{𝖯𝗀}_2,\gamma ^{\prime \prime })`$, the agent knows that $`x=0`$ at all times. There is only one run, where the value of $`y`$ is augmented at every step. This discussion suggests that a knowledge-based program can be viewed as specifying a set of systems, the ones that satisfy a certain fixed-point property, while a standard program can be viewed as specifying a set of runs, the ones consistent with the program. ## 3 Standard and knowledge-based specifications Typically, we think of a protocol being designed to satisfy a specification, or set of properties. Although a specification is often written in some specification language (such as temporal logic), many specifications can usefully be viewed as predicates on runs. This means that we can associate a set of runs with a specification; namely, all the runs that satisfy the required properties. Thus, a specification such as “all processes eventually decide on the same value” would be associated with the set of runs in which the processes do all decide the same value.<sup>2</sup><sup>2</sup>2Of course, there are useful specifications that cannot be viewed as predicates on runs. While linear time temporal logic assertions are predicates on runs, branching time temporal logic assertions are best viewed as predicates on trees. (See for a discussion of the differences between linear time and branching time.) For example, Koo and Toueg’s notion of weak termination requires that at every point there is a possible future where everyone terminates. In the notation used in this paper, this would mean that for every point $`(r,m)`$, there must be another point $`(r^{},m)`$ such that $`r`$ and $`r^{}`$ are identical up to time $`m`$, and at some point $`(r^{},m^{})`$ with $`m^{}m`$, every process terminates. This assertion is easily expressed in branching time logic. Probabilistic assertions such as “all processes terminate with probability .99” also cannot be viewed as predicates on individual runs. Other examples of specifications that cannot be viewed as a predicate on runs are discussed later in this section. Nevertheless, specifications that are predicates on runs are sufficiently prevalent that it seems reasonable to give them special attention. 
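Returning to Example 2.2, the fixed-point definition of Section 2 can be computed mechanically for such toy cases. The sketch below (an editorial illustration, not the formalism of the cited works; it uses a finite time horizon and treats a system as a finite set of run prefixes, and all names are assumptions) iterates the protocol-from-system and system-from-protocol maps for $`\mathrm{𝖯𝗀}_2`$ in a context where the agent sees only $`y`$:

```python
# Toy fixed-point computation for the knowledge-based program
#     "if K(x = 0) then y := y + 1"
# in a finite-horizon version of context gamma' from Example 2.2: the agent's local
# state is the value of y, and the initial global states are (x0, 0) for x0 in INITIAL_X.

T = 5
INITIAL_X = [0, 1, 2]          # use [0] to model the restricted context gamma''

def generate_runs(act_on):
    """The system generated by a derived standard protocol: in local state y,
    perform y := y + 1 iff y is in act_on."""
    runs = []
    for x0 in INITIAL_X:
        run, y = [], 0
        for _ in range(T + 1):
            run.append((x0, y))
            if y in act_on:
                y += 1
        runs.append(run)
    return runs

def derive_protocol(runs):
    """Evaluate the knowledge test against a candidate system: the agent knows x = 0
    in local state y iff x = 0 at every point of the system where its local state is y."""
    xs_by_local_state = {}
    for run in runs:
        for (x, y) in run:
            xs_by_local_state.setdefault(y, set()).add(x)
    return {y for y, xs in xs_by_local_state.items() if xs == {0}}

act_on = set()
for _ in range(2 * T + 4):                 # iterate protocol -> system -> protocol until stable
    new_act_on = derive_protocol(generate_runs(act_on))
    if new_act_on == act_on:
        break
    act_on = new_act_on
print(sorted(act_on))   # [] here: the agent never knows x = 0, so y is never incremented;
                        # with INITIAL_X = [0] it becomes [0, 1, ..., T]: y grows every step
```

At the fixed point, the system generated by the derived protocol is exactly the system used to evaluate the knowledge tests, which is the defining condition for a system to represent the knowledge-based program.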
Researchers have often focused attention on two types of specifications: safety properties—these are invariant properties that have the form “a particular bad thing never happens”—and liveness properties—these are properties that essentially say “a particular good thing eventually does happen” . Thus, a run $`r`$ has a safety property $`p`$ if $`p`$ holds at all points $`(r,m)`$, while $`r`$ has the liveness property $`q`$ if $`q`$ holds at some point $`(r,m)`$. Suppose we are interested in a program that guarantees that all the processes eventually decide on the same value. We model this by assuming that each process $`i`$ has a decision variable $`x_i`$, initially undefined, in its local state (we can assume a special “undefined” value in the domain), which is set once in the course of a run, when the decision is made. Given the way we have chosen to model this problem, we would expect this program to satisfy two safety properties: (1) each process’ decision variable is changed at most once (so that it is never the case that it is set more than once); and (2) if neither $`x_i`$ nor $`x_j`$ has value “undefined”, then they are equal. We also expect it to satisfy one liveness property: each decision variable is eventually set. We say that a standard program $`\mathrm{𝖯𝗀}`$ satisfies a specification $`\sigma `$ in a context $`\gamma `$ if every run consistent with $`\mathrm{𝖯𝗀}`$ in $`\gamma `$ (that is, every run in the system representing $`\mathrm{𝖯𝗀}`$ in $`\gamma `$) satisfies $`\sigma `$. Similarly, we can say that a knowledge-based program $`\mathrm{𝖯𝗀}_{\mathrm{𝑘𝑏}}`$ satisfies specification $`\sigma `$ in context $`\gamma `$ if every run in every system representing $`\mathrm{𝖯𝗀}_{\mathrm{𝑘𝑏}}`$ satisfies $`\sigma `$. The notion of specification we have considered so far can be thought of as being run based. A specification $`\sigma `$ is a predicate on (i.e., set of) runs and a program satisfies $`\sigma `$ if each run consistent with the program is in $`\sigma `$. Although run-based specifications arise often in practice, there are reasonable specifications that are not run based. There are times that it is best to think of a specification as being, not a predicate on runs, but a predicate on entire systems. For example, consider a knowledge base (KB) that responds to queries by users. We can imagine a specification that says “To a query of $`\phi `$, answer ‘Yes’ if you know $`\phi `$, answer ‘No’ if you know $`\neg \phi `$, otherwise answer ‘I don’t know’.” This specification is given in terms of the KB’s knowledge, which depends on the whole system and cannot be determined by considering individual runs in isolation. We call such a specification a knowledge-based specification. Typically, we think of a knowledge-based specification being given as a formula involving operators for knowledge and time. Formally, it is simply a predicate on (set of) systems. (Intuitively, it consists of all the systems where the formula is valid—i.e., true at every point in the system.)<sup>3</sup><sup>3</sup>3As the examples discussed in Footnote 2 show, not all predicates on systems can be expressed in terms of formulas involving knowledge and time. I will not attempt to characterize here the ones that can be so expressed. It is not even clear that such a characterization is either feasible or useful. We can think of a run-based specification $`\sigma `$ as a special case of a knowledge-based specification. It consists of all those systems all of whose runs satisfy $`\sigma `$. 
A (standard or knowledge-based) program $`\mathrm{𝖯𝗀}`$ satisfies a knowledge-based specification $`\sigma `$ in context $`\gamma `$ if every system representing $`\mathrm{𝖯𝗀}`$ in $`\gamma `$ satisfies the specification. Notice that knowledge-based specifications bear the same relationship to (standard) specifications as knowledge-based programs bear to standard programs. A knowledge-based specification/program in general defines a set of systems; a standard specification/program defines a set of runs (i.e., a single system). ## 4 Monotonicity Sanders focuses on a particular monotonicity property of specifications. To understand this property, and Sanders’ concerns, we first need some definitions. Given contexts $`\gamma =(P_e,𝒢_0,\tau ,\mathrm{\Psi })`$ and $`\gamma ^{}=(P_e^{},𝒢_0^{},\tau ^{},\mathrm{\Psi }^{})`$, we write $`\gamma ^{}\gamma `$ if $`P_e=P_e^{}`$, $`𝒢_0^{}𝒢_0`$, $`\tau =\tau ^{}`$, and $`\mathrm{\Psi }^{}\mathrm{\Psi }`$. That is, in $`\gamma ^{}`$ there may be fewer initial states and fewer admissible runs, but otherwise $`\gamma `$ and $`\gamma ^{}`$ are the same. The following lemma is almost immediate from the definitions. ###### Lemma 4.1 : If $`\gamma ^{}\gamma `$, then for all protocols $`P`$, every run consistent with $`P`$ in $`\gamma ^{}`$ is also consistent with $`P`$ in $`\gamma `$, so $`𝐑(P,\gamma ^{})𝐑(P,\gamma )`$. Similarly, for every standard program $`\mathrm{𝖯𝗀}`$, we have $`𝐑(\mathrm{𝖯𝗀},\gamma ^{})𝐑(\mathrm{𝖯𝗀},\gamma )`$. The restriction in Lemma 4.1 to standard programs is necessary. It is not true for knowledge-based programs. The set of systems consistent with a knowledge-based program can be rather arbitrary, as Example 2.2 shows. This example also shows that safety and liveness properties need not be preserved when we restrict the context. The safety property “$`y`$ is never equal to 1” is satisfied by $`\mathrm{𝖯𝗀}_2`$ in context $`\gamma ^{}`$ but not in context $`\gamma ^{\prime \prime }`$. On the other hand, the liveness property “$`y`$ is eventually equal to 1” is satisfied by $`\mathrm{𝖯𝗀}_2`$ in context $`\gamma ^{\prime \prime }`$ but not $`\gamma ^{}`$. Sanders suggests that this behavior is somewhat counterintuitive. To quote : > \[A\] knowledge-based protocol need not be monotonic with respect to the initial conditions …\[In particular,\] safety and liveness properties of knowledge-based protocols need not be preserved by strengthening the initial conditions, thus violating one of the most intuitive and fundamental properties of standard programs \[italics Sanders’\].<sup>4</sup><sup>4</sup>4In , a notion of knowledge-based protocol was introduced, and Sanders is referring to that notion, rather than the notion of knowledge-based program that I am using here. See for a discussion of the difference between the two notions. Sanders’ comments apply without change to knowledge-based programs as defined here. It is certainly true that the system representing a knowledge-based program in a restricted context is not necessarily a subset of the system representing it in the original context. However, under what is arguably the most natural interpretation of what it means for a program to satisfy a specification with respect to an initial condition, a knowledge-based program is monotonic with respect to initial conditions. To understand why this should be so, we need to make precise what it means for a (knowledge-based) program to satisfy a specification with respect to an initial condition. 
Formally, we can take an initial condition to be a predicate on global states (so that an initial condition corresponds to a set of global states). An initial condition INIT$`^{}`$ is a strengthening of $`\mathrm{I}\mathrm{N}\mathrm{I}\mathrm{T}`$ if INIT$`^{}`$ is a subset of $`\mathrm{I}\mathrm{N}\mathrm{I}\mathrm{T}`$. (In logical terms, this means that INIT$`^{}`$ can be thought of as implying $`\mathrm{I}\mathrm{N}\mathrm{I}\mathrm{T}`$.) A set $`G`$ of global states satisfies an initial condition $`\mathrm{I}\mathrm{N}\mathrm{I}\mathrm{T}`$ if $`G\mathrm{I}\mathrm{N}\mathrm{I}\mathrm{T}`$. Suppose that we fix $`P_e`$, $`\tau `$, and $`\mathrm{\Psi }`$, that is, all the components of a context except the set of initial global states, and consider the family $`\mathrm{\Gamma }=\mathrm{\Gamma }(P_e,\tau ,\mathrm{\Psi })`$ of contexts of the form $`(P_e,𝒢_0,\tau ,\mathrm{\Psi })`$, where the set $`𝒢_0`$ varies over all subsets of global states. Now it seems reasonable to say that program $`\mathrm{𝖯𝗀}`$ satisfies specification $`\sigma `$ (with respect to $`\mathrm{\Gamma }`$) given initial condition INIT if $`\mathrm{𝖯𝗀}`$ satisfies $`\sigma `$ in every context in $`\mathrm{\Gamma }`$ whose initial global states satisfy $`\mathrm{I}\mathrm{N}\mathrm{I}\mathrm{T}`$. With this definition, it is clear that if $`\mathrm{𝖯𝗀}`$ satisfies $`\sigma `$ given $`\mathrm{I}\mathrm{N}\mathrm{I}\mathrm{T}`$, and INIT$`^{}`$ is a strengthening of $`\mathrm{I}\mathrm{N}\mathrm{I}\mathrm{T}`$, then $`\mathrm{𝖯𝗀}`$ must also satisfy $`\sigma `$ with respect to INIT$`^{}`$, since every context whose initial global states are in INIT$`^{}`$ also has its initial global states in $`\mathrm{I}\mathrm{N}\mathrm{I}\mathrm{T}`$. Thus, under this definition of what it means for a program to satisfy a specification, Sanders’ observation is incorrect. However, Sanders used a somewhat different definition. Suppose that rather than considering all contexts in $`\mathrm{\Gamma }`$ whose initial global states satisfy $`\mathrm{I}\mathrm{N}\mathrm{I}\mathrm{T}`$, we consider the maximal one, that is, the one whose set of initial global states consists of all global states in $`\mathrm{\Sigma }`$ that satisfy $`\mathrm{I}\mathrm{N}\mathrm{I}\mathrm{T}`$. We say that $`\mathrm{𝖯𝗀}`$ maximally satisfies specification $`\sigma `$ (with respect to $`\mathrm{\Gamma }`$) given $`\mathrm{I}\mathrm{N}\mathrm{I}\mathrm{T}`$ if $`\mathrm{𝖯𝗀}`$ satisfies $`\sigma `$ in the context in $`\mathrm{\Gamma }`$ whose set of initial global states consists of all global states satisfying $`\mathrm{I}\mathrm{N}\mathrm{I}\mathrm{T}`$. It is almost immediate from Lemma 4.1 and the definitions that for standard programs and standard specifications, “satisfaction with respect to $`\mathrm{\Gamma }`$” coincides with “maximal satisfaction with respect to $`\mathrm{\Gamma }`$”. On the other hand, they can be quite different for knowledge-based programs and knowledge-based specifications, as the following examples show. 
###### Example 4.2 : For the knowledge-based program $`\mathrm{𝖯𝗀}_2`$, if we take $`\mathrm{\Gamma }`$ to consist of all contexts $`(P_e,𝒢_0,\tau ,\mathrm{\Psi })`$, where $`P_e`$, $`\tau `$, and $`\mathrm{\Psi }`$ are as discussed in Example 2.2 and $`𝒢_0`$ is some subset of the global states, then, as we observed above, $`\mathrm{𝖯𝗀}_2`$ satisfies the specification “$`y`$ is never equal to 1” for the initial condition $`\mathrm{I}\mathrm{N}\mathrm{I}\mathrm{T}_1`$ which can be characterized by the formula $`y=0`$ but not for the initial condition $`\mathrm{I}\mathrm{N}\mathrm{I}\mathrm{T}_2`$ characterized by $`x=0y=0`$. Similarly, if $`\mathrm{𝖯𝗀}_3`$ is the result of replacing the test $`K(x=0)`$ in $`\mathrm{𝖯𝗀}_2`$ by $`\neg K(x=0)`$, then $`\mathrm{𝖯𝗀}_3`$ satisfies the liveness condition “$`y`$ is eventually equal to 1” for $`\mathrm{I}\mathrm{N}\mathrm{I}\mathrm{T}_1`$ but not for $`\mathrm{I}\mathrm{N}\mathrm{I}\mathrm{T}_2`$. This shows that a standard specification (in particular, one involving safety or liveness) may not be monotonic with respect to maximal specification for a knowledge-based program. ###### Example 4.3 : Consider the standard program $`\mathrm{𝖯𝗀}_1`$ again, but now consider a context where there are two agents. Intuitively, the second agent never learns anything and plays no role. Formally, this is captured by taking the second agent’s local state to always be $`\lambda `$. Thus, a global state now has the form $`(a,b,\lambda )`$. We can again identify the global state with the local state of the first agent (the one performing all the actions). Thus, abusing notation somewhat, we can consider the same set of contexts as in Example 4.2. Now consider the knowledge-based specification $`K_2(y=0)`$. This is true with respect to $`\mathrm{\Gamma }`$ for the initial condition $`\mathrm{I}\mathrm{N}\mathrm{I}\mathrm{T}_1`$ but not for $`\mathrm{I}\mathrm{N}\mathrm{I}\mathrm{T}_2`$. This shows that even for a standard program, a knowledge-based specification may not be monotonic with respect to maximal satisfaction. ###### Example 4.4 : In the muddy children problem discussed in , the father of the children says “Some \[i.e., one or more\] of you have mud on your forehead.” The father then repeatedly asks the children “Do you know that you have mud on your own forehead?” Thus, the children can be viewed as running a knowledge-based program according to which a child answers “Yes” iff she knows that she has mud on her forehead. The father’s initial statement is taken to restrict the possible initial global states to those where one or more children have mud on their foreheads. It is well known that, under this initial condition, the knowledge-based program satisfies the liveness property “all the children with mud on their foreheads eventually know it”. On the other hand, if the father instead gives the children more initial information, by saying “Child 1 has mud on his forehead” (thus restricting the set of initial global states to those where child 1 has mud on his forehead), none of the children that have mud on their forehead besides child 1 will be able to figure out that they have mud on their forehead. Roughly speaking, this is because the information available to the children from child 1’s “No” answer in the original version of the story is no longer available once the father gives the extra information. (See \[6, Example 7.25\].) This problem is not an artifact of using knowledge-based programs or specifications. 
Rather, it is really the case in the original puzzle that if the father had said “Child 1 has mud on his forehead” rather than “Some of you have mud on your foreheads”, the children with mud on their foreheads would never be able to figure out that they had mud on their foreheads. Sometimes extra knowledge can be harmful!<sup>5</sup><sup>5</sup>5Another example of the phenomenon that extra knowledge can be harmful can be found in . This is also a well-known phenomenon in the economics/game theory literature . As should be clear from the preceding discussion, there are two notions of monotonicity, which happen to coincide (and hold) for standard programs and specifications, but differ if we consider knowledge-based programs or knowledge-based specifications. For knowledge-based programs and specifications, the first notion of monotonicity holds, while the second (monotonicity with respect to maximal satisfaction) does not. Monotonicity is certainly a desirable property—for a monotonic specification and program, once we prove that the specification holds for the program for a given initial condition, then we can immediately conclude that it holds for all stronger specifications. Without monotonicity, one may have to reprove the property for all stronger initial conditions. Maximal satisfaction also certainly seems like a reasonable generalization from the standard case. Thus, we should consider to what extent it is a problem that we lose monotonicity for maximal satisfaction when we consider knowledge-based programs and specifications. Of course, whether something is problematic is, in great measure, in the eye of the beholder. Nevertheless, I would claim that, in the case of maximal satisfaction, the only properties that are lost when the initial condition is strengthened are either unimportant properties, or properties that, roughly speaking, ought to be lost. More precisely, they are properties that happen to be true of a particular context, but are not intrinsic properties of the program. The examples and the technical discussion below should help to make the point clearer. Thus, this lack of monotonicity should not be viewed as a defect of knowledge-based programs and specifications. Rather, it correctly captures the subtleties of knowledge acquisition in certain circumstances. ## 5 Some examples Consider again the program $`\mathrm{𝖯𝗀}_2`$. It can be viewed as saying “perform a sequence of actions (continually increasing $`y`$) if you know that $`x=0`$”. In the system $`𝐑(\mathrm{𝖯𝗀}_2,\gamma ^{})`$, the initial condition guarantees that the agent does not know the value of $`x`$, and thus nothing is done. The strengthening of the initial condition to $`x=0y=0`$ described by $`\gamma ^{\prime \prime }`$ guarantees that the agent does know that $`x=0`$, and thus actions are performed. In this case, we surely do not want a safety condition like “$`y`$ is never equal to 1”, which holds if the sequence of actions is not performed, to be preserved when we strengthen the initial condition in this way. Similarly, for the program $`\mathrm{𝖯𝗀}_3`$ defined in Example 4.2, where the action is performed if the agent does not know that $`x=0`$, we would not expect a liveness property like “$`y`$ is eventually equal to 1” to be preserved. Clearly, there are times when we would like a safety or a liveness property to be preserved when we strengthen initial conditions. 
But these safety or liveness properties are typically ones that we want to hold of all systems consistent with the knowledge-based program, not just the ones representing the program in certain maximal contexts. The tests in a well-designed knowledge-based program are often there precisely to ensure that desired safety properties do hold in all systems consistent with the program. For example, there may be a test for knowledge to ensure that an action is performed only if it is known to be safe (i.e., it does not violate the safety property). It is often possible to prove that such safety properties hold in all systems consistent with the knowledge-based program; thus, the issue of needing to reprove the property if we strengthen the initial conditions does not arise. (See \[6, pp. 259–270\] for further discussion of this issue.) In the case of liveness properties, we often want to ensure that a given action is eventually performed. It is typically the case that an action in a knowledge-based program is performed when a given fact is known to be true. Thus, the problem reduces to ensuring that the knowledge is eventually obtained. As a consequence, the knowledge-based approach often makes it clearer what is required for the liveness property to hold. One example of how safety properties can be ensured by appropriate tests for knowledge and how liveness properties reduce to showing that a certain piece of knowledge is eventually obtained is given by the knowledge-based programs of . I illustrate these points here using a simpler example. Suppose we have a network of $`n`$ processes, connected via a communication network. The network is connected, but not necessarily completely connected. For simplicity, assume each communication link is bidirectional. We assume that all messages arrive within one time unit. Each process knows which processes it is connected to; formally, this means that the local state of each process includes a mapping associating each outgoing link with the identity of the neighbor at the other end. We also assume that each process records in its local state the messages it has sent and received. We want a program for process 1 to broadcast a binary value to all the processes in the network. Formally, we assume that each process $`i`$ has a local variable, say $`x_i`$, which is intended to store the value. The specification that the program must satisfy consists of three properties. For every run, and for all $`i=1,\mathrm{},n`$, we require the following: 1. $`x_i`$ changes value at most once, 2. $`x_1`$ never changes value, and 3. eventually the value of $`x_i`$ is equal to that of $`x_1`$. Note that the first two properties are safety properties, and the last is a liveness property. A simple standard program that satisfies this specification is for process 1 to send $`v`$, the value of $`x_1`$, to all its neighbors; then the first time process $`i`$ ($`i1`$) gets the value $`v`$, it sets $`x_i`$ to $`v`$ and sends $`v`$ to all its neighbors except the one from which it received the message. Process $`i`$ does nothing if it later gets the value $`v`$ again. This program is easily seen to satisfy the specification in the context implicitly described above. We remark that, in principle, we could modify the first property to allow $`x_1`$ to change value a number of times before finally “stabilizing” on a final value. 
However, allowing this would only complicate the description of the property, since we would have to modify the third property to guarantee that the value of $`x_i`$ after stabilizing is equal to that of $`x_1`$. We return to this point below. The behavior of each process can easily be captured in terms of knowledge: When a process knows the value of $`x_1`$, it sends the value to all its neighbors except those that it knows already know the value of $`x_1`$. Let $`K_i(x_1)`$ be an abbreviation for “process $`i`$ knows the value of $`x_1`$”. (Thus, $`K_i(x_1)`$ is an abbreviation for $`K_i(x_1=0)K_i(x_1=1)`$.) Similarly, let $`K_iK_j(x_1)`$ be an abbreviation for “process $`i`$ knows that process $`j`$ knows the value of $`x_1`$.” Then we have the joint knowledge-based program $`\mathrm{𝖣𝖨𝖥𝖥𝖴𝖲𝖤}=(\mathrm{𝖣𝖨𝖥𝖥𝖴𝖲𝖤}_1,\mathrm{},\mathrm{𝖣𝖨𝖥𝖥𝖴𝖲𝖤}_n)`$, where $`\mathrm{𝖣𝖨𝖥𝖥𝖴𝖲𝖤}_i`$, the program followed by process $`i`$, is $$\begin{array}{c}\mathrm{𝐝𝐨}\mathrm{𝐟𝐨𝐫𝐞𝐯𝐞𝐫}\hfill \\ \mathrm{𝐢𝐟}K_i(x_1)\hfill \\ \mathrm{𝐭𝐡𝐞𝐧}\hfill \\ x_i:=x_1;\hfill \\ \text{for}\text{ each neighbor }j\text{ of }i\hfill \\ \mathrm{𝐝𝐨}\hfill \\ \mathrm{𝐢𝐟}\neg K_iK_j(x_1)\mathrm{𝐭𝐡𝐞𝐧}\text{ send the value of }x_1\text{ to }j\text{ }\text{end}\hfill \\ \mathrm{𝐞𝐧𝐝}\hfill \\ \mathrm{𝐞𝐧𝐝}\hfill \\ \mathrm{𝐞𝐧𝐝}.\hfill \end{array}$$ By considering this knowledge-based program, we abstract away from the details of how $`i`$ gains knowledge of the value of $`x_1`$. If $`i=1`$, then presumably the value was known all along; otherwise it was perhaps acquired through the receipt of a message. Similarly, the fact that $`i`$ sends the value of $`x_1`$ to a neighbor $`j`$ only if $`i`$ doesn’t know that $`j`$ knows the value of $`x_1`$ handles two of the details of the standard program: (1) it guarantees that $`i`$ does not send the value of $`x_1`$ to $`j`$ if $`i`$ received the value of $`x_1`$ from $`j`$, and (2) it guarantees that $`i`$ does not send the value of $`x_1`$ to its neighbors more than once.<sup>6</sup><sup>6</sup>6This argument depends in part on our assumption that process $`i`$ is keeping track of the messages it sends and receives. If $`i`$ forgets the fact that it received the value of $`x_1`$ from $`j`$ then (if $`i`$ follows $`\mathrm{𝖣𝖨𝖥𝖥𝖴𝖲𝖤}_i`$), it would send the value of $`x_1`$ back to $`j`$. Similarly, if $`i`$ receives the value of $`x_1`$ a second time and forgets that it has already sent it once to its neighbors, then according to $`\mathrm{𝖣𝖨𝖥𝖥𝖴𝖲𝖤}_i`$, it would send it again. In addition, the assumption that there are no process failures is crucial. Finally, observe that $`\mathrm{𝖣𝖨𝖥𝖥𝖴𝖲𝖤}`$ is correct even if messages can be lost, as long as the system satisfies an appropriate fairness assumption (if a message is sent infinitely often, it will eventually be delivered).<sup>7</sup><sup>7</sup>7Note that this fairness assumption can be captured by using an appropriate set $`\mathrm{\Psi }`$ (consisting only of runs where the fairness condition is satisfied) in the context. In this case process $`i`$ would keep sending the value of $`x_1`$ to $`j`$ until $`i`$ knows (perhaps by receiving an acknowledgment from $`j`$) that $`j`$ knows the value of $`x_1`$. The fact that $`\mathrm{𝖣𝖨𝖥𝖥𝖴𝖲𝖤}`$ is correct “even if messages can be lost” or “no matter what the network topology” means that the program meets its specification in a number of different contexts. This knowledge-based program has another advantage: it suggests ways to design more efficient standard programs. 
For example, process $`i`$ does not have to send the value of $`x_1`$ to all its neighbors (except the one from which it received the value of $`x_1`$) if it has some other way of knowing that a neighbor already knows the value of $`x_1`$. This may happen if the value of $`x_1`$ has a header describing to which processes it has already been sent. It might also happen if the receiving process has some knowledge of the network topology (for example, there is no need to rebroadcast the value of $`x_1`$ if communication is reliable and all processes are neighbors of process 1). Returning to our main theme, notice that in every context $`\gamma `$ consistent with our assumptions, in the system(s) representing $`\mathrm{𝖣𝖨𝖥𝖥𝖴𝖲𝖤}`$ in $`\gamma `$, the three properties described above are satisfied: $`x_i`$ changes value at most once in any run, $`x_1`$ never changes value, and eventually the value of $`x_i`$ is equal to that of $`x_1`$. Notice also the role of the test $`K_i(x_1)`$ in ensuring that the safety properties hold. As a result of the test, we know that $`x_i`$ is not updated until the value of $`x_1`$ is known; when it is updated, it is set to $`x_1`$. This guarantees that $`x_1`$ never changes value, and that $`x_i`$ changes value at most once and, when it does, it is set to $`x_1`$. All that remains is to guarantee that $`x_i`$ is eventually set to $`x_1`$. What the knowledge-based program makes clear is that this amounts to ensuring that all processes eventually know the value of $`x_1`$. It is easy to prove that this is indeed the case. It is also easy to see that there are other properties that do not hold in all contexts. For a simple example, suppose that $`n=3`$, so there are three processes in the network. Suppose that there is a link from process 1 to process 2, and a link from process 2 to process 3, and that these are the only links in the network. Moreover, suppose that the network topology is common knowledge. Given these simplifying assumptions, a process $`i`$’s initial state consists of an encoding of the network topology, its name, and the value of $`x_i`$. Now consider two contexts: in context $`\gamma _1`$, there are 8 initial global states, in which $`(x_1,x_2,x_3)`$ take on all values in $`\{0,1\}^3`$; in $`\gamma _2`$, there are 4 initial global states, in which $`(x_1,x_2,x_3)`$ take on all values in $`\{0,1\}^3`$ such that $`x_1=x_3`$. Intuitively, in context $`\gamma _2`$, process 3 knows the value of $`x_1`$ (since it is the same as the value of $`x_3`$, which is part of process 3’s initial state), while in $`\gamma _1`$, neither process 2 nor process 3 knows the value of $`x_1`$. Let $`\mathcal{R}_1=𝐑(\mathrm{𝖣𝖨𝖥𝖥𝖴𝖲𝖤},\gamma _1)`$ and let $`\mathcal{R}_2=𝐑(\mathrm{𝖣𝖨𝖥𝖥𝖴𝖲𝖤},\gamma _2)`$. It is not hard to see that $`\mathcal{R}_1`$ has eight runs, one corresponding to each initial global state. In each of these runs, process 1 sends the value of $`x_1`$ to process 2 in round 1; process 2 sets $`x_2`$ to this value in round 2 and forwards the value to process 3; in round 3, process 3 sets $`x_3`$ to this value (and sends no messages). (Note that, formally, round $`k`$ takes place between times $`k-1`$ and $`k`$.) Similarly, $`\mathcal{R}_2`$ has four runs, one corresponding to each initial global state. In these runs, process 3 initially knows the value of $`x_1`$, although process 2 does not. Moreover, process 2 knows this. Thus, in the first round of the runs in $`\mathcal{R}_2`$, both process 1 and process 3 send the value of $`x_1`$ to process 2. But now, process 2 does not send a message to process 3 in the second round.
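To make the two systems concrete, the following is a small simulation sketch (not part of the original development; all names are illustrative) that replays $`\mathcal{R}_1`$ and $`\mathcal{R}_2`$ for the three-process line network. Rather than computing knowledge from the possible-worlds semantics, the knowledge tests of $`\mathrm{𝖣𝖨𝖥𝖥𝖴𝖲𝖤}_i`$ are hand-coded from the reasoning above: a process “knows the value of $`x_1`$” once it holds that value (process 3 holds it initially in $`\gamma _2`$, since $`x_1=x_3`$ there and this is common knowledge), and “$`i`$ knows that $`j`$ knows the value of $`x_1`$” once $`i`$ has sent the value to $`j`$, has received it from $`j`$, or $`j`$ is known to hold it from the start.

```python
from itertools import product

NEIGHBORS = {1: [2], 2: [1, 3], 3: [2]}   # the line network 1 -- 2 -- 3

def run_diffuse(x1, x2, x3, x3_equals_x1_common):
    x = {1: x1, 2: x2, 3: x3}
    holds = {1: True, 2: False, 3: x3_equals_x1_common}        # "i knows the value of x_1"
    knows_that_knows = {(i, j): (j == 1) or (j == 3 and x3_equals_x1_common)
                        for i in NEIGHBORS for j in NEIGHBORS if i != j}
    messages = []
    for rnd in (1, 2, 3):                                      # three synchronous rounds suffice here
        to_send = []
        for i in NEIGHBORS:
            if holds[i]:
                x[i] = x1                                      # x_i := x_1
                for j in NEIGHBORS[i]:
                    if not knows_that_knows[(i, j)]:
                        to_send.append((rnd, i, j))
                        knows_that_knows[(i, j)] = True        # reliable delivery is assumed
        for _, i, j in to_send:                                # messages arrive within one time unit
            holds[j] = True
            knows_that_knows[(j, i)] = True
        messages += to_send
    return x, messages

for name, common in (("gamma_1", False), ("gamma_2", True)):
    for initial in product((0, 1), repeat=3):
        if common and initial[0] != initial[2]:
            continue                                           # gamma_2 keeps only states with x_1 = x_3
        final, msgs = run_diffuse(*initial, x3_equals_x1_common=common)
        print(f"{name}  initial={initial}  final={tuple(final.values())}  messages={msgs}")
```

Running it reproduces the runs described above: in $`\gamma _1`$ process 2 forwards the value to process 3 in round 2, while in $`\gamma _2`$ processes 1 and 3 both send to process 2 in round 1 and process 2 sends nothing afterwards.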
As expected, we can observe that not all liveness properties are preserved as we move from $`\mathcal{R}_1`$ to $`\mathcal{R}_2`$. For example, the runs in $`\mathcal{R}_1`$ all satisfy the liveness property “eventually process 2 sends a message to process 3”. Clearly the runs in $`\mathcal{R}_2`$ do not satisfy this liveness property. This should be seen as a feature, not a bug! There is no reason to preserve the sending of unnecessary messages. The extra knowledge obtained when the initial conditions are strengthened may render sending the message unnecessary. ## 6 Discussion When designing programs, we often start with a specification and try to find an (easily-implementable) standard program that satisfies it. The process of going from a specification to an implementation is often a difficult one. I would argue that quite often it is useful to express the properties we desire using a knowledge-based specification, proceed from there to construct a knowledge-based program, and then go from the knowledge-based program to a standard program. While this approach may not always be helpful (indeed, if a badly designed knowledge-based program is used, then it may actually be harmful), there is some evidence showing that it can help. The first examples of going from knowledge-based specifications to (standard) programs can be found in (although the formal model used in is somewhat different from that described here). The approach described here was used in to derive solutions to the sequence transmission problem (the problem of transmitting a sequence of bits reliably over a possibly faulty communication channel). All the programs derived in are (variants of) well-known programs that solved the problem. While I would argue that the knowledge-based approach shows the commonality in the approaches used to solve the problem, and allows for easier and more uniform proofs of correctness, certainly this example by itself is not convincing evidence of the power of the knowledge-based approach. Perhaps more convincing evidence is provided by the results of , where this approach is used to derive programs that are optimal (in terms of number of rounds required) for Byzantine Agreement and Eventual Byzantine Agreement. In this case, the programs derived were new, and it seems that it would have been quite difficult to derive them directly from the original specifications. Knowledge-based specifications are more prevalent than it might at first seem. We are often interested in constructing programs that not only satisfy some safety and liveness conditions, but also use a minimal number of messages or rounds. As we have already observed, specifications of the form “do not send unnecessary messages” are not standard specifications; the same is true for a specification of the form “halt as soon as possible”. Such specifications can be viewed as knowledge-based specifications. The results of can be viewed as showing how knowledge-based specifications arise in the construction of round-efficient programs. The tests for knowledge in the knowledge-based programs described in these papers explicitly embody the intuition that a process decides as soon as it is safe to do so.
Similar sentiments about the importance of knowledge-based specifications are expressed by Mazer (although the analogy between knowledge-based programs and knowledge-based specifications is not made in that paper): > Epistemic \[i.e., knowledge-based\] specifications are surprisingly common: a problem specification that asserts that a property or value is private to some process is an epistemic specification (e.g., “each database site knows whether it has committed the transaction”). We are also interested in epistemic properties to capture assertions on the extent to which a process’s local state accurately reflects aspects of the system state, such as “each database site knows whether the others have committed the transaction”. For another example of the usefulness of knowledge-based specifications, recall our earlier discussion of the specification of the program for broadcasting a message through a network. If we replace the liveness requirements by the simple knowledge-based requirement “eventually process $`i`$ knows the value of $`x_1`$”, we can drop the first property (that $`x_i`$ changes value at most once) altogether. Indeed, we do not have to mention $`x_i`$, $`i1`$, at all. The knowledge-based specification thus seems to capture our intuitive requirements for the program more directly and elegantly than the standard specification given. A standard specification can be viewed as a special case of a knowledge-based specification, one in which the set of systems satisfying it is closed under unions and subsets. It is because of these closure properties that we have the property if a standard program satisfies a standard specification $`\sigma `$ in a context $`\gamma `$, then it satisfies it in any restriction of $`\gamma `$. Clearly, this is not a property that holds of standard programs once we allow knowledge-based specifications. Nevertheless, as the examples above suggest, there is something to be gained—and little to be lost—by allowing the greater generality of knowledge-based specifications. In particular, although we do lose monotonicity, there are other ways of ensuring that safety and liveness properties do hold in the systems of interest. By forcing us to think in terms of systems, rather than of individual runs, both knowledge-based programs and knowledge-based specifications can be viewed as requiring more “global” thinking than their standard counterparts. The hope is that thinking at this level of abstraction makes the design and specification of programs easier to carry out. We still need more experience using this framework before we can decide whether this hope will be borne out and whether the knowledge-based approach as described here is really useful. Sanders has other criticisms of the use of knowledge-based programs that I have not addressed here. Very roughly, she provides pragmatic arguments that suggest that we use predicates that have some of the properties of knowledge (for example $`K\phi \phi `$), but not necessarily all of them. This theme is further pursued in . While I believe that using predicates that satisfy some of the properties of knowledge will not prove to be as useful as sticking to the original notion of knowledge, we clearly need more examples to better understand the issues. Besides more examples, as pointed out by Sanders , it would also be useful to have techniques for reasoning about knowledge-based programs without having to construct the set of runs generated by the program. 
In , a simple knowledge-based programming language is proposed. Perhaps standard techniques for proving program correctness can be applied to it (or some variant of it). A first step along these lines was taken by Sanders , who extended UNITY in such a way as to allow the definition of knowledge predicates (although it appears that the resulting knowledge-based programs are somewhat less general than those described here), and then used proof techniques developed for UNITY to prove the correctness of another knowledge-based protocol for the sequence transmission problem. (We remark that techniques for reasoning about knowledge obtained in CSP programs, but not for knowledge-based programs, were given in .) Once we have a number of examples and better techniques in hand, we shall need to carry out a careful evaluation of the knowledge-based approach, and a comparison of it and other approaches. I believe that once the evidence is in, it will show that there are indeed significant advantages that can be gained by thinking at the knowledge level. Acknowledgments: I would like to thank Ron Fagin, Yoram Moses, Beverly Sanders, and particularly Vassos Hadzilacos, Murray Mazer, Moshe Vardi, and Lenore Zuck for their helpful comments on earlier drafts of the paper. Moshe gets the credit for the observation that knowledge-based protocols do satisfy monotonicity. Finally, I would like to thank Karen Seidel for asking a question at PODC ’91 that inspired this paper.
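As a concrete companion to Example 4.4, here is a small possible-worlds sketch (not from the paper; purely illustrative) of the muddy children protocol. A “world” assigns mud to each child; a child answers “Yes” exactly when every world compatible with what she sees and with the public history of answers agrees on her own forehead, and each public round of answers prunes the set of worlds still held to be possible.

```python
from itertools import product

def muddy_children(actual, fathers_statement, max_rounds=6):
    n = len(actual)
    worlds = {w for w in product((0, 1), repeat=n) if fathers_statement(w)}

    def answers(w, worlds):
        # what the children would say in world w, given the current common-knowledge set
        out = []
        for i in range(n):
            compatible = {v for v in worlds
                          if all(v[j] == w[j] for j in range(n) if j != i)}
            out.append(len({v[i] for v in compatible}) == 1)    # own status determined?
        return tuple(out)

    for rnd in range(1, max_rounds + 1):
        said = answers(actual, worlds)
        print(f"  round {rnd}: answers = {said}")
        if all(said[i] for i in range(n) if actual[i]):
            print("  -> every muddy child knows")
            return
        # the answers are public, so worlds in which they would have differed are discarded
        worlds = {w for w in worlds if answers(w, worlds) == said}
    print("  -> some muddy child never finds out")

actual = (1, 1, 0)                      # children 1 and 2 are muddy, child 3 is clean
print("father: 'some of you have mud on your forehead'")
muddy_children(actual, lambda w: any(w))
print("father: 'child 1 has mud on his forehead'")
muddy_children(actual, lambda w: w[0] == 1)
```

With the weaker statement both muddy children answer “Yes” in round 2; with the stronger statement no world is ever pruned and child 2 never learns, exactly the behavior described in Example 4.4.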
no-problem/0003/hep-ph0003031.html
ar5iv
text
# Timelike and spacelike QCD characteristics of the 𝑒⁺⁢𝑒⁻ annihilation process ## Acknowledgement The authors would like to thank A.L. Kataev, D.V. Shirkov and A.N. Sissakian for useful discussions and interest in this work. Partial support of the work by the US National Science Foundation, grant PHY-9600421, by the U.S. Department of Energy, Grant DE-FG-03-98ER41066, by the University of Oklahoma, and by the RFBR, Grants 99-02-17727 and 99-01-00091, is gratefully acknowledged.
no-problem/0003/hep-ph0003078.html
ar5iv
text
## 1 Introduction The last two decades of field theory development have been marked by considerable efforts to avoid the problem of color charge confinement by formulating a closed hadron field theory. A remarkable attempt is based on the string model, in its various realizations . But, in spite of remarkable success (especially in formalism), there are no experimentally measurable predictions of this approach so far, e.g. . The string model is a natural consequence of the old dual resonance model and we hope that our toy approach includes the main characteristic features of this model. We would like to describe in this paper the production of ‘stable’ hadrons through the decay of resonances. This channel was first considered in the papers . Our consideration will use the following assumptions. A. The string interpretation of the dual-resonance model leads to the observation that the mass spectrum of resonances, i.e. the total number $`\rho (m)`$ of mass $`m`$ resonance excitations, grows exponentially: $$\rho (m)=(m/m_0)^{-\gamma }e^{\beta _0m},\beta _0=\mathrm{const},m>m_0.$$ (1.1) Note also that the same hadron mass spectrum (1.1) was predicted in the ‘bootstrap’ approach . It predicts that $$\gamma =5/2.$$ (1.2) The other assumptions are based on the ordinary (resonance $`\leftrightarrow `$ Regge pole) duality. B. The ‘probability’ of mass $`m`$ resonance creation $`\sigma ^R(m)`$ has the Regge pole asymptotics: $$\sigma ^R(m)=g^R\frac{m_0}{m},m\gg m_0\simeq 0.2\mathrm{GeV},g^R=\mathrm{const}.$$ (1.3) It was assumed here that the intercept of the Regge pole trajectory is $`\alpha ^R=1/2`$. So, only meson resonances will be taken into account. C. If $`\sigma _n^R(m)`$ describes the decay of a mass $`m`$ resonance into $`n`$ hadrons, then the mean multiplicity of hadrons is $$\overline{n}^R(m)=\frac{\sum _nn\sigma _n^R(m)}{\sigma ^R(m)}.$$ (1.4) Following the Regge model, $$\overline{n}^R(m)=\overline{n}_0^R\mathrm{ln}\frac{m^2}{m_0^2}.$$ (1.5) D. We will assume that there is a definite vicinity of $`\overline{n}^R(m)`$ where $`\sigma _n^R(m)`$ is defined by $`\overline{n}^R(m)`$ only, i.e. in this vicinity $$\sigma _n^R(m)=\sigma ^R(m)e^{-\overline{n}^R(m)}(\overline{n}^R(m))^n/n!.$$ (1.6) This is a direct consequence of the Regge pole model, if $`m/m_0`$ is high enough. The connection between the $`S`$-matrix approach and real-time statistics (finite temperature field theories) will be used to formulate our model quantitatively. This interpretation will be useful since it allows one to formulate the description in terms of a few parameters only. All these statistical parameters are expressed through the created particles’ energies and momenta. We will also use the virial decomposition technique. It was extremely effective for the description of the critical region of phase transitions, where the correlation radii tend to infinity. The Mayer decomposition over ‘connected groups’ is well known in this connection . Following our idea, we will distinguish the short-range correlations among hadrons and the long-range correlations among resonances. The ‘connected groups’ will be described by resonances (strings), and the interactions among them will be described by introducing correlation functions among strings. So, we will consider a ‘two-level’ model of hadron creation: the first level describes the short-range correlations among hadrons and the second level is connected to the correlations among strings.
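As a quick numerical illustration of assumptions C and D (a sketch with an assumed value of $`\overline{n}_0^R`$, not a fit), the mean multiplicity (1.5) and the Poisson decay distribution (1.6) can be tabulated for a few resonance masses:

```python
import math

nbar0_R, m0 = 1.0, 0.2                       # assumed normalization and m_0 in GeV
for m in (2.0, 10.0, 50.0):                  # illustrative resonance masses in GeV
    nbar = nbar0_R * math.log(m ** 2 / m0 ** 2)            # Eq. (1.5)
    dist = [math.exp(-nbar) * nbar ** n / math.factorial(n) for n in range(40)]
    most_probable = max(range(40), key=dist.__getitem__)   # Eq. (1.6)
    print(f"m = {m:5.1f} GeV   nbar^R(m) = {nbar:5.2f}   "
          f"most probable n = {most_probable}   P = {dist[most_probable]:.3f}")
```

The logarithmic growth of $`\overline{n}^R(m)`$ with mass is what later converts the activity factor $`e^{(z-1)\overline{n}^R(m)}`$ into a power of $`m`$.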
## 2 Asymptotic estimations Our purpose is to investigate the role of exponential spectrum (1.1) in the asymptotics over multiplicity $`n`$. In this case one can valid heavy resonances creation and such formulation of the problem have definite advantage. (i) If creation of heavy resonances at $`n\mathrm{}`$ is expected, then one can neglect the dependence on resonances momentum $`𝐪_𝐢`$ . So, the ‘low-temperature’ expansion is valid in the VHM region. (ii) Having the big parameter $`n`$ one can construct the perturbations expanding over $`1/n`$. We will see that there is a wide domain for $`n`$, where one can neglect resonance correlations. (iii) We will be able to show at the end the range of applicability of this assumptions. For this purpose following formal phenomena will be used. Let us introduce the ‘grand partition function’ $$\mathrm{\Xi }(z,s)=\underset{n}{}z^n\sigma _n(s),\mathrm{\Xi }(1,s)=\sigma _{tot}(s),n\sqrt{s}/m_0n_{max}(s),$$ (2.7) and let us assume that just $`\mathrm{\Xi }`$ is known. Then, using the inverse Mellin transformation, $$\sigma _n(s)=\frac{1}{2\pi i}\frac{dz}{z^{n+1}}\mathrm{\Xi }(z,s).$$ (2.8) This integral will be computed expanding it in vicinity of solution $`z_c>0`$ of equation: $$n=z\frac{}{z}\mathrm{ln}\mathrm{\Xi }(z,s).$$ (2.9) It is assumed, and this should be confirmed at the end, that the fluctuations in vicinity of $`z_c`$ are Gaussian. It is natural at first glance to consider $`z_c=z_c(n,s)`$ as the increasing function of $`n`$. Indeed, this immediately follows from positivity of $`\sigma _n(s)`$ and finiteness of $`n_{max}(s)`$ at finite $`s`$. But one can consider the limit $`m_00`$. Theoretically this limit is rightful because of PCAC hypotheses and nothing should be happen if the pion mass $`m_00`$. In this sense $`\mathrm{\Xi }(z,s)`$ may be considered as the whole function of $`z`$. Then, $`z_c=z_c(n,s)`$ would be increasing function of $`n`$ if and only if $`\mathrm{\Xi }(z,s)`$ is regular function at $`z=1`$. The prove of this statement is as follows. We should conclude, as follows from eq.(2.9), that $$z_c(n,s)z_s\mathrm{at}n\mathrm{},\mathrm{and}\mathrm{at}s=const,$$ (2.10) i.e. the singularity points $`z_s`$ $`attracts`$ $`z_c`$ in asymptotics over $`n`$. If $`z_s=1`$, then $`(z_cz_s)+0`$, when $`n`$ tends to infinity . But if $`z_s>1`$, then $`(z_cz_s)0`$ in VHM region. On may find the estimation: $$\frac{1}{n}\mathrm{ln}\frac{\sigma _n(s)}{\sigma _{tot}(s)}=\mathrm{ln}z_c(n,s)+O(1/n),$$ (2.11) where $`z_c`$ is the $`smallest`$ solution of (2.9). It should be underlined that this estimation is $`independent`$ on the character of singularity, i.e. the position $`z_s`$ only is important. ## 3 Partition function Introducing the ‘grand partition function’ (2.7) the ‘two-level’ description means that $`\mathrm{ln}{\displaystyle \frac{\mathrm{\Xi }(z,\beta )}{\sigma _{tot}(s)}}=\beta (z,s)=`$ $`={\displaystyle \underset{k}{}}{\displaystyle \frac{1}{k!}}{\displaystyle \underset{i=1}{\overset{k}{}}\left\{\frac{d^3q_idm_i\xi (q_i,z)e^{\beta \epsilon _i}}{(2\pi )^32\epsilon _i}\right\}N_k(q_1,q_2,\mathrm{},q_k;\beta )},`$ (3.12) where $`\epsilon _i=\sqrt{q_i^2+m_i^2}`$. This is our virial decomposition. Indeed, by definition $$\mathrm{\Xi }(z,s)|_{\xi =1}=\sigma _{tot}(s).$$ (3.13) The quantity $`\xi (q,z)`$ may be considered as the local activity. 
So, $$\frac{\delta \mathrm{\Xi }}{\delta \xi (q,z)}|_{\xi =1}\sigma _{tot}N_1(q)$$ (3.14) So, if decay of resonances form a group with 4-momentum $`q`$, then $`N_1(q)`$ is the mean number of such groups. The second derivative gives: $$\frac{\delta ^2\mathrm{\Xi }}{\delta \xi (q_1,z)\delta \xi (q_2,z)}|_{\xi =1}\sigma _{tot}\{N_2(q_1,q_2)N_1(q_1)N_1(q_2)\}\sigma _{tot}K_2(q_1,q_2)$$ (3.15) where $`K_2(q_1,q_2)`$ is two groups correlation function, and so on. The Lagrange multiplier $`\beta `$ was introduced in (3.12) to each resonance: the Bolzmann exponent $`\mathrm{exp}\{\beta \epsilon \}`$ takes into account the energy conservation law $`_i\epsilon _i=E,`$ where $`E`$ is the total energy of colliding particles, $`2E=\sqrt{s}`$ in the CM frame. This conservation law means that $`\beta `$ is defined by equation: $$\sqrt{s}=\frac{}{\beta }\mathrm{ln}\mathrm{\Xi }(z,\beta ).$$ (3.16) So, to define the state one should solve two equations of state (2.9) and (3.16). The solution $`\beta _c`$ of the eq.(3.16) have meaning of inverse temperature of gas of strings if and only if the fluctuations in vicinity of $`\beta _c`$ are Gaussian. On the second level we should describe the resonances decay onto hadrons. Using (1.6) we can write in some vicinity of $`z=1`$: $$\xi (q,z)=\underset{n}{}z^n\sigma _N^R(q)=g^R(\frac{m_0}{m})e^{(z1)\overline{n}(m)},m=|q|.$$ (3.17) The assumptions B and D was used here. So, $$\beta (z,s)=\underset{k}{}\underset{i=1}{\overset{k}{}}\{dm_i^2\xi (m_i,z)\}\stackrel{~}{N}_k(m_1,m_2,\mathrm{},m_k;\beta ),$$ (3.18) where $`\xi `$ was defined in (3.17) and $$\stackrel{~}{N}_k(m_1,m_2,\mathrm{},m_k;\beta )=\underset{i=1}{\overset{k}{}}\left\{\frac{d^3q_ie^{\beta \epsilon _i(q_i)}}{2\epsilon _i(q_i)}\right\}N_k(q_1,q_2,\mathrm{},q_k;m_1,m_2,\mathrm{},m_k).$$ (3.19) Assuming now that $`|q_i|<<m`$ are essential, $$\stackrel{~}{N}_k(m_1,m_2,\mathrm{},m_k;\beta )N_k^{}(m_1,m_2,\mathrm{},m_k)\underset{i=1}{\overset{k}{}}\left\{\sqrt{\frac{2m_i}{\beta ^3}}e^{\beta m_i}\right\}$$ (3.20) Following to the duality assumption one may assume that $$N_k^{}(m_1,m_2,\mathrm{},m_k)=\overline{N}_k(m_1,m_2,\mathrm{},m_k)\underset{i=1}{\overset{k}{}}\left\{m_i^\gamma e^{\beta _0m_i}\right\}$$ (3.21) and $`\overline{N}_k(m_1,m_2,\mathrm{},m_k)`$ is slowly varying function: $$\overline{N}_k(m_1,m_2,\mathrm{},m_k)C_k$$ In result the low-temperature expansion looks as follows: $$\beta (z,s)=\underset{k}{}\frac{2^{k/2}m_0^k(g^R)^kC_k}{\beta ^{3k/2}}\left\{_{m_0}^{\mathrm{}}𝑑mm^{\gamma +3/2}e^{(z1)\overline{n}^R(m)(\beta \beta _0)m}\right\}^k.$$ (3.22) We should assume that $`(\beta \beta _0)0`$. In this sense one may consider $`1/\beta _0`$ as the limiting temperature and above mentioned constraint means that the string energies should be high enough. ## 4 Thermodynamical parameters Remembering that the position of singularity over $`z`$ is essential only, let us assume that the resonance interactions can not renormalize it. 
Then, keeping only the first term in the sum (3.22), $$\beta (z,s)=\frac{m_0g^RC_1}{\beta ^{3/2}}\int _{m_0}^{\infty }dm(m/m_0)^{-\gamma +3/2}e^{(z-1)\overline{n}^R(m)-(\beta -\beta _0)m}.$$ (4.23) We expect that this assumption holds if $$n\to \infty ,s\to \infty ,\frac{nm_0}{\sqrt{s}}\equiv \frac{n}{n_{max}}<<1.$$ (4.24) So, we will solve our equations of state with the following ‘free energy’: $$\beta (z,s)=\frac{\alpha }{\beta ^{3/2}}\int _{m_0}^{\infty }d(\frac{m}{m_0})(\frac{m}{m_0})^{\gamma ^{}-1}e^{-\mathrm{\Delta }(m/m_0)},$$ (4.25) where, using (1.2), $$\gamma ^{}=-\gamma +2(z-1)\overline{n}_0^R+5/2=2(z-1)\overline{n}_0^R,\mathrm{\Delta }=m_0(\beta -\beta _0)\geq 0,\alpha =const.$$ (4.26) In terms of these new variables we have the following equation for $`z`$, $$n=z\frac{2\alpha \overline{n}_0^R}{\beta ^{3/2}}\frac{\partial }{\partial \gamma ^{}}\frac{\mathrm{\Gamma }(\gamma ^{},\mathrm{\Delta })}{\mathrm{\Delta }^{\gamma ^{}}}.$$ (4.27) The equation for $`\beta `$ takes the form: $$n_{max}=\frac{\alpha m_0}{\beta ^{3/2}}\frac{\mathrm{\Gamma }(\gamma ^{}+1,\mathrm{\Delta })}{\mathrm{\Delta }^{\gamma ^{}+1}},$$ (4.28) where $`n_{max}=(\sqrt{s}/m_0)`$ and $`\mathrm{\Gamma }(\gamma ^{},\mathrm{\Delta })`$ is the incomplete $`\mathrm{\Gamma }`$-function: $$\mathrm{\Gamma }(\gamma ^{},\mathrm{\Delta })=\int _\mathrm{\Delta }^{\infty }dxx^{\gamma ^{}-1}e^{-x}.$$ ## 5 Asymptotic solutions Following physical intuition, one should expect cooling of the system when $`n\to \infty `$ (at fixed $`\sqrt{s}`$) and heating when $`n_{max}\to \infty `$ (at fixed $`n`$). But, as was mentioned above, since the solution $`\beta _c`$ of eq.(4.28) is defined by the value of the total energy, one should expect that $`\beta _c`$ decreases in both cases. So, the solution $$\mathrm{\Delta }_c\to 0,\frac{\partial \mathrm{\Delta }_c}{\partial n}<0\mathrm{at}n\to \infty ,\frac{\partial \mathrm{\Delta }_c}{\partial s}<0\mathrm{at}s\to \infty $$ (5.29) is natural for our consideration. The physical meaning of $`z`$ is activity. It defines, at $`\beta =const`$, the work needed for one particle’s creation. Then, if the system is stable and $`\mathrm{\Xi }(z,s)`$ may be singular at $`z>1`$ only, $$\frac{\partial z_c}{\partial n}>0\mathrm{at}n\to \infty ,\frac{\partial z_c}{\partial s}<0\mathrm{at}s\to \infty .$$ (5.30) Solving equations (4.27) and (4.28), one should assume that $$z_c\mathrm{\Delta }^{\gamma _c^{}+1}\frac{\partial }{\partial \gamma _c^{}}\frac{\mathrm{\Gamma }(\gamma _c^{},\mathrm{\Delta }_c)}{\mathrm{\Delta }_c^{\gamma _c^{}}}<<\mathrm{\Gamma }(\gamma _c^{}+1,\mathrm{\Delta }_c).$$ (5.31) This condition expresses the physical requirement that $`n<<n_{max}`$. In the opposite case the finiteness of the phase space for $`m_0\ne 0`$ should be taken into account. As was mentioned above, the singularity $`z_s`$ attracts $`z_c`$ at $`n\to \infty `$. For this reason one may consider the following solutions. A. $`z_s=\infty `$: $`z_c>>\mathrm{\Delta }`$, $`\mathrm{\Delta }<<1`$. In this case $$\mathrm{\Delta }^{-\gamma ^{}}\mathrm{\Gamma }(\gamma ^{},\mathrm{\Delta })\simeq e^{\gamma ^{}\mathrm{ln}(\gamma ^{}/\mathrm{\Delta })}.$$ (5.32) This estimate gives the following equations: $$n=C_1\gamma ^{}\mathrm{ln}(\gamma ^{}/\mathrm{\Delta })e^{\gamma ^{}\mathrm{ln}(\gamma ^{}/\mathrm{\Delta })},\frac{n}{n_{max}}=C_2\mathrm{\Delta }\gamma ^{}\mathrm{ln}(\frac{\gamma ^{}}{\mathrm{\Delta }})<<1,$$ (5.33) where $`C_i=O(1)`$ are unimportant constants. The inequality is a consequence of (5.31).
These equations have the following solutions: $$\mathrm{\Delta }_c\simeq \frac{n}{n_{max}\mathrm{ln}n}<<1,\gamma _c^{}\simeq \mathrm{ln}n>>1.$$ (5.34) Using this solution one can see from (2.11) that it gives $$\sigma _n<O(e^{-n}).$$ (5.35) B. $`z_s=+1`$: $`z_c\to 1`$, $`\mathrm{\Delta }_c<<1`$. One should estimate $`\mathrm{\Gamma }(\gamma ^{},\mathrm{\Delta })`$ near the singularity at $`z=1`$ and in the vicinity of $`\mathrm{\Delta }=0`$ to consider the consequence of this solution. Expanding $`\mathrm{\Gamma }(\gamma ^{},\mathrm{\Delta })`$ over $`\mathrm{\Delta }`$ at $`\gamma ^{}\to 0`$, $$\mathrm{\Gamma }(\gamma ^{},\mathrm{\Delta })=\mathrm{\Gamma }(\gamma ^{})-\mathrm{\Delta }^{\gamma ^{}}e^{-\mathrm{\Delta }}+O(\mathrm{\Delta }^{\gamma ^{}+1})\simeq \frac{1}{\gamma ^{}}+O(1).$$ (5.36) This gives the following equation for $`\gamma ^{}`$: $$n=C_1^{}\frac{\gamma ^{}\mathrm{ln}(1/\mathrm{\Delta })-1}{\gamma ^{}}e^{\gamma ^{}\mathrm{ln}(1/\mathrm{\Delta })}.$$ (5.37) The equation for $`\mathrm{\Delta }`$ has the form: $$n_{max}=C_2^{}e^{(\gamma ^{}+1)\mathrm{ln}(1/\mathrm{\Delta })}.$$ (5.38) Here $`C_i^{}=O(1)`$ are unimportant constants. At $$0<\gamma ^{}\mathrm{ln}(1/\mathrm{\Delta })-1<<1,\mathrm{i}.\mathrm{e}.\mathrm{at}\mathrm{ln}(1/\mathrm{\Delta })<<n<<\mathrm{ln}^2(1/\mathrm{\Delta }),$$ (5.39) we find: $$\gamma _c^{}\simeq \frac{1}{\mathrm{ln}(1/\mathrm{\Delta }_c)}.$$ (5.40) Inserting this solution into (5.38): $$\mathrm{\Delta }_c\simeq \frac{1}{n_{max}}.$$ (5.41) It is remarkable that $`\mathrm{\Delta }_c`$ in the leading approximation is $`n`$ independent. For this reason $`\gamma _c^{}`$ becomes $`n`$ independent also: $$\gamma _c^{}\simeq \frac{1}{\mathrm{ln}(n_{max})}:z_c=1+\frac{1}{\overline{n}_0^R\mathrm{ln}(n_{max})}.$$ (5.42) This means that $$\sigma _n=O(e^{-n})$$ (5.43) and obeys KNO scaling with mean multiplicity $`\overline{n}=\overline{n}_0^R\mathrm{ln}(n_{max})`$. ## 6 Conclusion Comparing the A and B solutions we can see the change of attraction points with rising $`n`$: at $`n\sim \overline{n}^2(s)`$, where $`\overline{n}(s)=\overline{n}_0^R\mathrm{ln}(\sqrt{s}/m_0)`$, the transition from the (5.43) asymptotics to (5.35) should be seen. At the same time one should see a strong violation of KNO scaling at the tail of the multiplicity distribution. A comparison of the above model with experimental data at moderate and high energies will be given in subsequent papers. We have neglected the string interactions and the final-state particle interactions in deriving these results. This assumption seems natural since $`z_c-1<<1`$ is essential at $`\overline{n}<<n<<\overline{n}^2`$. For this reason one can neglect higher powers of $`(z_c-1)`$ in the expansion of $`\mathrm{ln}\xi (z,m)`$ over $`(z_c-1)`$. Therefore, describing $`\xi (z,m)`$ we may restrict ourselves to the Poisson distribution (1.6). At the same time, at first glance, we cannot neglect in (3.22) the contributions with $`k>1`$ in the moderate region $`\overline{n}<<n<<\overline{n}^2`$. Indeed, the $`k`$-th order term in (3.22) is $$\mathrm{\Gamma }^k(\gamma _c^{},\mathrm{\Delta }_c)\sim (\frac{1}{\gamma _c^{}})^k\sim (\mathrm{ln}\mathrm{\Delta }_c)^k\sim (\mathrm{ln}n_{max})^k\sim (\mathrm{ln}(s/m_0^2))^k>>1$$ Nevertheless it can be shown that the higher terms with $`k>1`$ cannot change our semi-qualitative conclusion. This question will be considered later. Acknowledgments We are grateful to V.G.Kadyshevski for his interest in the questions discussed in this paper. We would also like to note with gratitude that the discussions with E.Kuraev were interesting and important.
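A rough numerical companion to Sections 4 and 5 (a sketch only: the constants $`\alpha `$, $`m_0`$, $`\beta _0`$, $`\overline{n}_0^R`$ and the targets below are assumed, not taken from data) is to solve the two equations of state (4.27)-(4.28) by brute force on a grid in $`(\gamma ^{},\mathrm{\Delta })`$ and then read off the slope $`-\mathrm{ln}z_c`$ of $`(1/n)\mathrm{ln}(\sigma _n/\sigma _{tot})`$ from (2.11):

```python
import numpy as np
from scipy.special import gamma as Gamma, gammaincc

def inc_gamma(a, x):                     # upper incomplete Gamma(a, x)
    return Gamma(a) * gammaincc(a, x)

def F(gp, delta):                        # Gamma(gamma', Delta) / Delta**gamma'
    return inc_gamma(gp, delta) / delta ** gp

def dF_dgp(gp, delta, h=1e-4):           # derivative appearing in (4.27), central difference
    return (F(gp + h, delta) - F(gp - h, delta)) / (2 * h)

alpha, m0, beta0, nbar0 = 1.0, 0.2, 5.0, 1.0          # assumed constants (GeV units for m0)
n_target, nmax_target = 30.0, 500.0                   # wanted n and sqrt(s)/m0

best = None
for gp in np.linspace(0.05, 3.0, 120):                # gamma' = 2(z-1)*nbar0 > 0
    z = 1.0 + gp / (2.0 * nbar0)
    for delta in np.logspace(-4.0, 0.0, 120):         # Delta = m0*(beta - beta0) > 0
        beta = beta0 + delta / m0
        n_rhs = z * 2.0 * alpha * nbar0 / beta ** 1.5 * dF_dgp(gp, delta)
        nmax_rhs = alpha * m0 / beta ** 1.5 * inc_gamma(gp + 1.0, delta) / delta ** (gp + 1.0)
        resid = np.log(n_rhs / n_target) ** 2 + np.log(nmax_rhs / nmax_target) ** 2
        if best is None or resid < best[0]:
            best = (resid, z, beta, gp, delta)
resid, z_c, beta_c, gp_c, delta_c = best
print(f"z_c = {z_c:.3f}   beta_c = {beta_c:.3f}   gamma'_c = {gp_c:.3f}   Delta_c = {delta_c:.2e}")
print(f"(1/n) ln(sigma_n/sigma_tot) ~ -ln z_c = {-np.log(z_c):.3f}")
```

The point of the exercise is only to show how $`z_c`$, and through (2.11) the large-$`n`$ slope of the multiplicity distribution, follows from the two equations of state once the constants are fixed.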
no-problem/0003/hep-ph0003299.html
ar5iv
text
# CP and T violation in neutrino oscillations ## Abstract The conditions to induce appreciable CP- and T-odd effects in neutrino oscillations are discussed. The propagation in matter leads to fake CP- and CPT-odd asymmetries, besides a Bohm-Aharonov type modification of the interference pattern. We study the separation of fake and genuine CP violation by means of energy and distance dependence. Complex neutrino mixing for three neutrino families originates CP and T violation in neutrino oscillations. The requirement of CPT invariance leads to the condition $`A\left(\overline{\alpha }\to \overline{\beta };t\right)=A^{*}\left(\alpha \to \beta ;-t\right)`$ for the probability amplitude between flavour states, so that CP or T violation effects can take place in Appearance Experiments only. CPT invariance and the Unitarity of the Mixing Matrix imply that the CP-odd probability $$D_{\alpha \beta }=\left|A\left(\alpha \to \beta ;t\right)\right|^2-\left|A\left(\overline{\alpha }\to \overline{\beta };t\right)\right|^2$$ (1) is unique for three flavours: $`D_{e\mu }=D_{\mu \tau }=D_{\tau e}`$. The T-odd probabilities $$T_{\alpha \beta }=\left|A\left(\alpha \to \beta ;t\right)\right|^2-\left|A\left(\beta \to \alpha ;t\right)\right|^2;\overline{T}_{\alpha \beta }=\left|A\left(\overline{\alpha }\to \overline{\beta };t\right)\right|^2-\left|A\left(\overline{\beta }\to \overline{\alpha };t\right)\right|^2$$ (2) are odd functions of time by virtue of the hermitian character of the evolution hamiltonian. The last property does not apply to the effective hamiltonian of the $`K^0\overline{K}^0`$ system. The oscillation terms are controlled by the phases $`\mathrm{\Delta }_{ij}\equiv \mathrm{\Delta }m_{ij}^2L/4E`$, where $`L=t`$ is the distance between source and detector. In order to generate a non-vanishing CP-odd probability, the three families have to participate actively: in the limit $`\mathrm{\Delta }_{12}\ll 1`$, where ($`1`$, $`2`$) refer to the lowest mass eigenvalues, the effect tends to vanish linearly with $`\mathrm{\Delta }_{12}`$ <sup>2</sup><sup>2</sup>2We disregard the alternative solution where the solar neutrino oscillations are associated with an almost degenerate $`\mathrm{\Delta }m_{23}^2`$.. In addition, all mixings and the CP-phase have to be non-vanishing. Contrary to the CP-violating $`D_{\alpha \beta }`$, the CP-conserving probabilities are not unique. If $`\mathrm{\Delta }_{12}\ll 1`$, the flavour transitions can be classified by the mixings leading to a contribution from the main oscillatory phase $`\mathrm{\Delta }_{23}\simeq \mathrm{\Delta }_{13}`$. The strategy would be the selection of a “forbidden” transition, i.e., an appearance channel with very low CP-conserving probability, in order to enhance the CP-asymmetry. This scenario appears plausible and perhaps favoured by present indications on neutrino masses and mixings from atmospheric and solar neutrino experiments. The use of the atmospheric results for $`\left|\mathrm{\Delta }m_{23}^2\right|`$ leads to the energy dependent $$\left|\mathrm{\Delta }_{23}\right|\simeq 4\times 10^{-3}\left(\frac{L}{\mathrm{km}}\right)\left(\frac{\mathrm{GeV}}{E}\right)$$ (3) which is of the order of unity for long-base-line experiments. This oscillation phase generates an “allowed” transition $`\nu _\mu \to \nu _\tau `$ proportional to $`s_{23}^2`$ and a “forbidden” transition $`\nu _\mu \to \nu _e`$ proportional to $`s_{13}^2s_{23}^2`$ for terrestrial or atmospheric neutrinos. In that case, the CP-even probability is independent of $`\mathrm{\Delta }_{12}`$.
The CP-odd probability is, on the contrary, linear in $`s_{13}`$ and $`\mathrm{\Delta }_{12}`$. We conclude that the ratio $`\mathrm{\Delta }_{12}/s_{13}`$ is the crucial parameter to induce an appreciable CP-odd asymmetry in the forbidden $`\nu _\mu \nu _e`$ transition. The large-mixing-angle MSW solution to the solar neutrino data provides $`\mathrm{\Delta }m_{12}^2`$ of the order of few $`\times 10^5eV^2`$ and a mixing $`\mathrm{sin}^22\theta _{12}\stackrel{>}{}0.7`$. For reactor neutrinos, the survival probability proceeds through $`\mathrm{\Delta }_{13}`$ (neglecting $`\mathrm{\Delta }_{12}1`$) with the result $`P_{\overline{\nu }_e\overline{\nu }_e}`$ $`=`$ $`c_{13}^4+s_{13}^4+2c_{13}^2s_{13}^2\mathrm{cos}(2\mathrm{\Delta }_{13})`$ (4) $``$ $`14s_{13}^2\mathrm{sin}^2\mathrm{\Delta }_{23}`$ The CHOOZ limit gives $`s_{13}^2\stackrel{<}{}0.05`$. Under these circumstances, one can reach CP-odd asymmetries for the forbidden channel $`\nu _\mu \nu _e`$ of the order 10-20 %. Long-base-line experiments have large matter effects. They are described in the flavour basis by the effective hamiltonian $`H_\nu `$ $`=`$ $`{\displaystyle \frac{1}{2E}}\left\{U\left(\begin{array}{ccc}m_1^2& & \\ & m_2^2& \\ & & m_3^2\end{array}\right)U^++\left(\begin{array}{ccc}a& & \\ & 0& \\ & & 0\end{array}\right)\right\}`$ (11) $`H_{\overline{\nu }}`$ $`=`$ $`{\displaystyle \frac{1}{2E}}\left\{U^{}\left(\begin{array}{ccc}m_1^2& & \\ & m_2^2& \\ & & m_3^2\end{array}\right)U^T\left(\begin{array}{ccc}a& & \\ & 0& \\ & & 0\end{array}\right)\right\}`$ (18) where $`U`$ is the neutrino mixing matrix and $`a`$ is the effective potential of electron-neutrinos with electrons. The mismatch of this charged current electron-flavour interaction induces a relative phase among the electron- and the other neutrinos which is energy independent $$\frac{aL}{2E}0.58\times 10^3\left(\frac{L}{Km}\right);a=G\sqrt{2}N_e2E$$ (19) with $`N_e`$ the number density of electrons in the Earth. We have then the hierarchy $`\mathrm{\Delta }m_{23}^2\stackrel{>}{}a\mathrm{\Delta }m_{12}^2`$. When we diagonalize $`H_\nu `$ and $`H_{\overline{\nu }}`$ in the $`\mathrm{\Delta }_{12}=0`$ limit, we observe that matter effects break the degeneracy and there is a resonance energy obtained from the condition $`a=\mathrm{\Delta }m_{23}^2a_R`$. The effective mixing $`s_{13}`$ in matter is affected by the resonance amplitude for neutrino beams. Although the vacuum mixing $`s_{12}`$ is irrelevant in the $`\mathrm{\Delta }_{12}=0`$ limit, the effective mixing matrix in matter becomes determined, with $`U_{e2}=0`$. This transmutation of the vanishing $`\mathrm{\Delta }_{12}`$ in vacuum to the vanishing $`U_{e2}`$ in matter forbids, in both cases, genuine CP violating effects. Contrary to the “allowed” $`\nu _\mu \nu _\tau `$ transition, which is little affected by matter effects, the forbidden transition $`\nu _\mu \nu _e`$ in matter has a CP-even probability $$P_{\nu _\mu \nu _e}=4\left(\frac{s_{13}}{1\frac{a}{\mathrm{\Delta }m_{23}^2}}\right)^2s_{23}^2\mathrm{sin}^2\left(\mathrm{\Delta }_{23}\frac{aL}{4E}\right)$$ (20) This result shows both the enhanced probability for neutrinos (suppressed for antineutrinos, for which $`aa`$), and a modification of the interference pattern with an energy independent phase-shift induced by matter. This quantum-mechanical interference provides a Bohm-Aharonov type experiment able to detect a potential difference between the two “arms” of an interferometer. 
The interferometer is represented here by the Mixing Matrix, the optical path difference by $`\mathrm{\Delta }_{23}`$ and the potential by the energy-independent term $`\frac{a}{2E}`$. Although there are no genuine CP-violating effects in the limit $`\mathrm{\Delta }_{12}=0`$, the medium induces fake CP-and CPT- odd effects. Even with fundamental CPT invariance, the survival probability of electron-neutrinos in matter gets modified when going to antineutrinos: $`P_{\nu _e\nu _e}P_{\overline{\nu }_e\overline{\nu }_e}`$. The corresponding asymmetry is, for this background effect, an even function of $`L`$. In order to generate genuine CP-odd effects in matter, one has to allow a non vanishing $`\mathrm{\Delta }_{12}`$. The results in perturbation theory are given in Table 1, where $`\mathrm{\Delta }_{12}`$ is only maintained when needed to avoid a zero. The CP-asymmetry in matter contains then two different terms: one fake component induced by matter asymmetry, which is an even function of $`L`$, and one genuine component, odd function of $`L`$, which is a true signal of CP violation (modified by matter effects). Again the CP-odd asymmetry associated with the “forbidden” transition $`\nu _\mu \nu _e`$ in long-base-line experiments is much more promising. The separation of fake and genuine components is possible by using the energy and distance dependence of the asymmetry. An alternative to the CP-asymmetry is provided by T-odd effects . As matter is, in good approximation, T-symmetric, the T-odd asymmetry does not suffer from fake effects. However, its implementation will need the construction of neutrino factories from muon-storage-rings, able to provide both $`\nu _\mu `$ and $`\nu _e`$ beams. Under the conditions discussed above, the corresponding T-odd asymmetry for the “forbidden” process $`\overline{\nu }_\mu \overline{\nu }_e`$ is given by $$A_{\mathit{}}\mathrm{\Delta }_{12}\frac{1+\frac{a}{a_R}}{s_{13}}s_\delta \frac{\mathrm{sin}\left(\frac{aL}{2E}\right)}{\frac{aL}{2E}}$$ (21) which can reach again appreciable values. As in vacuum, the asymmetry varies linearly with the crucial parameter $`\mathrm{\Delta }_{12}/s_{13}`$. If neutrinos were Majorana particles, nothing would change in the discussion of this paper, as long as we discuss only flavour oscillations. The additional Majorana phases do not enter into the relevant Green function $`<0|T\left\{\psi (x)\overline{\psi }(0)\right\}|0>`$, neither for vacuum oscillations nor for matter . One would need a Majorana propagator $`<0|T\left\{\psi (x)\psi ^T(0)\right\}|0>`$ to be sensitive to these new ingredients. Such a situation would affect the so-called “neutrino-antineutrino oscillations”. We conclude with the comment that CP and T violation in neutrino oscillations, although possible in appearance experiments involving three neutrino families, will be difficult to observe. With a hierarchical spectrum to explain atmospheric and solar neutrino data, better prospects appear for the forbidden transition $`\nu _\mu \nu _e`$ in long-base-line experiments and the large mixing angle MSW solution to the solar neutrino observation. The CP-odd asymmetry becomes linear in $`\mathrm{\Delta }_{12}/s_{13}`$, the parameter associated with “forbiddeness”. Although matter effects break the degeneracy in $`(1,2)`$, the CP-odd asymmetry is still linear in $`\mathrm{\Delta }_{12}/s_{13}`$ as induced by the effective mixings in matter (see Table 1). This work has been supported by CICYT, Spain, under Grant AEN99-0692. One of us (M.C.B.) 
is indebted to the Spanish Ministry of Education and Culture for her fellowship.
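The following numerical sketch (oscillation parameters and the matter density are assumed, present-day-style values, not those used in the text) illustrates the size of these effects for the “forbidden” channel $`\nu _\mu \to \nu _e`$: the exact three-flavour evolution in a constant-density medium is obtained by diagonalising flavour-basis Hamiltonians of the form (11) and (18), and the CP asymmetry is evaluated once with $`\delta =0`$ (only the matter-induced fake part survives) and once with a maximal phase (fake plus genuine parts).

```python
import numpy as np

DM21, DM31 = 7.0e-5, 3.0e-3            # eV^2          (assumed)
TH12, TH13, TH23 = 0.58, 0.10, 0.785   # radians       (assumed; sin^2(theta13) ~ 0.01)
A_COEFF = 1.07e-4                      # a in eV^2 per GeV of E, for rho*Y_e ~ 1.4 g/cm^3 (assumed)

def pmns(delta):
    s12, c12 = np.sin(TH12), np.cos(TH12)
    s13, c13 = np.sin(TH13), np.cos(TH13)
    s23, c23 = np.sin(TH23), np.cos(TH23)
    e = np.exp(1j * delta)
    return np.array([
        [c12 * c13,                        s12 * c13,                        s13 / e],
        [-s12 * c23 - c12 * s23 * s13 * e, c12 * c23 - s12 * s23 * s13 * e,  s23 * c13],
        [s12 * s23 - c12 * c23 * s13 * e,  -c12 * s23 - s12 * c23 * s13 * e, c23 * c13]])

def probability(a_from, b_to, E_GeV, L_km, delta, antineutrino=False):
    U = pmns(-delta if antineutrino else delta)         # U -> U* for antineutrinos
    M = U @ np.diag([0.0, DM21, DM31]) @ U.conj().T     # mass term in the flavour basis, eV^2
    a = A_COEFF * E_GeV * (-1.0 if antineutrino else 1.0)
    M = M + np.diag([a, 0.0, 0.0])                      # matter term: + for nu, - for nubar
    w, V = np.linalg.eigh(M)
    # phase m^2 L / 2E: 2.534 = 2 * 1.267 turns eV^2 * km / GeV into a pure number
    evolution = V @ np.diag(np.exp(-1j * 2.534 * w * L_km / E_GeV)) @ V.conj().T
    return abs(evolution[b_to, a_from]) ** 2            # flavour indices: 0 = e, 1 = mu, 2 = tau

E, L = 1.0, 730.0                                        # GeV, km (illustrative baseline)
for delta_cp in (0.0, np.pi / 2):
    p = probability(1, 0, E, L, delta_cp)
    pbar = probability(1, 0, E, L, delta_cp, antineutrino=True)
    print(f"delta_CP = {delta_cp:4.2f}:  P(numu->nue) = {p:.5f}   "
          f"P(numubar->nuebar) = {pbar:.5f}   asymmetry = {(p - pbar) / (p + pbar):+.3f}")
```

The $`\delta =0`$ line isolates the fake, matter-induced asymmetry; the difference between the two lines is the genuine CP-violating part, which grows with $`\mathrm{\Delta }_{12}/s_{13}`$ as argued above.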
no-problem/0003/cond-mat0003155.html
ar5iv
text
# Dynamics of rough interfaces in Chemical Vapor Deposition: experiments and model for silica films ## Abstract We study the surface dynamics of silica films grown by low pressure chemical vapor deposition. Atomic force microscopy measurements show that the surface reaches a scale invariant stationary state compatible with the Kardar-Parisi-Zhang (KPZ) equation in three dimensions. At intermediate times the surface undergoes an unstable transient due to shadowing effects. By varying growth conditions and using spectroscopic techniques, we determine the physical origin of KPZ scaling to be a low value of the surface sticking probability, related to the surface concentration of reactive groups. We propose a stochastic equation that describes the qualitative behavior of our experimental system. The dynamics of growing interfaces has become of increasing interest in order to understand the physical processes that determine film quality . In the absence of morphological instabilities, many surfaces evolving out of equilibrium display time and space scale invariance, $`i.e.`$ such surfaces are rough. A successful framework for the study of rough interface dynamics has been the formulation of stochastic differential equations for the surface height $`h(𝒓,t)`$, where $`𝒓`$ denotes a site of a two-dimensional substrate, and $`t`$ is time. A prominent example is the Kardar-Parisi-Zhang (KPZ) equation $$\partial _th=\nu \nabla ^2h+(\lambda /2)(\nabla h)^2+\eta (𝒓,t),$$ (1) expected to describe the large length scale dynamics of any rough surface growing in the absence of specific conservation laws. In Eq. (1), $`\nu `$ and $`\lambda `$ are constants. The first term on the rhs is sometimes called a surface tension term, as it tends to smooth the interface through evaporation-condensation processes . The nonlinear term accounts for growth along the local normal direction (lateral growth), and $`\eta (𝒓,t)`$ is a Gaussian white noise of constant strength $`2D`$, accounting for microscopic fluctuations, $`e.g.`$, in a deposition beam. One of the techniques for thin film production which might be expected to lead to surfaces described by (1) is Chemical Vapor Deposition (CVD). CVD is very widely used for technological applications and industrial devices . It is also very interesting from the fundamental point of view due to its conceptual similarities with other growth techniques such as electrodeposition. Thus, a considerable effort has been made to model CVD growth using partial differential equations. These models predict the development of unstable morphologies within the time ranges studied, contradicting the above expectation of KPZ scale invariant behavior. Let us note that, to date, very few experimental systems are known whose scaling is described by KPZ in three dimensions . Moreover, on the fundamental level there are no experiments addressing the long time behavior of the surface morphologies produced by CVD, and the identification of the determining physico-chemical mechanisms. In this Letter we study the interface dynamics of low pressure SiO<sub>2</sub> CVD films by Atomic Force Microscopy (AFM), Raman and infrared spectroscopies, and numerical simulations. SiO<sub>2</sub> films were studied because of their amorphous nature, to avoid Schwoebel barrier effects on the surface dynamics , and to prevent formation of facets that can also alter the scaling behavior .
Our experimental system displays a transient unstable behavior related to the gas phase transport character of CVD growth , but a stationary state is achieved which, rather, is compatible with KPZ scaling. The surface dynamics is well described, over a wide spatial and temporal range (up to 2 days of deposition time), by a stochastic differential equation in which surface diffusion, shadowing effects, and lateral growth are allowed for. Amorphous SiO<sub>2</sub> films were grown at a temperature $`T`$ = 723 K on Si (100) substrates in a hot wall, horizontal, low pressure tubular CVD reactor. The precursor gases were silane (diluted at 2% in nitrogen, 99.999% purity) and oxygen (99.9992% purity), with an oxygen/silane ratio equal to 20 and a total gas flow rate of 50 sccm. The chamber pressure was 1.4 Torr. The film thickness increases linearly with deposition time at a constant growth rate of 20 nm/min. Films were deposited in the range 5 min $`t`$ 2 days. The surface morphology was characterized by AFM (Nanoscope III from Digital Instruments, CA) operating in tapping mode at ambient conditions up to a scale of 50 $`\mu `$m using silicon cantilevers. AFM imaging of our silica films shows (Fig. 1) a deposit formed initially by small rounded grains 30-60 nm in size. A similar morphology has been reported on amorphous silicon films deposited by thermal evaporation . As deposition proceeds, structures resembling mountains and valleys appear at larger length scales increasing in size, until a stationary regime is attained in the 15-30 hours range. Power Spectral Density (PSD) (or surface power spectrum) plots of the surface morphology at different times are displayed in Fig. 2. For $`t<`$ 35 min and small enough wave vector $`k`$, the PSD follows that of the initial substrate, while for $`t>`$ 35 min and $`k<k_c`$ it takes a constant $`k`$-independent value, interface portions separated by distances $`r>1/k_c`$ being uncorrelated. For a rough interface , the value of $`k_c`$ decreases with time as $`k_ct^{1/z}`$, where $`z`$ is the dynamic exponent. Indeed, from the log-log plot of $`k_c`$ vs $`t`$ (Fig. 3), a slope value $`z=1.6\pm 0.1`$ is obtained. Therefore, the time behavior of our CVD growth process is well described by that of a self-affine rough surface. The same conclusion is obtained from the spatial behavior of the surface morphology. Specifically, for a two-dimensional rough interface the PSD behaves as PSD$`(k)1/k^{2\alpha +2}`$, with $`\alpha `$ the roughness exponent. In Fig. 2, as time proceeds, up to three spatial regions can be observed: i) For $`t<50`$ min we can distinguish two regions: Region I, for $`k>k_0`$, features $`\alpha _I=0.99\pm 0.04`$; $`k_0`$ initially decreases from 0.04 nm<sup>-1</sup> for $`t=20`$ min to a constant value of 0.02 nm<sup>-1</sup> for $`t`$ 50 min. Region II with $`\alpha _{II}=0.75\pm 0.03`$ is observed for $`k_1<k<k_0`$, where $`k_1`$ changes from 0.025 nm<sup>-1</sup> for $`t`$ = 20 min to 0.0034 nm<sup>-1</sup> for $`t`$ = 50 min. ii) For $`t>50`$ min a new region III appears for $`k_c<k<k_1`$, with $`\alpha _{III}=0.42\pm 0.03`$; as noted above, $`k_c`$ decreases with time. Regions I and II are still observed. The above behavior of the PSD is compatible with that of the surface width, $`W^2(t)=\overline{(h(𝒓,t)\overline{h}(t))^2}`$, which we measure independently (Fig. 3). The bar denotes spatial average. 
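For completeness, a short sketch (illustrative only; the synthetic surface below merely stands in for an AFM image) of how the two quantities just defined are obtained from a measured height map: the width is the r.m.s. fluctuation of $`h`$, and the PSD is the radially averaged squared Fourier transform.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(1)
N, dx = 256, 1.0                                        # pixels and pixel size (arbitrary units)
h = gaussian_filter(rng.normal(size=(N, N)), sigma=4)   # synthetic correlated surface

W = np.sqrt(np.mean((h - h.mean()) ** 2))               # surface width

spectrum = np.abs(np.fft.fft2(h - h.mean())) ** 2       # 2D power spectrum
fx, fy = np.meshgrid(np.fft.fftfreq(N, dx), np.fft.fftfreq(N, dx))
k = 2.0 * np.pi * np.hypot(fx, fy)

edges = np.linspace(0.0, k.max(), 41)                   # radial average of the spectrum
which = np.digitize(k.ravel(), edges) - 1
psd_k, psd = [], []
for b in range(len(edges) - 1):
    sel = (which == b) & (k.ravel() > 0)
    if sel.any():
        psd_k.append(k.ravel()[sel].mean())
        psd.append(spectrum.ravel()[sel].mean())

print(f"W = {W:.3f}")
for kk, p in list(zip(psd_k, psd))[:5]:
    print(f"k = {kk:6.3f}   PSD = {p:10.3f}")
```

For a self-affine surface, a log-log plot of PSD($`k`$) against $`k`$ has slope $`-(2\alpha +2)`$, which is how the roughness exponents quoted here are extracted.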
For a rough interface, initially $`W(t)t^{\alpha /z}=t^\beta `$, as long as $`tL^z`$, with $`L`$ a measure of the spatial extent of the system. For $`tL^z`$, the interface saturates into a stationary state for which $`WL^\alpha `$. As a difference with the PSD, we observe only two temporal behaviors in $`W(t)`$, corresponding to regions II and III of the PSD. For $`t<`$ 50 min, the width is described by the growth exponent $`\beta _{II}=0.42\pm 0.04`$, while for 50 min $`t`$ 15-30 hours, $`W(t)`$ data are consistent with $`\beta _{III}=\alpha _{III}/z=0.26\pm 0.03`$ as obtained from the PSD data. For $`t>`$15-30 hours, the width $`W(t)`$ saturates to a constant value. Note that the contribution of the short length scales (region I) to the surface width is masked by the spatial region II. However, despite the few data points for $`t`$ 50 min, an attempt to estimate $`\beta _I`$ from the PSD curves leads to $`\beta _I=0.28\pm 0.09`$. Regarding the long distance properties of our system, the observed $`\alpha _{III}=0.42\pm 0.03`$, $`z=1.6\pm 0.1`$, and $`\beta _{III}=0.26\pm 0.03`$ are close, within experimental errors, to the approximate exponents for the KPZ universality class in three dimensions, namely , $`\alpha _{KPZ}=0.39`$, $`z_{KPZ}=1.63`$, and $`\beta _{KPZ}=0.24`$. Thus, the asymptotic scaling behavior of the growing CVD surface is of the KPZ class, an experimental result very rarely found in three dimensions . It is important to understand the origin of the KPZ regime under our experimental conditions. It is known that, for CVD systems with a low surface reaction kinetics (low sticking probability $`s`$), lateral growth, which is the fingerprint of KPZ behavior, is promoted leading to the so-called conformal growth. The mechanism inducing conformal growth is the reemission of the precursor ($`i.e.`$, depositing) species due to a low sticking probability. For our experimental conditions $`s`$ has actually been reported to decrease with temperature from $`s`$ = 0.5 at 573 K down to $`s`$ = 0.08 at 723 K. Also, the growth conformality of submicron trenches by SiO<sub>2</sub> films has been reported to improve as $`T`$ increases. Physically, $`s`$ can be related to the concentration of reactive sites at the growing film surface, which for our system are mainly associated with hydrogenated groups, such as Si-OH and Si-H, and with strained siloxane (Si-O-Si) groups, rather than with the less reactive relaxed siloxane groups . To study the role played by the sticking probability on our film morphology, we have grown SiO<sub>2</sub> films at lower $`T`$ = 611 K (higher $`s`$) with the same growth rate ($``$ 20 nm/min). Analysis of our films by Raman and infrared spectroscopies shows that the 611 K films indeed present a significantly higher concentration of reactive groups than the 723 K films, in agreement with the higher sticking probability expected for the low temperature growth. Moreover, regarding the surface dynamics of the low $`T`$ films, neither the KPZ nor the saturation regimes were found after 2 days of deposition. In Fig. 4 we plot the PSD of a film grown at 611 K after 2 days. The KPZ region does not appear but only regions I and II do. In Fig. 4 (inset) we plot $`W(t)`$ for the same set of films. Again, only unstable growth is obtained for long times. From the point of view of Eq. (1), these experiments suggest that when $`T`$ is reduced the effective $`\lambda `$ coefficient of the KPZ term decreases. 
These results are analogous to the crossover found between diffusion-limited aggregation and Eden growth when tuning the sticking probability , and allow us to identify region II in the scaling behavior at the high tempererature value ($`T=723`$ K) as an unstable transient. The above results suggest that the KPZ equation might be a good starting point to describe the observed behavior. However, in our system we expect the first term in equation (1) to be negligible, since the SiO<sub>2</sub> vapor pressure is extremely low ($`i.e.`$ $`\nu `$ is very small) in our temperature range, whereas surface stabilization by surface diffusion seems to be relevant . In fact, $`\alpha _I0.99`$ and $`\beta _I0.28`$ are consistent within experimental errors with the linear theory of surface diffusion . On the other hand, due to the large values of the effective exponents $`\alpha _{II}`$ and $`\beta _{II}`$ and in view of the discussion above, we believe region II corresponds to unstable growth. This instability can be related to shadowing effects which occur in CVD due to the random walk motion of the depositing particles. Nonlinear surface diffusion can be discarded as the origin of the unstable behavior in region II since it leads to anomalous scaling , not present in our measurements. Similarly, the unstable equation for $`h`$ derived in can be ruled out as a description of the unstable behavior in region II because it leads to a non constant growth rate, incompatible with our experimental setup. Thus, we propose the following continuum equation to describe the silica CVD growth: $$_th=K^4h+\epsilon \theta /\overline{\theta }+(\lambda /2)(h)^2+\eta (𝒓,t).$$ (2) In Eq. (2), $`K`$ and $`\epsilon `$ are positive constants; the first term on the rhs represents relaxation by surface diffusion and the second term represents geometric shadowing effects, $`\theta `$ being the local exposure angle and $`\overline{\theta }`$ being the spatial average of $`\theta `$ . In order to check the validity of (2) we have performed numerical simulations in two dimensions (three dimensional simulations are limited to small system sizes ). In Fig. 5 we plot the time evolution of the PSD for $`K=D=1`$, $`\epsilon =1/2`$, and two values of the strength of the KPZ non-linearity $`\lambda =0.2,3`$. For these sets of parameters we have employed system sizes up to $`L`$ = 8192 in order to reliably identify the different scaling regimes. For $`\lambda =3`$ (solid line in Fig. 5), and as experimentally observed at high $`T`$ (Fig. 2), the surface PSD evolves from a scaling regime dominated by surface diffusion at short distances to KPZ scaling at large length scales, through an unstable transient due to the shadowing effects . Also in agreement with the experimental data, after a certain growth time both $`k_0`$ and $`k_1`$ are frozen, and no anomalous scaling is observed. The time evolution of the surface width is shown in the inset of Fig. 5 for the same parameters. Again, for $`\lambda =3`$, it resembles quite closely that shown in Fig. 3 since, as $`t`$ increases, the behavior changes from surface diffusion to unstable growth and finally to KPZ scaling. For smaller $`\lambda =0.2`$ (dashed lines in Fig. 5), corresponding experimentally to low $`T`$ (Fig. 4), we observe from the PSD and $`W(t)`$ behaviors that the crossover from the unstable region II to KPZ behavior takes place, if at all, at longer time and larger length scales. Similar behaviors to those in Fig. 
5 are obtained for other parameter sets, provided that the relative weight of the different contributions is preserved. In summary, we have found that low pressure CVD growth of silica films is governed by the relative balance between surface diffusion, shadowing and lateral (KPZ) growth. Our experimental system is well described by a continuum stochastic differential equation in which these three mechanisms are allowed for. Moreover, our study allows us to link the value of the effective KPZ nonlinearity to the physical and chemical properties of the growing interface. Thus, we conclude that the observation of asymptotic KPZ scaling is favoured under growth conditions ($`e.g.`$, high temperatures) which promote a low sticking probability of the depositing species. This work has been performed within the CONICET-CSIC and Programa de Cooperación con Iberoamérica (MEC) research programs, and has been partially supported by CAM grants 7220-ED/082, 07N-0028, and DGES grants MAT97-0698-C04, PB96-0119. F. O. acknowledges support by CAM.
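For readers who wish to experiment with the growth dynamics discussed above, the following is a minimal (1+1)-dimensional integration sketch in the spirit of Eq. (2). It retains only the surface-diffusion, KPZ and noise terms (the shadowing term $`\epsilon \theta /\overline{\theta }`$ is omitted and the parameter values are illustrative, not those of the simulations reported here), so it demonstrates the interplay between the surface-diffusion term and the KPZ nonlinearity rather than reproducing Fig. 5.

```python
import numpy as np

def grow(nx=1024, nsteps=50_000, dx=1.0, dt=0.01, K=1.0, lam=3.0, D=1.0, seed=0):
    """Explicit Euler integration of  dh/dt = -K d4h/dx4 + (lam/2)(dh/dx)^2 + noise
    on a periodic 1D substrate (the shadowing term of Eq. (2) is not included).
    dt must stay well below dx**4 / (8 K) because of the stiff fourth-derivative
    term, and may need to be reduced further for large lam."""
    rng = np.random.default_rng(seed)
    h = np.zeros(nx)
    times, widths = [], []
    for step in range(1, nsteps + 1):
        hp, hm = np.roll(h, -1), np.roll(h, 1)
        hpp, hmm = np.roll(h, -2), np.roll(h, 2)
        d4h = (hpp - 4 * hp + 6 * h - 4 * hm + hmm) / dx ** 4
        grad = (hp - hm) / (2 * dx)
        noise = rng.standard_normal(nx) * np.sqrt(2 * D / (dx * dt))
        h = h + dt * (-K * d4h + 0.5 * lam * grad ** 2 + noise)
        if step % 500 == 0:
            times.append(step * dt)
            widths.append(np.sqrt(np.mean((h - h.mean()) ** 2)))
    return np.array(times), np.array(widths)

times, widths = grow()
# effective growth exponent from the late-time part of the log-log W(t) curve
half = len(times) // 2
beta_eff = np.polyfit(np.log(times[half:]), np.log(widths[half:]), 1)[0]
print(f"effective beta ~ {beta_eff:.2f}")
```

Adding a model of the exposure angle $`\theta `$ and lowering $`\lambda `$ would allow the crossovers of Figs. 4 and 5 to be explored along the lines of the two-dimensional simulations described in the text.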
no-problem/0003/quant-ph0003065.html
ar5iv
text
# Untitled Document 1. Introduction. The experimental work of the Paris group of S. Haroche and of the Boulder group of D. Wineland demonstrate convincingly that the theoretical ideas of quantum theory really do work in careful experiments performed, in effect, on individual atoms interacting with controlled electromagnetic probes and environments. It is an impressive tribute to the power of human reason and logic that the creators of quantum theory were able to accurately forecast effects so far removed in scale and intricacy from the data that they possessed. The experiments of these groups both confirm the emergence of decoherence effects whose strength and rapidity of onset increase rapidly with the size of the system being disturbed by interactions with its environment. In recent theoretical paper Max Tegmark computes, on the basis of the thus-confirmed ideas, some expected time intervals for the disappearance of quantum coherence in various brain structures that have been proposed as the seat of the neural correlates of consciousness. He finds that quantum coherence disappears on time scales of $`10^{13}`$ to $`10^{20}`$ seconds, and concludes from this that classical concepts should provide a completely adequate basis for understanding the dynamical connection between mind and brain. This conclusion depends on the idea that the quantum interaction between mind and brain depends upon quantum coherence. It is indeed usually thought that coherence is the essence of quantum theory, and that all quantum effects depend upon it. But the development of the von Neumann-Wigner quantum theory of mind pursued by this author was specifically designed so that the effect of mental effort on brain process is not weakened by decoherence. Indeed, quantum decoherence was assumed to decompose the state of the brain into a mixture of essentially classical states. But the quantum effect of mental effort on brain activity is not curtailed by this decomposition. I shall now explain how this works. 2. Overview of the Theory Before giving the specific computation I must first describe the general form of the theory. It is based on objectively interpreted von Neumann-Wigner quantum theory. I have argued elsewhere that the evolving state S(t) of von Neumann-Wigner quantum theory can be construed to be our theoretical representation of an objectively existing and evolving informational structure that can properly be called “physical reality”. The theory has four basic equations. The first defines the state of a subsystem. If S(t) is the operator that represents the state of the universe and b is a subsystem of the universe then the state of b is defined to be $$S(t)_b=Tr_bS(t),$$ $`(2.1)`$ where $`Tr_b`$ means the trace over all variable except those that characterize b. The second basic equation specifies von Neumann’s process I. This process “poses a question”. If $`S(t0)`$ represents the limit of $`S(t^{})`$ as $`t^{}`$ approaches t from below then at certain times t the following jump occurs: $$S(t)=PS(t0)P+(1P)S(t0)(1P).$$ $`(2.2)`$ Here P is a projection operator (i.e., $`P^2=P`$) that acts as the unit operator on all degrees of freedom except those associated with the processor b. The third basic equation specifies the (Dirac) reduction. 
This reduction specifies nature’s answer to the question: $$S(t+0)=PS(t)P\text{ with probability }TrPS(t)/TrS(t)$$ $`(2.3)`$ or $$S(t+0)=(1P)S(t)(1P)\text{ with probability }Tr(1P)S(t)/TrS(t).$$ Between jumps the state evolves according to: $$S(t+\mathrm{\Delta }t)=\mathrm{exp}(iH\mathrm{\Delta }t)S(t)\mathrm{exp}(+iH\mathrm{\Delta }t).$$ $`(2.4)`$ The projection operator P has two eigenvalues, 1 and 0, and is therefore associated with a Yes-No question: the two alternative possible reductions specified in (2.3) are associated with the two alternative possible answers, Yes or No, to the question associated with P. Thus the reduction (2.3) specifies one bit of information, and implants that information in the state S(t) of the physical universe. This state S(t) can be regarded as just the evolving carrier of the bits of information generated by these reduction events. Information is normally conceived to be associated with an interpreting system. In Copenhagen quantum theory each reduction is associated with an increment in human knowledge, and the interpreting system is the brain and body of the observer. Generalizing from this one known kind of example, I shall assume that each reduction (2.3) is associated with a quantum information processor, call it b, that both poses the question —picks P—and, when nature responds by picking, say, the answer P=1, ‘interprets’ that bit of information by evolving in a characteristic way. The projection operator P cannot be local: any point-like projection would inject infinite energy into the processor. This jump of S(t) to P S(t)P, because it is basically a nonlocal process, has no counterpart in classical dynamics: it is a new kind of element, relative to classical physical theory. Generalizing again from the one known example, I assume that each reduction event is connected to some sort of “knowing”: each such event has a characteristic experiential “feel”. Each thought involves an effort to attend to something— i.e., to pose a question—followed by a registration of the answer. This conforms exactly to the quantum dynamics. Normally a sequence of thoughts consists of a string of thoughts each of which differs just slightly from its predecessor: the sequence becomes a ‘stream’ of consciousness. So the basic process is self-replication: the thought T creates conditions that tend to create a likeness of T. This means that a key requirement for P is that PSP not evolve rapidly out of the subspace defined by P, or at least that PSP quickly evolve into a state nearly the same as PSP, so that the sequence of thought is likely to be a sequence of similar thoughts. One possibility is that the projection operator P may act in the space of a set of conjugate variables that is undergoing periodic motion, and that it projects onto a band of neighboring orbits in phase space. For a simple harmonic oscillator in a state of high energy one could take the projection operator P to be the sum of the projection operators onto a large set of neighboring energy eigenstates. This would effectively project onto a band of neighboring orbits in phase space. 3. The Quantum Zeno Effect In this theory the main effect of mind on brain is via the quantum Zeno effect. Suppose the initial state is PS(t)P, and that in that state the next question is again P, and that this question repetitiously repeats. 
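A minimal numerical sketch of this repeated questioning may help fix ideas before the estimate that follows. It uses a two-level toy model (the operators, the coupling and the time scale are purely illustrative and are not meant as a model of the brain dynamics discussed here), alternating the unitary evolution (2.4) with process I of Eq. (2.2):

```python
import numpy as np

# two-level toy model: H couples the P subspace to its complement, so the
# state "leaks" out of P unless the question is posed sufficiently often
H = np.array([[0.0, 1.0], [1.0, 0.0]])   # H = sigma_x, so H @ H = identity
P = np.array([[1.0, 0.0], [0.0, 0.0]])   # question P: "is the system in state |0>?"
Q = np.eye(2) - P

def yes_probability(total_time=np.pi / 2, n_questions=1):
    """Tr(P S)/Tr(S) after the question P has been posed n_questions times at
    equal intervals, alternating Eq. (2.4) with process I, Eq. (2.2)."""
    S = P.copy()                          # initial state of the form P S P
    dt = total_time / n_questions
    U = np.cos(dt) * np.eye(2) - 1j * np.sin(dt) * H   # exp(-i H dt), since H^2 = 1
    for _ in range(n_questions):
        S = U @ S @ U.conj().T            # unitary evolution, Eq. (2.4)
        S = P @ S @ P + Q @ S @ Q         # process I, Eq. (2.2)
    return float(np.real(np.trace(P @ S) / np.trace(S)))

for n in (1, 10, 100, 1000):
    print(n, round(yes_probability(n_questions=n), 4))
```

For a single question the state has leaked entirely out of the P subspace by the end of the run, whereas for rapidly repeated questions the probability of a Yes answer approaches 1, because the leakage per interval is quadratic in the interval length. Note that nothing in the loop keeps the state pure; S is an incoherent mixture after every application of process I, in line with the claim that decoherence does not curtail the effect.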
If these questions are posed at intervals $`\mathrm{\Delta }t`$ then equations (2.4) and (2.2) give $$S(t+\mathrm{\Delta }t)=P\mathrm{exp}(iH\mathrm{\Delta }t)PS(t)P\mathrm{exp}(+iH\mathrm{\Delta }t)P$$ $$+(1P)\mathrm{exp}(iH\mathrm{\Delta }t)PS(t)P\mathrm{exp}(+iH\mathrm{\Delta }t)(1P).$$ If $`\mathrm{\Delta }t`$ is small on the scale of the leakage of PS(t)P out of the subspace defined by P then the second term is small and of second order in $`\mathrm{\Delta }t`$. Thus as $`\mathrm{\Delta }t`$ gets small, on the scale of the leakage of $`PSP`$ into the subspace associated with $`(1P)`$, the Hamiltonian $`H`$ gets effectively replaced by $`PHP`$: evolution within the $`P`$ subspace proceeds normally, but leakage out of that subspace is blocked. The point here is that the linear-in-time leakage out of the subspace defined by $`P`$ is killed by the reduction events. Thus only the quadratic and higher terms survive, and these are damped out if the reductions occurs fast on the time scale of the relevant oscillations. This replacement of the full Hamiltonian H by PHP is the usual quantum Zeno effect. We see that it is just as effective for a statistical mixture S(t) of quasi-classical states as for a pure state: the decoherence generated by interaction with the environment does not weaken this quantum effect. 4. Explanatory Power Von Neumann-Wigner quantum theory encompasses all the valid predictions of classical physical theory. So for any computation, or argumentation, for which quantum effects are unimportant one can use classical physics. Hence vN/W theory is at least as good as classical physical theory: the two theories are effectively equivalent insofar as quantum effects are unimportant. In the purely physical domain the vN/W theory is certainly better, because it predicts also all of the quantum effects, including all of the “nonlocal” quantum effects. But our interest here is on the nature of the dynamical link between mind and brain, and the nature of the consequences of this connection. The only power given to the mind by this theory is the power to choose the questions P. And the only effects of these choices that has thus far been identified are the consequences achieved by the quantum Zeno effect. This effect is to keep the brain activity focussed on a question for longer than it would stay focussed in the classical theory. To make the theory still more constrained, let me assume that the quantum processor, in this case the human brain/body, possesses a certain set of possible questions P, and that at a prescribed sequence of instants the processor can either consent, or not consent, to posing a certain possible question P. Let this question P be the one that maximizes $`Tr_bPS(T)/Tr_bS(t)`$. To accomodate our intuitive feeling that mental ‘effort’ does effect brain/body activity I add the postulate that the rapidity of the sequence of instants can be increased by mental effort. This is a simple theory. But the effect of mind on brain is highly constrained. The only variables under mental control are “consent’ and ‘effort’. Does this theory explain anything? Consider the following passage from “Psychology: The Briefer Course” by William James . In the final section of the chapter on Attention he writes: “I have spoken as if our attention were wholly determined by neural conditions. I believe that the array of things we can attend to is so determined. No object can catch our attention except by the neural machinery. 
But the amount of the attention which an object receives after it has caught our attention is another question. It often takes effort to keep mind upon it. We feel that we can make more or less of the effort as we choose. If this feeling be not deceptive, if our effort be a spiritual force, and an indeterminant one, then of course it contributes coequally with the cerebral conditions to the result. Though it introduce no new idea, it will deepen and prolong the stay in consciousness of innumerable ideas which else would fade more quickly away. The delay thus gained might not be more than a second in duration— but that second may be critical; for in the rising and falling considerations in the mind, where two associated systems of them are nearly in equilibrium it is often a matter of but a second more or less of attention at the outset, whether one system shall gain force to occupy the field and develop itself and exclude the other, or be excluded itself by the other. When developed it may make us act, and that act may seal our doom. When we come to the chapter on the Will we shall see that the whole drama of the voluntary life hinges on the attention, slightly more or slightly less, which rival motor ideas may receive. …” Posing a question is the act of attending. In the chapter on Will, in the section entitled “Volitional effort is effort of attention” James writes: “Thus we find that we reach the heart of our inquiry into volition when we ask by what process is it that the thought of any given action comes to prevail stably in the mind. and later The essential achievement of the will, in short, when it is most ‘voluntary,’ is to attend to a difficult object and hold it fast before the mind. … Effort of attention is thus the essential phenomenon of will.” Still later, James says: “Consent to the idea’s undivided presence, this is effort’s sole achievement.” …“Everywhere, then, the function of effort is the same: to keep affirming and adopting the thought which, if left to itself, would slip away.” The vN/W theory, with the quantum zeno effect incorporated, explains naturally the features that are the basis of James’s conception of the action of human volition. References 1. M. Brune, et. al. Phys. Rev. Lett. 77, 4887 (1996) 2. C.J. Myatt, et. al. Nature, 403, 269 (2000) 3. Max Tegmark, “The Importance of Quantum Decoherence in Brain Process,” Phys. Rev E, to appear. 4. H.P. Stapp, “Nonlocality, Counterfactuals, and Consistent Histories, http://xxx.lanl.gov/abs/quant-ph/9905055 5. H.P. Stapp, “From Einstein Nonlocality to Von Neumann Reality,” http://www-physics.lbl.gov/$``$stapp/stappfiles.html quant-ph/0003064 6. H.P. Stapp, “Attention, Intention, and Will in Quantum Physics,” in J. Consc. Studies 6, 143-64 (1999). 7. Wm. James, “Psychology: The Briefer Course”, ed. Gordon Allport, University of Notre Dame Press, Notre Dame, IN. Ch. 4 and Ch. 17
no-problem/0003/cond-mat0003014.html
ar5iv
text
# What makes an insulator different from a metal? ## Introduction The present contribution, dealing with such a general issue as the difference between insulators and metals, may appear out of place at a workshop focussing on the “Fundamental Physics of Ferroelectrics”. But indeed, the present results are in a sense the ultimate developments of the theory of polarization modern ; rap\_a12 ; Ortiz94 ; rap100 ; rap112 based on a Berry phase Berry , which is crucial to the modern understanding of ferroelectrics William . The link between the present subject and polarization is simply stated: insulators sustain nontrivial bulk polarization, metals do not. To the present purposes, materials are conveniently divided in only two classes: insulators and metals. We use here the terms in a loose sense as synonym of nonconducting and conducting: an insulator is distinguished from a metal by its vanishing conductivity at low temperature and low frequency. This qualitative difference in the dc conductivity must reflect a qualitative difference in the organization of the electrons in their ground state. So the question we are going to address is: is it possible to find a pure ground state property which discriminates between an insulator and a metal? Before proceeding to answer, let me discuss the alternative phenomenological characterization of the insulating/metallic behavior: instead of making direct reference to the dc conductivity, we address macroscopic polarization. Suppose we expose a finite macroscopic sample to an electric field, say inserting it in a charged capacitor. Then the induced macroscopic polarization is qualitatively different in metals and insulators. In the former materials polarization is trivial: universal, material–independent, due to surface phenomena only (screening by free carriers). Therefore polarization in metals is not a bulk phenomenon. The opposite is true for insulators: macroscopic polarization is a nontrivial, material–dependent, bulk phenomenon. We can therefore phenomenologically characterize an insulator, in very general terms, as a material whose ground wavefunction sustains a bulk macroscopic polarization whenever the electronic Hamiltonian is non centrosymmetric. From this definition it is clear that the modern theory of polarization, based on a Berry’s phase, can lead to a better understanding of the insulating state of matter. This paper is organized as follows. First we briefly outline Kohn’s theory of the insulating state Kohn64 . Then we address dipole and localization for a lone electron in a Born–von–Kàrmàn periodic box; subsequently, we apply similar ideas to an extended system of $`N`$ electrons in a periodic box, addressing macroscopic polarization and electronic localization in the many–body case. We then show how this works for a crystalline system of independent electrons, and finally—following Ref. rap107 —we demonstrate localization for a model correlated system displaying two different insulating phases. For the sake of simplicity, we explicitate here the relevant algebra only for the case of one–dimensional electrons. The generalization to three–dimensional electrons can be found in Refs. rap\_a20 ; rap\_a21 ; Souza00 . ## The insulating state and Kohn’s theory Within any classical theory, the electronic responses of insulators and metals are qualitatively described by “bound” and “free” charges, respectively. Microscopic models for such charges are provided by the Lorentz theory (insulators) and by the Drude theory (metals). 
Within the former, each electron is tied (by an harmonic force) to a particular center; within the latter, electrons roam freely over macroscopic distances, hindered only by atomic scattering potentials. Therefore, from a purely classical model viewpoint, one explains the insulating/metallic behavior of a material by means of the localized/delocalized character of the electron distribution. Switching to quantum mechanics, this clearcut character of the electron distribution is apparently lost. Textbooks typically explain the insulating/metallic behavior by means of band structure theory, focussing on the position of the Fermi level of the given material: either in an band gap (insulators), or across a band (metals). This picture is obviously correct, but very limited and somewhat misleading. First of all, the band picture applies only to a crystalline material of independent electrons: a very limited class of insulators indeed. Noncrystalline insulators do in fact exist, and the electron–electron interaction is a fact of nature: in some materials the insulating behavior is dominated by disorder (Anderson insulators), in some other materials the insulating behavior is dominated by electron correlation (Mott insulators). Further classes of insulators are also known, as e.g. excitonic insulators. Therefore, for a large number of insulators, the band picture is totally inadequate. Second: even for a material where a band–structure description is adequate, the simple explanation of the insulating/metallic behavior focusses on the spectrum of the system, hence on the nature of the low lying electronic excitations. Instead, the qualitative difference in the dc conductivity at low temperature must reflect a qualitative difference in the organization of the electrons in their ground state. Such a difference is not evident in a band–structure picture: the occupied states are of the Bloch form both in insulators and in metals, and qualitatively rather similar (in particular those of simple metals and of simple semiconductors). In a milestone paper published in 1964, Kohn was able to define the insulating state of matter in a way which in a sense is close to the classical picture. In fact he gave evidence that electron localization is the main feature determining the insulating behavior of a many–electron wavefunction Kohn64 , thus restoring the same basic distinction as in the classical picture: the key is how to define and to measure the degree of electronic localization, visualizing in a qualitative and quantitative way the peculiar organization of the electrons which is responsible for the insulating state of matter. As previously stated, a superficial look indicates that electrons are roughly speaking equally delocalized in insulators and in metals. One needs therefore a sharp criterion which singles out the relevant character of the wavefunction. Kohn’s criterion is the following Kohn64 ; Kohn68 ; Souza00 : the many–electron wavefunction is localized if it breaks up into a sum of functions $`\mathrm{\Psi }=_J\mathrm{\Psi }_J`$ which are localized in essentially disconnected regions $`_J`$ of the configuration space. Any two such $`\mathrm{\Psi }_J`$’s have exponentially small overlap. Under such a localization hypothesis, Kohn proves that the dc conductivity vanishes. ## Dipole and localization for a single electron The dipole moment of any finite $`N`$–electron system in its ground state is a simple and well defined quantity. 
Given the many–body wavefunction $`\mathrm{\Psi }`$ and the corresponding single–particle density $`n(𝐫)`$ the electronic contribution to the dipole is: $$𝐑=𝑑𝐫𝐫n(𝐫)=\mathrm{\Psi }|\widehat{𝐑}|\mathrm{\Psi },$$ (1) where $`\widehat{𝐑}=_{i=1}^N𝐫_i`$. This looks very trivial, but we are exploiting here an essential fact: the ground wavefunction of any bound $`N`$–electron system is square–integrable and vanishes exponentially at infinity. Going at the very essence, we simplify matter at most and we consider in the present Section only a single electron in one dimension. The dipole (or equivalently the center) of the electronic distribution is then: $$x=_{\mathrm{}}^{\mathrm{}}𝑑xx|\psi (x)|^2,$$ (2) where again we understand $`\psi (x)`$ as a square–integrable function over $``$. This is not the way condensed matter theory works. Because of several good reasons, in either crystalline or disordered systems it is almost mandatory to assume BvK boundary conditions: the wavefunction $`\psi (x)`$ is periodic over a period $`L`$, large with respect to atomic dimensions. Adopting a given choice for the boundary conditions is tantamount to defining the Hilbert space where our solutions of Schrödinger’s equation live. By definition, an operator maps any vector of the given Hilbert space into another vector belonging to the same space: the multiplicative position operator $`x`$ is therefore not a legitimate operator when BvK are adopted for the state vectors, while any periodic function of $`x`$ is legitimate: this is the case e.g. of the nuclear potential acting on the electrons. Suppose we have an electron distribution such as the one in Fig. 1. The main issue is then: how do we define the center of the distribution? Intuitively, the distribution appears to have a “center”, which however is defined only modulo the replica periodicity, and furthermore cannot be evaluated simply as in Eq. (2), precisely because of BvK. Solutions to this and similar problems have been attempted several times: many incorrect papers—which will not be identified here—have been published over the years. The good solution has been found by Selloni et al. in 1987 by means of a very elegant and far–reaching formula brodo . According to them, the key quantity for dealing with the position operator within BvK is the dimensionless complex number $`𝔷`$, defined as: $$𝔷=\psi |\mathrm{e}^{i\frac{2\pi }{L}x}|\psi =_0^L𝑑x\mathrm{e}^{i\frac{2\pi }{L}x}|\psi (x)|^2,$$ (3) whose modulus is no larger than 1. The most general electron density, such as the one depicted in Fig. 1, can always be written as a superposition of a function $`n_{\mathrm{loc}}(x)`$, normalized over $`(\mathrm{},\mathrm{})`$, and of its periodic replicas: $$|\psi (x)|^2=\underset{m=\mathrm{}}{\overset{\mathrm{}}{}}n_{\mathrm{loc}}(xx_0mL).$$ (4) Both $`x_0`$ and $`n_{\mathrm{loc}}(x)`$ have a large arbitrariness: we restrict it a little bit by imposing that $`x_0`$ is the center of the distribution, in the sense that $`_{\mathrm{}}^{\mathrm{}}𝑑xxn_{\mathrm{loc}}(x)=0`$. Using Eq. 
(4), $`𝔷`$ can be expressed in terms of the Fourier transform of $`n_{\mathrm{loc}}`$ as: $$𝔷=\mathrm{e}^{i\frac{2\pi }{L}x_0}\stackrel{~}{n}_{\mathrm{loc}}(\frac{2\pi }{L}).$$ (5) If the electron is localized in a region of space much smaller than $`L`$, its Fourier transform is smooth over reciprocal distances of the order of $`L^1`$ and can be expanded as: $$\stackrel{~}{n}_{\mathrm{loc}}(\frac{2\pi }{L})=1\frac{1}{2}\left(\frac{2\pi }{L}\right)^2_{\mathrm{}}^{\mathrm{}}𝑑xx^2n_{\mathrm{loc}}(x)+𝒪(L^3).$$ (6) A very natural definition of the center of a localized periodic distribution $`|\psi (x)|^2`$ is therefore provided by the phase of $`𝔷`$ as: $$x=\frac{L}{2\pi }\text{Im log}𝔷,$$ (7) which is in fact the formula first proposed by Selloni et al. brodo . The expectation value $`x`$ is defined modulo $`L`$, as expected since $`|\psi (x)|^2`$ is BvK periodic. It is also worth to observe that for an extremely delocalized state we have $`|\psi (x)|^2=1/L`$ and $`𝔷=0`$: hence the center of the distribution $`x`$, according to Eq. (7), is ill–defined, as one would indeed expect. So far, we have not specified which Hamiltonian we were addressing when discussing electron distributions $`|\psi (x)|^2`$ of the kind depicted in Fig. 1. It is however obvious to imagine that the wavefunction $`\psi (x)`$ is the eigenstate of a (periodically repeated) potential well of suitable shape. Suppose for a moment we are not adopting BvK boundary conditions, having thus only a genuinely isolated potential well. In this case the eigenstates can belong to two different classes: bound (localized) states, and scattering (delocalized) states. The distinction is a qualitatively clearcut one, and can be stated in several ways. One of them is to consider the second cumulant moment, or spread: $$x^2_\mathrm{c}=x^2x^2=_{\mathrm{}}^{\mathrm{}}𝑑xx^2|\psi (x)|^2\left(_{\mathrm{}}^{\mathrm{}}𝑑xx|\psi (x)|^2\right)^2,$$ (8) which is finite for bound states and divergent (when using appropriate normalizations) for scattering ones. But if we study the same potential well within BvK, the qualitative distinction is lost: all states appear in a sense as “delocalized” since all wavefunctions $`\psi (x)`$ are periodic over the BvK period. And in fact the integrals in Eq. (8) become ill defined. The main issues therefore are: How do we distinguish between localized and delocalized states within BvK? In case of a localized state, how we actually measure the amount of localization? In the literature, such issues have been previously addressed by means of the participation ratio participation . The complex number $`𝔷`$, whose phase provides the center of the distribution, Eq. (7), is our key to addressing localization: it is enough to consider its modulus. It has already been observed that $`|𝔷|`$ is bounded between 0 and 1, and that $`|𝔷|`$ equals zero for an extremely delocalized state with $`|\psi (x)|^2=1/L`$. If we take instead an extremely localized state, with $`n_{\mathrm{loc}}(x)=\delta (x)`$, it is straightforward to get $`|𝔷|=1`$. It is therefore natural to measure localization by means of the negative of the logarithm of $`|𝔷|`$: it is a nonnegative number, equal to zero in the case of extreme localization, and divergent in the case of extreme delocalization. A glance at Eq. 
(6) yields: $$\mathrm{log}|𝔷|\frac{1}{2}\left(\frac{2\pi }{L}\right)^2_{\mathrm{}}^{\mathrm{}}𝑑xx^2n_{\mathrm{loc}}(x),$$ (9) hence a natural expression for measuring the actual spread within BvK is: $$x^2_\mathrm{c}=x^2x^2=\left(\frac{L}{2\pi }\right)^2\mathrm{log}|𝔷|^2.$$ (10) Having in mind again the eigenstates of a potential well, we can study the expression in Eq. (10) as a function of $`L`$. For a localized state, the shape of of $`n_{\mathrm{loc}}(x)`$ can be taken as $`L`$–independent for large $`L`$, hence Eq. (10) goes to a finite limit, which is the “natural” spread of the distribution. Quite on the contrary, for a delocalized state the distribution is smeared all over the $`(0,L)`$ segment, preserving the norm over one period: therefore $`𝔷`$ goes to zero and the spread diverges in the large–$`L`$ limit. ## Dipole and localization for many electrons So much about the one–electron problem: we are now going to consider a finite density of electrons in the periodic box. To start with, irrelevant spin variables will be neglected, and a system of spinless electrons in one dimension is considered. Even for a system of independent electrons, our approach takes a simple and compact form if a many–body formulation is adopted; BvK imposes periodicity in each electronic variable separately. Our interest is in studying a bulk system: $`N`$ electrons in a segment of length $`L`$, where eventually the thermodynamic limit is taken: $`L\mathrm{}`$, $`N\mathrm{}`$, and $`N/L=n_0`$ constant. We start defining the one–dimensional analogue of $`\widehat{𝐑}`$ of Eq. (1), namely, the multiplicative operator $`\widehat{X}=_{i=1}^Nx_i`$, and the complex number $$𝔷_N=\mathrm{\Psi }|\mathrm{e}^{i\frac{2\pi }{L}\widehat{X}}|\mathrm{\Psi }.$$ (11) It is obvious that the operator $`\widehat{X}`$ is ill–defined in our Hilbert space, while its complex exponential appearing in Eq. (11) is well defined. The main result of Ref. rap100 is that the ground–state expectation value of the position operator is given by the analogue of Eq. (7), namely: $$X=\frac{L}{2\pi }\text{Im ln }𝔷_N,$$ (12) a quantity defined modulo $`L`$ as above. The right–hand side of Eq. (12) is not simply the expectation value of an operator: it is the phase of it, converted into length units by the factor $`L/(2\pi )`$. This phase can be called a single–point Berry phase, for reasons explained elsewhere rap101 ; rap\_a20 ; rap\_a21 . Furthermore, the main ingredient of Eq. (11) is the expectation value of the multiplicative operator $`\mathrm{e}^{i\frac{2\pi }{L}\widehat{X}}`$: it is important to realize that this is a genuine many–body operator. In general, one defines an operator to be one–body whenever it is the sum of $`N`$ identical operators, acting on each electronic coordinate separately: for instance, the $`\widehat{X}`$ operator is such. In order to express the expectation value of a one–body operator the full many–body wavefunction is not needed: knowledge of the one–body reduced density matrix $`\rho `$ is enough: I stress that, instead, the expectation value of $`\mathrm{e}^{i\frac{2\pi }{L}\widehat{X}}`$ over a correlated wavefunction cannot be expressed in terms of $`\rho `$, and knowledge of the $`N`$-electron wavefunction is explicitly needed. In the special case of a single–determinant, the $`N`$-particle wavefunction is uniquely determined by the one–body reduced density matrix $`\rho `$ (which is the projector over the set of the occupied single–particle orbitals): therefore the expectation value $`X`$, Eq. 
(12), is uniquely determined by $`\rho `$. But this is peculiar to uncorrelated wavefunctions only. The expectation value $`X`$ is extensive, as the dipole in Eq. (1). For the corresponding intensive quantity we borrow from Ref. Souza00 a useful notation: $$x_\mathrm{c}=X/N=\frac{L}{2\pi N}\text{Im ln }𝔷_N,$$ (13) where the subscript means “cumulant”. The quantity $`x_\mathrm{c}`$ goes to a well defined termodynamic limit, which is in fact proportional to the macroscopic polarization of the system. This result is proved in Ref. rap107 ; its three dimensional generalization is discussed in Refs. rap\_a20 ; rap\_a21 ; Souza00 . We stress that nowhere have we assumed crystalline periodicity. Therefore our definition of $`x_\mathrm{c}`$ is very general: it applies to any condensed system, either ordered or disordered, either independent–electron or correlated. In the special case of a crystalline system, either interacting or noninteracting, the present approach can be shown equivalent to the previous formulations of polarization theory modern ; rap\_a12 ; Ortiz94 ; rap100 ; rap\_a20 ; rap\_a21 ; Souza00 . We are now ready to discuss electron localization in a condensed system: the present view is the one of Refs. rap107 ; rap\_a21 , recently reexamined by Souza et al. Souza00 , who also discuss its relationship to Kohn’s localizationKohn64 . This view is based on the modulus of $`𝔷_N`$, in full analogy with the previous Section about the single electron. We define therefore an intensive quantity, the second cumulant moment, by analogy with Eqs. (10) and (13): $$x^2_\mathrm{c}=\frac{1}{N}\left(\frac{L}{2\pi }\right)^2\mathrm{log}|𝔷_N|^2,$$ (14) where again the notation is borrowed from Ref. Souza00 . This second moment is a very meaningful measure of electron localization in the electronic ground wavefunction, and enjoys two important properties: (1) When applied to a crystalline system of independent electrons, we recover an important gauge–invariant quantity which controls the Marzari–Vanderbilt localization Marzari97 ; (2) Even for more general systems, correlated and/or disordered, $`x^2_\mathrm{c}`$ assumes a finite value in insulators, and diverges in metals. Indeed in a metal the modulus of $`𝔷_N`$ goes to zero in such a way that its phase is ill defined, and hence macroscopic polarization is ill defined as well. We have emphasized throughout this paper that one of the main phenomenological features differentiating insulators from metals is that the former materials sustain a nontrivial bulk polarization, while the latter do not. The complex number $`𝔷_N`$ provides the key formal link between polarization and localization, via its phase and its modulus. ## Noninteracting electrons For independent electrons, we may write the many–body wavefunction $`\mathrm{\Psi }`$ as a Slater determinant of Bloch orbitals, but in the case of a metal not all the Bloch vectors in the reciprocal cell correspond to occupied orbitals: this fact is of overwhelming importance. We consider the simple case of one band in one dimension, whose Bloch vectors are illustrated in Fig. 2, imposing BvK boundary conditions over $`M`$ cristal cells: $`L=Ma`$, where $`a`$ is the lattice constant. We restore electron spin here: we get an insulator if the number of electrons $`N`$ equals $`2M`$ (filled band, top sketch), and a metal if $`N=M`$ (half–filled band, bottom sketch). 
Both in the insulating and in the metallic case the $`N`$–electron wavefunction is a Slater determinant of size $`N`$, built of $`N/2`$ doubly occupied spatial orbitals. Following the same algebra as in Refs. rap100 ; rap\_a20 ; rap\_a21 , the complex number $`𝔷_N`$ can be written in any case (insulator or metal) by means of the determinant of a matrix $$𝔷_N=(\text{det}𝒮)^2,$$ (15) whose elements are $$𝒮_{q_s,q_s^{}}=\frac{1}{a}𝑑x\psi _{q_s}^{}(x)\psi _{q_s^{}}(x)\mathrm{e}^{i\frac{2\pi }{Ma}x},$$ (16) and these elements are nonzero whenever $`s=s^{}+1`$. In the insulating case, owing to complete filling, both $`s`$ and $`s^{}`$ run over all the $`M`$ values: for any given $`s`$, there is always one (and only one) $`s^{}`$ such that the matrix element in Eq. (16) is nonzero. That means that in any row of the $`𝒮`$ matrix—whose size is $`M\times M`$—there is one, and only one, nonvanishing element. Under these circumstances, the determinant factors as a product of $`M`$ numbers: $$\text{det}𝒮=\underset{s=0}{\overset{M1}{}}\frac{1}{a}𝑑x\psi _{q_{s+1}}^{}(x)\psi _{q_s}(x)\mathrm{e}^{i\frac{2\pi }{Ma}x},$$ (17) where the identity $`\psi _{q_M}(x)\psi _{q_0}(x)`$ is understood (periodic gauge). All the factors are nonvanishing, and the logarithm of $`𝔷_N`$ is therefore a finite number. It can be shown that the $`N\mathrm{}`$ limit of $`x^2_\mathrm{c}`$ coincides with the spread of the optimally localized Wannier functions, as defined by Marzari and Vanderbilt Marzari97 . The metallic case is very different. Since not all the $`q_s`$ vectors are occupied, the indices $`s`$ and $`s^{}`$ run over a subset of the $`M`$ values (Fig. 2): the matrix $`𝒮`$ is of size $`M/2\times M/2`$. There is one of the two $`q_s`$ at the Fermi level, for which the integrals in Eq. (16) are all vanishing, for any occupied $`s^{}`$. Therefore the matrix $`𝒮`$ has a row of zeros, and its determinant vanishes: its phase, and hence macroscopic polarization, is ill defined. The logarithm of $`𝔷_N`$ is formally $`\mathrm{}`$, and the spread $`x^2_\mathrm{c}`$ diverges to $`+\mathrm{}`$. This is what we expected for a metal; the nontrivial fact is that it diverges even at finite $`N`$, while on general grounds we only expected it to diverge in the thermodynamic ($`N\mathrm{}`$) limit. So far, the compact and elegant expression of Eq. (14) has been proved to be appropriate to discriminate between insulators and metals only for a crystalline systems of independent electrons. For the—much more interesting—general case of a correlated and/or disordered system, we postulate that Eq. (14) performs the same task. The postulate is based on the general argument—much stressed above—about macroscopic polarization: well defined in insulators, ill defined in metals (as a bulk property). The correctness of this postulate has been verified by Resta and Sorella rap107 for a one–dimensional model of a correlated crystal. Work on a model disordered system is in progress. Other very interesting discussions about the physical meaning of $`x^2_\mathrm{c}`$ and its relationships to the insulating/metallic character of the system can be found in a paper of Souza et al. Souza00 . ## Localization in a model correlated system We review here the very recent work of Resta and Sorella rap107 , where Eq. (14) is implemented for a one–dimensional two–band Hubbard model at half filling, intended to mimic an insulator having a mixed ionic/covalent character, and whose ground wavefunction is explicitly correlated. 
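Before turning to the correlated model, a toy numerical sketch may make the role of $`|𝔷_N|`$ more concrete. It is not taken from the works cited above: the orbitals are invented for the purpose and, for simplicity, spinless fermions are used, so the square appearing in Eq. (15) is absent. For a Slater determinant, $`𝔷_N`$ reduces to the determinant of the matrix of single-particle matrix elements of $`\mathrm{e}^{i\frac{2\pi }{L}x}`$ between the occupied orbitals, which the sketch evaluates for a localized and for a delocalized set of orbitals:

```python
import numpy as np

L, N, npts = 100.0, 10, 4096            # box length, particle number, grid points
x = np.linspace(0.0, L, npts, endpoint=False)
dx = x[1] - x[0]
phase = np.exp(1j * 2.0 * np.pi * x / L)

def z_N(orbitals):
    """z_N for a Slater determinant of spinless fermions, evaluated as det M with
    M_jk the matrix element of exp(i 2 pi x / L) between orbitals j and k.
    `orbitals` holds orthonormal single-particle orbitals as rows on the grid x."""
    M = (orbitals.conj() * phase) @ orbitals.T * dx
    return np.linalg.det(M)

# (a) localized ("insulator-like"): narrow Gaussians on a lattice of N sites;
#     their mutual overlap is negligible, so they are orthonormal in practice
centres = (np.arange(N) + 0.5) * L / N
sigma = 0.5
gauss = np.exp(-((x[None, :] - centres[:, None]) ** 2) / (2.0 * sigma ** 2))
gauss = gauss / np.sqrt((np.abs(gauss) ** 2).sum(axis=1, keepdims=True) * dx)

# (b) delocalized ("metal-like"): N plane waves exp(i 2 pi n x / L) / sqrt(L)
n = np.arange(-(N // 2), N - N // 2)
plane = np.exp(1j * 2.0 * np.pi * n[:, None] * x[None, :] / L) / np.sqrt(L)

for label, orbs in (("localized", gauss), ("plane waves", plane)):
    z = z_N(orbs)
    if abs(z) > 1e-12:
        spread = -(L / (2.0 * np.pi)) ** 2 * np.log(abs(z) ** 2) / N   # Eq. (14)
        print(f"{label:12s} |z_N| = {abs(z):.4f}   <x^2>_c = {spread:.3f}")
    else:
        print(f"{label:12s} |z_N| = {abs(z):.2e}   <x^2>_c diverges")
```

For the localized set, $`|𝔷_N|`$ stays close to 1 and Eq. (14) returns essentially the variance of the individual orbitals, while for the plane waves one momentum row of the matrix has no partner inside the occupied set, the determinant vanishes and the spread diverges, exactly as in the half-filled-band argument above; the narrow Gaussians play the role of the Wannier functions of a filled band.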
The macroscopic polarization of this model system was studied in Refs. rap87 ; Ortiz95 . To the present purpose, it is enough to study the centrosymmetric geometry: polarization is zero, the wavefunction is real, and the phase of $`𝔷_N`$ is either 0 or $`\pi `$. The model has a very interesting behavior as a function of $`U`$: at small $`U`$ it is a band insulator, while at a critical $`U_c`$ undergoes a transition to a Mott–like insulating phase. In the centrosymmetric geometry, $`𝔷_N`$ is a real number, which changes sign at $`U_c`$. Its phase $`\gamma `$ (i.e. the single–point Berry phase rap101 ; rap\_a20 ; rap\_a21 ) jumps therefore by $`\pi `$: it turns out that this occurrence is the main fact signalling the transition: the topological quantum number $`\gamma /\pi `$ can be used as an order parameter to identify the two different phases of the system Thouless ; Aligia . At low values of $`U`$ our model is a band insulator, while above $`U_c`$ is a Mott–like insulator. What happens to the second cumulant moment (squared localization length) $`x^2_\mathrm{c}`$ as a function of $`U`$? The results are shown in Fig. 3 in terms of the dimensionless quantity $$𝒟_N=N\mathrm{log}|𝔷_N|^2,$$ (18) such that the squared localization length, Eq. (14) is; $$x^2_\mathrm{c}=\frac{1}{(2\pi n_0)^2}\underset{N\mathrm{}}{lim}𝒟_N.$$ (19) At $`U=0`$ the system is noninteracting, and the squared localization length $`x^2_\mathrm{c}`$ coincides with the second moment of the (optimally localized) Wannier function of the occupied band. In the correlated case at $`U0`$ no Wannier analysis can be performed; yet $`x^2_\mathrm{c}`$ mantains its role as a meaningful measure of the localization of the electronic wavefunction as a whole. The localization length increases with $`U`$ below the transition, diverges at the transition point $`U_c`$, and becomes localized again in the highly correlated regime, where in fact $`x^2_\mathrm{c}`$ decreases with increasing $`U`$. Since the localization length remains finite at all values of $`U`$ different from $`U_c`$, the system is always insulating except at the transition point; the two insulating phases are topologically different and correspond to a qualitatively different organization of the electrons in the wavefunction Ortiz95 . What about the transition point? According to the previously stated viewpoint, the delocalized behavior implies a metallic character of the many–electron system. Notice that $`x^2_\mathrm{c}`$ is a pure ground state property and apparently carries no information about the excitation spectrum of the system. Yet we have explicitly verified—by exploiting the metastability of the Lanczos algorithm—that at the critical $`U`$ value there is indeed a level crossing. At the transition value $`U_c`$ the ground state is twice degenerate and the lowest lying excitation (at constant $`N`$) has vanishing energy. ## Acknowledgments Discussions with R. M. Martin, F. Mauri, Q. Niu, G. Ortíz, A. Pasquarello, S. Sorella, I. Souza, and D. Vanderbilt are gratefully acknowledged. Part of this work was performed at the 1998 workshop “Physics of Insulators” at the Aspen Center for Physics. Partly supported by the Office of Naval Research, through grant N00014-96-1-0689.
no-problem/0003/physics0003055.html
ar5iv
text
# Enhanced dielectronic recombination of lithium-like Ti<sup>19+</sup> ions in external E×B fields
no-problem/0003/hep-ex0003011.html
ar5iv
text
# 1 Introduction ## 1 Introduction The interaction of electrons and protons at the HERA collider is dominated by photoproduction processes in which quasireal photons emitted by the electrons interact with the protons. The center of mass energies in the $`\gamma `$p system extend to 300 GeV. A fraction of these events has large transverse energy in the final state and contains jets. Previous studies of hard $`\gamma `$p scattering processes by H1 and ZEUS have shown that the photoproduction of jets can be described in perturbative QCD. The photon interacts either directly with a parton from the proton, or it develops hadronic structure and one of its own partons interacts with one of those from the proton. The former are referred to as direct interactions, whereas the latter are referred to as resolved interactions. Jet cross-section predictions are obtained to leading order (LO) in the strong coupling constant as a convolution of the hard scattering cross-sections calculated at the tree level with the parton densities in the photon and the proton. The partons leaving the hard scattering reaction are identified with jets. In the kinematic range of the present analysis the parton densities in the proton are rather well known and the quark densities of the photon have been determined from two-photon processes at $`e^+e^{}`$ colliders. The measurement of the di-jet cross-section can therefore be used to determine the gluon distribution in the photon. From previous studies it is known that the energy flow in the events of interest here is complicated by a large “underlying event” energy, which can be described as arising from multiple interactions. That is, in addition to the primary hard interaction, further interactions occur between partons in the proton and photon remnants. Modelling of higher order QCD effects using parton showers is also important if an accurate description of the energy flow in and around the jets is to be obtained. Current Monte Carlo (MC) models including such effects are based on LO QCD matrix element calculations; NLO predictions are available at the parton level only. The analysis described in this paper is similar to that presented in an earlier publication . However, the data used now correspond to an integrated luminosity of 7.2 pb<sup>-1</sup> as opposed to 0.29 pb<sup>-1</sup>. Its main emphasis is on the study of di-jet production at small $`x_\gamma `$ where gluons in the photon are expected to make the largest contribution to the cross-section. The data in this kinematic region are strongly affected by non-perturbative effects as discussed in detail below. We therefore limit ourselves here to a LO QCD analysis of the parton distributions in the photon. A NLO analysis of di-jet events in photoproduction has been published recently by the ZEUS collaboration for a high cut in transverse jet energy $`E_T>11`$ GeV. In this kinematic region of large $`x_\gamma `$, where the influence of the underlying event energy is reduced, the quark rather than the gluon content of the photon is expected to dominate the cross-section. ## 2 The H1 Detector A detailed description of the H1 detector can be found elsewhere . Here we describe only those components which are important for this analysis. The H1 central tracking system is mounted coaxially around the beam-line and covers polar angles $`\theta `$, measured with respect to the proton beam direction, in the range $`20^{}<\theta <160^{}`$. 
Momentum measurements of charged particles are provided by two cylindrical drift chambers. The central tracking system is complemented at two radii by $`z`$\- drift chambers, which provide accurate measurements of the $`z`$ coordinate along the beam line of charged particle tracks, and multiwire proportional chambers (MWPCs), which allow triggering on central tracks. In the present analysis the tracking detectors are used to define the vertex position along the beam axis and to improve the measurement of the hadronic energy flow at low hadron energies. The tracking system is surrounded by a highly segmented liquid argon (LAr) sampling calorimeter with an inner electromagnetic section consisting of lead absorber plates with a total depth of 20 to 30 radiation lengths and an outer hadronic section with steel absorber plates. The LAr calorimeter covers polar angles between $`4^{}`$ and $`154^{}`$ with full azimuthal acceptance. The total depth of the calorimeter varies between 4.5 and 8 hadronic interaction lengths. The energy resolution was measured to be $`\sigma (E)/E0.12/\sqrt{E}`$ for electrons and $`\sigma (E)/E0.5/\sqrt{E}`$ for hadrons ($`E`$ in GeV) in test beam experiments. The absolute energy scale is known for the present data sample to a precision of 1 to 3% for positrons and 4% for hadrons. The region $`153^{}<\theta <177.8^{}`$ is covered by a lead/scintillating-fibre calorimeter. The luminosity determination is based on the measurement of the bremsstrahlung process, $`epep\gamma `$, using the small angle photon detector ($`z=103`$ m), and by detecting the scattered positron in the small angle electron detector ( $`z=33`$ m) where $`z`$ is the coordinate along the beam line with the nominal vertex at the origin. Both detectors are crystal Čerenkov calorimeters with an energy resolution of $`\sigma (E)/E0.22/\sqrt{E}`$. The small angle electron detector is used in the present analysis also to tag photoproduction events. ## 3 Event Selection and Kinematic Reconstruction The events used in this analysis were taken during the 1996 running period, in which HERA collided 820 GeV protons with 27.5 GeV positrons. The transverse jet energies required in this analysis are as low as 4 GeV. In order to improve the jet energy resolution at low jet energies the energy flow is reconstructed by combining the energy measurements made in the LAr calorimeter with the measured momenta of spatially associated charged tracks with transverse momenta smaller than 1.5 GeV, avoiding double counting. More details are given in . Events were selected according to the following requirements: 1. The event is triggered by a combination of trigger signals from the small angle electron detector and from charged tracks in the central detectors with a minimum requirement on their transverse momentum of about 300 MeV. 2. The scattered positron is detected and measured in the small angle electron detector in order to ensure a low photon virtuality ($`Q^2<0.01`$ GeV<sup>2</sup>). The energy fraction $`y_e`$ carried by the radiated photon is restricted to the range $`0.5<y_e<0.7`$, where $`y_e`$ is reconstructed from the energy of the scattered positron. The lower cut on $`y_e`$ ensures that a high momentum photon enters the hard scattering process such that the detector acceptance for the two hard jets is large; the upper cut is required by the acceptance of the small angle electron detector. 3. 
At least two jets with transverse energy $`E_{T,jet}>4`$ GeV and an invariant jet-jet mass $`M_{1,2}>12`$ GeV have to be found using a cone algorithm in the region $`0.5<\eta _{jet}<2.5`$. Here $`\eta _{jet}`$ is the pseudorapidity in the laboratory and positive $`\eta `$ corresponds to the direction of the outgoing proton. A small cone radius of $`R=0.7`$ in the $`\eta \varphi `$ plane is used to reduce the effects of the underlying event energy on the jet energy measurement. The two jets with highest transverse energy are associated to the hard scattering process. 4. The difference in pseudorapidity between the jets is restricted to $`|\eta _{jet1}\eta _{jet2}|<1`$. This cut reduces the background of events where one jet is in the beam pipe and a second jet, not associated with the hard scattering process, is found instead. Using these cuts 1889 di-jet events remain. The longitudinal momentum fraction of the incident parton in the photon is estimated using $`y_e`$ and the transverse energies and pseudorapidities of the two jets with the highest $`E_T`$, $$x_{\gamma ,jets}=\frac{E_{T,jet1}e^{\eta _{jet1}}+E_{T,jet2}e^{\eta _{jet2}}}{2y_eE_{e,0}},$$ (1) where $`E_{e,0}`$ denotes the electron beam energy. In the selected event sample, $`x_\gamma `$ is limited to the range $`x_{\gamma ,jets}>0.03`$ as a result of the cuts on the transverse energy, on the invariant mass of the jets, on the pseudorapidity and on $`y_e`$. The trigger efficiency is monitored in the data by using an independent calorimetric reference trigger. The efficiency ranges from $`90\%`$ at high $`x_{\gamma ,jets}`$ to $`65\%`$ at low $`x_{\gamma ,jets}`$, and is well described by the detector simulation. An error of $`\pm 5`$% is assigned to the trigger efficiency. ## 4 Monte Carlo Generators for Hard $`𝜸𝒑`$ Processes The analysis uses simulated events to correct the measurements for detector effects, and to further compare the data with perturbative QCD predictions for the hard parton scattering and different models for multiple interactions. The Monte Carlo generators used in this analysis are PHOJET and PYTHIA . Both use LO QCD matrix elements for the hard scattering subprocesses. Initial and final state parton radiation and the string fragmentation model are included as implemented in the JETSET program . The two Monte Carlo generators differ in the treatment of multiple interactions and the transition from hard to soft processes at low transverse parton momentum $`\widehat{p}_T`$. The hard parton-parton cross-section diverges towards low $`\widehat{p}_t`$ and therefore needs a regularisation to normalise to the measured total cross-section. This regularisation is achieved for PHOJET by a simple cut-off at $`\widehat{p_t}=2.5`$ GeV. For the PYTHIA generator we have chosen the option to use a damping factor $`\widehat{p}_t^2/(\widehat{p}_t^2+\widehat{p}_{0t}^2)`$ where $`\widehat{p}_{0t}`$ was taken to be 1.55 GeV. <sup>1</sup><sup>1</sup>1 This regularisation corresponds to a model with variable impact parameter for multiple parton interactions as explained in reference , section 11.2. The PHOJET event generator simulates in a consistent way all components that contribute to the total photoproduction cross-section. PHOJET incorporates detailed simulations of both multiple soft and hard parton interactions on the basis of a unitarisation scheme. The PYTHIA 5.7 event generator uses LO QCD calculations to simulate both the primary parton-parton scattering process and multiple parton interactions. 
The latter are considered to result from the scattering of partons from the photon and proton remnants. The final state partons are required to have a transverse momentum of at least $`1.2`$ GeV in all cases. For both Monte Carlo models the factorisation and renormalisation scales were set to the transverse momentum $`\widehat{p}_t`$ of the scattered partons. GRV92-LO parton distribution functions for the proton and photon were used for the generation of the events. ## 5 Energy Flow and Jet Correlations A precise measurement of the transverse jet energy $`E_{T,jet}`$ is very important because the measured transverse jet energy distribution falls roughly as $`(E_{T,jet})^{5.5}`$. Therefore a poor description of the energy flow around the jet leads to severe systematic biases in the determination of cross-sections. The transverse energy flow is well described by both Monte Carlo simulations within the jet cone. They differ, however, outside the jets: PYTHIA slightly overestimates and PHOJET underestimates the transverse energy . Remaining differences between the two Monte Carlo models are used to estimate the systematic error of the jet reconstruction due to the underlying event energy. The transverse energy outside the jet cones depends mainly on $`\eta `$ because the energy available for multiple interactions is large for small $`x_\gamma `$, i.e. large $`\eta `$, where the photon spectator has large fractional momentum $`1x_\gamma `$. The average transverse energy density (per unit area in $`\eta \varphi `$) outside the jets in the interval $`1<\eta \eta _{jet}<1`$ is shown in Figure 1 versus $`\eta _{jet}`$ compared to the predictions of the two Monte Carlo models. This “pedestal” energy $`E_{T,Ped}`$ is calculated as follows. For every jet the transverse energy is summed in the region $`\mathrm{\Omega }`$ with area A defined by $`1<\eta \eta _{jet}<1`$ and $`\pi <\varphi \varphi _{jet}<\pi `$ around the analysed jet but excluding the jets themselves using a cone radius R=1.0. $`E_{T,Ped}`$ is finally given by: $$E_{T,Ped}=\frac{1}{A}\underset{\mathrm{\Omega }}{}E_T$$ (2) The average transverse energy density measured outside of the jets (pedestal energy) is as high as 1.4 GeV at large $`\eta _{jet}`$ and can therefore give, at small jet energies, a substantial contribution to the transverse energy of a jet. A detailed study of the jet-jet correlations using the two Monte Carlo models shows good agreement between data and MC which justifies the use of these LO QCD Monte Carlo generators for the analysis. ## 6 Di-jet Cross-Section for $`𝑬_{𝑻\mathbf{,}𝒋𝒆𝒕}\mathbf{>}\mathrm{𝟒}`$ GeV Jets at the detector level are reconstructed using calorimeter clusters and tracks. Monte Carlo events offer both the possibility to reconstruct jets at the detector level and to reconstruct jets using the generated hadrons. The reconstructed jets are then used to calculate $`x_\gamma `$ at the detector level (termed $`x_{\gamma ,det}`$) and at the hadron jet level (termed $`x_{\gamma ,jets}`$). The correlation between these two quantities is used to unfold the measured $`x_{\gamma ,det}`$ distribution to the hadron jet level in bins of $`x_{\gamma ,jets}`$. This correlation can be characterised by a Gaussian distribution in the quantity $`(\mathrm{log}(x_{\gamma ,det})\mathrm{log}(x_{\gamma ,jets}))`$ with a dispersion of $`\sigma 0.12`$ and small non-Gaussian tails. Finally, the differential cross-section $`\mathrm{d}\sigma /\mathrm{d}\mathrm{log}(x_{\gamma ,jets})`$ is calculated. 
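The jet-level estimator entering this unfolding is the one defined in Eq. (1), used together with the di-jet mass cut of the event selection. A minimal sketch of both quantities is given below; the function names and the numbers of the example event are hypothetical, and the jets are treated as massless in the invariant-mass formula. Note that the exponents in Eq. (1) carry a minus sign, $`E_{T,jet}e^{\eta _{jet}}`$ being the $`(Ep_z)`$ of a massless jet; the sign is easily lost in the rendering of the equation.

```python
import math

E_BEAM_E = 27.5   # positron beam energy in GeV

def x_gamma_jets(et1, eta1, et2, eta2, y_e):
    """Momentum fraction of the parton from the photon, Eq. (1), estimated from
    the two highest-E_T jets and the inelasticity y_e from the electron tagger."""
    return (et1 * math.exp(-eta1) + et2 * math.exp(-eta2)) / (2.0 * y_e * E_BEAM_E)

def dijet_mass(et1, eta1, phi1, et2, eta2, phi2):
    """Invariant mass of two jets, treating them as massless:
    M^2 = 2 E_T1 E_T2 (cosh(delta_eta) - cos(delta_phi))."""
    m2 = 2.0 * et1 * et2 * (math.cosh(eta1 - eta2) - math.cos(phi1 - phi2))
    return math.sqrt(max(m2, 0.0))

# hypothetical di-jet event with two forward jets and y_e = 0.6
print(x_gamma_jets(6.5, 1.2, 5.8, 1.9, y_e=0.6))        # about 0.09
print(dijet_mass(6.5, 1.2, 0.3, 5.8, 1.9, 0.3 + 2.9))   # roughly back-to-back in phi
```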
After reweighting the distributions according to the new cross-section derived during unfolding, both Monte Carlo simulations describe all aspects of the measured data distributions in the detector equally well. The dominant systematic error (up to $`24\%`$ at low $`x_\gamma `$) results from the uncertainty on the hadronic energy scale of $`\pm 4\%`$. The stability of the unfolding procedure is studied by starting with very different cross-sections in the Monte Carlo simulation. This results in changes of the unfolded cross-section of less than 10%. The systematic errors of the corrections for detector acceptance and resolution are evaluated by using both Monte Carlo simulations (PYTHIA and PHOJET) and by using renormalisation and factorisation scales $`0.5\widehat{p_t}`$ in addition to the default choices $`\widehat{p_t}`$. The detector corrections are found to differ by up to $`10\%`$. Additional experimental uncertainties arise from the trigger efficiency ($`5\%`$) and the acceptance of the small angle electron detector and from the luminosity measurement (combined error $`6\%`$). All systematic errors are added in quadrature. The di-jet cross-section $`\mathrm{d}\sigma /\mathrm{d}\mathrm{log}(x_{\gamma ,jets})`$ is shown in Figure 2 and Table 1, where the data points are averages of the results obtained using the two Monte Carlo simulations for unfolding. The measurement is made in the kinematic region $`E_{T,jet}>4`$ GeV, $`M_{1,2}>12`$ GeV, $`-0.5<\eta _{jet}<2.5`$, $`|\eta _{jet1}-\eta _{jet2}|<1`$ and $`0.5<y_e<0.7`$. The inner error bars reflect the statistical errors and the outer error bars show the statistical and systematic errors added in quadrature. This cross-section determination relies on an adequate description of the energy flow and of the angular correlations of the jets, both of which are achieved by the PYTHIA and PHOJET Monte Carlo simulations as discussed in section 5. Remaining differences outside the jet cones between data and either of the Monte Carlo simulations are comparable to the difference between the two Monte Carlo simulations. The systematic error associated with the uncertainty in the description of the energy flow, especially of the underlying event energy, is therefore estimated as half the difference of the results obtained when unfolding with the alternative Monte Carlo simulations. It amounts to 10 to 15 %. The absolute predictions of the PHOJET and PYTHIA models using the same parton density functions for the photon and the proton and the same factorisation and renormalisation scales are shown in Figure 2 in comparison to the data. The two predictions should be the same if this low $`E_T`$ jet sample were dominated by the effects of hard scattering. However, they differ by almost a factor 2 for $`x_\gamma <0.5`$. This can be traced back to the parton transverse momentum spectra of the selected di-jet events, which differ greatly for PYTHIA and PHOJET at low $`\widehat{p_t}`$ due to the different regularisation procedures. PYTHIA predicts a much larger fraction of di-jet events with parton $`\widehat{p_t}`$ between 2 and 4 GeV than PHOJET does. Such partons produce jets with $`E_{T,jet}>4`$ GeV because of a large underlying event energy in the jet cone. We conclude that this event sample is strongly influenced by effects such as the regularisation procedure and the underlying event energy, which makes a comparison to perturbative QCD predictions difficult.
However, both models lead to a comparably good description of all aspects of the data once the predictions are reweighted to the measured $`x_\gamma `$ distribution. Therefore the measured cross-section is a solid experimental result which can be compared to any model that gives a complete description of hard $`\gamma `$p processes.
## 7 Analysis of Di-jet events for $`𝑬_{𝑻\mathbf{,}𝒋𝒆𝒕}\mathbf{>}\mathrm{𝟔}`$ GeV
The determination of cross-sections usable for perturbative QCD analysis requires a data sample which is dominated by the effects of the partons from the hard scattering process. A more restrictive data selection is therefore used for the subsequent analysis steps. The initial selection used here is as described in the previous section, but without the application of the jet-jet mass cut, which becomes ineffective for the increased $`E_T`$ cut (see below). The following further requirements are then made:
1. The transverse energy in the jet cone for each jet is corrected for the average expected underlying event energy $`E_{T,Ped}`$ as a function of $`\eta _{jet}`$ (Figure 1). To do this, the average transverse energy density, as determined outside the jet cone using the Monte Carlo simulations, is subtracted from the measured energy in the cone (a minimal numerical sketch of this subtraction is given below). Monte Carlo studies show that this simple procedure leads to good agreement between the average jet and parton energies and at the same time improves the energy correlation significantly for individual events. After this subtraction the remaining transverse jet energy has to be larger than 6 GeV. This procedure reduces non-perturbative effects much more effectively than just raising the cut on $`E_{T,jet}`$.
2. The pseudorapidity of each jet has to satisfy the requirement $`\eta _{jet}>0.9\mathrm{ln}[x_{\gamma ,jets}]`$. While this cut hardly affects genuine di-jet events with $`E_{T,jet}>6`$ GeV and $`|\eta _{jet1}-\eta _{jet2}|<1`$, it eliminates a large fraction of those events where one of the two jets used to reconstruct $`x_\gamma `$ is not associated with the hard scattering process.
These additional cuts are introduced to achieve a good correlation between the measured and true values of $`x_\gamma `$. They also reduce the differences in the parton distributions of the two Monte Carlo event samples. The selected di-jet sample contains 750 events.
### 7.1 Di-jet Cross-Section for $`𝑬_{𝑻\mathbf{,}𝒋𝒆𝒕}\mathbf{>}\mathrm{𝟔}`$ GeV
The cross-section $`\mathrm{d}\sigma /\mathrm{d}\mathrm{log}(x_{\gamma ,jets})`$ is determined from the event sample with the jet transverse energy cut $`E_{T,jet}>6`$ GeV after pedestal energy subtraction. The result is shown in Figure 3 and Table 1, where the data points are obtained by averaging the results from the two Monte Carlo simulations used for unfolding. In Figure 3 the data are compared to the predictions of the two Monte Carlo simulations using the GRV92 LO structure functions and $`\widehat{p_t}`$ for the renormalisation and factorisation scales. For PHOJET the contributions of resolved photon interactions due to quarks and gluons from the photon and from direct photon interactions are shown separately. The higher $`E_T`$ cut, combined with the pedestal subtraction, strongly depopulates the low $`x_\gamma `$ region and therefore also the region where gluons from the photon dominate. Nevertheless, the data remain sensitive to the gluon distribution down to $`x_\gamma =0.05`$.
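The sketch referred to in selection step 1 above: the pedestal energy density of Figure 1 (defined through Eq. (2)) is removed from the measured cone energy before the 6 GeV requirement. Multiplying the density by the cone area $`\pi R^2`$ and the numerical values used are assumptions for illustration only.

```python
import math

def corrected_et(et_measured, ped_density, cone_radius=0.7):
    """Selection step 1: subtract the expected underlying-event (pedestal)
    transverse energy from the measured jet E_T.  ped_density is the
    eta-dependent density of Figure 1 (GeV per unit eta-phi area); multiplying
    by the cone area pi*R^2 is an assumption about how the average subtraction
    is implemented in practice."""
    return et_measured - ped_density * math.pi * cone_radius ** 2

# toy jet: 8.5 GeV measured E_T at large eta, where the pedestal reaches ~1.4 GeV
et_corr = corrected_et(8.5, 1.4)
passes = et_corr > 6.0          # remaining transverse energy must exceed 6 GeV
print(et_corr, passes)
```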
The differences between the PYTHIA and PHOJET predictions based on the same parton densities and using the same scales are now at a level between 10 and 40%. ### 7.2 The Effective Parton Distribution The di-jet cross-section in LO QCD is given by a sum of direct and resolved photon contributions. The direct photon contribution depends only on the well known parton distributions in the proton and can therefore be predicted. The resolved part of the di-jet cross-section in LO QCD is a sum of quark-quark ($`qq`$), gluon-quark ($`gq`$) and gluon-gluon ($`gg`$) scattering processes with different angular distributions and weights. To a good approximation, however, the differential cross-section, in the $`\mathrm{\Delta }\eta `$ range chosen for this analysis, can be described with a single effective subprocess using effective parton distributions for the photon and proton and a single differential parton-parton angular distribution $`\mathrm{d}\widehat{\sigma }/\mathrm{d}\mathrm{cos}\widehat{\mathrm{\Theta }}`$ . This is because the angular distribution is very similar for the largest contributing subprocesses within the present $`\mathrm{\Delta }\eta `$ range. For resolved processes the differential cross-section can therefore be approximately expressed as $$\frac{\mathrm{d}^4\sigma ^{ep}}{\mathrm{d}y\mathrm{d}x_\gamma \mathrm{d}x_p\mathrm{d}\mathrm{cos}\widehat{\mathrm{\Theta }}}=\frac{1}{32\pi s_{ep}}\frac{f_{\gamma /e}}{y}\frac{f_{\gamma ,eff}(x_\gamma )f_{p,eff}(x_p)}{x_\gamma x_p}\frac{\mathrm{d}\widehat{\sigma }}{\mathrm{d}\mathrm{cos}\widehat{\mathrm{\Theta }}}$$ Here the effective parton distributions for the photon and proton can be written $$f_{\gamma ,eff}(x_\gamma )=\left[q(x_\gamma )+\overline{q}(x_\gamma )+9/4g(x_\gamma )\right]$$ $$f_{p,eff}(x_p)=\left[q(x_p)+\overline{q}(x_p)+9/4g(x_p)\right]$$ $`f_{\gamma /e}`$ is the photon flux and $`s_{ep}`$ is the center of mass energy squared of the $`ep`$ system. The quark densities $`q(x)`$ comprise the sum over all flavours. Since the parton densities in the proton are well constrained, the effective parton density in the photon can be determined from the measured cross-section. Monte Carlo studies show that the correlation between $`x_{\gamma ,det}`$ as reconstructed in the detector and the generated momentum fraction $`x_\gamma `$ of the parton entering the hard scattering process from the photon side for both PYTHIA and PHOJET is rather good. It can be characterised by a Gaussian distribution of the quantity $`(\mathrm{log}(x_{\gamma ,jets})\mathrm{log}(x_\gamma ))`$ with a dispersion of $`\sigma 0.2`$ and only small non-Gaussian tails over the full measured range of $`x_\gamma `$. This correlation is used to correct the measured jet cross-section for the effects of hadronisation and underlying event energy (which is only subtracted on average) using the unfolding method of . During the unfolding procedure the direct and resolved photon contributions are calculated keeping all parton densities in the proton fixed to the GRV92 LO parton distributions , while the effective parton density in the photon is adjusted to get best agreement with the measured $`x_\gamma `$ distribution. This determines the effective parton density in the photon. After unfolding and reweighting to the new effective parton density, all Monte Carlo distributions are in good agreement with the data for both models . 
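As a brief aside, the effective parton densities defined above can be encoded in a single line; the toy quark and gluon values below are placeholders, not a real parametrisation.

```python
def f_eff(q, qbar, g):
    """Effective parton density of the single-effective-subprocess approximation:
    quark plus antiquark densities plus 9/4 times the gluon density, the 9/4 being
    the ratio of gluon to quark colour factors."""
    return q + qbar + 9.0 / 4.0 * g

# toy values at some fixed x (illustrative numbers only, not a real parton set)
print(f_eff(q=0.4, qbar=0.4, g=1.2))    # -> 3.5
```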
This demonstrates that it is possible in a leading order Monte Carlo model for the hard scattering process to get a good description of the observed di-jet events when the optimised photon parton densities derived from our data are used. Figure 4 and Table 2 show the measured effective parton density of the photon multiplied by $`\alpha ^1x_\gamma `$, where $`\alpha `$ is the fine structure constant. The LO QCD expectation for the direct photon contribution, as given by the Monte Carlo simulations, has been subtracted. The magnitude and distribution of this contribution can be seen in Figure 3. It amounts to about 23% of the selected di-jet events. The measured points correspond to an average scale $`\widehat{p_t}^2=74`$ GeV<sup>2</sup> as determined from the $`\widehat{p_t}`$ of the partons in the weighted Monte Carlo sample which describes the data distributions. Both statistical and total errors are given. Systematic errors have been determined using the method outlined in section 6. A systematic error due to model uncertainties in the $`x_\gamma `$ correlation is added. This error is dominated by the subtraction of the underlying event energy and taken to be half the difference between the effective parton distributions using PYTHIA or PHOJET respectively for the unfolding. It amounts to 15 to 20%. The data points of Figure 4 and Table 2 are finally obtained by averaging the results obtained using the two Monte Carlo simulations for unfolding. The measured effective parton distribution is compared to the GRV92 LO parametrisation of the parton densities in the photon which is obtained by a fit to $`e^+e^{}`$ two-photon data alone . These data constrain the quark density in the photon, but give only indirect information on the gluon distribution via the observed scaling violations. The contribution of quarks plus antiquarks in the photon as given by the GRV92 parametrisation is shown separately. It includes the charm contribution (about 25% of the quark contribution) as calculated for $`\gamma `$p interactions. The predicted quark plus antiquark contribution describes the data well at the highest values of $`x_\gamma `$ but falls far below at small $`x_\gamma `$. Within LO QCD the difference can only be attributed to a gluon contribution which thus is shown to rise strongly towards low $`x_\gamma `$. The extracted effective parton density constitutes the main result of this analysis at the parton level since, in contrast to the gluon density, it can be extracted from our data alone. ### 7.3 The Gluon Distribution Since the quark density in the photon is well constrained by studies of photon-photon collisions in $`e^+e^{}`$ data it can be subtracted from the measured effective parton density within the present LO QCD approach. The subtraction is performed using the GRV92 LO parton distributions which are in good agreement with the data and with other parametrisations. The resulting gluon distribution in the photon is shown in Figure 5 and Table 2. The total error includes the uncertainty of the quark plus antiquark contribution in the photon which is known with an error of less than 30% for $`0.1<x_\gamma <0.8`$ as derived directly from the measurement . This uncertainty increases up to 60% at the smallest $`x_\gamma `$values considered here. This conservative error estimate covers also the uncertainty in the calculated charm contribution. The gluon distribution is only large for small $`x_\gamma `$ as expected. 
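A minimal sketch of the subtraction described in Section 7.3: within LO QCD the gluon density follows from the measured effective density once the quark-plus-antiquark contribution (taken from GRV92 LO in the text) is removed. The numerical inputs below are placeholders.

```python
def gluon_from_f_eff(f_eff_gamma, q_plus_qbar):
    """LO extraction of Section 7.3: x*g(x) = (4/9) * [x*f_eff(x) - x*(q + qbar)(x)].
    Any uncertainty on the quark-plus-antiquark term propagates into the gluon
    with the same 4/9 factor."""
    return 4.0 / 9.0 * (f_eff_gamma - q_plus_qbar)

# placeholder numbers: a measured effective density and a GRV92-like quark term
print(gluon_from_f_eff(3.5, 0.8))       # -> 1.2
```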
The present measurement is in good agreement with an earlier H1 measurement based on high transverse momentum tracks in photoproduction at scales $`\widehat{p_t}^2=38`$ GeV<sup>2</sup> where systematic uncertainties are very different. The analysis based on high transverse momentum tracks has the advantage that it is hardly affected by the underlying event energy. However, in such an analysis it is not possible to define a quantity which is strongly correlated to $`x_\gamma `$. This made the unfolding procedure less effective and required coarse binning in $`x_\gamma `$. The measurements are compared to LO parametrisations of the gluon distribution. Of the two older parametrisations, that of GRV92 LO gives best agreement with the data and has been used throughout this paper for comparisons of the data with Monte Carlo predictions, whereas the parametrisation of LAC1 shows a too steep rise at small $`x_\gamma `$. The more recent parametrisations GRS99 and SaS1D agree very well with each other but fall below the measured distribution. ## 8 Conclusions Two new measurements of the differential di-jet cross-section $`\mathrm{d}\sigma /\mathrm{d}\mathrm{log}x_{\gamma ,jets}`$ in photoproduction at HERA are presented for rather low transverse jet energies. They reach parton fractional energies down to $`x_{\gamma ,jets}=0.05`$, a range where the gluons from the photon are found to dominate the di-jet cross-section. This kinematic region is strongly affected by underlying event energy and, for the $`E_{T,jet}>4`$ GeV selection, by the uncertainties in the description of the transition from hard to soft processes. For di-jet events with $`E_{T,jet}>6`$ GeV, where the cut is applied after subtraction of the underlying event energy, the correlation to the parton dynamics is greatly improved. Leading order QCD gives a good description of these data which makes possible a determination of the effective parton density in the photon. This quantity is dominated by the gluon density for $`x_\gamma 0.2`$ which is found to rise strongly towards small $`x_\gamma `$. The result is in good agreement with earlier measurements of the H1 collaboration but more precise. ## Acknowledgements We are grateful to the HERA machine group whose outstanding efforts have made and continue to make this experiment possible. We thank the engineers and technicians for their work in constructing and now maintaining the H1 detector, our funding agencies for financial support, the DESY technical staff for continual assistance, and the DESY directorate for the hospitality which they extend to the non DESY members of the collaboration.
no-problem/0003/cond-mat0003071.html
ar5iv
text
# Logarithmic corrections from ferromagnetic impurity ending bonds of open antiferromagnetic host chains \[ ## Abstract We analyze the logarithmic corrections due to ferromagnetic impurity ending bonds of open spin 1/2 antiferromagnetic chains, using the density matrix renormalization group technique. A universal finite size scaling $`\frac{1}{L\mathrm{log}L}`$ for impurity contributions in the quasi-degenerate ground state energy is demonstrated for a zigzag spin 1/2 chain at the critical next nearest neighbor coupling and the standard Heisenberg spin 1/2 chain, in the long chain limit. Using an exact solution for the latter case it is argued that one can extract the impurity contributions to the entropy and specific heat from the scaling analysis. It is also shown that a pure spin $`3/2`$ open Heisenberg chain belongs to the same universality class. \] The logarithmic corrections due to marginally irrelevant operators complicate greatly the comparison of experiments and numerical simulation results for finite size systems with analytical calculations. However, a careful analysis in some specific cases may yield useful information on the low energy excitation spectrum and relevant physical quantities. In this Report, we consider the finite size scaling for open spin 1/2 antiferromagnetic (AF) chains with ferromagnetic (FM) coupling at the ending bonds. This system exhibits the same behavior as a Kondo impurity coupled ferromagnetically to a Luttinger liquid. In these systems the Kondo screening is not complete, and the ground state is quasi-degenerate, i.e. the level spacing is vanishing faster than $`1/L`$, where $`L`$ is the system size. Recently, some exactly solvable models belonging to this class have been found, and the impurity entropy as well as specific heat has been obtained through thermodynamic Bethe ansatz. We will show in this report the universal behavior of logarithmic corrections for this class of systems and, using the exact solution, we will argue that one can extract from the scaling analysis the impurity contributions to the entropy and specific heat when the exact solutions are not available. We will also show that a pure open spin 3/2 AF chain (without additional FM ending bonds) belongs to the same universality class. The impurity effects in spin 1/2 Heisenberg AF chains have been discussed extensively in Ref.. The logarithmic corrections to scaling functions for spin 1/2 Heisenberg chains have been discussed in recent papers. Those corrections are due to bulk marginally irrelevant operators, although their manifestations depend on boundary conditions. Instead, we will carry out a detailed numerical analysis of the finite size scaling for the ground state near-degeneracy and low energy spectrum of open $`s=1/2`$ spin chains due to FM ending bonds. To the best of our knowledge, this issue has not been addressed numerically up to now. We first consider the following Heisenberg chain with next nearest neighbor coupling, or equivalently, a zigzag chain: $`H`$ $`=`$ $`{\displaystyle \underset{i=2}{\overset{L2}{}}}𝐒_i𝐒_{i+1}+J_{2c}{\displaystyle \underset{i=2}{\overset{L3}{}}}𝐒_i𝐒_{i+2}+H_{imp},`$ (1) $`H_{imp}`$ $`=`$ $`𝐒_1𝐒_2𝐒_{L1}𝐒_L,`$ (2) where $`L`$ is the chain length and $`𝐒_i`$ is the $`s=1/2`$ spin on site $`i`$. We draw the system in Fig. 1. The nearest neighbor coupling is set to $`J=1`$ and the next nearest neighbor coupling is set to the critical value $`J_2=J_{2c}=0.2411`$. 
The two ending bonds are FM $`J^{}=1`$, and the ending sites are impurity spins, also $`s=1/2`$. There are no logarithmic corrections due to zero initial bulk marginal coupling at the critical value $`J_2=J_{2c}`$. Therefore, all logarithmic corrections in our calculations are coming entirely from the boundary effects. A pure open zigzag chain without $`H_{imp}`$ has the same low energy spectrum as the pure open $`s=1/2`$ chain when $`0<J_2<J_{2c}`$. There is a unique ground state with total spin $`S=0`$, and one first excited state of spin $`S=1`$, with excitation energy scaled as $`\pi v/L`$ for finite size chains, where $`v`$ is the spin velocity. For the zigzag chain shown in Fig. 1 with FM ending bonds, the ending spins are not fully screened, and there is an RKKY coupling between them scaled as $`J_{RKKY}(L)=\frac{a}{L\mathrm{log}L}`$ in the large $`L`$ limit. The two ending impurity spins form a singlet and a triplet with energy spacing $`J_{RKKY}(L)`$. We identify the following low energy states: (1) Two quasi-degenerate ground states: One is composed of the bulk $`S=0`$ state and singlet impurity state with energy $`E_0^O`$; the other is formed by the bulk $`S=0`$ state and the triplet impurity state with energy $`E_1^O`$. We take $`J_{RKKY}(L)=E_1^OE_0^O`$ as the definition of $`J_{RKKY}(L)`$. (2) Four quasi-degenerate first excited states with excitation energy scaled as $`\pi v/L`$. One is composed of the bulk $`S=1`$ state and singlet impurity state with energy $`E_1^I`$. The other three are formed by the bulk first excited $`S=1`$ state and triplet impurity state with energies $`E_2^{II}`$, $`E_1^{II}`$, and $`E_0^{II}`$, for the total spin $`S=2,1`$, and 0, respectively. Due to the bulk $`S=1`$ excitation propagating in-between the ending spins, the energy difference $`E_i^{II}E_j^{II}`$ is not the same as $`J_{RKKY}(L)`$. We use density matrix renormalization group (DMRG) method to calculate low energy levels for the above Hamiltonian. By keeping $`m=150`$ states, the truncation error is as small as $`10^7`$. We study even length chains only. The low-lying excitation energies are plotted vs $`1/\mathrm{log}L`$ in Fig. 2. In Fig. 2a we see the ground state is degenerate at the scale of the graph. The first four excitation energies scale to $`\pi v/L`$. They correspond to $`E_2^{II}`$, $`E_1^I`$, $`E_1^{II}`$, and $`E_0^{II}`$, respectively, from bottom up. We note the ratio between the two energy spacings $`E_0^{II}E_1^{II}`$ and $`E_1^{II}E_2^{II}`$ is approximately $`1:2`$, which indicates these excitations can be identified as due to coupling of the bulk $`S=1`$ state and impurity triplet state. The next group of six energy levels scales to $`2\pi v/L`$. They are composed of the impurity singlet/triplet states and the two bulk levels scaled as $`2\pi v/L`$ of spin $`S=0`$ and $`S=1`$. We have drawn guiding lines to separate these two groups of lowest excitations. In Fig. 2b, we magnify the scale of the energy spacing for the quasi-degenerate ground states to show the scaling $`E_1^OE_0^O=J_{RKKY}(L)\frac{1}{L\mathrm{log}L}`$. Since there are no bulk logarithmic corrections involved at the critical coupling $`J_{2c}`$ for zigzag chains, the logarithmic term appears solely due to the Kondo impurity effect. If the ending bound coupling $`J^{}=0`$, the end impurity spins are decoupled from the bulk and the ground state is exactly degenerate. The impurity has nonzero entropy at zero temperature. 
When $`J^{}<0`$, the ground state and low energy spectrum have an asymptotic degeneracy and the energy difference between these quasi-degenerate states scales as $`\frac{1}{L\mathrm{log}L}`$. This has been very clearly seen for the ground state. The zero temperature entropy will change, as we shall argue, depending on the coefficient $`a`$ in the ground state energy scaling $`J_{RKKY}(L)=\frac{a}{L\mathrm{log}L}`$. We will see that this logarithmic scaling behavior due to boundary spins is universal for systems composed of Kondo impurities ferromagnetically coupled to Luttinger liquids, even when there are also logarithmic corrections due to bulk marginally irrelevant operators. We consider now an open spin 1/2 AF Heisenberg chain with impurity ending bonds described by the Hamiltonian $`H={\displaystyle \sum _{i=2}^{L-2}}𝐒_i\cdot 𝐒_{i+1}-𝐒_1\cdot 𝐒_2-𝐒_{L-1}\cdot 𝐒_L.`$ (3) The nearest neighbor coupling is set to $`J=1`$ and the FM coupling $`J^{}`$ for the ending spins is set to $`J^{}=-1`$. For a pure spin 1/2 open chain without FM impurity bonds, the ground state energy scales as $`E=e_0L+e_1-\frac{\pi v}{24L}[1+b/\mathrm{log}^2(L)+\dots ]`$, where $`e_0`$ is the site energy, $`e_1`$ is the boundary energy, and $`v`$ is the spin velocity for the spin 1/2 Heisenberg chain. The logarithmic correction appears here due to the bulk marginally irrelevant operator. We will demonstrate that the ground state energy has one more term, proportional to $`\frac{1}{L\mathrm{log}L}`$, in its finite size scaling due to the FM Kondo coupling: $`E_1^O=e_0L+e_1-{\displaystyle \frac{\pi v}{24L}}+{\displaystyle \frac{a}{L\mathrm{log}L}}+\dots .`$ (4) We calculate the energy levels by using the DMRG method for even length chains. We keep $`m=200`$ states and the truncation error is of the order $`10^{-9}`$. The ground state is the same as for Hamiltonian (2), with energy $`E_0^O`$. The energy levels $`E_i^K`$ are also labeled the same way. We plot the excitation energies $`(E_i^K-E_0^O)L`$ vs $`1/\mathrm{log}L`$ in Fig. 3a. In Fig. 3b, the scaling $`E_1^O-E_0^O\approx \frac{0.6}{L\mathrm{log}L}`$ is exhibited. The first excited states are four-fold degenerate as we analyzed before for the zigzag chains. We have drawn guiding lines in Fig. 3 to group these first excited states together. For the low energy spectrum, the excitations are again composed of the combined impurity and bulk spin states. The logarithmic scaling behavior of a standard Heisenberg chain due to FM impurity bonds is identical to that of the zigzag chain, as the correction due to the bulk marginally irrelevant operator is of higher order. On the other hand, the model (3) has been solved exactly using the Bethe Ansatz, so we can extract its ground state energy from the exact solution. We calculate, using the Bethe Ansatz equations of Ref. , the energy $`E_1^O`$ of the $`S=1`$ state for system sizes of up to more than four thousand sites. (We have not yet obtained the energy for the other degenerate ground state with lower energy $`E_0^O`$ from the Bethe ansatz equations.) Following Eq.(4), we plot $`E_1^O-e_0L+\frac{\pi v}{24L}`$ vs $`\frac{1}{L\mathrm{log}L}`$ in Fig. 4, where the site energy $`e_0=1/4-\mathrm{log}2`$, and the spin velocity $`v=\pi /2`$. We obtain $`E_1^O-e_0L+\frac{\pi v}{24L}=0.787984+\frac{0.66}{L\mathrm{log}L}`$ by least squares fitting. Based on Eq.(4), we have $`e_1=0.787984`$.
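A minimal numerical sketch of the least-squares fit just described, with mock energies standing in for the actual Bethe-ansatz values (the mock data are generated from Eq. (4) itself, so the fit recovers the quoted coefficients by construction):

```python
import numpy as np

# Mock finite-size energies following Eq. (4) (placeholders for the Bethe-ansatz values)
e0 = 0.25 - np.log(2.0)              # site energy 1/4 - log 2 of the spin-1/2 Heisenberg chain
v = np.pi / 2.0                      # spin velocity of the spin-1/2 chain
L = np.array([256.0, 512.0, 1024.0, 2048.0, 4096.0])
E1 = e0 * L + 0.787984 - np.pi * v / (24.0 * L) + 0.66 / (L * np.log(L))

y = E1 - e0 * L + np.pi * v / (24.0 * L)     # remove the bulk and 1/L terms
x = 1.0 / (L * np.log(L))
a, e1 = np.polyfit(x, y, 1)                  # slope a, intercept e1
print(e1, a)                                 # ~0.787984 and ~0.66
```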
We obtain also the scaling $`J_{RKKY}(L)\frac{0.66}{L\mathrm{log}L}`$ from the exact solution, to be compared with an almost identical fitting of the DMRG data (coefficient $`a=0.6`$ instead of 0.66 in the scaling formula $`J_{RKKY}(L)=\frac{a}{L\mathrm{log}L}`$). Therefore, we conclude that the quasi-degenerate ground state and low-lying excitations of Luttinger liquids with FM Kondo impurities can be indeed described by this logarithmic scaling function. Moreover, using the exact solution, the impurity entropy and specific heat can be expressed in terms of the coefficient $`a`$ in the above finite size scaling. If two systems have the same ground state degeneracy and the same low-energy excitation spectrum, the thermodynamic properties at very low temperatures should also be identical to each other. Using this argument, we can estimate the impurity entropy and specific heat from the finite size scaling analysis of the energy spectrum even in the case when the exact solution is not available (e.g. the zigzag chain considered earlier). Using the same argument we will discuss another related system, an open spin 3/2 AF chain without additional FM ending bonds. The spin $`3/2`$ chain has been shown to have the same low energy physics as for spin 1/2 chain. When the chain is open, there are effective edge $`s^{}=1/2`$ spins left at the ends of the chain. It was shown earlier that the energy spacing between the two degenerate ground states scales as $`\frac{1}{L\mathrm{log}L}`$. However, the scaling of the low energy spectrum is very difficult to study and the logarithmic corrections are big. Following the way of dealing with such edge spins for integer spin chains, it was proposed to treat them as impurity spins ferromagnetically coupled to the bulk spin excitations. Unlike the previous cases, there are no additional FM impurity bonds at the ends of spin 3/2 chains. Now we calculate the scaling of the ground state energy spacing $`J_{RKKY}(L)=\frac{a}{L\mathrm{log}L}`$ by DMRG. We keep $`m=1000`$ states in DMRG and the truncation error is $`10^6`$. We calculate only a few low energy levels and plot the excitation energies times length vs $`1/\mathrm{log}L`$ in Fig. 5. We obtain the coefficient $`a=3.9`$ in scaling $`J_{RKKY}(L)=\frac{a}{L\mathrm{log}L}`$. With the known spin velocity $`v=3.87`$ for spin 3/2 chain, we obtain a nonzero entropy at zero temperature. Its exact value can be expressed in term of $`a`$ analytically when exact solution is available.(Non-zero impurity entropy is also predicted in Takhatajan-Babujian spin-3/2 chain with spin-1/2 boundary impurities, where Bethe ansatz solution is available.) We hope such a nonzero entropy at zero temperature can be measured in quasi- one-dimensional spin 3/2 materials such as $`CsVCl_3`$ (Ref. ), $`AgCrP_2S_6`$ (Ref. ), etc. In summary, we have carried out a detailed analysis of the low energy spectrum of FM Kondo impurity in Luttinger liquids, or equivalently, an open spin 1/2 AF Heisenberg chain. We have shown this class of models has a universal logarithmic quasi-degeneracy in the low energy states due to the RKKY interaction between the unscreened edge spins. We argue that nonzero entropy at zero temperature can be obtained for FM Kondo impurity system by studying the finite size scaling of the ground state energies. J. Lou and S. Qin would like to thank Prof.T.K. Ng for valuable discussions. This work is partially supported by Chinese Natural Science Foundation.
no-problem/0003/astro-ph0003393.html
ar5iv
text
# A Characteristic Scale on the Cosmic Microwave Sky <sup>1</sup>Department of Physics and Astronomy, University of British Columbia, B.C. V6T 1Z1, Canada <sup>2</sup>Canadian Institute for Theoretical Astrophysics,Toronto, ON M5S 3H8, Canada <sup>3</sup>Harvard-Smithsonian Center for Astrophysics, Cambridge, MA 02138, U.S.A. The 1965 discovery (1) of the Cosmic Microwave Background (CMB) was key evidence supporting the hot Big Bang model for the evolution of the Universe. The tiny temperature variations discovered in 1992 (2) – of just the right size for gravity to have grown the observed large-scale structures over the age of the Universe – established gravitational instability as the mechanism of structure formation. Those first measurements of CMB anisotropy on tens of degree scales have been followed by many experiments concentrating on smaller angular scales. Even 5 years ago (3) there were indications for enhanced temperature variations on half-degree scales. By combining results from all current experiments it is now clear that this ‘excess power’ decreases again below half a degree – in other words there is a distinctive scale imprinted upon the microwave sky. The existence of such a feature at roughly $`0\text{}\text{.}5`$ has profound implications for the origin of structure in the Universe and the global curvature of space. It is conventional to expand the CMB sky into a set of orthogonal basis functions labeled by ‘multipole number’ $`\mathrm{}`$. Functions with higher $`\mathrm{}`$ probe smaller angular scales. We then consider the squares of the expansion coefficient amplitudes as a function of $`\mathrm{}`$, or inverse angle, and this is referred to as the ‘anisotropy power spectrum’ (4). This power spectrum is easy to compute theoretically, and in popular models contains essentially all of the cosmological information in the CMB. What remains is to obtain this power spectrum experimentally. Each experiment is sensitive to a range of angular scales, and its sensitivity as a function of $`\mathrm{}`$ is encoded in its ‘window function’. Several experiments can now divide their $`\mathrm{}`$ range into overlapping window functions and thus obtain information on the shape of the power spectrum. Each experiment thus quotes results for one or more ‘band-powers’, which is the amplitude of the anisotropies integrated over the window function (5). Individual experiments until now have had limited angular range, so each has provided only a small piece of the puzzle. However a number of different CMB experiments can be combined together to provide an essentially model-independent estimate of the power spectrum. This estimate, provided it is carefully calculated, can then be used to constrain models. We used a maximum likelihood technique to combine the band-powers into a binned power spectrum encapsulating the knowledge gained from the different observations. We have included all the experimental results of which we are currently aware. Specifically those collected in Ref. (6), together with the more recent results of the QMAP (7), MAT (8), Viper (9) and BOOM97 (10) experiments; as summarized in the Radpack package (11) with some minor corrections. For definiteness we have divided the range $`\mathrm{}=2`$$`1000`$ into 8 bins (spaced at roughly equal logarithmic intervals, with slight adjustment to allow for regions where data are scarcer). 
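A small sketch of the binning just described: eight roughly logarithmically spaced bins over $`\mathrm{}`$ = 2–1000, with the band-power in each bin taken as the average of $`\mathrm{}(\mathrm{}+1)C_{\mathrm{}}/2\pi `$ (footnote 4). The bin boundaries and the input spectrum below are placeholders, not the exact choices used in the analysis.

```python
import numpy as np

# eight roughly log-spaced bin edges over ell = 2-1000 (illustrative choice only)
edges = np.unique(np.geomspace(2, 1000, 9).astype(int))

def band_powers(ell, cl):
    """Average of ell*(ell+1)*C_ell/(2*pi) within each bin (piece-wise constant model)."""
    d_ell = ell * (ell + 1) * cl / (2.0 * np.pi)
    return [d_ell[(ell >= lo) & (ell < hi)].mean() for lo, hi in zip(edges[:-1], edges[1:])]

ell = np.arange(2, 1001)
cl = 1.0 / (ell * (ell + 1.0))      # placeholder spectrum giving a flat D_ell
print(band_powers(ell, cl))
```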
As the experimental situation improves, particularly at higher $`\ell `$, we expect that emphasis will shift to plots linear in $`\ell `$ and having a wider range – however, for now the situation is adequately summarized in a log plot. We have approximated the power spectrum as a piece-wise constant and fit the values of that constant within each bin to the combined data, taking into account non-symmetric error bars and calibration uncertainties in a manner similar to (12). We maximize the likelihood function for the 8 parameters (plus 17 calibrations) using a simulated annealing technique (13). From the maximum likelihood position we then use Monte-Carlo integration to calculate the covariance matrix of the parameters. The final result is a power spectrum, with realistic estimates of the error bars and bin-to-bin correlations. We show the points and errors in Figure 1, and present the values in Table 1. These points are somewhat correlated, with the strongest correlation being typically a 30% anti-correlation with immediately neighbouring bins, and more distant correlations being almost negligible. Table 2 explicitly shows the correlations between the different bins, fixing the calibrations at the maximum likelihood value. Any use of these binned power spectrum estimates to constrain cosmological models should include these correlations. Our best fitting model has $`-2\mathrm{ln}\mathcal{L}=78`$, a marginally acceptable fit. We note that if the experimental calibrations were not allowed to float, then the overall $`\chi ^2`$ would be far from acceptable. In fact we find that the best fitting calibration scalings are very close to unity for most experiments, with the most discrepant values being 0.76 for MAT97, 0.83 for QMAT, 1.15 for MSAM and 1.11 for BOOM97. These data show a prominent, localized peak in the angular power spectrum. There is a distinct fall-off at high $`\ell `$, which is indicated within the data sets of individual experiments (particularly Saskatoon (14), MAT, Viper and BOOM97), but is more dramatically revealed in this compilation of data sensitive to different angular scales. Further confidence in the decrease in power comes from upper limits at even larger $`\ell `$, not plotted or used in our fit. In other words, there is a particular angular scale on which CMB temperature fluctuations are highly correlated and that scale is around $`\ell =200`$, or $`0.5^o`$. It corresponds theoretically to the distance a sound wave can have traveled in the age of the Universe when the CMB anisotropies formed. Such a characteristic scale was suggested in models of cosmological structure formation at least as far back as 1970 (15). The field is now in an exciting phase, with two main parts: (a) confirming/refuting the basic paradigm; and (b) constraining the parameters within that paradigm. These go hand in hand, of course. The peak prominent in Figure 1 confirms our ideas of the early evolution of structure. Understanding the physical basis for the peak allows a constraint to be placed on the curvature of the universe (e.g. 16, 17). The overall geometry of space appears to be close to flat, indicating that something other than normal matter contributes to the energy density of the Universe. Together with data from distant supernovae and other cosmological tests, this implies that models with cold dark matter and Einstein’s cosmological constant are in good shape (18).
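To make the earlier warning about bin-to-bin correlations concrete, here is a sketch of how a model could be compared with the binned band-powers using a Gaussian approximation and the roughly 30% nearest-neighbour anti-correlations quoted above. The band-power values and errors are placeholders, not the entries of Table 1, and the full analysis additionally floats the 17 calibrations and allows non-symmetric errors.

```python
import numpy as np

n_bins = 8
band = np.array([800.0, 950.0, 1600.0, 2600.0, 2100.0, 1300.0, 950.0, 700.0])  # placeholder band-powers (uK^2)
sigma = 0.15 * band                                                             # placeholder errors
corr = np.eye(n_bins)
for i in range(n_bins - 1):            # ~30% anti-correlation between neighbouring bins
    corr[i, i + 1] = corr[i + 1, i] = -0.3
cov = np.outer(sigma, sigma) * corr

def chi2(model_bands):
    """Gaussian approximation to the binned likelihood, including bin-to-bin correlations."""
    r = band - np.asarray(model_bands)
    return float(r @ np.linalg.solve(cov, r))

print(chi2(band * 1.1))   # e.g. a model 10% high in every bin
```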
Soon the detailed structure of the CMB spectrum should be measurable and we expect it will contain a series of peaks and troughs. Finding such structure in the spectrum at the correct $`\mathrm{}`$s would be strong confirmation for ‘adiabatic’ fluctuations (which perturb matter and radiation in a similar way) produced at very early times. Eventually this would lead to the possibility of ‘proving’ inflation, or stimulating research on other ways of generating similar fluctuations on apparently acausal scales. Of course, failure to see multiple peaks in the predicted locations would require theorists to be more imaginative! If we verify the framework we then need to determine precisely the parameters within our model; namely the amounts of matter of different types, the expansion rate, the precise form of the initial conditions, etc. With a well characterized set of initial conditions we will clearly wish to extend our understanding of cosmic origins to more recent epochs. Even here the upcoming high resolution maps of the CMB will play a crucial role carrying imprints, through reionization and gravitational lensing, of object formation in the recent universe. The future remains bright. New results from a long duration flight of the BOOMERANG experiment are expected in the very near future. There are also several ground-based experiments, including interferometric instruments, nearing completion. NASA’s Microwave Anisotropy Probe is expected to return data in 2001, and the ambitious Planck satellite is scheduled for launch in 2007. Beyond this, information from challenging CMB polarization measurements and the combination of CMB data with other cosmological probes will be even more powerful. We are on the threshold of precision measurements of the global properties of our Universe. The history of CMB research can be split into 5 phases. Firstly, its mere existence showed that the early Universe was hot and dense. Secondly, the blackbody nature of the CMB spectrum and its isotropic distribution imply that the Universe is approximately homogeneous on large scales. The third step came with the detection of anisotropies which confirmed the theory of structure formation through gravitational instability. Here we have outlined a fourth stage, which is the discovery of a characteristic (angular) scale on the CMB sky. This supports a model with adiabatic initial conditions and a Universe with approximately flat geometry. Higher fidelity data, of the sort which will soon be available, should decide whether or not our models are vindicated. And now we are on the verge of the fifth phase, which involves determining the precise values of the fundamental cosmological parameters to figure out exactly what kind of Universe we live in. (1) A. A. Penzias, R. W. Wilson, Astrophys. J. 142, 1149 (1965). (2) G. F. Smoot, et al., Astrophys. J. 396, L1 (1992). (3) D. Scott, J. Silk, M. White, Science 268, 829 (1995). (4) Technically, one takes $`\mathrm{\Delta }T(\theta ,\varphi )=_{\mathrm{},m}a_\mathrm{}mY_\mathrm{}m(\theta ,\varphi )`$, and plots $`\mathrm{}\left(\mathrm{}+1\right)C_{\mathrm{}}/2\pi `$ vs $`\mathrm{}`$, where $`C_{\mathrm{}}\left|a_\mathrm{}m\right|^2/\left(2\mathrm{}+1\right)`$. (5) L. Knox, Phys. Rev. D60, 103516 (1999) \[astro-ph/9902046\]. (6) G. F. Smoot, D. Scott, in Review of Particle Properties, C. Casi, et al., Eur. Phys. J. C3, 127 (1998) \[astro-ph/9711069\]. (7) QMAP: A. de Oliveira-Costa, et al., Astrophys. J. 509, L77 (1998) \[astro-ph/9808045\]. (8) MAT: E. 
Torbet, et al., Astrophys. J. 521, L79 (1999) \[astro-ph/9905100\]; A.D. Miller, et al., Astrophys. J. 524, L1 (1999) \[astro-ph/9906421\]. (9) Viper: J.B. Peterson, et al., Astrophys. J., submitted \[astro-ph/9910503\]. (10) BOOM97: P.D. Mauskopf, et al., Astrophys. J., submitted \[astro–ph/9911444\]. (11) We are grateful to Lloyd Knox for making available his Radpack package (http://flight.uchicago.edu/knox/radpack.html), which we adapted for our analysis. (12) J. R. Bond, A. H. Jaffe, L. E. Knox, Astrophys. J., in press \[astro-ph/9808264\]. (13) S. Hannestad, Phys. Rev. D61, 023002 (2000) \[astro-ph/9911330\]. (14) Saskatoon: C. B. Netterfield, M. J. Devlin, N. Jarosik, L. Page, E. Wollack, Astrophys. J. 474, 47 (1997) \[astro-ph/9601197\]. (15) P. J. E. Peebles, J. T. Yu, Astrophys. J. 162, 815 (1970). (16) S. Dodelson, L. Knox, Phys. Rev. Lett., in press \[astro-ph/9909454\]. (17) A. Melchiorri, et al., Astrophys. J., in press \[astro-ph/9911445\]. (18) N. A. Bahcall, J. P. Ostriker, S. Perlmutter, P. J. Steinhardt, Science 284, 1481 (1999) \[astro-ph/9906463\]. This research was supported by the Natural Sciences and Engineering Research Council of Canada and by NSF-9802362. Authors e-mail addresses: elena@astro.ubc.ca; dscott@astro.ubc.ca; mwhite@cfa.harvard.edu
no-problem/0003/astro-ph0003273.html
ar5iv
text
# Spectroscopic identification of ten faint hard X-ray sources discovered by Chandra ## 1 Introduction The Chandra X-ray Observatory was launched on July 23 1999, carrying on board a revolutionary high resolution mirror assembly, with a Point Spread Function of 0.5 arcsec (half power radius) over the broad 0.1 to 10 keV band (Van Speybroeck, et al. 1997). This, together with the aspect camera which at the moment provides attitude solutions with errors of the order of 1-2 arcsec<sup>1</sup><sup>1</sup>1When the data are definitively reprocessed, the aspect for image reconstruction should be $`0.5`$ arcsec and the source positions should be better than 1 arcsec, allows the study of spatial extent of X-ray sources on similar scales, i.e. smaller or similar to the size of a L galaxy at any redshift; and gives X-ray source positions at least as good as 2–3 arcsec, immediately allowing the unambiguous identification of the optical counterparts of faint X-ray sources. Consequently, the determination of the source redshifts via optical spectroscopy becomes highly efficient. The improvement provided by Chandra is especially significant in the hard (2–10 keV) X-ray band. Surveys of the hard X-ray sky have been performed in the past by ASCA and BeppoSAX (Ueda et al. 1998, Della Ceca et al. 1999, Fiore et al. 2000a, Giommi et al. 2000, Comastri et al. 2000). However, the large error boxes (1-2 arcmin) limited the optical identification process to classes of objects with low surface density, at a given optical magnitude limit. As a result, most of the identified sources are emission line AGN (Fiore et al. 1999, Akiyama et al. 2000, and La Franca et al. in preparation). About 30 % of the BeppoSAX HELLAS survey sources studied spectroscopically down to R=20.5 have escaped a secure identification (Fiore et al. 2000b, La Franca et al. in preparation), although many normal galaxies and stars have been observed in these error-boxes. Chandra’s unprecedented capabilities make identifications unambiguous, and open up the possibility of searching for and studying classes of sources not previously recognized as strong hard X-ray emitters, and of assessing their contribution to the hard X-ray cosmic background (XRB; e.g. Griffiths & Padovani 1990). In particular, it will be possible for the first time to begin studying normal galaxies at $`z>0.1`$, as well as possible “minority” hard X-ray source populations (Kim & Elvis 1999). We have started a pilot project of spectroscopic identification of Chandra sources in two medium–deep fields that were visible from La Silla in January 2000, with the aim of verifying the feasibility of such studies with 4m class telescopes. The results on 10 X-ray sources are very encouraging and are described in the following. ## 2 X-ray data The Chandra X-ray Observatory consists of four pairs of concentric Wolter I mirrors reflecting 0.1-10 keV X-rays into one of the four focal plane detectors: ACIS-I, ACIS-S, HRC-I or HRC-S (Weisskopf, O’Dell & VanSpeybroeck 1996). All the results presented in the following were obtained with the $`16^{}\times 16^{}`$ ACIS-I CCD instrument. Table 1 gives the log of the Chandra observations. One field is centered on the $`z`$=0.6 quasar PKS0312$``$77, the other is located $`180^o`$ away from the radiant point of the Leonid meteor shower. Level 2 processed data (Fabbiano et al. in preparation) were obtained from the Chandra public archive. Data were cleaned and analyzed using the CIAO Software (release V1.1, Elvis et al. in preparation). 
Time intervals with large background rate were removed, as well as hot pixels and bad columns. Only the standard event grades (0, 2, 3, 4 and 6) were used. Sources were detected in images accumulated in the 2–10 keV band (channels 136-680). Robust sliding-cell algorithms were used to locate the sources. We used both the $`celldetect`$ program available in the CXC data analysis package CIAO (Dobrzycki et al. 1999) and a variation of the DETECT routine included in the XIMAGE package (Giommi et al. 1991). The method consists in first convolving the X-ray image with a wavelet function, in order to smooth the image and increase contrast, and then running a standard sliding cell detection method on the smoothed image. The quality of each detection was checked interactively. Final net counts are estimated from the original (un–smoothed) image, to preserve Poisson statistics. A binning factor of 2 (i.e. pixels of $``$ 1 arcsec) and source box sizes of 6 arcsec (offaxis angle $`<5^{}`$), 10 arcsec ($`5^{}<`$ offaxis angle $`<10^{}`$) and 30 arcsec (offaxis angle $`>10^{}`$) maximize the signal to noise ratio, given the local background, PSF and source intensity. The background is calculated using source-free boxes near the sources. This is usually very low in the 2–10 keV band, i.e. 0.15–0.3 counts per 6 arcsec detection box side, per 10 ks. For this study only sources with more than 10, 15 and 40 counts in the 6 arcsec, 10 arcsec and 30 arcsec detection cells respectively, are considered. Given a conservative background of 0.6, 1.5 and 15 counts in the detection cells this corresponds to a Poisson probability for a background fluctuation of $`10^9`$ in all cases. The total number of detection cells can be estimated in about 7800 6<sup>′′</sup> cells plus 5000 10<sup>′′</sup> cells plus a few tens of 30<sup>′′</sup> cells in each observation (4 ACIS-I chips). Therefore, the total number of spurious sources in the two fields (0.14 deg<sup>2</sup>) at the chosen detection thresholds is absolutely negligible (10<sup>-5</sup>). Thirteen sources detected in the 2–10 keV images passed these thresholds. All but one are also detected in the softer 0.5–2 keV band (the non–detected source has about 6 counts in the soft band). Count rates were corrected for the PSF and the telescope vignetting calculated at 4.5 keV (2–10 keV band) and 1.5 keV (0.5–2 keV band) using Figures 3.8 and 3.10 of the AXAF Observatory Guide (1997). The correction is about 25 % at an off-axis angle of 12 arcmin and 10 % at 6 arcmin. Fluxes in the 2–10 keV band were computed assuming a count rate to flux conversion factor of $`2.5\times 10^{11}`$ $`\mathrm{erg}\mathrm{cm}^2\mathrm{s}^1`$ per count s<sup>-1</sup>. Fluxes in the 0.5–2 keV were computed assuming factors of $`5.2\times 10^{12}`$ and $`4.6\times 10^{12}`$ $`\mathrm{erg}\mathrm{cm}^2\mathrm{s}^1`$ per count s<sup>-1</sup> for Leonid Anti-Rad and the PKS0312$``$770 fields respectively. These conversion factors are appropriate for power law models with $`\alpha _E=0.7`$ (2–10 keV) and $`\alpha _E=1.0`$ (0.5–2 keV) respectively, assuming the Galactic absorbing column densities quoted in Table 1. Table 2 gives for the thirteen detected sources the X-ray position and off-axis angle, the 2–10 keV and 0.5–2 keV counts (corrected for the PSF and the vignetting) and the corresponding fluxes. 
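Two small numerical checks of the procedure described in this section: the Poisson probability that the quoted background produces a spurious detection above threshold, and the count-rate-to-flux conversion with the quoted factor. The 10 ks exposure in the example is an arbitrary placeholder.

```python
from math import exp, factorial

def p_at_least(n, mu):
    """Poisson probability of n or more counts given an expected background mu."""
    return 1.0 - sum(mu ** k * exp(-mu) / factorial(k) for k in range(n))

# >= 10 counts on a ~0.6-count background in a 6 arcsec cell: of order 1e-9, as quoted
print(p_at_least(10, 0.6))

def flux_2_10(net_counts, exposure_s, conv=2.5e-11):
    """2-10 keV flux (erg cm^-2 s^-1) from PSF- and vignetting-corrected counts,
    using the quoted conversion factor for an alpha_E = 0.7 power law."""
    return conv * net_counts / exposure_s

print(flux_2_10(net_counts=20, exposure_s=1.0e4))   # 5e-14 erg cm^-2 s^-1
```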
## 3 Optical spectroscopy Optical counterparts were found in the USNO catalogue down to R$`20`$ in ten cases, in the DSS2 E(R) images in two cases (LAR1 and LAR3), and on EFOSC2 R band image in the last case (P4). Optical counterparts were found for 11 sources with a typical displacement of $`\genfrac{}{}{0pt}{}{_<}{^{}}2`$ arcsec. In one case (LAR4) the displacement is of 4”. We consider the identification secure because a) LAR4 is observed by ACIS at an offaxis angle of 12 arcmin, where the PSF is highly asymmetric and broader than 10 arcsec, making the X-ray position more uncertain, and b) the optical source is a bright, broad line quasar, the surface density of which is so low that the probability to find one by chance at $`4^{\prime \prime }`$ from an X-ray source is negligible. In fact, the probability of finding any R$``$22 galaxy in a $`4^{\prime \prime }`$ radius error-box by chance is $`<`$ 0.04, while the same probability for an AGN is at least ten times smaller. Therefore the number of AGN misidentifications in our sample is $`<0.05`$. The number of expected galaxy misidentifications at R$``$20 is $`<`$ 0.2, while that of R$``$22 is $`<`$ 0.5. In one case (LAR2) no optical counterpart brighter than R $``$ 20 is present in the X–ray error box. However a R=18 galaxy, with an optical extension of about 3 arcsec, is present at 6 arcsec from the X-ray position. Therefore the galaxy outskirts touch the X-ray errorbox. We conservatively consider this identification uncertain. We obtained long slit spectra of ten counterparts <sup>2</sup><sup>2</sup>2the three objects not observed in the Leonid anti-rad field have optical magnitudes and X-ray fluxes similar to those of the ten observed objects. The four sources observed in this field were selected by chance, using EFOSC2 at the ESO 3.6m telescope on January 3-6 2000. Spectra were obtained using grism N.6 in the 3800-8100 $`\AA `$ range and a slit 2 arcsec wide, corresponding to a resolution of 26 Å . The complete set of flux-calibrated spectra is shown in Figure 1. R magnitude, redshift, optical and X-ray luminosity, and a classification of the optical spectrum are given in Table 2. Classification of narrow line objects is done using standard line ratio diagnostics (e.g. Osterbrook 1981, Tresse et al. 1996), in particular the \[OIII\]$`\lambda 5007`$/H$`\beta `$, \[SII\]$`\lambda 6725`$/H$`\alpha `$ and \[OIII\]$`\lambda 5007`$/\[OII\]$`\lambda 3727`$ line ratios. We find six broad line quasars at redshift between 0.42 and 1.19 and 2–10 keV X–ray luminosities in the range $`10^{44}4\times 10^{45}`$ erg s<sup>-1</sup>. The four remaining source identifications are as follows: LAR5 is identified with a bright $`z`$=0.016 starburst galaxy with an X-ray luminosity of $`6\times 10^{40}`$ erg s<sup>-1</sup>, similar to that of other starburst galaxies of similar optical luminosity (Ptak et al. 1999). In LAR6, although \[OIII\] and $`H\beta `$ are detected, the spectrum is rather noisy and it is not clear if $`H\beta `$ has broad wings. Optical classification is therefore uncertain. The 2–10 keV luminosity of $`2\times 10^{43}`$ erg s<sup>-1</sup> and the X-ray softness ratio (see below) suggest that LAR6 is an obscured AGN. P3 is identified with a normal galaxy without strong emission lines (only a weak \[OII\] line with an equivalent width of $`5\pm 1`$ Å is apparent in the spectrum). The X-ray and optical luminosities are $`L_{210keV}3\times 10^{42}`$ erg s<sup>-1</sup> and $`M_V=20.5`$. 
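An order-of-magnitude check of the 2–10 keV luminosity just quoted for P3, computing $`L=4\pi d_L^2F`$; the flux value and the cosmology below are assumptions (the adopted values are not quoted in this section), so only the order of magnitude is meaningful.

```python
import numpy as np
from astropy.cosmology import FlatLambdaCDM
import astropy.units as u

# Assumed cosmology and 2-10 keV flux, chosen only to illustrate the scale of L
cosmo = FlatLambdaCDM(H0=50.0, Om0=1.0)                 # Einstein-de Sitter-like choice
d_l = cosmo.luminosity_distance(0.158).to(u.cm).value   # luminosity distance of P3 in cm
flux = 2.5e-14                                          # erg cm^-2 s^-1 (assumed flux)
print(4.0 * np.pi * d_l ** 2 * flux)                    # a few 1e42 erg s^-1, as quoted
```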
The galaxy continuum is very red, suggesting an early type galaxy. The Calcium break is 0.44$`\pm 0.11`$, consistent with little or no dilution from non–stellar light. A faint H$`\delta `$ is detected in absorption (equivalent width of $`3.8\pm 2.3`$ Å). The spectrum of P4 is rather noisy because the source lies only 9 arcsec from a 10 mag star, which strongly enhances the background. We identify P4 with an AGN at z=0.683 thanks to a rather strong \[OII\] and weaker \[NeIII\] and MgII emission lines. The MgII to \[OII\] intensity ratio of $`0.9`$ is much smaller than in broad line quasars and similar to those of type 2 AGN (Woltjer 1990). The 2–10 keV luminosity of $`2\times 10^{44}`$ erg s<sup>-1</sup> and the X-ray softness ratio (see below) suggest that P4 is a moderately obscured quasar. We have searched the NVSS, IRAS faint source catalog, clusters of galaxies (ACO, Abell, Corwin & Orowin 1989) , normal galaxies, stars and AGN catalogs, finding only one coincidence: LAR5 is identified with an APM galaxy. ## 4 X-ray and optical properties ### 4.1 X-ray versus optical imaging The high spatial resolution of the Chandra telescope allows for the first time a comparison of X-ray and optical images at similar, arcsec, resolution. We have studied the spatial extent of the X-ray sources by comparing their count profiles with the Chandra PSF, taking into account its dependence on the off-axis angle (as calibrated on the ground). The spatial extent is smaller than a few arcsec in all cases and therefore consistent with their being point sources. Conversely, at least three of the optical counterparts are extended galaxies (“gal.” in Table 2). Figures 2a,b show the X-ray (2–10 keV) contours overlaid on the optical R band images of the $`z`$=0.158 galaxy (P3) and the $`z`$=0.016 starburst galaxy (LAR5) obtained with EFOSC2. Galaxy P3 is bulge dominated, the size of the bulge being 2–3 arcsec. Also galaxy LAR5 has a bright bulge, of similar size. These sizes are similar to or smaller than the Chandra PSF at the offaxis angle where the sources were detected in ACIS-I (2.6 arcmin and 6 arcmin respectively). Both galaxies are extended (up to 10–20 arcsec). If the X-ray emission were connected to the outer parts of the galaxy, it would have been resolved by Chandra. Both X-ray sources are centered on the galaxy nuclei. The X-ray source in LAR5 appears slightly elongated in a direction perpendicular to the galaxy major axis, as seen in several nearby starburst galaxies (e.g. Fabbiano et al. 1992, Dahlem et al. 1998). However, this elongation may also be due to the degradation of the Chandra PSF at the offaxis-angle of this source (6 arcmin). ### 4.2 X-ray to optical flux ratio Figure 3 shows the X-ray to optical flux ratios of the thirteen Chandra sources as a function of the 2–10 keV flux, and compares them with those of local AGN and of the AGN found in the BeppoSAX HELLAS survey. This Figure shows that the X-ray to optical ratio of most Chandra sources is similar to that of local and HELLAS AGN. Their optical magnitude is bright enough to allow redshift determination with 4m or 8m class telescopes. At fainter fluxes Figure 3 shows the sources recently optically identified by Mushotzky et al. (2000). R band fluxes have been computed assuming R-I=0.3. The flux ratio of about 60% of the Mushotzky et al. Chandra sources is within the range of values covered by brighter AGN (i.e. log$`(f_X/f_R)`$ from $``$2 to 1). 
The remaining 40 % show an X-ray to optical ratio higher than that of the brighter Chandra sources presented here and of the local AGN (see Figure 3). As discussed by Mushotzky et al. (2000) the nature of these optically faint sources is mysterious. Many of them have optical magnitudes fainter than $`R\sim 24`$, which makes it difficult to obtain a precise redshift through optical spectroscopy. The $`z`$=0.158 normal galaxy has an X-ray to optical ratio of 0.36, much higher than the typical value of nearby normal galaxies (see section 5.2).
### 4.3 X-ray spectral properties
For most of the Chandra sources the total number of detected counts is $`<`$100, preventing the use of proper spectral fitting procedures to study their spectrum. The broad band X-ray spectral properties can only be investigated using count ratios. If the spectrum is parameterized as an absorbed power law and the redshift of the source absorber is known, the column density can be evaluated. Following Fiore et al. (1998), we assume here that the X-ray absorber redshift coincides with the optical redshift. Figure 4 shows the softness ratio (S$`-`$H)/(S+H) (S=0.5–2 keV band, H=2–10 keV band) of the ten spectroscopically identified Chandra sources as a function of the redshift. Errors include counting statistics only. Dashed lines show the expectation of power law models with $`\alpha _E=0.8`$ absorbed by varying column densities in the source frame. Two of the Chandra sources (P3 and LAR6) have a softness ratio inconsistent with that expected from an intrinsically unobscured power law at better than the 90% confidence level. In particular, LAR6 is likely to be obscured by a column of 5–10$`\times 10^{22}`$ cm<sup>-2</sup>. It is worth noting that the column densities implied by Figure 4 are probably lower limits to the columns toward the nuclear hard X-ray source, because highly obscured AGN usually have strong soft components (e.g. Schachter et al. 1998, Maiolino et al. 1998, Della Ceca et al. 1999, Fiore et al. 2000a,b). To quantify this possible underestimate we simulated ACIS-I observations of three highly obscured type 2 AGN (NGC1068, $`N_H>10^{25}`$ cm<sup>-2</sup>; the Circinus Galaxy $`N_H=4.3\times 10^{24}`$ cm<sup>-2</sup>; and NGC6240, $`N_H=2.2\times 10^{24}`$ cm<sup>-2</sup>), based on ASCA and BeppoSAX results (Matt et al. 1997, 1999, Iwasawa & Comastri 1998, Vignati et al. 1999). The (S$`-`$H)/(S+H) observed by ACIS-I would have been 0.88, $`-`$0.08 and 0.57 respectively. The results depend strongly on the source spectral shape and redshift. At $`z`$=1 the simulated softness ratios are 0.53, $`-`$0.30 and $`-`$0.05 respectively. We also note that relatively low values of (S$`-`$H)/(S+H) can be produced by intrinsically flat and unobscured power laws (for example the P3 softness ratio of 0.16 would correspond to $`\alpha _E=0.2`$). Efficient X-ray follow-up of relatively bright X–ray sources with XMM will provide a relatively accurate measure of the intrinsic spectrum through proper spectral fitting, allowing us to remove the ambiguity between an intrinsically flat spectrum and a spectrum flattened by absorption.
## 5 Discussion
### 5.1 Faint X-ray sources and the XRB
One of the main goals of hard X-ray surveys is to investigate the origin of the hard X-ray cosmic background. The most popular model explains the XRB in terms of a mixture of obscured and unobscured AGN (Setti & Woltjer 1989) following a strong cosmological evolution.
The relatively small number of sources in the present survey (only thirteen), implies a $`30\%`$ statistical error on their number density. This limits the strength of the constraints we are able to put on AGN synthesis models for the XRB. However we believe it is useful to begin addressing this issue here. The main problem when comparing model predictions and observations is that the flux limit of count rate limited surveys depends on the actual intrinsic spectrum of the sources (Zamorani et al. 1988). Harder sources would generally produce less counts (because of the decrease of the effective area toward the higher energies), and therefore they would pass a detection threshold only if their flux is higher than that of softer sources with similar count rate. In order to quantify this effect we have folded a heavily absorbed (10<sup>23</sup> cm<sup>-2</sup>) power law spectrum ($`\alpha _E`$ = 0.8) with the Chandra sensitivity, and computed the flux limit corresponding to the count rate threshold. This turns out to be a factor 5–6 higher than that of an unabsorbed power law. Taking into account the column density distribution predicted by the Comastri et al. (1995) synthesis model and the Chandra sensitivity, the predicted fraction of obscured (log$`N_H>`$ 22) sources is 20–30 %. The softness ratio results (Figure 4) suggest that at least 3 out of 10 optically identified sources may be absorbed by column densities equal or higher than $`10^{22}`$ cm<sup>-2</sup>. Furthermore, one of the three unidentified objects has a hard spectrum, as can be judged from the hard to soft X–ray flux ratio of Table 2. We therefore conclude that the present observations are consistent with the predictions of AGN synthesis models for the XRB. ### 5.2 P3: an X-ray loud normal galaxy? A surprising result from our pilot study is the detection of a luminous X-ray source in an otherwise normal galaxy (P3). The X-ray luminosities of $`L_{210keV}3\times 10^{42}`$ erg s<sup>-1</sup>, $`L_{0.52keV}0.8\times 10^{42}`$ erg s<sup>-1</sup> are about a factor 30 higher than those expected on the basis of the optical luminosity ($`L_B=10^{10}L_{}`$) for both spiral (Fabbiano et al. 1992) and Elliptical/S0 (Eskridge, Fabbiano & Kim 1995; Pellegrini 1999) galaxies. A few optically “dull” galaxies with strong X-ray emission have been reported in the past (e.g. 3C264 and NGC4156, Elvis et al. 1981; J2310$``$437, Tananbaum et al. 1997; Griffiths et al. 1996) and more recently in a deep Chandra observation (Mushotzky et al. 2000). Hard power law tails have been also discovered in a few nearby elliptical galaxies (Allen, Di Matteo & Fabian 2000). The presence of relatively strong X–ray emission in objects with no evidence of activity in the optical spectrum is still not well understood. One possibility would be a large contribution from a beamed non-thermal component in the X-ray band (Elvis et al. 1981, Tananbaum et al. 1997, Worrall et al. 1999) as for BL Lacertae objects. If P3 hosts a BL Lac then the non–thermal featureless optical continuum would reduce the height of the Calcium break from a typical value of 0.5 found in elliptical and S0 galaxies to $`<0.25`$ (Stocke et al. 1991). This threshold has been raised to about 0.4 by Marcha and Browne (1995) to take into account the spread of galaxy luminosity and sizes. The P3 Calcium break of 0.44$`\pm `$0.11 does not allow to rule out the presence of a BL Lacertae object, even if it would be a rather extreme member of its class. 
If this is the case, the radio flux predicted from the observed X-ray flux would be in the 0.1–30 mJy range (Padovani & Giommi 1996). The PMN (Griffith et al. 1991) limit of about 50 mJy is not useful to settle this issue. An obscured AGN could also provide a viable explanation. Indeed, the P3 hardness ratio implies a substantial column for a typical AGN X–ray continuum. The optical emission lines could also be completely hidden except for a weak \[OII\] emission feature. It is worth remarking that examples of X–ray obscured AGN with neither BLR nor NLR already exist (e.g. NGC4945, Marconi et al. 2000; NGC6240, Vignati et al. 1999 and references therein), although in these cases emission lines related to starburst activity are present. Finally, the presence of a low–radiative–efficiency accretion flow (ADAF) might also be tenable. The putative central black hole mass estimated from the observed B luminosity of the galaxy’s bulge following the Magorrian et al. (1998) relation is about 4$`\times `$10<sup>8</sup> M<sub>⊙</sub> (this is actually an upper limit, as we have assumed for the bulge luminosity that of the entire galaxy, see Fig. 2a). If the X–ray emission comes from the nucleus, and it is a sizeable fraction of the bolometric luminosity, its Eddington ratio is of the order of 10<sup>-4</sup>, consistent with the ADAF regime (Narayan et al. 1998). An estimate of the 8.4 GHz flux has been obtained assuming the ADAF spectral models calculated by Di Matteo et al. (2000) as well as the X–ray to radio flux ratios of a few ADAF candidates observed at the VLA (Di Matteo et al. 1999) rescaled to the P3 X–ray flux. In both cases the maximum expected radio emission is of the order of a few mJy and thus well below the present limit. The ADAF contribution to the optical–UV light strongly depends on the adopted model (see figure 2 in Di Matteo et al. 2000). The P3 hardness ratio is consistent with a very flat power law ($`\alpha `$ ≈ 0.2$`\pm `$0.3), and a wind model is therefore to be preferred. If this is the case, the ADAF flux in the optical–UV would be much fainter (by at least two orders of magnitude) than that of the host galaxy, again consistent with the present findings. ## 6 Conclusions We have carried out a pilot program to study the faint hard X-ray source population using the revolutionary capabilities of the Chandra satellite. We identified ten 2–10 keV selected sources from two 4-chip, medium deep Chandra fields covering about 0.14 deg<sup>2</sup> of sky, at fluxes in the range 1.5–25$`\times `$10<sup>-14</sup> erg cm<sup>-2</sup> s<sup>-1</sup>, a factor of 3 fainter than previous ASCA and BeppoSAX surveys. Recently, Mushotzky et al. (2000) reported the detection of faint X-ray sources from a single 1–chip field (0.0175 deg<sup>2</sup>) at fluxes of 0.3–3$`\times `$10<sup>-14</sup> erg cm<sup>-2</sup> s<sup>-1</sup>. Our results fill the gap between the shallow BeppoSAX and ASCA surveys and the deep Chandra field. Almost all sources have an optical counterpart within 2 arcsec. Optical spectra allow us to measure their redshifts and assess their optical classification. We find six broad line quasars, two emission line AGN (LAR6 and P4), one starburst galaxy (LAR5), and one apparently normal galaxy at $`z`$=0.158 (P3). LAR6 and P4 are likely to be obscured in the X-rays by column densities of about $`10^{23}`$ cm<sup>-2</sup> and $`10^{22}`$ cm<sup>-2</sup>, respectively. The X-ray source in the $`z`$=0.158 normal galaxy P3 may be covered by a column density of about $`10^{22}`$ cm<sup>-2</sup> as well. 
The spatial extent of all the X-ray sources is smaller than a few arcsec and is roughly consistent with the Chandra PSF. The X-ray sources associated with the $`z`$=0.016 starburst galaxy LAR5 and the $`z`$=0.158 normal galaxy P3 appear coincident with the galaxy nuclei. The X-ray to optical luminosity ratio of P3 is at least a factor of 30 higher than those of normal galaxies, while it is similar to those of AGN. The high X-ray luminosity and the lack of optical emission lines suggest an AGN in which either continuum beaming is important or emission lines are absorbed or not efficiently produced. In any case, objects like P3 would be missed or ignored in optical surveys or in X-ray surveys with large error boxes. It is only thanks to the revolutionary capabilities of Chandra that sources of this kind can be detected and identified. Based on the ASCA and BeppoSAX surveys at fluxes $`>`$5$`\times `$10<sup>-14</sup> erg cm<sup>-2</sup> s<sup>-1</sup> and on our first Chandra identifications, which push the flux limit down to about 2$`\times `$10<sup>-14</sup> erg cm<sup>-2</sup> s<sup>-1</sup>, the hard X-ray sky appears populated by a large fraction of broad line AGN (about 50%), by a mixture of intermediate AGN (type 1.8-2.0 and composite starburst/AGN) and, most intriguingly, also by X-ray luminous, apparently normal galaxies. These populations span ranges of X-ray and optical properties wider than previously thought. Acknowledgements The results presented in this paper are made possible by the successful effort of the entire Chandra team. In particular, we thank the XRT and ACIS teams for building and calibrating the high resolution mirror and the CCD camera, and the CXC team, in particular A. Fruscione, for the quick data reduction and archiving. We thank P. Giommi, S. Molendi, G. Fabbiano, E. Giallongo, M. Mignoli, L. Stella, M. Vietri and H. Tananbaum for useful discussions and L.A. Antonelli for help in the ESO observation preparation. We also thank the referee, Gianni Zamorani, for his detailed and constructive comments which improved the quality of the paper. This research has made use of the NASA/IPAC Extragalactic Database (NED), which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration. This work is partly supported by the Italian Space Agency, contract ARS–99–75, and by the Ministry for University and Research (MURST) under grant COFIN–98–02–32. M.E., F.N., and M.C. acknowledge support from NASA contract NAS8–39073.
# 1 Motivation ## 1 Motivation An accreting X-ray binary having millisecond pulses was recently discovered (). Presumably it is one of a class of objects that represent the missing link between canonical and millisecond pulsars. If so, it confirms a long-standing conjecture that millisecond pulsars are formed by accretion onto canonical pulsars (). Many other accreting X-ray sources have been discovered consisting of a neutron star and a low-mass companion from which material is gathered, presumably, from an accretion disk (). An interpretation of quasi-periodic oscillations in X-ray brightness characteristic of these sources suggests that upper limits on mass and radius of the neutron star may possibly be deduced (). However, the interpretation in such terms is the subject of some controversy and it remains to be seen how the models of the observations finally play out (). Certainly there is much uncertainty concerning the magnetic field and accretion disk interaction which play an important role in the modeling of X-ray pulsations. Tentative limits on mass and radius deduced from models of quasi periodic oscillations in X-ray brightness (QPOs) have been employed recently to discriminate among models of the equation of state of dense nuclear matter (). The X-ray pulsar, Sax J1808.4-3658, is a particularly interesting object; it produces coherent X-ray emission with a 2.5 ms period as well as X-ray bursts. Based on an analysis of radiation from this object, a limiting mass-radius relationship was derived which is difficult to reconcile with existing neutron star models (). The mass-radius relationship derived would be consistent with an interpretation of Sax J1808.4 as a strange star candidate as found by the above authors. Against this background our purpose is to derive a model-independent mass-radius constraint for neutron stars that depends only on minimal and well accepted principles. The limiting relation is analogous to a previously obtained lower limit on the Kepler period of a rotating star as a function of its mass (), and to an even earlier analysis of limits on the gravitational redshift from neutron stars (). The most conservative minimal principles and constraints are: 1. Einstein’s general relativistic equations for stellar structure hold. 2. The matter of the star satisfies $`dp/d\rho 0`$ which is a necessary condition that a body is stable, both as a whole and also with respect to the spontaneous expansion or contraction of elementary regions away from equilibrium (Le Chatelier’s principle). 3. The equation of state satisfies the causal constraint for a perfect fluid; a sound signal cannot propagate faster than the speed of light, $`v(ϵ)\sqrt{dp/dϵ}1`$, which is also the appropriate expression for sound signals in General Relativity (). 4. The high-density equation of state matches continuously in energy and pressure to the low-density equation of state of and has no bound state at any density. The last condition assures that the $`MR`$ relation obtained is for a neutron star and not some sort of exotic. We mean “neutron star” in the generic sense: it is made of charge neutral nuclear matter at low density, while at higher density in the interior, matter may be in a mixed or pure quark-matter or other high-density phase of nuclear matter. The last condition also implies that the star is bound by gravity as is a neutron star and is not a self-bound star such as a strange star. 
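Conditions 2 and 3 above can be checked for any tabulated, barotropic equation of state with a few lines of code: in units with $`c=1`$ they reduce to 0 ≤ dp/dε ≤ 1. The table used below is purely illustrative, not a realistic equation of state.

```python
import numpy as np

def check_eos(eps, p):
    """Check Le Chatelier stability (dp/d(eps) >= 0) and causality
    (dp/d(eps) <= 1, with c = 1) for a tabulated equation of state.
    eps and p must be given in the same units, e.g. MeV fm^-3."""
    dpde = np.gradient(p, eps)
    return bool(np.all(dpde >= 0.0)), bool(np.all(dpde <= 1.0))

# purely illustrative toy table, NOT a realistic equation of state
eps = np.linspace(150.0, 1500.0, 20)        # energy density, MeV fm^-3
p   = 0.30 * (eps - 150.0)                  # constant squared sound speed of 0.3
print(check_eos(eps, p))                    # (True, True): stable and causal
```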
As we will see, a self-bound star can lie in a region of the $`MR`$ plane that is forbidden to neutron stars. In referring to the constraints as conservative, we mean that we make no assumption about dense matter aside from the constraints mentioned, and we specifically allow for a phase transition above a baryon density of $`0.1625\mathrm{fm}^{-3}`$. We discuss this further in the section on Caveats. We can adapt the results of our earlier search for a model-independent minimum Kepler period by searching for the radius at fixed mass that minimizes $`P\propto (R^3/M)^{1/2}`$ (). Several researchers found that the above classical result applies to relativistic stars to within a few percent accuracy with a suitable constant of proportionality (). We use variational equations of state subject to the above constraints and techniques as described in the above reference. Our earlier results for the Kepler period agree to six percent with the results of , who performed a numerical solution for rotating stars in place of the above approximation formula for the Kepler period in terms of mass and radius of the non-rotating counterparts. Our results are shown in Fig. 1. Neutron stars at the mass limit can have radii as small as those shown by the line, and otherwise must lie in the shaded region marked for neutron stars. The region can be approximated in the interval illustrated by $`R\geq \left(3.1125-0.44192x+2.3089x^2-0.38698x^3\right)\mathrm{km},`$ $`(1\leq x\equiv {\displaystyle \frac{M}{M_{\odot }}}\leq 2.5).`$ (1) Of course, for neutron stars (unlike white dwarfs), there is only one equation of state in nature; all neutron stars form a single family, and whatever the trajectory of the mass-radius relationship is for that family, the limiting mass star has the smallest radius, and it is greater than or equal to the limit derived. For example, if the most massive neutron star that could exist in nature, independent of formation mechanism, is $`2M_{\odot }`$, the radius of all neutron stars would have to exceed 8.37 km. (Recall that measured masses tell us nothing about the maximum possible mass that can be supported by nature’s equation of state.) Another limit of interest follows from the properties of General Relativity. Schwarzschild’s limit $`R>2M`$ is actually less stringent than $`R>9M/4`$, which must be obeyed by any relativistic star (). The latter is also plotted. If a star’s mass and radius placed it between the two regions described above, it could be made of matter that is self-bound at high density, that is, matter that would be bound in objects from microscopic to stellar size even in the absence of gravity (see Eq. 3). Strange stars, if the strange matter hypothesis is true, are examples. ## 2 Application to X-ray Emitters In the approximations and hypotheses that have been used to interpret the oscillations in X-ray luminosity, there appears a Keplerian radius. Such a radius expresses the balance of gravitational and centrifugal forces. In classical physics as well as in General Relativity for a non-rotating star (units are $`G=c=1`$): $`\mathrm{\Omega }=\sqrt{M/R_K^3},`$ (2) where $`M`$ is the mass of the star and $`\mathrm{\Omega }`$ is the angular velocity of a particle in circular orbit at $`R_K`$. This relation has nothing to do with the nature of the interior of the star, whatever that may be. It only relates gravity in the exterior region to the centrifugal force on a particle at $`R_K`$. 
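As a quick numerical illustration, the sketch below evaluates the bound of Eq. (1) and, for comparison, the Keplerian orbital radius implied by Eq. (2) for an assumed orbital frequency; the 1.2 kHz value is an illustrative QPO-like frequency, not a measurement discussed here.

```python
import numpy as np

G, MSUN = 6.674e-8, 1.989e33                # cgs units

def r_min_km(x):
    """Lower bound on the radius from Eq. (1), valid for 1 <= x = M/Msun <= 2.5."""
    return 3.1125 - 0.44192*x + 2.3089*x**2 - 0.38698*x**3

def kepler_radius_km(m_msun, nu_hz):
    """Orbital radius from Eq. (2) for a non-rotating star, given an orbital frequency nu."""
    omega = 2.0 * np.pi * nu_hz
    return (G * m_msun * MSUN / omega**2) ** (1.0/3.0) / 1.0e5

for m in (1.4, 2.0, 2.5):
    print(f"M = {m:.1f} Msun:  R_min = {r_min_km(m):5.2f} km, "
          f"R_K(1.2 kHz) = {kepler_radius_km(m, 1200.0):5.2f} km")
# r_min_km(2.0) reproduces the 8.37 km figure quoted in the text
```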
The expression is exact for a non-rotating star in General Relativity, and only approximate for a rotating star, but there is a multiplicative factor on the right side, found to be accurate for many models that have been tested, which is $`\zeta 0.65`$ (). The same relationship $`M/R^3=const`$ holds also for a self-bound object, such as a strange star as can be seen as follows: The average energy density $`\overline{ϵ}`$ satisfies the identity $`ϵ_{\mathrm{equil}.}\overline{ϵ}M/\left({\displaystyle \frac{4\pi }{3}}R^3\right)`$ (3) where $`ϵ_{\mathrm{equil}.}`$ denotes the equilibrium density at which hypothetical strange matter is bound. The equality would hold for spherical objects of mass such that gravity is unimportant. Thus in either case, $`M/R^3=const`$ characterizes both a Keplerian orbit and a strange star, though they have nothing to do with each other à priori. We make this point since superficially the constraint derived from X-ray luminosity oscillations looks like the $`MR`$ relationship for a strange star \[cf. Fig. 3 in Ref. ()\]. In fact, strange stars have been considered as candidates that satisfy the constraints of the QPO model () and of a model of of periodic pulsations from the X-ray pulsar (). The suggestions seem especially appealing because of the above coincidence. They also have some appeal in as much as many explicit neutron star models cannot satisfy the constraints imposed by the theoretical analyses. Figure 2 illustrates the limit obtained for the $`2.5`$ ms X-ray pulsar, Sax J1808.4, by Li et al. (1999), together with the limit obtained in our model independent way. Neutron stars must lie to the right of our limit and the X-ray object must lie on or to the left of the line so marked. Li et al. (1999) have proposed the object as a strange-star candidate because the neutron star models they tested did not meet their constraint, whereas the strange star models did. However, from our model independent constraint, it is clear that neutron stars cannot be ruled out, even if many explicit models can, always provided the X-ray phenomena are modeled correctly. ## 3 Caveats We have derived a mass-radius relationship for “neutron stars” employing the minimal conservative constraints enumerated above. In doing so we are recognizing that there is no empirical knowledge of the properties of nuclear matter above saturation density of $`0.15\mathrm{fm}^3`$. In our search for the minimum radius as a function of mass, we have allowed a constant pressure region to develop above the fiducial density of $`0.1625\mathrm{fm}^3`$, the density closest to saturation in the BPS tables of the low-density equation of state. The minimum radius is increased by $`0.1`$ km if the equation of state is merely very soft just above the fiducial density. These features are permitted by our ignorance above saturation density, and in this sense provide a conservative estimate of the minimum radius as a function of mass. However, we may be permitted some prejudice: If a low-density phase transition of any kind were not plausible, then the minimum radius that we have derived, would be increased. The central density of the minimum radius star, in either of the above two cases is about 26 times nuclear density for a canonical $`1.44M_{}`$ star. Almost certainly, stars of such high central density must contain a deconfined quark matter core. However, these stars, with the constraints that we have imposed, are gravitationally bound, rather than self-bound, as a strange star would be. 
So, if the analyses of these X-ray objects really does imply an extraordinarily small radius, that fact would be consistent with the star having a quark matter core, the quarks being liberated from hadrons by the high pressure. This is in distinction with the strange matter hypothesis, according to which the entire star would be made of self-bound quark matter, a so-far undiscovered state, the actual ground state of hadronic matter, if the hypothesis is true. ## 4 Comments Two or three properties of pulsars can be measured with great accuracy—the period of rotation, sometimes the time rate of change of period, and the masses involved in close binaries. The first two are directly observed, and the third deduced from measurement of orbital parameters. With sufficient observation time, these can be determined accurately, and little doubt surrounds orbital mechanics. It is possible that no other properties gained from any other phenomena will rival these types of measurements either in accuracy or in clarity of interpretation. The detection of a pulsar with a rotational period smaller for its mass than that obtained as a model independent limit for neutron stars would be decisive in distinguishing between the neutron star interpretation of pulsars as compared to an exotic star—a star that is self-bound at very high equilibrium density (see Eq. (8) in Ref. ()). However, nature may never provide a mechanism for approaching the limiting period, which for a neutron star of mass $`1.44M_{}`$ is about $`0.3`$ ms. On the other hand, the pulsation phenomenon in X-ray stars may involve a relation between mass and radius which could also be decisive. However, the interpretation is subject to some uncertainty, both as to the origin of the pulsations and most certainly as to the accuracy of the mass-radius connection, Eq. 2. This relationship holds only for classical and for non-rotating relativistic stars. There is no formula for the Kepler frequency of a particle orbiting rotating relativistic stars because of the position dependent frame-dragging frequency; rather the Kepler frequency can be determined only as a self-consistency condition on the solution of Einstein’s equations and therefore only for specific model assumptions. There is no possibility of evading this model dependence; however it becomes weaker at further distance outside the star. \[See Eq. (8) in ()\]. Therefore, it is of general interest to have a model-independent limit on radius as a function of mass of neutron stars, such as we have provided here, and it is of particular interest in connection with the oscillations in X-ray brightness of neutron star accreters. Acknowledgements: I am indebted to I. Bombaci for enlightening comments and criticism of an early draft. This work was supported by the Director, Office of Energy Research, Office of High Energy and Nuclear Physics, Division of Nuclear Physics, of the U.S. Department of Energy under Contract DE-AC03-76SF00098.
# Phase transitions from preheating in gauge theories \[ ## Abstract We show by studying the Abelian Higgs model with numerical lattice simulations that non-thermal phase transitions arising out of preheating after inflation are possible in gauge-Higgs models under rather general circumstances. This may lead to the formation of gauged topological defects and, if the scale at which inflation ends is low enough, to electroweak baryogenesis after preheating. ⓒ The American Physical Society \] Over the past few years there has been somewhat of a revolution in our understanding of the dynamics of the end of inflation. The traditional picture of reheating arising out of the perturbative decay of the inflaton field as it oscillates about the minima of its potential has been replaced by the possibility of an explosive particle production during an earlier period, known as preheating. During preheating, parametric resonance of the inflaton field generates very large fluctuations of the scalar fields coupled to the inflaton, leading to the production of large numbers of particles . Due to the weakness of the interactions, the short-wavelength modes do not thermalize, and the effective temperature of the long-wavelength modes is much higher than in the standard reheating scenario. This may lead to symmetry restoration and, when the Universe cools down as it expands further, a subsequent non-thermal phase transition . The fact that the fluctuations produced during preheating have large occupation numbers implies that they can be considered as interacting classical waves, an important result because it means that the dynamics of fluctuations during and after preheating can be studied using lattice simulations . A concrete example of a non-thermal phase transition occuring after preheating was presented in Ref. (see also Ref. ). The phase transition that they found is first-order, and depending on the field content of the model being investigated topological defects may form, an intriguing result as it opens up the possibility that inflation can create a defect problem if for example they produce gauged monopoles or domain walls . Non-thermal phase transitions may even solve the old puzzle of baryon asymmetry in the Universe . Although the baryon number is conserved perturbatively in the Standard Model, there are non-perturbative interactions that violate this conservation law. The rate of baryon number violation is extremely low at low energies, but it becomes much higher in the high-temperature phase of the electroweak theory. Thus it is possible to generate the observed baryon asymmetry if, for some reason, the fields are out of equilibrium at the electroweak scale and thermalize to a temperature below $`T_c`$. This could be the case even in the standard big bang cosmology, if the electroweak phase transition were strongly first order, but at least in the minimal Standard Model it is not, as lattice simulations have shown . However, in a non-thermal phase transition, the fields are driven out of equilibrium by the oscillations of the inflaton, and baryogenesis may be possible, if the reheating temperature is much lower than $`T_c`$ . Despite the exciting possibility of electroweak baryogenesis, most of the numerical work on non-thermal phase transitions so far has concentrated on scalar fields or has been restricted to one spatial dimension . In this letter we present results from simulations of the Abelian Higgs model in two rather different cases. 
The first case is a direct analogue of the simulations in Ref. . The inflaton itself is charged under a gauge group and eventually breaks the gauge symmetry in a first-order phase transition. (For simplicity, we use the terminology of spontaneous symmetry breakdown, although the gauge symmetry is not actually broken in the Higgs phase.) The second case is more relevant for electroweak baryogenesis. We show that even with a Higgs mass that is compatible with experimental bounds, the transition is sharp, and electroweak baryogenesis is therefore possible. Although we restrict ourselves to the Abelian case in our simulations, we expect that our conclusions apply to non-Abelian theories as well. The Lagrangian of our model is $$L=\frac{1}{4}F^{\mu \nu }F_{\mu \nu }+(D^\mu \varphi )^{}D_\mu \varphi \lambda (|\varphi |^2v^2)^2.$$ (1) Here the gauge covariant derivative is $`D_\mu \varphi =_\mu \varphi +ieA_\mu \varphi `$, and $`F_{\mu \nu }=A_{\nu ;\mu }A_{\mu ;\nu }`$. The couplings $`\lambda `$ and $`e`$ are assumed to be small, and we will use $`\lambda e^2`$ in our estimates. Ideally, we would like to study the quantum field theory defined by Eq. (1), but solving for the time evolution of even a simple quantum system is a formidable task. Therefore we have to resort to the classical approximation, which is expected to work as long as the dynamics is determined by modes with a macroscopic occupation number . For studying the dynamics, it is convenient to fix the temporal gauge $`A_0=0`$ and use the conformal time $`\eta `$ defined by $`d\eta dt/a`$ and the rescaled fields $`\stackrel{~}{\varphi }a\varphi `$, $`\stackrel{~}{E}_i_\eta A_i`$. The equations of motion for $`\stackrel{~}{\varphi }`$ and $`A_i`$ follow from Eq. (1): $`_\eta ^2\stackrel{~}{\varphi }`$ $`=`$ $`D_iD_i\stackrel{~}{\varphi }+(2\lambda v^2a^2+_\eta ^2a/a)\stackrel{~}{\varphi }2\lambda |\stackrel{~}{\varphi }|^2\stackrel{~}{\varphi },`$ (2) $`_\eta \stackrel{~}{E}_i`$ $`=`$ $`_jF_{ij}+2e\mathrm{Im}\stackrel{~}{\varphi }^{}D_i\stackrel{~}{\varphi },`$ (3) $`_i\stackrel{~}{E}_i`$ $`=`$ $`2e\mathrm{Im}\stackrel{~}{\varphi }^{}_\eta \stackrel{~}{\varphi }.`$ (4) The initial conditions for the fields are those produced by inflation: the gauge field is in vacuum and the covariant derivatives of the Higgs field vanish. This allows us to fix the remaining gauge degree freedom by setting initially $`A_i=0`$ and $`\varphi =\overline{\varphi }_0=`$constant. We separate $`\varphi `$ into the homogeneous zero mode $`\overline{\varphi }`$ and the inhomogeneous fluctuations $`\delta \varphi =\varphi \overline{\varphi }`$. We take the quantum nature of the system into account by introducing small fluctuations for the fields $`A_i`$ and $`\delta \varphi `$ and for their canonical momenta $`E_i`$ and $`\delta \pi _\eta \delta \varphi `$. The width of these classical fluctuations is chosen to be equal to the width of quantum fluctuations in the vacuum calculated for free fields. We allow fluctuations in the phase of $`\delta \varphi `$ and fix the associated gauge degree of freedom by choosing $`_iA_i=0`$. The longitudinal component of $`E_i`$ is determined from the Gauss law (4). In the very beginning of our simulation, when the fields are in vacuum, the conditions required by the classical approximation are not satisfied, but we expect that the final results will be unaffected. What is important is not the precise nature of the initial fluctuations, but that some small fluctuations are present. 
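A schematic version of this initialisation, for a single real scalar on a periodic lattice, is sketched below; the mode amplitudes follow the free-field vacuum width $`1/\sqrt{2\omega _k}`$, while the rescalings by the scale factor, the gauge sector, the Gauss-law projection and the exact normalisation conventions used in the actual simulations are all glossed over.

```python
import numpy as np

def vacuum_fluctuations(n, dx, mass, seed=0):
    """Gaussian random field whose mode variance mimics the free-field
    vacuum, <|phi_k|^2> ~ 1/(2 w_k), w_k = sqrt(k^2 + m^2).  Only a
    schematic stand-in for the initialisation described in the text:
    volume factors and overall normalisation conventions are glossed."""
    rng = np.random.default_rng(seed)
    k1 = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)
    kx, ky, kz = np.meshgrid(k1, k1, k1, indexing="ij")
    w = np.sqrt(kx**2 + ky**2 + kz**2 + mass**2)      # mass > 0 keeps w finite at k = 0
    amp = 1.0 / np.sqrt(2.0 * w)
    modes = amp * (rng.normal(size=w.shape) + 1j * rng.normal(size=w.shape))
    # the real part of the inverse FFT gives a real field; the sqrt(2)
    # compensates for the variance lost by discarding the imaginary part
    return np.sqrt(2.0) * np.fft.ifftn(modes).real

phi = vacuum_fluctuations(32, dx=0.5, mass=0.1)
print(phi.shape, phi.std())
```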
As in the scalar theory , the time evolution begins with a period of parametric resonance. The resonance parameter $`q`$ is given by $`qe^2/\lambda `$. Let us first consider the case $`q1`$, in which the resonance is broad, and during the first oscillations, a large amount of energy is transferred from the zero mode $`\overline{\varphi }`$ to the long-wavelength modes $`p\lambda ^{1/2}\overline{\varphi }_0`$ of $`A_i`$ and $`\delta \varphi `$, from which it soon spreads to all modes with $`p<p_{}e\overline{\varphi }_0`$. We can approximate the state of the system after this period by assuming that the modes with $`p<p_{}`$ thermalize to some effective temperature $`T_{\mathrm{eff}}`$, but those with $`p>p_{}`$ remain in vacuum. Then the energy density in these fluctuations is $$\rho ^p_{}\frac{d^3p}{(2\pi )^3}p^2\frac{T_{\mathrm{eff}}}{p^2}p_{}^3T_{\mathrm{eff}},$$ (5) and after preheating it is of the same order as the initial energy density in the zero mode $`\rho _0e^2\overline{\varphi }_0^4`$, which implies $`T_{\mathrm{eff}}\overline{\varphi }_0/e`$. In the reheating picture, the temperature after the equilibration of the fields would be $`T_r\sqrt{e}\overline{\varphi }_0T_{\mathrm{eff}}`$. Since the occupation number of the long-wavelength modes is $`n_pT_{\mathrm{eff}}/p`$, which is large when $`p<p_{}`$ provided that $`e1`$, the classical approximation works well after preheating begins. The zero mode $`\overline{\varphi }`$ continues oscillating around the minimum, but the fluctuations in $`\delta \varphi `$ and $`A_i`$ induce an effective mass term $$m_{\mathrm{eff}}^22\lambda v^2+4\lambda \delta \varphi ^2+e^2A_i^2.$$ (6) The magnitude of the fluctuation terms is $$\delta \varphi ^2A_i^2^p_{}\frac{d^3p}{(2\pi )^3}\frac{T_{\mathrm{eff}}}{p^2}p_{}T_{\mathrm{eff}}\overline{\varphi }_0^2.$$ (7) In the reheating picture, the fluctuation terms would be much smaller, $`\delta \varphi ^2A_i^2T_r^2e\overline{\varphi }_0^2`$. This shows that $`m_{\mathrm{eff}}^2`$ can become positive, thereby restoring the symmetry, even if the reheating temperature is below $`T_c`$. When the Universe expands further, the fluctuation terms decrease and the system undergoes a phase transition to the broken phase. The nature of this transition can be studied by calculating the effective potential of $`\overline{\varphi }`$ in the background of the fluctuations $`\delta \varphi `$ and $`A_i`$. If $`e^2\lambda `$, the contribution from $`A_i`$ will be more important. Taking the one-loop contribution from the gauge field into account, we have $$V_{\mathrm{eff}}(\overline{\varphi })2\lambda v^2\overline{\varphi }^2+\lambda \overline{\varphi }^4+T_{\mathrm{eff}}^p_{}\frac{d^3p}{(2\pi )^3}\mathrm{log}\frac{p^2+m_A^2}{p^2},$$ (8) where $`m_Ae\overline{\varphi }`$ is the photon mass generated by the Higgs mechanism. To understand the shape of the potential (8), we expand it both for small and large $`\overline{\varphi }`$, $$V_{\mathrm{eff}}(\overline{\varphi })\{\begin{array}{cc}m_{\mathrm{eff}}^2\overline{\varphi }^2C_1e^3T_{\mathrm{eff}}\overline{\varphi }^3+\lambda \overline{\varphi }^4,\hfill & \hfill (\overline{\varphi }p_{}/e)\\ C_2T_{\mathrm{eff}}p_{}^3\mathrm{ln}\frac{e\overline{\varphi }}{p_{}}2\lambda v^2\overline{\varphi }^2+\lambda \overline{\varphi }^4,\hfill & \hfill (\overline{\varphi }p_{}/e)\end{array}$$ (9) where $`C_1`$ and $`C_2`$ are numerical factors and $`m_{\mathrm{eff}}^2`$ is given by Eq. (6). 
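The two-minimum structure analysed next can be made concrete by scanning the small-$`\overline{\varphi }`$ form of Eq. (9) for decreasing $`m_{\mathrm{eff}}^2`$; the couplings, $`T_{\mathrm{eff}}`$ and the numerical factor $`C_1`$ used below are arbitrary illustrative values, not those of the simulations.

```python
import numpy as np

def v_eff(phi, m2, e=0.3, lam=0.05, teff=1.0, c1=1.0):
    """Small-field form of Eq. (9): m_eff^2 phi^2 - C1 e^3 T_eff phi^3 + lam phi^4.
    All parameter values here are illustrative only."""
    return m2*phi**2 - c1*e**3*teff*phi**3 + lam*phi**4

phi = np.linspace(0.0, 1.0, 2001)
for m2 in (5e-3, 4e-3, 3e-3, 2e-3):
    v = v_eff(phi, m2)
    i = np.argmin(v)
    print(f"m_eff^2 = {m2:.1e}:  global minimum at phi = {phi[i]:.2f}")
# the global minimum jumps discontinuously from phi = 0 to phi ~ 0.3 as
# m_eff^2 decreases: the hallmark of a first-order transition
```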
The origin $`\overline{\varphi }=0`$ is a local minimum whenever $`m_{\mathrm{eff}}^2`$ is positive. Assuming first that $`p_{}>ev`$, the cubic term in Eq. (9) induces another minimum for the potential when $`m_{\mathrm{eff}}^2`$ becomes small enough, and eventually when this new minimum becomes the global one the system enters the Higgs phase in a first order phase transition. While this phenomenon is present also in equilibrium , the transition is stronger in our case, since the cubic term is proportional to $`T_{\mathrm{eff}}T_r`$. The existence of this minimum requires $`e^2>\lambda `$, since otherwise the contribution from the scalar loop, which does not contain any cubic term, would dominate in Eq. (9). Even if $`e^2<\lambda `$, the potential can have two minima, provided that $`p_{}<ev`$ . Then the tree-level minimum is the global one if the logarithmic term in Eq. (9) is smaller than $`\lambda v^4`$, i.e. $`T_{\mathrm{eff}}p_{}^3<\lambda v^4`$. It is difficult to simultaneously satisfy this inequality, along with the condition $`m_{\mathrm{eff}}^2>0`$, and we were unable to do this in our simulations. To confirm this expected behaviour, we carried out a numerical lattice simulation with our model. We chose $`\overline{\varphi }_0=0.25M_{\mathrm{Pl}}`$ and $`\lambda =210^{13}`$, $`e=6.410^6`$, $`v=7.210^4M_{\mathrm{Pl}}`$. The lattice spacing was $`\delta x=9.310^5M_{\mathrm{Pl}}^1`$, the time step $`\delta \eta =1.210^5M_{\mathrm{Pl}}^1`$ and the size of the lattice was $`320^3`$. The universe was assumed to be radiation dominated with $`a(\eta )=1+\eta H`$, where $`H=8.310^8M_{\mathrm{Pl}}`$. Since $`\overline{\varphi }`$ is not a gauge-invariant quantity and can therefore only be defined in the vacuum, we did not measure its value. Instead, we show $`|\varphi |^2`$ as a function of time in Fig. 1. The fact that $`|\varphi |^2<v^2`$ when $`1.510^9M_{\mathrm{Pl}}^1<\eta <310^9M_{\mathrm{Pl}}^1`$ clearly shows that the gauge symmetry is restored and the system is in the Coulomb phase. The amplitude of the oscillations remains quite large, which is probably a finite-size effect. In an infinite system, there would be more infrared modes to which the zero mode of $`\varphi `$ could decay. Eventually, the system enters the Higgs phase in a first-order transition, as in the scalar theory. The first-order nature of the transition can be seen from the configurations during the transition; for example by looking at the isosurface of $`|\varphi |^2`$ we would see a growing bubble of the Higgs phase characterized by a larger value of $`|\varphi |^2`$. In order to check that the separation of scales below and above $`p_{}`$ indeed takes place, we measured the effective temperature of different Fourier modes of the electric fields $`E_i`$ at various times during the simulation. A reason for choosing this quantity rather than the power spectrum of $`\stackrel{~}{\varphi }`$ or $`A_i`$ is that it is gauge-invariant. In equilibrium $`|\stackrel{~}{E}_{i,k}|^2`$ is constant and its magnitude is proportional to the temperature. Therefore we can use it to define the effective temperature of a single mode $$T_{\mathrm{eff}}(p=k/a)=\frac{1}{2a}|\stackrel{~}{E}_{i,k}^\mathrm{T}|^2\frac{d^3k}{(2\pi )^3},$$ (10) where the superscript T indicates that we have included only the transverse component $`k_i\stackrel{~}{E}_{i,k}^\mathrm{T}=0`$, since the longitudinal component is fixed by the Gauss law. The inset of Fig. 
1 shows the product $`aT_{\mathrm{eff}}`$, which is the effective temperature of the rescaled fields, as a function of the conformal momentum $`k=pa`$. Immediately after preheating, the temperature of the long-wavelength modes is $`T_{\mathrm{eff}}10^4M_{\mathrm{Pl}}`$ and the occupation number $`n_p=T_{\mathrm{eff}}(p)/p10^{10}`$ is huge. The cutoff momentum is $`p_{}=k_{}/a10^6M_{\mathrm{Pl}}/a`$. With time, the modes with higher and higher $`k`$ thermalize and the temperature decreases, but since the couplings are small, this process is very slow. The modes with $`kk_{}`$ are strongly suppressed even after the phase transition, and therefore we believe that the lattice approximation remains reliable even at the end of the simulation. Because the modes with the highest momenta do not remain exactly in the vacuum, discretization errors cannot be ruled out completely. For the electroweak theory, the opposite case $`q<1`$ is more relevant, since $`qm_W^2/m_H^2`$. In this case, the parametric resonance is narrow and the energy transfer is less efficient. However, since the expansion rate of the Universe is much slower in this case, it could still lead to a similar phenomenon. Most of the energy of the inflaton is transferred to a narrow momentum range of the fluctuations, but the long-wavelength modes thermalize and the energy is spread to all long-wavelength modes. After that, the system should behave as in the case with a broad resonance. In equilibrium, the phase transition is not of first order if $`e^2>\lambda `$, but as discussed earlier, we expect the transition to be stronger in our case. The realistic values for the couplings in the electroweak theory would be $`\lambda e^21`$, but in that case our simulations are not reliable. With these couplings, the interactions are important even in the vacuum state, and the classical approximation cannot be trusted. Therefore, we have used slightly smaller couplings, $`\lambda =0.04`$ and $`e=0.14`$, which allow us to use the classical approximation. The initial value of the Higgs field was $`\overline{\varphi }_0=1`$ TeV. We also chose $`v=246`$ GeV and $`a(\eta )=1+\eta H`$ with $`H=0.7`$ GeV. The lattice spacing was $`\delta x=1.4`$ TeV<sup>-1</sup>, time step $`\delta \eta =0.14`$ TeV<sup>-1</sup>, and the lattice size $`240^3`$. In this case, $`\varphi `$ cannot be the inflaton, because its couplings are much too strong. However, the homogeneous initial condition for $`\varphi `$ may arise from a previous preheating phase, in which $`\varphi `$ couples to the inflaton with a coupling constant that is much smaller than $`e`$. Then the parametric resonance will transfer a large amount of energy to modes of $`\varphi `$ with very long wavelengths. The alternative possibility is that quantum fluctuations of $`\varphi `$ give it a large spatial average during inflation. As in the earlier case for the inflaton, we show $`|\varphi |^2`$ and the effective temperature of different modes of $`E_i`$ in Fig. (2). This time, the energy is transferred into a narrow band of gauge field modes. Nevertheless, the long-wavelength modes thermalize, and we reach a similar situation to that in the first case, in which the long-wavelength modes $`k<300`$ GeV have an effective temperature $`T_{\mathrm{eff}}10^4`$ GeV, and the symmetry is restored. At $`\eta 3`$ GeV<sup>-1</sup>, the system undergoes a phase transition to the Higgs phase. The transition is not of first order, but it is still rather sharp. 
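For completeness, the per-mode effective temperature of Eq. (10) can be extracted from the lattice electric fields with a short routine like the one below, which Fourier transforms the field, removes the longitudinal component fixed by the Gauss law, and shell-averages the transverse power; the overall normalisation depends on volume and FFT conventions and is left arbitrary here.

```python
import numpy as np

def transverse_power(E, dx):
    """Shell-averaged transverse power <|E^T_k|^2> of a vector field E of
    shape (3, N, N, N) on a periodic grid of spacing dx.  Up to an overall
    normalisation this is proportional to a*T_eff(k) in Eq. (10)."""
    n = E.shape[1]
    k1 = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)
    kv = np.stack(np.meshgrid(k1, k1, k1, indexing="ij"))       # (3, N, N, N)
    Ek = np.stack([np.fft.fftn(E[i]) for i in range(3)])
    k2 = (kv**2).sum(0)
    k2[0, 0, 0] = 1.0                                           # avoid 0/0 at k = 0
    EkT = Ek - kv * (kv * Ek).sum(0) / k2                       # transverse projection
    power = (np.abs(EkT)**2).sum(0)
    kmag = np.sqrt((kv**2).sum(0))
    edges = np.linspace(0.0, kmag.max(), 30)
    which = np.digitize(kmag.ravel(), edges)
    kcen = [0.5*(edges[i-1] + edges[i]) for i in range(1, len(edges))
            if np.any(which == i)]
    spec = [power.ravel()[which == i].mean() for i in range(1, len(edges))
            if np.any(which == i)]
    return np.array(kcen), np.array(spec)

# white-noise test field; in practice E comes from the lattice evolution
E = np.random.default_rng(1).normal(size=(3, 32, 32, 32))
k, p = transverse_power(E, dx=1.0)
print(k[:3], p[:3])
```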
In the electroweak theory, the conservation of baryon number would be violated by sphaleron configurations with a rate $`\mathrm{\Gamma }_{\mathrm{sph}}\alpha _W^5T_{\mathrm{eff}}^5/p_{}`$ when the symmetry is temporarily restored, and as discussed in Ref. , the oscillations of the Higgs field could create a large baryon asymmetry. If the transition to the Higgs phase is sharp enough, the baryon number violation ceases instantaneously, and the produced baryon asymmetry remains. Our simulations show that a gauge-Higgs system exhibits the same behaviour as the scalar model considered in Ref. . The first case we considered shows that a non-thermal phase transition is possible if the inflaton is charged under a gauge group. Although we restricted ourselves to an Abelian model, we believe that the qualitative features of our results would be the same in non-Abelian theories. In many models, this phase transition would lead to the formation of cosmic strings or other topological defects. The second case we considered has the qualitative features of the electroweak theory, and we find that the symmetry gets restored although the parametric resonance is narrow, provided that the expansion of the Universe is slow enough. Unlike in the standard thermal phase transition scenario, the transition to the Higgs phase is sharp, which makes it possible to preserve the produced baryon asymmetry. This supports the picture of electroweak baryogenesis at preheating. ###### Acknowledgements. We would like to thank Andrei Linde and Mark Hindmarsh for useful conversations. EJC and AR are supported by PPARC, and AR also partly by the University of Helsinki. This work was conducted on the SGI Origin platform using COSMOS Consortium facilities, funded by HEFCE, PPARC and SGI.
# Shape Invariant potentials depending on $`n`$ parameters transformed by translation (To appear in J. Phys. A: Math. Gen. (2000)) ## 1 Introduction There has been much interest in the search for exactly solvable problems in Quantum Mechanics from the early days of the theory to date. In this respect, the Factorization Method introduced by Schrödinger and later developed by Infeld and Hull has been shown to be very efficient. Later, the introduction of Supersymmetric Quantum Mechanics by Witten and the concept of Shape Invariance by Gendenshteïn have greatly renewed interest in the subject. For an excellent review, see . In particular, Shape Invariant problems have been shown to be exactly solvable, and it was observed that a number of known exactly solvable potentials belonged to such a class. The natural question which arose was whether all exactly solvable problems have the property of being Shape Invariant in the sense of . This question was treated in an interesting paper several years ago . There, the Natanzon class of potentials was investigated in detail. Following that line of reasoning, the authors gave a classification of Shape Invariant potentials whose parameters are transformed by translation. They proposed the general case which depends on an arbitrary but finite number $`n`$ of parameters, and established the equations to be solved in order to find such a class. But they stated that they had failed to find any solution of the equations. For several years this class of Shape Invariant potentials has been considered to be a good candidate to enlarge the class of known solutions of the Shape Invariance condition, see e.g. . But the solutions have remained unknown so far. As it seems to be an interesting problem, we have analyzed it carefully and have shown that the solution can be found in an easy way. The main point is to make appropriate use of some interesting properties of a related Riccati equation. As a consequence, the aim of this paper is to answer the question proposed in . The organization of the paper is as follows. After a quick description of the problem of Shape Invariance in Section 2, we will develop in Section 3 the mathematical study of a particularly interesting first order ordinary differential equation system of key importance for the problem. Then we will proceed to study in Section 4 the problem of Shape Invariant potentials depending on $`n`$ parameters. We will make some *Ansätze* for the superpotentials, assuming translations as the transformation law for the parameters, including the one proposed in and its more immediate generalizations. The results are presented in some tables. ## 2 Shape invariance and the Factorization Method We recall some basic ideas of the theory of related operators, the concept of partner potentials and Shape Invariance. Two Hamiltonians $$H=-\frac{d^2}{dx^2}+V(x),\stackrel{~}{H}=-\frac{d^2}{dx^2}+\stackrel{~}{V}(x),$$ (1) are said to be related if there exists an operator $`A`$ such that $`AH=\stackrel{~}{H}A`$, where $`A`$ need not be invertible. 
If we assume that $$A=\frac{d}{dx}+W(x),$$ (2) then, the relation $`AH=\stackrel{~}{H}A`$ leads to $$W(V\stackrel{~}{V})=W^{\prime \prime }V^{},V\stackrel{~}{V}=2W^{},$$ (3) while the relation $`HA^{}=A^{}\stackrel{~}{H}`$ leads to $$W(V\stackrel{~}{V})=W^{\prime \prime }\stackrel{~}{V}^{},V\stackrel{~}{V}=2W^{}.$$ (4) One can easily integrate both pair of equations; we obtain $$V=W^2W^{}+c,\stackrel{~}{V}=W^2+W^{}+d,$$ where $`c`$ and $`d`$ are constants. But taking into account the equation $`V\stackrel{~}{V}=2W^{}`$ we have $`c=d`$. Therefore (see e.g ), two Hamiltonians $`H`$ and $`\stackrel{~}{H}`$ of the form (1) can be related by a first order differential operator $`A`$ like (2) if and only if there exists a real constant $`d`$ such that $`W`$ satisfies the pair of Riccati equations $$Vd=W^2W^{},\stackrel{~}{V}d=W^2+W^{},$$ (5) and then the Hamiltonians can be factorized as $$H=A^{}A+d,\stackrel{~}{H}=AA^{}+d.$$ (6) Using equations in (5) we obtain the equivalent pair $$\stackrel{~}{V}d=(Vd)+2W^2,\stackrel{~}{V}=V+2W^{}.$$ (7) The potentials $`\stackrel{~}{V}`$ and $`V`$ are usually said to be *partners*. We would like to remark that these equations have an intimate relation with what it is currently known as *Darboux transformations* in the context of one dimensional or Supersymmetric Quantum Mechanics. In fact, it is easy to prove that the first of the equations (5) can be transformed into a Schrödinger equation $`\varphi ^{\prime \prime }+(V(x)d)\varphi =0`$ by means of the change $`\varphi ^{}/\varphi =W`$, and by means of $`\stackrel{~}{\varphi }^{}/\stackrel{~}{\varphi }=W`$ the second of (5) transforms into $`\stackrel{~}{\varphi }^{\prime \prime }+(\stackrel{~}{V}(x)d)\stackrel{~}{\varphi }=0`$. The relation between $`V`$ and $`\stackrel{~}{V}`$ is given by (7). Obviously, $`\varphi \stackrel{~}{\varphi }=1`$, up to a non–vanishing constant factor. It is also worth noting that these Schrödinger equations express that $`\varphi `$ and $`\stackrel{~}{\varphi }`$ are respective eigenfunctions of the Hamiltonians (1) for the eigenvalue $`d`$. These are the essential points of the mentioned Darboux transformations, as exposed e.g. in \[13, pp. 7, 24\]. The concept of *Shape invariance* introduced by Gendenshteïn : $`V`$ is assumed to depend on certain set of parameters and equations (5) define $`V`$ and $`\stackrel{~}{V}`$ in terms of a superpotential $`W`$. The condition for a partner $`\stackrel{~}{V}`$ to be of the same form as $`V`$ but for a different choice of the values of the parameters involved in $`V`$, is called Shape Invariance condition . More explicitly, if $`V=V(x,a)`$ and $`\stackrel{~}{V}=\stackrel{~}{V}(x,a)`$, where $`a`$ denotes a set of parameters, Gendenshteïn showed that if we assume the further relation between $`V(x,a)`$ and $`\stackrel{~}{V}(x,a)`$ given by $$\stackrel{~}{V}(x,a)=V(x,f(a))+R(f(a)),$$ (8) where $`f`$ is a transformation of the set of parameters $`a`$ and $`R(f(a))`$ is a remainder not depending on $`x`$, then the complete spectra of the Hamiltonians $`H`$ and $`\stackrel{~}{H}`$ can be found easily. Just writing the $`a`$–dependence the equations (5) become $$V(x,a)d=W^2W^{},\stackrel{~}{V}(x,a)d=W^2+W^{}.$$ (9) Therefore, we will assume that $`V(x,a)`$ and $`\stackrel{~}{V}(x,a)`$ are obtained from a superpotential function $`W(x,a)`$ by means of $$V(x,a)d=W^2(x,a)W^{}(x,a),\stackrel{~}{V}(x,a)d=W^2(x,a)+W^{}(x,a).$$ (10) The Shape Invariance property in the sense of requires the further condition (8) to be satisfied. 
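The statements above are easy to verify symbolically for a concrete superpotential. The sketch below takes the illustrative choice $`W(x)=\mathrm{tanh}x`$ (not a case discussed in the paper), builds the partner potentials from the pair (5), and checks that $`\varphi `$ with $`\varphi ^{}/\varphi =W`$ and its reciprocal are eigenfunctions of the corresponding Hamiltonians with eigenvalue $`d`$.

```python
import sympy as sp

x, d = sp.symbols("x d", real=True)
W = sp.tanh(x)                        # illustrative superpotential
V  = W**2 - sp.diff(W, x) + d         # first of Eqs. (5)
Vt = W**2 + sp.diff(W, x) + d         # second of Eqs. (5)

phi  = 1/sp.cosh(x)                   # satisfies phi'/phi = -W, i.e. phi = exp(-int W)
phit = sp.cosh(x)                     # reciprocal solution, phit'/phit = +W

# both residuals simplify to zero: phi and phit are eigenfunctions with eigenvalue d
print(sp.simplify(-sp.diff(phi, x, 2) + V*phi - d*phi))
print(sp.simplify(-sp.diff(phit, x, 2) + Vt*phit - d*phit))
```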
The relationship of a slight generalization of the Factorization Method developed by Infeld and Hull with the Shape Invariance theory has been explicitly established in . There, the following identifications between the symbols used in the Factorization Method and those of Shape Invariance problems were found: $`\stackrel{~}{V}(x,a)d`$ $`=`$ $`r(x,f(a))L(a),`$ (11) $`V(x,a)d`$ $`=`$ $`r(x,a)L(a),`$ (12) $`W(x,a)`$ $`=`$ $`k(x,a),`$ (13) $`R(f(a))`$ $`=`$ $`L(f(a))L(a).`$ (14) ## 3 General solution of equations $`y^2+y^{}=a`$, $`zy+z^{}=b`$ We will study next the general solution of a certain first order ordinary differential equation system. It will play a key role in the derivation of the main subject in this paper. The system is $`y^2+y^{}`$ $`=`$ $`a,`$ (15) $`yz+z^{}`$ $`=`$ $`b,`$ (16) where $`a`$ and $`b`$ are real constants and the prime denotes derivative respect to $`x`$. The equation (15) is a Riccati equation with constant coefficients, meanwhile (16) is an inhomogeneous linear first order differential equation for $`z`$, provided the function $`y`$ is known. The general solution of (16) is easily obtained once we know the solutions of (15), e.g. by means of $$z(x)=\frac{b^x\mathrm{exp}\left\{^\xi y(\eta )𝑑\eta \right\}𝑑\xi +D}{\mathrm{exp}\left\{^xy(\xi )𝑑\xi \right\}},$$ (17) where $`D`$ is an integration constant . The general Riccati equation $$\frac{dy}{dx}=a_2(x)y^2+a_1(x)y+a_0(x),$$ (18) where $`a_2(x)`$, $`a_1(x)`$ and $`a_0(x)`$ are differentiable functions of the independent variable $`x`$, has very interesting properties. It is to be remarked that in the most general case there is no way of writing the general solution by using some quadratures, but one can integrate it completely if one particular solution $`y_1(x)`$ of (18) is known. Then, the change of variable (see e.g. ) $$u=\frac{1}{y_1y},\text{with inverse}y=y_1\frac{1}{u},$$ (19) transforms (18) into the inhomogeneous first order linear equation $$\frac{du}{dx}=(2a_2y_1+a_1)u+a_2,$$ (20) which can be integrated by two quadratures. An alternative change of variable was also proposed recently : $$u=\frac{yy_1}{y_1y},\text{with inverse}y=\frac{uy_1}{u+y_1}.$$ (21) This change transforms (18) into the inhomogeneous first order linear equation $$\frac{du}{dx}=\left(\frac{2a_0}{y_1}+a_1\right)u+a_0,$$ (22) which is integrable by two quadratures, as well. We also remark that the general Riccati equation (18) admits the identically vanishing function as a solution if and only if $`a_0(x)=0`$ for all $`x`$ in the domain of the solution. But the most important property of Riccati equation is that when three particular solutions of (18), $`y_1(x),y_2(x),y_3(x)`$ are known, the general solution $`y`$ can be automatically written, by means of the formula $$y=\frac{y_2(y_3y_1)k+y_1(y_2y_3)}{(y_3y_1)k+y_2y_3},$$ (23) where $`k`$ is a constant determining each solution. As an example, it is easy to check that $`y|_{k=0}=y_1`$, $`y|_{k=1}=y_3`$ and that the solution $`y_2`$ is obtained as the limit of $`k`$ going to $`\mathrm{}`$. For more information on geometric and group theoretic aspects of Riccati equation see e.g. . We are interested here in the simpler case of the Riccati equation with constant coefficients (15). The general equation of this type is $$\frac{dy}{dx}=a_2y^2+a_1y+a_0,$$ (24) where $`a_2`$, $`a_1`$ and $`a_0`$ are now real constants, $`a_20`$. 
This equation, unlike the general Riccati equation (18), is always integrable by quadratures, and the form of the solutions depends strongly on the sign of the discriminant $`\mathrm{\Delta }=a_1^24a_0a_2`$. This can be seen by separating the differential equation (24) in the form $$\frac{dy}{a_2y^2+a_1y+a_0}=\frac{dy}{a_2\left(\left(y+\frac{a_1}{2a_2}\right)^2\frac{\mathrm{\Delta }}{4a_2^2}\right)}=dx.$$ Integrating (24) in this way we obtain non–constant solutions. Looking for constant solutions of (24) amounts to solve an algebraic second degree equation. So, if $`\mathrm{\Delta }>0`$ there will be two different real constant solutions, when $`\mathrm{\Delta }=0`$ there is only one constant real solution and if $`\mathrm{\Delta }<0`$ we have no constant real solutions at all. These properties may be used for finding the general solution of (15). For this equation the discriminant $`\mathrm{\Delta }`$ is just $`4a`$. Then, if $`a>0`$ we can write $`a=c^2`$, where $`c>0`$ is a real number. The non–constant particular solution $$y_1(x)=c\mathrm{tanh}(c(xA)),$$ (25) where $`A`$ is an arbitrary integration constant, is readily found by direct integration. In addition, there exist two different constant real solutions, $$y_2(x)=c,y_3(x)=c.$$ (26) The general solution obtained using formula (23), is $$y(x)=c\frac{B\mathrm{sinh}(c(xA))\mathrm{cosh}(c(xA))}{B\mathrm{cosh}(c(xA))\mathrm{sinh}(c(xA))},$$ (27) where $`B=(2k)/k`$, $`k`$ being the arbitrary constant in (23). Substituting in (17) we obtain the general solution for $`z(x)`$, $`z(x)={\displaystyle \frac{\frac{b}{c}\{B\mathrm{sinh}(c(xA))\mathrm{cosh}(c(xA))\}+D}{B\mathrm{cosh}(c(xA))\mathrm{sinh}(c(xA))}},`$ (28) where $`D`$ is a new integration constant. For the case $`a=0`$, a particular solution is $$y_1(x)=\frac{1}{xA},$$ (29) where $`A`$ is an integration constant. If we apply the change of variable (21) with $`y_1`$ given by (29), then (15) with $`a=0`$ transforms into $`du/dx=0`$. Then, the general solution for (15) with $`a=0`$ is $$y(x)=\frac{B}{1+B(xA)},$$ (30) with $`A`$ and $`B`$ being arbitrary integration constants. Substituting in (17) we obtain the general solution for $`z(x)`$ in this case, $`z(x)={\displaystyle \frac{b(\frac{B}{2}(xA)^2+xA)+D}{1+B(xA)}},`$ (31) where $`D`$ is a new integration constant. If now $`a=c^2<0`$, where $`c>0`$ is a real number we find by direct integration the particular solution $$y_1(x)=c\mathrm{tan}(c(xA)),$$ (32) where $`A`$ is an arbitrary integration constant. With either the change of variable (19) or alternatively (21), with $`y_1(x)`$ given by (32) we get the general solution of (15) for $`a>0`$ $$y(x)=c\frac{B\mathrm{sin}(c(xA))+\mathrm{cos}(c(xA))}{B\mathrm{cos}(c(xA))\mathrm{sin}(c(xA))},$$ (33) where $`B=cF`$, $`F`$ arbitrary constant. Substituting in (17) we obtain the general solution for $`z(x)`$ in this case, $`z(x)={\displaystyle \frac{\frac{b}{c}\{B\mathrm{sin}(c(xA))+\mathrm{cos}(c(xA))\}+D}{B\mathrm{cos}(c(xA))\mathrm{sin}(c(xA))}},`$ (34) where $`D`$ is a new integration constant. These solutions can be written in many mathematically equivalent ways. We have tried to give their simplest form and in such a way that the symmetry between the solutions for the case $`a>0`$ and $`a<0`$ were clearly recognized. Indeed, the general solution of (15) for $`a>0`$ can be transformed into that of the case $`a<0`$ by means of the formal changes $`cic`$, $`BiB`$ and the identities $`\mathrm{sinh}(ix)=i\mathrm{sin}(x)`$, $`\mathrm{cosh}(ix)=\mathrm{cos}(x)`$. 
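The general solutions just obtained can also be verified symbolically; the sketch below checks that (27) satisfies (15) with $`a=c^2`$ and that (28) satisfies (16).

```python
import sympy as sp

x, A, B, D, b, c = sp.symbols("x A B D b c", real=True)
u = c*(x - A)
den = B*sp.cosh(u) - sp.sinh(u)

y = c*(B*sp.sinh(u) - sp.cosh(u))/den             # Eq. (27)
z = ((b/c)*(B*sp.sinh(u) - sp.cosh(u)) + D)/den   # Eq. (28)

print(sp.simplify(sp.diff(y, x) + y**2 - c**2))   # 0: y solves (15) with a = c^2
print(sp.simplify(sp.diff(z, x) + y*z - b))       # 0: z solves (16)
```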
The results are summarized in Table 1. Looking at the general solution of (15) for $`a>0`$, i.e. equation (27), one could be tempted to write it in the form of a logarithmic derivative, $$y(x)=\frac{d}{dx}\mathrm{log}|B\mathrm{cosh}(c(xA))\mathrm{sinh}(c(xA))|.$$ This is equivalent except for $`B\mathrm{}`$. In fact, if we want to calculate $$\underset{B\mathrm{}}{lim}\frac{d}{dx}\mathrm{log}|B\mathrm{cosh}(c(xA))\mathrm{sinh}(c(xA))|$$ we *cannot* interchange the limit with the derivative, otherwise we would get a wrong result. But this limit for $`B`$ is particularly important since when taking it in (27), we recover the particular solution (25). A similar thing happens in the general solutions (30) and (33). When taking the limit $`B\mathrm{}`$ we recover, respectively, the particular solutions (29) and (32), from which we have started. Both of (30) and (33) can be written in the form of a logarithmic derivative, but then the limit $`B\mathrm{}`$ could not be calculated properly. ## 4 Shape Invariant potentials depending on an arbitrary number of parameters transformed by translation We will try now to generalize the class of possible factorizations considered in . We analyze the possibility of introducing superpotentials depending on an arbitrary but finite number of parameters $`n`$ which transforms by translation. This will give in turn the still unsolved problem proposed in . More explicitly, suppose that within the parameter space some of them transform according to $$f(a_i)=a_iϵ_i,i\mathrm{\Gamma },$$ (35) and the remainder according to $$f(a_j)=a_j+ϵ_j,j\mathrm{\Gamma }^{},$$ (36) where $`\mathrm{\Gamma }\mathrm{\Gamma }^{}=\{1,\mathrm{},n\}`$, and $`ϵ_i0`$ for all $`i`$. Using a reparametrization, one can normalize each parameter in units of $`ϵ_i`$, that is, we can introduce the new parameters $$m_i=\frac{a_i}{ϵ_i},i\mathrm{\Gamma },\text{and}m_j=\frac{a_j}{ϵ_j},j\mathrm{\Gamma }^{},$$ (37) for which the transformation law reads, with a slight abuse of the notation $`f`$, $$f(m_i)=m_i1,i=1,\mathrm{},n.$$ (38) Note that with these normalization, the initial values of each $`m_i`$ are defined by some value in the interval $`(0,1](mod)`$. We will use the notation $`m1`$ for the $`n`$–tuple $`m1=(m_11,m_21,\mathrm{},m_n1)`$. The transformation law for the parameters (38) is just a particular case of a more general transformation considered in . As a corollary of a result proved there we have following one. *The problem of finding the square integrable solutions of the equation* $$\frac{d^2y}{dx^2}+r(x,m)y+\lambda y=0,$$ (39) *according to the generalization of the Infeld and Hull Factorization Method treated in \[7, Sec. 3\], is equivalent to that of solving the discrete eigenvalue problem of Shape Invariant potentials in the sense of depending on the same $`n`$–tuple of parameters $`m(m_1,m_2,\mathrm{},m_n)`$ which transform according to (38)*. In order to find solutions for these problems, we should find solutions of the difference-differential equation $$k^2(x,m+1)k^2(x,m)+\frac{dk(x,m+1)}{dx}+\frac{dk(x,m)}{dx}=L(m)L(m+1),$$ (40) where now $`m=(m_1,m_2,\mathrm{},m_n)`$ denotes the set of parameters and $`m+1`$ means $`m+1=(m_1+1,m_2+1,\mathrm{},m_n+1)`$, and $`L(m)`$ is some function to be determined, related to $`R(m)`$ by $`R(m)=L(m)L(m+1)`$. Equation (40) is essentially equivalent to the Shape Invariance condition $`\stackrel{~}{V}(x,m)=V(x,m1)+R(m1)`$ for problems defined by (38) . 
We would like to remark that (40) always has the trivial solution $`k(x,m)=h(m)`$, for every arbitrary function $`h(m)`$ of the parameters only. Our first assumption for the dependence of $`k(x,m)`$ on $`x`$ and $`m`$ will be a generalization of the one used for the case of one parameter introduced in , $$k(x,m)=k_0(x)+mk_1(x),$$ (41) where $`k_0`$ and $`k_1`$ are functions of $`x`$ only. The generalization to $`n`$ parameters is $$k(x,m)=g_0(x)+\underset{i=1}{\overset{n}{}}m_ig_i(x).$$ (42) This form for $`k(x,m)`$ is exactly the same as the one proposed in \[8, Eqs. (6.24)\] taking into account (37) and (38), up to a slightly different notation. Substituting into (40) we obtain $`L(m)L(m+1)`$ $`=2{\displaystyle \underset{j=1}{\overset{n}{}}}m_j\left(g_j^{}+g_j{\displaystyle \underset{i=1}{\overset{n}{}}}g_i\right)+{\displaystyle \underset{j=1}{\overset{n}{}}}(g_j^{}+g_j{\displaystyle \underset{i=1}{\overset{n}{}}}g_i)+2\left(g_0^{}+g_0{\displaystyle \underset{i=1}{\overset{n}{}}}g_i\right).`$ (43) Since the coefficients of the powers of each $`m_i`$ have to be constant, we obtain the following first order differential equation system to be satisfied, $`g_j^{}+g_j{\displaystyle \underset{i=1}{\overset{n}{}}}g_i=c_j,j\{1,\mathrm{},n\},`$ (44) $`g_0^{}+g_0{\displaystyle \underset{i=1}{\overset{n}{}}}g_i=c_0,`$ (45) where $`c_i`$, $`i\{0,\mathrm{\hspace{0.17em}1},\mathrm{},n\}`$ are real constants. The solution of the system can be found by using barycentric coordinates for the $`g_i`$’s, that is, the functions which separate the unknowns $`g_i`$’s in their mass–center coordinates and relative ones. Hence, we will make the following change of variables and use the notations $`g_{cm}(x)`$ $`=`$ $`{\displaystyle \frac{1}{n}}{\displaystyle \underset{i=1}{\overset{n}{}}}g_i(x),`$ (46) $`v_j(x)`$ $`=`$ $`g_j(x)g_{cm}(x)={\displaystyle \frac{1}{n}}\left(ng_j(x){\displaystyle \underset{i=1}{\overset{n}{}}}g_i(x)\right),`$ (47) $`c_{cm}`$ $`=`$ $`{\displaystyle \frac{1}{n}}{\displaystyle \underset{i=1}{\overset{n}{}}}c_i,`$ (48) where $`j\{1,\mathrm{},n\}`$. Note that not all of the functions $`v_j`$ are now linearly independent, but only $`n1`$ since $`_{j=1}^nv_j=0`$. Taking the sum of equations (44) we obtain that $`ng_{cm}`$ satisfies the Riccati equation with constant coefficients $$ng_{cm}^{}+(ng_{cm})^2=nc_{cm}.$$ On the other hand, we will consider the independent functions $`v_j(x)`$, $`j\{2,\mathrm{},n\}`$ to complete the system. Using equations (47) and (44) we find $`v_j^{}`$ $`=`$ $`{\displaystyle \frac{1}{n}}(ng_j^{}{\displaystyle \underset{i=1}{\overset{n}{}}}g_i^{})`$ $`=`$ $`{\displaystyle \frac{1}{n}}(g_j^{}g_1^{}+g_j^{}g_2^{}+\mathrm{}+g_j^{}g_j^{}+\mathrm{}+g_j^{}g_n^{})`$ $`=`$ $`v_jng_{cm}+c_jc_{cm},`$ and we will take the corresponding equations from $`2`$ to $`n`$. The system of equations (44) and (45) is written in the new coordinates as $`ng_{cm}^{}+(ng_{cm})^2=nc_{cm},`$ (49) $`v_j^{}+v_jng_{cm}=c_jc_{cm},j\{2,\mathrm{},n\},`$ (50) $`g_0^{}+g_0ng_{cm}=c_0,`$ (51) and therefore the motion of the center of mass is decoupled from the other coordinates. But we already know the general solutions of equation (49), which is nothing but the equation (15) studied in the preceding section with the identification of $`y`$ and $`a`$ with $`ng_{cm}`$ and $`nc_{cm}`$, respectively. Therefore the possible solutions depend on the sign of $`nc_{cm}`$, that is, on the sign of the sum $`_{i=1}^nc_i`$ of all the constants appearing in equations (44). 
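The decoupling of the center-of-mass coordinate can also be checked symbolically. The sketch below is a minimal verification in Python with `sympy` for the illustrative case $`n=3`$ (the symbols `g1`, …, `g3` and `c1`, …, `c3` are placeholders for the functions and constants of equations (44)): it substitutes the system (44) into the derivatives of $`ng_{cm}`$ and of the relative coordinate $`v_2`$, confirming equations (49) and (50).

```python
import sympy as sp

x = sp.symbols('x')
n = 3
c = sp.symbols('c1:4')                       # constants c_1, c_2, c_3
g = [sp.Function(f'g{i+1}')(x) for i in range(n)]
S = sum(g)                                   # S = n*g_cm
c_cm = sum(c) / n

# System (44): g_j' + g_j * sum_i g_i = c_j   =>   g_j' = c_j - g_j * S
rules = {sp.Derivative(g[j], x): c[j] - g[j]*S for j in range(n)}

# Summing the system: S' + S**2 = n*c_cm, the Riccati equation (49)
lhs_cm = sp.diff(S, x).subs(rules) + S**2
print(sp.simplify(lhs_cm - n*c_cm))          # expected: 0

# The relative coordinate v_2 = g_2 - S/n obeys the linear equation (50)
v2 = g[1] - S/n
lhs_v2 = sp.diff(v2, x).subs(rules) + v2*S
print(sp.simplify(lhs_v2 - (c[1] - c_cm)))   # expected: 0
```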
Moreover, all the remaining equations (50) and (51) are linear differential equations of the form (16), identifying $`z`$ as $`v_j`$ or $`g_0`$, and the constant $`b`$ as $`c_jc_{cm}`$ or $`c_0`$, respectively. The general solution of these equations is readily found once $`ng_{cm}`$ is known, by means of the formula (17) adapted to each case. As a result the general solutions for the variables $`ng_{cm}`$, $`v_j`$ and $`g_0`$ are directly found by just looking at Table 1 and making the proper substitutions. The results are shown in Table 2. Once the solutions of equations (49), (50) and (51) are known it is easy to find the expressions for $`g_i(x)`$ and $`g_0(x)`$ by reversing the change defined by (46) and (47). It is easy to prove that it is indeed invertible with inverse change given by $`g_1(x)`$ $`=`$ $`g_{cm}(x){\displaystyle \underset{i=2}{\overset{n}{}}}v_i(x),`$ (52) $`g_j(x)`$ $`=`$ $`g_{cm}(x)+v_j(x),j\{2,\mathrm{},n\}.`$ (53) For each of the three families of solutions shown in Table 2, one can quickly find the corresponding functions $`g_i(x)`$, $`g_0(x)`$, and hence the function $`k(x,m)`$ according to (42). The results are shown in Table 3. We can now calculate the corresponding Shape Invariant partner potentials by means of the formulas (10), (13) and (14) adapted to this case. The results are shown in Table 4. Let us comment on the solutions for the function $`k(x,m)`$ in Table 3 and for the Shape Invariant potentials in Table 4 we have just found. It is remarkable that the constants $`c_i`$, $`c_0`$, of equations (44), (45) appear always in the solutions by means of the combination $`c_0+_{i=1}^nm_ic_i`$. On the other hand, $`\stackrel{~}{D}`$ does not change under the transformation $`m_im_i1`$ since it depends only on differences of the $`m_i`$’s. As $`D_0,D_2,\mathrm{},D_n`$ are arbitrary constants, $`\stackrel{~}{D}=D_0+_{i=2}^nD_i(m_im_1)`$ can be regarded as an arbitrary constant as well. It is very easy to check that the functions $`k(x,m)`$ satisfy indeed (40), just taking into account that $`nc_{cm}=_{i=1}^nc_i`$ and that when $`nc_{cm}=C^2`$, $`_{i=1}^nc_i/C=C`$, meanwhile $`_{i=1}^nc_i/C=C`$ when $`nc_{cm}=C^2`$. Obviously, for the case $`nc_{cm}=0`$ we have $`_{i=1}^nc_i=0`$. As we have mentioned already, (40) is essentially equivalent to the Shape Invariance condition $`\stackrel{~}{V}(x,m)=V(x,m1)+R(m1)`$, but this last can be checked directly. In order to do it, it may be useful to recall several relations that the functions defined in Table 2 satisfy. When $`nc_{cm}=C^2`$ we have $`f_+^{}=C(1f_+^2)=C(B^21)h_+^2,h_+^{}=Cf_+h_+,`$ when $`nc_{cm}=0`$, $`f_0^{}=Bf_0^2,h_0^{}=Bf_0h_0+1,`$ and finally when $`nc_{cm}=C^2`$, $`f_{}^{}=C(1+f_{}^2)=C(B^2+1)h_{}^2,h_{}^{}=Cf_{}h_{},`$ where the prime means derivative respect to $`x`$. The arguments of the functions are the same as in the mentioned table and have been dropped out for simplicity. When we have only one parameter, that is, $`n=1`$, one recovers the solutions for $`k(x,m)=k_0(x)+mk_1(x)`$ shown in the first column of \[7, Table 6\], and the corresponding Shape Invariant partner potentials of Table 7 in the same reference. For all cases in Table 4, the formal expression of $`R(m)`$ is exactly the same, but either $`_{i=1}^nc_i=nc_{cm}`$ have different sign or vanish. Let us consider now the problem of how to determine $`L(m)`$ from $`R(m)`$. The method does not provide the expression of $`L(m)`$ but of $`L(m)L(m+1)`$. In fact, there is a freedom in determining this function $`L(m)`$. 
Fortunately, for the purposes of Quantum Mechanics the relevant function is $`R(m)`$, from which the energy spectrum of Shape Invariant potentials in the sense of is calculated . However, let us show how this underdetermination appears. Since $`R(m)=L(m)L(m+1)=2(c_0+{\displaystyle \underset{i=1}{\overset{n}{}}}m_ic_i)+{\displaystyle \underset{i=1}{\overset{n}{}}}c_i`$ (54) is a polynomial in the $`n`$ parameters $`m_i`$, and we have considered only polynomial functions of these quantities so far, $`L(m)`$ should be also a polynomial. It is of degree two, otherwise a simple calculation would show that the coefficients of terms of degree 3 or higher must vanish. So, we propose $`L(m)=_{i,j=1}^nr_{ij}m_im_j+_{i=1}^ns_im_i+t`$, where $`r_{ij}`$ is symmetric, $`r_{ij}=r_{ji}`$. Therefore, there are $`\frac{1}{2}n(n+1)+n+1`$ constants to be determined. Then, making use of the symmetry of $`r_{ij}`$ in its indices we obtain $`L(m)L(m+1)`$ $`=`$ $`2{\displaystyle \underset{i,j=1}{\overset{n}{}}}r_{ij}m_i{\displaystyle \underset{i,j=1}{\overset{n}{}}}r_{ij}{\displaystyle \underset{i=1}{\overset{n}{}}}s_i.`$ Comparing with (54) we find the following conditions to be satisfied $$\underset{j=1}{\overset{n}{}}r_{ij}=c_i,i\{1,\mathrm{},n\},\text{and}\underset{i=1}{\overset{n}{}}s_i=2c_0.$$ The first of these equations expresses the problem of finding symmetric matrices of order $`n`$ whose rows (or columns) sum $`n`$ given numbers. That is, to solve a linear system of $`n`$ equations with $`\frac{1}{2}n(n+1)`$ unknowns. For $`n>1`$ the solutions determine an affine space of dimension $`\frac{1}{2}n(n+1)n=\frac{1}{2}n(n1)`$. Moreover, for $`n>1`$ the second condition determine always an affine space of dimension $`n1`$. The well known case of of $`n=1`$ gives unique solution to both conditions. However, the constant $`t`$ remains always undetermined. We will try to find now other generalizations of Shape Invariant potentials which depend on $`n`$ parameters transformed by means of a translation. We should try a generalization using inverse powers of the parameters $`m_i`$; we know already that for the case $`n=1`$ there appear at least three new families of solutions (see Table 6 in ). So, we will try a solution of the following type, provided $`m_i0`$, for all $`i`$, $$k(x,m)=\underset{i=1}{\overset{n}{}}\frac{f_i(x)}{m_i}+g_0(x)+\underset{i=1}{\overset{n}{}}m_ig_i(x).$$ (55) Here, $`f_i(x)`$, $`g_i(x)`$ and $`g_0(x)`$ are functions of $`x`$ to be determined. Substituting into (40) we obtain, after a little algebra, $`L(m)L(m+1)={\displaystyle \underset{i,j=1}{\overset{n}{}}}{\displaystyle \frac{f_if_j(1+m_i+m_j)}{m_i(m_i+1)m_j(m_j+1)}}2g_0{\displaystyle \underset{i=1}{\overset{n}{}}}{\displaystyle \frac{f_i}{m_i(m_i+1)}}`$ $`2{\displaystyle \underset{i,j=1}{\overset{n}{}}}{\displaystyle \frac{m_jg_jf_i}{m_i(m_i+1)}}+2{\displaystyle \underset{i,j=1}{\overset{n}{}}}{\displaystyle \frac{g_jf_i}{m_i+1}}+{\displaystyle \underset{i=1}{\overset{n}{}}}{\displaystyle \frac{2m_i+1}{m_i(m_i+1)}}{\displaystyle \frac{df_i}{dx}}+\mathrm{},`$ where the dots represents the right hand side of (43). The coefficients of each of the different dependences on the parameters $`m_i`$ have to be constant. The term $$\underset{i,j=1}{\overset{n}{}}\frac{f_if_j(1+m_i+m_j)}{m_i(m_i+1)m_j(m_j+1)}$$ involves a symmetric expression under the interchange of the indices $`i`$ and $`j`$. As a consequence we obtain that $`f_if_j=\text{Const.}`$ for all $`i,j`$. 
Since $`i`$ and $`j`$ run independently the only possibility is that $`f_i=\text{Const.}`$ for all $`i\{1,\mathrm{},n\}`$. We will assume that at least one of the $`f_i`$ is different from zero, otherwise we would be in the already studied case. Then, the term $$2g_0\underset{i=1}{\overset{n}{}}\frac{f_i}{m_i(m_i+1)},$$ gives us $`g_0=\text{Const.}`$ and the term which contains the derivatives of the $`f_i`$’s vanishes. The sum of the terms $$2\underset{i,j=1}{\overset{n}{}}\frac{g_jf_i}{m_i+1}2\underset{i,j=1}{\overset{n}{}}\frac{m_jg_jf_i}{m_i(m_i+1)}$$ is only zero for $`n=1`$. Then, for $`n>1`$ the first of them provides us $`_{i=1}^ng_i=\text{Const.}`$ and the second one, $`g_i=\text{Const.}`$ for all $`i\{1,\mathrm{},n\}`$. This is just a particular case of the trivial solution. For $`n=1`$, however, we obtain more solutions; is the case already discussed in . It should be noted that, in general, $$2\underset{i,j=1}{\overset{n}{}}\frac{g_jf_i}{m_i+1}2\underset{i,j=1}{\overset{n}{}}\frac{m_jg_jf_i}{m_i(m_i+1)}2\underset{i,j=1}{\overset{n}{}}\frac{f_ig_j}{m_i+1}\left(1\frac{m_j}{m_i}\right),$$ as one could be tempted to write if one does not take care. Using the last equation as being true will lead to incorrect results. As a conclusion we obtain that the trial solution $`k(x,m)`$ corresponding to that of the case $`n=1`$ admits no non–trivial generalization to solutions of the type (55). It can be shown that if we propose further generalizations to greater degree inverse powers of the parameters $`m_i`$, the only solution is also a trivial one. For example, if we try a solution of type $$k(x,m)=\underset{i,j=1}{\overset{n}{}}\frac{h_{ij}(x)}{m_im_j}+\underset{i=1}{\overset{n}{}}\frac{f_i(x)}{m_i}+g_0(x)+\underset{i=1}{\overset{n}{}}m_ig_i(x),$$ (56) where $`h_{ij}(x)=h_{ji}(x)`$, the only possibility we will obtain is that all involved functions of $`x`$ have to be constant. Now we try to generalize (42) to higher positive powers. That is, we will try now a solution of type $$k(x,m)=g_0(x)+\underset{i=1}{\overset{n}{}}m_ig_i(x)+\underset{i,j=1}{\overset{n}{}}m_im_je_{ij}(x).$$ (57) Substituting into (40) we obtain, after several calculations, $`L(m)L(m+1)=4{\displaystyle \underset{i,j,k,l=1}{\overset{n}{}}}m_im_jm_ke_{ij}e_{kl}+4{\displaystyle \underset{i,j,k,l=1}{\overset{n}{}}}m_ie_{ij}m_k(e_{kl}+g_k)`$ $`+2{\displaystyle \underset{i,j=1}{\overset{n}{}}}m_im_j\left({\displaystyle \underset{k,l=1}{\overset{n}{}}}(e_{kl}+g_l)e_{ij}+{\displaystyle \frac{de_{ij}}{dx}}\right)`$ $`+4{\displaystyle \underset{i,j=1}{\overset{n}{}}}m_ie_{ij}\left({\displaystyle \underset{k,l=1}{\overset{n}{}}}(e_{kl}+g_l)+g_0\right)`$ $`+2{\displaystyle \underset{i=1}{\overset{n}{}}}m_i\left(g_i{\displaystyle \underset{j,k=1}{\overset{n}{}}}(e_{jk}+g_j)+{\displaystyle \frac{d}{dx}}{\displaystyle \underset{k=1}{\overset{n}{}}}(e_{ik}+g_i)\right)`$ $`+{\displaystyle \underset{i,j=1}{\overset{n}{}}}(e_{ij}+g_i)\left({\displaystyle \underset{k,l=1}{\overset{n}{}}}(e_{lk}+g_l)+2g_0\right)+{\displaystyle \frac{d}{dx}}\left({\displaystyle \underset{i,j=1}{\overset{n}{}}}(e_{ij}+g_i)+2g_0\right).`$ (58) As in previous cases, the coefficients of each different type of dependence on the parameters $`m_i`$ have to be constant. Let us analyze the term of higher degree i.e. the first term on the right hand side of (58). Since it contains a completely symmetric sum in the parameters $`m_i`$, the dependence on the functions $`e_{ij}`$ should also be completely symmetric in the corresponding indices. 
For that reason, we rewrite it as $$4\underset{i,j,k,l=1}{\overset{n}{}}m_im_jm_ke_{ij}e_{kl}=\frac{4}{3}\underset{i,j,k,l=1}{\overset{n}{}}m_im_jm_k(e_{ij}e_{kl}+e_{jk}e_{il}+e_{ki}e_{jl}),$$ from where it is found the necessary condition $$\underset{l=1}{\overset{n}{}}(e_{ij}e_{kl}+e_{jk}e_{il}+e_{ki}e_{jl})=d_{ijk},i,j,k\{1,\mathrm{},n\},$$ where $`d_{ijk}`$ are completely symmetric in their three indices constants. The number of independent equations of this type is just the number of independent components of a completely symmetric in its three indices tensor, each one running from 1 to $`n`$. This number is $`\frac{1}{6}n(n+1)(n+2)`$. The number of independent variables $`e_{ij}`$ is $`\frac{1}{2}n(n+1)`$ from the symmetry on the two indices. Then, the number of unknowns minus the number of equations is $$\frac{1}{2}n(n+1)\frac{1}{6}n(n+1)(n+2)=\frac{1}{6}(n1)n(n+1).$$ For $`n=1`$ the system has the simple solution $`e_{11}=\text{Const}`$. For $`n>1`$ the system is not compatible and has no solutions apart from the trivial one $`e_{ij}=\text{Const.}`$ for all $`i,j`$. In either of these cases, it is very easy to deduce from the other terms in (58) that all of the remaining functions have to be constant as well, provided that not all of the constants $`e_{ij}`$ vanish. For higher positive power dependence on the parameters $`m_i`$’s a similar result holds. In fact, let us suppose that the higher order term in our trial solution is of degree $`q`$, $`_{i_1,\mathrm{},i_q=1}^nm_{i_1}m_{i_2}\mathrm{}m_{i_q}T_{i_1,\mathrm{},i_q},`$ where $`T_{i_1,\mathrm{},i_q}`$ is a completely symmetric tensor in its indices. Then, is easy to prove that the higher order term appearing after substitution in (40) is a sum whose general term is of degree $`2q1`$ in the $`m_i`$’s. It is completely symmetric under interchange of their indices, and appears the product of $`T_{i_1,\mathrm{},i_q}`$ by itself but with one index summed. One then has to symmetrize the expression of the $`T`$’s in order to obtain the independent equations to be satisfied, which is equal to the number of independent components of a completely symmetric tensor in its $`2q1`$ indices. This number is $`(n+2(q1))!/(2q1)!(n1)!`$. The number of independent unknowns is $`(n+q1)!/q!(n1)!`$. So, the number of unknowns minus the one of equations is $$\frac{(n+q1)!}{q!(n1)!}\frac{(n+2(q1))!}{(2q1)!(n1)!}.$$ This number vanishes always for $`n=1`$, which means that the problem is determined and we obtain that $`T_{1,\mathrm{},\mathrm{\hspace{0.17em}1}}=\text{Const.}`$, in agreement with \[12, p. 28\]. If $`n>1`$, one can easily check that for $`q>1`$ that number is negative and hence there cannot be other solution apart from the trivial solution $`T_{i_1,\mathrm{},i_q}=\text{Const.}`$ for all $`i_1,\mathrm{},i_q\{1,\mathrm{},n\}`$. From the terms of lower degree one should conclude that the only possibility is a particular case of the trivial solution. ## 5 Conclusions and outlook Let us comment on the relevance of the more important result of this paper, that is, the fact that we have been able to solve the differential equation system (44) and (45). That problem was posed, but not solved, in a often cited paper by Cooper, Ginocchio and Khare in Physical Review D \[8, pp. 2471–2\]. They use a slightly different notation but it can be identified their formulas $`(6.24)`$ with our (42) and our procedure by an appropriate redefinition of the parameters taking into account (37) and (35). 
However, they failed to find any solution to these equations (for $`n>2`$), and believed that such a solution could hardly exist. The conclusion is conceptually of great importance: it has been made clear that an arbitrary but finite number of parameters subject to transformation is not an obstacle to the existence of Shape Invariant partner potentials and, hence, to the existence of exactly solvable problems in Quantum Mechanics. This leaves the door open to the possibility of posing, and perhaps solving, further generalizations. It also offers the possibility of gathering particular cases of known Shape Invariant partner potentials, spread over the extensive literature on the subject (see e.g. and references therein), into one simple but powerful classification scheme. In this sense, we think the solution we have found here is very important, as it completes the excellent work started in . Another conceptually important point is that we have gained much more generality in the solution of the problem by a particularly simple but powerful idea: to consider the general solution of the Riccati equation with constant coefficients, which yields all subsequent solutions, rather than particular ones. In doing this, the important properties of the Riccati equation have been of great use. As a byproduct of our present results and those of , it is not difficult to see that for $`n=1`$ most of the solutions contained in \[8, Sec. VI\], later reproduced e.g. in , are directly related to some results of the classic paper , since they are solutions of essentially the same equations. One of the authors (A.R.) thanks the Spanish Ministerio de Educación y Cultura for an FPI grant, research project PB96–0717. Support of the Spanish DGES (PB96–0717) is also acknowledged. ## References
# On the tilting of protostellar disks by resonant tidal effects ## 1. Introduction The existence of disks around young stars was spectacularly confirmed by direct images from the Hubble Space Telescope (HST) (McCaughrean & O’Dell 1996; Burrows et al. 1996). Observations suggest that young stars are usually found in binary systems and that young binaries typically interact strongly with the disks that surround the stars (Ghez, Neugebauer, & Matthews 1993; Mathieu 1994; Osterloh & Beckwith 1995; Jensen, Mathieu, & Fuller 1996). There is growing evidence that disks within a binary are sometimes inclined with respect to the binary orbital plane. Such a case may have been seen in HST and Keck images of a disk in the young binary HK Tau (Stapelfeldt et al. 1998; Koresko 1998). Suppose that a protostellar disk surrounds a star in a circular-orbit binary system, and that the disk is tilted with respect to the binary orbital plane. The evolution of the disk is affected by the tidal field of the companion star, as has been considered by Papaloizou & Terquem (1995). Some features of their analysis were confirmed in three-dimensional numerical simulations by Larwood et al. (1996). The basic physics involved may be summarized as follows (see also Bate et al. 2000). In a non-rotating frame of reference centered on the star about which the disk orbits, the companion star orbits at the binary frequency $`\mathrm{\Omega }_\mathrm{b}`$ and exerts a time-dependent tidal torque on the disk. This torque may be decomposed into a steady component and an oscillatory component with a frequency of $`2\mathrm{\Omega }_\mathrm{b}`$, and their effects may be considered separately. Consider first the steady torque. If the disk were composed of non-interacting circular rings, the steady torque would cause each ring to precess, about an axis perpendicular to the binary plane, at a rate that depends on the radius of the ring, resulting in a rapid twisting of the disk. However, if the disk is able to maintain efficient radial communication, whether by wave propagation, viscosity, or self-gravitation, it may be able to resist this differential precession by establishing an internal torque in the disk. This can be arranged so that the net torque on each ring is such as to produce a single, uniform precession rate. However, to establish this internal torque, the disk must become warped. The concomitant dissipation changes the total angular momentum of the disk, tending to bring it into alignment with the binary plane in addition to causing accretion. Consider now the oscillatory torque. Applied to a single ring, this would cause a modulation of the precession rate and also a nutation (Katz et al. 1982). However, in the presence of radial communication, the oscillatory torque drives a bending wave (with azimuthal wavenumber $`m=1`$) in the disk. Papaloizou & Terquem (1995) showed that, if the wave is subject to dissipation, it too may change the total angular momentum of the disk and tend to increase its inclination. The net effect of the steady and oscillatory torques determines whether an initially coplanar disk will acquire a tilt over time or whether an initially inclined disk will evolve towards coplanarity. The purpose of this paper is to determine this outcome, which could provide clues to the origin of misaligned disks in systems such as HK Tau. 
The basic mechanism suggested by Papaloizou & Terquem for generating a tilt by the oscillatory torque can be related to earlier work by Lubow (1992), who showed that an aligned, Keplerian disk in a circular binary may be linearly unstable to tilting if it contains a local resonance at which the orbital angular velocity $`\mathrm{\Omega }(r)`$ satisfies $$\mathrm{\Omega }=\left(\frac{m_*}{m_*-2}\right)\mathrm{\Omega }_\mathrm{b}.$$ (1) Here $`m_*`$ is the azimuthal wavenumber of the component of the tidal potential that is involved in the instability cycle. The cycle works through a mode-coupling process as follows: given a perturbation with $`m=1`$ (a tilt), the tidal potential interacts with it to drive a wave with $`m=m_*-1`$ at the resonant radius. This in turn interacts with the tidal potential to produce a stress with $`m=1`$, which can influence the tilt. The role of dissipation is subtle, since some dissipation is required to provide a change in the angular momentum of the disk if instability is to occur, yet the associated damping can compete with the intrinsic growth rate of the instability. In particular, if the disk extends to the $`3:1`$ resonance ($`\mathrm{\Omega }=3\mathrm{\Omega }_\mathrm{b}`$) it may be unstable to tilting through the $`m=3`$ component of the tidal potential. This resonance has the smallest $`m_*`$ for which equation (1) can be satisfied (for a prograde disk) and is the closest resonance to the central star. It is difficult for disks to extend even as far as the $`3:1`$ resonance, because of the effects of tidal truncation (Paczyński 1977; Papaloizou & Pringle 1977). Superhump binary disks might extend to the $`3:1`$ resonance because of their extreme binary mass ratios, the secondary companion having less than $`1/5`$ the mass of the primary about which the disk orbits (see the review by Osaki 1996). This instability is related to, and occurs at the same position as, the eccentric instability that is believed to be responsible for superhumps in cataclysmic variable disks (Lubow 1991). However, the growth rate is invariably much smaller for tilting than for eccentricity, and the weak tilt instability may be suppressed by the effects of viscous damping and accretion (Murray & Armitage 1998). The same instabilities had been previously identified in the context of planetary rings for higher $`m_*`$ (Goldreich & Tremaine 1981; Borderies, Goldreich, & Tremaine 1984). More fundamentally, free-particle orbits undergo even stronger, parametric instabilities at these resonant locations (Paczyński 1977), although free particles fail to model properly the behavior of a fluid disk at resonances. We relate this theory to the suggestion of Papaloizou & Terquem (1995) by noticing in equation (1) that, for $`m_*=2`$, a near resonance is obtained in the inner part of the disk where $`\mathrm{\Omega }\gg \mathrm{\Omega }_\mathrm{b}`$. Indeed, Papaloizou & Terquem rely on the $`m=2`$ component of the tidal potential to drive an $`m=1`$ bending wave in the tilted disk. The resulting response is a slowly rotating $`m=1`$ bending wave, with frequency $`2\mathrm{\Omega }_\mathrm{b}`$ in the inertial frame. Such a wave is close to resonance in the inner part of a nearly Keplerian disk because of the near coincidence of the effective wave driving frequency $`\mathrm{\Omega }-2\mathrm{\Omega }_\mathrm{b}`$ and the frequency of vertical oscillations $`\mathrm{\Omega }_z\simeq \mathrm{\Omega }`$; this is indeed the origin of equation (1) with $`m_*=2`$. 
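For orientation, the nominal locations of the resonances defined by equation (1) are easy to evaluate for a Keplerian disk around the primary in a circular binary. The short sketch below (Python; the mass ratios used are only illustrative) locates the $`3:1`$ ($`m_*=3`$) resonance in units of the binary separation, and makes explicit that the formal $`m_*=2`$ case would require $`\mathrm{\Omega }\to \mathrm{\infty }`$, i.e. it can only be approached in the innermost disk.

```python
def resonance_radius(m_star, q):
    """Radius (in units of the binary separation r_b) at which
    Omega(r) = [m_star/(m_star - 2)] * Omega_b, for a Keplerian disk around
    the primary (mass M1) with a companion M2 = q*M1 on a circular orbit."""
    ratio = m_star / (m_star - 2.0)               # required Omega/Omega_b
    # Omega/Omega_b = (r/r_b)**(-1.5) / sqrt(1 + q)  =>  solve for r/r_b
    return (ratio**2 * (1.0 + q))**(-1.0 / 3.0)

for q in (1.0, 0.2):                              # illustrative mass ratios M2/M1
    print(f"q = {q}:  3:1 resonance at r/r_b = {resonance_radius(3, q):.2f}")
# q = 1.0 gives r/r_b ~ 0.38; q = 0.2 gives r/r_b ~ 0.45.  For m_star = 2 the
# required Omega diverges, so the condition can only be approached as r -> 0.
```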
An additional resonant effect occurs owing to the near coincidence of the driving frequency and the epicyclic frequency of horizontal oscillations $`\kappa \mathrm{\Omega }`$. We describe the instability cycle associated with the oscillatory torque as a mode-coupling process in Fig. 1. However, because the resonance is not exact, and because of the importance of resonantly induced horizontal motions, a proper treatment requires a distinct analysis from that of Lubow (1992). In this paper, we therefore examine whether a flat, aligned disk in a binary is linearly unstable to tilting even if it does not extend to the $`3:1`$ resonance. This problem is most conveniently analyzed in the binary frame where the tidal potential is static, since the disk can then be considered to be steady and to admit normal modes. These modes do not have a pure azimuthal wavenumber because the disk is non-axisymmetric as a result of tidal distortions. However, the tilting instability, if present, may be expected to appear as a modification of the rigid-tilt mode, which is trivial in the absence of the companion star. This mode may be followed continuously as the mass of the companion is increased, in order to determine whether it acquires a net rate of growth or decay. In general, the analysis of a normal mode of a tidally distorted disk is very difficult owing to the non-axisymmetric distortions of the disk. We therefore adopt the following simple approach, which is appropriate when only $`m=1`$ bending waves are involved. We start by writing down the reduced equations for linear bending waves in a protostellar disk subject to an axisymmetric external potential (Section 2). These can be derived formally without great effort (see the Appendix). We then give a physical interpretation of these equations and use this insight to see how to modify them in the presence of a non-axisymmetric potential (Sections 3 and 4). We present a simple disk model (Section 5) and describe the results of numerical calculations of normal modes (Section 6). Some further analysis illuminates the underlying physics and helps to explain the numerical results (Section 7). Finally, we summarize our findings (Section 8). ## 2. Reduced description of linear bending waves Consider a thin, non-self-gravitating disk in an external gravitational potential $`\mathrm{\Phi }(r,z)`$ that is axisymmetric, but not necessarily spherically symmetric. Here $`(r,\varphi ,z)`$ are cylindrical polar coordinates. The orbital angular velocity $`\mathrm{\Omega }(r)`$, the epicyclic frequency $`\kappa (r)`$, and the vertical frequency $`\mathrm{\Omega }_z(r)`$ are defined by <sup>1</sup><sup>1</sup>1The true angular velocity of the fluid will depart from $`\mathrm{\Omega }`$ as a result of the radial pressure gradient and the vertical variation of the potential. Such departures generally depend on $`z`$ and are of fractional order $`(H/r)^2`$. They are fully taken into account in the analysis in the Appendix. 
$`\mathrm{\Omega }^2`$ $`=`$ $`{\displaystyle \frac{1}{r}}{\displaystyle \frac{\mathrm{\Phi }}{r}}|_{z=0},`$ (2) $`\kappa ^2`$ $`=`$ $`4\mathrm{\Omega }^2+2r\mathrm{\Omega }{\displaystyle \frac{d\mathrm{\Omega }}{dr}},`$ (3) $`\mathrm{\Omega }_z^2`$ $`=`$ $`{\displaystyle \frac{^2\mathrm{\Phi }}{z^2}}|_{z=0}.`$ (4) We consider a situation in which the disk is nearly Keplerian and almost inviscid in the sense that $`\left|{\displaystyle \frac{\kappa ^2\mathrm{\Omega }^2}{\mathrm{\Omega }^2}}\right|`$ $``$ $`{\displaystyle \frac{H}{r}},`$ (5) $`\left|{\displaystyle \frac{\mathrm{\Omega }_z^2\mathrm{\Omega }^2}{\mathrm{\Omega }^2}}\right|`$ $``$ $`{\displaystyle \frac{H}{r}},`$ (6) $`\alpha `$ $``$ $`{\displaystyle \frac{H}{r}},`$ (7) where $`H(r)`$ is the semi-thickness of the disk and $`\alpha `$ the dimensionless viscosity parameter. Then the linearized equations for bending waves (with azimuthal wavenumber $`m=1`$) may be written $$\mathrm{\Sigma }r^2\mathrm{\Omega }\left[\frac{W}{t}+\left(\frac{\mathrm{\Omega }_z^2\mathrm{\Omega }^2}{\mathrm{\Omega }^2}\right)\frac{i\mathrm{\Omega }}{2}W\right]=\frac{1}{r}\frac{G}{r},$$ (8) $$\frac{G}{t}+\left(\frac{\kappa ^2\mathrm{\Omega }^2}{\mathrm{\Omega }^2}\right)\frac{i\mathrm{\Omega }}{2}G+\alpha \mathrm{\Omega }G=\frac{r^3\mathrm{\Omega }^3}{4}\frac{W}{r}.$$ (9) Here $`\mathrm{\Sigma }(r)`$ is the surface density and $`(r)`$ the second vertical moment of the density, defined by $$\mathrm{\Sigma }=\rho 𝑑z,=\rho z^2𝑑z.$$ (10) The second moment is related to the integrated pressure through $$p𝑑z=\mathrm{\Omega }_z^2.$$ (11) The dimensionless complex variable $`W(r,t)`$ is defined by $`W=\mathrm{}_x+i\mathrm{}_y`$, where $`\mathbf{}(r,t)`$ is the tilt vector, a unit vector parallel to the local angular momentum vector of the disk. The complex variable $`G(r,t)`$ represents the internal torque which acts to communicate stresses radially through the disk (see below). The derivation of these equations may be found in the Appendix. Equivalent equations, although presented in quite different notations, have been derived by Papaloizou & Lin (1995) and Demianski & Ivanov (1997). In addition to conditions (5)–(7), it is required that the warp vary on a length-scale long compared to the thickness of the disk, and on a time-scale long compared to the local orbital time-scale. However, any evolution of the disk on the (much longer) viscous time-scale is neglected. The (dynamic) viscosity is assumed to be isotropic and proportional to the pressure ($`\mu =\alpha p/\mathrm{\Omega }`$). We emphasize the physical interpretation of these equations. Equation (8) contains the horizontal components of the angular momentum equation encoded in the combination ‘$`x+iy`$’. In vectorial form it may be written $$\mathrm{\Sigma }r^2\mathrm{\Omega }\frac{\mathbf{}}{t}=\frac{1}{r}\frac{𝑮}{r}+𝑻,$$ (12) where $`2\pi 𝑮(r,t)`$ is the internal torque and $`𝑻(r,t)`$ the external torque density acting on the disk. In the present case the external torque arises from a lack of spherical symmetry in the potential. The complex variable $`G`$ is simply $`G_x+iG_y`$. 
Its equation may be written, in vectorial form, $$\frac{𝑮}{t}+\left(\frac{\kappa ^2\mathrm{\Omega }^2}{\mathrm{\Omega }^2}\right)\frac{\mathrm{\Omega }}{2}𝒆_z\times 𝑮+\alpha \mathrm{\Omega }𝑮=\frac{r^3\mathrm{\Omega }^3}{4}\frac{\mathbf{}}{r}.$$ (13) The internal torque is mediated by horizontal epicyclic motions that are driven near resonance by horizontal pressure gradients in the warped disk, an effect identified by Papaloizou & Pringle (1983). The horizontal motions are proportional to $`z`$ and are therefore subject to strong viscous dissipation which is the dominant channel of damping of the bending waves. We note that slowly varying $`m=1`$ bending waves or warps may be quite generally described by conservation equations for mass and angular momentum (Pringle 1992; Ogilvie 1999). The relevant relation of $`𝑮`$ to $`\mathbf{}`$ and its derivatives, however, depends strongly on the thickness of the disk, the viscosity, and the rotation law. In this paper we are considering a parameter range appropriate to protostellar disks, and are assuming that the warping is small so that a linear theory is valid. For disks in which $`\alpha (H/r)`$, see Papaloizou & Pringle (1983), Pringle (1992), and Ogilvie (1999, 2000). ## 3. Tidal torque on a tilted ring There are two dynamical degrees of freedom in the system described by the above equations. One is the tilting of the disk at each radius according to the tilt vector $`\mathbf{}`$. The other is the horizontal motions described by $`𝑮`$, which cause eccentric distortions of the disk that are proportional to $`z`$. In spite of this complexity, the external torque density $`𝑻`$ in equation (12) is very simple (cf. eq. ): it is equal to the torque exerted by the external potential on a disk composed of arbitrarily thin circular rings of uniform density that are tilted according the tilt vector $`\mathbf{}`$. The eccentric distortions may be disregarded when calculating the external torque to the required order. <sup>2</sup><sup>2</sup>2The effects of eccentric and tidal distortions are considered implicitly in Section 7, where they are found to be unimportant for the linear growth rates we derive. We proceed to derive an expression for the torque exerted by the full potential of the companion star (of mass $`M_2`$) on a tilted ring of the disk, treated as a thin and narrow circular ring of radius $`r`$ and uniform density. Adopt Cartesian coordinates $`(x,y,z)`$ with origin at the center of the ring, and with the ring in the $`xy`$-plane. Then the position of an arbitrary point on the ring is $$𝒓=(r\mathrm{cos}\varphi ,r\mathrm{sin}\varphi ,0),$$ (14) where $`\varphi `$ is the azimuthal angle measured around the ring. Assume, without loss of generality, that the companion star lies instantaneously in the $`xz`$-plane at position $$𝒓_\mathrm{b}=(r_\mathrm{b}\mathrm{cos}\beta ,0,r_\mathrm{b}\mathrm{sin}\beta ),$$ (15) where $`r_\mathrm{b}`$ is the binary radius and $`\beta `$ the angle of inclination. Then the force per unit mass at position $`𝒓`$ on the ring is $$𝒇=\frac{GM_2(𝒓_\mathrm{b}𝒓)}{|𝒓_\mathrm{b}𝒓|^3},$$ (16) and the corresponding torque per unit mass is $$𝒕=𝒓\times 𝒇=\frac{GM_2(𝒓\times 𝒓_\mathrm{b})}{|𝒓_\mathrm{b}𝒓|^3}.$$ (17) The azimuthally averaged torque per unit mass is $$𝒕=\frac{1}{2\pi }_0^{2\pi }𝒕𝑑\varphi .$$ (18) The $`x`$\- and $`z`$-components vanish owing to the antisymmetry of the integrands. 
The remaining component is $$t_y=\frac{GM_2rr_\mathrm{b}\mathrm{sin}\beta }{2\pi }_0^{2\pi }\mathrm{cos}\varphi \left(r^2+r_\mathrm{b}^22rr_\mathrm{b}\mathrm{cos}\beta \mathrm{cos}\varphi \right)^{3/2}𝑑\varphi .$$ (19) In general, this may be expressed in terms of elliptic integrals. For small $`\beta `$, however, we have $$t_y=\frac{GM_2r\beta }{2r_\mathrm{b}^2}\left[b_{3/2}^{(1)}\left(\frac{r}{r_\mathrm{b}}\right)\right]+O(\beta ^3),$$ (20) where $$b_\gamma ^{(m)}(x)=\frac{2}{\pi }_0^\pi \mathrm{cos}(m\varphi )\left(1+x^22x\mathrm{cos}\varphi \right)^\gamma 𝑑\varphi $$ (21) is the Laplace coefficient. In vectorial form, therefore, the torque density is $$𝑻=\frac{GM_2}{2r_\mathrm{b}^4}\left[b_{3/2}^{(1)}\left(\frac{r}{r_\mathrm{b}}\right)\right]\mathrm{\Sigma }r(𝒓_\mathrm{b}\mathbf{})(𝒓_\mathrm{b}\times \mathbf{}),$$ (22) with fractional corrections of $`O(W^2)`$. ## 4. Dynamics in the binary frame We now consider the dynamics in the binary frame, which rotates with angular velocity $`𝛀_\mathrm{b}=\mathrm{\Omega }_\mathrm{b}𝒆_z`$. This requires that we replace, in equations (12) and (13), $$\frac{\mathbf{}}{t}\frac{\mathbf{}}{t}+𝛀_\mathrm{b}\times \mathbf{},\frac{𝑮}{t}\frac{𝑮}{t}+𝛀_\mathrm{b}\times 𝑮.$$ (23) With the companion star located on the positive $`x`$-axis, we have, in linear theory, $$𝑻=\frac{GM_2}{2r_\mathrm{b}^2}\left[b_{3/2}^{(1)}\left(\frac{r}{r_\mathrm{b}}\right)\right]\mathrm{\Sigma }r\mathrm{}_x𝒆_y.$$ (24) The orbital, epicyclic, and vertical frequencies are all calculated using the $`m=0`$ total potential. This gives $`\mathrm{\Omega }^2`$ $`=`$ $`{\displaystyle \frac{GM_1}{r^3}}+{\displaystyle \frac{GM_2}{2r_\mathrm{b}^2r}}\left[{\displaystyle \frac{r}{r_\mathrm{b}}}b_{3/2}^{(0)}\left({\displaystyle \frac{r}{r_\mathrm{b}}}\right)b_{3/2}^{(1)}\left({\displaystyle \frac{r}{r_\mathrm{b}}}\right)\right],`$ (25) $`\kappa ^2`$ $`=`$ $`{\displaystyle \frac{GM_1}{r^3}}+{\displaystyle \frac{GM_2}{2r_\mathrm{b}^2r}}\left[{\displaystyle \frac{r}{r_\mathrm{b}}}b_{3/2}^{(0)}\left({\displaystyle \frac{r}{r_\mathrm{b}}}\right)2b_{3/2}^{(1)}\left({\displaystyle \frac{r}{r_\mathrm{b}}}\right)\right],`$ (26) $`\mathrm{\Omega }_z^2`$ $`=`$ $`{\displaystyle \frac{GM_1}{r^3}}+{\displaystyle \frac{GM_2}{2r_\mathrm{b}^2r}}\left[{\displaystyle \frac{r}{r_\mathrm{b}}}b_{3/2}^{(0)}\left({\displaystyle \frac{r}{r_\mathrm{b}}}\right)\right],`$ (27) where $`M_1`$ is the mass of the star about which the disk orbits. 
We assume, without loss of generality, that $`\mathrm{\Omega }>0`$, but allow for the orbit of the companion star to be either prograde or retrograde according to $$\mathrm{\Omega }_\mathrm{b}=\pm \left[\frac{G(M_1+M_2)}{r_\mathrm{b}^3}\right]^{1/2}.$$ (28) The final equations are $$\mathrm{\Sigma }r^2\mathrm{\Omega }\left(\frac{\mathrm{}_x}{t}\mathrm{\Omega }_\mathrm{b}\mathrm{}_y\right)=\frac{1}{r}\frac{G_x}{r},$$ (29) $$\mathrm{\Sigma }r^2\mathrm{\Omega }\left(\frac{\mathrm{}_y}{t}+\mathrm{\Omega }_\mathrm{b}\mathrm{}_x\right)=\frac{1}{r}\frac{G_y}{r}\frac{GM_2}{2r_\mathrm{b}^2}\left[b_{3/2}^{(1)}\left(\frac{r}{r_\mathrm{b}}\right)\right]\mathrm{\Sigma }r\mathrm{}_x,$$ (30) $$\frac{G_x}{t}\mathrm{\Omega }_\mathrm{b}G_y+\frac{GM_2}{4r_\mathrm{b}^2r\mathrm{\Omega }}\left[b_{3/2}^{(1)}\left(\frac{r}{r_\mathrm{b}}\right)\right]G_y+\alpha \mathrm{\Omega }G_x=\frac{r^3\mathrm{\Omega }^3}{4}\frac{\mathrm{}_x}{r},$$ (31) $$\frac{G_y}{t}+\mathrm{\Omega }_\mathrm{b}G_x\frac{GM_2}{4r_\mathrm{b}^2r\mathrm{\Omega }}\left[b_{3/2}^{(1)}\left(\frac{r}{r_\mathrm{b}}\right)\right]G_x+\alpha \mathrm{\Omega }G_y=\frac{r^3\mathrm{\Omega }^3}{4}\frac{\mathrm{}_y}{r}.$$ (32) Since the coefficients of these equations are independent of time, we may seek normal modes of the form $`\mathrm{}_x(r,t)`$ $`=`$ $`\mathrm{Re}\left[\stackrel{~}{\mathrm{}}_x(r)e^{i\omega t}\right],`$ (33) $`\mathrm{}_y(r,t)`$ $`=`$ $`\mathrm{Re}\left[\stackrel{~}{\mathrm{}}_y(r)e^{i\omega t}\right],`$ (34) $`G_x(r,t)`$ $`=`$ $`\mathrm{Re}\left[\stackrel{~}{G}_x(r)e^{i\omega t}\right],`$ (35) $`G_y(r,t)`$ $`=`$ $`\mathrm{Re}\left[\stackrel{~}{G}_y(r)e^{i\omega t}\right],`$ (36) where $`\omega `$ is a complex frequency eigenvalue. The problem has then been reduced to solving an eigenvalue problem involving a fourth-order system of ordinary differential equations (ODEs). If we return to the original complex notation, we find that the equations have become non-analytic, effectively increasing the order of the dynamical system: $$\mathrm{\Sigma }r^2\mathrm{\Omega }\left(\frac{W}{t}+i\mathrm{\Omega }_\mathrm{b}W\right)=\frac{1}{r}\frac{G}{r}\frac{GM_2}{4r_\mathrm{b}^2}\left[b_{3/2}^{(1)}\left(\frac{r}{r_\mathrm{b}}\right)\right]\mathrm{\Sigma }ri(W+W^{}),$$ (37) $$\frac{G}{t}+i\mathrm{\Omega }_\mathrm{b}G\frac{GM_2}{4r_\mathrm{b}^2r\mathrm{\Omega }}\left[b_{3/2}^{(1)}\left(\frac{r}{r_\mathrm{b}}\right)\right]iG+\alpha \mathrm{\Omega }G=\frac{r^3\mathrm{\Omega }^3}{4}\frac{W}{r}.$$ (38) In the combination $`W+W^{}`$, the term $`W`$ arises from the $`m=0`$ component of the tidal potential (cf. eq. ), while the non-analytic term $`W^{}`$ arises from the $`m=2`$ component. In the normal-mode solution, $`W`$ and $`G`$ have the form $`W`$ $`=`$ $`W_+e^{i\omega t}+W_{}e^{i\omega ^{}t},`$ (39) $`G`$ $`=`$ $`G_+e^{i\omega t}+G_{}e^{i\omega ^{}t},`$ (40) where $`W_+`$ $`=`$ $`\frac{1}{2}(\stackrel{~}{\mathrm{}}_x+i\stackrel{~}{\mathrm{}}_y),`$ (41) $`W_{}`$ $`=`$ $`\frac{1}{2}(\stackrel{~}{\mathrm{}}_x^{}+i\stackrel{~}{\mathrm{}}_y^{}),`$ (42) $`G_+`$ $`=`$ $`\frac{1}{2}(\stackrel{~}{G}_x+i\stackrel{~}{G}_y),`$ (43) $`G_{}`$ $`=`$ $`\frac{1}{2}(\stackrel{~}{G}_x^{}+i\stackrel{~}{G}_y^{}).`$ (44) The motion seen in the inertial frame is more complicated than a single mode. We have $$\mathbf{}=\mathrm{}_x𝒆_x+\mathrm{}_y𝒆_y+\mathrm{}_z𝒆_z,$$ (45) where $`(𝒆_x,𝒆_y,𝒆_z)`$ are unit vectors in the binary frame. 
These are related to the unit vectors $`(\widehat{𝒆}_x,\widehat{𝒆}_y,𝒆_z)`$ in the inertial frame by $$𝒆_xi𝒆_y=(\widehat{𝒆}_xi\widehat{𝒆}_y)e^{i\mathrm{\Omega }_\mathrm{b}t},$$ (46) and so $$\mathbf{}=\widehat{\mathrm{}}_x\widehat{𝒆}_x+\widehat{\mathrm{}}_y\widehat{𝒆}_y+\mathrm{}_z𝒆_z,$$ (47) where $$\widehat{\mathrm{}}_x+i\widehat{\mathrm{}}_y=We^{i\mathrm{\Omega }_\mathrm{b}t}=e^{\omega _\mathrm{i}t}\left[W_+e^{i(\omega _\mathrm{r}+\mathrm{\Omega }_\mathrm{b})t}+W_{}e^{i(\omega _\mathrm{r}\mathrm{\Omega }_\mathrm{b})t}\right].$$ (48) Here $`\omega =\omega _\mathrm{r}+i\omega _\mathrm{i}`$. Therefore two components are seen in the inertial frame, which have distinct frequencies, $`|\omega _\mathrm{r}\pm \mathrm{\Omega }_\mathrm{b}|`$, but the same rate of growth or decay. ## 5. Disk model For simplicity, we assume that the vertical structure of the unperturbed disk is that of a polytrope of index $`n`$. To satisfy vertical hydrostatic equilibrium, the density distribution, for a thin disk, is then of the form $$\rho (r,z)=\rho (r,0)\left(1\frac{z^2}{H^2}\right)^n,$$ (49) where $`H(r)`$ is the semi-thickness. The surface density and second moment are related by $$=\frac{\mathrm{\Sigma }H^2}{2n+3}.$$ (50) For the radial structure, we specify $$\frac{H}{r}=ϵ$$ (51) and $$\mathrm{\Sigma }=\mathrm{\Sigma }_0r^{1/2}f,$$ (52) where $`ϵ`$ is a small constant, $`\mathrm{\Sigma }_0`$ an arbitrary constant, and $`f(r)`$ a function that is approximately equal to unity except near the inner and outer radii of the disk, where it tapers linearly to zero. Over most of the disk this gives approximately $`Hr`$, $`\mathrm{\Sigma }r^{1/2}`$, and $`r^{3/2}`$. For the tapering function, we take $$f=\mathrm{tanh}\left(\frac{rr_1}{w_1}\right)\mathrm{tanh}\left(\frac{r_2r}{w_2}\right),$$ (53) where $`r_1`$ and $`r_2`$ are the inner and outer radii of the disk, and $`w_1`$ and $`w_2`$ are the widths of the tapers near each edge, which are taken to be equal to the local semi-thickness. With $`f`$ tapering linearly to zero, the edges are regular singular points of the governing equations. The appropriate boundary condition in each case is that $`W`$ should be regular there, which implies that $`G`$ vanishes. Clearly the internal torque cannot be transmitted across a free boundary of the disk. However, if the inner disk were terminated by a magnetosphere, for example, this boundary condition may require modification. This model is very similar to that used by Papaloizou & Terquem (1995) except that the disk has an inner edge. For reasons that we explain in Section 7, we do not attempt to impose an ‘ingoing wave’ boundary condition at the center of the disk. ## 6. Numerical results Equations (29)–(32) are solved numerically using the complex variables defined in equations (33)–(36). When solving the ODEs for a normal mode, it is advisable to integrate away from the singular points at the edges of the disk. We apply the arbitrary normalization condition $`\stackrel{~}{\mathrm{}}_x(r_1)=1`$ and guess the values of the four complex parameters $`\omega `$, $`\stackrel{~}{\mathrm{}}_y(r_1)`$, $`\stackrel{~}{\mathrm{}}_x(r_2)`$, and $`\stackrel{~}{\mathrm{}}_y(r_2)`$. We then integrate separately into $`r>r_1`$ and $`r<r_2`$, meeting at the midpoint, where $`\stackrel{~}{\mathrm{}}_x`$, $`\stackrel{~}{\mathrm{}}_y`$, $`\stackrel{~}{G}_x`$, and $`\stackrel{~}{G}_y`$ should all be continuous. 
These four conditions are solved by Newton-Raphson iteration, using derivative information obtained by simultaneously integrating the ODEs differentiated with respect to the four parameters. ### 6.1. Reference model We first identify a ‘reference model’ with parameters that we consider appropriate for a protostellar disk that is tidally truncated by the companion star (Table 1). The orbit of the companion is taken to be prograde. Before considering the reference model as such, we examine the same disk but with no viscosity ($`\alpha =0`$) and with a companion of zero mass ($`q=0`$). An infinite set of discrete bending modes is obtained, which are characterized by the number of nodes in the eigenfunction $`\stackrel{~}{\mathrm{}}_x`$ (say). The basic frequencies of these modes in the inertial frame are $`\omega _0=0`$, $`\omega _1=0.6926\mathrm{\Omega }_\mathrm{b}`$, $`\omega _2=1.3163\mathrm{\Omega }_\mathrm{b}`$, $`\omega _3=1.9213\mathrm{\Omega }_\mathrm{b}`$, $`\omega _4=2.5180\mathrm{\Omega }_\mathrm{b}`$, etc. We refer to these modes as modes $`0`$, $`1`$, $`2`$, $`3`$, $`4`$, etc. Mode $`0`$ is the (trivial) rigid-tilt mode and has no nodes. In the binary frame, the full set of frequencies appears much more complicated, as shown in Table 2. The modes in the left-hand column consist purely of $`W_+`$ and $`G_+`$, having $`W_{}=0`$ and $`G_{}=0`$. For such a mode, the frequency in the binary frame is less than the frequency in the inertial frame by an amount $`\mathrm{\Omega }_\mathrm{b}`$ (cf. eq. ). Since, in the inertial frame, we may have a prograde or retrograde mode $`n`$ with frequency $`\pm \omega _n`$, we obtain frequencies $`\pm \omega _n\mathrm{\Omega }_\mathrm{b}`$ in the binary frame. These are labeled $`n_\pm `$. The modes in the right-hand column are physically equivalent. The eigenfunctions and eigenvalues are obtained from those in the left-hand column by complex conjugation and a change of sign. Such modes consist purely of $`W_{}`$ and $`G_{}`$, having $`W_+=0`$ and $`G_+=0`$. Thus the frequencies in the binary frame are $`\omega _n+\mathrm{\Omega }_\mathrm{b}`$. These modes are labeled $`n_\pm ^{}`$. We consider next the effect of a small viscosity on the modes by increasing $`\alpha `$ from $`0`$ to its reference value $`0.01`$, but still with a companion of zero mass ($`q=0`$). The results are shown in Table 3. We omit the complex-conjugate modes from now on, but their existence should not be forgotten. Evidently the real part of the frequency changes very little in the presence of a small viscosity, but, with the exception of the rigid-tilt mode, the frequency acquires a positive imaginary part, which signifies a damping rate. The damping rate depends relatively little on the order of the mode. It can be seen from the governing equations that the effect of viscosity is simply to damp the horizontal motions locally at a rate $`\alpha \mathrm{\Omega }`$ (cf. eq. ). Since the horizontal motions are an essential part of each proper bending mode, this leads to a damping rate for each mode of order $`\alpha \mathrm{\Omega }`$ (evaluated in the outer parts of the disk). The exception is the rigid-tilt mode, for which the horizontal motions are exactly zero. Finally, we reach the reference model by increasing the binary mass ratio $`q`$ from $`0`$ to its reference value $`1`$. We start with mode $`0`$, which corresponds to a rigid tilt and consists purely of $`W_+`$. 
The frequency of the mode (now the ‘modified’ rigid-tilt mode) changes continuously from $`\mathrm{\Omega }_\mathrm{b}`$ to $`(1.0484+0.000258i)\mathrm{\Omega }_\mathrm{b}`$. The mode also acquires a $`W_{}`$ component. Viewed in the inertial frame, the mode changes from a pure $`W_+`$ mode with zero frequency to a combination of $`W_+`$ and $`W_{}`$ contributions having frequencies of $`0.0484\mathrm{\Omega }_\mathrm{b}`$ and $`2.0484\mathrm{\Omega }_\mathrm{b}`$, respectively (see eq. ). The first frequency corresponds to a retrograde precession of the tilted disk, forced by the $`m=0`$ component of the tidal potential. The second corresponds to the forcing of a bending wave ($`W_{}`$) by the $`m=2`$ component of the potential. The two potential components provide the ‘steady’ and ‘oscillatory’ torques, respectively. Since the imaginary part of the frequency is positive, the whole pattern decays at a rate $`0.000258\mathrm{\Omega }_\mathrm{b}`$. The other modes of the disk are of course damped much more rapidly, and we conclude that the reference model disk is linearly stable to tilting. ### 6.2. Resonances We now search the parameter space around the reference model for any regions of instability. In particular, we try varying the outer radius $`r_2`$ of the disk. In Fig. 2 we plot the dimensionless growth rate $`\omega _\mathrm{i}/\mathrm{\Omega }_\mathrm{b}`$ against $`r_2/r_\mathrm{b}`$ for a number of different values of $`\alpha `$. It is clear that the net growth rate is a combination of two parts. One part is a damping ($`\omega _\mathrm{i}>0`$) that is proportional to $`\alpha `$ and increases rapidly with increasing $`r_2`$. The second part is a growth ($`\omega _\mathrm{i}<0`$) with an entirely different behavior. The growth is localized in a sequence of peaks which become higher and narrower as $`\alpha `$ decreases. In Fig. 3 we show an expanded view of the primary peak for the cases $`\alpha =0`$ and $`\alpha =0.001`$. To verify the origin of the two parts, we repeated the calculation using equations that retain only the $`m=0`$ component of the tidal potential, or only the $`m=2`$ component. It is obvious from this that the damping is due entirely to the $`m=0`$ component, while the growth is due entirely to the $`m=2`$ component. There is a slight shift in the positions of the peaks when the $`m=0`$ component of the tidal potential is neglected. It is evident that the growth (that is, the instability) is associated with a series of resonances that occur when the outer radius of the disk is in the vicinity of certain discrete values. In the absence of viscosity, the resonances come about as follows. As $`r_2/r_\mathrm{b}`$ is varied, the frequency eigenvalues of all bending modes migrate along the real axis in the $`\omega `$-plane. With the exception of mode $`0`$, all modes are very sensitive to the position of the outer boundary, which reflects the waves. As a result, collisions occur on the real axis. In particular, when $`r_2/r_\mathrm{b}`$ is increased from $`0.1`$ towards the primary resonance, mode $`0`$ undergoes a collision with mode $`1_+^{}`$ (a bending mode with one node). The modes move briefly off the real axis, producing a complex-conjugate pair, and then return to the real axis to continue their original migration. The other resonances occur when mode $`0`$ undergoes collisions with modes $`2_+^{}`$, $`3_+^{}`$, etc. During a collision, the two modes exchange characteristics, and the eigenfunctions are hybrids of the two original ones. 
In particular, mode $`0`$ no longer resembles a rigid tilt during a collision with a proper bending mode. This means that a disk made unstable by this means would develop a warped shape (see Section 6.4 below). In the presence of a very small viscosity, the proper bending modes are damped and their eigenvalues are displaced somewhat above the real axis. The collisions are no longer exact and each mode can be followed continuously as $`r_2/r_\mathrm{b}`$ is varied. For $`\alpha =0.001`$, say, the modes pass sufficiently close that a strong interaction occurs. The tracks of the eigenvalues are deflected to avoid a collision, and, in so doing, mode $`0`$ acquires a positive growth rate that appears as a resonance. During the interaction, the eigenfunction of mode $`0`$ is distorted significantly from a rigid tilt, but not so strongly as in the inviscid case (see Section 6.4 below). When the viscosity is increased, the resonances become broader and weaker. A positive growth rate is not achieved if the height of the resonance is less than the damping rate arising from the $`m=0`$ potential. Therefore the regions of instability are suppressed as $`\alpha `$ is increased. It appears that, as long as the primary resonance survives, the net growth rate (for $`\alpha >0`$) is always positive for disks smaller than the size of the primary resonance, although the growth rate may be minuscule. This may be considered as a long tail of the primary resonance. However, the primary peak is dramatically reduced in height as $`\alpha `$ is increased, and it also shifts to smaller radius. In the cases investigated here, all traces of instability are eliminated when $`\alpha =0.1`$. To elucidate further the condition for resonance, we examined the bending modes at their points of collision with mode $`0`$ and evaluated their natural frequencies (i.e. in the absence of the tidal potential, and evaluated in the inertial frame). In each case the natural frequency is close to $`2\mathrm{\Omega }_\mathrm{b}`$ at the point of collision. The resonances therefore occur when the oscillatory torque due to the $`m=2`$ potential resonates with a free bending mode of the disk. We remark that the global resonant excitation of bending waves has been identified by Terquem (1998) when calculating the tidal torque exerted on a protostellar disk by a companion in an inclined circular orbit. However, the consequences for the evolution of the relative inclination of the system were not investigated. The results for a companion in a retrograde orbit are not significantly different. The heights of the resonant peaks are very similar, but they are shifted slightly in radius. The shift of the resonances (also observed, as noted above, when the $`m=0`$ component of the tidal potential is omitted) is related to the precession of the disk, which changes the effective frequency of the oscillatory torque and, therefore, the condition for resonance. The precession is always retrograde in the inertial frame, irrespective of the sense of the companion’s orbit. Therefore the effective driving frequency depends on the sense of the orbit, but the shift is generally small. ### 6.3. Precession rate and decay rate In Fig. 4 we plot the precession rate of the modified rigid-tilt mode against the outer radius of the disk, for the reference model. The precession is always retrograde and the rate increases rapidly with increasing $`r_2`$. Excellent agreement is found with the simple analytic approximation given by Bate et al. (2000; eq. ). 
(We have set the dimensionless parameter $`K=0.4`$, since this represents fairly accurately the disk models we have adopted.) For much smaller values of $`\alpha `$, a noticeable deviation from this curve occurs in the vicinity of resonances, since the path of the eigenvalue in the $`\omega `$-plane is temporarily diverted. In Fig. 5 we plot the decay rate of the modified rigid-tilt mode, for the reference model. When only the $`m=0`$ component of the tidal potential is included, the decay rate is always positive and increases rapidly with increasing $`r_2`$. When the full potential is used, the behavior is modified in the vicinity of resonances. Also shown is the simple estimate $`1/t_{\mathrm{align}}`$ given by Bate et al. (2000; eq. ). Apart from the resonances, the simple estimate captures the correct dependence on $`r_2`$. It should be borne in mind that the estimate of Bate et al. (2000) was based on an order-of-magnitude analysis, and can be expected to be accurate only within a factor of order unity. ### 6.4. Shape of the disk For comparison with observations, it is of interest to examine the shape adopted by the disk while executing the modified rigid-tilt mode. Information on the shape of the disk is contained in four real functions of radius, namely the real and imaginary parts of the eigenfunctions $`\stackrel{~}{\mathrm{}}_x(r)`$ and $`\stackrel{~}{\mathrm{}}_y(r)`$. We display this information in Figs 6 and 7 by showing cross-sections through the disk in the $`xz`$\- and $`yz`$-planes at two instants, corresponding to phase $`0`$ and phase $`\pi /2`$ of the period seen in the binary frame. Fig. 6 is for a disk with $`r_2/r_\mathrm{b}=0.118`$, in the middle of the primary resonance. Three different viscosities, $`\alpha =0`$, $`\alpha =0.001`$, and $`\alpha =0.01`$, are considered. In each case the mode has a positive growth rate. In the absence of viscosity, the resonance is strong and the disk becomes distinctly warped in a smooth and global manner. As already noted, when viscosity is included, the resonance is much weaker and the disk appears tilted with less noticeable warping. Fig. 7 is for a disk with the reference value $`r_2/r_\mathrm{b}=0.3`$ representative of a tidally truncated disk. We fix $`\alpha =0.01`$ and consider disks of varying thickness, $`ϵ=0.1`$, $`ϵ=0.05`$, and $`ϵ=0.03`$. In each case the mode is damped. For $`ϵ=0.1`$, the disk appears tilted without noticeable warping. For thinner disks, the deviation from a rigid tilt is noticeable in the outer part of the disk where the tidal forcing is strongest. Recall that the derivation of equations (12) and (13) requires that the warp vary on a length-scale long compared to the thickness of the disk (see the Appendix). This condition is indeed satisfied in the solutions we present here. ## 7. Expansion in the tidal potential ### 7.1. Basic equations The normal-mode description affords an especially compact representation of the dynamics and is very suitable for the numerical analysis. In this section we ‘unpack’ the eigenfunction to reveal the essential physics of the problem. We write the basic equations in the general form $$\mathrm{\Sigma }r^2\mathrm{\Omega }\left(\frac{W}{t}+i\mathrm{\Omega }_\mathrm{b}W\right)=\frac{1}{r}\frac{G}{r}i(AW+BW^{}),$$ (54) $$\frac{G}{t}+i\mathrm{\Omega }_\mathrm{b}Gi(CG+DG^{})+\alpha \mathrm{\Omega }G=\frac{r^3\mathrm{\Omega }^3}{4}\frac{W}{r},$$ (55) with unspecified coefficients $`A`$, $`B`$, $`C`$, and $`D`$ arising from the tidal potential. 
In view of our earlier discussion, terms $`A`$ and $`C`$ are due to the $`m=0`$ component of the potential, while the non-analytic terms $`B`$ and $`D`$ are due to the $`m=2`$ component. We allow for the possibility that tidal distortions of the disk may introduce additional complexities (such as a term $`D`$) that we have not foreseen. For a normal mode of the form (39)–(40), we have $`(i\omega +i\mathrm{\Omega }_\mathrm{b})\mathrm{\Sigma }r^2\mathrm{\Omega }W_+`$ $`=`$ $`{\displaystyle \frac{1}{r}}{\displaystyle \frac{dG_+}{dr}}i(AW_++BW_{}^{}),`$ (56) $`(i\omega ^{}+i\mathrm{\Omega }_\mathrm{b})\mathrm{\Sigma }r^2\mathrm{\Omega }W_{}`$ $`=`$ $`{\displaystyle \frac{1}{r}}{\displaystyle \frac{dG_{}}{dr}}i(AW_{}+BW_+^{}),`$ (57) $`(i\omega +i\mathrm{\Omega }_\mathrm{b})G_+i(CG_++DG_{}^{})+\alpha \mathrm{\Omega }G_+`$ $`=`$ $`{\displaystyle \frac{r^3\mathrm{\Omega }^3}{4}}{\displaystyle \frac{dW_+}{dr}},`$ (58) $`(i\omega ^{}+i\mathrm{\Omega }_\mathrm{b})G_{}i(CG_{}+DG_+^{})+\alpha \mathrm{\Omega }G_{}`$ $`=`$ $`{\displaystyle \frac{r^3\mathrm{\Omega }^3}{4}}{\displaystyle \frac{dW_{}}{dr}}.`$ (59) ### 7.2. Expansions We now expand the equations in powers of the tidal potential, indicated by a numerical superscript. The unspecified coefficients may be assumed to have expansions $$A=A^{(1)}+A^{(2)}+\mathrm{},$$ (60) etc., since they vanish in the absence of the tidal potential. The eigenvalue and eigenfunction have the expansions $`\omega `$ $`=`$ $`\omega ^{(0)}+\omega ^{(1)}+\omega ^{(2)}+\mathrm{},`$ (61) $`W_+`$ $`=`$ $`W_+^{(0)}+W_+^{(1)}+W_+^{(2)}+\mathrm{},`$ (62) $`W_{}`$ $`=`$ $`W_{}^{(1)}+W_{}^{(2)}+\mathrm{},`$ (63) $`G_+`$ $`=`$ $`G_+^{(1)}+G_+^{(2)}+\mathrm{},`$ (64) $`G_{}`$ $`=`$ $`G_{}^{(1)}+G_{}^{(2)}+\mathrm{},`$ (65) where, at leading order, we have the rigid-tilt mode with $$\omega ^{(0)}=\mathrm{\Omega }_\mathrm{b},W_+^{(0)}=\mathrm{constant}.$$ (66) The rigid-tilt amplitude could be arbitrarily specified as $`W_+^{(0)}=1`$, but we retain $`W_+^{(0)}`$ for clarity in the equations below. ### 7.3. Solution At first order, we obtain $$i\omega ^{(1)}\mathrm{\Sigma }r^2\mathrm{\Omega }W_+^{(0)}=\frac{1}{r}\frac{dG_+^{(1)}}{dr}iA^{(1)}W_+^{(0)},$$ (67) $$2i\mathrm{\Omega }_\mathrm{b}\mathrm{\Sigma }r^2\mathrm{\Omega }W_{}^{(1)}=\frac{1}{r}\frac{dG_{}^{(1)}}{dr}iB^{(1)}W_+^{(0)},$$ (68) $$\alpha \mathrm{\Omega }G_+^{(1)}=\frac{r^3\mathrm{\Omega }^3}{4}\frac{dW_+^{(1)}}{dr},$$ (69) $$(2i\mathrm{\Omega }_\mathrm{b}+\alpha \mathrm{\Omega })G_{}^{(1)}=\frac{r^3\mathrm{\Omega }^3}{4}\frac{dW_{}^{(1)}}{dr}.$$ (70) From equation (67), using the fact that $`G`$ vanishes at the edges of the disk, we immediately obtain the solvability condition $$\omega ^{(1)}=_{r_1}^{r_2}A^{(1)}W_+^{(0)}r𝑑r/_{r_1}^{r_2}\mathrm{\Sigma }r^2\mathrm{\Omega }W_+^{(0)}r𝑑r,$$ (71) which relates the precession rate (at first order) to the total horizontal tidal torque on the disk (at first order) divided by the horizontal angular momentum of the tilted disk. This effect is due to $`A`$ and therefore to the $`m=0`$ component of the tidal potential. Equations (67)–(70) can then, in principle, be solved for $`W_\pm ^{(1)}`$ and $`G_\pm ^{(1)}`$. At second order, we obtain $$i\omega ^{(2)}\mathrm{\Sigma }r^2\mathrm{\Omega }W_+^{(0)}+i\omega ^{(1)}\mathrm{\Sigma }r^2\mathrm{\Omega }W_+^{(1)}=\frac{1}{r}\frac{dG_+^{(2)}}{dr}i\left(A^{(2)}W_+^{(0)}+A^{(1)}W_+^{(1)}+B^{(1)}W_{}^{(1)}\right),$$ (72) plus three further equations, which will not be required. 
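Before moving on to the second-order solvability condition, it is worth noting that the first-order result (71) is simply a ratio of two radial quadratures, so the precession rate can be evaluated directly once radial profiles are specified. The short Python sketch below illustrates this; the power-law forms chosen for $`\mathrm{\Sigma }(r)`$, $`\mathrm{\Omega }(r)`$ and the $`m=0`$ tidal coefficient $`A^{(1)}(r)`$ are illustrative assumptions only, not the profiles adopted in this paper.

```python
import numpy as np

def trapz(y, x):
    """Simple trapezoidal quadrature."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

# Radial grid between the inner and outer disk radii (in units of the binary separation).
r1, r2 = 0.025, 0.3
r = np.linspace(r1, r2, 2000)

# Placeholder radial profiles -- assumptions made only for this illustration.
Sigma = r**-0.5              # surface density
Omega = r**-1.5              # near-Keplerian angular velocity
A1 = 1.0e-3 * r**2           # hypothetical m=0 tidal coefficient A^(1)(r)
W0 = 1.0                     # rigid-tilt amplitude W_+^(0), constant at leading order

# Eq. (71): ratio of the horizontal tidal torque integral to the horizontal
# angular momentum integral, both evaluated by direct quadrature.
omega1 = trapz(A1 * W0 * r, r) / trapz(Sigma * r**2 * Omega * W0 * r, r)
print("omega^(1) =", omega1)
```

Since $`W_+^{(0)}`$ is constant it cancels from the ratio, so the sign and magnitude of $`\omega ^{(1)}`$ are controlled entirely by the radial weighting of $`A^{(1)}`$.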
This time the solvability condition is $$\omega ^{(2)}=_{r_1}^{r_2}\left(A^{(2)}W_+^{(0)}+A^{(1)}W_+^{(1)}+\omega ^{(1)}\mathrm{\Sigma }r^2\mathrm{\Omega }W_+^{(1)}+B^{(1)}W_{}^{(1)}\right)r𝑑r/_{r_1}^{r_2}\mathrm{\Sigma }r^2\mathrm{\Omega }W_+^{(0)}r𝑑r.$$ (73) If, as we assume, $`A`$ is real, then $`\omega ^{(1)}`$ is real. After some further manipulations we then obtain $$\mathrm{Im}\left(\omega ^{(2)}\right)=_{r_1}^{r_2}\left(\frac{4\alpha }{r^4\mathrm{\Omega }^2}\right)\left(\left|G_+^{(1)}\right|^2\left|G_{}^{(1)}\right|^2\right)r𝑑r/_{r_1}^{r_2}\mathrm{\Sigma }r^2\mathrm{\Omega }\left|W_+^{(0)}\right|^2r𝑑r.$$ (74) This shows that $`G_+^{(1)}`$, which is caused by the $`m=0`$ component of the potential, causes pure damping, while $`G_{}^{(1)}`$, which is caused by the $`m=2`$ component, causes pure growth. The net effect depends on which is larger in the norm defined above. Note that coefficients $`C`$ and $`D`$ have no effect to this order. Also, the second-order coefficient $`A^{(2)}`$, which we did not attempt to calculate, does not affect the growth or decay rate at second order (although it does affect the precession frequency at second order). Such a coefficient could arise because the tidal torque on a tilted ring may have a second-order correction owing to the tidal distortion of the ring. Furthermore, since $`G_+^{(1)}`$ is independent of $`\alpha `$ according to equation (67), the damping is simply proportional to $`\alpha `$. The dependence of the growth on $`\alpha `$ is less clear since $`G_{}^{(1)}`$ itself depends on $`\alpha `$ in a complicated way according to the coupled equations (68) and (70). Some insight into these equations is obtained by considering the case $`\alpha =0`$, for which we find $$\frac{d^2W_{}^{(1)}}{dr^2}+\frac{d\mathrm{ln}(r^3\mathrm{\Omega }^3)}{dr}\frac{dW_{}^{(1)}}{dr}+\frac{16\mathrm{\Sigma }\mathrm{\Omega }_\mathrm{b}^2}{\mathrm{\Omega }^2}W_{}^{(1)}=\frac{8\mathrm{\Omega }_\mathrm{b}B^{(1)}W_+^{(0)}}{r^2\mathrm{\Omega }^3}.$$ (75) For our disk model, we have approximately $`\mathrm{\Omega }r^{3/2}`$, $`\mathrm{\Sigma }r^{1/2}`$, and $`r^{3/2}`$ over most of the disk. We then obtain, approximately, $$\frac{d^2W_{}^{(1)}}{dr^2}+\frac{r}{\lambda ^3}W_{}^{(1)}=\frac{8\mathrm{\Omega }_\mathrm{b}B^{(1)}W_+^{(0)}}{r^2\mathrm{\Omega }^3},$$ (76) where $$\lambda =\left[\frac{ϵ^2}{16(2n+3)(1+q)}\right]^{1/3}r_\mathrm{b},$$ (77) and $`ϵ`$ is the angular semi-thickness $`H/r`$ of the disk. This is an inhomogeneous Airy equation such as is common in problems of resonant wave excitation in differentially rotating disks. Here, the resonance is at the exact center of the disk, in accord with equation (1). The forcing term on the right-hand side, however, is proportional to $`r^{5/2}`$ and is therefore concentrated in the outer parts of the disk. ### 7.4. Interpretation The magnitude of the response $`W_{}^{(1)}`$ (and therefore $`G_{}^{(1)}`$) depends on the overlap between the forcing function and the solutions of the homogeneous equation, $`Ai(r/\lambda )`$ and $`Bi(r/\lambda )`$. One may then distinguish two cases, depending on whether the outer radius $`r_2`$ satisfies $`r_2\lambda `$ or $`r_2\lambda `$. If $`r_2\lambda `$, the homogeneous solutions are highly oscillatory over the disk and the overlap will be very small unless a global resonance occurs. This happens when there is a homogeneous solution that (nearly) satisfies both radial boundary conditions. 
This means, in fact, that the frequency of a free bending mode of the disk (in the inertial frame) is (nearly) equal to $`2\mathrm{\Omega }_\mathrm{b}`$. Then the operator on the left-hand side of equation (76) is (nearly) singular and a large response results. This clearly occurs during the inviscid resonances. When this happens, equation (76) breaks down; however, the analysis in Section 6 remains valid. If, instead, $`r_2\lesssim \lambda `$, the response is also reasonably large because there is little or no cancellation in the overlap integral. This can explain the long tail of the primary resonance, where the net growth rate is found to be positive for sufficiently small, but non-zero, $`\alpha `$. For the reference model, $`\lambda \approx 0.037r_\mathrm{b}`$. As $`r_2/r_\mathrm{b}`$ is reduced from $`0.3`$ to $`0.05`$, we pass from the first case, $`r_2\gg \lambda `$, through several resonances, towards the second case, $`r_2\lesssim \lambda `$. The interpretation given above can therefore explain the behavior found in Section 6. It is natural to ask whether the parameters of real disks are likely to allow a tilting instability in practice. We have tested how the value of the outer radius at which the primary resonance occurs, $`r_2=r_\mathrm{p}`$, varies with all the parameters of the model. The variations with $`ϵ`$, $`n`$, and $`q`$ are well approximated by $`r_\mathrm{p}\approx 3.2\lambda `$, $`\lambda `$ being given by equation (77). The variations with $`r_1/r_\mathrm{b}`$, $`w_1/r_1`$, and $`w_2/r_2`$ are all less significant. Comparing this estimate of $`r_\mathrm{p}`$ with the tidal radius $`r_\mathrm{t}`$ of a disk estimated by Papaloizou & Pringle (1977), we find that the inequality $$\frac{r_\mathrm{p}}{r_\mathrm{t}}\lesssim 0.4\left(\frac{ϵ}{0.1}\right)^{2/3},$$ (78) is satisfied for $`0.2<q<10`$. We conclude that tidally truncated disks extend too far beyond the primary resonance for instability to occur, unless $`ϵ\gtrsim 0.4`$, which is not suggested by observations (although it should be remembered that $`H`$ is the true semi-thickness of our polytropic model, and not an approximate scale-height). For tidally truncated disks, a tilting instability would occur only in the unlikely case that a higher-order resonance condition were met. It can also be seen from the above that to impose an ‘ingoing wave’ boundary condition at the center of the disk, as was done by Papaloizou & Terquem (1995), is questionable. Those authors envisaged that the bending wave (i.e. $`W_{-}^{(1)}`$) would be excited at the outer edge of the disk and would propagate inwards, growing in amplitude until nonlinear effects caused it to dissipate. The wave would then fail to reflect from the center of the disk. In contrast, we find that the wave may be considered to be launched at a local resonance located at the center of the disk. Since the tidal forcing vanishes there, the wave is not significantly excited unless the width of the resonance (proportional to $`\lambda `$) becomes comparable to the radius of the disk. In this case, the wave is launched at all radii and global resonant effects must be taken into account. However, the wave amplitude does not diverge at the center of the disk; equation (76) has a solution with a finite tilt and vanishing torque at $`r=0`$ for the surface density profile adopted. Nonlinear dissipation may not occur. Indeed, perhaps contrary to conventional wisdom, the instability at the primary resonance (where the disk edge is at radius $`r_\mathrm{p}`$) operates in a completely inviscid disk with reflecting boundaries.
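The scalings quoted above are easy to check numerically. The sketch below evaluates $`\lambda `$ from equation (77), the fitted primary-resonance radius $`r_\mathrm{p}\approx 3.2\lambda `$, and then integrates the forced bending-wave equation (76) as a two-point boundary-value problem. The parameter values ($`ϵ=0.1`$, $`n=1.5`$, $`q=1`$), the unit forcing amplitude, and the torque-free boundary conditions $`dW_{-}^{(1)}/dr=0`$ at both edges are assumptions made for this illustration; any factor involving the second vertical moment is absorbed into the arbitrary normalization of the forcing.

```python
import numpy as np
from scipy.integrate import solve_bvp

def lam(eps, n, q, r_b=1.0):
    """Resonance length-scale of eq. (77); r_b is the binary separation."""
    return (eps**2 / (16.0 * (2.0 * n + 3.0) * (1.0 + q)))**(1.0 / 3.0) * r_b

# Assumed parameters, chosen to be consistent with lambda ~ 0.037 r_b quoted above.
eps, n, q = 0.1, 1.5, 1.0
lam0 = lam(eps, n, q)
print("lambda/r_b =", lam0)                           # ~0.037
print("r_p/r_b    ~", 3.2 * lam0)                     # ~0.118, primary-resonance radius
print("r_p/r_t   <~", 0.4 * (eps / 0.1)**(2.0 / 3.0)) # upper bound of eq. (78)

# Equation (76) as a boundary-value problem:
#   W'' + (r/lambda^3) W = f(r),  with f(r) ~ r^(5/2) (unit amplitude, outer-weighted).
r1, r2 = 0.025, 0.118        # inner and outer radii in units of r_b (illustrative)

def rhs(r, y):
    # y[0] = W_-^(1), y[1] = dW_-^(1)/dr
    return np.vstack([y[1], r**2.5 - (r / lam0**3) * y[0]])

def bc(ya, yb):
    # G vanishes at both edges, i.e. zero internal torque: dW/dr = 0 there.
    return np.array([ya[1], yb[1]])

r = np.linspace(r1, r2, 400)
sol = solve_bvp(rhs, bc, r, np.zeros((2, r.size)), max_nodes=20000)
print("converged:", sol.status == 0, "  max |W_-| =", np.abs(sol.y[0]).max())
```

Scanning $`r_2`$ in the boundary-value problem shows the response growing sharply whenever a homogeneous solution nearly satisfies both boundary conditions, which is the global-resonance criterion described above.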
The effects of the contribution of $`W_{}`$ to the tilt growth can be related back to the mode-coupling description seen in Fig. 1. In particular, wave equation (76) describes the generation of the wave $`W_{}`$ through the driving term on the right-hand side of that equation. This term involves the interaction of the $`m=2`$ tidal potential, represented by $`B^{(1)}`$, with the rigid tilt, $`W_+^{(0)}`$. This interaction produces a wave, $`W_{}`$, of the form of an $`m=1`$ bending wave having frequency nearly equal to $`2\mathrm{\Omega }_\mathrm{b}`$ in the inertial frame. The interaction of the wave with the tidal field produces a stress that corresponds to the tilt growth-rate contribution $`B^{(1)}W_{}^{(1)}`$ in equation (73). Therefore the instability mechanism described by Lubow (1992) is always at work here, but with the differences that global resonant effects can be important, and that dissipation is not required. Furthermore, our comparison with the damping rate induced by the $`m=0`$ potential indicates that the instability is suppressed for tidally truncated disks, except in the unlikely event of a high-order resonance. ## 8. Summary and discussion In this paper, we have considered the linear stability of a coplanar protostellar disk that surrounds a star in a circular-orbit binary system. We have determined whether a slight tilt introduced into the disk would grow or decay in time. The outcome depends on the size of the disk. For disks that are truncated by standard tidal torques, typically resulting in an outer disk radius of about $`0.3`$ times the binary separation, we find that the disk tilt generally decays in time. For smaller disks, tilt growth is possible. As seen in Fig. 2, a disk undergoes a strong, ‘primary’ resonance with the tidal field when its outer radius is a certain fraction of the binary separation. This characteristic radius, which we denote by $`r_\mathrm{p}`$, is approximately $`0.118`$ times the binary separation for the parameters we have considered (see Table 1), but would be smaller still for thinner disks with $`H/r<0.1`$. In such a resonance, the disk experiences a growing tilt and becomes significantly warped (see Fig. 6). This resonance occurs when the frequency of the lowest-order global bending mode in the disk matches the tidal forcing frequency, which is here twice the binary orbital frequency. Weaker resonances occur at a series of discrete resonances corresponding to radii greater than $`r_\mathrm{p}`$. There is also a near resonance that occurs close to the disk center. For disks smaller than radius $`r_\mathrm{p}`$, this resonance causes a very slight tilt growth if $`\alpha `$ is sufficiently small (but non-zero), and any initial tilt would be retained. For disks with radii larger than $`r_\mathrm{p}`$, including disks truncated by standard tidal torques, the tilt will decay on approximately the viscous time-scale of the disk, or roughly $`10^3`$ binary orbits for $`\alpha =0.01`$ (see Fig. 5). For disks with large tilts, nonlinear effects may shorten the time-scale to reach small tilts, perhaps to the precessional time-scale of the disk, or about $`20`$ binary orbits (Bate et al. 2000). The net outcome of growth or decay of the disk tilt is determined by the competition of two torques. As seen in the inertial frame, the tidal torque acting on a tilted disk may be decomposed into a steady component and an oscillatory component with twice the binary orbital frequency. 
The steady torque, resulting from the $`m=0`$ component of the tidal field, causes the disk to become aligned with the binary orbit in the presence of dissipation, while the oscillatory torque, resulting from the $`m=2`$ component of the tidal field, causes misalignment. The steady torque produces an intuitively simple result because it causes the disk to settle, as a result of dissipation, to the coplanar state in which it experiences a minimum tidal potential energy. The effect of the oscillatory torque is somewhat counterintuitive, but can be understood in terms of a mode-coupling model (see Fig. 1). Provided that $`\alpha `$ is sufficiently small, the oscillatory torque slightly dominates for smaller disks because material in such disks is generally closer to the near resonance that occurs in the vicinity of the disk center (see eq. ). A major issue is the origin of the tilt in observed protostellar disks. In the case of HK Tau, the disk surrounds the secondary star, but the two stars are similar in spectral type (Monin, Ménard, & Duchêne 1998). Although there are considerable uncertainties in the system parameters, the disk could extend to its standard tidal truncation radius, as suggested by Stapelfeldt et al. (1998). In that case, the results of this paper imply that tidal effects may cause decay of the primordial tilt, but in any case would not cause tilt growth. On the other hand, the existence of the tilt means that the decay time-scale cannot be much shorter than the binary age, estimated as $`5\times 10^5`$ yr. This places some constraints on both the theory and the binary parameters, although there are considerable uncertainties. For example, consider the case that the binary separation is close to its projected value of 340 AU. For $`\alpha =0.01`$, the linear tilt decay time-scale (based on Fig. 2) would be several times longer than the estimated system age. On the other hand, the nonlinear decay time-scale estimate of Bate et al. (2000) suggests a decay time-scale substantially shorter than the estimated age. The nonlinear time-scale estimate would be more compatible with a somewhat larger binary separation. The predicted shape of a tilted, tidally truncated disk with $`H/r\approx 0.1`$ is not strongly warped (see Fig. 7), in accord with the observations (Stapelfeldt et al. 1998; Koresko 1998). The lack of an observed warp cannot be used as evidence against binarity. On the other hand, a slight warp does occur for thinner disks, such as case (b) of Fig. 7, which could be observed as a small asymmetry. Note that the decay time-scale of proper bending modes of this disk (based on Table 3) is of order $`10^4`$ yr if $`\alpha =0.01`$, much shorter than the linear tilt decay time-scale. If the disk were tilted and warped in an arbitrary way as a result of its formation process, we would expect it to evolve rapidly to a tilted but essentially unwarped shape; the tilt itself would then decay on a longer time-scale. However, the nonlinear effects discussed by Bate et al. (2000) are likely to speed up both stages considerably. Similar considerations apply to a recent observational test of coplanarity among a sample of T Tauri binaries by Donar, Jensen, & Mathieu (2000). The data show some evidence for approximate coplanarity on a statistical basis. It is possible that some tidal evolution of the tilt towards coplanarity may have occurred, if the tilt decays as rapidly as a disk precessional time-scale.
It is important to understand whether disk truncation could occur close to the resonant radius $`r_\mathrm{p}`$, so that the disk would be unstable to tilting. For disks with a substantial tilt, Terquem (1998) has shown that a disk of radius close to $`r_\mathrm{p}`$ is sometimes subject to a strong resonant torque that is parallel to its spin axis. This resonant torque can exceed the viscous torque in the disk for sufficiently small values of $`\alpha `$, $`\alpha \lesssim 10^{-3}`$. If this torque could truncate an initially tilted disk at radius $`r_\mathrm{p}`$, the disk might become strongly warped (as seen in Fig. 6) and tilted further. The disk radius would be less than half of the standard tidal truncation radius. However, it is unclear whether this torque would lead to disk truncation at $`r_\mathrm{p}`$, because it is smoothly distributed over the disk rather than being concentrated near $`r_\mathrm{p}`$. This is because the resonance is global rather than local. The lack of a strong warp in HK Tau argues against this process in that system. Disks in cataclysmic binaries are expected to be much colder than protostellar disks, having a smaller value of $`H/r`$. Consequently, such disks are even less likely to be unstable to tilting as a result of the $`m=2`$ component of the tidal field (see eq. ). In several X-ray binaries, most notably Her X-1, there is evidence for a tilted, precessing disk (see Wijers & Pringle 1999 and references therein). The tilting mechanism we have described is very unlikely to operate in such disks, which are expected to be tidally truncated and to have $`H/r\ll 0.4`$. Therefore, it appears that tidal torques are not responsible for the tilting of disks in X-ray binaries (cf. Larwood 1998). Possible mechanisms for tilting these disks include wind torques (Schandl & Meyer 1994) and radiation torques (Wijers & Pringle 1999). Another possible application of this work is to nearly Keplerian disks that surround black holes in active galactic nuclei. If the disk is subject to a bar potential from the galaxy and the disk radius is sufficiently smaller than the corotation radius of the bar, then the disk will be subject to this tilt instability. The results in this paper have implications for protostellar disks perturbed by inclined planets. A secular resonance occurs where the precession frequency of a planet matches the local precession frequency of an orbiting particle. The resonant radius changes as the nebula disperses and the resonance sweeps across a major portion of the solar nebula (Ward 1981). However, the current results suggest that the effects of such resonances on the gaseous nebula are mild and are distributed over the disk. Further analysis can be carried out through the methods described in this paper. We thank Jim Pringle for encouraging this investigation and for providing useful discussions. We acknowledge support from NASA grant NAG5-4310 and from the STScI visitor program. GIO was supported by the European Commission through the TMR network ‘Accretion on to Black Holes, Compact Stars and Protostars’ (contract number ERBFMRX-CT98-0195). ## Appendix A Derivation of the reduced equations for linear bending waves Let the small parameter $`ϵ`$ be a characteristic value of the angular semi-thickness $`H/r`$ of the disk.
Then set $`\kappa ^2`$ $`=`$ $`\mathrm{\Omega }^2\left[1+ϵf_\kappa (r)\right],`$ (A1) $`\mathrm{\Omega }_z^2`$ $`=`$ $`\mathrm{\Omega }^2\left[1+ϵf_z(r)\right],`$ (A2) $`\alpha `$ $`=`$ $`ϵf_\alpha (r),`$ (A3) where the functions $`f`$ are $`O(1)`$, which includes the possibility of their being arbitrarily small. The disk is assumed to satisfy the Navier-Stokes equation with a (dynamic) shear viscosity given by $$\mu =\frac{\alpha p}{\mathrm{\Omega }}.$$ (A5) To describe the internal structure of the disk, adopt units in which the radius of the disk and the orbital frequency are $`O(1)`$. Introduce the stretched vertical coordinate $$\zeta =\frac{z}{ϵ},$$ (A6) which is $`O(1)`$ inside the disk. We then find, for the unperturbed disk, $`u`$ $`=`$ $`O(ϵ^3),`$ (A7) $`v`$ $`=`$ $`r\mathrm{\Omega }(r)+ϵ^2r\mathrm{\Omega }_2(r,\zeta )+O(ϵ^3),`$ (A8) $`w`$ $`=`$ $`O(ϵ^4),`$ (A9) $`\rho `$ $`=`$ $`ϵ^s\left[\rho _0(r,\zeta )+ϵ\rho _1(r,\zeta )+O(ϵ^2)\right],`$ (A10) $`p`$ $`=`$ $`ϵ^{s+2}\left[p_0(r,\zeta )+ϵp_1(r,\zeta )+O(ϵ^2)\right],`$ (A11) $`\mu `$ $`=`$ $`ϵ^{s+3}\left[\mu _0(r,\zeta )+O(ϵ)\right],`$ (A12) where $`s`$ is an arbitrary positive parameter. Any viscous evolution of the disk occurs on a long time-scale $`O(ϵ^3)`$ and is consistently neglected. The vertical component of the equation of motion implies, at $`O(ϵ)`$, $$0=\frac{1}{\rho _0}\frac{p_0}{\zeta }\mathrm{\Omega }^2\zeta ,$$ (A13) and, at $`O(ϵ^2)`$, $$0=\frac{1}{\rho _0}\frac{p_1}{\zeta }+\frac{\rho _1}{\rho _0^2}\frac{p_0}{\zeta }f_z\mathrm{\Omega }^2\zeta ,$$ (A14) while the radial component at $`O(ϵ^2)`$ gives $$2r\mathrm{\Omega }\mathrm{\Omega }_2=\frac{1}{\rho _0}\frac{p_0}{r}\mathrm{\Omega }\frac{d\mathrm{\Omega }}{dr}\zeta ^2.$$ (A15) Consider linear bending waves with azimuthal wavenumber $`m=1`$ in which the Eulerian perturbation of $`u`$, say, is $$\mathrm{Re}\left[u^{}(r,z,t)e^{i\varphi }\right].$$ (A16) It is known that these waves travel radially at a speed comparable to the sound speed (Papaloizou & Lin 1995). Therefore the characteristic time-scale for the evolution of the warped shape is the radial sound crossing time $`r/c_\mathrm{s}ϵ^1\mathrm{\Omega }^1`$, implying that the perturbations evolve on a time-scale $`O(ϵ^1)`$ that is long compared to the orbital time-scale \[$`O(1)`$\] but much shorter than the viscous time-scale \[$`O(ϵ^3)`$\]. This is captured by a slow time coordinate $$T=ϵt.$$ (A17) For the perturbations, introduce the scalings $`u^{}`$ $`=`$ $`ϵu_1^{}(r,\zeta ,T)+ϵ^2u_2^{}(r,\zeta ,T)+O(ϵ^3),`$ (A18) $`v^{}`$ $`=`$ $`ϵv_1^{}(r,\zeta ,T)+ϵ^2v_2^{}(r,\zeta ,T)+O(ϵ^3),`$ (A19) $`w^{}`$ $`=`$ $`ϵw_1^{}(r,\zeta ,T)+ϵ^2w_2^{}(r,\zeta ,T)+O(ϵ^3),`$ (A20) $`\rho ^{}`$ $`=`$ $`ϵ^s\left[\rho _1^{}(r,\zeta ,T)+ϵ\rho _2^{}(r,\zeta ,T)+O(ϵ^2)\right],`$ (A21) $`p^{}`$ $`=`$ $`ϵ^{s+2}\left[p_1^{}(r,\zeta ,T)+ϵp_2^{}(r,\zeta ,T)+O(ϵ^2)\right].`$ (A22) The overall amplitude of the perturbations is of course arbitrary since this is a linear analysis. <sup>3</sup><sup>3</sup>3The scaling adopted here is, however, of some significance, since it corresponds to a (differential) tilt angle comparable to the angular thickness of the disk. This is appropriate for a warp of observational consequence. We note that nonlinear effects may be significant for warps of this amplitude. Unfortunately, the Eulerian perturbation method used here tends to overestimate the degree of nonlinearity. 
For example, although the fractional Eulerian density perturbation is of order unity, the dominant perturbation is (locally) a rigid tilt and the Lagrangian density perturbation is in fact of higher order in $`ϵ`$. Nonlinear effects can occur, however, owing to the fact that the horizontal motions are comparable to the sound speed. In particular, these motions can be damped by a parametric instability (Gammie, Goodman, & Ogilvie 2000; Bate et al. 2000). The Eulerian method remains the most convenient way of obtaining the equations if one is satisfied with a formal linearization. Since we are considering a stability problem in this paper, this method is sufficient for our purposes. The perturbed equations for $`w`$, $`\rho `$, and $`p`$ at leading order are $`i\mathrm{\Omega }w_1^{}+{\displaystyle \frac{1}{\rho _0}}{\displaystyle \frac{p_1^{}}{\zeta }}{\displaystyle \frac{\rho _1^{}}{\rho _0^2}}{\displaystyle \frac{p_0}{\zeta }}`$ $`=`$ $`0,`$ (A23) $`i\mathrm{\Omega }\rho _1^{}+w_1^{}{\displaystyle \frac{\rho _0}{\zeta }}+\rho _0{\displaystyle \frac{w_1^{}}{\zeta }}`$ $`=`$ $`0,`$ (A24) $`i\mathrm{\Omega }p_1^{}+w_1^{}{\displaystyle \frac{p_0}{\zeta }}+\gamma p_0{\displaystyle \frac{w_1^{}}{\zeta }}`$ $`=`$ $`0,`$ (A25) where $`\gamma `$ is the adiabatic exponent. These may be combined to give $$\frac{}{\zeta }\left(\gamma p_0\frac{w_1^{}}{\zeta }\right)=0.$$ (A26) The general solution, regular at the disk surface, is $`w_1^{}`$ $`=`$ $`ir\mathrm{\Omega }W,`$ (A27) $`\rho _1^{}`$ $`=`$ $`r{\displaystyle \frac{\rho _0}{\zeta }}W,`$ (A28) $`p_1^{}`$ $`=`$ $`r{\displaystyle \frac{p_0}{\zeta }}W,`$ (A29) where $`W(r,T)`$ is a dimensionless complex function to be determined. These perturbations correspond to applying a rigid tilt to each annulus of the disk. The tilt varies with radius and time according to the function $`W(r,T)`$, which is related to the unit tilt vector $`\mathbf{}`$ through $$W=\mathrm{}_x+i\mathrm{}_y.$$ (A30) The perturbed equations for $`u`$ and $`v`$ at leading order are $`i\mathrm{\Omega }u_1^{}2\mathrm{\Omega }v_1^{}`$ $`=`$ $`0,`$ (A31) $`i\mathrm{\Omega }v_1^{}+\frac{1}{2}\mathrm{\Omega }u_1^{}`$ $`=`$ $`0.`$ (A32) The general solution is $`u_1^{}`$ $`=`$ $`U,`$ (A33) $`v_1^{}`$ $`=`$ $`\frac{1}{2}iU,`$ (A34) where $`U(r,\zeta ,T)`$ is a complex function to be determined. The perturbed equations for $`w`$, $`\rho `$, and $`p`$ at the next order are $`i\mathrm{\Omega }w_2^{}+{\displaystyle \frac{1}{\rho _0}}{\displaystyle \frac{p_2^{}}{\zeta }}{\displaystyle \frac{\rho _2^{}}{\rho _0^2}}{\displaystyle \frac{p_0}{\zeta }}`$ $`=`$ $`F_w,`$ (A35) $`i\mathrm{\Omega }\rho _2^{}+w_2^{}{\displaystyle \frac{\rho _0}{\zeta }}+\rho _0{\displaystyle \frac{w_2^{}}{\zeta }}`$ $`=`$ $`F_\rho ,`$ (A36) $`i\mathrm{\Omega }p_2^{}+w_2^{}{\displaystyle \frac{p_0}{\zeta }}+\gamma p_0{\displaystyle \frac{w_2^{}}{\zeta }}`$ $`=`$ $`F_p,`$ (A37) where $`F_w`$ $`=`$ $`ir\mathrm{\Omega }{\displaystyle \frac{W}{T}}{\displaystyle \frac{\rho _1}{\rho _0}}r\mathrm{\Omega }^2W{\displaystyle \frac{1}{\rho _0}}{\displaystyle \frac{\rho _0}{\zeta }}f_zr\mathrm{\Omega }^2\zeta W,`$ (A38) $`F_\rho `$ $`=`$ $`r{\displaystyle \frac{\rho _0}{\zeta }}{\displaystyle \frac{W}{T}}ir\mathrm{\Omega }{\displaystyle \frac{\rho _1}{\zeta }}W{\displaystyle \frac{1}{r}}{\displaystyle \frac{}{r}}(\rho _0rU)+{\displaystyle \frac{\rho _0U}{2r}},`$ (A39) and $`F_p`$ will not be required. 
Now the linear operator defined by the left-hand sides of equations (A35)–(A37) is singular owing to the existence of the tilt mode identified above. The corresponding solvability condition is $$\left(i\rho _0F_w+F_\rho \mathrm{\Omega }\zeta \right)𝑑\zeta =0,$$ (A40) where the integral is over the entire vertical extent of the disk. This evaluates to $$\mathrm{\Sigma }_0r\mathrm{\Omega }\left(2\frac{W}{T}+if_z\mathrm{\Omega }W\right)\frac{1}{r^2}\frac{}{r}\rho _0r^2\mathrm{\Omega }U\zeta 𝑑\zeta =0,$$ (A41) where $$\mathrm{\Sigma }_0=\rho _0𝑑\zeta $$ (A42) is the surface density. The perturbed equations for $`u`$ and $`v`$ at the next order are $`i\mathrm{\Omega }u_2^{}2\mathrm{\Omega }v_2^{}`$ $`=`$ $`F_u,`$ (A43) $`i\mathrm{\Omega }v_2^{}+\frac{1}{2}\mathrm{\Omega }u_2^{}`$ $`=`$ $`F_v,`$ (A44) where $`F_u`$ $`=`$ $`{\displaystyle \frac{U}{T}}{\displaystyle \frac{1}{\rho _0}}{\displaystyle \frac{}{r}}\left(r{\displaystyle \frac{p_0}{\zeta }}W\right)+{\displaystyle \frac{r}{\rho _0^2}}{\displaystyle \frac{\rho _0}{\zeta }}{\displaystyle \frac{p_0}{r}}W+{\displaystyle \frac{1}{\rho _0}}{\displaystyle \frac{}{\zeta }}\left(\mu _0{\displaystyle \frac{U}{\zeta }}\right),`$ (A45) $`F_v`$ $`=`$ $`\frac{1}{2}i{\displaystyle \frac{U}{T}}\frac{1}{2}f_\kappa \mathrm{\Omega }Uir^2\mathrm{\Omega }{\displaystyle \frac{\mathrm{\Omega }_2}{\zeta }}W+{\displaystyle \frac{i}{\rho _0}}{\displaystyle \frac{p_0}{\zeta }}W{\displaystyle \frac{i}{2\rho _0}}{\displaystyle \frac{}{\zeta }}\left(\mu _0{\displaystyle \frac{U}{\zeta }}\right).`$ (A46) Again the linear operator is singular, with the solvability condition $$F_u+2iF_v=0.$$ (A47) This evaluates to $$2\frac{U}{T}if_\kappa \mathrm{\Omega }U+\frac{2f_\alpha }{\rho _0\mathrm{\Omega }}\frac{}{\zeta }\left(p_0\frac{U}{\zeta }\right)+r\mathrm{\Omega }^2\zeta \frac{W}{r}=0.$$ (A48) By inspection, the solution is of the form $`U\zeta `$, with $$\left(2\frac{}{T}+if_\kappa \mathrm{\Omega }+2f_\alpha \mathrm{\Omega }\right)\left(\frac{U}{\zeta }\right)=r\mathrm{\Omega }^2\frac{W}{r}.$$ (A49) If we now define $$G=\frac{_0r^2\mathrm{\Omega }}{2}\left(\frac{U}{\zeta }\right),$$ (A50) where $$_0=\rho _0\zeta ^2𝑑\zeta $$ (A51) is the second vertical moment of the density, we obtain the coupled equations $$\mathrm{\Sigma }_0r^2\mathrm{\Omega }\left(\frac{W}{T}+\frac{1}{2}if_z\mathrm{\Omega }W\right)=\frac{1}{r}\frac{G}{r},$$ (A52) $$\frac{G}{T}+\frac{1}{2}if_\kappa \mathrm{\Omega }G+f_\alpha \mathrm{\Omega }G=\frac{_0r^3\mathrm{\Omega }^3}{4}\frac{W}{r}.$$ (A53) Finally, if we step back from the asymptotic analysis and present the equations in physical terms, we obtain $$\mathrm{\Sigma }r^2\mathrm{\Omega }\left[\frac{W}{t}+\left(\frac{\mathrm{\Omega }_z^2\mathrm{\Omega }^2}{\mathrm{\Omega }^2}\right)\frac{i\mathrm{\Omega }}{2}W\right]=\frac{1}{r}\frac{G}{r},$$ (A54) $$\frac{G}{t}+\left(\frac{\kappa ^2\mathrm{\Omega }^2}{\mathrm{\Omega }^2}\right)\frac{i\mathrm{\Omega }}{2}G+\alpha \mathrm{\Omega }G=\frac{r^3\mathrm{\Omega }^3}{4}\frac{W}{r},$$ (A55) where now $`\mathrm{\Sigma }`$ $`=`$ $`{\displaystyle \rho 𝑑z},`$ (A56) $``$ $`=`$ $`{\displaystyle \rho z^2𝑑z}.`$ (A57)
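The two vertical moments just defined are straightforward to evaluate for a polytropic vertical structure. Assuming, for illustration, the standard polytropic profile $`\rho _0\propto (1-\zeta ^2/H^2)^n`$ that follows from the hydrostatic balance (A13) with $`p_0\propto \rho _0^{1+1/n}`$, the sketch below computes $`\mathrm{\Sigma }_0`$ and the second vertical moment by quadrature and verifies that their ratio is $`H^2/(2n+3)`$, the factor that enters the resonance length-scale (77).

```python
import numpy as np

def trapz(y, x):
    """Simple trapezoidal quadrature."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def vertical_moments(n, H=1.0, rho_c=1.0, nz=20001):
    """Moments of the polytropic vertical profile rho_0 = rho_c (1 - zeta^2/H^2)^n.

    Sigma_0 is the surface density (A42)/(A56) and I0 the second vertical
    moment (A51)/(A57), both evaluated by quadrature over the disk thickness.
    """
    zeta = np.linspace(-H, H, nz)
    rho = rho_c * (1.0 - (zeta / H)**2)**n
    return trapz(rho, zeta), trapz(rho * zeta**2, zeta)

for n in (1.0, 1.5, 3.0):
    Sigma0, I0 = vertical_moments(n)
    print(f"n = {n}:  I0/(Sigma0 H^2) = {I0/Sigma0:.5f}   1/(2n+3) = {1.0/(2*n+3):.5f}")
```

The agreement of the two columns is the origin of the $`(2n+3)`$ factor in equation (77): less centrally condensed vertical structures (smaller $`n`$) carry relatively more of their inertia away from the midplane.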
# Magnetic field – temperature phase diagram of the organic conductor 𝛼-(BEDT-TTF)2KHg(SCN)4 \[ ## Abstract We present systematic magnetic torque studies of the “magnetic field – temperature” phase diagram of the layered organic conductor $`\alpha `$-(BEDT-TTF)<sub>2</sub>KHg(SCN)<sub>4</sub> at fields nearly perpendicular and nearly parallel to the highly conducting plane. The shape of the phase diagram is compared to that predicted for a charge-density-wave system in a broad field range. \] Organic metals $`\alpha `$-(BEDT-TTF)$`{}_{2}{}^{}M`$Hg(SCN)<sub>4</sub>, where $`M=`$ K, Tl, or Rb , have attracted much attention in the last decade due to their exotic low-temperature electronic state. They are characterized by a layered crystal structure and a unique co-existence of quasi-one-dimensional (Q1D) and quasi-two-dimensional (Q2D) conducting bands . The transition into the low-temperature state is associated with a $`2k_F`$ nesting instability of the Q1D part of the Fermi surface. Indeed, experiments on the angle-dependent magnetoresistance oscillations have revealed a significant change in the electronic system due to a periodic potential with the wave vector close to the doubled Fermi wave vector of the Q1D band. On the other hand, studies of the magnetization anisotropy and $`\mu `$SR give evidence for a low amplitude modulation of the magnetic moment suggestive of a spin-density wave (SDW). Many of the striking anomalies displayed by these compounds in magnetic field can be fairly well explained by the density-wave instability, taking into account the coexistence of the Q1D and Q2D Fermi surfaces (see e.g. Refs. ). However, there remain several questions which can hardly be understood within the SDW model. One of the important questions concerns the effect of magnetic field on the low-temperature state. It is known that magnetic field applied perpendicular to the direction of the spin polarization may stimulate the SDW formation in systems with imperfectly nested Fermi surfaces due to effective reduction of the electron motion to one dimension . This orbital effect leads to a slight increase of the SDW transition temperature as was shown for a Q1D conductor (TMTSF)<sub>2</sub>PF<sub>6</sub> . The situation with the $`\alpha `$-(BEDT-TTF)$`{}_{2}{}^{}M`$Hg(SCN)<sub>4</sub> salts is rather controversial in this respect. In agreement with the SDW model, Sasaki et al. reported the transition temperature, $`T_p`$, in $`\alpha `$-(BEDT-TTF)<sub>2</sub>KHg(SCN)<sub>4</sub> to increase in magnetic field perpendicular to the spin polarization plane (which is the highly-conducting $`ac`$-plane in this compound ). On the contrary, numerous other experiments suggest a reduction of $`T_p`$ in magnetic field. Some authors claim that the low-temperature state is completely suppressed in this salt and the normal metallic state is restored above the so-called kink transition at $`B_{\text{kink}}24`$ T. On the other hand, several works suggest a new phase, different from the normal one, to emerge above $`B_{\text{kink}}`$ . Based on the shape of the “magnetic field – temperature” ($`BT`$) phase diagrams , Biskup et al. proposed the phase transition to be driven by a charge-density wave (CDW) rather than SDW instability. It should be noted, that the studies of the high-field region of the $`BT`$ diagram of the $`\alpha `$-(BEDT-TTF)$`{}_{2}{}^{}M`$Hg(SCN)<sub>4</sub> compounds have been mostly done by use of magnetoresistance technique. 
Obviously, such experiments are difficult to interpret unambiguously in terms of phase transitions. Therefore a detailed investigation of thermodynamic properties is necessary in order to establish the phase boundaries. So far, only a few magnetization data at fields above 15 T have been presented, in two works . However, the conclusions made in these works concerning the field effect on the transition temperature contradict each other. To elucidate the problem, we have carried out a systematic study of the $`BT`$ phase diagram of $`\alpha `$-(BEDT-TTF)<sub>2</sub>KHg(SCN)<sub>4</sub> by means of magnetic torque experiments. Several high quality samples chosen for the experiment were grown by the standard electrochemical method and had a typical mass of 100 to 350 $`\mu `$g. A cantilever beam magnetometer was used to measure the torque in fields nearly perpendicular and nearly parallel to the highly conducting $`ac`$-plane. The measurements were performed at temperatures between 0.4 and 18 K in magnetic fields up to 28 T produced at the High Magnetic Field Laboratory in Grenoble, France. We first focus on field directions almost perpendicular to the layers. Typical field dependencies of the steady part of the torque $`\tau _{\text{st}}(B)`$ are shown in Fig. 1a, for the angle $`\theta `$ between the magnetic field and the normal to the $`ac`$-plane equal to $`2.2^{\circ }`$. At high temperatures ($`T\gtrsim 8`$ K) we find an almost temperature insensitive quadratic dependence of the torque on magnetic field. On lowering the temperature below 8 K the quadratic term increases at small fields, but above 4 T the dependence becomes weaker than quadratic and at high fields the curves bend to merge with the high temperature curve. The field at which the torque returns to its normal behaviour coincides with the kink field $`B_{\text{kink}}`$ as determined in other experiments . In addition to the steady part of the torque, de Haas-van Alphen (dHvA) oscillations were observed. At 10 K these oscillations were resolved only at the highest fields, but at 5.0 K their amplitude was already comparable to $`\tau _{\text{st}}(B)`$, as shown by a dashed line in Fig. 1a. To extract $`\tau _{\text{st}}(B)`$ we used a Fourier filter. In contrast to the measurements at higher temperatures, the curve at 3.2 K does not return to the high temperature part at $`B_{\text{kink}}`$ but stays below. For temperatures below 3 K the dHvA amplitude becomes so strong that the steady torque cannot be extracted reliably any more. In Fig. 1b we show a trace of a field sweep from 18 T to 28 T and back made at 0.4 K. There is a clear transition from a low field state (characterized by a splitting of the oscillation amplitude) to a high field state (characterized by a higher oscillation amplitude and the absence of splitting). This transition shows a strong hysteresis of the dHvA amplitude in the field interval marked by fat arrows in Fig. 1b. Furthermore, there is a significant shift between the up and down sweep curves in the high field part, indicating a complex magnetic state. To clarify the latter point, we performed temperature sweeps at constant fields. For these experiments it is of crucial importance to suppress the influence of the oscillatory part . We therefore performed these sweeps at field values at which the dHvA contribution to the temperature dependence is nearly zero. The results are shown in Fig. 2a. Despite a small remanent dHvA contribution, there is still a clear transition into a new state even at the highest field.
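The Fourier filtering mentioned above can be illustrated with a few lines of code. The dHvA oscillations are periodic in $`1/B`$, so resampling the torque onto a uniform $`1/B`$ grid turns them into a narrow band of the Fourier spectrum that can be removed before transforming back. The trace, the dHvA frequency, and the cut-off used below are synthetic and purely illustrative; they are not the measured values.

```python
import numpy as np

# Synthetic torque trace: a smooth steady part plus oscillations periodic in 1/B.
B = np.linspace(5.0, 28.0, 4096)               # magnetic field (T)
F_dHvA = 670.0                                 # assumed dHvA frequency (T)
tau = 0.02 * B**2 + 0.5 * np.sin(2 * np.pi * F_dHvA / B) * np.exp(-(28.0 - B) / 6.0)

# Resample onto a uniform grid in 1/B, where the oscillations are strictly periodic.
invB = np.linspace(1.0 / B.max(), 1.0 / B.min(), B.size)
tau_invB = np.interp(invB, (1.0 / B)[::-1], tau[::-1])

# Low-pass Fourier filter: discard everything near and above the dHvA frequency.
spec = np.fft.rfft(tau_invB)
freq = np.fft.rfftfreq(invB.size, d=invB[1] - invB[0])   # conjugate to 1/B, in tesla
spec[freq > 0.3 * F_dHvA] = 0.0
tau_steady_invB = np.fft.irfft(spec, n=invB.size)

# Map the steady part back onto the original field grid.
tau_steady = np.interp(1.0 / B, invB, tau_steady_invB)
```

In practice the cut-off has to lie well below the fundamental dHvA frequency but above the scale on which the steady torque varies, and the procedure breaks down at the lowest temperatures, where the oscillatory amplitude dominates the signal, as noted above.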
In order to determine anisotropy effects in the phase diagram, we performed torque experiments at fields almost parallel to the layer plane. The phase transition is clearly seen in temperature sweeps. Typical examples taken at different fields at $`\theta =`$ 87.5 are given in Fig. 2b. The field dependence of the torque below 4 K shows a complex behavior with a strong hysteresis between up and down field sweeps . This behavior is drastically different from the feature observed at the kink transition at low angles. An example of a field sweep at 1.3 K is shown in the inset in Fig. 2b. The results of our studies can be summarized by plotting a $`BT`$ phase diagram as shown in Fig. 3. Here the data obtained on 4 samples having slightly different $`T_p`$ (ranging from 8.0 to 8.4 K) are presented. That is why the temperature and field are given in reduced units $`T/T_p(0)`$ and $`\mu _BB/k_BT_p(0)`$, respectively \[here $`T_p(0)`$ is the extrapolated critical temperature at zero field\]. The definition of the transition points is illustrated in Figures 1 and 2. The low-angle data in Fig. 3 are qualitatively consistent with the $`BT`$ diagrams obtained from earlier magnetoresistance and torque measurements in tilted fields: Firstly, the transition temperature continuously decreases with increasing the field; secondly, the low-temperature state is different from the normal non-magnetic state even above the kink transition. Quantitatively, our data are in perfect agreement with those obtained from specific heat measurements at $`B14`$ T . These results are obviously in conflict with the SDW model. On the other hand, they can be compared to what is expected for a CDW . At low field the CDW<sub>0</sub> phase with an optimal zero-field wave vector is stable below $`T_p`$. As the field increases, the Zeeman splitting of the subbands with antiparallel spins leads to the deterioration of the nesting conditions and, consequently, suppression of $`T_p`$ . However, when the Zeeman splitting energy reaches the value of the zero-temperature energy gap, a formation of a spatially modulated CDW<sub>x</sub> state with a longitudinally shifted wave vector is expected. This state is analogous to the Fulde-Ferrel-Larkin-Ovchinnikov state predicted for superconductors and persists to considerably higher fields than the conventional CDW<sub>0</sub>. The phase diagram proposed by Zanchi et al. for a CDW system with perfect nesting is shown by dashed lines in Fig. 3. Apart from different field scales, the phase diagrams are remarkably similar to each other. Assuming the CDW model, the deviation of the actual phase boundary for fields nearly perpendicular to the plane to higher temperatures at $`T_p/T_p(0)>0.6`$ can be ascribed to a significant orbital effect of the magnetic field. This effect is important for an imperfectly nested Fermi surface and leads to a relative increase of $`T_p`$ . In our case, when the warping of the open Fermi surface sheets is much stronger within the $`ac`$-plane than in the interlayer direction, the orbital effect should be anisotropic: its contribution decreases as the angle $`\theta `$ approaches $`90^{}`$. Indeed, the critical temperature of the transition into the low-temperature low-field state is found to be systematically lower at $`\theta 90^{}`$, lying perfectly on the theoretical line (Fig. 3). This implies that the orbital effect is absent for the in-plane field direction. 
In the high-field region, the phase lines determined at different field orientations seem to converge, suggesting an isotropic effect of magnetic field on the transition temperature into the low-temperature high-field state. For a definite conclusion, more detailed studies at different angles are needed. The considerable difference between the field scale in the phase diagram obtained from the experiment and that predicted by the CDW model is not very surprising. Indeed, the model calculations are made within a mean-field approximation neglecting fluctuation effects. The latter may significantly lower $`T_p(0)`$ with respect to the mean-field value. Furthermore, the imperfect nesting which likely occurs in the present system has a stronger suppressing effect on $`T_p(0)`$ than on the critical field . Both these factors lead to an underestimation of the actual critical fields. Finally, we note that the field dependence of the torque at high angles has no simple explanation within the proposed model. The non-monotonic torque with a hysteresis between up and down field sweeps observed at $`\theta 60^{}`$ is reminiscent of multiple phase transitions. As the angle approaches 90, the features become less pronounced though still persist to the angles as high as 88-89 (see inset in Fig. 2b). In principle, an additional phase transition into a CDW<sub>y</sub> state with a transversally shifted wave vector may be expected at high angles at which the orbital effect is sufficiently suppressed . Still, it cannot account for the whole structure of the torque at high angles and its complicated angular dependence. Obviously, the applied model is too oversimplified to explain all the field effects. For a more adequate description it seems very important to include the Q2D band into consideration. In particular, it was recently shown that oscillations of the chemical potential due to the quantization of the 2D orbits have a significant impact on the CDW gap . On the other hand, the magnetization anisotropy itself, revealing an “easy-plane” spin polarization at low temperatures , indicates a non-trivial magnetic structure linked to the probable CDW. In conclusion, we have presented a $`BT`$ phase diagram of $`\alpha `$-(BEDT-TTF)<sub>2</sub>KHg(SCN)<sub>4</sub> built on the basis of magnetization measurements. The shape of the diagram and the effect of the field orientation are suggestive of a CDW formation accompanied by imperfect nesting of the Q1D part of the Fermi surface. If this is true, the high-field phase would represent the first example of a CDW with a spatially modulated wave vector. We thank A. Bjeliš for very useful discussions. The work was supported in part by the TMR Program of the European Community, contract No. ERBFMGECT950077. Figure captions Fig. 1. Torque as a function of magnetic field applied nearly perpendicular to the $`ac`$-plane: (a) - steady part of the torque at different temperatures; the dotted curve represents the total signal from the sample, with the dHvA oscillations at $`T=5.0`$ K; (b) - up (dotted line) and down (solid line) field sweeps of the torque at low temperature. Fig. 2. Temperature sweeps of the torque at $`\theta =2.2^{}`$ (a) and $`87.5^{}`$ (b) at different fields. The inset shows the field dependence of the torque at $`\theta =87.5^{}`$, $`T=1.3`$ K. Fig. 3. Phase diagram of $`\alpha `$-(BEDT-TTF)<sub>2</sub>KHg(SCN)<sub>4</sub>. 
Different symbols correspond to the transition points obtained from: the $`\tau _{\text{s}t}(T)`$ sweeps at $`\theta =2.2^{}`$ (stars, sample #1), 6.5 (solid diamonds, sample #2), 11.8 (solid up-triangles, sample #3), 87.5 (open squares, sample #1), and 89.5 (open up triangles, sample #1); $`\tau _{\text{s}t}(B)`$ sweeps at $`\theta =2.2^{}`$ (crosses, sample #1); and characteristic changes in the dHvA signal at $`\theta =4.0^{}`$ (solid down-triangles, sample #4). The dashed lines represent the phase diagram predicted for a CDW system with a perfectly nested Fermi surface .
# Anomalous rotational properties of Bose-Einstein condensates in asymmetric traps ## Abstract We study the rotational properties of a Bose-Einstein condensate confined in a rotating harmonic trap for different trap anisotropies. With simple arguments, we obtain the velocity field of the quantum fluid for condensates with or without vortices. While the condensate describes open spiraling trajectories, on the frame of reference of the rotating trap the fluid moves against the trap’s rotation. We also find expressions for the angular momentum and linear and Thomas-Fermi solutions for a vortex-less state. In these two limits we find the same analytic relation between the shape of the cloud and the rotation speed. Our predictions are supported by numerical simulations of the mean field Gross-Pitaevskii model. The question of whether an atomic gas may be a superfluid and the quest for distinct superfluid properties are among the central goals of recent research with atomic Bose-Einstein condensates (BEC). These targets have been largely inspired by previous achievements on <sup>4</sup>He and its superfluid phase He-II, so that from one known property of “classical” superfluids some analog is found for the “new” condensates. To name a few phenomena which have been found this way we can cite the discovery of anomalous sound velocities , the discovery of a critical velocity for the beginning of viscous damping , the study of vortices and their generation and the study of rotational properties such as scissors modes or moments of inertia . In this context there has been a great interest in achieving Bose-Einstein condensation with rotating traps, a task which has been completed recently . The motivation of this interest is that the ground state of a condensate in a rotating trap can be forced to host one or more vortices depending on the angular speed , thus providing us with persistent currents which are themselves traces of superfluidity. Although most of the theoretical work regarding the generation and stability of vortices has focused on isotropic traps , it is only using truly anisotropic traps that one may expect the condensate to offer a mechanical response to the rotation and produce a vortex. This idea is developed throughout the paper. There are few relevant results related to non-isotropic traps. First , the moment of inertia of an inhomogeneous condensate have been studied assuming that the gas is an ideal one, i. e. without interactions. Among the most striking results is the formula which relates the moment of inertia to the asymmetry of the condensate given by the expectation value $`x^2y^2`$. In Ref. the dynamics and stability of a vortex in this type of traps is studied. And finally in Refs. the ground state of a condensate in a rotating asymmetric trap is found numerically for particular values of the rotation speed and their stability is studied. In this paper we study the rotational properties of a condensate in an rotating anisotropic trap, with and without a vortex. We consider the problem within the mean field approximation, thus taking interactions into account (an essential difference with previous work ). Our treatment is both analytic and numeric: we obtain expressions for the most relevant observables, which are exact in some limits, and verify them with complex numerical simulations over a large range of parameters. 
The main predictions of the paper are the explicit expressions for the moments of inertia and a characterization of the flow of the quantum fluid, whose main features are reflected in Fig. 1. The model.- We will study a two-dimensional condensate in a harmonic trap whose axes rotate at a certain angular speed, $`\mathrm{\Omega }`$. The potential may be described by $$V(𝐱,t)=V_0\left(e^{\mathrm{\Omega }tR_z}𝐱\right)=V_0\left(U\left(\mathrm{\Omega }t\right)𝐱\right),$$ (1) where $`R_z`$ is the generator of the rotations in the $`\mathrm{}^2`$ space, $`U(\theta )`$ is an orthogonal transformation which rotates a vector an angle $`\theta `$ around the origin, and $`V_0(r_1,r_2)=\frac{1}{2}\left(\omega _1^2r_1^2+\omega _2^2r_2^2\right)`$. We will work on two different frames of reference: the laboratory frame $`\{S,𝐱\}`$ which is stationary and the rotating frame $`\{\stackrel{~}{S},𝐫=U(\mathrm{\Omega }t)𝐱\}`$ which moves with the trap. On the first frame the zero temperature mean field theory reads $$i_t\psi (x,t)=\left[\frac{1}{2}\mathrm{}_x+V(x,t)+g\left|\psi \right|^2\right]\psi (x,t).$$ (2) Here $`g`$ characterizes the interaction and is defined in terms of the ground state scattering length. It is important to remark that our solutions have a well defined norm that we may take as equal to the number of particles , $`\psi ^2=\varphi ^2=N`$. To simplify the treatment we have assumed a set of units in which $`\mathrm{}=m=1`$. We will define a second wave function, this time on $`\stackrel{~}{S}`$, given by $`\varphi (𝐫,t)=\psi (U(\mathrm{\Omega }t)𝐫,t)`$ with $`𝐫=(r_1,r_2)`$. The evolution of this function is ruled by $$i_t\varphi =\left[\frac{1}{2}\mathrm{}_r+V_0\left(𝐫\right)+g\left|\varphi \right|^2\mathrm{\Omega }L_z\right]\varphi .$$ (3) Here $`L_z=i\left(r_1_2\varphi r_2_1\varphi \right)`$ is a representation of the angular momentum operator along the z-axis. Stationary states.- Let us write Eq. (2) in the modulus-phase representation in the rotating frame of reference. Defining ($`\varphi =\sqrt{\rho }e^{i\mathrm{\Theta }}`$) we obtain the continuity equation for the density $$_t\rho \mathrm{\Omega }\rho \left(R_z𝐫\right)=\left[\rho 𝐯\right],$$ (4) where $`𝐯=\text{Im}(\overline{\varphi }\varphi )=\mathrm{arg}\varphi =\mathrm{\Theta }`$ is almost the velocity field of the quantum fluid in the stationary frame. Actually the velocity frame on $`S`$ is given by $`𝐕=U\left(\mathrm{\Omega }t\right)𝐯\left(U\left(\mathrm{\Omega }t\right)\right)`$. Eq. (3) has a set of stationary states, which are solutions of the type $`\varphi (𝐫,t)=e^{i\mu t}\varphi (𝐫,0).`$ These states represent configurations of the condensed cloud which maintain their shape and move rigidly with the trap (keep in mind that $`𝐫`$ corresponds to the rotating system). In our paper we will be interested in the lowest energy state, the ground state. *The main assumption* throughout the paper will be that the lines of constant density of a ground state are ellipses of an unknown shape, i. e., $$\rho =\rho \left(u^2\right)=\rho \left(r_1^2+r_2^2/a\right),$$ (5) which is exact for the ground state in certain limits to be discussed later. We will also need a normalized anisotropy factor, $`k=(1a)/(1+a)[1,+1]`$. Using Eqs (5) and (4), one finds $$h\left(u\right)\left[\mathrm{\Omega }\left(1a\right)r_1r_2+ar_1_1+r_2_2\right]\mathrm{\Theta }=\mathrm{}\mathrm{\Theta },$$ (6) where $`h=\text{d}\mathrm{ln}\rho /\text{d}(u^2).`$ Let us look for solutions corresponding to an incompressible flow, i.e. $`\mathrm{}\mathrm{\Theta }=0`$. 
Then $$\mathrm{\Omega }\left[\left(1-a\right)r_1r_2\right]+\left[ar_1\partial _1+r_2\partial _2\right]\mathrm{\Theta }=0,$$ (7) Writing the solution of Eq. (7) in the form $$\mathrm{\Theta }(r_1,r_2,t)=\mu t+\mathrm{\Omega }\frac{a-1}{a+1}r_1r_2+\mathrm{\Theta }_{vort}(r_1,r_2),$$ (8) it is clear that there is still an undetermined part of the phase, $`\mathrm{\Theta }_{vort}`$, which satisfies $`\mathrm{\Delta }\mathrm{\Theta }_{vort}=0`$ and $`\nabla \mathrm{\Theta }_{vort}\perp \nabla \rho `$. This phase varies along elliptic paths surrounding the origin and carries vorticity in case there is any. Once we have found the phase, the density may be obtained by solving $$\mu =-\frac{\mathrm{\Delta }\sqrt{\rho }}{2\sqrt{\rho }}+\frac{\left(\nabla \mathrm{\Theta }\right)^2}{2}+V_0+g\rho +i\mathrm{\Omega }L_z\mathrm{\Theta }.$$ (9) This can be done analytically both in the non-interacting limit, $`gN\rightarrow 0`$, and in the Thomas-Fermi limit, $`gN\rightarrow \infty `$, as we will show later. States without vorticity.- When the BEC ground state has no vorticity we can take $`\mathrm{\Theta }_{vort}=0`$ in Eq. (8), so that $$\mathrm{\Theta }=\mu t+\mathrm{\Omega }\frac{a-1}{a+1}r_1r_2.$$ (10) As expected, in the radially symmetric case ($`a=1`$) the solution is also radially symmetric: the phase is uniform and the velocity field vanishes. For an asymmetric trap the velocity field is different from zero but still irrotational, $`\nabla \times \mathbf{V}=0`$. The definition of $`\mathbf{V}`$ above provides us with analytic expressions for the flow in the laboratory frame. This flow is a product of two rotations around the origin: a circular rotation with angular speed $`\mathrm{\Omega }`$ and an elliptic rotation, with a smaller angular speed $`\omega =\mathrm{\Omega }\sqrt{1-k^2}`$, in the opposite sense! This is a striking two-fold result. First, in the trap frame $`\stackrel{~}{S}`$ the fluid moves along closed elliptic contours opposite to the rotation of the trap \[see Fig. 1(d)\], but with a smaller velocity. Second, the composition of both movements in the laboratory gives a flow made of slow and typically open spiraling paths \[see Fig. 1(c)\]. Roughly speaking, the fact that the actual flow in the laboratory frame is much slower is a signature that the inertia of a condensate is smaller than that of a classical fluid. We will prove this by calculating the angular momentum of the cloud, which is given by the simple expression $$\langle L_z\rangle _{\mathrm{ground}}=\int \rho \left(\mathbf{r}\times \nabla \mathrm{\Theta }\right)_z\,d^2r=\mathrm{\Omega }\frac{\langle r_1^2-r_2^2\rangle ^2}{\langle r_1^2+r_2^2\rangle }.$$ (11) Here we have used $`a=\langle r_1^2\rangle /\langle r_2^2\rangle `$. As expected, the moment of inertia, $`I=L_z/\mathrm{\Omega }`$, becomes zero for a symmetric trap. Furthermore, it is always smaller than the classical value $`I=\langle r_1^2+r_2^2\rangle `$. We must also remark that Eq. (11) differs from the zero temperature limit of Ref. because our method is not perturbative and allows the shape of the cloud to depend on the rotation speed. Inserting Eq. (10) into Eq. (3) we find the equation of a nonlinear harmonic oscillator with a pair of effective frequencies which depend on $`\mathrm{\Omega }`$, $`\omega _{x,eff}^2`$ $`=`$ $`\omega _1^2+\mathrm{\Omega }^2k(k-2),`$ (13) $`\omega _{y,eff}^2`$ $`=`$ $`\omega _2^2+\mathrm{\Omega }^2k(k+2).`$ (14) We can recover the asymmetry of the cloud from the exact solutions of this oscillator in two different limits, the non-interacting or linear limit, $`g\rightarrow 0`$, and the Thomas-Fermi limit.
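Before quoting those two limiting expressions, the composite flow just described is easy to visualize numerically: integrating the laboratory-frame velocity field along a fluid-element path reproduces the slow open spirals of Fig. 1(c), and rotating the same path into the trap frame gives the closed counter-rotating ellipses of Fig. 1(d). The sketch below does this for an illustrative asymmetry $`a=2`$; the value of $`a`$, the sign conventions of the rotation matrix, and the integration span are assumptions made only for the illustration.

```python
import numpy as np
from scipy.integrate import solve_ivp

Omega = 1.0                          # trap rotation rate
a = 2.0                              # illustrative cloud asymmetry
c = Omega * (a - 1.0) / (a + 1.0)    # amplitude of the phase in eq. (10)

def R(t):
    """Rotation matrix taking trap-frame coordinates to the laboratory frame."""
    ct, st = np.cos(Omega * t), np.sin(Omega * t)
    return np.array([[ct, -st], [st, ct]])

def v_trap(r):
    """Velocity field grad(Theta) in trap coordinates, from Theta = c*r1*r2."""
    return c * np.array([r[1], r[0]])

def rhs(t, x):
    # Laboratory-frame velocity: rotate into the trap frame, evaluate, rotate back.
    rot = R(t)
    return rot @ v_trap(rot.T @ x)

sol = solve_ivp(rhs, (0.0, 50.0), [1.0, 0.0], max_step=0.01, dense_output=True)
t = np.linspace(0.0, 50.0, 2000)
x_lab = sol.sol(t)                                           # open spiraling path (lab frame)
r_trap = np.array([R(ti).T @ x_lab[:, i] for i, ti in enumerate(t)]).T
# r_trap traces a closed ellipse described in the sense opposite to the trap rotation,
# at the slower rate Omega*sqrt(1 - k^2), with k = (1 - a)/(1 + a).
```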
Both expressions are $`a_L`$ $`=`$ $`a\left(g\rightarrow 0\right)=\omega _{x,eff}/\omega _{y,eff},`$ (16) $`a_{TF}`$ $`=`$ $`a\left(g\rightarrow \infty \right)=\left(\omega _{x,eff}/\omega _{y,eff}\right)^2.`$ (17) It is apparent from Eqs. (16)-(17) that rotation emphasizes the anisotropy. In Fig. 2(c-d) we compare the Thomas-Fermi and linear approximations with two different numerical experiments with an anisotropic trap. The main conclusion is that the Thomas-Fermi expression, Eq. (17), works extremely well for medium to large $`N`$. There is a small error which depends on the norm of the solution. This error is related to the fact that a Thomas-Fermi approximation wipes out any dependence of the shape of the cloud on the number of particles, $`N`$. Nucleation of the first vortex.- As the rotation speed is increased the fluid adapts by evolving to different states, which may involve the nucleation of one or more vortices. These vortices arise both as zeros of the density and as additive contributions, $`\mathrm{\Theta }_{vortex}`$, to the phase of the cloud. The phase of the vortices suffers a discontinuous change around any closed circuit which encloses them. Hence vortices satisfy Feynman’s condition for the quantization of the superfluid flow $$\oint \nabla \mathrm{\Theta }_{vortex}\text{d}\mathbf{l}=2\pi m,m=0,\pm 1,\pm 2,\mathrm{\dots }$$ (18) Following the reasoning developed throughout this paper, we should now solve a Poisson equation for $`\mathrm{\Theta }_{vortex}`$ with some restrictions on the direction of $`\nabla \mathrm{\Theta }_{vortex}`$. Instead we will follow a more intuitive path which gives accurate results. In the framework of our approximation $`\mathrm{\Theta }_{vortex}`$ will correspond to a flow around the lines of constant density of the cloud, as happens in the symmetric case. In elliptic polar coordinates, $`\{r_1=u\mathrm{cos}\theta ,r_2=\sqrt{a}u\mathrm{sin}\theta \}`$, the gradient of the phase becomes $$\nabla \mathrm{\Theta }_{vortex}\propto g(u)(\mathrm{sin}\theta ,\sqrt{a}\mathrm{cos}\theta ),$$ (19) where $`g(u)`$ is a decreasing function of the radius of each ellipse. This expression may be integrated along elliptic contours to find the dependence of the vortex phase on the elliptic angle, $$\mathrm{\Theta }_{vortex}(\theta )\approx m\theta +m\frac{a-1}{2\left(a+1\right)}\mathrm{sin}2\theta .$$ (20) We have found the typical dependence for a symmetric vortex, $`m\theta `$, plus a correction which comes from the asymmetry. Using Eq. (20) we estimate the new value of the angular momentum. It has two contributions, one coming from the rotation of the cloud and another one coming from the vortex, $$L_z=\mathrm{\Omega }\frac{\langle r_1^2-r_2^2\rangle ^2}{\langle r_1^2+r_2^2\rangle }+\frac{2m\sqrt{\langle r_1^2\rangle \langle r_2^2\rangle }}{\langle r_1^2+r_2^2\rangle }N.$$ (21) In the symmetric trap case the introduction of a vortex implies a fixed change of the total angular momentum, $`\mathrm{\Delta }L_z=\langle L_z\rangle _{vortex}-\langle L_z\rangle _{ground}=N`$. The asymmetry radically changes this picture: the more asymmetric the trap is, the smaller the amount of angular momentum kept in the vortex – a quantity which eventually becomes zero in the $`|k|\rightarrow 1`$ limit. Numerical simulations.- Up to this point we have obtained several predictions concerning the stationary states of a BEC in a rotating trap. Although there are no experimental results on this yet, we can contrast our theoretical work with numerical simulations of the Gross-Pitaevskii equation (GPE). First let us recall that the stationary solutions of Eq.
(3) are critical points of the functional $$E[\varphi ]=\overline{\varphi }\left(\frac{1}{2}\mathrm{}_r+V_0\left(r\right)+\frac{g}{2}\left|\varphi \right|^2\mathrm{\Omega }L_z\right)\varphi ,$$ (22) subject to the constraint of a fixed norm $`\left|\varphi \right|^2d^nr=N`$. Some of the critical points of Eq. (22) correspond to ground states. These states represent the typical configuration of the gas when it is Bose-condensed in the rotating trap. To find the ground states we have minimized Eq. (22) using the Sobolev’s gradients method in a discrete Fourier basis. This was performed for a two-dimensional radially symmetric trap, $`\omega _1=\omega _2`$, and for an asymmetric one, $`\omega _1=\sqrt{2}\omega _2`$, varying the only relevant parameters, $`\mathrm{\Omega }`$ and $`N`$, over a wide range. As a result we obtained two maps which show the angular momentum of the ground state as a function of $`(\mathrm{\Omega },N)`$, and which are plotted in Fig. 2(a-b). In Fig. 2(a) we recover the discontinuous distribution which was found in Ref. . Here the system starts with zero angular momentum and remains still until a certain angular speed, $`\mathrm{\Omega }_1(N)`$. Beyond $`\mathrm{\Omega }_1(N)`$ a vortex grows and we have another plateau on which the angular momentum remains constant, $`L_z=N`$. Once more there is a critical frequency, $`\mathrm{\Omega }_2(N)`$, which marks the nucleation of another vortex. From then on the evolution of $`L_z`$ is a piecewise differentiable one. There are still more jumps due to the creation of more vortices, but $`L_z`$ is no longer constant in each interval, since the vortices may move to accommodate more angular momentum. In Fig. 2(b) the picture is quite different. First the angular momentum has a non-trivial dependence on the number of particles and on the angular speed. It is neither zero for a vortex-less state, nor constant for a state with one vortex. Indeed, the dependence with respect to $`\mathrm{\Omega }`$ is extremely well reproduced by Eq. (11) and by Eq. (21), which suggests that these are exact laws which could be derived by some other mean. Conclusions.- We have achieved several goals in this work. First, we have shown how to obtain analytic results about the macroscopic properties of a rotating condensate, which leads to unexpected and intuitively appealing results for the flow of the condensate. And second, we have derived precise laws which relate the anisotropy of the condensate, its moment of inertia and the two relevant parameters of the problem: the norm, $`N`$, and the trap angular speed, $`\mathrm{\Omega }`$. Our predictions have been numerically confirmed generating maps of $`L_z`$, $`x^2`$ and $`y^2`$ as a function of $`(\mathrm{\Omega },N)`$. We hope that these methods will support further work, such as the study of linear stability of vortices in asymmetric condensates, the development of better approximations to the shape of the cloud or even a variational analysis which describes the transition between different vorticities. Furthermore, our predictions are easily extensible to three-dimensional condensates giving many results (condensate shape, moment of inertia or energy release, for instance) which could be tested in current experiments . For instance, the discontinuity on the asymmetry of the cloud when a vortex is nucleated could be used as a non-destructive mean to ensure the existence of a phase singularity in the atomic cloud. This work has been partially supported by CICYT under grant PB96-0534.
# The underdoped-overdoped transition in YBa2Cu3Ox ## 1 INTRODUCTION The overdoped regime of YBa<sub>2</sub>Cu<sub>3</sub>O<sub>x</sub> is accessible with heavy oxygen dopings close to $`x`$ = 7. The crossover from the underdoped to the overdoped regime occurs around $`x_{opt}`$ = 6.92, nearly coinciding with a displacive structural phase transition. It is suggestive to associate the structural instability close to optimum doping with a kind of barrier limiting $`T_c`$. The 123 bi-layer cuprates exhibit comparatively low “optimum” $`T_c`$’s around 90 K, but chemical trends and pressure experiments imply that $`T_c`$ is not yet at its optimum value. In this communication we discuss a mechanism possibly responsible for the barrier limiting $`T_c`$ of YBa<sub>2</sub>Cu<sub>3</sub>O<sub>x</sub>, and stabilizing the overdoped regime for $`x\geq 6.95`$. ## 2 DOPING INDUCED LATTICE EFFECTS Oxygen doping of the parent cuprate YBa<sub>2</sub>Cu<sub>3</sub>O<sub>6</sub> induces two displacive structural phase transitions: the first at the insulator–metal transition around $`x=6.42`$, transforming the tetragonal unit cell of the antiferromagnetic insulator into the orthorhombic symmetry of the metal, and the second close to optimum doping, $`x=6.95`$, changing the orthorhombic unit cell from the $`\alpha `$-ortho to the $`\beta `$-ortho type. ### 2.1 $`\alpha `$-orthorhombic deformation In the underdoped regime ($`6.42\leq x\leq 6.86`$) doping is well established to increase the orthorhombicity, $`2000(b-a)/(b+a)`$. Positive axial strain, $`\partial b/\partial x>0`$, expands the $`b`$-axis weakly, while negative axial strain, $`\partial a/\partial x<0`$, compresses the $`a`$-axis strongly, cf. Fig.1 (left). The opposite axial strains result in a quadrupolar instability in the orthorhombic basal plane, labelled $`\alpha `$-ortho. ### 2.2 $`\beta `$-orthorhombic deformation In the overdoped regime ($`6.95\leq x\leq 7`$) the $`b`$-axis strain changes its sign, hence doping shrinks both $`a`$ and $`b`$, resulting in a breathing instability of the orthorhombic basal plane, labelled $`\beta `$-ortho, cf. Fig.1 (right). Since both the $`b`$\- and $`a`$-axis strains are negative in all samples, independent of their various routes of chemical preparation, the planar breathing instability has to be considered as a generic property of the overdoped regime. ### 2.3 $`c`$-axis stress Doping compresses the $`c`$-axis strongly, usually described by a straight line throughout all regimes up to $`x=7`$. Samples synthesized in the absence of carbonate, however, were shown to exhibit a significant minimum in $`c(x)`$ close to the onset of the overdoped regime. Raman spectroscopy and Y-EXAFS confirm the $`c`$-axis anomaly also in other samples: the in-phase O2,O3 mode (out-of-plane) softens abruptly in the overdoped regime, while the static O2,3–Cu2 interlayer spacing (“dimple”) increases discontinuously by $`0.015`$ Å. The strong contraction of $`c`$ is nearly completely attributable to the compression of the Cu2–O1(Apex) bonds, reflecting the charge transfer from the chains to the planes. The Cu2 atoms may be seen to be pulled out of the CuO<sub>2</sub> planes. Notably, the average Y–O2,3 interlayer spacings thereby remain unaffected. Doping across the underdoped–overdoped phase boundary continues compressing the Cu2–O1(Apex) bonds. In the overdoped regime, however, the average O2,3 layers experience a significant repulsion from the Cu2 layers, strongly contrasting with the underdoped regime. In some samples the Y–O2,3 interlayer spacing remains nearly unaffected, and as a result $`c(x)`$ develops a minimum. ## 3 CORRELATED DISPLACEMENTS IN THE CuO<sub>2</sub> PLANE We have shown from Y-EXAFS that the Y–Cu2 bond lengths are almost independent of doping. This suggests an “umbrella mode” to describe the static correlations between the Cu2 out-of-plane and the O2, O3 in-plane displacements. A refinement of the “umbrella mode” model may also explain the $`\alpha `$\- and $`\beta `$-ortho deformations in the under- and overdoped regimes, respectively. The lengths of the rigid semicovalent in-plane copper-oxygen bonds can be safely assumed to be almost independent of doping, too. Then collapsing the “umbrella” moves the planar oxygens O2,3 out of their planes. As schematically shown in Fig.1 (left), the quadrupolar $`\alpha `$-ortho instability pushes the O3 atoms above, and the O2 atoms below, their planes. These shifts in opposite directions along $`c`$ tend to cancel (depending slightly on the orthorhombicity), and the plane dimpling is solely determined by the Cu2 out-of-plane displacement. On the other hand (Fig.1, right), the breathing $`\beta `$-ortho instability pushes both the O2 and O3 atoms beyond their planes, thus increasing the dimpling not only by the Cu2 out-of-plane shift, but also by the oppositely directed O2,O3 out-of-plane shifts. ## 4 ELECTRONIC STRUCTURE EFFECTS Out-of-plane displacements of the planar copper and oxygens were shown to have important effects on the electronic band structure near $`E_F`$. In particular they determine the strengths and symmetries of the interlayer hoppings. Remote hybridization of the $`\sigma `$ Cu$`4s`$ orbitals with the $`\sigma `$ Cu$`3d_{x^2-y^2}`$– O$`2p_{x,y}`$ band provides interlayer hoppings with $`d`$-symmetry, exhibiting maxima at ($`\pi `$,0) and (0,$`\pi `$) in $`k`$-space. The strength of the $`sdp`$ hybridization is mainly controlled by the length of the Cu2–O1 (Apex) bond. Hybridizations of the $`pd\pi `$ Cu$`3d_{x,z}`$–O$`2p_z`$, Cu$`3d_{y,z}`$–O$`2p_z`$ with the $`pd\sigma `$ Cu$`3d_{x^2-y^2}`$–O$`2p_{x,y}`$ bands become feasible through Cu2 out-of-plane positions. Once activated, $`pd\pi `$-$`pd\sigma `$ hybridizations repel the O2,3 from the Cu2 plane. We conclude that the quadrupolar $`\alpha `$-ortho instability ought to suppress hybridizations of the $`pd\pi `$ with the $`pd\sigma `$ bands. The breathing $`\beta `$-ortho instability, however, ought to favour them and to provide a possible mechanism relaxing the confinement of the carriers, thus pushing the system into the overdoped regime. I acknowledge the support of the ESRF through the projects HE731 and HE516.
# Hall Crystal States at 𝜈=2 and Moderate Landau Level Mixing \[ ## Abstract The $`\nu =2`$ quantum Hall state at low Zeeman coupling is well-known to be a translationally invariant singlet if Landau level mixing is small. At zero Zeeman interaction, as Landau level mixing increases, the translationally invariant state becomes unstable to an inhomogeneous state. This is the first realistic example of a full Hall crystal, which shows the coexistence of quantum Hall order and density wave order. The full Hall crystal differs from the more familiar Wigner crystal by a topological property, which results in it having only linearly dispersing collective modes at small $`q`$, and no $`q^{3/2}`$ magnetophonon. I present calculations of the topological number and the collective modes. \] Integer quantum Hall states are some of the best understood, because single-particle approximations such as Hartree-Fock (HF) work very well for filled Landau levels (for an overview of the rich physics in these systems, see ref. ). The three important parameters that describe an integer state are the cyclotron frequency $`\omega _c`$, the Zeeman coupling $`E_Z`$, and the dimensionless parameter characterizing the strength of the electron-electron interaction $`r_s`$. For $`\nu =2`$ we have $`r_s=\frac{e^2}{\epsilon l_0\mathrm{}\omega _c}`$. As $`r_s`$ becomes large Landau level mixing increases. We will work in the limit $`E_Z=0`$, which is an excellent approximation for GaAs based systems, due to the reduction of the $`g`$ factor, and the enhancement of the cyclotron frequency by band effects. In this limit, $`r_s`$ survives as the only dimensionless parameter characterizing the system. The Hamiltonian in standard notation is $$H=n\mathrm{}\omega _ca_{\sigma ,n,X}^{}a_{\sigma ,n,X}+\frac{1}{2L^2}v(q):\rho (\stackrel{}{q})\rho (\stackrel{}{q}):$$ (1) where $`(\sigma ,n,X)`$ are spin, Landau level, and degeneracy indices for single-particle states in the Landau gauge, $`a,a^{}`$ are the fermion operators which destroy and create electrons in these single-particle states, and $`\rho (\stackrel{}{q})`$ is the density operator $$\rho (\stackrel{}{q})=\underset{\sigma n_1n_2X}{}e^{iq_xX}\rho _{n_1n_2}(\stackrel{}{q})a_{\sigma ,n_1,Xq_yl_0^2/2}^{}a_{\sigma ,n_2,X+q_yl_0^2/2}$$ (2) Here $`l_0`$ is the magnetic length, and $`\rho _{nn^{}}(\stackrel{}{q})`$ is a matrix element. What are the possible ground states of the system? At $`\nu =2`$ the simplest possiblities are (i) Fill the $`n=0,`$ and $`n=0,`$ Landau levels to form the singlet state, or (ii) Fill $`n=0,1`$ for the $``$ spins only to form the fully polarized state. One can check that in the HF approximation, the fully polarized state becomes lower in energy than the singlet state for $`r_s2.12`$ for the Coulomb interaction, a transition first pointed out in a slightly different context by Giuliani and Quinn. Finite thickness effects are modelled by an interaction $`v(q)=\frac{e^2}{\epsilon q}e^{\lambda q}`$ where the length $`\lambda `$ is related to the sample thickness. This modifies the Coulomb interaction at large $`q`$ and pushes the critical $`r_s`$ for the singlet-fully polarized transition higher. Translationally invariant states cannot take advantage of Landau level mixing (in HF), while inhomogeneous states can. Inhomogeneous states have been the subject of intense investigation in the early eighties in HF in the context of their possible relevance to the fractional quantum Hall effect and the high-field Wigner crystal. 
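For orientation, it helps to translate the quoted values of $`r_s`$ into laboratory magnetic fields before returning to the inhomogeneous density-wave states. The minimal sketch below evaluates the definition of $`r_s`$ given above; the GaAs material parameters (dielectric constant about 12.9, effective mass about 0.067 electron masses) are standard values assumed here for illustration, not quantities taken from the text.

```python
import numpy as np

hbar = 1.0545718e-34   # J s
e    = 1.60217663e-19  # C
me   = 9.1093837e-31   # kg
eps0 = 8.8541878e-12   # F/m

def r_s(B, eps_r=12.9, m_eff=0.067 * me):
    """Landau-level mixing parameter r_s = (e^2 / (eps * l_0)) / (hbar * omega_c)."""
    l0 = np.sqrt(hbar / (e * B))                      # magnetic length
    coulomb = e**2 / (4.0 * np.pi * eps0 * eps_r * l0)
    cyclotron = hbar * e * B / m_eff
    return coulomb / cyclotron

for B in (0.2, 0.5, 1.0, 5.0):
    print(f"B = {B:4.1f} T  ->  r_s = {r_s(B):.2f}")
# r_s scales as B**(-1/2), so strong Landau-level mixing (r_s of several)
# corresponds to rather low fields in GaAs.
```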
In HF, such states are described by nonzero expectation values $$\mathrm{\Delta }_{\sigma n,\sigma ^{}n^{}}(\stackrel{}{Q})=\frac{2\pi l_0^2}{L^2}e^{iQ_xX}a_{\sigma n,XQ_yl_0^2/2}^{}a_{\sigma ^{}n^{},X+Q_yl_0^2/2}$$ (3) where $`L^2`$ is the area of the system, and $`\stackrel{}{Q}`$ are the set of reciprocal vectors of some lattice. Since we have two spin flavors, we can have states with spin mixing or not. Using these expectation values, one decouples the interaction term. One then performs a sequence of canonical transformations which reduces the problem to diagonalizing a matrix for every point in the magnetic Brillouin zone. The dimension of this matrix is connected to the number of flux quanta per unit cell, and the number $`n_{LL}`$ of LLs kept (typically I keep $`n_{LL}=10`$ levels). If the flux per unit cell is $`\varphi =p\varphi _0`$ (here $`\varphi _0`$ is the flux quantum), then each Landau level breaks up into $`p`$ nonoverlapping subbands. I have examined density wave states with two and three flux quanta per unit cell, with varying number of majority-spin subbands occupied, and for the square and triangular lattices. In the regime $`3r_s9`$ that I will mostly focus on, the triangular lattice with two flux quanta per unit cell, no spin-mixing, and a total polarization of half the maximal polarization turns out to have the lowest energy among all the density wave states that I studied. Anticipating the result that this is a Hall crystal, I will call this the partially polarized Hall crystal (PPHC). This state is the integer analog of the partially polarized density wave state for $`\nu =2/5`$ that I have proposed to explain the direct spin polarization measurements of Kukushkin et al. Figure 1 presents the results for the ground state energy for the translationally invariant singlet (S) state, the fully polarized (FP) state, and for the triangular PPHC state for the pure Coulomb interaction ($`\lambda =0`$) and for a sample with finite thickness ($`\lambda =0.4l_0`$). For $`\lambda =0`$ the singlet is the lowest state among these for $`r_s2.12`$, while the PPHC state becomes the lowest energy state for $`r_s6`$. For $`\lambda =0.4l_0`$ there is a direct transition from the S state to the PPHC state at $`r_s4.5`$, and the FP state is never the ground state. The PPHC state is distinguished from the S and the FP states by a density wave and a partial polarization. Another very important topological property of a crystal state was elucidated by Thouless and co-workers. Suppose one considers a set of noninteracting electrons in the presence of a periodic potential. Then one can ask how much charge is transported when the lattice potential is adiabatically dragged by a lattice translation vector. It was shown that in the thermodynamic limit, if the chemical potential lies in a gap, this charge transported is quantized, and characterized by an integer Chern index. In integer quantum Hall systems with a periodic potential, two integers, the quantized Hall conductance, and the above-mentioned Chern number, characterize each state. More generally, one can ask whether crystalline and quantized Hall order can coexist, as was suggested by the cooperative ring-exchange theory. This question was answered in the affirmative and the physical significance of the second integer was clarified by Tesanovic, Axel, and Halperin. 
Given the two integers, the average density obeys the equation $$\overline{\rho }=n_H\rho _\varphi +n_CA_0^1$$ (4) where $`n_H=h\sigma _{yx}/e^2`$ is the integer characterizing the Hall conductance, $`\rho _\varphi `$ is the density of flux quanta, $`A_0`$ is the area of the unit cell, and $`n_C`$ is the Chern number describing the adiabatic transport of charge. The usual quantum Hall states have $`n_H0`$, but $`n_C=0`$, while the usual Wigner crystals in the quantum Hall regime have $`n_H=0`$, but $`n_C0`$. Tesanovic et al considered the case when there was a density wave, but with $`n_C=0`$, which they called a full Hall crystal. They also labelled states with nonzero values of both integers as partial Hall crystals (not to be confused with partial polarization!). They explicitly constructed a (rotationally anisotropic) interaction with two-body and four-body parts for which they were able to show that the ground state was a full Hall crystal. One of their most important results concerns the low-energy collective modes of the various states. It has long been known that the Wigner crystal has a single gapless magnetophonon collective mode with a dispersion of $`\omega q^{3/2}`$ (for the long-range Coulomb interaction). This arises because the magnetic field mixes the usual ($`B=0`$) linearly dispersing longitudinal and transverse lattice modes, both of which transport charge. After mixing, one mode is pushed up to $`\omega _c`$, and the other is the magnetophonon. Tesanovic et al explicitly computed the collective modes for the full Hall crystal and showed that there are only two linearly dispersing gapless modes. Simply put, for $`n_C=0`$, small $`q`$ oscillations of the lattice produce no charge motion, and thus no magnetophonon. In view of the above, it is interesting to ask for the values of the two integers characterizing the PPHC states. Here there are two spin flavors, and for charge motion we can treat the two additively. In this calculation (and in the collective mode calculation that follows), I employ a trick to keep only the active levels, the $`n=1`$ majority-spin LL and the $`n=0`$ minority-spin LL. The trick is to make $`w_cE_Z`$ large compared to $`e^2/\epsilon l_0`$, which eliminates LL-mixing. The PPHC state in this regime is adiabatically connected to the PPHC state at large $`r_s`$ and $`E_Z=0`$ (no gaps close as $`E_Z`$ is decreased and LL-mixing is introduced). Therefore the two Chern numbers, and by implication, the structure of the low-energy collective charge modes, cannot change as one turns on LL-mixing. The integers can be easily computed by adapting the results of Tesanovic et al for the connection between $`n_H`$ and $`n_C`$, $$n_C=\frac{\varphi }{p\varphi _0}n_b\frac{\varphi }{\varphi _0}n_H$$ (5) where $`n_b`$ is the number of filled nonoverlapping subbands (=4 including both spins for our case), and by borrowing the results of Yoshioka (see also MacDonald) for $`n_H`$ for the triangular lattice periodic potential. To summarize Yoshioka’s results, if, in a partially filled LL the electrons form an electron-like Wigner crystal, the contribution to $`n_H`$ from this LL is zero, while if they form a hole-like Wigner crytal, they contribute 1 towards $`n_H`$. Which type of crystal the electrons like to form depends in turn on the sign of the effective potential $`V(\stackrel{}{Q})`$. 
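The bookkeeping implied by Eq. (5) reduces to one line of arithmetic. The minimal sketch below simply evaluates it for the values quoted in the text ($`n_b=4`$ filled subbands and $`p=2`$ flux quanta per unit cell) and anticipates the two cases worked out next.

```python
def chern_from_hall(n_b, p, n_H):
    """Eq. (5) with phi = p*phi_0 flux quanta per unit cell:
    n_C = (phi/(p*phi_0))*n_b - (phi/phi_0)*n_H = n_b - p*n_H."""
    return n_b - p * n_H

# nu = 2 crystal states discussed in the text: n_b = 4, p = 2.
for label, n_H in (("triangular lattice", 2), ("square lattice", 1)):
    n_C = chern_from_hall(n_b=4, p=2, n_H=n_H)
    print(f"{label}: n_H = {n_H}  ->  n_C = {n_C}")
# n_H = 2 gives n_C = 0 (a full Hall crystal);
# n_H = 1 gives n_C = 2 (a partial Hall crystal).
```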
In our case, it turns out that the $`n=0`$ minority spin electrons form an electron-like Wigner crystal, while the $`n=1`$ majority spin electrons form a hole-like Wigner crystal (the $`n=0`$ majority-spin electrons occupy a full LL, and therefore contribute $`n_H=1`$). This leads to $`n_H=2`$, $`n_C=0`$, implying that this is a full Hall crystal. Note that the entire Hall current is carried by majority spin electrons, which has implications for spin-polarized transport. Partially polarized square lattice crystalline states also exist. They are never lower in energy than the triangular ones for the model interaction I have chosen (though they are quite close). One can compute the Chern indices for this state as well, by adapting the results of Hatsugai and Kohmoto. The difference here is that a fully gapped half-filled LL with two flux quanta per unit cell contributes $`\pm 1`$ to $`n_H`$, depending on the sign of the effective potential. I find that this state is a partial Hall crystal, which has $`n_H=1`$ and $`n_C=2`$. This has the amusing feature that its Hall conductance is $`e^2/h`$ despite a filling of $`\nu =2`$! Let us now turn to collective excitations. In a Wigner crystal one finds a gapless $`q^{3/2}`$ magnetophonon, and in a full Hall crystal one finds two linearly dispersing gapless modes. Thus it is natural to expect both sets of modes in a partial Hall crystal. Furthermore, since we have an additional spin degree of freedom there should be more modes than in the spin-polarized case. I have computed the collective modes around the HF solution in the time-dependent HF approximation (TDHF) for both the triangular and square lattice crystalline states. As explained by Cote and MacDonald, one can reduce the computation of collective modes to the diagonalization of a large matrix, from whose eigenvalues and eigenvectors one computes a response function. The poles of this response function give the physical collective modes. The imaginary part of the charge response function as a function of $`\omega `$ for the triangular lattice is shown in Figure 2 at $`ql_0=0.03`$. The feature at $`\omega 0.007`$ is an optical mode, while the sharp features at $`\omega 0.001,0.002,0.0033`$ are gapless linearly dispersing charge modes. It can be seen that the optical mode has most of the weight. Now one follows these features as a function of $`q`$. The resulting set of dispersions for the triangular lattice is shown in Figure 3. As can be seen, there is no magnetophonon mode dispersing as $`q^{3/2}`$, while there are several linearly dispersing collective modes. All the linearly dispersing modes extrapolate to $`\omega =0`$ at $`q=0`$ within error, showing that they are indeed gapless. In Figure 4 we see the corresponding set of dispersions for the square lattice crystalline state. Here, in addition to the linearly dispersing gapless modes, the $`q^{3/2}`$ magnetophonon (symbolized by stars) does make an appearance, as expected for a partial Hall crystal. Let us now turn to other related work. Very recently, Park and Jain have performed a collective mode analysis (for zero thickness, $`\lambda =0`$) of the S and FP states in the $`(r_s,E_Z)`$ plane. Concentrating on their results for $`E_Z=0`$, we see that both states are unstable for $`r_s3`$ and greater. To what state might this instability lead? The PPHC state is definitely not a candidate at this small $`r_s`$. 
I have found another state with equal occupations of the two spin flavors ($`S_{z,total}=0`$), with a triangular lattice density wave with spin-mixing, whose ground state energy in HF is lower than that of the S and FP states for $`1.75r_s2.7`$. This state is a full Hall crystal with $`n_H=2`$ and $`n_C=0`$. Strangely enough, this state does not have the full symmetry of the triangular lattice, implying that the triangular lattice is not the optimal structure. Some variant of this spin-mixed density wave is likely to be the ground state for smaller $`r_s`$. These are likely to be spin-density waves but total singlets, raising the possibility of an inhomogeneous quantum Hall antiferromagnet at $`\nu =2`$ (an analogous state at the $`\nu =2`$ edge has been explored recently). The energy of this state is relatively higher than the PPHC state for larger $`r_s`$, so I believe that the PPHC state is still the ground state at large $`r_s`$. I am intensively exploring various spin-mixed states at small $`r_s`$ to resolve this issue. There are also experimental results on the $`\nu =2`$ system. Recently, Eriksson et al have measured the collective excitations of the $`\nu =2`$ system by inelastic light scattering. They find that while they see a clear signature of the singlet nature of the ground state for $`r_s3.3`$, with a three-fold Zeeman split spin-density excitation, the situation changes for $`r_s3.3`$. Here they observe two nondispersing peaks which they interpret as two roton-like critical points in the dispersion around a singlet state which has been modified to include Fermi-liquid like parameters. They further see the energies of these peaks decreasing linearly as $`r_s`$ is increased, suggesting another transition. It is possible that the first transition is associated with a transition to the FP or a spin-mixed state, while the second could be the transition to the PPHC state. Certainly, as one approaches the transition to the PPHC one expects to see the would-be linearly dispersing modes soften if the transition is second-order or weakly first order (see Tesanovic et al for an example of this). However, further measurements, specifically of the spin polarization, the Hall conductance, and the collective modes for $`r_s6`$ are needed to uniquely determine the nature of the state. In summary, I have shown that there exist partially polarized Hall crystal states which are likely to be ground states of the $`\nu =2`$ quantum Hall system at around $`r_s6`$. These are HF results, and subject to the usual caveat: Fluctuations beyond HF can alter the energies of various states. However, in fully gapped systems such as these, one expects HF to be not too far off. The triangular lattice PPHC state is, to my knowledge, the first realistic full Hall crystal, and has only linearly dispersing low energy modes for small $`q`$. The square lattice crystal state is a partial Hall crystal, with both a $`q^{3/2}`$ magnetophonon, and linearly dispersing modes. The square lattice PPHC state also has an unusual Hall conductance of $`e^2/h`$ despite a filling of $`\nu =2`$. It is a pleasure to thank J.K.Jain, A.H.MacDonald, Z.Tesanovic, and especially H.A.Fertig for illuminating conversations.
# Observation of harmonic generation and nonlinear coupling in the collective dynamics of a Bose condensate ## Abstract We report the observation of harmonic generation and strong nonlinear coupling of two collective modes of a condensed gas of rubidium atoms. Using a modified TOP trap we changed the trap anisotropy to a value where the frequency of the $`m=0`$ high-lying mode corresponds to twice the frequency of the $`m=0`$ low-lying mode, thus leading to strong nonlinear coupling between these modes. By changing the anisotropy of the trap and exciting the low-lying mode we observed significant frequency shifts of this fundamental mode and also the generation of its second harmonic. Ever since the first Bose condensed gases were produced , collective excitations have played a key role in the theoretical models and their experimental verification. Measurements of collective excitation frequencies may be compared with theoretical predictions and the first measurements of the lowest collective modes of a Bose gas were carried out soon after the first condensates were made . Those experiments verified that the Gross-Pitaevskii (GP) equation, also called the nonlinear Schrödinger equation (NLSE), gives a very accurate prediction of the frequencies of the measured collective modes. The nonlinearity of the condensate is manifested as a term in the GP equation proportional to the mean-field or condensate density. In this experiment we have observed effects arising directly from this term. The spectroscopy of the excited states of the condensate has been extended to include measurements of damping rates and frequency shifts as a function of temperature (in the range $`0<T/T_c<1`$, where $`T_c`$ is the critical temperature of condensation). The finite temperature measurements test dynamical aspects of the theory which require higher order terms in the calculations and as yet there is not full agreement . Recent theoretical work has contributed to a better understanding of damping of the collective excitations at finite temperature . Although mechanisms that lead to damping are complex one may consider two broad types of behavior. Firstly, energy may be lost from a mode by being transferred into a multitude of other modes through the nonlinear coupling that is intrinsic to condensates. The second cause of damping is scattering of non-condensate particles in so-called Landau processes. The relative importance of these two processes depends on the shape of the condensate. In this work we have been able to vary the aspect ratio of the condensate around a value where nonlinear coupling is large and hence likely to dominate over other processes. In the hydrodynamic limit which applies to condensates with a large number of atoms, the GP equation reduces to the hydrodynamic equations of nondissipative fluid dynamics. In this limit Stringari found analytic solutions for the frequencies of the collective modes and Edwards et al. extended the calculations to the non-hydrodynamic region by direct integration of the GP equation. The measured oscillation frequencies correspond to those obtained from linear response theory only in the limit of small driving and response amplitudes (zero amplitude limit). For strong driving the inherent nonlinearity in the condensate becomes apparent and the actual response frequency is shifted from the zero amplitude limit, $`\omega _0`$. 
The shift $`\mathrm{\Delta }\omega `$ is predicted to be proportional to the square of the response amplitude to first order approximation, i.e. $`\mathrm{\Delta }\omega /\omega _0=A^2\delta (\lambda )`$, where $`A`$ is the excitation response amplitude, $`\delta (\lambda )`$ is a nonlinearity factor which varies for different trap geometries characterized by $`\lambda =\omega _z/\omega _r`$ (the ratio of axial to radial trap frequencies). Thus a frequency shift of the collective excitation can be achieved in two ways: either by increasing the driving and consequently the response amplitude of the oscillation or by changing the trap geometry and hence the nonlinearity factor. In previous work only the amplitude dependence of the frequency shift for a fixed trap geometry has been studied . This paper describes the use of a trap with adjustable anisotropy to study the effects of trap geometry on collective excitations. The three lowest collective modes of a condensate in an axially symmetric trap ($`\omega _x=\omega _y=\omega _r`$) are the $`m=2`$ mode, the low-lying $`m=0`$ mode and the high-lying $`m=0`$ mode, where $`m`$ is the azimuthal angular momentum (the trivial centre of mass motion is not considered). The $`m=2`$ mode corresponds to a quadrupole type excitation in the radial plane. The low-lying $`m=0`$ mode corresponds to a radial oscillation of the width which is out of phase with an oscillation along the trap axis. The high-lying $`m=0`$ mode is an in-phase compressional mode along all directions (breathing mode). In our experiment we changed the trap anisotropy parameter $`\lambda `$ around a resonance between the low-lying and the high-lying $`m=0`$ modes. At this resonance two quanta of excitation of the low-lying mode are converted into one quantum of excitation of the high-lying mode, i.e. the second harmonic of the low-lying collective mode is excited. The theoretical plot (for the hydrodynamic limit) in Fig.1a shows that at a trap anisotropy of $`\lambda 1.95`$ (resonance) the frequency of the high-lying $`m=0`$ mode corresponds to exactly twice the frequency of the low-lying mode. The nonlinearity factor $`\delta (\lambda )`$ also changes dramatically across the resonance. A theoretical plot of $`\delta (\lambda )`$ against $`\lambda `$ is given in Fig.1b using the formula given in . Our experimental apparatus for creating Bose condensates is described elsewhere and we only briefly mention the relevant features here. Atoms are loaded into a TOP trap and then evaporatively cooled to the quantum degenerate regime. We cool the atoms to well below the critical temperature where no thermal cloud component is observable ($`0.5T_c`$). For this experiment the usual TOP trap configuration with a rotating field in the radial plane was modified by the addition of a pair of ’z-coils’ in Helmholtz configuration which produced an oscillating axial bias field along the (vertical) quadrupole coil axis. The total field in the trap is $`𝐁(t)=B_q^{}\left(x𝐞_x+y𝐞_y2z𝐞_z\right)+`$ (1) $`B_r\left(\mathrm{cos}\omega _tt𝐞_x+\mathrm{sin}\omega _tt𝐞_y\right)+B_z\mathrm{cos}2\omega _tt𝐞_z.`$ (2) The first term is the static quadrupole field and is written in terms of its gradient $`B_q^{}`$ in the radial direction. The second term is the conventional TOP field of magnitude $`B_r`$ rotating in the xy-plane at a frequency $`\omega _t`$. The third term is the additional z-bias field of magnitude $`B_z`$ modulated at twice the frequency of the field in the xy-plane. 
Hence the locus of the quadrupole field does not simply follow a planar circular path like that of a standard TOP trap but it also moves up and down following a saddle shape. The time average of this field configuration yields the value for a trap frequency ratio that is smaller than $`\lambda =\sqrt{8}`$ in a TOP trap with $`B_z=0`$. By increasing $`B_z`$ the anisotropy $`\lambda `$ is tuned continuously from $`2.83`$ to $`1.6`$. The practical limit on $`\lambda `$ in our present apparatus arises because of the noise created by the amplifier driving the z-coils which induces Zeeman substate changing transitions for large amplitudes of $`B_z`$. This limit may be overcome to create a spherical trap ($`\lambda =1`$) as described in , which also gives more details of the technique. We calibrated the trap by measuring the centre of mass oscillations of a thermal cloud for various amplitudes of $`B_z`$. The measured frequencies agree to better than $`1\%`$ with a theoretical prediction obtained by numerically time-averaging the field in Eq.2, i.e. integrating over one complete TOP cycle. Note that the radial trap frequency remains approximately constant whereas the axial frequency decreases as the amplitude $`B_z`$ increases. The trap frequencies for this experiment were $`126`$ Hz radially and the axial frequencies varied from $`356`$ Hz at $`B_z/B_r=0`$ ($`\lambda =\sqrt{8}`$) to $`194`$ Hz at $`B_z/B_r=2.3`$ ($`\lambda =1.6`$). Once the trap had been calibrated we investigated the behavior of the $`m=0`$ mode for various anisotropies. To excite the $`m=0`$ low-lying mode, the TOP-field amplitude $`B_r`$ was modulated sinusoidally at a frequency of $`225`$ Hz (matching the hydrodynamic mode frequency in a $`126`$ Hz radial trap with $`B_z=0`$) for a period of $`8`$ cycles ($`35`$ ms). Note that the actual measured mode frequency for the above trap parameters differs from $`225`$ Hz because the finite number of atoms leads to deviations from the hydrodynamic limit. Also, the change of mode frequency with trap anisotropy (see Fig.1a) means that the driving frequency does not exactly match the hydrodynamic mode frequency for traps with anisotropies different from $`2.83`$ (the value for $`B_z=0`$), but it is close enough to excite the fundamental mode for all our measurements. After being excited the condensate was left to oscillate for variable times of between $`2`$ to $`30`$ ms before the magnetic trap was switched off and the condensate allowed to expand freely for $`12`$ ms to make a time-of-flight (TOF) measurement. Then absorption images were taken of the condensate from which we extracted both the radial and axial widths and the total number of atoms in the condensate (typically $`2\times 10^4`$). The radial TOP field produced an $`8\%`$ modulation of the trap spring constant and we measured response amplitudes of around $`20\%`$ in TOF. These correspond to even smaller amplitudes of oscillation for the condensate before expansion which produce no measurable frequency shift in the ordinary TOP trap (for $`\lambda =2.83`$ the nonlinear coupling $`\delta `$ for the $`m=0`$ mode is close to zero, see Fig.1b). Even for strong driving and large response amplitudes no significant frequency shift of the $`m=0`$ mode from the zero amplitude limit has been observed before . The anisotropy of our trap was tuned across the resonance between the low and the high-lying $`m=0`$ mode. 
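The calibration just described — numerically time-averaging the field of Eq. (2) over one complete TOP cycle — is straightforward to reproduce. The sketch below is a minimal version of that calculation, assuming the usual time-averaged-orbiting-potential picture in which the trapping potential is proportional to the cycle-averaged field magnitude; the field amplitudes used are placeholders, since only the anisotropy ratio $`\lambda =\omega _z/\omega _r`$ matters here.

```python
import numpy as np

def frequency_ratio(Bz_over_Br, Bq=1.0, Br=1.0, nt=4000, h=1e-3):
    """Time-average |B(r,t)| of Eq. (2) over one TOP period and estimate
    lambda = omega_z/omega_r from the curvatures of <|B|> at the origin.
    Arbitrary units; only the ratio of curvatures is meaningful."""
    Bz = Bz_over_Br * Br
    wt = np.linspace(0.0, 2.0 * np.pi, nt, endpoint=False)   # one TOP cycle

    def mean_B(x, y, z):
        Bx = Bq * x + Br * np.cos(wt)
        By = Bq * y + Br * np.sin(wt)
        Bz_tot = -2.0 * Bq * z + Bz * np.cos(2.0 * wt)
        return np.mean(np.sqrt(Bx**2 + By**2 + Bz_tot**2))

    def curvature(axis):
        plus, minus = [0.0, 0.0, 0.0], [0.0, 0.0, 0.0]
        plus[axis], minus[axis] = h, -h
        return (mean_B(*plus) - 2.0 * mean_B(0.0, 0.0, 0.0) + mean_B(*minus)) / h**2

    return np.sqrt(curvature(2) / curvature(0))   # omega_z / omega_x

for ratio in (0.0, 1.0, 2.3):
    print(f"B_z/B_r = {ratio:3.1f}  ->  lambda = {frequency_ratio(ratio):.2f}")
# B_z/B_r = 0 should recover the standard TOP value lambda = sqrt(8) ~ 2.83;
# lambda decreases towards the experimental range as B_z/B_r grows.
```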
We observed the occurrence of the second harmonic frequency when we tuned our trap close to resonance (Fig.2b) and on resonance it was the dominant component (Fig.2c). It arises from the excitation due to nonlinear coupling of the $`m=0`$ high-lying mode and could only be observed in the oscillations of the axial width. The radial width oscillated at a single frequency corresponding to the $`m=0`$ low-lying mode and did not have a second frequency component (Fig.2d,e). This is also what we found in simulations based on the hydrodynamic equations and indicates that the geometries of the low and high-lying modes are such that the excitation of the high-lying mode can only be observed by measuring the axial widths. We obtained the low-lying mode frequency from a single frequency fit to the radial oscillations and also from a two frequency fit to the axial oscillations and found good agreement between the two values (exponential decay factors were included in both fits). The frequency of the second harmonic (corresponding to the high-lying mode) was obtained as the second frequency component in the fit along the axial direction. We found that the fundamental frequency was strongly suppressed on resonance in both the radial and axial oscillations as shown in Fig.2c,f. This indicates that the low-lying mode is not populated anymore and all the energy of the excitation has been transferred to the high-lying mode . The large error for the point at resonance ($`\lambda =1.93`$) in Fig.3 reflects the difficulty in obtaining the fundamental component from the data in Fig.2c. The frequencies of the fundamental mode for various trap geometries were normalized by the corresponding radial trap frequency and plotted as a function of the trap anisotropy parameter $`\lambda `$ in Fig.3a. The dotted line is the prediction of the small amplitude mode frequencies in the hydrodynamic limit as given in . The solid line was calculated by Hutchinson from the GP equation for the atomic number and trap frequencies of our experiment. Figure 3a shows that the measured mode frequency is above the hydrodynamic prediction for the ordinary TOP trap with $`\lambda =2.83`$ in agreement with previous experiments . Far from resonance the data agree very well with the finite number prediction (solid line). However, when approaching the resonance from above the nonlinear coupling becomes large enough to shift the mode frequency below the predicted curve. On the lower ($`\lambda <1.95`$) side of the resonance the frequency shift becomes positive in agreement with the predicition illustrated in Fig.1b. One can see that the measured frequency shifts follow a dispersive curve which crosses zero at $`\lambda =1.93\pm 0.02`$, in good agreement with the prediction of $`\lambda =1.95`$ for the hydrodynamic regime . Figure 3b shows a plot of the ratio of the squared amplitudes (higher to lower-lying mode) which is proportional to the ratio of the mode energies. Away from resonance we observed no second harmonic contribution and the ratio (and its error) is taken as zero. These data have been fitted by a simple Lorentzian centered at $`\lambda =1.94\pm 0.02`$ with HWHM$`=0.035\pm 0.005`$. The damping rates are plotted against the anisotropy $`\lambda `$ in Fig. 3c. They were around $`20s^1`$ for the fundamental and $`50s^1`$ for the second harmonic frequency. In conclusion, we have a TOP trap in which the anisotropy can be tuned whilst keeping the trap frequencies high . 
This offers the opportunity to investigate a range of interesting phenomena in Bose-Einstein condensates related to the trap geometry, in particular the greatly enhanced nonlinear effects for certain anisotropy parameters $`\lambda `$. We observed the generation of the second harmonics of the $`m=0`$ low-lying mode frequency as well as a dispersive resonance in the mode frequency at $`\lambda =1.93\pm 0.02`$. Future improvements and modifications of our experiment will allow us to study another predicted resonance at $`\lambda =1.5`$ as well as additional nonlinear phenomena such as the collapse and revival of oscillations and the onset of chaos for stronger driving amplitude . A particularly interesting case is the spherical trap ($`\lambda =1`$) where all frequencies are degenerate. Quantitative theoretical predictions have been made for the damping rates in a spherical trap which can be compared with the experiment to provide a stringent test. We would like to thank all the members of the Oxford theoretical BEC group, in particular D. Hutchinson for providing the theoretical prediction, K. Burnett, M. Davis, S. Morgan and M. Rusch for their help and many useful discussions. This work was supported by the EPSRC and the TMR program (No. ERB FMRX-CT96-0002). O.M. Maragò acknowledges the support of a Marie Curie Fellowship, TMR program (No. ERB FMBI-CT98-3077).
# The black hole in IC 1459 from HST observations of the ionized gas diskBased on observations with the NASA/ESA Hubble Space Telescope obtained at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Incorporated, under NASA contract NAS5-26555. ## 1 Introduction Supermassive central black holes (BH) have now been discovered in more than a dozen nearby galaxies (e.g., Kormendy & Richstone 1995; Ford et al. 1998; Ho 1998; Richstone 1998, and van der Marel 1999a for recent reviews). BHs in quiescent galaxies were mainly found using stellar kinematics while the BHs in active galaxies were detected through the kinematics of central gas disks. Other techniques deployed are VLBI observations of water masers (e.g., Miyoshi et al. 1995) and the measurement of stellar proper motions in our own Galaxy (Genzel et al. 1997; Ghez et al. 1998). The BH masses measured in active galaxies are all larger than a few times $`10^8\mathrm{M}_{}`$, while the BH masses in quiescent galaxies are often smaller. The number of accurately measured BHs is expected to increase rapidly in the near future, especially through the use of STIS on board HST. This will establish the BH ‘demography’ in nearby galaxies, yielding BH masses as function of host galaxy properties. In this respect two correlations in particular have been suggested in recent years. First, a correlation between BH mass and host galaxy (spheroid) optical luminosity (or mass) was noted (e.g., Kormendy & Richstone 1995; Magorrian et al. 1998; van der Marel 1999b). However, this correlation shows considerable scatter (a factor $`10`$ in BH mass at fixed luminosity). The scatter might be influenced by selection effects (e.g., it is difficult to detect a low mass BH in a luminous galaxy) and differences in the dynamical modeling. Second, a correlation between BH mass and either core or total radio power of the host galaxy was proposed (Franceschini, Vercellone, & Fabian 1998). However, the available sample is still small and incomplete. Establishing the BH masses for a large range of optical and radio luminosities is crucial to determine the nature of galactic nuclei. An accurate knowledge of BH demography will put constraints on the connection between BH and host galaxy formation and evolution and the frequency and duration of activity in galaxies harboring BHs. In this paper we measure the BH mass of IC 1459. IC 1459 is an E3 giant elliptical galaxy and member of a loose group of otherwise spiral galaxies. It is at a distance of $`16.5h^1\mathrm{Mpc}`$ with $`M_V=21.19`$ (Faber et al. 1989). Williams & Schwarzschild (1979) noted twists in the outer optical stellar isophotes. Stellar spiral ‘arms’ outside the luminous stellar part of the galaxy were detected in deep photographs (Malin 1985). Several stellar shells at tens of kpc from the center were discovered by Forbes & Reitzel (1995). A remarkable feature is the counter-rotating stellar core (Franx & Illingworth 1988) with a maximum rotation of $`170\mathrm{km}\mathrm{s}^1`$. IC 1459 also has an extended emission gas disk (diameter $`100^{\prime \prime }`$) with spiral arms (Forbes et al. 1990, Goudfrooij et al. 1990) aligned with the galaxy major axis. The disk rotates in the same direction as the outer part of the galaxy (Franx & Illingworth 1988). The nuclear region of IC 1459 has line ratios typical of the LINER class (see e.g., Heckman 1980, Osterbrock 1989 for the definition of LINERS). A warped dust lane is also present. 
It is misaligned by $`15^{}`$ from the galaxy major axis and some dust patches are observed at a radius of $`2^{\prime \prime }`$ (Carollo et al. 1997). IC 1459 has a blue nuclear optical source with $`V=18.3`$ (Carollo et al. 1997; Forbes et al. 1995) which is unresolved by HST. It also has a variable compact radio core (Slee et al. 1994). There is no evidence for a radio-jet down to a scale of $`1^{\prime \prime }`$ (Sadler et al. 1989). IC 1459 has a hard X-ray component, with properties typical of low-luminosity AGNs (Matsumoto et al. 1997). Given the abovementioned properties, IC 1459 might best be described as a galaxy in between the classes of active and quiescent galaxies. This makes it an interesting object for extending our knowledge of BH demography, in particular since there are only few other galaxies similar to IC 1459 for which an accurate BH mass determination is available. We therefore made IC 1459, and in particular its central gas disk, the subject of a detailed study with the Hubble Space Telescope (HST). We observed the emission gas of IC 1459 with the Second Wide Field and Planetary Camera (WFPC2) through a narrow-band filter around H$`\alpha `$+\[NII\] and took spectra with the Faint Object Spectrograph (FOS) at six locations in the inner $`1^{\prime \prime }`$ of the disk. In Section 2 we discuss the WFPC2 observations and data reduction. In Section 3 we describe the FOS observations and data reduction, and we present the inferred gas kinematics. To interpret the data we construct detailed dynamical models for the kinematics of the H$`\beta `$ and H$`\alpha `$+\[NII\] emission lines in Section 4, which imply the presence of a central BH with mass in the range $`1`$$`4\times 10^8\mathrm{M}_{}`$. In Section 5 we discuss how the kinematics of other emission line species differ from those for H$`\beta `$ and H$`\alpha `$+\[NII\], and what this tells us about the central structure of IC 1459. In Section 6 we present dynamical models for ground-based stellar kinematical data of IC 1459, for comparison to the results inferred from the HST data. We summarize and discuss our findings in Section 7. We adopt $`H_0=80\mathrm{km}\mathrm{s}^1\mathrm{Mpc}^1`$ throughout this paper. This does not directly influence the data-model comparison for any of our models, but does set the length, mass and luminosity scales of the models in physical units. Specifically, distances, lengths and masses scale as $`H_0^1`$, while mass-to-light ratios scale as $`H_0`$. ## 2 Imaging ### 2.1 WFPC2 Setup and Data Reduction We observed IC 1459 in the context of HST program GO-6537. We used the WFPC2 instrument (described in, e.g., Biretta et al. 1996) on September 20, 1996 to obtain images in two narrow-band filters. The observing log is presented in Table 1. The ‘Linear Ramp Filters’ (LRFs) of the WFPC2 are filters with a central wavelength that varies as a function of position on the detector. The LRF FR680P15 was used as ‘on band’ filter, with the galaxy position chosen so as to center the filter transmission on the H$`\alpha `$+\[NII\] emission lines. The narrow-band filter F631N was chosen as ‘off-band’ filter, and covers primarily stellar continuum<sup>1</sup><sup>1</sup>1The F631N filter covers some of the redshifted \[OI\]6300 emission, but the equivalent width of this line is small enough to have negligible influence on the off-band subtraction.. The position of the galaxy on the chip in the off-band observations was chosen to be the same as in the on-band observations. 
In all images the galaxy center was positioned on the PC chip, yielding a scale of $`0.0455^{\prime \prime }`$/pixel. The images were calibrated with the standard WFPC2 ‘pipeline’, using the most up to date calibration files. This reduction includes bias subtraction, dark current subtraction and flat-fielding. A flatfield was not available for the LRF filter, so we used the flatfield of F658N, a narrow-band filter with similar central wavelength (6590 Å). Three back to back exposures were taken through each filter. In each case, the third exposure was offset by (2,2) PC pixels to facilitate bad pixel removal. The alignment of the exposures (after correction for intentional offsets) was measured using both foreground stars and the galaxy itself, and was found to be adequate. For each filter we combined the three available images without additional shifts, but with removal of cosmic rays, bad pixels and hot pixels. Construction of a H$`\alpha `$+\[NII\] emission image requires subtraction of the stellar continuum from the on-band image. To this end we first fitted isophotes to estimate the ratio of the stellar continuum flux in the on-band and off-band image. This ratio could be fitted as a slowly varying linear function of radius in regions with no emission flux. The off-band image was multiplied by this ratio and subtracted from the on-band image. The resulting H$`\alpha `$+\[NII\] emission image was calibrated to units of $`\mathrm{erg}\mathrm{s}^1\mathrm{cm}^2`$ using calculations with the STSDAS/SYNPHOT package in IRAF. The resulting flux scale was found to be in agreement with that inferred from our FOS spectra (see Section 3). ### 2.2 The Ionized Gas Disk Figure 1 shows both the F631N stellar continuum image of the central region of IC1459, as well as the H$`\alpha `$+\[NII\] image. The continuum image shows a weakly obscuring warped dust lane across the center which is barely visible in the emission image. This dust lane is evident even more clearly in a $`VI`$ image of IC 1459 (Carollo et al. 1997). The lane makes an angle of $`15^{}`$ with the stellar major axis. The H$`\alpha `$+\[NII\] emission image shows the presence of a gas disk. The existence of this disk was already known from ground-based imaging, which showed that it has a total linear extent of $`100^{\prime \prime }`$ (Goudfrooij et al. 1990; Forbes et al. 1990). The outer parts of the disk show weak spiral structure and dust patches. Inside the central $`10^{\prime \prime }`$ the disk has a somewhat irregular non-elliptical distribution, with filaments extending in various directions. Our HST image shows that the distribution becomes more regular again in the central $`r0.5^{\prime \prime }`$. Throughout its radial extent, the position angle (PA) of the disk coincides with the PA of the stellar distribution. In the case of the central $`0.5^{\prime \prime }`$, we derive from isophotal fits $`\mathrm{PA}37^{}`$ for the gas disk. This agrees roughly with the PA of the major axis of the stellar continuum in the same region, for which the F631N image yields $`\mathrm{PA}34^{}`$. Assuming an intrinsically circular disk, Forbes & Reitzel (1995) infer from the ellipticity $`ϵ=0.5`$ of the gas disk at several arcseconds an inclination of $`60^{}`$. We performed a fit to the contour levels of the extended gas emission published by Goudfrooij et al. (1994), which also yields an inclination of $`60^{}`$. By contrast, the gas distribution in the central $`0.5^{\prime \prime }`$ of the HST image is rounder than that at large radii. 
The ellipticity decreases from $`ϵ=0.37`$ at $`r=1.0^{\prime \prime }`$ to $`ϵ=0.17`$ at $`r=0.25^{\prime \prime }`$ (approximately the smallest radius at which the ellipticity is not appreciably influenced by the HST point spread function (PSF)). While this could possibly indicate a change in the inclination angle of the disk, it appears more likely that the gas disk becomes thicker towards the center. This latter interpretation receives support from an analysis of the gas kinematics, as we will discuss below (see Section 4.4). In the following we therefore assume an inclination angle of $`60^{\circ }`$ for IC 1459, as suggested by the ellipticity of the gas disk at large radii. ### 2.3 The Stellar Luminosity Density For the purpose of dynamical modeling we need a model for the stellar mass density of IC 1459. Carollo et al. (1997) obtained an HST/WFPC2 F814W (i.e., $`I`$-band) image of IC 1459, and from isophotal fits they determined the surface brightness profile reproduced in Figure 2. Carollo et al. corrected their data approximately for the effects of dust obscuration through use of the observed $`V-I`$ color distribution, so dust is not an important factor in the following analysis. To fit the observed surface brightness profile we adopt a parameterization for the three-dimensional stellar luminosity density $`j`$. We assume that $`j`$ is oblate axisymmetric, that the isoluminosity spheroids have constant flattening $`q`$ as a function of radius, and that $`j`$ can be parameterized as $$j(R,z)=j_0(m/a)^{-\alpha }[1+(m/a)^2]^{-\beta },\qquad m^2\equiv R^2+z^2q^{-2}.$$ (1) Here $`(R,z)`$ are the usual cylindrical coordinates, and $`\alpha `$, $`\beta `$, $`a`$ and $`j_0`$ are free parameters. When viewed at inclination angle $`i`$, the projected intensity contours are aligned concentric ellipses with axial ratio $`q^{\prime }`$, where $`q^{\prime 2}=\mathrm{cos}^2i+q^2\mathrm{sin}^2i`$. The projected intensity for the luminosity density $`j`$ is evaluated numerically. In the following we adopt $`i=60^{\circ }`$, based on the discussion in Section 2.2. We take $`q^{\prime }=0.74`$ based on the isophotal shape analysis of Carollo et al. (their figure 1o), which shows an ellipticity $`ϵ`$ of $`0.26`$ in the inner $`10^{\prime \prime }`$ with variations $`<0.05`$. The isophotal PA is almost constant, with a monotonic increase of $`5^{\circ }`$ between $`1^{\prime \prime }`$ and $`10^{\prime \prime }`$. The Carollo et al. results show larger variations in $`ϵ`$ and PA in the inner $`1^{\prime \prime }`$, but these are probably due to the residual effects of dust obscuration. Our model with constant $`ϵ`$ and PA is therefore expected to be adequate in the present context. The projected intensity of the model was fit to the observed surface brightness profile between $`0.15^{\prime \prime }`$ and $`5^{\prime \prime }`$. The best fit model has $`\alpha =0.62`$, $`\beta =0.76`$, $`a=0.90^{\prime \prime }`$ and $`j_0=1.9\times 10^2\mathrm{L}_{\odot }\mathrm{pc}^{-3}`$. Its predictions are shown by the solid curve in Figure 2. The fit was restricted to the range $`R\leq 5^{\prime \prime }`$. As a result, the fit is somewhat poor at larger radii. This can of course be improved by extending the fit range, but with the simple parametrization of equation (1) this would have led to a poorer fit in the region $`R\lesssim 1^{\prime \prime }`$. Since this is the region of primary interest in the context of our spectroscopic HST data (described below), we chose to accept the fit shown in the figure.
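For readers who wish to reproduce the projection, the sketch below evaluates equation (1) with the best-fit parameters quoted above and integrates it numerically along the line of sight. The sky-plane geometry conventions and the conversion of 1″ to roughly 100 pc at the adopted distance are our own choices for illustration, and no PSF convolution is included.

```python
import numpy as np

# Best-fit parameters quoted in the text (lengths in arcsec, j0 in Lsun/pc^3).
alpha, beta, a_arc, j0 = 0.62, 0.76, 0.90, 1.9e2
incl = np.radians(60.0)
q_proj = 0.74
q = np.sqrt((q_proj**2 - np.cos(incl)**2) / np.sin(incl)**2)   # intrinsic flattening
PC_PER_ARCSEC = 100.0   # approximate scale at 16.5/0.8 Mpc (illustrative)

def j_lum(R, z):
    """Luminosity density of equation (1); positions in arcsec."""
    m = np.sqrt(R**2 + (z / q)**2)
    return j0 * (m / a_arc)**(-alpha) * (1.0 + (m / a_arc)**2)**(-beta)

def surface_brightness(xp, yp, smax=50.0, ns=4001):
    """Project j along the line of sight; returns Lsun/pc^2.
    (xp, yp) are sky coordinates in arcsec, xp along the projected major axis."""
    s = np.linspace(-smax, smax, ns)            # line-of-sight coordinate [arcsec]
    ds = s[1] - s[0]
    x = np.full_like(s, xp)
    y = yp * np.cos(incl) + s * np.sin(incl)
    z = yp * np.sin(incl) - s * np.cos(incl)
    R = np.hypot(x, y)
    return j_lum(R, z).sum() * ds * PC_PER_ARCSEC   # ds converted to pc

for r in (0.15, 0.5, 1.0, 5.0):
    print(f"R = {r:4.2f} arcsec :  I ~ {surface_brightness(r, 0.0):.3e} Lsun/pc^2")
```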
The central $`0.15^{\prime \prime }`$ were excluded from the fit because IC 1459 has a nuclear point source. This point source has a blue $`VI`$ color. It is most likely of non-thermal origin (similar to the point source in M87; Kormendy 1992; van der Marel 1994) and associated with the core radio emission in IC 1459. If so, the point source does not contribute to the mass density of the galaxy (which is what we are interested in here), and it is therefore appropriate to exclude it from consideration. In Section 7 we briefly discuss the implications of the alternative possibility that the point source is a cluster of young stars. ## 3 Spectroscopy ### 3.1 FOS Setup and Data Reduction We used the red side detector of the FOS (described in, e.g., Keyes et al. 1995) on November 30, 1996 to obtain spectra of IC 1459. The COSTAR optics corrected the spherical aberration of the HST primary mirror. The observations started with a ‘peak-up’ target acquisition on the galaxy nucleus. The sequence of peak-up stages was similar to that described in van der Marel, de Zeeuw & Rix (1997) and van der Marel & van den Bosch (1998; hereafter vdMB98). We then obtained six spectra, three with the 0.1-PAIR square aperture (nominal size, $`0.086^{\prime \prime }`$) and three with the 0.25-PAIR square aperture (nominal size, $`0.215^{\prime \prime }`$). The G570H grating was used in ‘quarter-stepping’ mode, yielding spectra with 2064 pixels covering the wavelength range from 4569 Å to 6819 Å. Periods of Earth occultation were used to obtain wavelength calibration spectra of the internal arc lamp. At the end of the observations FOS was used in a special mode to obtain an image of the central part of IC 1459, to verify the telescope pointing. Galaxy spectra were obtained on the nucleus and along the major axis. A log of the observations is provided in Table 2. Target acquisition uncertainties and other possible systematic effects caused the aperture positions on the galaxy to differ slightly from those commanded to the telescope. We determined the actual aperture positions from the data themselves, using the independent constraints provided by the target acquisition data, the FOS image, and the ratios of the continuum and emission-line fluxes observed through different apertures. This analysis was similar to that described in Appendix A of vdMB98. The inferred aperture positions are listed in Table 2, and are accurate to $`0.02^{\prime \prime }`$ in each coordinate. The roll angle of the telescope during the observations was such that the sides of the apertures made angles of $`39^{}`$ and $`129^{}`$ with respect to the galaxy major axis. Figure 3 shows a schematic drawing of the aperture positions. Henceforth we use the labels ‘S1’–‘S3’ for the small aperture observations, and ‘L1’–‘L3’ for the large aperture observations. Most of the necessary data reduction steps were performed by the HST calibration pipeline, including flat-fielding and absolute sensitivity calibration. We did our own wavelength calibration using the arc lamp spectra obtained in each orbit, following the procedure described in van der Marel (1997). The relative accuracy (between different observations) of the resulting wavelength scale is $`0.04`$Å ($`2\mathrm{km}\mathrm{s}^1`$). Uncertainties in the absolute wavelength scale are larger, $`0.4`$Å ($`20\mathrm{km}\mathrm{s}^1`$), but influence only the systemic velocity of IC 1459, not the inferred BH mass. 
### 3.2 Gas Kinematics The spectra show several emission lines, of which the following have a sufficiently high signal-to-noise ratio ($`S/N`$) for a kinematical analysis: H$`\beta `$ at 4861 Å; the \[OIII\] doublet at 4959, 5007 Å; the \[OI\] doublet at 6300, 6364 Å; the H$`\alpha `$+\[NII\] complex at 6548, 6563, 6583 Å; and the \[SII\] doublet at 6716, 6731 Å. To quantify the gas kinematics we fitted the spectra under the assumption that each emission line is a Gaussian. This yields for each line the total flux, the mean velocity $`V`$ and the velocity dispersion $`\sigma `$. For doublets we fitted both lines simultaneously under the assumption that the individual lines have the same $`V`$ and $`\sigma `$. The H$`\alpha `$+\[NII\] complex is influenced by blending of the lines, and for this complex we made the additional assumptions that H$`\alpha `$ and the \[NII\] doublet have the same kinematics, and for the \[NII\] doublet that the ratio of the fluxes of the individual lines equals the ratio of their transition probabilities (i.e., 3). Figure 4 shows the observed spectra for each of the five line complexes listed above, with the Gaussian fits overplotted. The figure shows that the observed emission lines are not generally perfectly fit by Gaussians; they often have a narrower core and broader wings. It was shown in vdMB98 that this arises naturally in dynamical models such as those constructed below. In the present paper we will not revisit the issue of line shapes, but restrict ourselves to Gaussian fits (both for the data and for our models). The mean and dispersion of the best-fitting Gaussian are well-defined and meaningful kinematical quantities, even if the lines themselves are not Gaussians. The Gaussian fit parameters for each of the line complexes are listed in Table 3. The listed velocities are measured with respect to the systemic velocity of IC 1459. The systemic velocity was estimated from the HST data themselves, by including it as a free parameter in the dynamical models described below (see Section 4.2). This yields $`v_{\mathrm{sys}}=1783\pm 10\mathrm{km}\mathrm{s}^1`$ (but with the possibility of an additional systematic error due to uncertainties in the FOS absolute wavelength calibration). This result is a bit higher than values previously reported in the literature (e.g., $`v_{\mathrm{sys}}=1707\pm 40\mathrm{km}\mathrm{s}^1`$ by Sadler 1984; $`v_{\mathrm{sys}}=1720\mathrm{km}\mathrm{s}^1`$ by Franx & Illingworth 1988; $`v_{\mathrm{sys}}=1748\pm 42\mathrm{km}\mathrm{s}^1`$ by Da Costa et al. 1991). In fact, systemic velocities that are up to $`150\mathrm{km}\mathrm{s}^1`$ smaller than our value have been reported as well (e.g., Davies et al. 1987; Drinkwater et al. 1997). Figure 5 shows the inferred kinematical quantities for the five line complexes as function of major axis distance. The observational setup provides only sparse sampling along the major axis and with apertures of different sizes, but nonetheless, two items are clear. First, for all apertures and line species there is a steep positive mean velocity gradient across the nucleus (i.e., between observations S1 and S2, or L1 and L2). Second, the velocity dispersion tends to be highest for the smallest aperture closest to the nucleus (observation S1); this is true for all line species with the exception of \[SII\], for which the dispersion peaks for observation S2. 
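To make the line-fitting procedure concrete, the sketch below fits a single kinematic component to a blended doublet in the way described above: both doublet members share the same velocity and dispersion, and for \[NII\] the flux ratio of the two members is fixed at the ratio of their transition probabilities (3). The use of scipy's `curve_fit`, the toy spectrum, and all numerical values are illustrative assumptions rather than the actual reduction code.

```python
import numpy as np
from scipy.optimize import curve_fit

C_KMS = 299792.458
LAM_NII = (6548.05, 6583.45)   # approximate rest wavelengths of the [NII] doublet (Angstrom)

def nii_doublet(lam, flux_strong, v, sigma, continuum):
    """[NII] doublet model: both lines share v (km/s) and sigma (km/s);
    the 6548 line is fixed at 1/3 of the 6583 flux (ratio of transition probabilities)."""
    model = np.full_like(lam, continuum, dtype=float)
    for lam0, flux in zip(LAM_NII, (flux_strong / 3.0, flux_strong)):
        mu = lam0 * (1.0 + v / C_KMS)          # Doppler-shifted centroid
        sig_lam = lam0 * sigma / C_KMS         # dispersion in wavelength units
        model += flux / (np.sqrt(2 * np.pi) * sig_lam) * np.exp(-0.5 * ((lam - mu) / sig_lam) ** 2)
    return model

# toy data, purely illustrative
lam = np.linspace(6500.0, 6640.0, 400)
rng = np.random.default_rng(0)
data = nii_doublet(lam, 3.0, 250.0, 300.0, 1.0) + rng.normal(0.0, 0.05, lam.size)

popt, pcov = curve_fit(nii_doublet, lam, data, p0=(1.0, 0.0, 200.0, 1.0))
perr = np.sqrt(np.diag(pcov))
print("V     = %.0f +/- %.0f km/s" % (popt[1], perr[1]))
print("sigma = %.0f +/- %.0f km/s" % (popt[2], perr[2]))
```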
The steep central velocity gradient and centrally peaked velocity dispersion profile are similar to what has been found for other galaxies with nuclear gas disks (e.g., Ferrarese, Ford & Jaffe, 1996; Macchetto et al. 1997; Bower et al. 1998; vdMB98). The kinematical properties of the different emission line species show both significant similarities and differences. For example, the kinematics of H$`\beta `$ and H$`\alpha `$+\[NII\] are in excellent quantitative agreement. By contrast, \[OIII\] shows a significantly steeper central mean velocity gradient, and both \[OIII\] and \[OI\] have a higher velocity dispersion for several apertures; the central velocity dispersion for \[OIII\] exceeds that for H$`\beta `$ and H$`\alpha `$+\[NII\] by more than a factor two. The kinematics of the \[SII\] emission lines deviates somewhat from that for H$`\beta `$ and H$`\alpha `$+\[NII\], but only for the small apertures. There is no a priori reason to expect identical flux distributions, and hence identical kinematics for the different species, because they differ in their atomic structure, ionization potential, critical density, etc. Differences of similar magnitude have been detected in the kinematics of other gas disks as well (e.g., Harms et al. 1994; Ferrarese, Ford & Jaffe 1996). The former authors studied the gas disk in M87, and also found that the \[OIII\] line indicates a larger mean velocity gradient and higher dispersion than H$`\beta `$ and H$`\alpha `$+\[NII\]. We discuss the differences in the kinematics of the different line species in Section 5, after first having analyzed in detail the kinematics of H$`\beta `$ and H$`\alpha `$+\[NII\] in Section 4. ### 3.3 Ground-based Spectroscopy In our modeling it proved useful to complement the FOS spectroscopy with ground-based data that extends to larger radii. We therefore reanalyzed a major axis long-slit spectrum of IC 1459 obtained at the CTIO 4m telescope. The data were taken with a $`1.5^{\prime \prime }`$-wide slit using a CCD with $`0.73^{\prime \prime }`$ pixels, in seeing conditions with FWHM $`1.9^{\prime \prime }`$. The spectra have a smaller spectral range than the FOS spectra, but do cover the emission lines of H$`\beta `$ and \[OIII\]. Fluxes and kinematics for these lines were derived using single Gaussian fits, as for the FOS spectra. The inferred gas kinematics are listed in Table 4. The stellar kinematics implied by the absorption lines in the same spectrum (presented previously by van der Marel & Franx 1993) are used in Section 6 for the construction of stellar dynamical models. ## 4 Modeling and Interpretation of the H$`\alpha `$+\[NII\] and H$`\beta `$ Kinematics The FOS spectra of the H$`\alpha `$+\[NII\] and H$`\beta `$ lines yield similar relative fluxes (cf. Figure 6 below) and similar mean velocities and dispersions (cf. Figure 5). We therefore assume that these emission lines have the same intrinsic flux distributions and kinematics. We start in Sections 4.14.4 with the construction of models for the H$`\beta `$ and H$`\alpha `$+\[NII\] gas kinematics in which the gas disk is assumed to be an infinitesimally thin structure in the equatorial plane of the galaxy. However, as discussed in Section 2.2, this assumption may not be entirely appropriate at small radii, where the projected isophotes of the gas disk become rounder. In Section 4.5 we therefore discuss models in which the gas distribution is extended vertically. 
### 4.1 Flux Distribution To model the H$`\beta `$ and H$`\alpha `$+\[NII\] gas kinematics we need a description of the intrinsic (i.e., the deconvolved and de-inclined) flux profile for these emission lines. We model the (face-on) intrinsic flux distribution as a triple exponential, $$F(R)=F_1\mathrm{exp}(-R/R_1)+F_2\mathrm{exp}(-R/R_2)+F_3\mathrm{exp}(-R/R_3),$$ (2) and assume that the disk is infinitesimally thin and viewed at an inclination $`i=60^{}`$ (cf. Section 2.2). The total flux contributed by each of the three exponential components is $`I_i=2\pi R_i^2F_i`$ ($`i=1,2,3`$), and the overall total flux is $`I_{\mathrm{tot}}=I_1+I_2+I_3`$. The best-fitting parameters of the model flux distribution were determined by comparison to the available data. Flux data are available for H$`\alpha `$+\[NII\] from both the WFPC2 imaging and FOS spectra. For H$`\beta `$ they are available from the FOS and the CTIO spectra. For the spectra we determined the fluxes in the relevant lines (and their formal errors) using single Gaussian fits. (It was verified that the fluxes extracted using single Gaussian fits are not significantly different from those obtained by simply adding the pixel data in the relevant wavelength range.) For the WFPC2 image data we did not, for simplicity, include the full two-dimensional brightness distribution in the fit, but only image cuts along the major and minor axes. Image fluxes outside $`1^{\prime \prime }`$ are dominated by read-noise, and were excluded. The errors for each image data-point were computed taking into account the Poisson-noise and the detector read-noise. The combined flux data from all sources are shown in Figure 6. We performed an iterative fit of the triple exponential to all the available flux data, taking into account the necessary convolutions with the appropriate PSF, pixel size and aperture size for each setup. The HST and CTIO fluxes constrain the flux distribution predominantly for $`r\lesssim 1^{\prime \prime }`$ and $`r\gtrsim 1^{\prime \prime }`$, respectively, due to their relatively narrow and broad PSFs. The WFPC2 data have a pixel area that is $`\sim 3`$ and $`\sim 20`$ times smaller than the respective FOS apertures, and therefore provide the strongest constraints on the flux distribution close to the center. The solid line in Figure 6 shows the predictions of the model that best fits all available data (which we will refer to as ‘the standard flux model’). This model has parameters $`R_1=0.026^{\prime \prime }`$, $`R_2=0.20^{\prime \prime }`$, $`R_3=1.65^{\prime \prime }`$, $`I_1/I_{\mathrm{tot}}=0.260`$, $`I_2/I_{\mathrm{tot}}=0.353`$ and $`I_3/I_{\mathrm{tot}}=0.387`$. The absolute calibration gives $`I_{\mathrm{tot}}=3.0\times 10^{-13}\mathrm{erg}\mathrm{s}^{-1}\mathrm{cm}^{-2}`$ for H$`\alpha `$+\[NII\] and $`I_{\mathrm{tot}}=1.3\times 10^{-14}\mathrm{erg}\mathrm{s}^{-1}\mathrm{cm}^{-2}`$ for H$`\beta `$. The total H$`\alpha `$+\[NII\] flux inferred from our model agrees to within 25% with that inferred from a previous ground-based observation of IC 1459 (Macchetto et al. 1996). Figure 7 shows the intrinsic flux distribution as a function of radius for the standard flux model. Approximately one quarter of the total flux is contained in a component that is essentially unresolved at the spatial resolution of HST. The standard flux model provides an adequate fit to the observed fluxes, but the fit is not perfect.
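As an aside, the triple-exponential parameterization of equation (2) is easy to reproduce; the sketch below evaluates it with the best-fit parameters quoted above and checks that the component fluxes integrate back to $`I_{\mathrm{tot}}`$. This is illustrative code only; it omits the PSF, pixel and aperture convolutions that the actual fit takes into account.

```python
import numpy as np
from scipy.integrate import quad

# Best-fit parameters of the 'standard flux model' quoted in the text
R_scale = np.array([0.026, 0.20, 1.65])       # exponential scale lengths, arcsec
I_frac  = np.array([0.260, 0.353, 0.387])     # fractional fluxes I_i / I_tot
I_tot   = 3.0e-13                              # erg/s/cm^2 for H-alpha + [NII]

# central surface brightness of each component from I_i = 2*pi*R_i^2*F_i
F_central = I_frac * I_tot / (2 * np.pi * R_scale**2)

def flux_profile(R):
    """Face-on intrinsic flux profile F(R) of eq. (2); R in arcsec."""
    R = np.atleast_1d(R).astype(float)
    return np.sum(F_central[:, None] * np.exp(-R[None, :] / R_scale[:, None]), axis=0)

print(flux_profile([0.05, 0.2, 1.0, 5.0]))     # erg/s/cm^2/arcsec^2, face-on

# sanity check: integrating 2*pi*R*F(R) should recover I_tot
I_check, _ = quad(lambda r: 2 * np.pi * r * flux_profile(r)[0], 0.0, 50.0)
print(I_check, "vs", I_tot)
```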
The model predicts too little flux in the central WFPC2 pixel, while at the same time predicting too much flux in the small FOS aperture closest to the galaxy center. So the different data sets are not fully mutually consistent under the assumptions of our model. This is presumably a result of uncertainties in the PSFs and aperture sizes for the different observations. To explore the influence of this on the inferred flux distribution we performed fits to two subsets of the flux data. The first subset consists only of the FOS and CTIO data, while the second subset consists only of the WFPC2 and CTIO data. The flux distribution models that best fit these subsets of the data are also shown in Figure 7. The results show that the standard flux model represents a compromise between the FOS and the WFPC2 data. At small radii the FOS data by themselves would imply a broader profile, while the WFPC2 data by themselves would imply a narrower profile. At larger radii the situation reverses. As will be discussed in Section 4.3, the uncertainties in the intrinsic flux distribution have only a very small effect on the inferred BH mass. ### 4.2 Dynamical Models Our thin-disk models for the gas kinematics are similar to those employed in vdMB98. The galaxy model is axisymmetric, with the stellar luminosity density $`j(R,z)`$ chosen as in Section 2.3 to fit the available surface photometry. The stellar mass density $`\rho (R,z)`$ follows from the luminosity density upon the assumption of a constant mass-to-light ratio $`\mathrm{{\rm Y}}`$. The mass-to-light ratio can be reasonably accurately determined from the ground-based stellar kinematics for IC 1459. This yields $`\mathrm{{\rm Y}}=4.5`$ in solar $`I`$-band units, cf. Section 6 below. We keep the mass-to-light ratio fixed to this value in our modeling of the gas kinematics. We assume that the gas is in circular motion in an infinitesimally thin disk in the equatorial plane of the galaxy, and has the circularly symmetric flux distribution $`F(R)`$ given in Section 4.1. We take the inclination of the galaxy and the gas disk to be $`i=60^{}`$, as discussed in Section 2.2. The circular velocity $`V_\mathrm{c}(R)`$ is calculated from the combined gravitational potential of the stars and a central BH of mass $`M_{}`$. The line-of-sight velocity profile (VP) of the gas at position $`(x,y)`$ on the sky is a Gaussian with mean $`V_\mathrm{c}(R)\mathrm{sin}i`$ and dispersion $`\sigma _{\mathrm{gas}}(R)`$, where $`R=\sqrt{x^2+(y/\mathrm{cos}i)^2}`$ is the radius in the disk. The velocity dispersion of the gas is assumed to be isotropic, with contributions from thermal and non-thermal motions: $`\sigma _{\mathrm{gas}}^2=\sigma _{\mathrm{th}}^2+\sigma _{\mathrm{turb}}^2`$. We refer to the non-thermal contribution as ‘turbulent’, although we make no attempt to describe the underlying physical processes that cause this dispersion. It suffices here to parameterize $`\sigma _{\mathrm{turb}}`$ through: $$\sigma _{\mathrm{turb}}(R)=\sigma _0+[\sigma _1\mathrm{exp}(R/R_\mathrm{t})].$$ (3) The parameter $`\sigma _0`$ was kept fixed to $`120\mathrm{km}\mathrm{s}^1`$, as suggested by the CTIO data for H$`\beta `$ with $`|r|3.5^{\prime \prime }`$ (see Figure 11 below). The predicted VP for any given observation is obtained through flux weighted convolution of the intrinsic VPs with the PSF of the observation and the size of the aperture. The convolutions are described by the semi-analytical kernels given in Appendix A of van der Marel et al. 
(1997), and were performed numerically using Gauss-Legendre integration. A Gaussian is fit to each predicted VP for comparison to the observed $`V`$ and $`\sigma `$. The model was fit to the FOS gas kinematics for H$`\alpha `$+\[NII\] and H$`\beta `$. Rotation velocity and velocity dispersion measurements were both included, yielding a total of 24 data points. Three free parameters are available to optimize the fit: $`M_{}`$, and the parameters $`\sigma _1`$ and $`R_\mathrm{t}`$ that describe the radial dependence of the turbulent dispersion. The temperature of the gas is not an important parameter: the thermal dispersion for $`T\approx 10^4\mathrm{K}`$ is $`\sigma _{\mathrm{th}}\approx 10\mathrm{km}\mathrm{s}^{-1}`$, and is negligible with respect to $`\sigma _{\mathrm{turb}}`$ for all plausible models. We define a $`\chi ^2`$ quantity that measures the quality of the fit to the kinematical data, and the best-fitting model was found by minimizing $`\chi ^2`$ using a ‘downhill simplex’ minimization routine (Press et al. 1992). ### 4.3 Data-model comparison for the FOS data The curves in Figure 8 show the predictions of the model that provides the overall best fit to the H$`\alpha `$+\[NII\] and H$`\beta `$ kinematics, using the standard flux model of Section 4.1. Its parameters are: $`M_{}=1.0\times 10^8\mathrm{M}_{}`$, $`\sigma _1=563\mathrm{km}\mathrm{s}^{-1}`$ and $`R_\mathrm{t}=0.1^{\prime \prime }`$. This model (which we will refer to as ‘the standard kinematical model’) adequately reproduces the important features of the HST kinematics, including the central rotation gradient and the nuclear velocity dispersion. To determine the range of BH masses that provides an acceptable fit to the data we compared the predictions of models with different fixed values of $`M_{}`$, while at each $`M_{}`$ varying the remaining parameters to optimize the fit. The radial dependence of the intrinsic velocity dispersion of the gas is essentially a free function in our models, so the observed velocity dispersion measurements can be fit equally well for all plausible values of $`M_{}`$. Thus only the predictions for the HST rotation velocity measurements depend substantially on the adopted $`M_{}`$. To illustrate the dependence on $`M_{}`$, Figure 9 compares the predictions for the HST rotation measurements for three different models. The solid curves show the predictions of the standard kinematical model defined above. The dotted and dashed curves are the predictions of models in which $`M_{}`$ was fixed a priori to $`0`$ and $`7.0\times 10^8\mathrm{M}_{}`$, respectively. The model without a BH predicts a rotation curve slope which is too shallow, and the model with $`M_{}=7.0\times 10^8\mathrm{M}_{}`$ predicts a rotation curve slope which is too steep. Both these BH masses are ruled out by the data at more than the $`99\%`$ confidence level (see discussion below). To assess the quality of the fit to the HST rotation velocity measurements we define a new $`\chi ^2`$ quantity, $`\chi _V^2`$, that measures the fit to these data only. At each $`M_{}`$, the parameters $`\sigma _1`$ and $`R_\mathrm{t}`$ are fixed almost entirely by the velocity dispersion measurements. These parameters can therefore not be varied independently to improve the fit to the HST rotation velocity measurements. As a result, $`\chi _V^2`$ is expected to follow approximately a $`\chi ^2`$ probability distribution with $`N_{\mathrm{df}}=12-1=11`$ degrees of freedom (there are 12 HST measurements, and there is one free parameter, $`M_{}`$).
The expectation value of $`\chi _V^2`$ for this distribution is 11. However, for the standard kinematical model we find $`\chi _V^2=59`$. To determine the cause of this statistically poor fit we inspected the goodness of fit as a function of BH mass for each line species separately. Figure 10 shows $`\chi _V^2`$ as a function of $`M_{}`$ for both H$`\alpha `$+\[NII\] and H$`\beta `$. The kinematics of H$`\alpha `$+\[NII\] are formally poorly fitted, despite the apparently good qualitative agreement in Figure 8. In particular, the observed H$`\alpha `$+\[NII\] velocity gradient between the FOS-0.1 apertures S1 and S2 is steeper than predicted by the best-fit model with $`M_{}=1.0\times 10^8\mathrm{M}_{}`$, which suggests that the BH mass may actually be twice as high (since $`M_{}\propto \mathrm{\Delta }V^2`$ in our models). The poor formal fit may not be too surprising, given that our modeling of the gas as a flat circular disk in bulk circular rotation with an additional turbulent component is almost certainly an oversimplification of what in reality must be a complicated hydrodynamical system. The fits to the kinematics of H$`\beta `$ are statistically acceptable, but this may be in part because the formal errors on the H$`\beta `$ kinematics are twice as large as for H$`\alpha `$+\[NII\]. This would cause any shortcomings in the models to be less apparent for this emission line. Nonetheless, an important result in Figure 10 is that the BH masses implied by the H$`\alpha `$+\[NII\] and H$`\beta `$ kinematics are virtually identical. Formal errors on the BH mass can be inferred using the $`\mathrm{\Delta }\chi ^2`$ statistic, as illustrated in Figure 10. For H$`\beta `$ this yields $`M_{}=(1.0\pm 0.5)\times 10^8\mathrm{M}_{}`$ at $`68.3`$% confidence (i.e., 1-$`\sigma `$), and $`0.2\times 10^8\mathrm{M}_{}\le M_{}\le 2.5\times 10^8\mathrm{M}_{}`$ at 99% confidence. The formal $`\mathrm{\Delta }\chi ^2`$ confidence intervals inferred from the H$`\alpha `$+\[NII\] lines are smaller, but this is not necessarily meaningful since the $`\chi ^2`$ itself is not acceptable for these lines. In Section 4.1 we showed that there is some uncertainty in the flux distributions of H$`\alpha `$+\[NII\] and H$`\beta `$. The mean velocities and velocity dispersions predicted by the dynamical model are flux-weighted quantities, and therefore depend on the adopted flux distribution. To assess the influence on the inferred BH mass we repeated the analysis using the two non-standard flux distributions shown in Figure 7. With these distributions we found fits to the kinematical data of similar quality to those for the standard kinematical model. The inferred values of $`M_{}`$ agree with those for the standard kinematical model to within 10%. This shows that the uncertainties in the flux distribution have negligible impact on the inferred BH mass. ### 4.4 Data-model comparison for the CTIO data The model parameters in Section 4.3 were chosen to best fit the FOS data. Here we investigate what this model predicts for the setup of the CTIO data. Figure 11 shows the resulting data-model comparison (without any further changes to the model parameters). At radii $`|r|\gtrsim 4^{\prime \prime }`$ the standard kinematical model fits the data acceptably well. The agreement in the velocity dispersion is trivial since it is the direct result of our choice of the model parameter $`\sigma _0`$. However, the agreement for the rotation velocities is quite important.
It shows that outside the very center of the galaxy, the observations are consistent with the assumed scenario of gas rotating at the circular velocity in an infinitesimally thin disk. Moreover, it suggests that the value of the mass-to-light ratio $`\mathrm{{\rm Y}}`$ used in the models (derived from ground-based stellar kinematics) is accurate. By contrast, the fit to the CTIO data is poorer at radii $`|r|\lesssim 4^{\prime \prime }`$. In particular, the predicted rotation curve is too steep, and the central peak in the predicted velocity dispersion is too high. These discrepancies cannot be attributed to possible errors in the assumed value for the seeing FWHM for the CTIO observations. The latter was calibrated from the spectra themselves. Due to their superior spatial resolution, the WFPC2 data determine the inner intrinsic flux profile. We could therefore determine the CTIO PSF by optimizing the agreement between the predicted and observed central three CTIO fluxes using this intrinsic inner flux profile. The resulting FWHM determination ($`\approx 1.9^{\prime \prime }`$) is quite accurate, and is inconsistent with the large FWHM values needed to make the standard kinematical model fit the CTIO data. The discrepancies must therefore be due to an inaccuracy or oversimplification in the modeling. The models discussed so far assume that the observed velocity dispersion is due to local turbulence in gas that has bulk motion along circular orbits. However, an alternative interpretation could be that the gas resides in individual clouds, and that the observed dispersion of the gas is due to a spread in the velocities of individual clouds. In this case the gas would behave as a collisionless fluid obeying the Boltzmann equation. An important consequence would be that the velocity dispersion $`\sigma `$ of the gas clouds would provide some of the pressure responsible for hydrostatic support, so that the mean rotation velocity $`\overline{v_\varphi }`$ would be less than the circular velocity $`v_\mathrm{c}`$ by a certain amount $`\mathrm{\Delta }v\equiv v_\mathrm{c}-\overline{v_\varphi }`$. This effect is known as asymmetric drift (for historical reasons having to do with the stellar dynamics of the Milky Way; see e.g., Binney & Merrifield 1998). The presence of a certain amount of asymmetric drift would simultaneously explain several observations. First, if close to the center the gas receives a certain amount of pressure support from bulk velocity dispersion, and if this dispersion were near-isotropic, it would induce a thickening of the disk. This would cause the isophotes of the gas close to the center to be rounder than those at larger radii, which is exactly what is observed (cf. Sections 2.2 and 7). Second, asymmetric drift would cause the gas to have a mean velocity less than the circular velocity, which would explain why the models of Section 4.3 overpredict the observed rotation velocities (cf. Figure 11). Third, the central peak in the velocity dispersion seen in the ground-based data is due in part to rotational broadening (spatial convolution of the steep rotation gradient near the center). So if the mean velocity of the gas in the models were lowered due to asymmetric drift, then the predicted central velocity dispersion would go down as well. This would tend to improve the agreement between the predicted and the observed velocity dispersions in Figure 11.
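The third point (rotational broadening) is easy to demonstrate numerically: blending Gaussian line profiles from different positions along the major axis, weighted by a seeing profile and a flux profile, inflates the second moment of the blended profile above the intrinsic dispersion wherever the rotation gradient is steep. In the sketch below the rotation curve, flux profile and dispersion floor are made-up illustrative numbers rather than quantities taken from our models; only the seeing FWHM matches the CTIO value quoted above.

```python
import numpy as np

# toy major-axis sampling (arcsec) and an assumed steep central rotation curve (km/s)
x = np.linspace(-4.0, 4.0, 401)
v_mean = 250.0 * np.tanh(x / 0.3)      # steep unresolved gradient across the nucleus (assumed)
sigma_int = 120.0                      # intrinsic dispersion floor, km/s (assumed)

# assumed flux and seeing weights for a ground-based aperture centred on the nucleus
flux = np.exp(-np.abs(x) / 1.0)        # crude exponential flux profile
fwhm_seeing = 1.9                      # arcsec, as for the CTIO data
seeing = np.exp(-0.5 * (x / (fwhm_seeing / 2.355))**2)
w = flux * seeing
w /= w.sum()

# first and second moments of the blended (flux- and seeing-weighted) line profile
v_obs = np.sum(w * v_mean)
sigma_obs = np.sqrt(sigma_int**2 + np.sum(w * (v_mean - v_obs)**2))

print(f"intrinsic sigma = {sigma_int:.0f} km/s, observed central sigma = {sigma_obs:.0f} km/s")
# the steep unresolved gradient inflates the measured central dispersion well above sigma_int
```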
### 4.5 The influence of asymmetric drift If the gas in IC 1459 indeed has a non-zero asymmetric drift at small radii, then the models of Section 4.2 would have under-estimated the enclosed mass within any given radius (and models without a BH would be ruled out at even higher confidence than already indicated by Figure 10). In the following we address how much of an effect this would have on the inferred BH mass. In the limit $`\sigma /\overline{v_\varphi }\ll 1`$ one has that $`\mathrm{\Delta }v/v_\mathrm{c}=𝒪([\sigma /v_\mathrm{c}]^2)`$ (e.g., Binney & Tremaine 1987), and any asymmetric drift correction to $`M_{}`$ would be fairly small. However, at the resolution of HST we find that $`\sigma /\overline{v_\varphi }\gtrsim 1`$, so the approximate formulae that exist for the limit $`\sigma /\overline{v_\varphi }\ll 1`$ cannot be used. For a proper analysis the gas kinematics would have to be modeled as a ‘hot’ system of point masses, using any of the techniques that have been developed in the context of stellar dynamical modeling of elliptical galaxies (e.g., Merritt 1999). While fully general collisionless modeling of the gas in IC 1459 is beyond the scope of the present paper, it is important to establish whether such an analysis would yield a very different BH mass. To address this issue we constructed spherical isotropic models for the gas kinematics using the Jeans equations, as in van der Marel (1994). The three-dimensional density of the gas was chosen so as to reproduce the major axis profile given by equation (2) after projection. As before, the gravitational potential of the system was characterized by a variable $`M_{}`$ and a fixed $`\mathrm{{\rm Y}}=4.5`$. Any turbulent velocity dispersion component of the gas was assumed to be zero. We then calculated the RMS projected line-of-sight velocity $`v_{\mathrm{RMS}}`$ predicted for the smallest FOS aperture positioned on the galaxy center. The H$`\alpha `$+\[NII\] and H$`\beta `$ observations yield $`v_{\mathrm{RMS}}\equiv [V^2+\sigma ^2]^{1/2}\approx 600\mathrm{km}\mathrm{s}^{-1}`$ (cf. Table 3). We found that the spherical isotropic Jeans models require $`M_{}=4.0\times 10^8\mathrm{M}_{}`$ to reproduce this value. Larger BH masses would be ruled out because they predict more RMS motion than observed. In more general collisionless models the required BH mass will depend on the details of the model, but not very strongly. Velocity anisotropy and axisymmetry influence the projected dispersion of a population of test particles in a Kepler potential only at the level of factors of order unity (de Bruijne, van der Marel & de Zeeuw 1996). So as expected, if the velocity dispersion of the gas is interpreted as gravitational motion of individual clouds, then the BH mass must be larger than inferred in Section 4.3. However, the increase would only be a factor of 3–4. Models without a BH would remain firmly ruled out. We emphasize that both types of model that we have studied are fairly extreme. In one case we assume that the gas resides in an infinitesimally thin disk, and has a large turbulent (or otherwise non-thermal) velocity dispersion and no bulk velocity dispersion or asymmetric drift. In the other case we assume the opposite, that the gas is in a spherical distribution, and has no turbulent velocity dispersion but instead a large bulk velocity dispersion. The truth is likely to be found somewhere between these extremes, and we therefore conclude that IC 1459 has a BH with mass in the range $`M_{}=1`$–$`4\times 10^8\mathrm{M}_{}`$.
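For orientation, the scaling behind this collisionless estimate can be captured with a textbook special case: for an isotropic tracer with density $`\nu \propto r^{-\gamma }`$ orbiting a point mass, the spherical Jeans equation gives $`\sigma ^2(r)=GM/[(1+\gamma )r]`$. The sketch below inverts this relation for the mass; the tracer slope and the physical radii are assumed numbers chosen only for illustration, and, unlike the models described above, it ignores the stellar potential and the projection through the FOS aperture.

```python
import numpy as np

G = 4.301e-3          # gravitational constant in pc (km/s)^2 / M_sun

def jeans_sigma(r_pc, M_bh, gamma=1.5):
    """Isotropic dispersion at radius r (pc) for a nu ~ r^-gamma tracer around a
    point mass M_bh (M_sun): sigma^2 = G*M / ((1 + gamma) * r)."""
    return np.sqrt(G * M_bh / ((1.0 + gamma) * r_pc))

def mass_from_sigma(sigma_kms, r_pc, gamma=1.5):
    """Invert the relation above: point mass needed to produce sigma at radius r."""
    return (1.0 + gamma) * r_pc * sigma_kms**2 / G

# illustrative numbers only: a ~600 km/s RMS velocity at a few parsec from the centre
for r in (2.0, 5.0, 10.0):
    print(f"r = {r:4.1f} pc  ->  M ~ {mass_from_sigma(600.0, r):.1e} M_sun")
```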
## 5 Modeling and Interpretation of the \[OIII\], \[OI\] and \[SII\] data Spectroscopic information is not only available for H$`\alpha `$+\[NII\] and H$`\beta `$, but also for three other line species: \[OI\], \[OIII\] and \[SII\]. Interestingly, the flux distributions and kinematics of these lines show some notable differences from those of H$`\beta `$ and H$`\alpha `$+\[NII\]. We have no narrow-band imaging for \[OI\], \[OIII\] and \[SII\], so information on the flux distributions of these lines is available only from the six apertures for which we obtained FOS spectra. Figure 12 shows the relative surface brightness $`ℛ`$ for the different apertures and line species, using the definition $$ℛ(\mathrm{aperture},\mathrm{species})\equiv [F(\mathrm{aperture},\mathrm{species})/A(\mathrm{aperture})]/[F(\mathrm{L2},\mathrm{species})/A(\mathrm{L2})],$$ (4) where $`F`$ is the flux observed through an aperture for a given species, and $`A`$ is the area of the aperture; the aperture L2 is the observation with the large 0.25-PAIR aperture on the nucleus (cf. Figure 3). The relative surface brightnesses for the different species as seen through the large aperture are all quite similar. By contrast, those for the small 0.1-PAIR aperture differ considerably. The profiles for \[OI\] and especially \[OIII\] are more centrally peaked than that for H$`\alpha `$+\[NII\], while the profile for \[SII\] is less centrally peaked than that for H$`\alpha `$+\[NII\]. To quantify this, we have defined a measure of the peakedness of the surface brightness profile as $$𝒫(\mathrm{species})\equiv 2ℛ(\mathrm{S1},\mathrm{species})/[ℛ(\mathrm{S2},\mathrm{species})+ℛ(\mathrm{S3},\mathrm{species})].$$ (5) The values of this quantity for the different species are $`2.1`$, $`3.5`$, $`3.0`$, $`1.9`$ and $`0.87`$ for H$`\beta `$, \[OIII\], \[OI\], H$`\alpha `$+\[NII\] and \[SII\], respectively. The kinematics for the different species also show differences, as already pointed out in Section 3.2. Figure 5 shows that the most pronounced differences are seen in the value of the velocity dispersion as observed through the smallest aperture on the nucleus (observation S1). This quantity varies from $`211\pm 25\mathrm{km}\mathrm{s}^{-1}`$ for \[SII\] to $`1014\pm 47\mathrm{km}\mathrm{s}^{-1}`$ for \[OIII\], cf. Table 3. Such differences in the line width for different species have previously been found in the central regions of other LINER galaxies and Seyferts, and have been extensively modeled (e.g., Whittle 1985; Simpson & Ward 1996; Simpson et al. 1996; and references therein). One result from these studies has been that there is generally a correlation between velocity dispersion and critical density of the lines. We find a similar result for the nucleus of IC 1459. Figure 13a shows the velocity dispersion for the different lines observed through aperture S1, versus the critical density (the Balmer lines are not plotted because for them interpretation of the critical density is complicated by the effects of radiative transfer for permitted lines; see e.g. Filippenko & Halpern 1984). There is indeed a rough correlation. The fact that \[OIII\] has a larger dispersion than \[OI\] is somewhat surprising in view of this, but this has also been found for other galaxies (Whittle 1985). It has been hypothesized that at a basic level the approximate correlation between velocity dispersion and critical density can be understood as the result of differences in the spatial distribution of the line flux for different species (Osterbrock 1989, p. 366).
Lines with a high critical density tend to be more strongly concentrated towards the ionizing source in the galaxy nucleus than species with a low critical density. So for a line with a high critical density, the observed flux will on average come from smaller radii. Either the presence of a central BH or increased turbulence (cf. equation 3) would naturally cause the gas at smaller radii to move faster, which qualitatively explains the correlation. To make detailed quantitative predictions one would need to model the complete ionization structure (ionizing flux, electron density, temperature, etc., as a function of radius) and kinematics of the gas, which can generally be done only with simplifying assumptions. However, even without a detailed model we can test the basic interpretation in an observational sense. Figure 12 shows that we are observing different flux distributions for the different lines, which implies that we are actually resolving the region from which the emission arises. Thus we can test directly whether the lines for which the velocity dispersion is high have a flux that is strongly concentrated towards the nucleus. Figure 13b confirms this. It shows the velocity dispersion for the different lines observed through aperture S1 versus the flux-peakedness parameter $`𝒫`$. The strong correlation provides direct support for the proposed interpretation. In our dynamical models, the gas motions are a function of position in the disk only; they do not depend on the physical properties of the line species. Hence, differences in the observed kinematics of the lines must be due entirely to differences in their flux distributions. For accurate modeling it is therefore essential that the intrinsic flux distributions are well-constrained by the observations. This is true for H$`\alpha `$+\[NII\], for which a narrow-band image is available at the WFPC/PC resolution of $`0.0455^{\prime \prime }`$/pixel. However, this is not true for \[OI\], \[OIII\] and \[SII\], for which only the six FOS spectral flux measurements are available, with resolutions no better than $`0.086^{\prime \prime }`$/aperture. This is insufficient for accurate modeling. Hence, we cannot test in detail whether the different line species all independently indicate the same BH mass. However, a very simple argument can be used to set an upper limit on how much the BH masses implied by the \[OI\], \[OIII\] and \[SII\] data could differ from that inferred in Section 4.3 from the H$`\alpha `$+\[NII\] and H$`\beta `$ data. None of the line species have either a central rotation velocity gradient $`\mathrm{\Delta }V\equiv V(\mathrm{S2})-V(\mathrm{S1})`$ or a central velocity dispersion $`\sigma (\mathrm{S1})`$ that exceeds the value for H$`\alpha `$+\[NII\] by more than a factor of $`\sim 2`$ (cf. Table 3). In the models of Section 4.2 one has approximately $`M_{}\propto \mathrm{\Delta }V^2`$, while in the models of Section 4.5 one has approximately $`M_{}\propto \sigma (\mathrm{S1})^2`$. So if we were to assume (incorrectly) that all species have the same flux distribution, then we would infer BH masses that exceed that inferred from the H$`\alpha `$+\[NII\] and H$`\beta `$ data by at most a factor $`\sim 4`$. However, this is a very conservative upper limit since the flux distributions for e.g. \[OI\] and \[OIII\] are actually more peaked than for H$`\alpha `$+\[NII\] and H$`\beta `$, which would tend to reduce the inferred BH mass.
So the data provide no compelling reason to believe that the \[OI\], \[OIII\] and \[SII\] observations imply a very different BH mass than inferred from H$`\alpha `$+\[NII\] and H$`\beta `$, but we cannot test this in detail. For \[SII\], both the mean velocity and the velocity dispersion profiles are quite irregular. The \[SII\] lines have a relatively low equivalent width and form a blended doublet, so this could be due to systematic problems in the extraction of the kinematics from the data. On the other hand, irregularities in the dispersion profiles are also seen for \[OIII\] and \[OI\] as observed through the small apertures. Although such irregularities are not seen for H$`\beta `$ and H$`\alpha `$+\[NII\], this may indicate that our modeling of the gas flux distribution (eq. 2) and the turbulent velocity dispersion (eq. 3) as smooth functions is somewhat oversimplified. We compared the observed line ratios for IC 1459 with modeled values for shocks (Allen et al. 1998; Dopita et al. 1997) and found them to be consistent. Shocks could naturally explain the turbulence that we invoke in our models of IC 1459. At the same time, this would suggest that the gas properties could easily possess more small-scale structure than the smooth functions that we have adopted. ## 6 Ground-based Stellar Kinematics Ground-based stellar kinematical data are available from the same major axis CTIO spectrum for which the emission lines were discussed in Section 3.3. The stellar rotation velocities $`V`$ and velocity dispersions $`\sigma `$ inferred from this spectrum were presented previously in van der Marel & Franx (1993). To interpret these data we constructed a set of axisymmetric stellar-dynamical two-integral models for IC 1459 in which the phase-space distribution function $`f(E,L_z)`$ depends only on the two classical integrals of motion. As before, the mass density was taken to be $`\rho =\mathrm{{\rm Y}}j`$, with the luminosity density $`j(R,z)`$ given by equation (1). The gravitational potential is the sum of the stellar potential and the Kepler potential of a possible central BH. Predictions for the root-mean-square (RMS) stellar line-of-sight velocities $`v_{\mathrm{RMS}}\equiv [V^2+\sigma ^2]^{1/2}`$ were calculated by solving the Jeans equations for hydrostatic equilibrium, projecting the results onto the sky, and convolving them with the observational setup. This modeling procedure is equivalent to that applied to large samples by, e.g., van der Marel (1991) and Magorrian et al. (1998). The calculations presented here were done with the software developed by van der Marel et al. (1994). Figure 14 shows the data for $`v_{\mathrm{RMS}}`$ as a function of major axis distance. The value of $`\mathrm{{\rm Y}}`$ in the models was chosen so as to best fit the data outside the central region, yielding $`\mathrm{{\rm Y}}\approx 4.5`$. This leaves $`M_{}`$ as the only remaining free parameter. The curves in Figure 14 show the model predictions for various values of $`M_{}`$. A model without a BH predicts a central dip in $`v_{\mathrm{RMS}}`$, which is not observed. The models therefore clearly require a BH. None of the models fits particularly well, but models with $`M_{}=3`$–$`4\times 10^9\mathrm{M}_{}`$ provide the best fit. This exceeds the BH mass inferred from the HST gas kinematics by a factor of 10 or more. This suggests that the assumptions underlying the stellar kinematical analysis may not be correct.
The velocity dispersion anisotropy of a stellar system can have any arbitrary value, and the sense of the anisotropy can have a large effect on inferences about the nuclear mass distribution. Two-integral $`f(E,L_z)`$ models can be viewed as axisymmetric generalizations of spherical isotropic models. Such models will overestimate the BH mass if galaxies are actually radially anisotropic (Binney & Mamon 1982). Van der Marel (1999a) showed that $`f(E,L_z)`$ models can easily overestimate the BH mass by a factor of 10 when applied to ground-based data of similar quality as that available here, even for galaxies that are only mildly radially anisotropic. Support that this may be happening comes from various directions. First, several detailed studies of bright galaxies similar to IC 1459 have concluded that these galaxies are radially anisotropic (e.g., Rix et al. 1997; Gerhard et al. 1998; Matthias & Gerhard 1999; Saglia et al. 1999; Cretton, Rix & de Zeeuw 2000). Second, detailed three-integral distribution function modeling of stellar kinematical HST data for several galaxies (Gebhardt et al. 2000) does indeed yield BH masses that are many times smaller than inferred by Magorrian et al. (1998) from $`f(E,L_z)`$ models for ground-based data for the same galaxies. And third, models of adiabatic BH growth for HST photometry also suggest that $`f(E,L_z)`$ models for ground-based data yield BH masses that are too large (van der Marel 1999b). So in summary, the fact that the BH mass inferred from $`f(E,L_z)`$ models for ground-based IC 1459 data does not agree with that inferred from the HST gas kinematics provides little reason to be worried. It simply shows that one cannot generally place very meaningful constraints on BH masses from ground-based stellar kinematics of $`2^{\prime \prime }`$ spatial resolution. While the BH mass inferred from ground-based stellar kinematical data is generally very sensitive to assumptions about the structure and dynamics of the galaxy, this is not true for the inferred mass-to-light ratio. Models with different inclination (van der Marel 1991) or anisotropy (van der Marel 1994; 1999a) yield the same value of $`\mathrm{{\rm Y}}`$ to within $`20`$%. Additional support for the accuracy of the inferred $`\mathrm{{\rm Y}}=4.5`$ comes from the fact that this value yields a good fit to the observed rotation velocities of the gas outside the central few arcsec (cf. Figure 11). ## 7 Discussion and conclusions We have presented the results from a detailed HST study of the central structure of IC 1459. The kinematics of the gas disk in IC 1459 was probed with FOS observations through six apertures along the major axis. In our modeling of the observed kinematics we took into account the stellar mass density in the central region by fitting WFPC2 broad-band imaging, and we determined the flux distribution of the emission-gas from WFPC2 narrow-band imaging. From the models we have determined that IC1459 harbors a black hole with a mass in the range $`1\times 10^8\mathrm{M}_{}`$$`4\times 10^8\mathrm{M}_{}`$, with the exact value depending somewhat on whether we model the gas as rotating on circular orbits, or as an ensemble of collisionless cloudlets. While the dynamical models that we have constructed provide good fits to the observations, the true structure of IC 1459 could of course be more complex than our models. Below we discuss several aspects of this. Ground-based observations (Goudfrooij et al. 
1994; Forbes & Reitzel 1995) indicate an ellipticity $`ϵ=0.5`$ for the gas disk at radii larger than a few arcseconds, implying an inclination angle $`i=60^{}`$. By contrast, from our HST emission-line image we found a monotonic increase in ellipticity from $`ϵ=0.17`$ to $`ϵ=0.37`$ between $`r=0.25^{\prime \prime }`$ and $`1.0^{\prime \prime }`$. In our modeling we have assumed that these rounder inner isophotes are due to a thickening of the gas disk caused by asymmetric drift, and we estimated the effect of this on the inferred $`M_{}`$ (see Sections 4.4 and 4.5). However, an alternative interpretation would be to assume that the disk is warped. The presence of a dust lane slightly misaligned with the gas disk, a counter-rotating stellar core, and stellar shells and ripples in the outer galaxy make it plausible that the central gas and dust were accreted from outside, displaying warps as the material settles down. In this interpretation we infer an increase in inclination angle from $`32.9^{}`$ to $`47.9^{}`$ between $`r=0.25^{\prime \prime }`$ and $`1.0^{\prime \prime }`$. The BH mass in the models of Section 4.2 scales as $`M_{}\propto \mathrm{sin}^{-2}i`$ due to the projection of the rotational velocities. Hence the inferred differences in inclination angle would amount to an increase in $`M_{}`$ by only a factor $`\sim 1.5`$, which would not significantly change our results. IC 1459 hosts an unresolved blue nuclear point source. Carollo et al. (1997) estimated a luminosity of $`L_V\approx 1.5\times 10^7\mathrm{L}_{}`$. So far, we have assumed that this light is non-stellar radiation from the active nucleus. However, one could assume alternatively that the blue light is emitted by a cluster of young stars. If the cluster were to have a mass equal to the mass $`M_{}`$ that we have inferred from our models, this would require $`\mathrm{{\rm Y}}_V\gtrsim 10`$. Depending on age and metallicity, stellar evolution models typically predict $`\mathrm{{\rm Y}}_V\lesssim 2`$ for a young cluster (e.g., Worthey 1994). Thus the assumption that the point source is stellar in origin cannot lift the need for a central concentration of non-luminous matter, most likely a BH. The data show differences between the fluxes and kinematics for the various line species. The main characteristics of the observed kinematics are similar for all species, but we see that for the species with higher critical densities the flux distribution is generally more concentrated towards the nucleus and the observed velocities are higher. This can be well understood qualitatively in the framework of a single $`M_{}`$. However, to actually verify quantitatively whether each species implies the same value for $`M_{}`$ would require information on the flux distributions for each species at high spatial resolution as well as detailed knowledge of the ionization structure of the gas, neither of which is available. An extremely conservative upper limit on the differences in the $`M_{}`$ implied by the different line species is obtained by assuming that all species have the same flux distribution (not actually correct), in which case $`M_{}`$ values are obtained that are up to $`\sim 4`$ times larger than inferred from H$`\beta `$ and H$`\alpha `$+\[NII\]. Irregularities in the velocity dispersion profiles of \[OIII\], \[OI\] and \[SII\] suggest localized turbulent motions. We incorporated turbulent motion in our models in a very simple manner (cf. equation 3), using a parametrization that fits the main trend of an increase of the velocity dispersion toward the nucleus.
Nevertheless, going to the extreme, one could assume that all observed motions have a non-gravitational origin. The overall kinematics would then be due to in- or outflows. There are several objections to this interpretation. A spherical in- or outflow could not produce any net mean velocity. A bi-directional flow is unlikely since no hint of this is seen in the HST H$`\alpha `$+\[NII\] emission image. Next we consider the location of IC 1459 with respect to the correlations between $`M_{}`$ and host total optical and radio luminosity, mentioned in the Introduction. The ratio of $`M_{}`$ and galaxy mass is in the range $`0.4`$–$`1.5\times 10^{-3}`$. This is somewhat lower than the average value of $`2\times 10^{-3}`$ seen for other galaxies (Kormendy & Richstone 1995), but still comfortably within the observed scatter. As discussed in the Introduction, IC 1459 is probably the end-product of a merger between two galaxies. Apparently, this merger history has not moved IC 1459 to an atypical spot in the $`M_{}`$ vs. galaxy mass scatter diagram. The fact that IC 1459 has a LINER type spectrum and core radio emission, but no jets, makes it interesting to see where it is located on an $`M_{}`$–$`L_{\mathrm{radio}}`$ plot. IC 1459 has a radio luminosity of $`5.5\times 10^{22}\mathrm{W}\mathrm{Hz}^{-1}`$ at 5 GHz (Wright et al. 1996). Interestingly, this puts IC 1459 within the scatter observed around the correlation between $`M_{}`$ and total radio luminosity inferred for a small sample of galaxies with available BH mass determinations, but quite off the correlation with core radio luminosity (see Figs. 3 and 4 of Franceschini, Vercellone, & Fabian 1998). However, one should be wary of beaming and variability of core radio sources, and possible resolution differences among the observations. Our dynamical modeling has used the gas disk in IC 1459 as a diagnostic tool to constrain $`M_{}`$. A logical next step to improve on our work would be to obtain better two-dimensional coverage of the gaseous and stellar kinematics. A definite advance in our understanding of nuclear gas disks would also be obtained if we had a better knowledge of properties of the gas disk such as the electron density, metallicity, and the ionization structure and ionization mechanism. One could then try to simultaneously understand these properties and the gas dynamics. We now know that quite likely most, or even all, bright ellipticals host a massive central BH. More detailed knowledge of the chemical properties and kinematics of the dust and gas surrounding the BH could tell us about the origin of this material: accretion of small satellites, stripping of companions, or internal mass loss from stars. This would immediately constrain the frequency and probability with which any bright elliptical hosts this material. A better understanding of the kinematics, such as the importance of dissipative shocks generated by turbulence, could then help to determine the accretion rate of the black hole. These pieces of information together could tell us what fraction of the observed $`M_{}`$ could have come from this process over the lifetime of a galaxy. Knowledge of the ratio of black hole and stellar mass as a function of time would be valuable for understanding the formation and evolution of early-type galaxies in general.
Support for this work was provided by NASA through grant number #GO-06537.01-95A, and through C.M.C.’s Hubble Fellowship #HF-01079.01-96A, awarded by the Space Telescope Science Institute which is operated by the Association of Universities for Research in Astronomy, Incorporated, under NASA contract NAS5-26555. The authors would like to thank Marijn Franx for helpful comments on an earlier version of the manuscript.
# Model of peak separation in the gamma lightcurve of the Vela pulsar ## 1 Introduction Two prominent peaks are a characteristic feature of gamma-ray pulse shapes in the three brightest out of seven gamma-ray pulsars detected so far: Crab (PSR B0531+21), Vela (PSR B0833-45), and Geminga (J0633+1746). Phase separation between the two peaks is very large in each case, in the range between 0.4 and 0.5 (e.g. Fierro, Michelson & Nolan 1998). The separation, which we denote by $`\mathrm{\Delta }^{\mathrm{peak}}`$, was determined with photons from the entire energy range of EGRET. Kanbach suggested that the separation $`\mathrm{\Delta }^{\mathrm{peak}}`$ in Vela might be energy dependent. The effect would be of the order of a few percent or less. The plot of the phase separation against energy (fig.2 of Kanbach 1999) shows that $`\mathrm{\Delta }^{\mathrm{peak}}`$ decreases by about $`5\%`$ over 20 energy intervals covering the range between $`50\mathrm{MeV}`$ and $`9\mathrm{GeV}`$. The scatter of points is, however, large enough for this result still to be consistent with the separation staying at a constant level of $`0.43`$, especially when one rejects two energy intervals: of the lowest and the highest value. Such effects as suggested by Kanbach can be justified qualitatively, at least within polar cap scenarios. Their origin may be different at different energy ranges, and their magnitude may vary as well. For example, Miyazaki & Takahara found dramatic changes in peak-to-peak phase separation due to magnetic absorption effects in their attempts to model the Crab pulse shapes. Their numerical calculations were performed with low photon energy resolution for a model with homogeneous polar cap, and instant acceleration. This new aspect of studying the HE properties of pulsars is potentially attractive. The problem of poor photon statistics should become less essential with future high-sensitivity missions like GLAST. Then any well established empirical relation between the phase separation $`\mathrm{\Delta }^{\mathrm{peak}}`$ and the photon energy $`\epsilon `$ (including $`\mathrm{\Delta }^{\mathrm{peak}}=const`$) may help to discriminate in favour of some particular models of pulsar activity. In this context we present a model of the peak-to-peak phase separation in the gamma-ray lightcurve of Vela and confront it with the results of Kanbach . Our aim is to present properties of $`\mathrm{\Delta }^{\mathrm{peak}}(\epsilon )`$ predicted by the polar cap model with curvature (CR) and synchrotron (SR) radiation being dominant emission mechanisms. This is an extension of the preliminary results of Dyks, Rudak & Bulik . In section 2 we outline the model and introduce the input parameters for which Monte Carlo simulations were performed. Section 3 describes the numerical results and offers their interpretation; conclusions follow in section 4. ## 2 The Model The presence of two peaks with large (0.4 - 0.5) phase separation in gamma-ray lightcurves within single polar cap models requires a nearly aligned rotator (e.g. Daugherty & Harding 1994) where the following three characteristic angles are to be of the same order: $`\alpha `$ \- the angle between the spin axis $`𝛀`$ and the magnetic moment $`\mu `$, $`\vartheta _\gamma `$ \- the opening angle between the direction of the gamma-ray emission and $`\mu `$, and $`\zeta `$ \- the angle between $`𝛀`$ and the line of sight. 
For a canonical polar cap and instant electron acceleration, $`\vartheta _\gamma `$ roughly equals $`0.02/\sqrt{P}`$ radians only (where $`P`$ denotes the spin period in seconds). To avoid uncomfortably small characteristic angles, Daugherty & Harding postulated that primary electrons come from extended polar caps, with the acceleration occurring at a height $`h`$ of several neutron-star radii $`R_{\mathrm{ns}}`$. The latter assumption may be supported by the results of Harding & Muslimov who investigated in a self-consistent way particle acceleration by the electrostatic field due to field-line curvature and the inertial frame dragging effect. Harding & Muslimov found that a stable accelerator, with a double pair-formation front controlled by curvature radiation, is possible at a height of about 0.5 to 1 stellar radii. Here we use a polar cap model combined with the assumption of a nearly aligned rotator. Most ingredients of the pc-model come from Daugherty & Harding. The geometry of the magnetic field of a neutron star is assumed to be well described by a static, axisymmetric dipole. Within a given polar cap (pc) model with a fixed value of $`\alpha `$ there are two possible values of viewing angle $`\zeta `$ resulting in an identical peak separation $`\mathrm{\Delta }^{\mathrm{peak}}`$ (defined as a fraction of $`2\pi `$) but with a reversed order of the two peaks (in terms of a leading peak, and then a trailing peak). Fig. 1 illustrates the ambiguity in the definition of a double-peak pulse. A simple geometrical relation connects the three angles of interest ($`\alpha `$, $`\zeta `$ and $`\vartheta _\gamma `$) to the phase separation $`\mathrm{\Delta }^{\mathrm{peak}}`$: $$\mathrm{cos}\vartheta _\gamma =\mathrm{cos}\delta \mathrm{sin}\alpha \mathrm{sin}\zeta +\mathrm{cos}\alpha \mathrm{cos}\zeta ,$$ (1) (e.g. Ochelkov & Usov 1980), where $`\delta =\pi \mathrm{\Delta }^{\mathrm{peak}}`$ for $`\zeta _{\mathrm{large}}`$, and $`\delta =\pi (1-\mathrm{\Delta }^{\mathrm{peak}})`$ for $`\zeta _{\mathrm{small}}`$. This relation holds as long as aberration of photon propagation due to rapid rotation is neglected. Throughout the paper we always take the case of the larger $`\zeta `$ in each model (4 models are considered). Its value ($`\zeta _{\mathrm{large}}=3.75`$, $`4.5`$, $`15.0`$, and $`3.65`$ degrees for models A, B, C, and D, respectively) along with our choice for the angle $`\alpha `$ (see Table 1) is chosen to yield the separation $`\mathrm{\Delta }^{\mathrm{peak}}=0.43`$ at $`300\mathrm{MeV}`$. The dominant HE emission process is the curvature radiation (CR) by ultrarelativistic beam particles (primary electrons which leave the polar cap), followed by magnetic pair production with the subsequent synchrotron emission (SR). Our numerical code to follow the cascades induced by beam particles is based on Daugherty & Harding, and takes advantage of the following approximations relevant to the problem of directional and spectral distributions of the photons: The curvature photons are emitted tangentially to the local magnetic field direction in a frame rotating with the star. The created e<sup>±</sup>-pairs are assumed to follow the directions of their parent CR photons and they share the photons’ energy equally (for justification see Daugherty & Harding 1983). The synchrotron photons are emitted perpendicularly to the local magnetic field direction in a frame comoving with the electron/positron center of gyration.
The pairs are assumed not to change their position on the field line when radiating (for magnetic field strengths considered here, energy-loss length scales due to SR are of the order of $`10^6`$ cm). The emergent high-energy spectrum is a superposition of CR and SR. We follow Rudak & Dyks to calculate detailed broad-band energy spectra of the high-energy emission. Beam particles are injected along magnetic field lines into the magnetosphere either from the outer rim of the polar cap (hollow cone column) or from the entire polar cap surface (filled column). We use two simple models for their acceleration to ultrarelativistic energies: 1) instant acceleration and 2) acceleration due to a uniform longitudinal electric field $`ℰ`$ over a scale height $`\mathrm{\Delta }h`$. The pulse shapes as a function of photon energy were calculated numerically for four sets of initial parameters (hereafter called models A, B, C and D). Table 1 lists the most important model parameters. In models A and B the primary electrons are distributed evenly along a hollow cone formed by the magnetic field lines from the outer rim of a canonical polar cap, i.e. with a magnetic colatitude $`\theta _{\mathrm{init}}=\theta _{\mathrm{pc}}`$, where $`\theta _{\mathrm{pc}}\simeq (2\pi R_{\mathrm{ns}}/cP)^{1/2}`$ radians at the stellar surface level ($`h=0`$). The beam particles are injected at a height $`h_{\mathrm{init}}`$ with some initial ultrarelativistic energy $`E_{\mathrm{init}}`$ (listed in Table 1) and no subsequent acceleration takes place. The main difference between models A and B is due to different values of $`h_{\mathrm{init}}`$ (equal to 0 and 1 $`R_{\mathrm{ns}}`$ respectively) which result in different locations of origin of secondary particles. Changing these locations is an easy way to modify the spectral properties of the emergent radiation and makes it possible to change (preferably to increase) the angle $`\alpha `$ as constrained by the observed energy-averaged peak separation $`\mathrm{\Delta }^{\mathrm{peak}}\approx 0.43`$. In model C we assume $`\theta _{\mathrm{init}}=2\theta _{\mathrm{pc}}`$. The primaries are injected at $`h_{\mathrm{init}}=2R_{\mathrm{ns}}`$ with $`E_{\mathrm{init}}=mc^2`$ and undergo acceleration by a longitudinal electric field $`ℰ`$ present over a characteristic scale height $`\mathrm{\Delta }h=0.6R_{\mathrm{ns}}`$, resulting in a total potential drop $`V_0`$: $$ℰ(h)=\{\begin{array}{cc}V_0/\mathrm{\Delta }h,\hfill & \text{for }h_{\mathrm{init}}\le h\le (h_{\mathrm{init}}+\mathrm{\Delta }h)\hfill \\ 0,\hfill & \text{elsewhere.}\hfill \end{array}$$ (2) For comparison, we considered model D with a uniform electron distribution over the entire polar cap surface (i.e. $`\theta _{\mathrm{init}}\in [0,\theta _{\mathrm{pc}}]`$). All its remaining features are identical to model A. The values of $`E_{\mathrm{init}}`$ in models A, B and D, and the potential drop $`V_0`$ in model C were chosen to yield a similar number of secondary pairs — about $`10^3`$ per beam particle. In all cases the spin period of the Vela pulsar $`P=0.089`$ s was assumed. ## 3 Results General properties of the function $`\mathrm{\Delta }^{\mathrm{peak}}(\epsilon )`$ may be summarized as follows: In the low-energy range (i.e. below a few GeV) the peak separation either remains constant with increasing photon energy (Models B, C, and D) or slightly decreases (Model A). In either case, however, the slope of $`\mathrm{\Delta }^{\mathrm{peak}}`$ versus $`\epsilon `$ looks consistent with the results of Kanbach.
Then, around a few GeV there exists a critical energy $`\epsilon _{\mathrm{turn}}`$, at which the separation $`\mathrm{\Delta }^{\mathrm{peak}}`$ undergoes a sudden turn: in the high-energy domain (i.e. for $`\epsilon >\epsilon _{\mathrm{turn}}`$) it increases in our hollow-column models (A, B, and C), whereas it decreases in the filled-column model (D), at a rate of about $`0.28`$ phase per decade of photon energy. The existence of $`\epsilon _{\mathrm{turn}}`$ is a direct consequence of magnetic absorption ($`\gamma 𝐁\to e^\pm `$) in the magnetosphere (see also Miyazaki & Takahara 1997). \[Note that $`\epsilon _{\mathrm{turn}}`$ is not equivalent to the high-energy cut-off in the spectrum due to magnetic absorption.\] The dependence of the peak separation upon photon energy for all four models is presented in Fig. 2 (upper panels). To understand why the slope in the low-energy domain ($`\epsilon <\epsilon _{\mathrm{turn}}`$) is different in different models, we now discuss several factors which are of interest in this respect. Let us start with the possible consequences of the orientation for $`\mathrm{\Delta }^{\mathrm{peak}}(\epsilon )`$, i.e. with the choice of the angles $`\alpha `$ and $`\zeta `$. For a given emission pattern in the frame of the magnetic dipole rotating with the star, and for a set of values of the angles $`\alpha `$ and $`\zeta `$ fulfilling the condition $`\mathrm{\Delta }^{\mathrm{peak}}(300\mathrm{MeV})=0.43`$, the line of sight can cross the hollow cone of emission at different impact angles. Let us estimate how geometry alone would affect the relations $`\mathrm{\Delta }^{\mathrm{peak}}(\epsilon )`$ shown in Fig. 2. We neglect aberration of photon propagation due to rotation when calculating the directional emission pattern as seen from an inertial observer frame. For $`\mathrm{\Delta }^{\mathrm{peak}}`$ staying close to 0.5 (roughly between $`0.3`$ and $`0.7`$) we have $`\mathrm{cos}(\pi \mathrm{\Delta }^{\mathrm{peak}})\approx \frac{\pi }{2}-\pi \mathrm{\Delta }^{\mathrm{peak}}`$, so the value of $`\mathrm{cos}\vartheta _\gamma `$ varies approximately linearly with $`\mathrm{\Delta }^{\mathrm{peak}}`$, with a slope of $`-\pi \mathrm{sin}\alpha \mathrm{sin}\zeta `$. Moreover, for a radiating particle sliding along a dipolar field line with the dipole constant $`k=r/\mathrm{sin}^2\theta `$ ($`r`$, $`\theta `$ are the coordinates of the particle in the dipolar frame), one can link the opening angle $`\vartheta _\gamma `$ with the radial position $`r`$ of the particle: $$\mathrm{cos}\vartheta _\gamma =\frac{2-3\frac{r}{k}}{\sqrt{4-3\frac{r}{k}}}.$$ (3) Since in polar cap models $`\frac{r}{k}\ll 1`$, the right-hand side of eq. (3) can be approximated by $`\mathrm{cos}\vartheta _\gamma \approx 1-\frac{9}{8}\frac{r}{k}`$, which along with eq. (1) gives $$r\approx \frac{8}{9}\pi a\mathrm{\Delta }^{\mathrm{peak}}+b$$ (4) where $$a\equiv k\mathrm{sin}\alpha \mathrm{sin}\zeta ,$$ and $$b\equiv \frac{8}{9}k\left(1-\frac{\pi }{2}\mathrm{sin}\alpha \mathrm{sin}\zeta -\mathrm{cos}\alpha \mathrm{cos}\zeta \right).$$ The slope of the relation $`\mathrm{\Delta }^{\mathrm{peak}}(\epsilon )`$ is a combination of two factors: 1) the viewing geometry and 2) directional and spectral changes, with increasing radial coordinate $`r`$, in the radiation yielding the peaks: $$\frac{d\mathrm{\Delta }^{\mathrm{peak}}}{d\epsilon }=\frac{d\mathrm{\Delta }^{\mathrm{peak}}}{dr}\frac{dr}{d\epsilon }.$$ (5) Any changes in viewing geometry (i.e.
the angles $`\alpha `$ and $`\zeta `$) will affect the slope $`d\mathrm{\Delta }^{\mathrm{peak}}/d\epsilon `$ via $`d\mathrm{\Delta }^{\mathrm{peak}}/dr=\frac{9}{8\pi a}`$, the latter being calculated from eq. (4). For example, for models A and B the parameter $`a`$ equals $`1.46R_{\mathrm{ns}}`$ and $`2.91R_{\mathrm{ns}}`$, respectively. Should $`dr/d\epsilon `$ be identical for these two models, the slope in model A would be about two times larger than in model B. Fig. 2 (upper panels) does not show any such effects: the slope in model B is $`0`$ for $`\epsilon <\epsilon _{\mathrm{turn}}`$. This is because the viewing geometry effects are dwarfed by differences in the directional and spectral properties of the radiation in these models. To analyse the directional and spectral properties of the peak emission it is instructive to begin with a simplified, one-component model. Suppose that the only contribution to the emission is due to optically thin CR, i.e. both magnetic absorption and synchrotron emission by secondary pairs created due to this absorption are neglected. Numerical simulations of this case (not included in this paper) show that the peak separation, which we denote as $`\mathrm{\Delta }_{\mathrm{cr}}^{\mathrm{peak}}`$, stays constant as a function of $`\epsilon `$ for the models with instant acceleration (A, B, D), while it increases with increasing $`\epsilon `$ for model C (acceleration over $`\mathrm{\Delta }h`$) at a rate dependent on the electron acceleration rate. The former case can be understood by noting that the energy $`E=\gamma mc^2`$ of the electron decreases monotonically from its starting value $`E_{\mathrm{init}}`$, while its radius of curvature $`\rho _{\mathrm{cr}}`$ increases. Recalling the properties of the continuous curvature spectrum of a single particle (e.g. Landau & Lifshitz 1973), the contribution to the spectrum per unit distance by the electron is therefore highest at the initial altitude $`h_{\mathrm{init}}`$. In consequence, the contribution to the spectrum per unit phase angle by a bundle of electrons moving along a set of open field lines is highest at $`h_{\mathrm{init}}`$. Therefore, it is the opening angle of the CR photons at $`h_{\mathrm{init}}`$ which fixes the phase separation $`\mathrm{\Delta }_{\mathrm{cr}}^{\mathrm{peak}}`$ of the two peaks, regardless of the photon energy. In the latter case the qualitative behaviour of $`\mathrm{\Delta }_{\mathrm{cr}}^{\mathrm{peak}}(\epsilon )`$ can be understood by assuming that the phase of the pulse peak at a given photon energy $`\epsilon `$ is approximately determined by the altitude at which accelerated electrons reach the energy $`\gamma mc^2`$ which satisfies the condition $`0.29\epsilon _{\mathrm{cr}}(\gamma ,\rho _{\mathrm{cr}})\approx \epsilon `$, where $`\epsilon _{\mathrm{cr}}=\frac{3}{2}c\hbar \gamma ^3\rho _{\mathrm{cr}}^{-1}`$ is the characteristic energy of the CR spectrum and $`\rho _{\mathrm{cr}}`$ is the local radius of curvature; both $`\gamma `$ and $`\rho _{\mathrm{cr}}`$ are functions of the altitude $`h`$. We now relax these two simplifications and present the consequences. First, the inclusion of magnetic absorption results in a strong response of $`\mathrm{\Delta }_{\mathrm{cr}}^{\mathrm{peak}}`$ at the highest photon energies, between some critical energy $`\epsilon _{\mathrm{turn}}`$ and the high-energy cutoff, as mentioned in the first paragraph. This effect will be addressed in the final part of this section.
Second, adding the synchrotron component (SR) due to $`e^\pm `$ pairs changes the properties of the high-energy emission for $`\epsilon <\epsilon _{\mathrm{turn}}`$ (i.e. below a few GeV) significantly. Most notable is the domination of SR over CR in terms of intensity. The total energy output per logarithmic energy bandwidth at the first peak as a function of photon energy $`\epsilon `$ for models A, B, C and D is presented in Fig. 2 (lower panels). Both components – SR and CR – are marked schematically to show their relative importance. Consequently, $`\mathrm{\Delta }^{\mathrm{peak}}`$ is affected by the directional and spectral properties of the SR emission. The behaviour of $`\mathrm{\Delta }^{\mathrm{peak}}`$ is shown in Fig. 2 (upper panels). Its slight decrease with increasing $`\epsilon `$ (model A), or no change at all (models B, C, and D), is due to a combination of factors which determine the directional and spectral properties of SR. These include the energy and pitch-angle distributions of the secondary $`e^\pm `$ pairs, as well as their vertical spread within the magnetosphere combined with the strength of the local magnetic field. These factors change from one model to another: Model A - In strong local magnetic fields ($`B_{\mathrm{local}}\gtrsim 10^{12}`$ G), efficient pair production requires lower electron energies than in a low-field case (as in model B). The production occurs over a wider range of altitudes. Consequently, the spectrum of SR contributed locally by the pairs evolves considerably over this range of altitudes. Lorentz factors of gyration $`\gamma _{\perp }`$ have low values (of the order of $`mc^2/(15\epsilon _B)`$, where $`\epsilon _B\equiv mc^2B_{\mathrm{local}}/B_{\mathrm{crit}}`$). The resulting local SR spectra are, therefore, very narrow (see also the lower panel of fig. 1 in Rudak & Dyks 1999): they spread between the (local) characteristic SR energy $`\epsilon _{\mathrm{sr}}`$ and the (local) cyclotron turnover energy $`\epsilon _{\mathrm{ct}}=1.5\epsilon _B/\mathrm{sin}\psi `$ (where $`\psi `$ is the pitch angle), which roughly satisfy $`\epsilon _{\mathrm{sr}}/\epsilon _{\mathrm{ct}}\approx \gamma _{\perp }^2`$, and the ratio does not exceed $`10^3`$ in model A. With increasing height (and therefore decreasing $`B_{\mathrm{local}}`$) both $`\epsilon _{\mathrm{sr}}`$ and $`\epsilon _{\mathrm{ct}}`$ move towards lower and lower values. The final effect of this softening is then a notable decrease of $`\mathrm{\Delta }^{\mathrm{peak}}`$ with increasing photon energy (Fig. 2, upper panel, Model A). In even stronger local magnetic fields (but not exceeding $`B_{\mathrm{crit}}`$) the rate of decrease of $`\mathrm{\Delta }^{\mathrm{peak}}`$ with increasing $`\epsilon `$ may be much larger, because $`e^\pm `$-pairs are produced with extremely low $`\gamma _{\perp }`$ and the synchrotron/cyclotron photons emitted at any altitude concentrate near the local cyclotron energy $`\epsilon _B`$. Model B - When cascades are to develop in a relatively weak magnetic field, $`B_{\mathrm{local}}<10^{11}`$ G, a very high electron energy $`E_{\mathrm{init}}`$ is required. The high value of $`E_{\mathrm{init}}`$ means rapid CR cooling, which brings the electron energy quickly below the level required for pair production. Curvature photons become too soft for pair creation via magnetic absorption very quickly after injection.
The bulk of the pairs is created by electrons with their energies confined to a narrow range at $`E_{\mathrm{init}}`$, and consequently the SR component originates within a narrow range of magnetospheric altitudes (radial positions $`r`$). The resulting $`\mathrm{\Delta }^{\mathrm{peak}}`$ does not change with photon energy $`\epsilon `$. The difference between models A and B in the radial extent $`r`$ of the regions of origin of the curvature photons which are absorbed, producing pairs and eventually SR, is presented in Fig. 3 (right-hand vertical axes). The difference is more apparent when presented in terms of $`\mathrm{\Delta }^{\mathrm{peak}}`$ (left-hand vertical axes). Model C - After an initial stage of linear acceleration, the electrons enter a regime of ‘radiation-reaction-limited acceleration’. Over a considerable range of altitudes ($`3.3R_{\mathrm{ns}}<r<3.6R_{\mathrm{ns}}`$) the electrons’ energy remains approximately constant, and so does the pair-production efficiency (it is equal to $`4\times 10^{-3}`$ pairs per centimetre of the primary electron path). Under such conditions, any evolution of the SR spectrum over this range might affect $`\mathrm{\Delta }^{\mathrm{peak}}`$ as a function of $`\epsilon `$. Nevertheless, no significant changes occur in $`\mathrm{\Delta }^{\mathrm{peak}}(\epsilon )`$. This is because the single-particle SR spectrum hardly changes its shape within the range of altitudes with stable, strong pair production. The balance between the acceleration rate and the CR-cooling rate stabilizes $`\epsilon _{\mathrm{cr}}`$ as well as $`\epsilon _{\mathrm{sr}}`$. Any spectral changes of SR in its low-energy part, near $`\epsilon _{\mathrm{ct}}`$, are not relevant in the context of gamma-rays, since they occur near 100 keV, i.e. well below the energy range of EGRET. <sup>1</sup>The behaviour of SR in its low-energy part (i.e. in hard X-rays) may then lead to an increase of $`\mathrm{\Delta }^{\mathrm{peak}}`$ in the hard X-ray domain in comparison to $`\mathrm{\Delta }^{\mathrm{peak}}`$ in the gamma-rays. Assuming the alternative definition of $`\mathrm{\Delta }^{\mathrm{peak}}`$, with $`\zeta _{\mathrm{small}}`$ (see Section 2), the behaviour of $`\mathrm{\Delta }^{\mathrm{peak}}`$ would be a mirror reflection of our case. This effect might then explain the low value of $`\mathrm{\Delta }^{\mathrm{peak}}`$ within the $`2-30`$ keV range found by Strickman, Harding & deJager in the RXTE data of Vela. Moreover, below 2 keV the synchrotron component in the model spectrum drops below the level of the CR emission, and the value of $`\mathrm{\Delta }^{\mathrm{peak}}`$ there is expected to return to its gamma-ray value. Model D (a uniform distribution of primary electrons over the polar cap but otherwise identical to model A) - here the final effect does not resemble the decrease of $`\mathrm{\Delta }^{\mathrm{peak}}`$ with increasing photon energy found for model A. Apart from numerical fluctuations (see Fig. 2, upper panel), the separation $`\mathrm{\Delta }^{\mathrm{peak}}`$ remains approximately constant as a function of $`\epsilon `$. Finally, let us discuss the inclusion of magnetospheric opacity due to $`\gamma 𝐁\to e^\pm `$. This effect becomes important above $`1\mathrm{GeV}`$, where most power is due to CR in all models.
In this regime the position of each peak in the pulse is determined by magnetic absorption, and this results in a strong response of $`\mathrm{\Delta }^{\mathrm{peak}}`$ between $`\epsilon _{\mathrm{turn}}>1\mathrm{GeV}`$ and a high-energy cutoff: at $`\epsilon =\epsilon _{\mathrm{turn}}`$ the separation $`\mathrm{\Delta }^{\mathrm{peak}}`$ undergoes a sudden turn and starts increasing (or decreasing) rapidly for $`\epsilon >\epsilon _{\mathrm{turn}}`$. For the hollow-column models (A, B and C) the photons in both peaks of a pulse come from low magnetospheric altitudes with narrow opening angles. When $`\epsilon `$ is high enough these photons will be absorbed. The photons which then form the ‘new’ peaks come from higher altitudes (the magnetosphere is transparent to them) and have wider opening angles. In other words, the inner parts of the ‘original’ peaks in the pulse will be eaten up and the gap between the peaks, i.e. the peak separation $`\mathrm{\Delta }^{\mathrm{peak}}`$, will increase (Fig. 2). Our Monte Carlo results for $`\mathrm{\Delta }^{\mathrm{peak}}`$ at $`\epsilon >\epsilon _{\mathrm{turn}}`$ may be reproduced with good accuracy by a simple analytical solution of $`\tau _{\gamma B}\approx 1`$, which has to be combined with eq. (4). The requirement $`\tau _{\gamma B}\approx 1`$ is particularly simple (with some well-known approximations being used) since it refers to a photon created with a momentum tangential to the local dipolar magnetic field line at a height $`h`$ above the neutron-star surface (at radial coordinate $`r=h+R_{\mathrm{ns}}`$). The photon will undergo magnetic absorption (to be more precise, with a probability of $`[1-\mathrm{exp}(-\tau _{\gamma B})]`$) if its energy $`\epsilon `$ satisfies the following condition: $$\epsilon >\epsilon _{\mathrm{esc}}=7.6\times 10^2\left(\frac{P}{0.1\mathrm{s}}\right)^{1/2}\left(\frac{B_{\mathrm{pc}}}{10^{12}\mathrm{G}}\right)^{-1}\left(\frac{r}{R_{\mathrm{ns}}}\right)^{5/2}\mathrm{MeV},$$ (6) (cf. eq. 11 in Bulik et al. 1999). This formula is valid for hollow-column models but may be used for any dipolar field line (the factor $`P^{1/2}`$ comes just from choosing the outer rim of a polar cap to be the site of field-line footpoints). Although the pulsar spin (which leads to an aberration and a slippage of the magnetic field under the photon’s path) has been neglected in the derivation of eq. (6), the formula gives $`\epsilon _{\mathrm{esc}}`$ in excellent agreement with the Monte Carlo method for emission regions placed up to several $`R_{\mathrm{ns}}`$ above the surface and rotation periods typical of strong-field pulsars ($`\gtrsim 10^{-2}`$ s). One may regard eq. (6) as the condition for a radius of escape $`r_{\mathrm{esc}}`$ at a given energy $`\epsilon `$. Photons of energy $`\epsilon `$ will escape the magnetosphere only if they are emitted at $`r\ge r_{\mathrm{esc}}`$, which satisfies $`\epsilon _{\mathrm{esc}}(r)\ge \epsilon `$. The radial coordinate $`r_{\mathrm{esc}}(\epsilon )`$ now has to be combined with eq. (4) to give the $`\mathrm{\Delta }^{\mathrm{peak}}`$ relevant for the ‘magnetic absorption’ regime. This analytical $`\mathrm{\Delta }^{\mathrm{peak}}`$ is shown as dashed lines in Fig. 2 (upper panels) and in Fig. 3, whereas the filled dots are the Monte Carlo results. This branch of the solution intersects the horizontal line set by $`\mathrm{\Delta }^{\mathrm{peak}}=0.43`$ at $`\epsilon _{\mathrm{turn}}\approx 0.9`$, $`4.5`$, and $`3`$ GeV for models A, B, and C, respectively.
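As a rough numerical illustration of how this analytical branch is obtained, the sketch below (our own Python illustration, not the code used for the models) inverts eq. (6) to get $`r_{\mathrm{esc}}(\epsilon )`$ and converts it to a peak separation through eq. (4). The Vela-like parameters ($`P=0.089`$ s, $`B_{\mathrm{pc}}=3.4\times 10^{12}`$ G) and the geometry constants $`a`$ and $`b`$ are assumed values chosen only to show the procedure; they are not the fitted model values, and the linearization in eq. (4) is only indicative at the larger radii.

```python
import numpy as np

def eps_esc(r_over_R, P=0.089, B12=3.4):
    """Escape energy in MeV for a photon emitted tangentially to a rim
    field line at radial coordinate r (eq. 6); P in s, B_pc in 10^12 G."""
    return 7.6e2 * np.sqrt(P / 0.1) / B12 * r_over_R**2.5

def r_esc(eps_MeV, P=0.089, B12=3.4):
    """Inverse of eq. (6): the lowest emission radius (in units of R_ns)
    from which a photon of energy eps_MeV escapes the magnetosphere."""
    return (eps_MeV * B12 / (7.6e2 * np.sqrt(P / 0.1)))**0.4

def delta_peak_of_r(r_over_R, a_over_R, b_over_R):
    """Peak separation from the linearized geometry, eq. (4):
    r ~ (8/9) * pi * a * Delta + b, solved for Delta."""
    return 9.0 * (r_over_R - b_over_R) / (8.0 * np.pi * a_over_R)

# assumed geometry constants (in units of R_ns), for illustration only
a_assumed, b_assumed = 1.46, 0.0
for eps in (1.0e3, 2.0e3):  # 1 and 2 GeV, i.e. above eps_turn
    r = r_esc(eps)
    print(f"{eps/1e3:.0f} GeV: r_esc = {r:.2f} R_ns, "
          f"Delta_peak ~ {delta_peak_of_r(r, a_assumed, b_assumed):.2f}")
```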
For model D, with a uniform distribution of primary electrons over the polar cap (but otherwise identical to model A), the changes of $`\mathrm{\Delta }^{\mathrm{peak}}`$ above $`\epsilon _{\mathrm{turn}}`$ occur in the opposite sense. Unlike in the previous models, here both peaks of the pulse are formed by photons emitted tangentially to magnetic field lines attached to the polar cap at some opening angle $`\theta _{\mathrm{init}}<\theta _{\mathrm{pc}}`$. These photons are less attenuated than those coming from a hollow column, and in consequence the peak separation drops. A similar behaviour was obtained by Miyazaki & Takahara in their model of a homogeneous polar cap. We have also investigated the behaviour of $`\mathrm{\Delta }^{\mathrm{peak}}(\epsilon )`$ above $`\epsilon _{\mathrm{turn}}`$ for other distributions of primary electrons over the polar cap. We have considered intermediate cases between models A and D (i.e. with a uniform coverage of only an outer part of the polar cap area, between some inner radius $`r_{\mathrm{in}}<r_{\mathrm{pc}}`$ and the polar cap radius $`r_{\mathrm{pc}}`$), and models with a uniformly filled interior of the polar cap but with an increased electron density along the polar cap rim (cf. Daugherty & Harding 1996). We conclude that, regardless of the actual shape of the active part of the polar cap (i.e. the part ‘covered’ with primary electrons), be it an outer rim, an entire cap, a ring, or an entire cap/ring with increased rim density, one does expect in general strong changes in the peak separation to occur at photon energies close to the high-energy spectral cutoff due to magnetic absorption. A technical comment is appropriate regarding $`\mathrm{\Delta }^{\mathrm{peak}}=0.43`$ at $`300\mathrm{MeV}`$. It appears that the technique adopted by Kanbach of fitting the observed pulse shapes with asymmetric Lorentz profiles tends to overestimate the true value of $`\mathrm{\Delta }^{\mathrm{peak}}`$ by a few thousandths of a phase. Therefore the actual value may be closer to 0.42 than to 0.43 (Maurice Gros, private communication). Nonetheless, this shift in $`\mathrm{\Delta }^{\mathrm{peak}}`$ does not change any conclusions of this work. ## 4 Summary In this paper we addressed a recent suggestion of Kanbach that the peak separation $`\mathrm{\Delta }^{\mathrm{peak}}`$ in the double-peak gamma-ray pulses of the Vela pulsar may monotonically decrease with increasing photon energy, at a rate of about $`0.025`$ phase per decade in energy over the range $`50\mathrm{MeV}`$ to $`9\mathrm{GeV}`$. We calculated the gamma-ray pulses expected in polar-cap models with magnetospheric activity induced by curvature radiation of beam particles. Two types of geometry of the magnetospheric column above the polar cap were assumed: a hollow column associated with the outer rim of the polar cap and a filled column associated with a uniform polar cap. Four models were considered with two scenarios for the acceleration of beam particles. Pulsed emission in the models was a superposition of curvature radiation due to beam particles and synchrotron radiation due to secondary $`e^\pm `$ pairs in magnetospheric cascades. The changes in the peak separation were investigated with Monte Carlo numerical simulations. We found that, regardless of the differences between the models, the peak separation $`\mathrm{\Delta }^{\mathrm{peak}}`$ below a few GeV, where the emission is dominated by the synchrotron component, is either a weakly decreasing function of photon energy $`\epsilon `$ or remains constant.
Both variants may be considered to be in agreement with the results of Kanbach, since the latter are affected by large statistical errors. The particular behaviour of $`\mathrm{\Delta }^{\mathrm{peak}}`$ depends on a combination of several factors, including the strength of the magnetic field in the region of pair formation and the model of electron acceleration (both of which determine the spectral and directional properties of the radiation at different altitudes), as well as the viewing geometry. Essentially, in strong fields, $`B_{\mathrm{local}}\gtrsim 10^{12}\mathrm{G}`$, $`\mathrm{\Delta }^{\mathrm{peak}}`$ decreases with increasing photon energy $`\epsilon `$, whereas for $`B_{\mathrm{local}}<10^{12}\mathrm{G}`$ the peak separation $`\mathrm{\Delta }^{\mathrm{peak}}`$ stays at a constant level. Moreover, we found that due to magnetic absorption ($`\gamma 𝐁\to e^\pm `$) there exists a critical energy $`\epsilon _{\mathrm{turn}}`$ at which the peak separation $`\mathrm{\Delta }^{\mathrm{peak}}`$ makes an abrupt turn and then changes dramatically for $`\epsilon >\epsilon _{\mathrm{turn}}`$. It increases in the hollow-column models (A, B, and C) and decreases in the filled-column model (D), at a rate of about $`0.28`$ phase per decade of photon energy. The numerical behaviour of $`\mathrm{\Delta }^{\mathrm{peak}}`$ in this regime in the hollow-column models was easily reproduced to high accuracy with a simple analytical model of magnetospheric transparency for a photon of energy $`\epsilon `$ with its momentum tangential to the local dipolar magnetic field line at its site of origin. The exact value of $`\epsilon _{\mathrm{turn}}`$ is model-dependent, but it is confined to a range between $`0.9\mathrm{GeV}`$ and $`4.5\mathrm{GeV}`$. Finding such a hypothetical turnover of $`\mathrm{\Delta }^{\mathrm{peak}}`$ in real observational data would require, however, high-sensitivity detectors, since for $`\epsilon >\epsilon _{\mathrm{turn}}`$ the expected flux of gamma-rays drops significantly. If detected, this turnover would be an important signature of polar cap activity in gamma-ray pulsars. It would support the notion that high-energy cutoffs in gamma-ray spectra of pulsars are due to magnetic absorption. Models with CR-induced cascades, like those considered in this work, are not the only possibility for nearly aligned rotators to produce double-peak pulses with large phase separations. There exists an alternative class of models, with pair cascades above the polar cap induced by magnetic inverse Compton scattering (ICS) of primary electrons in the field of soft photons from the stellar surface, proposed in a series of papers (e.g. Sturner & Dermer 1994, Sturner et al. 1995). In particular, Sturner et al. (1995) present detailed Monte Carlo model spectra of the Vela pulsar. They also present pulse profiles at a fixed energy of 50 MeV (for several viewing angles), but no comment is given regarding the problem of $`\mathrm{\Delta }^{\mathrm{peak}}`$ versus photon energy. We expect the outcome to be qualitatively similar to our results. First, the scatterings take place mostly within a very limited height above the polar cap surface (below $`h\sim R_{\mathrm{pc}}`$) and the preferred directions of propagation of the ICS photons will be fixed by the magnetic field lines just above the surface. Therefore, $`\mathrm{\Delta }^{\mathrm{peak}}`$ due solely to ICS photons should stay constant over a wide range of energy.
Inclusion of synchrotron photons due to pairs is unlikely to notably affect $`\mathrm{\Delta }^{\mathrm{peak}}`$ unless the pair formation front is vertically more extended than for CR-induced cascades. Second, some turnover point at $`\epsilon _{\mathrm{turn}}`$ not exceeding 1 GeV should be present due to magnetic absorption. The behaviour of $`\mathrm{\Delta }^{\mathrm{peak}}`$ for $`\epsilon >\epsilon _{\mathrm{turn}}`$ should roughly follow the dashed lines in Fig. 2 (upper panel) and Fig.3 as long as the assumption about photons (which are to be absorbed) propagating tangentially to local dipolar magnetic field line at their site of origin remains valid for majority of ICS photons. To verify this qualitative picture would, however, require detailed numerical calculations. ## ACKNOWLEDGEMENTS This work has been financed by the KBN grants 2P03D-01016, and 2P03D-02117. Support from Multiprocessor Systems Group at Nicholas Copernicus University’s Computer Centre in providing facilities for the time/memory-consuming Monte Carlo calculations is appreciated. We are grateful to Gottfried Kanbach and Maurice Gros for valuable discussions on processing and analysis of high-energy data for the Vela pulsar. We thank the anonymous referee for bringing our attention to the paper by Sturner et al. (1995).
# Discovery of a New Deeply Eclipsing SU UMa-Type Dwarf Nova, IY UMa (= TmzV85) ## 1. Introduction Dwarf novae are cataclysmic binary stars which exhibit repetitive outbursts of several magnitudes. They contain a Roche-lobe-filling cool dwarf star that loses mass through the inner Lagrangian point, and a white-dwarf star accreting it (Warner 1995). The SU UMa stars form a sub-class of dwarf novae, showing two types of outburst, namely, a short “normal” outburst and a long “superoutburst”. According to theories for the superoutburst mechanism (e.g. Osaki 1996), after the accretion disk grows over a critical radius it becomes tidally unstable due to a gravitational interaction with the secondary. In this model the precession of an eccentric disk can explain the “superhump” modulation present in the superoutburst. Eclipsing systems provide a unique opportunity to reconstruct the brightness distribution of an accretion disk from the observed integrated light (Horne 1985; Baptista, Steiner 1991, 1993). There are only five known SU UMa stars which exhibit deep eclipses, indicating occultation of the accretion disk and the white dwarf by the secondary star. Of these systems, HT Cas (Zhang et al. 1986), OY Car (Krzeminski, Vogt 1985), and Z Cha (Bailey 1979) have long been studied; this limited sample has historically provided almost all of our knowledge concerning the spatial structure and time-evolution of accretion disks in SU UMa stars. Although two more eclipsing SU UMa stars, DV UMa (Nogami et al., in preparation) and V2051 Oph (Kiyota, Kato 1998), have recently been discussed, the low frequency of superoutbursts and the small number of known eclipsing systems still make it difficult to directly study the eccentric disk itself and its evolution with time by an observational approach. In this letter we report on the discovery of a new deeply eclipsing northern SU UMa-type dwarf nova, IY UMa (= TmzV85), along with the results of our photometric monitoring and time-resolved photometry. A more detailed analysis of the eclipses, including the time-evolution of the accretion disk during this superoutburst and the subsequent rapid fading phase, will be presented in a separate paper. ## 2. Discovery and Observations Takamizawa (1998) discovered a new variable star which had brightened to a photographic magnitude of 13.0 on 1997 November 9.751 (UT), from fainter than 14.9 mag on November 1.753 (UT). He reported this star as TmzV85 to the Variable Star Network (VSNET, http://www.kusastro.kyoto-u.ac.jp/vsnet) along with his comment that this is a potential dwarf nova, based on the lack of detections on other films and the relatively blue color of the corresponding USNO star. Following this report, visual and CCD monitoring was conducted, yielding negative results until 2000 January 13.509, when Schmeer (2000) detected a second brightening at an unfiltered CCD magnitude of 14.0. We started CCD time-series observations, which immediately revealed the presence of superhumps and deep eclipses, establishing that TmzV85 is a new deeply eclipsing SU UMa-type dwarf nova (Uemura et al. 2000). This object has subsequently been given the designation IY UMa (Samus 2000). This is the first case in which both superhumps and eclipses were discovered simultaneously. The position of IY UMa, derived by H. Yamaoka (private communication), is R.A. = 10h 43m 56s.87, Decl. = +58° 07′ 32″.5 (equinox 2000.0). Figure 1 gives the finding chart of IY UMa.
A description of the equipment of CCD time-series photometry is given in table 1. After correcting for the standard de-biasing and flat fielding, we processed object frames with the PSF and aperture photometry packages. We performed differential photometry relative to the comparison star, C1, shown in figure 1, whose constancy was confirmed by check stars, C2 and C3. ## 3. Results The first outburst occurred in 1997, which was recorded as two photographic magnitudes of 13.0 (November 9.751) and 13.4 (November 9.756); the second outburst was observed at 14.0 mag on 2000 January 13.509. The first outburst was most certainly also a superoutburst, because the observed maximum was brighter than that of the second outburst. This means the amplitude of a superoutburst of 5.4 mag, determined by the minimum magnitude of 18.4 (Henden 2000). No other outburst over 15.3 mag has been recorded since 1994 November 11 when K. Takamizawa took his oldest photographic image of the field of IY UMa, although this object has been relatively closely monitored, particularly since 1999. Because we cannot exclude the possibility of overlooking another superoutburst before 1999, we suggest the typical time interval between two subsequent superoutbursts, called “supercycle”, is $``$800 d, or its half. The light curve of the outburst in 2000 January is given in figure 2. The abscissa and ordinate denote time in heliocentric julian date and unfiltered CCD or visual magnitude, respectively. The points and their errorbars denote the nightly averaged outside-eclipse magnitudes of CCD time-series photometry and their standard error, respectively. The open circles denote the magnitude by CCD monitorings. The last recorded magnitude before the outburst of 17.6 mag on January 8.463 and the discovery date of the outburst suggest that the duration of this superoutburst was between 12 and 18 days. Figure 3 provides the light curve on HJD 2451561.9 – 2451562.7, representative of the intermediate stage of the superoutburst (upper panel) and HJD 2451567.1 – 2451567.9, representative of the advanced stage (lower panel), which clearly show the superhumps and the deep eclipses. The superhump amplitude of about 0.5 mag in the intermediate stage becomes smaller at the late stage, about 0.3 mag. On the other hand, the eclipses become deeper with time; the typical depth of the eclipse are about 1.3 mag in the upper panel and about 1.8 mag in the lower panel, which suggests that the brightness of the outer part of the disk gradually fades and/or the disk, itself, shrinks during the advanced stage. The profile of the eclipse is quite asymmetric, suggesting the presence of a strongly asymmetric disk. We first determined the ephemeris of the eclipse center, $`T`$ (in HJD): $`T=2451561.24546(\pm 0.00011)+0.0739132(\pm 0.0000018)\times E,`$ where $`E`$ is a cycle number. To determine the superhump period, we rejected observations within phase 0.10 of the eclipse center from all of the data obtained during the outburst (HJD 2451561.2098 – HJD 2451569.5865). After removing a linear trend of the decline, we performed a period analysis using the Phase Dispersion Minimization (PDM) method (Stellingwerf 1978), which indicates $`0.07588\pm 0.0000113`$ d as the best estimated superhump period. The superhump excess, $`\epsilon `$, defined by $`\epsilon =(P_{\mathrm{sh}}P_{\mathrm{orb}})/P_{\mathrm{orb}}`$ , where $`P_{\mathrm{orb}}`$ and $`P_{\mathrm{sh}}`$ denote the orbital and superhump period, respectively, is consequently calculated as 0.027. ## 4. 
Discussion and Summary We have derived some of the physical parameters of the newly discovered SU UMa-type dwarf nova with a deep eclipse, IY UMa, and summarize them in table 2 along with those of the other eclipsing SU UMa stars. As shown in this table, the orbital period of IY UMa is quite similar to those of Z Cha and HT Cas. Because the normal outburst of IY UMa has historically not been detected, continuous observations are essential to determine the frequency of a normal outburst and the part of accretion disk where the disk-instability begins. IY UMa is potentially the most valuable northern SU UMa-type dwarf nova with deep eclipses, the bright quiescence magnitude, and relatively frequent superoutbursts compared to HT Cas, which has not been observed to undergo a superoutburst since 1985. The star thus provides a unique opportunity for SUBARU telescope to study the structure of accretion disks of SU UMa-type dwarf novae in quiescence through eclipse-timing spectroscopic observations. We are pleased to acknowledge comments by D. Nogami, which lead to several improvements in this paper. This research has been supported in part by a Grant-in-Aid for Scientific Research (10740095) of the Japanese Ministry of Education, Science, Sports, and Culture. KM has been financially supported as a Research Fellow for Young Scientists by the Japan Society for the Promotion of Science. PS’s observations were made with the Iowa Robotic Observatory, and he wishes to thank Robert Mutel and his students. ## References Bailey J. 1979, MNRAS 188, 681 Baptista R., Steiner J. E. 1991, A&A 249, 284 Baptista R., Steiner J. E. 1993, A&A 277, 331 Downes R. A., Mateo M., Szkody P., Jenner D. C., Margon B. 1986, ApJ 301, 240 Henden A. 2000, vsnet-alert circulation 4060 (http://www.kusastro.kyoto-u.ac.jp/vsnet/Mail/vsnet-alert/msg04060.html) Horne K. 1985, MNRAS 213, 129 Kiyota S., Kato T. 1998, Inf. Bull. Variable Stars 4644 Krzeminski W., Vogt N. 1985 A&A 144, 124 Mattei J. A., Kinnunen T., Hurst G. 1985 IAUC 4027 Mennickent R. E., Matsumoto K., Arenas J. 1999, A&A 348, 466 Osaki Y. 1995, PASJ 47, 47 Osaki Y. 1996, PASP 108, 39 Patterson J. 1998, PASP 110, 1132 Ritter H., Kolb U. 1998, A&AS 129, 83 Samus N. N. 2000, IAU Circ. 7353 Schmeer P. 2000, vsnet-alert circulation 4027 (http://www.kusastro.kyoto-u.ac.jp/vsnet/Mail/vsnet-alert/msg04027.html) Stellingwerf R. F. 1978, ApJ 224, 953 Takamizawa K. 1998, vsnet-obs circulation 18078 (http://www.kusastro.kyoto-u.ac.jp/vsnet/Mail/obs18000/msg00078.html) Uemura M., Kato T., Novák R., Jensen L. T., Takamizawa K., Schmeer P., Yamaoka H., Henden A. 2000, IAU Circ. 7349 Warner B. 1995, Cataclysmic Variable Stars, p126–215 (Cambridge Univ. Press, Cambridge) Vanmunster T. 2000, in preparation Zhang E. H., Robinson E. L., Nather R. E. 1986, ApJ 305, 740
# Search for 𝑪⁢𝑷 violation in 𝑩^±→𝑱/𝝍⁢𝑲^± and 𝑩^±→𝝍⁢(𝟐⁢𝑺)⁢𝑲^± decays ## Abstract We present a search for direct $`CP`$ violation in $`B^\pm J/\psi K^\pm `$ and $`B^\pm \psi (2S)K^\pm `$ decays. In a sample of $`9.7\times 10^6`$ $`B\overline{B}`$ meson pairs collected with the CLEO detector, we have fully reconstructed 534 $`B^\pm J/\psi K^\pm `$ and 120 $`B^\pm \psi (2S)K^\pm `$ decays with very low background. We have measured the $`CP`$-violating charge asymmetry to be $`(+1.8\pm 4.3[\mathrm{stat}]\pm 0.4[\mathrm{syst}])\%`$ for $`B^\pm J/\psi K^\pm `$ and $`(+2.0\pm 9.1[\mathrm{stat}]\pm 1.0[\mathrm{syst}])\%`$ for $`B^\pm \psi (2S)K^\pm `$. preprint: CLNS 00/1661 CLEO 00-01 G. Bonvicini,<sup>1</sup> D. Cinabro,<sup>1</sup> S. McGee,<sup>1</sup> L. P. Perera,<sup>1</sup> G. J. Zhou,<sup>1</sup> E. Lipeles,<sup>2</sup> S. Pappas,<sup>2</sup> M. Schmidtler,<sup>2</sup> A. Shapiro,<sup>2</sup> W. M. Sun,<sup>2</sup> A. J. Weinstein,<sup>2</sup> F. Würthwein,<sup>2,</sup><sup>*</sup><sup>*</sup>*Permanent address: Massachusetts Institute of Technology, Cambridge, MA 02139. D. E. Jaffe,<sup>3</sup> G. Masek,<sup>3</sup> H. P. Paar,<sup>3</sup> E. M. Potter,<sup>3</sup> S. Prell,<sup>3</sup> V. Sharma,<sup>3</sup> D. M. Asner,<sup>4</sup> A. Eppich,<sup>4</sup> T. S. Hill,<sup>4</sup> R. J. Morrison,<sup>4</sup> H. N. Nelson,<sup>4</sup> R. A. Briere,<sup>5</sup> B. H. Behrens,<sup>6</sup> W. T. Ford,<sup>6</sup> A. Gritsan,<sup>6</sup> J. Roy,<sup>6</sup> J. G. Smith,<sup>6</sup> J. P. Alexander,<sup>7</sup> R. Baker,<sup>7</sup> C. Bebek,<sup>7</sup> B. E. Berger,<sup>7</sup> K. Berkelman,<sup>7</sup> F. Blanc,<sup>7</sup> V. Boisvert,<sup>7</sup> D. G. Cassel,<sup>7</sup> M. Dickson,<sup>7</sup> P. S. Drell,<sup>7</sup> K. M. Ecklund,<sup>7</sup> R. Ehrlich,<sup>7</sup> A. D. Foland,<sup>7</sup> P. Gaidarev,<sup>7</sup> L. Gibbons,<sup>7</sup> B. Gittelman,<sup>7</sup> S. W. Gray,<sup>7</sup> D. L. Hartill,<sup>7</sup> B. K. Heltsley,<sup>7</sup> P. I. Hopman,<sup>7</sup> C. D. Jones,<sup>7</sup> D. L. Kreinick,<sup>7</sup> M. Lohner,<sup>7</sup> A. Magerkurth,<sup>7</sup> T. O. Meyer,<sup>7</sup> N. B. Mistry,<sup>7</sup> E. Nordberg,<sup>7</sup> J. R. Patterson,<sup>7</sup> D. Peterson,<sup>7</sup> D. Riley,<sup>7</sup> J. G. Thayer,<sup>7</sup> P. G. Thies,<sup>7</sup> B. Valant-Spaight,<sup>7</sup> A. Warburton,<sup>7</sup> P. Avery,<sup>8</sup> C. Prescott,<sup>8</sup> A. I. Rubiera,<sup>8</sup> J. Yelton,<sup>8</sup> J. Zheng,<sup>8</sup> G. Brandenburg,<sup>9</sup> A. Ershov,<sup>9</sup> Y. S. Gao,<sup>9</sup> D. Y.-J. Kim,<sup>9</sup> R. Wilson,<sup>9</sup> T. E. Browder,<sup>10</sup> Y. Li,<sup>10</sup> J. L. Rodriguez,<sup>10</sup> H. Yamamoto,<sup>10</sup> T. Bergfeld,<sup>11</sup> B. I. Eisenstein,<sup>11</sup> J. Ernst,<sup>11</sup> G. E. Gladding,<sup>11</sup> G. D. Gollin,<sup>11</sup> R. M. Hans,<sup>11</sup> E. Johnson,<sup>11</sup> I. Karliner,<sup>11</sup> M. A. Marsh,<sup>11</sup> M. Palmer,<sup>11</sup> C. Plager,<sup>11</sup> C. Sedlack,<sup>11</sup> M. Selen,<sup>11</sup> J. J. Thaler,<sup>11</sup> J. Williams,<sup>11</sup> K. W. Edwards,<sup>12</sup> R. Janicek,<sup>13</sup> P. M. Patel,<sup>13</sup> A. J. Sadoff,<sup>14</sup> R. Ammar,<sup>15</sup> A. Bean,<sup>15</sup> D. Besson,<sup>15</sup> R. Davis,<sup>15</sup> N. Kwak,<sup>15</sup> X. Zhao,<sup>15</sup> S. Anderson,<sup>16</sup> V. V. Frolov,<sup>16</sup> Y. Kubota,<sup>16</sup> S. J. Lee,<sup>16</sup> R. Mahapatra,<sup>16</sup> J. J. O’Neill,<sup>16</sup> R. Poling,<sup>16</sup> T. Riehle,<sup>16</sup> A. 
Smith,<sup>16</sup> J. Urheim,<sup>16</sup> S. Ahmed,<sup>17</sup> M. S. Alam,<sup>17</sup> S. B. Athar,<sup>17</sup> L. Jian,<sup>17</sup> L. Ling,<sup>17</sup> A. H. Mahmood,<sup>17,</sup>Permanent address: University of Texas - Pan American, Edinburg, TX 78539. M. Saleem,<sup>17</sup> S. Timm,<sup>17</sup> F. Wappler,<sup>17</sup> A. Anastassov,<sup>18</sup> J. E. Duboscq,<sup>18</sup> K. K. Gan,<sup>18</sup> C. Gwon,<sup>18</sup> T. Hart,<sup>18</sup> K. Honscheid,<sup>18</sup> D. Hufnagel,<sup>18</sup> H. Kagan,<sup>18</sup> R. Kass,<sup>18</sup> T. K. Pedlar,<sup>18</sup> H. Schwarthoff,<sup>18</sup> J. B. Thayer,<sup>18</sup> E. von Toerne,<sup>18</sup> M. M. Zoeller,<sup>18</sup> S. J. Richichi,<sup>19</sup> H. Severini,<sup>19</sup> P. Skubic,<sup>19</sup> A. Undrus,<sup>19</sup> S. Chen,<sup>20</sup> J. Fast,<sup>20</sup> J. W. Hinson,<sup>20</sup> J. Lee,<sup>20</sup> N. Menon,<sup>20</sup> D. H. Miller,<sup>20</sup> E. I. Shibata,<sup>20</sup> I. P. J. Shipsey,<sup>20</sup> V. Pavlunin,<sup>20</sup> D. Cronin-Hennessy,<sup>21</sup> Y. Kwon,<sup>21,</sup>Permanent address: Yonsei University, Seoul 120-749, Korea. A.L. Lyon,<sup>21</sup> E. H. Thorndike,<sup>21</sup> C. P. Jessop,<sup>22</sup> H. Marsiske,<sup>22</sup> M. L. Perl,<sup>22</sup> V. Savinov,<sup>22</sup> D. Ugolini,<sup>22</sup> X. Zhou,<sup>22</sup> T. E. Coan,<sup>23</sup> V. Fadeyev,<sup>23</sup> Y. Maravin,<sup>23</sup> I. Narsky,<sup>23</sup> R. Stroynowski,<sup>23</sup> J. Ye,<sup>23</sup> T. Wlodek,<sup>23</sup> M. Artuso,<sup>24</sup> R. Ayad,<sup>24</sup> C. Boulahouache,<sup>24</sup> K. Bukin,<sup>24</sup> E. Dambasuren,<sup>24</sup> S. Karamov,<sup>24</sup> G. Majumder,<sup>24</sup> G. C. Moneti,<sup>24</sup> R. Mountain,<sup>24</sup> S. Schuh,<sup>24</sup> T. Skwarnicki,<sup>24</sup> S. Stone,<sup>24</sup> G. Viehhauser,<sup>24</sup> J.C. Wang,<sup>24</sup> A. Wolf,<sup>24</sup> J. Wu,<sup>24</sup> S. Kopp,<sup>25</sup> S. E. Csorna,<sup>26</sup> I. Danko,<sup>26</sup> K. W. McLean,<sup>26</sup> Sz. Márka,<sup>26</sup> Z. Xu,<sup>26</sup> R. Godang,<sup>27</sup> K. Kinoshita,<sup>27,</sup><sup>§</sup><sup>§</sup>§Permanent address: University of Cincinnati, Cincinnati, OH 45221 I. C. Lai,<sup>27</sup> and S. 
Schrenk<sup>27</sup> <sup>1</sup>Wayne State University, Detroit, Michigan 48202 <sup>2</sup>California Institute of Technology, Pasadena, California 91125 <sup>3</sup>University of California, San Diego, La Jolla, California 92093 <sup>4</sup>University of California, Santa Barbara, California 93106 <sup>5</sup>Carnegie Mellon University, Pittsburgh, Pennsylvania 15213 <sup>6</sup>University of Colorado, Boulder, Colorado 80309-0390 <sup>7</sup>Cornell University, Ithaca, New York 14853 <sup>8</sup>University of Florida, Gainesville, Florida 32611 <sup>9</sup>Harvard University, Cambridge, Massachusetts 02138 <sup>10</sup>University of Hawaii at Manoa, Honolulu, Hawaii 96822 <sup>11</sup>University of Illinois, Urbana-Champaign, Illinois 61801 <sup>12</sup>Carleton University, Ottawa, Ontario, Canada K1S 5B6 and the Institute of Particle Physics, Canada <sup>13</sup>McGill University, Montréal, Québec, Canada H3A 2T8 and the Institute of Particle Physics, Canada <sup>14</sup>Ithaca College, Ithaca, New York 14850 <sup>15</sup>University of Kansas, Lawrence, Kansas 66045 <sup>16</sup>University of Minnesota, Minneapolis, Minnesota 55455 <sup>17</sup>State University of New York at Albany, Albany, New York 12222 <sup>18</sup>Ohio State University, Columbus, Ohio 43210 <sup>19</sup>University of Oklahoma, Norman, Oklahoma 73019 <sup>20</sup>Purdue University, West Lafayette, Indiana 47907 <sup>21</sup>University of Rochester, Rochester, New York 14627 <sup>22</sup>Stanford Linear Accelerator Center, Stanford University, Stanford, California 94309 <sup>23</sup>Southern Methodist University, Dallas, Texas 75275 <sup>24</sup>Syracuse University, Syracuse, New York 13244 <sup>25</sup>University of Texas, Austin, TX 78712 <sup>26</sup>Vanderbilt University, Nashville, Tennessee 37235 <sup>27</sup>Virginia Polytechnic Institute and State University, Blacksburg, Virginia 24061 $`CP`$ violation arises naturally in the Standard Model with three quark generations ; however, it still remains one of the least experimentally constrained sectors of the Standard Model. Decays of $`B`$ mesons promise to be a fertile ground for $`CP`$ violation studies. Direct $`CP`$ violation, also called $`CP`$ violation in decay, occurs when the amplitude for a decay and its $`CP`$-conjugate process have different magnitudes. Direct $`CP`$ violation can be observed in both charged and neutral $`B`$ meson decays. At least two interfering amplitudes with different $`CP`$-odd (weak) and $`CP`$-even (strong or electromagnetic) phases are the necessary ingredients for direct $`CP`$ violation. For the decays governed by the $`bc\overline{c}s`$ quark transition, such as $`B^\pm J/\psi K^\pm `$ and $`B^0(\overline{B}^0)J/\psi K_S^0`$, there are interfering Standard Model tree and penguin amplitudes (Fig. 1). These amplitudes could have a significant relative strong phase. The relative weak phase, however, is expected to be very small . Therefore, the $`CP`$ asymmetry in $`B^\pm J/\psi K^\pm `$ decay is firmly predicted in the Standard Model to be much smaller than the 4% precision of our measurement. A $`CP`$ asymmetry of $`𝒪(10\%)`$ in $`B^\pm J/\psi K^\pm `$ decay is possible in a specific two-Higgs doublet model described in Ref.; such a large asymmetry could be measured with our current data. In order to constrain any of the New Physics models, however, we need to know the relative strong phases which are difficult to determine. 
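For orientation, the direct-CP asymmetry generated by two interfering amplitudes $`A_1`$ and $`rA_1e^{i(\delta +\varphi )}`$, with relative strong phase $`\delta `$ and relative weak phase $`\varphi `$, takes the standard form $`2r\mathrm{sin}\delta \mathrm{sin}\varphi /(1+r^2+2r\mathrm{cos}\delta \mathrm{cos}\varphi )`$, so it vanishes when either phase difference is zero. The short sketch below is our own illustration of this textbook formula (not part of the original analysis); the overall sign depends on the convention for $`B^+`$ versus $`B^{}`$, and the numerical inputs are only assumed example values.

```python
import numpy as np

def direct_cp_asymmetry(r, strong_phase, weak_phase):
    """Two-amplitude interference: amplitudes A1 and r*A1*exp[i(delta+phi)].
    The weak (CP-odd) phase flips sign under CP; the strong phase does not."""
    num = 2.0 * r * np.sin(strong_phase) * np.sin(weak_phase)
    den = 1.0 + r**2 + 2.0 * r * np.cos(strong_phase) * np.cos(weak_phase)
    return num / den

# A few-per-cent amplitude ratio with a large strong phase but a tiny weak
# phase gives an asymmetry of order 10^-4, far below a 4% sensitivity:
print(direct_cp_asymmetry(r=0.03, strong_phase=np.pi / 2, weak_phase=0.01))
```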
The measurement of the $`CP`$ asymmetry in $`B^0(\overline{B}^0)J/\psi K_S^0`$ decay allows an extraction of the relative phase between the $`B^0\overline{B}^0`$ mixing amplitude and the $`bc\overline{c}s`$ decay amplitude . In the Standard Model this phase is equal to $`\mathrm{sin}2\beta `$, where $`\beta \mathrm{Arg}\left(V_{cd}V_{cb}^{}/V_{td}V_{tb}^{}\right)`$. An observation of $`CP`$ asymmetry in $`B^\pm J/\psi K^\pm `$ decay at a few per cent or larger level will be a clear evidence for sources of $`CP`$ violation beyond the Standard Model. Such an observation will also mean that a measurement of the $`CP`$ asymmetry in $`B^0(\overline{B}^0)J/\psi K_S^0`$ decay no longer determines $`\mathrm{sin}2\beta `$. If some mechanism causes direct $`CP`$ violation to occur in $`B^\pm J/\psi K^\pm `$ decays, then the same mechanism could generate a $`CP`$ asymmetry in $`B^\pm \psi (2S)K^\pm `$ mode. Final state strong interactions, however, could be quite different for $`J/\psi K`$ and $`\psi (2S)K`$ states; thus, we measured $`CP`$-violating charge asymmetries separately for $`B^\pm J/\psi K^\pm `$ and $`B^\pm \psi (2S)K^\pm `$ decay modes. The data used for our measurement were collected at the Cornell Electron Storage Ring (CESR) with two configurations of the CLEO detector called CLEO II and CLEO II.V . The components of the CLEO detector most relevant to this analysis are the charged particle tracking system, the CsI electromagnetic calorimeter, and the muon chambers. In CLEO II the momenta of charged particles are measured in a tracking system consisting of a 6-layer straw tube chamber, a 10-layer precision drift chamber, and a 51-layer main drift chamber, all operating inside a 1.5 T solenoidal magnet. The main drift chamber also provides a measurement of the specific ionization, $`dE/dx`$, used for particle identification. For CLEO II.V, the straw tube chamber was replaced with a 3-layer silicon vertex detector, and the gas in the main drift chamber was changed from an argon-ethane to a helium-propane mixture. The muon chambers consist of proportional counters placed at increasing depth in steel absorber. For this measurement we used 9.2 $`\mathrm{fb}^1`$ of $`e^+e^{}`$ data taken at the $`\mathrm{{\rm Y}}(4S)`$ resonance and 4.6 $`\mathrm{fb}^1`$ taken 60 MeV below the $`\mathrm{{\rm Y}}(4S)`$ resonance. In $`\mathrm{{\rm Y}}(4S)`$ decays $`B^+`$ mesons are born only in pairs with $`B^{}`$ mesons, therefore $`B^+`$ and $`B^{}`$ mesons are produced in equal numbers. Two thirds of the data used were collected with the CLEO II.V detector. The simulated event samples used in this analysis were generated with a GEANT-based simulation of the CLEO detector response and were processed in a similar manner as the data. We reconstructed $`\psi ^{()}e^+e^{}`$ and $`\psi ^{()}\mu ^+\mu ^{}`$ decays, where $`\psi ^{()}`$ stands for either $`J/\psi `$ or $`\psi (2S)`$. We also reconstructed $`\psi (2S)`$ in the $`\psi (2S)J/\psi \pi ^+\pi ^{}`$ channel. Electron candidates were identified based on the ratio of the track momentum to the associated shower energy in the CsI calorimeter and on the specific ionization in the drift chamber. We recovered some of the bremsstrahlung photons by selecting the photon shower with the smallest opening angle with respect to the direction of the $`e^\pm `$ track evaluated at the interaction point, and then requiring this opening angle to be smaller than $`5^{}`$. 
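A minimal sketch of the bremsstrahlung-recovery step is shown below; this is our own simplified illustration rather than the CLEO reconstruction code, and it uses the electron momentum direction directly in place of the track direction evaluated at the interaction point.

```python
import numpy as np

def recover_bremsstrahlung(electron_p4, photon_p4s, max_angle_deg=5.0):
    """Add to the electron four-momentum (E, px, py, pz) the photon with the
    smallest opening angle to the electron direction, if that angle is below
    max_angle_deg; otherwise return the electron four-momentum unchanged."""
    e_p4 = np.asarray(electron_p4, dtype=float)
    e_dir = e_p4[1:] / np.linalg.norm(e_p4[1:])
    best_p4, best_angle = None, np.radians(max_angle_deg)
    for g in photon_p4s:
        g_p4 = np.asarray(g, dtype=float)
        cos_ang = np.dot(e_dir, g_p4[1:]) / np.linalg.norm(g_p4[1:])
        angle = np.arccos(np.clip(cos_ang, -1.0, 1.0))
        if angle < best_angle:
            best_p4, best_angle = g_p4, angle
    return e_p4 + best_p4 if best_p4 is not None else e_p4
```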
We therefore refer to the $`e^+(\gamma )e^{}(\gamma )`$ invariant mass when we describe the $`\psi ^{()}e^+e^{}`$ reconstruction. For the $`\psi ^{()}\mu ^+\mu ^{}`$ reconstruction, one of the muon candidates was required to penetrate the steel absorber to a depth greater than 3 nuclear interaction lengths. We relaxed the absorber penetration requirement for the second muon candidate if it was not expected to reach a muon chamber either because its energy was too low or because it did not point to a region of the detector covered by the muon chambers. For these muon candidates we required the ionization signature in the CsI calorimeter to be consistent with that of a muon. We extensively used normalized variables, taking advantage of well-understood track and photon-shower four-momentum covariance matrices to calculate the expected resolution for each combination. The use of normalized variables allows uniform candidate selection criteria to be applied to the data collected with the CLEO II and CLEO II.V detector configurations. The $`\psi ^{()}`$ candidates were selected using the normalized invariant mass. For example, the normalized $`\mu ^+\mu ^{}`$ invariant mass is defined as $`[M(\mu ^+\mu ^{})M_{\psi ^{()}}]/\sigma (M)`$, where $`M_{\psi ^{()}}`$ is the world average value of the $`J/\psi `$ or $`\psi (2S)`$ mass and $`\sigma (M)`$ is the calculated mass resolution for that particular $`\mu ^+\mu ^{}`$ combination. The average $`\mathrm{}^+\mathrm{}^{}`$ invariant mass resolution is approximately 12 MeV$`/c^2`$. We required the normalized $`\mu ^+\mu ^{}`$ mass to be from $`4`$ to 3 for $`J/\psi \mu ^+\mu ^{}`$ candidates and from $`3`$ to 3 for $`\psi (2S)\mu ^+\mu ^{}`$ candidates. We required the normalized $`e^+(\gamma )e^{}(\gamma )`$ mass to be from $`10`$ to 3 for $`J/\psi e^+e^{}`$ candidates and from $`3`$ to 3 for $`\psi (2S)e^+e^{}`$ candidates. For each $`\psi ^{()}\mathrm{}^+\mathrm{}^{}`$ candidate, we performed a fit constraining its mass to the world average value. We selected the $`\psi (2S)J/\psi \pi ^+\pi ^{}`$ candidates by requiring the absolute value of the normalized $`J/\psi \pi ^+\pi ^{}`$ mass to be less than 3 and by requiring the $`\pi ^+\pi ^{}`$ invariant mass to be greater than 400 MeV/$`c^2`$. The average $`J/\psi \pi ^+\pi ^{}`$ mass resolution is approximately 3 MeV/$`c^2`$. For each $`\psi (2S)J/\psi \pi ^+\pi ^{}`$ candidate, we performed a fit constraining its mass to the world average value. Well-measured tracks consistent with originating at the $`e^+e^{}`$ interaction point were selected as the $`K^\pm `$ candidates. In order avoid any additional charge-correlated systematic bias in the $`K^\pm `$ selection, we did not impose any particle identification requirements on the $`K^\pm `$ candidates. The $`B^\pm J/\psi K^\pm `$ and $`B^\pm \psi (2S)K^\pm `$ candidates were selected by means of two observables. The first observable is the difference between the energy of the $`B^\pm `$ candidate and the beam energy, $`\mathrm{\Delta }EE(B^\pm )E_{\mathrm{beam}}`$. The average resolution in $`\mathrm{\Delta }E`$ is 10 MeV (8 MeV) for the $`B^\pm J/\psi K^\pm `$ ($`B^\pm \psi (2S)K^\pm `$) candidates. We used the normalized $`\mathrm{\Delta }E`$ observable for candidate selection and required $`|\mathrm{\Delta }E|/\sigma (\mathrm{\Delta }E)<3`$. The second observable is the beam-constrained $`B`$ mass, $`M(B)\sqrt{E_{\mathrm{beam}}^2p^2(B)}`$, where $`p(B)`$ is the magnitude of the $`B`$ candidate momentum. 
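Written out explicitly, the candidate selection amounts to computing the two observables just defined and applying normalized 3-sigma requirements to both. The sketch below is our own illustration (the per-candidate resolutions are passed in as assumed inputs, e.g. the typical values quoted in the text, and the nominal charged-B mass is used for the beam-constrained-mass window):

```python
import numpy as np

def candidate_observables(p4_B, E_beam):
    """Return (Delta_E, M_bc) in GeV for a B-candidate four-momentum
    (E, px, py, pz): Delta_E = E(B) - E_beam, M_bc = sqrt(E_beam^2 - p^2)."""
    E, px, py, pz = p4_B
    delta_E = E - E_beam
    m_bc = np.sqrt(E_beam**2 - (px**2 + py**2 + pz**2))
    return delta_E, m_bc

def passes_selection(delta_E, m_bc, sigma_dE, sigma_m, m_B=5.279):
    """Normalized 3-sigma cuts on |Delta_E| and |M_bc - m_B|; sigma_dE and
    sigma_m are the per-candidate resolutions in GeV (e.g. ~0.010 and ~0.0027)."""
    return abs(delta_E) / sigma_dE < 3.0 and abs(m_bc - m_B) / sigma_m < 3.0
```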
The resolution in $`M(B)`$ for the $`B^\pm \psi ^{()}K^\pm `$ candidates is 2.7 MeV/$`c^2`$ and is dominated by the beam energy spread. The $`M(B)`$ distributions for the $`B^\pm J/\psi K^\pm `$ and $`B^\pm \psi (2S)K^\pm `$ candidates passing the $`|\mathrm{\Delta }E|/\sigma (\mathrm{\Delta }E)<3`$ requirement are shown in Fig. 2. We used the normalized $`M(B)`$ observable for candidate selection and required $`|M(B)M_B|/\sigma (M)<3`$. The $`CP`$-violating charge asymmetry in $`B^\pm J/\psi K^\pm `$ decays is defined as a branching fraction asymmetry $`𝒜_{CP}{\displaystyle \frac{(B^{}J/\psi K^{})(B^+J/\psi K^+)}{(B^{}J/\psi K^{})+(B^+J/\psi K^+)}}.`$ In this definition we adopted the sign convention from Ref. . The same definition is used for $`B^\pm \psi (2S)K^\pm `$ mode. Table I lists signal yields together with observed charge asymmetries. The possible sources of systematic uncertainty and bias in the $`𝒜_{CP}`$ measurement are described below. Background. — From fits to the beam-constrained mass distributions (Fig. 2), we estimated the combinatorial background to be $`3.5_{1.7}^{+2.8}`$ ($`1.7_{1.0}^{+2.0}`$) for $`B^\pm J/\psi K^\pm `$ ($`B^\pm \psi (2S)K^\pm `$) mode. The background from $`B^\pm \psi ^{()}\pi ^\pm `$ decays has to be added because $`B^\pm \psi ^{()}\pi ^\pm `$ candidates contribute to the beam-constrained mass peaks. Using simulated events, we estimated the background from $`B^\pm \psi ^{()}\pi ^\pm `$ decays to be $`1.5\pm 0.5`$ events for $`B^\pm J/\psi K^\pm `$ and 0.1 event for $`B^\pm \psi (2S)K^\pm `$ mode. We assumed the branching ratio of $`(B^\pm J/\psi \pi ^\pm )/(B^\pm J/\psi K^\pm )=(5.1\pm 1.4)\%`$ ; the same value was assumed for $`B^\pm \psi (2S)\pi ^\pm `$ decays. Total background is therefore estimated to be $`5_2^{+3}`$ events for $`B^\pm J/\psi K^\pm `$ and $`2_1^{+2}`$ events for $`B^\pm \psi (2S)K^\pm `$ mode. As a check, we used samples of simulated events together with the data collected below the $`B\overline{B}`$ production threshold and estimated total background to be $`3.3\pm 0.8`$ events for $`B^\pm J/\psi K^\pm `$ and $`3.7\pm 0.9`$ events for $`B^\pm \psi (2S)K^\pm `$ mode. We verified that the simulation accurately reproduced the rate and distribution of candidates in the data in the $`\mathrm{\Delta }E`$ vs. $`M(B)`$ plane near, but not including, the signal region. Backgrounds are expected to be $`CP`$-symmetric. We measured the charge asymmetry for the candidates in the side-band regions of the $`\mathrm{\Delta }E`$ and $`M(B)`$ distributions to be $`(+2.2\pm 4.1)\%`$ for $`B^\pm J/\psi K^\pm `$ and $`(1.2\pm 6.4)\%`$ for $`B^\pm \psi (2S)K^\pm `$. We also verified that our final result does not critically depend on the assumption of zero $`CP`$ asymmetry for background events. We assumed that the number of background events entering our sample follows a Poisson distribution with a mean of 5 events for $`B^\pm J/\psi K^\pm `$ and 4 events for $`B^\pm \psi (2S)K^\pm `$ mode. We also assumed that the $`CP`$-violating charge asymmetry for the background is $`+30\%`$. Using Monte Carlo techniques, we found that background with such properties introduces a $`+0.3\%`$ ($`+1.0\%`$) bias in our $`𝒜_{CP}`$ measurement for the $`B^\pm J/\psi K^\pm `$ ($`B^\pm \psi (2S)K^\pm `$) mode. We assigned a systematic uncertainty on $`𝒜_{CP}`$ of $`0.3\%`$ for $`B^\pm J/\psi K^\pm `$ and $`1.0\%`$ for $`B^\pm \psi (2S)K^\pm `$. Charge asymmetry for inclusive tracks. 
— Collisions of particles with the nuclei in the detector material occasionally result in recoil protons, but almost never in recoil antiprotons. To fake a $`K^+`$ candidate, a recoil proton has to have a momentum of at least 1.2 GeV/$`c`$ and its track should be consistent with originating at the $`e^+e^{}`$ interaction point. In order to study the effect of possible recoil proton contamination of our $`K^+`$ sample, we selected inclusive tracks satisfying the same track quality criteria as for the charged kaon candidates in the $`B^\pm \psi ^{()}K^\pm `$ reconstruction. The kaon momentum in the laboratory frame is between 1.2 and 1.4 GeV/$`c`$ for the $`B^\pm \psi (2S)K^\pm `$ mode and between 1.55 and 1.85 GeV/$`c`$ for the $`B^\pm J/\psi K^\pm `$ mode. We have indeed found more positive than negative tracks in these two momentum ranges. For all tracks with momentum between 1.2 and 1.4 GeV/$`c`$, we have observed a charge asymmetry of $`(N^{}N^+)/(N^{}+N^+)=(0.22\pm 0.03)\%`$; the corresponding number for tracks with momentum between 1.55 and 1.85 GeV/$`c`$ is $`(0.17\pm 0.04)\%`$. Besides increasing our confidence that our track reconstruction procedure does not introduce significant charge-correlated bias, this study also confirms that the number of recoil protons entering the pool of $`K^+`$ candidates is negligible even before the reconstruction of the full $`B^\pm \psi ^{()}K^\pm `$ decay chain. We did not assign any systematic uncertainty. Difference in $`K^+`$ vs. $`K^{}`$ detection efficiencies. — The flavor of the $`B`$ meson is tagged by the charged kaon; therefore, we searched for charge-correlated systematic bias associated with the $`K^\pm `$ detection and momentum measurement. The cross sections for nuclear interactions are larger for negative than for positive kaons from $`B^\pm \psi ^{()}K^\pm `$ decays. We used two methods to evaluate the difference in $`K^+`$ vs. $`K^{}`$ detection efficiencies. In the first method we performed an analytic calculation of the expected asymmetry, combining the data on the nuclear interaction cross sections for the $`K^+`$ and $`K^{}`$ mesons with the known composition of the CLEO detector material. In the second method we used the GEANT-based simulation of the CLEO detector response, processing the simulated events in a similar manner as the data. Both methods are in excellent agreement that the $`K^+`$ reconstruction efficiency is approximately $`0.6\%`$ higher than the $`K^{}`$ reconstruction efficiency. The corresponding charge-correlated detection efficiency asymmetry is therefore $`0.3\%`$. We applied a $`+0.3\%`$ correction to the measured values of $`𝒜_{CP}`$ both for $`B^\pm J/\psi K^\pm `$ and for $`B^\pm \psi (2S)K^\pm `$ modes. We assigned 100% of the correction as a systematic uncertainty. Bias in $`K^+`$ vs. $`K^{}`$ momentum measurement. — This bias will separate the $`\mathrm{\Delta }EE(B^\pm )E_{\mathrm{beam}}`$ peaks for $`B^+`$ and $`B^{}`$ candidates so that the requirement on $`\mathrm{\Delta }E`$ can manifest a preference for the $`B`$ candidates of a certain sign. We measured the difference in mean $`\mathrm{\Delta }E`$ for the $`B^+`$ and $`B^{}`$ candidates to be $`0.6\pm 0.8`$ MeV. This result is consistent with zero and very small compared to the approximately $`\pm 30`$ MeV window used in the $`\mathrm{\Delta }E`$ requirement. 
We also used high-momentum muon tracks from $`e^+e^{}\mu ^+\mu ^{}`$ events as well as samples of $`D^0`$ and $`D_{(s)}^\pm `$ meson decays to put stringent limits on possible charge-correlated bias in the momentum measurement. We conclude that the bias in $`K^+`$ vs. $`K^{}`$ momentum reconstruction is negligible for our $`CP`$-violation measurement. In conclusion, we have measured the $`CP`$-violating charge asymmetry to be $`(+1.8\pm 4.3[\mathrm{stat}]\pm 0.4[\mathrm{syst}])\%`$ for $`B^\pm J/\psi K^\pm `$ and $`(+2.0\pm 9.1[\mathrm{stat}]\pm 1.0[\mathrm{syst}])\%`$ for $`B^\pm \psi (2S)K^\pm `$. These values of $`𝒜_{CP}`$ include a $`+0.3\%`$ correction due to a slightly higher reconstruction efficiency for the positive kaons. Our results are consistent with the Standard Model expectations and provide the first experimental test of the assumption that direct $`CP`$ violation is negligible in $`B\psi ^{()}K`$ decays. We gratefully acknowledge the effort of the CESR staff in providing us with excellent luminosity and running conditions. We thank A. Soni and M. Neubert for useful discussions. This work was supported by the National Science Foundation, the U.S. Department of Energy, the Research Corporation, the Natural Sciences and Engineering Research Council of Canada, the A.P. Sloan Foundation, the Swiss National Science Foundation, and the Alexander von Humboldt Stiftung.
# Charm Estimate from the Dilepton Spectra in Nuclear Collisions INSTITUT FÜR KERNPHYSIK, UNIVERSITÄT FRANKFURT, D-60486 Frankfurt, August–Euler–Strasse 6, Germany, IKF–HENPG/02–00. Marek Gaździcki<sup>a</sup> (E-mail: marek@ikf.physik.uni–frankfurt.de) and Mark I. Gorenstein<sup>b</sup> (E-mail: goren@ap3.bitp.kiev.ua); <sup>a</sup> CERN, Geneva, Switzerland, and Institut für Kernphysik, Universität Frankfurt, Frankfurt, Germany; <sup>b</sup> Bogolyubov Institute for Theoretical Physics, Kiev, Ukraine, and Institut für Theoretische Physik, Universität Frankfurt, Germany ## Abstract The validity of a recent estimate of an upper limit on charm production in central Pb+Pb collisions at 158 A·GeV is critically discussed. Within a simple model we study the properties of the background subtraction procedure used to extract the charm signal from the analysis of dilepton spectra. We demonstrate that a production asymmetry between positively and negatively charged background muons, together with a large multiplicity of signal pairs, leads to biased results. Therefore the applicability of this procedure to the analysis of nucleus–nucleus data should be reconsidered before final conclusions on the upper limit on charm production can be drawn. Measurement of the invariant mass spectra of opposite-sign lepton pairs (dileptons) allows one to extract information otherwise difficult or even impossible to obtain. Among the interesting processes which contribute to dilepton production are decays of vector mesons ($`\rho ,\omega ,\varphi ,J/\psi ,\psi ^{\prime }`$), Drell–Yan and thermal creation of dileptons, and decays of charm hadrons. Decays of pions and kaons are the dominant source of uninteresting (background) dileptons, which should be subtracted before the contributions from the interesting (signal) sources are deconvolved. A recent analysis of the dimuon spectrum measured in central Pb+Pb collisions at 158 A·GeV by the NA50 Collaboration suggests a significant enhancement of dilepton production in the intermediate mass region (1.5–2.5 GeV) over the standard sources. The primary interpretation attributes this observation to increased production of open charm . Subsequent theoretical papers have proposed other possible sources of the observed effect which do not invoke an enhancement of the open charm yield . This suggests interpreting the NA50 result as an estimate of the upper limit (about 3 times above pQCD predictions) on the open charm multiplicity in Pb+Pb collisions at the SPS. The above conclusion relies, however, on the assumption that the background subtraction procedure used to extract the signal sources gives unbiased results. In this work we show that this assumption is questionable. In particular, an asymmetry in the production of positively and negatively charged background muons and a high multiplicity of signal pairs lead to a result which differs from the one usually assumed in the data interpretation. Our analysis is done within a simple model based on the assumptions used to justify the background subtraction procedure . In central Pb+Pb collisions at the SPS, due to the high multiplicity of produced hadrons, the multiplicity of background dileptons is much higher (about 95%) than the multiplicity of signal pairs (about 5%). The invariant mass spectra of the Drell–Yan, thermal, and open charm contributions are broad and essentially structureless.
Consequently their extraction requires very precise knowledge of the shape and the absolute normalisation of the background distribution. The necessary accuracy can not be reached by calculation of the background based on a model. Therefore in order to decrease the systematic error of the background estimation a method based on the measured data was developed and used in the analysis of dilepton spectra . In this method the background contribution to dilepton spectra is calculated as $`2\sqrt{n_{++}n_{}}`$, where $`n_{++}`$ and $`n_{}`$ are measured multiplicities of like–sign lepton pairs. The NA50 experiment measured the mean multiplicity of like–sign, $`n_{++}`$ and $`n_{}`$, and opposite–sign, $`n_+`$, muon pairs. One usually distinguishes two classes of muons: the ”independent” muons coming from decays of pions and kaons ($`h`$ mesons) and the ”correlated” muons originating from vector meson decays, Drell–Yan and thermal creation of dimuons, and from decays of pairs of charm hadrons. For simplicity of the initial considerations let us assume that the correlated muons only come from the decays of charm hadrons, which we denote here by $`D`$ and $`\overline{D}`$. The meaning in which the words ”independent” and ”correlated” used above is the following. Let $`N_+`$, $`N_{}`$ be the numbers of positively and negatively charged hadrons (kaons and/or pions) produced in a given nucleus–nucleus (A+A) collision. The numbers $`N_+`$, $`N_{}`$ are independent when the probability to observe them can be factorized: $$P(N_+,N_{})=P_+(N_+)\times P_{}(N_{}),$$ (1) where $`P_+(N_+)`$ and $`P_{}(N_{})`$ are the probability distributions for independent observation of $`N_+`$ or $`N_{}`$ hadrons. Due to charm conservation the numbers of $`D`$ and $`\overline{D}`$ hadrons are expected to be equal in each event ($`N_D`$ = $`N_{\overline{D}}`$); the production of $`D`$ and $`\overline{D}`$ hadrons is correlated. The independence or the correlation of muon sources leads to an independence or a correlation of muons originating from these sources. The assumption of approximately independent $`K^+`$ and $`K^{}`$ (or $`\pi ^+`$ and $`\pi ^{}`$) production in A+A event is justified by large number of different hadron species created in the collision. Then, e.g. the electric charge and strangeness of produced $`K^+`$ in a given event could be in fact compensated by many different hadron combinations, not just only by $`K^{}`$. Let us denote by $`\alpha _h`$ and $`\alpha _D`$ the probabilities that a decay of a single $`h`$ or $`D`$ leads to a muon inside the NA50 spectrometer. In an event with multiplicities $`N_+`$, $`N_{}`$ and $`N_D`$ the probabilities to observe $`n`$ muons of a given sort are binominaly distributed: $$P_i(n_+^i)=\frac{N_+!}{n_+^i!(N_+n_+^i)!}(\alpha _h)^{n_+^i}(1\alpha _h)^{N_+n_+^i},$$ (2) $$P_i(n_{}^i)=\frac{N_{}!}{n_{}^i!(N_{}n_{}^i)!}(\alpha _h)^{n_{}^i}(1\alpha _h)^{N_{}n_{}^i}.$$ (3) $$P_c(n_+^c)=\frac{N_D!}{n_+^c!(N_Dn_+^c)!}(\alpha _D)^{n_+^c}(1\alpha _D)^{N_Dn_+^c},$$ (4) $$P_c(n_{}^c)=\frac{N_D!}{n_{}^c!(N_Dn_{}^c)!}(\alpha _D)^{n_{}^c}(1\alpha _D)^{N_Dn_{}^c}.$$ (5) where $`n_+^i`$, $`n_{}^i`$, $`n_+^c`$ and $`n_{}^c`$ are numbers of positively and negatively charged muons from ”independent” and ”correlated” sources. From Eqs. 
(2-5) one finds $`\overline{n_+^i}`$ $`=`$ $`\alpha _hN_+,\overline{n_{}^i}=\alpha _hN_{},\overline{n_+^c}=\overline{n_{}^c}=\alpha _DN_D,`$ (6) $`\overline{(n_+^i)^2}`$ $`=`$ $`\alpha _h(1\alpha _h)N_++\alpha _h^2N_+^2,`$ (7) $`\overline{(n_{}^i)^2}`$ $`=`$ $`\alpha _h(1\alpha _h)N_{}+\alpha _h^2N_{}^2,`$ (8) $`\overline{(n_+^c)^2}`$ $`=`$ $`\overline{(n_{}^c)^2}=\alpha _D(1\alpha _D)N_D+\alpha _D^2N_D^2.`$ (9) We introduce now the probabilities, $`A_h`$, $`A_D`$, and $`A_{hD}`$ that muon pairs from, respectively, $`hh`$, $`DD`$ and $`hD`$ decays are detected within the dimuon acceptance. These probabilities depend on cuts on the dimuon properties and, for given experimental cuts, on momentum spectra of dimuon sources. Assuming that the probabilities $`A`$ are multiplicity independent, we arrive at the following expressions for the numbers of like–sign and opposite–sign muon pairs, for fixed values of $`N_+`$, $`N_{}`$ and $`N_D`$ $`\overline{n_{++}}`$ $`=`$ $`A_h{\displaystyle \underset{n_+^i}{}}{\displaystyle \frac{n_+^i(n_+^i1)}{2}}P_i(n_+^i)+A_D{\displaystyle \underset{n_+^c}{}}{\displaystyle \frac{n_+^c(n_+^c1)}{2}}P_c(n_+^c)`$ $`+`$ $`A_{hD}{\displaystyle \underset{n_+^i,n_+^c}{}}n_+^in_+^cP_i(n_+^i)P_c(n_+^c)`$ $`=`$ $`{\displaystyle \frac{A_h}{2}}\left(\overline{(n_+^i)^2}\overline{n_+^i}\right)+{\displaystyle \frac{A_D}{2}}\left(\overline{(n_+^c)^2}\overline{n_+^c}\right)+A_{hD}\overline{n_+^i}\overline{n_+^c}`$ $`=`$ $`{\displaystyle \frac{A_h}{2}}\alpha _h^2\left(N_+^2N_+\right)+{\displaystyle \frac{A_D}{2}}\alpha _D^2\left(N_D^2N_D\right)+A_{hD}\alpha _h\alpha _DN_+N_D,`$ $$\overline{n_{}}=\frac{A_h}{2}\alpha _h^2\left(N_{}^2N_{}\right)+\frac{A_D}{2}\alpha _D^2\left(N_D^2N_D\right)+A_{hD}\alpha _h\alpha _DN_{}N_D,$$ (11) $`\overline{n_+}`$ $`=`$ $`A_h{\displaystyle \underset{n_+^i,n_{}^i}{}}n_+^in_{}^iP_i(n_+^i)P_i(n_{}^i)+A_D{\displaystyle \underset{n_+^c,n_{}^c}{}}n_+^cn_{}^cP_c(n_+^c)P_c(n_{}^c)`$ $`+`$ $`A_{hD}{\displaystyle \underset{n_+^i,n_{}^c}{}}n_+^in_{}^cP_i(n_+^i)P_c(n_{}^c)+A_{hD}{\displaystyle \underset{n_{}^i,n_+^c}{}}n_{}^in_+^cP_i(n_{}^i)P_c(n_+^c)`$ $`=`$ $`A_h\alpha _h^2N_+N_{}+A_D\alpha _D^2N_D^2+A_{hD}\alpha _h\alpha _DN_D(N_++N_{}).`$ Here we have made a simplified assumption that the shape of momentum spectra of $`h^+`$ and $`h^{}`$ (as well as $`D`$ and $`\overline{D}`$) are similar and, therefore, $`A_h^{++}=A_h^{}=A_h^+A_h`$, $`A_{hD}^{++}=A_{hD}^{}=A_{hD}^+A_{hD}`$ and $`A_D^{++}=A_D^{}=A_D^+A_D`$ (the last equation means that possible momentum correlations between $`D`$ and $`\overline{D}`$ are also neglected). Note that if there are no cuts on the dimuon properties the above probabilities become equal to unity, $`A_h=A_{hD}=A_D=1`$, i.e. assuming all $`A`$-probabilities equal to one in Eqs. (S0.Ex3-S0.Ex5) we count all possible dimuon pairs. However, as soon as one fixes some dimuon properties (e.g. an invariant mass of the dimuon pair) all $`A`$-probabilities are evidently smaller than unity and their actual numerical values become dependent on the shape of $`h`$ and $`D`$ momentum spectra and their decay kinematics. Note also that in Eqs. (S0.Ex3-S0.Ex5) an independence of muon numbers $`n_+^i`$ and $`n_{}^i`$ is due to assumed in Eq. (1) independence of $`N_+`$ and $`N_{}`$ which entered into $`P_i(n_+^i)`$ (2) and $`P_i(n_{}^i)`$ (3). 
A correlation of muon numbers $`n_+^c`$ and $`n_{}^c`$ is due to the correlation of $`N_D`$ and $`N_{\overline{D}}`$ ($`N_D=N_{\overline{D}}`$) which entered into $`P_c(n_+^c)`$ (4) and $`P_c(n_{}^c)`$ (5) probability distributions. The correlation of $`n_+^c`$ and $`n_{}^c`$ is of course weaker than that for $`N_D`$ and $`N_{\overline{D}}`$, so that $`n_+^c`$ are not necessarily equal to $`n_{}^c`$ in each event. In order to find the final mean multiplicities of the dimuons one should average the obtained numbers over all possible values of $`N_+,N_{},N_D`$. To simplify the following calculations we assume that the relevant multiplicity distributions are Poisson distributions $$P(N)=\frac{\overline{N}^N}{N!}\mathrm{exp}(\overline{N}).$$ (13) In this case one gets: $`n_{++}`$ $`=`$ $`{\displaystyle \underset{N_+,N_{},N_D}{}}\overline{n_{++}}P(N_+)P(N_{})P(N_D)={\displaystyle \frac{1}{2}}A_h\alpha _h^2\left(\overline{N_+}\right)^2`$ $`+`$ $`{\displaystyle \frac{1}{2}}A_D\alpha _D^2\left(\overline{N_D}\right)^2+A_{hD}\alpha _h\alpha _D\overline{N_+}\overline{N_D}.`$ $`n_{}`$ $`=`$ $`{\displaystyle \underset{N_+,N_{},N_D}{}}\overline{n_{}}P(N_+)P(N_{})P(N_D)={\displaystyle \frac{1}{2}}A_h\alpha _h^2\left(\overline{N_{}}\right)^2`$ $`+`$ $`{\displaystyle \frac{1}{2}}A_D\alpha _D^2\left(\overline{N_D}\right)^2+A_{hD}\alpha _h\alpha _D\overline{N_{}}\overline{N_D}.`$ $`n_+`$ $`=`$ $`{\displaystyle \underset{N_+,N_{},N_D}{}}\overline{n_+}P(N_+)P(N_{})P(N_D)=A_h\alpha _h^2\overline{N_+}\overline{N_{}}`$ $`+`$ $`A_D\alpha _D^2\left[\left(\overline{N_D}\right)^2+\overline{N_D}\right]+A_{hD}\alpha _h\alpha _D\overline{N_D}\left(\overline{N_+}+\overline{N_{}}\right).`$ Note again that $`N_D=N_{\overline{D}}`$ is assumed in each event and, therefore, there is no independent summation over $`N_{\overline{D}}`$ in the above equations. Eqs. (S0.Ex6-S0.Ex8) can be rewritten as $`n_{++}`$ $`=`$ $`{\displaystyle \frac{1}{2}}a_hh_+^2+{\displaystyle \frac{1}{2}}a_dD^2+a_mh_+D,`$ (17) $`n_{}`$ $`=`$ $`{\displaystyle \frac{1}{2}}a_hh_{}^2+{\displaystyle \frac{1}{2}}a_dD^2+a_mh_{}D,`$ (18) $`n_+`$ $`=`$ $`a_hh_+h_{}+a_dD^2+a_dD+a_mD\left(h_++h_{}\right),`$ (19) by introducing the following notations: $`a_h`$ $``$ $`A_h\alpha _h^2,a_dA_D\alpha _D^2,a_mA_{hD}\alpha _h\alpha _D,`$ (20) $`\overline{N_+}`$ $``$ $`h_+,\overline{N_{}}h_{},\overline{N_D}D.`$ (21) Parameters $`a_h`$, $`a_d`$ and $`a_m`$ are therefore the probabilities to observe two muons from the corresponding hadron sources (these probabilities are $`\alpha _h^2`$, $`\alpha _D^2`$ and $`\alpha _h\alpha _D`$) within experimental cuts on muon pair properties (these cuts lead to additional factors $`A_h`$, $`A_D`$ and $`A_{hD}`$). In the experimental procedure the background contribution to the dimuon spectrum is calculated as: $$n_+^{Bgr}2\sqrt{n_{++}n_{}}.$$ (22) The number of signal $`(\mu ^+,\mu ^{})`$–pairs is assumed to be: $$n_+^{Sgl}n_+n_+^{Bgr}.$$ (23) It is expected that the subtraction procedure (23) cancels out all false $`(\mu ^+,\mu ^{})`$–pairs i.e. the pairs from $`hh`$ and $`hD`$ decays, and that $`n_+^{Sgl}`$ is proportional to the multiplicity of $`D`$ hadrons: $$n_+^{Sgl}=a_dD.$$ (24) Let us consider some properties of the subtraction procedure (23) by discussing two simple examples within the model. Example 1: We assume that there is no contribution from $`D`$-decays. In our model this assumption can be introduced by setting $`\alpha _D=0`$. Consequently $`a_d=a_m=0`$ and Eqs. 
(17-19) result in: $$n_{++}=\frac{1}{2}a_hh_+^2,\qquad n_{--}=\frac{1}{2}a_hh_-^2,\qquad n_{+-}=a_hh_+h_-.$$ (25) Using Eq. (23) one obtains $`n_{+-}^{Sgl}=0`$, i.e. the measured signal multiplicity is equal to zero, as expected in the absence of dimuons from the correlated source. This result is valid for any values of $`h_+`$ and $`h_-`$. Example 2: In this example we assume that there are correlated dimuons ($`a_dD>0`$) but that the numbers of positively and negatively charged background hadrons are equal ($`h_+=h_-\equiv h`$). Under these conditions Eqs. (17-19) can be rewritten as $`n_{++}=n_{--}=\frac{1}{2}a_hh^2+\frac{1}{2}a_dD^2+a_mhD,`$ (26) $`n_{+-}=a_hh^2+a_dD^2+a_dD+2a_mhD.`$ (27) Eq. (23) gives $`n_{+-}^{Sgl}=a_dD`$, which agrees exactly with the expectation (24). Finally we consider the general case, i.e. $`a_dD>0`$ and $`h_+\ne h_-`$. This last condition corresponds to the relation between pion and kaon average multiplicities measured in heavy ion collisions: $`\pi ^->\pi ^+`$ and $`K^+>K^-`$. From Eqs. (17-23), by straightforward calculation one finds $$n_{+-}^{Sgl}=n_{+-}-\sqrt{\left(n_{+-}-a_dD\right)^2+\gamma D^2},$$ (28) where $$\gamma \equiv \left(a_ha_d-a_m^2\right)\left(h_+-h_-\right)^2.$$ It is easy to see that for $`\alpha _D=0`$ and/or $`h_+=h_-=h`$ one gets $`\gamma =0`$, and the results obtained in Examples 1 and 2 are reproduced. We repeat that in the absence of cuts on the dimuon properties one has $`A_h=A_{hD}=A_D=1`$. Therefore $`a_ha_d-a_m^2=0`$ (i.e. $`\gamma =0`$) and consequently we again have an unbiased estimate of the mean multiplicity of $`D`$ mesons. In general, however, the result differs from the expected one (24). The presence of experimental cuts on dimuons (e.g. fixing the dimuon invariant mass in the region $`M=1.5-2.5`$ GeV) means that the probabilities $`A_h,A_{hD}`$ and $`A_D`$ are smaller than unity; this destroys the equality $`A_hA_D=A_{hD}^2`$ and therefore leads to a non-zero value of $`\gamma `$. With cuts on the dimuon properties the experimental number of signal pairs $`n_{+-}^{Sgl}`$ is not equal to $`a_dD`$. By fitting $`a_dD^{}`$ to $`n_{+-}^{Sgl}`$ one finds a spurious number of $`D`$ hadrons, which we denote by $`D^{}`$. There are two distinct cases. Case 1: $`a_ha_d-a_m^2<0`$ ($`\gamma <0`$). The experimentally measured signal, $`n_{+-}^{Sgl}`$ (28), is larger than the expected value $`a_dD`$ and therefore the extracted spurious number of $`D`$ hadrons is larger than the true one ($`D<D^{}`$). Case 2: $`a_ha_d-a_m^2>0`$ ($`\gamma >0`$). The experimentally measured signal, $`n_{+-}^{Sgl}`$ (28), is smaller than the expected value $`a_dD`$ and therefore the extracted spurious number of $`D`$ hadrons is smaller than the true one ($`D>D^{}`$). In the NA50 analysis of the dimuon spectra in terms of an open charm enhancement, the background subtraction procedure was checked in two different ways. First, it was shown to work correctly for simulated central Pb+Pb collisions at 158 A·GeV. However, in this simulation correlated (signal) muon sources were not included. Thus this check is equivalent to our Example 1, for which the procedure works exactly. Second, the open charm yield was extracted for p+A interactions and was shown to agree with the yield from direct measurements. Eq. (28) and Example 1 show that the deviation from the expected result decreases with decreasing multiplicity of $`D`$ hadrons.
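As a concreteness check, the algebra above can be evaluated numerically. The short Python sketch below uses purely illustrative values for the pair acceptances $`a_h`$, $`a_d`$, $`a_m`$ and the mean multiplicities $`h_+`$, $`h_-`$, $`D`$ (these numbers are assumptions chosen for illustration, not NA50 values); it compares the extracted signal $`n_{+-}-2\sqrt{n_{++}n_{--}}`$ of Eqs. (22)-(23) with the true correlated yield $`a_dD`$ and with the closed form (28).

```python
import math

def pair_yields(a_h, a_d, a_m, hp, hm, D):
    """Mean like-sign and opposite-sign dimuon pair yields, Eqs. (17)-(19)."""
    n_pp = 0.5 * a_h * hp**2 + 0.5 * a_d * D**2 + a_m * hp * D
    n_mm = 0.5 * a_h * hm**2 + 0.5 * a_d * D**2 + a_m * hm * D
    n_pm = a_h * hp * hm + a_d * D**2 + a_d * D + a_m * D * (hp + hm)
    return n_pp, n_mm, n_pm

a_h, a_d = 1.0e-6, 1.0e-6     # assumed pair acceptances inside the mass cut
hp, hm = 500.0, 700.0         # assumed mean h+ and h- multiplicities, h+ != h-

for a_m, D in ((0.0, 0.05), (0.0, 5.0), (1.5e-6, 5.0)):
    n_pp, n_mm, n_pm = pair_yields(a_h, a_d, a_m, hp, hm, D)
    extracted = n_pm - 2.0 * math.sqrt(n_pp * n_mm)                 # Eqs. (22)-(23)
    gamma = (a_h * a_d - a_m**2) * (hp - hm)**2
    closed = n_pm - math.sqrt((n_pm - a_d * D)**2 + gamma * D**2)   # Eq. (28)
    print(f"a_m = {a_m:.1e}, D = {D:5.2f}:  D'/D = {extracted / (a_d * D):5.2f}"
          f"  (Eq. 28: {closed / (a_d * D):5.2f})")
```

For the smallest assumed $`D`$ the estimator is essentially unbiased, while for the larger $`D`$ it is off by roughly 30%, in opposite directions for the two signs of $`a_ha_d-a_m^2`$, in line with the two cases discussed above.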
Thus the success of the procedure applied to p+A interactions does not prove its applicability to Pb+Pb collisions, in which the multiplicity of $`D`$ hadrons may be higher by as much as a factor of about $`10^4`$ . Note that our results are obtained in a highly simplified model. The assumptions concerning independent production of background muons (Eq. (1)), Poissonian multiplicity distributions of hadrons (Eq. (13)) and the absence of $`D`$ meson momentum correlations ($`A_D^{++}=A_D^{--}=A_D^{+-}\equiv A_D`$) seem to be questionable or even incorrect. Discussion of the possible additional biases introduced by these effects is beyond the scope of this paper. We also do not attempt here to calculate numerical values of $`A_h,A_{hD},A_D`$ for the specific NA50 experimental acceptance of dimuon pairs. We close the paper by concluding that the applicability of the background subtraction procedure widely used in the analysis of dilepton spectra in nucleus–nucleus collisions should be reconsidered. In particular, a final statement on the upper limit of the open charm multiplicity in central Pb+Pb collisions at 158 A·GeV resulting from the analysis of the dimuon spectrum requires further studies to quantify the magnitude of the bias. These should include numerical simulations of the specific experimental set-up and consider various particle production models. Acknowledgements We thank M. Botje, K. A. Bugaev, D. Jouan, M. van Leeuwen, E. Scomparin, P. Seyboth and P. Sonderegger for discussions and comments on the manuscript. We acknowledge the financial support of BMBF and DFG, Germany. The research described in this publication was made possible in part by Award No. UP1-2119 of the U.S. Civilian Research & Development Foundation for the Independent States of the Former Soviet Union (CRDF).
# REFERENCES \[ U. Leonhardt and P. Piwnicki reply to the “Comment on ‘Relativistic Effects of Light in Moving Media with Extremely Low Group Velocity ’ ” by M. Visser \] We are grateful to Matt Visser for clarifying the interpretation of optical black holes . Waves in moving media may become trapped if the flow outruns the wave velocity, an effect which may establish artificial black holes or equivalent sonic analogs in superfluids and alkali Bose-Einstein condensates . However, as Visser’s Comment points out clearly, the medium should flow towards a drain, in order to form a black hole. The flow should guide light into the drain such that it disappears beyond an event horizon, provided of course that the flow velocity is sufficient for a horizon to form. In our Letter we had chosen a radially spinning vortex as our theoretical model for a suitable flow, because a vortex allows to combine two intriguing aspects of slow light in moving media , the optical Aharonov–Bohm effect and the analog of gravitational attraction. The Aharonov–Bohm effect of slow light in Bose–Einstein condensates may facilitate, for the first time, the observation of the long–range nature of quantum vortices. Previously, only vortex cores have been seen directly, by the trapping of electrons at vortex lines in superfluid $`{}_{}{}^{4}\mathrm{He}`$ or by taking pictures of expanded droplets of alkali Bose–Einstein condensates carrying vortices . In superfluid $`{}_{}{}^{3}\mathrm{He}`$, the texture of vortex matter has been inferred using NMR . Slow light in moving quantum fluids experiences phase shifts due to the Doppler detuning of an atomic resonance, with ultrahigh motion sensitivity . Phase-contrast microscopy of Bose–Einstein condensates can be applied to measure local phase shifts and to retrieve the flow pattern from the phase profile, as a form of optical tomography. Consequently, apart from the exciting prospects of forming artificial black holes, slow light can serve as a major experimental tool to explore in situ quantum fluids. U. Leonhardt School of Physics and Astronomy University of St Andrews North Haugh St Andrews, Fife, KY16 9SS, Scotland P. Piwnicki Physics Department Royal Institute of Technology (KTH) Lindstedtsvägen 24 S-10044 Stockholm, Sweden PACS numbers: 42.50.Gy, 04.20.-q
# Incoherent scattering of light by a Bose–Einstein condensate of interacting atoms ## Abstract We demonstrate that incoherent photon scattering by a Bose–Einstein condensate of non-ideal atomic gas is enhanced due to bosonic stimulation of spontaneous emission, similarly to coherent scattering in forward direction. Necessary initial population of non-condensate states is provided by quantum depletion of a condensate caused by interatomic repulsion. PACS numbers: 03.75.Fi, 32.80.–t, 42.50.Vk Since the most reliable methods of diagnostics of a Bose–Einstein condensate (BEC) of a dilute atomic gas are based on laser spectroscopic techniques, the optical properties of BEC are of geat interest. One of the most important features of the interaction of BEC with resonant light is the enhancement of scattering in the forward direction . If an excited atom returns, after spontaneous emission of a photon, to the state occupied by a large number $`N`$ of atoms, the probability of such a process is increased in proportion to $`N+1`$ by the effects of quantum Bose–Einstein statistics. A degenerate atomic sample interacts with external light as an integral object, so this process is coherent. Because of momentum conservation, photons are scattered mostly into the forward direction. The corresponding solid angle $`\mathrm{\Omega }_{coh}`$ is small but finite, because the atomic cloud size $`R`$ is finite, and is of order of $`(kR)^2`$, where $`k`$ is the resonant photon wavenumber. On the other hand, it is commonly believed, that the photon scattering into modes lying outside of $`\mathrm{\Omega }_{coh}`$, that leads to an escape of atoms from the condensate and heating of the atomic sample, is not modified with respect to the usual single-atom case . But such a conclusion is valid for an ideal gas only. In the present paper, we show how repulsive interaction of atoms composing BEC modifies incoherent photon scattering. Let us define, firstly, the initial $`|i`$ and final $`|f`$ states of the system. Initially, we have BEC of atoms in their internal ground state (the BEC chemical potential is $`\mu `$), no elementary excitation is present. As in Refs., we consider a zero-temperature case. One atom is optically excited due to photon absorption and moves with respect to the BEC at the velocity $`𝐯_{in}=𝐩_{in}/m`$, where $`m`$ is the mass of the atom, $`\left|𝐩_{in}\right|=\mathrm{}k`$, and the $`z`$-axis of the reference frame is chosen to be parallel to $`𝐩_{in}`$. Then the atom undergoes spontaneous relaxation, but does not return to BEC, so the final state corresponds to the presence of one elementary excitation (quasiparticle) above the BEC. Now we can start with the formula for the probability per unit time to emit a photon with the momentum $`𝐩_{out}`$ directed into the elementary solid angle $`d\mathrm{\Omega }=d\varphi \mathrm{sin}\theta d\theta `$ : $$dw=\frac{\omega ^3}{2\pi \mathrm{}c^3}\left|f\left|𝐞^{}\widehat{𝐝}\right|i\right|^2d\mathrm{\Omega }.$$ (1) Here the difference between energies of the initial and final states is equal to the energy $`\mathrm{}\omega `$ of the emitted photon; e is the polarization unit vector, $`\widehat{𝐝}`$ is the dipole moment operator. 
Explicitely, we can write $$𝐞^{}\widehat{𝐝}=(D_{eg}^{}\widehat{a}_{g𝐪}^{}\widehat{a}_{e𝐩_{in}}+D_{eg}\widehat{a}_{g𝐪}\widehat{a}_{e𝐩_{in}}^{})P(\theta ).$$ (2) $`D_{eg}`$ is the transition dipole moment matrix element, the operator $`\widehat{a}_{e𝐩_{in}}`$ annihilates an optically excited atom with the momentum $`𝐩_{in}`$, the operator $`\widehat{a}_{g𝐪}^{}`$ creates an atom in the ground internal state with the momentum $`𝐪=𝐩_{in}𝐩_{out}`$, $`P(\theta )`$ accounts for angular distribution of spontaneously emitted photons, $`\left|P(\theta )\right|^2=\frac{1}{2}(1+\mathrm{cos}^2\theta )`$ for the circular polarization of the emitted photon. The final state is not an eigenstate of the operator of number of particles with $`q=0`$ but an eigenstate of the quasiparticle operator number. The creation (annihilation) operators $`\widehat{\alpha }_𝐪^{}`$ ($`\widehat{\alpha }_𝐪`$) of a quasiparticle are introduced via the well-known Bogolyubov’s canonical transformation which in the semiclassical approximation reads as $$\widehat{a}_{g𝐪}=\mathrm{exp}(i\mu t/\mathrm{})\left[u_q\widehat{\alpha }_𝐪\mathrm{exp}(iϵ(q)t/\mathrm{})+v_q\widehat{\alpha }_𝐪^{}\mathrm{exp}(iϵ(q)t/\mathrm{})\right].$$ (3) The transformation coefficients are $$u_q=\sqrt{\frac{ϵ_{HF}(q)}{2ϵ(q)}+\frac{1}{2}},v_q=\sqrt{\frac{ϵ_{HF}(q)}{2ϵ(q)}\frac{1}{2}}.$$ (4) Note that $$u_q^2+v_q^2=1.$$ (5) In this context q should be treated as the quasiparticle momentum. The quasiparticle energy is equal to $$ϵ(q)=\sqrt{q^4/(2m)^2+gn_cq^2/m},$$ the Hartree-Fock energy is equal to $$ϵ_{HF}(q)=q^2/(2m)+gn_c.$$ Here $`n_c`$ is the local density of the BEC (we use for it the Thomas–Fermi approximation ), $`g=4\pi \mathrm{}^2a/m`$ is the interatomic interaction constant, $`a>0`$ is the s-wave scattering length. Since, by definition, $$f\left|\widehat{\alpha }_𝐪^{}\widehat{a}_{e𝐩_{in}}\right|i=1,f\left|\widehat{\alpha }_𝐪\widehat{a}_{e𝐩_{in}}\right|i=0,$$ (6) we get the following resulting formula $$dw=\frac{\omega ^3}{2\pi \mathrm{}c^3}\left|P(\theta )\right|^2\left|D_{eg}\right|^2\left(1+\overline{v}_q^2\right)d\mathrm{\Omega },$$ (7) where $$\overline{v}_q^2=\frac{1}{V}d^3𝐫v_q^2,$$ (8) and the integral is taken over the volume $`V=\frac{4}{3}\pi R^3`$ occupied by the BEC. The spatial averaging given by Eq.(8) appears when we pass from the local density treatment to consideration of a finite-size trap where $`n_c(𝐫)`$ is non-unform. Because $`ϵ(q)\mathrm{}\omega `$, i.e. photon scattering can be regarded as an elastic process, we can apply the formula $`q=2\mathrm{}k\mathrm{sin}(\theta /2)`$. Eq.(7) has a very transparent physical explanation. Indeed, $`dw`$ is the sum of the main part, which describes the spontaneous emission rate in the case of a single atom or in the case of vanishing interatomic interaction, and the positive correction term proportional to $`\overline{v}_q^2`$. But $`v_q^2`$ is the occupation number of an elementary cell $`(2\pi \mathrm{})^3`$ of the phase space surrounding the point (rq), and $`\overline{v}_q^2`$ is its spatially averaged value. Since this occupation number is non-zero, a bosonic stimulation of spontaneous relaxation takes place. Atoms with $`q>0`$ are present in the lowest energetic state of the system (the vacuum of quasiparticles) because of interatomic repulsion. This phenomenon is known as quantum depletion of a BEC. So we proove that it causes modification of incoherent light scattering. Let us discuss now a possibility to observe this effect in experiment. 
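To get a feeling for the size of the bosonic enhancement, the factor $`1+v_q^2`$ entering Eq. (7) can be evaluated numerically. The Python sketch below is a minimal illustration assuming a uniform condensate; the sodium mass, scattering length, resonant wavenumber and the density (of order the mean density of the trap considered below) are assumed representative values, not fitted parameters.

```python
import numpy as np

# Assumed illustrative parameters (uniform condensate, SI units)
hbar = 1.0546e-34       # J s
m    = 3.82e-26         # kg, mass of a 23Na atom
a    = 2.75e-9          # m, s-wave scattering length
k    = 1.07e7           # 1/m, resonant photon wavenumber
n_c  = 8.0e18           # 1/m^3, assumed uniform condensate density

g = 4.0 * np.pi * hbar**2 * a / m            # interaction constant

def v_q_squared(q):
    """Quantum-depletion occupation v_q^2 of Eq. (4) for momentum q."""
    kinetic = q**2 / (2.0 * m)
    e_hf = kinetic + g * n_c                              # Hartree-Fock energy
    eps = np.sqrt(kinetic**2 + 2.0 * g * n_c * kinetic)   # Bogolyubov energy
    return e_hf / (2.0 * eps) - 0.5

theta_c = np.sqrt(4.0 * np.pi * a * n_c) / k   # characteristic angle (rad)
print(f"characteristic angle ~ {theta_c:.3f} rad")

for theta in (0.005, 0.02, 0.05, 0.2, 1.0):    # scattering angle, rad
    q = 2.0 * hbar * k * np.sin(theta / 2.0)   # elastic momentum transfer
    print(f"theta = {theta:5.3f} rad :  1 + v_q^2 = {1.0 + v_q_squared(q):.3f}")
```

For these assumed numbers the enhancement is appreciable only below a characteristic angle of a few times $`10^{-2}`$ rad and disappears at large angles, which is quantified analytically in what follows.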
The total, integrated over angles, spontaneous emission rate differs very slightly from the standard value $`2\gamma =\frac{4}{3}\hbar ^{-1}(\omega /c)^3\left|D_{eg}\right|^2`$ and, to first order in the small parameter $`\overline{\beta }=4\pi k^{-2}a\overline{n}_c`$, where $`\overline{n}_c=V^{-1}\int d^3𝐫\,n_c`$ is the averaged condensate density, is equal to $$w=2\gamma \left(1+\frac{3}{8}\overline{\beta }\right).$$ (9) It is hard to detect such a small difference. One should therefore examine light scattering at small angles $$\theta \lesssim \sqrt{\overline{\beta }}.$$ (10) In this range $$\frac{dw}{d\mathrm{\Omega }}\approx \frac{3\gamma \sqrt{\overline{\beta }}\,(V\sqrt{\overline{n}_c})^{-1}\int d^3𝐫\,\sqrt{n_c}}{8\pi \theta }.$$ (11) This is much greater than the analogous value for forward light scattering by a single atom, which is equal to $`3\gamma /(8\pi )`$. To make it possible to distinguish between this modified incoherent light scattering and the coherent one, the inequality $$\overline{\beta }\gg (kR)^{-2}$$ (12) has to be satisfied. Eq. (12) means that quantum depletion effects produce light scattering at relatively large angles in comparison with the coherent process caused by the presence of the BEC. Eq. (12) can also be written in the form $`N\gg R/a`$, which is the well-known condition of applicability of the Thomas–Fermi approximation . In any case, Eq. (11) holds for $`\theta \gtrsim (kR)^{-1}`$, because the minimum value of $`q`$ in a finite-size trap is of order $`\hbar /R`$. Such additional incoherent scattering at angles of order $`\sqrt{\overline{\beta }}`$ can also be explained as follows. The positions of the centers of mass of atoms in a BEC are not independent: due to the interaction, a pair correlation is essential at distances up to $`(4\pi an_c)^{-1/2}`$ . Such a correlation, in other words a small-scale atomic density inhomogeneity, results in additional scattering at the angles given by Eq. (10). A similar effect has been proposed recently to detect a Bardeen–Cooper–Schrieffer transition in a trapped fermionic <sup>6</sup>Li gas. In conclusion, we make some estimates for the parameters of the experiment of Ref. . A spherically symmetric trap contains $`1.6\times 10^6`$ sodium atoms (the scattering length $`a=2.75\times 10^{-7}`$ cm, the resonant wavenumber $`k=1.07\times 10^5`$ cm<sup>-1</sup>) and the cloud size is $`R=3.63\times 10^{-3}`$ cm, so $`\sqrt{\overline{\beta }}=0.049`$ and $`(kR)^{-1}=0.0026`$; therefore the additional incoherent forward scattering caused by quantum depletion of the BEC can be easily distinguished from the coherent process studied in Refs. . Of course, such a cloud is optically dense for radiation detuned from resonance by less than a few hundred MHz. But if the optical density is smaller than $`\overline{\beta }^{-1/2}`$, then multiple photon scattering does not wash out the discussed effect. Namely, the excess peak in the angular distribution of incoherently scattered photons becomes lower and wider but remains noticeable. This work is supported by the Russian Foundation for Basic Research (project No. 99–02–17076) and the state program ”Universities of Russia” (project No. 990108).
# Untitled Document SUPERMASSIVE BLACK HOLES IN ACTIVE GALACTIC NUCLEI<sup>1</sup> To appear in Encyclopedia of Astronomy and Astrophysics Luis C. Ho<sup>2</sup> Carnegie Observatories, 813 Santa Barbara St., Pasadena, CA 91101-1292 and John Kormendy<sup>3</sup> Department of Astronomy, RLM 15.308, University of Texas, Austin, TX 78712-1083 1. SUPERMASSIVE BLACK HOLES AND THE AGN PARADIGM Quasars are among the most energetic objects in the Universe. We now know that they live at the centers of galaxies and that they are the most dramatic manifestation of the more general phenomenon of active galactic nuclei (AGNs). These include a wide variety of exotica such as Seyfert galaxies, radio galaxies, and BL Lacertae objects. Since the discovery of quasars in 1963, much effort has gone into understanding their energy source. The suite of proposed ideas has ranged from the relatively prosaic, such as bursts of star formation that make multiple supernova explosions, to the decidedly more colorful, such as supermassive stars, giant pulsars or “spinars,” and supermassive black holes (hereinafter BHs). Over time, BHs have gained the widest acceptance. The key observations that led to this consensus are as follows. Quasars have prodigious luminosities. Not uncommonly, $`L10^{46}`$ erg s<sup>-1</sup>; this is 10 times the luminosity of the brightest galaxies. Yet they are tiny, because they vary on timescales of hours. From the beginning, the need for an extremely compact and efficient engine could hardly have been more apparent. Gravity was implicated, because collapse to a black hole is the most efficient energy source known. The most cogent argument is due to Donald Lynden-Bell (1969, Nature, 223, 690). He showed that any attempt to power quasars by nuclear reactions alone is implausible. First, a lower limit to the total energy output of a quasar is the energy, $`10^{61}`$ erg, that is stored in its radio-emitting plasma halo. This energy weighs $`10^{40}`$ g or $`10^7`$ M. But nuclear reactions produce energy with an efficiency of only $`ϵ`$ = 0.7 %. Then the waste mass left behind in powering quasars would be at least $`M_{}10^9`$ M. Lynden-Bell argued further that quasar engines are $`2R`$ $`\genfrac{}{}{0pt}{}{_<}{^{}}`$ 10<sup>15</sup> cm in diameter because large variations in quasar luminosities are observed on timescales as short as 10 h. But the gravitational potential energy of $`10^9`$ M compressed into a volume as small as 10 light hours is $`GM_{}^2/R`$ $`\genfrac{}{}{0pt}{}{_>}{^{}}`$ 10<sup>62</sup> erg. As Lynden-Bell noted, “Evidently although our aim was to produce a model based on nuclear fuel, we have ended up with a model which has produced more than enough energy by gravitational contraction. The nuclear fuel has ended as an irrelevance.” We now know that the total energy output is larger than the energy that is stored in a quasar’s radio source; this strengthens the argument. Meanwhile, a caveat has appeared: the objects that vary most rapidly are now thought to contain relativistic jets that are beamed at us. This boosts the power of a possibly small part of the quasar engine and weakens the argument that the object cannot vary on timescales less than the light travel time across it. But this phenomenon would not occur at all if relativistic motions were not involved, so BH-like potential wells are still implicated. These considerations suggest that quasar power derives from gravity. 
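Lynden-Bell's bookkeeping is easy to reproduce. The short Python sketch below simply re-evaluates the round numbers quoted above (stored energy, nuclear efficiency, engine size); no new data enter.

```python
# Order-of-magnitude check of Lynden-Bell's argument (cgs units).
G     = 6.674e-8       # gravitational constant, cm^3 g^-1 s^-2
c     = 2.998e10       # speed of light, cm/s
M_sun = 1.989e33       # solar mass, g

E_store  = 1.0e61                        # erg, energy stored in the radio halo
eff_nuc  = 0.007                         # nuclear-burning efficiency (0.7%)
M_waste  = E_store / (eff_nuc * c**2)    # waste mass implied by nuclear power
R_engine = 0.5e15                        # cm, radius for a ~10 light-hour diameter
E_grav   = G * M_waste**2 / R_engine     # gravitational binding energy scale

print(f"waste mass ~ {M_waste:.1e} g = {M_waste / M_sun:.1e} M_sun")
print(f"G M^2 / R  ~ {E_grav:.1e} erg (vs. the stored {E_store:.0e} erg)")
```

The gravitational energy of the waste mass alone already exceeds the energy the nuclear fuel was invoked to supply, which is the point of the argument.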
The presence of deep gravitational potentials has long been inferred from the large velocity widths of the emission lines seen in optical and ultraviolet spectra of AGNs. These are typically 2000 to 10,000 km s<sup>-1</sup>. If the large Doppler shifts arise from gravitationally bound gas, then the binding objects are both massive and compact. The obstacle to secure interpretation has always been the realization that gas is easy to push around: explosions and ejection of gas are common astrophysical phenomena. The observation that unambiguously points to relativistically deep gravitational potential wells is the detection of radio jets with plasma knots that are seen to move faster than the speed of light, $`c`$. Apparent expansion rates of 1 – 10 $`c`$ are easily achieved if the true expansion rate approaches $`c`$ and the jet is pointed almost at us. The final pillar on which the BH paradigm is based is the observation that many AGN jets are well collimated and straight. Evidently AGN engines can remember ejection directions with precision for up to $`10^7`$ yr. The natural explanation is a single rotating body that acts as a stable gyroscope. Alternative AGN engines that are made of many bodies – like stars and supernovae – do not easily make straight jets. A variety of other evidence also is consistent with the BH picture, but the above arguments were the ones that persuaded a majority of the astronomical community to take the extreme step of adopting BHs as the probable engine for AGN activity. In the meantime, BH alternatives such as single supermassive stars and spinars were shown to be dynamically unstable and hence short-lived. Even if such objects can form, they are believed to collapse to BHs. The above picture became paradigm long before there was direct evidence for BHs. Dynamical evidence is the subject of the present and following articles. Meanwhile, there are new kinds of observations that point directly to BH engines. In particular, recent observations by the Advanced Satellite for Cosmology and Astrophysics (ASCA) have provided strong evidence for relativistic motions in AGNs. The X-ray spectra of many Seyfert galaxy nuclei contain iron K$`\alpha `$ emission lines (rest energies of 6.4 – 6.9 keV; see Figure 1). These lines show enormous Doppler broadening — in some cases approaching 100,000 km s<sup>-1</sup> or 0.3$`c`$ — as well as asymmetric line profiles that are consistent with relativistic boosting and dimming in the approaching and receding parts, respectively, of BH accretion disks as small as a few Schwarzschild radii. The foregoing discussion applies to the most powerful members of the AGN family, namely quasars and high-luminosity Seyfert and radio galaxies. It is less compelling for the more abundant low-luminous objects, where energy requirements are less demanding and where long jets or superluminal motions are seen less frequently or less clearly. Therefore a small but vocal competing school of thought continues to argue that stellar processes alone, particularly those that occur during bursts of star formation, can reproduce many AGN characteristics. Nonetheless, dynamical evidence suggests that BHs do lurk in some mildly active nuclei, and, as discussed in the next article, even in the majority of inactive galaxies. Figure 1. A composite x-ray spectrum of Seyfert nuclei taken with ASCA showing the relativistically broadened Fe K$`\alpha `$ line. 
The solid line is a fit to the line profile using two Gaussians, a narrow component centered at 6.4 keV and a much broader, redshifted component. \[Figure adapted from Nandra, K., et al. Astrophys. J. 477, 602 (1997).\] 2. MEASURING AGN MASSES: DIRECT METHODS Very general arguments suggest that quasar engines have masses $`M_{}`$ $``$ $`10^6`$ to $`10^9`$ M. Gravitational collapse is believed to liberate energy with an efficiency of $`ϵ0.1`$; Lynden-Bell’s arguments then imply that typical remnant masses are $`M_{}`$ $``$ $`10^8`$ M. Better estimates can be derived by asking what we need in order to power quasar luminosities, which range from 10<sup>44</sup> to 10<sup>47</sup> erg s<sup>-1</sup> or 10<sup>11</sup> to 10<sup>14</sup> $`L_{}`$. For $`ϵ`$ = 0.1, the engine must consume 0.02 to 20 M yr<sup>-1</sup>. How much waste mass accumulates depends on how long quasars live. This is poorly known. If they live long enough to make radio jets that are collimated over several Mpc, and if their lifetimes are conservatively estimated as the light travel time along the jets, then quasars last $`\genfrac{}{}{0pt}{}{_>}{^{}}`$10<sup>7</sup> yr and reach masses $`M_{}`$ $`\genfrac{}{}{0pt}{}{_>}{^{}}`$10<sup>5</sup> to 10<sup>8</sup> M. But the most rigorous lower limit on $`M_{}`$ follows from the condition that the outward force of radiation pressure on accreting matter not overwhelm the inward gravitational attraction of the engine, a condition which, admittedly, strictly holds only if the accreting material and the radiation have spherical symmetry. This so-called Eddington limit requires that $`LL_\mathrm{E}(4\pi Gcm_p/\sigma _T)M_{}`$ = 1.3$`\times `$10<sup>38</sup> ($`M_{}`$/M) erg s<sup>-1</sup>, or equivalently that $`M_{}`$ 8 $`\times `$10<sup>5</sup> ($`L`$/10<sup>44</sup> erg s<sup>-1</sup>) M. Here $`G`$ is the gravitational constant, $`m_p`$ is the mass of the proton, and $`\sigma _T`$ is the Thompson cross section for electron scattering. We conclude that we are looking for BHs with masses $`M_{}`$ $``$ 10<sup>6</sup> to 10<sup>9</sup> M. Finding them has become one of the “Holy Grails” of astronomy because of the importance of confirming or disproving the AGN paradigm. AGNs provide the impetus to look for BHs, but active galaxies are the most challenging hunting ground. Stellar dynamical searches first found central dark objects in inactive galaxies (see the next article), but they cannot be applied in very active galaxies, because the nonthermal nucleus outshines the star light. We can estimate masses using the kinematics of gas, but only if it is unperturbed by nongravitational forces. Fortunately, this complication can be ruled out a posteriori if we observe that the gas is in Keplerian rotation around the center, i. e., if its rotation velocity as a function of radius is $`V(r)r^{1/2}`$. We can also stack the cards in our favor by targeting galaxies that are only weakly active and that appear to show gas disks in images taken through narrow bandpasses centered on prominent emission lines. Figure 2a. HST image of the ionized gas disk near the center of the giant elliptical galaxy M 87. The data were taken with the Second Wide Field/Planetary Camera through a filter that isolates the optical emission lines H$`\alpha `$ and \[N II\] $`\lambda `$$`\lambda `$6548, 6583. The left inset is an expanded viewof the gas disk; for an adopted distance of 16.8 Mpc, the region shown is 5<sup>′′</sup> $`\times `$ 5<sup>′′</sup> or 410 $`\times `$ 410 pc. 
The disk has a major axis diameter of $``$ 150 pc, and it is oriented perpendicular to the optical jet. \[Image courtesy of NASA/Space Telescope Science Institute, based on data originally published by Ford, H. C., et al. Astrophys. J. 435, L27 (1994).\] Figure 2b. Optical emission-line rotation curve for the nuclear disk in M 87. The data were taken with the Faint Object Camera on HST. The curves in the upper panel correspond to two different Keplerian thin disk models, and the bottom panel shows the residuals for the best-fitting model. \[Figure adapted from Macchetto, F., et al. Astrophys. J. 489, 579 (1997).\] Figure 3. (Left) HST image of the central region of the giant elliptical galaxy M 84; the box measures 22<sup>′′</sup> $`\times `$ 19<sup>′′</sup> or 1.8 kpc $`\times `$ 1.6 kpc for an adopted distance of 16.8 Mpc. The data were taken with the Second Wide Field/Planetary Camera through a filter that isolates the optical emission lines H$`\alpha `$ and \[N II\] $`\lambda `$$`\lambda `$6548, 6583. The slit of the Space Telescope Imaging Spectrograph was placed along the major axis of the nuclear gas disk (blue rectangle). (Right) Resulting spectrum of the central 3<sup>′′</sup> (240 pc). The abscissa is velocity and the ordinate is distance along the major axis. The spectrum shows the characteristic kinematic signature of a rotating disk. The velocity scale is coded such that blue and red correspond to blue and red shifts, respectively; the total velocity range is 1445 km s<sup>-1</sup>. \[Image courtesy of NASA/Space Telescope Science Institute, based on data originally published by Bower, G. A., et al. Astrophys. J. Lett. 483, L33 (1997) and Bower, G. A., et al. Astrophys. J. Lett. 492, L111 (1998).\] 2.1 Kinematics of Optical Emission Lines High-resolution optical images taken with ground-based telescopes and especially with the Hubble Space Telescope (HST) show that many giant elliptical galaxies contain nuclear disks of dust and ionized gas. The most famous case is M 87 (Figure 2a). The disk measures $``$ 150 pc across, and its rotation axis is closely aligned with the optical and radio jet. This is in accord with the BH accretion picture. The disk is in Keplerian rotation (Figure 2b) around an object of mass $`M_{}`$ $``$ 3 $`\times `$10<sup>9</sup> M. Furthermore, this object is dark: the measured mass-to-light ratio exceeds 100 in solar units, and this is much larger than that of any known population of stars. Moreover, the dark mass must be very compact: the velocity field limits its radial extent to be less than 5 pc. Therefore its density exceeds 10<sup>7</sup> M pc<sup>-3</sup>. Another illustration of this technique is given in Figure 3. M 84, also a denizen of the Virgo cluster of galaxies, is a twin of M 87 in size, and it, too, harbors an inclined nuclear gas disk (diameter $``$80 pc), whose rotation about the center betrays an invisible mass of $`M_{}`$ $``$ 2$`\times `$10<sup>9</sup> M. Other cases are reported (NGC 4261, NGC 6251, NGC 7052), and searches for more are in progress. 2.2 Kinematics of Radio Masers A related approach exploits the few cases where 22 GHz microwave maser emission from water molecules has been found in edge-on nuclear disks of gas. Particularly strong “megamasers” allow radio astronomers to use interferometry to map the velocity field with exquisite angular resolution. 
In the most dramatic application of this method, the Very Long Baseline Array was used to achieve resolution 0.<sup>′′</sup>0006 – 100 times better than that delivered by HST – in observations of the Seyfert galaxy NGC 4258. This is only 6 Mpc away, so the linear resolution was a remarkable 0.017 pc. The masers trace out a slightly warped annulus with an inner radius of 0.13 pc, an outer radius of 0.26 pc, and a thickness of $`<`$ 0.003 pc (Figure 4, left). The masers with nearly zero velocity with respect to the galaxy are on the near side of the disk along the line of sight to the center, while the features with high negative (approaching) and positive (receding) velocities come from the disk on either side of the center. High velocities imply that 3.6 $`\times `$10<sup>7</sup> M of binding matter resides interior to $`r`$ = 0.13 pc. What is most compelling about NGC 4258 is the observation that the rotation curve is so precisely Keplerian (Figure 4, right). From this result, one can show that the radius of the mass distribution must be $`r`$ $`\genfrac{}{}{0pt}{}{_<}{^{}}`$0.012 pc. If the central mass were not a BH, its density would be extraordinarily high, $`\rho `$ $`>`$ 5 $`\times `$10<sup>12</sup> M pc<sup>-3</sup>. This is comparable to the density of the dark mass at the center of our Galaxy (see following article). Under these extreme conditions, one can show that a cluster of stellar remnants (white dwarf stars, neutron stars, and stellar-size black holes) or substellar objects (planets and brown dwarfs) are short-lived. Astrophysically, these are the most plausible alternatives to a BH. Therefore the dynamical case for a supermassive black hole is stronger in NGC 4258 and in our Galaxy than in any other object. Figure 4. (Left) Spatial distribution of the water masers in NGC 4258, color-coded so that blue and red correspond to blueshifted and redshifted velocities, respectively. The maser spots are distributed in a thin, warped annulus that is only 4 from edge-on. For an adopted distance of 6.4 Mpc, 1 mas = 0.031 pc. The top panel shows an expanded view of the emission near the systemic velocity of the galaxy. (Right) Light-of-sight velocity as a function of distance along the major axis of the annulus. The high-velocity features are accurately fitted by a Keplerian model, overplotted as a continuous line. The emission near the systemic velocity, magnified in the inset, lies at nearly constant radius in the front part of the disk along the line of sight to the center. The linear velocity gradient results from the change in projection of the rotation velocity. \[Figure adapted from Miyoshi, M., et al. Nature 373, 127 (1995).\] 3. MEASURING AGN MASSES: INDIRECT METHODS Direct dynamical measurements are impractical for more luminous and more distant AGNs. The tremendous glare from the nucleus outshines the circumnuclear emission from stars, and the violent conditions near the center are likely to subject the gas to nongravitational forces. Indirect methods of estimating central masses have been devised to provide a reality check for these more difficult objects. 3.1 Fitting the Spectra of Accretion Disks As material falls toward a black hole, it is believed to settle into an accretion disk in which angular momentum is dissipated by viscosity. From the virial theorem, half of the gravitational potential energy $`U`$ is radiated. 
Therefore the luminosity is $$L=\frac{1}{2}\frac{dU}{dt}=\frac{1}{2}\frac{GM_{}\dot{M_{}}}{r}.$$ $`(1)`$ At sufficiently high accretion rates $`\dot{M_{}}`$, the gas is optically thick, and the disk radiates as a thermal blackbody: $$L=\mathrm{\hspace{0.17em}2}\pi r^2\sigma T^4.$$ $`(2)`$ Here $`2\pi r^2`$ is the surface area of the disk and $`\sigma `$ is the Stefan-Boltzmann constant. The effective temperature of the disk as a function of radius $`r`$ is therefore $$T(r)\left(\frac{GM_{}\dot{M_{}}}{4\pi \sigma r^3}\right)^{1/4}.$$ $`(3)`$ Parameterizing the above result in terms of the Eddington accretion rate, $`\dot{M}_\mathrm{E}L_\mathrm{E}/ϵc^2`$ = 2.2 $`(ϵ/0.1)^1`$ ($`M_{}/10^8`$ M ) M yr<sup>-1</sup>, and the Schwarzschild radius, $`R_S\mathrm{\hspace{0.17em}2}GM_{}/c^2`$ = $`2.95\times 10^{13}`$ ($`M_{}/10^8`$ M ) cm, gives $$T(r)=\mathrm{\hspace{0.17em}6}\times 10^5\mathrm{K}\left(\frac{\dot{M_{}}}{\dot{M}_\mathrm{E}}\right)^{1/4}\left(\frac{M_{}}{10^8M_{}}\right)^{1/4}\left(\frac{r}{R_S}\right)^{3/4}.$$ $`(4)`$ In other words, the peak of the blackbody spectrum occurs at a frequency of $`\nu _{\mathrm{max}}=\mathrm{\hspace{0.17em}2.8}kT/h`$ 4$`\times `$10<sup>16</sup> Hz, where $`k`$ is Boltzmann’s constant and $`h`$ is Planck’s constant. This peak is near 100 Å or 0.1 keV. In fact, the spectra of many AGNs show a broad emission excess at extreme ultraviolet or soft X-ray wavelengths. This “big blue bump” has often been identified with the thermal emission from the accretion disk. A fit to the luminosity and the central frequency of the big blue bump gives $`M_{}`$ and $`\dot{M_{}}`$ but not each separately. Corrections for disk inclination and relativistic effects further complicate the analysis. This method is therefore model-dependent and provides only approximate masses. Typical values for quasars are $`M_{}`$ $``$ 10<sup>8</sup> – 10<sup>9.5</sup> M and $`\dot{M_{}}`$ $``$ 0.1 – 1 $`\dot{M}_\mathrm{E}`$. Seyfert nuclei appear to have lower masses, $`M_{}`$ $``$ 10<sup>7.5</sup> – 10<sup>8.5</sup> M, and lower accretion rates, $`\dot{M_{}}`$ $``$ 0.01 – 0.5 $`\dot{M}_\mathrm{E}`$. 3.2 Virial Masses from Optical Variability Surrounding the center at a distance of 0.01 to 1 pc from the black hole lies the “broad-line region” (BLR). This is a compact, dense, and highly turbulent swarm of gas clouds or filaments. The clouds are illuminated by the AGN’s photoionizing continuum radiation and reprocess it into emission lines that are broadened to velocities of several thousand km s<sup>-1</sup> by the strong gravitational field of the black hole. Then $$M_{}=\eta \frac{v^2r_{\mathrm{BLR}}}{G},$$ $`(5)`$ where $`\eta 1`$ to 3 depends on the kinematic model adopted, $`v`$ is the velocity dispersion of the gas as reflected in the widths of the emission lines, and $`r_{\mathrm{BLR}}`$ is the radius of the BLR. The latter can be estimated by “reverberation mapping,” as follows. The photoionizing continuum of an AGN typically varies on timescales of days to months. In response, the emission lines vary also, but with a time delay that corresponds to the light travel time between the continuum source and the line-emitting gas. By monitoring the variations in the continuum and the emission lines in an individual object, reverberation mapping provides information on the size of the BLR. These studies also suppport the assumption that the line widths come predominantly from bound orbital motions. 
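As a rough guide to the numbers that Equation (5) yields, the sketch below evaluates it for assumed, representative BLR parameters (a velocity of a few thousand km s<sup>-1</sup> and a radius of about ten light-days); these inputs are illustrative only, not measurements of any particular object.

```python
# Virial black-hole mass from Equation (5): M = eta * v^2 * r_BLR / G (cgs units).
G         = 6.674e-8        # cm^3 g^-1 s^-2
M_sun     = 1.989e33        # g
light_day = 2.59e15         # cm

eta   = 1.5                 # assumed kinematic factor (quoted range 1-3)
v     = 3.0e8               # cm/s, assumed BLR velocity dispersion (3000 km/s)
r_blr = 10.0 * light_day    # assumed BLR radius from reverberation mapping

M_bh = eta * v**2 * r_blr / G
print(f"M_BH ~ {M_bh:.1e} g = {M_bh / M_sun:.1e} M_sun")
```

With these assumed inputs the virial mass comes out near $`10^7`$ solar masses, i.e. at the low end of the Seyfert range quoted next.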
Applying Equation (5) suggests that Seyfert nuclei are powered by black holes with masses $`M_{}10^7`$ to 10<sup>8</sup> M, while quasar engines are more massive, with $`M_{}10^8`$ to 10<sup>9</sup> M. Since quasars also live in more massive host galaxies, this supports the emerging correlation (see the following article) between BH mass and the mass of the elliptical-galaxy-like part of the host galaxy. 3.3 X-Ray Variability Active galactic nuclei vary most conspicuously in hard X-rays (2 – 10 keV). One might hope to use the variability timescale to constrain the size of the X-ray emitting region and hence to estimate the central mass. However, no simple pattern of variability emerges, and defining a meaningful timescale is ambiguous. One approach uses the “fastest doubling time,” $`\mathrm{\Delta }t`$, to establish a maximum source size $`Rc\mathrm{\Delta }t`$. High-energy photons presumably come from the hot, inner regions of the accretion disk or in an overlying hot corona. For example, if $`R5R_S`$, as deduced in some models, we obtain an upper limit to the mass, $`M_{}`$ $`\genfrac{}{}{0pt}{}{_<}{^{}}`$$`(c^3/10G)\mathrm{\Delta }t`$ $``$ $`10^4\mathrm{\Delta }t`$ M ($`\mathrm{\Delta }t`$ in s). Masses estimated in this way are generally consistent with those obtained from other virial arguments, but they are considerably less robust because of uncertainties in associating the variability timescale with a source size. For example, the x-ray intensity variations could originate from localized “hotspots” in the accretion flow. X-ray reverberation mapping may in the future be a more powerful tool. The iron K$`\alpha `$ line is widely believed to be produced by reprocessing of the hard X-ray continuum by the accretion disk. The strikingly large width and skewness of the line profiles (Figure 1), now routinely detected with ASCA, reflect the plasma bulk motion within 10 – 100 gravitational radii of the center. The temporal response of the line strength and line profile depends on a number of factors that, in principle, can be modeled theoretically; these include the geometry of the X-ray source, the structure of the disk, and the assumed (Schwarzschild or Kerr) metric of the black hole. Time-resolved X-ray spectroscopy should become feasible with the X-ray Multi-Mirror Mission (XMM) in the near future. We can then look forward to constraints both on the masses and the spins of BHs. 4. SUMMARY AND PROSPECTUS The black hole model for AGN activity has been successful and popular for over three decades. It has withstood the test of time not – at least until recently – because the empirical evidence for BHs has been overwhelming but because the alternatives are so implausible. Now progress has advanced on several fronts. The refurbished HST has greatly strengthened the evidence, already growing from ground-based observations, that supermassive dark objects live at the centers of most galaxies. The pace of discoveries is accelerating. The dark objects have exactly the range of masses that we need to explain AGN engines, but we have had no proof that they must be black holes. Then radio interferometry revealed the spectacular maser disk in NGC 4258. For its rotation curve to be as accurately Keplerian as we observe, the central mass must be confined to an astonishingly tiny volume. The inferred density of the central object is so high that astrophysically plausible alternatives can be excluded; a BH is the best explanation. 
The same conclusion has been reached for the BH candidate at the center of our Galaxy. This is a major conceptual breakthrough. In addition, ASCA has demonstrated that many AGNs show iron emission lines with relativistically broadened profiles. This is arguably the best evidence for the strong gravitational field of a black hole. One of the most interesting prospects for the future is time-resolved X-ray spectroscopy, because hot gas probes closest to an accreting black hole. Finally, the AGN paradigm can be turned inside-out to give what may prove to be the most direct argument for black holes. BHs were “invented” to explain nuclear activity in galaxies. In recent years, an ironic situation has developed: some BH candidates are too inactive for the amount of matter that we believe they are accreting. The same is true of some stellar-mass black hole candidates that accrete gas from evolving companion stars. A number of researchers recently have developed a theory of “advection-dominated accretion” in which the accretion disk cannot radiate most of its energy before it reaches $`R_S`$ either because it is optically thick or because it is too thin to cool. Unless most of the inflowing material ultimately escapes through an outflow, a possibility being explored, the only way to make the accretion energy disappear is to ensure that the accreting body does not have a hard surface. That is, the inactivity of well-fed nuclear engines may be evidence that they have event horizons. Finding event horizons would be definitive proof that AGN engines are black holes. 5. SUGGESTIONS FOR FURTHER READING $``$ Initial debate concerning the physical nature of quasars is summarized in Burbidge, G., & Burbidge, E. M., Quasi-Stellar Objects (San Francisco: Freeman) (1967) $``$ The three key historical papers that originated the BH hypothesis are Salpeter, E. E. Astrophys. J. 140, 796 (1964) Zel’dovich, Ya. B., & Novikov, I. D. Sov. Phys. Dokl. 158, 811 (1964) Lynden-Bell, D. Nature 223, 690 (1969) $``$ The argument for “gravity power” was further developed in Lynden-Bell, D. Physics Scripta 17, 185 (1978) $``$ Textbook style discussions of AGN physics can be found in Active Galactic Nuclei, Saas-Fee Course 20, ed. T. J.-L. Courvoisier & M. Mayor (Berlin: Springer) (1990) Peterson, B. M., An Introduction to Active Galactic Nuclei (Cambridge: Cambridge University Press) (1997) $``$ The BH paradigm is covered at a more technical level in the following review articles: Rees, M. J. Ann. Rev. Astr. Astrophys. 22, 471 (1984) Begelman, M. C., Blandford, R. D., & Rees, M. J. Rev. Mod. Phys. 56, 255 (1984) Blandford, R. D., in Active Galactic Nuclei, Saas-Fee Course 20, ed. T. J.-L. Courvoisier & M. Mayor (Berlin: Springer), 161 (1990) $``$ The search for BHs is reviewed in Kormendy, J., & Richstone, D. Ann. Rev. Astr. Astrophys. 33, 581 (1995) Richstone, D., et al. Nature 395, A14 (1998) $``$ The starburst theory for the origin of AGNs has been developed by Terlevich, R., Tenorio-Tagle, G., Franco, J., & Melnick, J. M.N.R.A.S. 255, 713 (1992) Terlevich, R., Tenorio-Tagle, G., Rozyczka, M., Franco, J., & Melnick, J. M.N.R.A.S. 272, 198 (1995) $``$ The following conference proceedings explicitly focus on the observations and interpretation of the more “garden variety” low-luminosity AGNs: Eracleous, M., Koratkar, A. P., Leitherer, C., & Ho, L. C., eds., The Physics of LINERs in View of Recent Observations (San Francisco: Astronomical Society of the Pacific) (1996) Schmitt, H. R., Kinney, A. L., & Ho, L. 
C., eds., The AGN/Normal Galaxy Connection (Advances in Space Research, 23 (5-6)) (Oxford: Elsevier Science Ltd.) (1999) $``$ Readers interested in a full treatment of the techniques of reverberation mapping should consult Blandford, R. D., & McKee, C. F. Astrophys. J. 255, 419 (1982) Peterson, B. M. Pub. Astr. Soc. Pac. 105, 247 (1993) $``$ Explicit application of reverberation mapping results to derive masses of AGNs was done by Ho, L. C., in Observational Evidence for Black Holes in the Universe, ed. S. K. Chakrabarti (Dordrecht: Kluwer), 157 (1998) Laor, A. Astrophys. J. Lett. 505, L83 (1998) $``$ Mass determinations using optical emission-line rotation curves include Harms, R. J., et al. Astrophys. J. Lett. 435, L35 (1994) Macchetto, F., Marconi, A., Axon, D. J., Capetti, A., Sparks, W. B., & Crane, P. Astrophys. J. 489, 579 (1997) Bower, G. A., et al. Astrophys. J. Lett. 492, L111 (1998) $``$ The water maser observations of NGC 4258 are described in Watson, W. D., & Wallin, B. K. Astrophys. J. Lett. 432, L35 (1994) Miyoshi, M., Moran, J., Herrnstein, J., Greenhill, L., Nakai, N., Diamond, P., & Inoue, M. Nature 373, 127 (1995) $``$ Arguments against compact dark star clusters in NGC 4258 and the Galaxy are presented in Maoz, E. Astrophys. J. Lett. 494, L181 (1998) $``$ These papers discuss the derivation of $`M_{}`$ and $`\dot{M_{}}`$ by fitting spectra with accretion disk models: Wandel, A., & Petrosian, V. Astrophys. J. Lett. 329, L11 (1988) Laor, A. M.N.R.A.S. 246, 369 (1990) $``$ Attempts to derive masses using X-ray variability have been made by Wandel, A., & Mushotzky, R. F. Astrophys. J. Lett. 306, L61 (1986) $``$ The prediction, discovery, and routine detection of broad Fe K$`\alpha `$ emission lines are described, respectively, in Fabian, A. C., Rees, M. J., Stella, L., & White, N. E. M.N.R.A.S. 238, 729 (1989) Tanaka, Y., et al. Nature 375, 659 (1995) Nandra, K., George, I. M., Mushotzky, R. F., Turner, T. J., & Yaqoob, T. Astrophys. J. 477, 602 (1997) $``$ Prospects for X-ray reverberation mapping are foreseen in Stella, L. Nature 344, 747 (1990) Reynolds, C. S., Young, A. J., Begelman, M. C., & Fabian, A. C. Astrophys. J. 514, 164 (1999) $``$ Advection-dominated accretion is reviewed in Narayan, R., Mahadevan, R., & Quataert, E., in The Theory of Black Hole Accretion Discs, ed. M. A. Abramowicz, G. Björnsson, & J. E. Pringle (Cambridge: Cambridge University Press), 148 (1998) Mineshige, S., & Manmoto, T., Advances in Space Research, 23 (5-6), 1065 (1999) Blandford, R. D., & Begelman, M. C. M.N.R.A.S. 303, L1 (1999)
no-problem/0003/astro-ph0003397.html
ar5iv
text
# Two-dimensional models of hydrodynamical accretion flows into black holes ## 1 Introduction In this paper we present the numerical study of global properties of hydrodynamical black hole accretion flows with a very inefficient radiative cooling. Such flows, coined ADAFs by Lasota (1996, 1999), are thought to be present in several astrophysical black hole candidates, in particular in low mass X-ray binaries and in some active galactic nuclei. Observed properties of ADAFs may be directly connected to black hole physics, and for this reason ADAFs have recently attracted considerable attention (for reviews see e.g. Kato, Fukue & Mineshige 1998; Abramowicz, Björnsson & Pringle 1998; Narayan 1999). The radiation losses are unimportant for the dynamics as well as for the thermal balance of ADAFs, and therefore the details of radiative processes are not crucial. Radiative feed-back into hydrodynamics is negligible and may be treated as a small perturbation<sup>1</sup><sup>1</sup>1Objects that are external to ADAFs may change the balance: for example, if there is an external source of soft photons, they may provide an additional, possibly efficient, Compton cooling of optically thin plasma (Shapiro, Lightman & Eardley 1976).. Abramowicz et al. (1995) and, in more detail, Chen et al. (1995) described accretion disk solutions in the parameter space ($`\dot{m}`$, $`\tau `$, $`\alpha `$), where $`\dot{m}=\dot{M}/\dot{M}_{Edd}`$ is the accretion rate expressed in the Eddington units, $`\tau `$ is the optical depth and $`\alpha `$ is the viscosity parameter. In this space ADAFs exist in two regimes (as anticipated by Rees et al. 1982): (1) ADAFs with $`\tau \gg 1`$ have super-Eddington accretion rates, $`\dot{m}>1`$. Radiation is trapped inside the accretion flow (Katz 1977; Begelman 1978). To this category belong slim accretion disks (Abramowicz et al. 1988), which have a vertical scale comparable to the corresponding radius. (2) ADAFs with $`\tau \ll 1`$ have very sub-Eddington accretion rates, $`\dot{m}\ll 1`$. These flows were first investigated by Ichimaru (1977), but the recent interest in them was generated mostly by the works of Narayan and his collaborators, after important aspects of the nature of the flows had been explained by Narayan & Yi (1994, 1995b) and Abramowicz et al. (1995). The most important parameters for the physics of ADAFs are the viscosity parameter $`\alpha `$ and the adiabatic index $`\gamma `$. The latter parameter determines the regime of ADAFs through the equation of state. The parameters $`\tau `$ and $`\dot{m}`$ are not important: ADAFs have either $`\tau \gg 1`$ and $`\dot{m}>1`$, or $`\tau \ll 1`$ and $`\dot{m}\ll 1`$. There are no strong observational or theoretical limits for $`\alpha `$ and $`\gamma `$, and therefore, at present, one needs to construct models over wide ranges of both parameters. In this paper we have constructed models for $`10^{-2}\le \alpha \le 1`$ and $`\gamma =4/3`$, $`3/2`$, $`5/3`$. Our models are time dependent and fully 2-D: all components of forces and all components of viscous stresses are included in the calculations. To minimize the influence of the outer boundary condition on the flow structure, we consider the solutions in the large radial range, $`3r_g\le r\le 8\times 10^3r_g`$, where $`r_g=2GM/c^2`$ is the gravitational radius of the central black hole with mass $`M`$. We have found that the properties of ADAFs depend mainly on the viscosity, i.e. on $`\alpha `$, and also, but less strongly, on the adiabatic index $`\gamma `$. Four types of accretion flows can be distinguished (see Figure 1). 
(i) Convective flows. For a very small viscosity, $`\alpha \lesssim 0.03`$, ADAFs are convectively unstable, as predicted by Narayan & Yi (1994) and confirmed in numerical simulations by Igumenshchev, Chen & Abramowicz (1996), Stone, Pringle & Begelman (1999) and Igumenshchev & Abramowicz (1999). Axially symmetric convection transports the angular momentum inward rather than outward, for a reason similar to that described by Stone & Balbus (1996) in the context of turbulence: if there are no azimuthal gradients of pressure, turbulence tries to erase the angular momentum gradient. This property of convection governs the flow structure, as was shown for ADAFs by Narayan, Igumenshchev & Abramowicz (2000) and Quataert & Gruzinov (2000). There are small-scale circulations, with matter fluxes considerably greater than the net flux entering the black hole. Convection transports a significant amount of the dissipated binding energy outward. No powerful outflows are present. (ii) Large-scale circulations. For a larger, but still small viscosity, $`\alpha \sim 0.1`$, ADAFs can be either convectively stable or unstable, depending on $`\alpha `$ and $`\gamma `$. The flow pattern consists of large-scale ($`\sim r`$) meridional circulations. No powerful unbound outflows are present. In some respects this type of flow is the limiting case of the convective flows, in which the small-scale motions are suppressed by the larger viscosity. (iii) Pure inflows. With an increasing viscosity, $`\alpha \sim 0.3`$, the convective instability dies off. Some ADAFs (with $`\gamma \lesssim 3/2`$) are characterized by a pure inflow pattern, and agree in many aspects with the self-similar models (Gilham 1981; Narayan & Yi 1994). No outflows are present. (iv) Bipolar outflows. For a large viscosity, $`\alpha \sim 1`$, ADAFs differ considerably from the simple self-similar models. Powerful unbound bipolar outflows are present. Effects of turbulent thermal conduction have been studied in several simulations. The conduction has an important influence on the flow structure, but it does not introduce a new type of flow. The paper is organized as follows. In §2 we describe the equations, the numerical method and the boundary conditions. In §3 we present numerical results for models with and without thermal conduction. In §4 we discuss the properties of the solutions and their implications. In §5 we give the final conclusions. ## 2 Numerical method We compute ADAF models by solving the non-relativistic time-dependent Navier-Stokes equations which describe accretion flows in a given and fixed gravitational field: $$\frac{d\rho }{dt}+\rho \nabla \cdot \vec{v}=0,$$ $`(2.1)`$ $$\rho \frac{d\vec{v}}{dt}=-\nabla P-\rho \nabla \mathrm{\Phi }+\nabla \cdot 𝚷,$$ $`(2.2)`$ $$\rho \frac{de}{dt}=-P\nabla \cdot \vec{v}-\nabla \cdot \vec{q}+Q.$$ $`(2.3)`$ Here $`\rho `$ is the density, $`\vec{v}`$ is the velocity, $`P`$ is the pressure, $`\mathrm{\Phi }=-GM/r`$ is the Newtonian gravitational potential of the central point mass $`M`$, $`e`$ is the specific thermal energy, $`𝚷`$ is the viscous stress tensor with all components included, $`\vec{q}`$ is the heat flux density due to thermal conduction and $`Q`$ is the dissipation function. The flow is assumed to be axially symmetric. There is no radiative cooling of the accretion gas. 
We adopt the ideal gas equation of state, $$P=(\gamma -1)\rho e,$$ $`(2.4)`$ and consider only the shear viscosity with the kinematic viscosity coefficient given by $$\nu =\alpha \frac{c_s^2}{\mathrm{\Omega }_K},$$ $`(2.5)`$ where $`0<\alpha \le 1`$, $`c_s=\sqrt{P/\rho }`$ is the isothermal sound speed and $`\mathrm{\Omega }_K=\sqrt{GM/r^3}`$ is the Keplerian angular velocity. We assume that the thermal conduction heat flux is directed down the specific entropy gradient, $$\vec{q}=-\chi \rho T\nabla s,$$ $`(2.6)`$ where $`s`$ is the specific entropy, $`T`$ is the temperature and $`\chi `$ is the thermometric conductivity. The formula (2.6) is correct for flows in which the heat conduction is due to either turbulent eddies or diffusion of radiation in an optically thick medium. Other laws of thermal conduction are possible for different heat transfer mechanisms. In numerical models we assume a simplified dependence, $$\chi =\nu /Pr,$$ $`(2.7)`$ where $`Pr`$ is the dimensionless Prandtl number assumed to be constant in the flow, and $`\nu `$ is defined by (2.5). In the flows without thermal conduction, i.e. $`\chi =0`$, one formally has $`Pr=\infty `$. In the case of turbulent flows the actual value of the Prandtl number is not clearly known and can vary in a wide range, $`1\lesssim Pr<\infty `$, depending on the nature of turbulence. If the viscosity in turbulent flows is provided mainly by small scale eddies, one can expect $`Pr\sim 1`$. If the viscosity is due to magnetic stress, the thermal conduction could be significantly suppressed, i.e. $`Pr\gg 1`$. Note that for molecular thermal conduction in gases, the Prandtl number is always of the order of unity (e.g. Landau & Lifshitz 1987). In actual calculations, we use $$\vec{q}=-\chi \left[\nabla (\rho e)-\gamma e\nabla \rho \right],$$ $`(2.8)`$ which is equivalent to (2.6). We split the numerical integration of equations (2.1)-(2.3) into three sub-steps: hydrodynamical, viscous and conductive. The hydrodynamical sub-step is calculated by using the explicit Eulerian finite-difference algorithm PPM developed by Colella & Woodward (1984). The viscous sub-step is solved by applying an implicit method with a direction-splitting when calculating the contributions to equation (2.2). The dissipation function $`Q`$ in equation (2.3) is calculated explicitly. The conductive sub-step in equation (2.3) is again calculated by an implicit method with a direction-splitting. The time-step $`\mathrm{\Delta }t`$ for the numerical integration is chosen in accordance with the Courant condition for the hydrodynamical sub-step. We use a spherical grid $`N_r\times N_\theta =130\times 50`$ with the inner radius at $`r_{in}=3r_g`$ and the outer radius at $`r_{out}=8000r_g`$. The grid points are logarithmically spaced in the radial direction and uniformly spaced in the polar direction from $`0`$ to $`\pi `$. The general-relativistic capture effect, which governs the flow near the black hole, is modeled by using an absorbing boundary condition at $`r=r_{in}`$. We assume no viscous angular momentum and energy fluxes through the inner boundary associated with the ($`r\theta `$) and ($`r\varphi `$) components of the shear stress. Namely, we assume $`d(v_\theta /r)/dr=0`$ and $`d(v_\varphi /r)/dr=0`$ at $`r=r_{in}`$. Also, the energy flux due to the thermal conduction at $`r=r_{in}`$ is assumed to be zero. At the outer boundary $`r_{out}`$ we apply an absorbing boundary condition. The matter can freely outflow through $`r_{out}`$, but there are no flows from outside. 
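Setting up the grid and the transport coefficients of Eqs. (2.5) and (2.7) is straightforward. The following minimal sketch (Python/NumPy; the function names, array layout and default values are illustrative assumptions, not taken from the original code) builds the logarithmically spaced radial grid, the uniform polar grid, and the $`\alpha `$-viscosity and conduction coefficients:

```python
import numpy as np

G = M = c = 1.0                       # geometrized units; radii are in units of r_g = 2GM/c^2
r_g = 2.0 * G * M / c**2

def make_grid(n_r=130, n_theta=50, r_in=3.0, r_out=8000.0):
    """Spherical (r, theta) grid: logarithmically spaced in r from r_in to r_out
    (in units of r_g), uniformly spaced in theta from 0 to pi."""
    r = r_in * r_g * (r_out / r_in) ** (np.arange(n_r) / (n_r - 1.0))
    theta = np.linspace(0.0, np.pi, n_theta)
    return np.meshgrid(r, theta, indexing="ij")

def transport_coefficients(P, rho, r, alpha=0.3, prandtl=1.0):
    """Kinematic viscosity nu = alpha c_s^2 / Omega_K (Eq. 2.5) and
    thermometric conductivity chi = nu / Pr (Eq. 2.7)."""
    c_s2 = P / rho                    # isothermal sound speed squared
    omega_K = np.sqrt(G * M / r**3)   # Keplerian angular velocity
    nu = alpha * c_s2 / omega_K
    chi = nu / prandtl
    return nu, chi
```

For the models without thermal conduction one simply takes $`\chi =0`$, which corresponds to $`Pr\rightarrow \infty `$.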
In the calculations, we assume that mass is steadily injected into the calculation domain from an equatorial torus near the outer boundary of the grid. Matter is injected there with angular momentum equal to $`0.95`$ times the Keplerian angular momentum. Due to viscous spread, a part of the injected matter moves inwards and forms an accretion flow. The other part leaves the computation domain freely through the outer boundary. We start computations from an initial state in which there is a small amount of mass in the grid. As the injected mass spreads and accretes, the amount of mass within the grid increases. After a time comparable to the viscous time scale at $`r_{out}`$, the accretion flow achieves a quasi-stationary behaviour, and may be considered to be in a steady state. However, for some models this ‘steady state’ is steady only in the sense of time-averaging: these flows show persistent chaotic fluctuations, at any given point, which do not die out with time. ## 3 Two-dimensional hydrodynamical models Spatial re-scaling $`r\rightarrow x=r/r_g`$ together with time re-scaling $`t\rightarrow t/(r_g/c)`$ makes solutions of (2.1)–(2.3) independent of the black hole mass $`M`$. There is also an obvious re-scaling of density $`\rho \rightarrow \rho /\dot{M}_{inj}`$ that makes the solutions independent of the mass injection rate $`\dot{M}_{inj}`$. Thus, with fixed boundary conditions, the numerical models are described by the three dimensionless parameters $`\alpha `$, $`\gamma `$ and $`Pr`$. We have calculated a variety of models for different values of these parameters (see Table 1). ### 3.1 Models without thermal conduction First, we describe accretion flows with no thermal conduction ($`Pr=\infty `$). The results of the simulations are summarized in Figure 1 (left panel). Here circles show the location of the computed models in the ($`\alpha `$, $`\gamma `$) plane. Empty circles correspond to stable laminar flows. Unstable models with large-scale ($`\sim r`$) circulation motions are represented by crossed circles. Unstable models with small-scale ($`\ll r`$) convective motions are shown by solid circles. Arrows indicate the presence of powerful outflows or strong inflows in the models. Two outward-directed arrows, on the upper and lower sides of a circle, indicate bipolar outflows, whereas a single arrow corresponds to a unipolar outflow. Inward-directed arrows indicate that the model has a pure inflow pattern. Models which are not marked by arrows reveal neither powerful outflows nor a pure inflow. The source of matter in our models has a constant injection rate and is located close to the outer boundary. In a steady state one expects that, due to this particular location of the source, most of the material injected into the computational domain escapes through the outer boundary and only a minor part of it accretes into the black hole. Table 2 presents the ratio of the mass accretion rate $`\dot{M}_0`$, measured at the inner boundary, to the mass outflow rate $`\dot{M}_{out}`$ through the outer boundary for a variety of models. The selected models are both stable and unstable, and we use the time-averaged rates in the case of unstable models. From Table 2 one can see that in most cases the ratio $`\dot{M}_0/\dot{M}_{out}`$ is very small, $`\sim 10^{-2}`$–$`10^{-3}`$, and shows a complicated dependence on $`\alpha `$ and $`\gamma `$. The smallest relative accretion rates correspond to the models with a small viscosity (Models J, K and L), and a smaller ratio corresponds to a larger $`\gamma `$ in these models. 
A similar dependence of $`\dot{M}_0/\dot{M}_{out}`$ on $`\gamma `$ can also be seen for other fixed values of $`\alpha `$. However, Model E demonstrates a peculiar property: the mass accretion rate is about two times larger than the outflow rate. We will discuss this peculiar model in detail later and show that it is closely related to the ‘standard’ self-similar ADAF solutions (Narayan & Yi 1994), whereas other models are either not related to self-similar ADAFs, or related to self-similar ADAFs of a new kind (Narayan et al. 2000; Quataert & Gruzinov 2000). A small value of the ratio $`\dot{M}_0/\dot{M}_{out}`$ does not indicate powerful unbound outflows. In some models the large outflow rate is only a consequence of our choice of the geometry of matter injection. To confirm this point, we present in Figure 2 a variety of histograms, which show the fraction of matter that flows out beyond the outer boundary and has a fixed value of the dimensionless Bernoulli parameter $`Be`$, $$Be=\left(\frac{1}{2}v^2+W-\frac{GM}{r}\right)/\frac{GM}{r}.$$ $`(3.1)`$ Here $`W=\gamma c_s^2/(\gamma -1)`$ is the specific enthalpy. The histograms are shown for Models A, D, G and J, all with fixed $`\gamma =5/3`$ and different values of $`\alpha `$. We have used the standard normalization $`\sum \mathrm{\Delta }\dot{m}_{out}(Be)=1`$ in the histograms. The matter with positive $`Be`$ is gravitationally unbound and can form outflows, whereas the matter with negative $`Be`$ is gravitationally bound and cannot escape to a large radial distance. One can see in Figure 2 that the high viscosity flows (Models A and D) form powerful unbound outflows with positive and large $`Be`$ on average, and only a minor part of the outflowing matter ($`\sim 20\%`$) is gravitationally bound. In the low viscosity flows (Models G and J) most of the matter that moved through the outer boundary has $`Be<0`$ and thus it remains gravitationally bound and cannot form powerful outflows. We have found a similar situation with respect to the formation of bound/unbound outflows in models with different $`\gamma `$. In general, flows with low $`\alpha `$ are bound and have no powerful outflows. As we explained in the Introduction, the numerical models of ADAFs can be divided into four types that are characterized by different flow patterns. We shall now describe the properties of the flows of the various types. #### 3.1.1 Bipolar outflows The high viscosity ($`\alpha =1`$) Models A, B, C and the moderate viscosity ($`\alpha =0.3`$) Model D are stationary, symmetric with respect to the equatorial plane, and show a flow pattern with equatorial inflow and bipolar outflows. In Models A, B and C the mass is strongly concentrated towards the equatorial plane, and the flow patterns depend weakly on $`\gamma `$. The angular momentum is significantly smaller than the Keplerian one, and the pressure gradient force plays a major role in balancing gravity. We discuss here two representative high viscosity Models A ($`\gamma =5/3`$) and C ($`\gamma =4/3`$). Model B has properties intermediate between those of Models A and C. Figures 3 and 4 present selected properties of Models A and C, respectively, in the meridional cross-section. Four panels in each figure show the distributions of density $`\rho `$ (upper left), pressure $`P`$ (upper right), momentum vectors $`\rho \vec{v}`$ multiplied by $`r`$ (lower left), and Mach number $`\mathcal{M}=\sqrt{v_r^2+v_\theta ^2}/\sqrt{\gamma }c_s`$ (lower right). 
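The diagnostics just introduced — the Bernoulli parameter of Eq. (3.1) and the Mach number $`\mathcal{M}`$ — are simple pointwise functions of the flow variables, and a Figure 2-style histogram of the outflow rate versus $`Be`$ follows directly from them. A minimal sketch of how they might be evaluated on a discrete ($`r`$, $`\theta `$) grid is given here (Python/NumPy; the array names, the uniform $`\theta `$ spacing and the binning are illustrative assumptions, not taken from the original code):

```python
import numpy as np

def bernoulli(v2, c_s2, r, gamma, G=1.0, M=1.0):
    """Dimensionless Bernoulli parameter, Eq. (3.1):
    Be = (v^2/2 + W - GM/r) / (GM/r), with specific enthalpy W = gamma c_s^2/(gamma - 1)."""
    W = gamma * c_s2 / (gamma - 1.0)
    return (0.5 * v2 + W - G * M / r) / (G * M / r)

def mach_number(v_r, v_theta, c_s2, gamma):
    """Poloidal Mach number: sqrt(v_r^2 + v_theta^2) / (sqrt(gamma) c_s)."""
    return np.sqrt(v_r**2 + v_theta**2) / np.sqrt(gamma * c_s2)

def outflow_histogram(rho, v_r, v2, c_s2, theta, r_out, gamma, bins=30):
    """Mass outflow rate through r = r_out binned in Be, normalized so the bins
    sum to unity (as in Figure 2); bins with Be > 0 contain unbound matter."""
    dtheta = theta[1] - theta[0]                  # assumes uniform theta spacing
    dm = (2.0 * np.pi * r_out**2 * rho * np.clip(v_r, 0.0, None)
          * np.sin(theta) * dtheta)               # only outflowing gas (v_r > 0)
    be = bernoulli(v2, c_s2, r_out, gamma)
    hist, edges = np.histogram(be, bins=bins, weights=dm)
    return hist / hist.sum(), edges
```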
The corresponding distributions in Figures 3 and 4 are almost identical, except for the distributions of $`\mathcal{M}`$. In Model A (as well as in Models B and D) the flow is everywhere subsonic up to the inner absorbing boundary at $`3r_g`$. In the equatorial inflow the radial profile of $`\mathcal{M}`$ is flat and reaches the maximum value $`\mathcal{M}\approx 0.7`$. In Model C the equatorial inflow is supersonic in a large range of radii inside about $`3\times 10^3r_g`$. The equatorial values of $`\mathcal{M}(r)`$ increase with decreasing radius and take the maximum value $`\mathcal{M}=2.4`$ at the inner boundary. Thus, the presence of the supersonic or subsonic inflow in high viscosity models depends on the value of $`\gamma `$, which controls the ‘hardness’ of the equation of state (2.4). We note that in purely radial supersonic inflows the viscous torque cannot be efficient because of a reduction of the upstream transport of viscous interactions. This effect was referred to as a ‘causal’ effect (Popham & Narayan 1992). Due to this effect the supersonic inflow in geometrically thin and centrifugally supported accretion disks is possible only in the innermost region with radius close to the radius of the last stable black hole orbit $`r_s`$. However, in the case of Model C the viscous interaction between the supersonic equatorial inflow and the outflowing material plays an important role. The expansion velocities in the outflows are subsonic, and there is no problem with causality. In this case the inflow can be supersonic for radii $`r\gg r_s`$. To quantitatively characterize the process of bipolar outflows we have estimated the ‘mass inflow rate’ $`\dot{M}_{in}(r)`$ by adding up all the inflowing gas elements (with $`v_r<0`$) at a given radius $`r`$, and compared $`\dot{M}_{in}`$ with the net accretion rate $`\dot{M}_0`$. The results are shown in Figure 5 for Models A (solid line), C (dotted line) and D (dashed line). The curves do not show power-law dependences, because they do not look like straight lines on the log–log plot. From Figure 5 one can see that $`\dot{M}_{in}`$, and correspondingly the ‘mass outflow rate’, $`\dot{M}_{out}=\dot{M}_{in}-\dot{M}_0`$, strongly depend on $`\gamma `$ at a fixed $`\alpha `$. In the case of Model A only about $`1/7`$ of the matter that inflows at $`r=1000r_g`$ reaches the black hole, whereas in the case of Model C the fraction is about $`1/2`$. Model D shows that a reduction of $`\alpha `$ by three times, with respect to the one for Model A, results in a $`2`$–$`3`$ times suppression of the mass outflow rate $`\dot{M}_{out}`$ in the radial range of $`10^2`$–$`10^3r_g`$. Models with smaller $`\gamma `$ show a tendency towards larger suppression of $`\dot{M}_{out}`$ with decreasing $`\alpha `$. It is interesting to compare the behaviour of the dimensionless quantities $$\lambda =\frac{1}{\dot{M}_{in}\ell _K}\left(\dot{M}_{in}\ell +\int 2\pi r^3\mathrm{\Pi }_{r\varphi }\mathrm{sin}\theta d\theta \right),$$ $`(3.2)`$ and $$ϵ=\frac{1}{\dot{M}_{in}v_K^2}\left(\dot{E}_{adv}+\dot{E}_{visc}\right),$$ $`(3.3)`$ in our models with the prediction of self-similar solutions in which $`\lambda `$ and $`ϵ`$ are constants (see Blandford & Begelman 1999). 
In (3.2) and (3.3) we have used the following notation, $$\dot{E}_{adv}=\int 2\pi r^2\rho v_r\left(\frac{v^2}{2}+W+\mathrm{\Phi }\right)\mathrm{sin}\theta d\theta ,$$ $`(3.4)`$ $$\dot{E}_{visc}=\int 2\pi r^2(v_r\mathrm{\Pi }_{rr}+v_\theta \mathrm{\Pi }_{r\theta }+v_\varphi \mathrm{\Pi }_{r\varphi })\mathrm{sin}\theta d\theta ,$$ $`(3.5)`$ $`\ell =v_\varphi r\mathrm{sin}\theta `$ is the specific angular momentum, and $`v_K=\mathrm{\Omega }_Kr`$ and $`\ell _K=\mathrm{\Omega }_Kr^2`$ are the Keplerian velocity and specific angular momentum, respectively. Integration in (3.2)-(3.5) has been taken over those angles $`\theta `$ for which $`v_r<0`$. In our models $`\lambda `$ and $`ϵ`$ vary with radius. The functions $`\lambda (r)`$ and $`ϵ(r)`$ are plotted in Figure 6 by the solid, dotted and dashed lines for Models A, C and D, respectively. Analysing the dependences in Figure 6, as well as the behaviour of $`\dot{M}_{in}`$ in Figure 5, one concludes that Models A, C and D do not show self-similar behaviour. Figures 7 and 8 show the angular structure (in the $`\theta `$-direction) in Models A and D, respectively, at four radial positions of $`r=30r_g`$ (long-dashed lines), $`100r_g`$ (dashed lines), $`300r_g`$ (dotted lines) and $`1000r_g`$ (solid lines). The angular distribution of density demonstrates a considerable concentration of matter towards the equatorial plane in the high viscosity Model A. The concentration is less significant in the case of the moderate viscosity Model D. The models show quite flat distributions of angular velocity $`\mathrm{\Omega }`$, especially at smaller $`r`$. The radial velocities $`v_r`$ are negative in the equatorial inflowing regions, where the mass concentration takes place. The wide polar regions are filled by the unbound outflowing matter with positive $`v_r\sim v_K`$. The polar outflows are less effectively accelerated in Model D, which results in a reduction of the mass outflow rates $`\dot{M}_{out}`$ in comparison with those of Model A (dashed and solid lines in Figure 5, respectively). In the polar regions $`c_s/v_K\gtrsim 1`$, whereas in the inflowing part one has $`c_s/v_K<1`$. Note that the ratio $`c_s/v_K`$ equals the relative thickness of the accretion flow in the vertically averaged accretion disk theory (e.g. Shakura & Sunyaev 1973), $`h/r=c_s/v_K`$. In accretion disks one has $`h/r\ll 1`$. The case $`h/r\gtrsim 1`$ corresponds to a thermally expanded unbound gas cloud. Models B and C demonstrate angular structures qualitatively similar to that of Model A. Figure 9 shows the averaged radial structure for a variety of models. Models A, C and D are represented by the solid, dotted and dashed lines, respectively. All plotted quantities, $`\rho `$, $`\mathrm{\Omega }`$, $`v_r`$, $`c_s`$, have been averaged over the polar angle $`\theta `$ with the weighting function $`\rho `$, except for $`\rho `$ itself. The density and sound-speed profiles in the models can approximately be described by radial power-laws, with $`\rho \propto r^{-1}`$ and $`c_s\propto r^{-1/2}`$. The radial velocity is about $`v_r\propto r^{-1}`$ in the case of Models A and D, but does not show any distinct power-law dependence in the case of Model C. The latter model shows a faster increase of the radial velocity with decreasing radius. The most significant difference between the high and moderate viscosity models can be seen in the radial profiles of $`\mathrm{\Omega }`$. The angular velocity $`\mathrm{\Omega }`$ shows quite unexpected behaviour in the case of the high viscosity Models A and C. 
In Model A, $`\mathrm{\Omega }r^{1/2}`$ in all ranges of the radii. This radial dependence is significantly flatter than the one for the Keplerian angular velocity, $`\mathrm{\Omega }_Kr^{3/2}`$. In Model C, $`\mathrm{\Omega }r^{1/2}`$ in the outer region, at $`r10^3r_g`$, and $`\mathrm{\Omega }\mathrm{\Omega }_Kr^{3/2}`$ in the inner region, at $`r10^2r_g`$. The steeper dependence of $`\mathrm{\Omega }(r)`$ in the inner region of Model C could be due to the equatorial supersonic inflow (see Figure 4, lower right panel). In moderate viscosity Model D, the angular velocity profile is not surprising, it is quite close to the Keplerian one. #### 3.1.2 Pure inflows As we noted earlier, Model E demonstrates some peculiar properties. The model is stable and does not form outflows except very close to the outer boundary, at $`r4000r_g`$. Figure 10 shows some selected properties of Model E in the meridional cross-section. The flow pattern looks very similar to the one for the spherical accretion flow. However, contrary to spherical flows, the rotating accretion flow in Model E has a reduced inflow rate at the equatorial and polar regions (compare vectors in the lower left panel of Figure 10) and a corresponding local decrease of Mach number there (Figure 10, lower right panel). The net radial energy flux is close to zero for $`r10^3r_g`$ in this model; the inward advection of energy balances the outward-directed energy flux due to viscosity (see plot for $`ϵ`$ in Figure 6, long-dashed line). Such a balance of the inward and outward energy fluxes in Model E coincides with a property of the self-similar ADAF solutions, in which $`ϵ=0`$. An other property of the self-similar ADAFs is that $`\lambda =0`$. The latter property is a consequence of the assumption that the inner boundary located at $`r=0`$, and there is a zero outward flux of angular momentum. In reality, however, the inner boundary locates at a finite radius, and there must be a non-zero outward flux of angular momentum $`\dot{M}_0\mathrm{}(r_{in})`$. In this case $`\lambda `$ is a function of radius, $`\lambda (r)\mathrm{}_K^1r^{1/2}`$. Model E confirms the latter dependence in a wide range of radii, at $`r2\times 10^3r_g`$, as can be seen from the $`\lambda (r)`$ plot in the lower panel of Figure 6 (long-dashed line). It is interesting to note that Model D demonstrates a similar behaviour of $`\lambda (r)`$ (dashed line in the lower panel of Figure 6) in the innermost part, at $`r30r_g`$, where the outflow rate $`\dot{M}_{out}`$ is small (see Figure 5). The radial and angular structure of Model E can be seen in Figures 9 (long-dashed lines) and 11. The radial profiles of the $`\theta `$-averaged $`\rho `$, $`\mathrm{\Omega }`$ and $`c_s`$ are very close to those for the self-similar ADAFs: $`\rho r^{3/2}`$, $`\mathrm{\Omega }r^{3/2}`$ and $`c_sr^{1/2}`$. But, the angular profiles of them do not correspond to any of the two-dimensional self-similar solutions found by Narayan & Yi (1995a). The discrepancy is connected with a reduction of mass inflow rate in the equatorial region clearly seen in Model E. The parameter $`Be`$ is positive in the inner region of Model E, at $`r10^3r_g`$, and there are no outflows. #### 3.1.3 Large-scale circulations Models F, G, H and I with large-scale circulations have a moderate viscosity, $`0.1\alpha 0.3`$. In the ($`\alpha `$, $`\gamma `$) plane they locate on both sides of the line that separates the stable and unstable flows (see Figure 1, left panel). 
Below we discuss representative stable Model G and unstable Models F and I. The flow pattern for stable Model G is presented in Figure 12. In the lower left panel of Figure 12 one can see the inner part of a meridional cross-section of the global circulation cell which has a torus-like form in three dimensions. The polar outflow in the upper hemisphere becomes supersonic from $`1000r_g`$ outward. The polar funnel in the lower hemisphere filled up by low-density matter is clearly seen in the distributions of density (upper left), pressure (upper right) and Mach number (lower right). The low-density matter in the funnel forms an accretion flow at small $`r`$ and an outflow at larger $`r`$. The boundary between these inflowing and outflowing parts is variable and determines the supersonic/subsonic accretion regime in the funnel. At the particular moment shown in Figure 12 the boundary between the inflow and outflow in the funnel is located at small $`r`$ and the accretion flow is mostly subsonic. Figure 13 shows the angular structure of the flow in Model G. The equatorial asymmetry of the flow pattern introduces the asymmetry in the angular profiles of all quantities shown. Note the impressive similarity of the profiles at different radial distances except regions close to the poles. Snapshots of the unstable Models F and I are given in Figure 14, left and right panels, respectively. Model F has a stable global flow pattern dominated by an unipolar circulation motion. The flow is quasi-periodically perturbed by growing hot convective bubbles. These bubbles originate in the innermost part of the accretion flow close to the equatorial plane. They are hotter and lighter than the surrounding matter, and the Archimedes buoyancy forces them to move outward. During the motion, the bubbles are heated up further due to viscous dissipation and migrate from the equatorial region to the upper hemisphere. In Figure 14 (left) one can see the growing bubble in the density contours inside $`r500r_g`$. The structure seen in the upper polar region at $`r2000r_g`$ is a ‘tail’ of the previous bubble. Less viscous Model I does not show a regular flow pattern. The hot convective bubbles outflow quasi-periodically in both the upper and lower hemispheres without a preferable direction. Figure 14 (right) shows sequences of the convective bubbles in the polar regions of both hemispheres. The regular flow pattern with the symmetric bipolar outflows and equatorial inflow is seen only through the time averaging in Model I. #### 3.1.4 Convective flows All our low viscosity models with $`\alpha 0.03`$ are convectively unstable independently of $`\gamma `$. They show complicated time-dependent flow patterns which consist of numerous vortices and circulations. The snapshot of such a behaviour of accretion flow in the case of Model M is presented in Figure 15 (left). It demonstrates non-monotonic distributions of density and velocity in the innermost part of the flow. In spite of the significant time-variability of the accretion flow we have found a tendency towards formation of temporal coherent structures which look like convective cells extended in the radial direction. In these structures, the inflowing streams of matter are sandwiched in the $`\theta `$-direction by the outflowing streams. These structures can be seen in the velocity vectors and in the characteristic radial features of the density distribution in the left panel of Figure 15. 
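The time-averaged fields and the velocity-fluctuation correlation discussed below are obtained by accumulating many snapshots of the flow. A minimal sketch of how such averages might be formed is given here (Python/NumPy; the snapshot layout and the function name are illustrative assumptions, not taken from the original code):

```python
import numpy as np

def time_average(snapshots):
    """Time averages of density and velocity fields, plus the correlation
    <v_r' v_phi'> of the velocity fluctuations, over a list of snapshots,
    each a dict of 2D (r, theta) arrays."""
    keys = ("rho", "v_r", "v_theta", "v_phi")
    mean = {k: np.mean([s[k] for s in snapshots], axis=0) for k in keys}
    # fluctuations about the time-averaged fields
    dv_r = [s["v_r"] - mean["v_r"] for s in snapshots]
    dv_phi = [s["v_phi"] - mean["v_phi"] for s in snapshots]
    # negative values of this correlation indicate inward transport of
    # angular momentum by the resolved convective motions
    mean["vr_vphi_corr"] = np.mean([a * b for a, b in zip(dv_r, dv_phi)], axis=0)
    return mean
```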
The time-averaged flow patterns are smooth and do not demonstrate small-scale features. Figure 15 (right) shows the time-averaged distributions of the density and momentum vectors of Model M. In the picture one can clearly see that the accretion is suppressed in the equatorial sector and the mass inflows concentrate mainly along the upper and lower surfaces of the torus-like accretion disk. Figure 16 shows the angular structure of the time-averaged flow from Model M. All quantities have been averaged over about $`44`$ periods of the Keplerian rotation at $`r=100r_g`$. The angular profiles of density reach their maximum values at the equator and decrease towards the poles. The profiles have an almost identical form at each radius. The similarity of the angular profiles at different $`r`$ can be see as well in the variables $`\mathrm{\Omega }`$, $`v_r`$ and $`c_s`$, except in the regions which are close to the poles. The averaged radial velocity is almost zero over most of the $`\theta `$-range. It is only close to the poles that we have non-zero velocities (for $`r300r_g`$ they are negative and for larger $`r`$ they are positive). The polar inflows are highly supersonic in the low viscosity models as can be seen by comparing the values of $`v_r`$ and $`c_s`$ from the lower left and lower right panels of Figure 16. Figure 17 shows the radial structure of the time-averaged flows in Models J (dotted lines), K (dashed lines), L (long-dashed lines) and M (solid lines). It uses the same $`\theta `$-averaging as in Figure 9. In all models, the variables $`\mathrm{\Omega }`$ and $`c_s`$ can be described by a radial power-law, with $`\mathrm{\Omega }r^{3/2}`$ and $`c_sr^{1/2}`$, in the radial range $`1010^3r_g`$. The radial density profile can also be approximated by a power-law, $`\rho r^\beta `$, where the index $`\beta `$ varies from $`\beta 0.5`$ for Models J and M to $`\beta 0.7`$ for Models K and L. The radial velocities are connected to the density profiles by the relation $`v_rr^2\rho ^1r^{\beta 2}`$ with good accuracy. Such a fast increase of $`v_r`$ inward, with respect to the free-fall velocity, $`v_{ff}r^{1/2}`$, means that $`v_r`$ is very small at large radii in comparison with the predictions of the ‘standard’ self-similar ADAF solutions, for which $`v_rv_{ff}r^{1/2}`$. The angular velocities are close to the Keplerian ones everywhere, and the $`\mathrm{\Omega }(r)`$ profiles for Models K, M, L have a super-Keplerian part at the innermost region, which is typical for thick accretion disks. The most interesting and important property of the low viscosity models is that the convection transports angular momentum inward rather than outward, as it does in the case of ordinary viscosity. The direction of the angular momentum transport is determined by the sign of the ($`r\varphi `$)-component of the Reynolds stress tensor, $`\tau _{r\varphi }=v_r^{}v_\varphi ^{}`$, where $`v_r^{}`$, $`v_\varphi ^{}`$ are the velocity fluctuations and $`\mathrm{}`$ means time-averaging. Negative/positive sign of $`\tau _{r\varphi }`$ corresponds to inward/outward angular momentum transport. Figure 18 shows the distribution of $`\tau _{r\varphi }`$ in the meridional cross-section of Model M. It is clearly seen that $`\tau _{r\varphi }`$ is negative in most of the flow, and thus convective motions transport angular momentum inward on the whole. All low viscosity models have negative, volume averaged, $`Be`$ in all ranges of the radii. 
Only temporal convective blobs and narrow (in the $`\theta `$-direction) regions with outflowing matter at the disk surfaces show positive $`Be`$. Our numerical results for the low viscosity models agree with those calculated in our earlier papers (Igumenshchev et al. 1996; Igumenshchev & Abramowicz 1999), and are very similar to those obtained later by Stone et al. (1999). The models of Stone et al. (1999) are convectively unstable and the time-averaged radial inward velocity is significantly reduced in the bulk of the accretion flows. Stone et al. (1999) have checked several radial scaling laws for the viscosity and have found the same radial dependence of the time-averaged quantities ($`\rho `$, $`\mathrm{\Omega }`$ and $`c_s`$), as those presented here, in the case of $`\nu r^{1/2}`$, which is analogous to the $`\alpha `$-prescription (2.5). ### 3.2 Models with thermal conduction We have assumed the Prandtl number $`Pr=1`$ in the models with thermal conduction (see Table 1). The thermal conduction does not introduce qualitatively new types of flow patterns in addition to those that were discussed in §3.1. However, it leads to important quantitative differences. Firstly, the thermal conduction makes the contrasts of specific entropy smaller, which leads to a suppression of the small-scale convection in the low and moderate viscosity models. Secondly, the thermal conduction acts as a cooling agent in the outflows, reducing or even suppressing them in the models of moderate viscosity. Calculations that include thermal conduction cover a smaller region in the ($`\alpha `$, $`\gamma `$) plane than those of the non-conductive models. Figure 1 (right panel) summarizes some properties of the models with thermal conduction. All computed models are stable. We have found only two types of flow patterns: pure inflow (like in Model E) for all models with $`\alpha =0.3`$, and global circulation (like in Model G) for models with smaller $`\alpha `$. Figures 19 and 20 show the two-dimensional structure of Models N and P, respectively, in the meridional cross section. Distributions of density and pressure are almost spherical in both models. Important differences between the models can be seen in the distribution of the momentum vectors (lower left panel) and the Mach number (lower right panel). In Model N the momentum vectors are distributed almost spherically at $`r500r_g`$. At $`r800r_g`$ in the polar directions there are two stagnation points which divide inflows from outflows. The distribution of the Mach number is flat, and the flow is significantly subsonic everywhere. In Model P the inward mass flux spreads in the wide ($`45^{}`$) polar regions. The equatorial inflow is relatively small. The distribution of the Mach number in Model P has an equatorial minimum and increases towards the poles. At $`r200r_g`$ the polar inflows are supersonic. Figure 21 presents the radial structure of the flow in Models N (dashed lines), O (long-dashed lines) and P (dotted lines). The density profiles can be approximated by a radial power-law, $`\rho r^\beta `$, where the index $`\beta 1`$ for Model N, and $`\beta 1.5`$ for Models O and P. The profiles of $`\mathrm{\Omega }(r)`$ show a different behavior for each model. In Model N the values of $`\mathrm{\Omega }`$ is significantly reduced with respect to $`\mathrm{\Omega }_K`$ in the inner region, $`r10^3r_g`$. In Models O and P the drop of $`\mathrm{\Omega }`$ is less significant. 
The ratio of $`\mathrm{\Omega }/\mathrm{\Omega }_K`$ goes to a limiting value in the latter models: $`\mathrm{\Omega }/\mathrm{\Omega }_K\approx 0.6`$ in the case of Model P and $`\approx 0.3`$ in the case of Model O. The radial inward velocities increase with decreasing $`\gamma `$ in the sequence of models from N to O and P, and are well described by the power-law $`v_r\propto r^{\beta -2}`$. The profiles of $`c_s(r)`$ change only weakly from model to model. Models Q, R and S have almost identical flow patterns and show a weak dependence on $`\alpha `$ and $`\gamma `$. As an illustrative example we present the flow pattern of Model Q in Figure 22. As in the case of Model G discussed in §3.1.3, these models form stable global circulations. Contrary to what is seen in Model F, however, these models have a more pronounced accretion funnel in the lower hemisphere, with a highly supersonic matter inflow. Figure 23 shows the angular structure of Model Q. The comparison of this model with the properties of Model G in Figure 13 does not indicate significant quantitative differences in the flow structure except in the narrow polar regions. ## 4 Discussion The properties of ADAFs are often discussed in terms of one-dimensional (1D) vertically averaged analytic and numerical models. The comparison of our two-dimensional (2D) models with the 1D models of ADAFs constructed to date shows important qualitative differences, which we would like to stress here. Firstly, the 1D simulations of high and moderate viscosity ($`0.1\le \alpha \le 1`$) flows cannot reproduce bipolar outflows and large-scale circulations due to obvious, intrinsic limitations of the vertically averaged approach. Only for the pure inflow models could the 1D approach be adequate. However, ADAFs with pure inflows are realized only in a very small range of the parameters $`\alpha `$ and $`\gamma `$ and therefore this type of flow is not generic. Secondly, the low viscosity ($`\alpha \lesssim 0.1`$) 1D models of ADAFs constructed previously did not account at all, or did not account in all important details, for the effects of convection. 2D models show that the convection governs the structure of the low viscosity flows and significantly influences their predicted observational properties. Future low viscosity 1D ADAF models should account for the convection with all the important details (see Narayan et al. 2000). Convective accretion flows are quite different from the ‘standard’ self-similar ADAF solutions in several respects. Firstly, convective flows have a flattened density profile, $`\rho \propto r^{-\beta }`$ with $`\beta \approx 0.5`$–$`0.7`$ slightly dependent on $`\gamma `$, whereas $`\beta =3/2`$ in the case of the self-similar ADAFs (Narayan & Yi 1994). Secondly, there is a net outward energy flux in these flows provided by convective motions, whereas the self-similar ADAFs have a zero net energy flux. The amount of energy transported outward by convection is $`\sim 10^{-2}\dot{M}_0c^2`$, and the value only moderately depends on $`\alpha `$ and $`\gamma `$. These effects have important implications for the spectra and luminosities of accreting black holes. Indeed, since $`\rho \propto r^{-1/2}`$ and $`T\propto r^{-1}`$, the bremsstrahlung cooling rate per unit volume varies as $`Q_{br}\propto \rho ^2T^{1/2}\propto r^{-3/2}`$ in the case of convective flows, and $`Q_{br}\propto r^{-7/2}`$ in the case of self-similar ADAFs. 
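The consequence of these scalings for where the luminosity is produced can be made explicit with a simple order-of-magnitude sketch, using only the scalings quoted above and the volume element $`dV\propto r^2dr`$:

$$\int ^rQ_{br}^{conv}r^2dr\propto \int ^rr^{1/2}dr\propto r^{3/2},\qquad \int _{r_{in}}^rQ_{br}^{ss}r^2dr\propto \int _{r_{in}}^rr^{-3/2}dr\propto r_{in}^{-1/2}-r^{-1/2},$$

so the first integral is dominated by its upper (large-radius) limit, while the second converges and is dominated by radii near $`r_{in}`$.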
Integrating $`Q_{br}`$ over a spherical volume, one can thus demonstrate that most of the energy losses in convective accretion flows occur on the outside, whereas most of the energy losses in self-similar ADAFs take place in the innermost region. Recent analytic works by Narayan et al. (2000) and Quataert & Gruzinov (2000) have provided considerable insight into the properties of accretion flows with convection. The analytic analysis is based on the ansatz (confirmed by our previous and present numerical simulations) that convective motions transport angular momentum inward, rather than outward. All the basic properties of numerical models with convection were reproduced in terms of a new self-similar solution in the works mentioned. Note that the properties of convective models found in 2D simulations have recently been confirmed in 3D simulations. These properties of convective accretion flows could provide the physical explanation of the phenomenon of ‘evaporation’ of the Shakura-Sunyaev thin disk, with the subsequent formation of an ADAF in the innermost region: the convective outward energy transport in optically thin ADAFs can power the evaporation process, in a way similar to that proposed by Honma (1996) in the case of turbulent thermal conduction (see, however, Abramowicz, Björnsson & Igumenshchev 2000a for a critical discussion of the Honma model). In some respects the flows with large-scale circulations have a close resemblance to those with convection. All the basic properties of the two types of flows, including the flattened radial density profiles and the outward energy transport, are very similar. A scenario in which the energy can be radiated by the gas on the outside in flows with global meridional circulations was discussed by Igumenshchev (2000). It seems that in flows with large-scale circulations, the viscosity is large enough to suppress the convection on scales $`\ll r`$, but convective motions with scale $`\sim r`$ still survive. Self-similar ADAFs always have an averaged $`Be>0`$, and it was argued that this can lead to the formation of outflows (Narayan & Yi 1994, 1995a). Based on this idea, Blandford & Begelman (1999) made the strong assertion that all radiatively inefficient accretion flows must form unbounded powerful bipolar outflows (ADIOs, the advection-dominated inflows-outflows). Our previous and present investigations do not confirm the physical consistency of the ADIOs idea. Abramowicz, Lasota & Igumenshchev (2000b) have stressed that, obviously, $`Be>0`$ is only a necessary, but not a sufficient condition for unbounded outflows. They provided an explicit numerical example in which a 2D ADAF with $`Be>0`$ has no outflows (see the analogous Model E in this study). Abramowicz et al. (2000b) have also shown that $`Be<0`$ for low viscosity ADAFs that fulfill physically reasonable outer and inner boundary conditions, and have an angular momentum distribution close to that of Paczyński’s (1999) toy ADAF model. Numerical results of Igumenshchev & Abramowicz (1999) and of this paper indicate that the convective energy transport provides an additional cooling mechanism that always makes $`Be<0`$ in low viscosity ADAFs, not only in those with the Paczyński-toy angular momentum distribution. Thus, ADIOs do not exist if $`\alpha \lesssim 0.3`$. Models with very high viscosity, $`\alpha \sim 1`$, do indeed show behaviour similar to that postulated for ADIOs. 
However, as explained in §3.1.1, these models are significantly non-self-similar, and therefore the Blandford & Begelman’s (1999) solution is not an adequate representation of them. Thermal conduction was studied by Gruzinov (2000) in the context of turbulent spherical accretion. He assumed the conduction to be proportional to the temperature gradient, and found that the accretion rate in this case can be significantly reduced compared with the Bondi rate. Gruzinov’s result is quite different to ours, because we found that the conduction acts in such a way that the accretion rates increase. These differences could indicate that the problem strongly depends on the assumed prescription for thermal conduction. ## 5 Conclusions We have performed a systematic study of 2D axisymmetric viscous rotating accretion flows into black holes in which radiative losses are neglected. Assumptions and numerical technique adopted here are similar to those used by Igumenshchev & Abramowicz (1999); a few modifications are connected to inclusion of thermal conduction. The thermal conduction flux was chosen to be proportional to the specific entropy gradient. We assumed that mass is steadily injected within an equatorial torus near the outer boundary of the spherical grid. The injected mass spreads due to the action of viscous shear stress and accretes. We set an absorbing inner boundary condition for the inflow at $`r_{in}=3r_g`$. We study the flow structure over three decades in radius using a variety of values of the viscosity parameter $`\alpha `$ and adiabatic index $`\gamma `$. Our models without thermal conduction cover an extended region in the ($`\alpha `$, $`\gamma `$) plane, $`0.01\alpha 1`$ and $`4/3\gamma 5/3`$. We have found four types of pattern for the accretion flow, which had been found earlier in two-dimensional simulations by Igumenshchev & Abramowicz (1999), Stone et al. (1999) and Igumenshchev (2000). The type of flow mainly depends on the value of $`\alpha `$, and is less dependent on the value of $`\gamma `$. The high viscosity models, $`\alpha 1`$, form powerful bipolar outflows. The pure inflow and large-scale circulations patterns occur in the moderate viscosity models, $`\alpha 0.10.3`$. The low viscosity ($`\alpha 0.03`$) models exhibit strong convection. All models with bipolar outflows and pure inflow are steady. The models with large-scale circulations could be either steady or unsteady depending on the values of $`\alpha `$ and $`\gamma `$. All convective models are unsteady. Some of our pure inflow and convective models do show a self-similar behaviour. In particular, the pure inflow model ($`\alpha =0.3`$, $`\gamma =3/2`$) reasonably well satisfies the predictions of the self-similar solutions of Gilham (1981) and Narayan & Yi (1994) in which $`\rho r^{3/2}`$. The convective accretion flows show a good agreement with the self-similar solutions recently found by Narayan et al. (2000) and Quataert & Gruzinov (2000) in which $`\rho r^{1/2}`$. The latter solutions have been constructed for the convection transporting angular momentum towards the gravitational center. The self-similar solutions for accretion flows with bipolar outflows (ADIOs) proposed by Blandford & Begelman (1999) have not been confirmed in our numerical simulations. 
The most interesting feature of the flows with large-scale circulations and of the convective accretion flows is the non-zero outward energy flux, which is equivalent to an effective luminosity $`\sim 10^{-2}\dot{M}_0c^2`$, where $`\dot{M}_0`$ is the black hole accretion rate. This result has important implications for the interpretation of observations of accreting black hole candidates and neutron stars. The accretion flows with thermal conduction have not been studied as completely as the non-conductive flows. The conductive models show only two types of laminar flow patterns: pure inflow (with $`\alpha =0.3`$) and global circulation (with $`\alpha \approx 0.03`$–$`0.1`$). The thermal conduction mainly acts as a cooling agent in our models; it suppresses bipolar outflows and convective motions. Acknowledgments. The authors gratefully thank Ramesh Narayan for help with the interpretation of the numerical results and comments on a draft of the paper, Ed Spiegel for pointing out the importance of thermal conduction in viscous accretion flows, Rickard Jonsson for useful comments, and Jim Stone and Eliot Quataert for discussions. The work was supported by the Royal Swedish Academy of Sciences.
no-problem/0003/quant-ph0003023.html
ar5iv
text
# Maximally entangled mixed states in two qubits ## Abstract We propose novel mixed states in two qubits, “maximally entangled mixed states”, which have the property that the amount of entanglement of these states cannot be increased further by applying any unitary operations. The property is proven when the rank of the states is less than 4, and confirmed numerically in the other general cases. The corresponding entanglement of formation specified by its eigenvalues gives an upper bound of that for density matrices with the same eigenvalues. Entanglement (or inseparability) is one of the most striking features of quantum mechanics and an important resource for most applications of quantum information. In quantum computers, the quantum information stored in quantum bits (qubits) is processed by operating quantum gates. Multi-bit quantum gates, such as the controlled-NOT gate, are particularly important, since these gates can create entanglement between qubits. In recent years, quantifying the amount of entanglement has attracted much attention, and a number of measures, such as the entanglement of formation and negativity, have been proposed. When the system of qubits is in a pure state, the amount of entanglement can be changed through the gate operations from zero for separable states to unity for maximally entangled states. Most quantum algorithms are designed for such ideal pure states. When the system is maximally mixed, however, we cannot receive any benefit from entanglement in the quantum computation, since the density matrix of the system (the unit matrix) is invariantly separable under any unitary transformations or gate operations. Recently, a question about NMR quantum computation has been raised, since the states in the vicinity of the maximally mixed state are also always separable, as is the case in present NMR experiments. In all realistic systems, the mixture of the density matrix describing the qubits is inevitably increased by the coupling between the qubits and their surrounding environment. Therefore, it is extremely important to understand the nature of entanglement for general mixed states between the two extremes of pure states and the maximally mixed state. In this paper, we try to answer a simple question: how much does the increase of the mixture limit the amount of entanglement that can be generated by a gate operation or unitary transformation? We propose a class of mixed states in bipartite $`2\times 2`$ systems (two qubits). The states in this class, maximally entangled mixed states, have a maximum amount of entanglement in the sense that the entanglement of formation (and even the negativity) of these states cannot be increased further by applying any (local and/or nonlocal) unitary transformations. The property is rigorously proven in the case that the rank of the states is less than 4, and confirmed numerically in the case of rank 4. The corresponding entanglement of formation specified by its eigenvalues gives an upper bound of that for density matrices with the same eigenvalues. The entanglement of formation (EOF) for a pure state is defined as the von Neumann entropy of the reduced density matrix. The EOF of a mixed state is defined as $`E_F(\rho )=\mathrm{min}\sum _ip_iE_F(\psi _i)`$, where the minimum is taken over all possible decompositions of $`\rho `$ into pure states, $`\rho =\sum _ip_i|\psi _i\rangle \langle \psi _i|`$. 
The analytical form for EOF in 2$`\times `$2 systems is given by $$E_F(\rho )=H\left(\frac{1+\sqrt{1-C^2}}{2}\right),$$ (1) with $`H(x)`$ being Shannon’s entropy function. The concurrence $`C`$ is given by $$C=\mathrm{max}\{0,\lambda _1-\lambda _2-\lambda _3-\lambda _4\},$$ (2) where the $`\lambda `$’s are the square roots of the eigenvalues of $`\rho \stackrel{~}{\rho }`$ in decreasing order. The spin-flipped density matrix $`\stackrel{~}{\rho }`$ is defined as $$\stackrel{~}{\rho }=(\sigma _y^A\otimes \sigma _y^B)\rho ^{\ast }(\sigma _y^A\otimes \sigma _y^B),$$ (3) where $`\ast `$ denotes the complex conjugate in the computational basis. Since $`E_F`$ is a monotonic function of $`C`$, the maximum of $`C`$ corresponds to the maximum of $`E_F`$. The states we propose are those obtained by applying any local unitary transformations to $`M`$ $`=`$ $`p_1|\mathrm{\Psi }^{-}\rangle \langle \mathrm{\Psi }^{-}|+p_2|00\rangle \langle 00|`$ (4) $`+`$ $`p_3|\mathrm{\Psi }^{+}\rangle \langle \mathrm{\Psi }^{+}|+p_4|11\rangle \langle 11|,`$ (5) where $`|\mathrm{\Psi }^\pm \rangle =(|01\rangle \pm |10\rangle )/\sqrt{2}`$ are Bell states, and $`|00\rangle `$ and $`|11\rangle `$ are product states orthogonal to $`|\mathrm{\Psi }^\pm \rangle `$. Here, the $`p_i`$’s are the eigenvalues of $`M`$ in decreasing order ($`p_1\ge p_2\ge p_3\ge p_4`$), and $`p_1+p_2+p_3+p_4=1`$. These include states such as $`\rho `$ $`=`$ $`p_1|\mathrm{\Phi }^{-}\rangle \langle \mathrm{\Phi }^{-}|+p_2|01\rangle \langle 01|`$ (6) $`+`$ $`p_3|\mathrm{\Phi }^{+}\rangle \langle \mathrm{\Phi }^{+}|+p_4|10\rangle \langle 10|,`$ (7) where $`|\mathrm{\Phi }^\pm \rangle =(|00\rangle \pm |11\rangle )/\sqrt{2}`$ are also Bell states, and include those obtained by exchanging $`|\mathrm{\Psi }^{-}\rangle \leftrightarrow |\mathrm{\Psi }^{+}\rangle `$, $`|00\rangle \leftrightarrow |11\rangle `$ in Eq. (5), or $`|\mathrm{\Phi }^{-}\rangle \leftrightarrow |\mathrm{\Phi }^{+}\rangle `$, $`|01\rangle \leftrightarrow |10\rangle `$ in Eq. (7). Since entanglement is preserved by local unitary transformations, all these states have the same concurrence of $`C^{\ast }`$ $`=`$ $`\mathrm{max}\{0,C^{\ast }(p_i)\}`$ (8) $`C^{\ast }(p_i)`$ $`\equiv `$ $`p_1-p_3-2\sqrt{p_2p_4}`$ (9) The concurrence $`C^{\ast }`$ is maximum among the density matrices with the same eigenvalues, at least when the density matrices have a rank less than 4 ($`p_4=0`$). The proof is as follows: (1) Rank 1 case ($`p_2=p_3=p_4=0`$). In this case, Eq. (5) is reduced to $`M=|\mathrm{\Psi }^{-}\rangle \langle \mathrm{\Psi }^{-}|`$, which obviously has the maximum concurrence of unity. (2) Rank 2 case ($`p_3=p_4=0`$). Any density matrix of two qubits (not necessarily of rank 2) can be expressed as $$\rho =q|\psi \rangle \langle \psi |+(1-q)\rho _{\mathrm{sep}},$$ (10) where $`|\psi \rangle `$ is an entangled state and $`\rho _{\mathrm{sep}}`$ is a separable density matrix. Convexity of the concurrence implies that $$C(\rho )\le qC(|\psi \rangle \langle \psi |)+(1-q)C(\rho _{\mathrm{sep}})=qC(|\psi \rangle \langle \psi |).$$ (11) Since $`\rho _{\mathrm{sep}}`$ is a positive operator, $`q`$ is equal to or less than the maximum eigenvalue of $`\rho `$, and thus, $$C(\rho )\le p_1.$$ (12) The equality is satisfied when $`|\psi \rangle `$ is a maximally entangled pure state and it is an eigenvector of $`\rho `$ with the eigenvalue $`p_1`$. The upper bound in Eq. (12) coincides with $`C^{\ast }`$ for $`p_3=p_4=0`$. (3) Rank 3 case ($`p_4=0`$). Any rank 3 density matrix can be decomposed into two density matrices as $$\rho =(1-3p_3)\rho _2+3p_3\rho _3,$$ (13) where the eigenvalues of $`\rho _2`$ are $$\left\{\frac{p_1-p_3}{1-3p_3},\frac{p_2-p_3}{1-3p_3},0,0\right\},$$ (14) and the eigenvalues of $`\rho _3`$ are $`\{1/3,1/3,1/3,0\}`$. According to Lemma 3 in Ref. , $$\text{Tr}\rho ^2\le \frac{1}{3}\;\Rightarrow \;\rho \text{ is separable.}$$ (15) Since the purity of $`\rho _3`$ is $`1/3`$, $`\rho _3`$ is always separable. 
Therefore, convexity of the concurrence implies that $$C(\rho )(13p_3)C(\rho _2)p_1p_3.$$ (16) Here, we have used that, as shown above, the maximum concurrence of rank 2 density matrices is its maximum eigenvalue. The upper bound in Eq. (16) again coincides with $`C^{}`$ for $`p_4=0`$. In order to check whether $`C^{}`$ is maximum even in general $`p_40`$ cases, we have performed a numerical calculation whose scheme is similar to that in Ref. . We have generated 10,000 density matrices in a diagonal form with random four eigenvalues . The maximum concurrence has been obtained among 1,000,000 density matrices generated by multiplying random unitary matrices in the circular unitary ensemble to each of 10,000 diagonal matrices. The results are shown in Fig. 1 where the maximum concurrence are plotted as a function of the participation ratio ($`R=1/\text{Tr}\rho ^2`$). When the density matrix is close to the pure state ($`R=1`$), the maximum concurrence is also close to unity, as expected. For $`R3`$, the states are always separable (Eq. (15)) and the maximum is zero. In the region of $`1<R<3`$, the maximum tends to decrease with the increase of $`R`$, but the points rather broadly distribute. The same data are plotted as a function of $`C^{}`$ in the inset of Fig. 1. All points very closely distribute along the straight line of $`C=C^{}`$, and none of the points are present on the higher side of the line. This numerical result strongly support the hypothesis that $`C^{}`$ gives an upper bound of the concurrence even in the general cases of $`p_40`$. Accepting the hypothesis implies that all the states satisfying $`C^{}(p_i)0`$ become automatically separable. This condition of separability is looser than Eq. (15). In fact, $`C^{}(p_i)0`$ is only a necessary condition of $`\text{Tr}\rho ^21/3`$. Difficulty of the rigorous proof of the hypothesis, if it is true, might relate to the difficulty of complete understanding of separable-inseparable boundary in the 15-dimensional space of the density matrices due to its complex structure. We emphasize again that the numerical result strongly support that the hypothesis is true. It should be noted here that, when the eigenvalues of a density matrix satisfy a relation, $`C^{}`$ is indeed maximum even for $`p_40`$. Any rank 4 density matrices are decomposed as $$\rho =(p_1p_32\sqrt{p_2p_4})|11|+\rho _4,$$ (17) where $`|1`$ is an eigenvector of $`\rho `$, and eigenvalues of $`\rho _4`$ (not normalized) are $`\{p_3+2\sqrt{p_2p_4},p_2,p_3,p_4\}`$. When eigenvalues of $`\rho `$ satisfies $$p_3=p_2+p_4\sqrt{p_2p_4},$$ (18) the purity of (normalized) $`\rho _4`$ is equal to 1/3 and $`\rho _4`$ becomes always separable. Therefore, using the convexity of the concurrence again, the upper bound of the concurrence is proven to be $`C^{}`$ for density matrices satisfying Eq. (18). When $`p_2=p_3=p_4(1/4)`$, $`M`$ is reduced to the Werner state: $`M`$ $`=`$ $`p_1|\mathrm{\Psi }^{}\mathrm{\Psi }^{}|`$ (20) $`+{\displaystyle \frac{1p_1}{3}}(|\mathrm{\Psi }^+\mathrm{\Psi }^+|+|\mathrm{\Phi }^{}\mathrm{\Phi }^{}|+|\mathrm{\Phi }^+\mathrm{\Phi }^+|),`$ whose eigenvalues satisfies Eq. (18). Therefore, EOF of the Werner states cannot be increased further by any unitary transformations. It is worth to test whether the states we propose have maximum entanglement in the other entanglement measures. It has been shown that positive partial transpose is necessary condition for separability , and that it is also sufficient condition for $`2\times 2`$ and $`2\times 3`$ systems . 
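Before turning to the partial transpose, the following self-contained Python sketch (NumPy assumed) collects the quantities used above: the concurrence of Eqs. (1)-(3) with its relative minus signs written out, the state M of Eq. (5) together with the bound C* of Eq. (9), the partial-transpose negativity E_N used in the next paragraph, and a heavily scaled-down version of the random-basis search behind Fig. 1. The spectrum p_i chosen here is an arbitrary illustration, not taken from the paper.

```python
import numpy as np

sy = np.array([[0, -1j], [1j, 0]])
YY = np.kron(sy, sy)                    # sigma_y^A sigma_y^B

def concurrence(rho):
    """Wootters concurrence C = max{0, l1 - l2 - l3 - l4}, with l_i the decreasing
    square roots of the eigenvalues of rho * rho_tilde (Eqs. (1)-(3))."""
    rho_tilde = YY @ rho.conj() @ YY
    lam = np.sqrt(np.clip(np.linalg.eigvals(rho @ rho_tilde).real, 0.0, None))
    lam = np.sort(lam)[::-1]
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

def negativity(rho):
    """Modulus E_N of the negative eigenvalue of the partial transpose on qubit B."""
    rho_pt = rho.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)
    return float(max(0.0, -np.linalg.eigvalsh(rho_pt).min()))

# the state M of Eq. (5) for an illustrative spectrum p1 >= p2 >= p3 >= p4
p = np.array([0.50, 0.25, 0.15, 0.10])
psi_m = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)        # |Psi^->
psi_p = np.array([0.0, 1.0,  1.0, 0.0]) / np.sqrt(2)        # |Psi^+>
kets = [psi_m, np.eye(4)[0], psi_p, np.eye(4)[3]]           # |Psi^->, |00>, |Psi^+>, |11>
M = sum(pi * np.outer(v, v) for pi, v in zip(p, kets))

C_star = max(0.0, p[0] - p[2] - 2.0 * np.sqrt(p[1] * p[3])) # Eq. (9)
print(concurrence(M), C_star)       # the two agree
print(2.0 * negativity(M))          # negativity of M, to compare with Eq. (22)

# scaled-down random-basis search: same spectrum, random unitaries from the CUE
rng = np.random.default_rng(0)
def haar_unitary(n):
    z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

c_max = 0.0
for _ in range(2000):               # far fewer trials than the 10^6 used for Fig. 1
    U = haar_unitary(4)
    c_max = max(c_max, concurrence(U @ np.diag(p) @ U.conj().T))
print(c_max)                        # stays at or below C_star if the bound holds
```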
In $`2\times 2`$ systems, when the density matrix is entangled, its partial transpose has only one negative eigenvalue . The modulus of the negative eigenvalue ($`E_N`$) is one of entanglement measures, and twice of $`E_N`$ agrees with the negativity introduced in Ref. . We have performed a numerical calculation similar to that for Fig. 1, and obtained maximum of $`E_N`$ for 10,000 random density matrices. In the inset of Fig. 2, those are plotted as a function of $`2E_N^{}`$ $`=`$ $`\mathrm{max}\{0,2E_N^{}(p_i)\}`$ (21) $`2E_N^{}(p_i)`$ $``$ $`p_2p_4+\sqrt{(p_1p_3)^2+(p_2p_4)^2},`$ (22) which is the negativity of $`M`$. None of the points are present on the higher side of a straight line of $`E_N=E_N^{}`$ as in the case of the concurrence. While it has been shown that two measures (EOF and negativity) do not induce the same ordering of density matrices with respect to the amount of entanglement , the above numerical results suggest that $`M`$ has the maximum amount of entanglement in both two measures. It should be noted here that $`E_N^{}=0`$ is equivalent to $`C^{}=0`$, and therefore, the condition of $`C^{}(p_i)0`$, for which all the states will be separable as mentioned before, does not contradict to the condition of $`E_N^{}(p_i)0`$. Further, it will be natural to attribute the upper bound of entanglement, which is well described by $`C^{}(p_i)`$ and $`E_N^{}(p_i)`$, to the increase of the degree of the mixture of the states. In this sense, each of $`C^{}(p_i)`$ and $`2E_N^{}(p_i)`$ (both distribute in the range of $`[1/2,1]`$) can be considered as one of measures characterizing the degree of mixture such as the purity (or participation ratio), von Neumann entropy, Renyi entropy, and so on. Finally, as a simple application of the upper bound of EOF: $$E_F(\rho )H(\frac{1+\sqrt{1C^{}}}{2}),$$ (23) we consider the situation generating entangled states by using the quantum gate consisting of two qubits, more concretely a controlled-NOT (CNOT) gate. In realistic situations, the coupling between the gate and its surrounding environment is inevitably present. The entire system consisting of the gate plus its environment happen to be entangled by the coupling, and the mixture of the reduced density matrix describing the gate will be inevitably increased. In order to treat such decohered CNOT gates, we adopt the spin-boson model , where each qubit is described as a spin $`\frac{1}{2}`$ system, and the environment is expressed as an ensemble of independent bosons. As the model of the CNOT gate, we choose the simplest Hamiltonian: $$H_G=\frac{R}{4}(1\sigma _{cz})\sigma _{tx},$$ (24) where $`c`$ and $`t`$ denotes the control-bit and target-bit, respectively. The state change after $`t=h/(2R)`$ corresponds to the change in the CNOT operation. In this paper, we demonstrate two types of gate-environment couplings. These are $`H_{GE}^1`$ $`=`$ $`\sigma _{cz}{\displaystyle \underset{k}{}}B_k(a_k^{}+a_k),`$ (25) $`H_{GE}^2`$ $`=`$ $`{\displaystyle \frac{1}{2}}(1\sigma _{cz})\sigma _{tx}{\displaystyle \underset{k}{}}B_k(a_k^{}+a_k),`$ (26) where $`a_k`$ is an annihilation operator of a boson in the environment. $`H_{GE}^1`$ describes the situation that only the control-bit couples with the environment. $`H_{GE}^2`$ may describe the situation that the gate operation is achieved by irradiating a optical pulse which contains a noise coherent over the qubits as well as the pulse itself. 
For these phase-damping couplings, the time evolution of the reduced density matrix describing the gate is analytically solved by assuming the product initial state for the entire density matrix: $$\rho _{\mathrm{tot}}(0)=\rho (0)\rho _E,$$ (27) where $`\rho _E`$ is the thermal equilibrium density matrix of the environment. Since we pay attention only to the generation of the entangled state, the initial state of the CNOT gate is chosen to be the pure state of $`(|0_c+|1_c)|0_t`$. The time development of EOF for several values of the coupling strength $`K`$ ($`_kB_k^2\delta (\omega \omega _k)`$) are shown in Fig. 3 for the (a) $`H_{GE}^1`$ and (b) $`H_{GE}^2`$ coupling. The values of the coupling strength $`K`$ are chosen such that the values of the fidelity of the output state to the desired state in the absence of the decoherence are roughly 0.95, 0.9 and 0.8, which are common to Figs. 3 (a) and (b). It is interesting to note that, while the fidelity is the same in two cases of coupling, the amount of the entanglement is significantly different. The upper bound of EOF (Eq. (23)) for each coupling strength is shown by an arrow on the right side of each panel for comparison. Since EOF of a state has the physical meaning of the asymptotic number of Bell pairs required to prepare the state by using only local quantum operations and classical communication, comparing the difference in EOF will make sense. In Fig. 3 (a), the EOF is considerably lower than the corresponding upper bound, while EOF almost agrees with the upper bound in Fig. 3 (b). Therefore, with respect to the function generating entangled states, the performance of the CNOT gate shown in Fig. 3 (b), is already optimal (or saturated) in a sense that there is no other way to increase further the amount of the entanglement than avoiding the increase of the mixture of the output density matrix, and thus, avoiding the decoherence itself. On the other hand, for the CNOT gate shown in Fig. 3 (a), there is room for improvement of the performance in principle, although we cannot show the detailed methods here. To conclude, we propose the mixed states in two qubits, which have a property that the amount of entanglement of these states cannot be increased further by any unitary operations. The property is proven when the rank of the states is less than 4, and when the states satisfy a special relation such as the Werner states. The results of the numerical calculations strongly support a hypothesis that these mixed states are indeed maximally entangled even in general cases. It will be extremely important to verify the above hypothesis and to seek out the maximally entangled mixed states as well as the measure in larger dimensional systems, for understanding the nature of entanglement of general mixed states and for the progress of the quantum information science and its applications.
# Observation of Individual Josephson Vortices in YBCO Bicrystal Grain-Boundary Junctions ## Abstract The response of YBCO bicrystal grain-boundary junctions to small dc magnetic fields (0 - 10 Oe) has been probed with a low-power microwave (rf) signal of 4.4 GHz in a microwave-resonator setup. Peaks in the microwave loss at certain dc magnetic fields are observed that result from individual Josephson vortices penetrating into the grain-boundary junctions under study. The system is modeled as a long Josephson junction described by the sine-Gordon equation with the appropriate boundary conditions. Excellent quantitative agreement between the experimental data and the model has been obtained. Hysteresis effect of dc magnetic field is also studied and the results of measurement and calculation are compared. Grain-boundary junctions in high-temperature-superconducting (HTS) thin films have been studied extensively due to their importance in both device applications and fundamental physics. The dynamics of Josephson vortices in Josephson junctions, which are analogous to the Abrikosov vortices in type II superconductors, is a very interesting subject, because of the mesoscopic physics involved. Much effort has been devoted to understanding the collective behavior of Josephson vortices in situations such as flux-flow devices. The study of vortex physics on the mesoscopic scale has in general been hampered by the lack of suitable experimental probes. It has also been known that for long junctions (junctions with a dimension longer than the Josephson penetration depth), Josephson vortices induced by the rf magnetic field cause nonlinear microwave losses which can severely limit the applicability of HTS materials in wireless communication applications. Previous experiments and modeling have yielded qualitative agreement. To further quantitatively understand the effect of the Josephson vortex dynamics on the microwave losses of HTS materials, it is important to study the influence of the Josephson vortices generated by dc magnetic fields. In this scenario, the vortex dynamics should be easier to probe, and an external dc magnetic field can emulate the physical situations of trapped flux or the earth’s ambient magnetic field. We report the observation of individual Josephson vortices generated by an external dc magnetic field and how these events affect microwave losses. We probe the Josephson vortex dynamics by using a small rf signal of 4.4 GHz in a microwave-resonator setup that includes a bicrystal grain-boundary junction. Each Josephson vortex entering the junction is manifested by a sharp peak in the microwave resistance, which is measured by our experimental setup. The second-order nonlinear sine-Gordon equation, which determines the dynamics of a long junction in the presence of dc and rf magnetic fields, is solved numerically. The measured and calculated results agree quantitatively as the dc field is increasing, and we are able to identify the series of sharp peaks observed in the microwave loss with the first several Josephson vortices penetrating into the grain-boundary junctions. Hysteresis effects upon decreasing the dc magnetic field have also been measured and calculated. Differences between calculated and measured losses in decreasing field will be discussed. 
The junctions used in this study were fabricated from 140- nm-thick, epitaxial, c-axis-oriented, YBCO films deposited by laser ablation on 1 -cm by 1 -cm r-plane (1012) sapphire bicrystal substrates with a $`24^{}`$-misorientation angle. To characterize the microwave properties of the junction, we have used a microstrip-resonator configuration that allows us to distinguish the effects of the junction from those of the rest of the film. The resonator was patterned such that the junction is positioned at the midpoint of the microstrip, spanning the entire width of 150 $`\mu `$m, as shown in Fig. 1. The resonance frequency $`f_1`$ of the fundamental mode is 4.4 GHz with overtone resonant modes at $`f_n`$ $``$ $`n`$$`f_1`$ where $`n`$ is an integer. At resonance, the fundamental mode is a half-wavelength standing wave with a current maximum at the midpoint of the resonator line, where the fabricated junction is positioned. In contrast, the $`n=2`$ mode is a full wavelength with a current node at the position of the junction. Therefore, by comparing the measured results of these two modes, we can separate the properties of the engineered grain-boundary junction from those of the remainder of the superconducting film. This resonator technique has previously been used to measure the microwave power-handling properties of the engineered junction in zero dc magnetic field. In this work, we have studied the characteristics of the junctions in small dc magnetic fields by injecting low-power rf input signals (pW) at the fundamental and the first overtone (4.4 and 8.7 GHz) of the resonator. The device was cooled in a magnetic field smaller than 0.01 Oe. The quality factor $`Q_0`$, which is proportional to the inverse of the microwave resistance, was measured for dc magnetic fields ranging from 0 to 10 Oe with a step size as small as 0.01 Oe, at temperatures ranging from 5 to 75 K. As expected, the measurements of the $`n=2`$ mode showed no observable dc magnetic field dependence, since in this mode, the engineered junction does not contribute to the microwave loss, and the applied magnetic field $`H_{dc}`$ is too small to affect the rest of the film. However, in the measurements of the $`n=1`$ mode, we observe an abrupt decrease in $`Q_0`$ at certain narrow ranges of dc magnetic fields, followed by a recovery to the zero-field value. This pattern is followed for all of the measured temperatures. Experimental data, plotted as $`1/Q_0`$ (proportional to the microwave resistance) versus dc magnetic field at various temperatures, are shown in Fig. 2. As described below, we interpret the observed peaks in $`1/Q_0`$ as single Josephson vortices penetrating into the junction. Furthermore, at each temperature, the data show a threshold field below which the microwave loss is low, and at which the first peak in the microwave loss is observed. The threshold field can be identified as the critical Josephson field $`H_{cJ}`$. Also, notice that the higher the temperature, the noisier the data becomes, indicating that thermal fluctuations may cause nucleation and annihilation of Josephson vortices. The noise is most apparent in the 75 K data. Two $`24^{}`$ junction samples have been measured and almost identical behavior was observed. 
The length $`L`$ of the grain-boundary junction is 150 $`\mu `$m, which is much greater than the Josephson penetration depth $`\lambda _J`$ given by, $$\lambda _J=\sqrt{\frac{\mathrm{\Phi }_0}{2\pi \mu _0J_c(2\lambda _L+d)}},$$ (1) where $`\mathrm{\Phi }_0`$ is the flux quantum, $`\mathrm{\Phi }_0`$=$`h/2e`$=$`2.07\times 10^{15}Wb`$, $`\lambda _L`$ $``$ 0.2 $`\mu `$m is the London penetration depth of the film, and $`d`$ is the physical grain-boundary interlayer thickness, which is negligible compared with $`\lambda _L`$. For a typical $`J_c`$ of $`10^2`$ to $`10^4`$ A/cm<sup>2</sup>, $`\lambda _JL`$ and the long-junction regime applies. For this situation, $`H_{cJ}`$ is given by $$H_{cJ}=2\mu _0J_c\lambda _J\sqrt{J_c/\lambda _L}.$$ (2) Above $`H_{cJ}`$, a Meissner state of the junction is not possible, and quantized flux in the form of Josephson vortices starts to penetrate into the junction from its edges. The dynamics of a long-junction system are governed by the sine-Gordon equation, $$\lambda _J^2\frac{^2\varphi (x,t)}{x^2}=\mathrm{sin}\varphi (x,t)+\tau _J\frac{\varphi }{t},$$ (3) where $`\varphi (x,t)`$ is the gauge-invariant phase difference of the superconducting wave function across the junction $`\tau _J=\mathrm{\Phi }_0/2\pi dJ_c\rho _n`$, with $`\rho _n`$ being the normal leakage resistivity of the junction. The capacitive term is omitted for our case of an overdamped junction. We have solved Eq. (3) numerically with boundary conditions at the junction edges that include both the dc and microwave magnetic field. Similar treatments have been reported by other authors. Thus, $$\frac{\varphi }{x}|_{x=0,L}=\frac{2\pi (2\lambda _L+d)[H_{dc}\pm H_0\mathrm{sin}(\omega t)]}{\mathrm{\Phi }_0},$$ (4) where $`H_{dc}`$ is the applied dc magnetic field, $`H_0`$ is the amplitude of the microwave magnetic field at the edges of the junction, and $`\omega `$ is the angular frequency of the microwave signal. The $`\pm `$ sign in Eq. (4) indicates that the directions of the microwave fields are opposite at the two edges of the junction. For this geometry, the microwave electric field is in the y-direction which is defined to be normal to the junction area, and is given by $$E_y(x,t)=\frac{\mathrm{\Phi }_0}{2\pi d}\frac{\varphi (x,t)}{t}.$$ (5) The impedance and harmonic generation can then be calculated from the Fourier transform of the time-dependent electric field $`E_y`$, $$R_n=\frac{2}{H_0}_{0}^{}{}_{}{}^{T_{\mathrm{rf}}}𝑑tE_y(0,t)\mathrm{sin}(n\omega t),$$ (6) $$X_n=\frac{2}{H_0}_{0}^{}{}_{}{}^{T_{\mathrm{rf}}}𝑑tE_y(0,t)\mathrm{cos}(n\omega t),$$ (7) where $`T_{\mathrm{rf}}`$ is the microwave period, $`n`$ is a positive integer, $`R_1`$ and $`X_1`$ are proportional to the microwave resistance and reactance at the fundamental resonator frequency, and $`R_n`$ and $`X_n`$ for $`n>1`$ correspond to the $`n`$th harmonic generated in the junction. In Eqs. (6) and (7) we use only $`E_y`$ at the edge of the junction, since the microwave loss is dominated by the behavior at the edges as found by Lehner $`et`$ $`al`$. The calculated resistance is compared with the measured results as a function of $`H_{dc}`$ at $`T=5`$ K in Fig. 3. The parameters used in the calculation are $`J_c=4.078\times 10^2`$ A/cm<sup>2</sup>, $`\rho _n=6.75\times 10^8\mathrm{\Omega }`$ cm<sup>2</sup>, where $`\rho _n`$ is obtained from dc I-V measurements, and $`J_c`$ is taken to fit the experimentally observed $`H_{cJ}`$ (Eq. (2)). 
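To illustrate how such a calculation can be organized, the Python sketch below (NumPy assumed) integrates the overdamped sine-Gordon equation in dimensionless units (x in units of the Josephson penetration depth, t in units of the damping time, with the dc and rf fields expressed through the dimensionless phase gradient they impose at the edges, Eq. (4)) using a simple explicit finite-difference scheme, and then projects the edge electric field of Eq. (5) onto sin(wt) as in Eq. (6). All parameter values are illustrative rather than those of the measured junction. Dropping the rf term and stepping the dc field up and then down, reusing the final phase profile as the next initial condition, gives the static hysteresis calculation discussed below.

```python
import numpy as np

# dimensionless overdamped sine-Gordon: phi_t = phi_xx - sin(phi)
L, Nx = 10.0, 201                 # junction length in units of lambda_J, grid points
dx = L / (Nx - 1)
dt = 0.2 * dx**2                  # well below the explicit-Euler stability limit dx^2/2
h_dc, h_rf, w = 1.5, 0.05, 0.1    # dc field, rf amplitude, rf frequency (illustrative)
T_rf = 2 * np.pi / w

phi = np.zeros(Nx)
E_edge, times = [], []
for n in range(int(3 * T_rf / dt)):          # three rf periods
    t = n * dt
    slope_l = h_dc + h_rf * np.sin(w * t)    # opposite rf sign at the two edges, Eq. (4)
    slope_r = h_dc - h_rf * np.sin(w * t)
    lap = np.empty(Nx)
    lap[1:-1] = (phi[2:] - 2 * phi[1:-1] + phi[:-2]) / dx**2
    lap[0]  = 2 * (phi[1]  - phi[0]  - dx * slope_l) / dx**2   # ghost-point boundaries
    lap[-1] = 2 * (phi[-2] - phi[-1] + dx * slope_r) / dx**2
    dphi = dt * (lap - np.sin(phi))
    E_edge.append(dphi[0] / dt)              # E_y at the edge is proportional to d(phi)/dt, Eq. (5)
    times.append(t)
    phi += dphi

# project the edge field onto sin(wt) over the last rf period, cf. Eq. (6);
# repeating this for a sweep of h_dc reproduces the loss peaks at vortex entry
times, E_edge = np.array(times), np.array(E_edge)
sel = times > times[-1] - T_rf
R1 = (2.0 / h_rf) * np.trapz(E_edge[sel] * np.sin(w * times[sel]), times[sel])
print(R1)
```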
The calculation shows peaks in the microwave loss at the same dc magnetic fields as the experimental results. We interpret the peaks in $`1/Q_0`$ as the result of Josephson vortices entering the junction. After zero-field cooling, the applied dc magnetic field is gradually increased and as long as $`H_{dc}<H_{cJ}`$, the external magnetic field is screened by the self-current in the junction (Meissner state). A further increase of the applied dc magnetic field so that $`H_{dc}H_{cJ}`$ causes a Josephson vortex to almost enter the junction. Since the junction has two edges, this event is actually a two-vortex event, one from each edge. Because of the presence of the small rf magnetic field at the junction edges, two Josephson vortices are created and annihilated during each rf cycle. This state of the junction manifests itself in our experiment with sharply decreased $`Q_0`$ because the power dissipation is the highest when a Josephson vortex is created or annihilated at the edges. With a slightly higher applied dc field, the Josephson vortex is driven completely into the junction and there is very little extra microwave loss once the vortex is in the junction. As the applied field is increased further, the process is repeated each time another vortex enters the junction, until the junction is packed with Josephson vortices, and the collective properties of a large number of strongly interacting vortices have to be considered. Our numerical results are also qualitatively consistent with a previous linearized analytical treatment of the microwave absorption by a long junction in a magnetic field. The temperature dependence of $`H_{cJ}`$ comes from that of $`J_c`$ and $`\lambda _L`$ in accordance with Eq. (2). The experimental temperature dependence of the onset of the first peak in the microwave resistance agrees roughly with the theoretically predicted $`H_{cJ}(T)`$. This agreement is consistent with the hypothesis that the peaks observed in the microwave resistance of the grain-boundary junction are caused by single Josephson vortices entering the junction. Since the calculation involving the rf magnetic field is computer-time consuming, we have also solved the static sine-Gordon equation without the perturbation from the rf signal. In order to get a physically meaningful solution, we keep the damping term $`d\varphi /dt`$ in Eq (3). After turning on the dc field at $`t=0`$, $`\varphi (x,t)`$ is integrated in time until a static solution is reached. Magnetic field and current distributions in the junction are then obtained. We have confirmed that the dc fields at which the microwave loss peaks appear correspond to the calculated dc fields at which individual Josephson vortices enter the junction. Therefore, results from the static dc calculation should provide sufficient information about the microwave losses. We have calculated the hysteresis effects of the dc magnetic field using the static sine-Gordon solutions. To simulate our experiment, we have carried out the calculation with a dc field increasing from 0 to 3 Oe, then decreasing from 3 back to 0 Oe, with each step using $`phi(x)`$ obtained from the previous step as the initial condition. The results show that as the dc field decreases, the Josephson vortices leave the junction at different field values than those at which vortices enter while the dc field is increasing. 
In addition, the number of expected microwave loss peaks (six) is more than in the case of the increasing field (four) because at some fields, one-half of a flux quantum leaves the junction, while all fluxon entry events cause a change of one flux quantum as the field increases. The measured $`1/Q_0`$ versus $`H_{dc}`$ for decreasing $`H_{dc}`$ is plotted in Fig. 4. The calculation-predicted dc fields at which vortices leave the junction are indicated by the arrows in Fig. 4. Peaks of the microwave loss are observed but the agreement between the measured and the predicted results is not as good as the case of the increasing field. The calculations have considered only the case of a uniform junction. The good agreement with the experimental data for an increasing field indicates that the junction is sufficiently homogeneous for the assumption of uniform $`J_c`$ to be a good approximation. The double-peak feature apparent in the measured data in Fig. 3 might be due to a slightly asymmetric $`J_c`$ near the two edges of the junction so that there is one vortex penetrating into the junction from each edge at slightly different fields. We believe that nonuniform $`J_c`$ and the pinning resulting from it might explain the differences seen in Fig. 4 between the calculated and measured behavior in decreasing field. In the case of increasing field, pinning is expected to have little effect on the process of vortex entry, and once in the junction, the losses are low, so the effects of pinning are not observed. On the other hand, pinning would affect the fields at which the vortices leave the junction by effectively adding to the potential barrier that a vortex must overcome to exit the junction. This is probably why the data for increasing field agrees better with the calculation. It will be interesting to consider defects in the junction in the calculation, so that the pinning of the Josephson vortices can be included. The results of the measurements and calculations presented above show that we are able to probe individual Josephson vortices entering and exiting the bicrystal grain-boundary Josephson junctions under study. The microwave loss is not a monotonic function of external dc magnetic field and increases dramatically when a single vortex penetrates into or exits a long junction. Hysteretic behavior has been observed both in the experiments and the calculation. Since it is believed that HTS films contain weak links and the threshold field for vortex penetration can be very small, even the earth’s ambient field or the field from trapped flux might have a significant impact on the microwave impedance of HTS films. In addition, the collective effects from many Josephson junction-like weak links might also explain the anomalous dc response observed in which $`R_S`$ decreases with the application of a small magnetic field. We gratefully acknowledge support for this work by the Air Force Office of Scientific Research. The authors wish to thank L. R. Vale and R. H. Ono at NIST Boulder, CO for providing the samples used in this study, and J. Derov, G. Roberts, R. Webster and J. Moulton at AFRL for their hospitality. We also wish to thank Dr B. Willemsen for providing a rf probe used in this project.
# Hotspot Emission from a Freely Precessing Neutron Star ## 1 Introduction \[Garmire et al. 2000\] have noted that the x-ray source, 1E 161348-5055, located near the center of the supernova remnant RCW 103, has a sinusoidal light curve with a period of approximately six hours. The authors suggest that this period implies that the x-ray source has a low-mass companion with a six hour orbital period. In this paper, we examine an alternative possibility, that 1E 161348-5055 is a freely precessing neutron star with a precession period of about six hours. The varying spectrum from 1E 161348-5055 is similar to a blackbody yielding an effective area of a small fraction of a square kilometer (Garmire, private communication), much less than the surface area of a neutron star; therefore, the situation could be well approximated by a point source (a hotspot) on the surface of a freely-precessing neutron star. Free precession has often been invoked to explain long-period variability in neutron stars and neutron-star systems. \[Brecher 1975\] attributed the 35-day cycle of Her X-1 (\[Tananbaum et al. 1972\]) to the free precession of the neutron star secondary. \[Jones 1988\] explained timing residuals in the Crab pulsar as arising from small amplitude free precession of the neutron star. \[Cadez & Galicic 1996\] found a small amplitude modulation in the optical flux from the Crab pulsar which \[Cadez, Galicic & Calvani 1997\] cite as evidence of free precession. In §2 we outline the kinematics of free precession, and in §3, we review the equations that determine the trajectory of light leaving the surface of the neutron star. §4 presents the light curves as a function of precessional and rotational phase for Newtonian, relativistic and ultracompact neutron stars with a emission from a hotspot. Finally, §5 places the results in a greater context. ## 2 Free Precession A body will precess freely if its rotation axis does not coincide with one of its principal axes. More specifically, if the body is only slightly prolate or oblate, the angular velocity vector ($`\stackrel{}{\omega }`$) of the star will make a constant angle ($`\kappa `$) with one of the principal axes (the $`3`$axis), forming the body cone, and will trace a cone in space (the space cone) with half-opening angle $`\kappa `$ as well. The rate of the precession is given by (\[Goldstein 1980\]) $$\mathrm{\Omega }=\frac{I_3I_1}{I_1}\omega _3=ϵ\omega _3.$$ (1) Values of $`ϵ=10^310^4`$ agree with the glitching behavior of neutron stars and the inferred gravitational-radiation spindown from the Crab pulsar (\[Shapiro & Teukolsky 1983\]). Forced precession by an orbiting companion typically has a frequency lower by a factor of $`2/(3\mathrm{\Omega }_{}\omega _3)`$ where $`\mathrm{\Omega }_{}`$ is the angular frequency of the orbit. Internal magnetic fields may distort a neutron star significantly. \[Ostriker & Gunn 1969\] estimate the distortion of a neutron star due to internal fields, $$ϵ4\times 10^6\left(3B_{p,15}^2B_{\varphi ,15}^2\right)$$ (2) where $`B_{15}`$ is the value of the magnetic field in units of $`10^{15}`$ G and $`\mathrm{}`$ denotes a volume-weighted average over the star. Here, $`B_p`$ and $`B_\varphi `$ refer to the poloidal and azimuthal components of the magnetic field, respectively. One expects the internal fields of the neutron star to be significantly larger than the field inferred by magnetic dipole radiation due to the contribution of higher multipoles and the concentration of magnetic flux in flux tubes (e.g. 
\[Pines & Alpar 1985\]). We would like to calculate the light curve from a hotspot on the surface of the freely precessing neutron star. Let us take the center of the cone that the angular velocity moves along to be the $`\stackrel{}{𝐳}`$axis. Our line of sight ($`\stackrel{}{𝐎}`$) makes an angle $`\xi `$ with this axis and forms a plane with $`\stackrel{}{𝐳}`$axis. We measure the phase of the precession ($`\varphi `$) relative to intersection of the space cone with this plane. The plane containing $`\stackrel{}{𝐳}`$ and $`\stackrel{}{𝐎}`$ also intersects the body cone. The angular momentum of the star points along the $`\stackrel{}{𝐳}`$-axis. In the body frame, the angular velocity of the star traces a cone centered on a principal axis of the star. We call this principal axis $`\stackrel{}{\mathrm{𝟑}}`$. The hotspot is located at $`\stackrel{}{\mu }`$ which makes an angle $`\beta `$ with $`\stackrel{}{\mathrm{𝟑}}`$. Let us freeze the precession and the rotation of the star when $`\stackrel{}{\mathrm{𝟑}}`$ points along $`\stackrel{}{𝐳}`$ (see Figure 1). At this orientation, the angle between the $`\stackrel{}{\mathrm{𝟑}}\stackrel{}{𝐎}`$ plane and the $`\stackrel{}{\mathrm{𝟑}}\stackrel{}{\mu }`$ plane is $`\gamma `$. Using spherical trigonometry (see Figure 1) yields the angle between the line of sight and the rotation axis, $`\zeta (\varphi )`$, and the angle between the hotspot and the rotation axis $`\alpha (\varphi )`$. $`\mathrm{cos}\zeta (\varphi )`$ $`=`$ $`\mathrm{cos}\kappa \mathrm{cos}\xi +\mathrm{sin}\kappa \mathrm{sin}\xi \mathrm{cos}\varphi `$ (3) $`\mathrm{cos}\alpha (\varphi )`$ $`=`$ $`\mathrm{cos}\kappa \mathrm{cos}\beta +\mathrm{sin}\kappa \mathrm{sin}\beta \mathrm{cos}(\varphi \gamma )`$ (4) The star rotates as well as precesses. The phase of the rotation is given by $`\eta `$. When $`\eta =0`$ the hotspot lies in the $`\stackrel{}{\omega }\stackrel{}{𝐎}`$ plane. The angle between the line of sight and the hotspot is $`\theta `$ and is given by $$\mathrm{cos}\theta =\mathrm{cos}\alpha (\varphi )\mathrm{cos}\zeta (\varphi )+\mathrm{sin}\alpha (\varphi )\mathrm{sin}\zeta (\varphi )\mathrm{cos}\eta $$ (5) If the hotspot emits isotropically, and we neglect gravitational lensing, the observed flux from the hotspot is simply proportional to $`\mathrm{cos}\theta `$ for $`|\theta |<\pi /2`$ and zero otherwise. ## 3 Gravitational Lensing \[Page 1995\] presents a detailed treatment of the gravitational lensing of the surface of a neutron star. Since the light trajectory is bent, the zenith angle of our detector ($`\delta `$) as seen from the hotspot is no longer equal to $`\theta `$. They are related by $$\theta (x)=_0^y\frac{xdu}{\sqrt{(12y)y(12u)u^2x^2}}$$ (6) where $`y=GM/Rc^2`$ and $`x=\mathrm{sin}\delta `$. For $`y<1/3`$, the image of the hotspot $`i`$ will be visible if $`\theta _i+2\pi j<\theta (1)`$. The flux from the hotspot is proportional to $$\underset{j}{}\frac{x(\theta _i+2\pi j)}{\mathrm{sin}(\theta _i+2\pi j)}\frac{\mathrm{d}x}{\mathrm{d}\theta }|_{\theta =\theta _i+2\pi j}$$ (7) such that $`|\theta _i+2\pi j|\theta (1)`$. For $`y>1/3`$ the surface of the neutron star lies below the circular photon orbit; therefore, each hotspot yields an infinite number of images (\[Shapiro & Teukolsky 1983\]): $$\underset{xx_{\text{max}}}{lim}\theta (x;y>1/3)=\mathrm{}.$$ (8) where $$x_{\text{max}}=3\sqrt{3}y\sqrt{12y}.$$ (9) ## 4 Light Curves To construct a light curve, we must specify several angles ($`\xi ,\kappa ,\beta `$ and $`\gamma `$). 
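Before specifying those angles, a small Python sketch of the deflection integral of Sec. 3 is given below (NumPy and SciPy assumed). Eq. (6) is used here in the form theta(x) = int_0^y x du / sqrt((1-2y) y^2 - (1-2u) u^2 x^2), i.e. with the relative minus signs and the power of y written out explicitly as in the standard bending formula for a ray emitted at zenith angle delta = arcsin(x) from a star of compactness y = GM/Rc^2; the substitution u = y(1 - s^2) removes the integrable endpoint singularity at x = 1. In the Newtonian limit (small y) the routine returns theta(1) = 90 degrees, and for realistic compactness it returns a larger angle, so more than a hemisphere is visible.

```python
import numpy as np
from scipy.integrate import quad

def bending_angle(x, y):
    """theta(x) of Eq. (6): colatitude on the star reached by a ray observed at
    emission angle delta (x = sin delta), for compactness y = GM/Rc^2.
    The substitution u = y(1 - s^2) tames the endpoint singularity at x = 1."""
    if x == 0.0:
        return 0.0
    def integrand(s):
        u = y * (1.0 - s**2)
        return 2.0 * y * s * x / np.sqrt((1 - 2*y) * y**2 - (1 - 2*u) * u**2 * x**2)
    return quad(integrand, 0.0, 1.0)[0]

y = 0.2                                        # typical neutron-star compactness (assumed)
theta_max = bending_angle(1.0, y)              # largest visible colatitude theta(1)
print(np.degrees(theta_max))                   # > 90 deg: more than a hemisphere is seen
```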
During some portion of the precession period, the observed flux will be constant with orbital phase, if either $`\beta =\kappa `$ or $`\xi =\kappa `$. In the first case, the angular velocity vector will coincide with the location of the hotspot on the star. In the second case, the angular velocity vector will point along the line of sight once during each precession. ### 4.1 $`\beta =\kappa =90^{}`$ To maximize the flux during the portion of the precessional period where the flux does not vary with rotational phase, we take $`\gamma =0`$ and $`\beta =\kappa =90^{}`$, and to minimize the flux during the rest of the precession, we take $`\xi =\theta (1)`$. If $`\gamma 0`$, the maximum flux will not occur during that portion of precession when the flux does not vary with rotational phase. Taking $`\beta =\kappa 90^{}`$ will reduce the maximum flux, and $`\xi \theta (1)`$ will change the portion of the time when the hotspot is not visible. Figure 2 shows the observed flux from the hotspot as a function of the star’s precessional and rotational phase. Only half of a precessional period is depicted. The flux varies at twice the precession rate. As Figure 3 shows, the mean flux over the rotational period varies nearly sinusoidally. Twice during the precessional period, when the mean flux reaches its maximum, the flux does not vary with the rotational phase of the star. At other stages of the star’s precession, the hotspot spends much of the rotational period hidden behind the horizon on the neutron star surface. If $`\beta =\kappa 90^{}`$, one finds that the mean flux varies at the precessional frequency, resulting in a light curve similar to that presented in Figure 4. ### 4.2 $`\xi =\kappa =\theta (1)`$ In this case, we attempt to maximize the flux during the portion of the precession when the flux does not vary over the rotation. To do this, we take $`\beta =\xi =\kappa =\theta (1)`$ and $`\gamma =0`$. If the bending of the photon trajectories is neglected (i.e. as $`y`$ approaches zero), $`\theta (1)`$ approaches $`90^{}`$ and this case reduces to the previous one. However, for a realistic neutron star with $`y0.2`$, we have new light curve which varies at the precession rate (not twice that rate as the previous case). Taking $`\beta \kappa `$ or $`\gamma 0`$ also yields a portion of the light curve when the flux does not vary with rotational phase, but this does not coincide with the period when the mean flux reachs its maximum. ### 4.3 Random Geometry The choices of $`\xi ,\gamma ,\beta `$ and $`\kappa `$ that we have made previously are not generic. One would expect the values of $`\xi `$ and $`\gamma `$ from a particular neutron star to be random. The values of $`\beta `$ and $`\kappa `$ are intrinsic to the star; therefore, one may find a physical motivation for a particular distribution of their values. Figure 5 presents two light curves for two randomly selected geometries. We created a random sample of several hundred geometries and found that approximately three percent yielded light curves qualitatively similar to those presented in Figure 2 and Figure 4. Although one would generally expect more complicated light curves such as those depicted in Figure 5, a significant fraction of the geometries yield light curves in which the flux is constant with rotational phase when it reaches its maximum, and during the rest of the precessional period, the hotspot is hidden for a large portion of each rotation. 
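A short Python sketch of this construction is given below (NumPy assumed; light bending is neglected here, so the flux is simply proportional to cos(theta) on the visible hemisphere). The angles supplied in the example are illustrative choices rather than fits to any data.

```python
import numpy as np

def flux_newtonian(phi, eta, xi, kappa, beta, gamma):
    """Flux (arbitrary units) from an isotropically emitting hotspot versus
    precessional phase phi and rotational phase eta, from Eqs. (3)-(5); light
    bending is neglected, so F ~ cos(theta) when the spot is on the visible side."""
    cz = np.cos(kappa)*np.cos(xi)   + np.sin(kappa)*np.sin(xi)*np.cos(phi)            # Eq. (3)
    ca = np.cos(kappa)*np.cos(beta) + np.sin(kappa)*np.sin(beta)*np.cos(phi - gamma)  # Eq. (4)
    sz = np.sqrt(np.clip(1 - cz**2, 0.0, None))
    sa = np.sqrt(np.clip(1 - ca**2, 0.0, None))
    ct = ca*cz + sa*sz*np.cos(eta)                                                    # Eq. (5)
    return np.where(ct > 0, ct, 0.0)

# the beta = kappa = 90 deg, gamma = 0 geometry of Sec. 4.1; xi is an arbitrary choice here
phi, eta = np.meshgrid(np.linspace(0, np.pi, 90), np.linspace(0, 2*np.pi, 180))
F = flux_newtonian(phi, eta, xi=np.radians(100.0), kappa=np.pi/2, beta=np.pi/2, gamma=0.0)
mean_flux = F.mean(axis=0)     # rotation-averaged flux as a function of precessional phase
print(mean_flux.min(), mean_flux.max())
```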
## 5 Discussion We have explored several possible light curves of a freely precessing neutron star with a hotspot, and focussed on those whose flux is constant with rotational phase when the flux reaches a maximum value. These possibilities indicate that 1E 161348-5055 may be a freely precessing neutron star. However, as the mean flux decreases from its maximum it also begins to vary with the rotation of the star; therefore, in the context of this model, we would expect that subsequent observations of 1E 161348-5055 may uncover its rotational period which we would expect to be on the order of several seconds. The variation of the flux with the rotational phase of the star during some portion of the precession appears generic to freely precessing stars with a hotspot. The Earth undergoes a free precession with a period of about 433 days, known as the \[Chandler 1891\] wobble. This is significantly longer than one would expect from the asymmetry of the Earth’s figure due to dissipation inside the Earth (\[Burs̆a & Pĕc̆ 1993\]). Without excitation the wobble would disappear within about a century, and the avenue for its excitation is still unclear. In neutron stars, precession has been proposed to explain long-term variations in their spin and pulse profiles (e.g. \[Davis & Goldstein 1970, Goldreich 1970, Ruderman 1970, Brecher 1972, Pines & Shaham 1972, Pines, Pethick & Lamb 1973, Pines & Shaham 1974\]). If neutron stars rotate as rigid bodies the precessional period would be $`P/ϵ`$ (\[Pines & Shaham 1972, Pines & Shaham 1974\]) where $`P`$ is the rotational period. \[Ruderman & Sutherland 1974\] proposed that neutron stars contain a superfluid component in their cores. \[Shaham 1977\] explored how the pinning of the superfluid vortices affects the free precession of a neutron star. He argued that the dissipation timescale for the precessional mode ($`\tau _w`$) is of the order of the postglitch relaxation time ($`\tau `$) times the ratio of the rotational to the precessional frequency. $`\tau `$ ranges from a week for the Crab to nearly a century for 1641-45 (\[Shapiro & Teukolsky 1983\]); therefore, depending on the nature of the superfluid coupling the precessional mode may last for millennia. \[Shaham 1977\] also found that if the star is triaxial the geometry of the precession is more complicated that for the purely free precession considered here. Additionally, the precessional frequency in this case is given by angular velocity of the superfluid component of the star times the fractional contribution of the superfluid to the total moment of inertia of the star (about one percent). \[Sedrakian, Wasserman & Cordes 1999\] have recently reexamined the precession of multicomponent neutron stars with imperfect vortex pinning and found several possibly long-lasting precessional modes with long periods like the precession described here. The excitation and decay of precessional motions in neutron stars are still uncertain. Evidence has been found for free precession in some radio pulsars (e.g. \[Cadez, Galicic & Calvani 1997, Jones 1988\]), but it is not generic (e.g. \[Morgan et al. 1995\]); therefore, the question arises as to which properties of a neutron star would allow or prevent it from precessing and would they correlate with its radio emission. \[Melatos 1999\] and \[Melatos 2000\] argue that precession is characteristic of strongly magnetized neutron stars (c.f. Equation 2). 
\[Usov & Melrose 1996\] and \[Arons 1998\] have proposed that strongly magnetized neutron stars are unlikely to produce radio emission collectively due to the formation of bound electron-positron pairs. Alternatively, \[Baring & Harding 1997\] suggest that in sufficiently strong fields ($`B>B_c`$) the QED process of photon splitting (\[Adler 1971\]; \[Heyl & Hernquist 1997\]) can dominate one-photon pair production. This will effectively quench the pair cascade, making coherent pulsed radio emission impossible. Since the timescales for both the excitation and decay of precessional motion in neutron stars are unknown, one can appeal to the relative youth of 1E 161348-5055 and the other members of the AXP class. They are all several thousand years old, much younger than vast majority of radio pulsars (\[Taylor, Manchester & Lyne 1993\]). The appropriate timescales for precession may simply be shorter than the ages of most radio pulsars while longer than those of AXPs. Furthermore, the hints of precession seen in the Crab pulsars (\[Cadez, Galicic & Calvani 1997\]) point toward this explanation. We have examined the light curves of free precessing neutron stars with a hotspot and focussed on those geometries which exhibit an epoch during each precessional period where the flux does not vary as the star rotates. These geometries account for about three percent of a random sample and may provide an explanation for the emission from 1E 161348-5055 . If this is the case, further observations of the light curve from 1E 161348-5055 should reveal a pulse period of the order of $`10^4`$ times the precessional period of six hours. Free precession may be a hallmark of young or highly magnetized neutron stars, and it is a direct probe of the structure of the crust and interior of the neutron star and the coupling between them.
# Ordered phase in the two-dimensional randomly coupled ferromagnet ## Introduction For more than two decades the intensive numerical work on the spin glass (SG) problem has been concentrated almost exclusively on the Edwards-Anderson Ising spin glass : Ising spins on a regular \[hyper\]cubic lattice with random near neighbor Gaussian or binomial interaction distributions . The many possible alternative Ising systems with a randomness ingredient have hardly been touched on and such results as exist have been largely ignored. One such family of alternative systems was proposed in . It consists of a 2d square lattice of $`L\times L`$ Ising spins $`\sigma _i=\pm 1`$ with uniform ferromagnetic second near neighbor interactions of strength $`J`$, plus random near neighbor interactions $`J_{ij}=\pm \lambda J`$; we will refer to it as the RCF (Randomly Coupled Ferromagnet) model . It is described by the following Hamiltonian: $$H=\underset{i,j}{}J_{ij}\sigma _i\sigma _j\underset{[i,j]}{}J\sigma _i\sigma _j.$$ (1) For each realization of the randomness the $`J_{ij}`$ are drawn with the constraint $`_{i,j}J_{ij}=0`$ to reduce fluctuations. As the spins are coupled through the ferromagnetic second near neighbor interactions, the system can be partitioned into two inter-penetrating sublattices in checkerboard-like fashion. In the limit $`\lambda =0`$ the two sublattices order ferromagnetically and independently below the Onsager temperature $`T=2.27J`$. Because each sublattice can order up or down, there are four degenerate ground states. As was pointed out in , for non-zero $`\lambda `$ the near neighbor interactions can be considered in terms of effective random fields exerted by each sublattice on the other, so that for finite $`\lambda `$ and large enough $`L`$ the ferromagnetic sublattice ordering is expected to be broken up, as in the $`2d`$ random field Ising (RFI) model . The ground state will consist of coexisting domains of each of the four types : up/up, up/down, down/up and down/down. The question is: is the break-up accompanied by paramagnetic order down to $`T=0`$ ? A number of Monte Carlo simulations were performed, and it was concluded on the basis of standard numerical criteria that when the ratio $`\lambda `$ is less than about $`1`$, the RCF systems show spin glass like ordering at a finite temperature, whereas 2d Edwards Anderson ISGs are paramagnetic down to $`T=0`$ . The finite ordering temperature interpretation was strongly questioned by Parisi et al who criticized the initial work on the grounds that the results were restricted to relatively small sample sizes L. On the basis of Monte Carlo data obtained on rather larger samples Parisi et al suggested that the RCF systems are always paramagnetic down to $`T=0`$, like the Edwards Anderson ISGs. Further large sample Monte carlo results however indicated finite-temperature ordering. Here we present data from ground-state configuration evaluations which show unambiguously that RCF systems indeed exhibit finite temperature SG like ordering for $`\lambda `$ less than about $`1`$. This opens up new and intriguing possibilities for the testing of fundamental properties of complex ordered systems at finite temperatures in a $`2d`$ context. ## Algorithm In the present work, ground state configurations have been found for periodic boundary conditions, and for the case where in one direction the boundary conditions are switched to anti-periodic. 
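For concreteness, the Python sketch below (NumPy assumed) writes out the energy of Eq. (1) for a periodic L by L lattice, with the conventional overall minus signs so that satisfied bonds lower the energy; the bond draw shown ignores the zero-sum constraint and is purely illustrative. Inverting one line of bonds in this energy function and comparing ground-state energies is exactly the periodic versus antiperiodic comparison used below.

```python
import numpy as np

def rcf_energy(spins, J_right, J_down, J=1.0):
    """Energy of Eq. (1) for an L x L array of Ising spins (+-1), periodic boundaries.
    J_right[i, j] is the random bond between (i, j) and (i, j+1), J_down[i, j] the one
    between (i, j) and (i+1, j); the uniform ferromagnetic J couples the diagonal
    (second-nearest) neighbours.  Satisfied bonds contribute negatively to the energy."""
    e_nn = -np.sum(J_right * spins * np.roll(spins, -1, axis=1)) \
           -np.sum(J_down  * spins * np.roll(spins, -1, axis=0))
    diag_dr = np.roll(np.roll(spins, -1, axis=0), -1, axis=1)   # neighbour at (i+1, j+1)
    diag_dl = np.roll(np.roll(spins, -1, axis=0),  1, axis=1)   # neighbour at (i+1, j-1)
    e_2nn = -J * (np.sum(spins * diag_dr) + np.sum(spins * diag_dl))
    return e_nn + e_2nn

L, lam = 8, 0.7
rng = np.random.default_rng(1)
J_right = lam * rng.choice([-1.0, 1.0], size=(L, L))   # zero-sum constraint ignored here
J_down  = lam * rng.choice([-1.0, 1.0], size=(L, L))
spins = rng.choice([-1, 1], size=(L, L))
print(rcf_energy(spins, J_right, J_down))
```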
By comparing the ground state energies of the different boundary conditions for each realization conclusions on the ordering behavior can be obtained. Similar studies were performed for simple $`d`$-dimensional EA spin glasses in $`d=2`$ , $`d=3`$ , and $`d=4`$ . For readers not familiar with the calculation of spin-glass ground states now a short introduction to the subject and a description of the algorithm used here are given. A detailed overview can be found in The concept of frustration is important for understanding the behavior of $`\pm J`$ Ising spin glasses. The simplest example of a frustrated system is a triple of spins where all pairs are connected by antiferromagnetic bonds, see fig. 1. A bond is called satisfied if it contributes with a negative value to the total energy by choosing the values of its adjacent spins properly. For the triangle it is not possible to find a spin-configuration were all bonds are satisfied. In general a system is frustrated if closed loops of bonds exists, where the product of these bond-values is negative. For square and cubic systems the smallest closed loops consist of four bonds. They are called (elementary) plaquettes. As we will see later the presence of frustration makes the calculation of exact ground states of such systems computationally hard. Only for the special case of the two-dimensional system with periodic boundary conditions in no more than one direction and without external field a polynomial-time algorithm is known . In general only methods with exponential running times are known, on says the problem is NP-hard . Now for the general case three basic methods are briefly reviewed and the largest system sizes which can be treated are given for three-dimensional systems, the standard spin-glass model, were data for comparison is available. The simplest method works by enumerating all $`2^N`$ possible states and has obviously an exponential running time. Even a system size of $`4^3`$ is too large. The basic idea of the so called Branch-and-Bound algorithm is to exclude the parts of the state space, where no low-lying states can be found, so that the complete low-energy landscape of systems of size $`4^3`$ can be calculated . A more sophisticated method called Branch-and-Cut works by rewriting the quadratic energy function as a linear function with an additional set of inequalities which must hold for the feasible solutions. Since not all inequalities are known a priori the method iteratively solves the linear problem, looks for inequalities which are violated, and adds them to the set until the solution is found. Since the number of inequalities grows exponentially with the system size the same holds for the computation time of the algorithm. With Branch-and-Cut anyway small systems up to $`8^3`$ are feasible. The method used here is able to calculate true ground states up to size $`14^3`$. For two-dimensional systems, as considered in this paper, sizes up to $`50^2`$ can be treated. The method is based on a special genetic algorithm and on Cluster-Exact Approximation . CEA is an optimization method designed specially for spin glasses. Its basic idea is to transform the spin glass in a way that graph-theoretical methods can be applied, which work only for systems exhibiting no frustrations. Next a description of the genetic CEA is given. Genetic algorithms are biologically motivated. 
An optimal solution is found by treating many instances of the problem in parallel, keeping only better instances and replacing bad ones by new ones (survival of the fittest). The genetic algorithm starts with an initial population of $`M_i`$ randomly initialized spin configurations (= individuals), which are linearly arranged using an array. The last one is also neighbor of the first one. Then $`n_o\times M_i`$ times two neighbors from the population are taken (called parents) and two new configurations called offspring are created. For that purpose the triadic crossover is used which turned out to be very efficient for spin glasses: a mask is used which is a third randomly chosen (usually distant) member of the population with a fraction of $`0.1`$ of its spins reversed. In a first step the offspring are created as copies of the parents. Then those spins are selected, where the orientations of the first parent and the mask agree . The values of these spins are swapped between the two offspring. Then a mutation with a rate of $`p_m`$ is applied to each offspring, i.e. a fraction $`p_m`$ of the spins is reversed. Next for both offspring the energy is reduced by applying CEA: The method constructs iteratively and randomly a non-frustrated cluster of spins. Spins adjacent to many unsatisfied bonds are more likely to be added to the cluster. During the construction of the cluster a local gauge-transformation of the spin variables is applied so that all interactions between cluster spins become ferromagnetic. Fig. 2 shows an example of how the construction of the cluster works using a small spin-glass system. For 2d $`\pm J`$ spin glasses each cluster contains typically 70 percent of all spins. The non-cluster spins act like local magnetic fields on the cluster spins, so the ground state of the cluster is not trivial. Since the cluster has only ferromagnetic interactions, an energetic minimum state for its spins can be calculated in polynomial time by using graph theoretical methods : an equivalent network is constructed , the maximum flow is calculated <sup>*</sup><sup>*</sup>*Implementation details: We used Tarjan’s wave algorithm together with the heuristic speed-ups of Träff. In the construction of the level graph we allowed not only edges $`(v,w)`$ with level($`w`$) = level($`v`$)+1, but also all edges $`(v,t)`$ where $`t`$ is the sink. For this measure, we observed an additional speed-up of roughly factor 2 for the systems we calculated. and the spins of the cluster are set to their orientations leading to a minimum in energy. This minimization step is performed $`n_{\mathrm{min}}`$ times for each offspring. Afterwards each offspring is compared with one of its parents. The pairs are chosen in the way that the sum of the phenotypic differences between them is minimal. The phenotypic difference is defined here as the number of spins where the two configurations differ. Each parent is replaced if its energy is not lower (i.e. not better) than the corresponding offspring. After this whole step is done $`n_o\times M_i`$ times, the population is halved: From each pair of neighbors the configuration which has the higher energy is eliminated. If more than 4 individuals remain the process is continued otherwise it is stopped and the best individual is taken as result of the calculation. The representation in fig. 3 summarizes the algorithm. 
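As a small illustration of the crossover step described above, the following Python sketch (NumPy assumed) implements the triadic crossover on arrays of +-1 spins; the population bookkeeping, the CEA minimization, and the replacement rule are not reproduced here.

```python
import numpy as np

def triadic_crossover(parent1, parent2, mask_source, rng, flip_frac=0.1):
    """Triadic crossover: the mask is a third configuration with a fraction
    flip_frac of its spins reversed; wherever parent1 agrees with the mask, the
    spin values are swapped between the two offspring (initially copies of the parents)."""
    mask = mask_source.copy()
    flip = rng.random(mask.shape) < flip_frac
    mask[flip] *= -1
    child1, child2 = parent1.copy(), parent2.copy()
    agree = (parent1 == mask)
    child1[agree], child2[agree] = parent2[agree], parent1[agree]
    return child1, child2

rng = np.random.default_rng(3)
p1, p2, m = (rng.choice([-1, 1], size=64) for _ in range(3))
c1, c2 = triadic_crossover(p1, p2, m, rng)
```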
The whole algorithm is performed $`n_R`$ times and all configurations which exhibit the lowest energy are stored, resulting in $`n_g`$ statistical independent ground state configurations. The running time of the algorithm with suitable parameters chosen (see Table I) grows exponentially with the system size. On a 80Mhz PowerPC processor a typical $`L=40`$ instance takes 3 hours (15 hours for $`L=56`$). ## Results In this work ground states of the RCF are studied for system sizes up to $`L=56`$ and values of $`\lambda =0.5`$, $`0.7`$, $`0.9`$, and $`1.1`$. Usually 1000 different realizations were treated, each submitted to periodic (pbc) and antiperiodic (apbc) boundary conditions in one direction and always pbc in the other direction. The apbc are realized by inverting one line of bonds in the system with pbc. Because of the enormous computational effort, for the largest system sizes only realizations with $`\lambda =0.7`$ where considered with large statistics (and about 100 realizations with $`L=56,\lambda =0.9`$). The periodic ground states give a direct measurement of the $`T=0`$ break up length $`L_b`$ at each value of $`\lambda `$, which is defined as follows: For small enough $`L`$ the ground states will always be such that there is a full ferromagnetic ordering within each sublattice. With increasing $`L`$, more and more samples will be found with ground states having at least one of the sublattices incompletely ferromagnetic. The break up length $`L_b`$ is defined as the value of $`L`$ above which more than half the samples do not have pure ferromagnetic order in each sublattice. For the binomial RFI model, $`L_b5.5\mathrm{exp}(2/\mathrm{\Delta }^2)`$ where $`\mathrm{\Delta }`$ is the strength of the random field . For the RCF the values of $`L_b`$ are shown against $`\lambda ^2`$ in Figure 1. It was suggested in that by analogy with the RFI results $`L_b(\lambda )`$ could be expected to vary as $`\mathrm{exp}(1/(4\lambda ^2))`$. In fact the data points for the true ground states lie on the line $`L_b3.2\mathrm{exp}(0.62/\lambda ^2)`$. For the particular cases $`\lambda =0.5`$ and $`\lambda =0.7`$, $`L_b45`$ and $`10`$ respectively. With the wisdom of hindsight, it can be seen that the measurements done in for $`\lambda =0.5`$ were mainly in the regime $`L<L_b`$ while for $`\lambda =0.7`$ the larger samples were well in the regime $`L>L_b`$. A “typical” ground state for $`\lambda =0.7`$ and $`L=56`$ is shown in Figure 5. All four possible types of domains occur. Because of the discrete structure of the interaction usually the ground state is degenerate. But in contrast to the EA spin glasses with only $`\pm J`$ near neighbor interactions, where a complex ground-state landscape exists, the structure of the degeneracy is trivial for $`\lambda 1`$: the whole system may be flipped, sometimes it is possible to flip both sublattices independently, and usually some small clusters occur with can take two orientations. But for studying whether the model exhibits long range order or not, it is sufficient to concentrate on the ground-state energies $`E_P,E_{AP}`$ for periodic and antiperiodic boundary conditions. The energy differences $`\mathrm{\Delta }=E_PE_{AP}`$ give information about whether a system exhibits some kind of stiffness against perturbations of the boundary, i.e. about the presence of order. $`\mathrm{\Delta }`$ is called the stiffness energy. 
For samples with the same set of interactions the stiffness can be analyzed in terms of the size dependence of the average $`\mathrm{\Delta }`$ and of the width $`W\sqrt{\sigma ^2(\mathrm{\Delta })}`$ of the distribution $`P(\mathrm{\Delta })`$. For $`\lambda =0.7`$ the distribution is presented in Fig. 6. The inset shows the behavior of the average stiffness energy as a function of $`L`$ for all four values of $`\lambda `$. For system sizes larger the breakup length and $`\lambda 0.7`$ the stiffness energy decreases, indicating that no ferromagnetic long range order is present in the system. For $`\lambda =0.5`$ the breakup length is very large, so the asymptotic behavior is hardly visible, but $`\mathrm{\Delta }`$ seems to fall for $`L28`$. From direct evaluation of the magnetization (see Fig 7 and Fig. 8) we conclude that no ferromagnetic order should be present beyond an upper limit $`\lambda =0.27(8)`$. For smaller values of $`\lambda `$ nothing can be concluded from our data. Furthermore, for smaller values of $`\lambda `$ it remains possible that the ground states of the RCF model do not exhibit ferromagnetic ordering ordering for any finite value of the relative coupling constant $`\lambda `$. From standard relationships one can write $`\mathrm{\Delta }L^{\theta _F}`$ with $`\theta _F`$ the ferromagnetic stiffness exponent, and $`WL^{\theta _{SG}}`$ with $`\theta _{SG}`$ the spin glass stiffness exponent. Positive values of the exponents indicate a long range order. Because of the small system sizes an evaluation of the ferromagnetic exponent is difficult. From the results presented in Fig. 6 we find an asymptotic ($`L\mathrm{}`$) value of $`\theta _F=2`$ ($`\lambda =0.9,1.1`$). Now we turn to the question whether some kind of spin-glass order is present in the system. This can be investigated by analyzing The dependence of the variance $`\sigma ^2(\mathrm{\Delta })`$ of the stiffness-energy distributions on the system size, the result is shown in Fig. 9. For small system sizes the variance grows for all values of the coupling constant $`\lambda `$. In order to exclude finite-size effects, only systems larger than the breakup length $`L_b(\lambda )`$ should be taken into account. Above $`L_b`$ there is a good linear size dependence of $`\mathrm{log}W(L)`$ against $`\mathrm{log}L`$, with $`\theta _{SG}=0.59(8)`$, $`0.29(1)`$, $`0.09(5)`$, and $`0.16(2)`$ respectively for $`\lambda =0.5,0.7,0.9`$, and $`1.1`$. The values of $`\theta _{SG}`$ against $`\lambda `$ are shown in the inset of Figure 9. The result for $`\lambda =0.5`$ is not very reliable, because the largest system size is of the order of the breakup length. In the log-log plot the datapoints for $`\lambda =0.5`$ exhibit a negative curvature, thus the asymptotic value of $`\theta _{SG}`$ may be smaller than $`0.59`$. For the other systems the breakup length is quite small, so the results give unambiguous evidence for spin glass like ordering in the large size limit, with a non-zero ordering temperature. Especially for $`\lambda =0.7`$, where $`L_b10`$, the result $`\sigma (\mathrm{\Delta })>0`$ is very reliable. Thus, it is indeed not necessary to carry out further calculations with larger systems to prove the fact, there there are values of the coupling constant giving rise to an ordered spin glass phase in the RCF. 
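The exponent extraction amounts to a straight-line fit of log W against log L for sizes above the breakup length. The Python sketch below (NumPy assumed) shows such a fit applied to synthetic widths drawn from a known power law; the numbers are not the measured data of Fig. 9.

```python
import numpy as np

def stiffness_exponent(L_vals, W_vals):
    """theta_SG from a least-squares fit of log W against log L (W ~ L^theta_SG)."""
    slope, _ = np.polyfit(np.log(L_vals), np.log(W_vals), 1)
    return slope

# self-test on synthetic widths drawn from a known power law (not the measured data)
rng = np.random.default_rng(4)
L_vals = np.array([12.0, 16.0, 24.0, 32.0, 40.0, 56.0])
W_vals = 0.8 * L_vals**0.29 * np.exp(0.02 * rng.standard_normal(L_vals.size))
print(stiffness_exponent(L_vals, W_vals))   # recovers a value close to 0.29
```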
The limiting value $`\lambda _c`$ above which $`\theta _{SG}`$ is negative is very close to $`1.0`$; $`\lambda _c`$ would correspond to the highest value at which the ordering temperature is non-zero, in good agreement with the initial estimate from the Monte Carlo work . ## Conclusion We have calculated ground states of the Randomly Coupled Ferromagnet for different values of the spin-glass coupling constant $`\lambda `$ and with periodic as well as antiperiodic boundary conditions. By using the genetic cluster-exact approximation algorithm, we were able to treat system sizes up to $`N=56\times 56`$. The breakup length was calculated for each value of $`\lambda `$. From the calculation of the $`T=0`$ stiffness energy it could be concluded that below $`\lambda _c\approx 1`$ the RCF exhibits an ordered spin glass like phase at finite temperature. It should be stressed again that for $`\lambda >0.5`$ the largest system sizes are well beyond the breakup length, so no changes are to be expected for larger system sizes. For $`\lambda <0.5`$, especially if one wants to test whether the model exhibits ferromagnetic ordering, ground-state calculations of larger systems are needed to study the behavior in more detail. Unfortunately, these studies are beyond the power of current computers and algorithms. Although the zero-temperature stiffness exponent values give no direct information on the ordering temperatures, the present results are consistent with the conclusions drawn in where Monte Carlo estimates of the critical temperatures were made using the finite-size scaling of the spin glass susceptibility and the form of the time dependence of the autocorrelation function relaxation. Ordering temperatures were estimated to be close to $`2.0`$ for $`\lambda =0.5`$ and $`0.7`$, dropping to zero near $`\lambda =1`$. Rather remarkably, the $`T=0`$ crossover as a function of $`L`$ at $`L_b`$ appears to have little effect on the behavior of the SG susceptibility as a function of size in the temperature region close to $`T_g`$ . However, for $`\lambda =0.5`$ Parisi et al. observed weakly non-monotonic behavior of the Binder parameter with $`L`$ for sizes that we now know to be in the region of the crossover. Since the existence of a spin glass like phase at finite temperature has now been definitively established, it would be instructive to carry out further careful Monte Carlo measurements for sample sizes well in the regime $`L>L_b`$ and over a range of $`\lambda `$ values. Is the physics of the $`2d`$ RCF above, at, and below the ordering temperature strictly analogous to that of the standard Edwards-Anderson ISG at dimensions where there is finite temperature ordering? To what extent could the RCF enlighten us concerning problems which in the Edwards-Anderson ISG context have remained contentious for more than twenty years? The fact that the RCF lives on a $`2d`$ lattice rather than in a higher dimension should facilitate understanding of the fundamental physics of ordering in complex systems. Finally, there may even be possible experimental realizations of systems where quasi-two-dimensional magnets form short range clusters with local ferromagnetic or antiferromagnetic order, with random frustrated interactions linking these clusters together. Examples of promising behaviour of this sort are Fe compounds with halogens , where it might be interesting to look at the data again in view of the present results.
## Acknowledgements AKH was supported by the Graduiertenkolleg “Modellierung und Wissenschaftliches Rechnen in Mathematik und Naturwissenschaften” at the Interdisziplinäres Zentrum für Wissenschaftliches Rechnen in Heidelberg, and by the Paderborn Center for Parallel Computing through the allocation of computer time. AKH obtained financial support from the DFG (Deutsche Forschungsgemeinschaft) under grant Zi209/6-1. IAC gratefully acknowledges very helpful discussions with Dr N. Lemke, and thanks Professor T. Shirakura for having shown him very interesting unpublished data. Meetings organized by the Monbusho collaboration ”Statistical physics of fluctuations in glassy systems” and by the ESF network ”Sphinx” played an essential role in the present work.
# Chaos around the superposition of a monopole and a thick disk ## Abstract We extend recent investigations on the integrability of oblique orbits of test particles under the gravitational field corresponding to the superposition of an infinitesimally thin disk and a monopole to the more realistic case, for astrophysical purposes, of a thick disk. Exhaustive numerical analyses were performed and the robustness of the recent results is confirmed. We also found that, for smooth distributions of matter, the disk thickness can attenuate the chaotic behavior of the bounded oblique orbits. Perturbations leading to the breakdown of the reflection symmetry about the equatorial plane, nevertheless, may enhance significantly the chaotic behavior, in agreement with recent studies on oblate models. The recent observational evidence suggesting that huge black-holes might inhabit the center of many active galaxies has motivated some investigations on the dynamics of test particles in gravitational systems consisting of the superposition of monopoles and disks. Infinitesimally thin disks are frequently used to model flattened galaxies. Some exact relativistic solutions describing the superposition of non-rotating black-holes and static thin disks, and their respective Newtonian limits, were presented and discussed in . In , the integrability of oblique orbits of test particles around the exact superposition of a black-hole and an infinitesimally thin static disk was considered. Bounded zones of chaotic behavior were found for both the relativistic and Newtonian limits. There are several examples in the literature of chaotic motion involving black-holes: in the fixed two centers problem, in a black-hole surrounded by gravitational waves, and in several core–shell models with relevance to the description of galaxies (see for a recent review). As to the Newtonian case, we notice, for instance, the recent work of C. Chicone, B. Mashhoon, and D. G. Retzloff on the chaotic behavior of the Hill system. The Newtonian analysis of has revealed an interesting property of the dynamics of oblique bounded orbits around the superposition of a monopole and an infinitesimally thin disk. Since one was mainly interested in bounded motions close to the monopole, it was assumed that the disk was infinite and homogeneous. This situation corresponds to the simplest superposition of a monopole and a disk. Using cylindrical coordinates $`(r,\theta ,z)`$ with the monopole, with mass $`M`$, located at the origin and the disk corresponding to the plane $`z=0`$, the gravitational potential is given by $$V(r,\theta ,z)=-\frac{M}{\sqrt{r^2+z^2}}+\alpha |z|,$$ (1) where $`\alpha `$ is a positive parameter standing for the superficial mass density of the disk. The angular momentum $`L`$ in the $`z`$ direction is conserved, and we can easily reduce the three-dimensional original problem to a two-dimensional one in the coordinates $`(r,z)`$ with the Hamiltonian given by $$H=\frac{\dot{r}^2}{2}+\frac{\dot{z}^2}{2}+\frac{L^2}{2r^2}-\frac{M}{\sqrt{r^2+z^2}}+\alpha |z|.$$ (2) The Hamiltonian (2) is smooth everywhere except on the plane $`z=0`$. Moreover, the parts of the trajectories restricted to the regions $`z>0`$ and $`z<0`$ are integrable.
The corresponding Hamilton-Jacobi equations restricted to these regions can be properly separated in parabolic coordinates, leading, respectively, to the second constants of motion $$C_{z>0}=R_z-\alpha \frac{r^2}{2}$$ (3) and $$C_{z<0}=R_z+\alpha \frac{r^2}{2},$$ (4) where $`R_z`$ is the $`z`$ component of the Laplace-Runge-Lenz vector. Note that $`C`$ is not smoothly defined on the disk. With the two constants of motion $`H`$ and $`C`$, the equations for the trajectories of test particles, restricted to the regions $`z>0`$ and $`z<0`$, can be properly reduced to quadratures in parabolic coordinates. A complete bounded oblique trajectory, therefore, corresponds to the matching of an infinite number of integrable trajectory pieces. Hence, the widespread zones of chaotic motion detected in have their origin in the changes in the value of the constant $`C`$ when the test particle crosses the disk $`z=0`$. As to the relativistic case, in contrast, the trajectory pieces that do not cross the disk are themselves non-integrable, leading to typically larger chaotic regions than in the corresponding Newtonian limit. Here, we study the robustness of the results for the Newtonian case by considering the more realistic case, for the description of flattened galaxies, of a superposition of a central monopole and a smooth thick disk with the potential (1) as the vanishing disk thickness limit. A smooth distribution of matter is considered as a disk if its radial gradients are much smaller than its vertical ones. A minimally realistic model for a rotating thick disk should obey Emden’s equation for the stability of rotating polytropes. As in , we will neglect the radial gradients, and, in this case, Emden’s equation for the disk matter density $`\rho (z)`$ states that $`(G=1)`$ $$\kappa \lambda \rho ^{\lambda -2}\rho ^{\prime \prime }+\kappa \lambda (\lambda -2)\rho ^{\lambda -3}(\rho ^{\prime })^2=-4\pi \rho ,$$ (5) where $`\kappa `$ is the parameter relating the pressure to the matter density in the polytropic equation of state and $`\lambda =1+1/n`$, $`n`$ being the polytrope index. The matter density $`\rho `$ is assumed to obey Poisson’s equation $`\nabla ^2V_\mathrm{D}=4\pi \rho `$. For the isothermal case ($`\lambda =1`$), equation (5) admits as a solution the following distribution of matter $$\rho (z)=\frac{\alpha }{4\pi z_0}\mathrm{sech}^2\frac{z}{z_0},$$ (6) which corresponds to the potential $$V_\mathrm{D}(z)=\alpha z_0\mathrm{ln}\mathrm{cosh}\frac{z}{z_0},$$ (7) where $`z_0`$ measures the disk “thickness”, and $`\alpha `$ its “superficial” mass density. They obey the relation $`2\kappa =\alpha z_0`$. Typically, for realistic models of a rotating dust disk one has $`z_0^2V_z^2`$. The matter density (6) corresponds, therefore, to a stable and smooth distribution of rotating matter concentrated on the plane $`z=0`$. Moreover, in the limit $`z_0\rightarrow 0`$, we recover from (7) the infinitesimally thin disk potential $`V_\mathrm{D}(z)=\alpha |z|`$ and the corresponding $`\delta `$ distribution of matter from (6). Thus, the dynamics of test particles moving around the superposition of our smooth thick disk and a monopole will be governed by the following smooth Hamiltonian $$H=\frac{\dot{r}^2}{2}+\frac{\dot{z}^2}{2}+\frac{L^2}{2r^2}-\frac{M}{\sqrt{r^2+z^2}}+\alpha z_0\mathrm{ln}\mathrm{cosh}\frac{z}{z_0}.$$ (8) We wish to stress that the potential in (8) corresponds, indeed, to a first approximation of a realistic superposition of a monopole and a smooth thick disk with matter distribution given by (6).
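As a quick consistency check on Eqs. (6) and (7) (not part of the original analysis), the short Python sketch below verifies numerically that the isothermal profile and its potential satisfy the one-dimensional Poisson equation $`V_\mathrm{D}^{\prime \prime }=4\pi \rho `$; the values of $`\alpha `$ and $`z_0`$ are arbitrary.

```python
import numpy as np

alpha, z0 = 0.1, 1.5                                          # arbitrary illustrative parameters
z = np.linspace(-5.0, 5.0, 2001)
dz = z[1] - z[0]

rho = alpha / (4.0 * np.pi * z0) / np.cosh(z / z0) ** 2       # Eq. (6)
V_D = alpha * z0 * np.log(np.cosh(z / z0))                    # Eq. (7)

# Second derivative of V_D by central differences.
d2V = np.gradient(np.gradient(V_D, dz), dz)

# Away from the grid edges, d2V should equal 4*pi*rho up to discretization error.
print(np.max(np.abs(d2V[5:-5] - 4.0 * np.pi * rho[5:-5])))
```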
The whole superposition must also obey Emden’s equation; in the present case it fails to do so at the origin. We are neglecting the radial gradients of the matter distributions caused by the stresses induced by the central monopole. The Hamiltonian (8) does not belong to the class of integrable two-dimensional potentials with a second constant of motion polynomial in the momenta. We will present strong evidence that (8) does not have a second constant of motion at all. We could solve numerically the system governed by (8) with great accuracy. Figure 1 shows a typical Poincaré section $`(H=-0.2,L=M=1,\alpha =0.1,z_0=1.5)`$ across the plane $`z=0`$ revealing a widespread chaotic behavior. The disk thickness $`z_0`$, in this case, has the same magnitude as the typical $`z`$-amplitude of the trajectories. Figure 2 shows a sequence of low-energy sections $`(H=-0.3,L=M=1,\alpha =0.1,z_0=0.0(\mathrm{a}),0.1(\mathrm{b}),0.25(\mathrm{c}),\mathrm{and}0.5(\mathrm{d}))`$, constructed from the same trajectory initial conditions, where the attenuation of the chaotic behavior due to the disk thickness can be clearly appreciated. We could obtain thousands of intersections for each trajectory with a cumulative error, measured by the constant $`H`$, smaller than $`10^{-12}`$. We notice that the equations of motion are invariant under the following rescalings: $$r\rightarrow \lambda r,\quad z\rightarrow \lambda z,\quad t\rightarrow \lambda ^{3/2}t,$$ (9) $$\alpha \rightarrow \lambda ^{-2}\alpha ,\quad M\rightarrow M,\quad z_0\rightarrow \lambda z_0$$ (10) $$H\rightarrow \lambda ^{-1}H,\quad L\rightarrow \lambda ^{1/2}L;$$ (11) and $$r\rightarrow \lambda ^{\prime }r,\quad z\rightarrow \lambda ^{\prime }z,\quad t\rightarrow \lambda ^{\prime }t,$$ (12) $$\alpha \rightarrow \lambda ^{\prime -1}\alpha ,\quad M\rightarrow \lambda ^{\prime }M,\quad z_0\rightarrow \lambda ^{\prime }z_0$$ (13) $$H\rightarrow H,\quad L\rightarrow \lambda ^{\prime }L;$$ (14) $`\lambda >0`$ and $`\lambda ^{\prime }>0`$, implying that, for each triple of nonzero parameters $`(M,\alpha ,z_0)`$, one has, in fact, only one free parameter, namely $`z_0\sqrt{\alpha /M}`$. The limit of $`z_0`$ much larger than the typical $`z`$-amplitude of the trajectories deserves special attention. For this case, the Hamiltonian can be well approximated by $$H_{\mathrm{}}=\frac{\dot{r}^2}{2}+\frac{\dot{z}^2}{2}+\frac{L^2}{2r^2}-\frac{M}{\sqrt{r^2+z^2}}+\frac{\beta }{2}z^2,$$ (15) where $`\beta =\alpha /z_0`$. Such a potential is related to the potential of the Kepler problem perturbed by a quadrupole halo potential considered in . Figure 3 shows a typical Poincaré section $`(H_{\mathrm{}}=-0.15,L=M=1,\beta =0.1)`$ across the plane $`z=0`$ revealing a widespread chaotic behavior for the system governed by (15). We could also obtain thousands of intersections for each trajectory with a cumulative error smaller than $`10^{-12}`$. As in the infinitesimally thin disk case, due to the existence of two rescaling invariances, this Poincaré section can be obtained for any non-zero values of $`\beta `$ and $`M`$. Our numerical findings confirm the robustness of the results presented in . The chaotic behavior of bounded orbits can be considered as inherent to any system consisting of the superposition of a disk and a central monopole. We stress that the superpositions we have considered are symmetric under reflection about the equatorial plane. A perturbation leading to the breakdown of the reflection symmetry could increase significantly the chaotic behavior, as the recent studies on oblate models have suggested.
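For readers who wish to reproduce sections of this kind, the following Python sketch integrates Hamilton's equations derived from (8) and records the upward crossings of the plane $`z=0`$. The model parameters match the $`L=M=1`$, $`\alpha =0.1`$, $`z_0=1.5`$ case quoted above, but the initial condition (and hence the energy) is arbitrary, and no attempt is made to reach the $`10^{-12}`$ accuracy of the text.

```python
import numpy as np
from scipy.integrate import solve_ivp

M, Lz, alpha, z0 = 1.0, 1.0, 0.1, 1.5

def rhs(t, y):
    r, z, pr, pz = y
    s = (r * r + z * z) ** 1.5
    return [pr,
            pz,
            Lz**2 / r**3 - M * r / s,                  # -dV/dr
            -M * z / s - alpha * np.tanh(z / z0)]      # -dV/dz

def cross_plane(t, y):                                 # Poincare section z = 0
    return y[1]
cross_plane.direction = 1.0                            # record upward crossings only

def energy(y):
    r, z, pr, pz = y
    return (0.5 * (pr**2 + pz**2) + Lz**2 / (2 * r**2)
            - M / np.sqrt(r**2 + z**2) + alpha * z0 * np.log(np.cosh(z / z0)))

y0 = np.array([1.2, 0.0, 0.0, 0.35])                   # arbitrary initial condition
sol = solve_ivp(rhs, (0.0, 5.0e3), y0, events=cross_plane, rtol=1e-10, atol=1e-12)

section = sol.y_events[0][:, [0, 2]]                   # (r, p_r) at each crossing
print(len(section), "crossings; energy drift =", energy(sol.y[:, -1]) - energy(y0))
```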
We could indeed check this by considering a dipole-like perturbation of the Hamiltonian (8) $$\stackrel{~}{H}=H-D\frac{z}{(r^2+z^2)^{3/2}}.$$ (16) Figure 4 presents a typical Poincaré surface $`(\stackrel{~}{H}=-0.3,L=M=1,\alpha =0.1,z_0=0.25,D=5\times 10^{-2})`$ for this case. The zones of chaotic motion are larger than in the corresponding case for which $`D=0`$ (Fig. 2c). However, the central family of regular orbits, if it does exist for $`D=0`$, seems to be robust against dipole-like perturbations. Analogous conclusions hold also for the large $`z_0`$ case. ###### Acknowledgements. The author is grateful to CNPq and FAPESP for the financial support, and to Prof. P.S. Letelier and R. Venegeroles for stimulating discussions.
# Biases in Expansion Distances of Novae Arising from the Prolate Geometry of Nova Shells ## 1 INTRODUCTION The distance to a classical nova in the Galaxy is best inferred by comparing the angular size of the resolved nova shell with the size calculated from its rate of expansion and the time since the shell was ejected. However, if nova shells are ellipsoids of revolution (spheroids) rather than spherical, then the concept of “angular size” is ambiguous, and the expansion velocity along the line of sight does not correspond to the transverse expansion velocity. Thus the use of formulas that are valid in the spherical case will lead to erroneous distance estimates. Individual distance estimates may be too large or too small, depending on the true axis ratio of the nova shell and its inclination to the line of sight. Furthermore, these errors do not necessarily average toward zero when ensemble averages are taken. In this paper, we consider systematic errors in estimates of nova shell expansion distances, and recommend procedures to minimize the errors. ### 1.1 The Usefulness of Nova Distances As with all classes of astronomical objects, our understanding of the classical nova phenomenon depends on having accurate estimates of the distances of these objects. In turn, having a well-founded understanding of the distances and luminosities of novae allows them to be studied as astrophysical objects and to be exploited for other purposes. The need for accurate distances is evident, both for novae taken individually and for novae used collectively, i.e., in a statistical fashion. An accurate distance to an individual nova, combined with good coverage of the outburst light curve and a knowledge of the interstellar extinction, can allow the theory of the nova outburst to be verified and further developed. For example, it is possible to check whether there is a phase after maximum light during which the nova’s luminosity is close to the Eddington limit. All inferences about the mass of the shell that is ejected during the outburst depend on some power of the distance, through establishing the volume occupied by the emitted gas. At late stages in the evolution of a nova, when the shell can be resolved from the central binary star, the distance is needed to convert the angular size of the shell into a linear size, so that the physical conditions in the ejected gas (ionization, excitation) can be related to the ionizing flux from the central white dwarf (the post-nova) and its accretion disk. An accurate distance to a classical nova also allows the modeling of the accretion process in the post-nova system; without a distance constraint (leading to a constraint on the luminosity), it has proven impossible to infer uniquely the mass accretion rate onto the white dwarf (Wade 1988, Wade & Hubeny 1998). Treated collectively, novae have the possibility of providing a secondary or even primary distance indicator for the extragalactic distance scale. Since novae are present in galaxies of all Hubble types, they have the potential to be used directly to compare and unite the distance scales of spiral and elliptical galaxies. The so-called maximum magnitude – rate of decline (MMRD) relation gives the visual absolute magnitude at maximum light, from a measurement of the rate of decline after maximum (or equivalently the time taken to decline 2 or 3 magnitudes). 
The shape of the mean MMRD curve and the dispersion around this mean relation have been found from observations in external galaxies (e.g., Della Valle & Livio 1995). For the MMRD relation to be a primary distance indicator, however, the zero-point calibration must be provided by Galactic novae. Another proposed distance indicator is the absolute magnitude at 15 days past maximum light ($`M_{15}`$), where the dispersion in absolute magnitude is small for all novae taken without regard to speed class. The same remarks about zero-point calibration apply to this method.<sup>1</sup><sup>1</sup>1As one step in calibrating the MMRD relation, Cohen (1985) adjusted $`M_V`$ at maximum for her best observed novae, assuming that they had identical $`M_{15}`$; this step should be replaced by actually measuring the dispersion in $`M_{15}`$. Even the nearest Galactic novae are generally too distant for direct trigonometric parallax measurements. Instead, indirect methods of distance estimation are often used, based on the Galactic rotation curve, the total amount of interstellar reddening, the presence or absence of discrete components (“clouds”) in interstellar absorption lines, etc. The only geometrical (hence fundamental) method is that of “expansion distance” (also referred to as “expansion parallax”), in which the measured angular size of the resolved nova shell is compared with the linear size of the shell; the latter is calculated from the expansion speed of the shell gas and the known time since the outburst. In her work on the MMRD relation, Cohen (1985) had only eleven novae with well-observed expansion distances and suitable coverage of their light curves. This is largely because the surface brightness of a nova shell declines rapidly with time since outburst, so that by the time the shell is large enough to be resolved from the ground, it is often too faint to observe. Since Cohen’s study, several additional novae have had good light curve coverage, and expansion distances may become available for these from ground-based observations and especially from the Hubble Space Telescope<sup>2</sup><sup>2</sup>2Narrow-band imagery of recent novae has been carried out with the HST Wide Field/Planetary Camera 2, used in “snapshot” mode, as part of program 7386; these images are public. HST imagery of somewhat older nova shells is discussed in Gill & O’Brien (2000)., since the latter can resolve some shells within a few months or years of the outburst. The simplest way to derive an expansion distance is to assume (often implicitly) that the nova shell is expanding spherically symmetrically, hence that the transverse velocity of gas in the plane of the sky is the same as the radial velocity of gas moving directly along the line of sight. What if the ejection of the shell is asymmetric? To be specific, suppose the shell expands as a spheroid, the simplest generalization from the spherically symmetric case, and one that suffices to describe many actual nova shells. First, the projected image of the nova shell will not be circular for most orientations, and thus there will be an ambiguity in what is meant by angular size. Second, the maximum velocity along the line of sight will usually not correspond to either the “polar” or “equatorial” expansion velocity. For example, suppose that the angular size is taken to be the largest projected “radius” of the nova shell, which is perhaps the easiest size parameter to estimate on a barely resolved image. 
If all nova shells were oblate, then the calculated expansion distance (based on the assumption of spherical symmetry) would always be less than the true distance, because the line-of-sight velocity would be smaller than the transverse expansion velocity. The resulting nova distance scale would be too short. On the other hand, if all nova shells are prolate, than the distance to an individual object may be underestimated or overestimated, depending on the orientation and the ratio of major and minor axes. While it is clear that the distance to an individual nova can be in error as a result, it was not made clear until the work of Ford & Ciardullo (1988; hereinafter FC88) that in the prolate case, a systematic error might remain, even after averaging over an ensemble of novae that are taken to be randomly oriented in space. In their analysis, FC88 made the assumption stated above as an example, that the angular size of the nova shell is taken to be the major axis of the projected image. However, all workers do not make this identification. For example, Cohen & Rosenthal (1983) did use the projected semimajor axis for the angular size, but Cohen (1985) used an angle-averaged radius. What way is best? A goal of this paper is to extend the FC88 analysis to include consideration of six distinct yet plausible ways of defining the angular size of the shell. As the number of Galactic novae with well observed light curves increases, and with the much greater resolving power provided by adaptive optics and HST, it is likely that the calibration of the MMRD and $`M_{15}`$ relations will be improved, but the question of possible systematic errors in the distances becomes more important. This is especially so, if shell morphology is related to nova speed class, as has been suggested by Slavin, O’Brien, & Dunlop (1995). Likewise, as more expansion distances for individual novae become available, it is important to have clearly in mind whether and how much these distances may be in error, as the result of measuring uncertainties and modeling assumptions. ### 1.2 Prolate or Oblate? Theoretical arguments have been made favoring both oblate and prolate geometries for nova shells (Porter, O’Brien, & Bode 1998 and references therein). Empirically, it is now the consensus that, to the extent that nova shells can be described by spheroids, they are either prolate or spherical, but not oblate. FC88 discussed the few cases of resolved nova shells known at the time, in terms of whether they were elongated along one axis (prolate spheroids) or compressed along one axis (oblate spheroids). FC88 noted that most data on the shapes of nova shells were consistent with spherical or prolate geometries, but categorized the shell of nova HR Del 1967 as oblate. For this object, early models by Hutchings (1972) and Soderblom (1976) indeed suggested an oblate symmetry. A spatio-kinematic model by Solf (1983), however, has clearly shown that the resolved shell of HR Del is consistent with a prolate geometry, and not consistent with being oblate. Slavin, O’Brien, & Dunlop (1995) carried out imaging of nova shells using narrow band filters; in particular they have obtained images at several different tilts of an interference filter with nominal wavelength 6560 Å (17 Å FWHM), which allowed them to distinguish crudely between gas approaching or receding from the observer. Their data are clearly consistent with the shells being prolate, not oblate, if they depart detectably from spherical symmetry. 
Other spatio-kinematic studies, for example of the shell around nova DQ Her 1934 (e.g. Herbig & Smak 1992) also indicate prolate symmetry. Therefore we proceed with the assumption that to first approximation, nova shells are prolate spheroidal shells, with their properties in projection specified by their axis ratio and the inclination of the polar axis to the line of sight. In Section 2 of this paper, we investigate several different ways of defining the angular size of the resolved nova shell. We derive the projected size and shape of the shell and the maximum radial velocity of gas in the shell, as functions of the intrinsic axis ratio and the inclination of the polar axis. We then derive analytic expressions that give the inferred distance in terms of the true distance, as a function of axis ratio and inclination. We tabulate results for a variety of cases, using six definitions of “angular size.” In Section 3 we investigate how these various definitions of expansion distance behave, both for individual objects and when averaged over an ensemble of nova shells oriented randomly in space. We also discuss some practical matters relating to the measurement of the expansion speed and angular size of nova shells. We summarize our findings in Section 4. ## 2 EXPANSION DISTANCE ESTIMATORS FOR PROLATE SPHEROIDAL NOVA SHELLS When viewed at an inclination angle $`i`$, a prolate spheroid at distance $`d`$ and with principal axis ratio $`b/a\le 1`$ will appear projected on the sky as an ellipse with apparent axis ratio $`b_{\ast }/a_{\ast }\le 1`$. We define several quantities, which appear repeatedly in the discussion to follow: $$f_1=\sqrt{1-e^2\mathrm{sin}^2i}$$ $$f_2=\sqrt{1-e^2\mathrm{cos}^2i}$$ $$f_3=\sqrt{1-e^2}=b/a$$ The auxiliary quantity $`e`$ is the “eccentricity” of the prolate spheroid, in the sense that $`b^2=a^2(1-e^2)`$ relates the major and minor axes, $`a`$ and $`b`$ respectively, of an ellipse. We have the following relations between the axes of the spheroid in space, $`a`$ and $`b`$, and the (linear) principal axes of the projected ellipse, $`a_{\ast }`$ and $`b_{\ast }`$ (see Appendix A). $$a_{\ast }=f_2a$$ $$b_{\ast }=b=f_3a=(f_3/f_2)a_{\ast }$$ Let $`v_0`$ denote the expansion speed along the major (polar) axis of the spheroid. Then from Appendix B the maximum projected (line-of-sight) speed is $$v_{\mathrm{max}}=f_1v_0=f_1(a/t),$$ where $`a`$ is the semimajor axis of the spheroid when the age of the nova remnant is $`t`$. (Constant expansion speed is assumed.) Also, a distance $`x`$ measured in the plane of the sky corresponds to an angle $`\rho _x=x/d`$ (radians), where $`d`$ is the true distance of the nova. The essence of the expansion distance method is to compare an estimate of the linear size of the nova shell, $`vt`$, with an estimate of the angular size, $`\rho `$. The estimator formula $$\widehat{d}=\frac{v_{\mathrm{max}}t}{\rho },$$ (1) where $`\rho `$ is any angular radius, recovers the true distance $`d`$ in the case of a spherically symmetric expanding shell. This is because $`v_{\mathrm{max}}=v_0`$ (by symmetry) and all angular radii are equal to $`a/d`$ where $`a`$ is the true linear size of the shell at time $`t`$: $$\widehat{d}=\frac{v_{\mathrm{max}}t}{\rho }=\frac{v_0t}{(a/d)}=d.$$ (2) For a prolate spheroid, in general $`v_{\mathrm{max}}\ne v_0`$, and there is no unique measure of the angular size $`\rho `$. Given independent knowledge of $`i`$, the apparent ratio $`b_{\ast }/a_{\ast }=f_3/f_2=(1-e^2)^{1/2}/(1-e^2\mathrm{cos}^2i)^{1/2}`$ can be inverted to find $`e`$.
In this case equation (1) can be used, with corrections to convert the measured speed $`v_{\mathrm{max}}`$ into $`v_0`$ and the apparent semimajor axis $`\rho _1=a_{\ast }/d`$ into $`a/d`$, to recover the true distance. In symbols, $$\widehat{d}=\frac{(v_{\mathrm{max}}/f_1)t}{(\rho _1/f_2)}=\frac{v_0t}{(a/d)}=d.$$ (3) In general, however, the inclination of the spheroid in space is not known, so the correction factors are not known, and the simple formula $`\widehat{d}=v_{\mathrm{max}}t/\rho _1`$ does not recover $`d`$. Furthermore, there is no reason any longer to define $`\rho `$ as the apparent semimajor axis; the apparent minor axis or some average indicator of the apparent size of the nova shell could be used instead. We consider six possible definitions of $`\rho `$, and for each we investigate how large an error is made in estimating $`d`$ using equation (1). This question is addressed both for individual novae, with particular values of $`b/a`$ and $`i`$, and for statistical ensembles of nova shells, where averages are taken over random orientations for a fixed value of $`b/a`$. The six choices for $`\rho `$ are: $$\begin{array}{ccccc}\rho _1\hfill & =& a_{\ast }/d\hfill & =& f_2\times a/d\hfill \\ \rho _2\hfill & =& b_{\ast }/d\hfill & =& f_3\times a/d\hfill \\ \rho _3\hfill & =& (a_{\ast }+b_{\ast })/2d\hfill & =& (f_3+f_2)/2\times a/d\hfill \\ \rho _4\hfill & =& \sqrt{a_{\ast }b_{\ast }}/d\hfill & =& \sqrt{f_3f_2}\times a/d\hfill \\ \rho _5\hfill & =& 2b_{\ast }K(k)/\pi d\hfill & =& 2f_3K(k)/\pi \times a/d\hfill \\ \rho _6\hfill & =& 2(a_{\ast }^{-1}+b_{\ast }^{-1})^{-1}/d\hfill & =& 2(f_2^{-1}+f_3^{-1})^{-1}\times a/d\hfill \end{array}$$ The first two choices are the apparent major and minor axes of the projected ellipse. The third choice is the arithmetic mean of $`\rho _1`$ and $`\rho _2`$. The fourth choice is the geometric mean of these. The fifth definition, $`\rho _5`$, is the angle-averaged apparent “radius” of the shell. Here $`K(k)`$ is the complete elliptic integral of the first kind (see Appendix C). The argument $`k`$ is given by $`k^2=1-(k^{\prime })^2=1-(b_{\ast }/a_{\ast })^2=1-(f_3/f_2)^2.`$ The sixth choice is the harmonic mean of $`\rho _1`$ and $`\rho _2`$. Corresponding to each $`\rho _i`$ is a distance estimator $`\widehat{d}_i=v_{\mathrm{max}}t/\rho _i`$. Given in Table 1 are $`k^{\prime }=b_{\ast }/a_{\ast }`$, $`v_{\mathrm{max}}/v_0`$, $`a_{\ast }/a`$, and $`\widehat{d}_j/d,(j=1,2,\mathrm{},5)`$ for several combinations of true axis ratio $`b/a`$ and inclination $`i`$. Since $`\widehat{d}_6/d=(\widehat{d}_1+\widehat{d}_2)/2d`$, it is not shown separately in Table 1. The probability that the polar axis of a randomly oriented spheroid makes an angle between $`i`$ and $`i+di`$ with the line of sight is $`P(i)di=(\mathrm{sin}i)di`$. Each of the estimates $`\widehat{d}_j`$ can be averaged over angle with $`P(i)`$ as the weighting function: $$\langle \widehat{d}_j\rangle =\int _0^{\pi /2}P(i)\widehat{d}_j(i)di.$$ (4) The $`P(i)`$–weighted average values of $`\langle \widehat{d}_j\rangle /d`$ were computed numerically and are shown in Table 1. For example, $`\langle \widehat{d}_1\rangle /d=0.937`$ for $`b/a=0.80`$. ## 3 RESULTS AND DISCUSSION ### 3.1 The Typical Size of Errors in Expansion Distances Inspection of Table 1 reveals several features of the various distance estimators $`\widehat{d}_i`$. Every estimator gives the correct distance for spherically symmetric nova shells, as expected. For prolate nova shells, every estimator overpredicts the distance when the polar axis is close to the line of sight, because $`v_{\mathrm{max}}`$ does not correspond to the projected angular size.
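A short numerical sketch (ours, written for illustration; it is not the code used to build Table 1) shows how the ratios $`\widehat{d}_j/d`$ and their orientation averages follow from the formulas above; for $`b/a=0.80`$ the $`\mathrm{sin}i`$-weighted average of $`\widehat{d}_1/d`$ should come out near the quoted value 0.937.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import ellipk

def d_over_dtrue(b_over_a, i):
    """Ratios d_hat_j / d_true, j = 1..6, for intrinsic axis ratio b/a and inclination i."""
    e2 = 1.0 - b_over_a**2
    f1 = np.sqrt(1.0 - e2 * np.sin(i)**2)     # v_max / v_0
    f2 = np.sqrt(1.0 - e2 * np.cos(i)**2)     # a_* / a
    f3 = np.sqrt(1.0 - e2)                    # b_* / a
    k2 = 1.0 - (f3 / f2)**2                   # k^2 = 1 - (b_*/a_*)^2
    g = [f2, f3, 0.5 * (f2 + f3), np.sqrt(f2 * f3),
         2.0 * f3 * ellipk(k2) / np.pi,       # SciPy's ellipk takes the parameter m = k^2
         2.0 / (1.0 / f2 + 1.0 / f3)]
    return [f1 / gj for gj in g]              # d_hat_j / d = (v_max/v_0) / (rho_j d / a)

def orientation_average(b_over_a, j):
    """<d_hat_j>/d for random orientations, weight P(i) = sin(i)."""
    integrand = lambda i: np.sin(i) * d_over_dtrue(b_over_a, i)[j - 1]
    val, _ = quad(integrand, 0.0, np.pi / 2.0)
    return val

print(orientation_average(0.80, 1))           # expect a value close to 0.94
```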
In this case, all estimators are equally poor, and increasingly so as the axis ratio $`b/a`$ becomes more extreme. All estimators except $`\widehat{d}_2`$ underpredict the distance for prolate nova shells when the polar axis is close to the plane of the sky. In this case, the error increases as the axis ratio decreases, but $`\widehat{d}_5`$ and $`\widehat{d}_6`$ are better than the other estimators. Finally, when considering an ensemble average of shells with random orientations, the best average distance is produced using $`\widehat{d}_3`$, which is based on the straight mean of the projected semimajor and semiminor axes. Since this is true for each intrinsic axis ratio considered separately, it will also be true for an ensemble of randomly oriented nova shells that has a mixture of axis ratios. Observations of resolved nova shells show a variety of projected axis ratios. For example, nova DQ Her 1934 has a projected axis ratio of $`k^{}=b_{}/a_{}=0.73`$ (Herbig & Smak 1992), while nova FH Ser 1970 has $`k^{}=0.91`$ (Slavin, O’Brien, & Dunlop 1995). Nova HR Del 1967 has $`k^{}=0.56`$ according to Solf (1983). Slavin et al. (1995) find $`k^{}=0.75`$ for HR Del viewed in H$`\alpha `$, but a more elongated image (same projected major axis, shorter projected minor axis) in the light of \[O III\].<sup>3</sup><sup>3</sup>3Slavin, O’Brien, & Dunlop (1994) discuss the different appearance of HR Del in different ions as perhaps arising from density or composition differences of the gas ejected in different directions from the central star. Another possibility is a difference in ionizing flux between the poles and the equator of the shell, with the equator being shielded by the accretion disk around the star (cf. the discussion of nova FH Ser 1970 by Gill & O’Brien 2000). Solf finds the intrinsic axis ratio for HR Del to be $`b/a=1/3`$, possibly the most extreme of the well-studied shells. Thus Table 1 covers most of the axis ratios observed to date. If there were a shell with $`b/a=0.4`$ viewed pole-on, the expansion distance method would yield an estimated distance a factor of 2.5 larger than the true distance, regardless of the specific estimator used. The median inclination of the polar axis is 60, and 80 per cent of all randomly oriented nova shells should have inclinations between 26 and 84. Thus typical individual errors in nova expansion distances will be of order tens of percent, unless the inclination and axis ratio are known. Barring the tell-tale evidence of high inclination, given by an eclipsing binary system, the only way to derive the inclination of a nova shell is to construct a so-called spatio-kinematic model, in which spectra from multiple positions across the resolved shell are simultaneously modeled (e.g., HR Del, Solf 1983; DQ Her, Herbig & Smak 1992, Gill & O’Brien 2000). ### 3.2 Practicalities of Measuring Nova Shells The nova shell literature contains a mixture of techniques for measuring the expansion velocities and angular sizes. Given that nova shells typically are bright enough to detect only for a few decades after outburst and hence have small angular sizes if they are resolved, this is not surprising. Nevertheless, it is clear that velocities and angular sizes have not always been combined in a self-consistent fashion, even in cases where the inclination and axis ratio are known. 
Herbig & Smak (1992) discuss this point at length for the case of nova DQ Her 1934, comparing several different spatio-kinematic studies that arrived at widely differing distance estimates. For DQ Her the issue is that the shell has a finite thickness; it is therefore possible erroneously to combine a velocity measured from the extreme outer edge with, say, a size measured from the ridge line of an image, which represents a position within the shell. Martin (1989) also discusses this point. To avoid confusion, it is highly desirable for observers reporting either a line-of-sight expansion velocity or an angular size to be explicit about exactly what was measured. We suggest that angular size measurements be referred to a contour level that encloses a stated percentage of the nova shell flux. (Care needs to be taken, if imaging is done through a narrow-band filter that excludes part of the shell emission.) Likewise, emission line velocities may be measured at a level above the continuum such that a stated percentage of the total line flux is at less extreme velocities. If image deconvolution methods (e.g., the MEM and CLEAN algorithms) are used to “sharpen” the image, special care needs to be taken in determining and reporting any effect this has on measurements of the “edge” of a nova shell, especially if the method does not conserve flux. In addition to avoiding possible ambiguities caused by the thickness of the shell, authors need to be clear about what sort of angular size is being reported. Cohen & Rosenthal (1983) used $`\rho _1`$, the projected semimajor axis of the shell, and FC88 modeled the systematic errors in nova expansion distances assuming $`\rho _1`$, as it is perhaps the easiest to measure for barely resolved shells. Cohen (1985), however, derived angular sizes by carefully modeling the appearance of a (spherically symmetric) shell superimposed on a central star, taking into account the point spread function of the optical system. This technique gives $`\rho _5`$. Others, e.g., Shin et al. (1998), have been less careful, merely measuring the FWHM of the nova image and subtracting a stellar FWHM in quadrature to obtain a “characteristic” size. While this method is adequate to demonstrate angular extension of the shell, it is essentially useless for distance estimation, since it wrongly assumes that the profile of a seeing-convolved image of a shell is gaussian, and it does not take into account light from the central star. Additional complications arise if the projected image of the nova shell is elongated but not strictly elliptical, or if the outline of the shell is incomplete. Herbig & Smak (1992) demonstrated that the “equator” of the DQ Her shell is constricted in both angular extent and velocity, and were careful to distinguish the measured equatorial velocity from the modeled minor-axis velocity of the spheroid. If there is emission from gas beyond the main elliptical outline, as reported for DQ Her (Slavin, O’Brien, & Dunlop 1995) and RR Pic (Gill & O’Brien 1998), care must be taken that the extreme velocity used to estimate distance corresponds to the shell from which the angular size was taken, and not the extended halo. Furthermore, if the spheroidal shell is incomplete, consisting only of an equatorial “belt” and polar “blobs”, then for most inclination angles, there will be no emitting gas at the internal angle $`\theta `$ (see Figure 2) that would correspond to $`v_{max}`$ in a filled shell. 
Likewise, there may not be any gas emitting at the tangent to the line of sight, which defines the projected major axis for a complete shell (Appendix A). In such a case, our prescription for distance estimators formally breaks down. However, FC88 give a partial discussion of this case in the limit of a narrow equatorial belt and small polar caps, giving expressions for $`\widehat{d}_1/d`$ and its average for random orientations. FC88 find that individual nova distances for the “belt and blobs” case can be over- or underestimated, up to the same extremes as for the complete shell case, and the angle-averaged distance is underestimated by an amount very similar to the complete shell case discussed here. Another complication arises from the fact that some nova shells have been observed to change their projected shape in the few years immediately after outburst. A notable case is that of nova V1974 Cyg 1992, which was observed with long-baseline interferometry at radio frequencies (Hjellming 1996). The radio data suggest a model in which the outer and inner faces of the expanding shell have different (triaxial) ellipsoidal representations. Moreover, careful models constructed to account for radio observations of novae (Hjellming et al. 1979; Seaquist et al. 1980) demonstrate that the entire expanding gas cloud does not contribute equally to the optical emission line radiation, since the shell is likely more dense at the inner boundary. The emissivities of the various ions and the fraction of the expanding shell that contributes to the flux both change with time, as the nova shell expands and the post-nova central binary (which photoionizes the shell) recovers from the eruption. The size and morphology of a given nova shell image may differ, depending on which transition is used to image it. As discussed in Section 3.1, HR Del differs considerably in appearance, depending on whether the shell is imaged in $`\lambda `$5007 \[O III\] or in $`\lambda \lambda `$6548/6563/6584 H$`\alpha `$+\[N II\]. For all these reasons, it is best to measure the expansion velocity and the angular size of a shell contemporaneously if possible; this will assure that the same parcels of gas are used for both measurements. Finally, since the central star of a nova system is a cataclysmic binary, it is possible to conflate an emission line from an accretion disk or magnetically channeled accretion flow in the binary with the same line (or a nearby one) from gas in the nova shell. Since orbital speeds of gas in the inner accretion disk may be of order $`10^3`$ km s<sup>-1</sup>, there is the possibility to mis-measure the expansion velocity of the shell, unless care is taken to measure $`v_{max}`$ from a long-slit spectrum oriented along the projected major axis. (Recall from Appendix B that the point of the projected image at which $`v_{max}`$ is attained is generally not the same point as the central star.) These examples of traps and complications in the estimation of nova expansion distances are intended to reinforce a plea that observers give a full description of their methods and results. Depending on the particular nova, the information available, and the goals of the study, one method of proceeding or another may be appropriate — perhaps there is no single way to estimate an expansion distance that is best in every instance. To allow results from different studies to be combined correctly, the need for careful documentation is evident. 
## 4 SUMMARY AND CONCLUSION We have reviewed and formalized the method of nova shell expansion distances as a means of estimating the distances to classical novae. This method combines a measurement of the shell expansion velocity (multiplied by the time since outburst) with some measure of the angular size. Expansion distances for novae underlie the calibration of the MMRD and $`M_{15}`$ relations and also form the basis for astrophysical studies of individual novae and their remnants. It is therefore important to adopt methods of measurement that minimize any possible bias in the distances that results from incomplete information about the shape or orientation of the nova shells. Many resolved shells exhibit significant prolate symmetry, so that there is no unique angular size except when the shell is seen pole-on. We developed analytic expressions for the maximum line-of-sight velocity from a complete, expanding prolate spheroidal shell and for its projected major and minor axes, as functions of the intrinsic axis ratio and the inclination of the polar axis to the line of sight. For six definitions of “angular size”, we then computed the error introduced by deriving a distance using the assumption of spherical symmetry (i.e., without correcting for inclination and axis ratio). The errors can be significant and possibly systematic, affecting studies of novae whether considered individually or statistically. The definition of angular size that results in the least errors at the extremes is $`\rho _6`$, the harmonic mean of the projected semimajor and semiminor axes. However, the definition that results in the least bias when an ensemble of randomly oriented prolate shells is considered is $`\rho _3`$, the straight mean of the projected semimajor and semiminor axes, and we recommend this method when individual inclinations and axis ratios cannot be ascertained. The $`\rho _3`$–based method is always as good or better than the $`\rho _1`$ method (projected semimajor axis alone). The best individual expansion distances result from a full spatio-kinematic modeling of the nova shell, using spectroscopy of emission lines at multiple locations across the resolved shell. We have discussed practical issues and made recommendations for observers who make measurements of either the maximum line-of-sight velocity or the angular size of a resolved nova shell. The velocity measurement may be complicated by the presence of line emission from the central cataclysmic binary star, and if the spheroidal shell is not complete, the theoretical maximum velocity may not be observed at all. The correct application of angular size measurements can be compromised by convolution with the image point spread function, by improper technique, or by incomplete reporting. For best results, velocity and angular size measurements should be made contemporaneously, and must refer to the same features of the shell. Observers are encouraged to report as completely as possible the measurements they have made. Estimates of nova distances by the shell expansion method (or any other method) should be accompanied by a discussion of both random and systematic errors, including possible effects due to unaccounted-for departures from spherical symmetry, as discussed in this paper. Support for this work was provided by NASA through grant number GO-07386.01 from the Space Telescope Science Institute, which is operated by AURA, Inc. under NASA contract NAS 5-26555.
This research has made use of the Simbad database, operated at CDS, Strasbourg, France. ## Appendix A THE PRINCIPAL AXES OF THE PROJECTED ELLIPSE A prolate spheroidal shell centered on the origin, with its major axis aligned with the $`z`$ axis, is described by the equation: $$\frac{x^2}{b^2}+\frac{y^2}{b^2}+\frac{z^2}{a^2}=1$$ with $`b<a`$. The eccentricity of the ellipse, $`e`$, is defined by $`b^2=a^2(1-e^2)`$. The observer’s line of sight, taken to be in the $`xz`$ plane, makes an angle $`i`$ with the $`z`$ axis (polar axis). This observer sees a projected ellipse with semimajor axis $`a_{\ast }`$ and semiminor axis $`b_{\ast }`$. The prolate symmetry around the $`z`$ axis gives the result: $$b_{\ast }=b=a\sqrt{1-e^2}.$$ The semimajor projected axis $`a_{\ast }`$ can be found using the geometry shown in Figure 1. The intersection of the spheroidal surface and the $`xz`$ plane is an ellipse described by $$\frac{x^2}{b^2}+\frac{z^2}{a^2}=1,$$ or $$x^2=a^2(1-e^2)-z^2(1-e^2).$$ The line of sight is described generally by $$z=c+x\mathrm{cot}i.$$ The tangent line of sight passes through point $`A`$, and for this line $`c`$ is defined by the condition that the line intersects the ellipse exactly once. Using the equation of the line to substitute for $`z`$ in the equation of the ellipse, it is seen that $$x^2[1+(1-e^2)\mathrm{cot}^2i]+x[2c(1-e^2)\mathrm{cot}i]+(c^2-a^2)(1-e^2)=0$$ This equation, quadratic in $`x`$, has a single solution (tangent condition) only when the discriminant, D, is equal to zero: $$D=4c^2(1-e^2)^2\mathrm{cot}^2i-4[1+(1-e^2)\mathrm{cot}^2i](1-e^2)(c^2-a^2)=0.$$ The $`z`$-intercept of the line is thus $$c=a\sqrt{1+(1-e^2)\mathrm{cot}^2i},$$ and the projected semimajor axis is $$a_{\ast }=c\mathrm{sin}i=a\mathrm{sin}i\sqrt{1+(1-e^2)\mathrm{cot}^2i}=\sqrt{a^2\mathrm{sin}^2i+b^2\mathrm{cos}^2i}=a\sqrt{1-e^2\mathrm{cos}^2i}.$$ It is easy to see that $`a_{\ast }>b_{\ast }`$. The tangent method for finding the projected ellipse was used as long ago as Hubble (1926), although he used it only for oblate spheroids and measured $`i`$ from the equator rather than the pole of the spheroid. ## Appendix B THE MAXIMUM LINE-OF-SIGHT VELOCITY As before, let the first quadrant of the $`xz`$ plane contain the observer’s line of sight, in a direction defined by the unit vector $`\widehat{n}=(\mathrm{sin}i,\mathrm{cos}i)`$ where $`i`$ is the angle between the major axis of the spheroid and the observer. By symmetry the maximum projected (line-of-sight) velocity of the ellipsoid will be associated with a point that lies in the $`xz`$ plane, and it suffices to consider the plane ellipse $$\frac{x^2}{b^2}+\frac{z^2}{a^2}=1,$$ or $`z^2=a^2-x^2(1-e^2)^{-1}`$ where $`b^2=a^2(1-e^2)`$ as before. Let $`\theta `$ be the polar angle defined by $`x=z\mathrm{tan}\theta `$. (See Figure 2.) Note that $$2z\frac{dz}{dx}=\frac{d(z^2)}{dx}=-2x(1-e^2)^{-1},$$ thus $$\frac{dz}{dx}=-\frac{\mathrm{tan}\theta }{1-e^2}.$$ A point on the ellipse $`\vec{r}=(x,z)=(r\mathrm{sin}\theta ,r\mathrm{cos}\theta )`$ has velocity $`\vec{v}=\vec{r}/t`$, where $`t`$ is the time elapsed since a point explosion. Constant speed (no deceleration) has been assumed.
The line-of-sight velocity of gas at this point will be $$v_{\mathrm{los}}=\vec{v}\cdot \widehat{n}=\frac{1}{t}(x\mathrm{sin}i+z\mathrm{cos}i)$$ and the extremum, called $`v_{\mathrm{max}}`$, occurs for the point $`\vec{r}_{\ast }=(x_{\ast },z_{\ast })`$ such that $$\frac{dv_{\mathrm{los}}}{dx}=\frac{1}{t}\left(\mathrm{sin}i+\frac{dz}{dx}\mathrm{cos}i\right)=\frac{1}{t}\left(\mathrm{sin}i-\frac{\mathrm{tan}\theta _{\ast }}{1-e^2}\mathrm{cos}i\right)=0.$$ Thus $$(1-e^2)\mathrm{tan}i=\mathrm{tan}\theta _{\ast }.$$ Now $`\vec{r}_{\ast }`$ lies on the ellipse, so $$z_{\ast }^2=a^2-\frac{x_{\ast }^2}{1-e^2}=a^2-z_{\ast }^2\frac{\mathrm{tan}^2\theta _{\ast }}{1-e^2}$$ or after some algebra, $$z_{\ast }=\frac{a}{[1+(1-e^2)\mathrm{tan}^2i]^{1/2}}.$$ After some additional algebra, the desired expression is obtained: $$v_{\mathrm{max}}=\frac{z_{\ast }}{t}\left(\mathrm{tan}\theta _{\ast }\mathrm{sin}i+\mathrm{cos}i\right)=\frac{a}{t}\left(1-e^2\mathrm{sin}^2i\right)^{1/2}=\frac{1}{t}\left(a^2\mathrm{cos}^2i+b^2\mathrm{sin}^2i\right)^{1/2}.$$ Note that in general $`\theta _{\ast }\ne i`$, so that the spot on the projected image of the nova shell where $`v_{\mathrm{max}}`$ is observed is usually not aligned with the central star. ## Appendix C THE ANGLE-AVERAGED APPARENT “RADIUS” OF THE PROJECTED ELLIPSE Let the projected ellipse be described by $$\left(\frac{x}{b_{\ast }}\right)^2+\left(\frac{y}{a_{\ast }}\right)^2=1$$ where $`x,y`$ are now rectangular coordinates in the plane of the sky, and $`b_{\ast }<a_{\ast }`$. To streamline the notation, the projection subscript ($`\ast `$) is temporarily suppressed. Using centered polar coordinates $`(r,\theta )`$ such that $`y=x\mathrm{tan}\theta `$, it can be seen that $$x^2(a^2+b^2\mathrm{tan}^2\theta )=a^2b^2,$$ whence $$x^2=a^2b^2/(a^2+b^2\mathrm{tan}^2\theta )$$ $$y^2=a^2b^2\mathrm{tan}^2\theta /(a^2+b^2\mathrm{tan}^2\theta )$$ $$r^2=x^2+y^2=b^2/[\mathrm{cos}^2\theta +(b/a)^2\mathrm{sin}^2\theta ].$$ With the projection subscript restored, and with $`k^{\prime }\equiv b_{\ast }/a_{\ast }\le 1`$ and $`k^2\equiv 1-k^{\prime 2}`$, it can be seen that $`r^2(\theta )=b_{\ast }^2/(1-k^2\mathrm{sin}^2\theta )`$. The angle-averaged value of $`r`$ is thus $$\overline{r}=\frac{2}{\pi }\int _0^{\pi /2}r(\theta )d\theta =\frac{2b_{\ast }}{\pi }\int _0^{\pi /2}\frac{d\theta }{\sqrt{1-k^2\mathrm{sin}^2\theta }}=\frac{2b_{\ast }}{\pi }K(k)=\frac{2b}{\pi }K(k)$$ where $`K(k)`$ is the complete elliptic integral of the first kind. By symmetry, the integration is carried out over the first quadrant only. When $`k^{\prime }`$ is close to unity, a useful series expansion for $`K(k)`$ is (e.g. Dwight, 1961) $$K(k)=\frac{\pi }{2}\left(1+m\right)\left[1+\frac{1^2}{2^2}m^2+\frac{1^2\cdot 3^2}{2^2\cdot 4^2}m^4+\frac{1^2\cdot 3^2\cdot 5^2}{2^2\cdot 4^2\cdot 6^2}m^6+\mathrm{}\right]$$ with $`m\equiv (1-k^{\prime })/(1+k^{\prime })=(a_{\ast }-b_{\ast })/(a_{\ast }+b_{\ast })`$. For $`k^{\prime }\ge 0.4`$, the series truncated after the $`m^6`$ term is accurate to better than three decimal places.
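The closed form derived in Appendix C is easy to verify numerically; the sketch below (Python, with arbitrary illustrative axis values) compares the direct angle average of $`r(\theta )`$ with $`2b_{\ast }K(k)/\pi `$, keeping in mind that SciPy's `ellipk` takes the parameter $`m=k^2`$ rather than the modulus $`k`$.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import ellipk

a_p, b_p = 1.0, 0.73            # arbitrary illustrative projected semi-axes
k2 = 1.0 - (b_p / a_p) ** 2     # k^2 = 1 - (b_*/a_*)^2

# Direct angle average of r(theta) = b_* / sqrt(1 - k^2 sin^2 theta) over the first quadrant.
rbar_quad, _ = quad(lambda th: b_p / np.sqrt(1.0 - k2 * np.sin(th) ** 2), 0.0, np.pi / 2.0)
rbar_quad *= 2.0 / np.pi

# Closed form of Appendix C; ellipk expects the parameter m = k^2.
rbar_closed = 2.0 * b_p * ellipk(k2) / np.pi

print(rbar_quad, rbar_closed)   # the two numbers should agree to machine precision
```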
# A fully ab initio potential curve of near-spectroscopic quality for OH- ion: importance of connected quadruple excitations and scalar relativistic effects ## I Introduction Molecular anions play an important role in the chemistry of the interstellar medium, of carbon stars, and the Earth’s ionosphere. As pointed out in Ref., the presence of anions in the interstellar medium may have profound consequences for our understanding of the interstellar processing of the biogenic elements (see e.g. Ref. and references therein). Yet as judged from the number of entries in the compilations of Huber and Herzberg (for diatomics) and of Jacox (for polyatomics), high- or even medium-resolution spectroscopic data for anions are relatively scarce compared to the amount of data available for neutral or even cationic species: in the 1992 review of Hirota on spectroscopy of ions, only 13 molecular anions were listed in Table VII, compared to 4 1/2 pages worth of entries for cations. (Early reviews of anion spectroscopy are found in Refs., while ab initio studies of structure and spectroscopy of anions were reviewed fairly recently by Botschwina and coworkers.) Some of the reasons for this paucity are discussed in the introductions to Refs.. One such species is the hydroxyl anion, OH<sup>-</sup>. By means of velocity modulation spectroscopy, high-resolution fundamentals were obtained for three isotopomers, namely <sup>16</sup>OH<sup>-</sup>, <sup>16</sup>OD<sup>-</sup>, and <sup>18</sup>OH<sup>-</sup>; in addition, some pure rotational transitions have been observed. Lineberger and coworkers earlier obtained some rotational data in the course of an electron photodetachment study, and obtained precise electron affinities (EAs) of 14741.03(17) and 14723.92(30) cm<sup>-1</sup>, respectively, for OH and OD. Very recently, the same group re-measured EA(OH) and obtained essentially the same value but with a higher precision, 14741.02(3) cm<sup>-1</sup>. The spectroscopic constants of OH<sup>-</sup> were previously the subject of ab initio studies, notably by Werner et al. using multireference configuration interaction (MRCI) methods, and recently by Lee and Dateo (LD) using coupled cluster theory with basis sets as large as $`[7s6p5d4f3g2h/6s5p4d3f2g]`$. The LD paper is particularly relevant here. The CCSD(T) (coupled cluster with all single and double substitutions and a quasiperturbative treatment for triple excitations) method, in combination with basis sets of at least $`spdfg`$ quality and including an account for inner-shell correlation, can routinely predict vibrational band origins of small polyatomic molecules with a mean absolute error on the order of a few cm<sup>-1</sup> (e.g. for C<sub>2</sub>H<sub>2</sub>, SO<sub>2</sub>). Yet while LD found very good agreement between their computed CCSD(T)/\[6s5p4d3f2g/5s4p3d2f\] spectroscopic constants and available experimental data, consideration of further basis set expansion and of inner-shell correlation effects leads to a predicted fundamental $`\nu `$ at the CCSD(T) basis set limit of 3566.2$`\pm `$1 cm<sup>-1</sup>, about 11 cm<sup>-1</sup> higher than the experimental results of 3555.6057(22) cm<sup>-1</sup>, where the uncertainty in parentheses represents two standard deviations. 
In a recent benchmark study on the ground-state potential curves of the first-row diatomic hydrides using both CCSD(T) and FCI (full configuration interaction) methods, the author found that CCSD(T) has a systematic tendency to overestimate the harmonic frequencies of A–H stretches by on the order of 6 cm<sup>-1</sup>. Even so, the discrepancy seen by LD is a bit out of the ordinary, and the question arises as to what level of theory is required to obtain ‘the right result for the right reason’ in this case. In the present work, we shall show that the discrepancy between the CCSD(T) basis set limit and Nature is mostly due to two factors: (a) neglect of the effect of connected quadruple excitations, and (b) neglect of scalar relativistic effects. When these are properly accounted for, the available vibrational transitions can be reproduced to within a fraction of a cm<sup>-1</sup> from the computed potential curve. In the context of the present Special Issue, this will also serve as an illustrative example of the type of accuracy that can be achieved for small systems with the present state of the art. Predicted band origins for higher vibrational levels (and ‘hot bands’) may assist future experimental work on this system. Finally, as by-products of our analysis, we will show that the electron affinity of OH can be reproduced to very high accuracy, and tentatively propose a slight upward revision of the dissociation energy of neutral hydroxyl radical, OH. ## II Computational methods The coupled cluster, multireference averaged coupled pair functional (ACPF), and full CI calculations were carried out using MOLPRO 98.1 running on DEC/Compaq Alpha workstations in our laboratory, and on the SGI Origin 2000 of the Faculty of Chemistry. Full CCSDT (coupled cluster theory with all connected single, double and triple excitations) and CCSD(TQ) (CCSD with quasiperturbative corrections for triple and quadruple excitations) calculations were carried out using ACES II on a DEC Alpha workstation. Correlation consistent basis sets due to Dunning and coworkers were used throughout. Since the system under consideration is anionic, the regular cc-pV$`n`$Z (correlation consistent polarized valence $`n`$-tuple zeta, or V$`n`$Z for short) basis sets will be inadequate. We have considered the aug-cc-pV$`n`$Z (augmented correlation consistent, or AV$`n`$Z for short) basis sets, in which one low-exponent function of each angular momentum is added to both the oxygen and hydrogen basis sets, as well as the aug’-cc-pV$`n`$Z (A’V$`n`$Z) basis sets, in which the addition is not made to the hydrogen basis set. In addition we consider both uncontracted versions of the same basis sets (denoted by the suffix ”uc”) and the aug-cc-pCV$`n`$Z basis sets (ACV$`n`$Z) which include added core-valence correlation functions. The largest basis sets considered in this work, aug-cc-pV6Z and aug-cc-pCV5Z, are of \[8s7p6d5f4g3h2i/7s6p5d4f3g2h\] and \[11s10p8d6f4g2h/6s5p4d3f2g\] quality, respectively. The multireference ACPF calculations were carried out from a CASSCF (complete active space SCF) reference wave function with an active space consisting of the valence $`(2\sigma )(3\sigma )(1\pi )(4\sigma )`$ orbitals as well as the $`(2\pi )`$ Rydberg orbitals: this is denoted CAS(8/7)-ACPF (i.e., 8 electrons in 7 orbitals). While the inclusion of the $`(2\pi )`$ orbitals is essential (see below), the inclusion of the $`(5\sigma )`$ Rydberg orbital (i.e., CAS(8/8)-ACPF) was considered and found to affect computed properties negligibly.
In addition, some exploratory CAS-AQCC (averaged quadratic coupled cluster) calculations were also carried out. Scalar relativistic effects were computed as expectation values of the one-electron Darwin and mass-velocity operators for the ACPF wave functions. The energy was evaluated at 21 points around $`r_e`$, with a spacing of 0.01 Å. (All energies were converged to 10<sup>-12</sup> hartree, or wherever possible to 10<sup>-13</sup> hartree.) A polynomial in $`(rr_e)/r_e`$ of degree 8 or 9 (the latter if an F-test revealed an acceptable statistical significance for the nonic term) was fitted to the energies. Using the procedure detailed in Ref., the Dunham series thus obtained was transformed by derivative matching into a variable-beta Morse (VBM) potential $$V_c=D_e\left(1\mathrm{exp}[z(1+b_1z+b_2z^2+\mathrm{}+b_6z^6)]\right)^2$$ (1) in which $`z\beta (rr_e)/r_e`$, $`D_e`$ is the (computed or observed) dissociation energy, and $`\beta `$ is an adjustable parameter related to that in the Morse function. Analysis of this function was then carried out in two different manners: (a) analytic differentiation with respect to $`(rr_e)/r_e`$ up to the 12th derivative followed by a 12th-order Dunham analysis using an adaptation of the ACET program of Ogilvie; and (b) numerical integration of the one-dimensional Schrödinger equation using the algorithm of Balint-Kurti et al., on a grid of 512 points over the interval 0.5$`a_0`$—5$`a_0`$. As expected, differences between vibrational energies obtained using both methods are negligible up to the seventh vibrational quantum, and still no larger than 0.4 cm<sup>-1</sup> for the tenth vibrational quantum. ## III Results and discussion ### A $`n`$-particle calibration The largest basis set in which we were able to obtain a full CI potential curve was cc-pVDZ+sp(O), which means the standard cc-pVDZ basis set with the diffuse $`s`$ and $`p`$ function from aug-cc-pVDZ added to oxygen. A comparison of computed properties for OH<sup>-</sup> with different electron correlation methods is given in Table I, while their errors in the total energy relative to full CI are plotted in Figure 1. It is immediately seen that CCSD(T) exaggerates the curvature of the potential surface, overestimating $`\omega _e`$ by 10 cm<sup>-1</sup>. In addition, it underestimates the bond length by about 0.0006 Å. These are slightly more pronounced variations on trends previously seen for the OH radical. The problem does not reside in CCSD(T)’s quasiperturbative treatment of triple excitations: performing a full CCSDT calculation instead lowers $`\omega _e`$ by only 1.7 cm<sup>-1</sup> and lengthens the bond by less than 0.0001 Å. Quasiperturbative inclusion of connected quadruple excitations, however, using the CCSD(TQ) method, lowers $`\omega _e`$ by 8.5 cm<sup>-1</sup> relative to CCSD(T), and slightly lengthens the bond, by 0.00025 Å. (Essentially the same result was obtained by means of the CCSD+TQ\* method, which differs from CCSD(TQ) in a small sixth-order term $`E_{6TT}`$.) No CCSDT(Q) code was available to the author: approximating the CCSDT(Q) energy by the expression $`E[CCSDT(Q)]E[CCSDT]+E[CCSD(TQ)]E[CC5SD(T)]=E[CCSDT]+E_{5QQ}+E_{5QT}`$, we obtain a potential curve in fairly good agreement with full CI. What is the source of the importance of connected quadruple excitations in this case? 
Analysis of the FCI wave function reveals prominent contributions to the wave function from $`(1\pi )^4(2\pi )^0(1\pi )^2(2\pi )^2`$ double excitations; while the $`(2\pi )`$ orbitals are LUMO+2 and LUMO+3 rather than LUMO, a large portion of them sits in the same spatial region as the occupied $`(1\pi )`$ orbitals. In any proper multireference treatment, the aforementioned excitations would be in the zero-order wave function: obviously, the space of all double excitations therefrom would also entail quadruple excitations with respect to the Hartree-Fock reference, including a connected component. Since the basis set sizes for which we can hope to perform CCSDT(Q) or similar calculations on this system are quite limited, we considered multireference methods, specifically ACPF from a $`[(2\sigma )(3\sigma )(4\sigma )(1\pi )(2\pi )]^8`$ reference space (denoted ACPF(8/7) further on). As might be expected, the computed properties are in very close agreement with FCI, except for $`\omega _e`$ being 1.5 cm<sup>-1</sup> too high. AQCC(8/7) does not appear to represent a further improvement, and adding the $`(5\sigma )`$ orbital to the ACPF reference space (i.e. ACPF(8/8)) affects properties only marginally. ### B 1-particle basis set calibration All relevant results are collected in Table II. Basis set convergence in this system was previously studied in some detail by LD at the CCSD(T) level. Among other things, they noted that $`\omega _e`$ still changes by 4 cm<sup>-1</sup> upon expanding the basis set from aug-cc-pVQZ to aug-cc-pV5Z. They suggested that $`\omega _e`$ then should be converged to about 1 cm<sup>-1</sup>; this statement is corroborated by the CCSD(T)/aug-cc-pV6Z results. Since the negative charge resides almost exclusively on the oxygen, the temptation exists to use aug-cc-pV$`n`$Z basis sets, i.e. to apply aug-cc-pV$`n`$Z only to the oxygen atom but use a regular cc-pV$`n`$Z basis set on hydrogen. For $`n`$=T, this results in fact in a difference of 10 cm<sup>-1</sup> on $`\omega _e`$, but the gap narrows as $`n`$ increases. Yet extrapolation suggests convergence of the computed fundamental to a value about 1 cm<sup>-1</sup> higher than the aug-cc-pV$`n`$Z curve. For the AV$`n`$Z and A’V$`n`$Z basis sets ($`n`$=T,Q), the CAS(8/7)-ACPF approach systematically lowers harmonic frequencies by about 8 cm<sup>-1</sup> compared to CCSD(T); for the fundamental the difference is even slightly larger (9.5 cm<sup>-1</sup>). Interestingly, this difference decreases for $`n`$=5. It was noted previously that the higher anharmonicity constants exhibit rather greater basis set dependence than one might reasonably have expected, and that this sensitivity is greatly reduced if uncontracted basis sets are employed (which have greater radial flexibility). The same phenomenon is seen here. In agreement with previous observations by LD, inner-shell correlation reduces the bond lengthen slightly, and increases $`\omega _e`$ by 5–6 cm<sup>-1</sup>. This occurs both at the CCSD(T) and the CAS(8/7)-ACPF levels. ### C Additional corrections and best estimate At our highest level of theory so far, namely CAS(8/7)-ACPF(all)/ACV5Z, $`\nu `$ is predicted to be 3559.3 cm<sup>-1</sup>, still several cm<sup>-1</sup> higher than experiment. The effects of further basis set improvement can be gauged from the difference between CCSD(T)/AV6Z and CCSD(T)/AV5Z results: one notices an increase of +1.0 cm<sup>-1</sup> in $`\omega _e`$ and a decrease of 0.00006 Å in $`r_e`$. 
We also performed some calculations with a doubly augmented cc-pV5Z basis set (i.e. d-AV5Z), and found the results to be essentially indistinguishable from those with the singly augmented basis set. Residual imperfections in the electron correlation method can be gauged from the CAS(8/7)-ACPF $``$ FCI difference with our smallest basis set, and appear to consist principally of a contraction of $`r_e`$ by 0.00004 Å and a decrease in $`\omega _e`$ by 1.5 cm<sup>-1</sup>. Adding the two sets of differences to obtain a ‘best nonrelativistic’ set of spectroscopic constants, we obtain $`\nu `$=3558.6 cm<sup>-1</sup>, still 3 cm<sup>-1</sup> above experiment. In both cases, changes in the anharmonicity constants from the best directly computed results are essentially nil. Scalar relativistic corrections were computed at the CAS(8/7)-ACPF level with and without the $`(1s)`$-like electrons correlated, and with a variety of basis sets. All results are fairly consistent with those obtained at the highest level considered, CAS(8/7)-ACPF(all)/ACVQZ, namely an expansion of $`r_e`$ by about 0.0001 Å and — most importantly for our purposes — a decrease of $`\omega _e`$ by about 3 cm<sup>-1</sup>. Effects on the anharmonicity constants are essentially nonexistent. Upon adding these corrections to our best nonrelativistic spectroscopic constants, we obtain our final best estimates. These lead to $`\nu `$=3555.44 cm<sup>-1</sup> for <sup>16</sup>OH<sup>-</sup>, in excellent agreement with the experimental result 3555.6057(22) cm<sup>-1</sup>. The discrepancy between computed (3544.30 cm<sup>-1</sup>) and observed (3544.4551(28) cm<sup>-1</sup>) values for <sup>18</sup>OH<sup>-</sup> is quite similar. For <sup>16</sup>OD<sup>-</sup>, we obtain $`\nu `$=2625.31 cm<sup>-1</sup>, which agrees to better than 0.1 cm<sup>-1</sup> with the experimental value 2625.332(3) cm<sup>-1</sup>. Our computed bond length is slightly shorter than the observed one for OH<sup>-</sup>, but within the error bar of that for OD<sup>-</sup>. If we assume an inverse mass dependence for the experimental diabatic bond distance and extrapolate to infinite mass, we obtain an experimentally derived Born-Oppenheimer bond distance of 0.96416(16) cm<sup>-1</sup>, in perfect agreement with our calculations. While until recently it was generally assumed that scalar relativistic corrections are not important for first-and second-row systems, it has now been shown repeatedly (e.g.) that for kJ/mol accuracy on computed bonding energies, scalar relativistic corrections are indispensable. Very recently, Csaszar et al. considered the effect of scalar relativistic corrections on the ab initio water surface, and found corrections on the same order of magnitude as seen for the hydroxyl anion here. Finally, Bauschlicher compared first-order Darwin and mass-velocity corrections to energetics (for single-reference ACPF wave functions) with more rigorous relativistic methods (specifically, Douglas-Kroll), and found that for first-and second-row systems, the two approaches yield essentially identical results, lending additional credence to the results of both Csaszar et al. and from the present work. (The same author found more significant deviations for third-row main group systems.) Is the relativistic effect seen here in OH<sup>-</sup> unique to it, or does it occur in the neutral first-row diatomic hydrides as well? 
Some results obtained for BH, CH, NH, OH, and HF in their respective ground states, and using the same method as for OH<sup>-</sup>, are collected in Table III. In general, $`\omega _e`$ is slightly lowered, and $`r_e`$ very slightly stretched — these tendencies becoming more pronounced as one moves from left to right in the Periodic Table. The effect for OH<sup>-</sup> appears to be stronger than for the isoelectronic neutral hydride HF, and definitely compared to neutral OH. The excellent agreement ($`\pm 1`$ cm<sup>-1</sup> on vibrational quanta) previously seen for the first-row diatomic hydrides between experiment and CCSD(T)/ACV5Z potential curves with an FCI correction is at least in part due to a cancellation between the effects of further basis set extension on the one hand, and scalar relativistic effects (neglected in Ref.) on the other hand. The shape of the relativistic contribution to the potential curve is easily understood qualitatively: on average, electrons are somewhat further away from the nucleus in a molecule than in the separated atoms (hence the scalar relativistic contribution to the total energy will be slightly smaller in absolute value at $`r_e`$ than in the dissociation limit): as one approaches the united atom limit, however, the contribution will obviously increase again. The final result is a slight reduction in both the dissociation energy and on $`\omega _e`$. In order to assist future experimental studies on OH<sup>-</sup> and its isomers, predicted vibrational quanta $`G(n)G(n1)`$ are given in Table V for various isotopic species, together with some key spectroscopic constants. The VBM parameters of the potential are given in Table IV. The VBM expansion generally converges quite rapidly and, as found previously for OH, parameters $`b_5`$ and $`b_6`$ are found to be statistically not significant and were omitted. The VBM expansion requires the insertion of a dissociation energy: we have opted, rather than an experimental value, to use our best calculated value (see next paragraph). Agreement between computed and observed fundamental frequencies speaks for itself, as does that between computed and observed rotational constants. At first sight agreement for the rotation-vibration coupling constants $`\alpha _e`$ is somewhat disappointing. However, for <sup>16</sup>OH<sup>-</sup> and <sup>18</sup>OH<sup>-</sup>, the experimentally derived ‘$`\alpha _e`$’ actually corresponds to $`B_1B_0`$, i.e. to $`\alpha _e2\gamma _e+\mathrm{}`$. If we compare the observed $`B_1B_0`$ with the computed $`\alpha _e2\gamma _e`$ instead, excellent agreement is found. In the case of <sup>16</sup>OD<sup>-</sup>, the experimentally derived $`\alpha _e`$ given is actually extrapolated from neutral <sup>16</sup>OD: again, agreement between computed and observed $`B_1B_0`$ is rather more satisfying. We also note that our calculations validate the conclusion by Lee and Dateo that the experimentally derived $`\omega _e`$ and $`\omega _ex_e`$ for <sup>16</sup>OH should be revised upward. 
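For reference, the $`\alpha _e`$ versus $`B_1B_0`$ comparison made above rests on the standard expansion of the rotational constant in the vibrational quantum number (sign conventions vary between authors): $$B_v = B_e - \alpha_e\left(v+\tfrac{1}{2}\right) + \gamma_e\left(v+\tfrac{1}{2}\right)^2 + \cdots, \qquad B_0 - B_1 = \alpha_e - 2\gamma_e ,$$ so an effective $`\alpha _e`$ extracted from only the $`v=0`$ and $`v=1`$ rotational constants necessarily absorbs the $`\gamma _e`$ term.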
### D Dissociation energies of OH and OH<sup>-</sup>; electron affinity of OH This was obtained in the following manner, which is a variant on W2 theory: (a) the CASSCF(8/7) dissociation energy using ACVTZ, ACVQZ, and ACV5Z basis sets was extrapolated geometrically using the geometric formula $`A+B/C^n`$ first proposed by Feller; (b) the dynamical correlation component (defined at CAS(8/7)-ACPF(all) $``$ CASSCF(8/7)) of the dissociation energy was extrapolated to infinite maximum angular momentum in the basis set, $`l\mathrm{}`$ from the ACVQZ ($`l`$=4) and ACV5Z ($`l`$=5) results using the formula $`A+B/l^3`$; (c) the scalar relativistic contribution obtained at the CAS(8/7)-ACPF level was added to the total, as was the spin-orbit splitting for O<sup>-</sup>($`{}_{}{}^{2}P`$). Our final result, $`D_0`$=4.7796 eV, is about 0.02 eV higher than the experimental one; interestingly enough, the same is true for the OH radical (computed $`D_0`$=4.4124 eV, observed 4.392 eV). In combination with either the experimental electron affinity of oxygen atom, EA(O)=1.461122(3) eV or the best computed EA(O)=1.46075 eV, this leads to electron affinities of OH, EA(OH)=1.8283 eV and 1.8280 eV, respectively, which agree to three decimal places with the experimental value 1.827611(4) eV. We note that the experimental $`D_e`$(OH<sup>-</sup>) is derived from $`D_e`$(OH)$`+`$EA(OH)$``$EA(O), and that a previous calibration study on the atomization energies of the first-row hydrides suggested that the experimental $`D_e`$(OH) may be too low. While a systematic error in the electronic structure treatment that cancels almost exactly between OH and OH<sup>-</sup> cannot entirely be ruled out, the excellent agreement obtained for the electron affinity does lend support to the computed $`D_e`$ values. ## IV Conclusions We have been able to obtain a fully ab initio radial function of spectroscopic quality for the hydroxyl anion. In order to obtain accurate results for this system, inclusion of connected quadruple excitations (in a coupled cluster expansion) is imperative, as is an account for scalar relativistic effects. Basis set expansion effects beyond $`spdfgh`$ take a distant third place in importance. While consideration of connected quadruple excitation effects and of basis set expansion effects beyond $`spdfgh`$ would at present be prohibitively expensive for studies of larger anions, no such impediment would appear to exist for inclusion of the scalar relativistic effects (at least for one-electron Darwin and mass-velocity terms). Our best computed EA(OH), 1.828 eV, agrees to three decimal places with the best available experimental value. Our best computed dissociation energies, $`D_0`$(OH<sup>-</sup>)=4.7796 eV and $`D_0`$(OH)=4.4124 eV, suggest that the experimental $`D_0`$(OH)=4.392 eV (from which the experimental $`D_0`$(OH<sup>-</sup>) was derived by a thermodynamic cycle) may possibly be about 0.02 eV too low. One of the purposes of the paper by Lee and Dateo was to point out to the scientific community, and in particular the experimental community, that state-of-the art ab initio methods now have the capability to predict the spectroscopic constants of molecular anions with sufficient reliability to permit assignment of a congested spectrum from an uncontrolled environment — such as an astronomical observation — on the basis of the theoretical calculations alone. The present work would appear to support this assertion beyond any doubt. ###### Acknowledgements. 
JM is the incumbent of the Helen and Milton A. Kimmelman Career Development Chair. Research at the Weizmann Institute was supported by the Minerva Foundation, Munich, Germany, and by the Tashtiyot program of the Ministry of Science (Israel).
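As a compact numerical footnote to Secs. III D and IV, the sketch below (illustrative Python, not the production scripts used for this work) spells out the two extrapolation formulas and the thermochemical cycle with the dissociation energies and EA(O) quoted above; any energies supplied to the extrapolation functions would have to be the actual computed totals.

```python
# Sketch of the W2-like recipe of Sec. III D and of the thermochemical cycle
# for EA(OH).  The two extrapolation functions are generic; only the cycle at
# the bottom uses numbers quoted in the text.

def geometric_limit(e_tz, e_qz, e_5z):
    """Feller A + B/C**n fit through three consecutive basis-set energies
    (used here for the CASSCF component of the dissociation energy)."""
    d1, d2 = e_qz - e_tz, e_5z - e_qz
    c = d1 / d2
    return e_5z + d2 / (c - 1.0)

def l3_limit(e_qz, e_5z, l_q=4, l_5=5):
    """Two-point A + B/l**3 extrapolation of the dynamical correlation part."""
    return (l_5**3 * e_5z - l_q**3 * e_qz) / (l_5**3 - l_q**3)

# Thermochemical cycle with the values quoted in the text (all in eV):
D0_OH_anion, D0_OH, EA_O_expt = 4.7796, 4.4124, 1.461122
EA_OH = D0_OH_anion - D0_OH + EA_O_expt    # EA(OH) = D0(OH-) - D0(OH) + EA(O)
print(round(EA_OH, 4))                     # 1.8283 eV vs. experiment 1.827611(4) eV
```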
# Experimental detection of interactive phenomena and their analysis ## I. Experimental detection of interactive phenomena Let us consider a natural, behavioral, social or economical system $`𝒮`$. It will be described by a set $`\{\phi \}`$ of quntities, which characterize it at any moment of time $`t`$ (so that $`\phi =\phi _t`$). One may suppose that the evolution of the system is described by a differential equation $$\dot{\phi }=\mathrm{\Phi }(\phi )$$ and look for the explicit form of the function $`\mathrm{\Phi }`$ from the experimental data on the system $`𝒮`$. However, the function $`\mathrm{\Phi }`$ may depend on time, it means that there are some hidden parameters, which control the system $`𝒮`$ and its evolution is of the form $$\dot{\phi }=\mathrm{\Phi }(\phi ,u),$$ where $`u`$ are such parameters of unknown nature. One may suspect that such parameters are chosen in a way to minimize some goal function $`K`$, which may be an integrodifferential functional of $`\phi _t`$: $$K=K(\left[\phi _\tau \right]_{\tau t})$$ (such integrodifferential dependence will be briefly notated as $`K=K([\phi ])`$ below). More generally, the parameters $`u`$ may be divided on parts $`u=(u_1,\mathrm{},u_n)`$ and each part $`u_i`$ has its own goal function $`K_i`$. However, this hypothesis may be confirmed by the experiment very rarely. In the most cases the choice of parameters $`u`$ will seem accidental or even random. Nevertheless, one may suspect that the controls $`u_i`$ are interactive, it means that they are the couplings of the pure controls $`u_i^{}`$ with the unknown or incompletely known feedbacks: $$u_i=u_i(u_i^{},[\phi ])$$ and each pure control has its own goal function $`K_i`$. Thus, it is suspected that the system $`𝒮`$ realizes an interactive game. There are several ways to define the pure controls $`u_i^{}`$. One of them is the integrodifferential filtration of the controls $`u_i`$: $$u_i^{}=F_i([u_i],[\phi ]).$$ To verify the formulated hypothesis and to find the explicit form of the convenient filtrations $`F_i`$ and goal functions $`K_i`$ one should use the theory of interactive games, which supplies us by the predictions of the game, and compare the predictions with the real history of the game for any considered $`F_i`$ and $`K_i`$ and choose such filtrations and goal functions, which describe the reality better. One may suspect that the dependence of $`u_i`$ on $`\phi `$ is purely differential for simplicity or to introduce the so-called intention fields, which allow to consider any interactive game as differential. Moreover, one may suppose that $$u_i=u_i(u_i^{},\phi )$$ and apply the elaborated procedures of a posteriori analysis and predictions to the system. In many cases this simple algorithm effectively unravels the hidden interactivity of a complex system. ## II. Analysis of interactive phenomena Below we shall consider the complex systems $`𝒮`$, which have been yet represented as the $`n`$-person interactive games by the procedure described above. ### 2.1. 
Functional analysis of interactive phenomena To perform an analysis of the interactive control let us note that often for the $`n`$-person interactive game the interactive controls $`u_i=u_i(u_i^{},[\phi ])`$ may be represented in the form $$u_i=u_i(u_i^{},[\phi ];\epsilon _i),$$ where the dependence of the interactive controls on the arguments $`u_i^{}`$, $`[\phi ]`$ and $`\epsilon _i`$ is known but the $`\epsilon `$-parameters $`\epsilon _i`$ are the unknown or incompletely known functions of $`u_i^{}`$, $`[\epsilon ]`$. Such representation is very useful in the theory of interactive games and is called the $`\epsilon `$-representation. One may regard $`\epsilon `$-parameters as new magnitudes, which characterize the system, and apply the algorithm of the unraveling of interactivity to them. Note that $`\epsilon `$-parameters are of an existential nature depending as on the states $`\phi `$ of the system $`𝒮`$ as on the controls. The $`\epsilon `$-parameters are useful for the functional analysis of the interactive controls described below. First of all, let us consider new integrodifferential filtrations $`V_\alpha `$: $$v_\alpha ^{}=V_\alpha ([\epsilon ],[\phi ]),$$ where $`\epsilon =(\epsilon _1,\mathrm{},\epsilon _n)`$. Second, we shall suppose that the $`\epsilon `$-parameters are expressed via the new controls $`v_\alpha ^{}`$, which will be called desires: $$\epsilon _i=\epsilon (v_1^{},\mathrm{},v_m^{},[\phi ])$$ and the least have the goal functions $`L_\alpha `$. The procedure of unraveling of interactivity specifies as the filtrations $`V_\alpha `$ as the goal functions $`L_\alpha `$. ###### Remark Example Let us considered the interactive videosystem directed by the eye movements of an observer. The pure controls are the slow movements of eyes, whereas saccads are considered as a result of the unknown feedbacks (tremor is supposed to be random). Many classical and modern experiments clarifies the role of saccads in the formation of the stable and complete final image so such formation may be regarded as their goal function. The functional analysis of the eye movements extracts the parameters (the normal forms), which describe saccads in the concrete interactive videosystems. The normal forms are extremely interesting in the multi-user mode when the saccads of various observers begin to be correlated and synchronized. ### 2.2. The second quantization of desires Intuitively it is reasonable to consider systems with a variable number of desires. It can be done via the second quantization. To perform the second quantization of desires let us mention that they are defined as the integrodifferential functionals of $`\phi `$ and $`\epsilon `$ via the integrodifferential filtrations. So one is able to define the linear space $`H`$ of all filtrations (regarded as classical fields) and a submanifold $`M`$ of the dual $`H^{}`$ so that $`H`$ is naturally identified with a subspace of the linear space $`𝒪(M)`$ of smooth functions on $`M`$. The quantized fields of desires are certain operators in the space $`𝒪(M)`$ (one is able to regard them as unbounded operators in its certain Hilbert completion). The creation/annihilation operators are constructed from the operators of multiplication on an element of $`H𝒪(M)`$ and their conjugates. To define the quantum dynamics one should separate the quick and slow time. Quick time is used to make a filtration and the dynamics is realized in slow time. 
Such dynamics may have a Hamiltonian form being governed by a quantum Hamiltonian, which is usually differential operator in $`𝒪(M)`$. If $`M`$ coincides with the whole $`H^{}`$ then the quadratic part of a Hamiltonian describes a propagator of the quantum desire whereas the highest terms correspond to the vertex structure of self-interaction of the quantum field. If the submanifold $`M`$ is nonlinear the extraction of propagators and interaction vertices is not straightforward. ### 2.3. SD-transform and SD-pairs The interesting feature of the proposed description (which will be called the S-picture) of an interactive system $`𝒮`$ is that it contains as the real (usually personal) subjects with the pure controls $`u_i`$ as the impersonal desires $`v_\alpha `$. The least are interpreted as certain perturbations of the first so the subjects act in the system by the interactive controls $`u_i`$ whereas the desires are hidden in their actions. One is able to construct the dual picture (the D-picture), where the desires act in the system $`𝒮`$ interactively and the pure controls of the real subjects are hidden in their actions. Precisely, the evolution of the system is governed by the equations $$\dot{\phi }=\stackrel{~}{\mathrm{\Phi }}(\phi ,v),$$ where $`v=(v_1,\mathrm{},v_m)`$ are the $`\epsilon `$-represented interactive desires: $$v_\alpha =v_\alpha (v_\alpha ^{},[\phi ];\stackrel{~}{\epsilon }_\alpha )$$ and the $`\epsilon `$-parameters $`\stackrel{~}{\epsilon }`$ are the unknown or incompletely known functions of the states $`[\phi ]`$ and the pure controls $`u_i^{}`$. D-picture is convenient for a description of systems $`𝒮`$ with a variable number of acting persons. Addition of a new person does not make any influence on the evolution equations, a subsidiary term to the $`\epsilon `$-parameters should be added only. The transition from the S-picture to the D-picture is called the SD-transform. The SD-pair is defined by the evolution equations in the system $`𝒮`$ of the form $$\dot{\phi }=\mathrm{\Phi }(\phi ,u)=\stackrel{~}{\mathrm{\Phi }}(\phi ,v),$$ where $`u=(u_1,\mathrm{},u_n)`$, $`v=(v_1,\mathrm{},v_m)`$, $`u_i=`$ $`u_i(u_i^{},[\phi ];\epsilon _i)`$ $`v_\alpha =`$ $`v_\alpha (v_\alpha ^{},[\phi ];\stackrel{~}{\epsilon }_\alpha )`$ and the $`\epsilon `$-parameters $`\epsilon =(\epsilon _1,\mathrm{},\epsilon _n)`$ and $`\stackrel{~}{\epsilon }=(\stackrel{~}{\epsilon }_1,\mathrm{},\stackrel{~}{\epsilon }_m)`$ are the unknown or incompletely known functions of $`[\phi ]`$ and $`v^{}=(v_1^{},\mathrm{},v_m^{})`$ or $`u^{}=(u_1^{},\mathrm{},u_n^{})`$, respectively. Note that the S-picture and the D-picture may be regarded as complementary in the N.Bohr sense. Both descriptions of the system $`𝒮`$ can not be applied to it simultaneously during its analysis, however, they are compatible and the structure of SD-pair is a manifestation of their compatibility. The choice of a picture is an action of our attention: it is concentrated on the personal subjects in S-picture (the self-conscious attention) whereas it is concentrated on the impersonal desires in D-picture (the creative attention). ### 2.4. Verbalization of SD-pairs and synlinguism The main problem is to interrelate the S- and D-pictures of the system $`𝒮`$. One way is a verbalization of SD-pairs. Let us remind a definition of the verbalizable interactive game. 
An interactive game of the form $$\dot{\phi }=\mathrm{\Phi }(\phi ,u)$$ with $`\epsilon `$–represented couplings of feedbacks $$u_i=u_i(u_i^{},[\phi ];\epsilon _i)$$ is called verbalizable if there exist a posteriori partition $`t_0<t_1<t_2<\mathrm{}<t_n<\mathrm{}`$ and the integrodifferential functionals $`\omega _n`$ $`(\stackrel{}{\epsilon }(\tau ),\phi (\tau )|t_{n1}\tau t_n),`$ $`u_n^{}`$ $`(u^{}(\tau ),\phi (\tau )|t_{n1}\tau t_n)`$ such that $$\omega _n=\mathrm{\Omega }(\omega _{n1},u_n^{};\phi (\tau )|t_{n1}\tau t_n),$$ quantities $`\omega _n`$ are called the words. Let us now consider the SD-pair and suppose that both S- and D-pictures are verbalizable with the same $`\omega _n`$. The fact that $`\omega _n`$ are the same for both S- and D-pictures is called their synlinguism. One may characterize it poetically by the phrase that “the speech of real subjects is resulted in the same text as a whisper of the impersonal desires”. The existential character of the synlinguism should be stressed. Really it is not derived from the fact that the objective states $`\phi `$ of the system $`𝒮`$ are the same in the S- and D-pictures. The synlinguism interrelates the different $`\epsilon `$-parameters of existential nature in both pictures. The synlinguism is very important in the analysis of tactical phenomena, which essentially used the concept of verbalization in their definition. To my mind the synlinguism lies in the basis of psychophysical nature of mutual understanding of the independent subjects of a dialogue communication. In this situation it allows to identify the personal interpretations with the impersonal ones, unraveling the role of impersonal desires as bearers of the objective sense and its dynamics. To the verbalizable SD-pairs some procedures of linguistic analysis are applicable. Some of them are inherited from the verbalizable interactive games (the grammatical analysis), some are specific (the explication and analysis of objective sense). ## III. Conclusions Thus, mathematical procedures of the experimental detection of interactive phenomena in complex natural, behavioral, social and economical systems and their analysis are described. The special attention is concentrated on the role of desires and their second quantization as well as on the abstract structure of SD-pairs, their verbalization and the synlinguism.
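As a toy numerical illustration of the unraveling procedure of Sec. I, consider the following hypothetical one-dimensional model (it is not taken from the text; the linear feedback and the difference filtration are purely illustrative choices): the observed control is a smooth pure control plus a hidden feedback on the state, and a simple filtration recovers both the feedback strength and the pure control.

```python
# Hypothetical example of unraveling a hidden interactive feedback.
# Model: dot(phi) = -phi + u + noise, with observed control u = u' + eps*phi,
# where u' is a slow "pure" control and eps is unknown to the analyst.
import numpy as np

rng = np.random.default_rng(1)
dt, n = 0.01, 20000
t_ax = np.arange(n) * dt
u_pure = np.sin(0.3 * t_ax)      # slow pure control u' (hidden from the analyst)
eps_true = 0.8                   # hidden feedback strength (to be recovered)

phi = np.zeros(n)
u = np.zeros(n)
for k in range(n - 1):
    u[k] = u_pure[k] + eps_true * phi[k]          # interactive control u(u',[phi])
    phi[k + 1] = (phi[k] + dt * (-phi[k] + u[k])
                  + 0.5 * np.sqrt(dt) * rng.standard_normal())
u[-1] = u_pure[-1] + eps_true * phi[-1]

# Candidate filtration: first differences suppress the slow pure control, so a
# regression of the filtered records estimates the feedback strength.
du, dphi = np.diff(u), np.diff(phi)
eps_hat = np.dot(dphi, du) / np.dot(dphi, dphi)
u_pure_recovered = u - eps_hat * phi              # pure control u' = F([u],[phi])
print(round(float(eps_hat), 2))                   # roughly 0.8 for this toy model
```

Scanning different candidate filtrations and goal functions in this way, and retaining those whose recovered pure controls best describe the recorded history of the game, is the content of the procedure described above.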
# High temperature superconductivity from the two-dimensional semiconductors without magnetism <sup>1</sup><sup>1</sup>institutetext: Institute for Solid State Physics, University of Tokyo, 7-22-1 Roppongi, Minato-ku, Tokyo, Japan <sup>2</sup><sup>2</sup>institutetext: Department of Physics, Pusan National University, Pusan 609-735, Korea <sup>3</sup><sup>3</sup>institutetext: Laboratoirè de Physique des Solides, Université Paris-Sud, Centre d’Orsay, 91405 Orsay Cedex, France (unit associated to the CNRS) (Received: date / Revised version: date) ## Abstract We examine the possibility of high temperature superconductivity from two-dimensional semiconductor without antiferromagnetic fluctuations. The weak coupling BCS theory is applied, especially where the Fermi level is near the bottom of the conduction band. Due to screening, the attractive interaction is local in $`k`$-space. The density of states(DOS) does not have a peak near the bottom of the band, but $`k`$-dependent contribution to DOS has a diverging peak at the bottom. These features lead to high temperature superconductivity which may explain the possible superconductivity of WO<sub>3</sub>. One of the most spectacular discoveries in condensed matter physics is undoubtedly the high $`T_c`$ cuprates superconductors(HTSC)BM . Many properties are not simply described by the standard classical BCS theoryBCS . The common features of the HTSC are: (a) high superconductive transition $`T_c100`$K. (b) quasi two-dimensionality, with weakly coupled CuO<sub>2</sub> layers (c) antiferromagnetic(AF) fluctuations. The mother materials of HTSC are insulators with AF order below $`T_N`$. (d) anisotropic superconductive gaps. In order to understand the mechanism of HTSC, there has been efforts to find compounds in which Cu is replaced by an other atom. The compound Ru<sub>2</sub>SrO<sub>4</sub> has the same lattice structure as the HTSC (La, BA)<sub>2</sub>CuO<sub>4</sub>, but Cu is replaced by Ru. It is found to have a superconductive phase, however $`T_c1.5`$K is rather low. There are some experimental results (for example, NMRnmr and muon spin resonancemuon ) to support triplet pairings. The band structure is considerably different from that of typical HTSC. Thus superconductive nature of this layered compound does not seem to be related to HTSC. Recently the signs of high superconductive transition temperature are reported for some doped semiconductors. The Na<sub>x</sub>WO<sub>3-x</sub> sodium tungsten bronzes, formed by doping the insulating host WO<sub>3</sub> with Na, are $`n`$-type semiconductor for $`x<0.3`$. The sample with $`x=0.05`$ shows a sharp metal to insulator transition as temperature is lowered below 100K. It is followed by a sharp drop of resistivity at $`91`$K as temperature is further lowered, it shows signs of superconductivity at 91K. At the same temperature this compound exhibits a sharp diamagnetic step in magnetizationrt . This compound is quasi two-dimensional, but does not show any sign of antiferromagnetic fluctuations in sharp contrast to the cuprate high $`T_c`$ superconductors. Possible superconductive transition is further supported by subsequent measurements of Electron Spin Resonanceesr . There is no sign of antiferromagnetic fluctuations in this compound in sharp contrast to cuprate HTSC, namely the feature (c) above does not apply. 
This implies that either the mechanism of superconductivity of WO<sub>3</sub> is different from the cuprate superconductors or the magnetism does not play an essential role in the high $`T_c`$ cuprates. The 5$`d`$-transition oxides WO<sub>3</sub> and Na<sub>x</sub>WO<sub>3</sub> have nearly the perovskite crystal with W ions occupying the octahedral cation sites. Stoichiometric WO<sub>3</sub> is an insulator since the W 5$`d`$ band is empty; when Na ions are added to WO<sub>3</sub>, they donate their 3$`s`$ electron to the W 5$`d`$ band, resulting in bulk metallic behavior for $`x0.3`$. For $`x<0.3`$ the Na<sub>x</sub>WO<sub>3</sub> sodium tungsten, is an n-type semiconductor (electron doped). This indicates that there exist two bands: the valence band and the conduction band with the gap $`G`$ in between. The effect of doping of Na is to add electrons in the conduction band, thus raising the chemical potential $`\mu `$. Although the superconductivity of WO<sub>3</sub> has not been confirmed solidly and it requires further experimental supports, the properties of WO<sub>3</sub> motivated us to examine the possibility of high $`T_c`$ superconductivity from semiconductors without antiferromagnetic fluctuations. For simplicity, we neglect possible contributions from the valence band. The conduction band is modeled by $$\epsilon (𝐤)2t(\mathrm{cos}k_x+\mathrm{cos}k_y),$$ (1) where $`t`$ is the transfer integral of the tight-binding model on the square lattice. This band has the van Hove singularity in the density of states(DOS) at half-filling. There are a number of workshs ; friedel ; newns ; abrikosov ; bok ; fk that try to explain high $`T_c`$ cuprates by van Hove singularity. Here the present situation is totally different since we consider a nearly empty band. The density of states is given by $$N(E)=\frac{1}{|\epsilon (𝐤)|}𝑑l=\frac{1}{v}𝑑l,$$ (2) where $`l`$ is the Fermi surface (line in two dimensions) and $`v`$ is the semiclassical velocity. For the sake of convenience we shall call $`1/|\epsilon (𝐤)|`$ as kDOS and it is plotted in Fig. 1. Note that kDOS is diverging near the $`\mathrm{\Gamma }`$ point $`𝐤=(0,0)`$ as well as at $`(\pm \pi ,0)`$ and $`(0,\pm \pi )`$. This singularity is not seen in DOS since it is integrated on a vanishingly short Fermi line. The important fact is that if interactions are local in $`k`$-space, kDOS has to be considered carefully. With the properties of the band above in mind, we consider the gap equation, $$\mathrm{\Delta }_k=\underset{k^{}}{}\frac{V_{kk^{}}\mathrm{\Delta }_k^{}(T)}{2E_k^{}}\mathrm{tanh}\frac{E_k^{}}{2k_BT},$$ (3) where $`E_k=\sqrt{\epsilon (k)^2+\mathrm{\Delta }_k(T)^2}`$, $`\mathrm{\Delta }_k(T)`$ is the gap order parameter and $`V_{kk^{}}`$ is the interaction. The sum is restricted within the cutoff $`\mu E_c<\epsilon (k^{})<\mu +E_c`$ where $`\mu `$ is the chemical potential which is measured from the bottom of the band of noninteracting case. From the gap equation(3) one can obtain $`T_c`$ and $`\mathrm{\Delta }(T)`$. Near $`T_c`$, $`\mathrm{\Delta }`$ is very small, then (3) is linearized. $`T_c`$ is determined by the linearized equation. The gap $`\mathrm{\Delta }(T)`$ is obtained by iteration of (3). If one takes the usual BCS interaction $`V`$ which is constant if the two particles are both within the cutoff and vanishing otherwise, we have the classical BCS result $`T_c1.13E_c\mathrm{exp}(1/NV)`$, where N is the DOS near the bottom of the band. 
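A minimal numerical illustration of the distinction between the DOS and the kDOS for the band (1) may help here (this is not the authors' code; t = 1 sets the energy unit):

```python
# kDOS = 1/|grad eps(k)| versus the ordinary DOS for eps(k) = -2t(cos kx + cos ky).
import numpy as np

t = 1.0
k = np.linspace(-np.pi, np.pi, 401)
kx, ky = np.meshgrid(k, k)
eps = -2.0 * t * (np.cos(kx) + np.cos(ky))
grad = 2.0 * t * np.sqrt(np.sin(kx)**2 + np.sin(ky)**2)
kdos = 1.0 / np.maximum(grad, 1e-12)     # diverges at Gamma and at (pi,0), (0,pi)

# DOS(E) from a histogram of the band energies on a uniform k-grid.
dos, edges = np.histogram(eps.ravel(), bins=np.linspace(-4*t, 4*t, 81), density=True)
print(kdos[200, 200])   # enormous: kDOS is singular at the band bottom (Gamma point)
print(dos[:3])          # each ~0.08: the DOS itself stays flat near the band bottom
```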
When one takes physically reasonable value of $`V`$, high $`T_c`$ can not be obtained even if kDOS is large. This is because the Fermi line is a small circle and the range of integration is very small. It offsets the large but uniform kDOS. This behavior is totally analogous to the fact that the behavior of DOS –integral of kDOS along the Fermi line– that is almost constant near the bottom of the band. On the other hand if the interaction is local in k-space, i.e. not constant in the integral, the effect of the large kDOS can not be canceled by the short length of the Fermi line. In many-body systems the interactions are mostly scr- eened and this effect has to be included, unless the system is exactly solvable. Thus we take a weakly screened attractive interaction (likely to be phonon-mediated one), $$V_q=\frac{g_q^2}{q^2+q_0^2}.$$ (4) Here $`g_q`$ is the coupling constant and $`q_0`$ is the inverse of screening length $`L`$: $`q_01/L`$. This poor screening is supposed to be due to low dimensions(2d). The screening length actually depends on $`\mu `$, but we neglect this effect for the sake of simplicity. Since the interaction (4) is local in $`k`$-space, the effect of the large kDOS can not be totally canceled by the small Fermi line, as discussed above. Let us first give an estimate of the effective interaction. For $`\mu =4t+\eta `$ where $`\eta `$ is small but larger than the cutoff $`E_c`$, one has, from (3), $$2=_0^{2\pi }_{k_{inf}^{}}^{k_{sup}^{}}\frac{g_q^2\mathrm{tanh}\frac{k^2t\eta }{2k_BT_c}\frac{k^{}}{k^2t\eta }}{k^2+k^22k^{}k\mathrm{cos}\theta +q_0^2}𝑑k^{}𝑑\theta $$ (5) with $`k^2t=\eta ,\eta E_c`$ and $`|k^2t\eta |<E_c`$. This leads to $$2=\pi g_q^2t\frac{\mathrm{tanh}\kappa }{2k_BT_c}\frac{d\kappa }{\kappa }\frac{1}{A}$$ (6) with $`k^2t\eta =\kappa `$ and $`A=\kappa ^2+4\eta q_0^2t+\mathrm{}`$. With the preceding conditions, and if $`\eta `$ is large enough versus $`E_c`$, $`A`$ reduces to $`1/4\eta q_0^2t`$. One reverts to the BCS case of $`V`$ constant with $$V_{kk^{}}=\frac{g_q^2}{q_0^2},$$ (7) thus much higher $`T_c`$ is expected. The numerical solution is consistent with the above analysis and gives high critical temperature near the bottom of the band. It is plotted in Fig.2 and it takes the maximum value at $`\mu _{op}`$ near the bottom of the band. The decrease of $`T_c`$ for $`\mu `$ smaller than $`\mu _{op}`$ is due to the semiconductor gap where DOS vanishes. But note that $`T_c`$ does not vanish at the bottom of the band but extend to the band gap region due to the superconductive coherence effect. It is natural that $`T_c`$ does not depend on the cutoff in this region. The transition temperature $`T_c`$ also decreases for $`\mu `$ larger than $`\mu _{op}`$. This is because the kDOS is getting smaller in this region. We have a typical $`s`$-wave pairings, but the gap depends on the absolute value $`|𝐤|`$. The maximum and minimum of $`\mathrm{\Delta }(0)/T_c`$’s are plotted in Fig.3. Note that the maxima of the gap is almost independent of the cutoff. It is about $`1.8`$ which is close to the BCS value $`1.76`$ at $`\mu 0.6`$. It increases as $`\mu `$ is lowered. On the other hand the minimum of the gap depends on the cut off as expected because the minimum occurs at the edge of the cutoff. It is less than the BCS value and decreases as $`\mu `$ is decreased. Note that $`\mathrm{\Delta }(0)/k_BT_c`$ approach zero in a similar manner to Tc. This implies that $`\mathrm{\Delta }(0)`$ decay faster than $`Tc`$. 
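The numerical solution referred to above can be sketched as follows (an illustrative reconstruction, not the authors' code; the parameter values are guesses and the 1/N normalisation of the k-sum is an assumption): Tc is located as the temperature at which the largest eigenvalue of the linearized pairing kernel built from Eqs. (3) and (4) reaches unity.

```python
# Tc from the linearized gap equation with the screened interaction (4).
import numpy as np

t, g2, q0, mu, Ec = 1.0, 2.0, 0.3, -3.4, 0.5   # mu = -3.4 lies 0.6 above the band bottom
L = 64                                          # L x L k-grid over the Brillouin zone
k = 2.0 * np.pi * np.arange(L) / L - np.pi
kx, ky = np.meshgrid(k, k)
eps = -2.0 * t * (np.cos(kx) + np.cos(ky))
sel = np.abs(eps - mu) < Ec                     # states inside the pairing cutoff
kxs, kys = kx[sel], ky[sel]
xis = (eps - mu)[sel]
xis = np.where(np.abs(xis) < 1e-9, 1e-9, xis)

V = g2 / ((kxs[:, None] - kxs[None, :])**2
          + (kys[:, None] - kys[None, :])**2 + q0**2)   # screened attraction, Eq. (4)

def max_eig(T):
    chi = np.tanh(xis / (2.0 * T)) / (2.0 * xis)        # BCS pair susceptibility
    kernel = V * chi[None, :] / L**2                    # assumed 1/N normalisation
    return np.max(np.real(np.linalg.eigvals(kernel)))

Tlo, Thi = 1e-4, 1.0                 # temperature bracket in units of t
for _ in range(40):                  # bisection; if no crossing, a bracket edge is returned
    Tmid = 0.5 * (Tlo + Thi)
    Tlo, Thi = (Tmid, Thi) if max_eig(Tmid) > 1.0 else (Tlo, Tmid)
print("Tc estimate (units of t):", 0.5 * (Tlo + Thi))
```

Repeating such a calculation for a range of chemical potentials traces out the dependence of Tc on mu discussed around Fig. 2.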
If we take into account the valence band, certainly the behavior of $`T_c`$ will be changed. The pairings between electrons in the conduction band and valence band have to be consideredkt in addition to parings between electrons in the same band. However this will not change the results qualitatively. The features (c) and (d) do not apply but (a) and (b) survive. Therefore it can be concluded that the origin of high $`T_c`$ in the present case is due to the specific features of the energy dispersion in two dimensions. Since we have a particle-hole duality, these results are equally applicable to p-type semiconductors (hole doped). To conclude, we study the two-dimensional conductive band with low doping. This is motivated by some signs of a high superconductive transition temperature in WO<sub>3</sub>:Na rt ; esr . The crucial point is the large electron density kDOS near $`\mathrm{\Gamma }(0,0)`$. We succeed in obtaining a high superconductive temperature ($`T_c100`$K) from the approximate analysis and the numerical computations. This result is readily applicable to an almost full valence band. We expect that these results are rather general. The two-dimensional materials with almost empty conductive band or with almost full valence band are almost likely to have a high temperature superconductive transition. Acknowledgments It is a pleasure to thank M. Takigawa for a useful discussion. I.C is grateful to Korea-Japan binational program through KOSEF-JSPS which made his visit to ISSP, Tokyo possible, and to BSRI98-2412 of Ministry of Education, Korea.
# What is the Usefulness of Frequentist Confidence Intervals? ## Abstract The following questions are discussed: “Why confidence intervals are a hot topic?”; “Are confidence intervals objective?”; “What is the usefulness of coverage?”; “How to obtain useful information from experiment?”; “The confidence level must be chosen independently from the knowledge of the data?”. preprint: DFTT 11/00 arXiv:hep-ex/0003001 The problem of getting meaningful information from the statistical analysis of experimental data in high energy physics has attracted recently great attention, reaching a (local) maximum at the Workshop on “Confidence Limits” held at CERN in January . Having participated to that Workshop and having read some of the related material available at the Workshop Web page , I think that there is a certain confusion on the usefulness of Frequentist confidence intervals and some clarifications are necessary. In the following I will consider and answer some crucial questions in the framework of the Frequentist theory of statistical inference. I will assume that the reader is familiar with the theory and its problems (if not, see ). 1. Why confidence intervals are a hot topic? The current debate on the methods of statistical analysis of experimental data follows mainly from the proposal at the end of 1997 of the the Unified Approach by Feldman and Cousins and its immediate adoption as the recommended Frequentist method in the 1998 edition of the Review of Particle Physics (RPP) of the Particle Data Group (PDG) . Although it may be true that “there is no PDG method” , it is a matter of fact that RPP is a guide for the Physics community. Most physicists have faith in what is written in RPP, especially regarding the fields in which they are not experts (unfortunately most human beings can achieve expertise in one or a few fields and it is naturally correct to believe to experts in the other fields if nothing that they say is obviously wrong). Therefore, the authors of RPP have a responsibility for what they write. The immediate adoption of the Unified Approach by the PDG has been considered by many rather premature, taking into account that it happened before testing the performances of the Unified Approach in real experiments. These concerns received dramatic confirmations from the unphysical results obtained in two of the first applications . Several papers have followed the one by Feldman and Cousins, proposing alternative Frequentist methods. Hence, at present there are several Frequentist approaches available and each analyzer of experimental data must choose one among them independently of the knowledge of the data, in order to preserve the property of coverage . One of the main issues in the present debate on confidence intervals is the study of the properties of the different Frequentist methods in order to allow a meaningful choice of the method to be used in a practical application. Another problem with the 1998 edition of RPP is that the emphasis on the Unified Approach is likely correlated with the disappearance of the useful description of the Bayesian approach present in the 1996 edition of RPP . It seems hard to argue that this is not a biased choice. 2. Are confidence intervals objective? It is well known that credibility intervals obtained in the framework of the Bayesian theory (see, for example, ) are subjective because of the necessity to have a prior probability distribution function for the quantity under measurement. 
In the Frequentist–Bayesian debate some experts biased towards the Frequentist approach say that they want to know what was measured in the experiment, without the subjective Bayesian prior of the experimenter. But also in the Frequentist approach the experimenter must choose a method to construct the confidence belt and the result that he will obtain depends on this choice (in Ref. it has been shown that Frequentist confidence intervals are objective only from a statistical point of view). Thus, it is clear that Frequentist confidence intervals (as Bayesian credibility intervals) do not describe what is measured independently from subjective choices! Actually, in the framework of the Frequentist theory there is no way to get a result without the subjective choice of the method to construct the confidence belt (approximate methods as maximum likelihood also need subjective choices). On the other hand, working in the framework of the Bayesian theory, one can present the likelihood function as the result of the experiment (or its normalized version called “relative belief updating ratio” ) and everyone can obtain a credibility interval using her/his prior. Moreover, the Bayesian prior takes into account in a proper way the subjective believe that may come from a solid experience in the field, whereas the choice of a specific Frequentist method seems much more arbitrary. Since the Frequentist confidence intervals are subjective as the Bayesian credibility intervals, but Bayesian theory takes into account subjective belief based on experience in a proper way, I think that, *contrary to what is usually believed, a choice of the method based on subjectivity favors the Bayesian approach*. A choice of the Frequentist approach is reasonable only if based on the main property of Frequentist confidence intervals, *coverage*, that I will discuss in the following items. 3. What is the usefulness of coverage? Some experts say that coverage implies that in order to be right, for example, 90% of the times, 10% of the times you must get a wrong result and you must give it, even if you know (or have a strong suspicion) that it is wrong (for example, an empty confidence interval). If I tell this to any pedestrian, he will think that I am nuts: if I know that the result is wrong why should I give it? It is not only useless, it may also confuse other people. So, I think that among reasonable people we can agree that *wrong results are useless and should not be given*. I think that coverage is useful because one knows the probability, given by the confidence level, that the confidence interval covers the true value of the quantity under measurement. Each confidence interval obtained in an experiment has this property, independently from the results and even existence of other experiments. Therefore, there is no need to give wrong confidence intervals! In principle, if one could make many experiments to measure a physical quantity $`\mu `$, each experiment producing a confidence interval with a chosen confidence level, one could collect all the confidence intervals (including those that are known to be wrong), producing a set of intervals that cover the true value of $`\mu `$ with a probability given by the confidence level. But in practice, at least in high energy physics, there are only a few experiments (sometimes one or two) that measure each physical quantity. Therefore, the set of confidence intervals is too small to be of any usefulness. 
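A toy numerical illustration of this statement (not from the paper): for a Gaussian measurement with known sigma, the central 90% interval constructed in each of a large number of repeated experiments covers the true value in about 90% of them, whatever that true value happens to be.

```python
# Frequentist coverage of a central 90% interval for a Gaussian measurement.
import numpy as np

rng = np.random.default_rng(42)
mu_true, sigma, z90 = 3.7, 1.0, 1.6449        # P(|x - mu| < z90*sigma) = 0.90
x = rng.normal(mu_true, sigma, size=200_000)  # many repetitions of the experiment
covered = np.abs(x - mu_true) < z90 * sigma   # interval [x - z*sigma, x + z*sigma]
print(covered.mean())                         # ~0.90
```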
Instead, one is interested to get useful information from each experiment. 4. How to obtain useful information from experiment? I think that a procedure that allows to get always useful information from an experiments is the following: 1. Choose the Frequentist method with the desired properties independently of the knowledge of the data (see ). 2. If the data do not indicate any unlikely statistical fluctuation and the confidence interval obtained with the chosen Frequentist method looks fine, the confidence interval can be given and one knows that it covers the true value of the quantity under measurement with a probability given by the confidence level. 3. If it is clear that the data indicate an unlikely statistical fluctuation (as less events than the expected background measured in a Poisson process with known background) and the confidence interval obtained with the chosen Frequentist method is suspected to be wrong (for example, too small or even vanishing), the confidence interval should not be given. Feldman and Cousins proposed that in such cases the experimenter should give also what they called “sensitivity”, but should be called more appropriately *“exclusion potential”* , because it is calculated assuming that the quantity under measurement is zero. However, since the exclusion potential cannot be combined with the confidence interval that has been obtained in the experiment, it is not clear what is the usefulness of giving two quantities instead of one, except as a warning that the confidence interval is likely to be wrong. But in this case it is better not to give it! Two quantities produce only confusion if one of them tells that the other is not reliable. Thus, the solution proposed by Feldman and Cousins (recommended also in the 1998 edition of RPP ) to give two quantities instead of one is just the opposite of what it is reasonable to do: give nothing! (Here I am discussing only Frequentist quantities. One can always give Bayesian quantities, as discussed in the next item.) 4. In any case the experimenters should analyze their data using the Bayesian theory, that allows always to obtain meaningful results. The experimenters can give the likelihood function or the relative belief updating ratio , that represent the objective result of the experiment, and can give also a credibility interval obtained with their prior based on experience and knowledge of the experiment. Following this procedure, experiments will always produce a result in the framework of the Bayesian theory and will produce also a Frequentist result only if it is a reliable one. As an illustration, let us consider the well known case of the KARMEN experiment on the search for short-baseline $`\overline{\nu }_\mu \overline{\nu }_e`$ oscillations . In the middle of 1998 the KARMEN collaboration reported the observation of zero events in a Poisson process with a known background of $`2.88\pm 0.13`$ events . Using the Unified Approach they obtained an exclusion curve in the space of the neutrino mixing parameters that seemed to exclude almost all the region allowed by the positive results of the LSND $`\overline{\nu }_\mu \overline{\nu }_e`$ experiment . The exclusion curve of the KARMEN experiment lead many people to believe that the LSND evidence in favor of $`\overline{\nu }_\mu \overline{\nu }_e`$ oscillations was almost ruled out by the result of the KARMEN experiment, in spite of the fact that the exclusion potential of the KARMEN experiment was about four times larger than the actual upper limit. 
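For concreteness, the sketch below (illustrative code, not the KARMEN analysis; it treats only the Poisson counting variable, not the oscillation parameter plane) reproduces for the 1998 data set (zero observed events on a known background b = 2.88) the Unified Approach upper limit, the corresponding exclusion potential, the probability of such a downward fluctuation of the background, and a Bayesian flat-prior upper limit of the kind advocated in item 4.

```python
# Unified (Feldman-Cousins) construction for a Poisson process with known
# background, plus the exclusion potential and a Bayesian flat-prior limit.
import numpy as np
from scipy.stats import poisson

b, cl, n_max = 2.88, 0.90, 60
mu_grid = np.arange(0.0, 15.0, 0.005)
n = np.arange(n_max + 1)

def accepted_band(mu):
    """Set of n accepted at confidence level cl for a given signal mean mu."""
    p = poisson.pmf(n, mu + b)
    mu_best = np.maximum(n - b, 0.0)
    r = p / poisson.pmf(n, mu_best + b)       # likelihood-ratio ordering principle
    order = np.argsort(-r)
    cum = np.cumsum(p[order])
    keep = order[: np.searchsorted(cum, cl) + 1]
    return set(int(i) for i in keep)

bands = [accepted_band(mu) for mu in mu_grid]

def upper_limit(n_obs):
    inside = [mu for mu, band in zip(mu_grid, bands) if n_obs in band]
    return max(inside) if inside else 0.0

ul_0 = upper_limit(0)                          # limit actually obtained for n = 0
excl_pot = sum(poisson.pmf(m, b) * upper_limit(m) for m in range(25))  # exclusion potential
p_fluct = poisson.cdf(0, b)                    # P(n <= 0 | mu = 0) = exp(-2.88), about 5.6%
mu_bayes = -np.log(1.0 - cl)                   # flat-prior 90% limit for n = 0 (about 2.30)
print(ul_0, excl_pot, p_fluct, mu_bayes)
```

The ratio of the exclusion potential to the upper limit obtained for zero observed events can be compared with the "about four times larger" statement above; note also that the flat-prior Bayesian limit for zero observed events does not depend on the background at all, which is the sense in which that analysis is less sensitive to the downward fluctuation.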
This discrepancy was due to the observation of less events than the expected background. In 1999 the KARMEN experiment reported the observation of as many events as expected from background , resulting in an upper limit practically coincident with the exclusion potential, compatible with the results of the LSND experiment (see also ). It is clear that the result presented in 1998 has been worse than useless: its only effect has been to confuse people. This confusion could have been avoided if the KARMEN collaboration would not have presented the 1998 exclusion curve obtained with the Unified Approach, since it was clearly meaningless from a physical point of view (although statistically correct if the KARMEN collaboration did not choose the Unified Approach on the basis of the data, for example because it gave a bound more stringent than other methods). Moreover, the KARMEN collaboration did not (and still does not, whereas curiously they continue to give the useless 1998 exclusion curve ) present the result of a Bayesian analysis, which is less sensitive to fluctuations of the background than the Unified Approach (see, for example, ). This is probably a negative consequence of the above mentioned (question 1) bias of the 1998 edition of RPP towards the Unified Approach. Another lesson to be learned from the KARMEN example regards the usefulness of the goodness of fit test proposed by Feldman and Cousins : in the Poisson with background case the natural analogue for the goodness of fit is the probability to obtain $`nn_{\mathrm{obs}}`$ under the best-fit assumption of $`\mu =0`$, where $`\mu `$ is the mean of signal events, $`n`$ is the number of events and $`n_{\mathrm{obs}}`$ is the number of observed events. In the case of the KARMEN experiment $`n_{\mathrm{obs}}`$ was zero in 1998 and the probability to obtain $`n=0`$ with $`\mu =0`$ and a known background of $`2.88\pm 0.13`$ events is 5.6%. This probability is not unacceptable, but imagine that we decide to reject a goodness of fit lower than 10%. The problem is that in this case there is nothing to reject, because the background is known and low fluctuations of the background are allowed. Thus, in the case of a Poisson process with know background the goodness of fit test proposed by Feldman and Cousins is useless. 5. The confidence level must be chosen independently from the knowledge of the data? As far as I know, this important question has not been discussed in the literature. I think that the answer depends on the use that one is going to make with the confidence intervals. If many experiments have been done, the resulting confidence intervals at a certain confidence level form a set that covers the true value of the quantity under measurement with probability given by the confidence level. This property is damaged if the confidence level is chosen on the basis of the data. For example, in the case of a Poisson process with known background, it is reasonable to choose a higher confidence level if less events than the expected background have been observed, because the low fluctuation of the background induces a certain skepticism on the reliability of the confidence interval. But in this way the set of confidence intervals with high confidence level is unbalanced towards low values of the quantity under measurement, whereas the set of confidence intervals with low confidence level is unbalanced towards high values of the quantity under measurement. 
Thus, the sets of confidence intervals do not have correct coverage and the answer to the question above is “yes”. In practice, however, at least in high energy physics research, one does not have the possibility to do many experiments for the measurement of a certain quantity and one is not interested in collecting a set of confidence intervals that cover the true value of the quantity under measurement with a given probability. As discussed above in 3, each experimental collaboration is interested to obtain a meaningful and reliable result in its experiment. Nobody is going to collect sets of confidence intervals obtained in different experiments and study their properties. The confidence interval obtained in each experiment is considered individually, not embedded in a set. In this case the confidence level can be chosen after the data are known without spoiling coverage. It is now well known that the method to construct the confidence belt must be chosen independently of the knowledge of the data . A simple reason is that knowing the data one can always construct a confidence belt that gives any wanted confidence interval at some (sometimes small) confidence level. But when the method to construct the confidence belt has been chosen the coverage of the confidence belt is guaranteed for any value of the confidence level and the freedom to choose the confidence level does not allow one to get a wanted confidence interval. Taking into account that the confidence interval can be chosen at will, I think that it is highly desirable that the experimental collaborations publish not a single confidence interval at an arbitrary confidence level, but the entire *confidence distribution* of the parameter , i.e. the limits of the confidence interval as functions of the confidence level, at least for large values of the confidence level (say larger than 68%). In these days this can be easily done even for multi-dimensional confidence intervals by giving a table available as a file through the Internet and/or an interpolating function. In conclusion, I have discussed some crucial questions regarding Frequentist confidence intervals. I hope that the answers that I have given will at least stimulate the debate on the subject and, if they are right, will contribute to an improvement of the understanding of the usefulness of confidence intervals.
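A trivial illustration of such a confidence distribution (with made-up numbers) for the simplest case of a Gaussian measurement with known sigma:

```python
# Central confidence interval as a function of the confidence level.
import numpy as np
from scipy.stats import norm

x_obs, sigma = 3.7, 1.0                        # illustrative measurement
for cl in (0.68, 0.90, 0.95, 0.99):
    z = norm.ppf(0.5 * (1.0 + cl))
    print(f"{cl:.2f}  [{x_obs - z*sigma:.2f}, {x_obs + z*sigma:.2f}]")
```

Publishing such a table, or the interpolating function behind it, conveys strictly more information than a single interval at one arbitrarily chosen confidence level.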
no-problem/0003/cond-mat0003458.html
ar5iv
text
# Three-Point Density Correlation Functions in the Fractional Quantum Hall Regime
## 1 Introduction
The fractional quantum Hall effect has been one of the key topics in condensed matter physics for the last 15 years. While much of the experimental work on this phenomenon has been focussed on electrical transport measurements, recently attention has turned to the development of complementary spectroscopic techniques. In the case of both phonon and inelastic light scattering experiments the spectroscopic probe couples to the electronic density of the system and so naturally interacts with its collective density fluctuations. The theory of these collective modes was first developed by Girvin, MacDonald and Platzman (GMP) in a manner analogous to that used by Feynman for the phonon and roton modes of superfluid helium. Unlike the case of liquid helium, there is no gapless phonon-like behaviour at small wave vector; the collective modes of the fractional quantum Hall systems are gapped at all wavelengths: a consequence of the incompressibility of these states. However, in common with liquid helium, the collective mode dispersion does have a well defined minimum, hence the quanta of collective excitations of the fractional quantum Hall states are referred to as magnetorotons. One can envision two types of process involved in both the phonon and light scattering experiments. Firstly, a magnetoroton can be created when the electron liquid absorbs one of the probe quanta (phonon or photon); we refer to this as a type A absorption. Secondly, an existing magnetoroton can be scattered into a different state by the absorption of a probe quantum; this we call a type B process. At zero temperature the electron liquid will be in its ground state (well described by Laughlin’s wave function) and there will be no magnetorotons present, hence only the type A process can occur. Because the magnetoroton dispersion is gapped at all wave vectors there is a threshold frequency for this process: no probe quantum can be absorbed whose energy is less than the minimum gap, $`\mathrm{\Delta }^{}`$. Type B processes will occur at any non-zero temperature and have no threshold: in principle, arbitrarily small energies can be transferred to the electron liquid by these processes. The theory of the Type A processes has been discussed by He and Platzman for the inelastic light-scattering case and by our group for the phonon case. These type B processes will determine the leading finite-temperature corrections to the zero temperature theory outlined in . The theory of Type B processes is less well developed but potentially rather important for the understanding of the phonon experiments because these typically use pulses containing a black body distribution of phonon energies characterized by a pulse temperature $`T_\varphi <\mathrm{\Delta }^{}`$ and use the temperature of the electron liquid itself to monitor the rate of phonon absorption. At an arbitrary electron temperature there will be competition between rare type A processes in which high energy phonons are absorbed and common type B processes in which low energy phonons are absorbed. In recent time-resolved experiments the change in electron temperature is recorded over the period of time in which the pulse is in contact with the electrons. At early times the electron liquid is at very low temperatures and so type A processes will dominate the energy transfer and hence the rate of change of electron temperature.
At later times the electron temperature will be higher, because of all the earlier type A absorptions, and type B processes will become comparable in determining the energy transfer rate. The purpose of this paper is to derive a sum rule obeyed by the matrix elements which control the rates for the type B processes and to use this to test the best existing approximation scheme for this object: the convolution approximation developed for studying the corresponding quantities for superfluid Helium-4 . As discussed in , these matrix elements are also important for the understanding of the effects of disorder on the form of the magnetoroton dispersion. In all of the following, natural units in which $`l_c=\sqrt{\hbar /eB}=1`$ and $`e^2/4\pi ϵ_0\kappa l_c=1`$ (where $`\kappa `$ is the bulk dielectric constant of the material in which the 2des is formed) are used.
## 2 Magnetorotons
Girvin, MacDonald and Platzman supposed that the low-lying collective excitations of a fractional quantum Hall fluid could be obtained in a manner analogous to that used by Feynman in his classic works on superfluid helium. They proposed that a collective excitation with 2d wave vector $`𝐪`$ would be described by the wave function $$|𝐪=\frac{\overline{\rho }_𝐪|\mathrm{\Omega }}{\sqrt{\mathrm{\Omega }\left|\overline{\rho }_𝐪\overline{\rho }_𝐪\right|\mathrm{\Omega }}}$$ (1) where $`|\mathrm{\Omega }`$ is the ground state of the system (for which Laughlin has provided an excellent wave function, at least for the cases where the Landau level filling $`\nu `$ is of the form $`1/m`$ for $`m`$ odd) and $`\overline{\rho }_𝐪`$ is an operator that is derived from the conventional density operator $$\rho _𝐪=\underset{i=1}{\overset{N}{}}e^{i𝐪\widehat{𝐫}_i}$$ (2) by projection onto the lowest Landau level. In the notation used by MacDonald in $`\overline{\rho }_𝐪`$ $`={\displaystyle \underset{i=1}{\overset{N}{}}}B_i\left(𝐪\right)`$ (3) $`B\left(𝐪\right)`$ $`=𝒫_0e^{i𝐪𝐫}𝒫_0`$ (4) where $`𝒫_0`$ acts on the Hilbert space of a single electron to project out the states within the lowest (spin-polarized) Landau level. Girvin and Jach investigated the properties of these projected operators and, in particular, deduced that $$B_i\left(𝐤\right)B_i\left(𝐪\right)=e^{𝗄^{}𝗊/2}B_i\left(𝐤+𝐪\right)$$ (5) where $`𝗄=k_x+ik_y`$.
This leads, bearing in mind that the projection operators for one particle commute with operators associated with the other, to the commutation relation for the projected density operators $`[\overline{\rho }_𝐤,\overline{\rho }_𝐪]`$ $`=\left(e^{𝗄^{}𝗊/2}e^{𝗊^{}𝗄/2}\right)\overline{\rho }_{𝐤+𝐪}`$ $`=i\mathrm{\Phi }(𝐤,𝐪)\overline{\rho }_{𝐤+𝐪}`$ (6) where $`\mathrm{\Phi }(𝐤,𝐪)`$ $`=i\left(e^{𝗄^{}𝗊/2}e^{𝗊^{}𝗄/2}\right)`$ (7) $`=e^{𝐤𝐪/2}2\mathrm{sin}\left({\displaystyle \frac{1}{2}}𝐤𝐪\right).`$ The normalization in equation 1 includes a matrix element which has the form of a projected static structure factor $$\overline{s}\left(q\right)=\frac{1}{N}\mathrm{\Omega }\left|\overline{\rho }_𝐪\overline{\rho }_𝐪\right|\mathrm{\Omega }.$$ (8) GMP showed that this could be related to the true static structure factor $$s\left(q\right)=\frac{1}{N}\mathrm{\Omega }\left|\rho _𝐪\rho _𝐪\right|\mathrm{\Omega }$$ (9) via the simple relation $$\overline{s}\left(q\right)e^{q^2/2}=s\left(q\right)1.$$ (10) The magnetoroton energy can be written in the form $`\mathrm{\Delta }\left(k\right)`$ $`=𝐤\left|\right|𝐤\mathrm{\Omega }\left|\right|\mathrm{\Omega }`$ $`={\displaystyle \frac{\mathrm{\Omega }\left|\overline{\rho }_𝐤[,\overline{\rho }_𝐤]\right|\mathrm{\Omega }}{N\overline{s}\left(k\right)}}`$ $`={\displaystyle \frac{\overline{f}\left(k\right)}{\overline{s}\left(k\right)}}`$ (11) and GMP showed that the projected oscillator strength, $`\overline{f}\left(k\right)`$, can be written explicitly as an integral involving $`\overline{s}`$ and $`\mathrm{\Phi }`$ only. Hence, obtaining $`\overline{s}\left(k\right)`$ from the Laughlin wave function, they found the dispersion relation for these excitations. Consider now a bosonic probe quantum (such as a phonon or photon) labelled by a 3d wave vector $`𝐐`$ which couples to the 2d electrons via a hamiltonian of the form $$H_{int}=\underset{𝐐}{}M_𝐐\rho _𝐪\left(a_𝐐+a_𝐐^{}\right)$$ (12) where $`a_𝐐`$ annihilates a probe quantum with the given wave vector, $`\rho _𝐪`$ is the electron density operator described above (un-projected) and $`M_𝐐`$ is some coupling function. For the detailed forms of $`M_Q`$ relevant to the experimental systems see . The probability per unit time for a magnetoroton initially in a state $`|𝐤`$ to absorb a probe quantum with wave vector $`𝐐`$ and be scattered into the state $`|𝐤+𝐪`$ is given by Fermi’s golden rule as $$\tau _{𝐤,𝐐}^1=\frac{2\pi }{\mathrm{}}\left|M_𝐐\right|^2\left|𝐤+𝐪\left|\rho _𝐪\right|𝐤\right|^2\delta \left(\mathrm{\Delta }\left(\left|𝐤+𝐪\right|\right)\mathrm{\Delta }\left(k\right)\mathrm{}\omega _𝐐\right)$$ (13) where $`\omega _𝐐`$ is the energy of the probe quantum. This, as usual with lowest order perturbation theory, factorizes nicely into a part which depends on the well characterized details of the probe and the material in which the 2des is formed and a part that only involves the states of the correlated 2d electrons. The relevant matrix element in the latter is $$𝐤+𝐪\left|\rho _𝐪\right|𝐤=\frac{P(𝐤,𝐪)}{\sqrt{\overline{s}\left(\left|𝐤+𝐪\right|\right)\overline{s}\left(k\right)}}$$ (14) where we have replaced $`\rho _𝐪`$ with its projected counterpart by virtue of the idempotence of the projection operators and we define the three-point correlation function $$P(𝐤,𝐪)=\frac{1}{N}\mathrm{\Omega }\left|\overline{\rho }_{𝐤𝐪}\overline{\rho }_𝐪\overline{\rho }_𝐤\right|\mathrm{\Omega }.$$ (15) . ## 3 Symmetry Properties of the Correlation Function We can deduce two symmetry properties of $`P(𝐤,𝐪)`$ straightforwardly. 
Firstly we can use the fact that $`\overline{\rho }_𝐪^{}=\overline{\rho }_𝐪`$ to deduce that $`\left\{P(𝐤,𝐪)\right\}^{}`$ $`=\left\{{\displaystyle \frac{1}{N}}\mathrm{\Omega }\left|\overline{\rho }_{𝐤𝐪}\overline{\rho }_𝐪\overline{\rho }_𝐤\right|\mathrm{\Omega }\right\}^{}`$ $`={\displaystyle \frac{1}{N}}\mathrm{\Omega }\left|\overline{\rho }_𝐤\overline{\rho }_𝐪\overline{\rho }_{𝐤+𝐪}\right|\mathrm{\Omega }`$ $`=P(𝐤+𝐪,𝐪).`$ (16) Secondly we can use the commutation relation for the projected density operators, as derived by Girvin and Jachto simplify $`P(𝐤,𝐪)P(𝐪,𝐤)`$ $`={\displaystyle \frac{1}{N}}\mathrm{\Omega }\left|\overline{\rho }_{𝐤𝐪}[\overline{\rho }_𝐪,\overline{\rho }_𝐤]\right|\mathrm{\Omega }`$ $`=i\mathrm{\Phi }(𝐪,𝐤){\displaystyle \frac{1}{N}}\mathrm{\Omega }\left|\overline{\rho }_{𝐤𝐪}\overline{\rho }_{𝐤+𝐪}\right|\mathrm{\Omega }`$ $`=i\mathrm{\Phi }(𝐪,𝐤)\overline{s}\left(\left|𝐤+𝐪\right|\right).`$ (17) ## 4 Derivation of the Sum Rules ### 4.1 Structure Factor Sum-Rule In order to show the basic idea of the method we will begin with a simple case and prove a sum-rule for the projected structure factor itself. From the work of GMP we know that $$\overline{s}\left(q\right)e^{q^2/2}=s\left(q\right)1.$$ In first quantized notation we have that, for a system of $`N`$ particles $`s\left(q\right)`$ $`={\displaystyle \frac{1}{N}}\mathrm{\Omega }\left|\left({\displaystyle \underset{i=1}{\overset{N}{}}}e^{i𝐪\widehat{𝐫}_i}\right)\left({\displaystyle \underset{j=1}{\overset{N}{}}}e^{i𝐪\widehat{𝐫}_j}\right)\right|\mathrm{\Omega }`$ $`=1+{\displaystyle \frac{1}{N}}{\displaystyle \underset{ij}{}}\mathrm{\Omega }\left|e^{i𝐪\left(\widehat{𝐫}_i\widehat{𝐫}_j\right)}\right|\mathrm{\Omega }`$ Hence we have that $$\left(s\left(𝐪\right)1\right)d^2𝐪=\frac{1}{N}\underset{ij}{}\mathrm{\Omega }\left|e^{i𝐪\left(\widehat{𝐫}_i\widehat{𝐫}_j\right)}d^2𝐪\right|\mathrm{\Omega }$$ Now $$e^{i𝐪\left(\widehat{𝐫}_i\widehat{𝐫}_j\right)}d^2𝐪=\delta \left(\widehat{𝐫}_i\widehat{𝐫}_j\right)$$ so that $$\left(s\left(𝐪\right)1\right)d^2𝐪=\frac{1}{N}\underset{ij}{}\mathrm{\Omega }\left|\delta \left(\widehat{𝐫}_i\widehat{𝐫}_j\right)\right|\mathrm{\Omega }.$$ Inserting a representation of unity in terms of the complete set of $`N`$-particle position eigenstates $`|\left\{𝐫_i\right\}`$ gives $`{\displaystyle \left(s\left(𝐪\right)1\right)d^2𝐪}`$ $`={\displaystyle \frac{1}{N}}{\displaystyle \underset{ij}{}}{\displaystyle \left(\underset{i=1}{\overset{N}{}}d^2𝐫_i\right)\mathrm{\Omega }\left|\delta \left(\widehat{𝐫}_i\widehat{𝐫}_j\right)\right|\left\{𝐫_i\right\}\left\{𝐫_i\right\}|\mathrm{\Omega }}`$ $`={\displaystyle \frac{1}{N}}{\displaystyle \underset{ij}{}}{\displaystyle \left(\underset{i=1}{\overset{N}{}}d^2𝐫_i\right)\delta \left(𝐫_i𝐫_j\right)\mathrm{\Omega }|\left\{𝐫_i\right\}\left\{𝐫_i\right\}|\mathrm{\Omega }}`$ The only configurations that could contribute to this integral are those in which two particles co-incide. Such configurations, however, have zero weight because of the Pauli exclusion principle. Hence we conclude that $$\left(s\left(𝐪\right)1\right)d^2𝐪=0.$$ From this we find, straightforwardly that $$\overline{s}\left(q\right)d^2𝐪2\pi _0^{\mathrm{}}e^{q^2/2}q𝑑q=\left(s\left(q\right)1\right)d^2𝐪=0$$ or $$\overline{s}\left(q\right)d^2𝐪=2\pi .$$ Of course, this result can be derived more simply by noting that $$s\left(𝐪\right)=1+\rho e^{i𝐪𝐫}\left(g\left(r\right)1\right)d^2𝐫+4\pi ^2\rho \delta ^2\left(𝐪\right)$$ where $`g\left(r\right)`$ is the radial distribution function. 
Hence $`{\displaystyle \left(s\left(𝐪\right)1\right)d^2𝐪}`$ $`=\rho {\displaystyle e^{i𝐪𝐫}d^2𝐪\left(g\left(r\right)1\right)d^2𝐫}+4\pi ^2\rho {\displaystyle \delta ^2\left(𝐪\right)d^2𝐪}`$ $`=4\pi ^2\rho {\displaystyle \delta ^2\left(𝐫\right)\left(g\left(r\right)1\right)d^2𝐫}+4\pi ^2\rho `$ $`=4\pi ^2\rho g\left(0\right).`$ The Pauli principle ensures that $`g\left(0\right)=0`$ so that we again recover our sum-rule. ### 4.2 Three Point Correlation Sum-Rule Now let us derive our principal result: a sum rule for the three point correlation function. As shown by MacDonald et al., the three point correlation function can be written in the form $$P(𝐤,𝐪)=P_0(𝐤,𝐪)+h^{(3)}(𝐤,𝐪)$$ (18) where $`P_0(𝐤,𝐪)`$ $`=e^{\frac{1}{2}\left|𝗊\right|^2\frac{1}{2}𝗄^{}𝗊}\left\{e^{\frac{1}{2}\left|𝗄\right|^2}+s\left(\left|𝗄\right|\right)1\right\}`$ $`+e^{\frac{1}{2}\left(𝗄+𝗊\right)^{}𝗄}\left(s\left(\left|𝗊\right|\right)1\right)+e^{\frac{1}{2}𝗊^{}𝗄}\left(s\left(\left|𝗄+𝗊\right|\right)1\right).`$ Once again, $`𝗄=k_x+ik_y`$ is the complex representation of the vector $`𝐤`$ and $$h^{(3)}(𝐤,𝐪)=\frac{1}{N}\underset{lmn}{}\mathrm{\Omega }\left|e^{i\left(𝐤+𝐪\right)𝐫_l}e^{i𝐪𝐫_m}e^{i𝐤𝐫_n}\right|\mathrm{\Omega }$$ (19) is the un-projected three point correlation function of the quantum liquid. We wish to find a sum rule of the form $$\frac{1}{𝒜}\underset{𝐪}{}P(𝐤,𝐪)=F\left(𝐤\right).$$ (20) The term that is likely to cause us difficulty is the un-projected three point function, however $`{\displaystyle \frac{1}{𝒜}}{\displaystyle \underset{𝐪}{}}h^{(3)}(𝐤,𝐪)`$ $`={\displaystyle \frac{1}{𝒜}}{\displaystyle \underset{𝐪}{}}{\displaystyle \frac{1}{N}}{\displaystyle \underset{lmn}{}}\mathrm{\Omega }\left|e^{i\left(𝐤+𝐪\right)𝐫_l}e^{i𝐪𝐫_m}e^{i𝐤𝐫_n}\right|\mathrm{\Omega }`$ $`={\displaystyle \frac{1}{N}}{\displaystyle \underset{lmn}{}}\mathrm{\Omega }\left|e^{i𝐤𝐫_l}\left[{\displaystyle \frac{1}{𝒜}}{\displaystyle \underset{𝐪}{}}e^{i𝐪𝐫_l}e^{i𝐪𝐫_m}\right]e^{i𝐤𝐫_n}\right|\mathrm{\Omega }.`$ (21) The sum over $`q`$ is simply a delta function, so that $$\frac{1}{𝒜}\underset{𝐪}{}h^{(3)}(𝐤,𝐪)=\frac{1}{N}\underset{lmn}{}\mathrm{\Omega }\left|e^{i𝐤𝐫_l}\delta ^2\left(𝐫_l𝐫_m\right)e^{i𝐤𝐫_n}\right|\mathrm{\Omega }$$ (22) but, as seen above, this must be identically zero, the only configurations for which the delta-function is non-zero are ones for which the ground state wave function vanishes by virtue of the Pauli exclusion principle. Hence $`\frac{1}{𝒜}_𝐪h^{(3)}(𝐤,𝐪)=0`$$`𝐤`$. The remaining terms are simplified by noting that $$e^{\frac{1}{2}k^2}+s\left(k\right)1=\overline{s}\left(k\right).$$ (23) and by removing possible singularities by writing $$s\left(q\right)1=h\left(q\right)+N\delta _{𝐪,\mathrm{𝟎}}$$ (24) where $$h\left(q\right)=\rho _0\left(g\left(r\right)1\right)e^{i𝐪𝐫}d^2𝐫$$ (25) and $`g\left(r\right)`$ is the usual radial distribution function of the electron liquid. 
Hence we are left with $`{\displaystyle \frac{1}{𝒜}}{\displaystyle \underset{𝐪}{}}P_0(𝐤,𝐪)`$ $`={\displaystyle \frac{1}{𝒜}}{\displaystyle \underset{𝗊}{}}e^{\frac{1}{2}\left|𝗊\right|^2\frac{1}{2}𝗄^{}𝗊}\overline{s}\left(\left|𝗄\right|\right)+2\rho _0e^{\frac{1}{2}\left|𝗄\right|^2}`$ $`+{\displaystyle \frac{1}{𝒜}}{\displaystyle \underset{𝗊}{}}e^{\frac{1}{2}\left(𝗄+𝗊\right)^{}𝗄}h\left(\left|𝗊\right|\right)+{\displaystyle \frac{1}{𝒜}}{\displaystyle \underset{𝗊}{}}e^{\frac{1}{2}𝗊^{}𝗄}h\left(\left|𝗄+𝗊\right|\right)`$ $`={\displaystyle \frac{1}{𝒜}}{\displaystyle \underset{𝗊}{}}e^{\frac{1}{2}\left|𝗊\right|^2\frac{1}{2}𝗄^{}𝗊}\overline{s}\left(\left|𝗄\right|\right)+2\rho _0e^{\frac{1}{2}\left|𝗄\right|^2}`$ (26) $`+2e^{\frac{1}{2}\left|𝗄\right|^2}{\displaystyle \frac{1}{𝒜}}{\displaystyle \underset{𝗊}{}}e^{\frac{1}{2}𝗊^{}𝗄}h\left(\left|𝗊\right|\right).`$ (27) As shown below in the appendix the summations can be carried out in the limit $`N,𝒜\mathrm{}`$, $`N/𝒜=\rho _0`$ to give $`{\displaystyle \frac{1}{𝒜}}{\displaystyle \underset{𝐪}{}}P(𝐤,𝐪)`$ $`={\displaystyle \frac{\overline{s}\left(\left|𝗄\right|\right)}{2\pi }}+2\rho _0e^{\frac{1}{2}\left|𝗄\right|^2}+2e^{\frac{1}{2}\left|𝗄\right|^2}\left(\rho _0\right)`$ (28) $`={\displaystyle \frac{\overline{s}\left(k\right)}{2\pi }}.`$ (29) which is our desired sum-rule. We can use the symmetry properties derived above to derive a second version of this. Consider $`{\displaystyle \frac{1}{𝒜}}{\displaystyle \underset{𝐪}{}}\left(P(𝐤,𝐪)P(𝐪,𝐤)\right)`$ $`=i{\displaystyle \frac{1}{𝒜}}{\displaystyle \underset{𝐪}{}}\mathrm{\Phi }(𝐪,𝐤)\overline{s}\left(\left|𝐤+𝐪\right|\right)`$ $`=i{\displaystyle \frac{1}{𝒜}}{\displaystyle \underset{𝐪}{}}\mathrm{\Phi }(𝐪𝐤,𝐤)\overline{s}\left(q\right).`$ (30) It is shown in the appendix that this integral vanishes so that $$\frac{1}{𝒜}\underset{𝐤}{}P(𝐤,𝐪)=\frac{\overline{s}\left(q\right)}{2\pi }.$$ (31) Finally the reality of the right-hand side of 28 allows a further form of the sum-rule to be written as $$\frac{1}{𝒜}\underset{𝐪}{}P(𝐤+𝐪,𝐪)=\frac{\overline{s}\left(k\right)}{2\pi }.$$ (32) The basic idea used here could be extended to consider sum-rules for higher order correlation functions were they to be of interest. ## 5 Assessment of the Convolution Approximation In their work MacDonald et al. 
estimated the small $`q`$ behaviour of $`P(𝐤,𝐪)`$ by using the convolution approximation for the 3-particle distribution function $$n^{\left(3\right)}(𝐫,𝐫^{},𝐫^{\prime \prime })=\underset{lmn}{}\mathrm{\Omega }\left|\delta \left(𝐫𝐫_l\right)\delta \left(𝐫^{}𝐫_m\right)\delta \left(𝐫^{\prime \prime }𝐫_n\right)\right|\mathrm{\Omega }$$ which can be written as $`n_c^{\left(3\right)}(𝐫,𝐫^{},𝐫^{\prime \prime })`$ $`=\rho ^3\left\{1+h\left(rr^{}\right)+h\left(r^{}r^{\prime \prime }\right)+h\left(r^{\prime \prime }r\right)\right\}`$ $`+\rho ^3\left\{h\left(rr^{}\right)h\left(r^{}r^{\prime \prime }\right)+h\left(r^{}r^{\prime \prime }\right)h\left(r^{\prime \prime }r\right)+h\left(r^{\prime \prime }r\right)h\left(rr^{}\right)\right\}`$ $`+\rho ^4{\displaystyle h\left(rR\right)h\left(r^{}R\right)h\left(r^{\prime \prime }R\right)d^2𝐑}.`$ The three-point function, $`h^{\left(3\right)}(𝐤,𝐪),`$ can be written as $$h^{\left(3\right)}(𝐤,𝐪)=\frac{1}{N}d^2𝐫d^2𝐫^{}d^2𝐫^{\prime \prime }e^{i\left(𝐤+𝐪\right)𝐫}e^{i𝐪𝐫^{}}e^{i𝐤𝐫^{\prime \prime }}n_c^{\left(3\right)}(𝐫,𝐫^{},𝐫^{\prime \prime })$$ which gives, using the convolution approximation $`h_c^{\left(3\right)}(𝐤,𝐪)`$ $`=N^2\delta _{𝐤,\mathrm{𝟎}}\delta _{𝐪,\mathrm{𝟎}}+N\delta _{𝐤,\mathrm{𝟎}}h\left(q\right)+N\delta _{𝐪,\mathrm{𝟎}}h\left(k\right)+N\delta _{𝐤+𝐪,\mathrm{𝟎}}h\left(k\right)`$ $`+h\left(k\right)h\left(q\right)+\left[h\left(k\right)+h\left(q\right)+h\left(k\right)h\left(q\right)\right]h\left(\left|𝐤+𝐪\right|\right).`$ This approximation improves on the standard Kirkwood decoupling in that it correctly gives the $`q0`$ limit as $$h_c^{\left(3\right)}(𝐤,\mathrm{𝟎})=\left(N2\right)\left(s\left(k\right)1\right).$$ Since $`P_0(𝐤,𝐪)`$ saturates our sum-rule we expect that $$\frac{1}{A}\underset{𝐪}{}h^{\left(3\right)}(𝐤,𝐪)=0.$$ Using the convolution form gives $`{\displaystyle \frac{1}{A}}{\displaystyle \underset{𝐪}{}}h_c^{\left(3\right)}(𝐤,𝐪)`$ $`=\rho N\delta _{𝐤,\mathrm{𝟎}}+N\delta _{𝐤,\mathrm{𝟎}}{\displaystyle \frac{1}{A}}{\displaystyle \underset{𝐪}{}}h\left(q\right)+2\rho h\left(k\right)`$ $`+{\displaystyle \frac{1}{A}}{\displaystyle \underset{𝐪}{}}\left\{h\left(k\right)h\left(q\right)+\left(h\left(k\right)+h\left(q\right)+h\left(k\right)h\left(q\right)\right)h\left(\left|𝐤+𝐪\right|\right)\right\}`$ Now $`{\displaystyle \frac{1}{A}}{\displaystyle \underset{𝐪}{}}h\left(q\right)`$ $`\rho {\displaystyle \frac{d^2𝐪}{\left(2\pi \right)^2}d^2𝐫e^{𝐪𝐫}\left(g\left(r\right)1\right)}`$ $`=\rho `$ so that $`F\left(k\right)`$ $`={\displaystyle \frac{1}{A}}{\displaystyle \underset{𝐪}{}}h_c^{\left(3\right)}(𝐤,𝐪)=\left(1+h\left(k\right)\right){\displaystyle \frac{1}{A}}{\displaystyle \underset{𝐪}{}}h\left(q\right)h\left(\left|𝐤+𝐪\right|\right)`$ $`\left(1+h\left(k\right)\right){\displaystyle \frac{d^2𝐪}{\left(2\pi \right)^2}h\left(q\right)h\left(\left|𝐤+𝐪\right|\right)}.`$ This final integral requires a specific form for the pair correlation function, $`h\left(q\right)`$. For the primary fractional quantum Hall states ($`\nu =1/m`$, for odd $`m`$) this is known from Monte-Carlo studies of the Laughlin wave function which lead to the form $$h\left(q\right)=\nu e^{q^2/2}+4\nu e^{q^2}\underset{m}{}c_mL_m\left(q^2\right)$$ where the $`c_m`$ co-efficients are tabulated in . We have estimated $`F\left(k\right)`$ numerically for the case $`\nu =1/3`$ and it is plotted in figure 1, along with a plot of $`\overline{s}\left(k\right)/2\pi `$ for comparison. 
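A sketch of how this kind of estimate can be carried out numerically is given below (numpy assumed). For the pair correlation it uses only the leading Gaussian term of the expansion just quoted, $`h(q)\approx -\nu e^{-q^2/2}`$, purely as a stand-in; reproducing the curve in Figure 1 would also require the tabulated $`c_m`$ coefficients, which are not listed in this excerpt.

```python
import numpy as np

def F_of_k(k, h, qmax=12.0, nq=1200, nphi=256):
    """F(k) = (1 + h(k)) * Int d^2q / (2 pi)^2  h(q) h(|k+q|), evaluated on a polar grid."""
    q = np.linspace(qmax / nq, qmax, nq)
    phi = np.linspace(0.0, 2.0 * np.pi, nphi, endpoint=False)
    dq, dphi = q[1] - q[0], phi[1] - phi[0]
    Q, PHI = np.meshgrid(q, phi, indexing="ij")
    k_plus_q = np.sqrt(k**2 + Q**2 + 2.0 * k * Q * np.cos(PHI))   # |k + q|
    integral = np.sum(Q * h(Q) * h(k_plus_q)) * dq * dphi / (2.0 * np.pi) ** 2
    return float((1.0 + h(k)) * integral)

# Stand-in pair correlation: leading term only, h(q) ~ -nu * exp(-q^2/2), for nu = 1/3.
nu = 1.0 / 3.0
def h(q):
    return -nu * np.exp(-np.asarray(q, dtype=float) ** 2 / 2.0)

for k in (0.5, 1.0, 2.0):
    print(f"k = {k:.1f}   F(k) = {F_of_k(k, h):+.5f}")
```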
Clearly $`F\left(k\right)`$ is not identically zero and is not even negligible in comparison to $`\overline{s}\left(k\right)/2\pi `$. Hence we deduce that this approximation is deficient. MacDonald et al. themselves pointed out that this approximation could well be unreliable as it does not correctly reflect the particle-hole symmetry $`\left(\nu 1\nu \right)`$ of the system. ## 6 Summary and Discussion In this short note we have derived a sum rule (with symmetry related variants) for the static structure factor and the three-point correlation function that will determine the leading finite-temperature corrections to the absorption rates of phonons and photons in spectroscopic studies of the fractional quantum Hall effect. In principle the approach followed would allow the formulation of similar sum-rules for higher order correlation functions should such objects ever become relevant to experimental work. The sum-rule for the three-point function has been used to assess the validity of the convolution approximation for fractional quantum Hall systems and has shown that it does not capture all of the physics. *This work was supported by the EPSRC (UK).* ## Appendix A Evaluation of Integrals First of all we need to evaluate the following integral for later use: $$_0^{2\pi }e^{ze^{i\varphi }}𝑑\varphi =_0^{2\pi }e^{z\mathrm{cos}\varphi }\left\{\mathrm{cos}\left(z\mathrm{sin}\varphi \right)i\mathrm{sin}\left(z\mathrm{sin}\varphi \right)\right\}𝑑\varphi $$ (33) now $`{\displaystyle _0^{2\pi }}e^{z\mathrm{cos}\varphi }\mathrm{cos}\left(z\mathrm{sin}\varphi \right)𝑑\varphi `$ $`=2\pi `$ $`{\displaystyle _0^{2\pi }}e^{z\mathrm{cos}\varphi }\mathrm{sin}\left(z\mathrm{sin}\varphi \right)𝑑\varphi `$ $`=0`$ so that $$_0^{2\pi }e^{ze^{i\varphi }}𝑑\varphi =2\pi z.$$ (34) Now we need to evaluate $$f_1\left(k\right)=\frac{1}{𝒜}\underset{q}{}e^{\frac{1}{2}\left|q\right|^2\frac{1}{2}k^{}q}$$ (35) in the limit $`𝒜\mathrm{}`$, $`N\mathrm{}`$, $`N/𝒜=\rho _0`$ this becomes $$\frac{1}{4\pi ^2}_0^{\mathrm{}}𝑑qqe^{q^2/2}_0^{2\pi }𝑑\varphi e^{\frac{1}{2}kqe^{i\varphi }}=\frac{1}{2\pi }.$$ (36) Similarly $`f_2\left(k\right)`$ $`={\displaystyle \frac{1}{𝒜}}{\displaystyle \underset{q}{}}e^{q^{}k/2}h\left(\left|q\right|\right)`$ $`{\displaystyle \frac{1}{4\pi ^2}}{\displaystyle _0^{\mathrm{}}}𝑑qqh\left(q\right){\displaystyle _0^{2\pi }}𝑑\varphi e^{\frac{1}{2}kqe^{i\varphi }}`$ $`={\displaystyle \frac{1}{2\pi }}{\displaystyle _0^{\mathrm{}}}𝑑qqh\left(q\right).`$ (37) The final integral can be evaluated quite generally. $`{\displaystyle _0^{\mathrm{}}}𝑑qqh\left(q\right)`$ $`={\displaystyle \frac{1}{2\pi }}{\displaystyle d^2𝐪h\left(q\right)}`$ $`=2\pi \rho _0h\left(0\right).`$ (38) Now for any spin-polarized fermi system $`g\left(r\right)=1+h\left(r\right)0`$ as $`r0`$ as a consequence of the Pauli principle, hence $`h\left(0\right)=1`$ and $$\frac{1}{2\pi }_0^{\mathrm{}}𝑑qqh\left(q\right)=\rho _0.$$ (39) Finally we need to evaluate $$\frac{1}{𝒜}\underset{𝐪}{}\mathrm{\Phi }(𝐪𝐤,𝐤)\overline{s}\left(q\right)\frac{1}{4\pi ^2}e^{k^2/2}_0^{\mathrm{}}𝑑qq\overline{s}\left(q\right)_0^{2\pi }𝑑\varphi 2e^{\frac{1}{2}kq\mathrm{cos}\varphi }\mathrm{sin}\left(\frac{1}{2}kq\mathrm{sin}\varphi \right).$$ (40) As we have already seen the $`\varphi `$ integral vanishes and so, therefore does the whole expression. ## Appendix B <br>Figure Caption Figure 1: A plot of the function $`F\left(k\right)`$ estimated numerically (full line) with a plot of $`\overline{s}\left(k\right)/2\pi `$ plotted (dashed line) for comparison.
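The Appendix A results are easy to confirm numerically (numpy assumed; the extraction has dropped the minus signs in the exponents, but the sign of the argument does not affect the angular integral): $`\int _0^{2\pi }e^{ze^{i\varphi }}d\varphi =2\pi `$ for every complex $`z`$, since only the constant term of the expansion of the exponential survives the $`\varphi `$ integration, and consequently $`f_1(k)=1/2\pi `$ independently of $`k`$.

```python
import numpy as np

phi = np.linspace(0.0, 2.0 * np.pi, 4096, endpoint=False)
dphi = phi[1] - phi[0]

def angular_integral(z):
    """Int_0^{2 pi} exp(z e^{i phi}) dphi; equals 2 pi for every complex z."""
    return np.sum(np.exp(z * np.exp(1j * phi))) * dphi

for z in (0.3, 2.0, -1.5 + 0.7j):
    print(z, angular_integral(z))            # all ~ 6.2832 (plus a numerically tiny imaginary part)

# f_1(k) of Eqs. (35)-(36): the angular factor gives 2 pi, the radial Gaussian integrates to 1,
# so f_1(k) = 1 / (2 pi) ~ 0.1592 whatever the value of k.
q = np.linspace(1e-3, 10.0, 4000)
dq = q[1] - q[0]

def f1(k):
    inner = np.array([angular_integral(-0.5 * k * qi) for qi in q])
    return float(np.real(np.sum(q * np.exp(-q**2 / 2.0) * inner)) * dq / (4.0 * np.pi**2))

print(f1(0.7), f1(3.0), 1.0 / (2.0 * np.pi))   # all ~ 0.159
```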
no-problem/0003/math-ph0003007.html
ar5iv
text
# Absolute Continuity of the Floquet Spectrum for a Nonlinearly Forced Harmonic Oscillator
## 1 Introduction and statement of the result
It is well known \[HLS\] that the spectrum of the Floquet operator of the resonant, linearly forced harmonic oscillator $$i\frac{\partial u}{\partial t}=-\frac{1}{2}\mathrm{\Delta }u+\frac{1}{2}x^2u+2\epsilon (\mathrm{sin}t)x_1u,x=(x_1,\mathrm{},x_n)\in IR^n,\epsilon >0$$ is purely absolutely continuous. We show in this paper that the absolute continuity of the Floquet spectrum persists under time-periodic perturbations growing no faster than linearly at infinity provided the resonance condition still holds. Thus we consider the time-dependent Schrödinger equation $$i\frac{\partial u}{\partial t}=-\frac{1}{2}\mathrm{\Delta }u+\frac{1}{2}x^2u+2\epsilon (\mathrm{sin}t)x_1u+\mu V(t,x)u$$ (1.1) and suppose that $`V(t,x)`$ is a real-valued smooth function of $`(t,x)`$, $`2\pi `$-periodic with respect to $`t`$, increasing at most linearly as $`|x|`$ goes to infinity: $$|\partial _x^\alpha V(t,x)|\le C_\alpha ,|\alpha |\ge 1.$$ (1.2) Under this condition Eqn. (1.1) generates a unique unitary propagator $`U(t,s)`$ on the Hilbert space $`L^2(IR^n)`$. The Floquet operator is the one-period propagator $`U(2\pi ,0)`$ and we are interested in the nature of its spectrum. It is well known that the long time behaviour of the solutions of (1.1) can be characterized by means of the spectral properties of the Floquet operator (\[KY\]). Our main result in this paper is the following theorem.
###### Theorem 1.1
Let $`V`$ be as above. Then, for $`|\mu |<\epsilon /\underset{t,x}{sup}|\partial _{x_1}V(t,x)|`$, the spectrum of the Floquet operator $`U=U(2\pi ,0)`$ is purely absolutely continuous. Remark The above result can be understood in terms of the classical resonance phenomenon. If $`V=0`$ the motions generated by the classical Hamiltonian $`{\displaystyle \frac{1}{2}}(p^2+x^2)+2\epsilon x_1\mathrm{sin}t`$ undergo a global resonance between the proper frequency of the harmonic motions and the frequency of the linear forcing term. All initial conditions eventually diverge to infinity by oscillations of linearly increasing amplitude. The quantum counterpart of this phenomenon is the absolute continuity of the Floquet spectrum\[HLS\]. One might ask whether this absolute continuity is stable under perturbations which destroy the linearity of the forcing potential. Theorem 1.1 establishes the stability under perturbations which make the forcing a non-linear one but do not destroy the globality of the resonance phenomenon because all initial conditions still diverge by oscillations to infinity. The globality property of the resonance phenomenon seems therefore a necessary condition for the absolute continuity of the Floquet spectrum. It is indeed known (\[H\]) that the Schrödinger operators $`-{\displaystyle \frac{1}{2}}\mathrm{\Delta }+{\displaystyle \frac{1}{2}}|x|^\alpha +{\displaystyle \frac{1}{2}}\epsilon x_1\mathrm{sin}\omega t`$, $`\alpha >2`$, $`\omega \in IR`$, whose classical counterparts yield local nonlinear resonances, have no absolutely continuous part in their Floquet spectrum. Notation We use the vector notation: for the multiplication operator $`X_j`$ by the variable $`x_j`$ and the differential operator $`D_j={\displaystyle \frac{1}{i}}{\displaystyle \frac{\partial }{\partial x_j}}`$, $`j=1,\mathrm{},n`$, we denote $`X=(X_1,\mathrm{},X_n)`$ and $`D=(D_1,\mathrm{},D_n)`$. For a measurable function $`W`$ and a set of commuting selfadjoint operators $`=(_1,\mathrm{},_n)`$, $`W()`$ is the operator defined via functional calculus.
We have the identity $$𝒰^{}W()𝒰=W(𝒰^{}𝒰)$$ (1.3) for any unitary operator $`𝒰`$. ## 2 Proof of the Theorem It is well known (\[Ya\]) that the nature of the spectrum of the Floquet operator $`U`$ is the same (apart from multiplicities) as that of the Floquet Hamiltonian formally given by $$𝒦u=i\frac{u}{t}\frac{1}{2}\mathrm{\Delta }u+\frac{1}{2}x^2u+2\epsilon (\mathrm{sin}t)x_1u+\mu V(t,x)u$$ (2.1) on the Hilbert space $`𝐊=L^2(T\text{ }\text{ })L^2(IR^n)`$, where $`T\text{ }\text{ }=IR/2\pi ZZ`$ is the circle. More precisely, if $`𝒦`$ is the generator of the one-parameter strongly continuous unitary group $`𝒰(\sigma )`$, $`\sigma IR`$, defined by $$(𝒰(\sigma )u)(t)=U(t,t\sigma )u(t\sigma ),u=u(t,)𝐊,$$ (2.2) then, $`𝒰(2\pi )=e^{i2\pi 𝒦}`$ is unitarily equivalent to $`\mathrm{𝟏}U(2\pi ,0)`$. We set $$𝐃C^{\mathrm{}}(T\text{ }\text{ },𝒮(IR^n)).$$ It is easy to see that: 1. The function space $`𝐃`$ is dense in $`𝐊`$. 2. $`𝐃`$ is invariant under the action of the group $`𝒰(\sigma )`$. 3. For $`u𝐃`$, $`𝒦u`$ is given by the right hand side of (2.1). It follows that $`𝐃`$ is a core for $`𝒦`$ (\[RS\]) and $`𝒦`$ is the closure of the operator defined by (2.1) on $`𝐃`$. We introduce four unitary operators $`𝒰_0𝒰_3`$ on $`𝐊`$ and successively transform $`𝒦`$ by $`𝒰_j`$ as follows: Write $$H_0=\frac{1}{2}\mathrm{\Delta }+\frac{1}{2}x^2\frac{1}{2}$$ and define $$𝒰_0u(t,)=e^{itH_0}u(t,),u𝐊.$$ (2.3) ###### Proposition 2.1 (1) The operator $`𝒰_0`$ is a well defined unitary operator on $`𝐊`$. (2) $`𝒰_0`$ maps $`𝐃`$ onto itself. (3) For $`u𝐃`$, $`𝒦_1𝒰_0^{}𝒦𝒰_0`$ is given by $$𝒦_1u=i\frac{u}{t}+2\epsilon \mathrm{sin}t(X_1\mathrm{cos}t+D_1\mathrm{sin}t)u+\mu V(t,X\mathrm{cos}t+D\mathrm{sin}t)u+\frac{u}{2}.$$ (2.4) (4) $`𝐃`$ is a core of $`𝒦_1`$. Proof. It is well-known that $`\sigma (H_0)=\{0,1,\mathrm{}\}`$ and we have $`e^{2\pi niH_0}=\mathrm{𝟏}`$. Hence (2.3) defines a unitary operator on $`𝐊`$. We have $`𝒮(IR^n)=_{k=1}^{\mathrm{}}D(H_0^k)`$ and (2) follows. (3) follows from the identity (1.3) and the well-known formulae $$e^{itH_0}Xe^{itH_0}=X\mathrm{cos}t+D\mathrm{sin}t,e^{itH_0}De^{itH_0}=X\mathrm{sin}t+D\mathrm{cos}t.$$ Since $`𝐃`$ is a core of $`𝒦`$ and $`𝒰_0`$ maps $`𝐃`$ onto itself, $`𝐃`$ is also a core for $`𝒦_1`$. Note that for any linear function $`aX+bD+c`$ of $`X`$ and $`D`$, and $`W`$ satisfying (1.2), $`W(aX+bD+c)`$ is a pseudo-differential operator with Weyl symbol $`W(ax+b\xi +c)`$ (\[Hö\]). To eliminate the term $`2\epsilon X_1\mathrm{sin}t\mathrm{cos}t`$ from $`𝒦_1`$, we define $$𝒰_1u(t,x)=e^{i\epsilon (\mathrm{cos2}t)x_1/2}u(t,x).$$ (2.5) It is easy to see that $`𝒰_1`$ maps $`𝐃`$ onto itself and we have $$𝒰_1^{}\left(i\frac{}{t}\right)𝒰_1=\left(i\frac{}{t}\right)\epsilon (\mathrm{sin2}t)X_1,𝒰_1^{}D𝒰_1=D+\frac{\epsilon \mathrm{cos2}t}{2}𝐞_1,$$ on $`𝐃`$. It follows that $`𝒦_2𝒰_1^{}𝒦_1𝒰_1`$ is given by the closure of $$\begin{array}{c}𝒦_2u=i\frac{u}{t}+2\epsilon (\mathrm{sin}^2t)D_1u+\epsilon ^2(\mathrm{sin}^2t\mathrm{cos2}t)u\hfill \\ +\mu V(t,X\mathrm{cos}t+\mathrm{sin}t(D+\frac{\epsilon \mathrm{cos2}t}{2}𝐞_1))u+\frac{u}{2}\hfill \end{array}$$ (2.6) defined on $`𝐃`$. We write $`2\epsilon (\mathrm{sin}^2t)D_1=\epsilon D_1\epsilon (\mathrm{cos2}t)D_1`$ in the right side of (2.6). 
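Before applying the next transformation, note that the rotation formulas quoted in Proposition 2.1, $`e^{itH_0}Xe^{-itH_0}=X\mathrm{cos}t+D\mathrm{sin}t`$ and $`e^{itH_0}De^{-itH_0}=-X\mathrm{sin}t+D\mathrm{cos}t`$, are easy to verify numerically in a truncated one-dimensional oscillator basis. The sketch below (numpy assumed) is purely illustrative and plays no role in the proof; the check is exact even under truncation because $`e^{itH_0}`$ is diagonal in the number basis.

```python
import numpy as np

N = 60                                           # truncated number basis |0>, ..., |N-1>
n = np.arange(N)
a = np.diag(np.sqrt(n[1:].astype(float)), k=1)   # annihilation operator
X = (a + a.T) / np.sqrt(2.0)                     # position operator X = (a + a^+)/sqrt(2)
D = 1j * (a.T - a) / np.sqrt(2.0)                # momentum operator D = -i d/dx

t = 0.73
U = np.diag(np.exp(1j * t * n))                  # e^{itH_0} with H_0 = a^+ a (the constant -1/2 cancels)

print(np.allclose(U @ X @ U.conj().T,  X * np.cos(t) + D * np.sin(t)))   # True
print(np.allclose(U @ D @ U.conj().T, -X * np.sin(t) + D * np.cos(t)))   # True
```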
Next, to eliminate the term $`\epsilon (\mathrm{cos2}t)D_1`$, we define $$𝒰_2u(t,x)=e^{i\epsilon (\mathrm{sin2}t)D_1/2}u(t,x)=u(t,x+\epsilon (\mathrm{sin2}t)𝐞_1/2).$$ Then, $`𝒰_2`$ maps $`𝐃`$ onto itself and we have on $`𝐃`$ $$𝒰_2^{}\left(i\frac{}{t}\right)𝒰_2=\left(i\frac{}{t}\right)+\epsilon (\mathrm{cos2}t)D_1,𝒰_2^{}X𝒰_2=X\frac{\epsilon \mathrm{sin2}t}{2}𝐞_1.$$ It follows, also with the help of the identity (1.3), that $`𝒦_3𝒰_2^{}𝒦_2𝒰_2`$ is the closure of the operator given on $`𝐃`$ by $$\begin{array}{c}𝒦_3u=i\frac{u}{t}+\epsilon D_1u+\epsilon ^2(\mathrm{sin}^2t\mathrm{cos2}t)u\hfill \\ +\mu V(t,X\mathrm{cos}t+D\mathrm{sin}t\frac{\epsilon \mathrm{sin}t}{2}𝐞_1)u+\frac{u}{2}.\hfill \end{array}$$ (2.7) Here we also used the obvious identity $`\mathrm{cos2}t\mathrm{sin}t\mathrm{cos}t\mathrm{sin2}t=\mathrm{sin}t`$. We write now $$(\mathrm{sin}^2t)\mathrm{cos2}t=\frac{1}{2}\mathrm{cos2}t\frac{1}{4}\mathrm{cos4}t\frac{1}{4}.$$ and define $$𝒰_3u(t,x)=e^{i\epsilon ^2(\mathrm{sin2}t)/4+i\epsilon ^2(\mathrm{sin4}t)/16}u(t,x).$$ Again $`𝒰_3`$ maps $`𝐃`$ onto itself and $`𝒰_3^{}𝒦_2𝒰_3`$ is the closure of the operator given on $`𝐃`$ by $$\begin{array}{c}u=i\frac{u}{t}+\epsilon D_1u+\frac{(2\epsilon ^2)u}{4}\hfill \\ +\mu V(t,X\mathrm{cos}t+D\mathrm{sin}t\frac{\epsilon 𝐞_1\mathrm{sin}t}{2})u.\hfill \end{array}$$ (2.8) Thus, $`𝒦`$ is unitarily equivalent to $``$ defined as the closure of the operator with domain $`𝐃`$ and action specified by the right side of (2.8). Completion of the proof of the Theorem. We apply Mourre’s theory of conjugate operators (\[M\]; see also\[PSS\]). We take the selfadjoint operator $`𝒜`$ defined by $$𝒜u(t,x)=x_1u(t,x)$$ with obvious domain as the conjugate operator for $``$, and verify the conditions (a-e) of Definition 1 of \[M\] are satisfied. * $`𝐃D(𝒜)D()`$ and hence $`D(𝒜)D()`$ is a core of $``$. * It is clear that $`e^{i\alpha 𝒜}=e^{i\alpha X_1}`$ maps $`𝐃`$ onto $`𝐃`$ and that, for $`u𝐃`$, we have $$\begin{array}{c}e^{i\alpha 𝒜}e^{i\alpha 𝒜}uu=\epsilon \alpha u\mu V(t,X\mathrm{cos}t+D\mathrm{sin}t\frac{\epsilon 𝐞_1\mathrm{sin}t}{2})\hfill \\ +\mu V(t,X\mathrm{cos}t+D\mathrm{sin}t\frac{(\epsilon 2\alpha )𝐞_1\mathrm{sin}t}{2}).\hfill \end{array}$$ Since $`V(x)V(x+\alpha 𝐞_1\mathrm{sin}t)`$ is bounded with bounded derivatives, the right hand side extends to a bounded operator on $`𝐊`$ and it is continuous with respect to $`\alpha `$ in the operator norm topology. It follows that $`e^{i\alpha 𝒜}`$ maps the domain of $``$ into itself and $`sup_{|\alpha |1}e^{i\alpha 𝒜}u_𝐊<\mathrm{}`$ for any $`uD()`$. * Let us verify the conditions (c’), (i), (ii), (iii) of Proposition II.1 of \[M\] taking $`H=`$, $`A=𝒜`$ and $`𝒮=𝐃`$. The verification of these conditions in turn implies (c). First remark that (i) and (ii) are a direct consequence of (a) and (b) above. Moreover for any $`u𝐃`$ we have $$i[,𝒜]u=\epsilon u+\mu \mathrm{sin}t_{x_1}V(t,X\mathrm{cos}t+D\mathrm{sin}t\frac{\epsilon \mathrm{sin}t}{2}𝐞_1)u$$ (2.9) The right hand side extends to a bounded operator $`C`$ in $`𝐊`$ which, following \[M\], we denote $`i[,𝒜]^{}`$. The boundedness implies a fortiori Condition (iii) and hence (c) is verified. * By direct computation we have $$i[[,𝒜]^{},𝒜]u=\mu \mathrm{sin}^2t(_{x_1}^2V)(t,X\mathrm{cos}t+D\mathrm{sin}t\frac{\epsilon \mathrm{sin}t}{2}𝐞_1)u.$$ (2.10) The right hand side extends to a bounded operator on $`𝐊`$. It follows that $`[,𝒜]^{}D(𝒜)D(𝒜)`$ and (2.10) holds for $`uD(𝒜)`$. Hence $`[[,𝒜]^{},𝒜]`$ defined on $`D()D(𝒜)`$ is bounded and this yields (d). 
* The operator norm of $`u\mapsto \mathrm{sin}t\partial _{x_1}V(t,X\mathrm{cos}t+D\mathrm{sin}t-{\displaystyle \frac{\epsilon \mathrm{sin}t}{2}}𝐞_1)u`$ is bounded by $`\underset{t,x}{sup}|\partial _{x_1}V(t,x)|`$ because the right hand side is equal to $`\mathrm{sin}t\partial _{x_1}𝒰_0^{}V(t,x-\epsilon \mathrm{sin}t𝐞_1/2)𝒰_0`$. Hence if $`|\mu |\|\partial _{x_1}V\|_{L^{\mathrm{\infty }}}<\epsilon `$, then we have $`i[,𝒜]^{}\ge c>0`$. Thus the conditions of \[M\] are satisfied and we can conclude that $`\sigma (𝒦)=\sigma _{ac}(𝒦)=IR`$ if $`|\mu |\|\partial _{x_1}V\|_{L^{\mathrm{\infty }}}<\epsilon `$ by Theorem and Proposition II.4 of \[M\].
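The global classical resonance invoked in the Remark of the introduction, namely that every orbit of $`{\displaystyle \frac{1}{2}}(p^2+x^2)+2\epsilon x\mathrm{sin}t`$ grows linearly in amplitude, is easy to reproduce numerically. The following sketch, which assumes scipy is available and is purely illustrative, integrates Hamilton's equations for one degree of freedom and shows the secular growth of the oscillation amplitude, roughly $`\epsilon t`$.

```python
import numpy as np
from scipy.integrate import solve_ivp

eps = 0.05

def rhs(t, y):
    x, p = y
    # Hamilton's equations for H = (p^2 + x^2)/2 + 2*eps*x*sin(t)
    return [p, -x - 2.0 * eps * np.sin(t)]

t_end = 40 * 2.0 * np.pi
sol = solve_ivp(rhs, (0.0, t_end), [0.0, 0.0], max_step=0.05, rtol=1e-8, atol=1e-10)

# The oscillation amplitude grows roughly like eps * t: the global resonance of the Remark.
for k in (5, 10, 20, 40):
    mask = sol.t <= k * 2.0 * np.pi
    print(f"{k:3d} periods: max|x| = {np.max(np.abs(sol.y[0][mask])):6.2f}"
          f"   (eps*t = {eps * k * 2.0 * np.pi:6.2f})")
```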
no-problem/0003/astro-ph0003136.html
ar5iv
text
# Homogeneity of Stellar Populations in Early-Type Galaxies with Different X-ray Properties
## 1. Introduction
ASCA X-ray observations of NGC 4636 (Matsushita et al. 1998) and some other giant early-type galaxies (Matsushita 1997) show that early-type galaxies can be classified into two categories in terms of X-ray extent. Some early-type galaxies have a very extended dark matter halo characterized by X-ray emission out to ∼ 100 kpc from the galaxy center, while others have a compact X-ray halo. The galaxies with extended X-ray emission can be interpreted as sitting in larger scale potential structure, such as galaxy groups, subclumps of clusters, or clusters themselves, as well as sitting in the potential well associated with each galaxy. Potential structure must have played a big role in the course of galaxy formation. If the difference in potential structure had already been established before the bulk of stars formed, we would expect some differences in stellar populations, such as mean age or metallicity, as well. A deeper potential well would retain the gas more effectively against the thermal energy input from supernova (SN) explosions, the chemically enriched gas could be recycled more efficiently, and the galaxy would end up with a higher mean stellar metallicity (Larson 1974). Therefore we would expect that the X-ray extended galaxies have higher metallicities than the X-ray compact ones at a given stellar mass. Furthermore, considering that the higher density peaks collapse earlier in the Universe, which is likely to be the case for the X-ray extended galaxies sitting in the local density peaks, we would also expect them to be older than the X-ray compact ones. Both of these effects would make the colors of the X-ray extended galaxies redder. The central question of this paper is, therefore, how this dichotomy in X-ray properties, and hence in the potential structure, is related to the optical properties which trace the stellar populations in galaxies. Another interesting issue is whether the number of globular clusters per unit optical galaxy luminosity correlates with the X-ray extent of the galaxy. This is because, if the X-ray extended galaxies are the products of galaxy mergers, as they are located at the centers of larger scale potential structures, and if a considerable number of new globular clusters form during galaxy mergers as suggested by Zepf and Ashman (1993), we would expect more globular clusters in the X-ray extended galaxies for a given optical luminosity. Matsushita (2000; hereafter M2000) has recently compiled the X-ray properties of 52 nearby early-type galaxies from ROSAT data. Combining these with archival data on various optical properties, we now compare the optical properties with the X-ray properties to examine the correlation between them. The structure of this paper is the following. In § 2 we summarize the X-ray properties of our sample of early-type galaxies, highlighting the dichotomy of the potential structure. In § 3 we present their optical properties, including integrated colors and Mg<sub>2</sub> index. We show the homogeneity of the stellar populations despite the dichotomy in X-ray properties. We discuss the impact of this result on the formation of early-type galaxies in § 4, and conclude the paper in § 5. We use $`H_0=75`$ km s<sup>-1</sup> Mpc<sup>-1</sup> throughout this paper.
## 2. X-ray Properties
We use the same sample of early-type galaxies presented in M2000.
The sample is composed of 52 bright early-type galaxies, of which 42 are ellipticals and 10 are S0 galaxies. M2000 selected all the early-type galaxies observed with the PSPC and available in the ROSAT archive whose $`B`$-band magnitudes are brighter than 11.7. The environment of the sampled galaxies varies from cluster environment (Virgo, Fornax, and Centaurus clusters) to galaxy groups and the field. Figure 1 shows the X-ray luminosity of the interstellar medium (ISM) within a radius of 4$`r_e`$ ($`L_X`$) against $`L_B\sigma ^2`$ for all the sample galaxies, where $`r_e`$, $`L_B`$ and $`\sigma `$ indicate the effective radius in the optical profile, the galaxy luminosity in $`B`$-band (taken from Tully 1988), and the central velocity dispersion of stars, respectively. In order to exclude the contribution from low mass X-ray binaries and active galactic nuclei, the ROSAT PSPC spectrum (0.2-2.0 keV) is fitted with two components: soft (∼1 keV) and hard (10 keV), and only the soft component is used to determine the ISM X-ray luminosity (M2000). The quantity $`L_B\sigma ^2`$ is proportional to the kinetic energy of the gas supplied from stellar mass loss and heated up by random stellar motions. The solid line, $`\mathrm{log}L_X/(L_B\sigma ^2)`$$`=`$25.15 (const), corresponds to the energy balance between the cooling by X-ray emission and the heating by stellar mass loss, assuming a relation between mass loss rate and $`L_B`$ (Ciotti et al. 1991; M2000). There is a considerable scatter in $`L_X`$ for a given $`L_B\sigma ^2`$.
Fig. 1. X-ray luminosity ($`L_X`$) within 4 $`r_e`$ versus $`L_B\sigma ^2`$. Vertical error-bars show two sigma errors in $`L_X`$. Crosses indicate the X-ray extended galaxies, and those surrounded by big circles are the cD galaxies. The galaxies in cluster environment are shown by filled symbols, while those in the field or in small groups are shown by open symbols. Ellipticals are indicated by circles, while S0’s are indicated by triangles. The solid line corresponds to $`\mathrm{log}L_X/(L_B\sigma ^2)=25.15`$ (constant), on which the X-ray luminosity is just comparable to the kinetic energy input to the ISM by stellar mass loss and the heating of the ejected gas by random stellar motions.
Many galaxies follow the solid line, suggesting that their X-ray luminosities can be simply explained by the energy input from stars through mass loss. However some galaxies have significantly higher $`L_X/L_B\sigma ^2`$ ratios, requiring an extra energy source to explain such high X-ray luminosities. Many of these $`L_X`$ bright galaxies are classified as X-ray extended galaxies in M2000 (crosses) as they show spatially extended X-ray emission. They are likely to have more extended dark matter halos residing in larger scale structure, such as a group or a cluster of galaxies, as well as in their own galaxies. Furthermore, the X-ray extended galaxies show significantly higher ISM temperatures at $`r>r_e`$ than those of the X-ray compact ones as a result of their extended potential. In fact, their mean ISM temperature within 4$`r_e`$ is about a factor of 2 higher than the others at a given $`\sigma `$ (M2000). We reproduce this plot in Fig. 2. We refer to these differences as a dichotomy of X-ray properties. Therefore we will use the quantity $`E_X=\mathrm{log}L_X/(L_B\sigma ^2)-25.15`$, the excess energy in $`L_X`$, as a measure of the ‘X-ray extent’ later in § 3. Larger $`E_X`$ means that a galaxy has a deeper and more extended potential.
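For reference, the X-ray extent just defined is trivial to compute. The sketch below (plain Python) assumes the units implied by the 25.15 zero point ($`L_X`$ in erg s<sup>-1</sup>, $`L_B`$ in solar $`B`$-band units, $`\sigma `$ in km s<sup>-1</sup>), which is an inference from the value of the constant rather than something stated in this excerpt.

```python
import math

def x_ray_extent(L_X, L_B, sigma):
    """E_X = log10( L_X / (L_B * sigma^2) ) - 25.15.
    E_X ~ 0 on the stellar mass-loss energy-balance line of Fig. 1;
    E_X > 0 marks an excess, i.e. the X-ray extended systems."""
    return math.log10(L_X / (L_B * sigma**2)) - 25.15

# An illustrative galaxy sitting essentially on the energy-balance line:
print(x_ray_extent(10**40.45, 10**10.5, 250.0))   # ~0.0
```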
The galaxies which are classified as X-ray extended ones in M2000 are the following nine: NGC 4406 (bright galaxy in Virgo cluster), NGC 4472 (brightest galaxy in Virgo), NGC 4486 (cD in Virgo), NGC 4636 (bright galaxy in Virgo), NGC 4696 (cD in Centaurus cluster), NGC 1399 (cD in Fornax cluster), NGC 1407 (group center), NGC 5044 (group center), and NGC 5846 (group center). ## 3. Optical Properties We have compiled the optical properties of our sample galaxies from various sources. The integrated colors are taken from de Vaucouleurs et al. (1991; RC3). Following Schweizer & Seitzer (1992), we defined $`(UV)_{e,0}`$ as $$(UV)_{e,0}=(UV)_e+[(UV)_{T,0}(UV)_T],$$ (1) Fig. 2. The ISM temperature ($`kT`$) within 4 $`r_e`$ versus central stellar velocity dispersion ($`\sigma `$). Crosses indicate the X-ray extended galaxies, and those surrounded by big circles are the cD galaxies. The galaxies in cluster environment are shown by filled symbols, while those in the field or in small groups are shown by open symbols. Ellipticals are indicated by circles, while S0’s are indicated by triangles. Two dotted lines correspond to $`\beta _{\mathrm{spec}}`$=0.5 (upper) and $`\beta _{\mathrm{spec}}`$=1.0 (lower), respectively, where $`\beta _{\mathrm{spec}}`$ is the ratio between kinetic temperature of stars and the gas temperature. where subscript $`T`$ denotes global colors, subscript $`0`$ indicates colors corrected for Galactic extinction and redshift, and subscript $`e`$ denotes average colors within the effective radius ($`r_e`$). $`(BV)_{e,0}`$ is also defined in a similar way. The integrated colors within $`r_e`$ in the Cousins system, $`(VR)_e`$ and $`(VI)_e`$, are taken from Buta & Williams (1995). Galactic extinction is corrected for using the extinction in $`B`$-band ($`A_g`$) from RC3 and the extinction curve from Rieke & Lebofsky (1985). The reddening corrected colors are denoted as $`(VR)_{e,0}`$ and $`(VI)_{e,0}`$. The reason why we use colors within $`r_e`$ rather than the total colors is that there are more galaxies available with $`r_e`$ apertures. This improves the statistics. Furthermore, the colors within $`r_e`$ would be more reliable than the total colors, although no errors are given in the literature for the total colors. However, we have checked that the results in this paper did not change even if we used the total colors. The central line index Mg<sub>2</sub>, central velocity dispersion ($`\sigma `$), maximum rotation velocity ($`V_m`$), and deviation from the Fundamental Plane ($`\mathrm{\Delta }\mathrm{FP}`$) are obtained from Prugniel & Simien (1996). A velocity dispersion for NGC 4696 is added from Faber et al. (1989). The globular cluster specific frequency ($`S_N`$), which is the number of globular clusters per unit galaxy luminosity, is acquired from Ashman & Zepf (1998). Finally, the $`a_4`$ index, which is the fourth cosine coefficient of the Fourier expansion of isophotal deviation from a pure ellipse, and the ellipticity ($`ϵ`$) are taken from Bender et al. (1989). We will use these values later in this section. The number of galaxies which have optical properties available are 44 ($`UV`$), 47 ($`BV`$), 32 ($`VR`$), 32 ($`VI`$), 43 (Mg<sub>2</sub>), 49 ($`\mathrm{\Delta }\mathrm{FP}`$), 44 ($`V_m`$), 23 ($`S_N`$), and 25 ($`a_4`$) out of a total of 52 galaxies in our X-ray sample. In the top three panels of Fig. 3, we have plotted $`UV`$, $`VI`$ colors and Mg<sub>2</sub> index against $`\mathrm{log}\sigma `$. The error-bars indicate one sigma measurement errors. 
The error-bars are not shown for Mg<sub>2</sub>, but are negligibly small ($`<`$0.005). The typical error for $`\mathrm{log}\sigma `$ is ∼0.02 (Prugniel & Simien 1996).
Fig. 3. Integrated colors within an effective radius $`(UV)_{e,0}`$ and $`(VI)_{e,0}`$, central Mg<sub>2</sub> index, and globular cluster specific frequency $`S_N`$ (from top to bottom) are plotted against the central stellar velocity dispersion $`\sigma `$. Vertical error-bars (one sigma) are shown except for the Mg<sub>2</sub> index. Crosses indicate the X-ray extended galaxies, and those surrounded by big circles are the cD galaxies. The galaxies in cluster environment are shown by filled symbols, while those in the field or in small groups are shown by open symbols. Ellipticals are indicated by circles, while S0’s are indicated by triangles. The solid lines are the linear regression lines fitted to the data excluding the X-ray extended galaxies. The dashed lines show the relations taken from the literature (Bower et al. 1992; Burstein et al. 1988).
We find the scaling relations for the early-type galaxies as a whole, which are shown by the solid lines. The linear regression lines are fitted to the data excluding the X-ray extended galaxies in order to see the difference between the X-ray extended galaxies (crosses) and the X-ray compact ones (the others), if present. For comparison, the same relations taken from Bower, Lucey & Ellis (1992) and Burstein et al. (1988) are also reproduced by the dashed lines on the top panel and the third one, respectively. Our regression lines agree very well with those in the literature. Most importantly, none of the $`UV`$, $`VI`$, and Mg<sub>2</sub> indices of the X-ray extended galaxies looks different from those of the X-ray compact ones, being well within the scatter at fixed $`\sigma `$. $`S_N`$ is plotted against $`\mathrm{log}\sigma `$ in the bottom panel of Fig. 3. The error-bars correspond to one sigma. Again, the X-ray extended galaxies do not have systematically different $`S_N`$ values, except the two circled galaxies, NGC 1399 and NGC 4486 (the former has a larger $`\sigma `$), which have 2-3 times as many globular clusters for a given optical galaxy luminosity. These are both cD galaxies in nearby clusters which might have had a rather different mechanism of globular cluster formation, such as secondary formation in the cooling flows (Richer et al. 1993, but see also Holtzman et al. 1996 for an objection), capture of globular clusters from other galaxies through tidal stripping (Côté, Marzke & West 1998) or the debris of cannibalized nucleated dwarf galaxies (Bassino, Muzzio & Rabolli 1994).
Fig. 4. Deviations from the $`(UV)`$-$`\sigma `$ and $`(VI)`$-$`\sigma `$ relations, deviation from the Mg<sub>2</sub>-$`\sigma `$ relation, and $`S_N`$ (from top to bottom) are plotted against the X-ray extent defined as $`E_X=\mathrm{log}L_X/(L_B\sigma ^2)-25.15`$. Vertical error-bars (one sigma) are shown except for the Mg<sub>2</sub> index, and the horizontal error-bars correspond to two sigma errors in $`L_X`$. Crosses indicate the X-ray extended galaxies, and those surrounded by big circles are the cD galaxies. The galaxies in cluster environment are shown by filled symbols, while those in the field or in small groups are shown by open symbols. Ellipticals are indicated by circles, while S0’s are indicated by triangles.
Apart from these cD galaxies, the number of globular clusters seems comparable between the X-ray extended galaxies and the X-ray compact ones. The consequence of this result will be discussed in § 4. In order to compare the optical properties against the X-ray properties in a more general manner, we use $`E_X`$ as a measure of the X-ray extent or the depth of the potential (§ 2). To subtract the underlying metallicity effect as a function of galaxy mass (or $`\sigma `$) which is present in early-type galaxies (eg., Kodama et al. 1998), we measure the deviations from the color-$`\sigma `$ and the Mg<sub>2</sub>-$`\sigma `$ relations (solid lines in Fig. 3) for each galaxy, and plot them against the X-ray extent ($`E_X`$) in Fig. 4. The raw values of $`S_N`$ are re-plotted in the bottom panel. There is no clear trend that any of the optical quantities scales with the X-ray extent. Our result is consistent with White & Sarazin (1991) who found no correlation between the excess of X-ray luminosity for a given optical luminosity and the residual from the scaling relations in $`UV`$ color and Mg<sub>2</sub> index. To test the above results statistically, we calculate the mean deviations in four colors, the mean deviations in Mg<sub>2</sub> index, and the mean $`S_N`$ for the X-ray extended galaxies and the X-ray compact ones separately. The results are summarized in Table 1. The standard deviation within each galaxy category is also given as a measure of internal scatter. Since NGC 1399 and NGC 4486 have significantly larger $`S_N`$ values, we excluded these two cD galaxies in calculating the average and the scatter of $`S_N`$ for the X-ray extended galaxies.
Fig. 5. Deviation from the Fundamental Plane ($`\mathrm{\Delta }`$ FP), boxy/disky index ($`a_4/a`$), and velocity anisotropy index $`(V_m/\sigma )^{}`$ (from top to bottom) are plotted against the X-ray extent defined as $`E_X=\mathrm{log}L_X/(L_B\sigma ^2)-25.15`$. Typical errors for ($`a_4/a`$$`\times `$100) and ($`V_m/\sigma `$) are 0.25 and 10%, respectively. Horizontal error-bars correspond to two sigma errors in $`L_X`$. Crosses indicate the X-ray extended galaxies, and those surrounded by big circles are the cD galaxies. The galaxies in cluster environment are shown by filled symbols, while those in the field or in small groups are shown by open symbols. Ellipticals are indicated by circles, while S0’s are indicated by triangles.
We applied Welch’s non-parametric statistical test (a schematic version of this comparison is sketched in the code fragment below) and found that none of these optical quantities show a significant difference between the X-ray extended galaxies and the compact ones. The probability of the difference is always smaller than 90 per cent, and the hypothesis that both samples are drawn from the same parent group cannot be rejected. If a difference of more than 0.078 in $`UV`$ or 0.023 in Mg<sub>2</sub> had been present between the two groups, it would have been detected at this significance level in this statistical test. This corresponds to a difference in stellar populations of only $`\mathrm{\Delta }\mathrm{log}Z=0.1`$ or $`\mathrm{\Delta }\mathrm{log}T=0.15`$ for old galaxies (Kodama & Arimoto 1997). Therefore the stellar populations of the early-type galaxies should be homogeneous below this level despite the variety of X-ray extent. The above upper limit for the metallicity difference is rather small compared to the expected difference if the galaxies with the same stellar mass formed in potentials of various depths. We will discuss this point in § 4. We also plot the deviation from the fundamental plane, $`\mathrm{\Delta }\mathrm{FP}`$, in the top panel of Fig. 5 as a function of $`E_X`$.
$`\mathrm{\Delta }\mathrm{FP}`$ is a measure of the difference of the dynamical mass-to-light ratio from that of the ‘normal’ early-type galaxies with the same dynamical mass (Prugniel & Simien 1996). This indicates the deviations in stellar populations and/or dynamical mass including dark matter. A positive value of $`\mathrm{\Delta }\mathrm{FP}`$ corresponds to a larger mass-to-light ratio. The X-ray extended galaxies generally have a high $`\mathrm{\Delta }\mathrm{FP}`$, while the X-ray compact ones show a considerable spread towards lower values. Since we have found that the stellar populations are quite homogeneous against $`E_X`$, the above result is indicative of the presence of more dark matter in the X-ray extended galaxies, as expected. Finally, we compare the isophotal shape and the velocity anisotropy against the X-ray properties. We first use the $`a_4/a`$ index, where $`a_4`$ is the fourth cosine coefficient of the Fourier expansion of the deviations from a pure ellipse and $`a`$ is the semi-major axis of the isophote (Bender & Möllenhoff 1987). A positive value of $`a_4/a`$ means a disky isophote, while a negative value corresponds to a box-shaped isophote. The other index, $`V_m/\sigma `$ is a ratio between maximum rotation velocity ($`V_m`$) and central velocity dispersion ($`\sigma `$), which is transformed to the anisotropy index by taking into account the ellipticity ($`ϵ`$) as: $$(V_m/\sigma )^{}=(V_m/\sigma )/\sqrt{ϵ/(1ϵ)},$$ (2) following Bender (1988). There are clear trends that the X-ray extended galaxies have a negative $`a_4/a`$ and small $`(V_m/\sigma )^{}`$ with relatively small scatters. These effects should partly come from the dependence of $`a_4/a`$ and $`(V_m/\sigma )^{}`$ on galaxy luminosity (Bender 1988, Bender et al. 1989), because the X-ray extended galaxies are generally bright. However, it is notable that all of the X-ray extended galaxies have boxy shapes and weak rotations. These findings are similar to what Bender (1988) and Bender et al. (1989) found; ie., $`a_4/a`$ index correlates with the X-ray luminosity excess that comes from the surrounding hot gas halos and also with the velocity anisotropy. Some attempts have been made to understand this kinematical dichotomy of elliptical galaxies in the context of galaxy-galaxy merging using dynamical simulations (Bekki & Shioya 1997; Bruckert et al. 1999). Bruckert et al. (1999) showed that major mergers produced boxy ellipticals with anisotropic velocity, while minor mergers produced the disky ones. Considering together that the X-ray extended galaxies are located at the local density peaks, they are possibly the products of major mergers. After all, although the clear dichotomy in X-ray properties correlates with the isophotal shape and the velocity structure of galaxies, the stellar populations and globular cluster properties are still found to be quite homogeneous. ## 4. Discussion We estimate how much metallicity difference is expected between the X-ray extended galaxies and the X-ray compact ones at a given stellar mass (or $`\sigma `$) if the dichotomy of the potential structure has already been established at the time of star formation. The hydrostatic equilibrium for the ISM gives the galaxy potential of $$\mathrm{\Phi }T\left(\frac{\mathrm{log}\rho }{\mathrm{log}r}+\frac{\mathrm{log}T}{\mathrm{log}r}\right),$$ (3) where $`\rho `$ is the density of matters including the dark matter. 
Since the density gradient $`\mathrm{log}\rho /\mathrm{log}r`$ should not differ much between the two galaxy categories (X-ray extended and the X-ray compact ones) within optical radius, and the ISM temperature gradient $`\mathrm{log}T/\mathrm{log}r`$ is negligible compared to the density gradient (Forman, Jones & Tucker 1985; Trinchieri et al. 1994; Boute & Canizares 1994), the potential $`\varphi `$ approximately scales with the ISM temperature $`T`$. Given that there is a factor of 2 difference in $`T`$ within 4 $`r_e`$ between the two galaxy categories for a given $`\sigma `$ (§ 2), the X-ray extended galaxies could have experienced twice as many supernova explosions and recycled twice as much metals into the same amount of stars before the gas is expelled. Therefore we could expect that the mean stellar metallicity of the X-ray extended galaxies is twice as high as that of the X-ray compact ones for a given $`\sigma `$. This big difference in metallicity should have been detected in the statistical test in § 3, if present. As opposed to what we expect, however, we do not detect any significant difference in stellar populations between the two. This means that the potential structure of early-type galaxies during the major star formation epoch was quite different from what it is today. Gravitational potential must have been homogeneous. It must not have produced more than only 0.1 dex difference in mean stellar metallicity at a given stellar mass. Later on, after the epoch of major star formation, some galaxies became incorporated into larger scale potentials by infalling into the bottom of the local potential and/or by accumulating the surrounding materials and augmenting their gravitational potential, which eventually resulted in the variety of X-ray extent seen at the present-day. This picture is consistent with what people have found as to the formation of early-type galaxies: Most of the ‘stars’ in early-type galaxies should form at significantly high redshifts ($`z>2`$) (eg., Bower, Lucey & Ellis 1992; Ellis et al. 1997; Stanford, Eisenhardt & Dickinson 1998; Kodama et al. 1998; van Dokkum et al. 1998; Kodama, Bower & Bell 1999), while the ‘mass’ of early-type galaxies can successively grow due to the accretion and/or merging even well below $`z<2`$ in the course of hierarchical assembly (eg., Kauffman 1996; Baugh et al. 1998; Bower, Kodama & Terlevich 1998; van Dokkum et al. 1999). From our results, we speculate that the chemical abundance of ISM in $`\alpha `$-elements (such as O, Mg, Si, and S which come mainly from SN Type II) should be quite uniform at a fixed stellar mass of the central galaxy regardless of its X-ray extent. The individual potential structure of a galaxy is independent of the larger scale potential and should be similar at the time of major star formation. Davis & White (1996) and Loewenstein et al. (1994) claimed that galaxies with lower ISM temperature or compact X-ray halos tend to have lower chemical abundance of the ISM. However, it is not yet clear whether there is such a correlation for the $`\alpha `$-elements abundance (Matsushita, Makishima & Ohashi 2000). The new X-ray satellite XMM will solve this problem. On the other hand, the contribution from SN Type Ia can be much different because the potential structure would vary at the time of a SN Ia explosion due to the time delay of the explosion (Yoshii, Tsujimoto & Nomoto 1996). 
The X-ray extended galaxies would have acquired deeper potential wells by then, keep the SNIa ejecta more efficiently and hence show relatively iron enhanced ISM abundance compared to the X-ray compact ones. This is what we actually observe by ASCA. Although the ISM abundance of the X-ray compact galaxies is still uncertain, if we assume the same $`\alpha `$-element abundance as the X-ray extended systems, the Fe abundance is about a factor of 2 smaller than that of the X-ray extended objects (Matsushita, Makishima & Ohashi 2000). Considering that the X-ray extended galaxies are sitting in the center of larger scale potential structures and that dynamical friction drives satellite galaxies towards the center, these galaxies are more likely to be produced by galaxy mergers. The fact that the X-ray extended galaxies tend to have boxy shapes and weak rotation might support this hypothesis. If this is really the case, and if a considerable number of new globular clusters form during galaxy mergers as suggested by Zepf & Ashman (1993), we should expect higher $`S_N`$ for the X-ray extended galaxies on average. However, there is no significant difference in $`S_N`$ except for the cD galaxies. This may imply that, apart from the cD galaxies, the secondary globular clusters do not generally form by recent galaxy mergers, and that most of the globular clusters around the X-ray extended galaxies are likely to form very early when the major star formation takes place in their host galaxies. ## 5. Conclusions The stellar population makeup in early-type galaxies does not correlate with their present-day global potential structure. Early-type galaxies form stars at early epoch in their own potential wells independently, and some of the galaxies become incorporated into larger scale potential structures (clusters/groups) later on. This idea naturally explains the homogeneity of the stellar populations despite the variety of X-ray properties. This work was supported by the Japan Society for the Promotion of Science (JSPS) through its Research Fellowships for Young Scientists. We thank K. Pimbblet for carefully reading the paper and polishing up English as well as giving us some useful comments.
# Generic criticality in a model of evolution

## ACKNOWLEDGMENTS

I thank Dr. H. Hinrichsen for interesting discussions and the Department of Mathematics of the Heriot-Watt University (Edinburgh, Scotland) for the allocation of computer time.
# Low-lying Eigenvalues of the QCD Dirac Operator at Finite Temperature ## 1 Introduction The correlations of Dirac operator eigenvalues in QCD and related theories have been shown to have a fascinating relation to Random Matrix Theory. There are two very different domains of interest here. One is the so-called “bulk” of the eigenvalue spectrum of the Dirac operator, far from both the infrared and the ultraviolet ends. The other is the so-called “hard edge” at $`\lambda 0,i.e.`$ the infrared end of the spectrum in theories with spontaneous breaking of chiral symmetry. The relevance of Random Matrix Theory in describing eigenvalue correlations of the Dirac operator in the bulk of the spectrum was first demonstrated by Halasz and Verbaarschot , and it has since been confirmed by numerous lattice gauge theory studies . From a theoretical point of view, these results in the bulk of the spectrum remain to be better understood. In the other case, near $`\lambda 0`$, the connection between universal Random Matrix Theory results and the QCD Dirac operator spectrum is by now firmly established, and has also been extensively checked in lattice gauge theory simulations . Already in the original work of Shuryak and Verbaarschot it was shown that the pertinent chiral Random Matrix Theory partition function coincides exactly with the effective field theory partition function in the relevant microscopic limit. More recently an explicit relationship between the universal Random Matrix Theory eigenvalue distributions and those of the QCD Dirac operator has been established , the precise link being the partially quenched chiral condensate . In this domain Random Matrix Theory is an intriguing alternative description of exactly the same phenomena that can be derived from the effective QCD partition function in the large-volume limit where $`V1/m_\pi ^4`$ . There exists also an interesting physical situation that forces us to reconsider the two different domains of the Dirac operator spectrum simultaneously. This is near the finite-temperature chiral phase transition, where the analytical connection between the effective QCD partition function and Random Matrix Theory breaks down even for the part of the spectrum that is close to $`\lambda =0`$. This situation is readily confronted in lattice gauge theory. Indeed, it is for staggered fermions even rigorously proven , that chiral symmetry is restored at high temperature. The low-lying spectrum of the Dirac operator must therefore be quite different in the high temperature phase as compared with zero temperature. In particular, the absence of chiral symmetry breaking implies, through the Banks-Casher relation, $$\overline{\psi }\psi =\pi \rho (0),$$ (1) that the density of eigenvalues at zero vanishes. This could happen either just at that point of $`\lambda =0`$ (as, for example, in the free theory where $`\rho (\lambda )\lambda ^3`$), or the spectrum could develop a gap. The chiral phase transition occurs at the temperature $`T_c`$ where the density of Dirac operator eigenvalues just reaches zero at $`\lambda =0`$. If the transition is continuous, this will happen smoothly as the temperature $`T`$ is increased towards $`T_c`$. In such a case, an important question to settle is the precise power-law behavior of the spectral density of the Dirac operator right at $`T=T_c`$, because this can be related to the critical exponents of the phase transition . 
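In practice, eq. (1) is what ties the measured low-lying spectrum to chiral symmetry breaking. The sketch below shows how $`\rho (0)`$, and hence the condensate, would be estimated from a binned eigenvalue spectrum; the eigenvalue array, bin width, and volume are placeholder inputs, and the normalization convention for $`\rho `$ used here (per configuration and per unit volume, counting only positive eigenvalues) has to be matched consistently to the definition one actually adopts.

```python
# Sketch of a Banks-Casher estimate of the chiral condensate from low-lying
# Dirac eigenvalues.  'eigenvalues' would hold the positive eigenvalues lambda
# from many configurations; V is the lattice volume.  All inputs are placeholders.
import numpy as np

def condensate_estimate(eigenvalues, n_configs, V, lam_max=0.02):
    """Estimate <psi-bar psi> = pi * rho(0) by counting eigenvalues in [0, lam_max]."""
    eigenvalues = np.asarray(eigenvalues)
    n_small = np.count_nonzero(eigenvalues <= lam_max)
    # rho(lambda) here is normalized per configuration and per unit volume,
    # so rho(0) ~ (number of small eigenvalues) / (n_configs * V * lam_max).
    rho0 = n_small / (n_configs * V * lam_max)
    return np.pi * rho0

# toy example: 100 configurations on an 8^3 x 4 lattice, with fake spectra
rng = np.random.default_rng(0)
V = 8**3 * 4
fake_eigs = rng.uniform(0.0, 0.5, size=100 * 50)   # stand-in for measured spectra
print(condensate_estimate(fake_eigs, n_configs=100, V=V))
```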
The same chiral Random Matrix Theory that yields universal microscopic spectral correlators which exactly coincide with those of the Dirac operator at $`T=0`$ can be tuned in such a way as to just reach (multi-critical) points where $$\rho (\lambda )\lambda ^{2k}$$ (2) near $`\lambda =0`$. Here $`k`$ is an integer labeling the multi-criticality. At such points universality of microscopic spectral correlators still holds in the Random Matrix Theory context, but there is no justification for assuming that these results are relevant for the Dirac operator spectrum at $`T=T_c`$.<sup>1</sup><sup>1</sup>1In particular, this behavior is only compatible with a continuous phase transition. But at a more fundamental level, there is simply no longer any relation between the chiral Random Matrix Theory ensemble and the effective QCD partition function for temperatures $`T`$ that do not satisfy $`T\mathrm{\Lambda }_{QCD}`$. There is also a schematic Random Matrix model for the finite-temperature behavior of the Dirac operator spectrum . It gives a different behavior at $`T=T_c`$: $`\rho (\lambda )\lambda ^{1/3}`$. Also here there is no physical justification for using it in connection with the Dirac operator spectrum at finite temperature, but it is an interesting model that depends on just one deterministic external parameter, and we shall therefore return to it below. Suppose, for a moment, that the Dirac spectrum actually develops a gap around $`\lambda =0`$ above $`T_c`$. In Random Matrix Theory the end of a spectrum around such a gap is referred to as a “soft edge”. Generally, the (macroscopic) density of eigenvalues near a soft edge behaves as (for $`\lambda >\lambda _0`$) : $$\rho (\lambda )(\lambda \lambda _0)^{2m+1/2}$$ (3) with $`\lambda _0`$ being the location of the edge. Here $`m`$ is an integer that labels universality classes classified by their Random Matrix Theory potentials ($`m`$ parameters in the potentials must be tuned in order to reach each class). Thus the generic behavior, without any fine tuning, corresponds to $`m=0`$, which gives a square root approach to the soft edge: $$\rho (\lambda )(\lambda \lambda _0)^{1/2}$$ (4) Random Matrix Theory actually gives a more detailed, microscopic, description. This arises from a blowing-up of the eigenvalue density function around the soft edge $`\lambda _0`$ with a rescaling according to the macroscopic behavior (3). For example, for the generic $`m=0`$ universality class, the corresponding microscopic eigenvalue density is $$\rho (\lambda )X[\mathrm{Ai}(X)]^2+[\mathrm{Ai}^{}(X)]^2,$$ (5) where $`X(\lambda \lambda _0)N^{2/3}`$ and $`\mathrm{Ai}(x)`$ is the standard Airy function. Here $`N`$ denotes the size of the matrix, and the rescaling by $`N^{2/3}`$ is required in order to spread out the increasing number of eigenvalues to obtain one well-defined limiting function. From the known asymptotic behavior of the Airy functions one finds: $$\rho (\lambda )\{\begin{array}{cc}\frac{\sqrt{X}}{\pi }+𝒪\left(\frac{1}{X}\right)\hfill & \text{for }X\mathrm{}\hfill \\ \frac{17}{96\pi }|X|^{1/2}\mathrm{exp}(4|X|^{3/2}/3)\hfill & \text{for }X\mathrm{}\hfill \end{array}$$ (6) Thus, at the microscopic level the spectrum is not cut off sharply at $`\lambda _0`$, but has an exponentially suppressed tail beyond. Further, the square root behavior of the eigenvalue density is modulated by wiggles corresponding to the distribution of particular eigenvalues (smallest, second, third, etc.). (See, for example, Fig. 9.) 
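For reference, the generic ($`m=0`$) soft-edge density and the two asymptotic regimes of eq. (6) can be evaluated directly. In the sketch below the Airy functions are evaluated at $`-X`$, so that the bulk side of the spectrum lies at large positive $`X`$ and the asymptotics quoted in eq. (6) are reproduced; sign conventions for the Airy argument vary between references, so this choice is an assumption rather than a statement about the notation of eq. (5).

```python
# Sketch: the generic (m=0) soft-edge microscopic density of Random Matrix Theory,
# written so that the bulk of the spectrum lies at X -> +infinity, as in eq. (6).
# The sign convention of the Airy argument is an assumption; some references flip it.
import numpy as np
from scipy.special import airy

def rho_soft_edge(X):
    X = np.asarray(X, dtype=float)
    ai, aip, _, _ = airy(-X)          # Ai(-X) and Ai'(-X)
    return X * ai**2 + aip**2

X = np.array([-3.0, -1.0, 0.0, 1.0, 5.0, 20.0])
print(rho_soft_edge(X))                # exponentially small for X << 0
print(np.sqrt(X[X > 0]) / np.pi)       # smooth bulk asymptote sqrt(X)/pi for X >> 0,
                                       # around which the exact density 'wiggles'
```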
Until a few months ago, there existed only one low-statistics investigation of the low-lying Dirac eigenvalue spectrum for staggered fermions in the high temperature phase . It did not unequivocally establish the existence of a gap. For example, some low eigenvalues were found. However, they could possibly be attributed to “would-be zero modes” from global gauge field topology, shifted away from zero by the explicit chiral symmetry breaking of staggered fermions at finite lattice spacing . This is a general problem with staggered fermions that also we must face here: at finite lattice spacing, the index theorem is not valid for staggered fermions, and gauge field topology (whichever way one defines it on a discrete lattice) does not give rise to exact zero modes of the staggered Dirac operator. At $`T=0`$ no trace of non-trivial gauge field topology on the lowest-lying spectrum of staggered eigenvalues has been found , except in the 2-d Schwinger model at fairly small lattice spacing . Finite temperature, which causes a depletion of genuine non-zero eigenvalues near $`\lambda 0`$, further complicates this issue of a mix-up with would-be zero modes. Very recently a study of just the low-lying Dirac operator eigenvalues near $`T_c`$ has actually indicated the presence of a gap at large lattice volumes . Again the statistics was rather limited, and only a small number of the lowest-lying eigenvalues could be included (varying between 8 and 10). The present study will in many ways follow the same lines of thought as Ref. , but we shall have much larger statistics, and we shall also probe some different aspects. In particular, we are also interested in seeing whether quenching causes a different behavior of the smallest eigenvalues near $`T_c`$ compared with dynamical fermions. Last year a study of low-lying eigenvalues of the overlap Dirac operator in quenched finite temperature gauge theories appeared . Overlap fermions are well suited for such an investigation since they do not suffer from explicit chiral symmetry breaking even at finite lattice spacing and since they have exact zero modes in topologically non-trivial gauge fields. Ref. found that effects of topology persisted in the high temperature phase, although strongly suppressed compared to the low temperature phase. More interestingly, an accumulation of low eigenvalues with an apparently finite density in the infinite-volume limit was found very near $`\lambda 0`$ even above the (quenched) phase transition temperature $`T_c`$. The statistical properties of the smallest group of eigenvalues were consistent with them being due to a dilute gas of instantons and anti-instantons . These results have led to the speculation that chiral symmetry might remain broken even in the high temperature phase of quenched QCD with overlap fermions. In this paper we describe high statistics investigations of the low-lying eigenvalue spectrum of the staggered Dirac operator for SU(3) gauge group at finite temperature. We do this for both quenched (section 2) and dynamical QCD with four flavors of staggered fermions (section 3). The interest in the quenched case lies primarily in checking whether also staggered fermions, although insensitive to topology at the gauge couplings we can investigate here, show signs of unusual behavior of the smallest Dirac eigenvalues above $`T_c`$ (as was the case for overlap fermions ). Both quenched and unquenched simulations give rise to Dirac operator spectra that can be compared with Random Matrix Theory. 
In particular, if the Dirac operator spectrum exhibits a gap, is it of the soft-edge kind of Random Matrix Theory? Are there indications that Random Matrix Theory provides a more accurate description of the Dirac operator spectrum as the three-volume is increased? We shall try to answer these questions in what follows. ## 2 Quenched QCD at high temperature For our quenched finite temperature Monte Carlo simulations we have used lattices with temporal extent $`N_t=4`$ and up to three spatial volumes: $`8^3`$, $`12^3`$ and $`16^3`$. For $`N_t=4`$ the deconfinement phase transition for SU(3) pure gauge theory with Wilson action has been very accurately determined, occurring at lattice gauge coupling $`\beta _c=5.6925(2)`$ . It has for long been assumed that the chiral phase transition of the quenched theory occurs at exactly this deconfinement phase transition point of the pure gauge theory, but this has recently been challenged . We now give some technical details of our simulations. The gauge field configurations were generated with a mixture of overrelaxation and heat bath updates, and were analyzed after every 20-th heat bath sweep. On each configuration we computed the 50 lowest-lying eigenvalues, and in some cases even more, using the variational Ritz functional method . Many ensembles consisted of several thousand configurations. Gauge coupling, lattice size and statistics of our ensembles is summarized in Table 1. As is well-known, the staggered Dirac operator $`\text{/}D_{x,y}`$ $`=`$ $`{\displaystyle \frac{1}{2}}{\displaystyle \underset{\mu }{}}\eta _\mu (x)\left(U_\mu (x)\delta _{x+\mu ,y}U_\mu ^{}(y)\delta _{x,y+\mu }\right)`$ (7) $``$ $`\text{/}D_{e,o}+\text{/}D_{o,e}`$ (8) is anti-hermitian, with purely imaginary eigenvalues $`i\lambda `$ that come in pairs of opposite sign. In eq. (8) $$\eta _\mu (x)=(1)^{_{\nu <\mu }x_\nu }$$ (9) are the usual phase factors for staggered fermions. Denoting $$ϵ(x)=(1)^{_\nu x_\nu },$$ (10) we have also explicitly shown how $`\text{/}D`$ connects even sites, i.e. those with $`ϵ(x)=+1`$, with odd sites, those with $`ϵ(x)=1`$, and vice versa. This means that the operator $`\text{/}D^2`$ is hermitian and positive semi-definite. The sign function $`ϵ(x)`$ defined above plays the role of $`\gamma _5`$ in the continuum: it anticommutes with $`\text{/}D:\{\text{/}D,ϵ\}=0`$. As $`\text{/}D^2`$ does not mix between even and odd lattice sites, we need only compute the eigenvalues, on, say, the even sublattice. One easily sees that if $`\psi _e`$ is a normalized eigenvector of $`\text{/}D^2`$ with eigenvalue $`\lambda ^2`$, then $`\psi _o\frac{1}{\lambda }\text{/}D_{o,e}\psi _e`$ is a normalized eigenvector of $`\text{/}D^2`$ with eigenvalue $`\lambda ^2`$, and non-zero only on odd sites. Moreover, as we will never encounter exact zero modes on genuine quantum configurations, there is no difficulty with the above definition of $`\psi _o`$. We of course make use of these properties, and hence compute only the (positive) eigenvalues of $`\text{/}D^2`$ restricted to the even sublattice, and then take the (positive) square root. All eigenvalues to be shown in the following thus have an equal number of negative companions, of the exact same magnitude. The spectral density of the Dirac operator is given by $$\rho (\lambda )\frac{1}{V}\underset{n}{}\delta (\lambda \lambda _n),$$ (11) and it is readily computed numerically from our Monte Carlo simulations by a binning of the measured eigenvalues per configuration at convenient small intervals. In Fig. 
1 we show the density of low-lying eigenvalues on $`8^3\times 4`$ lattices for several $`\beta `$ values between 5.5 (in the confined phase) and 5.9 (in the deconfined phase). In each case we computed the 50 lowest positive eigenvalues $`i\lambda `$ of the staggered Dirac operator. The plots are normalized by the following condition, $`\int _0^{\infty }\rho (\lambda )d\lambda =\text{\#eigenvalues}/V`$. In the confined phase it is quite evident that the density at zero would be non-zero in the thermodynamic limit. As the temperature is increased, the density at zero decreases. There is a qualitative change of the eigenvalue density once the temperature is increased above $`T_c`$. Beyond the first few eigenvalues the eigenvalue density assumes a shape compatible with a square root behavior (4). However, a sizable tail of small eigenvalues, decreasing with increasing temperature, persists, which seems to extend all the way down to zero. In Fig. 2 we compare the eigenvalue density in the high temperature phase, at $`\beta =5.9`$, for several spatial volumes. As can be seen, the eigenvalue density is volume independent to a surprising degree. In particular, the tail of small eigenvalues is seen to be volume independent and appears to persist in the thermodynamic limit. We note that the tail is much larger than would be compatible with the exponential tail from the Airy function behavior of random matrix theory at a soft edge, eq. (6) (see Fig. 9 in section 3 below for an example). It is tempting to speculate that the tail seen here in the quenched case with staggered fermions is a reflection of the accumulation of small eigenvalues seen with overlap fermions in and attributed there to a dilute gas of instantons and anti-instantons. In the staggered case, the explicit chiral symmetry breaking at finite lattice spacing shifts the small modes, presumably resulting in the observed tail. We also looked at the unfolded level spacing between individual eigenvalues $`i`$ and $`i+1`$ $$s_i=\frac{\lambda _{i+1}-\lambda _i}{\langle \lambda _{i+1}-\lambda _i\rangle }.$$ (12) In Fig. 3 the distribution of the $`s_i`$ between eigenvalue $`i`$ and $`i+1`$, $`i=1,\ldots ,6`$, is shown and compared to the expected distribution, the Wigner surmise $$P(s)=\frac{32}{\pi ^2}s^2e^{-\frac{4}{\pi }s^2}.$$ (13) Evidently, the level spacings between the first eigenvalues are not very accurately given by Random Matrix Theory correlations. But as we move into the bulk, say for $`i>5`$ in the case of the $`8^3\times 4`$ lattice, the agreement with Random Matrix Theory becomes almost perfect. It should be stressed here that this is a volume-dependent statement. For larger volumes one needs to go beyond more eigenvalues starting at the soft edge before one finds good agreement. This is consistent with the fact that there is a definite scale around the soft edge, where eigenvalue correlations are poorly described by Random Matrix Theory. Going to larger volumes simply forces more eigenvalues into that region.

## 3 QCD with four dynamical fermions at high temperature

For our dynamical simulations we chose to work with $`n_f=4`$ flavors. Four is the “natural” number of flavors for staggered fermions in the continuum limit, and an efficient exact simulation algorithm can be used, the Hybrid Monte Carlo algorithm. In addition, the four flavor theory is known to have a rather strong first order finite temperature phase transition.
Therefore, tunnelings into the low temperature phase are strongly suppressed already on rather small systems and at temperatures quite close to $`T_c`$. We made dynamical simulations with quark masses $`am_q=0.1`$, 0.05, 0.025, 0.01 and 0.002, and for couplings $`\beta `$ in the high temperature phase (here $`a`$ is the lattice spacing). Interestingly, the lowest eigenvalue provides a sensitive method for determining the critical coupling $`\beta _c(am_q)`$: in Fig 4 the lowest eigenvalue on a $`8^3\times 4`$ lattice with $`am_q=0.025`$ is plotted against $`\beta `$. $`\beta `$ ranges from 4.96 to 5.07 in intervals of 0.002. Each point is an average over 5 configurations at that $`\beta `$-value. A rise is observed between $`\beta =5.012`$ and $`\beta =5.022`$, which can be clearly interpreted as the chiral symmetry restoring finite temperature phase transition. (For published values of the critical couplings $`\beta _c(am_q)`$) see Ref. .) The possibility of using the magnitude of the smallest Dirac operator eigenvalues as probes for chiral symmetry restoration was suggested by Jackson and Verbaarschot on the basis of a similarly observed behavior in a Random Matrix Theory context that we will also discuss below. Most of our analysis with dynamical fermions was performed at $`\beta =5.2`$, which is above $`\beta _c`$ for all values of $`m_q`$ we used. Typically, we analyzed $`3000`$ configurations for each volume and $`m_q`$, extracting the 50 smallest eigenvalues. Ensemble details are summarized in Table 2. Note the large statistics gathered for $`\beta =5.2`$ and $`am_q=0.025`$. As in the quenched case we find a small tail at the lower end of the eigenvalue distribution, reaching towards $`\lambda =0`$ from the main bulk of the distribution. This tail also appears to be volume independent (see Fig. 6). However, in this case it is somewhat more suppressed than in the quenched case, and for small values of the quark mass it does not extend all the way to $`\lambda =0`$, see Fig. 5. In Fig. 7 we show eigenvalue distributions measured at $`\beta =5.2`$ and various $`am_q`$. Note that for larger $`am_q`$, $`\beta _c`$ is also larger, which causes a shift of the whole spectrum as $`am_q`$ is varied. Therefore, a direct quantitative comparison of the distributions in Fig. 7 is not straightforward. However, we note that the distributions at $`am_q=0.025`$ and $`am_q=0.002`$ are almost on top of each other, indicating that these distributions are very close to the $`am_q=0`$ distribution. Furthermore, as in the quenched case described in the previous section, we find the “macroscopic” behavior of the eigenvalue density seemingly compatible with a square root form. By fitting eq. (3) to the bulk of the spectrum (leaving out the tail), we obtain different powers, depending on how much of the tail we choose to cut off. This fitting has been done for $`V=8^3\times 4`$ at $`\beta =5.2`$ and $`am_q=0.025`$ and the resulting values can be seen in Table 3. There is a clear tendency towards a slightly higher power ($`=2m+1/2`$) than the square root which would be expected from Random Matrix Theory. In Fig. 8 we show the fit with the cut at 0.1. For comparison, we include the distribution of the first eigenvalue in the figure. Clearly, the tail at small $`\lambda `$ is caused by the excessive width of the distribution of the smallest eigenvalue. In Fig. 9 we compare the spectral density for quark mass $`am_q=0.025`$ with the prediction (5) of the Random Matrix Theory for the density near a soft edge. 
In order to make the comparison possible, we rescale the spectral density as $$\rho (\lambda )\to \frac{\lambda _0}{2}\left(\frac{2}{\pi \lambda _0KV}\right)^{\frac{2}{3}}\rho \left(\frac{2}{\lambda _0}(\lambda -\lambda _0)\left(\frac{2}{\pi \lambda _0KV}\right)^{\frac{2}{3}}\right)$$ (14) where we determine $`\lambda _0`$ and $`K`$ from a fit to a square root: $`\sqrt{\frac{\lambda _0}{2}}K\sqrt{\lambda -\lambda _0}`$. Here $`V`$ is the lattice volume. We see that the tail predicted by the Random Matrix Theory is much more strongly suppressed than the measured distribution. Moreover, we can also observe that the density of the eigenvalues themselves does not match: the measured distribution includes 50 eigenvalues, whereas the function (5) has only 40 ‘wiggles’ in the $`\lambda `$-range of the distribution. However, if we attempt to perform the comparison by matching the number of eigenvalues/wiggles, the overall fit becomes worse. In itself, this mismatch is not surprising, when we remember that the overall shape is not well described by a square root behavior in the first place (see Table 3). In the quenched simulations we saw that correlations between the first 5 or 6 eigenvalues were not very accurately given by Random Matrix Theory. In Fig. 10 the distribution of $`s_i`$, defined in eq. (12), is shown for a dynamical simulation with $`\beta =5.2`$, $`am_q=0.025`$ on an $`8^3\times 4`$ lattice. Here we see, on the same lattice volume, a clear deviation from the Wigner surmise only for the first two or three spacings, $`s_1`$, $`s_2`$ and $`s_3`$. Again, comparisons can only be made at equal volumes, as larger volumes imply more eigenvalues in the region around the soft edge where correlations are poorly described by Random Matrix Theory. There is an interesting physical consequence of a genuine gap in the Dirac eigenvalue spectrum. Consider the difference between the ($`\pi `$) susceptibility $`\chi _P`$ and the susceptibility of its scalar partner ($`a_0`$), which we denote by $`\chi _S`$. In a manner similar to the Banks-Casher formula for the chiral condensate, one finds that this difference can be written $$\omega =4m^2\int _0^{\infty }d\lambda \,\frac{\rho (\lambda ;m)}{(\lambda ^2+m^2)^2},$$ (15) where the spectral density $`\rho (\lambda ;m)`$ includes the zero modes due to topology as well. In the infinite-volume limit this contribution from exact zero modes should vanish, and we should be left with the integral over non-zero modes. If the spectral density has a gap around the origin, this means that $`\omega `$ vanishes in the chiral limit. Because axial U(1) rotates $`\chi _P`$ into $`\chi _S`$ and vice versa, this would imply a restoration of axial U(1) at these high temperatures. Conversely, if one believes that this axial U(1) symmetry is not restored at high temperature, one gets constraints on the behavior of the spectral density of the Dirac operator near the origin. For a power-law behavior near $`\lambda \approx 0`$ a non-vanishing $`\omega `$ in the infinite-volume chiral limit can only be supported for $`\rho (\lambda ,0)\propto \lambda ^\alpha `$ with $`\alpha \le 1`$, and in fact $`\omega `$ would only remain finite in that limit if $`\alpha =1`$. Of course, the fact that we appear to see a gap in the eigenvalue spectrum with staggered fermions at this particular coupling does not imply that this gap persists in physical units as we take the continuum limit. If not, we are here studying a pure lattice artifact.
The only way to test this is to study the shift in the apparent cut-off eigenvalue $`\lambda _0`$ as we change the lattice spacing (this is, however, far beyond what we can do at present). A very different uncertainty comes from the fact that we are working with fermions that are almost insensitive to topology at the couplings and volumes available to us. This means that there are some would-be zero modes mixed up with our regular eigenvalues near the origin. At zero temperature we found that these would-be zero modes somewhat surprisingly behaved as the non-zero eigenvalues at realistic couplings and volumes . But there is no guarantee that this is the case at finite temperature. This means that the small tail extending to zero in our quenched simulations, and some of the smallest eigenvalues in the simulations with dynamical fermions may be due to these would-be zero modes. In particular, the disagreement with Airy-function behavior very close to the soft edge may be partly due to these would-be zero modes. ## 4 Random Matrix Theory We have just shown our lattice gauge theory data for the spectrum of the Dirac operator below, around, and above $`T_c`$, and compared it with some of the analytical formulas of Random Matrix Theory in the limit where the size of these matrices $`N`$ goes to infinity. It is of interest to see how a simple model of a chiral phase transition in Random Matrix Theory behaves at finite $`N`$, as some external parameter (mimicking temperature $`T`$) is changed. The model we shall focus on here was proposed by Jackson and Verbaarschot in the first of Ref. and has also been studied in Ref. . It is based on a modified chiral ensemble of $`N\times N`$ complex matrices $`W`$, with partition function $$𝒵=𝑑W\underset{f=1}{\overset{N_f}{}}det(Mim_f)\mathrm{exp}\left[NTr(WW^{})\right]$$ (16) in a sector of topological charge zero. Here $$M=\left(\begin{array}{cc}0& W^{}+T\\ W+T& 0\end{array}\right)$$ (17) is a $`2N\times 2N`$ block hermitian matrix, and the external deterministic parameter $`T`$ is playing a rôle reminiscent of temperature. In the normalization chosen here, a continuous “phase transition” occurs at $`T_c=1`$, where the spectral density $`\rho (\lambda )`$ of the eigenvalues of the random matrices just reaches zero . For $`T>T_c`$ the spectrum becomes two-banded, with a gap surrounding zero. The global shape of $`\rho (\lambda )`$ in this model will be very different from the actual (macroscopic) Dirac operator spectral density. But the interesting feature lies in having here a simple model which qualitatively seems to describe some of the observed behavior of the staggered Dirac operator with $`N_f=4`$, in particular the apparent gap in the density for $`T>T_c`$.<sup>2</sup><sup>2</sup>2Many other details do not match at all. For instance the “phase transition” in the model (16) is continuous, in contrast to the finite-temperature phase transition in the massless $`N_f=4`$ theory, which is believed to be of first order. It is very simple to do quenched ($`i.e`$, $`N_f=0`$) numerical simulations of the above Random Matrix model, as it just corresponds to generating an ensemble of complex matrices with Gaussian weight. Such matrices are “maximally random” in that their matrix elements are independently of Gaussian distribution. This allows us to choose very large random matrices numerically, and then finding the eigenvalues of the hermitian matrix $`M`$. 
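A minimal numerical sketch of this procedure is given below. The matrix size, the ensemble size, and the value of $`T`$ are arbitrary illustrative choices, and the normalization of $`W`$ follows the Gaussian weight $`\mathrm{exp}[-N\mathrm{Tr}(WW^{})]`$ of eq. (16).

```python
# Sketch of a quenched simulation of the finite-temperature chiral Random Matrix
# model of eqs. (16)-(17): draw Gaussian W, build the 2N x 2N Hermitian matrix M
# with the deterministic "temperature" T on the off-diagonal blocks, and diagonalize.
# N, T, n_samples and the seed are arbitrary illustrative choices.
import numpy as np

def sample_eigenvalues(N=200, T=1.2, n_samples=100, seed=0):
    rng = np.random.default_rng(seed)
    eigs = []
    for _ in range(n_samples):
        # weight exp[-N Tr W W^dagger]  =>  each complex entry has total variance 1/N
        W = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2 * N)
        M = np.block([[np.zeros((N, N)), W.conj().T + T * np.eye(N)],
                      [W + T * np.eye(N), np.zeros((N, N))]])
        eigs.append(np.linalg.eigvalsh(M))
    return np.array(eigs)            # shape (n_samples, 2N); eigenvalues come in +/- pairs

def unfolded_spacings(eigs, i):
    """Unfolded spacing s_i of eq. (12) between positive eigenvalues i and i+1 (0-based)."""
    pos = np.array([np.sort(row[row > 0.0]) for row in eigs])
    gaps = pos[:, i + 1] - pos[:, i]
    return gaps / gaps.mean()

eigs = sample_eigenvalues()
print("smallest positive eigenvalue (ensemble average):",
      np.mean(np.sort(np.abs(eigs), axis=1)[:, 0]))
s5 = unfolded_spacings(eigs, 5)
# compare with the Wigner surmise of eq. (13): <s> = 1, std ~ 0.42 in the bulk;
# deviations are expected this close to the soft edge.
print("mean, std of unfolded spacing s_5:", s5.mean(), s5.std())
```

At $`T=0`$ the same code reproduces the semicircle of eq. (18), and for $`T>1`$ the smallest eigenvalues move away from zero, which is the formation of the gap discussed in the text.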
In this way we have studied the detailed behavior of the finite-$`N`$ Random Matrix model (16) just below $`T_c`$, at $`T_c`$, and just above $`T_c`$ (where the two bands separate as the gap develops).<sup>3</sup><sup>3</sup>3Similar numerical studies have been performed by K. Splittorff, M.Sc. thesis, The Niels Bohr Institute 1999 (unpublished). Simulations of the macroscopic spectral density in this model can also be found in the original paper by Jackson and Verbaarschot . We show some of our numerical results in Figs. 11 and 12. First, in Fig. 11 we display a sequence of the macroscopic spectral density with increasing magnitude of the parameter $`T`$: $`T=0,0.5,0.75,1.0,1.2,2.0`$. The plots were made by diagonalizing 10000 $`200\times 200`$ matrices. At $`T=0`$ this macroscopic Random Matrix Theory spectrum is just the Wigner semi-circle law (we display only the $`\lambda >0`$ part): $$\rho (\lambda ,T=0)=\frac{1}{2\pi }\sqrt{4\lambda ^2}.$$ (18) As $`T`$ increases, a dip in the spectral density around $`\lambda =0`$ slowly develops, and it subsequently turns into a gap. Of course, this macroscopic Random Matrix Theory spectrum is totally unlike the macroscopic Dirac operator spectrum. But it is interesting to zoom in on the microscopic behavior of the Random Matrix spectral density at the soft edge of the gap. In this way, by blowing up the scale of the smallest eigenvalues, we obtain the plots in Fig. 12 for the same parameter values of $`T`$ as above. One sees clearly how the universal Bessel-kernel behavior below $`T_c`$ turns into the also universal Airy-kernel above $`T_c`$. We note that it is has been shown in Ref. that the massless microscopic spectral density of the above Random Matrix model has precisely the usual zero-temperature form, $$\rho _s(\zeta (T))=\frac{\zeta (T)}{2}\left[J_{N_f}(\zeta (T))^2J_{N_f1}(\zeta (T))J_{N_f+1}(\zeta (T))\right],$$ (19) where $`\zeta (T)`$ is simply the eigenvalues rescaled by the (T-dependent) infinite-volume spectral density at the origin $`\rho (0,T)`$: $$\zeta (T)=\lambda 2\pi N\rho (0,T).$$ (20) In this model $`\rho (0)`$ approaches zero with a mean-field type of behavior : $$\rho (0,T)=\rho (0,0)\sqrt{1T^2}.$$ (21) For $`T`$ bigger than $`T_c`$, but still close to it, we find, as expected, a deformation of the Airy-kernel. In fact, the microscopic behavior there smoothly interpolates between the Bessel-form and the Airy-form. The peaks corresponding to individual eigenvalues from the Bessel-function behavior below $`T_c`$ gradually smoothen out to become the inflection points in the spectral density of the Airy-kind as the soft edge moves away from the origin. To illustrate how accurately one reproduces the Airy-behavior in this kind of simulations, we show in Fig. 13 the soft edge prediction appropriately scaled to fit simulation data at T=3. The Airy-behavior is perfectly reproduced close to the edge, but with deviations after the first few eigenvalues(wiggles). The deviation is presumably caused by the limited number of eigenvalues at $`N=300`$, and when $`N`$ is increased we expect the fit to improve. ## 5 Conclusions The purpose of this study has been to find the extent to which Random Matrix Theory may be able to describe low-lying eigenvalue distributions and correlations between low-lying eigenvalues of the staggered Dirac operator at finite temperature. 
We have also been interested in testing the quenched theory with staggered fermions in the light of recent results with overlap fermions , which indicated that the chiral finite-temperature phase transition in the quenched theory may be more subtle than previously expected. In the quenched case we do not see the accumulation of small Dirac operator eigenvalues around $`\lambda =0`$ that was observed with overlap fermions. This is not entirely surprising in view of the insensitivity of staggered fermions to gauge field topology at these lattice couplings and lattice volumes . With staggered fermions we do observe a strong depletion of eigenvalues near the origin once the temperature $`T`$ reaches the pure gauge theory deconfinement phase transition temperature $`T_c`$. This is in agreement with the conventional picture that chiral symmetry is restored in the quenched theory with staggered fermions at precisely the deconfinement phase transition. On the other hand, we find a clear difference in the behavior of the smallest Dirac operator eigenvalues in the quenched theory and the theory with genuine, dynamical, fermions. In the quenched case a small, volume independent tail of the eigenvalue distribution extends to $`\lambda =0`$ while in the full theory the tail does not reach the origin at the couplings we have investigated. While the bulk of the eigenvalue distribution near the (soft) edge is roughly compatible with a square root behavior, the tail of small eigenvalues is considerably larger than the Airy function behavior that Random Matrix Theory would predict. Physically, a genuine gap in physical units in the eigenvalue spectrum above $`T_c`$ would imply the restoration of axial U(1) symmetry at these high temperatures. A very likely scenario is therefore that the apparent gap found with staggered fermions at finite bare coupling $`\beta `$ shrinks to zero in physical units as the continuum limit is reached. However, an investigation of whether this is indeed the case is much beyond the scope of the present paper. Acknowledgments: We thank K. Splittorff and J. Verbaarschot for discussions. The work of P.H.D. and K.R. has been partially supported by EU TMR grant no. ERBFMRXCT97-0122, and the work of U.M.H. has been supported in part by DOE contracts DE-FG05-85ER250000 and DE-FG05-96ER40979. In addition, P.H.D. and U.M.H. acknowledge the financial support of NATO Science Collaborative Research Grant no. CRG 971487.
# Quantum authentication and key distribution using catalysis ## Abstract Starting from Barnum’s recent proposal to use entanglement and catalysis for quantum secure identification \[quant-ph/9910072\], we describe a protocol for quantum authentication and authenticated quantum key distribution. We argue that our scheme is secure even in the presence of an eavesdropper who has complete control over both classical and quantum channels. Since the publication of the BB84 protocol , quantum key distribution has developed into a well-understood application of quantum mechanics to cryptography. Typically, quantum key distribution schemes depend either on an unjammable classical communication channel or on authentication of the classical communication by classical methods. Comparatively little work has been done on the problem of quantum authentication and authenticated quantum key distribution. Some existing quantum authentication proposals are variations of the BB84 protocol . These proposals either require an unjammable classical channel, or authentication of the classical communication using classical cryptographic methods . An early proposal uses quantum oblivious transfer, which has since been shown to be insecure . Some recent proposals are based on entanglement. A very interesting protocol of this type is due to Howard Barnum . In his protocol, the parties use a shared entangled pair of particles as a catalyst to perform a quantum operation which would be impossible without the catalyst. In its original form, however, Barnum’s protocol has been shown to be insecure . In this paper, we describe a protocol derived from Barnum’s protocol which appears to be secure against a wide range of eavesdropping attacks. In a simplified version of our protocol, the two parties, Alice and Bob, initially share $`K`$ particle pairs in an entangled state $`|c`$ (the key or catalyst). Assume Alice wants to identify herself to Bob. Bob then prepares $`K`$ pairs of particles in an entangled state $`|b`$ and sends one particle from each pair to Alice (the challenge). It is possible to choose the states $`|c`$ and $`|b`$ such that by using only local operations and classical communication (LQCC), Alice and Bob can convert the four-particle state $`|b|c`$ into the four-particle state $`|c|c`$, but by using only LQCC, the two-particle state $`|b`$ cannot be converted into the two-particle state $`|c`$ deterministically. The state $`|c`$ thus acts as a catalyst for the conversion of $`|b`$ into $`|c`$. Using a different catalyst for each pair of challenge particles, Alice and Bob perform LQCC to convert all $`K`$ challenge pairs to the state $`|c`$. Bob now selects a number $`K^{}`$ of his challenge particles and asks Alice to send back her corresponding challenge particles (her response). For each of the $`K^{}`$ challenge pairs now in his possession, Bob makes a projective measurement onto the state $`|c`$. An eavesdropper, Eve, pretending to be Alice, would not have had access to the catalyst $`|c`$, so Eve and Bob would not have been able to convert all their challenge particles to the state $`|c`$, and therefore some of Bob’s test measurements would fail. Below we will derive an upper bound $`p_0`$ for the probability $`p`$ that an eavesdropper remains undetected in a single such measurement. The overall probability of not detecting an eavesdropper is bounded above by $`p_0^K^{}`$ and can be made arbitrarily small by choosing $`K^{}`$ large enough. 
After a successful authentication, Alice and Bob share $`2(K-K^{})`$ catalyst pairs, since the protocol requires that they destroy the catalyst pairs used in the conversion of the $`K^{}`$ tested challenge pairs. If $`K>2K^{}`$, they now share more key particles than before. Our authentication protocol thus also provides authenticated quantum key distribution. The simplified version of our protocol just given is not secure. Below, we first describe a full version of the protocol, and then we discuss a number of eavesdropping attacks against it which we believe are the most powerful such attacks. We will argue that our protocol is secure even in the presence of an eavesdropper with full control over both classical and quantum communication channels; we do not, however, give a full security proof. In our analysis, we assume that all quantum operations are error-free and that the quantum channel is noiseless.

Choice of states. Consider bipartite states $`|b=\sum _{k=1}^n\sqrt{b_k}|k|k`$ and $`|c=\sum _{k=1}^n\sqrt{c_k}|k|k`$, where the states $`|k`$ are orthonormal basis states for one particle. If $`b_1\ge \cdots \ge b_n`$ and $`c_1\ge \cdots \ge c_n`$, then $`b_k`$ and $`c_k`$ are called the ordered Schmidt coefficients of the states $`|b`$ and $`|c`$. The state $`|b`$ can be converted deterministically into $`|c`$ using only LQCC iff the ordered Schmidt coefficients of the target state $`|c`$ majorize those of the initial state $`|b`$, i.e., iff $`\forall k:\sum _{i=1}^kc_i\ge \sum _{i=1}^kb_i`$ with equality for $`k=n`$. Otherwise, only a probabilistic conversion is possible. States with the properties required for our protocol exist for $`n=5`$. For $`n\le 4`$, the protocol needs to be modified to use probabilistic entanglement-assisted conversion. Our choice of Schmidt coefficients for $`|b`$ and $`|c`$ is $`b_1=b_2=0.31`$, $`b_3=0.30`$, $`b_4=b_5=0.04`$, $`c_1=0.48`$, $`c_2=0.24`$, $`c_3=c_4=0.14`$, $`c_5=0`$. With this choice, the conversion of $`|b`$ into $`|c`$ can be done only with probability $`P(b\to c)\approx 0.572`$, but the ordered Schmidt coefficients of the tensor-product state $`|c|c`$ majorize those of the state $`|b|c`$, so the latter can be converted into the former deterministically. Even though the exact conversion $`|b\to |c`$ can only be done with probability 0.572, it is possible to convert $`|b`$ to pure or mixed states $`\rho `$ close to $`|c`$ with much higher probability (we say that $`\rho `$ is close to $`|c`$ if the fidelity $`F=\langle c|\rho |c\rangle `$ is close to 1). By applying a theorem given in Ref. , it can be seen that the average fidelity for the conversion $`|b\to |c`$ is bounded above as $$\langle c|\rho |c\rangle \le p_0\approx 0.9907,$$ (1) where $`\rho `$ is now the average state resulting from the conversion. The theorem also shows that the maximum average fidelity $`\overline{F}=p_0`$ is achieved by a pure state $`|\xi _c`$ to which $`|b`$ can be converted deterministically. (These majorization conditions and the value of $`P(b\to c)`$ are checked numerically in the short sketch below.)

Overview of the full protocol. The main difference between the simplified version of the authentication protocol given above and the full version is that the latter is symmetric. In an authentication round, Alice and Bob each establish the identity of the other party. One round of the protocol consists of Alice and Bob each preparing $`K`$ particle pairs in state $`|b`$. Bob sends one particle of each of his pairs to Alice; for these pairs, Alice is called the prover and Bob the verifier. Likewise, Alice sends one particle of each of her pairs to Bob; for these pairs, she is the prover and he the verifier.
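As noted above, the majorization conditions and the conversion probability for this choice of Schmidt coefficients can be checked numerically. The sketch below uses Vidal's formula for the optimal conversion probability, a standard result that is not spelled out in the text:

```python
# Numerical check of the majorization conditions behind the catalysis protocol,
# using the Schmidt coefficients quoted in the text.  The conversion probability
# is evaluated with Vidal's formula P = min_l [sum_{i>=l} b_i] / [sum_{i>=l} c_i].
import numpy as np

b = np.array([0.31, 0.31, 0.30, 0.04, 0.04])
c = np.array([0.48, 0.24, 0.14, 0.14, 0.00])

def majorizes(x, y):
    """True if the ordered vector x majorizes the ordered vector y."""
    x, y = np.sort(x)[::-1], np.sort(y)[::-1]
    return bool(np.all(np.cumsum(x) >= np.cumsum(y) - 1e-12))

def conversion_probability(src, tgt):
    src, tgt = np.sort(src)[::-1], np.sort(tgt)[::-1]
    tails_src = np.cumsum(src[::-1])[::-1]          # sums of the smallest coefficients
    tails_tgt = np.cumsum(tgt[::-1])[::-1]
    ratios = tails_src[tails_tgt > 0] / tails_tgt[tails_tgt > 0]
    return min(1.0, ratios.min())

print(majorizes(c, b))                       # False: |b> -> |c> is not deterministic
print(conversion_probability(b, c))          # 0.5714... ~ 0.572
print(majorizes(np.outer(c, c).ravel(),      # True: |b>|c> -> |c>|c> is deterministic
                np.outer(b, c).ravel()))
```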
Using a different catalyst for each pair, Alice and Bob now convert each of the $`|b`$ states to a $`|c`$ state. Each of the two asks the other to send back $`K^{}`$ ($`K^{}<K/2`$) of the new particles for testing; they abort the protocol if they detect any particle pair not in the $`|c`$ state. Eve, who does not initially share any entanglement with Alice and Bob, cannot impersonate one of them to the other. For a successful attack, Eve must therefore first obtain shared entanglement with Alice and Bob. Below, after describing the protocol in detail, we discuss its security against a number of attacks, where Eve has full control over both the quantum and classical communication channels (such attacks are called “man-in-the-middle” attacks).

The key. Before the first authentication round, we shall assume that Alice and Bob share $`2K`$ particle pairs prepared in the state $`|c`$: these are the catalysts, and together they form the key. With each successful authentication round, the number of key pairs increases. In each round, the key particles used are labeled $`\gamma _A^i`$ and $`\gamma _B^i`$, respectively, where $`i=1,\ldots ,2K`$, and the state of each pair $`\gamma _A^i\gamma _B^i`$ is $`|c`$.

Detailed description. An authentication round consists of the following steps.

1. Bob prepares $`K`$ particle pairs $`\beta _A^i\beta _B^i`$ in state $`|b`$, where $`i`$ is odd, and sends $`\beta _A^i`$ to Alice. These are Bob’s challenges. Likewise, Alice prepares $`K`$ pairs $`\beta _A^i\beta _B^i`$ in state $`|b`$ for $`i`$ even, and sends $`\beta _B^i`$ to Bob. Thus, for odd indices, Bob will be the verifier; Alice will be the verifier for even indices.

2. For each $`i`$, Alice and Bob perform the deterministic catalysis conversion $`|b|c\to |c|c`$, where Alice performs local operations on her particles $`\gamma _A^i`$ and $`\beta _A^i`$ and Bob performs local operations on his particles $`\gamma _B^i`$ and $`\beta _B^i`$. We can and do require that only the verifier performs both unitary transformations and generalized measurements; the prover performs only unitary transformations depending on the result of the verifier’s measurements, which are communicated classically.

3. Alice picks randomly a subset $`Q_A\subset \{2,4,\ldots ,2K\}`$ of size $`K^{}`$ of particles for which she is the verifier, and Bob does likewise for a subset $`Q_B\subset \{1,3,\ldots ,2K-1\}`$ of size $`K^{}`$ for which he is the verifier. Bob as verifier now asks Alice to send back her response $`\beta _A^i`$ for some $`i\in Q_B`$. Bob measures the projector $`|c\rangle \langle c|`$ on the particle pair $`\beta _A^i\beta _B^i`$. If the measurement fails, he aborts the protocol. Then Alice becomes the verifier, asks Bob to send $`\beta _B^i`$ for some $`i\in Q_A`$ and tests it likewise. They continue taking turns as prover and verifier until they have exhausted the sets $`Q_A`$ and $`Q_B`$. At the end of this step, they discard the catalysts $`\gamma _A^i\gamma _B^i`$ for $`i\in Q_A\cup Q_B`$.

4. The authentication fails if any of the projective measurements in the previous step fails, or if Alice or Bob receives more than $`K^{}`$ requests to send back challenge particles.

5. If the authentication round succeeds, Alice and Bob are left with $`2(K-K^{})`$ pairs $`\gamma _A^i\gamma _B^i`$ and $`2(K-K^{})`$ pairs $`\beta _A^i\beta _B^i`$, i.e., they now have $`2K-4K^{}`$ additional pairs in the catalyst state $`|c`$.
The $`2K+n(2K4K^{})`$ they share after the $`n`$th successful round are now renamed $`\gamma _A^j\gamma _B^j`$ in random order, i.e., with the indices $`j`$ permuted using a pseudo-random number generator. Remark. If the authentication fails, the parties discard all particles used till that point, including both the original key and all new key pairs generated. In this case, Alice and Bob have to start again with a new key. Therefore, in practice they should initially share several sets of $`2K`$ key pairs. Security and attacks. We now dicuss the security of our protocol against a number of attacks. We start with two simple attacks, impersonation and denial of service, and then move on to more powerful “man-in-the-middle” attacks. Impersonation. Suppose that Alice is not present and Eve tries to persuade Bob that she is Alice. When Bob sends out a challenge particle, Eve intercepts it. We therefore label it $`\beta _E`$ rather than $`\beta _A`$, omitting the index $`i`$ for clarity. Eve must now perform local operations on $`\beta _E`$ such that a later measurement by Bob on the pair $`\beta _E\beta _B`$ will fail with the smallest possible probability. If $`\rho `$ is the average state of the pair $`\beta _E\beta _B`$ resulting from Eve’s and Bob’s operations, then the probability that Bob’s measurement succeeds is given by the fidelity $`c|\rho |c`$. Since Eve does not have the catalyst particle $`\gamma _A`$ paired with the particle $`\gamma _B`$ that Bob will use, the conversion is not assisted by any entanglement. The fidelity $`c|\rho |c`$ is therefore bounded above by $`p_0<1`$ \[see Eq. (1)\]. Since in one authentication round, Bob makes $`K^{}`$ such measurements, the probability of not detecting Eve is bounded above by $`p_0^K^{}`$, which can be made arbitrarily small by choosing $`K^{}`$ large enough. Denial of service. In this type of attack, Eve deliberately causes the authentication round to fail, and hence causes one party to discard all key particles. Although our protocol in its present form is particularly vulnerable to this kind of attack, this is not an essential weakness since an attacker who controls both quantum and classical communication can always prevent successful authentications between the legitimate parties. Man in the middle. We now look at stronger attacks in which Eve tries to obtain key material which she could then use, e.g., in a later impersonation attack. Eve’s goal is to share pairs of particles in the catalyst state $`|c`$ with Alice and/or Bob. For instance, if she succeeds in obtaining a large amount of key material shared with Bob, she will be able to authenticate herself to Bob without Alice being present. Eve’s ability to obtain key material is limited by the fact that if her presence is detected in a single measurement, all the previously obtained key material she shares with the verifier who performed that measurement will become worthless. We will distinguish between two kinds of attacks. In a type I attack, Eve does not intercept the challenge particle when it is sent from the verifier to the prover. In a type II attack, she intercepts the challenge particle and sends another particle on to the prover. Since the protocol is symmetric, we will assume in the following that Alice is the prover and Bob the verifier. Type I attack. By definition, in a type I attack, Bob sends the challenge particle $`\beta _A^i`$ to Alice without Eve interfering. Assume now that Bob sends out a request for a response particle. Eve has three options. 
In option 1, she passes the request on to Alice, then she passes Alice’s response particle $`\beta _A^i`$ on to Bob. Eve’s presence will not be detected, but she does not obtain any key material either. In option 2, Eve passes the request on to Alice, then intercepts Alice’s particle and sends another particle on to Bob. Eve does not gain anything, because both Alice and Bob are going to discard their respective particles. In addition, Eve risks detection with nonzero probability. Option 3 is the interesting one. Here, Eve does not pass Bob’s request on to Alice. Instead she prepares a pair of particles $`\alpha _E`$ and $`\alpha _B`$ in a state of her choice and sends $`\alpha _B`$ to Bob. Then she asks Alice to send back the particle $`\beta _A^{i+2}`$, which is the next one for which Bob is the verifier. Since the pair $`\beta _A^{i+2}\beta _B^{i+2}`$ is in the state $`|c`$, Eve now shares a perfect catalyst pair with Bob (assuming that $`i+2Q_B`$). Bob’s measurement on the pair $`\alpha _B\beta _B^i`$, however, is going to detect her with a probability not less than $`1p_0`$. In the case that Bob’s measurement does not detect her, we assume for our security analysis that, after the measurement, the pair $`\alpha _E\beta _A^i`$ shared between Alice and Eve is in state $`|c`$, which is probably too strong an assumption. There is an additional risk of detection for Eve in the next authentication round since, when Alice and Bob relabel their particles in step 5 of the protocol, there will be a $`j`$ such that $`\gamma _A^j`$ is not entangled with $`\gamma _B^j`$. Even if Bob does not ask for a response particle, Eve may still send a request to Alice, so that again she obtains a perfect catalyst pair with Bob. However, since Alice will abort the protocol if she receives more than $`K^{}`$ requests to send back a response particle, Eve cannot request a particle from her without also at some time during the round sending a corresponding response particle to Bob. Therefore, Eve cannot avoid being detected with a probability of at least $`1p_0`$ for each key particle she obtains in this way. Type II attack. We now assume that Eve intercepts the challenge particle $`\beta _A`$ sent out by Bob. As before, because Eve now owns that particle, we will label it $`\beta _E`$. The pair $`\beta _E\beta _B`$ is in state $`|b`$. Eve then prepares two particles $`\alpha _A\alpha _E`$ in a state $`|a`$ of her choice, keeps $`\alpha _E`$ and sends $`\alpha _A`$ to Alice. Unaware of Eve’s presence, Bob now goes through the catalysis protocol with his particles $`\beta _B`$ and $`\gamma _B`$, where $`\gamma _B`$ is entangled with Alice’s particle $`\gamma _A`$. Bob sends out the results of his generalized measurements, which Eve intercepts. Bob’s two particles $`\gamma _B`$ and $`\beta _B`$ are now in the state $$\rho _{\gamma _B\beta _B}=\mathrm{tr}_{\gamma _A\beta _A}(\rho _{\gamma _A\gamma _B}\rho _{\beta _A\beta _B})=\mathrm{tr}_{\gamma _A\beta _A}(|cccc|).$$ (2) This state is independent of Alice’s and Eve’s actions and has no entanglement between the two particles. At this point, there are three different cases. In the first case, Bob does not request a response particle; Eve thus does not risk being detected. She now shares entangled states with both Alice and Bob. She can perform arbitrary unitary or nonunitary local operations on her particles $`\alpha _E`$ and $`\beta _E`$, and she can send fake measurement information to Alice in order to influence Alice’s unitary operations. 
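To put the single-measurement bound into perspective, the short calculation below shows how $`p_0\approx 0.9907`$ compounds over repeated tests; the particular values of the number of tests and the target failure probability are arbitrary examples.

```python
# How the single-test bound p0 compounds: probability that Eve stays undetected
# after n verification measurements (impersonation) or after obtaining n key pairs
# via the type I attack described above.  The chosen n and eps values are examples.
import numpy as np

p0 = 0.9907

for n in (50, 100, 500, 1000):
    print(f"n = {n:5d}   p0**n = {p0**n:.2e}")

# number of tests needed to push the undetected-impersonation probability below eps
eps = 1e-6
K_prime = int(np.ceil(np.log(eps) / np.log(p0)))
print("K' needed for p0**K' < 1e-6:", K_prime)
```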
For our security analysis, we assume that this enables her to bring both pairs $`\alpha _A\alpha _E`$ and $`\beta _E\beta _B`$ into the catalyst state $`|c`$, although it follows from the analysis of case 2 below that she cannot reach this goal completely. Eve may also ask Alice to send particle $`\alpha _A`$ back to her, but generally, Eve will not gain anything from this. In the second case, Bob requests a response particle, and Eve sends him her particle $`\beta _E`$. We will now show that the fidelity between the target state $`|c`$ and the state $`\rho _{\beta _E\beta _B}`$ on which Bob performs his measurement is bounded above by $`c|\rho _{\beta _E\beta _B}|cp_0`$, which implies that Bob’s measurement fails with probability $`1p_0`$. The reason is that even if Bob collaborated with Eve on maximizing the fidelity, they could only use LQCC in the conversion; it would not be assisted by any entanglement. Since Alice performs only unitary transformations, but no measurements, on her particles $`\gamma _A`$ and $`\alpha _A`$, no entanglement is created between $`\alpha _E`$ and $`\gamma _B`$, which could assist Eve and Bob in their task. As in the first case, for our security analysis we will assume that if Eve remains undetected, she shares pairs in the catalyst state with both Alice and Bob. Eve can get close to this goal by performing a type I attack against Alice leading to a perfect catalyst pair shared with Bob. Eve can do this because she has not passed Bob’s earlier request on to Alice. In the third case, Bob also requests a response particle, but this time Eve passes his request on to Alice and intercepts Alice’s response $`\alpha _A`$. Eve then performs arbitrary operations on the three particles now in her possession, $`\alpha _A`$, $`\alpha _E`$ and $`\beta _E`$. Then she sends one particle on to Bob. We label this particle $`\stackrel{~}{\beta }_E`$. We now assume that Eve does not use any entanglement to assist her in the conversion of the $`\beta `$ particles, which means that the fidelity between the target state $`|c`$ and the state $`\rho _{\stackrel{~}{\beta }_E\beta _B}`$ on which Bob performs his measurement is bounded above by $`c|\rho _{\stackrel{~}{\beta }_E\beta _B}|cp_0`$. This implies again that Bob’s measurement fails with probability $`1p_0`$. The above assumption is rather strong, but partially justified by the fact that there is a conflict of interest for Eve: if Bob does not request a response particle, Eve wants the $`\alpha `$ particles to be in the pure $`|c`$ state, in which case they are not entangled with any other particle. For a full analysis of this conflict of interest, one needs to analyse the set of unitary transformations Alice is allowed to perform under the protocol. Unlike the first and second cases, if Bob’s measurement does not fail, Eve will not share entanglement with either Alice or Bob, since they discard their respective particles. To evaluate the overall security of the protocol against a type II attack, we now assume that Eve attacks $`L`$ particle pairs. Since Alice and Bob check a random fraction $`K^{}/K`$ of these pairs, the probability that Eve remains undetected is approximately bounded above by $`p_0^{LK^{}/K}`$—the bound becomes exact in the limit of large $`K`$ and $`K^{}`$. If Eve is not detected, the fraction $`e`$ of key pairs she shares with Alice and Bob is not greater than $`L/K`$. The probability $`p(e)`$ that Eve obtains a fraction $`e`$ undetected is therefore bounded above by $`p_0^{eK^{}}`$. 
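To make the scaling of these detection bounds concrete, the short sketch below evaluates $`p_0^{K^{}}`$ and $`p(e)p_0^{eK^{}}`$ numerically; the particular values of $`p_0`$, $`K^{}`$ and $`e`$ used are illustrative assumptions only, since the protocol fixes the inequalities but not the parameters.

```python
# Illustrative evaluation of the detection bounds derived above.
# All parameter values here are assumptions, not values fixed by the protocol.
p0 = 0.75    # assumed fidelity bound for an unassisted conversion (p0 < 1)
Kp = 50      # number of challenge-response checks per round (K' in the text)

# Impersonation: Eve must pass K' independent checks, each succeeding with
# probability at most p0.
print("impersonation bound:", p0 ** Kp)

# Type II attack: probability that Eve keeps an undetected fraction e of the key.
def p_of_e(e, p0=p0, Kp=Kp):
    return p0 ** (e * Kp)

for e in (0.05, 0.2, 1.0):
    print(f"p(e = {e}) <= {p_of_e(e):.3e}")
```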
The security of the protocol against a type II attack then follows from the fact that, for any $`e>0`$, Alice and Bob can make $`p(e)`$ arbitrarily small by choosing $`K`$ and $`K^{}`$ sufficiently large. Similarly, the protocol is secure against a type I attack because the probability that Eve remains undetected in a type I attack against $`L`$ particle pairs is bounded above by $`p_0^L`$. Conclusions and outlook. The quantum authentication protocol described above appears to be secure even in the presence of an eavesdropper who has complete control over both classical and quantum communication channels at all times. Our protocol does not rely on classical cryptography. Furthermore, the security of the protocol does not depend on keeping classical information secret, including information about quantum states: all parties, including the eavesdropper, have full information about all aspects of the protocol. In each authentication round, additional quantum key particles are distributed securely. Combined with entanglement purification and privacy amplification techniques , our protocol therefore also provides authenticated quantum key distribution. There is a number of important open questions which we plan to address in the future. Most importantly, we need to analyse the protocol in the presence of noise and for more subtle eavesdropping attempts such as coherent attacks, or an attack in which Eve partially entangles the challenge with an ancillary particle. Furthermore, there is scope for improving the protocol in several respects. For instance, the parties should not have to discard all key pairs if a single measurement fails. It should also be possible to find states with a lower fidelity bound $`p_0`$, e.g., by going to a higher-dimensional Hilbert space. Acknowledgments. The authors would like to thank Howard Barnum, Todd Brun and Martin Plenio for very helpful discussions. Thanks also to Daniel Gottesman and Norbert Lütkenhaus for pointing out problems with previous versions of the protocol. This work was supported by the UK Engineering and Physical Sciences Research Council (EPSRC).
# Effective potentials for 6-coordinated Boron: structural geometry approach ## 1 Introduction Boron stands out among the elements for the complexity and polymorphous variety of its structures . The structures are networks built from icosahedrally symmetric clusters, and the typical “inverted-umbrella” coordination shell is asymmetrical, containing five neighbors on one side and one on the other. The stable phase ($`\beta `$-B) is currently modeled with 320 atoms/unit cell. Of the metastable allomorphs , only $`\alpha _{12}`$ and T<sub>50</sub> have solved structures , the latter of which is stabilized by $`4\%`$ N atoms . The simplest phase $`\alpha _{12}`$ contains 3-centered bonds both within and between icosahedra . It was thought that bonding within icosahedra is metallic and weaker, while inter-icosahedral bonds are covalent and strong ; however, recent experiments have questioned this behavior . Boron is the only plausible candidate for a single-element or covalent quasicrystal, which can be speculatively modeled on either $`\alpha _{12}`$-B or $`\beta `$-B. This idea inspired the discovery of a new Boron phase , as well as the prediction of Boron nanotubes . Our motivation is to compare the energies of alternate Boron structures. Selected crystal structures , finite icosahedral clusters , microtube segments, or sheet fragments were compared by direct ab-initio calculations. Such computations, however, are limited to $`O(10^2)`$ atoms whereas real Boron has more atoms per unit cell, and quasicrystal models require $`10^5`$ atoms. Furthermore, Monte Carlo and/or molecular dynamics simulations are desirable, especially for the liquid . A tractable approximation is required of the ab-initio total energy as a function of atomic positions, such as a classical (many-atom) potential. This potential must reasonably represent the energy not only for the ground-state structure with slight distortions, but also for relatively high-energy local environments, such as occur in defects or sometimes in complex ground states. We know of no previous attempts at such a potential. (The closest precursor of our work is Lee’s study of a few Boron structures , comparing atomic-orbital total energies with a simpler geometrical function of the closed circuits in the Boron network.) We start from a general form of potential into which all Silicon potentials could be cast, a sum $`E_{total}=\sum _iE_i`$ over interactions local to each coordination shell: the site energy $`E_i`$ of atom $`i`$ with coordination number $`Z_i`$ is broken into terms depending on one, two, etc. of its neighbors $`j,k,\mathrm{}`$: $$E_i=\sum _jf_{Z_i}(r_{ij})+\sum _{jk}G_{Z_i}(r_{ij},r_{ik},\theta _{jk}^{(i)})+\mathrm{}$$ (1) where $`r_{ij}`$ is the distance from atom $`i`$ to $`j`$, and $`\theta _{jk}^{(i)}`$ the angle formed by two neighbours $`j,k`$ and center $`i`$. (We shall assume later that $`j,k`$ are restricted to “nearest” neighbours of site $`i`$.) Those Si potentials mostly attempted to capture the role of $`Z`$ and the radial dependences. The dependence on bond angles (with fixed coordination and bond radius) is not so assured, even for Si. Three-body terms may be implicit, as each bond’s effective $`Z`$ depends on the surrounding atoms . Bazant et al. critiqued the angle dependence of prior potentials, but they too addressed it indirectly, by assuming various angular forms and checking which of these gave the most transferable radial behaviours. 
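As a concrete, purely schematic illustration of how a potential of the form (1), truncated at the two-neighbour terms, is evaluated for a single coordination shell, consider the sketch below; the pair term `f` and the angular term `G` are passed in as arbitrary callables, since no analytic form is assumed for them in this work.

```python
import numpy as np

def site_energy(center, neighbours, f, G):
    """E_i of Eq. (1), truncated after the two-neighbour (angular) terms.
    f and G are user-supplied callables; no functional form is assumed."""
    rij = np.asarray(neighbours, float) - np.asarray(center, float)
    d = np.linalg.norm(rij, axis=1)                  # bond lengths r_ij
    E = np.sum(f(d))                                 # one-neighbour (pair) terms
    for j in range(len(d)):
        for k in range(j + 1, len(d)):               # each pair of neighbours once
            cos_t = np.dot(rij[j], rij[k]) / (d[j] * d[k])
            theta = np.arccos(np.clip(cos_t, -1.0, 1.0))
            E += G(d[j], d[k], theta)                # two-neighbour (angular) terms
    return E
```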
In this letter, we develop the opposite approach: we isolate the angular dependence without a priori assumptions of analytic form, by fitting to a large database in which non-angular variables are held fixed. We have no hopes that a potential should be transferable to a structures with very different local order; thus our database must be a large family of structures, among which bond angles and coordination numbers vary independently<sup>2</sup><sup>2</sup>2 We can quantify how well our database samples the space of possible coordination shells, by use of a metric which defines a distance in this space . This is not the case for previous databases of periodic structures. The reason is, in part, that at Boron’s typical coordination $`Z=6`$, a much greater variety of coordination shells is plausible than at ordinary covalent ($`Z=3`$-$`4`$) or metallic ($`Z12`$) coordinations. Consider a database of structures in which all bond lengths are $`R`$, all coordination numbers are $`Z`$, and non-pair interactions are limited to nearest neighbors. Then eq. (1) reduces to $$E_i=\underset{ji}{}f_Z(r_{ij})+\underset{jk}{}G_Z(R,\theta _{jk}^{(i)})+h_Z(i)$$ (2) where $`G_Z(R,\theta )G_Z(R,R,\theta )`$, and $`h_Z`$ gathers all interactions beyond the three-body (or rather two-neighbor) terms. It is not obvious a priori what form $`h_Z`$ should have; we found empirically (see below) that four-body (three-neighbor) terms are not needed, but a $`Z`$-neighbor term $`h_Z(i)c_Z(R)\xi (i)`$ is needed. It is proportional to the “asymmetry” $`\xi (i)`$ of coordination shell $`i`$ (originally introduced to characterize dangling bonds ): $$\xi (i)=\frac{|_j𝐫_{ij}|}{(1/Z)_j|𝐫_{ij}|}$$ (3) where $`𝐫_{ij}`$ is the vector from atom $`i`$ to its neighbour $`j`$. (The denominator is the mean nearest-neighbour distance.) Now, restrict the database further so that every structure is “uniform” i.e. all sites are crystallographically equivalent and have the same local environment; We investigate those uniform structures in which all nearest neighbour distances are made equal (or nearly so), the better to separate the $`R`$-dependence from the angular effects. Then $`E_i`$ must be equated with the total energy per atom, as found from an LDA calculation. Now assume that, within the database, only a discrete set of inter-neighbor angles is possible, $`\theta _a`$ ($`a=1,2,\mathrm{}`$); we realize this approximately by simply dividing the range of $`\theta `$ into several bins. Then – if the structures sufficiently outnumber the angular bins – one obtains $`G_Z(R,\theta _a)`$ for each angle by a simple linear fit. For each value of the scale $`R`$, the coefficients $`\{G_Z(R,\theta _a)\}`$ and $`c_Z(R)`$ satisfy a set of linear equations: $$E_m(R)Zf_Z(R)=\underset{a}{}N_m(\theta _a)G_Z(R,\theta _a)+c_Z(R)\xi _m$$ (4) Here $`m`$ runs over the structures of the same $`Z`$, and $`N_m(\theta _a)`$ is the number times angle $`\theta _a`$ occurs in the coordination shell of structure $`m`$. The whole procedure can be repeated for each $`Z`$. ## 2 Structural database For our database, we adopted or invented over 60 uniform structures , spanning coordination numbers $`Z`$ from 3 to 12, and exhibiting a variety of angular patterns for each $`Z`$; We shall present details only for 6-coordinated ($`Z6`$) structures, which are relevant to the real phases: $`80\%`$ of the sites in $`\beta `$-B, and $`50\%`$ in $`\alpha _{12}`$(B), have $`Z=6`$. 
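A minimal sketch of the asymmetry parameter (3) for one coordination shell (atomic positions are treated as plain Cartesian arrays; this only illustrates the definition):

```python
import numpy as np

def asymmetry(center, neighbours):
    """xi(i) of Eq. (3): |sum_j r_ij| over the mean nearest-neighbour distance."""
    rij = np.asarray(neighbours, float) - np.asarray(center, float)
    return np.linalg.norm(rij.sum(axis=0)) / np.linalg.norm(rij, axis=1).mean()

# A symmetric shell (e.g. the octahedron of the simple cubic lattice) gives xi = 0;
# the one-sided "inverted umbrella" shell of real Boron gives xi > 0.
```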
The same procedure was carried out for $`Z=4,5`$ and $`7`$ Our $`Z6`$ structures, of course, include the simple cubic (SC) and the triangular (tri) lattices (parentheses give codes to be used henceforth). We systematically constructed more $`Z6`$ structures by stacking $`Z4`$ planar lattices, such as the 3.4.6.4 pattern of inter-penetrating dodecagons (i-dodec) or the 3.6.3.6 lattice (Kagome), see Figure 1(a,b). We also stacked $`Z5`$ planar lattices such as $`3^2\mathrm{.4.3.4}`$ (called $`\sigma `$) or the $`3^3.4^2`$ (called $`\mu `$), puckering alternate layers so that each vertex gained one more neighbour from the layer above or below: this produced the $`Z6`$ puckered ($`P\sigma `$) and ($`P\mu `$) structures, respectively as in Figure 1(c,d). As far as possible, the inter- and intra-unit bond lengths in our packings and stackings were made equal. Figure 1(e,f,g) show inherently three-dimensional networks: a simple cubic array of cuboctahedra (SC-co); the (T-lattice) – also called “pyrochlore lattice” – consisting of a diamond lattice’s bond midpoints; and icosahedra placed on a simple cubic lattice (SC-ico).<sup>3</sup><sup>3</sup>3 SC-ico was introduced in Ref. , but misidentified there as “primitive orthorhombic.” We also rolled a triangular lattice into a single infinite nanotube , with a circumference of 8 edges (tube-tri, not illustrated). Finally, we rolled the $`Z5`$ $`3^2\mathrm{.4.3.4}`$ ($`\sigma `$) lattice into a tube whose circumference is the dotted arc in Figure 1(h), and packing such tubes in a square array. This made the “tube-$`\sigma `$” structure, in which one neighbor of each atom belongs to an adjacent tube. Table I summarizes geometric data on the coordination environments of these $`Z6`$ structures. <sup>4</sup><sup>4</sup>4 In the table, footnote $`a`$ means nearest ($`R`$) neighbours are not at identical distances; $`b`$ means $`N_2`$ has a tolerance of $`0.14R`$ in distance; $`c`$ means neighbors at various distances between $`\sqrt{2}R`$ and $`\sqrt{3}R`$ are binned in $`N_3`$; $`d`$ means $`\theta _a`$ has a tolerance of $`6^{}`$ in angle, and the “$`180^{}`$” bin in tube-tri is actually at $`169^{}`$. Here $`N_2`$ and $`N_3`$ are the number of neighbours at at distances $`\sqrt{2}R`$ and $`\sqrt{3}R`$, respectively. The other columns directly determine our potential (2): the number $`N(\theta _a)`$ in each coordination shell of “two-neighbour” angles $`\theta _a`$, and the asymmetry $`\xi `$ of the coordination shell defined by (3). ## 3 LDA Results We performed an ab initio total energy calculation for each structure in our database, in the local density approximation with extended norm and hardness conserving pseudo-potentials . We used all planewaves with kinetic energy up to 54.5 Ry. The Brillouin zone is sampled with a k-point density of at least $`(16.3\AA ^1)^3`$. Band structure energies are converged to within $`10^6`$ Ry in each calculation. The convergence with respect to k-point density shows a precision of 0.03 eV for energy comparison among different structures. We varied the lattice constant of each structure (a uniform scale factor, without any relaxation of positions). Figure 2(a) collects the cohesive energy (per atom) of each $`Z6`$ structure as a function of the nearest neighbour distance $`R`$. The inset shows the single-well-like shape for $`R`$ up to $`5\AA `$ for three representative structures, SC, tri, and SC-ico; beyond $`R=3\AA `$, the curves for all structures converge, approaching the non-bonding limit. 
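The Table I entries $`N(\theta _a)`$ can be generated mechanically from the shell geometry; the sketch below is one possible implementation, with the $`6^{}`$ tolerance taken from the table footnote and the binning bookkeeping itself an assumption.

```python
import numpy as np

def shell_angle_counts(center, neighbours, bins_deg, tol_deg=6.0):
    """Count the two-neighbour angles of one coordination shell into the
    discrete bins theta_a used in Table I and Eq. (4)."""
    rij = np.asarray(neighbours, float) - np.asarray(center, float)
    d = np.linalg.norm(rij, axis=1)
    bins = np.asarray(bins_deg, float)
    counts = np.zeros(len(bins), dtype=int)
    for j in range(len(d)):
        for k in range(j + 1, len(d)):
            cos_t = np.dot(rij[j], rij[k]) / (d[j] * d[k])
            theta = np.degrees(np.arccos(np.clip(cos_t, -1.0, 1.0)))
            a = int(np.argmin(np.abs(bins - theta)))
            if abs(bins[a] - theta) <= tol_deg:
                counts[a] += 1
    return counts
```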
The main figure enlarges the physically relevant range $`R[1.6,2.0]\AA `$, to highlight the differences among the structures. The lowest energy curve shown is the “ideal-$`\alpha _{12}`$” phase, topologically same as $`\alpha _{12}`$ but with all nearest neighbour distances set equal. (This was omitted from our $`Z6`$ database because it has two kinds of sites, of which one has $`Z=7`$.) The $`Z6`$ uniform structure of lowest energy is SC-ico, another way of connecting $`\mathrm{B}_{12}`$ icosahedra so each atom has one inter-icosahedral bond. Slightly higher is SC-co, which can be viewed analogously as a packing of $`\mathrm{B}_{12}`$ cuboctahedra in place of icosahedra, with each atom having two inter-cluster bonds. ## 4 Fitting of an effective potential The simplest effective potential would be a single-well two-body potential of typical shape, so the nearest-neighbour radius lies close to the minimum of the well, while further neighbours contribute at the tail of the well. Then structures of the same coordination would have similar energies (from the near-neighbor terms) with small differences due to the farther neighbors. This will not work in general and failed badly for our energy curves. Basically, one is asking the tail of $`f_Z(r)`$ in (2) to account for both the total energy when $`R\sqrt{2}R`$ or $`\sqrt{3}R`$ – a sizeable fraction of the well depth – as well as the dependence on $`N_2`$ and $`N_3`$ (see Table I) among structures of the same $`Z`$ – which is smaller, as seen in Figure 2. We conclude that the two-body potential must be essentially truncated after the nearest-neighbor distance. It follows that the first term in (2) reduces to $`Zf(R)`$; the energy differences among structures of the same coordination $`Z`$ must be attributed to higher-order potentials. For each $`Z`$, we must choose a reference structure for which the higher-order terms are set to zero (thereby defining $`Zf_Z(R)`$ to be its energy curve); for $`Z6`$ we chose the simple cubic (SC) structure. Thus we are fitting the $`E_m(R)`$ from Figure 2(a) to the linear equations (4), in which Table I (minus columns $`N_2`$ and $`N_3`$) constitutes the $`11\times 8`$ matrix of coefficients. Eqs. (4) are overdetermined (by three degrees of freedom); hence for each $`R`$, we can solve for $`G_6(R,\theta _a)`$ and $`c_6(R)`$ in a least-squares sense, without assuming any functional forms. The resulting fit (Figure 2) shows the angular potential $`G_6(R,\theta )`$ increases monotonically with $`\theta `$. Both $`c_6(R)`$ and $`G_6(R,\theta )`$ show rapid decay as a function of $`R`$. Qualitatively similar $`(R,\theta )`$ dependences are found for $`Z=4,5,`$ and $`7`$ structures , except that for $`Z=4`$ the sign of $`G_4`$ is reversed. The monotonicity of the fitted $`G_Z(R,\theta )`$, for all four values of $`Z`$, argues for the physical validity of the result: a spurious fit should produce random fluctuations as a function of $`\theta _a`$, since the value of $`G_Z`$ at each bin is an independent parameter in the fit. ## 5 Discussion The first check on our potentials is that, among the 11 structures in our $`Z6`$ database, the total energies are in the right order (apart from exchange of certain close ones), with an average error of 0.14 eV/atom (for $`R`$ in the bonding range). Furthermore the “inverted-umbrella” coordination shell of real Boron – which was not used in our database – was correctly found lower (by $`1`$eV) than any of the $`Z6`$ shells that were used. 
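The linear fit of Eq. (4) itself is a standard least-squares problem; a minimal sketch, in which the structure energies $`E_m(R)`$, the angle counts $`N_m(\theta _a)`$ and the asymmetries $`\xi _m`$ are assumed to be supplied as arrays taken from Fig. 2 and Table I:

```python
import numpy as np

def fit_G_and_c(E_m, Z_f_R, N_theta, xi_m):
    """Solve Eq. (4) at one fixed scale R for {G_Z(R, theta_a)} and c_Z(R).
    N_theta has shape (n_structures, n_angle_bins); xi_m are the asymmetries."""
    A = np.column_stack([N_theta, xi_m])          # design matrix of Eq. (4)
    b = np.asarray(E_m, float) - Z_f_R            # left-hand side, pair term removed
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    return coeffs[:-1], coeffs[-1]                # G_Z(R, theta_a), c_Z(R)
```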
We also performed a Monte Carlo search of random $`Z6`$ environments, at the valid $`R=1.7\AA `$ and found the inverted-umbrella was the second-lowest in energy. <sup>5</sup><sup>5</sup>5The uninverted umbrella was lowest in energy, but there exists no extended structure in which all atoms have this coordination shell. Finally, our $`G_6`$ disfavouring $`\theta =`$ $`180^{}`$(see Fig. 2) correctly predicts that Boron in triangular sheets buckles, as found by ab-initio calculations . Empirical tight-binding calculations are the successful intermediate between a fully ab-initio and our classical-potential approach . It would be worthwhile to fit our database in this fashion, especially if longer-range, metallic interactions are found within the clusters, tubes, and sheets. Tight-binding (atomic orbital) total energies can also be expanded in moments related to circuits of three or four atoms. Structure selection and bond angles was studied in this way by Lee et al for selected packings of $`\mathrm{B}_{12}`$ icosahedra. It would be interesting to analytically relate the above-mentioned circuits to the form of classical potentials found by us. Can our potentials, limited to fixed $`Z`$ and $`R`$, be extended to arbitrary Boron structures? First note that our results (Fig. 2) are well fit by a separable form $`G_Z(R,R,\theta )=g_Z(R)^2A_Z(\theta )`$. Provided the results are physical reasonable, it is straightforward to interpolate $`g_Z`$ and $`A_Z`$ with respect to $`R`$ and $`\theta `$, respectively, and (less easily) with respect to $`Z`$. Then one just needs to replace the $`Z`$ definition used in this paper by one of the well-known formulas that depends smoothly on the coordinates, e.g. $`(_jr_{ij}^5)^2/(_jr_{ij}^{10})`$. So finally we would set $`G_Z(r_i,r_j,\theta )g_Z(r_i)g_Z(r_j)A_Z(\theta )`$. in eq. (1), well-defined for general structures (provided $`4Z7`$ for all atoms); this would reduce to our present results (2) within each family of $`Z`$-coordinated uniform structures. An attractive application of classical potentials would be in the liquid phase. Here ab initio molecular dynamics studies (on a 48 atom system) found detailed results for bond-angle distributions and other correlation functions , in good agreement with experiment . Those authors also note that $`Z=6`$ but the icosahedra are broken up (the inverted-umbrella coordination shell is no longer found), contrary to earlier thought. Potentials would offer a chance to model the structure of amorphous $`a`$-B, in which few details are known but (from radial distribution functions) the $`\mathrm{B}_{12}`$ icosahedron is believed to be the motif . It would be interesting to explore more ordered (e.g. micro-quasicrystalline) or less ordered (like the liquid) variants of this picture. Finally, if our method were extended to extract a classical potential from Si and B/Si structures, this could be applied to the icosahedral or cuboctahedral $`\mathrm{B}_{12}`$ clusters which are believed to precipitate in B-doped Si. To conclude, we have presented a novel approach to generating a database for potential fitting which includes so many structures that the angular potential may be fitted rather than assumed. We obtain a reasonable description using potential terms that are local to the coordination shell of each atom, and which could be extended to general Boron structures. 
Modeling of the speculative or poorly-known quasicrystal, amorphous, unsolved crystal, liquid, and Si inclusion forms of Boron would profit from such a potential. We thank M. Teter, D. Allen, and J. Charlesworth for providing code and support in the LDA calculation, A. Quandt and M. Sadd for comments, K. Shirai and P. Kroll for discussions. This work was supported by DOE grant DE-FG02-89ER45405, and used computer facilities of the Cornell Center for Materials Research supported by NSF grant DMR-9632275.
# Sublattice Magnetization and Néel Transition in the 2D Quantum Heisenberg Antiferromagnet ## Abstract We present an analytic expression for the finite temperature sublattice magnetization, at the Josephson scale, in two-dimensional quantum antiferromagnets with short range Néel order. Our expression is able to reproduce both the qualitative behaviour of the phase diagram $`M(T)\times T`$ and the experimental values of the Néel temperature $`T_N`$ for both doped YBa<sub>2</sub>Cu<sub>3</sub>O<sub>6.15</sub> and stoichiometric La<sub>2</sub>CuO<sub>4</sub> compounds. It is the purpose of this work to show that the experimental data for the sublattice magnetization of La<sub>2</sub>CuO<sub>4</sub> and YBa<sub>2</sub>Cu<sub>3</sub>O<sub>6.15</sub> can in fact still be described in the context of a two-dimensional square-lattice quantum Heisenberg antiferromagnet at finite temperatures. Our starting point is the observation that the nature of the spin correlations in the renormalized classical regime is consistent with one of the three possibilities of fig. 1, according to the observation wave vector . Next we argue that the spin dynamics in the intermediate Goldstone region can be described by an effective field theory for the low energy, long wavelength fluctuations of the spin fields about a state with short range Néel order. In fact, since at low $`T`$ the three regions of fig. (1) are well separated, the dynamic scaling hypothesis is valid and a hydrodynamic picture in which short wavelength spin waves follow adiabatically the disordered background is applicable . Finally, we show that the destruction of antiferromagnetic order in real materials can be associated with the collapse of the Goldstone region in fig. (1). We start directly from the continuum formulation of the problem by considering the partition function $$𝒵(\beta )=\int 𝒟n_l\delta (n_l^2-1)\mathrm{exp}(-𝒮(n_l)),$$ (1) where $$𝒮(n_l)=\frac{\rho _0}{2\hbar }\int _0^{\hbar \beta }\mathrm{d}\tau \mathrm{d}^2𝐱\left[(\nabla n_l)^2+\frac{1}{c_0^2}(\partial _\tau n_l)^2\right]$$ with $`n_l=(\sigma ,\stackrel{}{\pi })`$, $`l=1,\mathrm{},N=3`$. Let us next construct an effective field theory for the low frequency, long wavelength fluctuations of the staggered components of spin-fields about a state with short range Néel order. To this end we perform integration of the Fourier components of the fields in (1) with frequency inside momentum shells $`\kappa \le |\stackrel{}{k}|\le \mathrm{\Lambda }`$. The resulting partition function is such that, for large $`N`$, the leading contribution comes from the scale dependent stationary configurations $`\langle n_3\rangle _\kappa `$ and $`\mathrm{i}\lambda _\kappa =m_\kappa ^2`$, solutions of the saddle-point equation $$\langle n_3\rangle _\kappa ^2=\frac{1}{g_0}-\frac{1}{\beta }\sum _{n=-\infty }^{\infty }\int _\kappa ^\mathrm{\Lambda }\frac{\mathrm{d}^2𝐤}{(2\pi )^2}\frac{1}{𝐤^2+\omega _n^2+m_\kappa ^2},$$ (2) where, as usual, $`\lambda `$ is a Lagrange multiplier for the averaged fixed length constraint and $`g_0=N/\rho _0`$. Due to the presence of the IR cutoff $`\kappa `$, the system can be found in two different phases (regimes): ordered (asymptotically free) or disordered (strongly coupled), depending on the size of $`\xi _\kappa =1/\kappa `$ relative to $`\xi `$: smaller (high energies) or larger (low energies). 
In the ordered phase, $`\xi _\kappa \xi `$, $`m_\kappa =0`$ is the solution that minimizes the free energy and the $`2D`$ system is then characterized by a nonvanishing effective sublattice magnetization $`n_3_\kappa 0`$, a divergent effective correlation length $`\xi _{eff}=1/m_\kappa =\mathrm{}`$ and gapless excitations in the spectrum. We can subtract the linear divergence in (2) by the renormalization $`1/g_0=1/g_c+\rho _s/4\pi N`$, where $`g_c=4\pi /\mathrm{\Lambda }`$ and $`\rho _s=(\mathrm{}c/a)\sqrt{S(S+1/2)}/2\sqrt{2}`$ is the bulk spin stiffness. Now, after momentum integration and frequency sum, we obtain the running spin stiffness $$\rho _s(\kappa ,\beta )=\frac{\rho _s}{2}+\frac{N}{\beta }\mathrm{ln}(2\mathrm{sinh}(\beta \kappa /2)).$$ (3) It is clear from the above expression that at long distances, $`k0`$, the system is found disordered and strongly coupled. No sublattice magnetization can be measured. Here, conversely, in order to obtain a finite temperature phase transition in the $`2D`$ system, we will rather fix the scale $`\kappa `$ and study the behaviour of the spin stiffness (3) with the running parameter as being the temperature. For this it suffices to impose the boundary condition $$\rho _s(\kappa ,T=0)=\rho _s.$$ (4) From (4) we conclude that $`\kappa =\rho _s/N`$, which is exactly the inverse Josephson correlation length $`\kappa =\xi _J^1`$. This should not be surprising since the spin stiffness is itself a microscopic, short wavelength quantity defined at the Josephson scale. Now, inserting (4) in (3), the expression for the finite temperature effective sublattice magnetization, $`M(T)\rho _s(\kappa =\rho _s/N,T)`$, becomes $$M(T)=\frac{M_0}{2}+NT\mathrm{ln}\left(2\mathrm{sinh}\left(\frac{M_0}{2NT}\right)\right),$$ (5) with $`M_0=\rho _s`$. As the temperature increases the sublattice magnetization $`M(T)`$ vanishes at a Néel temperature $`T_N`$ given by $$T_N=\frac{M_0}{N\mathrm{ln}2}=\frac{\xi _J^1}{\mathrm{ln}2}.$$ (6) This also corresponds to the value of $`T`$ for which $`\xi =\xi _J`$ and the Goldstone region in fig. (1) collapses. In fig. (2) we plot $`M(T)/M_0`$ against experiment for either La<sub>2</sub>CuO<sub>4</sub> and YBa<sub>2</sub>Cu<sub>3</sub>O<sub>6.15</sub> and for $`N=3`$. To compute $`\rho _s`$ we have used $`S=1/2`$, $`a=3.8\AA `$ and the experimental values of $`c`$. A more detailed analysis of the problem, with a discussion on how the actual deviation of the experimental points from the theoretical curves can be accounted for, can be found in .
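Since Eqs. (5) and (6) are closed forms, they are trivial to evaluate; the sketch below does so in units where $`k_B=1`$ and $`M_0=1`$, with $`N=3`$ as in the comparison with experiment.

```python
import numpy as np

def sublattice_M(T, M0=1.0, N=3):
    """Sublattice magnetization of Eq. (5)."""
    T = np.asarray(T, dtype=float)
    return M0 / 2.0 + N * T * np.log(2.0 * np.sinh(M0 / (2.0 * N * T)))

def T_Neel(M0=1.0, N=3):
    """Neel temperature of Eq. (6)."""
    return M0 / (N * np.log(2.0))

# M(T) approaches M0 at low T and vanishes at T_N:
print(sublattice_M(0.05), sublattice_M(T_Neel()))   # ~1.0 and ~0.0 for M0 = 1, N = 3
```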
# Metallicity of Red Giants in the Galactic Bulge from Near-Infrared Spectroscopy ## 1 Introduction Baade’s Window (BW, $`l=1^{},b=4^{}`$) is the most studied region in the Galactic bulge. In the late 1970s, Whitford (1978) demonstrated that the integrated optical spectrum from BW closely resembles that from bulges of spiral galaxies and from moderate luminosity E and S0 galaxies. At the same time Frogel et al. (1978) found that the near infrared light from these galaxies is dominated by cool giant stars. Shortly thereafter, Blanco and his collaborators determined that BW contains an unusually high percentage of middle and late type M giants compared to other regions of the Galaxy (Blanco et al., 1984). Detailed studies of the M giants in BW revealed that they have photometric and spectroscopic properties significantly different from those of M giants in the field (Frogel & Whitford, 1987; Frogel, 1988; Frogel et al., 1990; Rich, 1983, 1988; Terndrup et al., 1990; McWiliam & Rich, 1994). The accurate determination of stellar metallicities is essential for constraining models of star formation and chemical evolution in the bulge. Frogel et al. (1990) used $`JHK`$ colors and CO and $`\mathrm{H}_2\mathrm{O}`$ photometric indices to determine metallicities of stars at latitudes between $`3^{}`$ and $`12^{}`$ along the minor axis. Tiede et al. (1995) used the relation between the slope of the upper giant branch and \[Fe/H\] (Kuchinski et al., 1995; Kuchinski & Frogel, 1995) to estimate metallicities for the same stars studied by Frogel et al. (1990). Tyson (1991) used Washington photometry on stars in similar fields for the same purpose. These three studies agreed that there is a small metallicity gradient along the minor axis of the bulge, with values ranging from -0.04 dex/deg (Frogel et al., 1990) to -0.09 dex/deg (Tyson, 1991). Finally, Minniti et al. (1995) discussed evidence for a metallicity gradient in the Galactic bulge based on compiled observations of ten fields, eight of them exterior to BW. Stars in most of the inner bulge ($`|b|3^{}`$) can be studied only with near infrared observations because of high reddening and extinction. Recently, Frogel et al. (1999) have studied 11 fields in the inner Galactic bulge using $`JHK`$ photometry. Seven of these fields are on the minor axis; five are at a latitude of $`1.3^{}`$ parallel to the major axis. They estimated the reddening of each field from their CMDs and the mean metallicity of each field with the giant branch slope method. They combined their results with those of Tiede et al. (1995) and derived a gradient of -0.064 $`\pm `$ 0.012 dex/degree in the range $`0.2^{}b10.25^{}`$ along the minor axis. Our main goal is to obtain independent values for the metallicity of the stars in the same inner bulge fields studied by Frogel et al. (1999) but with spectroscopic techniques. We use the strength of three absorption features present in the $`K`$band of cool stars: Na, Ca, and CO. The calibration is based on similar observations of giants in globular clusters by Stephens et al. (2000). ## 2 Observations and Data Reduction ### 2.1 Observations The observations for this paper were obtained on the 4m Blanco telescope at Cerro Tololo Inter-American Observatory (CTIO) during three observing runs with two instruments: the Ohio State InfraRed Imager and Spectrograph (OSIRIS; $`R=1380`$; DePoy et al., 1993), and the CTIO InfraRed Spectrograph (IRS; $`R=1650,4830`$; DePoy et al., 1990). 
OSIRIS used a 256 $`\times `$ 256 NICMOS3 detector and IRS has a 256 $`\times `$ 256 InSb array. Spectral coverage was between 2.19 $`\mu `$m and 2.32 $`\mu `$m with both instruments. Table 1 gives a brief log of the observing runs. Program stars were selected from the color magnitude diagrams of 11 fields interior to -4 degrees and as close as 0.2 degrees to the Galactic Center (Frogel et al., 1999). All of the observed stars are at or near the top of the red giant branch of each field. Based on their location in the CMDs we believe their probability of membership in the bulge is high. Fields are designated as in Frogel et al. (1999). Our sample also includes 14 stars in BW from Frogel & Whitford (1987) and Terndrup et al. (1991). Table 2 lists our final sample of stars. Column 1 gives the star’s name, column 2 the date of observation (year, month, day), column 3 the spectral resolution, column 4 the estimated S/N ratio per pixel in the spectrum, columns 5 and 6 the observed $`K`$ magnitude and $`JK`$ colors (Frogel et al., 1999, and unpublished), columns 7 and 8 the absolute $`K`$magnitude and dereddened $`JK`$ colors (see section 3.2), and columns 9, 10 and 11 the equivalent widths of the Na, Ca, and CO features. The tabulated uncertainties in these three quantities are simply the ratio of the mean continuum to the rms noise present in the defined continuum regions (see section 3.1). ### 2.2 Data Reduction The data acquisition and reduction were similar for all the spectra. Both instruments, OSIRIS and IRS, were used first in imaging mode to acquire the star in the slit. We took several spectra ($``$ 10) with the star stepped along the slit. This was done to estimate the sky levels in the exposures, to compensate for bad pixels, and to aid in the removal of the fringes present in the OSIRIS data which are typically $``$6 % (peak-to-peak) of the continuum. A star of spectral type A or B was observed as close to the object star’s airmass as possible to correct for telluric absorption features. Such stars have no significant spectral features in the wavelength region we observed. After we averaged the multiple spectra of a program star and divided by the average spectrum of the nearby atmospheric standard star, the fringes cancelled to $`<`$1%. OSIRIS and IRS at $`\lambda /\mathrm{\Delta }\lambda `$ = 1650 can cover the entire relevant spectral region (2.19$`\mu `$m – 2.34$`\mu `$m) with a single grating setting; with IRS at $`\lambda /\mathrm{\Delta }\lambda `$ = 4830, however, we observed at three overlapping grating settings (2.191$`\mu `$m – 2.242$`\mu `$m, 2.238$`\mu `$m – 2.290$`\mu `$m, and 2.285$`\mu `$m – 2.337$`\mu `$m) to cover the desired wavelength range. A number of stars were observed at both grating settings as a quality control check. We will discuss results of this check later. We used Image Reduction and Analysis Facility (IRAF<sup>5</sup><sup>5</sup>affiliation: The IRAF software is distributed by the National Optical Astronomy Observatories under contract with the National Science Foundation ) software for data reduction. The reduction process consisted of flat fielding the individual spectra with dome flats and sky subtraction using a sky frame made by a median combination of all data frames of the object. We replaced bad pixels (dead pixels and cosmic ray hits) by an interpolated value computed from neighboring pixels in the dispersion direction. 
The OSIRIS frames were then geometrically transformed to correct the curvature of the slit induced by the grating (maximum correction $``$ 2 pixels). The induced curvature in IRS frames was insignificant ($`<`$ 1 pixel) over the region of the array that was used. Geometric transformations for OSIRIS were derived from night sky emission lines. We extracted the individual spectra along an aperture of 7 pixels using the APSUM package in IRAF and did a further sky subtraction by using regions on either side of the aperture. The final spectrum is an average of the extracted spectra. We divided the spectra of object stars by an early type atmospheric standard star, observed and reduced in the same way, to remove telluric absorption features and multiplied by a 10,000 K blackbody to put the spectra on a $`\mathrm{F}_\lambda `$ scale. The temperature of the blackbody approximately corresponds to the average temperature of our atmospheric standard stars. The maximum effect from the difference between the standard star temperature and the temperature of the adopted 10,000 K blackbody is a $``$1.5% tilt in the continuum slope, which is insignificant to the equivalent width measurements. We used OH air-glow lines (Olivia & Origlia, 1992) to obtain wavelength solutions. For OSIRIS and IRS at R = 1650 spectra, we also included the $`{}_{}{}^{12}\mathrm{CO}`$(2,0) bandheads near zero radial velocity to compute the wavelength calibration since there are no OH air-glow lines present at the red end of the $`K`$-band spectra. OH air-glow lines were present only in the first grating setting (2.191$`\mu `$m – 2.242$`\mu `$m); a second-order wavelength solution was derived there. Since the grating is the same for the three grating settings and the grating angle differences are small, the first and second order terms of the first grating setting solution should be the same for all three observed grating settings. After applying those terms to the second (2.238$`\mu `$m – 2.290$`\mu `$m) and third (2.285$`\mu `$m – 2.337$`\mu `$m) grating settings, only the zero order term is unknown. This zero order term is just a shift that was computed from the lines present in the overlapping regions. We connected the three grating settings, after wavelength calibration, by averaging the overlapping regions. We shifted both the OSIRIS and IRS spectra in wavelength to correct for radial velocity differences; the <sup>12</sup>CO(2,0) bandhead was fixed at 2.293 $`\mu `$m. This shift is needed to ensure the relative accuracy and consistency of the equivalent widths of atomic and molecular features. A sample of the final normalized spectra are shown in Figures 1 on an $`\mathrm{F}_\lambda `$ scale. Only the brightest stars for each field are shown in the Figures. The whole database is available on the anonymous FTP site of the OSU Astronomy Department (ftp to ftp.astronomy.ohio-state.edu, login as anonymous, change to directory pub/solange, and get the file bulge\_spec.tar.gz). ## 3 Analysis ### 3.1 Equivalent Widths of Atomic and Molecular Absorption Features Stellar photospheric absorption features in our spectra were identified from the wavelengths of the lines in Kleinmann & Hall (1986). The strongest features in our data are lines of Na I and Ca I, and the (2,0) bands of <sup>12</sup>CO. The equivalent widths of these features were measured with respect to a continuum level defined as the best first-order fit for bands free of spectral lines near the features. 
The band passes adopted for the features and continuum are given in Table 3 and the features themselves are shown in Figure 1. These bandpasses are identical to those used to measure the giant stars in globular clusters (Stephens et al., 2000). The measured equivalent widths for Na I, Ca I, and <sup>12</sup>CO(2,0) for our program stars are listed in Table 2. To estimate the formal uncertainties, we assume that the noise is dominated by photon statistics and that $`\sigma _{\mathrm{line}}\simeq \sigma _{\mathrm{cont}}`$. The uncertainty in the measurement of each feature (in Å) is given by: $$\sqrt{2\mathrm{N}_{\mathrm{pixels}}}\times \mathrm{dispersion}\times \sigma _{\mathrm{cont}}$$ where $`\mathrm{N}_{\mathrm{pixels}}`$ is the number of pixels contained within the defined feature band, dispersion is measured in Å per pixel, and $`\sigma _{\mathrm{cont}}`$ is the rms noise per pixel of the fitted continuum. Uncertainties listed in Table 2 were calculated using this formula. These values are really lower limits as they provide no estimate for any systematic errors that may exist in the data. More realistic estimates of the uncertainties are computed below, using differences found in measurements taken with different instruments and spectral resolutions. Each feature at each resolution has its own calculated rms noise. The estimated signal-to-noise ratio that appears in Table 2 is the ratio of the continuum level to the standard deviation of each spectrum. The standard deviation of each spectrum is the quadratic average of the calculated rms noise of each feature ($`\sigma _{\mathrm{cont}}=\sqrt{[\sigma _{\mathrm{cont}}^2(\mathrm{Na})+\sigma _{\mathrm{cont}}^2(\mathrm{Ca})+\sigma _{\mathrm{cont}}^2(\mathrm{CO})]/3}`$). There are 13 stars with IRS spectra at both spectral resolutions 1650 & 4830. Also, there are 8 stars with spectra taken with OSIRIS at R=1380 and IRS at R=1650. The average differences and standard deviation of Na, Ca, and CO equivalent widths measured at both resolutions are listed in Table 4. The last column of Table 4 is the average formal error from Table 2. The mean differences of the equivalent widths measured at different resolutions and taken with different instruments are negligible at the one sigma level. Thus, when we have more than one observation we will use the average of the equivalent widths measured at different resolutions. The standard deviations listed in Table 4 are also an indicator of potential systematic uncertainties in the data; we consider these values to be a better estimate of the true uncertainties in the respective equivalent widths than the formal errors. The total uncertainties are computed as the average of the standard deviations listed in Table 4, and are 0.38 for EW(Na), 0.87 for EW(Ca) and 1.7 for EW(CO). ### 3.2 Reddening We estimated the extinction and reddening to each star using the same technique as Frogel et al. (1999). Specifically, we assumed that the color of the upper giant branch in each field was the same as that in BW: $$(JK)_0=-0.113K_0+2.001$$ (1) where $`(JK)_0`$ is the dereddened $`JK`$ color and $`K_0`$ is the dereddened $`K`$ magnitude. Further, we assumed the relation between extinction and reddening found by Mathis (1990): $$A_K=0.618E(JK).$$ (2) The reddening is estimated by calculating the shift in $`K`$ and $`JK`$ along the reddening vector to force each star to fall on the BW giant branch. $`M_{K_0}`$ is computed assuming that all stars are located at a distance of 8 kpc. 
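A short sketch of the two steps just described — the formal equivalent-width uncertainty and the shift along the reddening vector onto the BW giant branch. The closed-form solution for E(J−K) below is one straightforward way to implement relations (1)–(2); since the exact numerical procedure is not spelled out in the text, this should be read as an assumed implementation.

```python
import numpy as np

def ew_uncertainty(n_pixels, dispersion, sigma_cont):
    """Formal EW error (Angstrom): sqrt(2 N_pix) * dispersion * rms continuum noise."""
    return np.sqrt(2.0 * n_pixels) * dispersion * sigma_cont

def deredden(K, JK, slope=-0.113, intercept=2.001, ratio=0.618):
    """Slide the observed (K, J-K) point along the reddening vector
    A_K = 0.618 E(J-K) until it lies on the BW giant branch of Eq. (1)."""
    EJK = (JK - slope * K - intercept) / (1.0 - slope * ratio)  # E(J-K)
    return K - ratio * EJK, JK - EJK, EJK                       # K_0, (J-K)_0, E(J-K)
```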
Dereddened photometric indices, $`M_{K_0}`$ and $`(JK)_0`$, are listed in Table 2. The photometric uncertainties are estimated to be about 0.04 or 0.05 magnitudes (Frogel et al., 1999) for $`K`$ and $`(JK)`$. The uncertainties of the dereddened photometric indices should also include the uncertainties caused by the differential reddening present in each field and the assumption that all stars are located at the same distance. Frogel et al. (1999) found that the amount of scatter due to differential reddening is proportional to the average reddening for each field. Using eq. (3) from Frogel et al. (1999) we estimate that the scatter due to differential reddening implies an scatter in $`(JK)_0`$ of 0.30 mag for the c fields and 0.12 mag for the g fields, and a scatter in $`M_{K_0}`$ of 0.2 mag for the c fields and 0.1 mag for the g fields and BW. The maximum dispersion in magnitude due to spread along the line of sight is 0.2 mag (Frogel et al., 1990) for fields at galactic latitude less than 4. So, the scatter in $`M_{K_0}`$ including both the effects of differential reddening and dispersion along the line of sight is 0.30 mag for the c fields and 0.23 mag for the g fields and BW. ## 4 Results ### 4.1 Dependence on Luminosity Figure 2 shows the dependence of the equivalent widths of Na, Ca, and CO on $`M_{K_0}`$. These plots resemble CMDs, since the EW(Na), EW(Ca), and EW(CO) may depend on both effective temperature and luminosity in addition to metallicity. There is a considerable amount of scatter in Figure 2. We computed the standard deviation of EW(Na), EW(Ca), and EW(CO) in two narrow ranges of $`M_{K_0}`$, listed in Table 5, to minimize any spread that might arise from an $`M_{K_0}`$ dependence of the indices. In all cases the standard deviation is greater than the total uncertainties of the equivalent widths (see Sec. 3.1). Therefore, part of the scatter is real and can be understood as a spread in metallicity in our sample of bulge stars. Note that we assume all stars are of closely similar age so that only \[Fe/H\] differences will cause a spread in color or EW at a given $`M_{K_0}`$. If the observed scatter is the quadratic addition of the scatter due to differences in metallicity and the scatter due to uncertainties in the data, then the scatter due to metallicity is 0.9 Å for EW(Na), 0.4 Å for EW(Ca), and 1.8 Å for EW(CO). There is a statistically significant slope of -0.6 Å/mag in EW(CO) vs. $`M_{K_0}`$. However, this probably reflects the dependence of both EW(CO) and $`M_{K_0}`$ on $`(JK)_0`$ color (see Johnson, 1966; Ramírez et al., 1997). In particular, as giants evolve they increase in luminosity and decrease in effective temperature. As the effective temperature decreases, the CO opacity, and hence the strength of the CO lines, increases. For example, there is a slope of $``$ -0.5 Å/mag in the EW(CO) vs. $`M_{K_0}`$ relation for the field giants, assuming that the giant branch of BW (eq. 1) is similar to the giant branch of field giants. This is very similar to the slope of -0.6 Å/mag we observe. This suggests that the slope we observe in EW(CO) vs. $`M_{K_0}`$ in Figure 2 can be explained by the dependence of both EW(CO) and $`M_{K_0}`$ on effective temperature or $`(JK)_0`$ color. There is no obvious relation between EW(Na) and EW(Ca) with respect to $`M_{K_0}`$. Our previous study of field giants (Ramírez et al., 1997) indicates that such a relation should exist (for the same reasons as for CO). But, the scatter is too high in the graphs of EW(Na) and EW(Ca) vs. 
$`M_{K_0}`$ to find a relationship between those indices. ### 4.2 Metallicity using Globular Cluster Giants Stephens et al. (2000) have established an \[Fe/H\] scale for Galactic globular clusters based on medium resolution (1500-3000) infrared $`K`$ band spectra of the brightest stars in 15 clusters. The technique uses the same absorption features as we use here: Na, Ca, and CO. Indeed, many of their spectra were obtained with the identical instrument setup and on the same nights as the spectra analyzed here. Their calibration is derived from spectra of more than 100 giant stars in 15 Galactic globular clusters which have good optical abundance determinations. The technique is valid for globular cluster giants with $`-1.8<`$\[Fe/H\]$`<-0.1`$ and $`-7<M_{K_0}<-4`$, and has a typical uncertainty of $`\pm 0.1`$ dex. Our sample of stars in the different bulge fields has similar colors and magnitudes to the stars analyzed by Stephens et al. (2000). Figure 3 shows the color-magnitude diagram for the globular cluster stars from Stephens et al. (2000) and our sample of bulge stars with $`M_{K_0}>-7`$. The scatter seen in globular cluster stars is real and arises from sequences of different metallicities, where bluer cluster stars are more metal poor. The bulge stars appear in a line because of the dereddening technique, where we force the stars to lie on the BW giant branch. Figure 4 compares the three spectral indices (EW(Na), EW(Ca), and EW(CO)) with dereddened color, $`(JK)_0`$ for globular cluster and bulge stars. Note that there is considerable overlap of the two populations although the globular cluster stars extend to lower values of equivalent widths while the bulge stars go to higher values. These differences most likely reflect differences in the \[Fe/H\] distributions of the two populations. Stephens et al. (2000) calculated two calibrations for globular cluster metallicities. Solution 1 estimates the metallicity with only the spectral indices EW(Na), EW(Ca), and EW(CO). Solution 2 also incorporates the dereddened $`(JK)`$ color and the absolute $`K`$-band magnitude. Figure 5 shows a comparison between results of solution 1 and 2 for the globular cluster and bulge stars. The two solutions yield indistinguishable results for the globular cluster stars, but for stars in the bulge, solution 2 gives higher metallicities for \[Fe/H\]$`>-0.2`$. Both solutions are extrapolations for \[Fe/H\] values higher than -0.15 . Nevertheless, we would like to understand which solution might be better to use as an extrapolation to the higher metallicities. At higher metallicities the EW(CO) reaches a plateau and becomes insensitive to changes in \[Fe/H\] (Stephens et al., 2000). Since solution 1 has a stronger dependence on the EW(CO) than solution 2, solution 1 is expected to be less sensitive to changes in \[Fe/H\] at higher metallicities. The analysis of Stephens et al. (2000) also shows that at higher metallicities $`M_{K_0}`$ accounts for more and more of the variation in \[Fe/H\]. For this reason we feel that solution 2 is a better indicator of metallicity, and is the one we applied to our sample of bulge stars. We applied solution 2 to the individual stars of our sample with $`M_{K_0}\ge -7`$, corresponding to the brightest cluster stars. 
If we consider a typical bulge star of $`M_{K_0}=6.5`$, $`(JK)_0`$ = 1.1, EW(Na) = 4.0 Å, EW(Ca) = 3.0 Å, EW(CO) = 21.9 Å, the total uncertainties for the equivalent widths (see section 3.1), and the scatter in the photometric indices due to differential reddening and dispersion through the line of sight (see section 3.2), we compute a typical error in \[Fe/H\] of $`\pm `$0.12 dex for individual stars in the g fields and BW and $`\pm `$0.23 dex for individual stars in the c fields. The typical error in \[Fe/H\] is almost doubled for the stars in the c fields, because differential reddening is higher in these very low latitude fields and the uncertainty in the $`(JK)_0`$ color becomes important. We compute a mean value of \[Fe/H\] for each field by averaging the results of the individual stars. The mean \[Fe/H\], the standard deviation and the error in the mean for each field are listed in Table 6. The error in the mean is the standard deviation divided by the square root of the number of stars in each field. ## 5 Discussion ### 5.1 Comparison to Slope of Giant Branch method. Frogel et al. (1999) used the slope of the giant branch (GB) to estimate the mean metallicities of the same c and g fields of our sample. For BW we used the slope of the GB result from Tiede et al. (1995). In Figure 6 we have plotted slope of the GB results against ours. The agreement is very good in all the fields, except for g3-1.3. The mean average difference of both techniques is $`0.03\pm 0.15`$ dex, entirely consistent with the combined uncertainties of the two techniques. ### 5.2 Metallicity Gradients in the Inner Bulge. We first explore the possible existence of a metallicity gradient along the major-axis of the inner bulge including all the fields with galactic latitude, $`b=1.3^{}`$. Figure 7 shows our results for g0-1.3, g1-1.3, g1-1.3, g2-1.3, and g4-1.3 fields plotted against galactic longitude, $`l`$. The line is an error weighted least-squares fit to the points. We find that there might be a small metallicity gradient along the major axis since the slope of the line is 0.017 $`\pm `$ 0.011 dex/degree. In Figure 7, we have also plotted the metallicity gradient along the major axis obtained by Frogel et al. (1999) (dashed-line). Note that their result and ours are very close. Next we explore the existence of a metallicity gradient along the minor-axis of the inner bulge including all the fields with galactic longitude, $`l0^{}`$. Figure 8 shows our results for c, g0-1.3, g0-1.8, g0-2.3, g0-2.8, and BW fields plotted against galactic latitude, $`b`$. The line is an error weighted least-squares fit to the points. There is no evidence for a metallicity gradient along the minor axis; the slope of the fit is -0.012 $`\pm `$ 0.027 dex/deg. This result seems to be in disagreement with earlier results, in which a metallicity gradient along the minor axis ranges from -0.04 dex/deg (Frogel et al., 1990) to -0.09 dex/deg (Tyson, 1991). However, if we consider only the metallicities obtained by Frogel et al. (1999) for the c, g0-1.3, g0-1.8, g0-2.3, g0-2.8, and BW fields we obtain a fit with a slope of 0.001 $`\pm `$ 0.021, in close agreement with our spectroscopic result. The minor axis metallicity gradient found in earlier studies arises when fields at higher galactic latitude are also included. In Figure 8, we have plotted as a dashed line the metallicity gradient along the minor axis obtained by Frogel et al. (1999) including all fields with $`l10.5^{}`$. 
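The error-weighted straight-line fits quoted above are standard; a minimal sketch follows, where the field coordinates, mean \[Fe/H\] values and errors in the mean would be taken from Table 6 and are passed in as plain arrays.

```python
import numpy as np

def weighted_line_fit(x, y, sigma_y):
    """Error-weighted least-squares fit of y = a + b*x; returns a, b and sigma_b."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    w = 1.0 / np.asarray(sigma_y, float) ** 2
    S, Sx, Sy = w.sum(), (w * x).sum(), (w * y).sum()
    Sxx, Sxy = (w * x * x).sum(), (w * x * y).sum()
    delta = S * Sxx - Sx ** 2
    a = (Sxx * Sy - Sx * Sxy) / delta        # intercept
    b = (S * Sxy - Sx * Sy) / delta          # gradient (dex/degree here)
    return a, b, np.sqrt(S / delta)          # sqrt(S/delta) is the error on b
```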
In Figure 9, we have plotted the location of the observed fields with respect to a 3.5$`\mu `$m COBE/DIRBE outline of the Galactic bulge (Weiland et al., 1994). When only fields located inside the COBE/DIRBE outline are considered, no metallicity gradient is found. The metallicity gradient arises only when fields located outside the COBE/DIRBE outline ($`R>`$ 0.6 kpc) are included. Metallicity gradients in the galactic bulge have recently been predicted by the theoretical models of Mollá et al. (2000). Mollá et al. present a multiphase evolution model which assumes a dissipative collapse of the gas from a protogalaxy or halo to form the bulge and the disk. They predict a metallicity gradient of -0.4 dex/kpc in the bulge region 0.5 $`R`$ 1.5 kpc, which is in good agreement with the metallicity gradient found by Frogel et al. (1999). But, Mollá et al. also predict a steeper gradient, -0.8 dex/kpc, in the inner bulge at $`R<`$ 0.5 kpc, which is not observed in our data or in the Frogel et al. (1999) data. We find the metallicity gradient becomes flat at the scale height were the infrared light becomes dominant in the Galactic bulge ($`R<`$ 0.6 kpc). Mollá et al. (2000) assume that a core population, which is metal rich and supported by rotation, dominates the stellar population of the inner bulge. The existence of such a metal rich population in the inner bulge is not supported by recent measurements of stellar iron abundances in the Galactic Center by Ramírez et al. (2000), who found \[Fe/H\] near solar for a sample of late supergiant and giant stars. ### 5.3 \[Fe/H\] metallicity distribution In section 4.1 we found that the spread in EW(Na, Ca, CO) at a given magnitude was consistently higher than expected from measurement uncertainties alone. A likely explanation for the observed spread in the equivalent width values is that it arises from an intrinsic spread in \[Fe/H\] for the stars. We compute the \[Fe/H\] metallicity distribution considering all stars in our sample with $`M_{K_0}7`$ but excluding the c field stars, because of their large individual uncertainties in \[Fe/H\], and including stars along the major axis, for a total of 72 stars. The mean \[Fe/H\] for the inner bulge is -0.21 dex with a full width dispersion of 0.30 dex. Since the average error of our \[Fe/H\] results is $`\pm `$0.12 per star (see section 4.2), the dispersion observed in the metallicity distribution is real. These values are consistent with theoretical results from Mollá et al. (2000), who predict a mean \[Fe/H\] of -0.20 with a dispersion of 0.40 dex for one bulge population recipe. If we consider a typical bulge star of $`M_{K_0}=6.5`$, $`(JK)_0`$ = 1.1, EW(Na) = 4.0 Å, EW(Ca) = 3.0 Å, EW(CO) = 21.9 Å, compute \[Fe/H\] using solution 2 of Stephens et al. (2000), and determine the difference in \[Fe/H\] adding and subtracting the scatter in the equivalent widths due to metallicity (see Sec. 4.1), we obtain a difference of $`\pm `$ 0.26. This number is very similar to the dispersion of the \[Fe/H\] distribution. We conclude that the scatter seen in the equivalent widths is real and can be explained by the dispersion observed in the \[Fe/H\] distribution. We now compare the metallicity distribution of our sample of 72 stars in the inner bulge with the metallicity distribution derived for BW by Sadler et al. (1996) in Figure 10. Their mean \[Fe/H\] for 262 stars in BW is -0.15 dex with a dispersion of 0.44 dex. This is quite similar to our mean of -0.21 dex with a dispersion of 0.30 dex. ## 6 Conclusions. 
We present $`K`$band spectra of giant stars in fields interior to $``$4 degrees and as close as 0.2 degrees of the Galactic Center. We measure equivalent widths of the strongest features present in the $`K`$band spectra, EW(Na), EW(Ca), and EW(CO), and also dereddened photometric indices $`M_{K_0}`$ and $`(JK)_0`$. We use these indices to compute \[Fe/H\] for the individual stars, using the calibration derived for globular clusters by Stephens et al. (2000). The mean \[Fe/H\] for each field is in good agreement with the results obtained with the slope of the giant branch method (Frogel et al., 1999). We find no evidence for a metallicity gradient along the minor or major axis of the bulge for $`R<`$ 0.6 kpc. We also show that metallicity gradients found in earlier works only arise when fields located at larger galactic radii are included. Those higher galactic radii fields are located outside the infrared bulge defined by the COBE/DIRBE outline. We compute the \[Fe/H\] distribution for the inner bulge, finding a mean value of -0.21 dex with a full width dispersion of 0.30 dex, which are very similar to the mean and width of the BW’s \[Fe/H\] distribution from Sadler et al. (1996) and to the theoretical distribution of a bulge formed by dissipative collapse (Mollá et al., 2000). S.V.R. gratefully acknowledges support from a Gemini Fellowship (grant # GF-1003-97 from the Association of Universities for Research in Astronomy, Inc., under NSF cooperative agreement AST-8947990 and from Fundación Andes under project C-12984), and from an Ohio State Presidential Fellowship. We thank the CTIO staff for helpful support. J.A.F. thanks the former director of the Carnegie Observatories, Dr. Leonard Searle, for a Visiting Research Associateship without which this program could not have gotten started. Finally, J.A.F. notes that NSF declined to provide support for this research program.
no-problem/0003/cond-mat0003428.html
ar5iv
text
# How to create Alice string (half-quantum vortex) in a vector Bose-Einstein condensate. \[ ## Abstract We suggest a procedure for preparing the vortex with $`N=1/2`$ winding number – the counterpart of the Alice string – in Bose–Einstein condensates. \] Vortices with fractional winding number can exist in different condensed matter systems, see review paper . Observation of atomic Bose-condensates with multi-component order parameter in laser-manipulated traps opens the possibility to create half-quantum vortices there. We discuss the $`N=1/2`$ vortices in the Bose-condensate with the hyperfine spin $`F=1`$, and also in the mixture of two Bose-condensates. The order parameter of the $`F=1`$ Bose-condensate consists of 3 complex components according to the number of the projections $`M=(+1,0,-1)`$. These components can be organized to form the complex vector $`𝐚`$: $$\mathrm{\Psi }_\nu =\left(\begin{array}{c}\mathrm{\Psi }_{+1}\\ \mathrm{\Psi }_0\\ \mathrm{\Psi }_{-1}\end{array}\right)=\left(\begin{array}{c}\frac{a_x+ia_y}{\sqrt{2}}\\ a_z\\ \frac{a_x-ia_y}{\sqrt{2}}\end{array}\right).$$ (1) There are two symmetrically distinct phases of the $`F=1`$ Bose-condensates: (i) The chiral or ferromagnetic state occurs when the scattering length $`a_2`$ in the scattering channel of two atoms with the total spin 2 is less than that with the total spin zero, $`a_2<a_0`$ . It is described by the complex vector $$𝐚=f(\widehat{𝐦}+i\widehat{𝐧}),$$ (2) where $`\widehat{𝐦}`$ and $`\widehat{𝐧}`$ are mutually orthogonal unit vectors, with $`\widehat{𝐥}=\widehat{𝐦}\times \widehat{𝐧}`$ being the direction of the spontaneous momentum $`𝐅`$ of the Bose condensate, which violates the parity and time reversal symmetry; $`f`$ is the amplitude of the order parameter. (ii) The polar or superfluid nematic state, which occurs for $`a_2>a_0`$, is described by a real vector up to the phase factor $$𝐚=f\widehat{𝐝}e^{i\mathrm{\Phi }},$$ (3) where $`\widehat{𝐝}`$ is a real unit vector. The direction of the vector $`\widehat{𝐝}`$ can be inverted by the change of the phase $`\mathrm{\Phi }\to \mathrm{\Phi }+\pi `$. That is why phase-insensitive properties of the polar state are also insensitive to the reversal of the direction of $`\widehat{𝐝}`$. In this respect $`\widehat{𝐝}`$ is similar to the director in nematic liquid crystals. The chiral state (i) corresponds to the orbital part of the matrix order parameter in superfluid <sup>3</sup>He-A, while the nematic state (ii) corresponds to the spin part of the same <sup>3</sup>He-A order parameter. The order parameter matrix of <sup>3</sup>He-A is the product of two vector order parameters: $`A_{\alpha k}\propto a_\alpha ^{\mathrm{nematic}}a_k^{\mathrm{chiral}}`$. That is why each of the two states shares some definite properties of superfluid <sup>3</sup>He-A. In particular the chiral state (i) displays continuous vorticity , which was heavily investigated in superfluid <sup>3</sup>He-A (see and reviews ). An isolated continuous vortex is the so-called Anderson-Toulouse-Chechetkin vortex. The smooth core of the vortex represents the skyrmion, in which the $`\widehat{𝐥}`$-vector sweeps the whole unit sphere. Outside the soft core the $`\widehat{𝐥}`$-vector is uniform, while the order parameter phase has finite winding. In <sup>3</sup>He-A, and thus in the $`F=1`$ Bose-condensate too, it is a $`4\pi `$ winding around the soft core, i.e. the continuous vortex has winding number $`N=2`$. 
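As an aside, the physical distinction between the chiral state (2) and the polar state (3) can be checked numerically: with the standard spin-1 matrices and the mapping of Eq. (1), the chiral state comes out fully spin-polarized (the magnitude of the spin expectation value equals the density), while the polar state carries no net spin. The sketch below (Python/NumPy) compares only the magnitude, since the orientation of the spin relative to $`\widehat{𝐥}`$ depends on phase conventions; the helper names are ours.

```python
import numpy as np

# Spin-1 matrices in the basis (M = +1, 0, -1).
sq2 = np.sqrt(2.0)
Fx = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]]) / sq2
Fy = np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]]) / sq2
Fz = np.diag([1.0, 0.0, -1.0])

def spinor_from_vector(a):
    """Eq. (1): map the complex vector (a_x, a_y, a_z) to (Psi_+1, Psi_0, Psi_-1)."""
    ax, ay, az = a
    return np.array([(ax + 1j * ay) / sq2, az, (ax - 1j * ay) / sq2])

def spin_expectation(psi):
    return np.real(np.array([psi.conj() @ F @ psi for F in (Fx, Fy, Fz)]))

f = 1.0
a_chiral = f * np.array([1.0, 1.0j, 0.0])     # Eq. (2) with m-hat = x, n-hat = y
a_polar  = f * np.array([1.0, 0.0, 0.0])      # Eq. (3) with d-hat = x, Phi = 0

for name, a in [("chiral", a_chiral), ("polar", a_polar)]:
    psi = spinor_from_vector(a)
    F = spin_expectation(psi)
    n = np.vdot(psi, psi).real                # particle density |Psi|^2
    print(f"{name:7s}: |<F>| / |Psi|^2 = {np.linalg.norm(F) / n:.3f}")
# chiral -> 1.000 (fully polarized), polar -> 0.000 (no net spin)
```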
This continuous $`N=2`$ vortex can also be represented as a pair of the so-called continuous Mermin-Ho vortices , each having the winding number $`N=1`$. The $`\widehat{𝐥}`$-vector in the Mermin-Ho vortex covers only half of the unit sphere and thus is not uniform outside the soft core. Such a half-skyrmion is also called the meron. An optical method to create the meron – the Mermin-Ho vortex – in the $`F=1`$ Bose-condensate has been recently discussed in Ref. . For the spin-$`1/2`$ Bose-condensates the order parameter is a spinor, which represents the “half of the vector”. That is why the continuous Anderson-Toulouse-Chechetkin vortex (in which $`\widehat{𝐥}`$ sweeps the whole unit sphere) has half the winding number in such condensates, i.e. the skyrmion is the $`N=1`$ continuous vortex . The spinorial order parameter is the counterpart of the order parameter in the Standard Model of the electroweak interactions, which is the spinor Higgs field transforming under the $`SU(2)`$ symmetry group. That is why the $`N=1`$ continuous vortex in the spin-$`1/2`$ Bose-condensates simulates the continuous electroweak string in the Standard Model. The Higgs field in the continuous electroweak string (and thus the $`N=1`$ continuous vortex in the spin-$`1/2`$ Bose-condensate) has the following distribution of the order parameter : $$\left(\begin{array}{c}\mathrm{\Psi }_{\uparrow }\\ \mathrm{\Psi }_{\downarrow }\end{array}\right)=f\left(\begin{array}{c}e^{i\varphi }\mathrm{cos}\frac{\theta (r)}{2}\\ \mathrm{sin}\frac{\theta (r)}{2}\end{array}\right),\widehat{𝐥}=\widehat{𝐳}\mathrm{cos}\theta (r)+\widehat{𝐫}\mathrm{sin}\theta (r).$$ (4) Here $`(z,r,\varphi )`$ are the coordinates of the cylindrical system; $`\theta (0)=\pi `$; and $`\theta (\mathrm{\infty })=0`$. Note that the meron configuration in such a system, with $`\theta (0)=\pi `$ and $`\theta (\mathrm{\infty })=\pi /2`$, would have $`N=1/2`$ winding number. The $`N=1`$ vortices with the order parameter described by Eq.(4) have been recently generated in the Bose-condensate with two internal levels , following the proposal elaborated in Ref. . Though these two internal levels are not related by an exact $`SU(2)`$ symmetry, under some conditions there is an approximate $`SU(2)`$ symmetry, and the $`N=1`$ vortex does represent a skyrmion. This vortex has a smooth (soft) core, whose size is essentially larger than that of the conventional vortex core, which has a dimension of the order of the coherence length. Such enhancement of the core size allowed for the observation of the $`N=1`$ vortex-skyrmion by optical methods . From Eq.(4) it follows that this continuous $`N=1`$ vortex can also be represented as the vortex in the $`|\uparrow \rangle `$ component whose core is filled by the $`|\downarrow \rangle `$ component. The nematic state (ii) may contain a no less exotic topological object – the topologically stable $`N=1/2`$ vortex – which has so far avoided experimental identification in superfluid <sup>3</sup>He-A. The $`N=1/2`$ vortex is a combination of the $`\pi `$-vortex in the phase $`\mathrm{\Phi }`$ and a $`\pi `$-disclination in the nematic order parameter vector $`\widehat{𝐝}`$: $$𝐚=f(r)\left(\widehat{𝐱}\mathrm{cos}\frac{\varphi }{2}+\widehat{𝐲}\mathrm{sin}\frac{\varphi }{2}\right)e^{i\varphi /2}.$$ (5) The change of sign of the vector $`\widehat{𝐝}`$ when circling around the core is compensated by the change of sign of the exponent $`e^{i\mathrm{\Phi }}=e^{i\varphi /2}`$, so that the whole order parameter is smoothly connected after circumnavigating. 
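The consistency of Eq. (5) with the spin-projection form discussed in the next paragraph is easy to verify numerically: mapping Eq. (5) through Eq. (1) must give a $`2\pi `$ phase winding in the $`M=+1`$ component, none in the $`M=-1`$ component, an empty $`M=0`$ component, and a single-valued order parameter even though $`\widehat{𝐝}`$ and $`e^{i\mathrm{\Phi }}`$ separately change sign after one loop. A minimal sketch of that check (Python/NumPy; helper names are ours):

```python
import numpy as np

sq2 = np.sqrt(2.0)

def spinor_from_vector(a):
    """Eq. (1): (a_x, a_y, a_z) -> (Psi_+1, Psi_0, Psi_-1)."""
    ax, ay, az = a
    return np.array([(ax + 1j * ay) / sq2, az, (ax - 1j * ay) / sq2])

def half_quantum_vortex(phi, f=1.0):
    """Eq. (5): a = f (x cos(phi/2) + y sin(phi/2)) exp(i phi/2)."""
    d = np.array([np.cos(phi / 2), np.sin(phi / 2), 0.0])
    return f * d * np.exp(1j * phi / 2)

phi = np.linspace(0.0, 2 * np.pi, 401)
psi = np.array([spinor_from_vector(half_quantum_vortex(p)) for p in phi])

# Accumulated phase winding of each spin component around one loop.
winding = [np.sum(np.diff(np.unwrap(np.angle(psi[:, m])))) / (2 * np.pi)
           if np.abs(psi[:, m]).max() > 1e-12 else 0.0
           for m in range(3)]
print("windings (M = +1, 0, -1):", np.round(winding, 3))   # -> [1, 0, 0]

# Single-valuedness: the spinor returns to itself after phi -> phi + 2*pi,
# although d-hat and exp(i*Phi) each change sign separately.
print("single-valued:", np.allclose(psi[0], psi[-1]))        # -> True
```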
This $`N=1/2`$ vortex is the counterpart of the so-called Alice string considered in particle physics: a particle which moves around an Alice string flips its charge or parity. In a similar manner a quasiparticle adiabatically moving around the vortex in <sup>3</sup>He-A or in the Bose condensate with $`F=1`$ in the nematic state (ii) finds its spin or its momentum projection $`M`$ reversed with respect to the fixed environment. This is because the $`\widehat{𝐝}`$-vector, which plays the role of the quantization axis for the spin of a quasiparticle, rotates by $`\pi `$ around the vortex. As a consequence, several phenomena (e.g. the global Aharonov-Bohm effect) discussed in the particle physics literature correspond to effects in <sup>3</sup>He-A physics , which can be extended to the atomic Bose condensates. In high-temperature superconductors with a nontrivial order parameter, the half-quantum vortex was identified as being attached to the intersection line of three grain boundaries , as suggested in . This $`N=1/2`$ vortex has been observed via the fractional magnetic flux it generates. In the spin projection representation the order parameter asymptote in the $`N=1/2`$ vortex in the nematic phase is $$\mathrm{\Psi }_\nu \propto e^{i\varphi /2}\left(\begin{array}{c}e^{i\varphi /2}\\ 0\\ e^{-i\varphi /2}\end{array}\right)=\left(\begin{array}{c}e^{i\varphi }\\ 0\\ 1\end{array}\right).$$ (6) This means that the $`N=1/2`$ vortex can be represented as a vortex in the spin-up component $`|\uparrow \rangle `$, while the spin-down component $`|\downarrow \rangle `$ is vortex-free. Such a representation of the half-quantum vortex in terms of the regular $`N=1`$ vortex in one of the components of the order parameter occurs also in <sup>3</sup>He-A. The general form of the order parameter in the half-quantum vortex, which includes also the core structure, is $$\mathrm{\Psi }_\nu =\left(\begin{array}{c}f_1(r)e^{i\varphi }\\ 0\\ f_2(r)\end{array}\right),f_1(0)=0,|f_1(\mathrm{\infty })|=|f_2(\mathrm{\infty })|.$$ (7) Note that, since the $`M=0`$ component in Eq.(6) is zero, the half-quantum vortex can be generated also in the Bose-condensate with two internal degrees of freedom, explored in Ref. . The necessary condition for that is that in the equilibrium state of such a condensate both components must be equally populated. This is required by the asymptote of Eq.(7), where both components have the same amplitude. If the amplitudes are not exactly equal, the half-quantum vortex acquires a tail in the form of a domain wall terminating on the vortex. The same happens in <sup>3</sup>He-A, where the half-quantum vortex is the termination line of a topological soliton. Eq.(6) may suggest a way to generate a half-quantum vortex in an alkali Bose–Einstein condensate, simply by combining the successful idea for producing skyrmions with the proposal for making scalar vortices by the effect of light forces. Let us start from the homogeneous state $$\mathrm{\Psi }_\nu (\mathrm{initial})=f\left(\begin{array}{c}e^{i\alpha }\\ 0\\ e^{i\beta }\end{array}\right),$$ (8) which corresponds to the phase $`\mathrm{\Phi }=(\alpha +\beta )/2`$ and the nematic vector $`\widehat{𝐝}=\widehat{𝐱}\mathrm{cos}(\alpha -\beta )/2+\widehat{𝐲}\mathrm{sin}(\alpha -\beta )/2`$. A light spot shall illuminate the condensate with an intensity distribution $`I`$ that draws a half-quantized vortex, $$I=I_0e^{i\varphi /2}.$$ (9) The light should be a short pulse and it should be non–resonant with respect to atomic transition frequencies. 
Simultaneously, uniform microwave radiation shall penetrate the condensate. The radiation should be far–detuned from the transition frequency between the spin components $`|\uparrow \rangle `$ and $`|\downarrow \rangle `$ of the condensate, such that it only causes shifts in the relative phases between $`|\uparrow \rangle `$ and $`|\downarrow \rangle `$ and no population transfer. The light spot will imprint an optical mask onto the homogeneous microwave field, due to the optical Stark effect. Therefore, the generated relative phase shift will follow the half-quantum vortex drawn by the light spot. Simultaneously, the condensate gains an overall scalar phase factor, caused by the intensity kick of the light. This factor should exactly compensate the phase mismatch between the components that is left from the optically assisted microwave effect. Of course, for this the intensities of the light and microwave radiation should be properly adjusted, but this could be arranged. In this simple way, an Alice string can be created in a multicomponent Bose–Einstein condensate of alkali atoms. Acknowledgements. We thank Matti Krusius and Brian Anderson for fruitful discussions. The work of GEV was supported in part by the Russian Foundation for Fundamental Research and by the European Science Foundation. UL was supported by the Alexander von Humboldt Foundation and the Göran Gustafsson Stiftelse.
no-problem/0003/astro-ph0003141.html
ar5iv
text
# Hard X-ray emission from the galaxy cluster A2256 ## 1 Introduction Nonthermal hard X-ray (HXR) radiation has been detected for the first time in the Coma cluster by BeppoSAX (Fusco-Femiano et al. 1999) and RXTE (Rephaeli, Gruber & Blanco 1999), while marginal evidence is reported for A2199 (Kaastra et al. 1999). These observations are only first steps towards assessing the general existence of this new component in the X-ray spectra of clusters of galaxies. The search for nonthermal emission in more clusters is of high importance as it will allow to derive additional informations on the physical conditions of the intracluster medium (ICM) environment, which cannot be obtained by studying the thermal plasma emission only. Various interpretations of the HXR emission have been presented since its discovery in the Coma cluster spectrum. The most direct explanation is inverse Compton (IC) scattering of cosmic microwave background (CMB) photons by the relativistic electrons responsible of the extended radio emission present in the central region of Coma (Willson 1970). The combined radio synchrotron and IC HXR fluxes (e.g., Rephaeli 1979) allow to estimate a volume-averaged intracluster magnetic field of $`0.16\mu G`$ (Fusco-Femiano et al. 1999). One of the problems with the IC model is that this value of the magnetic field in the ICM seems to be at odd with the value determined from Faraday rotation of polarized radiation toward the head tail radio galaxy NGC4869 that gives a line-of-sight $`B6\mu G`$ (Feretti et al. 1995), and with the equipartition value in the radio halo, which is $`0.4h_{50}^{2/7}\mu G`$ (Giovannini et al. 1993). We note, however, that Feretti et al. (1995) also inferred the existence of a weaker and larger scale magnetic field component in the range of $`0.10.2h_{50}^{1/2}\mu G`$, and therefore the $`6\mu G`$ field could be local. A low average magnetic field is also consistent with the model developed by Brunetti et al. (1999), which predicts a magnetic field strenght decreasing with the distance from the cluster centre. An alternative explanation is nonthermal bremsstrahlung (NTB) emission from suprathermal electrons currently accelerated at energies greater than $``$10 keV by shocks or turbulence (Kaastra et al. 1998; Ensslin, Lieu, & Biermann 1999; Sarazin & Kempner 1999). Another and more trivial possibility is that the HXR radiation is due to a hard X-ray source present in the external regions of the field of view of the BeppoSAX PDS (FWHM=$`1.3^{}`$, hexagonal), as for example a highly obscured Seyfert 2 galaxy like the Circinus galaxy (Matt et al. 1999). In the central region ($`30^{}`$ in radius), the MECS image does not show evidence of this kind of sources (Fusco-Femiano 1999). Hovewer, the detection of a hard nonthermal component in other clusters should strongly reduce the probability of this last interpretation. In this letter we present the results of a long observation of A2256, exploiting the unique capabilities of the PDS, onboard BeppoSAX , to search for HXR emission (Frontera et al. 1997). The cluster was also observed with the MECS, an imaging instrument working in the 1.5-10 keV energy range (Boella et al. 1997). The galaxy cluster A2256 is similar to the Coma cluster in many X-ray properties, as luminosity and presence of substructures. The ROSAT PSPC observations showed that A2256 is a double X-ray cluster (Briel et al. 
1991), suggesting that a subcluster may be merging with a larger cluster, although there is no strong evidence in the temperature map in favour of an advanced merger (Markevitch & Vikhlinin 1997), as it is for Coma. The average gas temperature is $``$7 keV, as measured by several X-ray instruments (David et al. 1993; Hatsukade 1989; Markevitch & Vikhlinin 1997; Henriksen 1999). Both clusters show a radio halo in the central and periferal regions. However, the radio emission from A2256 is notably complex. The region around the cluster centre is occupied by an unusual concentration of radio galaxies: at least five discrete sources have been identified with cluster galaxies, but there also two extended emission regions which have linear sizes $``$1 Mpc (Bridle & Formalont 1976; Bridle et al. 1979; Rottgering et al. 1994). Throughout the Letter we assume a Hubble constant of $`H_o=50kms^1Mpc^1h_{50}`$ and $`q_0=1/2`$, so that an angular distance of $`1^{}`$ corresponds to 92 kpc ($`z_{A2256}=0.0581`$; Struble & Rood 1991). Quoted confidence intervals are at $`90\%`$ level, if not otherwise specified. ## 2 PDS and MECS Data Reduction The total effective exposure time was $`1.3\times 10^5`$ sec for the MECS and $`7.1\times 10^4`$ sec for the PDS in the two observations of February 1998 and February 1999. The observed count rate for A2256 was 0.497$`\pm `$0.002 cts/s for the 2 MECS units and 0.27$`\pm `$0.04 cts/s for the PDS instrument. Since the source is rather faint in the PDS band (approximately 1.5 mCrab in 15-150 keV) a careful check of the background subtraction must be performed. The background sampling was performed using the default rocking law of the two PDS collimators that samples ON, +OFF, ON, -OFF fields for each collimator with a dwell time of 96” (Frontera et al. 1997). When one collimator is pointing ON source, the other collimator is pointing toward one of the two OFF positions. We used the standard procedure to obtain PDS spectra (Dal Fiume et al. 1997), which consists in extracting one accumulated spectrum for each unit for each collimator position. We then checked the two independently accumulated background spectra in the two different +/-OFF sky directions, offset by 210’ with respect to the on-axis pointing direction. The comparison between the two accumulated backgrounds (\[+OFF\] - \[-OFF\]) shows a difference with a marginal excess below 30 keV in the \[+OFF\] pointing. This excess is much lower than the signal from the source, but it must not be neglected. The total excess in the first two equalized energy channels (15-33.5 keV) is $`0.048\pm 0.024ctss^1`$, i.e. approximately 2$`\sigma `$. This concentration in only the lowest energy channels implies that the excess is likely due to contamination by a point source rather than to a statistical fluctuation. The total source spectrum was therefore obtained using only the uncontaminated background accumulated pointing at the \[-OFF\] field. Hovewer, in Section 3 we report the confidence level of the nonthermal emission in excess of the thermal one considering the average of the two background measurements. The background level of the PDS is the lowest obtained thus far with high-energy instruments on board satellites thanks to the equatorial orbit and is very stable again thanks to the favorable orbit. No modeling of the time variation of the background is required. MECS data preparation and linearization was performed using the Saxdas package under Ftools environment. 
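As a small check of the angular scale adopted at the end of the Introduction, which is what converts extraction radii into physical sizes in the next paragraph, the sketch below evaluates the standard Mattig relation for the adopted $`H_o`$ and $`q_0`$ (a matter-dominated Friedmann model; the variable names are ours).

```python
import numpy as np

c = 2.998e5          # speed of light, km/s
H0 = 50.0            # km/s/Mpc, as adopted in the text
q0 = 0.5
z = 0.0581           # A2256 (Struble & Rood 1991)

# Mattig relation for the luminosity distance, then D_A = D_L / (1+z)^2.
D_L = (c / (H0 * q0**2)) * (q0 * z + (q0 - 1.0) * (np.sqrt(1.0 + 2.0 * q0 * z) - 1.0))
D_A = D_L / (1.0 + z)**2                          # angular-diameter distance, Mpc

kpc_per_arcmin = D_A * 1.0e3 * np.pi / (180.0 * 60.0)
print(f"1 arcmin = {kpc_per_arcmin:.0f} kpc")                 # ~92 kpc, as quoted
print(f"8 arcmin = {8 * kpc_per_arcmin / 1e3:.2f} Mpc")       # ~0.7-0.8 Mpc
```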
We have extracted a MECS spectrum from a circular region of $`8^{}`$ radius (corresponding to about 0.8 Mpc) centered on the primary emission peak. From the ROSAT PSPC radial profile, we estimate that about 70% of the total cluster emission falls within this radius. The background subtraction has been performed using spectra extracted from blank sky event files in the same region of the detector as the source. A numerical relative normalization factor between the two instruments has been included in the fitting procedure (see next Section) to account for: a) the fact that the MECS spectrum includes emission out to $`\sim `$0.8 Mpc from the X-ray peak, while the PDS field of view (1.3 degrees FWHM) covers the entire emission from the cluster; b) the slight mismatch in the absolute flux calibration of the MECS and PDS response matrices employed (September 1997 release; Fiore, Guainazzi & Grandi 1999); c) the vignetting in the PDS instrument (the MECS vignetting is included in the response matrix). The estimated normalization factor is $`\sim `$1.1. In the fitting procedure we allow this factor to vary within 15% from the above value to account for the uncertainty in this parameter. ## 3 PDS and MECS Data Analysis and Results The spectral analysis of the MECS data alone, in the energy range 2-9.7 keV and in the central $`\sim `$0.8 Mpc region, gives a gas temperature of $`kT=7.41\pm 0.23`$ keV ($`\chi ^2`$=154.5 for 162 degrees of freedom; hereafter dof), using an optically thin thermal emission model (MEKAL code in the XSPEC package), absorbed by a galactic line of sight equivalent hydrogen column density, $`N_H`$, of 4.01$`\times 10^{20}cm^2`$. This value of the temperature is consistent with the ASCA GIS measurement (6.78-7.44 keV; Henriksen 1999), and with the values obtained by previous observations: the Einstein MPC (6.7-8.1 keV; David et al. 1993) and Ginga (7.32-7.70 keV; Hatsukade 1989). Also the flux of $`5.3\times 10^{11}`$$`ergcm^2s^1`$ in the 2-10 keV energy range is consistent with the previous measurements. The iron abundance is $`0.26\pm 0.03`$, in agreement with the ASCA results (Markevitch & Vikhlinin 1997). The analysis of the PDS data with a thermal bremsstrahlung component gives a temperature of $`\sim `$30 keV. Fitting the data with two thermal components, one of these at the fixed temperature of 7.4 keV, we obtain a temperature greater than $`\sim `$90 keV for the second component. These unrealistically high values for the gas temperature obtained in both fits are interpreted as a strong indication that the detected hard excess is due to a nonthermal mechanism. Figure 1 shows the simultaneous fit to the MECS and PDS data with a thermal component at the temperature of 7.47$`\pm `$0.35 keV and a normalization factor of $`\sim `$1.2 for the two data sets. The $`\chi ^2`$ is 180.5 for 167 dof. Hard X-ray radiation at energies greater than $`\sim `$20 keV is in excess with respect to the thermal component at a level of $`4.6\sigma `$, and this value is rather stable against variation of the normalization factor. It is slightly lower ($`4.5\sigma `$) when considering the average of the two background measurements. Besides, also fitting the PDS data alone with a thermal component at the fixed temperature of 7.47 keV, we obtain an excess at a level of 4.3$`\sigma `$. If we introduce a second nonthermal component, modeled as a power law, we obtain the fit shown in figure 2. The $`\chi ^2`$ is 156.6 for 165 dof. 
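With both $`\chi ^2`$ values in hand, the significance of adding the power-law component follows from a standard F-test for two extra free parameters; a minimal sketch (Python/SciPy):

```python
from scipy.stats import f as f_dist

# Chi-square values of the two fits quoted above.
chi2_thermal, dof_thermal = 180.5, 167      # thermal (MEKAL-like) component only
chi2_both, dof_both = 156.6, 165            # thermal + power law (2 extra parameters)

extra = dof_thermal - dof_both              # number of added free parameters (= 2)
F = ((chi2_thermal - chi2_both) / extra) / (chi2_both / dof_both)
p_value = f_dist.sf(F, extra, dof_both)     # chance probability of the improvement

print(f"F = {F:.1f}, chance probability = {p_value:.1e}")
# F ~ 12.6 and p well below 1e-4, i.e. significant at more than the 99.99%
# confidence level.
```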
The improvement with respect to the previous model is significant at more than the 99.99% confidence level, according to the F-test. The confidence contours of the parameters $`kT`$ and photon spectral index ($`\alpha _X`$) show that, at 90% confidence level, the temperature is well determined, 6.8-7.5 keV, while $`\alpha _X`$ describes a large interval 0.3-1.7. The presence of the nonthermal component has the effect to slightly decrease the best fit value of the temperature ($`6.95_{0.35}^{+0.45}`$ keV), with respect to the temperature obtained considering only the MECS data. The flux of the nonthermal component is rather stable, $`1.2\times 10^{11}`$$`ergcm^2s^1`$in the 20-80 keV energy range, against variations of $`\alpha _X`$. The contribution of the nonthermal component to the thermal flux in the 2-10 keV energy range is $`10\%`$ for $`\alpha _X`$1.70. The analysis of the two observations with effective exposure times of $``$23 ksec (February 1998) and $``$48 ksec (February 1999) for the PDS does not show significant flux variations. These results and the fact that the two clusters with a detected hard X–rays excess (Coma and A2256) both have radio halos, strongly support the diffuse nonthermal mechanism as responsible for the excess, as discussed in the next section. ## 4 Discussion A2256 is the second cluster, after Coma (Fusco-Femiano et al. 1999), which shows hard X-ray radiation up to $``$80 keV in the PDS spectrum, with a clear excess above the thermal intracluster emission. (A2199 shows only a marginal evidence in the external region of the MECS detectors, Kaastra et al. 1999). We have investigated the possibility that the observed excess in A2256 could be due by a confusing source in the field of view of the PDS. The most qualified candidate is the QSO 4C +79.16 observed by ROSAT PSPC with a count rate of $``$0.041 c/s (WGA Catalogue). With a typical photon index of 1.8 (ROSAT reports a steeper index of $``$2.5), about 1.2 c/s are necessary to account for the observed HXR emission of $`1.2\times 10^{11}`$$`ergcm^2s^1`$in the 20-80 keV energy range of the PDS. Considering that the QSO is $`52^{}`$ off-axis, an unusual variability of about two orders of magnitude is required. There is still the possibility that an obscured source, like Circinus (Matt et al. 1999), be responsible of the detected HXR radiation. Unless the obscured source is within 2 of the central bright core of A2256, our analysis of the MECS image excludes the presence of this kind of sources in the central region ($`30^{}`$ in radius) of the cluster. The application of the inverse Compton model, based on the scattering of relativistic electrons with the 3K background photons, appears less straightforward in A2256 than in the Coma cluster. The radio morphology is remarkably complex (Bridle & Fomalont 1976; Bridle et al. 1979; Rottgering et al. 1994). There are at least four radio sources classified as head-tail radio galaxies, an ultra steep spectrum source and a diffuse region in the north with two diffuse arcs ($`G`$,$`H`$ according to Bridle et al. 1979), at a distance of $`8^{}`$ from the cluster centre. The extent of this diffuse region is estimated to be 1.0$`\times `$0.3 Mpc, with a total flux density of 671 mJy at 610 MHz and a rather uniform spectral index of 0.8$`\pm `$0.1 between 610 and 1415 MHz (Bridle et al. 1979). The percentage polarization is uniform with an average value of 20%. The alignement of the electric field vectors suggests a well ordered magnetic field. 
The equipartition magnetic field is 1-2$`\mu G`$ (Bridle et al. 1979). A fainter extended emission permeates the cluster centre (diffuse emission around $`D`$ in Bridle et al. 1979) with a steeper radio spectral index of $``$1.8 as estimated by Bridle et al. (1979) and in agreement with the 327 MHz data from the Westerbork Northern Sky Survey (Rengelink et al. 1997). The total flux density is 100 mJy at 610 MHz and no polarized emission have been detected from this region. We note that the physical and morphological properties of the diffuse $`D`$ emission are consistent with those of central halo sources while those in the $`GH`$ region are consistent with the properties of peripheral relic sources as 1253+275 in the Coma cluster. In addition to the thermal emission, a second component in the X-ray spectrum of A2256 was noted by Markevitch & Vikhlinin (1997) in their spectral analysis of the ASCA data in the central r=$`3^{}`$ spherical bin. Although they were not able to firmly establish the origin of this emission, their best fit is a power law model with photon index 2.4$`\pm `$0.3, therefore favouring a nonthermal component. The contribution of this component to the total flux is not reported in the paper. Considering that there are no bright point sources in the ROSAT HRI image, they argued for an extended source. Also the joint ASCA GIS & RXTE PCA data analysis is consistent with the detection of a nonthermal component in addition to the thermal component. The contribution of this nonthermal component to the total X-ray flux in the 2-10 energy range is $`4\%`$. However, a second thermal component (0.75-1.46 keV), instead of a nonthermal one, provides a better description of the data (Henriksen 1999). The MECS data do not show evidence of this steep nonthermal component in the central bin of $`2^{}`$ because the energy range is truncated to a lower limit of 2 keV (Molendi, De Grandi, & Fusco-Femiano 2000), while a joint fit to the LECS & MECS data within $`4^{}`$ does not show a significant evidence for an additional component at energies lower than 2 keV. The power-law component (slope $`2.4\pm 0.3`$) found in the analysis of the ASCA data (Markevitch & Vikhlinin 1997), and the upper limit of 1.7 for $`a_X`$, determined by BeppoSAX data, suggest that two tails could be present in the X-ray spectrum of A2256. The former might due to the central diffuse radio source with the steep index $`\alpha _R`$1.8, and the last to the more extended radio emission in the northern region of the cluster with the flatter energy spectral index of 0.8$`\pm `$0.1. Assuming that the contribution of the power-law component, detected by ASCA , to the total X-ray flux ($`F_X(210keV)5\times 10^{11}`$$`ergcm^2s^1`$) is $`5\%`$, we obtain a negligible contribution at PDS energies ($`4\times 10^{13}`$$`ergcm^2s^1`$) and a magnetic field in the central radio region of $`0.5\mu G`$. For the external radio region, with spectral index 0.8, the nonthermal X-ray flux $`f_X(2080keV)1.2\times 10^{11}`$$`ergcm^2s^1`$, derived by the PDS excess, leads to a low value of $`0.05\mu G`$. Even assuming that a large fraction (say 50$`\%`$) of the HXR flux is due to the several point radio sources in the central region and/or to the contribution of different mechanisms, we obtain only a slightly greater value of $`0.08\mu G`$. 
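The sub-$`\mu G`$ fields derived above can be put in context with a one-line energy-density comparison: for the same electron population, synchrotron and inverse Compton losses scale as the magnetic and CMB photon energy densities respectively, so the "equivalent field" of the CMB sets the scale below which IC emission dominates. The sketch below is only this order-of-magnitude check, not the monochromatic-flux calculation of Rephaeli (1979) used for the numbers quoted above.

```python
import numpy as np

# Radiation constant and CMB temperature (cgs).
a_rad = 7.566e-15        # erg cm^-3 K^-4
T_cmb = 2.725            # K

U_cmb = a_rad * T_cmb**4                  # CMB energy density, erg cm^-3
B_eq = np.sqrt(8.0 * np.pi * U_cmb)       # field with U_B = B^2/(8*pi) = U_cmb

print(f"U_cmb = {U_cmb:.2e} erg/cm^3, equivalent field = {B_eq * 1e6:.1f} microgauss")
# ~3.2 microgauss: for the 0.05-0.5 microgauss fields derived above, U_B << U_cmb,
# so the relativistic electrons lose energy mainly by inverse Compton scattering.
# (At the cluster redshift U_cmb is larger by (1+z)^4 ~ 1.25, which does not
#  change the conclusion.)
```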
The combined fit of ASCA GIS and RXTE PCA data (Henriksen 1999) gives an upper limit of $`2.64\times 10^{12}`$$`ergcm^2s^1`$in the 2-10 keV energy range for the nonthermal component that corresponds to a lower limit for the volume-averaged intracluster magnetic field, $`B`$, of 0.36$`\mu G`$ ($`\alpha _R=1.8`$). Considering that the HXR flux detected by the PDS is in agreement with the above value, we would obtain a value for $`B`$ consistent with that derived by the GIS & PCA data, but the fit to the MECS & PDS data is unacceptable for $`\alpha _X=1+\alpha _R=2.8`$. The previous scenario of a decreasing intracluster magnetic field from the cluster center would be difficult to reconcile with the stronger periferal radio region and higher equipartition magnetic field with respect to the central radio halo. Therefore, we could consider the possibility, recently suggested by Brunetti et al. (1999), that the HXR IC spectrum may be flatter than the synchrotron radio spectrum because of the acceleration and energy loss processes that produce an electron spectrum with different slopes. A different electron spectrum index for HXR and radio emissions is more likely for low magnetic fields which require higher electron energies for synchrotron than for IC radiation. This could explain the better fit to the PDS data of A2256 with $`\alpha _X<1+\alpha _R`$=1.8. Besides, this model suggests an alternative interpretation of the HXR excess of A2256. We can consider that a single hard tail is present in the X-ray spectrum of the cluster with index $`\alpha _X`$1.7, as detected by the PDS. The electron spectrum responsible of this HXR IC emission can produce radio emission with spectral index $`\alpha _R>\alpha _X`$-1=0.7 with a resulting mean volume-averaged intracluster magnetic field higher than the one we derive from the classical IC model. A different mechanism which may produce HXR radiation is given by nonthermal bremsstrahlung. Sarazin & Kempner (1999) suggest that all or part of the HXR emission detected in the Coma cluster might be NTB from suprathermal electrons formed through current acceleration of the thermal gas, either by shocks or turbulence in the ICM. For A2256 the MECS & PDS measurements determine a power-law momentum spectrum of the electrons with index $`2\alpha _X1=2.4`$ (90%). The consequence is that an accelerating electron model with flat spectrum produce more IC HXR emission than the NTB mechanism, unless the electron spectrum cuts-off or steepens at high energies. Besides, these models produce more radio emission than observed if $`B`$ is $`1\mu G`$. We thank P. Giommi for useful suggestions regarding data analysis, G. Brunetti for discussions on the interpretation of the results, and the referee for valuable comments.
no-problem/0003/astro-ph0003148.html
ar5iv
text
# Fundamental properties of the open cluster NGC 2355 ## 1 Introduction Old open clusters, with ages greater than the age of the Hyades ($``$ 600 Myr), represent a minority of about 80 objects among 1200 known open clusters. Among their properties which enable to investigate both stellar physics and galactic structure (reviewed by Friel 1995), we are especially interested in orbits because they are related to the processes which have allowed them to survive tidal forces. The statistics are still poor but it seems that old open clusters follow orbits that keep them away from the plane and the disruptive effects of giant molecular clouds. The question is to know if these orbits result from special events or represent the tail of the distribution of clusters that have already been destroyed. Another relevant point to clarify is the relationship between orbits and metallicity \[Fe/H\] which traces the dynamical and chemical evolution of the Galaxy. The metallicity of old open clusters is intermediate between the disk and the thick disk, with a radial gradient, but a large dispersion that could indicate an inhomogeneous enrichment of the Galaxy. To answer such fundamental questions, new observations are needed to investigate in more details the old open cluster properties and their correlations. We have therefore undertaken a spectroscopic and astrometric program to obtain metallicities, distances and velocities of high quality for several poorly known old open clusters, NGC 2355 being our first target. There are very few references on NGC 2355 in the astronomical literature. A photometric study in UBV down to $`V19.2`$ was made by Kaluzny & Mazur (1991). In this study, the reddening of the cluster was estimated to be $`E_{BV}`$=0.12 mag, the distance modulus $`(mM)_0=12.1`$, the metallicity +0.13 and the age the same as Praesepe. In their search for old open clusters, Phelps et al. (1994) also report a photometric study of NGC 2355 in BV but the photometry of individual stars is not given. Their calibration of the index $`\delta V`$, defined as the magnitude difference between the main-sequence turnoff and the giant clump leads to a Morphological Age Index corresponding to 0.9 Gyr, like Praesepe (Janes & Phelps 1994). More recently, Ann et al. (1999) examined this cluster as part of the BOAO survey (Bohyunsan Optical Astronomy Observatory, Korea) and determined from UBVI photometry : $`[\mathrm{Fe}/\mathrm{H}]=0.32,E_{BV}=0.25,(mM)_0=11.4`$ and an age of 1 Gyr. In Sect. 2 and 3 we present new data which are used to analyse the cluster in combination with the UBV photometry of Kaluzny & Mazur (1991) and the JHK<sub>s</sub> photometry which is available for the whole field in the 2MASS 1999 Spring Incremental Data Release. We describe the determination and analysis of proper motions from photographic plates and recent observations at the meridian circle of Bordeaux (Sect. 2). For 24 bright stars ($`V13`$) around the cluster’s center, spectra were obtained with the echelle spectrograph ELODIE on the 193cm telescope at the Haute-Provence Observatory. The radial velocities of the red giants were obtained by standard on-line reduction directly at the telescope. The determination of the radial velocity and projected rotational velocity of the hot fast rotating turnoff stars required dedicated reduction tools (Sect. 3). In Sect. 
4 we present our analysis of the spectra to estimate the atmospheric parameters $`T_{\mathrm{eff}},\mathrm{log}g,[\mathrm{Fe}/\mathrm{H}]`$ and the absolute magnitude $`M_\mathrm{V}`$. For the latter, we developed a new version of the TGMET method (Katz et al. 1998 and Soubiran et al. 1998). We discuss the case of an unusual giant in NGC 2355 which is 2.3 magnitudes brighter than the giant clump for the same temperature. We also report the discovery of a blue straggler in the cluster and of a moving pair of field stars. Sect. 5 deals with the fundamental parameters of NGC 2355 resulting from our study. Our conclusions are reviewed in Sect. 6. For identifying individual stars we use as far as available the star numbers introduced by Kaluzny & Mazur (1991), preceded by the prefix ”KM”. ## 2 Measurement and analysis of proper motions We determined precise proper motions for stars in the cluster region and in the surrounding field for two reasons: 1. to enable a kinematical segregation of cluster members and non-members, and 2. to derive the absolute tangential velocity of the cluster. In order to achieve adequate accuracy, the proper motions were determined from observations at 3 epochs with a maximum separation of about 90 years. The first epoch, around 1910, was provided by a triple-exposed plate from the Bordeaux Carte du Ciel (CdC, $`2^{}\times 2^{}`$, $`B_{\mathrm{lim}}15.0`$) on which the cluster is favorably placed near the plate center, and by 5 plates from the Bordeaux Astrographic Catalogue (AC, same size, $`B_{\mathrm{lim}}12.0`$) which fully or partially overlap with the field of the CdC-plate. Second epoch positions were obtained by measurement of two POSS-I glass copies (O & E plates) from the Leiden Observatory plate archive. The third epoch consists of observations made with the CCD meridian circle of Bordeaux Observatory in 1997/1998 as part of the ‘Méridien 2000’ program (see Colin et al. 1998). The CdC and POSS plates were scanned on the MAMA machine at the Paris Observatory, the AC plates were scanned on a PDS machine at the Astronomical Institute Münster. The scans were processed partly with the SExtractor software (Bertin & Arnouts 1996) and partly with our own software, in particular for the centering of the CdC triple images. All observations were reduced to the reference system of Hipparcos. The meridian observations and the first-epoch plate measurements were linked directly to reference stars from the Hipparcos catalogue and the ACT Reference catalogue (Urban et al. 1998). Iterative reduction schemes were used in order to make due account of the multiple observations of each star. The measurements from the POSS-I plates required a separate treatment because the geometry of projection on these plates is subject to complicated and sizable distortions. Thus we constructed from the first and third-epoch data an intermediary catalog of secondary reference stars. We then applied a moving-filter technique as described by Morrison et al. (1998) to transform the POSS-I data locally and smoothly to the Hipparcos system. The final proper motions were obtained by combining the positions from all epochs in a weighted linear least-squares adjustment. 
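A minimal sketch of this last step, the weighted linear least-squares adjustment of one coordinate over the three epochs, is given below (Python; the epochs, offsets and positional errors are illustrative placeholders loosely mimicking the plate material, not actual measurements, and the function name is ours).

```python
import numpy as np

def proper_motion(t, x, sigma):
    """Weighted linear fit x(t) = x0 + mu*(t - t0); returns mu and its error.

    t in years, x = one position coordinate in mas, sigma = positional error in mas.
    Centering t on its weighted mean decouples the slope from the zero point.
    """
    w = 1.0 / np.asarray(sigma, float) ** 2
    t0 = np.average(t, weights=w)
    dt = np.asarray(t, float) - t0
    mu = np.sum(w * dt * x) / np.sum(w * dt**2)        # mas/yr
    mu_err = 1.0 / np.sqrt(np.sum(w * dt**2))          # formal error, mas/yr
    return mu, mu_err

# Illustrative three-epoch material: old plates (~1910, ~130 mas per position),
# POSS-I (~1950, 150 mas) and the meridian CCD mean position (~1997, 50 mas).
t = np.array([1910.0, 1951.0, 1997.5])
sigma = np.array([130.0, 150.0, 50.0])
x = np.array([0.0, 95.0, 210.0])                        # placeholder offsets, mas

mu, mu_err = proper_motion(t, x, sigma)
print(f"mu = {mu:.2f} +/- {mu_err:.2f} mas/yr")
```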
According to the reduction residuals and the comparison between different observations of the same epoch, the mean accuracies of the positions are as follows: 50 mas per coordinate for the mean positions from the meridian circle observations, between 120 and 150 mas per coordinate and plate for the positions from the first-epoch plates and 150 mas per coordinate and plate for the positions from POSS-I. The mean internal errors of the resulting proper motions range from $`0.7\mathrm{mas}\mathrm{y}^1`$ for the brightest stars to $`2.0\mathrm{mas}\mathrm{y}^1`$ for the faintest stars of the sample. Without selection according to kinematics, the distribution of the stars in the plane of the sky reveals that the cluster is centered on the position $`\alpha =7^\mathrm{h}17\stackrel{m}{.}0,\delta =13^{}45^{}`$ (2000.0), and that its angular radius is at least $`5^{}`$, but probably larger. In Fig. 1 we present histograms of the distribution of proper motions for a circular field of $`7^{}`$ radius around the above given position. For comparison we also show the distribution of proper motions in an annulus outside the cluster, namely between $`18^{}`$ and $`36^{}`$ from the center (counts rescaled to equal surface). It is clearly seen that the cluster stands out against the field as a concentration of comoving stars. In order to estimate the mean proper motion of the cluster and to obtain cluster membership probabilities we fitted two-dimensional Gaussians to the proper-motion distributions of the pure field sample and the cluster-and-field sample. The parameters of the distributions were determined by applying a maximum-likelihood criterion to the proper motions in the range $`\mu _l\mathrm{cos}b[6,+10]\mathrm{mas}\mathrm{y}^1`$ and $`\mu _b[10,+6]\mathrm{mas}\mathrm{y}^1`$. The distribution of the field stars appears centered around $`(\mu _l\mathrm{cos}b,\mu _b)=(+2.4,1.8)\mathrm{mas}\mathrm{y}^1`$ and has a dispersion of about $`3\mathrm{mas}\mathrm{y}^1`$. The proper motions of the cluster stars are centered on $`(\mu _l\mathrm{cos}b,\mu _b)=(+0.5,2.4)\mathrm{mas}\mathrm{y}^1`$, i.e. they are slightly offset from the mean proper motion of the field, and have a dispersion of $`0.8\mathrm{mas}\mathrm{y}^1`$ in $`\mu _l\mathrm{cos}b`$ and $`1.5\mathrm{mas}\mathrm{y}^1`$ in $`\mu _b`$. The dispersion in $`\mu _b`$ is in agreement with the estimated mean proper motion errors. However it is surprising that the dispersion in $`\mu _l\mathrm{cos}b`$ is substantially smaller. The determination of the mean proper motion of the cluster has a statistical uncertainty of $`0.3\mathrm{mas}\mathrm{y}^1`$. To this we must add in quadrature the uncertainty of the absolute calibration of the Hipparcos reference frame which is $`0.25\mathrm{mas}\mathrm{y}^1`$ (Kovalevsky et al. 1997). With some additional allowance for other (possibly undetected) systematic errors in the measuring process we estimate that the accuracy of our determination of the absolute proper motion of the cluster is $`0.5\mathrm{mas}\mathrm{y}^1`$ per coordinate. Using the above given parameters for the distributions of the proper motions in the cluster and the field, individual kinematical membership probabilities were calculated. This was done in the usual way according to the relative frequency of cluster stars which the fitted model distributions predict for a given proper motion. 
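Written out, the membership probability of a star is the cluster term of the fitted two-Gaussian model divided by the total; the sketch below uses the centers and dispersions quoted above, while the relative normalization (the fraction of cluster stars in the fitted sample, set here to 0.4) is a placeholder that in practice comes out of the same maximum-likelihood fit.

```python
import numpy as np

def gauss2d(mu, center, sigma):
    """Uncorrelated 2-D Gaussian density at proper motion mu = (mu_l*cos(b), mu_b)."""
    z = (mu - center) / sigma
    return np.exp(-0.5 * np.sum(z**2)) / (2.0 * np.pi * sigma[0] * sigma[1])

# Fitted distributions quoted above (mas/yr).
center_cl, sigma_cl = np.array([0.5, -2.4]), np.array([0.8, 1.5])   # cluster
center_fl, sigma_fl = np.array([2.4, -1.8]), np.array([3.0, 3.0])   # field

f_cluster = 0.4      # placeholder fraction of cluster stars in the fitted sample

def membership_probability(mu):
    pc = f_cluster * gauss2d(mu, center_cl, sigma_cl)
    pf = (1.0 - f_cluster) * gauss2d(mu, center_fl, sigma_fl)
    return pc / (pc + pf)

# A star moving with the mean cluster motion versus one moving with the field:
for mu in ([0.5, -2.4], [6.0, 1.0]):
    print(mu, f"-> p = {membership_probability(np.array(mu)):.2f}")
```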
Our sample of stars in the $`7^{}`$ circle thus divides into 38 probable cluster members ($`p>90\%`$), 13 probable field stars ($`p<10\%`$) and 17 unclear cases ($`10\%p90\%`$). By kinematical discrimination between cluster stars and field stars one obtains an improved picture of the structure and spatial extent of the cluster. For this purpose we chose a field of $`36^{}`$ radius around the cluster center and selected only those stars with proper motion equal to the mean motion of the cluster within $`1\sigma `$, i.e. the proper motion dispersion of the cluster stars. The latter criterion reduces the surface density of the field stars by a factor of 10, but retains a sufficiently large number of cluster stars so that the cluster’s structure and extent become more clearly recognizable. Fig. 2 compares the radial profile of stellar density (number counts in non-overlapping annuli around the cluster center) with and without kinematical selection. It turns out that the cluster has a central component with exponentially decreasing density out to about $`7^{}`$, a halo with approximately constant density beyond $`7^{}`$ and an edge at $`15^{}`$. The core radius of the cluster, i.e. the radius at which the surface density drops to half its central value, is found to be about $`1.5^{}`$. ## 3 Spectroscopy : radial and rotational velocities Twenty-four stars with $`V13`$ in the field of the cluster were observed at the Haute-Provence Observatory in January 1999, on the 1.93m telescope equipped with the spectrograph ELODIE. This instrument is a dual-fibre-fed echelle spectrograph devoted to the measurements of accurate radial velocities (Baranne et al. 1996). A spectral range 390-680 nm is recorded in a single exposure as 67 orders on a 1K CCD at a mean resolving power of 42000. With a one hour exposure one typically achieves a S/N of 100 on a star of magnitude 8.5 or a S/N of 10 on a star of magnitude 12.8. ELODIE is a very stable instrument, allowing to compare easily spectra observed at different epochs. Optimal extraction and wavelength calibration are automatically performed on-line, as well as the measurement of radial velocities by digital cross-correlation with binary templates thanks to the TACOS reduction software developed by D. Queloz (1996). The cross-correlation technique is well adapted for strong-lined spectra, corresponding to moderate effective temperatures up to 6500 K. The precision of radial velocities for such spectral types is better than $`100\text{m s}^1`$ even at low signal to noise ratio. Among the 24 target stars, 16 stars presented a clean deep correlation profile, permitting an accurate radial velocity and FWHM measurement directly at the telescope. The observations revealed a strong concentration of stars at $`v_\mathrm{r}35\mathrm{km}\mathrm{s}^1`$. This is without doubt the trace of the radial motion of the cluster. Thus, on the basis of radial velocity, 9 target stars were confirmed to be members of the cluster, with colours corresponding to clump giants. The mean radial velocity of this sub-sample is $`35.13\mathrm{km}\mathrm{s}^1`$ with a standard deviation of $`0.39\mathrm{km}\mathrm{s}^1`$. The FWHM of the correlation function was for most of the stars nearly constant at $`11.3\mathrm{km}\mathrm{s}^1`$, but slightly larger for 3 stars (KM 1 : $`13.2\mathrm{km}\mathrm{s}^1`$ , KM 2 : $`34.6\mathrm{km}\mathrm{s}^1`$, KM 20 : $`17.3\mathrm{km}\mathrm{s}^1`$) corresponding to the signature of either macroturbulence, rotation or binarity. 
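For reference, the error of the quoted mean velocity is simply the standard deviation over the square root of the number of stars, and if the better-than-100 m/s single-measurement precision is taken at face value, most of the 0.39 km/s scatter is not of measurement origin (it would then reflect the internal velocity dispersion and/or undetected binaries, an interpretation the text does not make explicitly). A minimal sketch:

```python
import numpy as np

v_mean, v_std, n_stars = 35.13, 0.39, 9   # km/s, from the 9 clump giants above
v_precision = 0.10                        # km/s, single-measurement precision quoted above

error_of_mean = v_std / np.sqrt(n_stars)
leftover = np.sqrt(max(v_std**2 - v_precision**2, 0.0))

print(f"mean v_r = {v_mean:.2f} +/- {error_of_mean:.2f} km/s")
print(f"scatter not accounted for by measurement errors: ~{leftover:.2f} km/s")
```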
The radial velocities are listed in Tab. 1, together with the UBV photometry from Kaluzny & Mazur (1991) and the JHK<sub>s</sub> photometry from 2MASS. Eight stars with bluer colour presented broad lines indicating a high rotational velocity. They could not be treated by the standard cross-correlation method. Instead, their rotational profile was extracted using a least-squares deconvolution technique developed and fully described by Donati et al. (1997). The latter method presents some similarities with the cross-correlation method. It is based on the fact that the observed spectrum can be expressed as the convolution product of a line pattern with a rotational plus instrumental profile. This profile can thus be recovered by deconvolving the observed spectrum with a line mask computed from a model atmosphere having the same parameters as the star. As the effective temperatures of the target stars were not known, a series of line masks with $`T_{\mathrm{eff}}`$ ranging from 6000 K to 9000 K were computed from the Kurucz’s database (Kurucz 1993). The best contrast was obtained at $`T_{\mathrm{eff}}70007500\mathrm{K}`$. The deconvolution was quite difficult due to the low signal to noise ratio of the spectra but the signature of the rotation was visible for each star and confirmed a radial velocity consistent with the cluster’s velocity. The next step was to calibrate the width of the deconvolved profiles in terms of $`v\mathrm{sin}i`$. For this task we used several reference stars for which both a high-quality ELODIE spectrum and a published value of $`v\mathrm{sin}i`$ were available. Nine stars from the TGMET library (see next section) were found in the catalogue of rotational velocities compiled by Uesugi & Fukuda (1982), restricted to $`50200\mathrm{km}\mathrm{s}^1`$. The FWHM of the deconvolved profiles were measured the same way for reference and target stars by fitting a 10 degree polynomial as can be seen in Fig. 3. In this example, the same line mask corresponding to the parameters $`T_{\mathrm{eff}}=7500K,\mathrm{log}g=4.0,[\mathrm{Fe}/\mathrm{H}]=0.0`$ was used for the deconvolution of the two spectra, but HD 132052 ($`v\mathrm{sin}i=120\mathrm{km}\mathrm{s}^1`$) was observed at S/N=131 while KM 13 was observed at S/N=11. The weighted linear regression performed on the 9 points given by the reference stars led to the relation $`v\mathrm{sin}i=0.443\text{FWHM}+19.5\mathrm{km}\mathrm{s}^1`$, represented in Fig. 4. The projected rotational velocities estimated for the 8 turnoff stars of NGC 2355 are given in column 10 of Tab. 1. The most remarkable star among those which are classified as cluster members in Tab. 1 is KM 1. This is a giant which is 2.3 magnitudes brighter than the giant clump of the cluster. The fact that it is within one sigma of the cluster distribution in position, proper motion and radial velocity makes us believe that it is a cluster member and not a field star. On the other hand, we cannot completely exclude the possibility that it is a projection of a field star onto the cluster because a field star at this position may by chance have the same radial velocity as the cluster. According to the formulae of the circular motion around the galactic center, a star on the line of sight to the cluster at a distance between $`0.5\mathrm{kpc}`$ and $`2\mathrm{kpc}`$ would have a heliocentric radial velocity between $`+15\mathrm{km}\mathrm{s}^1`$ and $`+27\mathrm{km}\mathrm{s}^1`$. 
Hence there is a certain overlap between the radial velocity distribution of the field stars and the radial velocity of the cluster. If KM 1 is a member of the cluster, then it remains unclear by which phenomenon this star is considerably brighter than the other cluster giants of the same colour. We have looked for photometric variations to check if this star could be in an unstable phase of its evolution. No variations could be detected in the meridian observations over 3 years, nor in comparison with the apparent brightness on the 1950 POSS-I plates and the 1910 CdC plate. KM 1 is part of the TYCHO catalogue (TYC 775 997 1) where no variability is reported. The spectrum of this peculiar star is discussed in more detail in the next section. Another remark is to be made on the star KM 2. This one was selected by Ahumada & Lapasset (1995) as a blue straggler candidate. However, its radial velocity indicates that this star is not a member of the cluster. We also point towards the stars KM 20 and KM 26. These are found to be field stars with identical radial velocities of $`v_\mathrm{r}=50.3\mathrm{km}\mathrm{s}^1`$, despite an angular separation of $`1.7^{}`$. The hypothesis of a moving pair is discussed at the end of the next section. ## 4 Atmospheric parameters, absolute magnitudes The atmospheric parameters ($`T_{\mathrm{eff}},\mathrm{log}g,[\mathrm{Fe}/\mathrm{H}]`$) and the absolute magnitude $`M_\mathrm{V}`$ have been obtained with the automated software TGMET, described in Katz et al. (1998). TGMET is a minimum distance method (reduced $`\chi ^2`$ minimisation) which measures in a quantitative way the similarities and discrepancies between spectra and finds for a given target spectrum the most closely matching template spectra in a library. The TGMET library (Soubiran et al. 1998) was built with high S/N ELODIE spectra of reference stars for which the atmospheric parameters were taken from published detailed analyses, mostly in the Catalogue of \[Fe/H\] determinations (Cayrel de Strobel et al. 1997). The previous version of the TGMET library was extended to cover the temperature interval \[3500 K - 7500 K\] and now includes nearly 450 reference spectra of all metallicities. To improve the temperature estimation, the library was also completed with stars having reliable $`T_{\mathrm{eff}}`$, either from the list of Blackwell & Lynas-Gray (1998) based on ISO flux calibration, or from the calibration of the colour index V$``$K (Alonso et al. 1996 and Alonso et al. 1999). A new aspect of TGMET was developed by estimating the absolute magnitude $`M_\mathrm{V}`$ simultaneously with the atmospheric parameters, based on the fact that stars having similar spectra have similar absolute magnitudes. In fact most of the stars of TGMET library are in the Hipparcos catalogue and 313 of them have parallaxes with a relative errors lower than 10%. Stars from the library having precise Hipparcos parallaxes had their absolute magnitude $`M_\mathrm{V}`$ derived from the TYCHO V apparent magnitude. They were used as reference stars for the absolute magnitude as for the atmospheric parameters. Some tests were performed to check the reliability of such spectroscopically determined absolute magnitudes. At solar metallicity, the rms difference between the absolute magnitude determined from Hipparcos and from TGMET is 0.21 for dwarfs, 0.31 for clump giants and 0.50 for other giants. 
The parameters ($`T_{\mathrm{eff}},\mathrm{log}g,[\mathrm{Fe}/\mathrm{H}],M_\mathrm{V}`$) of a target star processed by TGMET are given by the weighted mean of the parameters of the best matching reference spectra (presenting a reduced $`\chi ^2`$ which does not exceed the lowest one by more than 12%). The resulting uncertainty depends mainly on two factors. The first one is the quality of the parameters found in the literature for the reference stars. Typically errors quoted in detailed analyses are 50 to 150 K for $`T_{\mathrm{eff}}`$, 0.1 to 0.3 for $`\mathrm{log}g`$, and 0.05 to 0.1 for $`[\mathrm{Fe}/\mathrm{H}]`$. But for some reference stars, it happens that the errors on the atmospheric parameters from the literature are much higher and such stars are gradually being identified with TGMET as outliers. The spectral detailed analysis is the only primary method to estimate metallicities. It is a difficult task and the results obtained by different authors can differ by a large amount. We do not expect to do better than detailed analyses with TGMET, but the method will improve if we can add reference stars with very reliable atmospheric parameters to the library . The problem is not as critical for absolute magnitudes because the large majority of the reference stars have excellent Hipparcos parallaxes thus reliable absolute magnitudes. The second source of uncertainty in the TGMET results is the way the parameter space is sampled by the reference stars. For example, among evolved stars, results are expected to be better for clump giants than for other giants because clump giants are more numerous in the literature and Hipparcos, consequently better represented in the TGMET library than other kinds of giants, and also because clump giants occupy a smaller volume than other giants in the parameter space. The results of TGMET for the 24 target stars are given in Tab. 2. The last column presents the distance moduli $`(VM_\mathrm{V})`$ of the stars as derived from the spectroscopically determined $`M_\mathrm{V}`$. To illustrate the TGMET processing Fig. 5 shows the spectrum of KM 10 in the region of the MgI triplet together with its best matching reference spectrum HD 205435 $`(T_{\mathrm{eff}}=5068K,\mathrm{log}g=2.64,[\mathrm{Fe}/\mathrm{H}]=0.16,M_\mathrm{V}=1.097)`$. As another example, Fig. 6 represents the $`H_\alpha `$ line of the fast rotator KM 22 together with its best matching reference spectrum HD 201377 ($`M_\mathrm{V}=1.607`$). In the previous section, it was pointed out that KM 1 is surprisingly found to be a probable member of the cluster despite a visual magnitude much brighter than the clump giants, for the same colour. Its distance modulus is consistent with the rest of the cluster and hence in agreement with the supposed membership, as well as its metallicity. Nevertheless this star has an abnormal, thus interesting, position on the cluster’s HR diagram which is worth attention (see Figs. 7 and 8). The intrinsic physical difference between KM 1 and the other giants of the cluster has been investigated by comparing their respective TGMET best matching reference stars, the parameters of which are presented in Tab. 3. The two sets present similar mean temperatures and metallicities but the range of absolute magnitudes is quite different. KM 1 exhibits through TGMET more spectral similarities with supergiants like HD 215665 or HD 159181 than with clump giants. 
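The selection-and-averaging scheme just described (keep all reference spectra whose reduced $`\chi ^2`$ lies within 12% of the best match and average their catalogued parameters) can be sketched in a few lines. The toy version below uses synthetic arrays and assumes $`1/\chi ^2`$ weights, since the text does not specify the weighting, and it ignores the resampling, normalization and radial-velocity alignment that the real method performs; it only illustrates why a sparsely populated region of the library, as for KM 1, yields a larger spread among the retained references.

```python
import numpy as np

def tgmet_estimate(target, library_spectra, library_params, noise, tol=0.12):
    """Minimum-distance (reduced chi^2) parameter estimate, TGMET-style sketch.

    target          : 1-D flux array of the observed spectrum
    library_spectra : 2-D array, one reference spectrum per row (same sampling)
    library_params  : 2-D array, one row of (Teff, logg, [Fe/H], Mv) per reference
    noise           : 1-sigma flux uncertainty of the target
    """
    npix = target.size
    chi2 = np.sum((library_spectra - target) ** 2, axis=1) / noise**2 / npix
    keep = chi2 <= (1.0 + tol) * chi2.min()     # references within 12% of the best fit
    weights = 1.0 / chi2[keep]                  # assumed weighting (not specified in text)
    return np.average(library_params[keep], axis=0, weights=weights)

# Tiny synthetic example (placeholder numbers, not real spectra):
rng = np.random.default_rng(0)
lib = rng.normal(1.0, 0.05, size=(5, 200))
params = np.array([[5000, 2.6, -0.1, 0.9],
                   [5100, 2.8,  0.0, 0.7],
                   [4900, 2.5, -0.2, 1.1],
                   [6200, 4.0,  0.0, 3.5],
                   [4800, 2.4, -0.4, 1.3]], dtype=float)
target = lib[0] + rng.normal(0.0, 0.02, 200)
print(tgmet_estimate(target, lib, params, noise=0.05))
```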
The larger dispersion which is found for the parameters of KM 1, especially for $`M_\mathrm{V}`$, is well explained by the fact that the TGMET library does not sample the parameter space at the same resolution for supergiants as for clump giants, as already mentioned. The weighted mean and error bar for each parameter of KM 1 were computed with the 10 best-fitting reference stars, despite large differences, because they equally (within 12%) match the target spectrum. The large error bars reflect the fact that there is no perfect analog of KM 1 in the TGMET library. Consequently, in the following, KM 1 will contribute with a lower weight than the clump giants to the determination of the fundamental parameters of the cluster. The 20 reference stars listed in Tab. 3, except HD 214567, are reported in Uesugi & Fukuda (1982) to rotate at $`v\mathrm{sin}i`$ of about 10–20 $`\mathrm{km}\mathrm{s}^{-1}`$, so that there is no difference between the two sets concerning the rotation. The brighter magnitude of KM 1 could correspond to a higher mass, but in that case KM 1 should be much younger than the other giants. By comparison to the isochrones of Girardi et al. (2000) in the plane ($`T_{\mathrm{eff}},M_\mathrm{V}`$), KM 1 should be 160 Myr old whereas the rest of the cluster is 1 Gyr old (see next section). This phenomenon is similar to the blue straggler phenomenon, but on the red, evolved side. Ahumada & Lapasset (1995) enumerate several theories which have been proposed for the blue stragglers, and which could also explain the observation of KM 1: a field star captured by the cluster, a star which has accreted mass from the interstellar medium, a star which formed after the bulk of the cluster members, the result of a non-standard mechanism in the evolution, the result of a stellar collision or a binary merger. KM 1 could also be a blue straggler which has evolved. At present, the main differences observed between the spectra of KM 1 and the clump giants are a slightly broadened profile, as seen in macroturbulent supergiants or in rotating or binary giants, and a difference in absolute magnitude detected by TGMET. A spectrum with much higher S/N is necessary in order to obtain further insight into the nature of this star. As also mentioned in the previous section, KM 20 and KM 26 might be a moving pair because they have a common radial velocity of $`50.3\mathrm{km}\mathrm{s}^{-1}`$. Their metallicities of $`0.26`$ and $`0.31`$ are in agreement. Unfortunately, the other parameters determined for KM 20 present large standard errors (see Tab. 2), indicating that the fit with the TGMET reference spectra was not satisfactory. We recall that KM 20 has an enlarged profile with $`\mathrm{FWHM}=17.3\mathrm{km}\mathrm{s}^{-1}`$. Thus this star might be a spectroscopic binary, which would explain the mediocre results of TGMET. Since a precise estimate of the proper motion of KM 26 is missing (no measurement due to the blending of the image by a reseau line of the CdC plate), the common motion cannot be assessed by means of proper motions. In any case, the proper motion of KM 20 is small ($`\mu _l\mathrm{cos}b=2.2\mathrm{mas}\mathrm{y}^{-1},\mu _b=1.6\mathrm{mas}\mathrm{y}^{-1}`$), so a large distance is probable and the verification of common proper motion would be difficult. Based on the spectroscopic estimate of the distance of KM 26 (2.7 kpc), the angular separation corresponds to a linear distance between the two stars of $`1.3\mathrm{pc}`$.
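The quoted 1.3 pc follows directly from the small-angle relation s ≈ d·θ; the short check below reproduces it from the 1.7 arcmin separation and the 2.7 kpc spectroscopic distance of KM 26.

```python
import math

theta_arcmin = 1.7
d_pc = 2700.0                       # spectroscopic distance of KM 26

theta_rad = theta_arcmin / 60.0 * math.pi / 180.0
separation_pc = d_pc * theta_rad    # small-angle approximation
print("projected separation = %.2f pc" % separation_pc)   # ~ 1.3 pc
```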
## 5 Fundamental parameters of the cluster
### 5.1 Metallicity, age, reddening, distance, size
The weighted average of \[Fe/H\] of the members of NGC 2355 listed in Tab. 2 gives a metallicity of $`[\mathrm{Fe}/\mathrm{H}]=0.07\pm 0.11`$. The standard error on $`[\mathrm{Fe}/\mathrm{H}]`$ does not take into account the uncertainties on $`[\mathrm{Fe}/\mathrm{H}]`$ for the reference stars, which are usually unknown. For example, the reference clump giant HD 5395 (see Tab. 3), with $`[\mathrm{Fe}/\mathrm{H}]=0.70`$, has been used to derive the parameters of KM 3, KM 4 and GSC 500538, despite an uncertain metallicity: Fernandes-Villacanas et al. (1990) report $`[\mathrm{Fe}/\mathrm{H}]=1.0`$, McWilliam (1990) $`[\mathrm{Fe}/\mathrm{H}]=0.51`$, whereas TGMET gives a value of $`[\mathrm{Fe}/\mathrm{H}]=0.44`$. This illustrates the difficulty of defining a precise reference system for the atmospheric parameters. In this light, the error bar on \[Fe/H\] is underestimated. We have estimated the age of NGC 2355 by choosing, among the solar-abundance isochrones of Girardi et al. (2000), the one which best matched our observations in the plane ($`T_{\mathrm{eff}},M_\mathrm{V}`$). Fig. 9 shows that an age of 1 Gyr is probable, given the position of the turnoff stars. Despite an unexpectedly dispersed photometry, previously mentioned by Kaluzny & Mazur (1991) and Ann et al. (1999) and confirmed with JHK<sub>s</sub>, which can be interpreted as the consequence of an inhomogeneous interstellar absorption, the UBV and JHK<sub>s</sub> photometry gives an opportunity to estimate the reddening of the cluster and, at the same time, to test our temperature scale. By inverting the empirical relations $`T_{\mathrm{eff}}=f(\text{colour},[\mathrm{Fe}/\mathrm{H}])`$ calibrated by Alonso et al. (1996) for dwarfs and by Alonso et al. (1999) for giants, the colour indices B$`-`$V and V$`-`$K corresponding to the TGMET effective temperatures were computed and compared to the observed ones, adopting K<sub>s</sub> for K. A systematic difference between them can be interpreted either in terms of reddening or as an error in the temperature scale. Giants have been tested first because their TGMET temperature scale is more reliable than that of the fast rotators. The mean observed colours B$`-`$V and V$`-`$K for the cluster’s giants are respectively 1.04 and 2.54, for a mean effective temperature of 5000 K. Such a temperature at $`[\mathrm{Fe}/\mathrm{H}]=0.07`$ corresponds to B$`-`$V=0.88 and V$`-`$K=2.12 in the Alonso et al. temperature scale. The corresponding excesses $`E_{BV}=0.16`$ and $`E_{VK}=0.42`$ lead to a ratio $`E_{VK}/E_{BV}=2.62`$ which agrees, within the error bars, with the value of 2.7 reported by Rieke & Lebofsky (1985) and Cardelli et al. (1989) for the interstellar extinction. For the hot stars the ratio was slightly different, possibly indicating an error in the temperature scale. The mean observed V$`-`$K of the dwarfs (1.00), corrected by $`E_{VK}=0.42`$, leads to $`T_{\mathrm{eff}}`$=7500 K with the Alonso et al. relations, while the mean temperature estimated by TGMET is 7300 K. An offset of 200 K in $`T_{\mathrm{eff}}`$ is still consistent with an age of 1 Gyr. The individual distance moduli of the cluster members in Tab. 2 yield a mean distance modulus of $`11.56\pm 0.10`$ for the cluster. By correcting for interstellar absorption according to a mean reddening of $`E_{BV}=0.16`$ and $`A_\mathrm{V}/E_{BV}=3.09`$ (Rieke & Lebofsky 1985), we determine the distance of NGC 2355 to be $`1650_{-70}^{+80}`$ pc.
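The reddening and distance quoted above follow from elementary photometric relations; the sketch below reproduces the E(V$`-`$K)/E(B$`-`$V) ratio and the dereddened distance from the numbers given in the text (the small difference from the quoted 1650 pc is rounding).

```python
# Mean observed and intrinsic colours of the cluster giants (from the text).
BV_obs, BV_int = 1.04, 0.88
VK_obs, VK_int = 2.54, 2.12

E_BV = BV_obs - BV_int          # 0.16
E_VK = VK_obs - VK_int          # 0.42
print("E(V-K)/E(B-V) = %.2f" % (E_VK / E_BV))   # ~ 2.6

# Mean distance modulus and the standard extinction law A_V = 3.09 E(B-V).
V_minus_MV = 11.56
A_V = 3.09 * E_BV
mu0 = V_minus_MV - A_V          # dereddened distance modulus ~ 11.07
d_pc = 10.0 ** (mu0 / 5.0 + 1.0)
print("distance = %.0f pc" % d_pc)              # ~ 1.6 kpc
```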
The corresponding height above the galactic plane is 340 pc. The dereddened distance modulus $`(V-M_\mathrm{V})_0=11.06`$ is consistent with the one derived by Ann et al. (1999), $`(m-M)_0=11.4`$, by isochrone and ZAMS fittings on colour-magnitude diagrams, whereas they find a lower metallicity ($`[\mathrm{Fe}/\mathrm{H}]=0.32`$) and a higher reddening $`(E_{BV}=0.25)`$. On the contrary, we are in disagreement with Kaluzny & Mazur (1991) for the distance modulus ($`(m-M)_0=12.1`$) but in better agreement for the metallicity and reddening ($`[\mathrm{Fe}/\mathrm{H}]=+0.13,E_{BV}=0.12`$). Isochrone and ZAMS fitting is well adapted to dense clusters with high quality multicolour photometry, because the three parameters age, metallicity and reddening have to be deduced simultaneously. In the case of NGC 2355, where the photometry is dispersed, this method can lead to a wide range of parameters, as can be seen from the compared studies of Ann et al. (1999) and Kaluzny & Mazur (1991). Spectroscopy concerns fewer stars but constrains the parameters better. In Sect. 2, the angular radius of the cluster’s central body was estimated to be 7′, while that of its halo was estimated to be 15′. At the distance of 1.65 kpc this corresponds to linear radii of 3.3 pc and 7.2 pc respectively. The radius of the central body, 3.3 pc, is typical of old open clusters, whose linear radii are distributed in a small range with a median at 2.65 pc and an upper quartile at 3.45 pc (Janes & Phelps 1994). Fig. 8 presents the dereddened colour-magnitude diagram of the cluster in (V$`-`$K,V), including the members which have been confirmed by their radial velocity, and the candidates within 7′ of the cluster’s center having a probability higher than 90% of being members on the basis of proper motion. For comparison the 1 Gyr isochrone has been transformed into observable quantities and overlaid. The bluest star, GSC 501264, lying in the prolongation of the main sequence, is a typical blue straggler candidate. According to its colour $`(J-K)_0=0.03`$, the Alonso et al. calibration gives an effective temperature of 8300 K. The colour index $`(J-H)=0.09`$, for which the reddening is unknown, gives a consistent temperature, but $`(V-K)_0=0.19`$ is unfortunately outside the limits of the calibration. It was not possible to estimate spectroscopically such a high temperature with TGMET because of the limit of the library at 7500 K. The effective temperature of blue stragglers seems to correspond to a mass which is higher than that of the turnoff stars, and consequently inconsistent with the age of the cluster. This phenomenon was already mentioned in the case of the red giant KM 1 in Sect. 4.
### 5.2 Space velocity and galactic orbit
Combining our results for the absolute proper motion, radial velocity and distance of the cluster, we determine its heliocentric space motion as $`(U,V,W)=(33.5\pm 1.8,18.8\pm 3.6,11.2\pm 3.9)\mathrm{km}\mathrm{s}^{-1}`$ (here U, V, W are vector components with respect to a right-handed triad pointing to the galactic center, the direction of rotation and the northern galactic pole). The uncertainties in the components of the space motion result from the combination of all estimated observational errors which were given in the previous sections. However, due to the relatively large distance of the cluster from the Sun, the error budget is dominated by the uncertainty in the cluster’s proper motion.
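The statement that the proper motion dominates the error budget can be made concrete with the standard relation v_t = 4.74 µ[arcsec/yr] d[pc] km/s: at 1.65 kpc even a small proper-motion error maps onto several km/s. The sketch below also applies the same relation to KM 20, assuming, purely for illustration, that it lies at the same 2.7 kpc distance as KM 26.

```python
def tangential_velocity(mu_mas_per_yr, d_pc):
    """v_t in km/s from a proper motion in mas/yr and a distance in pc."""
    return 4.74 * (mu_mas_per_yr / 1000.0) * d_pc

# Error propagation for the cluster: 1 mas/yr of proper-motion error at 1.65 kpc.
print("1 mas/yr at 1650 pc -> %.1f km/s" % tangential_velocity(1.0, 1650.0))

# KM 20 (mu_l cos b = 2.2, mu_b = 1.6 mas/yr), assumed at the 2.7 kpc of KM 26.
mu_tot = (2.2**2 + 1.6**2) ** 0.5
print("KM 20: v_t ~ %.0f km/s" % tangential_velocity(mu_tot, 2700.0))
```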
In order to obtain the velocity of the cluster in the galactocentric frame, we assume the motion of the Sun in the LSR as $`(U,V,W)_{}=(9.7,5.2,6.7)\mathrm{km}\mathrm{s}^1`$, following the recent result of Bienaymé (1999) which is supported by similar results of e.g. Dehnen & Binney (1998). Furthermore we adopt the current IAU standard values of $`V_{LSR}=220\mathrm{km}\mathrm{s}^1`$ for the local circular rotation velocity and $`R_{}=8.5`$ kpc for the distance of the Sun from the galactic center. The galactocentric position and velocity of the cluster then is $`(x,y,z)=(10.00,0.64,+0.34)\mathrm{kpc}`$ and $`(U,V,W)=(23.5,+206.2,4.2)\mathrm{km}\mathrm{s}^1`$. Together with the galactic gravitational potential these vectors determine the orbit of the cluster in the Galaxy. We have integrated the equations of motion in the galactic model of Allen & Santillan (1991) over the estimated cluster age of 1 Gyr. The resulting orbit is characterized by radial oscillations between distances from the galactic center of 8.9 and 10.1 kpc and vertical oscillations with an amplitude of 350 pc. The median of the distance $`r`$ from the galactic center along the orbit is 9.6 kpc and the median of the distance $`|z|`$ from the galactic plane is 0.24 kpc. The cluster has made 3.7 revolutions around the galactic center and 21 crossings of the galactic disk within its lifetime. If one varies the measured space velocity of the cluster within the error bars of the observations the parameters of the orbit undergo relatively small changes. We find that the radial distances can differ by $`\pm 3\%`$ and the vertical distances by $`\pm 7\%`$ from the above given values for the ‘mean orbit’. Thus we can say with certainty that the cluster keeps well outside the solar circle throughout its revolution around the Galaxy. We currently observe the cluster close to its maximum distance from the plane, i.e. close to the point of reversal of the vertical oscillation. This is consistent with the characteristics of the vertical motion because the probability to find the cluster near the maximum of $`|z|`$ (at a randomly chosen instant) is about a factor of three larger than the corresponding probability for a lower value of $`|z|`$. The statistics of $`|z|`$ along the orbit is such that the cluster spends only 9% of its time in the thin layer of the young disk population at $`|z|50\mathrm{pc}`$ where close encounters with very massive molecular clouds could have occurred. On the other hand, the orbit of NGC 2355 does not reach such extreme vertical distances as a few other old open clusters which are found at $`|z|`$ up to 2.4 kpc (Friel 1995). Thus we recognize NGC 2355 as a fairly normal representative of the old open cluster population. The latter has a scale height of 375 pc (Janes & Phelps 1994) as compared to the scale height of 55 pc for the young population of open clusters. ## 6 Conclusion We have presented a detailed study of stars in the region of NGC 2355, combining new astrometric and spectroscopic data with recent photometric data from other sources. Our main results can be summarised as follows : \- NGC 2355 is at 1.65 kpc of the Sun and 340 pc above the galactic plane in the direction of the anticenter, with a reddening of $`E_{BV}=0.16`$ and $`E_{VK}=0.42`$. \- Its metallicity is $`[\mathrm{Fe}/\mathrm{H}]=0.07\pm 0.11`$ and its age is 1 Gyr. \- NGC 2355 has a core radius of about 0.7 pc, a central component with a radius of 3.3 pc and a halo out to 7.2 pc from the cluster center. 
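The statements that the cluster spends only ~9% of its time at |z| ≤ 50 pc and is several times more likely to be caught near maximum |z| can be checked with a simple harmonic model of the vertical oscillation (amplitude 350 pc). This ignores the anharmonicity of the real disc potential and is only meant as an order-of-magnitude check.

```python
import numpy as np

Z_max = 350.0                               # vertical amplitude in pc
t = np.linspace(0.0, 2.0 * np.pi, 200001)   # one full vertical period
z = np.abs(Z_max * np.sin(t))

# Fraction of time spent in the thin young-disc layer |z| <= 50 pc.
frac_thin = np.mean(z <= 50.0)
print("time at |z| <= 50 pc: %.0f%%" % (100 * frac_thin))     # ~ 9%

# Relative probability of catching the cluster near maximum |z| versus low |z|,
# comparing equal-width (50 pc) bins at the top and bottom of the oscillation.
top = np.mean(z >= Z_max - 50.0)
bottom = np.mean(z <= 50.0)
print("near-maximum / low-|z| probability: %.1f" % (top / bottom))  # ~ 3-4
```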
\- The turnoff stars of NGC 2355 are fast rotators, with a mean projected rotational velocity of $`100\mathrm{km}\mathrm{s}^1`$ and a mean $`T_{\mathrm{eff}}`$ of 7500 K. \- The giant clump is well defined at $`T_{\mathrm{eff}}=5000K`$, $`M_\mathrm{V}=0.51`$. \- Two stragglers have been identified in the cluster: a blue one, and a giant which has an unusual position in the HR diagram, 2.3 mag brighter than the giant clump. \- NGC 2355 has a galactocentric space velocity vector $`(U,V,W)=(23.5,+206.2,4.2)\mathrm{km}\mathrm{s}^1`$ and an orbit which keeps it beyond the solar circle and with only brief passages through the galactic plane. As a by-product of the study of the cluster, we found a moving pair of field giants with a radial velocity of $`50\mathrm{km}\mathrm{s}^1`$. ###### Acknowledgements. We thank J.Guibert and the MAMA team at Paris Observatory for their support by scanning plates, R. LePoole from Leiden Observatory for lending the POSS-I glass copies, the Astronomical Institute Münster for scanning time on the PDS machine, and all colleagues of Bordeaux Observatory who have taken part in the meridian circle observations. We also thank C. Catala who provided his observations to increase the TGMET library, J.-F. Donati who kindly made his deconvolution software available for us, J.-L. Halbwachs and S. Piquard who provided some intermediate measurements of TYCHO, and A. Alonso who provided his calibrations before publication. We are also grateful to R. Cayrel for his comments and suggestions. We have made use in this research of the SIMBAD and VIZIER databases, operated at CDS, Strasbourg, France. M.O. gratefully acknowledges financial support by a Marie Curie research grant from the European Community during this work.
# Untitled Document hep-th/0003118 Black Holes Radiate Mainly on the Brane Roberto Emparan<sup>a</sup>, Gary T. Horowitz<sup>b</sup>, Robert C. Myers<sup>c</sup> <sup>a</sup> Departamento de Física Teórica, Universidad del País Vasco, Apdo. 644, E-48080 Bilbao, Spain <sup>b</sup> Physics Department, University of California, Santa Barbara, CA 93106 USA <sup>c</sup> Department of Physics, McGill University, Montréal, QC, H3A 2T8, Canada <sup>a</sup>wtpemgar@lg.ehu.es, <sup>b</sup>gary@cosmic.physics.ucsb.edu, <sup>c</sup>rcm@hep.physics.mcgill.ca
Abstract We examine the evaporation of a small black hole on a brane in a world with large extra dimensions. Since the masses of many Kaluza-Klein modes are much smaller than the Hawking temperature of the black hole, it has been claimed that most of the energy is radiated into these modes. We show that this is incorrect. Most of the energy goes into the modes on the brane. This raises the possibility of observing Hawking radiation in future high energy colliders if there are large extra dimensions. March, 2000
1. Introduction It has been proposed that space may have extra compact dimensions as large as a millimeter. If all the standard model fields live on a three-brane and only gravity (and perhaps some other unobserved fields) propagate in the bulk, such large extra dimensions are consistent with all current observations. We will consider the evaporation of black holes in this scenario. Although our results hold for any number of large extra dimensions, for definiteness we focus mainly on the case of two extra dimensions of size $`L`$. Since the effective four-dimensional Newton’s constant $`G_4`$ is related to $`G_6`$ by $`G_4=G_6/L^2`$, if the fundamental scale of gravity in the bulk is of order a TeV, $`G_4`$ has the observed value provided $`L\sim 1`$ mm. For weak fields, the bulk metric can be decomposed into the four-dimensional graviton and an infinite tower of Kaluza-Klein modes, which act like four-dimensional spin-two fields with masses starting at $`1/L\sim 10^{-4}`$ eV. One of the most striking consequences of a low fundamental Planck scale is the possibility of forming semiclassical black holes at rather low energies, say of order 100 TeV. Suppose one collapses matter (or collides particles) on the brane to form a black hole of size $`\mathrm{}_{\mathrm{fun}}\ll r_0\ll L`$ (where $`\mathrm{}_{\mathrm{fun}}=G_6^{1/4}`$ is the fundamental, i.e., six-dimensional, Planck length). This black hole has a temperature $`T\sim 1/r_0`$ which is much larger than the mass of the light Kaluza-Klein modes. Since gravity couples to everything, and there are so many Kaluza-Klein modes with mass less than the Hawking temperature, it has been claimed \[2,3\] that the Hawking radiation will be dominated by these Kaluza-Klein modes, with only a tiny fraction of the energy going into standard model particles. In other words, most of the energy would be radiated off of the brane into the bulk. If this were the case, the Hawking radiation from these small black holes would be essentially unobservable. We claim that this argument is incorrect, and most of the Hawking radiation goes into the standard model fields on the brane! The easiest way to see this is to consider the calculation from the six-dimensional perspective. (This argument was given in a slightly different context elsewhere; similar observations were also made independently by Susskind.)
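The statement that a TeV-scale fundamental gravity with two extra dimensions requires L of order a millimetre follows from M_Pl² ≈ M_*^{n+2} L^n (in units ħ = c = 1). The sketch below evaluates this relation; the 1 TeV fundamental scale is the assumed input, and numerical prefactors of order unity, which depend on the compactification convention, are ignored.

```python
hbar_c_GeV_cm = 1.973e-14      # conversion: 1 GeV^-1 = 1.973e-14 cm
M_Pl = 1.22e19                 # four-dimensional Planck mass in GeV
M_star = 1.0e3                 # assumed fundamental scale: 1 TeV

def size_of_extra_dimensions(n, M_star=M_star):
    """L from M_Pl^2 ~ M_star^(n+2) * L^n, ignoring O(1) factors."""
    L_inv_GeV = (M_Pl**2 / M_star**(n + 2)) ** (1.0 / n)   # L in GeV^-1
    return L_inv_GeV * hbar_c_GeV_cm                        # L in cm

for n in (2, 3, 6):
    print("n = %d extra dimensions: L ~ %.1e cm" % (n, size_of_extra_dimensions(n)))
# n = 2 gives L of order 0.1 cm, i.e. roughly a millimetre.
```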
For a single massless six-dimensional field, the rate at which energy is radiated is of order $$\frac{dE}{dt}\sim A_6T^6\sim \frac{r_0^4}{r_0^6}\sim \frac{1}{r_0^2}$$ where $`A_6`$ denotes the area of the six-dimensional black hole. For a single massless four-dimensional field on the brane, the rate of energy loss is of order $$\frac{dE}{dt}\sim A_4T^4\sim \frac{r_0^2}{r_0^4}\sim \frac{1}{r_0^2}$$ and hence is the same. That is, with a single relevant scale $`r_0`$ determining the Hawking radiation, bulk and brane fields must both have $`dE/dt\sim r_0^{-2}`$. Hence the Hawking evaporation must emit comparable amounts of energy into the bulk and brane. However, with the typical assumption that there are many more fields on the brane than in the bulk, one would conclude that most of the energy goes into the observable four-dimensional fields. While the detection of this Hawking radiation would likely not be the first experimental signature of large extra dimensions, such measurements would provide a dramatic new window on black hole microphysics. We will examine this argument in more detail below (and confirm its validity), but first we must ask what was wrong with the original arguments suggesting that the Hawking radiation goes mostly into Kaluza-Klein modes. In one form, one views the emission of Hawking radiation as a six-dimensional process. In this case, since brane fields seem to have a tiny phase space compared to bulk fields, it would appear that the emission of the latter should dominate the Hawking evaporation. However, it is incorrect to think of brane fields as bulk fields confined to a limited phase space. The brane fields are intrinsically four-dimensional, and their emission is governed by the four-dimensional relation (1.1), and not the six-dimensional formula (1.1) with a restricted area. Dominance of the Kaluza-Klein modes might also be argued from a four-dimensional point of view. In this case, it may appear that the Kaluza-Klein modes must dominate the evaporation since there are a large number (of order $`(L/r_0)^2`$) of light modes with masses below the scale of the Hawking temperature. However, here it is incorrect to think of the individual Kaluza-Klein modes of the bulk graviton as massive spin two fields on the brane with standard (minimal) gravitational couplings. Rather, since the Kaluza-Klein modes are excitations in the full transverse space, their overlap with the small (six-dimensional) black holes is suppressed by the geometric factor $`(r_0/L)^2`$ relative to the brane fields. Hence this geometric suppression precisely compensates for the enormous number of modes, and the total contribution of all Kaluza-Klein modes is only the same order as that from a single brane field. Since eq. (1.1) automatically incorporates the emission of all Kaluza-Klein modes, clearly this four-dimensional approach is a complicated reorganization of a simple six-dimensional situation.
2. Detailed calculations We now want to look in more detail at the rate of energy loss by a black hole to modes on the brane and in the bulk. We will consider a general dimension $`d`$ for the bulk spacetime, and assume that we live on a (3+1)-dimensional brane. The extra dimensions will have size $`L`$.
Since we are assuming the size of the black hole $`r_0`$ is much less than $`L`$, the geometry near the black hole is simply that of a $`d`$-dimensional Schwarzschild solution $$ds^2=-f(r)dt^2+\frac{dr^2}{f(r)}+r^2d\mathrm{\Omega }_{d-2}^2$$ with $$f(r)=1-\left(\frac{r_0}{r}\right)^{d-3}.$$ The event horizon is thus at $`r=r_0`$, and the area of the event horizon is $`A_d=r_0^{d-2}\mathrm{\Omega }_{d-2}`$ where $`\mathrm{\Omega }_n`$ denotes the volume of a unit $`n`$-sphere. If a black hole is formed from matter on the brane, symmetry requires that the brane pass through the equator of the black hole. We further assume that the three-brane is essentially a test brane with negligible self gravity of its own. (We also assume that the brane has negligible thickness. This is reasonable since the actual thickness of the brane is likely to be of order the fundamental scale $`\mathrm{}_{\mathrm{fun}}`$, and a black hole will behave semi-classically only if $`r_0\gg \mathrm{}_{\mathrm{fun}}`$.) Then the induced metric on the brane will be $$ds^2=-f(r)dt^2+\frac{dr^2}{f(r)}+r^2d\mathrm{\Omega }_2^2$$ with $`f(r)`$ still given by (2.1). On the brane then, the event horizon is again at $`r=r_0`$, and the area of the event horizon is $`A_4=4\pi r_0^2`$. This induced metric on the brane is certainly not the four-dimensional Schwarzschild geometry. Since the Ricci tensor of this four-dimensional metric (2.1) is nonzero near the horizon, one can think of it as a black hole with matter fields (i.e., Kaluza-Klein modes) around it. However, the calculation of Hawking evaporation relies mainly on properties of the horizon, such as its surface gravity. Changing the geometry outside will change the effective potential that waves have to propagate through. This will modify the grey body factors, but since the potential is qualitatively the same, the total energy radiated is changed only by factors of order unity. Since the Hawking temperature is constant over the horizon, it is the same for both the black hole in the bulk and on the brane, and is given by $$T=\frac{d-3}{4\pi r_0}.$$ The metric (2.1) (with $`f`$ given by (2.1)) has no $`1/r`$ term and hence seems to give zero mass in four dimensions. However, this metric only describes the geometry near the black hole. For $`r\gg L`$ the geometry will be approximated by (2.1) with $$f(r)\simeq 1-\frac{2G_4M}{r}$$ where $`M`$ is the mass of the $`d`$-dimensional black hole $$M=\frac{(d-2)r_0^{d-3}\mathrm{\Omega }_{d-2}}{16\pi G_d}$$ In other words, the mass measured on the brane is the same as the mass in the bulk. This can be seen as follows. Consider the higher dimensional spacetime and unwrap the compact dimensions. The result is a cubic array of black holes, each of mass $`M`$ and separated by a distance $`L`$. From a large distance, this looks like a “surface density” $`\rho =M/L^{d-4}`$. (Here, we ignore the gravitational interaction energy of the black holes in the array, which is justified for $`r_0\ll L`$.) The asymptotic metric will thus contain the term $`f(r)=1-(2G_d\rho /r)`$. However, since $`G_d=G_4L^{d-4}`$, this is equivalent to (2.1). Although this $`1/r`$ term is the dominant correction to the flat metric for $`r\gg L`$, it is already quite small for $`r\sim L`$ and will not cause a significant modification to our estimates of the energy radiated. We now show that the emission rate of Kaluza-Klein modes, regarded as four-dimensional fields, is actually suppressed relative to modes that propagate only along the brane.
In order to see this, let us consider the calculation of the emission rate of a massless bulk field in the following way: since we have to sum over all the modes of the field that are emitted by the black hole, let us decompose these according to the momentum $`𝐤`$ which they carry into the $`d4`$ transverse dimensions. On the brane, this Kaluza-Klein momentum is identified with the four-dimensional mass of these modes, which we denote $`m=|𝐤|`$. If we then sum over all other quantum numbers, we will find the emission rate corresponding to a Kaluza-Klein mode with momentum $`𝐤`$. Proceeding in this way we get, for the emission rate per unit frequency interval, of modes with momenta in the interval ($`𝐤,𝐤+d𝐤`$), $$\frac{dE}{d\omega dt}(\omega ,𝐤)(\omega ^2m^2)\frac{\omega A_d}{e^{\beta \omega }1}d^{d4}k.$$ Here, $`A_d`$ is the area of the black hole in the $`d`$-dimensional bulk<sup>4</sup> The only difference for a fermionic mode would, of course, be to change the sign of the ‘one’ in the denominator in the formula above.. We are neglecting purely numerical factors since we will find below that they do not play any significant role. As a check, when this expression is integrated over all Kaluza-Klein modes, one recovers the emission rate of a massless bosonic field into the $`d`$-dimensional bulk: $$\frac{dE}{d\omega dt}(\omega )=_{|𝐤|=0}^{|𝐤|=\omega }\frac{dE}{d\omega dt}(\omega ,𝐤)d^{d4}k\frac{\omega ^{d1}A_d}{e^{\beta \omega }1}.$$ Consider a light Kaluza-Klein mode, with a mass much smaller than the black hole temperature, $`m1/r_0`$. We set $`d^{d4}k(1/L)^{d4}`$ for an individual mode, and $`A_dr_0^{d4}A_b`$, with $`A_b`$ the sectional area on the brane. Then, $$\frac{dE}{d\omega dt}(\omega ,m)\left(\frac{r_0}{L}\right)^{d4}(\omega ^2m^2)\frac{\omega A_b}{e^{\beta \omega }1}.$$ which is identical to the emission rate of a massive field in four dimensions, except for a suppression factor of $`(r_0/L)^{d4}`$. (Note that this formula applies equally well for $`m=0`$). So we see that the Hawking radiation into each Kaluza-Klein mode (among these, the massless graviton) is much smaller than the radiation into any other minimally coupled field that propagates only in four dimensions. In particular, compared with a purely four-dimensional gravity theory, Hawking radiation in gravitons on the brane is suppressed by a factor of $`(r_0/L)^{d4}`$. Still the total radiation (2.1) into a bulk field is comparable to that into a field on the brane, because there are of order $`(L/r_0)^{d4}`$ light modes with $`m<T1/r_0`$. As we mentioned earlier, this suppression factor can be understood as arising from the small geometric overlap between a bulk mode and a small black hole which has only a limited extent in the transverse dimensions. Of course, since there is no analogous effect for all the nongravitational fields on the brane, this supports our conclusion that most of the energy is radiated on the brane. Since the number of relevant fields on the brane may be only a factor of ten or so larger than the number of bulk fields, one might worry that the claim that the Hawking radiation is dominated by brane fields could still be thwarted by large numerical factors coming from the higher dimensional calculation. To check this, we consider two improvements over the rough estimate of the radiation rates given in (1.1) and (1.1). The first is to include the dimension dependent Stefan-Boltzman constant $`\sigma _n`$. 
In $`n`$ dimensions, the energy radiated by a black body of temperature $`T`$ and surface area $`A_n`$ is $$\frac{dE_n}{dt}=\sigma _nA_nT^n$$ Repeating the standard calculations found in any statistical mechanics text, in higher dimensions, one finds that the $`n`$-dimensional Stefan-Boltzman constant is $$\begin{array}{cc}\hfill \sigma _n=& \frac{\mathrm{\Omega }_{n3}}{(2\pi )^{n1}(n2)}_0^{\mathrm{}}\frac{z^{n1}dz}{e^z1}\hfill \\ \hfill =& \frac{\mathrm{\Omega }_{n3}}{(2\pi )^{n1}(n2)}\mathrm{\Gamma }(n)\zeta (n)\hfill \end{array}$$ with $`\zeta (n)`$ denoting the Riemann zeta function. These factors do not change much with the dimension, in the cases of interest. For example, $$\sigma _4=\frac{\pi ^2}{120}.08,\sigma _6=\frac{\pi ^3}{504}.06,\sigma _{10}=\frac{\pi ^5}{3168}.097$$ Although formally these quantities have been calculated for infinite (uncompactified) spacetimes, eq. (2.1) provides a good approximation when $`T1/L`$. The fact that $`\sigma _n`$ changes very little with dimension, confirms that even though higher dimensional spacetimes have infinitely many more modes (corresponding to excitations in the extra dimensions), the rate at which energy is radiated by a black body with radius $`r_0`$ and temperature $`T1/r_0`$ is roughly independent of the dimension. Substituting eq. (2.1) in eq. (2.1), we find for the black hole that $$\frac{dE_n}{dt}=\sigma _n\mathrm{\Omega }_{n2}\left(\frac{d3}{4\pi }\right)^n\frac{1}{r_0^2}=\frac{\mathrm{\Omega }_{n3}}{(2\pi )^{n1}(n2)}\mathrm{\Gamma }(n)\zeta (n)\mathrm{\Omega }_{n2}\left(\frac{d3}{4\pi }\right)^n\frac{1}{r_0^2}$$ where we have used the horizon area for $`A_n`$. Hence for modes in a three-brane, we find $$\frac{dE_4}{dt}=\frac{(d3)^4}{7680\pi }\frac{1}{r_0^2}.$$ For the case of a six-dimensional world, $`n=6`$, with two extra large compact dimensions $$\frac{dE_6}{dt}=\frac{(d3)^6}{4^6189\pi }\frac{1}{r_0^2}.$$ Now if we substitute in $`d`$=6 and take the ratio, we find $$\frac{dE_4/dt}{dE_6/dt}=\frac{56}{5}=11.2.$$ Hence by these calculations, the emission of a bulk mode is actually suppressed relative to a mode confined to the brane. If we consider $`n`$=$`d`$=10, the ratio becomes $$\frac{dE_4/dt}{dE_{10}/dt}12.1.$$ However, there is a second improvement which we can easily incorporate into our calculations. This concerns the area that appears in (2.1). We have been using the horizon area as the area of the black body emitter in eq. (2.1), but at least in the geometric optics approximation, a black hole acts as a perfect absorber of a slightly larger radius. Recall that in four dimensions, there is a critical radius $`r_c=(3\sqrt{3}/2)r_02.6r_0`$ for null geodesics. If a photon travels inside this radius, it is captured by the black hole. Detailed calculations have shown that the total energy radiated is better approximated by assuming the area is given by $`r_c`$ rather than $`r_0`$. Note, however, that this DeWitt approximation is not obviously justified since the typical wavelengths are of order the size of the black hole. Although detailed calculations are not yet available in higher dimensions, we expect a similar improvement exists in this case as well. For a general dimension, $`r_c`$ becomes $$r_c=\left(\frac{d1}{2}\right)^{1/(d3)}\sqrt{\frac{d1}{d3}}r_0.$$ The ratio decreases slightly with the dimension: at $`d=6`$, $`r_c1.75r_0`$; at $`d=10`$, $`r_c1.41r_0`$. 
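The dimension-dependent Stefan-Boltzmann constants and the brane-to-bulk emission ratios quoted above are easy to reproduce numerically. The short script below evaluates σ_n from the formula given (using a truncated sum for ζ(n)) and recovers σ_4 ≈ 0.08, σ_6 ≈ 0.06, σ_10 ≈ 0.097 together with the ratios 11.2 and ≈ 12.1.

```python
from math import pi, gamma

def zeta(n, terms=100000):
    """Riemann zeta via direct summation (plenty accurate for n >= 4)."""
    return sum(k ** (-n) for k in range(1, terms + 1))

def omega(m):
    """Volume (surface area) of the unit m-sphere."""
    return 2.0 * pi ** ((m + 1) / 2.0) / gamma((m + 1) / 2.0)

def sigma(n):
    """n-dimensional Stefan-Boltzmann constant, as in the text."""
    return omega(n - 3) / ((2.0 * pi) ** (n - 1) * (n - 2)) * gamma(n) * zeta(n)

def emission_rate(n, d):
    """dE_n/dt up to the common 1/r_0^2 factor, for bulk dimension d."""
    return sigma(n) * omega(n - 2) * ((d - 3) / (4.0 * pi)) ** n

print("sigma_4, sigma_6, sigma_10 =",
      [round(sigma(n), 3) for n in (4, 6, 10)])       # ~0.082, 0.062, 0.097
print("brane/bulk ratio, d=6 :", emission_rate(4, 6) / emission_rate(6, 6))     # 11.2
print("brane/bulk ratio, d=10:", emission_rate(4, 10) / emission_rate(10, 10))  # ~12.1
```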
Note that this critical radius will be the same for brane and bulk modes since the problem of calculating null geodesics involves only motion in a plane of the full geometry (2.1). The correction due to this effect enters the emission rate (2.1) through the area factor. Since the bulk modes include a higher power of the radius, increasing the radius increases the relative decay rates for the bulk modes by a factor $`(r_c/r_0)^{n2}`$. With this correction, we find $$\frac{dE_4/dt}{dE_6/dt}3.66,\mathrm{and}\frac{dE_4/dt}{dE_{10}/dt}1.54,$$ and so the ratios become closer to one. Thus there are no unexpected large factors to ruin the naive estimate that a Hawking evaporation emits as much energy into a typical brane field as into a typical bulk field. A definitive comparison of the bulk and brane radiation rates would require a more detailed analysis. In particular, one expects a suppression for higher spin fields due to angular momentum barriers . For example, in a pure four-dimensional calculation, the radiation rate for the graviton is approximately 10 times smaller than that for a massless spin-one-half field . Of course, such detailed calculations would require a specific brane-world model to determine the exact black hole geometry and the precise multiplicity of bulk and brane fields. 3. Discussion So far we have considered small black holes with $`r_0<L`$. Will larger black holes also radiate mainly on the brane? If $`r_0>L`$, the solution is simply a product of four-dimensional Schwarzschild and a torus. Hence the horizon area is $`A_d=4\pi r_0^2L^{d4}`$, and the geometric suppression factor in eq. (2.1) is replaced by one. However, the Hawking temperature is now lower than the mass of all Kaluza-Klein modes, so their contribution to the Hawking radiation is clearly suppressed. Approximating the radiation rate with eq. (1.1), we have $$\frac{dE}{dt}A_dT^dr_0^2T^4(LT)^{d4}(L/r_0)^{d4}r_0^2T^4.$$ So the total contribution of the Kaluza-Klein modes is suppressed by the factor $`(L/r_0)^{d4}`$ relative to that a single brane field. Actually, since $`T<1/L`$, this six-dimensional formula only accurately captures the contributions of modes with relatively large Kaluza-Klein momentum. The dominant contribution will actually come from the massless mode which in this regime radiates identically to a brane field. So for large black holes, a bulk field still carries essentially the same energy as a field on the brane, and the latter again dominate the Hawking radiation due to the relatively high multiplicity of light brane fields. If a black hole initially has $`r_0>L`$, then Hawking radiation will cause the Schwarzschild radius to decrease. When $`r_0L`$, the four-dimensional black hole $`\times `$ $`(S^1)^{d4}`$ solution becomes unstable , and is believed to break up into $`d`$-dimensional black holes.<sup>5</sup> Note that at the transition with $`r_0L`$, the black hole mass is $`ML^{d3}/G_d=L/G_4`$. Although this is much larger than the four dimensional Planck mass, it is much smaller than a typical stellar mass (e.g., for $`d=6`$ and $`L=1`$ mm, $`M=10^{27}`$ gms, which is about the mass of the Earth). These black holes attract each other and coalesce, forming a single higher dimensional black hole. Could this final black hole lie in the bulk and not on the brane? This is highly unlikely since a black hole will not slide off a brane! Rather it feels a restoring force due to the brane tension. To see this, we must consider the condition for a black hole on a brane to be static. 
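The effect of replacing the horizon radius by the capture radius r_c can be checked in the same way: since the brane area scales as r_c² and the bulk area as r_c^{d−2}, the brane-to-bulk ratio is divided by (r_c/r_0)^{d−4}. The sketch below reproduces r_c/r_0 ≈ 1.75 and 1.41 and ratios close to the quoted 3.66 and 1.54 (the small differences are rounding).

```python
from math import sqrt

def capture_over_horizon(d):
    """r_c / r_0 for a d-dimensional Schwarzschild black hole."""
    return ((d - 1) / 2.0) ** (1.0 / (d - 3)) * sqrt((d - 1.0) / (d - 3.0))

uncorrected = {6: 11.2, 10: 12.1}   # geometric-optics ratios from the text
for d in (6, 10):
    x = capture_over_horizon(d)
    corrected = uncorrected[d] / x ** (d - 4)
    print("d=%d: r_c/r_0 = %.2f, corrected brane/bulk ratio = %.2f" % (d, x, corrected))
# d=6  -> r_c/r_0 ~ 1.75, ratio ~ 3.6
# d=10 -> r_c/r_0 ~ 1.41, ratio ~ 1.6
```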
A black hole will grow whenever $`T_{\mu \nu }\mathrm{}^\mu \mathrm{}^\nu >0`$ where $`\mathrm{}^\mu `$ is a null geodesic generator of the event horizon. This is just the statement that energy is crossing the horizon. The stress energy tensor of a brane is proportional to its induced metric. In order for the black hole to be static (and not swallow up the brane) $`\mathrm{}^\mu `$ must lie entirely in the brane so $`T_{\mu \nu }\mathrm{}^\mu \mathrm{}^\nu \mathrm{}_\mu \mathrm{}^\mu =0`$. This will be the case if the radial direction orthogonal to the black hole is tangent to the brane. In other words, the brane must intersect the black hole orthogonally. So if one pulls on a black hole on a brane, the brane bends to stay orthogonal and pulls back on the black hole. Thus, a black hole on the brane will attract a black hole in the bulk, forming a larger black hole on the brane. Although we have found that most of the radiation goes into purely four-dimensional fields, the evaporation of a small black hole will not proceed as in a purely four-dimensional theory. The black hole is $`d`$-dimensional, and its mass $`M`$ is related to the radius as in (2.1). In particular, this means that the lifetime of the black hole will not be like that of a four-dimensional black hole, $`\tau _4G_4^2M^3`$, but rather, $`\tau _dG_d^{\frac{2}{d3}}M^{\frac{d1}{d3}}`$ . Note that $`\tau _d(L/r_0)^{2(d4)}\tau _4`$ and so the lifetime is longer (possibly enormously longer) than would have been expected from four-dimensional Einstein gravity. The essential feature is that when $`G_dM<L^{d3}`$ (i.e., $`r_0<L`$), for a fixed mass, the Schwarzschild radius is larger than it would be for a four-dimensional black hole. This means that the temperature is lower, the horizon area is larger, and the evaporation rate is slower. The fact that the horizon area is larger is the feature which results in the higher dimensional black hole being entropically favored . In the scenario with $`d=6`$ and $`L1`$ mm, the lifetime of a black hole formed at $`M100`$ TeV (so $`r_010^{15}`$ mm) would be $`\tau _610^{25}`$ s.<sup>6</sup> For a black hole with mass smaller than $`10^{19}`$ GeV, it is not meaningful to compare its lifetime with a semiclassical four-dimensional estimate. Finally, although we have focused our discussion on the large extra dimension scenario, black holes still radiate mainly on the brane in the Randall-Sundrum scenario with an infinite extra dimension. As discussed in \[4,,11,,12,,13\], large black holes on the brane (with Schwarzschild radius $`r_0`$ larger than the scale $`R`$ of the bulk cosmological constant) appear as flattened pancakes and have a five-dimensional area of order $`Ar_0^2R`$. The temperature is constant over the horizon and of order $`T1/r_0`$. So the energy radiated in five-dimensional bulk modes is $`dE/dtA_5T^5R/r_0^3`$ which is much smaller than the energy radiated in four-dimensional modes on the brane: $`dE/dtA_4T^41/r_0^2`$ . Black holes which are smaller than the AdS curvature scale will be approximately spherical and behave as we have discussed above. Given that small black holes radiate mainly on the brane (and that such black holes will not slip off the brane), the brane-world scenario has the potential to make interesting observable predictions about small black holes appearing either in collider experiments or in the early universe. It will be interesting to investigate their detailed phenomenology. 
Acknowledgements The work of RE is partially supported by UPV grant 063.310-EB187/98 and CICYT AEN99-0315. GTH was supported in part by NSF Grant PHY95-07065. RCM was supported in part by NSERC of Canada and Fonds du Québec. This paper has report numbers EHU-FT/0003 and McGill/00-10. References relax N. Arkani-Hamed, S. Dimopoulos, and G. Dvali, Phys. Lett. B429 (1998) 263, hep-ph/9803315; Phys. Rev. D59 (1999) 086004, hep-ph/9807344; I. Antoniadis, N. Arkani-Hamed, S. Dimopoulos and G. Dvali, Phys. Lett. B436 (1998) 257, hep-ph/9804398. relax P. Argyres, S. Dimopoulos, and J. March-Russell, Phys. Lett. B441 (1998) 96, hep-th/9808138. relax T. Banks and W. Fischler, ‘‘A Model for High Energy Scattering in Quantum Gravity", hep-th/9906038. relax R. Emparan, G.T. Horowitz and R.C. Myers, J. High Energy Phys. 01 (2000) 007, hep-th/9911043. relax L. Susskind, private communication relax N. Sánchez, Phys. Rev. D18 (1978) 1030; pages 461-479 (hep-th/9711068) in ‘String Theory in Curved Spacetimes,’ ed., N. Sánchez (World Scientific, 1996). relax B.S. DeWitt, Phys. Rep. C19 (1975) 297. relax D. Page, Phys. Rev. D13 (1976) 198; Phys. Rev. D14 (1976) 3260. relax R. Gregory and R. Laflamme, Phys. Rev. Lett. 70 (1993) 2837, hep-th/9301052; Nucl. Phys. B428 (1994) 399, hep-th/9404071. relax L. Randall and R. Sundrum, Phys. Rev. Lett. 83 (1999) 4690, hep-th/9906064; 3370, hep-ph/9905221. relax A. Chamblin, S.W. Hawking and H.S. Reall, Phys. Rev. D61 (2000) 065007, hep-th/9909205. relax S. Giddings, E. Katz, and L. Randall, ‘‘Linearized Gravity in Brane Backgrounds," hep-th/0002091. relax R. Emparan, G. T. Horowitz and R. C. Myers, J. High Energy Phys. 01 (2000) 021, hep-th/9912135.
# 1 Problem ## 1 Problem The principle of an electrostatic accelerator is that when a charge $`e`$ escapes from a conducting plane that supports a uniform electric field of strength $`E_0`$, then the charge gains energy $`eE_0d`$ as it moves distance $`d`$ from the plane. Where does this energy come from? Show that the mechanical energy gain of the electron is balanced by the decrease in the electrostatic field energy of the system.
## 2 Solution Once the charge has reached distance $`d`$ from the plane, the static electric field $`𝐄_e`$ at an arbitrary point r due to the charge can be calculated by summing the field of the charge plus its image charge, $$𝐄_e(𝐫,d)=\frac{e𝐫_1}{r_1^3}-\frac{e𝐫_2}{r_2^3},$$ (1) where $`𝐫_1`$ ($`𝐫_2`$) points from the charge (image) to the observation point r, as illustrated in Fig. 1. The total electric field is then $`E_0\widehat{𝐳}+𝐄_e`$. It turns out to be convenient to use a cylindrical coordinate system, where the observation point is $`𝐫=(r,\theta ,z)=(r,0,z)`$, and the charge is at $`(0,0,d)`$. Then, $$r_{1,2}^2=r^2+(z\mp d)^2.$$ (2) The part of the electrostatic field energy that varies with the position of the charge is the interaction term (in Gaussian units), $$U_{\mathrm{int}}=\int \frac{E_0\widehat{𝐳}\cdot 𝐄_e}{4\pi }\,d\mathrm{Vol}=\frac{eE_0}{4\pi }\int _0^{\mathrm{\infty }}dz\int _0^{\mathrm{\infty }}\pi \,dr^2\left(\frac{z-d}{[r^2+(z-d)^2]^{3/2}}-\frac{z+d}{[r^2+(z+d)^2]^{3/2}}\right)$$ $$=\frac{eE_0}{4}\int _0^{\mathrm{\infty }}dz\left(\left\{\begin{array}{cc}2\hfill & \text{if}z>d\hfill \\ -2\hfill & \text{if}z<d\hfill \end{array}\right\}-2\right)=-eE_0\int _0^ddz=-eE_0d.$$ When the particle has traversed a potential difference $`V=E_0d`$, it has gained energy $`eV`$ and the electromagnetic field has lost the same energy. In a practical “electrostatic” accelerator, the particle is freed from an electrode at potential $`-V`$ and emerges with energy $`eV`$ in a region of zero potential. However, the particle could not be moved to the negative electrode from a region of zero potential by purely electrostatic forces unless the particle lost energy $`eV`$ in the process, leading to zero overall energy change. An “electrostatic” accelerator must have an essential component (such as a battery) that provides a nonelectrostatic force that can absorb the energy extracted from the electrostatic field while moving the charge from potential zero, so as to put the charge at rest at potential $`-V`$ prior to acceleration.
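As a numerical cross-check of the result, the double integral for U_int can be evaluated directly; in Gaussian units with e = E_0 = d = 1 it should come out to −1, i.e. U_int = −eE_0d. This is an independent sketch of the calculation above, not part of the original solution.

```python
import numpy as np
from scipy.integrate import quad

e, E0, d = 1.0, 1.0, 1.0   # Gaussian units

def bracket(u, z):
    """Bracketed integrand (z-part of charge + image fields), with u = r^2."""
    return ((z - d) / (u + (z - d) ** 2) ** 1.5
            - (z + d) / (u + (z + d) ** 2) ** 1.5)

def inner(z):
    return quad(bracket, 0.0, np.inf, args=(z,))[0]

# U_int = (e E0 / 4 pi) * Int dz Int pi du (bracket) = (e E0 / 4) * Int dz inner(z).
outer = quad(inner, 0.0, 10.0 * d, points=[d], limit=200)[0]
U_int = e * E0 / 4.0 * outer
print("U_int = %.4f  (expected %.4f)" % (U_int, -e * E0 * d))
```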
# Low thermal conductivity of the layered oxide (Na,Ca)Co2O4: Another example of a phonon glass and an electron crystal ## Abstract The thermal conductivity of polycrystalline samples of (Na,Ca)Co<sub>2</sub>O<sub>4</sub> is found to be unusually low, 20 mW/cmK at 280 K. On the assumption of the Wiedemann-Franz law, the lattice thermal conductivity is estimated to be 18 mW/cmK at 280 K, and it does not change appreciably with the substitution of Ca for Na. A quantitative analysis has revealed that the phonon mean free path is comparable with the lattice parameters, where the point-defect scattering plays an important role. Electronically the same samples show a metallic conduction down to 4.2 K, which strongly suggests that NaCo<sub>2</sub>O<sub>4</sub> exhibits a glass-like poor thermal conduction together with a metal-like good electrical conduction. The present study further suggests that a strongly correlated system with layered structure can act as a material of a phonon glass and an electron crystal. Thermoelectric materials have recently attracted a renewed interest as an application to a clean energy-conversion system. The conversion efficiency of a thermoelectric material is characterized by the figure of merit $`Z=S^2/\rho \kappa `$, where $`S`$, $`\rho `$ and $`\kappa `$ are the thermopower, the resistivity and the thermal conductivity, respectively. At a temperature $`T`$, a dimensionless value of $`ZT`$ is required to be more than unity for a good thermoelectric material, which is, however, difficult to realize. We have found a large thermopower (100 $`\mu `$V/K at 300 K) and a low resistivity (200 $`\mu \mathrm{\Omega }`$cm at 300 K) for NaCo<sub>2</sub>O<sub>4</sub> single crystals. These parameters suggest that NaCo<sub>2</sub>O<sub>4</sub> is a potential thermoelectric material. An important finding is that the transport properties are difficult to understand in the framework of a conventional one-electron picture based on band theories. We have proposed that strong electron-electron correlation plays an important role in the enhancement of the thermopower. Very recently Ando et al. have found that the electron specific-heat coefficient of NaCo<sub>2</sub>O<sub>4</sub> is as large as 48 mJ/molK<sup>2</sup>, which is substantially enhanced from the free-electron value, possibly owing to the strong correlation. In addition to a large thermopower and a low resistivity, a thermoelectric material is required to show a low thermal conductivity. In view of this, a filled skutterudite Ce(Fe,Co)<sub>4</sub>Sb<sub>12</sub> shows quite interesting properties. The most remarkable feature of this compound is that “filled” Ce ions make the lattice thermal conductivity several times lower than that for an unfilled skutterudite CoSb<sub>3</sub>. The Ce ions are weakly bound in an oversized atomic cage so that they will vibrate independently from the other atoms to cause large local vibrations. This vibration and the atomic cage are named “rattling” and a “rattling site”, respectively. As a result, the phonon mean free path can be as short as the lattice parameters. Namely this compound has a poor thermal conduction like a glass and a good electric conduction like a crystal, which Slack named a material of “a phonon glass and an electron crystal”. 
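For orientation, the dimensionless figure of merit implied by such numbers is easy to evaluate. The sketch below combines the single-crystal S and ρ quoted above with the low polycrystalline κ reported in this work; mixing single-crystal and polycrystalline values in this way is only illustrative of the material's potential, not a measured ZT, and the polycrystalline figure of merit reported later in the text is considerably lower.

```python
S = 100e-6             # thermopower, V/K (single crystal, 300 K)
rho = 200e-6 * 1e-2    # resistivity: 200 micro-ohm cm -> ohm m
kappa = 20e-3 / 1e-2   # thermal conductivity: 20 mW/(cm K) -> W/(m K)
T = 300.0

Z = S ** 2 / (rho * kappa)    # figure of merit, 1/K
print("Z  = %.1e 1/K" % Z)                 # ~ 2.5e-3 1/K
print("ZT = %.2f at %d K" % (Z * T, T))    # ~ 0.75
```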
It should be mentioned that rattling is not the only reason for the low thermal conductivity, where point defects and/or solid solutions significantly reduce the thermal conductivity of La<sub>x</sub>(Fe,Co)<sub>4</sub>Sb<sub>12</sub> ($`x<1`$) and Co$`{}_{1x}{}^{}M_{x}^{}`$Sb<sub>3</sub> ($`M`$=Fe, Ni, and Pt). Nevertheless a search for materials having rattling sites is a recent trend for thermoelectric-material hunting. Through this search, BaGa<sub>16</sub>Ge<sub>30</sub> and Tl$`{}_{2}{}^{}M`$Te<sub>5</sub> ($`M`$=Sn and Ge) have been discovered as potential thermoelectric materials with low thermal conductivity. A preliminary study of the thermal conductivity of polycrystalline NaCo<sub>2</sub>O<sub>4</sub>, which has no rattling sites, revealed a low value of 15-20 mW/cmK at 300 K. This is indeed unexpectedly low, because a material consisting of light atoms such as oxygens will have a high thermal conductivity. In fact, polycrystalline samples of a high-temperature superconducting copper oxide show a higher value of 40-50 mW/cmK at 300 K. This is qualitatively understood from its crystal structure as schematically shown in the inset of Fig. 1. NaCo<sub>2</sub>O<sub>4</sub> is a layered oxide, which consists of the alternate stack of the CoO<sub>2</sub> layer and the Na layer. The CoO<sub>2</sub> layer is responsible for the electric conduction, whereas the Na layer works only as a charge reservoir to stabilize the crystal structure. The most important feature is that the Na ions randomly occupy 50% of the regular sites in the Na layer. The Na layer is highly disordered like an amorphous solid, and it looks like a glass for the in-plane phonons. Thus significant reduction of the thermal conductivity is likely to occur in the sandwicth structure made of the crystalline metallic layers and the amorphous insulating layers. In this paper, we report on measurements and quantitative analyses on the thermal conductivity of polycrystalline samples of (Na,Ca)Co<sub>2</sub>O<sub>4</sub> from 15 to 280 K. The observed thermal conductivity is like that for a disordered crystal, and is insensitive to the substitution of Ca for Na. These results imply that the phonon mean free path is as short as the lattice parameters, and a semi-quantitative analysis reveals that the point-defect scattering due to the solid solution of Na ions and vacancies effectively reduces the lattice thermal conductivity down to 15-20 mW/cmK. On the other hand, the electrical resistivity remains metallic down to 4.2 K, which means that the electron mean free path is much longer than the lattice parameters. Thus NaCo<sub>2</sub>O<sub>4</sub> can be a material of a phonon glass and an electron crystal, whose conduction mechanisms are qualitatively different from those of the “rattler” model of the filled skutterudite. Polycrystalline samples of Na<sub>1.2-x</sub>Ca<sub>x</sub>Co<sub>2</sub>O<sub>4</sub> ($`x`$=0, 0.05, 0.10 and 0.15) were prepared through a solid-state reaction. Starting powders of NaCO<sub>3</sub>, CaCO<sub>3</sub> and Co<sub>3</sub>O<sub>4</sub> were mixed and calcined at 860C for 12 h. The product was finely ground, pressed into a pellet, and sintered at 920C for 12 h. Since Na tends to evaporate during calcination, we added 20 % excess Na. Namely we expected samples of the nominal composition of Na<sub>1.2-x</sub>Ca<sub>x</sub>Co<sub>2</sub>O<sub>4</sub> to be Na<sub>1-x</sub>Ca<sub>x</sub>Co<sub>2</sub>O<sub>4</sub>, which we will denote as (Na,Ca)Co<sub>2</sub>O<sub>4</sub>. 
The thermal conductivity was measured using a steady-state technique in a closed refrigerator pumped down to 10<sup>-6</sup> Torr. The sample was pasted on a copper block with silver paint (Dupont 4922) to make a good thermal contact with a heat bath, and on the other side of the sample a chip resistance heater (120 $`\mathrm{\Omega }`$) was pasted to supply heat current. Temperature gradient was monitored by a differential thermocouple made of Chromel-Constantan, while temperature was monitored with a resistance thermometer (Lakeshore CERNOX 1050). Figure 1 shows the thermal conductivity of (Na,Ca)Co<sub>2</sub>O<sub>4</sub>. The substitution of Ca for Na only slightly decreases the thermal conductivities of (Na,Ca)Co<sub>2</sub>O<sub>4</sub>. This makes a remarkable contrast to the change of the resistivity with the Ca substitution. The magnitude (20 mW/cmK at 280 K) is as low as that of a conventional thermoelectric material such as Bi<sub>2</sub>Te<sub>3</sub>, which is consistent with the previous study. Let us make a rough estimate of the phonon mean free path ($`\mathrm{}_{ph}`$) for NaCo<sub>2</sub>O<sub>4</sub> at 280 K. In the lowest order approximation, the lattice thermal conductivity $`\kappa _{ph}`$ is expressed by $$\kappa _{ph}=\frac{1}{3}cv\mathrm{}_{ph},$$ where $`c`$ and $`v`$ are the lattice specific heat and the sound velocity. Since we consider a moderately high temperature region where phonons are sufficiently excited, we assume $`c=3Nk_B`$ ($`N`$ is the number of atoms per unit volume). The sound velocity is associated with the Debye temperature $`\theta _D`$ as $$\theta _D=\frac{\mathrm{}v}{k_B}(6\pi ^2N)^{\frac{1}{3}}.$$ We employ $`\theta _D`$=350 K from the recent specific-heat data, and get $`\mathrm{}_{ph}`$ = 6.7 Å for 20 mW/cmK, which is comparable with the in-plane lattice parameter (3 Å). This picture is intuitively understood from the fact that the Na layer is highly disordered. Note that the observed data of 20 mW/cmK includes the electron thermal conductivity, and thus the obtained value of 6.7 Å gives the upper limit of the phonon mean free path. Figure 2 summarizes the thermoelectric parameters of NaCo<sub>2</sub>O<sub>4</sub>. In Fig. 2(a) are shown the thermal conductivity (the same data as $`x`$=0 in Fig. 1) and the figure of merit calculated using the resistivity and the thermopower of the same sample. We also plot the electron thermal conductivity ($`\kappa _{el}`$) estimated from the resistivity on the assumption of the Wiedemann-Franz law as $`\kappa _{el}=L_0T/\rho `$ ($`L_0=\pi ^2k_B^2/3e^2`$ is the Lorentz number). $`\kappa _{el}`$ is 10% of $`\kappa `$, and the heat conduction is mainly determined by the phonons. The figure of merit is 10<sup>-4</sup>K<sup>-1</sup> above 100 K, which is largest among oxides, but does not yet reach the criteria of $`ZT=1`$. Much progress is thus needed to realize oxide thermoeletcrics. In Fig. 2(b), the resistivity and the thermopower are plotted as a function of temperature, which reproduce the pioneering work on the Na-Co-O system by Molenda et al. The temperature dependence of the resistivity is essentially the same as that for the in-plane resistivity of the single crystals, though the magnitude is much higher owing to the grain-bondary scattering. It should be noted that the resistivity exhibits metallic conduction down to 4.2 K without any indication of the localization. This implies that the electron mean free path is much longer than the lattice parameters. 
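The mean-free-path estimate ℓ_ph = κ_ph/(N k_B v), with the sound velocity fixed by the Debye temperature, can be reproduced as below. The atomic number density N is not quoted in the text, so the value used here (about 1×10²⁹ m⁻³, from a density of roughly 4.8 g/cm³ and 7 atoms per 205 g/mol formula unit) is our rough assumption, which is why the result comes out at a few Å rather than exactly 6.7 Å.

```python
import math

k_B = 1.381e-23    # J/K
hbar = 1.055e-34   # J s

kappa = 2.0        # measured thermal conductivity, W/(m K)  (= 20 mW/cm K)
theta_D = 350.0    # Debye temperature, K
N = 1.0e29         # assumed atomic number density, m^-3 (rough estimate)

# Sound velocity from the Debye temperature: theta_D = hbar*v*(6 pi^2 N)^(1/3)/k_B
v = k_B * theta_D / (hbar * (6.0 * math.pi ** 2 * N) ** (1.0 / 3.0))

# kappa = (1/3) * c * v * l  with  c = 3 N k_B   =>   l = kappa / (N k_B v)
l_ph = kappa / (N * k_B * v)
print("v ~ %.0f m/s, phonon mean free path ~ %.1f Angstrom" % (v, l_ph * 1e10))
```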
Previously we showed that the electron mean free path of the single crystal is as long as 230 Å at 4.2 K along the in-plane direction. We can therefore say that the phonon mean free path is much shorter than the electron mean free path. This is nothing but a material of a phonon glass and an electron crystal. Here we will compare the measured thermal conductivity with the phonon-scattering theory by Callaway. The total scattering rate $`\tau ^{-1}`$ is given as the sum of three scattering rates as $`\tau ^{-1}=\tau _{pd}^{-1}+\tau _{phph}^{-1}+\tau _{0}^{-1}=A\omega ^4+B\omega ^2+v/L`$ where $`\tau _{pd}^{-1}`$, $`\tau _{phph}^{-1}`$ and $`\tau _{0}^{-1}`$ are the scattering rates for the point-defect scattering, the phonon-phonon scattering, and the boundary scattering, respectively. For a phonon frequency $`\omega `$, the three scattering rates are written as $`A\omega ^4`$, $`B\omega ^2`$ and $`v/L`$, where $`A`$, $`B`$ and $`L`$ are characteristic parameters. According to Ref. , $`A`$ is expressed as $`A=\mathrm{\Omega }_0\mathrm{\Sigma }_if_i(1-M_i/M)^2/4\pi v^3`$, where $`\mathrm{\Omega }_0`$ is the unit cell volume, $`M_i`$ is the mass of an atom, $`f_i`$ is the fraction of an atom with mass $`M_i`$, and $`M=\mathrm{\Sigma }_if_iM_i`$ is the average mass. We calculated $`A`$ for (Na,Ca)Co<sub>2</sub>O<sub>4</sub> by following the method in Ref. , where Na (23 g/mol), Ca (40 g/mol) and vacancies make a solid solution in the ratio of Na:Ca:vacancy = (1-$`x`$):$`x`$:1. $`B`$ is a temperature-dependent parameter, which is proportional to $`T`$ at high temperatures ($`B\simeq CT`$). It should be noted that the phonon-phonon scattering gives $`\kappa \propto 1/\sqrt{ACT}`$ at high temperatures in the presence of a large $`A`$. As clearly shown in Fig. 1, $`\kappa `$ for (Na,Ca)Co<sub>2</sub>O<sub>4</sub> increases with $`T`$, implying that the phonon-phonon scattering is negligibly small. Thus $`L`$, corresponding to an inelastic scattering length, is the only fitting parameter. In Fig. 3, the measured $`\kappa _{ph}`$ $`(=\kappa -\kappa _{el})`$ of NaCo<sub>2</sub>O<sub>4</sub> is compared with two theoretical curves. Sample #1 is the same sample as shown in Fig. 1, and Sample #2 is another sample prepared in a different run. Curve A is the calculation using the phonon-scattering theory, where $`L`$=0.2 $`\mu `$m is used. As expected, the point-defect scattering quite effectively reduces the thermal conductivity by two or three orders of magnitude. The Ca substitution effect is also consistently explained, as shown in the inset, where data points (indicated by open circles) from different runs are added to show the reproducibility. Although the solid solution of Na and vacancies dominates $`\kappa _{ph}`$, the theory predicts a small correction due to the substitution of Ca (indicated by the solid line), which is in good agreement with the observation. This directly indicates that the point-defect scattering plays an important role in reducing $`\kappa _{ph}`$. A problem is the physical meaning of $`L`$=0.2 $`\mu `$m: it is much longer than the electron mean free path (10-10<sup>2</sup> Å), but much shorter than the grain size (10 $`\mu `$m). Possible candidates are the average distance between stacking faults and/or interlayer disorder. The absence of the phonon-phonon scattering means that the phonon lifetime is extremely short, and is rather characteristic of the thermal conductivity of a glass.
Curve B is the calculation of the minimum thermal conductivity $`\kappa _{min}`$ by Cahill et al., which has been compared with $`\kappa _{ph}`$ for a glass. Although the calculated $`\kappa _{min}`$ is one order of magnitude smaller than the measured $`\kappa _{ph}`$, such a deviation is also seen in other disordered crystals. Note that $`\kappa _{ph}\propto T^3`$ is not seen for NaCo<sub>2</sub>O<sub>4</sub> at low temperatures; such an absence is a hallmark of disordered crystals. Since $`\kappa _{ph}\propto T^3`$ is usually seen below 10 K, however, this is possibly because the measurement temperature was too high. NaCo<sub>2</sub>O<sub>4</sub> consists of a sandwich structure of amorphous and crystalline layers, and the heat conduction process is perhaps in between that for a mixed crystal and that for an amorphous solid. Thus it should be further explored which curve is more likely to capture the essential features of the heat conduction in NaCo<sub>2</sub>O<sub>4</sub>. We propose that a layered material consisting of a strongly correlated conducting layer and a disordered insulating layer can be a promising thermoelectric material. If a heavy-fermion state is realized in the strongly correlated layer, $`S^2/\rho `$ can be increased through the mass enhancement due to spin fluctuations. Recently we proposed that the effective mass of NaCo<sub>2</sub>O<sub>4</sub> is enhanced as much as that of CePd<sub>3</sub>. Meanwhile, in the disordered insulating layer, the lattice thermal conductivity can be minimized by disorder that has little effect on the electric conduction. In this context, it will work as a phonon-glass electron-crystal material. This scenario might be compared with the thermoelectric superlattices extensively studied by Dresselhaus et al. At present, no band calculation of NaCo<sub>2</sub>O<sub>4</sub> is available, but the band calculation of isostructural LiCoO<sub>2</sub> shows that the valence bands do not show any sub-band structure expected from 2D quantum confinement. This means that the electronic states of NaCo<sub>2</sub>O<sub>4</sub> are not very anisotropic in a one-electron picture. This situation is essentially identical to the band picture of high-temperature superconductors. We think that the enhancement of the thermopower of NaCo<sub>2</sub>O<sub>4</sub> should not be attributed to the quantum confinement of semiconductor superlattices, but to strong correlation. In summary, we prepared polycrystalline samples of (Na,Ca)Co<sub>2</sub>O<sub>4</sub> and measured the thermal conductivity from 15 to 280 K. We have found that the phonon mean free path is 6.7 Å at 280 K, which is much shorter than the electron mean free path. This means that (Na,Ca)Co<sub>2</sub>O<sub>4</sub> acts as a phonon-glass electron-crystal material, though it has no rattling sites. We have compared the experimental data with the phonon-scattering theory and the minimum thermal conductivity, and have found that the point-defect scattering plays an important role. The authors would like to thank J. Takeya and Y. Ando for technical help with the thermal-conductivity measurements. They also thank T. Kawata, T. Kitajima, T. Takayanagi, T. Takemura, and T. Nonaka for collaboration.
no-problem/0003/astro-ph0003120.html
ar5iv
text
# 1 Introduction ## 1 Introduction Ground-based radio interferometers are able to produce images of the sky at frequencies down to a few tens of MHz. Some important scientific goals, however, require imaging at even lower frequencies. Absorption and refraction by the ionosphere prevents imaging from the ground at frequencies of a few MHz and lower, so an interferometer array composed of inexpensive satellites will be needed. Suitable locations for a space-based array include very high Earth orbits, halo orbits about the Sun-Earth Lagrange points, Earth-trailing heliocentric orbits, the far side of the Moon, and (perhaps) lunar orbit. The optimal choice depends on financial considerations and the unavoidable tradeoff between a benign environment in which to maintain a multi-satellite array and the difficulty of getting enough data from the array to Earth. ## 2 Science Goals What unique science can be done only at frequencies below $`10`$ MHz? There are two general areas where very low frequency observations are critical: First, sources of emission which are intrinsically limited to low frequencies (e.g., plasma oscillations and electron cyclotron masers), and second, observations of strongly frequency-dependent absorption (e.g., free-free absorption by diffuse ionized interstellar hydrogen). Type II radio bursts from interplanetary shocks driven by coronal mass ejections provide a good example of the first case. These intrinsically narrow-band emissions decrease in frequency as the shock propagates farther from the Sun into regions of lower plasma density. In order to image and track type II bursts as they approach 1 AU from the Sun, observations at frequencies below 1 MHz are necessary. This would allow us to predict the arrival at Earth of coronal mass ejections, which can trigger severe geomagnetic storms. If located far enough from Earth, a low frequency array would also be able to image Earth’s magnetosphere from the outside and observe how it changes in response to solar disturbances. A sensitive map of the radio sky with arcminute angular resolution at a few MHz would be especially effective at detecting coherent emission from disks, jets, and possibly gas giant planets orbiting close to nearby stars. Most coherent processes have sharp upper-frequency limits, and can only be detected at low frequencies. All-sky surveys at low frequencies would map the galactic distribution of low energy cosmic ray electrons and would likely discover large numbers of high redshift galaxies, “fossil” radio lobes, and large-scale interstellar shocks and shells from old galactic supernovae and $`\gamma `$-ray bursts. In addition, diffuse ionized hydrogen could be detected via its absorption of radiation from extragalactic radio sources across the sky. These observations would complement H$`\alpha `$ emission maps, which predict large variations in free-free optical depth on angular scales of a few degrees. ## 3 Requirements for a Low Frequency Array in Space Any space-based array for very low frequency imaging will need to meet three fundamental requirements: 1) the array must be located far enough from Earth to avoid terrestrial interference and the extended ionosphere, 2) there must be a large enough number of individual antennas in the array to produce dense, uniform ($`u,v`$) coverage in all directions simultaneously, and 3) the observing bandwidth must be sufficient to provide useful sensitivity for short snapshot observations. 
The second and third requirements result from the nearly omnidirectional nature of reasonably sized antennas at very low frequencies. Strong variable radio sources anywhere on the sky will affect the observed total power levels, and unless such sources are imaged on short time scales their time-variable sidelobes will limit the dynamic range of observations in other directions. Simulations show that a minimum of 12 satellites will be needed, with at least 16 satellites preferred (see Figure 1 on the next page). The maximum useful baseline length is set by interstellar and interplanetary scattering. These effects are proportional to $`\nu ^{-2}`$ and thus are much stronger at low frequencies. For frequencies of a few MHz, maximum baselines of a few hundred km are appropriate. The degree of scattering at any given frequency is a strong function of direction on the sky. Consequently, it is important to have a wide range of baseline lengths in the array. Short projected baselines are needed in any case to allow angularly large structures in solar radio bursts and the galactic synchrotron background to be imaged. Imaging the entire sky is a daunting task, but it can be made tractable by dividing the sky into $`\sim 10^3`$ fields of view and relying on parallel processing. Each field will require only a 16-bit Fourier transform in the radial direction to account for sky curvature, and $`\sim 100`$ deconvolving beams (see Frail et al. 1994). During deconvolution, model components from all fields must be subtracted from the full 3-D visibility data set to remove sidelobes from other fields during the next iteration of residual image production. The total computing rate required is large by current standards, but will be readily available within a few years. ### Acknowledgements. Part of this work was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under contract with the US National Aeronautics and Space Administration. ## References Frail, D., Kassim, N. & Weiler, K. 1994, AJ, 107, 1120
no-problem/0003/cond-mat0003402.html
ar5iv
text
# Collective oscillations of two colliding Bose-Einstein condensates ## Abstract Two <sup>87</sup>Rb condensates ($`F=2,m_f`$=2 and $`m_f`$=1) are produced in highly displaced harmonic traps and the collective dynamical behaviour is investigated. The mutual interaction between the two condensates is evidenced in the center-of-mass oscillations as a frequency shift of 6.4(3)%. Calculations based on a mean-field theory well describe the observed effects of periodical collisions both on the center-of-mass motion and on the shape oscillations. 03.75.Fi, 05.30.Jp, 32.80.Pj, 34.20.Cf Since the first realization of Bose-Einstein condensation with dilute trapped gases , systems of condensates in different internal states have deserved attention as mixtures of quantum fluids. In this context, the important issue of the interaction between two distinct condensates was early addressed at JILA with the production of two <sup>87</sup>Rb condensates in the hyperfine levels $`|F=2,m_f=2`$ and $`|1,1`$ in a Ioffe-type trap. Subsequent experiments of the JILA group have focused on the dynamics of two condensates in the states $`|2,1`$ and $`|1,1`$ having nearly the same magnetic moment, confined by a time-orbiting potential (TOP) trap. In these experiments the authors have investigated the effects of the mutual interaction in a situation of almost complete spatial overlap of the two condensates. The resulting dynamics reveals a complex structure, and it is characterized by a strong damping of the relative motion of the two condensates . More recently another group has experimentally investigated a mixture of <sup>87</sup>Rb condensates in different $`m_f`$ states in a TOP trap, but no effects of the mutual interaction have been observed. These experiments have raised several interesting questions from the theoretical point of view, such as the origin of the above mentioned damping , the phase coherence properties of the two condensates and the nature of Rabi oscillations in presence of an external coupling , thus providing a challenge to explore new dynamical regimes in different experimental configurations. In this work we demonstrate an experimental method for a sensitive and precise investigation of the interaction between two condensates. These are made to collide after periods of spatially separated evolution and we get quantitative information from the resulting collective dynamics. Following the scheme originally introduced by W. Ketterle and co-workers to create an atom laser out-coupler, we use a radio-frequency (rf) pulse to produce two <sup>87</sup>Rb condensates in the states $`|2,2|2`$ and $`|2,1|1`$. Due to the different magnetic moments and the effect of gravity, they are trapped in two potentials whose minima are displaced along the vertical $`y`$ axis by a distance much larger that the initial size of each condensate. As a consequence the $`|1`$ condensate, initially created in the equilibrium position of $`|2`$, undergoes large center-of-mass oscillations, in a regime very different from that explored in and analyzed in , where the two condensates sit in overlapping traps. The fact that the two condensates periodically collide opens the possibility of detecting even small interactions through changes in frequency and amplitude of the oscillations. 
Indeed, the periodic collisions of the $`|1`$ condensate with the $`|2`$, initially remaining almost at rest, strongly affect the collective excitations of both the condensates: (i) the center-of-mass oscillation frequency of the $`|1`$ condensate is shifted upwards; (ii) the shape oscillations of $`|2`$ condensate, triggered by the sudden transfer to the $`|1`$ state, are significantly enhanced. The complex dynamics is quantitatively analyzed and found in agreement with the theoretical predictions derived by the numerical solution of two coupled Gross-Pitaevskii (GP) equations at zero temperature. We prepare a condensate of typically $`1.5\times 10^5`$ $`^{87}`$Rb atoms in the $`F=2,m_f=2`$ hyperfine level ($`|2`$), confined in a 4-coils Ioffe-Pritchard trap elongated along the $`z`$ symmetry axis . The axial and radial frequencies for the $`|2`$ state, measured after inducing center-of-mass oscillations of the condensate, are $`\omega _{z2}=2\pi \times 12.6(2)`$ Hz and $`\omega _2=2\pi \times 164.5(5)`$ Hz respectively, with a magnetic field minimum of $`1.75`$ Gauss. By applying a rf pulse, the initial $`|2`$ condensate is put into a coherent superposition of different Zeeman $`|m_f`$ sublevels of the $`F=2`$ state, which then move apart: $`|2`$ and $`|1`$ are low-field seeking states and stay trapped, $`|0`$ is untrapped and falls freely under gravity, while $`|1`$ and $`|2`$ are high-field seeking states repelled from the trap. All the condensates in different Zeeman states are simultaneously imaged by absorption with a 150 $`\mu `$s pulse of light resonant on the $`F=2F^{}=3`$ transition, shone 30 ms after the switching-off of the trap. By fixing the duration and varying the amplitude of the rf field, we control the relative population in different Zeeman sublevels . A 10 cycles rf pulse at 1.24 MHz with an amplitude B<sub>rf</sub>=10 mG quickly transfers $`13`$% of the atoms to the $`|1`$ state without populating the $`|0`$, $`|1`$ and $`|2`$ states. The $`|1`$ condensate experiences a trapping potential with lower axial and radial frequencies ($`\omega _1=\omega _2/\sqrt{2}`$) and displaced along the vertical $`y`$ axis by $`y_0=g/\omega _2^29.2`$ $`\mu `$m. After the rf-pulse, the $`|1`$ condensate moves apart from $`|2`$, and begins to oscillate around its equilibrium position. Due to the mutual repulsive interaction, the latter starts oscillating too, though with a much smaller amplitude. The resulting periodic superimposition modifies the effective potential, which is the sum of the external potential (magnetic and gravitational) and the mean-field one. We have studied the dynamics of the $`|1`$ condensate in presence (“interacting” case) and in absence (“non-interacting” case) of $`|2`$, by varying the permanence time in the trap. We restrict to permanence times so short, i.e. less than 40 ms, that we can neglect both atom losses due to the condensate finite lifetime, 0.7(1) s, and the heating ($`dT/dt=0.11(2)\mu `$K/s). For the non-interacting case, we have used a stronger rf-pulse (B<sub>rf</sub>=20 mG) that completely empties the $`|2`$ state coupling all the atoms in $`|1`$ and in the other untrapped Zeeman states, which rapidly leave the trap. We start considering the case of the $`|1`$ condensate in absence of the $`|2`$ condensate (non-interacting case). In Fig. 1a we plot the center-of-mass oscillations as a function of the trapped evolution time in units of $`\omega _2^10.967`$ ms, after 30 ms of ballistic expansion. 
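As a quick consistency check of these trap parameters, the short sketch below (Python) computes the expected oscillation frequency of the $`|1`$ state and the relative displacement of the two trap minima using only the measured $`|2`$ frequencies and $`g`$; no other inputs are assumed.

```python
import numpy as np

g = 9.81                          # m/s^2
omega_perp2 = 2 * np.pi * 164.5   # rad/s, measured radial frequency of the |2> trap
omega_z2 = 2 * np.pi * 12.6       # rad/s, measured axial frequency of the |2> trap

omega_perp1 = omega_perp2 / np.sqrt(2)   # the |1> state sees a trap weaker by sqrt(2)
omega_z1 = omega_z2 / np.sqrt(2)
y0 = g / omega_perp2**2                  # relative sag of the two trap minima

print(f"|1> radial frequency: {omega_perp1 / (2 * np.pi):.1f} Hz "
      f"(axial: {omega_z1 / (2 * np.pi):.1f} Hz)")
print(f"trap displacement y0: {y0 * 1e6:.1f} micrometres")
```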
The center-of-mass undergoes sinusoidal oscillation with a measured frequency of $`\omega _1=2\pi \times 116.4(3)`$ Hz. This value is, as expected, a factor $`\sqrt{2}`$ smaller than $`\omega _2`$. Furthermore the experimental data give no evidence of damping within the maximum monitored trapping time (26 ms). Fig. 1b shows the center-of-mass evolution of the two condensates in the interacting case. The center-of-mass of the $`|1`$ condensate exhibits a substantially different behaviour, which allows a clear quantitative analysis. The oscillation frequency is up-shifted to $`\omega _1=2\pi \times 123.9(3)`$ Hz, i.e. a 6.4(3)% change with respect to the non-interacting case. This can be understood considering that the mean-field repulsion of the second condensate produces an anharmonic correction to the effective potential experienced by the $`|1`$ condensate. Furthermore, the oscillations appear now damped with an exponential decay time of about 60 ms. Indeed, each time the two condensates superimpose (about every 8 ms), there is an energy transfer from the $`|1`$ condensate toward the $`|2`$ condensate. As a consequence we expect, and we do observe, effects on both the center-of-mass motion and the collective shape-oscillations of the $`|2`$ condensate. A quantitative description of these experimental features requires the solution of two coupled Gross-Pitaevskii (GP) equations. Neglecting the interaction with the thermal cloud, the two condensates evolve according to $$i\mathrm{}\frac{\mathrm{\Psi }_i}{t}=\left[\frac{\mathrm{}^2^2}{2m}+V_i+\underset{j=1,2}{}\frac{4\pi \mathrm{}^2a_{ij}}{m}|\mathrm{\Psi }_j|^2\right]\mathrm{\Psi }_i$$ (1) $`i=1,2`$, where $`V_i`$ are the trapping potentials: $`V_1(x,y,z)`$ $`=`$ $`{\displaystyle \frac{m}{2}}\omega _1^2\left[(x^2+y^2)+\lambda ^2z^2\right]`$ (2) $`V_2(x,y,z)`$ $`=`$ $`{\displaystyle \frac{m}{2}}\omega _2^2\left[\left(x^2+(yy_0)^2\right)+\lambda ^2z^2\right]`$ (3) and the asymmetry parameter is $`\lambda \omega _z/\omega _{}0.0766`$ for both traps. For the <sup>87</sup>Rb scattering lengths we use $`a_{22}=a_{12}=98a_0`$ and $`a_{11}=94.8a_0`$ . Our experimental configuration allows to simplify these equations by using the fact that we are in the Thomas-Fermi (TF) regime due to the large number of atoms, and that the system is strongly elongated along the $`z`$ axis, $`\lambda 1`$ . For an elliptic trap, the low-frequency excitations with $`m=0`$ are linear superpositions of the monopole ($`n=1`$, $`l=0`$, $`m=0`$) and quadrupole ($`n=0`$, $`l=2`$, $`m=0`$) modes . The dispersion laws for the two modes, at leading order in $`\lambda `$, are given by $$\omega _{high}2\omega _{};\omega _{low}\sqrt{\frac{5}{2}}\lambda \omega _{}.$$ (4) In this limit the two frequencies are quite different, and the axial and radial collective excitations are almost decoupled. The radial width is characterized by small-amplitude oscillations with frequency $`\omega _{low}`$ modulated by a large-amplitude oscillation with frequency $`\omega _{high}`$, and vice versa for the axial width . Therefore, since the interactions mostly affect the radial motion, we assume the axial dynamics to be still characterized by the low frequency oscillations of the TF regime. Then, we study the trapped dynamics in the $`x,y`$ plane by solving the GP equation (1), by approximating our system as a uniform cylinder . We start from an initial configuration corresponding to the stationary ground-state of Eq. (1), with all the $`N`$ trapped atoms in the $`|2`$ condensate. 
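For later reference in the discussion of the aspect-ratio oscillations, the two $`m=0`$ mode frequencies of Eq. (4), evaluated with the measured $`|2`$ trap parameters quoted above, follow from a couple of lines (Python); only values already given in the text enter this sketch.

```python
import numpy as np

omega_perp = 2 * np.pi * 164.5   # rad/s, radial frequency of the |2> trap
lam = 0.0766                     # trap asymmetry parameter lambda

omega_high = 2 * omega_perp                    # fast m=0 mode, Eq. (4)
omega_low = np.sqrt(5 / 2) * lam * omega_perp  # slow m=0 mode, Eq. (4)

for name, w in (("omega_high", omega_high), ("omega_low", omega_low)):
    print(f"{name}: {w / (2 * np.pi):6.1f} Hz, period {2 * np.pi / w * 1e3:6.1f} ms")
```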
Afterwards, at $`t=0`$, $`N_1`$ atoms are instantaneously transferred from the $`|2`$ to the $`|1`$ state, the former remaining with $`N_2=NN_1`$ atoms. Here we consider $`N_1=0.13N`$ for the interacting case, and $`N_1=N`$ for the non-interacting one. The theoretical curves in Fig. 1 show an up-shift of the center-of-mass oscillation frequency for the $`|1`$ condensate occurring in the interacting case, as experimentally observed. This shift is of 5.4%, in qualitative agreement with that measured. Furthermore, in presence of interactions the model correctly predicts a damping, which is not due to any dissipative process (the total energy is conserved), but to a transfer of energy from the center-of-mass oscillations of the $`|1`$ condensate to the other degrees of freedom of the system . Still, this damping time is nearly a factor 2 longer than that experimentally observed. To understand the origin of the discrepancies between theory and experiment, it’s worth discussing the main approximations of our model. First, since the model is basically 2-dimensional, the energy transfer in the axial direction is overlooked. Secondly, we completely disregard the interaction between the two condensates during the expansion . Indeed, in our experiment, during the switching-off of the trap the two clouds acquire different velocities and cross each other in the fall, but we expect that they are so dilute that we can neglect their mutual mean-field repulsion. Finally, our model doesn’t take into account the elastic scattering, occurring when the relative velocity of the two condensates exceeds the sound velocity . This effect, whose description lies beyond the mean-field approximation, represents an important channel of both atoms and energy losses which plays a significant role, for example, in the four-wave mixing experiments with Bose condensates . We consider now the $`|2`$ condensate. In the interacting case, both small center-off-mass oscillations and significant features for the aspect ratio oscillations emerge from our model. The latter are initially induced by the sudden change in the internal energy, consequent on the transfer of $`N_1`$ atoms from the $`|2`$ to the $`|1`$ level . In Fig. 2 we compare the theoretical evolution of the $`|2`$ condensate aspect ratio, i.e. the radial to axial width ratio, in absence and in presence of collisions. In the former case, a faster (radial) oscillation superimposes to a slow (axial) one as a consequence of the decoupling between the two oscillation modes. The frequencies $`\omega _{high}`$ and $`\omega _{low}`$ were separately measured by means of resonant modulation of the trapping magnetic field . The non-interacting behaviour predicted in Fig. 2 should of course yield when there is only one trapped state. For instance this is the case of an atom laser out-coupled in a single-step transition, as for sodium , or for rubidium in $`F=1`$ state . In the interacting case, we see from Fig. 2 that a significantly different behaviour is expected. Before the first collision at $`t8`$ ms, the oscillations are rather small since they are determined only by the sudden change in the number of atoms. Instead, at longer times, the changes in the aspect ratio become more pronounced due to the energy transfer during collisions between the two condensates. Once the ballistic expansion is taken into account, the simulation results are compared to the experimental data in Fig. 3. The agreement is only of a qualitative character. 
We attribute the discrepancy to the approximations we used in our simplified analysis. Nevertheless, we believe that our model provides a useful physical insight of the relevant aspects of the problem. In conclusion we have developed a powerful tool for the investigation of the interactions between condensates and we have demonstrated how they can quantitatively affect frequency, amplitude and shape of oscillations. In particular, the frequency shift measurement gives access to the number of atoms in the parent condensate $`N_2`$ or, alternatively, to the $`a_{21}`$ scattering length. This opportunity seems particularly interesting for low $`N_2`$, as we have found that the frequency shift scales roughly as $`\mathrm{log}(N_2a_{12})`$ . The agreement with the model, given its simplicity and lack of free parameters, is generally satisfactory. At least the discrepancy observed for the damping of center-of-mass oscillations suggests that the investigation of the relaxation beyond the mean-field theory (namely, by including the elastic scattering) could be an interesting subject for future studies. The experimental perspectives of this system of two periodically overlapping condensates lead to the investigation of the interference between condensates spatially separated during their evolution and, eventually, to studies of Josephson effects in two weakly linked condensates . We acknowledge fruitful discussions with F. Dalfovo.
no-problem/0003/astro-ph0003482.html
ar5iv
text
# Complete Zeldovich approximation ## Abstract We have developed a generalization of the Zeldovich approximation (ZA) that is exact in a wide variety of situations, including planar, spherical and cylindrical symmetries. We have shown that this generalization, which we call the complete Zeldovich approximation (CZA), is exact to second order at an arbitrary point within any field. For Gaussian fields, the third-order error has been obtained and shown to be very small. For statistical purposes, the CZA leads to results exact to third order. cosmology: theory — cosmology: large-scale structure of the Universe — gravitation Developing a simple analytical approximation that accounts accurately for the non-linear evolution of density fields seems to be a rather interesting task. To reconstruct with some accuracy the initial conditions from the present line-of-sight velocity field, one such approximation is needed. To obtain the statistical properties of the present field in terms of those of the initial one, or the non-linear corrections to the microwave background or to the Gunn-Peterson effect, an approximation of this kind is highly convenient. An approximation that has been widely used for these purposes is the Zeldovich approximation (ZA), where the density fluctuation, $`\delta `$, is a unique function of the proper values, $`\lambda _i`$, of the linearly calculated local deformation tensor $`\frac{\partial \stackrel{}{u}}{\partial \stackrel{}{x}}`$ ($`\stackrel{}{u}`$ being the peculiar velocity field). $$(1+\delta )^{-1}=\prod _{i=1}^3(1-\lambda _i)$$ (1) This approximation generally gives rather good results and it is particularly convenient for deriving the statistical properties of the present field, since we only need the statistical properties of the $`\lambda _j`$ in the initial field. In this approximation the local deformation tensor is that given by the linear theory. So, although it is a good approximation to all orders, it is exact only to first order. On the other hand, in the Lagrangian perturbative development (LPD) the deformation tensor is formally exact (Bouchet et al. 1995), but it is not a unique function of the $`\lambda _i`$. Within this context, the question naturally arises as to whether it is possible to find an approximation depending only on the $`\lambda _i`$ that is substantially more accurate than the ZA. Reisenegger & Miralda-Escude (1995) considered an extension of the ZA (EZA) that is exact for planar, spherical and cylindrical symmetry. However, although this approximation usually gives better results than the ZA, it is not fully consistent. Hui and Bertschinger (1996) developed an approximation (LTA) that is exact for any fluid element such that the orientation and axis ratios of the gravitational and velocity equipotentials are constant along its trajectory. This includes planar, spherical and cylindrical symmetries. The problem with this approximation, which gives excellent results, is its complexity, which makes it very difficult to determine the explicit dependence on the $`\lambda _i`$. The purpose of this letter is to present the explicit dependence of $`\delta `$ as a function of the $`\lambda _i`$ for the most accurate approximation that is a unique function of these quantities. This approximation we call the complete Zeldovich approximation (CZA).
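For concreteness, Eq. (1) can be evaluated directly; the minimal sketch below (Python) does so for a few arbitrary, purely illustrative sets of eigenvalues that are not taken from the paper.

```python
# Zeldovich approximation, Eq. (1): (1+delta)^-1 = prod_i (1 - lambda_i).
import numpy as np

def delta_ZA(lams):
    """Density contrast from the three deformation-tensor eigenvalues."""
    lams = np.asarray(lams, dtype=float)
    return 1.0 / np.prod(1.0 - lams) - 1.0

print(delta_ZA([0.1, 0.05, -0.05]))   # mildly overdense region
print(delta_ZA([0.3, 0.3, 0.3]))      # spherical-like compression
print(delta_ZA([0.9, 0.0, 0.0]))      # near pancake collapse along one axis
```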
Like the EZA and LTA, the CZA is exact for planar ($`\lambda _1\ne 0`$, $`\lambda _2=\lambda _3=0`$), spherical ($`\lambda _1=\lambda _2=\lambda _3`$) and cylindrical ($`\lambda _1=\lambda _2`$, $`\lambda _3=0`$) symmetries, but, unlike those approximations, it is also exact in another wide variety of cases that essentially correspond to the collapse of an initial top-hat elliptical density fluctuation. We have shown that the CZA is exact to second order and computed the third-order error, which is very small. We have also shown that the exact evolution at an arbitrary point may be expressed in terms of a simple extension of the CZA containing some additional variables (the CZA being recovered when these variables are set equal to zero). All this is shown in detail in an accompanying paper; here we shall only present the CZA and comment upon its meaning and derivation. Approximations depending only on the $`\lambda _i`$ are usually called local. We shall retain this convention, but it must be noted that the $`\lambda _i`$, although defined locally, are non-locally generated. The sum of the $`\lambda _i`$, which is equal to the linear density perturbation, $`\delta _L`$, can take any value at a given point, regardless of the values taken at other points. To obtain the $`\lambda _i`$, however, we must integrate the continuity equation, so the $`\lambda _i`$ at a given point depend on the whole field $`\delta _L(\stackrel{}{x})`$. Another way to point out the non-local character of the $`\lambda _i`$ is by noticing that the quantities $`\lambda _i-\frac{\delta _L}{3}`$ are generated by the action of the linearly calculated local tidal field which, obviously, depends on the whole field $`\delta _L(\stackrel{}{x})`$. Keeping this in mind, it is not surprising that, although knowing $`\delta _L`$ at a point at some initial time allows us to obtain the evolution only to first order, knowing the $`\lambda _i`$ allows us to obtain the evolution to second order. We shall now describe the steps we have followed to obtain the CZA. We choose the ansatz: $$(1+\delta )^{-1}=\prod _{i=1}^3(1-r_i(\lambda )\lambda _i),$$ (2) where $`r_i(\lambda )`$ are certain functions of the $`\lambda _i`$. Within this ansatz, the ZA corresponds to the zeroth-order approximation for $`r_i`$ ($`r_i=1`$). It is interesting to note that, although independently developed, this ansatz is similar to that chosen by the authors of the EZA, except for the fact that they assumed all $`r_i`$ to be equal, an assumption that is incompatible with exactness to second order. To determine the functions $`r_i`$ we use the constraints imposed on them by: considerations about the symmetry of the $`\lambda _i`$ with respect to permutations of the indices; the fact that for planar symmetry (2) must be exact with $`r_i=1`$; the form of the exact solution for spherical collapse; and compatibility of the form of $`r_i`$ with the dynamical equations. These conditions determine $`r_i`$ uniquely. Let us comment on them in more detail. Rotational invariance implies that the $`r_i`$ must reduce to each other through permutations of the indices. So, they all derive from the same function, $`r(\stackrel{}{u})`$. $$r_i(\stackrel{}{\lambda })=r(\stackrel{}{u})|_{\stackrel{}{u}=(\lambda _i,\lambda _j,\lambda _k)}$$ (3) Furthermore, rotational symmetry in the plane perpendicular to the i-th proper axis implies that $`\lambda _j`$, $`\lambda _k`$ must enter symmetrically in this expression. So, $`r(\stackrel{}{u})`$ must be symmetric with respect to its second and third arguments.
We now assume that a series expansion of $`r(\stackrel{}{u})`$ in powers of the $`u_i`$ exist. We shall see later that this series converges for all relevant $`\stackrel{}{u}`$. The symmetry considerations we have just mentioned, imply that this series can only contain terms of the form: $$r(u_1,u_2,u_3)=1+\underset{l,m,n=0}{\overset{\mathrm{}}{}}C_{l,m,n}^p(u_2+u_3)^l(u_2u_3)^{2n}u_1^m$$ $$pl+2n+m,$$ (4) where $`C_{l,m,n}^p`$ are the coefficients of the $`p`$-th order terms. Noting that in the planar case ($`\lambda _10`$, $`\lambda _2=\lambda _3=0`$) expression (2) is exact with $`r_i=1`$, it is clear that expansion (4) cannot contain terms with $`m0`$ and $`l=n=0`$. So, in our notation we must have: $$C_{0,m,0}^p=0$$ (5) Hence ,there are $`2p1`$ terms of order $`p`$. For spherical collapse, both the actual density fluctuation, $`\delta _{sp}`$, and its linear value, $`\delta _L`$, may be expressed exactly as a parametric function of time (Peebles 1980). From these expressions we have derived an expression for $`\delta `$ as an explicit function of $`\delta _L`$: $$1+\delta _{sp}=\left(1r_{sp}(\delta _L)\frac{\delta _L}{3}\right)^3$$ $$r_{sp}(\delta _L)=1+f_1(\theta )\frac{\delta _L}{7}+f_2(\theta )\frac{23}{567}\delta _L^2+f_3(\theta )\frac{13}{900}\delta _L^3+f_4(\theta )5.86\times 10^3\delta _L^4+f_5(\theta )2.55\times 10^3\delta _L^5+R_{sp}(\delta _L)$$ $$R_{sp}(\delta _L)=f_6(\theta )2.58\times 10^3\delta _L^5\left(\frac{1}{1\frac{\delta _L}{2.065}}1\right)+E$$ $$|E|<2\times 10^3\mathrm{for}\delta _L1.57;EO(\delta _L^6),$$ (6) where $`\theta `$ stands for all cosmological parameters. For a flat Friedman model all $`f_i(\theta )`$ are exactly equal to one. For a general Friedman model the dependence on $`\mathrm{\Omega }`$ (the density in units of the critical one) is very mild. When $`\mathrm{\Omega }>1/20`$ the following is a rather good approximation: $$f_i(\mathrm{\Omega })=\mathrm{\Omega }^{2i/63}$$ (7) Comparing expressions (6) and (2) and noting that for spherical symmetry $`\lambda _1=\lambda _2=\lambda _3=\frac{\delta _L}{3}`$, we obtain the following constraint on $`r(u)`$: $$r_{sp}(\delta _L)=r(\frac{\delta _L}{3},\frac{\delta _L}{3},\frac{\delta _L}{3})$$ (8) This relationship imply that the coefficients of order $`p`$, $`C_{l,m,n}^p`$, must satisfy just one equation. So ,for $`p`$ larger than one the coefficients are underdetermined. However, the value of $`\delta `$ given by (2) must satisfy the dynamical equations (Peebles 1980): $$\frac{d\stackrel{}{v}}{dt}+\stackrel{}{v}\frac{\dot{a}}{a}=\frac{\varphi }{a};\varphi (\stackrel{}{x})=Ga^2p_b(\tau )\frac{d^3x^{}\delta (x^{})}{|x^{}x|}$$ $$\stackrel{}{v}=a\stackrel{}{u};\stackrel{}{u}\dot{\stackrel{}{x}};r_i(\lambda )\lambda _i=\frac{\dot{x_i}(t,q_i)}{q_i}$$ (9) where $`x_i`$ are Eulerian comoving coordinates and $`x_i(t,q_i)`$ are the Eulerian coodinates of a particle with Lagrangian coordinates $`q`$ (the last relationship holds in the local proper system). Note that the continuity equation is automaticaly satisfied by CZA. From these equations one may see that terms of order $`p`$ in $`r`$ imply the existence of certain terms of order $`p+1`$. This recursive scheme, together with expresion (8) determine completely all coefficients. Their computation, which is not trivial, is given in detail in the accompanying paper. Here we simply give the result and comment it. 
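Before quoting the general result, the spherical relation (6) on its own is easy to evaluate; the short sketch below (Python) does so for the flat model (all $`f_i=1`$), keeping the terms through $`\delta _L^5`$ and the resummed tail $`R_{sp}`$ while ignoring the small bounded error $`E`$.

```python
# Spherical-collapse relation, Eq. (6), flat Friedmann model (f_i = 1).
def r_sp(dL):
    r = (1.0 + dL / 7.0 + 23.0 / 567.0 * dL**2 + 13.0 / 900.0 * dL**3
         + 5.86e-3 * dL**4 + 2.55e-3 * dL**5)
    r += 2.58e-3 * dL**5 * (1.0 / (1.0 - dL / 2.065) - 1.0)   # R_sp tail
    return r

def delta_sp(dL):
    """Non-linear density contrast of a spherical perturbation."""
    return (1.0 - r_sp(dL) * dL / 3.0) ** (-3) - 1.0

for dL in (0.1, 0.5, 1.0, 1.5):
    print(f"delta_L = {dL:.1f}  ->  delta_sp = {delta_sp(dL):.3f}")
```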
It is interesting to note that the process we have followed to determine the coefficients essentialy amounts to analyticaly continue the $`r_i(\stackrel{}{\lambda })`$ known in the planar and spherical case in a manner consistent with equations (9). The situations described exactly by the CZA are those where the local deformation tensor (whose proper values are $`r_i(\stackrel{}{\lambda })\lambda _i`$) are everywhere the same (or, at least at all fluid elements affecting each other’s evolution). This must be so because it is true for the planar and spherical case and preserved by the continuing procedure. For a flat Friedman universe, we have found for $`r_i`$ (in the general case terms of order $`p`$ should be multiplied by $`f_p(\theta )`$): $$r_i(\lambda _i,\lambda _j,\lambda _k)=1+\frac{3}{14}(\lambda _j+\lambda _k)+\frac{18}{245}(\lambda _j+\lambda _k)^2+\frac{157}{4410}(\lambda _j+\lambda _k)\lambda _i+\frac{3}{245}(\lambda _j\lambda _k)^2+0.03371(\lambda _j+\lambda _k)^3$$ $$+1.63\times 10^2(\lambda _j+\lambda _k)^2\lambda _i+2.75\times 10^2(\lambda _j+\lambda _k)\lambda _i^2+10^3(\lambda _j\lambda _k)^2\lambda _i+1.2\times 10^2(\lambda _j\lambda _k)^2(\lambda _j+\lambda _k)$$ $$+1.94\times 10^2(\lambda _j+\lambda _k)^4+9.4\times 10^3(\lambda _j+\lambda _k)^3\lambda _i+1.58\times 10^2(\lambda _j+\lambda _k)^2\lambda _i^2+1.3\times 10^2(\lambda _j+\lambda _k)\lambda _i^3$$ $$+4.3\times 10^3(\lambda _j\lambda _k)^4+8.4\times 10^3(\lambda _j\lambda _k)^2(\lambda _j+\lambda _k)^2+7.2\times 10^4(\lambda _j\lambda _k)^2(\lambda _j+\lambda _k)\lambda _i+R(\lambda _i,\lambda _j,\lambda _k)$$ (10) This messy expression includes explicitly terms up to the fourth order although for more purposes using up to the quadratic term is enough. Terms of order larger than four are usualy very small. However, in the rare cases when they are of some relevance many orders contribute roughly equaly. So, it is not convenient to include higher order terms explicitly. Instead, we approximate all those terms by a single term, $`R`$, that we shall latter give. Expression (2) with $`r_i`$ given by (10) give the exact evolution of $`\delta `$ in a field where the local deformation tensor is everywhere given by $`r_i(\stackrel{}{\lambda })\lambda _i`$ at a time when the linear one is $`\lambda _i`$. It is clear that this is the situation within a top-hat cilyndrical fluctuation. So, we may use this case (with $`\mathrm{\Omega }=1`$) to check the correctness of the continuing process. In this case we have $`\lambda _1=0`$, $`\lambda _2=\lambda _3=\frac{\delta _L}{2}`$. So, expressions (2) and (10) lead to: $$(1+\delta _{\mathrm{cyl}})=\left(1r_{\mathrm{cyl}}(\delta _L)\frac{\delta _L}{2}\right)^2;r_{\mathrm{cyl}}r(\frac{\delta _L}{2},\frac{\delta _L}{2},0)$$ $$=1+\frac{3}{28}\delta _L+\frac{107}{3528}\delta _L^2+1.135\times 10^2\delta _L^3+4.5\times 10^3\delta _L^4+R_{\mathrm{cyl}}(\delta _L)$$ (11) On the other hand, we have obtained $`r_{\mathrm{cyl}}(\delta _L)`$ through direct accurate numerical integration (the error of $`r_{\mathrm{cyl}}<10^5`$) and fitted the coefficients in the expansion. These cofficients agree with the predictions (those in (11)) well within the fitting errors (0.1% ,0.4% ,3% ,10% for coefficients from the first to the fourth). 
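As a compact illustration of how expression (10) is used in practice, the sketch below (Python) keeps only the first- and second-order terms of $`r_i`$ (which are exact to second order but miss the higher-order terms and $`R`$) and evaluates the density contrast of Eq. (2); the eigenvalues at the bottom are arbitrary illustrative inputs, not values from the paper.

```python
# CZA truncated at second order: r_i from the leading terms of expression (10)
# (flat model) and the density contrast from Eq. (2), compared with the ZA.
import numpy as np

def r_cza(li, lj, lk):
    s, d = lj + lk, lj - lk
    return (1.0 + 3.0 / 14.0 * s
            + 18.0 / 245.0 * s**2 + 157.0 / 4410.0 * s * li + 3.0 / 245.0 * d**2)

def delta_cza(lams):
    l1, l2, l3 = lams
    r1, r2, r3 = r_cza(l1, l2, l3), r_cza(l2, l3, l1), r_cza(l3, l1, l2)
    return 1.0 / ((1 - r1 * l1) * (1 - r2 * l2) * (1 - r3 * l3)) - 1.0

def delta_za(lams):
    return 1.0 / np.prod(1.0 - np.asarray(lams)) - 1.0

lams = (0.4, 0.2, 0.1)
print(f"ZA : delta = {delta_za(lams):.3f}")
print(f"CZA: delta = {delta_cza(lams):.3f}")
```

For the spherical configuration $`\lambda _i=\delta _L/3`$ this truncation reproduces the first- and second-order coefficients of $`r_{sp}`$ (1/7 and 23/567), which is a useful sanity check of the quoted coefficients.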
We have also used the numerical results to fit an approximate expression for $`R_{\mathrm{cyl}}(\delta _L)`$ and find: $$R_{\mathrm{cyl}}=2.2\times 10^3\delta _L^5\left(1\frac{\delta _L}{2.06}\right)^1+E$$ $$|E|<5\times 10^3$$ (12) We may now obtain an expression for $`R(\lambda )`$ (see expression (10)) demanding that the exact result be obtained in the planar and spherical cases and that it reduces to a good approximation to $`R_{\mathrm{cyl}}`$ in the cilyndrical case: $$R(\lambda _i,\lambda _j,\lambda _k)=\left[19\left(\lambda _i\frac{\lambda _j+\lambda _k}{2}\right)\left(1\frac{\lambda _i+\lambda _j+\lambda _k}{1.3}\right)\right]$$ $$\times \left(V_{\mathrm{sp}}(\lambda _i+\lambda _j+\lambda _k)V_{\mathrm{sp}}(\lambda _i)+V_{\mathrm{sp}}\left(\frac{\lambda _j+\lambda _k}{2}\right)\right)+E$$ $$V_{\mathrm{sp}}(x)r_{\mathrm{sp}}(x)\left(1+\frac{x}{7}+\frac{23}{567}x^2+\frac{13}{900}x^3+5.86\times 10^3x^4\right)$$ $$=2.58\times 10^3x^5\left(1\frac{x}{2.06}\right)^1+E_2;E_2<2.3\times 10^3\mathrm{for}x<1.57$$ (13) This expression corresponds to $`\mathrm{\Omega }=1`$ and may be inmediately generalized for an arbitrary cosmological model. The maximum error of expression (10) with $`R(\lambda )`$ given by (13) is $`6\times 10^3`$ being usualy quite smaller. For general values of the $`\lambda _i`$ the situation described exactly by expressions (2) and (10) is more complex than for the three peculier cases already considered. However, for all practical purposes a simple generalization of these cases, namely, a top-hat initial ellipsoidal fluctuation, may be considered to be described exactly by those expressions. In fact, it may be shown that the intrinsic error of $`r_i(\stackrel{}{\lambda })`$ in these situations as given by (10) ($`3\times 10^3`$ at the time of collapse) is smaller than the error of this expression (due only to $`R(\lambda )`$). In the top-hat spherical and cilyndrical cases the deformation tensor for outside matter is diferent from that for matter within. The same is true for the top-hat eliptical case. However, in this case, unlike in the former the outside matter is relevant to the evolution of the matter within. Expression (10) accounts exactly to the second order (and a small error to higher orders) for the small contribution of the outside matter to the tidal field within the ellipsoid. To check the accuracy of the LTA approximation, Hui and Bertschinger (1996) considered the collapse of a top-hat initial fluctuation with axial ratios 1:1.25:1.5 and represented (in their fig.2 ) the evolution of the axis predicted by this approximation and that predicted by an approximation (that they called exact) that neglects the effect of outside matter. The ellipsoid generates a linear growing mode for the velocity field with asociated values given by: $`\lambda _1=0.2576a`$; $`\lambda _2=0.3233a`$; $`\lambda _3=0.4191a`$. (label “1” corresponding to the largest axis). As we have said before, expressions (2) and (10) may be considered exact in this case, giving for the evolution of the axis: $$x_i=w_ia(1r_i(\stackrel{}{\lambda })\lambda _i)$$ where $`a`$ is the expansion factor of the universe, and $`w_i`$ are the axis ratios. This is represented in figure 1 along with the predictions of an approximation where all $`r_i`$ are set equal to their symmetrized value $`\left(\frac{r_1(\lambda )+r_2(\lambda )+r_3(\lambda )}{3}\right)`$. This approximation must be very close to EZA (Reisenegger and Miralda-Escude 1995). 
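A rough numerical rendering of this evolution, using the $`\lambda `$ values quoted above and $`r_i`$ truncated at second order (so it is reliable only well before collapse, where the higher-order terms and $`R`$ become important), could look like the following sketch; it prints the comoving length of each principal axis relative to its initial value, $`x_i/w_i=a(1-r_i\lambda _i)`$.

```python
import numpy as np

def r2(li, lj, lk):
    # r_i truncated at second order (first terms of expression (10))
    s, d = lj + lk, lj - lk
    return 1 + 3/14*s + 18/245*s**2 + 157/4410*s*li + 3/245*d**2

lam0 = np.array([0.2576, 0.3233, 0.4191])   # lambda_i / a, quoted above (label 1 = largest axis)

for a in (0.5, 1.0, 1.5):
    lam = lam0 * a
    r = np.array([r2(lam[0], lam[1], lam[2]),
                  r2(lam[1], lam[2], lam[0]),
                  r2(lam[2], lam[0], lam[1])])
    stretch = a * (1 - r * lam)             # x_i / w_i for each principal axis
    print(f"a = {a:.1f}:  x_i/w_i = {np.round(stretch, 3)}")
```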
It may be shown that, by symmetrizing, the effect of the outside matter is neglected. This may be checked by noting that this approximation is barely distinguishable from “exact”. The LTA and CZA are indistinguishable (within 0.2%) up to $`a\approx 1.2`$, differing very little up to the collapse, which takes place at $`a=1.584`$ for the CZA and $`a=1.613`$ for the LTA. Note that the evolution of the difference between the values of the axes given by these approximations is quantitatively the same as that between the exact solution and the LTA for an elliptical cloud in empty space (see their fig. 3), as would be expected if the CZA were exact. In fact, it may be shown that the error of the value of $`a`$ at collapse given by the CZA is at most 0.3%. The agreement between the LTA and CZA for $`a<1.2`$ is so complete that, if we had an explicit expression (like (10)) for the former approximation, the first-order coefficient would be very close (within 3%) to that of the CZA and those of second order would not differ much. This implies that the LTA is most likely exact to second order and that it accounts for the effect of outside matter (both facts are related), hence being more accurate than “exact” and the EZA (which cannot be exact to second order). So far we have considered the situations described exactly by the CZA. However, we are mostly interested in the performance of this approximation at an arbitrary point. There is no obvious reason to expect an approximation determined by the above considerations to be the best at a random point. To see that this is actually so, we first write $`\delta `$ at such a point in the form: $$1+\delta =\prod _i\left(1-r_i\lambda _i+\frac{3}{14}\delta _Lx_i+8.46\times 10^{-2}\delta _L^2x_i+8.34\times 10^{-2}\left[\frac{x_1^2+x_2^2+x_3^2}{3}\right]\delta _L+\mathrm{\dots }\right)^{-1}$$ $$\approx 1+\sum _i\lambda _i+\frac{10}{7}(\lambda _1\lambda _2+\mathrm{\dots })+\left[\frac{3}{14}\delta _L+8.46\times 10^{-2}\delta _L^2\right]\sum _ix_i$$ (14) where the $`x_i`$ are certain variables defined by the action of some integral operator on $`\delta _L(\stackrel{}{x})`$, and they cannot be reduced to functions of the $`\lambda _i`$. This expression, which in full would contain more variables, is formally exact, like the LPD (Bouchet et al. 1995). But here, for each order, we have separated the part depending on the $`\lambda _i`$, which goes into $`r_i(\lambda )\lambda _i`$. We may use (14) and the probability distribution of the $`\lambda _i`$ at points with a fixed value of $`\delta _L`$ to obtain the statistical properties of $`\delta `$ at these points. By comparing what we find with the exact results found by Bernardeau (1994) for these statistical quantities, we find: $$\sum _ix_i=0;\langle x_i\rangle _\lambda =C\left(\lambda _i-\frac{\delta _L}{3}\right);\langle x_i^2\rangle _\lambda =\langle x_i\rangle _\lambda ^2+\frac{2C}{9}(1-C)\sigma ^2$$ $$C=\frac{3}{2}-6.4(C_{\mathrm{exp}}-0.0544)$$ (15) The first result is valid at every point and could be derived in other ways, for example, by comparing (14) with the LPD. The other results, which are of statistical character, give the mean and mean quadratic value of $`x_i`$ over points with a fixed value of the $`\lambda _i`$. $`C_{\mathrm{exp}}`$ is a spectral constant defined in the last reference. In the context of expression (14) the CZA, as we have defined it, may be characterized by the neglect of the $`x_i`$. This approximation is the same for all fields.
However, we could consider a CZA specific for each spectrum (for Gaussian fields) simply by inserting in (14) in the place of $`x_i`$ , $`x_i^2`$, their mean values given in (15) with the value of $`C_{\mathrm{exp}}`$ corresponding to that spectrum. This approximation give exactly to third order the moments of $`\delta `$ over points with fixed $`\lambda _i`$ values. Hence, it gives the one point statistics exactly to third order. For smooth field (the case considered here) $`C_{\mathrm{exp}}`$ lies between 0.053 and 0.061 for most interesting spectrums. So the general CZA (which is exact to third order for $`C_{\mathrm{exp}}=0.0544`$) imply a very small error to third order. We have stimated the error of the CZA by computing to third order (the first non-vanishing) the RMS fluctuation of the value of $`\delta `$ over points with fixed $`\lambda _i`$ values. We found: $$(\delta \delta _\lambda )^2_\lambda ^{1/2}=\left[\left(\frac{3}{14}\right)^2(\lambda _1^2+\lambda _2^2+\lambda _3^2)(\lambda _1\lambda _2+\mathrm{})\frac{1}{\gamma \sigma ^2}+3.1\right]^{1/2}\gamma \sigma ^2\delta _L$$ $$\gamma \frac{30}{13}(C_{exp}0.0471)$$ The fact that at every point the sum of the $`x_i`$ vanishes imply (see (14)) that CZA is exact to second order. As we have seen, this is most likely to be also true for the LTA, but it is not true for the EZA. Both the ZA and the CZA are unique functions of the $`\lambda _i`$ so, one might wonder where is what makes the latter exact to second order. The answer is that in the CZA we use for the proper values of the deformation tensor $`r_i(\stackrel{}{\lambda })\lambda _i`$ which is exact to second order, rather than $`\lambda _i`$. It must be noted however that the velocity field is not given to second order in an explicit manner. The equation $`_q\stackrel{}{u}=_ir_i\lambda _i`$ is exact to second order but, to obtain the velocity field, we must integrate it. Figure captions: Figure 1: Evolution of the axis lengths for a homogeneous ellipsoid embedded in an expanding Universe. Initial axial ratios are 1:1.25:1.50. The solid line corresponds to CZA, which is practically exact; and the dashed line corresponds to an approximation that makes all $`r_i`$ equal to their symmetrized value.
no-problem/0003/cond-mat0003357.html
ar5iv
text
# A dynamical model describing stock market price distributions ## 1 Introduction One of the most important problems in mathematical finance is to know the probability distribution of speculative prices. In spite of its importance for both theoretical and practical applications, the problem is as yet unsolved. The first approach to the problem was given by Bachelier in 1900 when he modelled price dynamics as an ordinary random walk where prices can go up and down due to a variety of many independent random causes. Consequently the distribution of prices was Gaussian. The normal distribution is ubiquitous in all branches of natural and social sciences and this is basically due to the Central Limit Theorem: the sum of independent, or weakly dependent, random disturbances, all of them with finite variance, results in a Gaussian random variable. Gaussian models are thus widely used in finance although, as Kendall first noticed, the normal distribution does not fit financial data, especially at the wings of the distribution. Thus, for instance, the probability of events corresponding to 5 or more standard deviations is around 10<sup>4</sup> times larger than the one predicted by the Gaussian distribution; in other words, the empirical distributions of prices are highly leptokurtic. It is the existence of too many such events, the so-called outliers, that is the reason for the existence of “fat tails” and for the uselessness of the normal density, especially at the wings of the distribution. Needless to say, the tails of the price distributions are crucial in the analysis of financial risk. Therefore, obtaining a reliable distribution has deep consequences from a practical point of view. One of the first attempts to explain the appearance of long tails in financial data was made by Mandelbrot in 1963 who, based on Pareto-Lévy stable laws, obtained a leptokurtic distribution. Nevertheless, the price to pay is high: the resulting probability density function has no finite moments, except the first one. This is indeed a severe limitation, and it is not surprising since Mandelbrot’s approach can still be considered within the framework of the Central Limit Theorem, that is, the sum of independent random disturbances of infinite variance results in the Lévy distribution, which has infinite variance. On the other hand, the Lévy distribution has been tested against data in a great variety of situations, always with the same result: the tails of the distribution are far too long compared with actual data. In any case, as Mantegna and Stanley have recently shown, the Lévy distribution fits the center of empirical distributions very well, much better than the Gaussian density, and it also shares the scaling behavior shown in data. Therefore, if we want to explain speculative price dynamics as a sum of weakly interdependent random disturbances, we are confronted with two different and in some way opposed situations. If we assume finite variance, the tails are “too thin” and the resulting Gaussian distribution only accounts for a narrow neighborhood at the center of the distribution. On the other hand, the assumption of infinite variance leads to the Lévy distribution, which explains quite well a wider neighborhood at the center of distributions but results in “too fat tails”. The necessity of having an intermediate model is thus clear, and this is the main objective of the paper.
Obviously, since the works of Mandelbrot and Fama on Lévy distributions, there have been several approaches to the problem, some of them applying cut-off procedures of the Lévy distribution and, more recently, the use of ARCH and GARCH models to obtain leptokurtic distributions. The approaches based on cut-off procedures are approximations to the distributions trying to better fit the existing data, but they are not based on a dynamical model that can predict their precise features. On the other hand, ARCH and GARCH models are indeed dynamical adaptive models, but they do not provide an overall picture of the market dynamics resulting in a distinctive probability distribution. In fact, ARCH/GARCH models usually assume that the market is Gaussian with an unknown time-varying variance so as to be self-adjusted to obtain predictions. The paper is organized as follows. In Sect. 2 we propose the stochastic model and set the mathematical framework that leads to a probability distribution of prices. In Sect. 3 we present the main results achieved by the model. Conclusions are drawn in Sect. 4. ## 2 Analysis Let $`S(t)`$ be a random process representing stock prices or some market index value. The usual hypothesis is to assume that $`S(t)`$ obeys a stochastic differential equation of the form $$\dot{S}/S=\rho +F(t),$$ (1) where $`\rho `$ is the instantaneous expected rate of return and $`F(t)`$ is a random process with specified statistics; usually $`F(t)`$ is zero-mean Gaussian white noise, $`F(t)=\xi (t)`$, in other words $`dW(t)=\xi (t)dt`$, where $`W(t)`$ is the Wiener process or Brownian motion. In this case, the dynamics of the market is clear since the return $`R(t)\equiv \mathrm{log}[S(t)/S(0)]`$ obeys the equation $`\dot{R}=\rho +\xi (t)`$, which means that returns evolve like an overdamped Brownian particle driven by the “inflation rate” $`\rho `$ and, in consequence, the return distribution is Gaussian. Let us take a closer look at price formation and dynamics. Following Merton we say that the change in the stock price (or index) is basically due to the random arrival of new information. This mechanism is assumed to produce a marginal change in the price and it is modelled by the standard geometric Brownian motion defined above. In addition to this “normal vibration” in price, there is an “abnormal vibration” basically due to the (random) arrival of important new information that has more than a marginal effect on price. Merton models this mechanism as a jump process with two sources of randomness: the arrival times at which jumps occur, and the jump amplitudes. The result on the overall picture is that the noise source $`F(t)`$ in the price equation is now formed by the sum of two independent random components $$F(t)=\xi (t)+f(t),$$ (2) where $`\xi (t)`$ is Gaussian white noise corresponding to the normal vibration, and $`f(t)`$ is “shot noise” corresponding to the abnormal vibration in price. This shot noise component can be explicitly written as $$f(t)=\sum _{k=1}^{\infty }A_k\delta (t-t_k),$$ (3) where $`\delta (t)`$ is the Dirac delta function, $`A_k`$ are jump amplitudes, and $`t_k`$ are jump arrival times. It is also assumed that $`A_k`$ and $`t_k`$ are independent random variables with known probability distributions given by $`h(x)`$ and $`\psi (t)`$, respectively. We now go beyond this description and specify the “inner components” of the normal vibration in price, unifying it with Merton’s abnormal component.
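Before doing so, it may help to see the baseline model of Eqs. (1)-(3) in simulation. The sketch below (Python) generates a return path following $`\dot{R}=\rho +F(t)`$ with Gaussian white noise plus Poisson-distributed jumps; all parameter values, as well as the Gaussian choice for the jump-amplitude distribution, are purely illustrative assumptions and are not fixed by the paper at this stage.

```python
# Minimal simulation of Eqs. (1)-(3): drift, Gaussian noise, Poisson jumps.
import numpy as np

rng = np.random.default_rng(0)

T, n = 1.0, 10_000                  # total time and number of steps
dt = T / n
rho, sigma = 0.05, 0.2              # drift and white-noise amplitude (assumed)
lam, jump_std = 5.0, 0.03           # mean jump frequency and jump size (assumed)

log_S = np.zeros(n + 1)
for i in range(n):
    dW = rng.normal(0.0, np.sqrt(dt))
    n_jumps = rng.poisson(lam * dt)
    jump = rng.normal(0.0, jump_std, size=n_jumps).sum()
    log_S[i + 1] = log_S[i] + rho * dt + sigma * dW + jump

returns = np.diff(log_S)
print("excess kurtosis of simulated returns:",
      np.mean((returns - returns.mean())**4) / returns.var()**2 - 3)
```

Even this simple two-component version already produces a positive excess kurtosis, i.e. a leptokurtic return distribution.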
We thus assume that all changes in the stock price (or index) are modelled by different shot-noise sources corresponding to the detailed arrival of information, that is, we replace the total noise $`F(t)`$ by the sum $$F(t)=\underset{n=n_0}{\overset{m}{}}f_n(t),$$ (4) where $`f_n(t)`$ are a set of independent shot-noise processes given by $$f_n(t)=\underset{k_n=1}{\overset{\mathrm{}}{}}A_{k_n,n}\delta (tt_{k_n,n}).$$ (5) The amplitudes $`A_{k_n,n}`$ are independent random variables with zero mean and probability density function (pdf), $`h_n(x)`$, depending only on a single “dimensional” parameter which, without loss of generality, we assume to be the standard deviation of jumps $`\sigma _n`$, i.e., $$h_n(x)=\sigma _n^1h(x\sigma _n^1).$$ (6) We also assume that the occurrence of jumps is a Poisson process, in this case shot noises are Markovian, and the pdf for the time interval between jumps is exponential: $$\psi (t_{k_n,n}t_{k_n1,n})=\lambda _n\mathrm{exp}[\lambda _n(t_{k_n,n}t_{k_n1,n})],$$ (7) where $`\lambda _n`$ are mean jump frequencies, i.e., $`1/\lambda _n`$ is the mean time between two consecutive jumps . Finally, we order the mean frequencies in a decreasing way: $`\lambda _n<\lambda _{n1}`$. Let $`X(t)`$ be the zero-mean return, i.e., $`X(t)R(t)\rho t`$. For our model $`X(t)`$ reads $$X(t)=\underset{n=n_0}{\overset{m}{}}\underset{k_n=1}{\overset{\mathrm{}}{}}A_{k_n,n}\theta (tt_{k_n,n}),$$ (8) where $`\theta (t)`$ is the Heaviside step function. Our main objective is to obtain an expression for the pdf of $`X(t)`$, $`p(x,t)`$, or equivalently, the characteristic function (cf) of $`X(t)`$, $`\stackrel{~}{p}(\omega ,t)`$, which is the Fourier transform of the pdf $`p(x,t)`$. Note that $`X(t)`$ is a sum of independent jump processes, this allows us to generalize Rice’s method for a single Markov shot noise to the present case of many shot noises . The final result is $$\stackrel{~}{p}(\omega ,t)=\mathrm{exp}\left\{t\underset{n=n_0}{\overset{m}{}}\lambda _n[1\stackrel{~}{h}(\omega \sigma _n)]\right\}.$$ (9) As it is, $`X(t)`$ represents a shot noise process with mean frequency of jumps given by $`\lambda =\lambda _n`$ and jump distribution given by $`h(x)=\lambda _nh_n(x)/\lambda `$. Nevertheless, we make a further approximation by assuming (i) $`n_0=\mathrm{}`$, i.e., there is an infinite number of shot-noise sources, and (ii) there is no characteristic time scale limiting the maximum feasible value of jump frequencies, thus $`\lambda _n\mathrm{}`$ as $`n\mathrm{}`$. Both assumptions are based on the fact that the “normal vibration” in price is formed by the addition of (approximately) infinitely many random causes, which we have modelled as shot noises. According to this, we introduce a “coarse-grained” description and replace the sum in Eq. (9) by an integral $$\stackrel{~}{p}(\omega ,t)=\mathrm{exp}\left\{t_{\mathrm{}}^{u_m}\lambda (u)[1\stackrel{~}{h}(\omega \sigma (u))]𝑑u\right\}.$$ (10) In order to proceed further we should specify a functional form for $`\lambda (u)`$ and $`\sigma (u)`$. We note by empirical evidence that the bigger a sudden market change is, the longer is the time we have to wait until we observe it. Therefore, since $`\lambda (u)`$ decreases with $`u`$ (recall that frequencies are decreasingly ordered) then $`\sigma (u)`$ must increase with $`u`$. We thus see that $`\sigma (u)`$ has to be a positive definite, regular and monotone increasing function for all $`u`$. The simplest choice is: $`\sigma (u)=\sigma _0e^u`$. 
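A toy Monte Carlo of Eq. (8) with this choice of $`\sigma (u)`$ illustrates how the superposition of many shot-noise sources builds up the return. In the sketch below (Python) the jump amplitudes are taken Gaussian and the frequencies $`\lambda _n`$ are assumed to fall off geometrically, both purely for illustration; the specific relation between $`\lambda `$ and $`\sigma `$ adopted in the paper is introduced below.

```python
# Zero-mean return X(T) as a sum of independent Poisson shot-noise sources,
# the n-th source having jump frequency lambda_n and jump width sigma_n.
import numpy as np

rng = np.random.default_rng(1)

T = 1.0
sigma0, lambda0 = 1e-4, 2000.0      # assumed overall scales
n_sources = 6

def sample_X(T):
    x = 0.0
    for n in range(n_sources):
        lam_n = lambda0 * 0.2**n    # illustrative, rapidly decreasing jump frequency
        sig_n = sigma0 * np.exp(n)  # sigma(u) = sigma_0 e^u evaluated at integer u
        k = rng.poisson(lam_n * T)  # number of jumps of this source up to time T
        x += rng.normal(0.0, sig_n, size=k).sum()
    return x

samples = np.array([sample_X(T) for _ in range(20_000)])
kurt = np.mean((samples - samples.mean())**4) / samples.var()**2 - 3
print(f"std = {samples.std():.2e}, excess kurtosis = {kurt:.2f}")
```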
On the other hand, there is empirical evidence of scaling properties in financial data. We summarize the above requirements (i.e., the inverse relation between $`\lambda `$ and $`\sigma `$, and scaling) by imposing the “dispersion relation” $$\lambda =\lambda _0(\sigma _0/\sigma )^\alpha ,$$ (11) where $`\alpha `$ is the scaling parameter. Under these assumptions the cf of the return $`X(t)`$ reads $$\stackrel{~}{p}(\omega ,t)=\mathrm{exp}\left\{-\lambda _0t\sigma _0^\alpha \int _0^{\sigma _m}z^{-1-\alpha }[1-\stackrel{~}{h}(\omega z)]dz\right\},$$ (12) where $`\sigma _m=\sigma _0e^{u_m}`$ is the maximum value of the standard deviation. We observe that if $`\sigma _m=\infty `$, which means that some shot-noise source has infinite variance, then Eq. (12) yields the Lévy distribution $$\stackrel{~}{L}_\alpha (\omega ,t)=\mathrm{exp}(-kt\omega ^\alpha ),$$ (13) where $$k=\lambda _0\sigma _0^\alpha \int _0^{\infty }z^{-1-\alpha }[1-\stackrel{~}{h}(z)]dz.$$ (14) Hence, if we want a distribution with finite moments, we have to assume a finite value for $`\sigma _m`$. Let $`\lambda _m`$ be the mean frequency corresponding to the maximum (finite) variance. Recall that, in the discrete case (cf. Eq. (9)), shot-noise sources are ordered; thus $`\lambda _m`$ and $`\sigma _m`$ correspond to the mean frequency and the variance of the last jump source considered. Our last assumption is that the total number of noise sources in Eq. (8) increases with the observation time $`t`$ and, since $`n_0=-\infty `$, this implies that $`m=m(t)`$ is an increasing function of time. Consequently, the mean period of the last jump source, $`\lambda _m^{-1}`$, also grows with $`t`$. The simplest choice is the linear relation $`\lambda _mt=a`$, where $`a>0`$ is constant. Therefore, from the dispersion relation, Eq. (11), we see that the maximum jump variance depends on time as a power law: $$\sigma _m^2=(bt)^{2/\alpha },$$ (15) where $`b\equiv \sigma _0^\alpha \lambda _0/a`$. We finally have $$\stackrel{~}{p}(\omega ,t)=\mathrm{exp}\left\{-abt\int _0^{(bt)^{1/\alpha }}z^{-1-\alpha }[1-\stackrel{~}{h}(\omega z)]dz\right\}.$$ (16)
## 3 Results
Let us now present the main results and consequences of the above analysis. First, the volatility of the return is given by $$\langle X^2(t)\rangle =\frac{a\sigma _m^2}{2-\alpha }=\frac{a}{2-\alpha }(bt)^{2/\alpha },$$ (17) which shows that $`\alpha <2`$ (otherwise the variance would not be positive and finite) and that the volatility exhibits super-diffusion. The anomalous diffusion behavior of the empirical data (at least at small time scales) was first shown by Mantegna and Stanley, although without mentioning it explicitly. Second, the kurtosis is constant and given by $$\gamma _2=\frac{(2-\alpha )^2\stackrel{~}{h}^{(iv)}(0)}{(4-\alpha )a}.$$ (18) Thus $`\gamma _2>0`$ for all $`t`$; in other words, we have a leptokurtic distribution on all time scales. Third, the return probability distribution scales as $$p(x,t)=(bt)^{-1/\alpha }p(x/(bt)^{1/\alpha })$$ (19) and the model is therefore self-similar. In Fig. 1 we plot the super-diffusive behavior. Circles correspond to empirical data from the S&P 500 cash index during the period January 1988 to December 1996. The solid line shows the super-diffusive character predicted by Eq. (17), setting $`\alpha =1.30`$ and $`ab^{2/\alpha }=2.44\times 10^{-8}`$ (if time is measured in minutes). The dashed line represents normal diffusion, $`\langle X^2(t)\rangle \sim t`$. Observe that the data obey super-diffusion for $`t\lesssim 10`$ min, and when $`t>10`$ min there seems to be a “crossover” to normal diffusion. We finally study the asymptotic behavior of our distribution. It can be shown from Eq.
(12) that the center of the distribution, defined by $`|x|<(bt)^{1/\alpha }`$, is again approximated by the Lévy distribution defined above. On the other hand, the tails of the distribution are solely determined by the jump pdf $`h(u)`$ by means of the expression $$p(x,t)\sim \frac{abt}{|x|^{1+\alpha }}\int _{|x|/\sigma _m}^{\infty }u^\alpha h(u)du,\qquad (|x|\gg (bt)^{1/\alpha }).$$ (20) Therefore, return distributions present fat tails and have finite moments if the jump distributions behave in the same way. This, in turn, allows us to make statistical hypotheses on the form of $`h(u)`$ based on the empirical form and moments of the pdf. In Fig. 2 we plot the probability density $`p(x,t)`$ of the S&P 500 cash index returns $`X(t)`$ observed at time $`t=1`$ min (circles). $`\mathrm{\Sigma }=1.87\times 10^{-4}`$ is the standard deviation of the empirical data. The dotted line corresponds to a Gaussian density with standard deviation given by $`\mathrm{\Sigma }`$. The solid line shows the Fourier inversion of Eq. (12) with $`\alpha =1.30`$, $`\sigma _m=9.07\times 10^{-4}`$, and $`a=2.97\times 10^{-3}`$. We use the gamma distribution for the absolute value of the jump amplitudes, $$h(u)=\mu ^\beta |u|^{\beta -1}e^{-\mu |u|}/2\mathrm{\Gamma }(\beta ),$$ (21) with $`\beta =2.39`$ and $`\mu =\sqrt{\beta (\beta +1)}=2.85`$. The dashed line represents a symmetrical Lévy stable distribution of index $`\alpha =1.30`$ and scale factor $`k=4.31\times 10^{-6}`$ obtained from Eq. (14). We note that the values of $`\sigma _m`$ and $`\mathrm{\Sigma }`$ predict that the Pareto-Lévy distribution fails to be an accurate description of the empirical pdf for $`x\gtrsim 5\mathrm{\Sigma }`$ (see Eq. (20)). We chose a gamma distribution of jumps because (i) as suggested by the empirical data analyzed, the tails of $`p(x,t)`$ decay exponentially, and (ii) it does not favor jumps of too small a size, i.e., jumps with almost zero amplitude. In any case, it would be very useful to have a more microscopic approach (based, for instance, on a “many agents” model) giving some insight into the particular form of $`h(u)`$.
## 4 Conclusions
Summarizing, by means of a continuous description of random pulses, we have obtained a dynamical model leading to a probability distribution for speculative price changes. This distribution is given by the following characteristic function: $$\stackrel{~}{p}(\omega ,t)=\mathrm{exp}\left\{-a\int _0^1z^{-1-\alpha }[1-\stackrel{~}{h}(\omega z\sigma _m(t))]dz\right\},$$ (22) where $`\sigma _m(t)=(bt)^{1/\alpha }`$; it depends on three positive constants: $`a`$, $`b`$, and $`\alpha <2`$. The characteristic function (22) also depends on an unknown function $`\stackrel{~}{h}(\omega )`$, the unit-variance characteristic function of the jumps, which has to be conjectured and fitted from the tails of the empirical distribution. Therefore, starting from simple and reasonable assumptions, we have developed a new stochastic process that possesses many of the features of financial time series (fat tails, self-similarity, super-diffusion, and finite moments), thus providing us with a different point of view on the dynamics of the market. We finally point out that the model does not explain any correlation observed in empirical data (as some markets seem to display). This insufficiency is due to the fact that we have modelled the behavior of returns through a mixture of independent sources of white noise. The extension of the model to include non-white noise sources and, hence, correlations will be presented soon.
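As an illustration of how the distribution defined by Eq. (22) can be evaluated in practice, the sketch below performs the quadrature and the Fourier inversion numerically for the gamma jump pdf of Eq. (21). The closed form used for the jump characteristic function is the standard result for the symmetric two-sided gamma density; the values of $`a`$ and $`\sigma _m`$ are illustrative placeholders rather than the fitted S&P 500 values.

```python
# Minimal numerical sketch: invert the characteristic function (22) for a
# gamma jump pdf (Eq. 21) to obtain p(x,t).  Parameter values below are
# illustrative placeholders, not the fitted S&P 500 values.
import numpy as np

alpha, beta = 1.30, 2.39
mu = np.sqrt(beta * (beta + 1.0))       # unit-variance normalisation of the gamma jumps
a, sigma_m = 3.0e-2, 9.0e-4             # illustrative values for a and sigma_m(t)

def trapz(y, x):
    """Simple trapezoidal rule on a (possibly non-uniform) grid."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def h_tilde(w):
    """Characteristic function of the symmetric gamma jump pdf of Eq. (21)."""
    return (1.0 + (w / mu) ** 2) ** (-beta / 2.0) * np.cos(beta * np.arctan(w / mu))

z = np.logspace(-8.0, 0.0, 2001)        # integration grid in z for Eq. (22)

def p_tilde(w):
    """Characteristic function (22), evaluated by quadrature over z."""
    integrand = z ** (-1.0 - alpha) * (1.0 - h_tilde(w * z * sigma_m))
    return np.exp(-a * trapz(integrand, z))

def p_of_x(x, w_max=6.0e4, n_w=6000):
    """p(x,t) from the inverse Fourier transform of the (real, even) cf."""
    w = np.linspace(0.0, w_max, n_w)
    pw = np.array([p_tilde(wi) for wi in w])
    return trapz(np.cos(w * x) * pw, w) / np.pi

for x in (0.0, 2.0e-4, 1.0e-3):
    print(f"x = {x:7.1e}    p(x,t) ~ {p_of_x(x):9.1f}")
```

The same routine can be used with any other conjectured jump pdf, since only the function `h_tilde` encodes that choice.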
This work has been supported in part by Dirección General de Investigación Científica y Técnica under contract No. PB96-0188 and Project No. HB119-0104, and by Generalitat de Catalunya under contract No. 1998 SGR-00015.
# What if Dark matter is Bosonic and self-interacting ## Abstract Recently the problem of singular galactic cores and over-abundant formation of dwarf galaxies, inherent to the standard cold dark matter model, had attracted a great deal of attention. One scenario which may be free of these problems invokes a self-interacting Bose-field. We find the limiting core density in this model due to the self-annihilation of the scalar field into its own relativistic quanta. The limiting density may correspond to the observable one if there is only one dark matter component. Alternatively, there may be more than one dark matter species and the annihilation of one species may be very efficient with subsequent expansion of the other, thus avoiding the problem of singular cores. Introduction. The nature of the dark component which constitutes most of the matter in the Universe remains unknown. It is usually assumed that the dark matter (DM) is cold and non-interacting. Adding a non-zero cosmological constant to the non-baryonic dark matter gives rise to the so-called $`\mathrm{\Lambda }`$CDM model. In spite of its great success in reproducing the observed Universe at large scales, in the recent few years it has become more and more evident that the $`\mathrm{\Lambda }`$CDM model suffers from predicting too much power of density perturbations on small scales. First, the central density distributions in dwarf and low surface brightness galaxies seem to have finite cores in contrast with the singular profiles predicted by N-body simulations (see however Ref.). Secondly, high resolution numerical experiments predict that galaxies as our own should contain several thousands clumps of dark matter of the size of a dwarf galaxy . However, only a few of these are observed. Several solutions to this problem have been suggested in the literature: i) the dark matter is not cold, but warm ; ii) the dark matter is interactive ; iii) the dark matter is warm and interactive ; iv) the power spectrum of $`\mathrm{\Lambda }`$CDM has sharp drop on subgalactic scales ; v) the dark matter is in the form of a self-interacting scalar field . All these ideas are not new and have been previously considered in different applications. For instance, warm dark matter was considered in , self-interacting dark matter was studied in Ref. , while the scalar field model of Ref. was considered in Ref. . Clearly all these models require an appropriate degree of fine-tuning in their parameters (e.g. mass and self-couplings). For example, the dark matter component of Ref. should interact sufficiently strongly to allow the dark matter in the halo to start relaxing on time-scale of the age of the Universe, but the relaxation should not be too fast and already completed otherwise a gravothermal instability will inevitably occur , leading to the formation of singular profiles. This is not a problem within the scalar field model where the pressure which supports the core is generated by the self-repulsion of dark matter particles. However, for the halo dominated by the scalar field the radius of its core is only a function of the field mass and self-coupling and therefore they should obey certain relation to fit observations. 
In this paper we will study further constraints on the parameters of scalar field model with mass $`m`$ and self-coupling $`\lambda `$ (with interaction potential defined as $`\lambda \varphi ^4/4`$) and give a testable prediction for the core radius and core density of the gravitational bound object composed by scalar quanta. In particular, we analyze constraints which are due to inevitable self-annihilation of the self-interacting scalar field without conserved quantum numbers into its own relativistic quanta. Self-interacting scalar field and gravitationally bound clumps. It is assumed that the scalar field is in the state of coherent oscillations, just like the familiar axion in cosmological setting, and that the gravitationally bound object is formed out of the field clump. Of the prime interest for comparison with observations is the case when self-gravity of the field dominates (generalization for the case of the scalar field in the external gravitational well, created e.g. by baryonic component, is straightforward). The parameters of such configuration can be estimated analyzing a simple equation of “hydrostatic equilibrium” in non-relativistic limit $`{\displaystyle \frac{dP(r)}{dr}}`$ $`=`$ $`{\displaystyle \frac{\rho (r)M(r)}{M_{\mathrm{Pl}}^2r^2}},`$ (1) $`{\displaystyle \frac{dM(r)}{dr}}`$ $`=`$ $`4\pi r^2\rho (r).`$ (2) The pressure $`P(r)`$ and the density $`\rho (r)`$ have to be understood here as quantities averaged over the period of field oscillation, which gives for the equation of state $`P\lambda \rho ^2/3m^4`$. These equations have a simple solution $`\rho (r)=\rho (0)\mathrm{sin}(x)/x`$, where $`xr/r_c`$ and the core radius satisfies<sup>*</sup><sup>*</sup>*This definition of $`r_c`$ coincides with Ref. but is $`\sqrt{6}`$ smaller compared to Ref. , because in the $`r_c`$ was defined via $`\rho (r_c)=0`$. $$\frac{m^4}{\lambda }\frac{M_{\mathrm{Pl}}^2}{6\pi r_c^2},$$ (3) or $$r_c10^2\sqrt{\lambda }(\mathrm{eV}/m)^2\mathrm{kpc}.$$ (4) Note that the density in the core, $`\rho (0)`$, is unrelated to the core radius, but the density is limited. Equation of state changes to relativistic, $`P=\rho /3`$ when $`\varphi _0>m/\sqrt{\lambda }`$, where $`\varphi _0`$ is the amplitude of field oscillations, and the object became unstable . This corresponds to the maximum density, $`\rho =m^2\varphi _0^2`$, $$\rho _{\mathrm{max}}m^4/\lambda .$$ (5) This density corresponds to the object which is close to the limit of black hole formation. Since this is the only restriction on the density if the field has conserved quantum numbers (which corresponds to the complex scalar field), the model with conserved particle number is unappealing (the excess dark matter cusps may Bose-condense into black holes in the centers of galaxies , but this can not be a solution for all systems). Therefore, here we consider the model where the dark matter particle number is not conserved. For example, the real scalar field can decay or self-annihilate. The corresponding constraints on the density arising from self-annihilation, which are inevitable because of the non-zero $`\lambda `$, will be considered below. Let us first though consider under which conditions the coherent field configuration forms. Condensation. If the self-coupling is not very large, during the collapse of spatially large configurations the coherence of the field will be destroyed even if the configuration develops from the small overdensity in initially homogeneous oscillating field. 
Namely, after virialization at each spatial point there will be many streams of particles, each with different vector of velocity. In such situation the analysis above and Eq. (4) are not applicable. If velocities dominate, the virialized configuration will behave in the gravitational field as non-interacting usual dark matter. However, with time the Bose-condensation will occur . This will happen even with very small values of the self-coupling since the scattering is Bose-enhanced if the phase-space density of particles is larger than unity, which will be always the case with parameters satisfying Eq. (4). If the relaxation time for the Bose-condensation is smaller than the age of the Universe, we may apply Eq. (4) for the final configuration. Let us consider now the relaxation in virialized clumps due to the scattering process $`2\varphi 2\varphi `$ following Refs. . The inverse relaxation time is $`t_R^1(1+n)\sigma \rho v_\mathrm{e}m^1`$, where $`\sigma `$ is the corresponding cross section, and $`v_\mathrm{e}`$ is the characteristic velocity which characterizes the depth of the gravitational well. We are taking into account the possibility that mean phase-space density of $`\varphi `$ particles, $`n`$, can be large, which accounts for the factor $`(1+n)`$ in the expression for the relaxation time. For particles bound in a gravitational well, it is convenient to rewrite this expression in the form $$t_R^1\frac{\lambda ^2\rho v_\mathrm{e}}{m^3}\left[1+\frac{\rho }{m^4v_\mathrm{e}^3}\right].$$ (6) Let us consider first the case $`n1`$. The relaxation time in the clump is smaller than a given time $`\tau `$ if $$\rho >\frac{v_e}{\sqrt{m\tau }}\frac{m^4}{\lambda }.$$ (7) With $`\rho =0.02M_{}/\mathrm{pc}^35\times 10^6`$ $`\mathrm{eV}^4`$ and $`v_e=100`$ km s<sup>-1</sup>, which approximately corresponds to the parameters of the cores of dwarf galaxies, the relaxation time will be smaller than the age of the Universe if $$\lambda >10^{15}(m/\mathrm{eV})^{7/2}.$$ (8) If this condition is satisfied, the Bose condensate in the center of gravitational well should form . In addition to the condensation due to self-interaction, there is a process of purely gravitational relaxation, also with subsequent formation of the coherent field configuration, see Refs. . However, this should be efficient only if density inhomogeneities are large (e.g. during initial stage of the collapse), while we are interested in continuing condensation in the already virialized clump as well. In addition, in all range of parameters relevant for the present discussion, the condensation solely due to self-coupling is efficient. Self-annihilation of the condensate. Besides taking into account the specific Bose-enhancement during the process of collisional relaxation, one has also to consider the decay of the condensate. This is also peculiar and may correspond to a “laser” effect . In other words, the Bose-stimulation of relevant processes has to be taken into account. In such a case it is important to know up to which distances the decay or annihilation products stay in the resonance with each other. If the initial configuration is not condensed, but can be described by some distribution of (free) particles over momenta, one can employ the Boltzmann kinetic approach . In a stationary state (created, say, by the process of violent relaxation) the distribution of particles in the phase space is a function of integrals of motion like energy. 
This does not mean though that one can neglect the red-shift in the collision integral for particles which are at the same energy level. This is because particles are moving. However, there is enough phase-space available for the decay products to stay in the resonance if the distribution of “parent” particles is isotropic . With the average momentum of the distribution of the parent particles going to zero (condensation), the collision integral start to diverge. The Boltzmann approach breaks down and one can employ the formalism of particle creation by time dependent classical background. For the condensed state in equilibrium in the gravitational well we have the following peculiarity: the red-shift cancels out and the decay products stay in the resonance throughout the whole configuration. First, in the condensed state particles are not moving. Second, in the state of hydrostatic equilibrium the gravitational energy and internal interaction energy are tuned precisely in the way to cancel the gravitational redshift. This can be understood in several ways. Indeed, in hydrostatic equilibrium, the total mass of configuration $`M`$ (which includes all forms of energy) takes its minimum possible value under perturbations of internal structure. Some small amount of particles can be moved around resulting in $`\delta M=0`$. As a Gedanken experiment one can think to move around particles which are going to decay. The condition $`\delta M=0`$ means that at infinity the products of decay will have the same energy. Therefore, they stay in resonance on WKB trajectories labeled by quantum numbers at infinity. In other words, from the point of view of local observers, in hydrostatic equilibrium $`\mu \sqrt{g_{00}}`$ = const, where $`\mu `$ is chemical potential, see e.g. . Compare this with the energy of free moving particle as measured by the local observer, $`\omega \sqrt{g_{00}}`$ = const. Therefore, $`\mu /\omega =`$ const and the decay process stays in the resonance. Let us make this explicit for the self-annihilation of the Bose-condensed field in the metric $`ds^2=fdt^2f^1\delta _{ij}dx^idx^j`$, where $`f=1+2\mathrm{\Phi }`$ and $`\mathrm{\Phi }`$ is Newtonian gravitational potential. Condensed state can be described by the field configuration of the form $`\varphi =\varphi (t)\psi (𝐱)`$, where $`\varphi (t)`$ is periodic function of time which to the first approximation is $`\varphi (t)=\varphi _0\mathrm{sin}\omega t`$, with $`\omega `$ close to $`m`$. Let us describe quanta which propagate in this metric as WKB plane waves, $`\delta \varphi g_p(t)e^{ip_jx^j}`$, where $`p_j=p_j(𝐱)`$ are spatial momenta which satisfy $`g^{\mu \nu }p_\mu p_\nu =m_{\mathrm{eff}}^2`$ or $`p_0^2=f^2|𝐩|^2+fm_{\mathrm{eff}}^2=`$ const. Here $`m_{\mathrm{eff}}`$ includes contribution to the particle mass from the interaction with the condensate, $`m_{\mathrm{eff}}^2=m^2+3\lambda \varphi _0^2\psi ^2/2`$. Mode functions satisfy the equation $$\delta \ddot{\varphi }f^2^2\delta \varphi +f(m_0^2+3\lambda \varphi )\delta \varphi =0.$$ (9) Therefore the equation for $`g_p(t)`$ is $$\ddot{g_p}+p_0^2g_p2\omega ^2(1+2\mathrm{\Phi })\psi ^2q\mathrm{cos}(2\omega t)g_p=0,$$ (10) where $`q3\lambda \varphi _0^2/4\omega ^2`$ is the resonance parameter. We see that change in the gravitational parameter $`\mathrm{\Phi }`$ with the distance gives insignificant change in the “effective” resonance parameter. 
More important is the change in $`\psi ^2(𝐱)`$, but most of the modes with the same value of $`p_0`$ can stay in the resonance up to the distances comparable to the core size. The important fact which leads to this conclusion is that $`\omega =`$ const. The problem reduces to particle creation by time dependent homogeneous classical background and is very well studied, see e.g. . We briefly review the results below. The number density of the created particles grows exponentially with time, $`n_k=\mathrm{exp}(\mu _kt)`$ where the characteristic exponent is positive and non-zero in narrow resonance bands and its numerical value is model dependent, being a function of the coupling constants of the theory. There can be several channels of decay, e.g. $`\varphi `$ can be coupled to photons leading to peculiar cosmic maser effect discussed in which by itself puts constraints on the strength of the coupling to the electromagnetic field. The strength of the coupling $`f_\varphi `$ to the electromagnetic field (as well as to any other possible but hypothetical field) is unknown and might be even zero. However, we cannot disregard the decays of the condensate into its own quanta because of the self-coupling $`\lambda `$. Let us consider the consequences of the condensate decays. The rate of particle production as a function of particle momenta $`k`$ is determined by the growth rate of unstable solutions of the Mathieu equation for the corresponding mode functions $$\ddot{g}_k+[A2q\mathrm{cos}(2\tau )]g_k=0,$$ (11) and at the center of the $`N`$-th instability band the parameter $`\mu _N`$ is given by $$\mu _N=\frac{m}{2N}\frac{q^N}{(2^{N1}(N1)!)^2}.$$ (12) The coupling of $`\varphi `$ to the electromagnetic field gives $`A=4k^2/m^2`$ and $`q=2k\varphi _0/mf_\varphi `$. The decay of $`\varphi `$ into two photons is saturated in the first instability band which is centered at $`A=1`$. In such a case, $`\mu _1=qm/2`$ and the products of the decay have momentum $`k=m/2`$, with a width $`\delta k\mu `$. For the self-annihilation $`4\varphi 2\varphi `$, as we’ve seen one finds $`A=(k^2+m^2)/m^2+2q`$ and $$q=3\lambda \varphi _0^2/4m^2.$$ (13) The self-decay of $`\varphi `$ occurs in the second instability band centered at $`A=4`$ with $`\mu _2=q^2m/16`$, momentum $`k=\sqrt{3}m`$ and width $`\delta k\mu `$. Clearly, the products of the self-annihilation are ultra-relativistic and easily escape the gravitational well. The rate of decay $`\mu `$ is a function of the amplitude of the field oscillations $`\varphi _0`$ and therefore is a function of the energy density in the core: $`\rho =m^2\varphi _0^2`$. Notice that the exponential growth of the particle number in the resonance bands is due to the Bose-statistics: already created particles stimulate production of new quanta. However, particles which leave gravitational well do not participate in the stimulation process. Therefore, the exponential growth occur only if $`D\mu r_c>1`$, where $`r_c`$ is the core radius of the field configuration. If $`D1`$ initially, the number density of decay quanta will grow exponentially in time in the region of the core because in each decay process two identical particles are produced and travel in opposite directions. In a sense, this system is equivalent to the inversely populated laser medium placed between reflecting mirrors. The resulting explosion will reduce the density in the core below the level at which $`D1`$. 
If the core was growing gradually, starting from the small density, which is likely the case in astrophysical situation, the density will just stop growing at the condition $`D1`$ even if the infall of particles continues due to condensation from surrounding non-relativistic “gas”. In this regime the luminosity in relativistic particles will be equal to the rate of Bose-condensation. Let find the maximum core density corresponding to the condition $`D1`$ in the case of self-annihilation. The condition $`q^2mr_c1`$ with $`r_c`$ defined in Eq. (4) and $`q`$ defined in Eq. (13) givesNumerically, the assumption of perfect hydrostatic equilibrium turns out to be not vitally important here. Had we used instead of $`r_c`$ the distance over which the gravitational redshift equals to the width of the resonance band, we would have obtained exactly the same relation, but with power 1/2 replaced by the power 2/5 in the prefactor of $`m^4/\lambda `$. $$\rho _{\mathrm{max}}\left(\frac{m}{M_p\sqrt{\lambda }}\right)^{1/2}\frac{m^4}{\lambda }\left(\frac{1}{mr_c}\right)^{1/2}\frac{m^4}{\lambda }.$$ (14) Interestingly, there are indications that the halo central density is nearly independent of the mass from the galactic to the galaxy cluster scales, with average value of around $`\rho =0.02M_{}/\mathrm{pc}^3`$ . With this value of density, Eq. (14) gives $$\lambda 10^8(m/\mathrm{eV})^{7/2}.$$ (15) Comparing this with the condition (8), we see that the Bose-condensation is efficient indeed in this parameter range. Let us finally estimate the maximum core density which corresponds to non-stimulated self-annihilation ($`D<1`$, and the rate of Bose-condensation is small). The core density will stop changing effectively when the rate of annihilation will became comparable to the age of the Universe, $`t_0`$. We find $$\rho _\mathrm{c}\frac{1}{(mt_0)^{1/3}}\frac{m^4}{\lambda }.$$ (16) We see that not only the core radius, but also the core density may be uniquely defined in terms of the mass and the coupling of the scalar field. This provides a severe test of the model. The core radius or the maximum density (14) will be changed somewhat, if the field is embedded in an external gravitational well (created by baryonic matter), but this will not alter the required parameter range significantly. However, the picture may be more complicated and perhaps more interesting with core radius (4) and limiting density (14) not related to the observed characteristics of the dark halos. Assume that initially the field $`\varphi `$ contributes to the most of the dark component and that the parameters are such that the Bose-condensation is efficient on a time scale shorter than the age of the Universe. Finally, suppose that in the core of the halo which forms the condition $`D1`$ is satisfied. This is a modification of the scenario considered in Refs. where the possibility of electromagnetic radiation was advocated. Here annihilation of the field $`\varphi `$ into itself will do the job, but we require that the Bose-condensation with subsequent inflow into the core and decays are efficient enough to re-process the major fraction of the dark halo into relativistic particles. The baryonic core (and any other dark component) will then expand after loosing part of gravitating central mass. The problem of singular cores may be avoided in this way in a wider range of parameters. 
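As a rough numerical illustration of the scales involved (a sketch written for this text, not taken from the analysis above), the following snippet converts the quoted halo core density to natural units and evaluates the core radius following from Eq. (3) and the limiting density of Eq. (14) for an illustrative choice of the field mass and self-coupling.

```python
# Minimal numerical sketch (not from the paper): evaluate the core radius
# implied by Eq. (3) and the limiting core density of Eq. (14) in natural
# units, for illustrative values of the field mass m and coupling lambda.
import numpy as np

# unit conversions (natural units, hbar = c = 1)
eV_per_Msun = 1.12e66          # one solar mass in eV
eV_inv_per_pc = 1.56e23        # one parsec in eV^-1
M_pl = 1.22e28                 # Planck mass in eV

def core_radius_kpc(m_eV, lam):
    """Eq. (3) rearranged: r_c = sqrt(lambda/(6 pi)) * M_pl / m^2, converted to kpc."""
    r_c_eVinv = np.sqrt(lam / (6.0 * np.pi)) * M_pl / m_eV**2
    return r_c_eVinv / eV_inv_per_pc / 1.0e3

def rho_max_eV4(m_eV, lam):
    """Eq. (14): limiting core density set by stimulated self-annihilation."""
    return (m_eV / (M_pl * np.sqrt(lam))) ** 0.5 * m_eV**4 / lam

# the halo core density quoted in the text, 0.02 Msun/pc^3, in eV^4
rho_halo = 0.02 * eV_per_Msun / eV_inv_per_pc**3
print(f"halo core density   ~ {rho_halo:.1e} eV^4")

m, lam = 1.0, 1.0e-8            # illustrative choice: m = 1 eV, lambda = 1e-8
print(f"core radius         ~ {core_radius_kpc(m, lam):.4f} kpc")
print(f"limiting density    ~ {rho_max_eV4(m, lam):.1e} eV^4")
```

The conversion factors are standard; the printed numbers are only meant to show the orders of magnitude that enter the comparison with the observed halo core densities.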
Needless to say, our proposal of annihilating dark matter, as a solution of CDM problems on small scales, is not limited to a self-annihilating scalar field, but applies as well to other possible forms of dark matter particles which may annihilate into different species. We are grateful to V. A. Berezin for useful discussions.
# Sub-Aging in a Domain Growth Model ## I Introduction Aging of glassy systems is now well understood, at least from a qualitative point of view , and different approaches have been used to understand such a behaviour. One of them is the interpretation of aging in terms of a coarsening process. The picture is the following: consider for instance an Ising ferromagnet, which is quenched at time $`t=0`$ below its critical temperature. When $`t`$ increases, two types of domains emerge, with up and down spins. In the thermodynamic limit, equilibrium is never reached. At late times, domains have reached a typical size $`L(t)`$. It is thus natural to assume scaling laws for the different quantities of interest . For instance, one can try the ansatz $`S(𝒌,t)L^dg(kL)`$ for the structure factor (in a $`d`$-dimensional space), or $`C(t,t_w)F(L(t)/L(t_w))`$ for the two-time autocorrelation function, where $`g`$ and $`F`$ are scaling functions. The growth law $`L(t)`$ determines then all the properties of the system. As an example, the droplet model for spin glasses assumes a logarithmic growth, leading to $`C(t,t_w)=F(\mathrm{ln}t/\mathrm{ln}t_w)`$. If the growth law is given by $`L(t)t^\alpha `$, like e.g. in a spinodal decomposition, one gets $`C(t,t^{})=F(t/t_w)`$. This last behaviour is called ‘simple aging’ and has been analytically shown to hold within various non-random models . Moreover, the above functional form for the correlation function is also found analytically in some mean-field models of spin glasses, which give the general form for the correlation functions in the aging regime $`C(t,t_w)=𝒞(\frac{h(t)}{h(t_w)})`$, with $`h`$ and $`𝒞`$ two scaling functions (valid in the two-time regime where both times are large, but with $`1<C<0`$). Although the notations are different, the functional form is the same as in coarsening processes, and it is then very natural to try to interpret the $`h`$-function as a relevant length scale for spin glasses, as was done for instance in ref.. From the experimental and numerical side, it is found that a simple aging behaviour describes the data well, in many different systems. This is interpreted by saying that the relaxation time $`t_r(t_w)`$ of the system scales as the age $`t_w`$ of the sample: $`t_rt_w`$. However, a more subtle effect may appear, since $`t_r`$ very often grows more slowly than $`t_w`$. This effect has been called sub-aging . In his pioneering experiments on polymer glasses, Struik introduced the exponent $`\mu `$ from the relation $`t_rt_w^\mu `$, with $`\mu <1`$. Different values of $`\mu `$ have been reported: Struik used $`\mu 0.9`$, experiments in spin glasses $`\mu 0.97`$ , simulations of a structural glass were fitted using the value $`\mu 0.88`$ , and recently, experiments on a gel gave $`\mu 0.9`$ . It can be checked (this point is discussed in detail in ref.) that the $`\mu `$-exponent is equivalent to the following choice of the $`h`$-function: $`h(t)=\mathrm{exp}(\frac{1}{1\mu }(\frac{t}{t_0})^{1\mu })`$. In accordance to what has been said above, this equivalence holds when $`t_w\mathrm{}`$ and $`tt_wt_w^\mu `$. Another function, the ‘enhanced power law’ form $`h(t)=\mathrm{exp}(\mathrm{ln}^a(t/t_0))`$ with $`a>1`$, has been phenomenologically introduced in the context of spin glasses , and the value $`a=2.2`$ was used to fit experiments. This in turn gives the relation $`t_rt_w/\mathrm{ln}^{a1}(t_w)`$, valid in the regime $`t_w\mathrm{}`$ and $`tt_wt_w/\mathrm{ln}^{a1}(t_w)`$. 
These choices are nonetheless not clearly motivated from a theoretical point of view, since the mean-field spin glass models discussed above only predict the existence of $`h(t)`$, and its analytical computation remains at present an open problem. In this context, simple models where $`h`$ can be computed are much needed, but there are only few examples where sub-aging appears. Very recently, a model exhibiting sub-aging has been proposed by Rinn et al , who studied a slight variation of Bouchaud’s trap model for aging. This has given a theoretical support to the use of an exponent $`\mu `$, even if its physical origin remains somewhat unclear. A scaling approach to the diffusion of a point particle in a low dimensional space has been proposed in ref., and leads in some cases to a sub-aging which can be well described by an enhanced power law. We study in the present paper a model for coarsening (the $`O(n)`$ model in the large-$`n`$ limit) which also exhibits a sub-aging scaling in the autocorrelation function when the order parameter is not conserved. Its origin is the simultaneous presence in the system of two different length scales, whose consequence is the breakdown of the simple scaling laws generally used in domain growth processes. In particular, no $`t/t_w`$-scaling is found, and the relaxation time grows as $`t_rt_w/\sqrt{\mathrm{ln}t_w}`$ (sub-aging). The autocorrelation is shown to be well represented in the asymptotic regime by an enhanced power law with $`a=3/2`$, i.e. $`h(t)=\mathrm{exp}((\mathrm{ln}x)^{3/2})`$. Interestingly enough, $`h(t)`$ can not be interpreted in our example as a length scale. We do not want to argue that the model is a realistic one for the aging of polymers or spin glasses, but rather to give a possible physical explanation (the role of length scales ) for the absence of the ‘naive’ $`t/t_w`$-scaling, and exhibit a simple example where the $`h`$-function can be computed and discussed in terms of length scales, which has not been done so far. ## II The $`O(n)`$ model This model is one of the few exactly solvable model for coarsening. It was first studied by Coniglio and Zannetti , who computed the scaling properties of the structure factor during the domain growth process. They pointed out the presence of the two mentioned length scales, and named ‘multiscaling’ the breakdown of the usual $`S(𝒌,t)L^dg(kL)`$. Bray and Humayun have shown, however, that this multiscaling was a peculiarity of the large-$`n`$ limit, and proved that for a large but finite value of $`n`$, a ‘normal scaling’ was recovered . On an other hand, this ‘pathology’ has been shown to appear as a relevant preasymptotic effect in different coarsening models , like for instance the kinetic Ising model. The model is defined through the Hamiltonian $$H[\mathit{\varphi }]=d^d𝒙\left(\frac{1}{2}(\mathbf{}\mathit{\varphi })^2+\frac{1}{4n}(n\mathit{\varphi }^2)^2\right),$$ (1) where $`\mathit{\varphi }(𝒙,t)`$ is a $`n`$-component vector field in a $`d`$-dimensional space. Two different dynamics may be associated to this model, depending on whether or not the order parameter is conserved. 
In the case of a non-conserved order parameter, the dynamics is given by the so-called time dependent Ginzburg-Landau equation $$\frac{\mathit{\varphi }\mathbf{(}𝒙\mathbf{,}𝒕\mathbf{)}}{t}=\frac{\delta H}{\delta \mathit{\varphi }\mathbf{(}𝒙\mathbf{,}𝒕\mathbf{)}}+𝜼(𝒙,t),$$ (2) where $`𝜼(𝒙,t)`$ is a random Gaussian variable with mean zero and variance given by $`𝜼(𝒙,t)𝜼(𝒙^{\mathbf{}},t^{})=2T\delta (tt^{})\delta (𝒙𝒙^{\mathbf{}}).`$ For conserved fields, we add $`^2`$ in front of the r.h.s to get the Cahn-Hilliard equation $$\frac{\mathit{\varphi }\mathbf{(}𝒙\mathbf{,}𝒕\mathbf{)}}{t}=^2\left(\frac{\delta H}{\delta \mathit{\varphi }\mathbf{(}𝒙\mathbf{,}𝒕\mathbf{)}}\right)+𝜼(𝒙,t),$$ (3) where the variance of $`𝜼(𝒙,t)`$ is $`𝜼(𝒙,t)𝜼(𝒙^{\mathbf{}},t^{})=2T\delta (tt^{})^2\delta (𝒙𝒙^{\mathbf{}}).`$ We shall see below that the limit $`n\mathrm{}`$ allows to solve the dynamics in both cases. The key point that makes the model exactly soluble is that in the limit of $`n\mathrm{}`$, the replacement $`\mathit{\varphi }^2/n\varphi ^2,`$ where $`\varphi `$ is one of the components of $`\mathit{\varphi }`$, can be made. The two types of dynamics are now successively considered. ## III Non-conserved order parameter: simple aging The time dependent Ginzburg-Landau equation (2) associated to the Hamiltonian (1) is $$\frac{\mathit{\varphi }}{t}=^2\mathit{\varphi }+\mathit{\varphi }\frac{1}{n}(\mathit{\varphi }^2)\mathit{\varphi }+𝜼,$$ (4) where the dependence on space and time has been removed for clarity. This differential equation is associated with random initial conditions, in order to reproduce the quench experiment described in the introduction, and $`\varphi (𝒙,0)`$ is taken from a Gaussian distribution with zero mean and variance $`\varphi (𝒙,0)\varphi (𝒙^{\mathbf{}},0)=\mathrm{\Delta }\delta (𝒙𝒙^{\mathbf{}}).`$ From now on, we work at $`T=0`$. In the coarsening problem, temperature does not play an essential role, provided it is below the critical temperature. The scaling regime can then be directly studied at $`T=0`$. The review paper provides a longer discussion of that point, and we discuss below how our results may be (slightly) changed by an non-zero temperature. The large-$`n`$ limit results in the following equations which have to be self-consistently solved: $$\frac{\varphi }{t}=^2\varphi +a(t)\varphi ;a(t)=1\mathit{\varphi }^2.$$ (5) The solution is discussed in Refs., and one finds for the Fourier transform $`\varphi (𝒌,t)=d^d𝒙\varphi (𝒙,t)e^{i𝒌𝒙}`$ $$\varphi (𝒌,t)=\varphi (𝒌,0)e^{k^2t}\left(\frac{t}{t_0}\right)^{d/4},$$ (6) where $`t_0\mathrm{\Delta }^{2/d}/8\pi `$. It is now easy to compute the structure factor $$S(𝒌,t)\frac{1}{V}\varphi (𝒌,t)\varphi (\mathbf{}𝒌,t)=(8\pi t)^{d/2}e^{2k^2t}.$$ (7) We used $`\varphi (𝒌,0)\varphi (\mathbf{}𝒌,0)=\mathrm{\Delta }V`$ from initial conditions. The structure factor may be written as $`S(𝒌,t)=L^dg(kL)`$, with $`L(t)=t^{1/2}`$ and $`g(x)=(8\pi )^{d/2}\mathrm{exp}(2x^2)`$, demonstrating the validity of the scaling hypothesis in that case. The autocorrelation function is defined as $$C(t,t_w)\frac{1}{V}d^d𝒙\varphi (𝒙,t)\varphi (𝒙,t_w)=\frac{1}{V}\frac{d^d𝒌}{(2\pi )^d}\varphi (𝒌,t)\varphi (\mathbf{}𝒌,t_w)$$ (8) and may be easily computed: $$C(t,t_w)=\left[\frac{2\sqrt{tt_w}}{t+t_w}\right]^{d/2}.$$ (9) Defining the scaling variable $`\lambda _1t/t_w`$, $`C(t,t_w)`$ can be rewritten $$C(t,t_w)=F_1(\lambda _1);F_1(x)\left[\frac{2\sqrt{x}}{1+x}\right]^{d/2}.$$ (10) This last equation means that the autocorrelation function exhibits a simple aging behaviour. 
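A minimal numerical check of this simple-aging result can be made directly from the mode solution: the sketch below evaluates the normalised two-time correlation by quadrature over $`k`$, using the decaying Gaussian factor $`e^{-k^2t}`$ of Eq. (6), and compares it with the scaling function $`F_1(t/t_w)`$ of Eq. (10). The choice $`d=3`$ and the times are illustrative.

```python
# Minimal sketch: check the t/t_w collapse of Eq. (10) by computing the
# two-time correlation from the mode solution phi(k,t) ~ phi(k,0) exp(-k^2 t) t^(d/4),
# i.e. C(t,tw) ~ (t tw)^(d/4) * Int dk k^(d-1) exp(-k^2 (t+tw)),
# normalised so that C(t,t) = 1.  Times below are illustrative.
import numpy as np

d = 3
k = np.linspace(0.0, 8.0, 200001)          # quadrature grid for the k-integral

def kernel(s):
    """Int_0^inf dk k^(d-1) exp(-k^2 s), evaluated by the trapezoidal rule."""
    y = k ** (d - 1) * np.exp(-(k**2) * s)
    return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(k))

def C(t, tw):
    """Normalised autocorrelation built from the mode solution."""
    return (t * tw) ** (d / 4.0) * kernel(t + tw) / np.sqrt(
        t ** (d / 2.0) * kernel(2.0 * t) * tw ** (d / 2.0) * kernel(2.0 * tw))

F1 = lambda lam: (2.0 * np.sqrt(lam) / (1.0 + lam)) ** (d / 2.0)   # Eq. (10)

for tw in (1.0, 10.0, 100.0):
    for lam in (1.5, 2.0, 4.0):
        print(f"tw={tw:6.1f}  t/tw={lam:3.1f}  C={C(lam * tw, tw):.5f}  F1={F1(lam):.5f}")
```

The values of the correlation depend on $`t`$ and $`t_w`$ only through the ratio $`t/t_w`$ and coincide with $`F_1`$ up to quadrature error, which is the simple-aging statement of Eq. (10) in numerical form.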
We have then illustrated on a concrete model the scaling approach to domain growth described in the introduction. We shall see in the next section the differences arising when sub-aging is present. Let us note here that a finite temperature does not affect the above discussion, since it simply introduces a short-time relaxation in the correlation function, that does not depend on the waiting time $`t_w`$ and corresponds to an equilibrium relaxation inside the growing domains. The long-time relaxation we are interested in, and which corresponds to the growth of the domains themselves is still described by (10). ## IV Conserved order parameter: sub-aging The Cahn-Hilliard equation (3) associated to the Hamiltonian (1) is given by (still at $`T=0`$) $$\frac{\mathit{\varphi }}{t}=^2\left[^2\mathit{\varphi }+\mathit{\varphi }\frac{1}{n}(\mathit{\varphi }^2)\mathit{\varphi }\right],$$ (11) and is solved following the same steps as previously, leading to : $$\varphi (𝒌,t)=\varphi (𝒌,0)\mathrm{exp}\left(k^4t+k^2\sqrt{\frac{dt}{2}\mathrm{ln}(\frac{t}{t_0})}\right),$$ (12) with $`t_0\mathrm{\Delta }^{4/d}/(16\pi )^2`$. The structure factor reads in that case $$S(𝒌,t)\left[L_1(t)^d\right]^{f(kL_2(t))},$$ (13) where $`f(x)2x^2x^4`$. In this expression, two characteristic length scales have been defined: $`L_1(t)t^{1/4}`$, and $`L_2(t)(\frac{8t}{d\mathrm{ln}(t/t_0)})^{1/4}`$. In the standard scaling form, $`S(𝒌,t)L^dg(kL)`$, the structure factor varies as $`L^d`$ with a prefactor depending on the scaling variable $`kL`$, whereas for the multiscaling form (13), $`S`$ varies as $`L_1^\alpha `$, with an exponent $`\alpha `$ which depends continuously on the scaling variable $`kL_2`$. The two scalings are thus completely different. Coniglio and Zannetti have interpreted this multiscaling in terms of domains composed of sub-domains, each sub-domain growing at a different rate. The initial motivation for the present work was indeed to investigate the possible existence of a ‘hierarchy’ of time scales, similar to the one found in mean-field spin glass models (‘ultrametricity in time’) . A different effect arises instead. Using eq.(12), one easily gets for the autocorrelation function $$C(t,t_w)\frac{1}{(t+t_w)^{d/4}}\mathrm{exp}\left(\frac{d}{8}\frac{\left(\sqrt{t\mathrm{ln}(t/t_0)}+\sqrt{t_w\mathrm{ln}(t_w/t_0)}\right)^2}{t+t_w}\right).$$ (14) It is obvious from this expression that $`C(t,t_w)`$ cannot be written as a function of $`t/t_w`$ only. The physical key ingredient for the absence of the usual scaling is the presence of two different length scales in the system. We prove now analytically that eq.(14) implies sub-aging. It has to be remarked first that when the time difference $`\tau tt_w`$ is equal to $`t_w`$, one has $$C(t_w+t_w,t_w)\underset{t_w\mathrm{}}{}\frac{1}{t_w^{(32\sqrt{2})d/24}}0.$$ (15) In the asymptotic limit of large waiting times, the relaxation of $`C(t,t_w)`$ is complete in times $`\tau t_w`$. In that regime, one can show that $$C(t,t_w)\underset{\tau t_w}{}\mathrm{exp}\left(\frac{d\mathrm{ln}t_w}{64}\left(\frac{\tau }{t_w}\right)^2\right).$$ (16) Defining the scaling variable $`\lambda _2\tau \sqrt{\mathrm{ln}t_w}/t_w`$, eq.(14) can finally be rewritten $$C(t,t_w)F_2(\lambda _2);F_2(x)\mathrm{exp}\left(\frac{dx^2}{64}\right).$$ (17) The relaxation time grows hence as $`t_rt_w/\sqrt{\mathrm{ln}t_w}`$, i.e. more slowly than $`t_w`$: this is a sub-aging behaviour. It is moreover possible to compute the function $`h(t)`$ discussed in the introduction. 
The scaling form $`C(t,t_w)=𝒞\left(\frac{h(t)}{h(t_w)}\right)`$ should be valid in the two-time regime where both times are large, but with a non-zero value of the correlation function. In the present case, this regime is characterized by $$t_w\mathrm{},\tau \frac{t_w}{\sqrt{\mathrm{ln}t_w}}.$$ (18) We have seen that a natural choice for $`h(t)`$ would be $`L_1(t)`$ or $`L_2(t)`$, i.e. a length scale, since it is a common interpretation. This does not work, and a more complicated form has to be found. It is straightforward to realize that a possible choice is an enhanced power law: $$𝒞(x)=\mathrm{exp}\left(\frac{d}{288}\mathrm{ln}^2(x)\right);h(t)=\mathrm{exp}\left((\mathrm{ln}t)^{3/2}\right).$$ (19) The function $`h`$ is neither $`L_1`$ nor $`L_2`$, but a combination of the two, and therefore does not have a direct physical interpretation: $`h(t)\mathrm{exp}\left(\left(L_1/L_2\right)^6\right).`$ ## V Response functions: infinite effective temperatures It is also relevant to study the response functions for aging systems, since it is a major prediction of the dynamical mean-field theory for spin glasses that interesting informations are encoded in the susceptibilities . Up to now, we have studied aging in the two-time correlation functions $`C(t,t_w)`$. In glassy systems, aging is also found in the related response functions $`R(t,t_w)`$, associated with a breakdown of the fluctuation dissipation theorem which at equilibrium would be $`TR(t,t_w)=C(t,t_w)/t_w`$. This is taken into account by introducing an effective temperature $`T_{\text{eff}}`$ through $$T_{\text{eff}}(q)=\underset{t_w\mathrm{}}{lim}\frac{\frac{C(t,t_w)}{tw}}{R(t,t_w)}|_{C(t,t_w)=q}.$$ (20) In coarsening systems, however, response functions have been shown numerically and analytically to be weak, in the sense that $`T_{\text{eff}}\mathrm{}`$ . This property has been related to the decreasing density of topological defects (domain walls) during the coarsening. In the case of the $`O(n)`$ model, no topological defects are present if $`n>d`$, which is naturally the case in the large-$`n`$ limit. We compute then $`R(t,t_w)`$ in the both cases studied above to obtain $`T_{\text{eff}}`$. We refer the reader to ref. for the method, since we follow exactly the same steps. We get the two following expressions: $$R(t,t_w)\left(\frac{t}{t_w}\right)^{d/4}\left(\frac{1}{tt_w}\right)^{d/2},$$ (21) in the non-conserved case, and $$R(t,t_w)\frac{1}{(tt_w)^{(d+2)/4}}\mathrm{exp}\left(\frac{d}{8}\frac{\left(\sqrt{t\mathrm{ln}t}\sqrt{t_w\mathrm{ln}t_w}\right)^2}{tt_w}\right),$$ (22) in the conserved case (we dropped out all numerical constants). Combining eqs.(9,14,21,22), it is easy to show that for the non-conserved and the conserved case successively, one has: $$T_{\text{eff}}(q)\underset{t_w\mathrm{}}{lim}t_w^{d/21},T_{\text{eff}}(q)\underset{t_w\mathrm{}}{lim}\frac{t_w^{(d2)/4}}{(\mathrm{ln}t_w)^{(d+2)/8}\mathrm{exp}(\sqrt{\mathrm{ln}t_w})}.$$ (23) This holds for $`0<q<1`$, and shows that for $`d>2`$, although there is no interpretation here in terms of defects, the effective temperature is infinite, as has been found so far in all domain growth processes . We studied in this letter the aging dynamics of the $`O(n)`$ model in the large-$`n`$ limit. We showed that when the order parameter is not conserved, standard scaling laws hold, leading to a simple aging behaviour. 
We investigated the more interesting case of a conserved dynamics, and were able to show that the multiscaling observed in the structure factor does not imply a hierarchy of time scales (‘ultrametricity in time’ ). Rather, the relaxation takes place in a time scale which is shorter than the waiting time, $`t_rt_w/\mathrm{ln}^{a1}(t_w)`$ with $`a=3/2`$, the correlation function being well represented in that regime by $`C(t,t_w)=𝒞(h(t)/h(t_w))`$, where $`h`$ is an enhanced power law $`h(t)=\mathrm{exp}(\mathrm{ln}^a(t))`$. This simple example exhibits then a very rich aging behaviour, whose origin is the presence of two different length scales during the coarsening process. It shows also that the interpretation of $`h(t)`$ as a length scale may in some cases be misleading. The enhanced exponential form that has been successfully used to fit spin glass experiments arises naturally from our computation. It implies that the relaxation time scales as $`t_rt_w/\mathrm{ln}^{a1}(t_w)`$, which could hardly be experimentally distinguishable from a power law $`t_rt_w^\mu `$, when $`\mu `$ is very near to one, as it is in spin glasses. ###### Acknowledgements. I sincerely thank J. Kurchan who suggested and followed this work, J.-L. Barrat and J.-Ph. Bouchaud for their interest and encouragements, L. F. Cugliandolo and M. Sellitto for their help during the preparation of the manuscript.
# Bond-Ordering Model for the Glass Transition ## I Introduction The problem of glass formation is a classic one in physics and materials science. Viewpoints break down into kinetic explanations of an arrested liquid, and phase transitions with some kind of order parameter. Discussions have approached the glass transition in terms of the fragmentation of small structural units, the agglomeration of clusters through chemical bonds, or correlations between different metastable equilibrium states representing distinct configurations or rearrangements of the system. In addition, attempts have been made to incorporate frustration in local bond ordering in a glass through the introduction of a local order parameter describing locally preferred arrangements of liquid molecules, which, in general, are not consistent with the crystallographic symmetry favored by density ordering. In this model the frustration arises from competition between density ordering and local bond ordering, explaining why some molecules crystallize easily without vitrification, while others easily form glasses without crystallization. Another approach, using computer simulations, introduces a displacement–displacement correlation function as a measure of the local configurational order which grows as the temperature is lowered toward the glass transition. Finally, the role of fluctuations has been reviewed in general fashion with both theoretical and experimental evidence for heterogeneity at the glass transition. We would like to introduce a model of positional glass which incorporates many of these ideas in simple form. First we recall some of the basic structural features of typical network-glass-formers like silica. Vitreous silica is made up of SiO<sub>4</sub> tetrahedra just like the crystalline phase, quartz. Thus, molecular structure for the two different phases of silica, i.e. crystalline and amorphous, are very much alike, with the major exception of n.n.n. distance Si<sub>1</sub>-Si<sub>2</sub> which may be specified in terms of the Si-O-Si bond angle. The distribution of Si-O-Si bond angles $`\beta `$ as determined by Mozzi and Warren for vitreous silica is shown in Fig. 1. From the bond angle distribution V($`\beta `$), it is clear that the Si<sub>1</sub>-Si<sub>2</sub> distances vary significantly, which in fact is the primary source of topological disorder. Yet the bond angle $`\beta `$ is confined to within roughly $`30^o`$ of its most probable value viz $`144^o`$. Clearly, the variation in $`\beta `$ is large enough to suppress the long-range order (LRO) characterizing the crystalline phase, and yet small enough to maintain the medium-range order (MRO) which extends to about $`10\AA `$. Hence, we note that ordering in n.n. and n.n.n. distances or bond lengths is well maintained in spite of the fact that vitreous silica is devoid of any substantial structural order. Similar structural properties are observed in various other network glasses such as B<sub>2</sub>O<sub>3</sub> and GeO<sub>2</sub>. This observation is striking enough to indicate a strong role for the ordering of bonds in a proper microscopic model for the glass transition. The bond ordering can be viewed as being brought about by the local reorientations of molecular clusters as suggested in the theory of Adam and Gibbs. The idea is that supercooled liquids do not necessarily need to undergo structural ordering in order to achieve local equilibrium. 
In fact, local rearrangements of molecular clusters can lead to substantial lowering in the internal energy of an entire sample through reducing bond energies at the local level, which is the primary reason for the ordering of n.n. and n.n.n. distances, or alternatively the bond-angle degrees of freedom, in amorphous materials. A model for the glass transition incorporating small structural units or fragments was proposed by M. Suzuki et al.. In this fragmentation model noncrystalline solids are assumed to be assemblies of pseudo-molecules—a pseudo-molecule being a cluster of atoms having a disordered lattice in which there are no definite defects such as under- or over-coordinated atoms. As temperature increases, bond breaking intensifies at the boundaries of such clusters where the bonds tend to be weak. The bond breaking mechanism arising from the thermal excitation of electrons from bonding to antibonding energy states, causes the noncrystalline solid to fragmentize, with the average fragment size decreasing as the temperature increases. Consequently, material begins to show viscous flow when average fragment size reaches a critical value. The model is shown to have some success in describing the temperature dependence of viscosity and the variation in transition temperature with heating rate for a-Si. The origin of pseudo-molecules in the cooling process, however, is not addressed in the fragmentation model. In the following we consider a microscopic model for the glass transition which is general enough to be applicable to various types of glass-forming systems with widely differing bonding schemes and chemical compositions. We would like to treat glass transition as a phenomenon that finds similar description whether a liquid is cooled or heated through the transition. The focus of bond-ordering model is the bonds linking neighboring atoms rather than the atoms themselves. In other words, a bond is treated as a distinct object in its own right, possessing internal degrees of freedom or electronic states. The internal state of a bond is governed by the separation of the participating atoms. The term bond-ordering refers to the process of relaxation of bonds into their low-lying internal energy states, facilitated by the cooperative rearrangements within molecular groups. Bond ordering, therefore, may be viewed as some form of ordering in energy space. The important point that we would like to bring home, however, is that such an ordering can be achieved without need for any significant structural order. To this effect, we have provided results from Monte Carlo (MC) simulations of the model Hamiltonian which couples the coordinates of ions to the electronic states of electrons, as for a typical covalently bonded network glass. The simulations make clear the possibility of local ordering of bonds, uncorrelated with any kind of long-range structural ordering. Sec. II of this article contains the theoretical background regarding the covalently-bonded systems. In Sec. III we describe the model Hamiltonian. Some new definitions in terms of mathematical expressions for various structural and bond-related quantities of interest, are introduce in Sec. IV. The simulation procedure and results are discussed extensively in Sec. V. Sec. VI contains the concluding remarks and a summary of the main ideas introduced in this article. 
## II Theoretical Background Glass-forming liquids include covalently-bonded network glasses, coordination-constrained metallic glasses and systems made of more complex organic molecules or polymer. Here we take the view that the interaction between constituent atoms or molecular units is given by an effective potential which characterizes their positional and orientational interaction. In particular for covalent materials the interactions may be reasonably described in terms of basis states of degenerate $`s`$ and $`p`$ orbitals on each atom. Linear combinations of orbitals on neighboring atoms then lead to bonding and antibonding states which describe the interactions between atoms. The occupation of the bonding state by one to two electrons results in a lowering of the electronic energy relative to the atomic levels. This covalent bond provides the structural constraint on the relative position of the two atoms. In order to be more specific we consider a general Hamiltonian for a covalently bonded network glass. We start first with a general formulation of the Hamiltonian for a typical glass-forming liquid. Although these systems are often treated in a classical formulation, in fact, the bonding between atoms in a network glass, as well as the coordination in metallic glasses arise from quantum mechanical considerations. The many-body Hamiltonian may be written in the form: $$H=H_{ii}+H_{ie}+H_{ee},$$ (1) where $`H_{ii}`$ is the interaction between ions in the liquid, $`H_{ie}`$ is the ion-electron interaction, and $`H_{ee}`$ is the interaction between electrons. The usual approach to solving this type of many-body problem is to exploit the separation of time scales inherent in the Born-Oppenheimer approximation: solve for the electronic states regarding the ionic coordinates as parameters, then vary the ionic coordinates to minimize the energy or extract the electron-phonon interaction, etc. For a covalent material like most network glasses, the electronic states are reasonably approximated by forming bonding and antibonding states from the atomic $`s`$ and $`p`$ orbitals. The available electrons are then apportioned to the lowest bonding states to obtain the ground state of the system. The bonding and antibonding energies, depend on distance or separation as shown schematically in Fig. 1: the bonding state has a minimum at an ion-ion distance $`r_0`$, while the antibonding state is repulsive at all distances. This may be thought of as a tight-binding approximation to the actual electronic structure of a glass. A bond, is a bonding state occupied by two electrons and is strongest (or has lowest energy) when the distance between ions is near $`r_0`$. On the other hand, a bond missing an electron due to a thermal fluctuation or a transition of the electron to the antibonding state, corresponds to a broken bond. In this tight-binding representation of bonds, the Hamiltonian may be expressed as in Eq. (1), a more detailed expression of which is derived in the next section. ## III Model Hamiltonian The potential energy of a pair interaction may be expanded in the displacement $`x`$ from the equilibrium: $$U(x)=U_0+\frac{x^2}{2!}U_0^{\prime \prime }+\frac{x^3}{3!}U_0^{\prime \prime \prime }+\mathrm{},$$ (2) where the coefficients are evaluated at equilibrium separation. 
Dropping the constant term which plays no role, and in the harmonic approximation, we get: $$U(x)=\frac{x^2}{2}U_0^{\prime \prime }.$$ (3) For our lattice model we identify $`x=|𝑹_i𝑹_j|`$, where $`𝑹_i`$ and $`𝑹_j`$ are the displacements from the equilibrium of the atoms assigned to the $`i`$th and $`j`$th n.n. sites, respectively. Hence, with the idealizing constraint that all displacements are of same magnitude $`|𝑹|`$, i.e. $`|𝑹_i|=|𝑹_j|=|𝑹|`$, we have: $$U\left(|𝑹_i𝑹_j|\right)=|𝑹|^2U_0^{\prime \prime }\left(1\widehat{𝑹}_i\widehat{𝑹}_j\right).$$ (4) Letting $`J=|𝑹|^2U_0^{\prime \prime }`$, we obtain the following Hamiltonian for a system of atoms interacting through n.n. coupling: $$\stackrel{~}{}=J\underset{<i,j>}{}\left(\widehat{𝑹}_i\widehat{𝑹}_j\right).$$ (5) This is clearly of the same mathematical form as the $`q`$-state clock model Hamiltonian, when the displacement degrees of freedom are taken to be discrete. The interaction Hamiltonian we used for our MC simulations, is the following: $``$ $`=`$ $`J{\displaystyle \underset{<i,j>}{}}\left(\widehat{𝑹}_i\widehat{𝑹}_j+1\right)n_{ij}`$ (6) $`=`$ $`J{\displaystyle \underset{<i,j>}{}}\left(\mathrm{cos}\theta _{ij}+1\right)n_{ij},`$ (7) where $`\theta _{ij}=\frac{2\pi }{8}(s_is_j)`$, with $`s_i,s_j=1,2,\mathrm{},8`$. The $`s_i`$’s are integer labels for various possible displacements of an atom from its equilibrium lattice site $`i`$ . The quantity $`n_{ij}`$ should be regarded as the bonding-electron occupation number for a (possible) bond, linking n.n. sites $`i`$ and $`j`$. A bond may or may not be broken depending on whether the corresponding $`n_{ij}`$ takes on the values $`0`$, or $`1`$, respectively. The value taken by $`n_{ij}`$ depends on the number of bonding-electrons made available to the system—this is an input parameter to the simulation code—and the relative values of energies of the n.n. interactions. Bonds with lower values of energy are more likely to have bonding-electrons. Interesting effects are observed with a bonding-electron (hole) concentration of about $`60\%`$ ($`40\%`$) and that is what we report later in this article. As we shall see shortly, the net effect of holes is to suppress the $`XY`$-like transition occurring for clock models with $`q>4`$, and hence allowing us to observe the behavior of a disordered system with lowering of the temperature. ## IV Some New Definitions In Sec. I we defined bond ordering as that process involving relaxation of bonds into their low-lying internal energy states. There are a few more physical parameters relevant to our discussion which are defined in this section and are considered in context of the glass transition phenomenon. ### A Bond Order-Parameter We introduce bond order parameter (or bond magnetization) as a measurable physical property that indicates the extent of bond ordering prevailing in a physical system. We begin with considering a two dimensional system of $`4`$-fold coordinated atoms interacting through n.n. coupling of strength $`J`$. The bond magnetization of such a system is significant when bonds are in their low-lying energy states (as for a bond-ordered low temperature phase), and negligible when bonds are distributed among all possible energy states with uniform probability which is indeed the case when the thermal energy is far in excess of the coupling strength $`J`$. To construct an expression for the bond magnetization of such a system, every n.n. 
pair of atoms is characterized by a unit vector $`\widehat{B}_{ij}`$ whose purpose is to encode the interaction energy. $`\widehat{B}_{ij}`$ is specified via the angle $`\varphi _{ij}`$ it makes with an arbitrary fixed axis in the plane of the system. The angle $`\varphi _{ij}`$ is written in terms of the bond energy as: $$\varphi _{ij}=\pi \frac{ϵ_{ij}}{J},-J\le ϵ_{ij}\le J,$$ (8) where $`J`$ is the n.n. coupling strength, $`ϵ_{ij}`$ is the bond energy and $`-\pi \le \varphi _{ij}\le \pi `$. An expression that fulfills all the requirements of an extensive bond magnetization is the following: $$M_b=\left\langle \left(\sum _{<i,j>}^{2N}\widehat{B}_{ij}\right)^2\right\rangle ^{1/2}$$ (9) $$=\left\langle \sum _{<i,j>}^{2N}\sum _{<m,n>}^{2N}\widehat{B}_{ij}\cdot \widehat{B}_{mn}\right\rangle ^{1/2},$$ (10) where the angular brackets stand for a thermal average and $`N`$ is the total number of atoms in the system. In terms of the bond energies, the expression for the intensive bond magnetization of the system, $`m_b=M_b/2N`$, may be written as: $$m_b=\frac{1}{2N}\left\langle \sum _{<i,j>}^{2N}\sum _{<m,n>}^{2N}\mathrm{cos}\left[\frac{\pi }{J}(ϵ_{ij}-ϵ_{mn})\right]\right\rangle ^{1/2}.$$ (11) The normalization is chosen such that $`0\le m_b\le 1`$. ### B Bond Susceptibility Bond susceptibility is the response function associated with the bond magnetization $`M_b`$ and its thermodynamic conjugate field $`H_b`$, called the bond ordering field. The exact physical nature of $`H_b`$ is not yet known to us; however, the bond susceptibility can still be calculated without explicit knowledge of $`H_b`$. The change in Gibbs free energy in an infinitesimal bond ordering process may be written in terms of the newly introduced parameters $`M_b`$ and $`H_b`$ as follows: $$dG=-SdT-M_bdH_b.$$ (12) Eq. (12) can serve as a starting point for incorporating $`M_b`$ into the thermodynamics of disordered systems. The bond susceptibility, apart from a normalization, will therefore be given by: $$\chi _b=\frac{\partial M_b}{\partial H_b}=-\frac{\partial ^2G}{\partial H_b^2}.$$ (13) Starting with Eq. (13), one may readily obtain an expression for the bond susceptibility in terms of fluctuations in the bond magnetization: $$\chi _b=\left(\left\langle |M_b|^2\right\rangle -\left\langle |M_b|\right\rangle ^2\right)/2Nk_BT,$$ (14) where $`M_b=\sum _{<i,j>}^{2N}\widehat{B}_{ij}`$. Eq. (14) is properly normalized to the number of n.n. bonds $`2N`$ for a system of $`N`$ atoms with periodic boundary conditions, and is used for calculating the bond susceptibility via MC simulations. This rather abstract entity, the bond susceptibility, describes the tendency of a system toward bond ordering and provides the basis for a new identification of the glass transition temperature. ### C New Identification for T$`_\text{g}`$ Following the previous discussion, one can trace the origins of the MRO characteristic of the vitreous state to the local ordering of bonds, which becomes most intense at some particular temperature $`T_g`$. This brings us to another identification for the calorimetric glass transition temperature, which applies particularly to the fragile and intermediate classes of glass-forming liquids. In the glass transition region, the bond susceptibility of a supercooled glass-forming liquid reaches a maximum. This also implies that the specific heat must display a maximum in the glass transition region because of the large energy fluctuations associated with the intense ordering of bonds.
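To make these definitions concrete, the following is a minimal sketch (ours, not the authors' code) of how the bond magnetization of Eq. (11) and the bond susceptibility of Eq. (14) could be accumulated from Monte Carlo samples of the bond energies. The function names and the sample layout are illustrative assumptions; only the formulas themselves come from the text.

```python
import numpy as np

def bond_vectors(bond_energies, J=1.0):
    """Unit vectors B_ij of Eq. (8): phi_ij = pi * eps_ij / J, with -J <= eps_ij <= J."""
    phi = np.pi * np.asarray(bond_energies) / J
    return np.column_stack([np.cos(phi), np.sin(phi)])

def bond_magnetization(bond_energies, J=1.0):
    """Instantaneous bond magnetization per bond; its thermal average gives m_b, Eq. (11)."""
    B = bond_vectors(bond_energies, J)
    return np.linalg.norm(B.sum(axis=0)) / len(bond_energies)

def bond_susceptibility(energy_samples, T, J=1.0, kB=1.0):
    """chi_b from fluctuations of the total bond-magnetization vector, Eq. (14).
    energy_samples: list of arrays, one array of 2N bond energies per MC sample."""
    n_bonds = len(energy_samples[0])
    M = np.array([np.linalg.norm(bond_vectors(e, J).sum(axis=0)) for e in energy_samples])
    return (np.mean(M**2) - np.mean(M)**2) / (n_bonds * kB * T)
```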
In view of this connection between bond ordering and the specific heat, the transition peak in the experimentally measured specific heat of various fragile and intermediate glass-formers can be regarded as an artifact of intense bond ordering, or strong fluctuations in the number of bonds in each energy state, occurring at the glass transition. Hence, we propose a new identification for $`T_g`$ as the temperature corresponding to the maximum of the bond susceptibility of a supercooled liquid nearing configurational arrest. Having given an explanation for the specific heat peak at the glass transition, it is natural to consider also the unexpected linear behavior of the specific heat at very low temperatures. Work by Hunklinger et al. gives strong evidence that the low temperature anomalous properties of amorphous materials arise mainly from two-level systems and not from the multi-level vibrational degrees of freedom associated with the atoms. The energy gaps $`\mathrm{\Delta }`$ of the two-level systems are supposed to vary with uniform probability in some range $`0\le \mathrm{\Delta }\le \mathrm{\Delta }_0`$, where $`\mathrm{\Delta }_0\approx 1\mathrm{K}`$. These low energy excitations may be attributed to the bond ordering process at low temperatures. In disordered systems, whether supercooled liquids or glasses, bonds continuously relax into more stable internal energy states as the temperature is lowered. At very low temperatures a certain number of bonds may be seen to act like two-level systems with varying energy gaps. The bond ordering process at low temperatures is therefore a possible explanation for the low temperature anomalous properties exhibited by amorphous materials. ### D Structural Magnetization and Susceptibility Structural magnetization is a measure of the conventional LRO a system may possess. For the system we are considering, the displacement of an atom from its designated equilibrium lattice site $`i`$ is characterized by $`R_i`$. Further idealizing the system by assuming that the displacements are of the same magnitude $`|R|`$, we can express the extensive structural magnetization as follows: $$M_s=\left\langle \left(\sum _{i=1}^N\widehat{R}_i\right)^2\right\rangle ^{1/2},$$ (15) where $`\widehat{R}_i=R_i/|R|`$. The intensive (or normalized) structural magnetization is given by $`m_s=M_s/N`$, where $`0\le m_s\le 1`$. The analogy between the structural magnetization and the magnetization of a magnetic system is rather obvious. In fact, we can use this analogy to express the structural susceptibility in terms of fluctuations in the structural magnetization, as follows: $$\chi _s=\left(\left\langle |M_s|^2\right\rangle -\left\langle |M_s|\right\rangle ^2\right)/Nk_BT,$$ (16) where $`M_s=(\sum _iR_i^x,\sum _iR_i^y)`$. Structural susceptibility is the response function describing the tendency for structural ordering. ### E Order-Parameter for Glass We would like to address at this point a possible order parameter (or order parameter density) for supercooled liquids and glass, one that also serves as yet another distinction between the fragile and strong classes of glass-forming liquids. This will simply be the bond magnetization if one is considering strictly a $`\text{liquid}\to \text{glass}`$ transition. Many of the theories describing the glass transition phenomenon assume that there is a single parameter which characterizes glass. This assumption is believed to be inaccurate.
Prigogine and Defay have shown that in general the ratio of the discontinuities in the second-order thermodynamic quantities (isothermal compressibility, heat capacity at constant pressure, and coefficient of thermal expansion): $$R=\frac{\mathrm{\Delta }\kappa _T\mathrm{\Delta }C_p}{TV(\mathrm{\Delta }\alpha )^2},$$ (17) is equal to unity if a single order parameter characterizes the underlying thermodynamic transition, but if more than one order parameter is involved, then $`R>1`$. The latter seems to describe most glasses. In view of this, we consider a two-parameter description of glass which involves two of the parameters described earlier, namely the structural magnetization $`m_s`$ and the bond order parameter $`m_b`$. We require for amorphous solids that the structural LRO vanishes while the bond magnetization remains significantly large. The requirement of vanishing structural LRO is meant to characterize the liquid-like attributes of amorphous systems. A large value of the bond order parameter, on the other hand, is a solid-like attribute that should serve to distinguish the glass from the liquid phase. Clearly, in the case of strong glass-formers, characterized by strong covalent bonds, the values of the bond order parameter above and below the transition must be quite comparable. However, in the case of fragile systems the liquid undergoes substantial bond ordering at the transition, mainly due to the nondirectional nature of their chemical bonds. As a result, the bond magnetization is expected to vary rather significantly for the fragile class. Originally, the labels strong and fragile were introduced to refer to the ability of a liquid to withstand changes in MRO with temperature. In the context of the bond-ordering model, these labels refer to the ability of a liquid to withstand changes with temperature in the bond magnetization $`m_b`$. It is worthwhile to mention that in this scheme an ideal glass may be characterized as being maximally bond ordered, which should also imply the least possible energy. ## V Simulation: Procedure and Results Clock models have been investigated in some detail, both analytically and through numerical methods. Previous Monte Carlo works have examined the behavior of 2D clock systems with various values of the parameter $`q`$. The important results are that for $`q\le 4`$ the system exhibits one second-order transition, but for $`q>4`$ two Kosterlitz-Thouless (KT) transitions are present. The upper transition temperature is believed to have a value approximately equal to the KT transition temperature for the continuous model, $`T_c=0.89J/k_B`$. As $`q`$ increases, the lower transition temperature ($`\sim 1/q^2`$) approaches zero, leaving just one KT transition for the 2D $`XY`$-model. As the model Hamiltonian we consider Eq. (6), which is in fact a bond-diluted version of the $`q`$=$`8`$ state clock model, involving n.n. interactions together with an antibonding (broken-bond) electronic state. For our purpose, the eight possible orientations must be interpreted as the possible displacements of an atom from its designated equilibrium lattice site, with every atom's displacement being the same if the system were in an ordered configuration. The standard MC importance-sampling method was used to simulate the behavior of the system on $`L\times L`$ square lattices with periodic boundary conditions. We performed simulations on lattices of size $`L=12`$, $`20`$, $`32`$, and $`50`$. Preliminary work was carried out on systems of size $`L=12`$.
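As an illustration of the Hamiltonian of Eqs. (6)-(7) used in these simulations, here is a minimal sketch (in Python/NumPy, with array names of our own choosing) of the energy evaluation for the bond-diluted $`q=8`$ clock model on an $`L\times L`$ lattice with periodic boundary conditions.

```python
import numpy as np

Q, J = 8, 1.0   # q = 8 clock states, n.n. coupling strength

def clock_energy(s, n_occ, J=J, Q=Q):
    """Energy of Eqs. (6)-(7): H = -J * sum_<ij> (cos(theta_ij) + 1) * n_ij.

    s     : (L, L) integer array, s in {1,...,Q}, labelling the displacement at each site
    n_occ : dict {0: (L, L) array, 1: (L, L) array} of occupation numbers n_ij in {0, 1}
            for the bond joining each site to its neighbour along that lattice axis.
    """
    E = 0.0
    for axis in (0, 1):
        theta = 2.0 * np.pi / Q * (s - np.roll(s, -1, axis=axis))
        # an occupied, fully aligned bond contributes -2J; a broken bond contributes 0
        E += -J * np.sum((np.cos(theta) + 1.0) * n_occ[axis])
    return E

# tiny usage example: 12 x 12 lattice, ~60% of the 2N bonds carrying bonding-electrons
rng = np.random.default_rng(1)
L = 12
s = rng.integers(1, Q + 1, size=(L, L))
n_occ = {ax: (rng.random((L, L)) < 0.6).astype(int) for ax in (0, 1)}
print(clock_energy(s, n_occ))
```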
For every lattice size the temperature was lowered in steps of $`0.05J/k_B`$, starting with the initial value $`T=2.00J/k_B`$. The system was initialized in a random configuration suitable for high temperatures, where it is known to be disordered. At every temperature the system was allowed to equilibrate through a few hundred Monte Carlo steps per site (MCS). The data points were then acquired by averaging over 40,000 MCS. The data were accumulated in several bins, and binned averages were used to obtain error estimates for the calculated means and also to monitor the state of equilibrium. For most cases the estimated statistical errors were less than $`2\%`$ of the calculated mean values. In order to test our code, we simulated the $`q=6`$ state clock model and compared our results with the extensive literature available on the subject. To our satisfaction, the agreement was impressive. As a note on the calculation of the internal energy, the (possible) bonds were sorted in ascending order of energy at every time step and were given bonding-electrons in that order. Many of the quantities calculated have already been discussed and given explicit mathematical definitions in Sec. IV. In addition, the specific heat was obtained from fluctuations in the total energy: $$C=\left(\left\langle \mathcal{H}^2\right\rangle -\left\langle \mathcal{H}\right\rangle ^2\right)/2Nk_BT^2.$$ (18) The bond-related quantities (bond order parameter, bond susceptibility, internal energy and specific heat) are normalized to the number of n.n. bonds $`2N`$, for a system of size $`N=L^2`$ with periodic boundary conditions. The structural properties, i.e. structural magnetization and susceptibility, are on the other hand normalized to the system size $`N`$. Results correspond to a hole concentration of 40%. Perhaps the most interesting feature of the results is the conspicuous peak in the bond susceptibility, shown in Fig. 2, which is an indication of bond ordering uncorrelated with any long-range structural ordering, as is evident from the structural magnetization and susceptibility seen in Figs. 3 and 4. Finite size effects are quite evident. The temperature at which the bond susceptibility reaches its maximum value is estimated to be $`0.9J/k_B`$, consistent with the KT transition temperature. At exactly the temperature where the bond susceptibility maximum occurs, one observes a maximum in the specific heat, shown in Fig. 5, which should be attributed to the large energy fluctuations associated with the ordering of bonds. Indeed, experimental measurements of the heat capacity for covalently bonded fragile systems such as As<sub>2</sub>S<sub>3</sub> and B<sub>2</sub>O<sub>3</sub> exhibit similar peaks at the glass transition, which therefore suggests a bond-ordering nature for the glass transition. The internal energy for various system sizes is shown in Fig. 6, displaying a steep slope in the bond-ordering region. Fig. 7 contains the variation of the bond magnetization with temperature. In the bond-ordering region, the bond magnetization increases rapidly with decreasing temperature, in spite of the fact that the structural magnetization stays fairly constant there. This behavior testifies to the earlier assertion that a system can undergo substantial bond ordering, and hence largely reduce its internal energy, without undergoing any significant structural ordering. Unlike the behavior expected from the $`q`$-state clock model in 2D, the structural susceptibility (Fig. 4) does not exhibit singular behavior in the intermediate temperature range, hence ruling out the possibility of long-range cooperative structural ordering. This behavior of the structural susceptibility may be understood in view of the large concentration of holes, i.e., the bond dilution.
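Two small pieces of the procedure just described can be sketched as follows (again an illustrative reconstruction rather than the authors' code): the assignment of the available bonding-electrons to the lowest-energy bonds, and the specific heat of Eq. (18) estimated from binned energy samples.

```python
import numpy as np

def assign_electrons(bond_energies, n_electrons):
    """Sort the possible bonds in ascending order of energy and give
    bonding-electrons (n_ij = 1) to the lowest ones, as described in the text."""
    n_ij = np.zeros(len(bond_energies), dtype=int)
    n_ij[np.argsort(bond_energies)[:n_electrons]] = 1
    return n_ij

def specific_heat(energy_samples, T, n_bonds, kB=1.0):
    """Specific heat per bond from total-energy fluctuations, Eq. (18)."""
    E = np.asarray(energy_samples, dtype=float)
    return (np.mean(E**2) - np.mean(E)**2) / (n_bonds * kB * T**2)
```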
## VI Summary We have investigated the existence of order associated with bonds in amorphous systems of interest such as vitreous silica. Through our MC simulations, it is found that bond ordering may occur independently of structural ordering. Bond ordering implies ordering in n.n. and n.n.n. distances and thus leads to MRO, which is a key aspect of the vitreous state. An order parameter for supercooled liquids and glasses is introduced. In the case of the $`\text{liquid}\to \text{glass}`$ transition, the bond magnetization $`m_b`$ may be fruitfully employed in the thermodynamics of glass and in any calculation that involves coarse-grained parameters. In addition, a new identification for the glass transition temperature is afforded through the variation with temperature of the bond susceptibility. As a last remark, the sharp rise in the viscosity of glass-forming liquids when cooled toward the glass transition may be viewed in terms of the strengthening of bonds. The fragile glass-formers undergo significant bond ordering through the transition and for that reason their viscosity rises dramatically, as opposed to the smooth Arrhenius-like behavior of the viscosity exhibited by strong glass-forming liquids.
# Robustness as an Evolutionary Principle∗ ## I Introduction A common concept in evolution is fitness and fitness landscapes , and often evolution is viewed as hill climbing, possibly with jumps between fitness maxima . However, fitness landscapes implicitly assume that fitness is varying over a well-defined metric in genomic space. This would be the case if single point mutations were a driving force. However, significant genome rearrangements are observed already in the rather brief real-time evolution experiments of Escherichia coli cultures of Papadopoulos et al. . Genomic rearrangements short-circuit the simple metric generated by one point mutations, usually underlying the intuition of evolution on landscapes. As a consequence the combinatorial distance for moving from a genome A to a genome B may easily be different from the distance of the opposite move, simplest exemplified by deletions and insertions. Thus, although fitness landscapes have a meaning for the small scale adjustments associated to fine-tuning of binding constants, it is an unjustified concept for evolutionary changes on the scale of speciation events. Abandoning fitness landscapes we here instead discuss the possibility that evolution progresses through a process where genotypes and phenotypes subsequently set the frame at which the other may change. Of particular relevance for this view of evolution is the fact that one often observes different phenotypes for the same genotype. This viewpoint is in part supported by cell differentiations within one organism, in part supported with epigenetics and the large class of organisms which undergo metamorphosis and thus exist in several phenotypes for the same genotype. Recently, it has also been proposed that genotype-phenotype ambiguity is governing speciation events. A class of systems that exhibits epigenetics is represented by the logical networks, where nodes in the network take values on or off, as function of the output of specified other nodes. This has been suggested to model the regulatory gene circuits where specific genes may or may not be expressed as function of other genes. In terms of these models it is natural to define genotypes in form of the topology and rules of the nodes in the network. The phenotypes are similarly associated to the dynamical expression patterns of the network. To define the rules under which phenotypes and genotypes set the frame for each other’s development, a model for evolution should fulfill the requirement of robustness. Robustness is defined as the ability to function in face of substantial change in components . Robustness is an important ingredient in simple molecular networks and probably also an important feature of gene regulation on both, small and large scale. In terms of logical networks, robustness is implemented by constraining subsequent networks to have similar expression patterns. This article is organized as follows: First we discuss dynamics on logical networks and numerically review the basic properties of attractors of random threshold networks and Boolean networks. Then we propose a minimal evolution model and investigate its statistical and structural implications for the evolved networks. Finally biological implications, and possible experimental approaches to the dynamics of real genetic networks are discussed. ## II Dynamics on logical networks Let us first discuss two prototype networks that exhibit epigenetics, Boolean networks and threshold networks . 
These are both networks of logical functions and share similar dynamical properties. We here briefly describe their definition and dynamical features. In both networks each node takes one of two discrete values, $`\pm 1`$, which at each timestep is a discrete function of the values of some fixed set of other nodes specified by a wiring diagram. If we denote the links that provide input to node $`i`$ by $`\{w_{ij}\}`$, with $`w_{ij}=\pm 1`$ also, then for the threshold network case the updating rule is additive: $$\sigma _i=+1\text{ if }\sum _{j\in \{w_i\}}w_{ij}\sigma _j\ge 0$$ (1) $$\sigma _i=-1\text{ if }\sum _{j\in \{w_i\}}w_{ij}\sigma _j<0$$ (2) In the Boolean network case the updating is a general Boolean function of the input variables $$\sigma _i=B(\sigma _j\text{'s which provide input to }i).$$ (3) Thus, the threshold networks form a hugely restricted subset of the Boolean networks. Boolean networks include all nonlinear combinations of input nodes, including functions such as, for example, the “exclusive or”. The basic property of logical networks is a dynamics of the state vector { $`\sigma _i`$ } characterized by transients that lead to subsequent attractors. The attractor length depends on the topology of the network. Below a critical connectivity $`K_c\approx 2`$ the network decouples into many disconnected regions, i.e., the corresponding genome expression would become modular, with essentially independent gene activity. Above $`K_c`$ any local damage will initiate an avalanche of activity that may propagate throughout most of the system. For any $`K`$ above $`K_c`$ the attractor period diverges exponentially with respect to system size $`N`$, and in some interval above $`K_c`$ the period length in fact also increases nearly exponentially with connectivity $`K`$. ## III Structural evolution of networks Dynamics may occur on networks as defined by the rule above, but at least as important is the dynamics of the network topology. In terms of network topology an evolution means a change in the wiring $`\{w_{ij}\}\to \{w_{ij}^{\prime }\}`$ that takes place on a much slower timescale than the $`\{\sigma _j\}`$ updating. The evolution of such networks represents the extended degree of genetic network engineering that seems to be needed to account for the large differences in the structure of species genomes, given the slow and steady speed of single protein evolution. We have in an earlier publication proposed to evolve Boolean networks with the sole constraint of continuity in expression pattern. Here we simplify this model by simple damage spreading testing: the model evolves a new single network from an old network by accepting rewiring mutations with a rate determined by expression overlap. This is a minimal constraint scenario with no outside fitness imposed. Further, the model tends to select for networks which have high overlap with neighbor mutant networks, thus securing robustness. Now let us formulate an operational version of the evolution in terms of threshold networks, as these have structural and statistical features comparable to the Boolean ones. Consider a threshold network with $`N`$ nodes. To each of these let us assign a logical variable $`\sigma _i=-1`$ or $`+1`$. The states $`\{\sigma _i\}`$ of the $`N`$ nodes are simultaneously updated according to (1), where the links $`w_{ij}`$ are specified by a matrix.
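A minimal sketch of the threshold dynamics of Eqs. (1)-(2), together with a brute-force attractor search, might look as follows (Python/NumPy; the function names, the convention that an empty input sums to zero, and the dictionary of visited states are our own choices, not taken from the paper).

```python
import numpy as np

def update(sigma, w):
    """Synchronous update of Eqs. (1)-(2): sigma_i = +1 if sum_j w_ij sigma_j >= 0, else -1."""
    return np.where(w @ sigma >= 0, 1, -1)

def find_attractor(sigma0, w, max_steps=100_000):
    """Iterate from sigma0 until a state repeats; return (transient_length, attractor_states)."""
    seen, trajectory, s = {}, [], sigma0.copy()
    for t in range(max_steps):
        key = s.tobytes()
        if key in seen:                    # the attractor cycle has been completed
            return seen[key], trajectory[seen[key]:]
        seen[key] = t
        trajectory.append(s.copy())
        s = update(s, w)
    raise RuntimeError("no attractor found within max_steps")

# usage example: a random network of N = 32 nodes with connectivity K = 2
rng = np.random.default_rng(0)
N, K = 32, 2
w = np.zeros((N, N), dtype=int)
for i in range(N):
    inputs = rng.choice(N, size=K, replace=False)
    w[i, inputs] = rng.choice([-1, 1], size=K)
sigma0 = rng.choice([-1, 1], size=N)
transient, attractor = find_attractor(sigma0, w)
print(len(attractor))                      # attractor period
```

Frozen components and the mutation-acceptance step of the evolutionary scheme described next can be built directly on top of such a routine.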
The entry value of the connectivity matrix $`w_{ij}`$ may take the values $`-1`$ and $`+1`$ in the case of a link between $`i`$ and $`j`$, and the value $`0`$ if $`i`$ is not connected to $`j`$. The system that is evolved is the set of couplings $`w_{ij}`$ in a single network. One evolutionary time step of the network is: 1) Create a daughter network by a) adding, b) removing, or c) adding and removing a weight in the coupling matrix $`w_{ij}`$ at random, each option occurring with probability $`p=1/3`$. This means turning a $`w_{ij}=0`$ into a randomly chosen $`\pm 1`$ or vice versa. 2) Select a random input state $`\{\sigma _i\}`$. Iterate simultaneously both the mother and the daughter system from this state until they either have reached and completed the same attractor cycle, or until a time where $`\{\sigma _i\}`$ differs between the two networks. If their dynamics is identical, replace the mother with the daughter network. If their dynamics differs, keep the mother network. Thus, the dynamics looks for mutations which are phenotypically silent, i.e., mutations that are neutrally inherited under at least some external condition. Notice that adding a link involves selecting a new $`w_{ij}`$, thus changing the rule on the same timescale as the network connectivity. Iterating these steps represents an evolution which proceeds by checking the overlap in expression pattern between networks. If there are many states $`\{\sigma _i\}`$ for which the two networks give the same expression, then transitions between them are fast. On the other hand, if there are only very few states $`\{\sigma _i\}`$ which result in the same expression for the two networks, then the transition rate from one network to the other is small. If this is true for all of a network's mutant neighbors, the evolutionary process will be hugely slowed down. In Fig. 1 the change of connectivity with time for a threshold network of size $`N=32`$ is shown. Time is counted as the number of attempted mutations, and one observes that, especially for high connectivity, the system may stay for a long time at a particular network before an allowed mutation leads to punctuation of the stasis. The overall distribution of waiting times is $`1/t^{2\pm 0.2}`$. One feature of the evolution is the structure of the evolved networks, which can be quantified by the average length of attractors for the generated networks. This is shown in Fig. 2, where they are compared with attractor lengths for random networks at the same connectivity. One observes that the evolved networks have much shorter attractors than the random ones; thus our evolution scenario favors simplicity of expression. To examine further the expression behavior of the networks, let us consider the size of frozen components as introduced by Kauffman for Boolean networks. A frozen component is the set of nodes connected to a given attractor that does not change at any time when one iterates along the attractor, i.e., a frozen component represents genes which are anesthetized under a given attractor/initial condition. In Fig. 3 one sees that the frozen component for the evolved network typically involves half the system, and thus is much larger than the typical frozen component associated with attractors of randomly generated threshold networks. We also test frozen components for random one-mutant neighbors of the selected networks, and find that these networks also have huge frozen components. Let us finally look at the active part of the network and the complexity of its expression pattern.
As a quite large fraction of the nodes may belong to the frozen component of the network, the remaining active part of the nodes may behave differently from the average dynamics of the whole network. One possible measure is the number of times each non-frozen node switches its state during the dynamical attractor. In Fig. 4 this quantity is shown for random networks as well as for evolved networks. One observes that the active part of the evolved networks exhibits a much simpler expression pattern than that of a random network of comparable connectivity. Overall, requiring robustness as an evolution criterion has observable consequences both for the temporal evolution pattern and for confining the possible genetic network architectures to ones with simple expression patterns. ## IV Discussion Some quantitative testing of the minimal evolution scenario is possible on the macro-evolutionary scale. Here the intermittent evolution of the networks bears resemblance to the punctuated equilibrium observed for species in the fossil record. Quantitatively, the $`1/f`$ power spectra and $`1/t^2`$ stability distribution for single networks that one finds for this model, as well as for the earlier version, compare well with the similar scalings observed for the statistics of birth and death of individual species in the evolutionary record. Obviously the features related to co-evolution, which are ignored here, prevent us from discussing co-extinctions. In fact the analogy can even be fine-grained into a sum of characteristic lifetimes, each associated with a given structural feature of the networks. A similar decomposition is known from the fossil record, where groups of related species display Poisson distributed lifetimes and therefore similar evolutionary stability. A validation on the microlevel based on statistical properties of genetic regulatory circuits has to rely either on properties of genetic networks or on evolution and mutation experiments with fast-lived organisms such as E. coli. A key number is the estimated average connectivity $`K`$ of 2-3 in the E. coli genome. Information on the overall organization of these genetic networks is obtained from gene knock-out experiments. Quantitative support for a connected genome can be deduced from Elena and Lenski's experiments on double mutants, which demonstrated that about 30-60% of these (dependent on interpretation) change their fitness in a cooperative manner. In terms of our networks, we accordingly should expect a coupled genetic expression for about half of the pairs of genes. Although our evolved networks can give such correlations for the connectivity estimate of 2-3, the uncertainty is still so large that random networks are also in accordance with the data. Further, one should keep in mind that the E. coli genome is large and not well represented by threshold dynamics of all nodes, and also that only between 45 and 178 of E. coli's 4290 genes are likely to mediate regulatory functions. Thus, most of the detected gene-gene correlations presumably involve genes which are not even regulatory, but instead metabolic, and their effect on each other is more indirect than in the case of the regulatory ones. Presumably one would obtain stronger elements of both coupling and correlation if one specialized on regulatory genes. Thus one may wish for experiments where one- and two-point mutations are performed in regulatory genes only.
A more direct test of our hypothesis of damage control as a selection criterion may be obtained from careful analysis of the evolution of gene regulation in evolving E. coli cultures. Another interesting observation is the simplicity of biological expression patterns. For example as observed in yeast many genes are only active one or two times during the expression cycle , thus switching from off to on or on to off occurs for each gene in this system only a few times during expression. For random dynamical networks of comparable size one would expect a much higher activity. Thus surprisingly simple expression patterns are observed in biological gene regulatory circuits. This compares well with our model observation where simplicity of expression patterns emerges as a result of the evolutionary constraint. ## V Summary In this article we have proposed a computer simulation of evolution operating on logical networks. The scenario mimics an evolution of gene regulatory circuits that is governed by the requirement of robustness only. The resulting dynamics evolves networks which have very large frozen components and short attractors. Thus they evolve to an ordered structure that counteracts the increasing chaos when networks become densely connected. The evolved architecture is characterized by simplicity of expression pattern and increased robustness to permanent mutational fluctuations in the network architecture – features that are also seen in real molecular networks. Acknowledgment We thank Stanley Brown for valuable comments on the manuscript.
# A quantum Peierls-Nabarro barrier ## 1 Introduction Many systems in condensed matter and biophysics may be modelled by infinite chains of coupled anharmonic oscillators. If the anharmonic substrate potential has two or more degenerate vacua, such a system may support static kink solutions interpolating between neighbouring vacua. These kinks have various interesting physical interpretations (as crystal dislocations , charge density waves and magnetic and ferroelectric domain walls, for example) and their dynamics is an interesting and important subject. Such a system of oscillators has an alternative interpretation as a spatially discrete version of an appropriate nonlinear Klein-Gordon equation. The lattice spacing $`h`$ is related to the spring constant of the chain $`\alpha `$ by $`\alpha =1/h^2`$ so the strong spring-coupling limit is interpreted as a continuum limit. In the continuum limit, static kinks may occupy any position in space, by translation symmetry. This is generically untrue in the discrete system: static kinks may generically occupy only two positions relative to the lattice, one of which is a saddle point of potential energy, the other a local minimum. The difference in energy between these two static solutions is the Peierls-Nabarro (PN) barrier, the barrier which a kink must surmount in order to propagate from one lattice cell to the next. It can have strong effects on the dynamics of kinks in the system (kink trapping, radiative deceleration, phonon bursts etc. ). One might expect the PN barrier, and hence its effects, to grow monotonically with $`h`$ (of course the barrier vanishes as $`h0`$). However, recent work of Flach, Kladko and Zolotaryuk has shown that this is certainly not universally true. In fact, there exist infinitely many substrate potentials with the property that at at least one none-zero lattice spacing, $`h_{}`$ say, the PN barrier vanishes exactly and a continuous translation orbit of static kinks is recovered. We shall say that a substrate potential with this property is “transparent at lattice spacing $`h_{}`$.” Such potentials may be constructed by means of the so-called Inverse Method, and are clearly of some theoretical interest. The purpose of the present paper is to argue that although the kinks of such a system (at $`h=h_{}`$) are free of the classical PN barrier, they still experience a qualitatively similar periodic confining potential due to quantum effects analogous to the Casimir effect of quantum electrodynamics. We call this the quantum Peierls-Nabarro (QPN) potential. The essential physical observation is that the total zero-point energy of the lattice phonon modes in the presence of a kink depends periodically on the kink position. It should be emphasized that the kink position itself is treated as a classical degree of freedom while the phonons are quantized. The physical regime in which this is consistent will be identified: the classical kink mass must far exceed the phonon mass, and the kink must interpolate between widely separated vacua. For purposes of illustration, we shall compute the QPN potential numerically for a two parameter family of substrate potentials which (in a sense) includes discrete sine-Gordon and $`\varphi ^4`$ systems. As a by-product of these calculations, we will obtain numerical evidence in favour of the assumption that kinks in these models are classically stable. 
## 2 Construction of transparent substrate potentials by the Inverse Method The general discrete nonlinear Klein-Gordon system consists of a field $`\varphi :\mathbb{Z}\times \mathbb{R}\to \mathbb{R}`$ whose evolution is determined by a second order differential difference equation, $$\ddot{\varphi }_n=\frac{1}{h^2}(\varphi _{n+1}-2\varphi _n+\varphi _{n-1})-V^{\prime }(\varphi _n).$$ (1) Here $`h`$ is the spatial lattice spacing, $`\ddot{\varphi }_n=d^2\varphi _n/dt^2`$ and $`V`$ is the substrate potential. One interpretation of the equation of motion is as that of an infinite system of identical oscillators (each oscillating in potential well $`V`$) with nearest neighbours coupled by identical Hooke's law springs of strength $`\alpha =h^{-2}`$. As the name suggests, (1) becomes a nonlinear Klein-Gordon equation in the continuum limit, $`h\to 0`$. If $`V(\varphi )`$ has neighbouring degenerate vacua at $`\varphi =a_{-}`$ and $`\varphi =a_+>a_{-}`$, say, (1) supports static kinks interpolating between them. To find these requires the solution of a second order nonlinear difference equation subject to the boundary conditions $`lim_{n\to \pm \mathrm{\infty }}\varphi _n=a_\pm `$, which is usually only possible numerically. In this section we will construct, for a given lattice spacing $`h_{\ast }>0`$, a substrate potential $`V_{h_{\ast }}(\varphi )`$ which supports a continuous translation orbit of static kinks, and so by construction is transparent at lattice spacing $`h_{\ast }`$. To do this we shall use a variant of the Inverse Method of Flach, Kladko and Zolotaryuk (which was originally devised to construct substrate potentials which support exact propagating, rather than static, kink solutions). The idea is to choose a static kink profile, that is an analytic, monotonic surjection $`f:\mathbb{R}\to (a_{-},a_+)`$, satisfying exponential decay criteria as $`z\to \pm \mathrm{\infty }`$, $$f(z)=a_\pm +O(e^{-\mu |z|}),\mu >0,$$ (2) and the symmetry requirement, $$f(z)+f(-z)\equiv a_++a_{-},$$ (3) then impose that the translated kink $`\varphi _n^b=f(nh_{\ast }-b)`$ be a static solution of the system (with $`h=h_{\ast }`$) for all $`b`$. From equation (1), this condition holds provided one chooses $`V_{h_{\ast }}`$ such that $$V_{h_{\ast }}^{\prime }(f(z))=\frac{1}{h_{\ast }^2}[f(z+h_{\ast })-2f(z)+f(z-h_{\ast })]$$ (4) for all $`z`$. This uniquely determines $`V_{h_{\ast }}^{\prime }:(a_{-},a_+)\to \mathbb{R}`$ by monotonicity of $`f`$, and hence $`V_{h_{\ast }}:(a_{-},a_+)\to \mathbb{R}`$ up to an arbitrary constant. To complete the definition of this transparent substrate, one should extend it appropriately to all of $`\mathbb{R}`$. How one does this is somewhat arbitrary, but it will have no bearing on our results, so we shall merely demand that $`V_{h_{\ast }}`$, $`V_{h_{\ast }}^{\prime }`$, $`V_{h_{\ast }}^{\prime \prime }`$ be continuous at $`a_\pm `$. Equations (2) and (4) then imply $$V_{h_{\ast }}^{\prime }(a_\pm )=0$$ (5) $$V_{h_{\ast }}^{\prime \prime }(a_\pm )=\frac{2}{h_{\ast }^2}(\mathrm{cosh}\mu h_{\ast }-1)>0.$$ (6) Hence $`\varphi =a_\pm `$ are both stable equilibria. To see that these are degenerate vacua, note that from (4), $$h_{\ast }^2[V_{h_{\ast }}(a_+)-V_{h_{\ast }}(a_{-})]=\int _{-\mathrm{\infty }}^{\mathrm{\infty }}[f(z+h_{\ast })-2f(z)+f(z-h_{\ast })]f^{\prime }(z)dz=\int _{-\mathrm{\infty }}^{\mathrm{\infty }}[f(z+h_{\ast })+f(-(z+h_{\ast }))]f^{\prime }(z)dz-(a_+^2-a_{-}^2)=(a_++a_{-})[f(z)]_{-\mathrm{\infty }}^{\mathrm{\infty }}-(a_+^2-a_{-}^2)=0,$$ (7) using the symmetry constraint on $`f`$, (3). So given a kink profile $`f`$, the inverse method generates a one-parameter family of double-well substrate potentials $`\{V_{h_{\ast }}:h_{\ast }>0\}`$, each of which is transparent at spacing $`h=h_{\ast }`$. Moreover, one has an explicit formula for the continuous translation orbit of static kink solutions, namely $`\varphi _n^b=f(nh_{\ast }-b)`$, $`b\in \mathbb{R}`$.
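The construction can also be carried out numerically in a few lines. The sketch below (ours; the paper itself works analytically) tabulates $`V_{h_{\ast }}^{\prime }`$ parametrically via Eq. (4) for an arbitrary profile $`f`$ and recovers $`V_{h_{\ast }}`$, up to its arbitrary constant, by integrating along $`\varphi `$. The tanh profile used in the example is the $`\varphi ^4`$ kink, i.e. the $`\mu =2`$ member of the family studied later in the paper.

```python
import numpy as np

def substrate_derivative(f, h_star, z):
    """Parametric tabulation of Eq. (4): for each z, phi = f(z) and
    V'_{h*}(phi) = [f(z + h*) - 2 f(z) + f(z - h*)] / h***2."""
    z = np.asarray(z, dtype=float)
    phi = f(z)
    vprime = (f(z + h_star) - 2.0 * f(z) + f(z - h_star)) / h_star**2
    return phi, vprime

# example: the phi^4 kink profile f(z) = tanh(z), made transparent at h* = 1
h_star = 1.0
z = np.linspace(-12.0, 12.0, 4001)
phi, vprime = substrate_derivative(np.tanh, h_star, z)

# V_{h*}(phi) up to a constant, by trapezoidal integration of V' along phi
V = np.concatenate(([0.0], np.cumsum(0.5 * (vprime[1:] + vprime[:-1]) * np.diff(phi))))
print(abs(V[0] - V[-1]))   # degeneracy of the two vacua (should be ~0, cf. Eq. (7))
print(V.max() - V.min())   # height of the barrier separating the two wells
```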
On physical grounds, one expects the static kinks to be stable to small perturbations, although strictly speaking this is not assured. One would need to check that the Hessian of the potential energy functional, $$E_P=\sum _n\left[\frac{1}{2h_{\ast }^2}(\varphi _{n+1}-\varphi _n)^2+V_{h_{\ast }}(\varphi _n)\right],$$ (8) about $`\varphi ^b`$ has strictly positive spectrum (except for the zero-mode associated with translation). The quantum calculation described in section 3 may be reinterpreted as the calculation of this spectrum. The results of section 5 then constitute numerical confirmation of kink stability for the family of transparent substrates considered therein. ## 3 The quantum Peierls-Nabarro potential In this section we shall quantize the system using a weak coupling approximation, essentially following the method outlined in , adapted to the infinite lattice. The method has previously been applied in the spatially discrete context to a certain nonstandard lattice sine-Gordon model . One may regard $`E_P`$ as a potential energy function on the infinite dimensional space $`Q`$ of sequences $`\varphi :\mathbb{Z}\to \mathbb{R}`$. The vacua $`\varphi =a_\pm `$ lie at the bottom of identical potential wells, cut off from each other, and from all configurations with kink boundary behaviour, by an infinite energy barrier. Assuming that $`V`$ is transparent at the lattice spacing under consideration (e.g., $`V=V_{h_{\ast }}`$ and $`h=h_{\ast }`$, as in section 2), the continuous kink translation orbit is an equipotential curve in $`Q`$: the classical energy of a static kink is independent of its position. If the kinks are stable, this is a level valley bottom winding through $`Q`$. Quantum mechanically, a particle cannot sit at the bottom of a potential well, or on the low dimensional floor of a valley: it always possesses a zero-point energy dependent on the shape of the well bottom. In this section we will semi-classically quantize motion both in the vacuum and kink sectors of the system. A physical regime will be identified in which the kink is very heavy, so that the kink position $`b`$ may be treated as a classical degree of freedom, while the comparatively light phonon modes orthogonal to the translation mode are quantized perturbatively, by Taylor expansion of $`E_P`$. Computation of the kink ground state energy then amounts to summing the zero-point energies of an infinite system of harmonic oscillators, resulting in a divergent series. The quantity of physical significance is not this energy, but rather the difference between the kink and vacuum ground state energies, which is expected to be finite. Since translation is not a symmetry of the discrete system, there is no reason to expect this energy to be independent of $`b`$; that is, one expects the quantum kink energy to vary periodically with kink position, which is the origin of the quantum PN potential. It is convenient to introduce a dimensionless coupling constant $`\lambda `$ into the model so that the (classical) Hamiltonian of the system is $$H=\sum _n\left[\frac{1}{2}\pi _n^2+\frac{1}{2h^2}(\varphi _{n+1}-\varphi _n)^2+\frac{1}{\lambda ^2}V(\lambda \varphi )\right]$$ (9) where $`\pi _n=\dot{\varphi }_n`$ is the momentum conjugate to $`\varphi _n`$.
Assuming that $`V`$ is transparent at spacing $`h`$, this system supports a continuous translation orbit of static kinks $$\varphi _n^{b,\lambda }=\frac{1}{\lambda }f(nh-b),b\in \mathbb{R}$$ (10) interpolating between $`a_{-}/\lambda `$ and $`a_+/\lambda `$. The classical energy of these kinks is independent of $`b`$ and clearly scales with $`\lambda `$ as $$E_P[\varphi ^{b,\lambda }]=\frac{1}{\lambda ^2}E_P[\varphi ^{0,1}].$$ (11) The physical regime of interest is that of small $`\lambda `$ where the kinks interpolate between widely separated vacua, by (10), and are very heavy, by (11). In this regime, one may approximate motion about any stable static solution $`\stackrel{~}{\varphi }=\phi /\lambda `$ by using a truncated Taylor series approximation for $`E_P[\stackrel{~}{\varphi }+\delta \varphi ]`$: $$E_P[\stackrel{~}{\varphi }+\delta \varphi ]=\frac{1}{\lambda ^2}E_P[\phi ]+\frac{1}{2h^2}\sum _{n,m}W_{nm}\delta \varphi _n\delta \varphi _m+O(\lambda )$$ (12) where $$W_{nm}=\delta _{nm}[2+h^2V^{\prime \prime }(\phi )]-\delta _{n,m+1}-\delta _{n,m-1}.$$ (13) Since $`W`$ is a real, symmetric matrix, there exists an orthogonal transformation $`R`$ such that $$W_{nm}=\sum _{k,l}R_{nk}^TU_{kl}R_{lm}$$ (14) with $`U`$ diagonal. The diagonal entries of $`U`$ are the eigenvalues $`\mathrm{\Lambda }_n`$ of $`W`$, none of which is negative provided $`\stackrel{~}{\varphi }`$ is stable, as we are assuming. Introducing normal coordinates $`\xi _n=\sum _mR_{nm}\delta \varphi _m`$ which have conjugate momenta $`\eta _n=\sum _mR_{nm}\pi _m`$, the Hamiltonian for motion about $`\stackrel{~}{\varphi }`$ reduces to $$H=\frac{1}{\lambda ^2}E_P[\phi ]+\frac{1}{2}\sum _n[\eta _n^2+\frac{\mathrm{\Lambda }_n}{h^2}\xi _n^2]+O(\lambda ).$$ (15) Neglecting the $`O(\lambda )`$ remainder, this is the Hamiltonian for an infinite set of decoupled harmonic oscillators of natural frequencies $`h^{-1}\sqrt{\mathrm{\Lambda }_n}`$. We now quantize in standard canonical fashion in the cases where $`\stackrel{~}{\varphi }=a_\pm /\lambda `$ (the vacuum) and $`\stackrel{~}{\varphi }=\varphi ^{b,\lambda }`$ (the kink located at $`b`$). Let the $`W`$-matrices in these cases be denoted $`W^{vac}`$ and $`W^K(b)`$ respectively, with spectra $`\{\mathrm{\Lambda }_n^{vac}\}`$ and $`\{\mathrm{\Lambda }_n^K(b)\}`$. In each case, the quantum correction to the ground state energy is the sum of the zero-point energies of the oscillators, $$\frac{1}{2h}\sum _n\sqrt{\mathrm{\Lambda }_n},$$ (16) in units where $`\hbar \equiv 1`$ (recall that $`h`$ denotes the lattice spacing of the system). In the case of kinks, one should omit from this sum the eigenvalue associated with the translation mode since, the kink being very heavy, this mode is treated classically. Actually this makes no difference since the corresponding eigenvalue vanishes, so one might as well sum over all modes, including the zero mode. Of course, the series (16) is divergent in both the vacuum and kink sectors, and must be suitably regulated and renormalized. To this end, we truncate the lattice symmetrically about the kink centre (so $`-n_0\le n\le n_0`$, assuming $`b\in [0,h)`$) and consider the spectra $`\{\mathrm{\Lambda }_n(N):n=1,\ldots ,N\}`$ of the truncated $`W`$-matrices of order $`N=2n_0+1`$.
The renormalized ground state energy is then $$\mathcal{E}(b)=\frac{E_P[\varphi ^{0,1}]}{\lambda ^2}+\underset{N\to \mathrm{\infty }}{lim}\sum _{n=1}^N[\sqrt{|\mathrm{\Lambda }_n^K(N,b)|}-\sqrt{\mathrm{\Lambda }_n^{vac}(N)}],$$ (17) which one expects to be finite, given the exponential spatial localization of the kink (the large $`|n|`$ entries of the matrix $`W^K`$ are essentially identical to those of $`W^{vac}`$). The finite size of the lattice perturbs the translation zero mode away from zero slightly, so one of the kink eigenvalues $`\mathrm{\Lambda }_n(N,b)`$ may be negative for some $`(N,b)`$ (although it must vanish as $`N\to \mathrm{\infty }`$). This is why we have introduced an absolute value into equation (17), so that $`\mathcal{E}(b)`$ is the limit of a real valued sequence. In the limit $`N\to \mathrm{\infty }`$ lattice translation symmetry is recovered, so $`\mathcal{E}(b)`$ should be periodic with period $`h`$. ## 4 The $`\mathrm{cosh}^{-\mu }`$ kink family If we now choose $`V=V_h`$, for some kink profile $`f`$, we see from equations (13) and (4) that the QPN potential depends on $`f`$ only through $`f^{\prime }`$, because $$2+h^2V_h^{\prime \prime }(f(z))=\frac{f^{\prime }(z+h)+f^{\prime }(z-h)}{f^{\prime }(z)}.$$ (18) For purposes of illustration we will consider the one-parameter family $$f^{\prime }(z)=\frac{1}{\mathrm{cosh}^\mu z}$$ (19) with $`\mu >0`$. Note that this includes the cases of the sine-Gordon ($`f(z)=2\mathrm{tan}^{-1}e^z`$) and $`\varphi ^4`$ ($`f(z)=\mathrm{tanh}z`$) kink profiles: $`\mu =1`$ and $`\mu =2`$ respectively. The corresponding transparent substrates in these two cases are shown in figure 1. Figure 1: Transparent substrate potentials generated by the sine-Gordon ($`\mu =1`$, left) and $`\varphi ^4`$ ($`\mu =2`$, right) kink profiles, with lattice spacings from $`h=0.5`$ (bottom) to $`h=3.0`$ (top) in steps of $`0.5`$. The neighbouring vacua $`a_\pm `$ may be any real numbers separated by $$a_+-a_{-}=\int _{-\mathrm{\infty }}^{\mathrm{\infty }}\frac{dx}{\mathrm{cosh}^\mu x}.$$ (20) Since $`f(z)`$ has order $`e^{-\mu |z|}`$ exponential decay, $`V_h^{\prime \prime }(a_\pm )`$ is given by equation (6), and the vacuum $`W`$-matrix takes the simple form $$W_{nm}^{vac}=2\mathrm{cosh}(\mu h)\delta _{nm}-(\delta _{n,m-1}+\delta _{n,m+1}).$$ (21) The spectrum of the system truncated to $`N`$ lattice sites is easily computed: $$\mathrm{\Lambda }_n^{vac}(N)=2(\mathrm{cosh}\mu h-1)+4\mathrm{sin}^2\left(\frac{n\pi }{2(N+1)}\right),n=1,2,\ldots ,N.$$ (22) In the limit $`N\to \mathrm{\infty }`$, the spectrum densely fills the interval $`[2(\mathrm{cosh}\mu h-1),2(\mathrm{cosh}\mu h+1)]`$. The eigenvalue problem for the kink W-matrix, $$W_{nm}^K(b)=\frac{f^{\prime }(nh+h-b)+f^{\prime }(nh-h-b)}{f^{\prime }(nh-b)}\delta _{nm}-(\delta _{n,m-1}+\delta _{n,m+1}),$$ (23) is intractable analytically, and will be solved numerically in section 5. One can show, however, that the quantum kink energy is (for all $`b`$) lower than the classical kink energy; that is, the quantum energy correction is negative. To see this, let $`\mathrm{\Delta }(b)`$ be the real diagonal matrix $$\mathrm{\Delta }_{nm}(b)=h^2\delta _{nm}[V_h^{\prime \prime }(f(nh-b))-V_h^{\prime \prime }(a_\pm )],$$ (24) so that $`W^K(b)=W^{vac}+\mathrm{\Delta }(b)`$. Let $`\{\mathrm{\Lambda }_n^K(b)\}`$, $`\{\mathrm{\Lambda }_n^{vac}\}`$ and $`\{\mathrm{\Gamma }_n(b)\}`$ be the eigenvalues of $`W^K(b)`$, $`W^{vac}`$ and $`\mathrm{\Delta }(b)`$, each spectrum arranged in nonincreasing order ($`\mathrm{\Gamma }_1\ge \mathrm{\Gamma }_2\ge \mathrm{\Gamma }_3\ge \ldots `$).
Then standard matrix perturbation theory asserts that $$\mathrm{\Lambda }_n^K(b)\le \mathrm{\Lambda }_n^{vac}+\mathrm{\Gamma }_1(b)$$ (25) for all $`n`$, where $`\mathrm{\Gamma }_1`$ is the greatest eigenvalue of $`\mathrm{\Delta }(b)`$, $$\mathrm{\Gamma }_1(b)=\underset{n}{\mathrm{max}}h^2[V_h^{\prime \prime }(f(nh-b))-V_h^{\prime \prime }(a_\pm )].$$ (26) So if $`V_h^{\prime \prime }(f(z))<V_h^{\prime \prime }(a_\pm )`$ for all $`z`$ then $`\mathrm{\Gamma }_n<0`$ and the perturbation of $`W^{vac}`$ by $`\mathrm{\Delta }(b)`$ must reduce each eigenvalue $`\mathrm{\Lambda }_n^K(b)`$ relative to its vacuum counterpart, with the result (always assuming kink stability) that $$\sum _n(\sqrt{|\mathrm{\Lambda }_n^K(b)|}-\sqrt{\mathrm{\Lambda }_n^{vac}})<0.$$ (27) This condition on $`V_h^{\prime \prime }(\varphi )`$ (maximum second derivative at the vacua) is quite natural, and can easily be shown to hold for the whole $`\mathrm{cosh}^{-\mu }`$ family ($`\mu >0`$, $`h>0`$). Note that $$h^2[V_h^{\prime \prime }(f(z))-V_h^{\prime \prime }(a_\pm )]=g(z)-\underset{x\to \mathrm{\infty }}{lim}g(x),$$ (28) where $$g(z)=\left[\frac{\mathrm{cosh}z}{\mathrm{cosh}(z+h)}\right]^\mu +\left[\frac{\mathrm{cosh}z}{\mathrm{cosh}(z-h)}\right]^\mu .$$ (29) Now $`g`$ is differentiable and even, and $$g^{\prime }(z)=-\mu \mathrm{sinh}h\mathrm{cosh}^{\mu -1}z\left[\frac{1}{\mathrm{cosh}^{\mu +1}(z+h)}-\frac{1}{\mathrm{cosh}^{\mu +1}(z-h)}\right],$$ (30) so the only critical point of $`g`$ is $`z=0`$, a local minimum, whence it follows that $`g(z)<lim_{x\to \mathrm{\infty }}g(x)`$ for all $`z`$. ## 5 Numerical results Since the truncated kink $`W`$-matrix $`W_N^K(b)`$ is real, symmetric and tridiagonal, its eigenvalue problem is particularly easy to solve numerically. In this section we present data generated by implementing the QL decomposition algorithm for tridiagonal matrices with implicit eigenvalue shifts, outlined in . The first thing to check is that, as expected, the spectrum of $`W^K(b)`$ is positive semi-definite with nondegenerate eigenvalue zero. The least and next-to-least eigenvalues of $`W_N^K(b)`$ for $`N`$ odd, $`3\le N\le 90`$, were computed for a large sample of parameter values in the range $`1\le \mu \le 3`$, $`0.5\le h\le 10`$, $`0\le b/h\le 0.5`$. The results were similar at all points sampled: the least eigenvalue converges to $`0`$, while the next-to-least converges to a positive number below the lower edge of the vacuum phonon band, that is, less than $`2(\mathrm{cosh}\mu h-1)`$. It is instructive to look at the build up of the spectrum of $`W_N^K(b)`$ as $`N`$ grows large, as depicted for two contrasting sets of parameter values in figure 2. This clearly shows the rapid convergence of the bottom eigenvalue to $`0`$ (convergence being faster for larger $`\mu h`$, since the kink structure is then more tightly spatially localized) and of the next lowest eigenvalue to a constant outside the phonon band. The rest of the eigenvalues accumulate, apparently densely, within the phonon band (delimited by horizontal dashed lines in figure 2). This is precisely the right behaviour to ensure both kink stability and convergence of the quantum corrected kink energy in the limit $`N\to \mathrm{\infty }`$. The size of truncated system needed to obtain practical convergence depends on $`\mu h`$, but, within the parameter range cited above, $`N=51`$ seems to suffice. This is the matrix order used to obtain the remaining numerical data.
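The numerical procedure just outlined is easy to reproduce. The following is a minimal sketch (ours; it uses SciPy's tridiagonal eigensolver instead of the QL routine of the paper, and evaluates the diagonal of Eq. (23) via logarithms of f' to avoid underflow at large μh). The overall 1/(2h) zero-point factor of Eq. (16) is included; it rescales the result uniformly without affecting its sign or shape.

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

def log_fprime(z, mu):
    """log f'(z) for f'(z) = cosh(z)**(-mu), evaluated stably for large |z|."""
    return -mu * (np.logaddexp(z, -z) - np.log(2.0))

def kink_spectrum(b, mu, h, N):
    """Eigenvalues of the truncated kink W-matrix of Eq. (23), N = 2*n0 + 1 sites."""
    n = np.arange(-(N - 1) // 2, (N - 1) // 2 + 1)
    z = n * h - b
    diag = np.exp(log_fprime(z + h, mu) - log_fprime(z, mu)) \
         + np.exp(log_fprime(z - h, mu) - log_fprime(z, mu))
    return eigh_tridiagonal(diag, -np.ones(N - 1), eigvals_only=True)

def vacuum_spectrum(mu, h, N):
    """Exact spectrum of the truncated vacuum W-matrix, Eq. (22)."""
    n = np.arange(1, N + 1)
    return 2.0 * (np.cosh(mu * h) - 1.0) + 4.0 * np.sin(n * np.pi / (2.0 * (N + 1))) ** 2

def qpn_potential(b, mu, h, N=51):
    """E~(b) = E(b) - E(0), with the 1/(2h) zero-point prefactor of Eq. (16)."""
    def correction(bb):
        lam_k = np.abs(kink_spectrum(bb, mu, h, N))
        lam_v = vacuum_spectrum(mu, h, N)
        return np.sum(np.sqrt(np.sort(lam_k)) - np.sqrt(np.sort(lam_v))) / (2.0 * h)
    return correction(b) - correction(0.0)

# e.g. the barrier measure E~(h/2) for mu = 2, h = 3 (cf. figures 6 and 7)
print(qpn_potential(1.5, mu=2.0, h=3.0))
```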
Figure 2: Build up of the spectrum of $`W_N^K(b)`$ as the matrix order $`N`$ grows large, in the cases $`\mu =1.5`$, $`h=0.5`$, $`b/h=0.2`$ (left) and $`\mu =2.0`$, $`h=2.6`$, $`b/h=0`$ (right). Note how the spectrum accumulates within the vacuum phonon band, delimited by horizontal dashed lines. The quantity $`\mathcal{E}(b)`$ as defined in (17) is problematic to plot since it contains contributions of different orders in $`\lambda `$. For this reason it is convenient to consider $`\stackrel{~}{\mathcal{E}}(b)=\mathcal{E}(b)-\mathcal{E}(0)`$, which is of order $`\lambda ^0`$, instead. In fact, $`\stackrel{~}{\mathcal{E}}(b)`$ is the quantity of most direct physical significance anyway: it gives the change in kink energy as $`b`$ varies, which is precisely what is meant by the quantum Peierls-Nabarro potential. Figure 3 shows $`\stackrel{~}{\mathcal{E}}(b)`$ ($`0\le b/h\le 1`$) for the sine-Gordon substrate potentials ($`\mu =1`$), at a variety of lattice spacings. The results are qualitatively very similar to the usual classical PN potential: the kink has greatest energy when located exactly on a lattice site ($`b=0`$) and least when located exactly midway between lattice sites ($`b=h/2`$), the energy difference (the QPN barrier) growing monotonically with $`h`$. Figure 3: Position dependence of the quantum PN potential $`\stackrel{~}{\mathcal{E}}(b)`$ for $`\mu =1`$ and $`h=1.5`$ to $`h=5`$ in steps of $`0.5`$. Figure 4: Position dependence of the quantum PN potential $`\stackrel{~}{\mathcal{E}}(b)`$ for $`\mu =2`$ and $`h=2`$ to $`h=5`$ in steps of $`1`$. The unlabeled curves are $`h=2`$ (dashed) and $`h=4`$ (solid). Figure 4 shows similar plots for the $`\varphi ^4`$ substrate potentials ($`\mu =2`$). Here, once again, kinks have greatest energy when $`b=0`$ and least when $`b=h/2`$, but the QPN barrier does not grow monotonically with $`h`$, nor is the shape of $`\stackrel{~}{\mathcal{E}}(b)`$ as uniform as in the $`\mu =1`$ case. Similar plots for $`\mu =3`$ reveal more complicated behaviour, as shown in figure 5. For small $`h`$, $`\stackrel{~}{\mathcal{E}}(b)`$ is maximum at $`b=0`$ and minimum at $`b=h/2`$, as for $`\mu =1`$ and $`\mu =2`$. However, for $`h`$ above a critical value ($`\approx 1.46`$) $`b=0`$ becomes a local minimum of $`\stackrel{~}{\mathcal{E}}`$ and two extra local maxima appear. The global minimum of $`\stackrel{~}{\mathcal{E}}`$ remains at $`b=h/2`$, rather than $`b=0`$, until $`h`$ exceeds about 1.52, after which $`\stackrel{~}{\mathcal{E}}(h/2)`$ exceeds $`\stackrel{~}{\mathcal{E}}(0)`$. As $`h`$ is increased further, the two local minima coalesce at $`b=h/2`$ (at $`h\approx 1.7`$), so that the QPN potential starts to resemble an inverted version of the $`\mu =1`$ case: now kinks have minimum energy when located exactly on a lattice site and maximum energy when located exactly midway between sites. Figure 5: Position dependence of the quantum PN potential $`\stackrel{~}{\mathcal{E}}(b)`$ for $`\mu =3`$ and $`h=1.4`$ to $`h=1.6`$ in steps of $`0.2`$. Figure 6: A rough measure of the depth of the QPN barrier: $`\stackrel{~}{\mathcal{E}}(h/2)`$ as a function of $`h`$ for various $`\mu \in [1,3]`$. Periodicity and reflexion symmetry of $`\mathcal{E}`$ imply that $`b=0`$ and $`b=h/2`$ must always be critical points. There may be others (as in the case $`\mu =3.0`$), but for the most part, plotting $`\stackrel{~}{\mathcal{E}}(h/2)=\mathcal{E}(h/2)-\mathcal{E}(0)`$ against $`h`$ gives a good account of how the QPN barrier varies with $`h`$ and $`\mu `$. In particular, the sign of $`\stackrel{~}{\mathcal{E}}(h/2)`$ tells one whether the QPN barrier tends to trap kinks between lattice sites ($`\stackrel{~}{\mathcal{E}}(h/2)<0`$) or on lattice sites ($`\stackrel{~}{\mathcal{E}}(h/2)>0`$).
Figure 6 shows plots of $`\stackrel{~}{\mathcal{E}}(h/2)`$ against $`h`$ for various $`\mu \in [1,3]`$. Three regimes clearly emerge for large $`h`$: $`1\le \mu <2`$ ($`\stackrel{~}{\mathcal{E}}(h/2)<0`$, growing unbounded as $`h\to \mathrm{\infty }`$), $`\mu =2`$ ($`\stackrel{~}{\mathcal{E}}(h/2)<0`$, remaining bounded as $`h\to \mathrm{\infty }`$) and $`2<\mu \le 3`$ ($`\stackrel{~}{\mathcal{E}}(h/2)>0`$, growing unbounded as $`h\to \mathrm{\infty }`$). This trichotomy may be explained by consideration of the asymptotic forms of $`W^K(0)`$ and $`W^K(h/2)`$ for large $`h`$: $$\frac{W^K(0)}{e^{\mu h}}\approx \mathrm{diag}(\ldots ,1,1,2^{-\mu },0,2^{-\mu },1,1,\ldots )$$ (31) $$\frac{W^K(h/2)}{e^{\mu h}}\approx \mathrm{diag}(\ldots ,1,1,0,0,1,1,\ldots ).$$ (32) This leads to the prediction $$\frac{\stackrel{~}{\mathcal{E}}(h/2)}{e^{\mu h/2}}\approx \frac{1}{2h}\left[(0-0)+(0-2^{-\mu /2})+(1-2^{-\mu /2})+(1-1)+(1-1)+\ldots \right]$$ (33) $$\stackrel{~}{\mathcal{E}}(h/2)\approx \frac{(1-2^{1-\mu /2})}{2h}e^{\mu h/2}.$$ (34) Formula (34) accounts well for the asymptotic behaviour seen in figure 6. Clearly the most interesting case from this point of view is the critical value $`\mu =2`$. The dependence of $`\stackrel{~}{\mathcal{E}}(h/2)`$ on $`h`$ for $`\mu =2`$ is shown in figure 7. One sees that, rather counterintuitively, the QPN barrier actually vanishes in the extreme discrete limit, $`h\to \mathrm{\infty }`$. One should remember, of course, that in varying $`h`$ one also varies the shape of the transparent substrate potential $`V_h`$ (which would not otherwise remain transparent). In fact, the limit $`h\to \mathrm{\infty }`$ is always badly singular since the curvature of the substrate at the vacua grows unbounded, by equation (6). Figure 7: Dependence of the QPN barrier on the lattice spacing $`h`$ in the critical case ($`\mu =2`$). ## 6 Concluding remarks In this paper we have considered oscillator chains with no classical PN barrier and shown that their kinks still experience a lattice-periodic confining potential due to purely quantum mechanical effects, leading to a new mechanism for kink pinning. The quantum PN potential was computed numerically for a simple two-parameter family of substrates, revealing a rich variety of behaviour. It remains to be seen whether the QPN potential has any relevance to genuine physics. Given the idealized one-dimensional nature of the model, this seems unlikely (generalizing the Inverse Method to higher dimensions is very problematic). Another cause for doubt is that the effect only exists for certain special substrates. Just how special these “transparent” substrates are is unknown. In order to have any physical relevance they would at least have to be structurally stable: if $`V_h[f]`$ is the substrate transparent at $`h`$ generated by the kink profile $`f`$, then given any sufficiently small perturbation $`\delta V`$ there should exist a kink profile $`f_{\ast }`$ close to $`f`$ and a spacing $`h_{\ast }`$ close to $`h`$ such that $`V_h[f]+\delta V=V_{h_{\ast }}[f_{\ast }]`$. Thinking of the Inverse Method as a mapping $`K\times \mathbb{R}_+\to P`$ (where $`K`$ is the space of kink profiles and $`P`$ the space of potentials), the question is whether this mapping is continuous with respect to some sensible choice of topologies on $`K`$ and $`P`$. This is one of many open mathematical questions raised by the present work. We have presented numerical evidence to support the assumption of classical kink stability, but it should be possible to prove stability rigorously. Similarly, one should be able to prove convergence of the series defining $`\mathcal{E}(b)`$.
In both cases one needs to understand the large $`N`$ behaviour of the spectrum of $`W_N^K(b)`$. Standard minimax estimates are insufficient for this purpose - more delicate analysis is required. ## Acknowledgments This work was partially completed at the Max-Planck-Institut für Mathematik in den Naturwissenschaften, Leipzig, where the author was a guest scientist. He wishes to thank Prof. Eberhard Zeidler for the generous hospitality of the institute. The author is an EPSRC Postdoctoral Research Fellow in Mathematics.
no-problem/0003/math0003029.html
ar5iv
text
# Untitled Document For the latest version of this text, type hodgegaillard on Google. Date of this version: Mon Dec 24 09:01:37 CET 2007. A Hodge Theorem for Noncompact Manifolds Theorem If $`M`$ is a Riemannian manifold, then the inclusion of the complex of coclosed harmonic forms into the de Rham complex induces a linear isomorphism in cohomology. If $`M`$ has at most countably many connected components, this linear isomorphism is a Fréchet isomorphism. \[Manifolds are assumed to be $`C^{\infty }`$ and Hausdorff.\] Since Theorem 5 in Section I.9.10 of Bourbaki implies that $`M`$ is paracompact, we can assume that it is connected, and also that it is non-compact (the result being classical in the compact case). Then the claim follows easily (using the Open Mapping Theorem and the fact that the de Rham cohomology is a Fréchet space) from the surjectivity of the laplacian on the de Rham complex. Let us check this surjectivity. In \[4, p. 158\] de Rham proves (using results of Aronszajn, Krzywicki and Szarski) that a harmonic form which has a zero of infinite order vanishes identically; this implies in particular that the laplacian satisfies Property (A) in Definition 5 of Malgrange \[3, p. 333\]; it is well known that the laplacian also satisfies Condition (P), called ellipticity nowadays, in Definition 6 of \[3, p. 338\]; in view of Theorem 5 in \[3, p. 341\] this implies the desired surjectivity. * Aronszajn, N., Krzywicki, A., Szarski, J., A unique continuation theorem for exterior differential forms on Riemannian manifolds, Ark. Mat., Volume 4, Number 5 (1962) 417-453. Available at springerlink.com. * Bourbaki, N., Topologie générale, Vol. 1, Chapitres 1 à 4, Hermann, 1971. * Malgrange, B., Existence et approximation des solutions des équations aux dérivées partielles et des équations de convolution, Ann. Inst. Fourier, Grenoble 6 (1955-56) 271-354. Available at numdam.org. * de Rham, G., Differentiable manifolds, Springer-Verlag, 1984.
no-problem/0003/hep-th0003120.html
ar5iv
text
# Local Quantum Observables in the Anti-deSitter - Conformal QFT Correspondence ## I The AdS-CFT correspondence Quantum field theory on anti-deSitter space-time (AdS) has received an important impact from string theory. Evidence was found to the effect that certain theories on $`d+1`$-dimensional AdS equivalently describe conformally invariant quantum field theories (CFT) on $`d`$-dimensional Minkowski space-time. In particular, the higher-dimensional AdS theory can be recovered from the lower-dimensional CFT. So the correspondence is “holographic” in the sense of where the influence of a black hole horizon on quantum fields in the ambient bulk space was discussed. This correspondence has attracted much attention, as it suggests a wealth of implications for quantum gravity and for gauge theories in physical (four-dimensional) space-time. For a comprehensive review, see . A large portion of the work on the AdS-CFT correspondence crucially involves “stringy” pictures (branes, duality, M-theory) when comparing contributions to the relevant path integrals and/or correlation functions. The correspondence is, however, claimed to be a model-independent feature of quantum field theory . So the question arises as to whether it can be understood in more basic terms not relying on string theory. In fact, one main ingredient, the coincidence of the space-time symmetry groups, is even a purely classical one, long known to physicists: both the isometry group of AdS in $`d+1`$ dimensions and the conformal group of Minkowski space-time in $`d`$ dimensions are SO$`(2,d)`$. It is the aim of this letter to show that it is indeed possible to understand (and to prove) the AdS-CFT correspondence in a general quantum field theoretical set-up. We shall first give a brief introduction to this set-up in Sect. II. It is certainly the most general one to incorporate the two fundamental principles of relativistic Covariance and Causality in quantum theory. ## II Local observables The prime objects of consideration in a quantum theory are the quantum observables, represented as self-adjoint operators on a Hilbert space whose elements are the vector states in which the system can be prepared. The real expectation values of the observables in various states (e.g., the vacuum state) predict the statistical outcome of any measurement. In a relativistic quantum theory, in contrast to quantum mechanics, observables have the property of localization, compatible with Locality and Covariance. Locality is a consequence of Einstein Causality and means that observables which are localized at spacelike distance commute with each other. Covariance requires that the space-time symmetry group acts (by unitary operators $`U(g)`$) on the localization of observables according to its geometric significance, thereby preserving any algebraic relations among them. In the AdS-CFT case at hand this group is the AdS resp. conformal group SO$`(2,d)`$. In conventional quantum field theory, the above features are usually encoded by quantum fields: objects $`\varphi (x)`$ localized at the points $`x`$ in space-time, commuting at spacelike distance and transforming under some relativistically covariant transformation law. Due to the singular nature of quantum fields, these are not Hilbert space operators, but become operators after smearing with a test function. The best localization of observables is therefore an open region in space-time which contains the support of a test function. 
The choice of quantum fields used for the description of a relativistic quantum system is to a large extent a matter of convenience. It has been recognized long ago that different quantum fields may well describe the same quantum system. Prime examples are the equivalence of the Sine-Gordon and Thirring model , the re-interpretation of Chern-Simons theories in terms of models with Yukawa interaction , and the duality of certain supersymmetric Yang-Mills theories . One is thus led to the conclusion that what determines the physical interpretation of a quantum theory are not the individual quantum fields but the algebras of Hilbert space operators which are generated by localized field operators. Theories with possibly different equations of motion may well be equivalent if they only generate the same system of local algebras. The existence of generating fields is not even required if the local algebras can be specified by any other consistent prescription. To conclude, the knowledge of localization is sufficient for the physical interpretation of a theory. The more specific interpretation of individual observables can be recovered from their correlations with other localized observables (exhibited in expectation values). This insight has proved most fruitful for a wide spectrum of structural results, ranging from scattering theory, a clarification of the superselection (charge) structure, to an algebraic renormalization group analysis. For a recent review, see . As we are aiming at a general and intrinsic description of the AdS resp. CFT theories, we shall deal here with their local algebras and assume that they comply with the requirements of Covariance and Locality, as well as the additional but obvious property of Isotony: an observable localized in some region $`O`$ is localized in any larger region also. ## III AdS-CFT resumed We adopt the set-up, sketched above, of algebras of localized observables, based on fundamental principles generally accepted. It applies to any physically reasonable relativistic quantum field theory, including certain string theories . We shall show that it is the most natural set-up to establish the AdS-CFT correspondence. For it is this structure which is preserved by the correspondence. In contrast, the description in terms of specific fields and Lagrangeans will in general not be preserved. As explained, a quantum field theory is specified if the algebras $`A(O)`$ of observables localized in each open space-time region $`O`$ are known: $$A(O)=\mathrm{span}\{\varphi \text{}\varphi \text{ is an observable localized in }O\}.$$ This assignment has to comply with Covariance and Locality. Thus, to prove the AdS-CFT correspondence, we have to establish a prescription specifying the algebras $`B(W)`$ of local AdS observables for suitable AdS regions $`W`$ if the algebras $`A(K)`$ of local CFT observables for suitable Minkowski regions $`K`$ are given, and vice versa. This prescription must pass on Locality and Covariance from the given theory to the new theory in correspondence. As the discussion of quantum fields in the presence of a gravitational horizon underlying the holographic picture suggests, the set of all operators representing observables should be the same for both theories, and act on the same Hilbert space. Moreover, the conformal space-time should play the role of a horizon in AdS space-time. 
Indeed, the $`d+1`$-dimensional AdS space-time given as $$\text{AdS}\text{1,d}=\{\xi 𝐑^{d+2}:\xi _0^2\xi _1^2\mathrm{}\xi _d^2+\xi _{d+1}^2=R^2\}$$ has a “boundary” at spacelike infinity, and the induced (properly rescaled) metric of the boundary is that of $`d`$-dimensional conformal Minkowski space-time. The action of the AdS group on AdS<sub>1,d</sub> preserves the boundary, and acts on it like the conformal group in $`d`$-dimensional Minkowski space-time. The law of causal propagation between the bulk of AdS and its boundary suggest how to find the prescription to identify localized observables between the two theories . Namely, let $`K`$ be a causally complete open and convex region in Minkowski space-time, – a convenient choice is a double-cone, i.e., the intersections of a future-directed and a past-directed light-cone. It uniquely determines a wedge-shaped region $`W`$ of AdS which consists of all points at which one can receive signals from some point of $`K`$, and from which one can send signals to some other point of $`K`$ (the “causal completion” of $`K`$ in AdS). Conversely, the boundary region $`K`$ is recovered from this AdS region $`W`$ by taking its intersection with the boundary of AdS. We omit proofs of the geometric facts mentioned here and in the sequel; the reader may find details in . It is largely sufficient to visualize AdS<sub>1,d</sub> in suitable coordinates as a full cylinder $`𝐑\times B^d`$ whose axis $`𝐑`$ represents time (possibly periodic, see below), and whose boundary $`𝐑\times S^{d1}`$ represents spacelike infinity. ($`B^d`$ is a ball, and $`S^{d1}`$ is a sphere.) Double-cones $`K`$ are inscribed into the boundary, and the wedges $`W`$ look like actual wedges “chopped into the cylinder” (cf. Fig. 1). FIG. 1. Wedge regions in AdS and corresponding double-cones in the boundary (in Penrose coordinates) Let us denote the bijective correspondence between AdS wedges and boundary (Minkowski) double-cones by $`K=\iota (W)W=\iota ^1(K)`$. Then the specification $$A(K):=B(W)\mathrm{if}K=\iota (W),$$ determines a system of local algebras $`A(K)`$ of observables on Minkowski space-time, given the system of local algebras $`B(W)`$ on AdS. This identification preserves Covariance. For if a wedge $`W`$ is transformed under the AdS group, then its intersection $`K`$ with the boundary undergoes a conformal transformation, and vice versa. More concretely, if $`g`$ stands for an element of the AdS group, and $`\dot{g}`$ for its induced conformal transformation of the boundary, then $$\iota (gW)=\dot{g}K\mathrm{if}K=\iota (W),$$ implying the correct transformation of the observables: $`U(g)A(K)U(g)^1=U(g)B(W)U(g)^1=`$ $`=B(gW)=A(\iota (gW))=A(\dot{g}K).`$ In particular, the conformal symmetry is implemented by the same unitary Hilbert space representation $`U`$ of SO$`(2,d)`$ as the AdS symmetry. The identification also preserves Isotony: One has $$\iota (W_1)\iota (W_2)\mathrm{if}W_1W_2$$ for obvious reasons. Since the given algebras $`B(W)`$ satisfy Isotony, this implies $$A(K_1)A(K_2)\mathrm{if}K_1K_2.$$ Finally, also Locality is preserved: Namely, $`\iota `$ maps pairs of causally complementary AdS regions onto pairs of causally complementary boundary regions (the causal complement $`X^{}`$ of a region $`X`$ consists of all points which are spacelike separated from any point in $`X`$): $$\iota (W^{})=K^{}\mathrm{if}\iota (W)=K.$$ If now $`K=\iota (W)`$ and $`\widehat{K}`$ are spacelike separated, then $`\widehat{K}`$ is a subset of $`K^{}=\iota (W^{})`$. 
Observables localized in $`K`$ and $`\widehat{K}`$ are thus identified with operators in $`B(W)`$ and $`B(W^{})`$, and hence commute as required by Locality. Since $`\iota `$ is a bijection, the prescription can be reversed, specifying the system of algebras $`B(W)`$ by the given local algebras $`A(K)`$: $$B(W):=A(K)\mathrm{if}W=\iota ^1(K).$$ By the same arguments as before, Covariance, Isotony and Locality hold for $`B(W)`$ if they hold for $`A(K)`$. We emphasize that we have reduced the problem of establishing the AdS-CFT correspondence to a completely geometric one. It is not necessary to proceed from a specific quantum field theory in order to understand why it admits a holographic re-interpretation. We now turn to discuss some physical implications of the correspondence thus established. ### A Change of the physical interpretation Although the pair of corresponding theories shares the same set of local observables as operators on the same Hilbert space, they have different physical interpretations. This possibility is familiar from quantum mechanics where the state space is always a separable Hilbert space and the set of all observables are the functions of position and momentum. The physical interpretation arises from the assignment of observables to localization regions, and the consequent correlations in the expectation values of observables in various geometric arrangements. A re-assignment completely changes the interpretation. For instance, the description of a scattering experiment would require the determination of correlations between observables at asymptotically large distances. As notions like “spacelike infinity” are not preserved by the bijection $`\iota `$, the computation of asymptotic correlations yields entirely different results in corresponding theories. Furthermore, the one-parameter subgroups of SO$`(2,d)`$ describing time translations in the AdS and conformal interpretations do not coincide. Therefore, also notions like dynamics, energy and entropy change their meaning under the AdS-CFT correspondence. ### B Pointlike AdS and extended CFT observables Not even the concept of a point is preserved by the correspondence (which should not be a surprise since corresponding theories live in space-times of different dimension). For instance, arbitrarily small double-cones in the boundary correspond to wedges close to infinity in AdS, which always have infinite volume. That an observable can be written as a field $`\varphi (x)`$ smeared with a test function, or as some function of field operators, may be true in AdS, but not in the corresponding CFT, or vice versa. Thus, a description in terms of fields may fail in one of the two theories. This is an instance where the advantage of thinking in terms of extended observables and the description of their time evolution by an automorphism group in contrast to fields and equations of motion, is clearly exhibited. We want to demonstrate that the identification of localized observables implies that AdS observables localized in finite AdS regions (in particular proper AdS fields) correspond to genuinely extended CFT observables. In the argument we assume the dimension of AdS to be $`d+1>1+1`$ (the case $`d=1`$ is very special and has been discussed elsewhere ). Let $`X`$ be a bounded region in AdS. Pick some wedge $`W`$ which contains $`X`$ and consider the family of wedges $`W_i`$ contained in $`W`$ which are spacelike to $`X`$. 
One finds that the corresponding boundary double-cones $`K_i=\iota (W_i)`$ are contained in $`K=\iota (W)`$ and cover its $`t=0`$ surface. Let $`B(X)`$ denote the algebra of AdS observables localized in $`X`$. It belongs to $`B(W)`$ and commutes with all observables in $`B(W_i)`$, hence as a CFT observable it belongs to $`A(K)`$ and commutes with all observables in $`A(K_i)`$. It commutes in particular with all boundary fields smeared over a neighborhood of a Cauchy surface of $`K`$. Now, if the algebra $`A(K)`$ were generated by the family of subalgebras $`A(K_i)`$, we would find that $`B(X)`$ belongs to $`A(K)`$ and commutes with every operator in $`A(K)`$, and therefore is a commutative algebra. Its elements can be classical observables only. This conclusion applies, e.g., if the CFT is completely described by fields with a causal dynamical law, since the fields along the Cauchy surface generate all observables localized in $`K`$. Reversing the argument, we conclude that the quantum observables in $`B(X)`$ (e.g., AdS field operators smeared within $`X`$) correspond to CFT observables in $`A(K)`$ which are not generated by the family of subalgebras $`A(K_i)`$ covering the $`t=0`$ surface of the double-cone $`K`$. They are thus genuinely extended CFT observables. In particular, the CFT cannot be completely described by its fields with a dynamical law. The extended observables of the CFT, whose presence is implied by the above argument, might be Wilson loop operators in nonabelian gauge theories. While observable fields fail to generate all quantum observables of the CFT, gauge invariant nonlocal “functions” of gauge fields could account for the rest. On the other hand, CFT fields correspond to AdS observables attached to infinity, which might just be suitably renormalized limits of AdS fields . More enthralling is the possibility of identifying some AdS degrees of freedom, which collectively restore the crossing symmetry of conformal operator product expansions obtained by an AdS prescription , as strings – thus making contact with the original conjectures . ### C Global structure of space-time The bijection $`\iota `$ between double-cones and wedges (Sect. III) pertains to proper conformal Minkowski space-time and projective AdS space-time, which is the AdS hyperboloid with antipodal points $`\xi `$ and $`-\xi `$ identified: $`\mathrm{P}\text{AdS}\text{1,d}=\text{AdS}\text{1,d}/(\xi \sim -\xi )`$. One may still formulate the AdS theory on AdS<sub>1,d</sub>, but then one finds $`B(W)=B(-W)`$: antipodal wedges have the same observables. A CFT on proper conformal Minkowski space-time cannot describe any interaction since its observables also commute at timelike separation, hence any causal influence is bound to lightlike geodesic propagation. This implies that observables in the corresponding theory on projective AdS commute unless their localizations are connected by a causal geodesic (see also for an independent argument to the same effect). But if causal influence only propagates along geodesics, then no process like the decay of a particle (with non-geodesic trajectory due to recoil) is possible. Hence, the AdS theory will also not describe a system with interaction. Theories of physical interest thus rather “live” on covering spaces of projective AdS and of conformal Minkowski space-time, respectively, where the closed timelike curves of these manifolds are unwound.
The bijection $`\iota `$ generalizes to corresponding covering spaces, and especially the universal coverings of both spaces (disregarding the case $`d=1`$ which is again peculiar ). This allows for possible anomalous dimensions of conformal fields and nontrivial timelike commutation relations, and evades the above conclusion of geodesic propagation and absence of interaction on AdS. ## IV Conclusion We have obtained the proper identification of local quantum observables which underlies the “holographic” correspondence between quantum field theory on $`d+1`$-dimensional anti-deSitter space-time and $`d`$-dimensional conformal quantum field theory. It simply reflects the geometric law of causal propagation between AdS space-time and its boundary. But it suffices to define one theory in terms of the other, and entails a complete re-interpretation of the physical content. We conclude from this result, among other things, that AdS fields correspond to genuinely extended CFT observables. These can be a hint at conformal gauge theories. Strings play no particular role in the present explanation of the AdS-CFT correspondence. But it is conceivable that they re-appear as “collective” AdS variables required by crossing symmetry of the corresponding CFT.
no-problem/0003/astro-ph0003169.html
ar5iv
text
# Hydrodynamical Models of Accretion Disks in SU UMa Systems ## 1 Introduction SU UMa stars form a subclass of dwarf novae, which, in turn, are a subclass of cataclysmic variables (CVs). Like all CVs, the SU UMa stars are semidetached binary systems consisting of a white dwarf (the primary) and a low-mass, main-sequence star (the secondary). The secondary fills its Roche lobe, and a stream of gas flows from its surface toward the primary through the inner Lagrangian point. Because of its excess angular momentum, the stream is deflected from its original direction, and an accretion disk is formed around the primary. The disk may be subject to a thermal instability (Smak 1999), resulting in episodes of enhanced accretion rate. To a distant observer, such an episode is visible as a temporary brightening of the star, commonly referred to as an outburst. As opposed to ordinary dwarf novae, the SU UMa stars exhibit a clearly bimodal distribution of outbursts. Normal outbursts have an amplitude of $`\sim `$ 3 mag, and last from one to four days. Superoutbursts are by $`\sim `$ 1 mag stronger, and last for up to several weeks. The recurrence time of normal outbursts (days to weeks) is not constant, and it varies substantially from one system to another. The superoutbursts repeat more regularly, and their recurrence time is much longer (months to years). In the extreme case of WZ Sge stars, superoutbursts are nearly exclusively observed, with a recurrence time of up to several tens of years. In superoutbursts, the light curve of a SU UMa system is modulated with a period a few per cent longer than the orbital period. Those modulations are referred to as superhumps. The superhump signal is known to originate from extended source(s) in the outer disk (Warner 1995). Superoutbursts are thought to be driven by a combination of thermal and tidal instability. During normal outbursts, the disk grows in size as it diffuses under the influence of increased viscosity. In systems with mass ratios $`q\lesssim 0.25`$, it eventually reaches up to the location of the 3:1 resonance, at which the orbital frequency of the disk gas is three times larger than the orbital frequency of the binary (we define $`q`$ as the ratio $`M_2/M_1`$, where $`M_1`$ and $`M_2`$ are the primary’s and secondary’s masses, respectively). Subsequently, the tidal instability sets in, the disk becomes eccentric, and, seen in the inertial frame, it performs a slow, prograde precession. The tidal influence of the secondary on (and the viscous dissipation in) the outer disk is largest when the bulk of the disk passes the secondary. The superhump period is then the beat period between the precession period and the orbital period of the binary (Osaki 1996). Disk precession and the superhump phenomenon have been the subject of rather intense theoretical investigations, largely based on numerical simulations. According to Lubow (1991a,b), the eccentricity builds up due to nonlinear interaction of waves, in which the m=3 component of the tidal field is a key factor. Heemskerk (1994) performed simulations using only that component, and he found that the disk became eccentric, but it precessed retrogradely. Moreover, with the full tidal potential, the accretion disk was kept away from the location of the resonance, and no significant eccentricity was produced. Heemskerk’s results are the only ones obtained with an Eulerian (fluid) code. All remaining models presented in the literature have been based on Lagrangian (particle) codes.
A detailed review of those calculations can be found in Murray (1998). While a qualitative agreement with observations of superhump systems was reached in several aspects, the models did not entirely agree with the analytical theory of the tidal instability. In particular, the measured eccentricity growth rates were much smaller than the predicted ones. The models themselves were often based on an extremely simple physical scenario (fully isothermal disk; gas from the secondary uniformly ”raining” onto a circular orbit within the disk). In some of them the tidal instability was initiated by an arbitrary increase of viscosity in the disk by a factor of 10. Finally, even in the models which properly followed the stream of gas from the inner Lagrangian point up to its collision with the edge of the disk, the resolution in the collision region was too poor to resolve the strong shock waves responsible for the hot spot phenomenon. In the present paper, we obtain Eulerian models of disks in SU UMa systems in order to isolate the influence of various approximations on the outcome of the simulations. The models can be directly compared to the Lagrangian models of Murray (1996, 1998). We perform both isothermal simulations, and simulations in which the full energy equation with a realistic cooling term is solved. We also compare ”rainfall-type” mass transfer models with those based on realistic modelling of the stream from the secondary. The physical assumptions on which our models are based are described in Sect. 2 together with the numerical methods employed to solve the equations of hydrodynamics. The models are presented in Sect. 3, and the results are discussed in Sect. 4. ## 2 Physical assumptions and numerical methods We simulate the flow of gas in the orbital plane of a binary consisting of two stars in circular orbits around the center of mass. We use spherical coordinates and a corotating reference frame centered on the primary, with the $`z`$-axis perpendicular to the orbital plane. Assuming that the ratio $`H/r`$ of the disk is constant, and the latitudinal velocity component is negligible, we can write the continuity equation and the equations describing conservation of radial and angular momentum in the following form: $$\frac{\partial \rho }{\partial t}+\nabla \cdot (\rho \stackrel{}{𝐯})=0$$ $$\frac{\partial (\rho v_r)}{\partial t}+\nabla \cdot (\rho v_r\stackrel{}{𝐯})=\rho \frac{v_\varphi ^2}{r}-\frac{\partial p}{\partial r}+\rho f_r+F_r^{visc}$$ $$\frac{\partial j}{\partial t}+\nabla \cdot (j\stackrel{}{𝐯})=-\frac{\partial p}{\partial \varphi }+r\rho f_\varphi +rF_\varphi ^{visc}$$ where for any variable $`a`$ $$\nabla \cdot (a\stackrel{}{𝐯})=\frac{1}{r^2}\frac{\partial (r^2av_r)}{\partial r}+\frac{1}{r}\frac{\partial (av_\varphi )}{\partial \varphi }$$ In these equations $`j=r\rho v_\varphi `$ is the angular momentum density measured in the corotating frame, and $`\stackrel{}{𝐟}`$ is the external force (gravitational and inertial) acting on unit volume. Viscous forces $`F^{visc}`$ are given by standard formulae (see e.g. Landau and Lifshitz 1982). We assumed that the kinematic viscosity coefficient and the ratio of the bulk to the shear viscosity coefficient are constant throughout the disk. The models are either isothermal or radiative. In the first case we use an isothermal equation of state $$p=c_s^2\rho ,$$ where $`c_s`$ is the isothermal sound speed, assumed to be constant in space and time.
In the second case the equation of state of an ideal gas with the ratio of specific heats $`\gamma `$ equal to $`5/3`$ is used, and the energy equation $$\frac{\partial E}{\partial t}+\nabla \cdot (E\stackrel{}{𝐯})=-p\nabla \cdot \stackrel{}{𝐯}+Q^{visc}-Q^{rad}$$ (1) is additionally solved, with $`E`$ standing for the internal energy density, $`Q^{visc}`$ for the heat generated by viscous forces, and $`Q^{rad}`$ for the heat radiated away. Cooling processes are described in the same way as in Różyczka and Spruit (1993). The equations are solved with the help of an explicit Eulerian code described by Różyczka (1985) and Różyczka and Spruit (1993). The inner edge of the grid is located at $`r_{in}=0.1d`$ from the primary (where $`d`$ is the distance between the components of the system), and the outer one ($`R_{out}`$) at the inner Lagrangian point $`L_1`$. For all models the same mass ratio $`q=3/17`$ is assumed, resulting in $`r_{L_1}=0.736d`$. The spacing of the grid is logarithmic in $`r`$ and uniform in the azimuthal coordinate $`\varphi `$, and there are $`50`$ and $`60`$ grid points in $`r`$ and $`\varphi `$, respectively. The gas can freely flow into the computational domain or out of it through the outer boundary of the grid. At the inner grid boundary only free outflow is allowed for, and no inflow can occur. Due to the explicit character of the numerical scheme, the length of the time step is limited by the Courant condition. In the present simulations the Courant factor is $`0.3`$. ## 3 Initial setup and results of simulations In the present paper we repeat and refine two simulations performed by Murray (1996), also referred to as models $`4`$ and $`5`$ in a later discussion by Murray (1998). As mentioned in the Introduction, Murray uses a Lagrangian SPH code, whereas we work with an Eulerian, grid-based one. Additionally, we employ a more realistic physical scenario, taking into account the effects of cooling, and using an ideal gas equation of state instead of an isothermal one. Following Murray we adopt $`q=3/17`$, a kinematic viscosity coefficient of $`2.5\times 10^{-4}d^2\mathrm{\Omega }_{}`$ (where $`\mathrm{\Omega }_{}`$ is the orbital frequency of the binary), and a sound velocity of $`0.02d\mathrm{\Omega }_{}`$ for the isothermal cases. As an initial condition we chose an isothermal, Keplerian disk with density varying as $`r^{-1.5}`$. The disk extends from the inner boundary of the grid up to $`r=0.35d`$. The rest of the grid is originally filled with a rarefied ambient medium at Keplerian rotation around the primary. The density of the ambient medium is 1% of the disk density at $`r=0.35d`$, and its pressure matches that of the disk. The original two models of Murray differ only in details of mass transfer. In the first case the gas from the secondary enters the computational domain as a narrow stream originating at $`L_1`$; in the second case the gas ”rains” uniformly onto the orbit corresponding to the circularization radius. In the following, they are referred to as stream and rain models, respectively (note that, by the definition of the circularization radius, if the mass transfer rate is the same in both cases then the rate of angular momentum transfer is also the same). Below we describe four models: radiative stream, radiative rain, isothermal stream, and isothermal rain. Hereafter, they are referred to as Models (or Cases) 1 - 4, respectively. All models were followed for 100 orbits of the binary.
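The grid geometry and time-step constraint just described can be made concrete with a short sketch. The snippet below (ours, not the authors' code) builds a 50 x 60 grid that is logarithmic in radius between $`r_{in}=0.1d`$ and the $`L_1`$ distance $`0.736d`$ and uniform in azimuth, and estimates a Courant-limited time step with the quoted factor of 0.3. The value of $`GM_1`$ for $`q=3/17`$ and the use of the corotating-frame Keplerian speed plus the sound speed as the signal speed are our own simplifying assumptions; the paper does not spell out its exact Courant condition.

```python
import numpy as np

# Units: d = 1, Omega_orb = 1, so the binary period is P = 2*pi.
NR, NPHI = 50, 60                 # grid points in r and phi (from the text)
R_IN, R_OUT = 0.1, 0.736          # inner edge and L1 distance, in units of d
COURANT = 0.3                     # Courant factor quoted in the text
CS = 0.02                         # isothermal sound speed, in units of d*Omega_orb
GM1 = 17.0 / 20.0                 # G*M_1 for q = 3/17 in these units (assumption)

r_edges = np.geomspace(R_IN, R_OUT, NR + 1)        # logarithmic radial spacing
phi_edges = np.linspace(0.0, 2.0 * np.pi, NPHI + 1)
r = 0.5 * (r_edges[1:] + r_edges[:-1])             # cell-centre radii
dr = np.diff(r_edges)                              # radial cell sizes
dphi = phi_edges[1] - phi_edges[0]

# Simplified signal speed: Keplerian azimuthal flow seen in the corotating
# frame plus the sound speed (the radial velocity is zero in the initial state).
v_phi = np.abs(np.sqrt(GM1 / r) - r)
dt = COURANT * min(np.min(dr / CS), np.min(r * dphi / (v_phi + CS)))
print(f"time step ~ {dt:.2e} Omega_orb^-1  (~{dt / (2.0 * np.pi):.1e} binary periods)")
```

For this setup the azimuthal condition at the innermost cells is the most restrictive, giving a few thousand time steps per binary orbit.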
In order to check if the balance between the matter added to the disk and accreted onto the white dwarf is established in the course of evolution, we monitored the disk mass as a function of time (Fig. 1). For Models 1, 2 and 4 the mass of the disk grows rapidly in the beginning, and then it reaches an equilibrium value. That value is almost two times greater in Case 4 (isothermal rain) than in Case 2 (radiative rain). In Case 3, after the phase of initial growth, the mass of the disk begins to oscillate with a period of $`45P`$ (where $`P`$ is the orbital period of the binary). The next step was to check whether the disks in our models became eccentric. For natural numbers $`k,l`$ we can calculate $$S_{k,l}=\left|\int _{r_{in}}^{r_{out}}\rho _{k,l}r^2dr\right|,$$ where $`\rho _{k,l}`$ are defined by the equation $$\rho =\sum _{k=0}^{\infty }\sum _{l=0}^{\infty }\rho _{k,l}\mathrm{exp}[i(k\theta -l\mathrm{\Omega }_{}t)]$$ and the term $`r^2`$ results from the assumption of constant ratio $`H/r`$ (in practice, the lower integration limit can be equal to $`r_{in}`$, whereas the upper one has to be placed at $`r=0.6`$ in order to avoid a highly asymmetric contribution from the stream). If the mass of the disk is constant, then an appropriate measure of the eccentricity is $`S_{1,0}`$. If, however, the mass is varying in the course of the simulation, $`S_{1,0}^{\prime }`$ should be used instead, where $`S_{1,0}^{\prime }\stackrel{def}{=}S_{1,0}/S_{0,0}`$ (note that $`S_{0,0}`$ is proportional to the mass of gas in the integration domain). Fig. 2 shows $`S_{1,0}^{\prime }`$ as a function of time. It can be immediately seen that $`S_{1,0}^{\prime }`$ is much greater in isothermal models than in radiative ones. In other words, isothermal disks become elliptic more readily. Secondly, only Model 4 (i.e. isothermal rain) exhibits a phase of exponential growth of $`S_{1,0}^{\prime }`$, which was predicted by Lubow’s theory. Even in that case, however, another prediction of the theory is not met, namely, the behavior of the (2,3) mode. The strength of that mode should be proportional to the time derivative of the strength of the (1,0) mode, whereas in our simulations it is roughly proportional to the strength of the (1,0) mode (see Fig. 3). Following Murray (1996, 1998), we calculated the rate of viscous dissipation in the disk as a function of time. The results of a Fourier analysis of that function for Models 3 and 4 in the range of 60–100 $`P`$ are shown in Figs. 4 and 5. In Case 4 (isothermal rain) the rate of viscous dissipation reaches a stationary value, whereas in Case 3 (isothermal stream) it exhibits large-amplitude oscillations (by a factor of 2) with a period of $`45P`$. Note that the maxima of the dissipation rate coincide with the maxima of the disk mass. In both models much more rapid oscillations of the dissipation rate are also seen, whose dominant frequency is a little lower than $`1P^{-1}`$ (all remaining peaks in the power spectra can be interpreted as its higher harmonics). The corresponding periods are equal to $`1.11P`$ in Model 3, and $`1.08P`$ in Model 4. In Model 3 the high-frequency oscillations are visible only within the dissipation rate maxima (on the ascending branch shortly before the peak, and on the entire descending branch). In Model 4 they are excited when the mass of the disk approaches its maximum, and they persist until the end of the simulation. A detailed examination of the high-frequency oscillations reveals their similarity to those found by Murray (1998) in his Models 4 and 5 (see his Fig. 6).
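To make the mode-strength diagnostic above concrete, the following sketch (our own illustration, not the FIT/SPEC or simulation code) computes an azimuthal analogue of $`S_{k,l}`$ from a single density snapshot $`\rho (r,\varphi )`$; from one snapshot only the azimuthal index $`k`$ is accessible, and the time index $`l`$ would require a Fourier transform over many outputs. The toy input disk and its 10% $`m=1`$ distortion are assumptions used purely to exercise the routine.

```python
import numpy as np

def mode_strength(rho, r, k, r_max=0.6):
    """|integral of rho_k * r^2 dr| for the k-th azimuthal Fourier coefficient.

    rho : 2-D array of shape (n_r, n_phi) on a polar grid
    r   : cell-centre radii in units of d
    """
    n_phi = rho.shape[1]
    rho_k = np.fft.fft(rho, axis=1)[:, k] / n_phi     # azimuthal coefficient vs r
    mask = r <= r_max                                 # exclude the stream-dominated outskirts
    f = rho_k[mask] * r[mask] ** 2                    # r^2 weight (constant H/r)
    integral = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(r[mask]))  # trapezoid rule
    return np.abs(integral)

# Toy example: a mildly lopsided disk, and the normalized measure S'_{1,0}.
r = np.geomspace(0.1, 0.736, 50)
phi = np.linspace(0.0, 2.0 * np.pi, 60, endpoint=False)
R, PHI = np.meshgrid(r, phi, indexing="ij")
rho = R ** -1.5 * (1.0 + 0.1 * np.cos(PHI))           # 10% m = 1 distortion
s0, s1 = mode_strength(rho, r, 0), mode_strength(rho, r, 1)
print(f"S'_1,0 = {s1 / s0:.3f}")                      # -> 0.050 for this toy disk
```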
Because these high-frequency oscillations are visible both in models with and without a stream, they cannot be excited by the interaction of the stream with the advancing and retreating edge of the disc, and most probably they are related to the tidal influence of the secondary. We found no similar oscillations in our radiative models. In Figs. 6 and 7 we present sequences of disk density maps for Models 1 and 3 (radiative stream and isothermal stream), covering the time-span from $`99P`$ to $`100P`$ (at that time the rapid oscillations of the viscous dissipation rate are fully developed in Model 3). In the radiative case the disk is almost circular. However, in the isothermal case the disk has a clearly nonaxisymmetric shape, which, at least in some frames, does not significantly differ from an elliptical one. The ellipse is precessing with a period a little longer than $`1P`$. In one precession period the maximum viscous dissipation occurs when the major axis of the ellipse is perpendicular to the line connecting the components of the binary ($`t=99.65P`$ in Fig. 7) (the same results were obtained by Murray). In both cases a nonaxisymmetric feature in the form of two spiral arms is also seen, which in Case 1 remains stationary in the corotating frame. We interpret it as the $`(2,2)`$ mode, excited and maintained by the tidal interaction of the secondary. Fig. 8 shows the density map of Model 3 at $`T=80P`$ (i.e. between the large-scale maxima of the dissipation rate, when no short-period oscillations are visible). At that moment the disk is nearly axisymmetric, and only the (2,2) mode has a significant amplitude. Thus, we conclude that the overall behaviour of Model 3 is in agreement with the tidal instability model of superhumps. ## 4 Discussion In the present paper we reported a series of four simulations of disks in cataclysmic variables of the SU UMa type. The least realistic model was obtained with an isothermal equation of state, and a ”rainfall-type” approximation for mass transfer from the secondary. The most realistic simulation involved a full energy equation, and a stream of matter originating at the inner Lagrangian point. We found that only the isothermal disks exhibit a clear tendency toward elliptic distortions accompanied by precession. Within the framework of Lubow’s (1992) theory, one could try to explain this result by assuming that the strength of the tidally excited (2,2) mode is significantly larger in radiative models than in the isothermal ones. This is because, according to the theory, the (2,2) oscillations tend to keep the disk gas away from the 3:1 resonance responsible for the growth of the tidal instability. Such an assumption, however, is not confirmed by the analysis of our results: the (2,2) mode appears to be equally strong in all models. The phase of an exponential growth of the (1,0) mode, foreseen by the theory, was found only in the least realistic model. However, contrary to theoretical predictions, even in that case the time derivative of the strength of the (1,0) mode was not proportional to the strength of the (2,3) mode. Oscillations with a period slightly longer than $`P`$, which may be tentatively associated with superhumps, were observed in isothermal models only. The period of oscillations found in radiative models was about 3 times shorter. The main results of this work may be summarized in three points: 1. As foreseen by the tidal instability theory, isothermal disks develop an appreciable eccentricity, and begin to precess.
The precession period is the same as the period of rapid fluctuations in the viscous dissipation rate, and it is slightly longer than the orbital period of the binary. The behaviour of the (1,0) mode, however, is not consistent with the theory. 2. In radiative models an elliptic, precessing disk does not develop. The dominant oscillations in the viscous dissipation curve have periods of $`P/3`$. 3. In all models the two-armed (2,2) mode, excited and maintained by the tidal forces of the secondary, is very clearly seen. Our conclusion is that the mechanism of the superhump phenomenon is not yet entirely understood, and further research on this subject is desirable. Różyczka & Spruit (1993) simulated a viscosity-free disk with radiative losses, in which angular momentum was transported by spiral shocks. They found an eruptive instability, qualitatively similar to outbursts of dwarf novae. During the eruption, the radius of the disk increased substantially. We suggest that low-viscosity disks prone to this type of instability may develop eccentricity and exhibit superhump-like oscillations during outbursts. This issue is presently under investigation. Acknowledgments This research was supported by the Committee for Scientific Research through the grant 2.P03D.004.13. ## REFERENCES * Heemskerk, M.H.M. 1994, Astron. Astrophys., 288, 807. * Landau, L.D., and Lifshitz, E.M. 1982, Fluid Mechanics (Oxford, Pergamon Press), p. 48. * Lubow, S. 1991a, Astrophys. J., 381, 259. * Lubow, S. 1991b, Astrophys. J., 381, 268. * Murray, J.R. 1996, MNRAS, 279, 402. * Murray, J.R. 1998, MNRAS, 297, 323. * Osaki, Y. 1996, P.A.S.P., 108, 39. * Różyczka, M. 1985, Astron. Astrophys., 147, 209. * Różyczka, M., and Spruit, H.C. 1993, Astrophys. J., 417, 677. * Smak, J.I. 1999, in Disk Instabilities in Close Binary Systems: 25 Years of the Disk-Instability Model, eds. S. Mineshige and J. C. Wheeler, Frontiers Science Series No. 26 (Universal Academy Press), p. 1. * Warner, B. 1995, Cataclysmic Variable Stars (Cambridge Univ. Press).
no-problem/0003/astro-ph0003167.html
ar5iv
text
# The Sombrero galaxy Based on observations taken with the Canada-France-Hawaii Telescope, operated by the National Research Council of Canada, the Centre National de la Recherche Scientifique of France, and the University of Hawaii, and observations with the NASA/ESA Hubble Space telescope obtained at the Space Telescope European Coordinating Facility, jointly operated by ESA and the European Southern Observatory. ## 1 Introduction This is the third of a series of papers dealing with optical observations of the Sombrero galaxy (M 104, NGC 4594). We report here the results of 3D spectroscopy of the nuclear region of M 104 with the TIGER instrument, in the 6750/460 Å spectral domain, which includes the \[N ii\]$`\lambda \lambda `$6548,6583, H$`\alpha `$ and \[S ii\]$`\lambda \lambda `$6717,6731 emission lines. This 3D dataset is used in combination with archived images acquired with the Wide Field and Planetary Camera 2 (WFPC2) and the Near Infrared Camera and Multi-Object Spectrometer (NICMOS) on board the Hubble Space Telescope (HST), to study the nuclear regions of this galaxy. This paper is organised as follows: Sect. 2 presents the observations and the data reduction; Sect. 3 shows the results; and Sect. 4 contains a brief discussion of the nature of the observed nuclear structures. Throughout this paper, we will use a distance to M 104 of 8.8 Mpc (Ciardullo et al. Ciar93 (1993)), yielding an intrinsic scale of $``$43 pc.arcsec<sup>-1</sup>. We will also use a systemic (heliocentric) velocity $`V_s`$ of 1080 km s<sup>-1</sup>, infered from the two-dimensional stellar velocity field (Emsellem et al. Ems96 (1996), hereafter Paper 2), and a value of 84° ($`\pm 2`$°) for the inclination of the galaxy (Paper 2). ## 2 Observations and data reduction ### 2.1 HST/WFPC2 and NICMOS imaging From the Space Telescope/European Coordinating Facility (ST/ECF) archive at the European Southern Observatory, we have retrieved WFPC2 and NICMOS images of M 104 in four bands: F547M, F658N, F814W, and F160W. Their total integration times were 1340, 1120, 1600, and 128 seconds, respectively. The data reduction was performed using the ST/ECF pipeline, with the most recent calibration data. The images were flux calibrated, rotated to the cardinal orientation (north up, east to the left), and corrected for geometric distorsion. An emission-line image (\[N ii\]+H$`\alpha `$) has been constructed by subtracting the F814W image from the on-band F658N image. Colour maps, similar to $`VI`$ and $`VH`$, were also constructed, using the F547M, F814W and F160W images<sup>1</sup><sup>1</sup>1The $`VI`$ map has already been presented by Pogge et al. (Pogge99 (1999)) and Kormendy et al. (Korm96 (1996)). Note that in Kormendy et al. (Korm96 (1996)), the FOS apertures in their Fig. 1 are on the wrong side (east and west are reversed).. ### 2.2 TIGER spectrography We obtained a total of 1.5 hour integration on the centre of M 104 using the TIGER 3D spectrograph (Bacon et al. Bacon95 (1995)) at the Canada France Hawaii telescope, in April 1996. The spectral domain (6750/460 Å) includes the \[N ii\]$`\lambda \lambda `$6548,6583, H$`\alpha `$ and \[S ii\]$`\lambda \lambda `$6717,6731 emission lines. The spectral sampling was 1.5 Å per pixel, with a spectral resolution of $``$3.5 Å FWHM. The data reduction was performed using a dedicated software (Rousset Rousset92 (1992)). The spatial sampling was $`0\stackrel{}{.}39`$ per lens, with a (seeing limited) spatial resolution of $`0\stackrel{}{.}95`$. 
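As a quick unit check (ours, not from the paper), the angular scales just quoted translate into physical scales at the adopted distance through the small-angle relation behind the 43 pc arcsec<sup>-1</sup> figure given in the Introduction:

```python
import math

D_PC = 8.8e6                                   # adopted distance in parsecs
ARCSEC_RAD = math.pi / (180.0 * 3600.0)        # one arcsecond in radians
pc_per_arcsec = D_PC * ARCSEC_RAD              # small-angle approximation
for label, theta in [("1 arcsec", 1.00), ("TIGER lens", 0.39), ("seeing FWHM", 0.95)]:
    print(f"{label:12s}: {theta:.2f} arcsec = {theta * pc_per_arcsec:.1f} pc")
# -> about 43 pc per arcsec, i.e. ~17 pc per lens and ~41 pc seeing FWHM
```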
Subtraction of the stellar continuum was achieved via a library of stellar and galactic spectra using a procedure which will be detailed in a forthcoming paper (Emsellem et al. in preparation). To improve the spatial resolution of this data set, we have used a Richardson-Lucy algorithm (Richardson 1972; Lucy 1974) to deconvolve each velocity slice of our continuum-subtracted data cube. The point-spread function used in this deconvolution was obtained by comparing the emission-line image derived from the TIGER data cube and the HST emission-line image. We limited the number of iterations to 40, and the spatial resolution after deconvolution is $`0\stackrel{}{.}5`$, with a new spatial sampling of $`0\stackrel{}{.}2`$ per pixel. The emission lines present in our spectra have been fitted using the FIT/SPEC software (Rousset Rousset92 (1992)) to reconstruct maps of the ionised gas distribution and kinematics. Except when explicitly mentioned, we only present results obtained by fitting a single Gaussian profile for each individual emission line, with all lines constrained to share the same velocity shift and width. Excellent fits were obtained, even for the nuclear emission-line profiles, despite their extended wings (e.g. see Fig. 3 in Kormendy et al. Korm96 (1996), hereafter K+96). ## 3 Results ### 3.1 Ionised gas distribution and kinematics In Fig. 1, we present maps of the \[N ii\]$`\lambda `$6583 line intensity, centroid velocity and velocity dispersion (the latter two being common to all fitted lines). The centre of the field has been defined as the location of the maximum line intensity, which coincides for all observed lines. The ionised gas distribution derived from our deconvolved data cube agrees with that obtained in the HST \[N ii\]+H$`\alpha `$ image (Pogge et al. Pogge99 (1999)), and exhibits three prominent features: a bright core, two roughly symmetric spiral-like structures, and a curved extension northwest of the nucleus. The spiral structures harbor velocities significantly lower than the circular velocities predicted by the multi-Gaussian expansion model of M 104 (Paper 2). Similar behavior has been noticed for gas between 10 and 50″ from the nucleus (Rubin et al. Rubin85 (1985)), as well as in nuclear FOS spectra (K+96). A cut of the velocity map along the major axis of the galaxy (Fig. 2) shows the presence of a strong velocity gradient close to the nucleus, with extrema of $`+130`$ and $`-200`$ km s<sup>-1</sup>, located $`0\stackrel{}{.}4`$ (18 pc) east and $`1\stackrel{}{.}1`$ (48 pc) west of the centre, respectively. These extrema are followed by an abrupt decrease of the velocity modulus, which reaches minima $`1\stackrel{}{.}4`$ (60 pc) east and $`2\stackrel{}{.}3`$ (100 pc) west of the nucleus, before increasing again. The kinematics of the ionised gas inside $`\sim `$1″ (43 pc) is thus decoupled from that of the gas in the spiral arms. This decoupling is also present in the datacube before deconvolution. After deconvolution, the velocity gradient in the central arcsecond is larger on the eastern side than on the western side. The velocity dispersion is centrally peaked, reaching a value of 380 km s<sup>-1</sup> in the deconvolved data. From their high spatial resolution FOS data, Nicholson et al. (Nich98 (1998)) obtained larger central velocity dispersions, of 540 and 390 km s<sup>-1</sup> for \[N ii\] and H$`\alpha `$, respectively.
However, this discrepancy can easily be accounted for by the difference in spatial resolution and the fact that we did not attempt to model the \[N ii\] and H$`\alpha `$ lines separately. Note also that the central velocity dispersion for the gas in our deconvolved data is similar to the stellar one from K+96 (FOS data). ### 3.2 Colour map and dust distribution The colour maps (Fig. 3) outline the (patchy) dust structures present in the central region of M 104. As already pointed out in Paper 1, there is a straight-line dust lane southeast of the nucleus. It has a projected length $`>1\stackrel{}{.}2`$ ($`50`$ pc). Taking a minimum inclination of 82° for the galaxy and assuming that this dust lane lies in the equatorial plane of the galaxy, this yields a deprojected length $`>370`$ pc. The associated $`E(VI)`$ reaches 0.2 mag, corresponding to an apparent A<sub>V</sub> $``$ 0.5 mag (for $`R_V=3.1`$). Especially in the $`VH`$ colour map (Fig. 3, bottom panel), there is a hint for the existence of a symmetric, though much fainter, extinction feature on the northern side of the nucleus. Last, the nucleus of the galaxy appears extremely blue (local $`VI`$ difference of 0.3 mag) compared to its surroundings. It is unlikely that this feature is an artifact<sup>2</sup><sup>2</sup>2We checked that it is still present after harmonisation of the spatial resolutions in the F547M, F814W, and F160W images., as already mentioned in K+96. ## 4 Discussion Recently, Regan & Mulchaey (Regan99 (1999)) argued that nuclear bars may not be the primary agent for the fueling of active galactic nuclei (AGN) and suggested nuclear spirals as an alternative mechanism. Martini & Pogge (MP99 (1999)) also analysed visible and near-infrared HST images of a sample of 24 Seyfert 2 galaxies and found that 20 of these exhibit nuclear spirals (with only 5 clear nuclear bars). Physical processes involved in the formation of such spirals are not clear yet, although acoustic instabilities have often been mentioned as a possible mechanism in non self-gravitating nuclear discs (Montenegro et al. MYE99 (1999), Elmegreen et al. Elm+98 (1998), and references therein). In M 104 (classified as a liner), kiloparsec-scale spiral arms are indeed present but a straight dust lane is also found closer to the nucleus: it is a specific signature of strong bars with inner Lindblad resonance (Athanassoula Lia92 (1992)). If it actually traces a nuclear bar, this dust lane should have a symmetric counterpart on the other side of the nucleus. There is indeed a hint for such a feature (see Sect. 3.2) but it is rather weak. However, this weakness could be attributed to the fact that this second dust lane would be located on the far (north) side of the galaxy. To test this hypothesis, we computed the effect of the presence of two symmetric dust filaments using the luminosity density model of M 104 of Emsellem (Ems95 (1995), Paper 1 hereafter). We assumed the filaments to be in the equatorial plane of the galaxy (10° from end-on), and their characteristics were set to ensure that the mean extinction for the southern filament was consistent with the observations. The model predicts that the apparent extinction should rapidly decrease northward, with apparent $`A_V<0.03`$ mag or $`E(VH)<0.026`$, $`0\stackrel{}{.}6`$ north of the nucleus (for $`R_V=3.1`$). These are upper limits since dust scattering and clumpiness would tend to significantly reduce these values. 
For these extinction levels, we indeed expect the northern filament to be barely detectable in our colour maps. The observed kinematics would also fit naturally into the strong nuclear bar picture. The fact that, overall, the velocities are small compared to the predicted circular velocities obviously argues for the presence of strongly non-circular motions. The velocity profile in the spiral features could be explained by the combination of the streaming motions and projection effects as the spiral curves around the nucleus and becomes perpendicular to the line-of-sight. If the nuclear bar hypothesis is correct, the existence of offset straight dust lanes also requires the presence of an inner Lindblad resonance (ILR), and an extended $`x_2`$ orbit family (e.g. Athanassoula & Bureau Lia99 (1999), hereafter AthB99). This would explain the observed kinematical decoupling between the gas in the nucleus and in the spiral arms, as clearly illustrated in the models of AthB99. The observed asymmetry in the central velocity gradients also agrees qualitatively with the predictions made by AthB99, when dust is included (see their Fig. 10 with e.g. $`\psi =22.5`$°). We should however keep in mind that significant differences exist between the gas distribution as observed in the core of M 104 and as idealised in the model of AthB99. The presence of a large-scale bar has already been suggested in Papers 1 and 2 (pattern speed $`\mathrm{\Omega }_p`$ of $`\sim `$120 km s<sup>-1</sup> kpc<sup>-1</sup> for a distance to M 104 of 8.8 Mpc). The mass model predicted a strong ILR for this primary bar, located roughly 20″ from the nucleus. The secondary nuclear bar discussed in this letter would then be well inside this resonance. From the extension and orientation of the central dust lanes, we can roughly estimate a semi-major axis of $`a=425`$ pc ($`\sim `$10″), and an orientation of approximately 10° from end-on<sup>3</sup><sup>3</sup>3See Gerhard Ger89 (1989) for this bar. The location of the corotation can be estimated from the relation $`r_L\approx 1.2\times a`$ (AthB99, $`r_L`$ is the Lagrangian radius), yielding a value of $`\sim `$12″. This is right where the transition region between the inner and outer disks occurs (Seifert & Scorza Seifert96 (1996)). More detailed modelling is needed to accurately estimate its pattern speed. While there are clearly some hints of the presence of a nuclear bar in M 104, as discussed above, it is also clear that additional information is needed to confirm or rule out its existence. In particular, the main support for the nuclear bar scenario comes from the presence of the straight, southern dust filament, which could actually be much further from the nucleus than we assumed (e.g. if it is outside the equatorial plane of the galaxy). In the same way, the kinematical decoupling of the central regions could alternatively be due to, e.g., a nuclear Keplerian disk. The answer to this should come soon from HST/STIS observations of M 104. These high spatial resolution data will allow a detailed comparison between the observed PVDs and the gaseous kinematics predicted by various models (nuclear bar, Keplerian disk). Emission lines in the infrared, where dust is less problematic, could also be very valuable to understand the gas distribution and kinematics in more detail. ###### Acknowledgements. PF acknowledges support by the Région Rhône-Alpes under an Emergence fellowship.
no-problem/0003/cond-mat0003277.html
ar5iv
text
# Acknowledgments ## Acknowledgments This research was funded in part by the Russian Foundation for Basic Research under Grant No. 98-02-16170 and INTAS under Grant No. INTAS-OPEN 97-603.
no-problem/0003/astro-ph0003013.html
ar5iv
text
# The Mystery of Ultra-High Energy Cosmic Rays ## 1 Introduction The origin of cosmic rays with energies above $`10^{20}`$ eV is an intriguing mystery. At present, about 20 events above $`10^{20}`$ eV have been reported worldwide by experiments such as the High Resolution Fly’s Eye, AGASA, Fly’s Eye, Haverah Park, Yakutsk, and Volcano Ranch. (For recent reviews of these observations see, e.g., ). The unexpected flux above $`7\times 10^{19}`$ eV shows no sign of the Greisen-Zatsepin-Kuzmin (GZK) cutoff. A cutoff should be present if these ultra-high energy particles are protons, nuclei, or photons from extragalactic sources. Cosmic ray protons of energies above a few $`10^{19}`$ eV lose energy to photopion production off the cosmic microwave background (CMB) and cannot originate further than about $`50`$Mpc away from Earth. Nuclei are photodisintegrated on shorter distances due to the infrared background while the radio background constrains photons to originate from even closer systems. In addition to the presence of events past the GZK cutoff, the arrival directions of the highest energy events show no clear angular correlation with any of the plausible optical counterparts such as sources in the Galactic plane, the Local Group, or the Local Supercluster. If these events are protons, their arrival direction should point back to their sources, but unlike luminous structures in a 50 Mpc radius around us, the distribution of the highest energy events is isotropic. At these high energies the Galactic and extragalactic magnetic fields should not affect the orbits significantly. Protons at $`10^{20}`$ eV propagate mainly in straight lines as they traverse the Galaxy since their gyroradii are $``$ 100 kpc in $`\mu `$G fields which is typical in the Galactic disk so they should point back to their sources within a few degrees. Extragalactic fields are expected to be $`\mu `$G, and induce at most $``$ 1<sup>o</sup> deviation from the source. Even if the Local Supercluster has relatively strong fields, the highest energy events may deviate at most $``$ 10<sup>o</sup>. If astrophysical sources cannot explain these observations, the exciting alternative involves physics beyond the standard model of particle physics. Not only the origin of these particles may be due to physics beyond the standard model, but their existence can be used to constrain extensions of the standard model such as violations of Lorentz invariance. The absence of a GZK cutoff and the isotropy of arrival directions are some of the challenges that models for the origin of UHECRs face. This mystery has generated a number of proposals but no model can claim victory at this point. The exact shape of the spectrum at the highest energies as well as a clear composition determination awaits future observatories such as the Pierre Auger Project and the proposed satellites OWL and Airwatch. In this talk, I briefly review the models that attempt to solve this mystery. For more extensive reviews, see . ## 2 Astrophysical Zevatrons These challenging observations have generated two different proposals to reaching a solution: A bottom-up approach involves looking for Zevatrons, possible acceleration sites in known astrophysical objects that can reach ZeV energies, while a top-down approach involves the decay of very high mass relics from the early universe and physics beyond the standard model of particle physics. 
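Before turning to candidate accelerators, a quick order-of-magnitude check (ours) of the propagation argument made in the Introduction: the gyroradius of a $`10^{20}`$ eV proton in a microgauss field follows from $`r_g=E/(ZeB)`$ for an ultra-relativistic particle, evaluated here in Gaussian units.

```python
E_EV = 1.0e20                 # particle energy in eV
Z = 1                         # proton
B_GAUSS = 1.0e-6              # 1 microgauss, typical of the Galactic disk
ERG_PER_EV = 1.602e-12
ESU = 4.803e-10               # elementary charge in esu
CM_PER_KPC = 3.086e21

r_g_cm = (E_EV * ERG_PER_EV) / (Z * ESU * B_GAUSS)
print(f"gyroradius ~ {r_g_cm / CM_PER_KPC:.0f} kpc")   # -> ~110 kpc
```

This reproduces the roughly 100 kpc figure quoted above for propagation through the Galactic disk; inverting the same relation gives the acceleration estimate $`E_{\mathrm{max}}=ZeBL`$ used in the next paragraph.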
Cosmic rays can be accelerated in astrophysical plasmas when large-scale macroscopic motions, such as shocks and turbulent flows, are transferred to individual particles. The maximum energy of accelerated particles, $`E_{\mathrm{max}}`$, can be estimated by requiring that the gyroradius of the particle be contained in the acceleration region: $`E_{\mathrm{max}}=ZeBL`$, where $`Ze`$ is the charge of the particle, $`B`$ is the strength and $`L`$ the coherence length of the magnetic field embedded in the plasma. For $`E_{\mathrm{max}}>10^{20}`$ eV and $`Z\sim 1`$, the only known astrophysical sources with reasonable $`BL`$ products are neutron stars ($`B\sim 10^{13}`$ G, $`L\sim 10`$ km), active galactic nuclei (AGNs) ($`B\sim 10^4`$ G, $`L\sim 10`$ AU), radio lobes of AGNs ($`B\sim 0.1\mu `$G, $`L\sim 10`$ kpc), and clusters of galaxies ($`B\sim \mu `$G, $`L\sim 100`$ kpc).

Clusters of Galaxies: Cluster shocks are reasonable sites to consider for ultra-high energy cosmic ray (UHECR) acceleration, since particles with energy up to $`E_{\mathrm{max}}`$ can be contained by cluster fields. However, efficient losses due to photopion production off the CMB during the propagation inside the cluster limit UHECRs in cluster shocks to at most $`\sim `$ 10 EeV.

AGN Radio Lobes: Next on the list of plausible Zevatrons are extremely powerful radio galaxies. Jets from the central black hole of an active galaxy end at a termination shock where the interaction of the jet with the intergalactic medium forms radio lobes and ‘hot spots’. Of special interest are the most powerful AGNs, where shocks can accelerate particles to energies well above an EeV via the first-order Fermi mechanism. These sources may be responsible for the flux of UHECRs up to the GZK cutoff. An especially powerful nearby source may be able to reach energies past the cutoff. However, extremely powerful AGNs with radio lobes and hot spots are rare and far apart. The closest known object is M87 in the Virgo cluster ($`\sim `$ 18 Mpc away), which could be a main source of UHECRs. Although a single nearby source can fit the spectrum for a given strength and structure of the intergalactic magnetic field, it is unlikely to match the observed arrival direction distribution. After M87, the next known nearby source is NGC 315, which is already too far away at a distance of $`\sim `$ 80 Mpc. A recent proposal tries to get around this challenge by invoking a Galactic wind with a strongly magnetized azimuthal component. Such a wind can significantly alter the paths of UHECRs such that the observed arrival directions of events above $`10^{20}`$ eV would trace back to the Virgo cluster close to M87. Whether our Galaxy has such a wind is yet to be determined. The proposed wind seems hard to support physically and would focus most events into the northern Galactic pole, rendering point source identification fruitless. Future observations of UHECRs from the Southern Hemisphere by the Southern Auger Site will provide data on previously unobserved parts of the sky and help distinguish plausible proposals for the effect of local magnetic fields on arrival directions. Full sky coverage is a key discriminator of such proposals.

AGN - Central Regions: The powerful engines that give rise to the observed jets and radio lobes are located in the central regions of active galaxies and are powered by the accretion of matter onto supermassive black holes. It is reasonable to consider the central engines themselves as the likely accelerators.
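As a rough consistency check of the confinement numbers quoted at the beginning of this section, the gyroradius and the confinement limit can be evaluated directly. The following is a minimal sketch (Python, SI units), not part of the original analysis; it uses only the representative field strengths and sizes quoted above, and evaluates $`E_{\mathrm{max}}=ZeBLc`$, i.e. the formula above with the factor of $`c`$ restored for SI units.

```python
# Back-of-the-envelope magnetic confinement estimates (SI units).
# B and L values below are the representative numbers quoted in the text, not fits.
E_CHARGE = 1.602e-19   # C
C_LIGHT  = 2.998e8     # m/s
EV       = 1.602e-19   # J per eV
KPC      = 3.086e19    # m

def gyroradius_kpc(E_eV, B_tesla, Z=1):
    """Larmor radius r = E / (Z e B c) for an ultrarelativistic particle."""
    return (E_eV * EV) / (Z * E_CHARGE * B_tesla * C_LIGHT) / KPC

def E_max_eV(B_tesla, L_m, Z=1):
    """Confinement limit E_max = Z e B L c (the text's E_max = ZeBL with c = 1)."""
    return Z * E_CHARGE * B_tesla * L_m * C_LIGHT / EV

# A 10^20 eV proton in a 1 microgauss (1e-10 T) Galactic field:
print(gyroradius_kpc(1e20, 1e-10))       # ~100 kpc, as stated in the Introduction

# Candidate Zevatrons, with B and L as quoted above:
print(E_max_eV(1e9, 1e4))                # neutron star: a few x 10^21 eV
print(E_max_eV(1.0, 10 * 1.496e11))      # AGN core: ~5 x 10^20 eV
print(E_max_eV(1e-10, 100 * KPC))        # galaxy cluster: ~10^20 eV
```

The neutron star and AGN-core combinations comfortably reach ZeV energies, while a $`\mu `$G cluster field over 100 kpc sits right at the $`10^{20}`$ eV threshold.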
In principle, the nuclei of generic active galaxies (not only the ones with hot spots) can accelerate particles via a unipolar inductor not unlike the one operating in pulsars. In the case of AGNs, the magnetic field is provided by the infalling matter and the spinning black hole horizon provides the imperfect conductor for the unipolar induction. The problem with AGNs as UHECR sources is two-fold: first, UHE particles face debilitating losses in the acceleration region due to the intense radiation field present in AGNs, and second, the spatial distribution of objects should give rise to a GZK cutoff in the observed spectrum. In the central regions of AGNs, loss processes are expected to downgrade particle energies well below the maximum achievable energy. This limitation has led to the proposal that quasar remnants, supermassive black holes in the centers of inactive galaxies, are more effective UHECR accelerators. In this case, losses are not as significant, but the distribution of sources should still lead to a clear GZK cutoff unless the spectrum is fairly hard.

Neutron Stars: Another astrophysical system capable of accelerating UHECRs is a neutron star. Acceleration processes inside the neutron star light cylinder are bound to fail much like the AGN central region case: ambient magnetic and radiation fields induce significant losses. However, the plasma that expands beyond the light cylinder is less subject to the main loss processes and may be accelerated to ultra high energies. One possible source of UHECRs past the GZK cutoff is the early evolution of neutron stars. In particular, newly formed, rapidly rotating neutron stars may accelerate iron nuclei to UHEs through relativistic MHD winds beyond their light cylinders. In this case, UHECRs originate mostly in the Galaxy and the arrival directions require that the primaries be heavier nuclei. Depending on the structure of Galactic magnetic fields, the trajectories of iron nuclei from Galactic neutron stars may be consistent with the observed arrival directions of the highest energy events. Moreover, if cosmic rays of a few times $`10^{18}`$ eV are protons of Galactic origin, the isotropic distribution observed at these energies indicates that the Galactic magnetic field can diffuse such particles; since iron nuclei at $`10^{20}`$ eV have a similar rigidity, they would be diffused by the same fields. This proposal awaits a clear composition determination.

Gamma-Ray Bursts: Transient high energy phenomena such as gamma-ray bursts may accelerate protons to ultra-high energies. Aside from both having unknown origins, GRBs and UHECRs have some similarities that argue for a common origin. Like UHECRs, GRBs are distributed isotropically in the sky, and the average rate of $`\gamma `$-ray energy emitted by GRBs is comparable to the energy generation rate of UHECRs of energy $`>10^{19}`$ eV in a redshift independent cosmological distribution of sources: both have $`\sim 10^{44}\mathrm{erg}/\mathrm{Mpc}^3/\mathrm{yr}`$. However, the distribution of UHECR arrival directions and arrival times argues against the GRB–UHECR common origin. Events past the GZK cutoff require that only GRBs from $`<50`$ Mpc contribute. Since less than about one burst is expected to have occurred within this region over a period of 100 yr, the source would appear as a concentration of UHECR events. Therefore, a very large dispersion of at least $`\sim `$ 100 yr in the arrival time of protons produced in a single burst is necessary.
The deflection by random magnetic fields combined with the energy spread of the particles is usually invoked to reach the required dispersion. If the dispersion in time and space is achieved, the energy spectrum for the nearby source(s) becomes very narrowly peaked, $`\mathrm{\Delta }E/E\ll 1`$. Finally, if the observed small scale clustering of arrival directions is confirmed by future experiments, with lower energy events in a cluster preceding higher energy ones, the burst hypothesis would be invalidated.

## 3 Hybrid Models

The UHECR puzzle has inspired proposals that use Zevatrons to generate UHE particles other than protons, nuclei, and photons. These use physics beyond the standard model in a bottom-up approach and are thus named hybrid models. The most economical among such proposals involves a familiar extension of the standard model, namely, neutrino masses. If some flavor of neutrino has a mass $`\sim 1`$ eV, the relic neutrino background will cluster in halos of galaxies and clusters of galaxies. High energy neutrinos ($`\sim 10^{21}`$ eV) accelerated in Zevatrons can annihilate on the neutrino background and form UHECRs through hadronic Z-boson decay. This proposal is aimed at generating UHECRs nearby (in the Galactic halo and Local Group halos) while using Zevatrons that can be much farther away than the GZK-limited volume, since neutrinos do not suffer the GZK losses. The weak link in this proposal is the nature of a Zevatron powerful enough to accelerate protons above ZeV energies that can produce ZeV neutrinos as secondaries. This Zevatron is quite spectacular, requiring an energy generation in excess of that of the presently known highest energy sources.

Another suggestion is that the UHECR primary is a new particle. The mass of a hypothetical hadronic primary can be limited by the shower development of the Fly’s Eye highest energy event to be $`<50`$ GeV. Both the long-lived new particle and the neutrino Z-pole proposals involve neutral particles, which are usually harder to accelerate (they are created as secondaries of even higher energy charged primaries) but can traverse large distances without being affected by the cosmic magnetic fields. Thus, a signature of such hybrid models for future experiments is a clear correlation between the positions of powerful Zevatrons in the sky, such as distant compact radio quasars, and the arrival directions of UHE events. Another exotic primary that can use a Zevatron to reach ultra high energies is the vorton. Vortons are small loops of superconducting cosmic string stabilized by the angular momentum of charge carriers. Vortons can be a component of the dark matter in galactic halos and be accelerated in astrophysical Zevatrons. Although this has not yet been clearly demonstrated, the shower development profile is also the likely liability of this model.

## 4 Top-Down Models

It is possible that none of the astrophysical scenarios are able to meet the challenge posed by the UHECR data as more observations are accumulated. In that case, one alternative is to consider top-down models. This proposal dates back to the work on monopolonia of Hill and Schramm. Other top-down proposals involve the decay of ordinary and superconducting cosmic strings, cosmic necklaces, vortons, and superheavy long-lived relic particles. The idea behind these models is that relics of the very early universe, topological defects (TDs) or superheavy relic (SHR) particles, produced after or at the end of inflation, can decay today and generate UHECRs.
Defects, such as cosmic strings, domain walls, and magnetic monopoles, can be generated through the Kibble mechanism as symmetries are broken with the expansion and cooling of the universe. Topologically stable defects can survive to the present and decompose into their constituent fields as they collapse, annihilate, or reach the critical current in the case of superconducting cosmic strings. The decay products, superheavy gauge and Higgs bosons, decay into jets of hadrons, mostly pions. Pions in the jets subsequently decay into $`\gamma `$-rays, electrons, and neutrinos. Only a few percent of the hadrons are expected to be nucleons. Typical features of these scenarios are a predominant release of $`\gamma `$-rays and neutrinos and a QCD fragmentation spectrum which is considerably harder than in the case of shock acceleration.

ZeV energies are not a challenge for top-down models, since symmetry breaking scales at the end of inflation are typically $`\sim 10^{21}`$ eV (typical X-particle masses vary between $`10^{22}`$ and $`10^{25}`$ eV). Fitting the observed flux of UHECRs is the real challenge, since the typical distance between TDs is the horizon scale, $`H_0^{-1}\sim 3h^{-1}`$ Gpc. The low flux hurts proposals based on ordinary and superconducting cosmic strings. Monopoles usually suffer the opposite problem: they would in general be too numerous. Inflation succeeds in diluting the number density of monopoles, usually making them too rare for UHECR production. To reach the observed UHECR flux, monopole models usually involve some degree of fine tuning. If enough monopoles and antimonopoles survive from the early universe, they may form a bound state, named monopolonium, that can decay generating UHECRs. The lifetime of monopolonia may be too short for this scenario to succeed unless they are connected by strings.

Once two symmetry breaking scales are invoked, a combination of horizon scales gives room for reasonable number densities. This can be arranged for cosmic strings that end in monopoles, making a monopole string network, or even more clearly for cosmic necklaces. Cosmic necklaces are hybrid defects where each monopole is connected to two strings, resembling beads on a cosmic string necklace. Necklace networks may evolve to configurations that can fit the UHECR flux, which is ultimately generated by the annihilation of monopoles with antimonopoles trapped in the string. In these scenarios, protons dominate the flux on the lower energy side of the GZK cutoff, while photons tend to dominate at higher energies depending on the radio background. If future data can settle the composition of UHECRs from 0.01 to 1 ZeV, these models can be well constrained. In addition to fitting the UHECR flux, topological defect models are constrained by limits on the flux of high energy photons, from 10 MeV to 100 GeV, observed by EGRET.

Another interesting possibility is the recent proposal that UHECRs are produced by the decay of unstable superheavy relics that live much longer than the age of the universe. SHRs may be produced at the end of inflation by non-thermal effects such as a varying gravitational field, parametric resonances during preheating, instant preheating, or the decay of topological defects. These models need to invoke special symmetries to ensure unusually long lifetimes for SHRs and that a sufficiently small percentage decays today producing UHECRs. As in the topological defect case, the decay of these relics also generates jets of hadrons.
These particles behave like cold dark matter and could constitute a fair fraction of the halo of our Galaxy. Therefore, their halo decay products would not be limited by the GZK cutoff, allowing for a large flux at UHEs. Future experiments should be able to probe these hypotheses. For instance, in the case of SHR and monopolonium decays, the arrival direction distribution should be close to isotropic but show an asymmetry due to the position of the Earth in the Galactic halo. Studying plausible halo models and the expected asymmetry will help constrain halo distributions, especially when larger data sets are available from future experiments. High energy gamma-ray experiments such as GLAST will also help constrain the SHR models through the products of the electromagnetic cascade.

## 5 Conclusion

Next generation experiments such as the High Resolution Fly’s Eye, which recently started operating, the Pierre Auger Project, which is now under construction, the proposed Telescope Array, and the OWL and Airwatch satellites will significantly improve the data at the highest-energy end of the cosmic ray spectrum. With these observatories a clear determination of the spectrum and spatial distribution of UHECR sources is within reach. The lack of a GZK cutoff should become clear with HiRes and Auger, and most extragalactic Zevatrons may be ruled out. The observed spectrum will distinguish Zevatrons from top-down models by testing power laws versus QCD fragmentation fits. The cosmography of sources should also become clear, making it possible to discriminate between plausible populations of UHECR sources. The correlation of arrival directions for events with energies above $`10^{20}`$ eV with some known structure such as the Galaxy, the Galactic halo, the Local Group, or the Local Supercluster would be key in differentiating between models. For instance, a correlation with the Galactic center and disk should become apparent at extremely high energies for the case of young neutron star winds, while a correlation with the large scale galaxy distribution should become clear for the case of quasar remnants. If SHRs or monopolonia are responsible for UHECR production, the arrival directions should correlate with the dark matter distribution and show the halo asymmetry. For these signatures to be tested, full sky coverage is essential. Finally, an excellent discriminator would be an unambiguous composition determination of the primaries. In general, Galactic disk models invoke iron nuclei to be consistent with the isotropic distribution, extragalactic Zevatrons tend to favor proton primaries, and photon primaries are more common for early universe relics. The hybrid detector of the Auger Project should help determine the composition by simultaneously measuring the depth of shower maximum and the muon content of the same shower. The prospect of testing extremely high energy physics as well as solving the UHECR mystery awaits improved observations that should come in the next decade with experiments under construction or in the planning stages.

This work was supported by NSF through grant AST 94-20759 and DOE grant DE-FG0291 ER40606.
# 16 x 25 Ge:Ga Detector Arrays for FIFI LS

## 1 INTRODUCTION

For the Far Infrared Field Imaging Line Spectrometer (FIFI LS), we need two-dimensional $`16\times 25`$ pixel detector arrays which cover the wavelength range from 40 to 210 $`\mu `$m. Gallium-doped germanium detectors have proven to be very sensitive in the wavelength range of 40 to 120 $`\mu `$m. Application of $`\sim `$ 600 Nmm<sup>-2</sup> of stress along the crystallographic axis shifts the long wavelength cutoff from 120 $`\mu `$m to approximately 220 $`\mu `$m. Thus, we will use two Ge:Ga detector arrays, one stressed and one unstressed, to cover the desired wavelength range. The concentrations of gallium dopants will be $`1\times 10^{14}`$ cm<sup>-3</sup> and $`2\times 10^{14}`$ cm<sup>-3</sup> for the stressed and unstressed arrays, respectively. The expected dark detector NEP is $`5\times 10^{-18}`$ WHz<sup>-1/2</sup>, which has been reached with a similar design in a balloon-borne experiment. Some of the operational parameters of the detector arrays are listed in Tab. 1.

## 2 ARRAY DESIGN

The design of the stressed and unstressed detector arrays will be almost identical. For stability reasons, the unstressed detector array will actually be stressed to about 10 % of the long-wavelength stress level. As a tradeoff between sensitivity and susceptibility to cosmic rays, the size of the detector pixels was chosen to be roughly 1 mm<sup>2</sup> cross section with an interelectrode distance of 1.5 mm. Due to the high reflectivity and low absorption coefficient of Ge:Ga, the quantum efficiency of a free-standing photoconductor would be very low. The probabilities for single-pass absorption of a FIR photon by detectors of this size are estimated to be 10 % and 20 % for unstressed and stressed Ge:Ga, respectively. Therefore, the detectors are located in gold-plated integrating cavities with area-filling light cones to maximize the quantum efficiency. Electrically, the detector housing is maintained at a constant bias voltage, while the signal ends of the individual pixels are insulated from the housing by a thin shim of sapphire.

### 2.1 Finite Element Analysis

In order to optimize the stressing mechanism, we studied the stress distribution of a single detector pixel between two cylindrical steel pistons which apply an external force of 500 N, as shown in Fig. 1. Even in the case of a perfectly centred detector, the distribution of stress values within the detector is very inhomogeneous, with stress values varying between 405 and 765 Nmm<sup>-2</sup>. This leads to a broadening of the spectral response curve and an enhanced probability of detector breakage. The stress uniformity within one pixel can be significantly improved by using a piston of a material with a higher Young’s modulus or pedestals between the pistons and the detector. In our analysis, the use of silver pedestals reduces the variation of stress to a range of 486 to 507 Nmm<sup>-2</sup>. As we see from Fig. 1 (middle), the range of stress values is drastically increased if the detector is slightly (20 $`\mu `$m) decentred, which demonstrates the importance of precisely centred detectors. Even though silver pedestals are used in this case, the stress values vary between 301 and 691 Nmm<sup>-2</sup>. On the right panel of Fig. 1, the stress distribution for the detector housing is shown. The highest stress levels occur at the “c-shaped” clamp, while the material near the detector stack remains unaffected.
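For orientation, the nominal stress implied by the FEA load follows from force over area. The short sketch below (Python) assumes the full 500 N acts uniformly on the roughly 1 mm<sup>2</sup> load-bearing cross section quoted above, which is an idealization of the real contact geometry rather than a statement from the paper.

```python
# Nominal (average) stress on a pixel: sigma = F / A.
# Assumes the load acts uniformly on the ~1 mm^2 cross section quoted above.
def nominal_stress_N_per_mm2(force_N, area_mm2=1.0):
    return force_N / area_mm2

print(nominal_stress_N_per_mm2(500.0))   # 500 N/mm^2, mid-range of the FEA values
print(nominal_stress_N_per_mm2(600.0))   # ~600 N/mm^2, the long-wavelength stress level
```

The 500 N FEA load therefore corresponds to an average stress of about 500 Nmm<sup>-2</sup>, consistent with the 486 to 507 Nmm<sup>-2</sup> range found with silver pedestals; the much larger excursions quoted for the decentred case reflect local deviations from this average.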
### 2.2 Stressing Mechanism

Each detector is placed within about 20 $`\mu `$m of the centre of its cavity. As the finite element analysis (FEA) of the previous section has shown, this positional accuracy is required to avoid inhomogeneous stress, which would lead to inhomogeneous responsivity and an increased probability of pixel breakage. The edges of the detectors face the entrance holes of the cavities, to ensure that the first pass of reflected radiation is trapped by the integrating cavity and not reflected directly back out through the entrance aperture. The entire detector array will consist of 25 linear $`1\times 16`$ pixel detector arrays, each equipped with its own linear light cone array. In Fig. 2, one linear stressed Ge:Ga detector array is sketched. The detector housing as well as the light cone arrays are machined by spark erosion, which ensures high precision without introducing mechanical stress.

As for the previous $`5\times 5`$ FIFI detector array, one screw at the top of the array serves as the stressing mechanism. To gradually increase the stress on the detectors when the stressing screw is turned, the horizontal slit is designed to create a spring mechanism. The spring constant at room temperature for the stressed detector array is $`513`$ Nmm<sup>-1</sup> and will be about $`50`$ Nmm<sup>-1</sup> for the unstressed array. The detector housing is constructed of a high-strength aluminum alloy (7075 Aluminum T6), which ensures high mechanical stability and good thermal conductivity. Due to the differential thermal contraction between the housing and the components of the detector stack, the stress on the detectors increases during cool-down. For the stressed array, this increase is estimated to be about 100 N. The vertical slit decouples the detector stack from any distortions caused by the stress application. The ball-and-socket pivot design of the tungsten carbide pistons compensates for non-parallel surfaces. To provide a uniform stress along the stack of detectors, the upper and lower parts of this combination are machined from spheres with diameters matching the 3 mm diameter of the detector cavities, so that they can rotate slightly without conducting forces into the housing. Due to the high Young’s modulus of tungsten carbide, the bending of the pistons around the corners of the detectors is minimized. The signal end of a detector is in contact with the pedestal of a CuBe pad. Both the high Young’s modulus of the tungsten carbide pistons and the pedestals of the CuBe pads lead to an improved stress homogeneity within the pixels (see section 2.1). The CuBe pad is electrically insulated from the housing by a thin sheet of sapphire. The signal wire is soldered to the CuBe pad and led to the back end of the detector housing, where it is soldered to a connector. The connector is linked to the cryogenic readout electronics (CRE) located at the back of the detector housing.

### 2.3 Light Cones

To collect all of the light in the focal plane and to feed it into the appropriate detector cavity, funnel-shaped light cones are used. The back end of each linear light cone array forms a part of the integrating cavity (see Fig. 2). Like the cavities, the light cones are gold plated. The light cones are individually tilted in the focal plane to match the angle of incoming light from the pupil plane. Because they are easier and therefore cheaper to produce, we will use straight light cones, which have been used successfully on the KAO with FIFI, rather than parabolic Winston-type cones.
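Before turning to the light cone optimization, the force budget of the stressing mechanism described in Sec. 2.2 can be made explicit with a small sketch (Python). It assumes the spring behaves linearly (Hooke's law) at the quoted room-temperature spring constant and simply adds the estimated cool-down force increase; both the linearity and the ~1 mm<sup>2</sup> load-bearing area per pixel are simplifying assumptions, not statements from the paper.

```python
# Idealized force budget for the stressed-array spring mechanism.
K_STRESSED_N_PER_MM = 513.0   # room-temperature spring constant (stressed array)
COOLDOWN_EXTRA_N    = 100.0   # estimated force increase from differential contraction
PIXEL_AREA_MM2      = 1.0     # assumed load-bearing cross section per pixel

def screw_deflection_mm(force_N, k=K_STRESSED_N_PER_MM):
    """Spring deflection needed to reach a given force, x = F / k (Hooke's law)."""
    return force_N / k

def cold_stress_N_per_mm2(room_force_N):
    """Approximate per-pixel stress after cool-down."""
    return (room_force_N + COOLDOWN_EXTRA_N) / PIXEL_AREA_MM2

print(screw_deflection_mm(500.0))    # ~1 mm of spring travel for a 500 N preload
print(cold_stress_N_per_mm2(500.0))  # ~600 N/mm^2 cold, near the nominal stress level
```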
In order to optimize the light cone arrays, we studied two different types of light cones (Fig. 3) whose parameters are listed in Tab. 2. Since the coupling holes are the main loss source of the detector cavities, we attempt to reduce their size as much as possible to optimize the quantum efficiency. On the other hand, the diameter of the coupling hole should be at least of order a few $`\lambda `$ to minimize diffraction losses. To what extent type II, with a coupling hole diameter of 0.5 mm, is already affected by diffraction losses is not yet clear and has to be determined. On the right panel of Fig. 3, the simulated transmission for both types is plotted. Ray bundles from solid angles exceeding those of the pupil are no longer entirely transmitted, allowing the light cones to help reduce stray light. In this respect, type II is somewhat more efficient: it totally rejects light from sources at angles larger than $`\alpha _{\mathrm{cr}}`$ = 10<sup>o</sup> from the optical axis of the light cone, whereas $`\alpha _{\mathrm{cr}}`$ = 15<sup>o</sup> for type I. However, this raytrace simulation uses geometrical optics and does not account for diffraction losses.

### 2.4 Readout Electronics

The purpose of the cryogenic readout electronics (CRE), together with some passive components, is to amplify and multiplex the signals of the 16 detectors of one linear array. The CRE is a specially designed CMOS circuit under development for FIRST-PACS, operating at liquid helium temperature or lower (see Fig. 4). The detector current of each pixel is read out by an integrating amplifier. A sample-and-hold stage acts as analog memory between the integrator and the multiplexer circuit. All channels of one complete detector column are sampled at the same time and switched sequentially to the output of the CRE by the sample-and-hold circuit. The complete readout of each $`16\times 25`$ pixel photoconductor array is done by a 16-bit BiCMOS A/D converter stage. Very careful shielding, including output signal feedback on the shield, is used to minimize the load on the cryogenic output stage and the crosstalk between the analog signal lines leading to the converter stage. A digital multiplexer circuit provides parallel/serial conversion of the 16-bit data words generated by the A/D converter stage for both photoconductor arrays. The resulting 8 Mbit per second serial data stream (at 10 kHz maximum sampling rate) is transmitted to the data acquisition system via fiber optics.

## 3 PERFORMANCE MEASUREMENTS

Several prototype linear $`1\times 16`$ pixel arrays based on the above design have been successfully assembled and tested. A photograph of one of the assembled arrays is shown in Figure 5.

### 3.1 Stress Uniformity

To verify the functionality of the described stressing mechanism, we measured the room temperature resistance as a function of the applied stress, as shown in the left part of Fig. 6. The decrease of the resistance with increasing stress is very homogeneous along the stack of detectors, and no stress gradient is noticeable. The relative spectral responses of four detectors at different positions within the stack, measured with a Fourier-Transform Spectrometer (FTS) at a stress of 540 Nmm<sup>-2</sup>, also match quite well (Fig. 6). The cut-off wavelengths of the four detectors agree especially well.
### 3.2 Responsivity and Noise Equivalent Power

A setup of two blackbodies at 4 K and 20 K, one linear stressed detector array, and a Fabry-Perot interferometer, tuned to a centre wavelength of 170 $`\mu `$m at a resolving power of about 50, was used to measure the responsivity and NEP of the detectors. The parameters were set to produce a photon power of $`3\times 10^{-13}`$ W per pixel, which corresponds to the expected background with FIFI LS ($`\lambda =170\mu `$m, $`\lambda /\mathrm{\Delta }\lambda =2000`$) on SOFIA. Unfortunately, in our preliminary measurement the detectors received a photon background four times higher due to unwanted stray light. Since we also lacked the readout electronics described in section 2.4, we used the transimpedance amplifiers (TIAs) with GaAs FETs used for the FIFI array. With that, only a few detectors could be tested. In Fig. 7 we compare the responsivity and NEP measured for our detectors and for the FIFI array, measured with narrow-band filters centred at 163 $`\mu `$m at a photon background of $`2.39\times 10^{-13}`$ W per pixel.

As shown in Fig. 7, the responsivity of our detectors at a given bias field is lower than for the FIFI array. However, as we see from the NEP measurement, this effect is compensated by our ability to apply a higher bias field. The NEP of the detectors is almost constant over the considered range of bias fields, whereas the NEP of the FIFI detectors rises steeply for bias fields $`13`$ Vm<sup>-1</sup>, where impact ionization leads to increased noise. Even with a background four times higher, above a bias field of about 7 Vm<sup>-1</sup> our measured NEP lies below that measured for the FIFI array. Under the assumption of background-limited performance of the detectors and no noise contribution from the readout electronics, we extrapolated the measured NEP to the desired photon power of $`3\times 10^{-13}`$ W per pixel. The extrapolated NEP is well below that measured for the FIFI array, which may be due to the improved cavity design and the resulting enhanced quantum efficiency. The $`NEP_{\mathrm{BLIP}}`$ for background-limited performance can be expressed as

$$NEP_{\mathrm{BLIP}}=\frac{h\nu }{\eta }\left(\frac{2^3A\mathrm{\Omega }}{\lambda ^2}\mathrm{\Delta }\nu \,t\,ϵ\,\eta \,\frac{1}{1-\mathrm{e}^{-h\nu /kT}}\left(1+t\,ϵ\,\eta \,\frac{1}{1-\mathrm{e}^{-h\nu /kT}}\right)\right)^{1/2},$$ (1)

where $`t`$ and $`ϵ`$ are the transmission and emissivity of a blackbody at a temperature $`T`$, and $`\eta `$ is the quantum efficiency of the detectors. With the measured NEP we used Eq. 1 to calculate a quantum efficiency $`\eta =45\%`$, which is a substantial improvement over the average of 19 % reported for the FIFI detector array. The results of our first measurements are encouraging. However, we need more measurements on a larger number of detectors to confirm these first results.

## ACKNOWLEDGMENTS

We are thankful to H. Dohnahlek and G. Kettenring for the design work, and to A. J. Baker for careful reading of the manuscript.
# Intervening O VI Quasar Absorption Systems at Low Redshift: A Significant Baryon Reservoir

Based on observations with the NASA/ESA Hubble Space Telescope, obtained at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-2555. The data were reduced with the STIS Team software, and the research was supported by NASA through grants GO-08165.01-97A and GO-08165.02-97A from STScI.

## 1. Introduction

The resonance line doublet of Li-like O VI is a sensitive probe of hot collisionally ionized or warm, very low density photoionized gas in the intergalactic medium and galaxy halos. The O VI $`\lambda \lambda `$1031.92, 1037.62 doublet has been detected in absorption toward QSOs over a wide range of redshifts (see §1 in Tripp & Savage 2000). The lowest redshift O VI absorbers are particularly interesting because the redshifts of galaxies near the QSO sight lines can be measured, and the relationship between the O VI absorber properties and environment can be studied. Furthermore, cosmological simulations predict that a substantial fraction of the baryons in the universe are in a shock-heated phase at $`10^5`$–$`10^7`$ K at low $`z`$ (e.g., Cen & Ostriker 1999; Davé et al. 1999), and preliminary results indicate that low-$`z`$ O VI systems may indeed be an important baryon reservoir (Tripp & Savage 2000). In a previous paper, Savage, Tripp, & Lu (1998) studied an intervening O VI absorber associated with two galaxies at $`z\sim `$ 0.225 in the spectrum of the radio-quiet QSO H1821+643, using a combination of low resolution Hubble Space Telescope (HST) spectra with broad wavelength coverage and a high resolution HST spectrum with very limited wavelength coverage. We have re-observed this QSO with an echelle mode of the Space Telescope Imaging Spectrograph (STIS) on HST, which provides a resolution of $`\sim `$7 km s<sup>-1</sup> (FWHM) with broad wavelength coverage. In this paper we present in §2 and §3 new results on one probable and four definite O VI absorption line systems in the STIS H1821+643 spectrum. In §4 we discuss the implications of the high rate of occurrence of O VI absorbers at low redshift. The direct information we obtain about the highly ionized state of the gas from the presence of O VI allows us to estimate the baryonic content of these systems. We conclude that O VI systems are likely to harbor an important fraction of the baryons at the present epoch.

## 2. Observations and Absorption Line Measurements

H1821+643 was observed with STIS for 25466 seconds on 1999 June 25 with the medium resolution FUV echelle mode (E140M) and the 0.2$`\times `$0.06” slit (HST archive ID numbers O5E703010–O5E7030E0). This STIS mode provides a resolution of $`R=\lambda /\mathrm{\Delta }\lambda \sim `$ 46000, or FWHM $`\sim `$ 7 km s<sup>-1</sup> (Kimble et al. 1998). The data were reduced as described by Tripp & Savage (2000), including the scattered light correction developed by the STIS Instrument Definition Team. The spectrum extends from $`\sim `$1150 to 1710 Å with four small gaps between orders at $`\lambda >`$ 1630 Å. Throughout this paper wavelengths and redshifts are heliocentric. We first searched the spectrum for O VI absorbers by checking for lines with the velocity separation and relative line strengths expected for the doublet. This identified the four definite O VI systems.
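As an aside, the quoted velocity resolution follows directly from the resolving power via $`\mathrm{\Delta }v=c/R`$; a minimal check (Python, using only constants and the quoted $`R`$):

```python
# Velocity resolution implied by a spectrograph resolving power R = lambda / dlambda.
C_KM_S = 2.998e5   # speed of light in km/s

def fwhm_km_s(R):
    """FWHM velocity resolution, dv = c / R."""
    return C_KM_S / R

print(fwhm_km_s(46000))   # ~6.5 km/s, consistent with the ~7 km/s FWHM quoted above
```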
We then searched for O VI lines associated with known Ly$`\alpha `$ absorbers, and this revealed the probable system (see below). A selected sample of the spectrum is shown in Figure 1. This portion of the spectrum shows the O VI doublet at $`z_{\mathrm{abs}}`$ = 0.22497 as well as a much weaker O VI doublet at $`z_{\mathrm{abs}}`$ = 0.22637. Both of these O VI absorbers are discussed in §3.1. In addition to the O VI systems in Figure 1, the STIS echelle spectrum shows new O VI absorbers at $`z_{\mathrm{abs}}`$ = 0.24531 and 0.26659 which have small equivalent widths; these systems are briefly discussed in §3.2 along with a possible O VI system at $`z_{\mathrm{abs}}`$ = 0.21326. The $`z_{\mathrm{abs}}`$ = 0.21326 system is a strong H I Ly$`\alpha `$/Ly$`\beta `$ absorber with a $`>4\sigma `$ line detected at the expected wavelength of O VI $`\lambda `$1031.93. However, the corresponding O VI $`\lambda `$1037.62 line is blended with Milky Way S II $`\lambda `$1259.52 absorption due to two high velocity clouds, so we consider this a probable but not definite O VI detection. The component structure establishes that this blend is mostly due to Milky Way S II, but it is possible that an O VI $`\lambda `$1037.62 line of the right strength is present as well. In principle this could be proved by comparing the S II $`\lambda `$1259.52 line strength to the S II $`\lambda \lambda `$1250.58, 1253.81 line strengths. However, this does not yield a clear result at the current S/N level due to the ambiguity of the continuum placement near 1259 Å.

Rest-frame equivalent widths ($`W_\mathrm{r}`$) of absorption lines detected in the O VI systems, measured using the software of Sembach & Savage (1992), are listed in Table 1. Note that the quoted errors in equivalent width include contributions from uncertainties in the height and curvature of the continuum as well as a 2% uncertainty in the flux zero point. Integrated apparent column densities (Savage & Sembach 1991) are also found in Table 1, with error bars including contributions from continuum and zero point uncertainties. To measure line widths, we used the Voigt profile fitting software of Fitzpatrick & Spitzer (1997) with the line spread functions from the Cycle 9 STIS Handbook.

## 3. Absorber Properties

Four of the five absorption systems in Table 1 are within a projected distance of 1 $`h_{75}^{-1}`$ Mpc or less of at least one galaxy with $`\mathrm{\Delta }v=c(z_{\mathrm{gal}}-z_{\mathrm{abs}})/(1+z_{\mathrm{mean}})`$ of 300 km s<sup>-1</sup> or less, and some of them are close to multiple galaxies (see Table 1 in Tripp et al. 1998). These absorbers are also displaced from the QSO redshift by 7100 km s<sup>-1</sup> ($`z_{\mathrm{abs}}`$ = 0.26659) to 17100 km s<sup>-1</sup> ($`z_{\mathrm{abs}}`$ = 0.22497). Finally, the O VI profiles are relatively narrow. Therefore these are probably intervening systems that trace the large-scale gaseous environment in galaxy envelopes and the IGM rather than “intrinsic” absorbers (Hamann & Ferland 1999).

### 3.1 O VI Absorbers at z = 0.22497 and 0.22637

Since the O VI doublets at $`z_{\mathrm{abs}}`$ = 0.22497 and 0.22637 shown in Figure 1 are separated by only $`\sim `$340 km s<sup>-1</sup>, they are probably related and we discuss them together.
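The $`\sim `$340 km s<sup>-1</sup> separation quoted above follows from the same velocity-difference expression used for the galaxy redshifts; a minimal check (Python):

```python
# Velocity separation between two absorption redshifts,
# dv = c (z2 - z1) / (1 + z_mean), as defined in Sec. 3.
C_KM_S = 2.998e5

def delta_v_km_s(z1, z2):
    z_mean = 0.5 * (z1 + z2)
    return C_KM_S * (z2 - z1) / (1.0 + z_mean)

print(delta_v_km_s(0.22497, 0.22637))   # ~342 km/s, i.e. the ~340 km/s quoted
```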
Two emission line galaxies are known at heliocentric redshifts of 0.22560 and 0.22650, at projected distances of 105 and 388 $`h_{75}^{-1}`$ kpc from the sight line (Tripp et al. 1998). (In this paper, the cosmological parameters are set to $`H_0=75h_{75}`$ km s<sup>-1</sup> Mpc<sup>-1</sup> and $`q_0`$ = 0.0.) In addition to the O VI doublet, the STIS spectrum shows strong absorption lines due to H I Ly$`\alpha `$, Ly$`\beta `$, Ly$`\gamma `$, Si III $`\lambda `$1206.5, and possibly C III $`\lambda `$977.02 at $`z_{\mathrm{abs}}`$ = 0.22497; the absorption profiles of most of these species are plotted on a velocity scale in Figure 2. The lines of N V and Si IV are not detected at greater than 3$`\sigma `$ significance, and upper limits on their equivalent widths and column densities are listed in Table 1 along with upper limits on C II and Si II.

Figure 2 provides several indications that these systems are multiphase absorbers. Several components are readily apparent in most of the $`z_{\mathrm{abs}}`$ = 0.22497 profiles, including the O VI lines (see also Figure 1). Fitting of the Si III profile yields $`b=7.7_{-2.1}^{+2.9}`$ and $`1.6_{-0.9}^{+2.1}`$ km s<sup>-1</sup> for the two well-detected components at $`v`$ = –3 and +25 km s<sup>-1</sup>, respectively. However, the component velocity centroids and $`b`$-values are not compatible with a homogeneous mixture of O VI and Si III. For example, the Si III profile shows a prominent narrow component at $`v`$ = +25 km s<sup>-1</sup>, and there is no obviously corresponding component in the O VI profiles. While thermal Doppler broadening can make the O VI profiles broader than those of Si III, at most the increase will be a factor of $`\sqrt{28/16}`$, and this is inadequate to produce the breadth of the observed O VI lines. Thus we are compelled to consider a mixture of phases, some of which show up in Si III, while others are prominent in O VI. In the case of the O VI at $`z_{\mathrm{abs}}`$ = 0.22637, which is also visible in Figure 2, the multiphase nature is suggested by an offset of 60 km s<sup>-1</sup> between the H I Ly$`\alpha `$ and O VI velocity centroids. Also, we note that no H I absorption is significantly detected at the velocity of the O VI, suggesting that the hydrogen is thoroughly ionized in the O VI gas. This O VI absorber may be analogous to the highly ionized high velocity clouds seen near the Milky Way, which show strong high ion absorption with very weak or absent low ion absorption (Sembach et al. 1999).

### 3.2 Other Weak O VI Systems

The two new O VI systems at $`z_{\mathrm{abs}}`$ = 0.24531 and $`z_{\mathrm{abs}}`$ = 0.26659 are plotted in Figure 3. A striking feature of these weak O VI absorbers (and the candidate O VI at $`z_{\mathrm{abs}}`$ = 0.21326) is that while their O VI column densities are comparable, the strengths of their corresponding H I absorption lines are significantly different (see Table 1 and Figure 3). For example, $`N`$(O VI)/$`N`$(H I) = 5.2$`\pm 1.2`$ in the $`z_{\mathrm{abs}}`$ = 0.24531 system, while $`N`$(O VI)/$`N`$(H I) = 1.2$`\pm 0.2`$ in the $`z_{\mathrm{abs}}`$ = 0.26659 absorber. The contrast is even more dramatic with the $`z_{\mathrm{abs}}`$ = 0.21326 absorber, which has $`N`$(O VI)/$`N`$(H I) = 0.14$`\pm 0.03`$. For reference, in collisional ionization equilibrium (Sutherland & Dopita 1993), gas with solar metallicity at the peak O VI ionization temperature should have $`N`$(O VI)/$`N`$(H I) $`\sim `$ 100.
The large variability of the observed O VI/H I ratio could indicate that the metallicity of the O VI absorbers varies substantially, or this could be due to differences in the physical conditions and ionization of the gas. If, for example, these are multiphase absorbers with the H I lines arising in a cool phase embedded in a hot phase that produces the O VI absorption (e.g., Mo & Miralda-Escudé 1996), then the wide variations in the O VI/H I ratio could simply be due to the interception of fewer cool phase clouds in one absorber compared to another. A full analysis of the range of physical conditions of these absorbers will be presented in a later paper. However, it is interesting to note that the H I Ly$`\alpha `$ profile of the $`z_{\mathrm{abs}}`$ = 0.26659 system is rather broad and relatively smooth (see the bottom panel of Figure 3), which may indicate that this absorber is collisionally ionized and hot. However, fitting a single component to the $`z_{\mathrm{abs}}`$ = 0.26659 Ly$`\alpha `$ profile yields $`b=44.6_{-6.3}^{+7.3}`$ km s<sup>-1</sup>, which implies that $`T\lesssim 1.2\times 10^5`$ K. At this temperature the O VI ionization fraction is rather small in collisional ionization equilibrium (Sutherland & Dopita 1993), and an unreasonably high metallicity is required to produce the observed $`N`$(O VI) and $`N`$(H I) in the same gas. This may be another indication that these are multiphase absorbers or that the gas is not in ionization equilibrium.

## 4. Number Density and Cosmological Mass Density

The new STIS data in this paper provide an opportunity to evaluate the number density of low-$`z`$ O VI absorbers per unit redshift ($`dN/dz`$) and a lower limit on their cosmological mass density. If we neglect continuum placement uncertainty and other systematic error sources, the STIS E140M spectrum of H1821+643 is formally adequate for $`4\sigma `$ detection of narrow lines with $`W_\mathrm{r}\ge `$ 30 mÅ at $`\lambda _{\mathrm{obs}}\ge `$ 1188 Å ($`z_{\mathrm{abs}}\ge `$ 0.151 for O VI $`\lambda `$1031.93). However, the continuum placement ambiguity substantially increases the uncertainty in $`W_\mathrm{r}`$ for weak lines. Moreover, broader resolved lines spread over more pixels have higher limiting equivalent widths (limiting $`W\propto \sqrt{\mathrm{no}.\mathrm{pixels}}`$), so broad weak lines may not be detected at the $`4\sigma `$ level. Consequently, the $`dN/dz`$ derived below should be treated as a lower limit. We require detection of both lines of the O VI doublet with $`W_\mathrm{r}\ge `$ 30 mÅ, and we exclude one absorber within 5000 km s<sup>-1</sup> of $`z_{\mathrm{em}}`$ to avoid contamination of the sample with intrinsic absorbers (the excluded system is the associated O VI absorber at $`z_{\mathrm{abs}}`$ = 0.2967, which is not listed in Table 1 but is discussed in detail in Savage et al. 1998 and Oegerle et al. 2000). This results in a sample of three O VI systems over a redshift path of $`\mathrm{\Delta }z`$ = 0.063 (after correction for a loss of $`\mathrm{\Delta }z`$ = 0.061 for spectral regions in which either of the O VI lines is blocked by ISM or extragalactic lines from other redshifts). The three systems are those at $`z_{\mathrm{abs}}`$ = 0.22497, 0.24531, and 0.26659; we exclude the probable system at $`z_{\mathrm{abs}}`$ = 0.21326, and the $`z_{\mathrm{abs}}`$ = 0.22637 system falls below the equivalent width threshold.
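The temperature limit quoted above follows from the thermal Doppler relation $`b=\sqrt{2kT/m}`$, i.e. $`T=m_\mathrm{H}b^2/2k`$ if the Ly$`\alpha `$ width is entirely thermal (any non-thermal broadening would lower $`T`$ further); a minimal check (Python), which also evaluates the $`\sqrt{28/16}`$ width ratio used in §3.1:

```python
# Thermal Doppler relations: b = sqrt(2 k T / m), hence T = m b^2 / (2 k)
# if the measured width is purely thermal (non-thermal broadening lowers T).
from math import sqrt

K_B = 1.381e-23   # J/K
M_H = 1.673e-27   # kg, hydrogen atom

def T_from_b(b_km_s, mass_kg=M_H):
    """Temperature implied by a purely thermal Doppler parameter b (km/s)."""
    b_m_s = b_km_s * 1e3
    return mass_kg * b_m_s ** 2 / (2.0 * K_B)

print(T_from_b(44.6))        # ~1.2e5 K, the limit quoted for z_abs = 0.26659
print(sqrt(28.0 / 16.0))     # ~1.32, the maximum O VI / Si III width ratio of Sec. 3.1
```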
Therefore the most probable value is $`dN/dz\approx `$ 48 for $`W_\mathrm{r}\ge `$ 30 mÅ and 0.15 $`\lesssim z_{\mathrm{abs}}\lesssim `$ 0.27, and we conservatively conclude that $`dN/dz\ge `$ 17 at the 90% confidence level (following the Gehrels 1986 treatment for small sample statistics). This is a remarkably high number density. It is important to emphasize that the sample is extremely small and, since very little is known about weak O VI lines at low redshift, it remains possible that $`dN/dz`$ is unusually high toward H1821+643 for some reason. However, there is supporting evidence that $`dN/dz`$ is generally high: (1) a similar $`dN/dz`$ is derived from STIS echelle spectroscopy of PG0953+415 (Tripp & Savage 2000), and (2) one or two additional intervening O VI absorbers are evident in the H1821+643 spectrum which did not satisfy the selection criteria to be included in the sample. More observations are needed to build the sample of weak O VI lines at low $`z`$. For comparison, low to moderate redshift Mg II absorbers with $`W_\mathrm{r}\ge `$ 20 mÅ have $`dN/dz=2.65\pm 0.15`$ (Churchill et al. 1999; see also Tripp, Lu, & Savage 1997). The stronger O VI absorbers are less common; Burles & Tytler (1996) report $`dN/dz=1.0\pm 0.6`$ for O VI systems with $`W_\mathrm{r}\ge `$ 210 mÅ at $`<z_{\mathrm{abs}}>`$ = 0.9. Evidently, the $`dN/dz`$ of the weak O VI lines is substantially larger than the $`dN/dz`$ of other known classes of low-$`z`$ metal absorbers and is more comparable to that of low-$`z`$ weak Ly$`\alpha `$ absorbers, which have $`dN/dz\sim `$ 100 for $`W_\mathrm{r}\ge `$ 50 mÅ (Tripp et al. 1998; Penton et al. 2000).

Following analogous calculations (e.g., Storrie-Lombardi et al. 1996; Burles & Tytler 1996), the mean cosmological mass density in the O VI absorbers, in units of the current critical density $`\rho _c`$, can be estimated using

$$\mathrm{\Omega }_b(\mathrm{O}\mathrm{VI})=\frac{\mu m_\mathrm{H}H_0}{\rho _ccf(\mathrm{O}\mathrm{VI})}\left(\frac{\mathrm{O}}{\mathrm{H}}\right)_{\mathrm{O}\mathrm{VI}}^{-1}\frac{\sum _iN_i(\mathrm{O}\mathrm{VI})}{\mathrm{\Delta }X}$$ (1)

where $`\mu `$ is the mean atomic weight (taken to be 1.3), $`f`$(O VI) is a representative O VI ionization fraction, (O/H)$`_{\mathrm{O}\mathrm{VI}}`$ is the assumed mean oxygen abundance by number in the O VI absorbers, $`\sum _iN_i`$(O VI) is the total O VI column density from the $`i`$ absorbers, and $`\mathrm{\Delta }X`$ is the absorption distance interval (Bahcall & Peebles 1969), corrected for blocked spectral regions. (Note that while Burles & Tytler 1996 calculated the cosmological mass density of the oxygen ions in O VI absorbers, which is quite small, they did not apply an ionization and metallicity correction to estimate the total baryonic content of the O VI systems; instead, they used this method to place a lower limit on the mean metallicity of the O VI systems.) With the sample defined above, we have $`\mathrm{\Omega }_b(\mathrm{O}\mathrm{VI})=8.0\times 10^{-5}f(\mathrm{O}\mathrm{VI})^{-1}10^{-[\mathrm{O}/\mathrm{H}]}h_{75}^{-1}`$, where \[O/H\] = log (O/H) − log (O/H)<sub>⊙</sub>. To set a conservative lower limit on $`\mathrm{\Omega }_b`$(O VI), we assume \[O/H\] = −0.3 and $`f`$(O VI) = 0.2 (which is close to the maximum value in photo- or collisional ionization; see Tripp & Savage 2000), which yields $`\mathrm{\Omega }_b`$(O VI) $`\ge 0.0008h_{75}^{-1}`$. If we set the mean metallicity to a more realistic value such as \[O/H\] = −1, $`\mathrm{\Omega }_b`$(O VI) increases to $`\ge 0.004h_{75}^{-1}`$.
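Both headline numbers of this section reduce to short arithmetic; the sketch below (Python) reproduces them. The 90% confidence Poisson lower limit for three events ($`\approx `$1.1) is taken from Gehrels (1986) rather than rederived here, and the $`\mathrm{\Omega }_b`$ function simply evaluates the scaling relation given above.

```python
# Number density per unit redshift and the baryon-density scaling from the text.
N_SYSTEMS   = 3        # O VI doublets with W_r >= 30 mA in the sample
DELTA_Z     = 0.063    # unblocked redshift path
GEHRELS_LO3 = 1.10     # 90% confidence Poisson lower limit for 3 events (Gehrels 1986)

print(N_SYSTEMS / DELTA_Z)      # ~48, the most probable dN/dz
print(GEHRELS_LO3 / DELTA_Z)    # ~17, the 90% confidence lower limit

def omega_b_OVI(f_OVI, O_H_dex, h75=1.0):
    """Evaluate Omega_b(O VI) = 8.0e-5 * f(O VI)^-1 * 10^-[O/H] * h75^-1."""
    return 8.0e-5 / f_OVI * 10.0 ** (-O_H_dex) / h75

print(omega_b_OVI(0.2, -0.3))   # ~8e-4, the conservative lower limit
print(omega_b_OVI(0.2, -1.0))   # ~4e-3, for [O/H] = -1
```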
Similar lower limits on $`\mathrm{\Omega }_b`$(O VI) have been derived by Tripp & Savage (2000) using a slightly less sensitive sample based on STIS echelle spectroscopy of PG0953+415 and earlier Goddard High Resolution Spectrograph observations of H1821+643. The lower limit assuming (O/H) = 1/10 solar is comparable to the combined cosmological mass density of stars, cool neutral gas, and X-ray emitting cluster gas at low redshift, $`\mathrm{\Omega }_{\ast }+\mathrm{\Omega }_{\mathrm{H}\mathrm{I}21\mathrm{cm}}+\mathrm{\Omega }_{\mathrm{H}_2}+\mathrm{\Omega }_{\mathrm{X}\mathrm{ray}}\sim `$ 0.006 (Fukugita, Hogan, & Peebles 1998). Though still uncertain due to the small sample, the small redshift path probed, and the uncertain (O/H)$`_{\mathrm{O}\mathrm{VI}}`$ (for a discussion of the impact of small number statistics on the $`\mathrm{\Omega }_b`$(O VI) estimates, see Tripp & Savage 2000), these preliminary lower limits on $`\mathrm{\Omega }_b`$(O VI) suggest that O VI absorbers contain an important fraction of the baryons in the low redshift universe.

We thank Ken Sembach and Ed Fitzpatrick for sharing their software for the measurement of column densities and $`b`$-values.