# Current Status of Nucleon Decay Searches with Super-Kamiokande

## I Motivation

In the Standard Model the proton is assumed to be completely stable because there are no lighter products to which the proton could decay without violating baryon number conservation. However, the requirement for an interaction to conserve baryon number is not backed up by any symmetry in the Standard Model. If baryon number conservation is an incorrect assumption, then there are avenues for protons (as well as neutrons bound in otherwise stable nuclei) to decay. Furthermore, most theories which go beyond the Standard Model allow, or in most cases require, protons to be unstable at some level. One of the constraints on such theories is to predict a proton lifetime which does not contradict current measured lifetime limits. Experimental limits on the proton lifetime can kill or constrain such theories beyond the Standard Model. Furthermore, the actual observation of proton decay would help open the door to new physics as well as give answers about the ultimate state of matter in our universe in the distant future. One of the primary goals of the Super–Kamiokande experiment is to search for proton (and neutron) decay. The results of such searches performed thus far are presented.

## II Super–Kamiokande detector

Super–Kamiokande is a ring-imaging water Cherenkov detector containing 50 ktons of ultra-pure water held in a cylindrical stainless steel tank 1 km underground in a mine in the Japanese Alps. The sensitive volume of water is split into two parts. The 2 m thick outer detector is viewed by 1885 20 cm diameter photomultiplier tubes (PMTs) and acts as a veto shield to tag incoming cosmic ray muons. It completely surrounds, and is optically separated from, the 33 m diameter, 36 m high inner detector, which is the primary sensitive volume. The inner detector contains 32.5 ktons of water and is viewed by 11146 inward-pointing 50 cm diameter PMTs, giving 40% photocathode coverage. When relativistic charged particles pass through the water they emit Cherenkov light at an angle of about $`42^{\circ}`$ from the particle's direction of travel. When this cone of light intersects the detector wall it is possible to image it as a ring. By measuring the charge produced in each PMT and the time at which it is collected, it is possible to reconstruct the position and energy of the event as well as the number, identity and momenta of the individual charged particles in the event.

## III Atmospheric Neutrino Background

Many very exciting results have come from studying what is thought to be merely the background to proton decay events. Of particular note is the continued confirmation of the atmospheric neutrino problem, as well as the finding of evidence for massive neutrinos as the only conceivable solution. For proton decay searches there are three classes of atmospheric neutrino background events. The first is that of inelastic charged-current events, $`\nu N\to N^{\prime}\{e,\mu \}+n\pi `$, where a neutrino interacts with a nucleon in the water and produces a visible lepton and some number of pions. This can mimic proton decay modes such as $`p\to e^+\pi ^0`$ (thesis students: M. Shiozawa, B. Viren). The second class is neutral-current pion production, $`\nu N\to \nu N^{\prime}+n\pi `$, the only visible products of which are pions. This is a background to, for example, $`n\to \overline{\nu }\eta `$ (thesis student: J. Kameda).
Finally, there is the mainstay of the atmospheric neutrino group, single-ring quasi-elastic charged-current events, $`\nu N\to N^{\prime}\{\mu ,e\}`$, which can look like, for example, $`p\to \overline{\nu }K^+`$ decays (thesis students: M. Earl, Y. Hayato).

## IV The $`p\to e^+\pi ^0`$ Mode

The first mode discussed is $`p\to e^+\pi ^0`$. Since this is one of the simplest modes it serves well as a general example of proton decay searches with Super–Kamiokande and will be discussed in some detail. Figure 1 shows a cartoon of an ideal $`p\to e^+\pi ^0`$ decay. Here, the positron, $`e^+`$, and neutral pion, $`\pi ^0`$, exit the decay region in opposite directions. The positron initiates an electromagnetic shower leading to a single isolated ring. The $`\pi ^0`$ will almost immediately decay to two photons which go on to initiate showers creating two, usually overlapping, rings. In Super–Kamiokande, such an ideal event might look like Fig. 2. This event was generated with a detailed $`p\to e^+\pi ^0`$ event and detector Monte Carlo (MC) simulation. In this figure, the PMTs are plotted as a function of $`\mathrm{cos}\theta `$ vs. $`\varphi `$ as viewed from the event vertex and are represented by squares, colored by the amount of collected charge (red is more, blue is less) and sized to show distance from the event vertex. The fuzzy outer edges of the rings indicate an electromagnetic showering type of ring. Had the positron been replaced with a muon, the single isolated ring would have a sharp, distinct edge. In general, real $`p\to e^+\pi ^0`$ events will differ from this ideal picture because the pion can scatter or be absorbed entirely before it exits the nucleus. In addition, the nuclear proton can have some momentum due to Fermi motion. These two effects, pion–nucleon interaction and Fermi motion, result in a breaking of the balance of reconstructed momentum. In addition, the pion can decay asymmetrically, where one photon takes more than half of the pion’s energy, leaving the second photon to create a faint or even completely invisible ring. These effects, plus energy resolution and systematic uncertainties, are considered when choosing the cuts to isolate possible decay events from their background. The same reduction of the 5 Hz raw (high energy) trigger rate at Super–Kamiokande to the so-called “contained event sample” (for more information see ) is used to find candidates for proton decay events as well as atmospheric neutrino events. From this reduced data sample, selection criteria unique to each search are applied to reduce the atmospheric background while keeping the efficiency to find a particular decay mode high. For the $`p\to e^+\pi ^0`$ mode, the selection criteria are as follows: (A) 6000 $`<Q_{tot}<`$ 9500 photoelectrons (PEs), (B) 2 or 3 e-like (showering type) rings, (C) if 3 rings: 85 $`<M_{inv,\pi ^0}<`$ 185 MeV/c<sup>2</sup>, (D) no decay electrons, (E) 800 MeV/c<sup>2</sup> $`<M_{inv,tot}<`$ 1050 MeV/c<sup>2</sup>, and (F) $`P_{tot}=\left|\sum \stackrel{}{P}_i\right|<`$ 250 MeV/c. Criterion (A) corresponds to a loose energy cut which reduces the background without much computation. As stated above, it is possible for one of the photons to be invisible, which criterion (B) allows for. If there are 3 rings, criterion (C) requires that 2 of the rings reconstruct to give a $`\pi ^0`$ mass. Since no muons or charged pions are expected, no decay electrons should be found, and any events which have them are cut by (D).
Finally, (E) requires the total invariant mass to be near that of the proton and (F) requires the total reconstructed momentum (the magnitude of the vector sum of all individual momenta) to be below the Fermi momentum for <sup>16</sup>O. Figure 3 shows distributions of $`p\to e^+\pi ^0`$ MC, atmospheric MC, and data in reconstructed momentum vs. invariant mass after criteria (A)–(D) have been applied. Criteria (E) and (F) are shown by the box. When these criteria are applied to 45 kton$`\cdot `$years (736 days) of data we find no candidate events. Using atmospheric neutrino background MC equivalent to 40 years of data taking, it is estimated that 0.2 background events are expected in the data. From MC simulations of $`p\to e^+\pi ^0`$ events, the efficiency to select any $`p\to e^+\pi ^0`$ events in the data sample is 44%. This gives a limit on the proton lifetime divided by the $`p\to e^+\pi ^0`$ branching ratio (partial limit) of $`\tau /B_{p\to e^+\pi ^0}>2.9\times 10^{33}`$ years (90% CL).

## V The $`p\to \mu ^+\pi ^0`$ Mode

The $`p\to \mu ^+\pi ^0`$ mode is very similar to the $`p\to e^+\pi ^0`$ mode. The only difference is to replace (A), (B) and (D) above with: (A) 5000 $`<Q_{tot}<`$ 7800 PE, (B) 1 $`\mu `$-like (non-showering) and 1 or 2 e-like rings, and (D) 1 decay electron, respectively. The region defined by criterion (A) is lower than in the previous case because a muon is more massive than a positron and will go below Cherenkov threshold sooner, thus emitting less light. The other two differences are also due to having a muon in the final state instead of a positron. When these criteria are applied, no decay candidates are found in the data. It is estimated that 0.1 background events should exist, and a selection efficiency of 35% is obtained. The resulting partial limit is $`\tau /B_{p\to \mu ^+\pi ^0}>2.3\times 10^{33}`$ years (90% CL).

## VI The $`\eta `$ Modes

Variations on the above two modes are found by replacing the neutral pion with an eta particle decaying to two $`\gamma `$s. This gives the modes $`p\to e^+\eta `$, $`p\to \mu ^+\eta `$, and $`n\to \overline{\nu }\eta `$. All selection criteria are then very similar to the two previous modes, except for (B), which is tightened to allow only three-ring events (except for $`n\to \overline{\nu }\eta `$, which requires exactly two), and (D), which requires the eta invariant mass to be reconstructed in the region $`470<M_{inv,\eta }<610`$ MeV/c<sup>2</sup>. The $`p\to e^+\eta `$ search results in no candidate events on top of an expected background of 0.3 events, a selection efficiency of 17% and a partial lifetime limit of $`\tau /B_{p\to e^+\eta ;\eta \to \gamma \gamma }>1.1\times 10^{33}`$ years (90% CL). For $`p\to \mu ^+\eta `$, no background is expected and no candidate events are found. The selection efficiency is 12% and the partial lifetime limit is $`\tau /B_{p\to \mu ^+\eta ;\eta \to \gamma \gamma }>0.78\times 10^{33}`$ years (90% CL). Finally, the $`n\to \overline{\nu }\eta `$ search finds 5 candidates, consistent with the 9 expected background events, and has a selection efficiency of 21%. The partial limit for this mode is $`\tau /B_{n\to \overline{\nu }\eta ;\eta \to \gamma \gamma }>0.56\times 10^{33}`$ years (90% CL).

## VII The $`p\to \overline{\nu }K^+`$ Modes

Super–Kamiokande searches for the $`p\to \overline{\nu }K^+`$ mode by looking for the products of the two primary branches of the $`K^+`$ decay. These are pictured in Fig. 4. In the $`K^+\to \mu ^+\nu _\mu `$ case, when the decaying proton is in <sup>16</sup>O, the nucleus will be left as an excited <sup>15</sup>N. Upon de-excitation, a prompt 6.3 MeV photon will be emitted.
So this second branch has two independent searches: one in which the signature of this prompt photon is required and one in which it is explicitly absent. Unlike the other modes presented, the $`p\to \overline{\nu }K^+`$ search has been done with data from only 33 kton$`\cdot `$years (535 days) of exposure. The criteria for the $`p\to \overline{\nu }K^+;K^+\to \pi ^+\pi ^0`$ search are as follows: (A) 2 e-like rings, (B) 1 decay electron, (C) $`85<M_{inv,\pi ^0}<185`$ MeV/c<sup>2</sup>, (D) $`175<P_{\pi ^0}<250`$ MeV/c, (E) $`40<Q_{\pi ^+}<100`$ PE. The $`\pi ^+`$ is very close to Cherenkov threshold and is expected to produce only a small amount of light, as in (E). Since this is not enough to produce an identifiable ring, only the rings from the 2 photons from the decay of the $`\pi ^0`$ are required in (A). These photons must reconstruct to an invariant mass in the range defined by (C) as well as a momentum range defined in (D). Figure 5 shows the event distributions in $`Q_{\pi ^+}`$ vs. $`\pi ^0`$ momentum for proton decay MC, atmospheric neutrino MC and data after criteria (A) through (C). Criteria (D) and (E) are represented by the box. No candidates are found and 0.7 background events are expected. The selection efficiency is 6.5%, giving a partial lifetime limit of $`\tau /B_{p\to \overline{\nu }K^+;K^+\to \pi ^+\pi ^0}>3.1\times 10^{32}`$ years (90% CL). When searching for $`p\to \overline{\nu }K^+;K^+\to \mu ^+\nu _\mu `$ with a 6.3 MeV prompt photon, the following criteria are required: (A) 1 $`\mu `$-like ring, (B) 1 decay electron, (C) $`215<P_\mu <260`$ MeV/c, and (D) $`N_{PMT}>7`$; $`12<t_{PMT}<120`$ ns before the $`\mu `$ signal. The only particle giving a visible ring is the mono-energetic muon; criteria (A)–(C) select for that. In criterion (D), $`t_{PMT}`$ is the time at which a PMT was hit minus the time it would take a photon to travel directly from the fitted event vertex to the PMT (the so-called “time minus time of flight” or “timing residual”). This requirement to have a significant number of hit PMTs within 1 to 10 kaon lifetimes is illustrated in Fig. 6. This search has an efficiency of 4.4%, finds no candidates on an estimated background of 0.4 events, and sets a limit for this prompt-$`\gamma `$ search of $`\tau /B_{p\to \overline{\nu }K^+;K^+\to \mu ^+\nu _\mu }>2.1\times 10^{32}`$ years (90% CL). The complementary case, where no prompt gamma is allowed, has the same criteria as the prompt gamma case except for the last: (D) $`N_{PMT}\le 7`$; $`12<t_{PMT}<120`$ ns before the $`\mu `$ signal. Since this allows a significant amount of background to survive the selection criteria, the limit is set by fitting for an excess of proton decay events above the atmospheric neutrino background in the reconstructed momentum spectrum. This is shown in Fig. 7. What is found is that, if anything, the data fluctuate downward in the region of expected muon momentum. In this region, 70 candidate events are found, consistent with the 74.5 events expected from atmospheric neutrinos. With an efficiency of 40%, a limit of $`\tau /B_{p\to \overline{\nu }K^+;K^+\to \mu ^+\nu _\mu }>3.3\times 10^{32}`$ years (90% CL) is found for this no-prompt-$`\gamma `$ search. The combined limit for these three topologies is $`\tau /B_{p\to \overline{\nu }K^+;K^+\to \mu ^+\nu _\mu }>6.8\times 10^{32}`$ years (90% CL).

## VIII The $`p\to \mu ^+K^0`$ Mode

The search for proton decay via $`p\to \mu ^+K^0`$ so far relies on the $`K_s^0\to \pi ^0\pi ^0\to \gamma \gamma \gamma \gamma `$ decay branch of the kaon.
The selection criteria for this are: (A) 1 $`\mu `$-like and 2 or more e-like rings, (B) 1 or fewer decay electrons, (C) $`400<M_{inv,K^0}<600`$ MeV/c<sup>2</sup>, (D) $`150<P_\mu <400`$ MeV/c, (E) $`750<M_{inv,tot}<1000`$ MeV/c<sup>2</sup>, and (F) $`P_{tot}<300`$ MeV/c. Because it is possible for one photon from each pion to be missed, only 2 e-like rings are required in (A) in order to preserve a high efficiency. Using all e-like rings, a reconstructed $`K^0`$ mass is required in (C). The reconstructed muon momentum should be a discrete value smeared by Fermi motion and energy resolution, as required in (D). Finally, (E) and (F) require the total invariant mass to be near the proton rest mass and the total reconstructed momentum to be less than the Fermi momentum. Figure 9 shows the event distributions in total momentum vs. total invariant mass for proton decay MC, atmospheric neutrino MC and the data for events which pass criteria (A) and (B). The boxes represent criteria (E) and (F). The single data event which is inside the box is far outside the requirements of (C) and (D) and so is not a valid candidate. This search was done on 45 kton$`\cdot `$years of data and has a 6.1% efficiency. No candidates were found and 0.65 background events were expected. The resulting limit is $`\tau /B_{p\to \mu ^+K^0}>4.0\times 10^{32}`$ years (90% CL).

## IX Summary of searches

The current Super–Kamiokande nucleon decay results are summarized in Table I. As a comparison, limits collected in PDG 1998 are also included. Finally, although the best limits have been set by large water Cherenkov experiments, iron calorimeters offer a complementary search with tracking sensitivity to kaons and low momentum pions and muons. Recent results from the Soudan 2 experiment were also presented and are listed below.
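As a rough illustration of how the partial lifetime limits quoted above follow from the exposure, the selection efficiency, and the number of candidates, the following minimal sketch (not part of the original analysis) reproduces the scale of the $`p\to e^+\pi ^0`$ number. It assumes roughly $`3.3\times 10^{32}`$ protons per kton of water (free plus bound), uses a simple Poisson 90% CL factor of 2.3 events for zero observed candidates, and neglects background subtraction and systematic uncertainties, which the full analysis treats more carefully.

```python
# Hedged sketch: back-of-the-envelope partial lifetime limit, tau/B > N_p * T * eps / n_90.
# Only the exposure, efficiency, and candidate count come from the text above; the rest are assumptions.
AVOGADRO = 6.022e23
protons_per_kton = 1e9 / 18.0 * AVOGADRO * 10   # ~3.3e32 protons (2 free + 8 bound) per kton of H2O
exposure_kton_yr = 45.0                          # exposure of the p -> e+ pi0 search
efficiency = 0.44                                # quoted selection efficiency
n_90 = 2.3                                       # approximate 90% CL Poisson upper limit for 0 observed events

tau_over_B = protons_per_kton * exposure_kton_yr * efficiency / n_90
print(f"tau/B > {tau_over_B:.1e} years")         # ~2.9e33 years, consistent with the quoted limit
```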
# On the Magnetospheric Beat-Frequency and Lense-Thirring Interpretations of the Horizontal Branch Oscillation in the Z Sources ## 1 INTRODUCTION The discovery in the Z-type low-mass X-ray binaries (LMXBs) of $``$10–60 Hz quasi-periodic oscillations (QPOs) with centroid frequencies that are positively correlated with mass accretion rate (van der Klis et al. 1985; see van der Klis 1989 for a review) has led to a significant improvement in our understanding of such systems. These horizontal-branch oscillations (HBOs), which are named after the branch in X-ray color-color diagrams where they appear, were the first rapid-variability phenomena discovered in LMXBs and have played a key role in organizing the complex phenomenology of these sources that has emerged over the past decade (see, e.g., Hasinger & van der Klis 1989; van der Klis 1989). Very soon after the discovery of the HBO, its centroid frequency was identified with the difference between the Keplerian frequency at the radius where the neutron star magnetosphere couples strongly to the gas in the accretion disk and the spin frequency of the neutron star (Alpar & Shaham 1985; Lamb et al. 1985; Shibazaki & Lamb 1987). This magnetospheric beat-frequency interpretation of the HBO was found to agree well with the observed properties of the HBOs, including the dependence of the HBO frequency on X-ray countrate (Alpar & Shaham 1985; Lamb et al. 1985; Ghosh & Lamb 1992), the existence of correlated low-frequency noise (Lamb et al. 1985; Shibazaki & Lamb 1987), and the absence of any detectable QPO with a frequency equal to the Keplerian frequency at the magnetic coupling radius (Lamb 1988). The magnetospheric beat-frequency model predicted that the neutron stars in the Z sources have spin frequencies $`200`$–350 Hz and magnetic field strengths $`10^9`$$`10^{10}`$ G (see Alpar & Shaham 1985; Ghosh & Lamb 1992; Wijnands et al. 1996), consistent with the hypothesis that these stars are the progenitors of the millisecond rotation-powered pulsars (Alpar & Shaham 1985; see Alpar et al. 1982; Radhakrishnan & Shrinivasan 1982). The neutron star properties inferred from the magnetospheric beat-frequency model have subsequently been shown to be consistent with the magnetic field strengths inferred from models of the X-ray spectra of the Z sources (Psaltis, Lamb, & Miller 1995; Psaltis & Lamb 1998) and with the 290–325 Hz spin frequencies inferred from the frequency separation of the two simultaneous QPOs with frequencies $`1`$ kHz (hereafter the kilohertz QPOs) and from the high-frequency oscillations observed during type I X-ray bursts in other neutron-star LMXBs (Strohmayer et al. 1996; Miller, Lamb, & Psaltis 1998; Miller 1999). If the upper kilohertz QPO is an orbital frequency in the inner disk, the magnetospheric beat-frequency model of the HBO requires that a small fraction of the gas in the accretion disk must penetrate to radii smaller than the radius where it initially couples to the stellar magnetic field (van der Klis et al. 1997; see Miller et al. 1998 for a discussion), because observations show that the kilohertz QPOs are present at the same time as the HBO (see, e.g., Wijnands & van der Klis 1998). 
In addition to the HBOs, the magnetospheric beat-frequency model has been used to explain successfully similar QPOs observed in accretion-powered pulsars, where the neutron star spin frequency can be measured directly and the magnetic field strength can be estimated from the accretion torque, providing a stringent test of the model (see, e.g., Angelini, Stella, & Parmar 1989; Ghosh 1996; Finger, Wilson, & Harmon 1996). Recently, Stella & Vietri (1998) have proposed an alternative HBO mechanism, motivated by concern about whether orbiting gas can penetrate inside the magnetic coupling radius in the Z sources. In this model, the magnetic field of the neutron star plays no role in generating the HBO. Instead, the HBO observed in the Z sources and the power-spectral peaks with frequencies $`20`$–60 Hz seen in some atoll sources (see, e.g., Ford & van der Klis 1998) are both generated by nodal (Lense Thirring and classical) precession of a tilted ring of gas at a special radius in the inner disk. Stella & Vietri suggested that the nodal precession frequency of the ring is visible in X-rays because of the changes in the Doppler shift of radiation from blobs orbiting in the ring, changes in occultations by such blobs, or the changing aspect of the ring seen by an observer. Subsequently, Marković & Lamb (1998) studied the normal modes of the inner disk and showed that typically $`10`$ high-frequency nodal precession modes are weakly damped. Nodal precession has also been proposed by Cui, Zhang, & Chen (1998; see also Ipser 1996) as an explanation for the QPOs observed in black hole candidates. If the HBO is generated by nodal precession at the same radius in the accretion disk where orbital motion generates the kilohertz QPO, as proposed by Stella & Vietri (1998), then the HBO frequency, the neutron star spin frequency, and the frequency of the upper kilohertz QPO should satisfy a specific relation. The shape of this relation was shown to be consistent with observations of the HBO and kilohertz QPO frequencies observed in the Z sources GX 17$`+`$2 and GX 5$``$1 (Stella 1997; Stella & Vietri 1997, 1998; Morsink & Stella 1999), although the predicted precession frequencies were found to be smaller than the observed HBO frequencies. In this paper we use data on five Z sources obtained using the Rossi X-ray Timing Explorer (RXTE) to investigate further the origin of the HBO. All of these data have been fully reported elsewhere. The sources we consider are GX 17$`+`$2 (Wijnands et al. 1997; Homan et al. 1998), GX 5$``$1 (Wijnands et al. 1998b), GX 340$`+`$0 (Jonker et al. 1998), Cyg X-2 (Wijnands et al. 1998a), and Sco X-1 (van der Klis et al. 1997). In all of these sources the HBO and two kilohertz QPOs have been observed simultaneously. In GX 349$`+`$2, the sixth originally identified Z source, no HBO has so far been detected simultaneously with the kilohertz QPOs (Kuulkers & van der Klis 1998; Zhang, Strohmayer, & Swank 1998). Therefore, we cannot include this source in the present study. We investigate the magnetospheric beat-frequency model in §2 and the Lense-Thirring precession model in §3, comparing their predictions with the available data. In §4 we summarize our conclusions and their implications for the properties of the neutron stars in the Z sources. Finally, we characterize the correlations between the various frequencies in a model-independent way in Appendix A, in order to facilitate comparison of the present data with future data or other theoretical models. 
## 2 THE MAGNETOSPHERIC BEAT-FREQUENCY INTERPRETATION ### 2.1 Model Predictions and Comparison with Observations In the magnetospheric beat-frequency model of the HBO (Alpar & Shaham 1985; Lamb et al. 1985), the centroid frequency $`\nu _{\mathrm{HBO}}`$ of the HBO is identified with the beat between the Keplerian frequency $`\nu _{\mathrm{K},m}`$ at the radius $`r_m`$ where the neutron star magnetic field couples strongly to the gas in the accretion disk and the spin frequency $`\nu _s`$ of the neutron star. The frequency of this beat is $$\nu _{\mathrm{MBF}}=\nu _{\mathrm{K},m}\nu _s.$$ (1) Ghosh & Lamb (1992) computed the dependence of $`\nu _{\mathrm{MBF}}`$ on the stellar mass and magnetic moment and the accretion rate for a variety of simple models of the inner accretion disk. They found that if the coupling radius is in an asymptotic region of the disk, then $$\nu _{\mathrm{MBF}}\nu _{\mathrm{K},0}M^\gamma \mu _{27}^\beta \left(\frac{\xi \dot{M}}{\dot{M}_\mathrm{E}}\right)^\alpha \nu _s,$$ (2) where $`\mu _{27}`$ is the magnetic moment of the neutron star in units of $`10^{27}`$ G cm<sup>3</sup>, $`M`$ is its gravitational mass in units of solar masses, $`\dot{M}`$ is the mass accretion rate, and $`\dot{M}_\mathrm{E}`$ is the Eddington critical mass accretion rate onto a neutron star of 10 km radius; the proportionality constant $`\nu _{\mathrm{K},0}`$ and the exponents $`\alpha `$, $`\beta `$, and $`\gamma `$ in equation (2) are different for different models of the inner accretion disk and are listed in Table 1. The dimensionless parameter $`\xi `$ describes the fraction of the mass flux through the inner disk that couples to the stellar magnetic field at $`r_m`$ and is introduced here to allow for the possibility that some of the gas in the disk does not couple to the stellar magnetic field at $`r_m`$ but instead penetrates to smaller radii, as required if the upper kilohertz QPO is an orbital frequency in the inner disk (Miller et al. 1998); $`\xi `$ may depend on the mass accretion rate. In the magnetospheric beat-frequency model of the HBO, the steep dependence of $`\nu _{\mathrm{HBO}}`$ on the mass accretion rate inferred from the EXOSAT data implies that $`\nu _{\mathrm{MBF}}`$ is small compared to $`\nu _{\mathrm{K},m}`$ and hence that the spin frequencies $`\nu _s`$ of the neutron stars in the Z sources are very close to, but less than, $`\nu _{\mathrm{K},m}`$ (Alpar & Shaham 1985; Lamb et al. 1995; Ghosh & Lamb 1992). Stated differently, the magnetospheric beat-frequency model of the HBO requires that the neutron stars in the Z sources be near magnetic spin equilibrium. Indeed, the 200–350 Hz spin frequencies predicted by the model (see Ghosh & Lamb 1992) are much larger than the $`20`$–50 Hz HBO frequencies, as required. The similarity of the Z-source spin frequencies predicted by the magnetospheric beat-frequency model to the spin frequencies inferred from the separation frequencies of the kilohertz QPOs in the Z sources lends further support to the model (Miller et al. 1998) and to its implication that the neutron stars in the Z sources are near magnetic spin equilibrium (White & Zhang 1997; Psaltis & Lamb 1998). 
If they are, then their spin frequencies are given by (Ghosh & Lamb 1979, 1992) $$\nu _s\omega _c\nu _{\mathrm{K},0}M^\gamma \mu _{27}^\beta \left[\frac{(\xi \dot{M})^\alpha }{(\dot{M}_\mathrm{E})^\alpha }\right],$$ (3) where $`\omega _c`$ is the critical fastness parameter and the angle brackets indicate an average over a time interval equal to the timescale on which the accretion torque changes the spin. Combining equations (2) and (3) and identifying $`\nu _{\mathrm{HBO}}`$ with $`\nu _{\mathrm{MBF}}`$, we find $$\frac{\nu _{\mathrm{HBO}}}{\nu _s}+1=\frac{1}{\omega _c}\left[\frac{(\xi \dot{M})^\alpha }{(\xi \dot{M})^\alpha }\right].$$ (4) Equation (4) shows that the magnetospheric beat-frequency model of the HBO predicts that $`\nu _{\mathrm{HBO}}/\nu _s+1`$ should be $`1/\omega _c`$ and hence only slightly larger than unity. The Z sources are thought to be accreting at near-Eddington accretion rates (see Lamb 1989; Hasinger & van der Klis 1989). The inner accretion disk in these sources is therefore expected to be radiation-pressure-dominated, in which case $`\alpha 0.2`$ (see Table 1). Hence, if $`\xi `$ depends only weakly on the instantaneous accretion rate $`\dot{M}`$, then $`\nu _{\mathrm{HBO}}/\nu _s+1`$ will also depend only weakly on $`\dot{M}`$ and possibly also on the magnetic field and mass of the neutron star, through the dependence of $`\omega _c`$ on these quantities (any such dependence is expected to be weak). According to the magnetospheric beat-frequency model, the HBO frequency $`\nu _{\mathrm{HBO}}`$ is related to the frequency $`\nu _2`$ of the upper kilohertz QPO only indirectly, through the dependence of both frequencies on the mass accretion rate. In all sources in which kilohertz QPOs have so far been discovered, $`\nu _2`$ increases with inferred mass accretion rate (see, e.g., van der Klis et al. 1996; Strohmayer et al. 1996; van der Klis 1998). Here we explore the consequences of the simple ansatz $`\nu _2=\nu _0(\dot{M}/\dot{M}_\mathrm{E})^\lambda `$, where $`\nu _0`$ and $`\lambda `$ are constants that are specific to each source and may depend on the mass and magnetic field strength of the neutron star. This relation, with $`\lambda 1`$, is consistent with kilohertz QPO observations of several atoll sources, provided that the observed countrate from an atoll source is proportional to the mass accretion rate (see, e.g., Strohmayer et al. 1996; Ford et al. 1997). In all the Z sources, $`\nu _2`$ is consistent with being $`1200`$ Hz when they are accreting at near-Eddington rates, which implies $`\nu _01200`$ Hz, independent of the expected modest differences in the masses and magnetic field strengths of these neutron stars. Using this simple ansatz, equation (2) can be written $$\nu _{\mathrm{HBO}}+\nu _s=A_1\nu _2^{\alpha /\lambda },$$ (5) where $$A_1\xi ^\alpha \nu _{\mathrm{K},0}M^\gamma \mu _{27}^\beta \nu _0^{\alpha /\lambda },$$ (6) and equation (4) becomes $$\frac{\nu _{\mathrm{HBO}}}{\nu _s}+1=A_2^1(\nu _2/\nu _0)^{\alpha /\lambda }$$ (7) where $$A_2\omega _c\left[\frac{(\xi \dot{M})^\alpha }{(\xi \dot{M}_\mathrm{E})^\alpha }\right].$$ (8) The inferred value of $`A_2`$ therefore provides an estimate of the critical fastness $`\omega _c`$. In order to test the relations (5) and (7) predicted by the magnetospheric beat-frequency model, simultaneous measurements of $`\nu _{\mathrm{HBO}}`$, $`\nu _2`$, and $`\nu _s`$ are needed. 
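To make the shape of relation (7) concrete, the following minimal sketch (an illustration, not the fitting code used for the analysis) evaluates the predicted HBO frequency for assumed parameter values: $`\alpha /\lambda \approx 0.2`$ and $`\nu _0\approx 1200`$ Hz as discussed above, and $`A_2\approx 0.8`$, close to the critical fastness inferred from the fits described below.

```python
# Hedged sketch of eq. (7): nu_HBO/nu_s + 1 = (1/A_2) * (nu_2/nu_0)^(alpha/lambda).
def hbo_freq_mbf(nu_2, nu_s, A_2=0.8, nu_0=1200.0, alpha_over_lambda=0.2):
    """Predicted HBO frequency (Hz) in the magnetospheric beat-frequency model."""
    return nu_s * ((nu_2 / nu_0) ** alpha_over_lambda / A_2 - 1.0)

# Example with assumed values: nu_2 = 850 Hz and nu_s = 300 Hz give roughly 50 Hz,
# in the range of the HBO frequencies observed in the Z sources.
print(hbo_freq_mbf(nu_2=850.0, nu_s=300.0))
```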
The HBO and kilohertz QPO frequencies are directly observed, but oscillations at the neutron star spin frequency have not yet been detected in the persistent emission of any Z source. However, comparisons of the frequencies of the two simultaneous kilohertz QPOs observed in the persistent emission of the atoll sources with the frequencies of the nearly coherent oscillations observed in these sources during type I X-ray bursts indicate that the neutron star spin frequency is nearly equal to the frequency separation between the two kilohertz QPOs (Strohmayer et al. 1996; Miller et al. 1998; Strohmayer et al. 1998; Psaltis et al. 1998; Miller 1999). Hence, for GX 17$`+`$2, GX 5$``$1, GX 340$`+`$0, and Cyg X-2 we set the spin frequency equal to the average frequency separation of the kilohertz QPOs. In Sco X-1, the frequency separation of the kilohertz QPOs is consistent with being constant at the lowest inferred accretion rates but decreases at higher rates (van der Klis et al. 1997). Sco X-1 is thought to be accreting at near-Eddington mass accretion rates when the frequency separation of the kilohertz QPOs decreases, and it is therefore plausible that this decrease is related to the effects of radiation forces on the dynamics of the accretion flow near the neutron star (see, however, Méndez et al. 1998; Psaltis et al. 1998). Hence, in plotting the Sco X-1 data, we set the spin frequency equal to the nearly constant frequency separation of its kilohertz QPOs at low inferred mass accretion rates. The spin frequencies we have adopted are listed in Table 2. Relation (5) describes adequately ($`\chi ^21.5`$ per degree of freedom) the dependence of the sum of the HBO and inferred spin frequencies on the frequency of the upper kilohertz QPO in the five Z sources in our sample, considered separately. Table 2 lists the best-fit parameters and their $`1\sigma `$ errors and Figure 1 compares the best-fit relations with the frequency data on each source. If $`\lambda 1`$, the power-law index $`\alpha `$ for all sources except Sco X-1 is $`0.2`$. This value is consistent with the expectation that the Z sources are accreting at near-Eddington rates and hence that the inner accretion disk is optically thick and radiation-pressure-dominated (see Table 1). For Sco X-1, however, a significantly weaker dependence of the HBO frequency on $`\dot{M}`$ is required, if $`\lambda `$ is independent of the mass accretion rate (but see Appendix A and Psaltis, Belloni, & van der Klis 1999). Figure 2 shows the quantity $`\nu _{\mathrm{HBO}}/\nu _s+1`$ plotted as a function of the upper kilohertz QPO frequency $`\nu _2`$ for the five Z sources in our sample. The frequency data on all the sources are consistent with a single, universal relation between $`\nu _{\mathrm{HBO}}`$, $`\nu _s`$, and $`\nu _2`$, as predicted by equation (7), when $`\nu _2`$ is $`<850`$ Hz. This relation is shown as a solid line in Figure 2. Figure 3 shows the confidence contours for the power-law index $`\alpha /\lambda `$ and the coefficient $`A_2`$ in relation (7) obtained by fitting this relation to all the data with $`\nu _2<850`$ Hz. Assuming that $`\nu _0`$ is $`1200`$ Hz, the best-fit value of $`A_2`$ gives a lower bound on the critical fastness $`\omega _c`$, because $`\xi \dot{M}`$ is expected to be a monotonically increasing function of $`\dot{M}`$ and hence $`(\xi \dot{M})^\alpha (\xi \dot{M}_\mathrm{E})^\alpha `$. 
If the magnetospheric beat-frequency model is the correct explanation of the HBO, then $`\omega _c`$ is $`0.8`$ for the magnetic field strengths and accretion rates of the Z sources. When $`\nu _2`$ is $`>850`$ Hz, the HBO frequencies of GX 17$`+`$2 are up to 2% higher than predicted by extrapolating the universal relation that holds at lower frequencies, whereas those of Sco X-1 are as much as 5% lower. This indicates that there is at least one other important parameter that varies with $`\nu _2`$. For example, the structure of the inner disk may change at high accretion rates, causing the exponent $`\lambda `$ to vary from source to source. This conjecture cannot be tested without a specific model for the variation in $`\lambda `$, because if $`\lambda `$ is chosen to reproduce the behavior of the data, relation (7) looses all predictive power. The magnetospheric beat-frequency model of the HBO requires that the neutron stars in all the Z sources be near magnetic spin equilibrium. The tight, universal correlation between the HBO, spin, and upper kilohertz QPO frequencies in all the Z sources when $`\nu _2<850`$ Hz is explained by the model if $`A_2\nu _0^{\alpha /\lambda }=\omega _c\nu _0^{a/\lambda }(\xi \dot{M})^\alpha /(\xi \dot{M}_\mathrm{E})^\alpha `$ is approximately the same in all of them. All the Z sources are thought to be accreting at very similar rates (comparable to the Eddington critical accretion rate; see, e.g., Lamb 1989; Hasinger & van der Klis 1989) and hence $`(\xi \dot{M})^\alpha /(\xi \dot{M}_\mathrm{E})^\alpha `$ is not expected to differ much from one to another. The critical fastness $`\omega _c`$ is expected to be comparable to unity and may be a universal constant for a given inner disk structure (Ghosh & Lamb 1979, 1992). If so, then $`\nu _0^{\alpha /\lambda }`$ is nearly the same in the five Z sources in our sample and hence the frequency of the upper kilohertz QPO is a good absolute measure of the mass accretion rate in these sources. Stated differently, the relation between the upper kilohertz QPO frequency and the accretion rate appears to be very similar in all the Z sources. ### 2.2 Discussion The analysis presented in §2.1 demonstrates that if (a) the neutron stars in the Z sources are spinning near their magnetic spin equilibrium rates, (b) the frequency separation between the upper and lower kilohertz QPOs is approximately equal to the neutron-star spin frequency, (c) the inner accretion disk in the Z sources is optically thick and radiation-pressure dominated, and (d) the upper kilohertz QPO frequency is proportional to the mass accretion rate through the inner disk, then the magnetospheric beat-frequency model is consistent with the observed behavior of the HBO frequency and, in particular, the tight, universal correlation (Figure 2) between the HBO and kilohertz QPO frequencies in all the Z sources in our sample. All of these assumptions are expected to be satisfied, as discussed in §2.1. As noted earlier, the magnetospheric beat-frequency model of the HBO is consistent with models for the upper kilohertz QPO that identify its frequency with an orbital frequency in the inner disk only if a small fraction of the accreting matter does not couple to the stellar magnetic field at the radius $`r_m`$ but instead remains in a geometrically thin Keplerian disk down to the radius responsible for the upper kilohertz QPO (i.e., $`\xi `$ must be less than unity; see van der Klis 1998; Miller et al. 1998; and Alpar & Yilmaz 1997 for discussions). 
In particular, the interpretation of both the HBO and the two simultaneous kilohertz QPOs as rotational beat phenomena requires that there be two distinct radii in the inner accretion disk at which beating of the neutron star spin frequency with the orbital frequency produces a QPO, as is the case, for example, in the sonic-point model (Miller et al. 1998). Assuming that the HBO is a magnetospheric beat-frequency phenomenon, we can use the observed HBO properties together with a general relation for the coupling radius to constrain the magnetic dipole moments of the neutron stars in the Z sources in a way that is largely independent of the structure of the inner accretion disk. In the Ghosh & Lamb (1979) model of disk-magnetosphere interaction, the radius $`r_m`$ at which the stellar magnetic field strongly couples to the gas in the accretion disk is given implicitly by (see Ghosh & Lamb 1991) $`r_m`$ $``$ $`\left({\displaystyle \frac{B_\varphi }{B_p}}\right)^{2/7}\left({\displaystyle \frac{\mathrm{\Delta }r}{r_m}}\right)^{2/7}\left({\displaystyle \frac{\mu ^4}{GMM_{}\xi ^2\dot{M}^2}}\right)^{1/7}`$ (9) $``$ $`3.3\times 10^6\left({\displaystyle \frac{B_\varphi }{B_p}}\right)^{2/7}\left({\displaystyle \frac{\mathrm{\Delta }r}{r_m}}\right)^{2/7}\xi ^{2/7}\mu _{27}^{4/7}M^{1/7}\left({\displaystyle \frac{\dot{M}}{\dot{M}_\mathrm{E}}}\right)^{2/7}\text{cm}`$ for any model of the inner accretion disk. Here $`B_\varphi /B_p`$ is the mean azimuthal magnetic pitch in the annulus of radial width $`\mathrm{\Delta }r/r_m`$ in the inner disk where the stellar field strongly interacts with gas in the disk. If the stellar magnetic field is too weak, it cannot couple strongly to the gas in the accretion flow well above the stellar surface and hence cannot generate magnetospheric beat-frequency oscillations. Hence in the magnetospheric beat-frequency model, the coupling radius $`r_m`$ must be larger than the neutron star radius $`R_{\mathrm{NS}}`$, which requires $$\mu _{27}1.3\xi ^{1/2}M^{1/4}\left(\frac{B_\varphi }{B_p}\right)^{1/2}\left(\frac{\mathrm{\Delta }r/r_m}{0.01}\right)^{1/2}\left(\frac{\dot{M}_{\mathrm{max}}}{\dot{M}_\mathrm{E}}\right)^{1/2}\left(\frac{R_{\mathrm{NS}}}{10^6\text{cm}}\right)^{7/4},$$ (10) where $`\dot{M}_{\mathrm{max}}`$ is the maximum mass accretion rate at which the HBO is detected. In deriving inequality (10) we have neglected the contributions of any higher multipole moments of the stellar magnetic field that may be present near the neutron star surface or may be induced by the electrical currents flowing in the disk (see Psaltis, Lamb, & Zylstra 1996 for a discussion). For the Keplerian frequency at the coupling radius to exceed the neutron star spin frequency, which is also required in the magnetospheric beat-frequency model (see Ghosh & Lamb 1979), the magnetic dipole moment must satisfy $$\mu _{27}10M^{1/4}\left(\frac{B_\varphi }{B_p}\right)^{1/2}\left(\frac{\mathrm{\Delta }r/r_m}{0.01}\right)^{1/2}\left(\frac{\nu _s}{300\text{Hz}}\right)^{7/6},$$ (11) where we have used the fact that $`\xi \dot{M}\dot{M}_\mathrm{E}`$. These upper and lower bounds on the magnetic dipole moment depend only very weakly on the neutron star mass. Figure 4 shows the resulting lower (eq. ) and upper (eq. ) bounds on the magnetic dipole moments of the neutron stars in the Z sources, as a function of the relative width $`\mathrm{\Delta }r/r_m`$ of the coupling region. 
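As a rough numerical illustration of the coupling radius and the two consistency conditions above, the following sketch evaluates the standard scaling $`r_m\propto \mu ^{4/7}M^{-1/7}\dot{M}^{-2/7}`$ implied by the first expression in equation (9), using the quoted fiducial normalization of $`3.3\times 10^6`$ cm, and then checks that $`r_m`$ exceeds an assumed 10 km stellar radius and that the Keplerian frequency at $`r_m`$ exceeds an assumed 300 Hz spin frequency. All numerical values here are assumptions chosen for illustration.

```python
import math

G = 6.674e-8          # cgs
M_SUN = 1.989e33      # g

def coupling_radius_cm(mu_27, m_solar, mdot_over_mdotE, xi=1.0, pitch=1.0, rel_width=1.0):
    """Sketch of the eq. (9) scaling for the coupling radius r_m (cm); factors default to unity."""
    return (3.3e6 * pitch**(2.0/7.0) * rel_width**(2.0/7.0) * xi**(-2.0/7.0)
            * mu_27**(4.0/7.0) * m_solar**(-1.0/7.0) * mdot_over_mdotE**(-2.0/7.0))

def kepler_freq_hz(m_solar, r_cm):
    """Keplerian orbital frequency at radius r."""
    return math.sqrt(G * m_solar * M_SUN / r_cm**3) / (2.0 * math.pi)

# Assumed illustrative values: mu_27 = 1, M = 1.4 M_sun, near-Eddington accretion.
r_m = coupling_radius_cm(mu_27=1.0, m_solar=1.4, mdot_over_mdotE=1.0)
print(r_m > 1.0e6)                       # coupling radius larger than a ~10 km star (cf. eq. 10)
print(kepler_freq_hz(1.4, r_m) > 300.0)  # nu_K(r_m) above an assumed ~300 Hz spin (cf. eq. 11)
```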
We can obtain an estimate of the magnetic dipole moment of the Z sources by using the value of $`A_1`$ obtained by fitting relation (5) to the frequency data and the optically-thick, radiation-pressure-dominated model of the inner disk. The result is $$\mu _{27}(0.81.0)\xi ^{0.3}\left(\frac{\nu _0}{1200\text{Hz}}\right)^{0.3}\left(\frac{M}{2M_{}}\right)^{0.9},$$ (12) where $`\xi `$ may depend on the magnetic field strength. Relation (12) shows that the HBO frequencies predicted by the magnetospheric beat-frequency model are consistent with the HBO frequencies observed if $`\mu _{27}1`$, which implies that the dipole magnetic fields of the Z sources have field strengths at the magnetic poles of $`10^9`$ G for a 10 km neutron star. (Note that the estimated dipole magnetic moment depends only weakly on the unknown parameters $`\xi `$ and $`\nu _0`$.) The relative width $`\mathrm{\Delta }r/r_m`$ of the annulus where the stellar magnetic field strongly couples to the gas in the disk is expected to be greater than $`0.01`$ (Ghosh & Lamb 1992). Its value can be bounded below using the observed FWHM of the HBO (see also Alpar & Shaham 1985; Lamb et al. 1985). Assuming that all other QPO broadening mechanisms—such as lifetime broadening—are negligible, we can estimate the relative width of the annulus in the accretion disk in which the interaction at the beat frequency affects the X-ray luminosity, from the relative width of the HBO peak in power spectra. The width of this annulus is necessarily smaller than the width $`\mathrm{\Delta }r`$ of the layer where the magnetic field strongly interacts with the gas in the disk, and hence $$\frac{\mathrm{\Delta }r}{r_m}\frac{2}{3}\left(\frac{\delta \nu _{\mathrm{HBO}}}{\nu _{\mathrm{HBO}}+\nu _s}\right)=0.02\left(\frac{\delta \nu _{\mathrm{HBO}}}{10\text{Hz}}\right)\left(\frac{350\text{Hz}}{\nu _{\mathrm{HBO}}+\nu _s}\right),$$ (13) where $`\delta \nu _{\mathrm{HBO}}`$ is the FWHM of the HBO. Figure 4 displays this bound on $`\mathrm{\Delta }r/r_m`$ and the constraint it imposes on the dipole moment of the stellar magnetic field. Figure 4 shows that these additional physical bounds on the dipolar magnetic fields of the Z sources derived from the magnetospheric beat-frequency interpretation of the HBO are consistent both with each other and with the field strengths $`10^9`$ G estimated in equation (12) and by modeling the X-ray spectra of the Z sources (Psaltis et al. 1995; Psaltis & Lamb 1998). ## 3 THE LENSE-THIRRING PRECESSION INTERPRETATION ### 3.1 Model Predictions and Comparison with Observations In the nodal (Lense-Thirring and tidal) precession model of the HBO (Stella & Vietri 1998), a narrow ring or clumps of gas are assumed to be in a tilted orbit at the radius responsible for the upper kilohertz QPO and to precess with the frequency of a test particle in such an orbit (Stella & Vietri 1998). Alternatively, if the disk ends at this radius, one of the many weakly damped global precession modes localized near the inner edge of the disk (Marković & Lamb 1998) may be excited. 
In the weak field limit, the nodal precession frequency of an infinitesimally tilted orbit at the radius where the frequency of a circular Keplerian orbit is $`\nu _\mathrm{K}`$ is (see Stella & Vietri 1998; Morsink & Stella 1999) $`\nu _{\mathrm{NP}}`$ $``$ $`13.2\left({\displaystyle \frac{I_{45}}{M}}\right)\left({\displaystyle \frac{\nu _s}{300\text{Hz}}}\right)\left({\displaystyle \frac{\nu _\mathrm{K}}{1\text{kHz}}}\right)^2`$ (14) $`4.7\left({\displaystyle \frac{I_{45}}{M^{5/3}}}\right)\left({\displaystyle \frac{\eta }{0.01}}\right)\left({\displaystyle \frac{\nu _s}{300\text{Hz}}}\right)^2\left({\displaystyle \frac{\nu _\mathrm{K}}{1\text{kHz}}}\right)^{7/3},`$ where $`\eta (A/I_{zz})(\nu _s/300\mathrm{Hz})^2`$ in terms of $`A`$, the coefficient of the quadrupole moment of the gravitational field, and $`I_{\mathrm{zz}}`$, the neutron star moment of inertia with respect to its spin axis. Equation (14) is derived by expanding the full expression for $`\nu _{\mathrm{NP}}`$ in a power series in $`\nu _s`$ and retaining only terms up to second order. Lense-Thirring precession.—If the effects of the quadrupole component of the star’s gravitational field are negligible, the localized warping modes of the inner disk will precess with a frequency close to the Lense-Thirring frequency of a test particle (Marković & Lamb 1998), which is given by the first term in equation (14). Identifying the centroid frequency of the HBO with the Lense-Thirring frequency at the radius where the orbital frequency is equal to the frequency $`\nu _2`$ of the upper kilohertz QPO gives (Stella & Vietri 1998) $$\nu _{\mathrm{HBO}}=\frac{8\pi ^2I}{c^2M}\nu _s\nu _2^2=13.2\left(\frac{I_{45}}{M}\right)\left(\frac{\nu _s}{300\text{Hz}}\right)\left(\frac{\nu _2}{1\text{kHz}}\right)^2\text{Hz},$$ (15) where $`I10^{45}I_{45}`$ g cm<sup>2</sup> is the moment of inertia of the neutron star and $`\nu _2`$ is the orbital frequency at the radius responsible for the upper kilohertz QPO. The X-ray visibility as well as the excitation and damping of the precession modes of the inner disk have not yet been addressed (see Marković & Lamb 1998 for a discussion). Equation (15) predicts a relation between the HBO frequency, the spin frequency of the neutron star, and the frequency $`\nu _2`$ of the upper kilohertz QPO that depends only on the structure of the neutron star, through the ratio $`I/M`$. Figure 5 shows the HBO frequencies observed in the five Z sources in the present sample, plotted against the frequencies of their upper kilohertz QPOs. Separate fits of equation (15) to the data on the individual sources, using the neutron star spin frequency inferred from the frequency separation of the kilohertz QPOs and treating $`I/M`$ as a free parameter, give values of $`\chi ^2`$ per degree of freedom of order unity for four of the Z sources but $`4`$ for Sco X-1. There is no other freedom in relation (15) and hence pure Lense-Thirring precession must be rejected as an explanation of the HBO. Moreover, as Stella & Vietri (1998) noticed (see also Wijnands et al. 1998a; Jonker et al. 1998), the coefficients of $`\nu _s\nu _2^2`$ required to fit the data give values of $`I_{45}/M4`$, which is $`4`$–5 times larger than the largest ratios given by realistic equations of state for stars of any mass and about 2.5 times larger even than the largest ratio given by the extremely stiff relativistic mean-field equation of state L (see Table 3). 
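The size of this discrepancy can be seen with a short numerical sketch of equation (15); the value of $`I_{45}/M`$ used here is an assumption, chosen near the largest ratios obtained from realistic equations of state.

```python
def lense_thirring_hbo_hz(I45_over_M, nu_s_hz, nu_2_hz):
    """Eq. (15): nodal (Lense-Thirring) precession frequency identified with the HBO, in Hz."""
    return 13.2 * I45_over_M * (nu_s_hz / 300.0) * (nu_2_hz / 1000.0) ** 2

# Assumed illustrative values: I_45/M ~ 1 (near the stiffest realistic equations of state),
# nu_s = 300 Hz, nu_2 = 1000 Hz. The result, ~13 Hz, is roughly 4-5 times smaller than the
# ~50-60 Hz HBO frequencies observed at comparable nu_2, hence the required I_45/M ~ 4.
print(lense_thirring_hbo_hz(I45_over_M=1.0, nu_s_hz=300.0, nu_2_hz=1000.0))
```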
In the Lense-Thirring precession model of the HBO, the relation between $`\nu _{\mathrm{HBO}}/\nu _s`$ and the upper kilohertz QPO frequency depends only on the mass of the star and the equation of state of neutron-star matter (see eq. ). Data from similar neutron stars should therefore follow similar relations. Figure 6 shows how the ratio $`\nu _{\mathrm{HBO}}/\nu _s`$ scales with the frequency $`\nu _2`$ of the upper kilohertz QPO; according to the Lense-Thirring precession model, this ratio should scale as $`\nu _2^2`$. Figure 6 shows that the data with frequencies $`\nu _2<850`$ Hz are consistent ($`\chi _{\mathrm{d}.\mathrm{o}.\mathrm{f}}^2\approx 1.1`$) with a single relation of the form (15). Again, however, the coefficient given by the fit requires neutron stars with $`I_{45}/M\approx 4`$, which is implausibly large. Furthermore, the points that have $`\nu _2>850`$ Hz are inconsistent with the relation of the form (15) that fits the points with lower values of $`\nu _2`$.

Effect of classical precession.—Stella & Vietri (1998; see also the extended discussion in Morsink & Stella 1999) suggested that the flattening of the $`\nu _{\mathrm{HBO}}`$–$`\nu _2`$ correlation at high $`\nu _2`$ might be caused by the increasing importance, as the disk penetrates closer to the star, of the classical precession caused by the rotation-induced quadrupole component of the star’s gravitational field. This precession is retrograde but smaller than the prograde gravitomagnetic precession and therefore tends to reduce the nodal precession frequency. We can test this suggestion quantitatively using the data for Sco X-1 and GX 17$`+`$2, which deviate most strongly from the relation of the form (15) that fits the points with lower values, $`\nu _2<850`$ Hz. If the HBO is caused by nodal precession and classical precession is important, then $`\nu _{\mathrm{HBO}}/\nu _2^2`$ should decrease linearly with increasing $`\nu _2^{1/3}`$ (cf. Stella & Vietri 1998), because relation (14) can be rewritten as $$\left(\frac{\nu _{\mathrm{HBO}}}{1\text{Hz}}\right)\left(\frac{\nu _2}{1\text{kHz}}\right)^{-2}=13.2\left(\frac{I_{45}}{M}\right)\left(\frac{\nu _s}{300\text{Hz}}\right)-4.7\left(\frac{I_{45}}{M^{5/3}}\right)\left(\frac{\eta }{0.01}\right)\left(\frac{\nu _s}{300\text{Hz}}\right)^2\left(\frac{\nu _2}{1\text{kHz}}\right)^{1/3}.$$ (16) Figure 7 plots $`\nu _{\mathrm{HBO}}/\nu _2^2`$ against $`\nu _2^{1/3}`$ for Sco X-1 and also shows the best-fit straight line with slope $`1/3`$, which has $`\chi _{\mathrm{d}.\mathrm{o}.\mathrm{f}.}^2\approx 1`$. The fact that the data in Figure 7 can be fit satisfactorily by a straight line with slope $`1/3`$ is not strong evidence for this scaling, because the range of measured $`\nu _2^{1/3}`$ values is very narrow. However, we can use the best-fit value of the intercept of the straight line with the vertical axis to estimate the value of the parameter $`\eta `$ that characterizes the quadrupole moment of the gravitational field and to estimate $`I/M`$. The results are $$\eta _{\mathrm{Sco}}\approx 2.3\times 10^{-2}M^{2/3}\left(\frac{\nu _s}{300\text{Hz}}\right)^{-1}$$ (17) and $$\left(\frac{I_{45}}{M}\right)\left(\frac{\nu _s}{300\text{Hz}}\right)=20.8\pm 2.1.$$ (18) The value of $`I/M`$ required by equation (18) is $`\sim 20`$ times larger than the largest values given by realistic neutron-star equations of state (see Table 3).
The deviation of the GX 17$`+`$2 data from the power-law relation (15) at high frequencies (see Fig. 6) requires that the classical precession frequency be negligible for $`\nu _2<850`$ Hz but comparable to the Lense-Thirring precession frequency at slightly larger values of $`\nu _2`$. This is not possible, because the exponents of $`\nu _2`$ in the Lense-Thirring and classical precession terms of equation (14) are too similar. The data are therefore inconsistent with the predictions of the simple nodal precession model. ### 3.2 Discussion In the Lense-Thirring precession model of the HBO, the HBO frequency, the spin frequency of the neutron star, and the frequency of the upper kilohertz QPO are related by equation (15). As discussed in the previous section, the data are consistent with this relation when the frequency of the upper kilohertz QPO is $`<850`$ Hz, but the inferred value of $`I/M`$ is implausibly large (see also Stella & Vietri 1997, 1998; Morsink & Stella 1999). Aside from the rather unlikely possibility that neutron stars have ratios of $`I/M`$ that are four times as large as the largest values for stellar models constructed with realistic equations of state, three other possibilities have been suggested for reducing this large discrepancy. First, the frequency difference $`\mathrm{\Delta }\nu `$ between the kilohertz QPOs might be equal to half the neutron star spin frequency $`\nu _s`$ rather than equal to it. This is very unlikely in any beat-frequency model of the kilohertz QPOs, because it would require a special direction that rotates with the neutron star but affects the inner accretion disk only once every two beat periods. However, $`\mathrm{\Delta }\nu =\frac{1}{2}\nu _s`$ appeared possible, given the initial analysis of the data taken during the type I X-ray bursts of 4U 1636$``$536 (Zhang et al. 1997; Strohmayer et al. 1998), which showed a strong oscillation at about 580 Hz, approximately twice the frequency separation of the two kilohertz QPOs. However, further analysis of this data by Miller (1999) using a matched-waveform filtering technique has revealed the presence of a weak coherent oscillation at about 290 Hz, approximately equal to the frequency separation of the two kilohertz QPOs. Thus, it now appears very unlikely that the spin frequencies of these neutron stars are twice the frequency separation of their kilohertz QPOs. Second, the observed HBO frequencies and their second harmonics might represent the second and fourth harmonics of the fundamental Lense-Thirring frequency (15), rather than the first and second harmonics. Indeed, a precessing circular orbit has a two-fold symmetry that could, in principle, produce even-order harmonics that are stronger than the odd-order harmonics. Moreover, power-density spectra of the Z sources show a relatively strong, broad-band noise component at frequencies comparable to the ones predicted by relation (15). This so-called low-frequency noise (Hasinger & van der Klis 1989) might inhibit detection of the fundamental of a low-frequency precession frequency (see, e.g., Fig. 6a of Kuulkers et al. for peaked features in the low-frequency noise component of GX 5$``$1). Determination of the upper limits on the amplitudes of any QPOs at these frequencies would significantly constrain this possibility. 
Note, however, that even if the HBO and its overtone are the second and fourth harmonics of the precession frequency, the $`I/M`$ ratios required to explain the HBO observations would still be a factor $`2`$ larger than predicted by realistic neutron-star equations of state. A further difficulty with the Lense-Thirring precession interpretation of the HBO is that the observed correlation between the HBO and kilohertz QPO frequencies is significantly different from what is predicted by relation (15) when the frequency of the upper kilohertz QPO is $`>850`$ Hz. As demonstrated in §3.1, this difference cannot be explained by classical precession; nor can it be explained by strong-field corrections to relation (15) (Stella & Vietri 1998). A third possibility is that radiation forces increase the ratio of $`\nu _{\mathrm{NP}}`$ to $`\nu _2`$ by the factor $`2`$–5 required to bring it into agreement with the observed HBO and upper kHz QPO frequencies. The Z sources are thought to be accreting at near-Eddington mass accretion rates when the frequencies of the upper kilohertz QPOs are comparable to $`1`$ kHz (see, e.g., Psaltis et al. 1995). Hence radiation forces, which were neglected in equation (14), almost certainly are important. At near-critical luminosities, both orbital and nodal precession frequencies can be altered by large factors compared to their values in the absence of radiation; hence radiation forces might possibly explain the large discrepancy between the observed frequencies of the HBO and the frequencies predicted by the nodal precession model. An explanation in terms of the combined effects of Lense-Thirring precession and radiation forces would, however, require the physically implausible result that radiation forces leave the variation with $`\nu _2`$ basically unchanged while increasing the ratio of $`\nu _{\mathrm{NP}}`$ to $`\nu _2`$ by a factor $`2`$–5. Such an explanation would also require that the QPO peaks not be significantly broadened by the radiation drag force at the same time that radiation forces are strong enough to change the orbital and precession frequencies by a factor $`2`$–5. The HBO peaks in Sco X-1, for example, have fractional widths $`\delta \nu /\nu 0.5`$ even when the inferred accretion rate is near the Eddington critical rate (van der Klis et al. 1997). More fundamentally, if radiation forces do change the orbital and precession frequencies of gas accreting onto the Z sources by large factors, as they may well do, the observed correlation between the HBO and upper kilohertz QPO frequencies would be explained primarily by the effect of the radiation forces and not by the gravitomagnetic torque. ## 4 CONCLUSIONS In §2 and §3 we have studied in detail the behavior of the HBO frequencies observed in five Z sources and in particular their correlation with the frequencies of the kilohertz QPOs, comparing the observed behavior with the behaviors predicted by the magnetospheric beat-frequency (Alpar & Shaham 1985; Lamb et al. 1985) and Lense-Thirring precession (Stella & Vietri 1998) models of the HBO. 
In §2 we showed that the magnetospheric beat-frequency model is consistent with the observed correlation between the HBO and upper kilohertz QPO frequencies in the five Z sources studied here if, as expected, the neutron stars in these sources are spinning near their magnetic spin equilibrium rates, the frequency separation between the upper and lower kilohertz QPOs is approximately equal to the neutron-star spin frequency, the inner part of their accretion disks are optically thick and radiation-pressure-dominated, and the frequency of the upper kilohertz QPO is approximately proportional to the mass accretion rate. The model predicts a universal relation between the horizontal branch oscillation, stellar spin, and upper kilohertz QPO frequencies that agrees well with the data on five Z sources. The spin rates predicted by the model are consistent with the range of the spin frequencies of the Z sources inferred from the frequency separation of their kilohertz QPOs if they are all accreting at similar, near-critical rates and all have $`10^9`$$`10^{10}`$ G dipole magnetic fields. Such magnetic fields are consistent with models of Z-source X-ray spectra. The inferred value of the critical fastness for the accretion rates and magnetic field strengths of the Z sources is $`0.8`$. If the frequency of the upper kilohertz QPO is an orbital frequency in the accretion disk, the magnetospheric beat-frequency model requires that a fraction of the accreting gas does not couple strongly to the stellar magnetic field until it has penetrated to within a few kilometers of the neutron star surface. In §3, we showed that the trend of the correlation between the HBO frequency and the upper kilohertz QPO frequency observed at upper kilohertz QPO frequencies $`\nu _2<850`$ Hz agrees with the trend predicted by the Lense-Thirring precession model. However, the observed trend is inconsistent with the model for $`\nu _2>850`$ Hz. The observed magnitudes of the HBO frequencies are $`4`$–5 times smaller than the magnitudes predicted by the Lense-Thirring precession model for realistic neutron-star equations of state. Thus, in order to be consistent with the observed magnitudes, either $`I/M`$ must be $`4`$–5 times larger than expected or the principal frequency of the X-ray oscillation generated by nodal precession must be $`4`$–5 times the nodal precession frequency. We thank Greg Cook for making available the numerical code used to compute the suite of neutron star models used in this work. DP acknowledges Charles Gammie for many useful discussions on the physics of warped accretion disks. DP also thanks L. Stella, V. Kalogera, R. Narayan, A. Esin, C. Gammie, K. Menou, and E. Quataert for discussions on the observational evidence for the Lense-Thirring effect. This work was supported in part by a post-doctoral fellowship of the Smithsonian Institute (DP), by the Netherlands Foundation for Research in Astronomy (ASTRON) grant 781-76-017 (RW, JH, PJ, MvdK), by NSF grant AST 96-18524 (FKL), by NASA grants NAG 5-2925 (FKL), NAG 5-2868 (MCM), NAG 5-3269 and NAG 5-3271 (JvP), and by several RXTE observing grants. WHGL gratefully acknowledges support from NASA. MvdK gratefully acknowledges the Visiting Miller Professor Program of the Miller Institute for Basic Research in Science (UCB). 
APPENDIX ## Appendix A AN EMPIRICAL DESCRIPTION OF THE CORRELATION OBSERVED BETWEEN THE HBO AND UPPER KILOHERTZ QPO FREQUENCIES In this appendix we show that the correlations observed between the HBO frequency $`\nu _{\mathrm{HBO}}`$, the upper kilohertz QPO frequency $`\nu _2`$, and the frequency separation $`\mathrm{\Delta }\nu `$ between the two kilohertz QPOs can be characterized by simple power-law relations among the frequencies involved. Our purpose is to facilitate comparison of the present data with future data (see, e.g., Psaltis et al. 1999) or other theoretical models. The frequency correlations in the five Z sources in our sample can be described adequately by the power-law relation $$\nu _{\mathrm{HBO}}=13.2a_1\left(\frac{\nu _2}{1\text{kHz}}\right)^{b_1}\mathrm{Hz},$$ (A1) where the constants $`a_1`$ and $`b_1`$ are different for each source. The confidence contours obtained by fitting this relation to the measured HBO and upper kilohertz QPO frequencies of the five Z sources in the present sample are shown in Figure 8. The power-law index that describes the Sco X-1 data is significantly smaller than the index that describes the data on the other four sources. The best-fit relations for each source are the dashed lines shown in Figure 5. For all the Z sources except Sco X-1, the best-fit value of the parameter $`a_1`$ is approximately proportional to the spin frequency inferred from the frequency separation of the two kilohertz QPOs (see Fig. 6). Indeed, when the upper kilohertz QPO frequency is $`<850`$ Hz, the correlation between $`\nu _{\mathrm{HBO}}`$, $`\nu _s`$, and $`\nu _2`$ is described adequately by the relation $$\nu _{\mathrm{HBO}}=13.2a_2\left(\frac{\nu _s}{300\text{Hz}}\right)\left(\frac{\nu _2}{1\text{kHz}}\right)^{b_2},$$ (A2) with $`a_24.6`$ and $`b_21.8`$. The confidence contours obtained by fitting this relation to the Sco X-1 points and to the points on the other four sources for which $`\nu _2<850`$ Hz are shown in Figure 9. As Figure 6 shows, the frequency correlation is significantly flatter when $`\nu _2`$ is $`>850`$ Hz. In Sco X-1, the HBO and kilohertz QPOs were simultaneously detected mostly when it was on the normal branch. In the other four Z sources, the HBO and kilohertz QPOs were simultaneously detected mostly when they were on their horizontal branches. The transition from the horizontal to the normal branch is thought to take place when the mass accretion rate increases to within a few percent of the Eddington critical rate (see Lamb 1989; Psaltis et al. 1995). If so, the resulting change in the accretion flow pattern (see Lamb 1989) might be responsible for the different dependences of the HBO frequency $`\nu _{\mathrm{HBO}}`$ and the instantaneous frequency separation $`\mathrm{\Delta }\nu `$ of the two kilohertz QPOs on the upper kilohertz QPO frequency $`\nu _2`$ in Sco X-1, compared to the dependences in the other four Z sources in our sample. The ratio of $`\nu _{\mathrm{HBO}}`$ to $`\mathrm{\Delta }\nu `$ in Sco X-1 increases more steeply with $`\nu _2`$ than does the ratio of $`\nu _{\mathrm{HBO}}`$ to the (constant) inferred spin frequency $`\nu _s`$. This is demonstrated most clearly by the correlation plots shown in Figure 10. (For all the Z sources in the sample except Sco X-1, $`\mathrm{\Delta }\nu `$ is consistent with being constant or with varying in the same way as it does in Sco X-1 \[Psaltis et al. 1998; see also Wijnands et al. 1997, 1998b; Jonker et al. 1998\].
This is true mostly because of the relatively large uncertainties in the measured kilohertz QPO frequencies.) The dependence of $`\nu _{\mathrm{HBO}}/\mathrm{\Delta }\nu `$ on $`\nu _2`$ in Sco X-1 is more consistent with the behavior seen in the other four Z sources and suggests that we plot $`\nu _{\mathrm{HBO}}/\mathrm{\Delta }\nu `$ against $`\nu _2`$ for all five of the Z sources in our sample. The result is shown in Figure 11. In this plot we have included only points derived from simultaneous observations of the HBO and kilohertz QPO frequencies. The larger scatter of the points in Figure 11 at $`\nu _2<850`$ Hz compared to the scatter of the points in Figure 7 corresponding to the same values of $`\nu _{\mathrm{HBO}}`$ and $`\nu _2`$ is caused by the large uncertainties in $`\mathrm{\Delta }\nu `$. The points plotted in Figure 11 are consistent with power-law relations of the form $$\nu _{\mathrm{HBO}}=13.2a_3\left(\frac{\mathrm{\Delta }\nu }{300\text{Hz}}\right)\left(\frac{\nu _2}{1\text{kHz}}\right)^{b_3}.$$ (A3) The confidence contours obtained by fitting this relation to the data on each of the sources except Cyg X-2 are shown in Figure 12. Cyg X-2 was not included because its HBO and kilohertz QPO frequencies were measured simultaneously only once. These contours show that a single universal relation of the form (A3) with $`a_3=4.2`$ and $`b_3=1.6`$ is consistent with the data on all the sources except Sco X-1. In fact, $`b_3=1.6`$ is consistent with all the data. The Sco X-1 and GX 17$`+`$2 data have relatively small uncertainties and give best-fit coefficients $`a_3`$ that are slightly but significantly different. The uncertainties in the data on GX 5$``$1 and GX 340$`+`$0 are sufficiently large that their contours allow values of $`a_3`$ that are consistent with either the Sco X-1 or the GX 17$`+`$2 value. The surprising correlation between the HBO frequency, the frequency of the upper kilohertz QPO, and the instantaneous frequency separation of the kilohertz QPOs shown in Figure 11 may be coincidental. The relatively large uncertainties in the currently measured kilohertz QPO frequencies in all the Z sources except Sco X-1 prevent us from drawing any firm conclusions about the significance of this correlation. However, the strikingly similar relation between the lower and upper kilohertz QPO frequencies in all LMXBs that show kilohertz QPOs (Psaltis et al. 1998) together with the correlation shown in Figure 11 suggests that the varying frequency separation of the kilohertz QPOs in Sco X-1 is a general property of the kilohertz QPOs and is related, directly or indirectly, to the frequency of the HBO. Additional data are needed to test directly this conjecture (see Psaltis et al. 1999 for an alternative possibility).
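As a practical illustration of how relation (A1) can be applied to new measurements, the fit reduces to linear least squares in log space. The short sketch below (Python) uses invented frequency pairs in place of the measured values, which are available only in the figures and references cited above; none of the numbers in the arrays are data from this work.

```python
import numpy as np

# Fit nu_HBO = 13.2 * a1 * (nu_2 / 1 kHz)^b1  [relation (A1)] by linear least
# squares in log space.  The frequency pairs below are invented placeholders
# standing in for the measured values of a single Z source.
nu2 = np.array([650.0, 700.0, 750.0, 800.0, 850.0])   # upper kilohertz QPO frequencies (Hz)
nu_hbo = np.array([21.0, 25.0, 30.0, 35.0, 41.0])     # HBO frequencies (Hz)

b1, log_a1 = np.polyfit(np.log(nu2 / 1.0e3), np.log(nu_hbo / 13.2), 1)
print(f"a_1 = {np.exp(log_a1):.2f},  b_1 = {b1:.2f}")
```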
# Electric field dependent structural and vibrational properties of the Si(100)-H(2×1) surface and its implications for STM induced hydrogen desorption ## I INTRODUCTION STM induced desorption of hydrogen (H) from the monohydride Si(100) surface offers the possibility of lithography with atomic resolution. Investigations of the desorption mechanism have established the dependence of the desorption rate on the bias voltage, tunnel current and H isotope. At high positive biases, $`V_b>4`$ V, the experimental results are consistent with electron induced desorption due to direct excitation of the Si-H bond by a single electron. At negative and low positive biases the desorption rates show power-law dependencies on the electron current consistent with a multi-electron process, and the measured desorption rates are in quantitative agreement with first-principles calculations. The desorption by multi-electron scattering is only possible because the H stretch mode has a long lifetime. The long lifetime is a result of the vibrational quantum being too low to couple to electron-hole excitations while lying well above the Si phonon spectrum, so the mode can only transfer energy to the substrate via a multi-phonon process. At room temperature experimental estimates of the lifetime due to this process are $`\tau 10`$ ns. However, a local excitation is not an eigenmode and will decay into a H surface phonon by a coherent process. This decay is several orders of magnitude faster than multi-phonon energy relaxation, and must therefore be included in the theoretical models. It has been proposed by Persson and Avouris that the vibrational Stark shifts due to the electric field from the tip can localize vibrational modes below the tip. The localized modes may still transfer energy laterally by incoherent diffusion (the Förster mechanism) but it was found that this decay channel is also reduced by the Stark shifts. However, the work assumed that the electric field from the STM tip is localized on a single H atom below the tip, and this is not the case for realistic tip geometries. In this work we present results for the vibrational properties of the Si(100)-H(2$`\times `$1) surface in the presence of the electric field from a more realistic model for the STM tip. The tip is described by a sphere of radius $`R_t=500`$ Å with a protrusion of atomic dimensions, and we determine the electric field by solving Poisson’s equation numerically. To obtain the effect on the H vibrations we set up a phonon Hamiltonian with parameters obtained from a first principles calculation of the vibrational properties of the Si(100)-H(2$`\times `$1) surface in an external electric field. We find that the electric field does give rise to a localized vibrational state below the tip; however, its lifetime is very short (10 ps) due to incoherent exciton motion. We also find that the anharmonicity of the Si-H bond potential reduces the lateral energy transfer of higher excitations ($`n>1`$) of the Si-H bond. We present first-principles calculations of the desorption rate taking this effect into account, and find that two-phonon excitations play an important role in the desorption process. The organization of the paper is the following. In Section II we describe the first principles method which in Section II A is used to calculate the zero field atomic structure and Si-H stretch frequencies of the Si(100)-H(2×1) surface. The electric field dependence of the frequencies is calculated in Section II B.
In Section III we introduce a simple dipole-dipole interaction model for the Si-H stretch phonon band, and use it to find localized vibrational states in the presence of electric fields from different STM tip geometries. In Section IV we calculate the lifetimes of the localized states due to incoherent lateral diffusion. In Section V the lifetimes are used to model STM induced desorption. Section VI summarizes the results. ## II Structure and vibrational properties of the Si(100)-H(2$`\times `$1) surface. In this section we calculate the vibrational frequencies and the dipole-dipole coupling matrix elements of the H vibrations on the Si(100)-H(2$`\times `$1) surface. In subsection A we present calculations for the unperturbed Si(100)-H(2$`\times `$1) surface, and the vibrational and structural shifts due to an external planar field are calculated in subsection B. The first principles calculations are based on density functional theory within the Generalized Gradient Approximation (GGA) for the exchange-correlation energy. Since we only consider filled shell systems, the calculations are all non-spin-polarized. Ultra-soft pseudopotentials constructed from a scalar-relativistic all-electron calculation are used to describe H and silicon (Si). The wave functions are expanded in a plane-wave basis set with a kinetic-energy cutoff of 20 Ry, and with this choice absolute energies are converged better than $`0.5`$ mRy/atom. With this approach we find a Si lattice constant of 5.47(5.43)Å and bulk modulus of 0.89(0.97) Mbar (parentheses show experimental values). For the H<sub>2</sub> molecule we obtain a bond length of 0.754(0.741)Å, a binding energy (including zero point motion) of 4.22(4.52) eV and a vibrational frequency of 4404(4399)cm<sup>-1</sup>. Generally the comparison with experiment is excellent, and similar theoretical values have been found in other studies using the GGA. ### A Zero field properties To model the monohydride Si(100) surface at zero field we use a (2$`\times `$1) slab with 12 Si atoms and 6 H atoms. The atoms at the bottom surface are bulk-like, and their dangling bonds are saturated with H atoms. The two surfaces are separated by a 7.5 Å vacuum region, and we use the dipole correction in order to describe the different workfunction of the two surfaces. The surface is insulating and we use 2 $`k`$-points in the irreducible part of the Brillouin-zone (BZ) for the BZ integrations. Test calculations with denser meshes show that BZ integration errors are negligibly small. Figure 1 shows the atomic structure after relaxation of the H atoms and the 4 upper Si layers. Positions of the H atoms and the two upper Si layers compare well with other studies. For the third and fourth layer Si atoms we find a small asymmetric relaxation, and to our knowledge such relaxations have not been included in previous studies. To obtain the dynamical matrix of the Si-H stretch frequency we make H displacements of $`\pm 0.07,0.14,\mathrm{},0.35`$ Å in the Si-H bond direction and fit a sixth order polynomial to the data points. Since the Si-H stretch frequencies are four times higher than Si bulk frequencies, the Si substrate acts like a solid wall, and we therefore use the H mass $`\mathrm{M}_\mathrm{H}`$ when calculating frequencies from the dynamical matrix. With this approach we have calculated the frequency of the symmetric stretch $`\omega _\mathrm{s}`$ and the asymmetric stretch $`\omega _\mathrm{a}`$ at three high symmetry points in the surface Brillouin zone.
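The frequency-extraction step just described (a sixth-order polynomial fit to the displaced-H total energies, with the curvature converted to a frequency using the H mass) can be illustrated with a short numerical sketch. Here the energies are generated from the Morse parameters quoted in the following paragraphs rather than taken from the actual first-principles data, so the resulting number is only indicative.

```python
import numpy as np

# Illustrative reconstruction of the frequency extraction.  The "data" are
# synthetic Morse energies built from the parameters quoted in Sec. II A
# (E_d = 3.4 eV, alpha = 1.57 1/Angstrom), not the DFT total energies.
E_d, alpha = 3.4, 1.57
dz = np.linspace(-0.35, 0.35, 11)                              # H displacements (Angstrom)
E = E_d * (np.exp(-2 * alpha * dz) - 2 * np.exp(-alpha * dz))  # Morse energies (eV)

poly = np.polyfit(dz, E, 6)                       # sixth-order polynomial fit
k = np.polyval(np.polyder(poly, 2), 0.0)          # curvature at the minimum (eV/Angstrom^2)

k_SI = k * 1.602e-19 / 1e-20                      # eV/Angstrom^2 -> J/m^2
m_H = 1.008 * 1.6605e-27                          # H mass (kg)
omega = np.sqrt(k_SI / m_H)                       # rad/s
print(f"Si-H stretch frequency ~ {omega / (2 * np.pi * 2.998e10):.0f} cm^-1")
```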
In Table 1 the results are listed together with $`\mathrm{\Gamma }`$ point frequencies(parenthesis) obtained by infrared spectroscopy and the comparison between theory and experiment is excellent, especially we note that theory correctly predicts the splitting, $`\omega _\mathrm{s}\omega _\mathrm{a}=11`$ cm<sup>-1</sup>. We next investigate the bonding potential of the H atoms. In Fig. 2 the solid circles show the hydrogen energy, $`E_\mathrm{H}`$, for the Si-H bond lengths, $`z`$, used in the calculation of the dynamical matrix. The data are accurately described by a Morse potential $$E_\mathrm{H}(z)=E_\mathrm{d}(e^{2\alpha (zz_0)}2e^{\alpha (zz_0)}),$$ (1) and from a least-squares fit we obtain the frequency $`\omega _0=0.26`$ eV($`\alpha =1.57`$ Å<sup>-1</sup>), equilibrium bond length $`z_0=1.50`$ Å, and desorption barrier $`E_\mathrm{d}=3.4`$ eV. The extrapolated desorption energy coincides with the surface energy without a H atom plus the energy of a spin-polarized H atom. The triangles in Fig. 2 show the total energy for large values of $`zz_0`$. When the interaction between the H atom and the surface becomes weak the electrons start to spin-polarize and the data points in Fig. 2 show that spin polarization effects become important for $`z>2.5`$ Å. The inset shows the change in the surface dipole, $`\mathrm{\Delta }p`$ as a function of $`z`$ (the positive direction is from Si to H). The surface dipole increases almost linearly with $`z`$ and the dynamic dipole moment is $`\gamma =p/z=0.6`$ Debye/Å (=0.13 e). Modelling the surface dipole by an effective charge $`e^{}`$ on the H atom and its image charge -$`e^{}`$, we find $`e^{}0.07e`$. The sign of this charge transfer from Si to H is in accordance with a higher electronegativity of H relative to Si. ### B Electric field dependent properties To model the surface in a planar external field we use a (2$`\times `$1) slab with 24 Si atoms, 2 H atoms, and a vacuum region of 10 Å and the external field is modelled using the method of Ref. . The Si atoms at the back side of the slab are not passivated by H atoms, and dangling bonds on these atoms can donate free electrons and holes. In this way we take into account the effect of mobile carriers. Other computational details are identical to those for the zero field calculation. Curves in Fig. 3 show the field dependence of the equilibrium Si-H bond length $`z`$, the $`\mathrm{\Gamma }`$ point symmetric Si-H stretch frequency $`\omega _\mathrm{s}`$ and the $`\mathrm{\Gamma }`$ point symmetric–asymmetric splitting $`\mathrm{\Delta }\omega =\omega _\mathrm{s}\omega _\mathrm{a}`$. We first notice that all three quantities have an extremum at $`1.5`$ V/Å. This behaviour can be described by a simple Si-H tight-binding model with a field dependent H on-site element. The extrema occurs at the field where the H and Si on-site levels are in resonance, since at resonance the Si-H bond is strongest, and therefore the bond length minimal and the vibrational frequency maximal. Furthermore, at resonance the H dynamic dipole vanishes, and therefore also the part of $`\mathrm{\Delta }\omega `$ caused by H-H dipole interactions. The three solid lines in Fig. 3 show second order polynomials obtained by least squares fit to the data. The interpolated zero field values of $`z_0`$ and $`\omega _\mathrm{s}`$ agrees exactly with those obtained in section II A, while the interpolated $`\mathrm{\Delta }\omega `$ zero field value is slightly off. 
Taking into account the quite different slabs used for the two calculations we find the agreement fully satisfactory, and note that the difference can be taken as a measure of the accuracy of the approach. Recently, the electric field dependent properties of the H/Si(111)(1$`\times `$1) surface were calculated by Akpati et al., and they found Stark shifts $`30`$ percent larger than in the present calculation, and the extremum in bond length and frequency appears for a field of 1 V/Å. The agreement with the present calculation seems reasonable, bearing in mind that the Stark shifts are for different crystallographic surface directions. However, part of the difference might be due to the use of a cluster geometry and the local spin density approximation in Ref. . The present study is based on a slab geometry and the GGA. We expect that the thick slab geometry better describes the electric field induced polarization of the surface. ## III STM induced Stark localization In this section we will model the collective modes of the Si-H stretch vibrations by a set of local oscillators interacting through dipole forces, and use this model to calculate the Stark localization in the external electric field from a STM tip. In Fig. 4 is shown the lattice sites of the oscillators, corresponding to the positions of the H atoms in the (2$`\times `$1) cell. Each oscillator is described by a local frequency $`\omega _i`$ and a dynamic dipole moment $`\gamma _i`$. The Hamiltonian of the system is given by $$H_{ij}=\mathrm{}\omega _i\delta _{ij}+\frac{\chi _i\chi _j}{|𝐫_i𝐫_j|^3}(1\delta _{ij}),$$ (2) where $`r_i`$ is the position of oscillator $`i`$, and $`\chi _i=\sqrt{\mathrm{}/2\mathrm{M}_\mathrm{H}\omega _i}\gamma _i`$. We first consider the zero field case of identical oscillators with parameters $`\gamma _0`$, $`\omega _0`$. We find the dispersion from a numerical Fourier transform, and at the $`\mathrm{\Gamma }`$ point we obtain the Hamiltonian $$H(\mathrm{\Gamma })=\left(\begin{array}{cc}\omega _0+4.05V_0& 5.12V_0\\ 5.12V_0& \omega _0+4.05V_0\end{array}\right),$$ (3) where $`V_0=(\gamma _0^2)/(\mathrm{M}_\mathrm{H}\omega _0a_{100}^3)`$ and $`a_{100}=3.87`$ Å is the surface lattice constant. The two eigenmodes are $`\omega _\mathrm{s}(\mathrm{\Gamma })=\omega _0+9.17V_0`$, and $`\omega _\mathrm{a}(\mathrm{\Gamma })=\omega _01.07V_0`$. Using the calculated values of $`\gamma _0`$ and $`\omega _0`$ from section 2A, we obtain $`V_0=0.07`$ meV and thereby $`\mathrm{\Delta }\omega =5`$ cm<sup>-1</sup>. Dipole-dipole interactions can therefore only account for half of the dispersion obtained in the the frozen phonon calculation. We suggest that the remainder of the splitting is due to a short-range electronic interaction. This electronic interaction gives rise to an additional splitting, and explains why $`\mathrm{\Delta }\omega >0`$ at $`E=1.5`$ V/Å(see Fig. 3c) even though the dipole-dipole interaction vanishes at this field. To simplify the calculations we will in the following use the dipole-dipole interaction model(Eq. (2)) to describe all the interactions, and determine field dependent parameters $`\omega _\mathrm{E}`$ and $`\gamma _\mathrm{E}`$ by relating the $`\mathrm{\Gamma }`$-point eigenmodes of the model to the calculated frozen phonon values. In this way we approximate the effect of the short-range electronic interactions by long-range dipole forces. 
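The finite-cluster version of this model, which is used repeatedly below, amounts to assembling the matrix of Eq. (2) and diagonalizing it numerically. A minimal sketch is given here; the site geometry, local frequencies and coupling strengths in it are placeholders in arbitrary units, not the relaxed H positions of Fig. 4 or the first-principles values of $`\omega _\mathrm{E}`$ and $`\gamma _\mathrm{E}`$.

```python
import numpy as np

def stretch_modes(positions, omega, chi):
    """Assemble and diagonalize the dipole-coupled oscillator Hamiltonian of Eq. (2).

    positions : (N, 2) array of H-site coordinates
    omega     : (N,) local frequencies (possibly Stark-shifted site by site)
    chi       : (N,) coupling amplitudes chi_i
    """
    d = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)                     # no self-coupling
    H = np.diag(omega) + np.outer(chi, chi) / d**3  # Eq. (2)
    return np.linalg.eigh(H)

# Placeholder geometry: a 10x10 patch of cells with an assumed intra-cell
# offset of 0.4 lattice constants between the two H atoms of each cell.
cells = np.array([(i, j) for i in range(10) for j in range(10)], dtype=float)
sites = np.vstack([cells, cells + [0.4, 0.0]])
freqs, modes = stretch_modes(sites, omega=np.ones(len(sites)), chi=0.05 * np.ones(len(sites)))
print("phonon band spans", freqs.min(), "to", freqs.max(), "(arbitrary units)")
```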
To test the accuracy of this approximation we have used the model to calculate $`\omega _\mathrm{s}`$ and $`\omega _\mathrm{a}`$ at the J and J’ point in the surface Brillouin zone. The result is shown in Table 1 and the comparison with the first principles calculation is reasonable. We next model the electric field below the STM tip. Usually it is found that the tip has a curvature in the range $`100`$ Å – $`1000`$ Å and it is generally accepted that the atomic resolution arises from a small protrusion or a single atom sticking out of the tip. We use the geometry in Fig. 5 to model such a tip. Two parameters, the tip curvature, $`R_t`$, and the protrusion size $`s`$, determine the tip geometry and we present results for parameters in the range, $`R_t=100`$$`500`$ Å and $`s=0`$–9 Å. For the tip-sample distance we use, $`h=3`$$`7`$ Å, which is the typical distance range in STM lithography experiments. To find the electric field below the tip the Poisson’s equation is solved numerically using ANSYS finite element analysis. Curves in Fig. 6 show the radial electric field at the surface for a potential difference of 5 V between the tip and the surface. Curves in Fig. 6a show the result when there is no protrusion on the tip ($`s=0`$ Å), and for this geometry the electric field attains its half value at $`r\sqrt{Rh}`$. In Fig. 6b results are for a tip with a protrusion of size $`s`$, and the protrusion gives rise to a reduced electric field below the tip and it decays rapidly around the tip apex. The small protrusion changes the electric field of the tip very little, and localization of the electric field is most pronounced for the large protrusion. Curves in Fig. 6c show the electric field from the geometry with $`R_t=500`$ Å and $`s=6`$ Å at three tip-surface separations $`h=3,6`$, and $`9`$ Å. The curves show that the field becomes more localized when the tip approaches the surface. To determine the vibrational states below the tip in the presence of the electric field, we set up the Hamiltonian in Eq. (2) for a finite cluster including sites up to a cutoff radius $`r_{cut}`$ and diagonalizes it numerically to find the eigenmodes $`\psi _\alpha `$ and frequencies $`\omega _\alpha `$. There may be several localized modes, but we are only interested in the localized state with the largest projection $`p`$ at the site directly below the STM tip ($`r=0`$). This state is determined using $$p=\underset{\alpha }{\mathrm{max}}[|\psi _\alpha |0|^2],$$ (4) where the maximum is over the eigenmodes with frequency outside the phonon band, $`\omega _\alpha \omega _0[2.8V_0,9.2V_0]`$. For the spatial electric fields considered in this paper the value of $`p`$ is converged for cluster sizes $`r_{cut}=50`$$`100`$ Å. Curves in Fig. 7 show $`p`$ and the corresponding vibrational frequency $`\omega _p`$ when the surface is subject to the fields of Fig. 6a. The “local E” curve corresponds to the geometry of Persson and Avouris where the electric field is localized at $`r=0`$. In this case a localized state is split of the phonon band at all negative fields, while at positive bias a threshold field of $`0.12`$ V/Å is needed to obtain localization. For typical fields in H desorption experiments, 0.5–1 V/Å, the state is completely localized at the site below the tip ($`p=1`$) and $`\omega _p`$ is similar to the frequency, $`\omega _\mathrm{E}`$, of the local oscillator at $`r=0`$. In the case of a tip with radius, $`R_t`$, a localized state exists for nearly all fields, i.e. 
at positive bias the threshold field is 0.03 V/Å. The lower positive threshold field compared to the “local E” case is obtained because the mode is a superposition of several sites with $`\omega _i\omega _\mathrm{E}`$. For typical fields in desorption experiments the mode has a substantial weight, $`p0.3`$, at the site below the tip. In Fig. 8 we show the effect of a small protrusion on the STM tip. In this case the spatial localization is improved, and for fields 0.5–1 V/Å we have $`p0.8`$. Thus we confirm the results of Persson and Avouris, that there exists a localized mode in the region below the tip, however, it is not completely localized at a single site. ## IV Decay of the localized vibration Consider an STM experiment where a tunneling electron scatters inelastically with the H atom below the tip and the H atom is excited into the $`n=1`$ vibrational state of the stretch mode. We now consider the decay of such an excitation. There are three important time scales, the coherent transfer time, $`\tau _c`$, the phase relaxation time $`\tau _{ph}`$ and the energy relaxation time $`\tau _{en}`$. The coherent transfer time is the time it takes for the local excitation to be transfered into the localized eigenmode below the tip, $`\tau _c\mathrm{}/\mathrm{\Delta }\omega 0.5`$ ps. Next the eigenmode looses its phase due to coupling with a $`200`$ cm<sup>-1</sup> Si phonon and the phase relaxation time has been measured to be $`\tau _{ph}8`$ ps at room temperature, and $`\tau _{ph}75`$ ps at 100 K. Finally the energy of the mode will decay into the Si substrate via a coupling with three Si-H bending modes(600 cm<sup>-1</sup>) and one 300 cm<sup>-1</sup> Si phonon. The time scale for this process is $`\tau _{en}10`$ ns at room temperature. In the previous section we found a localized eigenmode with $`p0.8`$. The excitation at $`r=0`$ will be a superposition of this mode and more extended states. After $`\tau _c`$ the extended states have diffused away, thus $`80`$% of the initial excitation is in the localized eigenmode, and the total probability of finding the initial excitation at $`r=0`$ is $`p^20.6`$. For $`t>\tau _{ph}`$ the excitation can diffuse away to the neighbouring H atoms due to dipole-dipole couplings. This is the so-called Försters mechanism for incoherent diffusion, and in the following we will calculate the incoherent diffusion rate, $`w`$, using Försters formula $`w`$ $`=`$ $`{\displaystyle \frac{2}{\pi }}{\displaystyle \underset{i0}{}}{\displaystyle _{\mathrm{}}^{\mathrm{}}}|H_{0i}|^2A_0^0(\omega )A_i^0(\omega )𝑑\omega .`$ (5) In this equation $`A_i^0`$ is the spectral function at site $`i`$ for noninteracting H modes ($`H_{ij}^0\delta _{ij}`$), but including the coupling with substrate phonons which gives rise to the phase relaxation. The spectral functions are obtained from the noninteracting retarded Greens functions $`G_i^0(t)`$ $`=`$ $`i\mathrm{\Theta }(t)[\widehat{c}_i(t),\widehat{c}_i^{}(0)],`$ (6) $`A_i^0(\omega )`$ $`=`$ $`2\mathrm{I}\mathrm{m}G_i^0(\omega ),`$ (7) where $`\widehat{c}_i^{}`$ and $`\widehat{c}_i`$ are local creation and annihilation operators of the stretch mode. 
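The structure of Eq. (5), a squared dipole coupling multiplied by the frequency overlap of two local spectral functions, can be made concrete with a small numerical example. In the sketch below we simply assume Lorentzian lineshapes (close to the dephasing-broadened form derived in the following paragraphs) and illustrative detunings; the prefactor $`(2/\pi )|H_{0i}|^2`$ and the sum over sites are left out.

```python
import numpy as np

# Frequency-overlap factor of Eq. (5) for two model Lorentzian spectral
# functions of width gamma whose centres differ by a detuning delta (a Stark
# or anharmonic shift).  All numbers are illustrative, not first-principles values.
def overlap(delta, gamma, w):
    A0 = gamma / (w**2 + gamma**2 / 4)             # spectral function at site 0
    Ai = gamma / ((w - delta)**2 + gamma**2 / 4)   # neighbouring site, detuned by delta
    return np.sum(A0 * Ai) * (w[1] - w[0])         # simple quadrature

w = np.linspace(-150.0, 150.0, 600001)             # frequency grid (cm^-1)
gamma = 0.7                                        # dephasing width (cm^-1), room-temperature value used below
for delta in (0.0, 2.0, 10.0, 80.0):               # 80 cm^-1 is roughly the anharmonic shift U
    print(f"delta = {delta:5.1f} cm^-1   overlap = {overlap(delta, gamma, w):.3e}")
```

The rapid fall-off of the overlap with detuning is what suppresses the lateral transfer of the Stark-shifted and anharmonically shifted excitations discussed below.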
The phase relaxation can be described approximately by the Hamiltonian $$H_{ii}^0=\mathrm{}(\omega _i+\delta \omega \widehat{n}_i)\widehat{c}_i^{}\widehat{c}_i,$$ (8) where $`n_i=\widehat{b}_i^{}\widehat{b}_i`$ is the projected occupation operator of the $`\mathrm{\Omega }=200`$ cm<sup>-1</sup> Si phonon, and $`\delta \omega `$ is the change in the local frequency when the Si phonon is excited from level $`n`$ to $`n+1`$. The correlation functions of $`n_i`$ have been calculated by Persson et al. $`n(t)`$ $`=`$ $`n_\mathrm{B}(\mathrm{\Omega }),`$ (9) $`n(t)n(0)`$ $`=`$ $`n(1+n)e^{\eta t}+n^2.`$ (10) The friction parameter $`\eta `$ describes the damping of the Si phonon, and $`n_\mathrm{B}(\omega )=(e^{\beta \omega }1)^1`$ is the Bose occupation number and $`\beta `$ the inverse temperature. We now use the Matsubara formalism to obtain $`G_i^0`$ from an perturbation expansion in $`\delta \omega `$. We only consider the two lowest order diagrams shown in Fig. 9, and the corresponding self energies are $`\mathrm{\Sigma }_i^{(1)}`$ $`=`$ $`\mathrm{}\delta \omega n,`$ (11) $`\mathrm{\Sigma }_i^{(2)}`$ $`=`$ $`\mathrm{}\delta \omega ^2n(1+n){\displaystyle \frac{1}{\omega \omega _i+i\eta }}.`$ (12) The $`\mathrm{\Sigma }^{(1)}`$ term gives rise to a small frequency shift, while the $`\mathrm{\Sigma }^{(2)}`$ leads to a damping of the mode. Considering only the latter term, we find $`A_i^0(\omega )`$ $``$ $`{\displaystyle \frac{\mathrm{\Gamma }_i(\omega )/\mathrm{}}{(\omega \omega _i)^2+\mathrm{\Gamma }_i(\omega )^2/4}},`$ (13) $`\mathrm{\Gamma }_i(\omega )`$ $`=`$ $`{\displaystyle \frac{2\delta \omega ^2}{\eta }}{\displaystyle \frac{n_\mathrm{B}(\mathrm{\Omega })[n_\mathrm{B}(\mathrm{\Omega })+1]}{(\omega \omega _i)^2/\eta ^2+1}},`$ (14) where $`\mathrm{\Gamma }_i(\omega _i)`$ is the phase relaxation rate and we have used $`\mathrm{\Gamma }_i(\omega _i)\eta `$. Thus the spectral function resembles a Lorentzian with Full Width at Half Maximum(FWHM) $`\mathrm{\Gamma }_i(\omega _i)`$ for $`\omega \omega _0`$ and it decays as $`(\omega \omega _0)^4`$ in the tails. From the experimental dephasing lifetimes we obtain $`\mathrm{\Gamma }_i(\omega _i)=1/\tau _{ph}0.7`$ cm<sup>-1</sup> at room temperature. We estimate the coupling strength using $`\delta \omega _i\mathrm{\Omega }\omega _i/4E_\mathrm{d}=4`$ cm<sup>-1</sup>, and the friction parameter can then be determined from $`\eta =2\delta \omega ^2n_\mathrm{B}(\mathrm{\Omega })(1+n_\mathrm{B}(\mathrm{\Omega }))/2\mathrm{\Gamma }_i(\omega _i)50`$ cm<sup>-1</sup>. The values of $`\delta \omega `$ and $`\eta `$ obtained in this way are similar to the measured room temperature values for Si(111). To obtain the diffusion rate we perform the integration in Eq. (5) thus obtaining $$w\frac{4}{\mathrm{}^2}\underset{i0}{}\frac{\chi _0^2\chi _i^2}{r_i^6}\frac{\mathrm{\Gamma }_0(\omega _0)+\mathrm{\Gamma }_0(\omega _i)}{[\omega _i\omega _0]^2+[\mathrm{\Gamma }_0(\omega _0)+\mathrm{\Gamma }_0(\omega _i)]^2/4}.$$ (15) In the case where $`\mathrm{\Gamma }_0(\omega _i)=\mathrm{\Gamma }_0(\omega _0)`$ this result is similar to that of Ref. . Curves in Fig. 10 show the values of $`w`$ as obtained from Eq. (15) when the surface is subject to the same electric fields as in Fig. 7. The solid line corresponds to the electric field model of Persson and Avouris and similar to Ref. we find $`w5\times 10^9`$ s<sup>-1</sup> for typical STM fields. The other curves in Fig. 10 and the curves in Fig. 
11 show that for more realistic models of the tip electric field the value of $`w`$ is more than one order of magnitude larger, and a typical value in an STM experiment is $`w10^{11}`$ s<sup>-1</sup>. Thus, the $`n=1`$ vibrational excitation at $`r=0`$ will diffuse away very fast to the nearest-neighbour sites, in contrast to the result of Persson and Avouris. The reason for this is that for a realistic STM geometry the electric field at $`r=0`$ is not very different from that at the nearest-neighbour sites and there is a large diffusion rate into these sites. For the decay of the $`n>1`$ excitation we have to take into account the anharmonicity of the Si-H bond potential. In Section II A it was shown that the bond potential of the H atom is well described by a Morse potential. The eigenstates of a Morse potential are given by $$\mathrm{}\omega (n)=E_\mathrm{d}\left[1\frac{\alpha \mathrm{}}{\sqrt{2\mathrm{M}_\mathrm{H}E_\mathrm{d}}}(n+\frac{1}{2})\right]^2,$$ (16) where $`n`$ takes positive integral values from zero to the greatest value for which $`n+\frac{1}{2}<\sqrt{2\mathrm{M}_\mathrm{H}E_\mathrm{d}}/\alpha \mathrm{}`$. For the H potential $`n=0,1,\mathrm{}24`$ and $`\omega (n)=0.129,0.378,0.618,0.847,\mathrm{}`$ eV. The anharmonicity is substantial and $`\omega (2)\omega (1)=\omega (1)\omega (0)+U`$, where $`U=0.010`$ eV. The frequency of the $`n=2`$ state is outside the phonon band, and this gives rise to a localization of the state. The diffusion rate of this state can be estimated from Eq. (15) by using $`\omega _0+U`$ for the frequency at site $`0`$, and the result of such a calculation is shown by the three lower curves in Fig. 11. The value of $`w`$ is of the same order of magnitude as the room temperature energy relaxation rate ($`10^8`$ s<sup>-1</sup>). For $`n>2`$ the relaxation rate is $`10^8`$ s<sup>-1</sup>. Thus, it is mainly the lifetime of the $`n=1`$ excitation which is affected by incoherent diffusion. In the next section we will investigate the effect of the reduced lifetime of the $`n=1`$ excitation on STM induced desorption. ## V Calculation of the desorption rate In this section we will calculate the desorption rate, $`R`$, of the H atom below the STM tip, due to electron inelastic scattering through dipole coupling or by resonance coupling with the Si-H $`5\sigma `$ and $`6\sigma ^{}`$ resonances. The fraction of electrons which scatters inelastically through dipole coupling is given by $`f_{in}^{dip}(\chi _0/ea_0)^20.001`$. The theoretical model we use for calculating the inelastic current, $`I_{nn+N}`$, due to resonance coupling has been described in Ref. . In those works we only considered resonance coupling and decay through energy relaxation with $`w_{\mathrm{en}}=1/\tau _{\mathrm{en}}=10^8`$ s<sup>-1</sup>, and the dotted lines in Fig. 12 correspond to those results. The dashed lines show the result of including dipole coupling and the small difference between the dotted and dashed lines justifies the neglect of dipole coupling in our previous studies. The solid lines in Fig. 12 show the result of including both dipole coupling and lateral diffusion of the $`n=1`$ excitation with $`w=10^{11}`$ s<sup>-1</sup>. Defining $`\mu (w)=R(w)/R(w=0)`$ as the suppression of the desorption due to lateral diffusion of the excitation, we find $`\mu 0.1`$–$`0.3`$ at negative bias and $`\mu 0.02`$–$`0.08`$ at positive bias. Using $`w=10^{10}`$ s<sup>-1</sup> or $`w=10^{12}`$ s<sup>-1</sup> changes $`\mu `$ less than 10 percent.
This is quite different from the model of Persson and Avouris where $`\mu w/w_{\mathrm{en}}`$. The reason is that in our model we include multiple phonon excitations, i.e. we use $`N=1,2,3`$ in the calculation of the inelastic current. When the lateral diffusion rate of the $`n=1`$ level is large, the desorption proceeds via a direct excitation from $`n=0`$ to $`n=2`$. At negative biases $`<5`$ V the rate of double excitations relative to single excitations is $`I_{nn+2}/I_{nn+1}=0.07`$$`0.15\times (n+2)`$, while at positive biases $`>2`$ V it is $`0.015`$$`0.04\times (n+2)`$. Thus, the larger $`\mu `$ at negative bias relative to positive bias is due to a higher probability of a multiple excitation. ## VI Summary We have studied the effect of electric field on incoherent lateral diffusion of vibrational excitations and its implications for STM induced desorption of H from Si(100)-H(2$`\times `$1). We calculated the electric field at the surface for realistic STM tip geometries and determined the field dependent vibrational properties of the H overlayer based on first principles calculations of vibrational Stark shifts and dipole-dipole interaction matrix elements. We found that the electric field will localize the vibrational states below the STM tip, however, the lifetime of the $`n=1`$ excitations is short ($`10`$ ps) due to incoherent diffusion. The diffusion of higher level excitations $`n>1`$ is suppressed due to anharmonic frequency shifts. The damping of the STM induced desorption of H due to the lateral escape of the $`n=1`$ excitation depends on the fraction of multiple phonon excitation events relative to one phonon events in the inelastic scattering process. At low positive biases we find a damping of the desorption rate by $`\mu 0.02`$$`0.08`$, while at negative bias $`\mu 0.1`$$`0.3`$, reflecting the higher probability of inelastic scattering events with a multiple phonon excitation at the negative biases. There are no adjustable parameters in our model and the calculated desorption rates are in quantitative agreement with measured desorption rates. ## ACKNOWLEDGMENTS I acknowledge Jan Tue Rasmussen for making the ANSYS finite-element calculations, and thank Ben Yu-Kuang Hu, U. Quaade and F. Grey for valuable discussions and careful reading of the manuscript. This work was supported by the Danish Ministries of Industry and Research through project No. 9800466 and the use of national computer resources was supported by the Danish Research Councils.
# 1 original DPNU-99-07 March 1999 Dual Description for SUSY $`SO(N)`$ Gauge Theory with a Symmetric Tensor Nobuhito Maru<sup>1</sup><sup>1</sup>1JSPS Research Fellow. <sup>2</sup><sup>2</sup>2Address after April 1: Department of Physics, Tokyo Institute of Technology, Oh-okayama, Meguro, Tokyo 152-8551, Japan. Department of Physics, Nagoya University Nagoya 464-8602, JAPAN maru@eken.phys.nagoya-u.ac.jp Abstract We consider $`𝒩=1`$ supersymmetric $`SO(N)`$ gauge theory with a symmetric traceless tensor. This theory saturates ’t Hooft matching conditions at the origin of the moduli space. This naively suggests a confining phase, but Brodie, Cho, and Intriligator have conjectured that the origin of the moduli space is in a Non-Abelian Coulomb phase. We construct a dual description by the deconfinement method, and also show that the theory indeed has an infrared fixed point for certain values of $`N`$. This result supports their argument. PACS: 11.30.Pb, 12.60.Jv Keywords: Supersymmetric Gauge Theory, Duality Our understanding of the non-perturbative dynamics in supersymmetric (SUSY) gauge theories has made remarkable progress during the past several years. In particular, $`𝒩=1`$ SUSY gauge theory has rich low energy behaviors e.g. the runaway superpotential (no vacuum), some kinds of confining phases, (infrared) free magnetic phase, Non-Abelian Coulomb phase, (infrared) free electric phases and various applications to phenomenology e.g. models of dynamical SUSY breaking, models of composite quarks and leptons. In this paper, we discuss $`𝒩=1`$ SUSY $`SO(N)`$ gauge theory with a symmetric traceless tensor, which is one of the theories with an Affine quantum moduli space classified by Dotti and Manohar , and its low energy behavior has been investigated by Brodie, Cho, and Intriligator . In order to make our discussion clear, we briefly review the argument of Brodie, Cho, and Intriligator. Matter content and symmetries of the model are displayed in Table 1<sup>3</sup><sup>3</sup>3Throughout this paper, we use the Young tableau to denote the representation of the superfields. $`\text{ }\text{ }\text{ }\text{ }\text{ },\text{ }\text{ }\text{ }\text{ }\text{ }\text{ }\text{ }\text{ }\text{ },\text{ }\text{ }\text{ }\text{ }\text{ }\text{ }`$ stand for vector, adjoint (anti-symmetric), symmetric (and traceless) representations under $`SO`$ group, respectively.. There is no tree level superpotential. Here $`U(1)_R`$ is an anomaly free global symmetry. The 1-loop beta function coefficient is $`b_0=2(N4)`$,<sup>4</sup><sup>4</sup>4The following Dynkin indices are adopted: $`\mu (\text{ }\text{ }\text{ }\text{ }\text{ })=2`$, $`\mu (\mathrm{Adj}=\text{ }\text{ }\text{ }\text{ }\text{ }\text{ }\text{ }\text{ }\text{ })=2N4`$, $`\mu (\text{ }\text{ }\text{ }\text{ }\text{ }\text{ })=2N+4`$. so the theory is asymptotically free for $`N5`$. The classical moduli space is parameterized in terms of the diagonal vacuum expectation values (VEVs) of $`S`$, which is of $`(N1)`$ complex dimensions. At the quantum level, the superpotential of the form $$W_{\mathrm{dyn}}=C\left[\frac{S^{2N+4}}{\mathrm{\Lambda }^{2N8}}\right]^{1/4}$$ (1) can appear, which is determined by holomorphy and symmetries. Here $`\mathrm{\Lambda }`$ denotes the dynamical scale of $`SO(N)`$ theory, and $`C`$ is a constant. In the weak coupling limit $`S/\mathrm{\Lambda }\mathrm{}`$, Eq. (1) diverges, and cannot reproduce the classical moduli space. Therefore, $`C`$ must vanish. 
This part of the moduli space is referred to as the “Higgs branch”. This classical moduli space can also be parameterized by VEVs of the gauge invariant composite operators<sup>5</sup><sup>5</sup>5$`\mathrm{det}S`$ and $`\mathrm{Tr}S^n(nN+1)`$ can be expressed by $`O_n(n=2,\mathrm{},N)`$, in other words, these operators are not linearly independent. $$O_n=\mathrm{Tr}S^n(n=2,\mathrm{},N).$$ (2) What is a remarkable property of this theory is that ’t Hooft anomaly matching conditions are saturated at the origin of the moduli space. This naively suggests that the $`SO(N)`$ model is in the confining phase at the origin. However, Brodie, Cho, and Intriligator have discussed that this confining picture at the origin is misleading because of the following three arguments. 1. Free electric subspaces exist and intersect at the origin. This implies that the massless spectrum at the origin cannot simply consist of the confining moduli $`O_n`$. 2. In the presence of the mass term for $`S`$, the moduli space must have another confining branch to be consistent with Witten index argument, where the non-perturbative superpotential is generated, while no superpotential exists on the Higgs branch. 3. A nontrivial phase and branch structure must arise when the original $`SO(N)`$ model is perturbed with a general tree level superpotential. From these arguments, they concluded that the $`SO(N)`$ model’s moduli space origin is in a non-Abelian Coulomb phase. If this is the case, it is natural to ask whether the dual description exists. However, explicit dual description has not yet been found so far. The purpose of this paper is to construct the dual description of $`SO(N)`$ model and show that it has a nontrivial infrared fixed point. Let us recall the “deconfinement” method introduced by Berkooz in order to construct the dual description of the $`SO(N)`$ gauge theory with a symmetric, traceless tensor. This method has been applied to the theories in which a two-index tensor field is included, and no tree level superpotential exists. According to this method, the new strong gauge dynamics is introduced, and the two-index tensor field is regarded as a composite field (meson) by the strong gauge dynamics, namely, $$X_{ab}g^{\alpha \beta }F_{\alpha a}F_{\beta b}.$$ (3) Here $`X`$ denotes a composite superfield, $`F`$ is an elementary superfield charged under both the original gauge group and the new strong gauge group, and $`g`$ is an invariant metric of the strong gauge group. Greek letters are indices of the new strong gauge group, while Roman letters are those of the original gauge group. For instance, the symmetric tensor, the antisymmetric tensor, the adjoint representation of $`SU`$ gauge group correspond to mesons of the strong $`SO`$, $`Sp`$, $`SU`$ gauge group, respectively. The advantage of this method is that a “deconfined” theory has only defining representations, therefore one can use a well-known duality to derive a new duality. We apply here this method to the symmetric traceless tensor of $`SO(N)`$ gauge group. Note that a symmetric tensor of $`SO(N)`$ is not irreducible, so there always appears a singlet under $`SO(N)`$, which is a trace part of the symmetric tensor. The matter content and symmetry of the deconfined theory is given in Table 2. 
The tree level superpotential is $$W=yzp+z^2s.$$ (4) $`SO(N+5)`$ gauge theory with $`(N+1)`$ flavors has a branch in which the dynamically generated superpotential vanishes , $$W_{dyn}=0.$$ (5) One can easily verify that the above deconfined theory is reduced to the original theory at the low energy. Consider the case $`\mathrm{\Lambda }_{SO(N)}\mathrm{\Lambda }_{SO(N+5)}`$, where $`\mathrm{\Lambda }_{SO(N),SO(N+5)}`$ is the dynamical scale of $`SO(N)`$, $`SO(N+5)`$ gauge theory, respectively. We know that $`SO(N+5)`$ gauge theory with $`(N+1)`$ flavors is confining , and the effective fields are mesons $`y^2,yz,z^2`$. As can be seen in the superpotential, $`yz,p,z^2,s`$ become massive at $`\mathrm{\Lambda }_{SO(N+5)}`$. After integrating them out, we see that only $`y^2`$ is massless and the superpotential vanishes. Thus, the original theory is recovered<sup>6</sup><sup>6</sup>6More precisely, $`y^2`$ includes a singlet under $`SO(N)`$ as noted before. This will be integrated out by adding the mass term.. Taking a dual of $`SO(N)`$ gauge theory with $`(N+6)`$ flavors , we obtain the following theory given in Table 3. Here the fields in parentheses stand for the elementary $`SO(10)`$ gauge singlet meson fields. The dual tree level superpotential is<sup>7</sup><sup>7</sup>7For simplicity, we set the scale dependent coefficients of the last three terms to be of order one. $$\stackrel{~}{W}=(yp)z+z^2s+(y^2)\stackrel{~}{y}^2+(yp)\stackrel{~}{y}\stackrel{~}{p}+(p^2)\stackrel{~}{p}^2$$ (6) Since $`(yp),z`$ are massive, integrating them out by the equations of motion $`0`$ $`=`$ $`{\displaystyle \frac{\stackrel{~}{W}}{(yp)}}=z+\stackrel{~}{y}\stackrel{~}{p}z=\stackrel{~}{y}\stackrel{~}{p},`$ (7) $`0`$ $`=`$ $`{\displaystyle \frac{\stackrel{~}{W}}{z}}=(yp)+2zs(yp)=2s\stackrel{~}{y}\stackrel{~}{p},`$ (8) we obtain the effective superpotential of the dual theory, $$\stackrel{~}{W}_{\mathrm{eff}}=\stackrel{~}{y}^2\stackrel{~}{p}^2s+(y^2)\stackrel{~}{y}^2+(p^2)\stackrel{~}{p}^2.$$ (9) The field content of the resulting dual theory is given in Table 4. Note here that while $`(y^2)`$ in deconfined theory denotes a composite meson, $`(y^2)`$ in the dual denotes a gauge singlet elementary meson. As mentioned earlier, there always exists a trace part of the symmetric tensor, so we add its mass term to the superpotential $$\delta W=mS_{\mathrm{singlet}}^2,$$ (10) and integrates it out. Then, the low energy effective theory becomes $`SO(N)`$ gauge theory with a symmetric traceless tensor, which we would like to consider. This deformation corresponds to $$\delta \stackrel{~}{W}=m(y_{\mathrm{singlet}}^2)^2$$ (11) in the dual description. Using the equation of motion for $`(y^2)_{\mathrm{singlet}}`$ $$0=\frac{\stackrel{~}{W}}{(y^2)_{\mathrm{singlet}}}=\stackrel{~}{y}^2+2m(y^2)_{\mathrm{singlet}},$$ (12) we obtain the following effective superpotential $$\stackrel{~}{W}_{\mathrm{eff}}=\stackrel{~}{y}^2\stackrel{~}{p}^2s\frac{1}{2m}\stackrel{~}{y}^4+(p^2)\stackrel{~}{p}^2+(y^2)\stackrel{~}{y}^2.$$ (13) Now, let us check the consistency of the duality. The ’t Hooft anomaly matching conditions are trivially satisfied since we use the deconfined method which guarantees the anomaly matching. The mapping of the gauge invariant operators which describes the moduli space is also trivial as depicted in Table 5. This mapping is consistent with the global symmetry $`U(1)_R`$. Next, we consider various flat direction deformations. 
First, consider the direction $`y^20`$, namely, $$y=\left(\begin{array}{cccccc}y_1& & & & & \\ & \mathrm{}& & & & \\ & & \mathrm{}& & & \\ & & & y_N& & \end{array}\right),$$ (14) where $`y`$ is a $`(N+5)\times N`$ matrix and $`y_i(i=1,\mathrm{},N)`$ are constants. To simplify the analysis, let us suppose that $`y_10`$ and $`y_i(i=2,\mathrm{},N)=0`$. In the deconfined theory, the following symmetry breaking occur $`SO(N)+(N+6)\text{ }\text{ }\text{ }`$ $``$ $`SO(N1)+(N+5)\text{ }\text{ }\text{ },`$ (15) $`SO(N+5)+(N+1)\text{ }\text{ }\text{ }`$ $``$ $`SO(N+4)+N\text{ }\text{ }\text{ }.`$ (16) Since one component of $`z`$ and $`p`$ become massive from the coupling in the superpotential, integrating them out by the equations of motion, we obtain the effective superpotential $$W_{\mathrm{eff}}=y^{}z^{}p^{}+z^2s+m(y_{\mathrm{singlet}}^2)^2.$$ (17) $`y^{},z^{}`$ and $`p^{}`$ are transformed as $`y^{}(\text{ }\text{ }\text{ }\text{ }\text{ },\text{ }\text{ }\text{ }\text{ }\text{ })`$, $`z^{}(\mathrm{𝟏},\text{ }\text{ }\text{ }\text{ }\text{ })`$ and $`p^{}(\text{ }\text{ }\text{ }\text{ }\text{ },\mathrm{𝟏})`$ under $`SO(N1)\times SO(N+4)`$. On the other hand, the corresponding direction in the dual is $$(y^2)=\left(\begin{array}{cccccc}y_1^2& & & & & \\ & 0& & & & \\ & & \mathrm{}& & & \\ & & & 0& & \end{array}\right).$$ (18) Along this direction, the symmetry breaking goes as follows $`SO(10)+(N+6)\text{ }\text{ }\text{ }`$ $``$ $`SO(10)+(N+5)\text{ }\text{ }\text{ },`$ (19) $`SO(N+5)+\text{ }\text{ }\text{ }+10\text{ }\text{ }\text{ }`$ $``$ $`SO(N+4)+\text{ }\text{ }\text{ }+10\text{ }\text{ }\text{ }.`$ (20) Since one component of $`\stackrel{~}{y}`$ becomes massive due to the coupling in the superpotential, integrating them out, we obtain the effective superpotential $$\stackrel{~}{W}_{\mathrm{eff}}=\stackrel{~}{y}^2\stackrel{~}{p}^2s+(y^2)^{}\stackrel{~}{y}^2+(p^2)\stackrel{~}{p}^2\frac{1}{2m}\stackrel{~}{y}^4.$$ (21) The fields with dash are transformed as $`\stackrel{~}{y}^{}(\text{ }\text{ }\text{ }\text{ }\text{ },\text{ }\text{ }\text{ }\text{ }\text{ })`$, $`(y^2)(\mathrm{𝟏},\text{ }\text{ }\text{ }\text{ }\text{ }\text{ }\mathrm{𝟏})`$ under $`SO(10)\times SO(N+4)`$. The above result is consistent simply because $`N`$ is replaced by $`N1`$. In fact, taking a dual of $`SO(N1)+(N+5)\text{ }\text{ }\text{ }\text{ }\text{ }`$ in the deconfined theory, we obtain $`SO(N1)\times SO(N+4)`$ as the dual gauge group. We will arrive at the following theory as in Table 6. The dual superpotential is $$\stackrel{~}{W}=(y^{}p^{})z^{}+z^2s+(y^2)\stackrel{~}{y}^2+(y^{}p^{})\stackrel{~}{y}^{}\stackrel{~}{p}^{}+(p^2)\stackrel{~}{p}^2+m(y^2)_{\mathrm{singlet}}^2.$$ (22) Integrating the massive fields $`(y^{}p^{})`$, $`z^{}`$ and $`(y^2)_{\mathrm{singlet}}`$ by using their equations of motion, one can obtain the same superpotential as that in Eq. (21). 
It is straightforward to extend the above result to the more general case where the VEV of $`y`$ takes the form as $$y=\left(\begin{array}{cccccccc}y_1& & & & & & & \\ & \mathrm{}& & & & & & \\ & & y_l& & & & & \\ & & & 0& & & & \\ & & & & \mathrm{}& & & \\ & & & & & 0& & \end{array}\right)(l=1,\mathrm{},N).$$ (23) In this case, the symmetry breaking is $`SO(N)+(N+6)\text{ }\text{ }\text{ }`$ $``$ $`SO(Nl)+(N+6l)\text{ }\text{ }\text{ },`$ (24) $`SO(N+5)+(N+1)\text{ }\text{ }\text{ }`$ $``$ $`SO(N+5l)+(N+1l)\text{ }\text{ }\text{ },`$ (25) and $`l`$ components of $`z`$ and $`p`$ become massive, so after integrating them out, we find the effective superpotential of the form $$W=y^{}z^{}p^{}+z^2s+m(y^2)_{\mathrm{singlet}}^2.$$ (26) The representations of $`y^{},z^{}`$ and $`p^{}`$ under $`SO(Nl)\times SO(N+5l)`$ are $`y^{}(\text{ }\text{ }\text{ }\text{ }\text{ },\text{ }\text{ }\text{ }\text{ }\text{ }),z^{}(\mathrm{𝟏},\text{ }\text{ }\text{ }\text{ }\text{ })`$ and $`p^{}(\text{ }\text{ }\text{ }\text{ }\text{ },\mathrm{𝟏})`$, respectively. In the dual, the corresponding direction is $$(y^2)=\left(\begin{array}{cccccccc}y_1^2& & & & & & & \\ & \mathrm{}& & & & & & \\ & & y_l^2& & & & & \\ & & & 0& & & & \\ & & & & \mathrm{}& & & \\ & & & & & 0& & \end{array}\right)(l=1,\mathrm{},N),$$ (27) and the symmetry breaking along this direction is $`SO(10)+(N+6)\text{ }\text{ }\text{ }`$ $``$ $`SO(10)+(N+6l)\text{ }\text{ }\text{ },`$ (28) $`SO(N+5)+\text{ }\text{ }\text{ }+10\text{ }\text{ }\text{ }`$ $``$ $`SO(N+5l)+\text{ }\text{ }\text{ }+10\text{ }\text{ }\text{ }.`$ (29) Since $`l`$ components of $`\stackrel{~}{y}`$ are massive, after integrating them out, we can find the effective dual superpotential of the form $$\stackrel{~}{W_{\mathrm{eff}}}=\stackrel{~}{y}^2\stackrel{~}{p}^2s+(y^2)\stackrel{~}{y}^2+(p^2)\stackrel{~}{p}^2.$$ (30) This result is also consistent. One can easily show explicitly that taking a dual of $`SO(Nl)+(N+6l)\text{ }\text{ }\text{ }\text{ }\text{ }`$ in the deconfined theory, one obtains the same deformed dual theory as seen in the above simple case. Next, we consider the other flat direction deformations $`p0`$. In the deconfined theory, the following symmetry breaking occur: $`SO(N)+(N+6)\text{ }\text{ }\text{ }`$ $``$ $`SO(N1)+(N+5)\text{ }\text{ }\text{ },`$ (31) $`SO(N+5)+(N+1)\text{ }\text{ }\text{ }`$ $``$ $`SO(N+5)+(N1)\text{ }\text{ }\text{ }`$ (32) The low energy deformed deconfined theory is given in Table 7, and the effective superpotential is $$W_{\mathrm{eff}}=m(y^2)_{\mathrm{singlet}}^2.$$ (33) In the dual, the direction under consideration corresponds to $`(p^2)0`$. Then, the low energy deformed dual theory is displayed in Table 8, and the dual superpotential is $$\stackrel{~}{W}_{\mathrm{eff}}=(y^2)\stackrel{~}{y}^2\frac{1}{2m}\stackrel{~}{y}^4.$$ (34) This resulting theory is also consistent under the deformation along $`p0`$. In fact, taking a dual of $`SO(N1)+(N+5)\text{ }\text{ }\text{ }\text{ }\text{ }`$ and integrating out the massive modes, we can easily derive the dual in Table 8 and the superpotential (34). Furthermore, we can check the consistency under the mass term deformation. Adding the mass term $`\delta W=\frac{1}{2}m^{}p^2`$ to the superpotential in the deconfined theory, and integrating out $`p`$, we can derive the effective theory: $`SO(N)`$ gauge theory with $`(N+5)`$ vectors and $`SO(N+5)`$ gauge theory with $`(N+1)`$ vectors and a singlet $`s`$. 
The effective superpotential takes the form $$W_{\mathrm{eff}}=\frac{1}{2m^{}}(yz)^2+z^2s+m(y^2)_{\mathrm{singlet}}^2.$$ (35) On the dual side, this deformation corresponds to adding the term $`\delta \stackrel{~}{W}=\frac{1}{2}m^{}(p^2)`$ to the dual superpotential. The equation of motion for $`(p^2)`$ forces $`\stackrel{~}{p}`$ to develop a VEV. This leads to break $`SO(10)`$ to $`SO(9)`$, then we arrive at the following effective theories: $`SO(9)`$ gauge theory with $`(N+5)`$ vectors and $`SO(N+5)`$ gauge theory with a symmetric tensor and 10 vectors and a singlet $`s`$, a gauge singlet meson $`(y^2)`$. The effective dual superpotential becomes $`\stackrel{~}{W}_{\mathrm{eff}}`$ $`=`$ $`{\displaystyle \frac{1}{2}}m^{}\stackrel{~}{y_0}^2s+(y^2)\stackrel{~}{y}^2+(y^2)\stackrel{~}{y_0}^2`$ (36) $``$ $`{\displaystyle \frac{1}{2m}}(\stackrel{~}{y}^2)_{\mathrm{singlet}}^2{\displaystyle \frac{1}{2m}}(\stackrel{~}{y_0}^2)_{\mathrm{singlet}}^2{\displaystyle \frac{1}{m}}(\stackrel{~}{y}^2)_{\mathrm{singlet}}(\stackrel{~}{y_0}^2)_{\mathrm{singlet}},`$ where $`\stackrel{~}{y_0}`$ stands for a field transformed as ($`\mathrm{𝟏},\text{ }\text{ }\text{ }\text{ }\text{ }`$) under $`SO(9)\times SO(N+5)`$, which should be identified with the field $`z`$ in the deconfined theory. By rescaling the fields appropriately, one can see that the above result is consistent. Although we have constructed the dual description for $`SO(N)`$ SUSY gauge theory with a symmetric traceless tensor by deconfinement technique, it is not so trivial to see that the theory under consideration has a non-trivial infrared fixed point at the origin of the moduli space since the gauge groups are products. Following Terning’s argument , we would like to show explicitly that the theory has indeed the infrared fixed point. We note that one can analyze the theory for an arbitrary ratio of the two dynamical scales $`\mathrm{\Lambda }_1,\mathrm{\Lambda }_2`$ thanks to holomorphy , where $`\mathrm{\Lambda }_1,\mathrm{\Lambda }_2`$ are the scales of $`SO(10)`$ gauge theory with ($`N+6`$) flavors, $`SO(N+5)`$ gauge theory with a symmetric traceless tensor and ten vector flavors, respectively. Furthermore, there is no phase transition when the ratio is varied. There are three cases to be considered. For $`N<4`$, $`SO(N+5)`$ theory is asymptotically non-free and $`SO(10)`$ theory is asymptotically free. This implies $`\mathrm{\Lambda }_1\mathrm{\Lambda }_2`$ and if we renormalize the gauge coupling of $`SO(N+5)`$ theory $`g`$ at the scale $`\mathrm{\Lambda }_1`$, then $`g(\mu \mathrm{\Lambda }_1)1`$. For $`4<N18`$, $`SO(N+5)`$ theory becomes asymptotic free and the limit $`\mathrm{\Lambda }_1\mathrm{\Lambda }_2`$ corresponds to weak coupling of $`SO(N+5)`$ theory. For $`N>18`$, $`SO(10)`$ becomes asymptotically non-free, which implies $`\mathrm{\Lambda }_1\mathrm{\Lambda }_2`$ and $`g(\mu \mathrm{\Lambda }_1)1`$. For $`N=4`$, since the gauge coupling $`g`$ does not run, we can take an arbitrary small coupling. In any cases, the gauge coupling $`g0`$ as the ratio $`\mathrm{\Lambda }_1/\mathrm{\Lambda }_20`$ or $`\mathrm{}`$, so we can perform the perturbative analysis for $`g`$. Let us first consider the zero-th order case in $`g`$, $`i.e.`$ $`SO(N+5)`$ dynamics is turned off. 
The dimensions of the gauge invariant operators have to satisfy the following constraints to be in unitary representations of the superconformal algebra : $`D(\stackrel{~}{y}^2)`$ $`=`$ $`2+2\gamma _{\stackrel{~}{y}}(g=0)1,`$ $`D(\stackrel{~}{y}\stackrel{~}{p})`$ $`=`$ $`2+\gamma _{\stackrel{~}{y}}(g=0)+\gamma _{\stackrel{~}{p}}(g=0)1,`$ $`D(\stackrel{~}{y}^{10})`$ $`=`$ $`10+10\gamma _{\stackrel{~}{y}}(g=0)1,`$ (37) $`D(\stackrel{~}{y}\stackrel{~}{p}^9)`$ $`=`$ $`10+9\gamma _{\stackrel{~}{y}}(g=0)+\gamma _{\stackrel{~}{p}}(g=0)1,`$ $`D((y^2))`$ $`=`$ $`1+\gamma _{(y^2)}(g=0)1,`$ where $`\gamma _\varphi `$ is the anomalous dimension of the field $`\varphi `$, and the bound is saturated for free fields. We note that the first term in the dual superpotential (13) is a product of three gauge invariant operators. Thus, these are irrelevant because they can be relevant only if the dimensions of these gauge invariants are one, which means that these operators are free. The fields $`s`$ interacts only through the first term which is irrelevant, so these are free fields and their anomalous dimensions vanish. Therefore the equalities (S0.Ex2) cannot be saturated. In order to obtain more relations among the anomalous dimensions, we use the exact $`\beta `$ function for the $`SO(10)`$ coupling $`g_1`$ $$\beta (g_1)=\frac{g_1^3}{16\pi ^2}\frac{3\times 8(N+5)(1\gamma _{\stackrel{~}{y}}(g=0))(1\gamma _{\stackrel{~}{p}}(g=0))}{18\frac{g_1^2}{8\pi ^2}},$$ (38) and at the fixed point $$0=18N+(N+5)\gamma _{\stackrel{~}{y}}(g=0)+\gamma _{\stackrel{~}{p}}(g=0).$$ (39) The second and the last term in Eq. (13) are relevant operators with R-charge 2 for $`g=0`$, so the following conditions have to be satisfied, $`D(\stackrel{~}{y}^4)`$ $`=`$ $`4+4\gamma _{\stackrel{~}{y}}(g=0)=3,`$ $`D((y^2)\stackrel{~}{y}^2)`$ $`=`$ $`3+\gamma _{(y^2)}(g=0)+2\gamma _{\stackrel{~}{y}}(g=0)=3.`$ (40) On the other hand, $`\stackrel{~}{y}^2,\stackrel{~}{p}^2`$ and $`s`$ are gauge invariant operators for arbitrary $`g`$, and the corresponding constraint for the dimensions are $`D(\stackrel{~}{y}^2)`$ $`=`$ $`2+2\gamma _{\stackrel{~}{y}}(g)1,`$ $`D(\stackrel{~}{p}^2)`$ $`=`$ $`2+2\gamma _{\stackrel{~}{p}}(g)1,`$ (41) $`D(s)`$ $`=`$ $`2+2\gamma _s(g)1.`$ The second and the fourth inequalities of (S0.Ex2) and the first one of (S0.Ex6) lead to $$\gamma _{\stackrel{~}{p}}(g=0)>\frac{3}{4},$$ (42) therefore one can derive the bound for $`N`$ using the conditions (39), the first inequality of (S0.Ex6) and (42): $$N>\frac{64}{5}.$$ (43) The above result implies that $`SO(10)`$ theory has an infrared fixed point if $`N`$ is in the range of (43). Next, we would like to show that $`SO(N+5)`$ theory also has an infrared fixed point. 
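Before doing so, it is worth spelling out the short algebra behind the bound (43), since only its inputs are quoted above. The first condition in (40) gives $`\gamma_{\tilde{y}}(g=0)=-1/4`$. Substituting this into the fixed-point condition (39) yields
$$\gamma_{\tilde{p}}(g=0)=N-18-(N+5)\,\gamma_{\tilde{y}}(g=0)=\frac{5N-67}{4},$$
and the inequality (42), $`\gamma_{\tilde{p}}(g=0)>-3/4`$, then requires $`5N-67>-3`$, i.e. $`N>64/5`$, which is the bound (43).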
Next, we would like to show that the $SO(N+5)$ theory also has an infrared fixed point. The exact beta function for $g$ is
$$\beta(g)=-\frac{g^3}{16\pi^2}\,\frac{3(N+3)-10\left(1-\gamma_{\tilde{y}}(g)\right)-(N+7)\left(1-\gamma_{(y^2)}(g)\right)}{1-(N+3)\frac{g^2}{8\pi^2}},$$ (44)
where we assume that $\gamma_{\tilde{y}}$ and $\gamma_{(y^2)}$ can be expanded perturbatively in $g$ as follows:
$$\gamma_{\tilde{y}} = -\frac{g^2}{8\pi^2}\,\frac{N+4}{2}+\mathcal{O}(g^4),$$ (45)
$$\gamma_{(y^2)} = -\frac{g^2}{8\pi^2}\,(N+5)+\mathcal{O}(g^4).$$ (46)
If the one-loop beta function coefficient is negative but the two-loop one is positive, then an infrared fixed point exists :
$$\begin{aligned}
\beta_0 &= -(2N-8)-10\gamma_{\tilde{y}}(g=0)-(N+7)\gamma_{(y^2)}(g=0)<0
\quad\Longrightarrow\quad N>\frac{14}{5}, \\
\beta_1 &= 5N+20+N^2+12N+35-(2N^2-2N-24)-10(N+3)\gamma_{\tilde{y}}(g=0)-(N^2+10N+21)\gamma_{(y^2)}(g=0)>0 \\
&\quad\Longrightarrow\quad 3\le N\le 14,
\end{aligned}$$
where $\beta_0$ and $\beta_1$ denote the one- and two-loop beta function coefficients, respectively. Taking into account the condition (43) together with this range, we find $N=13,14$. In summary, we have discussed an $\mathcal{N}=1$ SUSY $SO(N)$ gauge theory with a symmetric traceless tensor. This theory saturates the ’t Hooft matching conditions between the fundamental fields and the gauge invariant composites at the origin of the moduli space. This naively suggests a confining phase, but Brodie, Cho, and Intriligator have conjectured that the origin of the moduli space is in a non-Abelian Coulomb phase. If this is the case, it is natural to ask whether a dual description exists. In this paper, we have constructed its dual by the deconfinement technique. Since this approach leads to product gauge groups, it is not at all trivial that the theory has a non-trivial infrared fixed point at the origin of the moduli space. Following , we have shown explicitly that the theory does indeed have a non-trivial fixed point there for $N=13,14$. Thus, this result supports the argument of Brodie, Cho, and Intriligator. Of course, the dual description is not necessarily unique; we may be able to find other dualities by further exploration. We hope this work will provide a useful guide to analyzing theories in which the ’t Hooft anomaly matching appears to be coincidental.
Acknowledgements The author would like to thank S. Kitakado for a careful reading of the manuscript. This work is supported by Research Fellowships of the Japan Society for the Promotion of Science for Young Scientists (No. 3400).
# Boundary effects in reaction-diffusion processes \[ ## Abstract The effects of a boundary on reaction systems are examined in the framework of the general single-species reaction/coalescence process. The boundary naturally represents the reactants’ container, but is applicable to exciton dynamics in a doped TMMC crystal. We show that a density excess, which extends into the system diffusively from the boundary, is formed in two dimensions and below. This implies a surprising result for the magnetisation near a fixed spin in the coarsening of the one-dimensional critical Ising model. The universal, dimensionally-dependent functional forms of this density excess are given by an exact solution and the field-theoretic renormalisation group. Date: . \] Reaction systems have been given a good deal of attention in the literature over the last few decades. They are widespread in nature and methods developed in their analysis have an applicability that extends well beyond conventional chemical systems. Moreover, they provide excellent examples of dynamical, many-body statistical processes and can exhibit a variety of interesting effects such as spontaneous symmetry breaking and pattern formation . In this work, we show that the imposition of impenetrable boundaries on a reaction system leads to a non-trivial spatial variation in the reactant density. Even though such a boundary would naturally represent the vessel holding the reactants, until now no such studies have appeared in the literature. In systems with many interacting degrees of freedom, boundaries often give rise to surface effects that penetrate far into the bulk. Our results demonstrate that in reaction systems such long-range effects are indeed present and have a high degree of universality. We choose the class of the general single-species reaction processes as the starting point for the study of these boundary effects in reaction systems, as it will provide a basis for the analysis of more complex systems. This universality class, comprising the annihilating random walk $`A+AO`$, coalescing random walk $`A+AA`$ and any combination thereof, is fundamental in the theoretical study of reaction systems and covers a broad range of physical phenomena. For example, the coalescence process is seen in the dynamics of excitonic annihilation reactions in the TMMC crystal . The predicted decay exponent from theory is in agreement with experiment for over five orders of magnitude. It should be noted that the boundary effects we introduce here are exhibited in the TMMC crystal in the presence of Mg<sup>2+</sup> or Cd<sup>2+</sup> doping. These defect ions act as perfect reflectors for the annihilating excitons . A mapping also exists from the annihilating random walk to the domain coarsening dynamics in the critical one-dimensional Ising magnet . In this mapping the $`A`$ particles represent domain walls with an annihilation of two $`A`$ particles corresponding to a domain shrinking to zero size in a background of the opposite phase. An impenetrable boundary in the reaction system corresponds to a fixed boundary spin in the Ising magnet. Our analysis will show that domain walls are more likely to be found near the fixed spin than far away. This gives the counter-intuitive result that the absolute value of the dynamic, coarse-grained magnetisation is actually lower near a fixed spin. In the interests of notational simplicity, we provide here the analysis specifically for the process $`A+AO`$. 
However, all densities given can be converted to the result for the coalescence process $`A+AA`$ by a simple factor of two. The annihilating random walk has been studied extensively in homogeneous unbounded systems, either of infinite extent or with periodic boundary conditions. Throughout the work, this will be referred to as the bulk case. It is well known that the variation of the density $`\varrho `$ with time $`t`$ differs from the mean-field prediction $`\varrho 1/t`$ for dimensions $`d2`$. In fact the actual density decays are $`\varrho t^1\mathrm{log}t`$ ($`d=2`$), and $`\varrho t^{d/2}`$ ($`d<2`$) with a universal amplitude. In all cases, it must be stressed that the density remains uniform throughout the system. In the following, we will first introduce the boundary into the reaction process and specify the model to be studied. The mean-field approximation will be shown to predict a homogeneous density unchanged from the bulk case. However, an argument will be presented to show that the mean-field prediction breaks down in low dimensions. The central result of this work is that a fluctuation-induced excess density develops at the boundary and extends into the system diffusively. We outline the field-theoretic renormalisation group (RG) description which we use to identify the universal quantities of this excess. These calculations were performed in real space to one-loop order and show that in two dimension and below, the density excess $`\varrho _E`$ has the following form $`\varrho _E`$ $`=`$ $`{\displaystyle \frac{1}{(8\pi Dt)^{d/2}}}f_d\left({\displaystyle \frac{z^2}{2Dt}}\right).`$ (1) Here, $`D`$ is the diffusion constant of the reactants, $`z`$ is the normal distance from the boundary and $`f_d`$ is the dimensionally dependent scaling function. We were able to find the late-time scaling functions $`f_2`$ and $`f_1`$ exactly in both two and one dimensions (given in Eqs. (9) and (14) respectively). The former we derive from the RG improved field-theoretic calculation and the latter from an exact solution. Finally, in the context of surface critical phenomena, the behaviour in the related ballistic annihilation reaction system will be briefly examined and compared with the diffusive case. We now introduce the model. The system is defined on a hypercubic, $`d`$-dimensional lattice with a lattice spacing of unity. The lattice, is infinite in $`d1`$ transverse dimensions and semi-infinite (sites $`1,2,\mathrm{},\mathrm{}`$) in what will be called the $`z`$ direction. At time $`t=0`$ the lattice is filled with an initial density $`\varrho _0`$ of identical particles that perform two types of dynamics: diffusion and mutual annihilation. The diffusion is represented by each particle hopping at a rate $`D`$ to any neighbouring site at random. A hop from site $`z=1`$ towards the boundary is disallowed. The diffusion is independent for each particle, and hence multiple occupancy of each site is possible, leading to bosonic particle statistics. However, if there are $`n2`$ particles on a particular site, a reaction can occur there with a rate $`\lambda n(n1)`$ to reduce $`n`$ by 2. The above dynamics can be approximated by a mean-field description. This involves ignoring all possible correlations by considering a self-consistent equation involving just the average density $`\overline{\varrho }`$. 
In the continuum limit the boundary is enforced by a zero-current restriction, thus
$$\partial_t\overline{\varrho}=D\nabla^2\overline{\varrho}-2\lambda\overline{\varrho}^2\qquad\text{with}\qquad \partial_z\overline{\varrho}\,|_{z=0}=0.$$ (2)
The boundary restriction is compatible with the bulk solution $\overline{\varrho}=\varrho_0/(1+2\lambda\varrho_0 t)$. Hence, in the absence of strong fluctuations, the density is uniform throughout the system. However, correlations must be properly accounted for in low dimensions. Consider the dynamics of the model, in one dimension, up to a time $t$ and far from the wall. Because random walks in one dimension are recurrent, most particles within a diffusion length $l_b\sim\sqrt{2Dt}$ in the bulk will have interacted and annihilated. This leads to a density in the bulk of the system of $\varrho_b\sim l_b^{-1}\sim c_b/\sqrt{t}$. However, close to the wall the diffusion length is smaller, so $\varrho_w\sim l_w^{-1}\sim c_w/\sqrt{t}$ with $c_w>c_b$, leading to a density excess near the boundary. Nevertheless, this argument is rather crude and a method for systematically including fluctuations is required. The RG has provided such a method for calculating bulk quantities , with the advantage of clearly identifying universal properties. We now present an overview of the generalisation to a system with a boundary: a case technically more complicated due to the lack of translational invariance. Details of the calculation will be provided elsewhere . The field-theoretic description is obtained by first writing a master equation. This describes the flow of probability between microstates of the system and is conveniently written in second-quantised form $\partial_t|P\rangle=-\mathcal{L}|P\rangle$. The vector $|P\rangle$ is the probability-state vector written in a Fock space and acted upon by the evolution operator $\mathcal{L}$,
$$\mathcal{L}=\sum_i\left[D\sum_j a_i^{\dagger}(a_i-a_j)-\lambda\left(1-(a_i^{\dagger})^2\right)a_i^2\right]$$ (3)
where $a^{\dagger},a$ are the usual bosonic operators. The sum $i$ is over all lattice sites, and the sum $j$ is over all of site $i$'s neighbours, with the condition that both sums are restricted to the half-space. The algebraic description $(a,a^{\dagger})$ is mapped onto a continuum path-integral for the action $\mathcal{S}(\varphi,\overline{\varphi})$, where the complex fields $\varphi$ and $\overline{\varphi}$ are analogous to $a$ and $a^{\dagger}-1$. The action $\mathcal{S}=\mathcal{S}_D+\mathcal{S}_R+\mathcal{S}_{\varrho_0}$ comprises diffusion $\mathcal{S}_D$, reaction $\mathcal{S}_R$ and initial-condition $\mathcal{S}_{\varrho_0}$ components. The diffusive part provides the propagator for the theory, which is Gaussian for the transverse dimensions and has the following “mirrored” form for the $z$-dimension:
$$\mathcal{G}_z(z_f,z_i,t)=G(z_f-z_i,t)+G(z_f+z_i,t)$$ (4)
where $G(z,t)$ is a Gaussian of variance $2Dt$. The other components in the action are
$$\mathcal{S}_R=2\lambda\int_0^t dt^{\prime}\int_{z>0}d^dr\,\overline{\varphi}\varphi^2+\lambda\int_0^t dt^{\prime}\int_{z>0}d^dr\,\overline{\varphi}^2\varphi^2,$$ (5)
$$\mathcal{S}_{\varrho_0}=-\varrho_0\int_0^t dt^{\prime}\int_{z>0}d^dr\,\overline{\varphi}\,\delta(t^{\prime}).$$ (6)
The upper critical dimension of the theory is $d_c=2$ and observables were rendered finite by dimensional regularisation in $d=2-\epsilon$. The propagator is not dressed by the interactions (5), implying that the boundary remains effectively reflecting on all scales.
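As a small numerical illustration of why the mirrored form (4) implements the reflecting wall, the snippet below (our own construction, with an arbitrary choice of $D$, source point and time; numpy assumed) checks that the image-sum propagator conserves probability on the half-line and carries no flux through $z=0$.

```python
import numpy as np

def G(z, t, D=1.0):
    # free diffusion kernel of variance 2*D*t
    return np.exp(-z**2 / (4.0 * D * t)) / np.sqrt(4.0 * np.pi * D * t)

def G_half(zf, zi, t, D=1.0):
    # "mirrored" propagator of Eq. (4): direct term plus image term
    return G(zf - zi, t, D) + G(zf + zi, t, D)

zf = np.linspace(0.0, 60.0, 60001)
zi, t = 3.0, 2.5
prob = np.trapz(G_half(zf, zi, t), zf)
flux = (G_half(1e-4, zi, t) - G_half(0.0, zi, t)) / 1e-4   # ~ d/dz at the wall
print(f"probability on the half-line: {prob:.6f}   (should be 1)")
print(f"slope at z = 0:               {flux:.2e}  (should vanish)")
```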
In the language of surface critical phenomena, this effectively reflecting boundary corresponds to the special transition persisting at all orders. This is different from the behaviour frequently seen in equilibrium surface critical phenomena and in related non-equilibrium systems . In fact only the reaction rate $\lambda$ is renormalised, with a fixed point structure identical to the bulk case . This is understandable, as physically the renormalisation of $\lambda$ is connected to the fact that random walks in two dimensions and below are recurrent: a feature unaffected by the presence of a boundary. To get non-trivial, $z$-dependent results it is clear (from the lack of an excess in the mean-field equation) that the RG improved perturbation expansion must be taken to at least one-loop order. Writing the density as an expansion in $\epsilon=2-d$ and splitting the contribution into an excess $\varrho_E(z,t)$ and a homogeneous, background bulk density $\varrho_B(t)$,
$$\varrho(z,t)=\varrho_B(t)+\varrho_E(z,t),$$ (7)
the homogeneous bulk density for $d<2$ is found to be
$$\varrho_B(t)=\frac{1}{4\pi\epsilon(Dt)^{d/2}}\left[1+\frac{\epsilon}{4}\left(2\log(8\pi)-5\right)\right]+O(\epsilon).$$ (8)
This is exactly the result found in , as expected. However, a fluctuation-induced density excess is also found, representing the new result from this calculation:
$$\varrho_E(z,t)=\frac{1}{8\pi(Dt)^{d/2}}f_2\left(\frac{z^2}{2Dt}\right)+O(\epsilon),$$ (9)
$$f_2(\xi^2)=\int_0^1 ds\int_0^s dr\left(\frac{r}{s}\right)^2\frac{\exp\left(-\frac{\xi^2}{2s-r}\right)}{\left[(s-r)(2s-r)\right]^{1/2}}.$$ (10)
The function has the asymptotics $f_2\sim e^{-\xi^2/2}/\xi^3$ and is plotted in Fig. 1. A few things should be noted about the form of $\varrho_E$. First, this excess is not localised at the boundary but extends into the system diffusively, by virtue of its functional dependence on $z^2/Dt$. Also, the excess shares the same universality as the bulk density in that it is independent of the reaction rate $\lambda$ and the initial density $\varrho_0$. Finally, for $d<2$ the amplitude of the excess decays with the same exponent as the bulk. Hence, the ratio of the boundary to the bulk density
$$\frac{\varrho(0,t)}{\varrho_B(t)}=1+\epsilon\left(\frac{3}{4}+\frac{\pi}{2}-\frac{3\pi^2}{16}\right)+O(\epsilon^2)$$ (11)
is a constant, universal quantity independent of all system parameters except the dimension. The behaviour in two dimensions provides an interesting result: the one-loop calculation Eq. (9) is the exact late-time density excess. It is independent of the renormalised reaction rate and therefore represents the universal leading order, with higher-loop corrections decaying as $(t\log t)^{-1}$. The excess given in Eq. (9) gives surprisingly accurate results even for short times, as can be seen in Fig. 1 where the result for the density excess is compared with simulations. The RG has thus provided information about the general behaviour of the density excess as a function of dimension: the universal quantities have been identified and the excess density correctly predicted for late times in $d=2$. Unfortunately, the $\epsilon$ expansion gives disappointing results for the amplitude in $d=1$.
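The universal ratio (11), by contrast, can be tested directly against a simulation of the microscopic dynamics. The sketch below is our own minimal Monte Carlo of single-species annihilation in one dimension (hard-core particles hopping at rate $D$ per direction, immediate annihilation on contact, hops into the wall forbidden); the system size, observation time and number of histories are arbitrary illustrative choices, and the statistics are modest. The measured boundary-to-bulk ratio should come out close to 1.4, to be compared with the one-loop estimate $1+\left(3/4+\pi/2-3\pi^2/16\right)\approx 1.47$ at $\epsilon=1$ and with the exact value discussed next.

```python
import numpy as np

rng = np.random.default_rng(7)

def run(L=400, D=0.5, t_max=50.0):
    """One history of A + A -> 0 on sites 0..L-1 with a reflecting wall at 0."""
    occ = np.ones(L, dtype=bool)        # start fully occupied
    pos = list(range(L))                # surviving particle positions
    idx = list(range(L))                # site -> index in pos (stale if empty)
    t = 0.0
    while t < t_max and pos:
        t += rng.exponential(1.0 / (2.0 * D * len(pos)))
        i = int(rng.integers(len(pos)))
        z = pos[i]
        znew = z + (1 if rng.random() < 0.5 else -1)
        if znew < 0 or znew >= L:
            continue                    # hop into a wall: null event
        if occ[znew]:                   # contact => immediate pair annihilation
            j = idx[znew]
            for k in sorted((i, j), reverse=True):
                occ[pos[k]] = False
                last = pos.pop()
                if k < len(pos):        # swap-remove bookkeeping
                    pos[k] = last
                    idx[last] = k
        else:                           # ordinary hop
            occ[z], occ[znew] = False, True
            pos[i] = znew
            idx[znew] = i
    return occ

runs, L, D, t_max = 1000, 400, 0.5, 50.0
profile = np.zeros(L)
for _ in range(runs):
    profile += run(L, D, t_max)
profile /= runs

near_wall = profile[:3].mean()          # first few sites, well inside sqrt(2*D*t)
bulk = profile[L // 4: L // 2].mean()   # far from both ends
print("density near the wall / bulk density:", near_wall / bulk)
print("one-loop estimate, Eq. (11) at eps = 1:", 1 + 0.75 + np.pi / 2 - 3 * np.pi**2 / 16)
```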
Motivated by the mapping of the reaction dynamics onto the Ising magnet and the related excitonic coalescence process, we now provide the exact solution in one dimension. We generalise the bulk model described in to include an impenetrable boundary. The on-site reaction rate is chosen to be infinite and hence a site in the system can be occupied by at most one particle. The dynamical rules can be written in terms of the possible evolutions of a pair of neighbouring sites. Denoting a particle on site $k$ by $A_k$ and an empty site by $O_k$, the allowed changes are
$$O_zA_{z+1}\;\leftrightarrow\;A_zO_{z+1}\quad\text{with a rate }D,$$ (12)
$$A_zA_{z+1}\;\rightarrow\;O_zO_{z+1}\quad\text{with a rate }2D,$$ (13)
where the site label $z=1,2,\dots$ is restricted to positive integers. The master equation can be written in the language of spin-half operators, and we were able to solve for the density using a similarity transformation found in (details of the calculation will be presented elsewhere ). The density can be written as a sum of Bessel functions, whose continuum limit gives the following density excess:
$$\varrho_E(z,t)=\frac{1}{\sqrt{8\pi Dt}}f_1\left(\frac{z^2}{2Dt}\right),$$ (14)
$$f_1(\xi^2)=\sqrt{2}\,\text{erfc}\left(\frac{\xi}{\sqrt{2}}\right)-\exp\left(\frac{\xi^2}{2}\right)\text{erfc}\left(\xi\right).$$ (15)
This function has the asymptotics $f_1\sim e^{-\xi^2/2}/\xi$ and is plotted in Fig. 1. Again, the bulk component of the density, $\varrho_B=1/\sqrt{8\pi Dt}$, was found to be identical to the infinite case . As expected, the form of the solution is in agreement with that predicted by the RG treatment. The universal ratio of the density near the boundary to the bulk density given in Eq. (11) is found to be exactly $\sqrt{2}$. It should be noted that the one-loop RG prediction for the universal ratio (given in Eq. (11) with $\epsilon=1$) is numerically 1.47 for one dimension, and therefore gives quite a fair indication of the exact value. This should be compared with the poor one-loop predictions for the amplitudes of the density itself in one dimension. The universality (independence from the reaction rate $\lambda$) predicted by the RG treatment can be seen by comparing this exact result for $\lambda=\infty$ with data from a simulation of a system with finite reaction rate (Fig. 1). The result, Eq. (14), gives the time-dependent probability density of domain walls in the coarsening, one-dimensional critical Ising model near a very strong magnetic field, i.e. a fixed spin. The magnitude of the coarse-grained magnetisation is a function of the local density of domain walls: the fewer the domain walls, the higher the magnetisation. Hence, the result implies that the absolute value of the magnetisation measured near a fixed spin is lower than in the bulk of the system. It would be very interesting to see if this dynamic effect is seen in other magnetic systems or in higher dimensions. In summary, results have been presented from an analysis of a reaction-diffusion process near an impenetrable boundary. The mean-field equation was shown to predict a flat density profile. However, it was demonstrated that in two dimensions and below, a density excess develops at the boundary. In one dimension it was found that the density excess is as significant a contribution as the bulk density, with both decaying as $t^{-1/2}$.
In two dimensions the excess was found to be marginally subdominant, decaying as $`t^1`$. Both these density excesses shares the same universality as the bulk density, in that they are independent of the reaction rate and the initial density. Moreover, a higher degree of universality was found in the ratio between the boundary and bulk densities: a quantity depending only on the dimension of space. The functional forms for the density excess were obtained for both one and two dimensions and a link was made to the coarsening dynamics of the one-dimensional Ising model. Most importantly, it was shown that the excess is not localised at the boundary but extends diffusively throughout the bulk of the system. It should be noted that above the critical dimensions there is a sub-dominant density excess. However, these effects are transient and quickly decay to yield the mean-field result, as predicted. A quantity much studied recently and related to domain coarsening dynamics is the persistence exponent. This exponent describes the time dependence of the distribution of sites not yet visited by a domain wall. In the homogeneous bulk case, persistence has been examined by the RG and an exact result found in one dimension . In light of result (14) and the fact that the excess penetrates into the system, it would be worth examining the behaviour of the persistence in the presence of a fixed spin. Finally, it is interesting to consider the difference between the behaviour of diffusing and gas-phase ballistic reaction kinetics near a boundary. A simple realisation of the ballistic $`A+AO`$ system shows the same bulk density-decay exponent in one dimension as the diffusive case. However, contrary to the diffusive case, it can be shown that for ballistic reactions with an (elastic) impenetrable boundary there is a lower density of reactants near the wall . This is understandable, because in late times the remaining particles tend to congregate in groups moving in the same direction. When such a group hits the elastic wall it mostly annihilates within itself, leaving few particles to return. In the context of surface critical phenomena this corresponds to the impenetrable boundary behaving as an effectively absorbing boundary on long time scales, in contrast to the diffusive case. Acknowledgments. We would like to thank David Mukamel and Gunter Schütz for useful discussions. Fabian Essler, Martin Evans and Klaus Oerding are also thanked for their careful reading of the manuscript. The authors acknowledge support from the Israeli Science Foundation.
# A Possible Explanation for the Radio “Flare” in the Early Afterglow of GRB990123 ## 1 Introduction GRB990123 had some remarkable characteristics that drew intense interest from astrophysicists. Its source appears to be at a cosmological redshift $`z1.61`$ (Kelson et al. 1999). Assuming isotropic emission this leads to a huge energy release: $`3\times 10^{54}`$ ergs in $`\gamma `$-rays alone (Kulkarni et al. 1999a). Prompt optical follow-ups saw a very bright (magnitude 9) early afterglow optical flash (Akerlof et al. 1999). Since the $`\gamma `$-ray burst (GRB) itself, its afterglow has been monitored almost constantly in X-rays, the optical band and radio. Radio observations of GRB990123 have been particularly puzzling. As expected from afterglow models, initial measurements (six hours after the burst) obtained only upper limits (Frail et al. 1999a). One day later, however, new observations showed a 8-$`\sigma `$ detection at 8.46 GHz of 260$`\pm 32\mu `$Jy (Frail et al. 1999b). On the other hand, observations at 4.88 GHz made during a period overlapping with the 8.46 GHz detection, yielded only upper limits (Galama et al. 1999). Finally, three to five days after the burst the afterglow radio output was again consistent with zero (Kulkarni et al. 1999b). Figure 1 summarizes the detection and upper limits. The radio emission is rather peculiar, as it does not conform to the gradual rise-and-decay time profile with a timescale of $`𝒪(10)`$ days which is expected from the standard GRB afterglow model and which was rather successfully confirmed by the radio afterglow observations of GRB970508 (Waxman, Kulkarni and Frail 1998). Several possibility have been raised to explain this one-time radio “flare”. Kulkarni et al. (1999b) have suggested that the flare is an interstellar scattering and scintillation event. Alternatively, it might be part of the early afterglow from the external reverse shock, if self-absorption of the radio emission suppressed the radio flux during the first day as well as during the second day but only at the 4.88 GHz band (Sari and Piran 1999). Here we investigate a third possibility (Shi and Gyuk 1999), that the radio “flare” might be due to the relativistic shock ploughing into a dense part (a cloud, or ejecta, for example) of the interstellar medium (ISM) off line-of-sight (LOS). This off LOS portion of the shock was therefore decelerated more efficiently than the rest (including that along LOS), and in turn gave rise to the premature emission of radio which also faded away relatively rapidly. We envision a geometry as in Figure 2. We will refer generally to this denser part of ISM off the LOS as a “cloud”. In the standard GRB afterglow model, the afterglow is generated by synchrotron radiation in the external shock of the GRB event as the shock gradually decelerates in a homogeneous ISM (with density $`n1`$ cm<sup>-3</sup>). The frequency of the synchrotron radiation depends strongly on the Lorentz factor, $`\gamma _e`$, of the electrons in the shock, which in turn scales with the Lorentz factor, $`\gamma `$, of the shock. Therefore, the afterglow starts at shorter wavelengths, in X-rays, and progressively moves to longer wavelengths, as the shock gradually slows down in the ISM. After $`𝒪`$(10) days, the afterglow peaks in the radio band. A similar progression is envisioned to occur in the portion of the shock that encounters the off-LOS cloud, except over a much shorter timescale. 
However, because the afterglow radiation is beamed to an opening angle $`1/\gamma `$ we can only see the off-LOS cloud shock if it is within $`1/\gamma `$ of the LOS. Thus relativistic beaming alone may prevent short wavelength (optical etc.) “flares” originating from the off-LOS cloud from being detectable. If the size of the cloud is comparable to the distance of the cloud to the GRB source, we can crudely approximate the deceleration of the relativistic shock in the cloud as if the shock were decelerating in an homogeneous ISM of enhanced particle density ($`n1`$ cm<sup>-3</sup>) and with spherical symmetry. This approximation should hold sufficiently well for the later epoch of the deceleration, which is relevant to radio emission. In so doing, we employ the scaling relations in the standard afterglow model. The Lorentz factor of the relativistic shock scales as $$\gamma \{\begin{array}{cc}6E_{52}^{1/7}n_1^{1/7}\gamma _{100}^{1/7}t_{\mathrm{day}}^{3/7}[(1+z)/2.6]^{3/7},\hfill & \text{for radiative shocks;}\hfill \\ 7E_{52}^{1/8}n_1^{1/8}t_{\mathrm{day}}^{3/8}[(1+z)/2.6]^{3/8},\hfill & \text{for adiabatic shocks.}\hfill \end{array}$$ (1) from energy and momentum conservation considerations (see e.g., Piran 1998). In equation (1), $`E_{52}`$ is the initial energy of the shock in units of $`10^{52}`$ erg, assuming a $`4\pi `$ expansion angle; $`n_1`$ is the particle number density of the medium in cm<sup>-3</sup>; $`\gamma _{100}`$ is the initial Lorentz factor of the shock in units of 100; $`t_{\mathrm{day}}`$ is the time elapsed since the GRB in days, as seen by the observer; and $`z`$ is the redshift of the GRB. In the radiative regime, the particles in the shock convert their kinetic energy into radiation rather efficiently. In the adiabatic regime, the radiation loss is negligible. For the external shock in a GRB, the transition from the radiative regime to the adiabatic regime occurs at a time (Piran 1998) $$t_{\mathrm{r}\mathrm{a}}0.002E_{52}^{4/5}n_1^{3/5}(ϵ_e/0.6)^{7/5}(ϵ_B/0.01)^{7/5}[(1+z)/2.6]^{12/5}\gamma _{100}^{4/5}\mathrm{day},$$ (2) where $`ϵ_e`$ is the fraction of thermal energy of the shock that resides in the random motion of electrons, and $`ϵ_B`$ is the ratio of the magnetic field energy to the thermal energy density of the shock ($`4\gamma ^2n_1m_pc^2`$ where $`m_p`$ is the proton mass and $`c`$ the speed of light). Canonical values are $`ϵ_e0.6`$ and $`ϵ_B0.01`$, obtained by fitting the standard afterglow model to the observed afterglow of GRB970508 (Wijers and Galama 1998; Granot, Piran and Sari 1998). Before $`t_{\mathrm{r}\mathrm{a}}`$ the cooling time is shorter than the dynamic timescale, and the shock is radiative. For an energetic GRB event such as GRB990123 where $`E_{52}10^3`$, and a cloud much denser than the average ISM ($`n_11`$), the transition occurs much later than a day. We therefore assume a radiative shock in our treatment. There are three synchrotron emission frequencies that are crucial: $`\nu _m`$, the peak synchrotron radiation frequency if electrons in the shock are slow-cooling; $`\nu _c`$, the peak synchrotron radiation frequency if electrons are fast-cooling; and $`\nu _a`$, the self-absorption frequency of the synchrotron radiation below which the radiation is absorbed by electrons in the shocks. Depending on the ratio of these key frequencies, the synchrotron radiation from the external shock can have very different spectral shapes. 
In the standard afterglow picture, the shock-heated electrons develop a power law number density distribution $`N(\gamma _e)\gamma _e^p`$ where $`\gamma _e\gamma _{e,min}`$ is the Lorentz factor of electrons. The minimum Lorentz factor cut-off is $$\gamma _{e,min}\frac{p2}{p1}\frac{m_p}{m_e}ϵ_e\gamma 2.2\times 10^3(ϵ_e/0.6)E_{52}^{1/7}n_1^{1/7}\gamma _{100}^{1/7}t_{\mathrm{day}}^{3/7}[(1+z)/2.6]^{3/7},$$ (3) where $`m_p`$ and $`m_e`$ are proton and electron masses respectively (Piran 1998). The power law index $`p`$ is found to be $`2.5`$ by fitting the observed GRB spectra and that of the afterglows. If electrons are slow-cooling, their peak synchrotron emission will be in the observer’s frame at a frequency $$\nu _m=\frac{\gamma \gamma _{e,min}^2}{1+z}\frac{eB}{2\pi m_ec}7\times 10^{12}(ϵ_B/0.01)^{1/2}(ϵ_e/0.6)^2E_{52}^{4/7}n_1^{1/14}\gamma _{100}^{4/7}t_{\mathrm{day}}^{12/7}[(1+z)/2.6]^{5/7}\mathrm{Hz},$$ (4) where $`e`$ is the electron charge and $`B`$ is the magnetic field. If, however, the shocked electrons cool quickly, they will mostly radiate from a cooled state, whose Lorentz factor is $$\gamma _{e,c}\frac{3m_ec}{4\sigma _T(B^2/8\pi )\gamma t}7\times 10^4(ϵ_B/0.01)^1E_{52}^{3/7}n_1^{4/7}\gamma _{100}^{3/7}t_{\mathrm{day}}^{2/7}[(1+z)/2.6]^{2/7}$$ (5) where $`\sigma _T=8\pi e^4/3m_e^2c^4=6.65\times 10^{25}`$ cm<sup>2</sup> is the Thompson scattering cross section (Piran 1998). The emitting frequency in the observer’s frame is $$\nu _c=\frac{\gamma \gamma _{e,c}^2}{1+z}\frac{eB}{2\pi m_ec}5\times 10^{15}(ϵ_B/0.01)^{1.5}E_{52}^{4/7}n_1^{13/14}\gamma _{100}^{4/7}t_{\mathrm{day}}^{2/7}[(1+z)/2.6]^{5/7}\mathrm{Hz}.$$ (6) To find the self-absorption frequency $`\nu _a`$, we follow Granot et al. (1999) and Wijers and Galama (1999) to calculate at what frequency the optical depth becomes unity. A crude estimate of the optical depth $`\tau `$ is $`\tau \alpha _\nu ^{}^{}R/\gamma `$, where $`\alpha _\nu ^{}^{}`$ is the absorption coefficient at a a frequency $`\nu ^{}`$, and $`R/\gamma `$ is the thickness of the shock, all in the local rest frame. However, we cannot directly adopt the formula for $`\nu _a`$ in Granot et al. (1999), or Wijers and Galama (1999), because both have assumed a slow electron cooling regime in an adiabatic shock. In our problem, the electrons are fast-cooling, and the shock is radiative. We therefore substitute the electron Lorentz factor $`\gamma _{e,c}`$ (fast-cooling regime) for $`\gamma _{e,min}`$ (slow-cooling regime), and likewise substitute the shock Lorentz factor $`\gamma `$ in radiative shocks for that in adiabatic shocks. The resultant self-absorption frequency is then $$\nu _a10^8(ϵ_B/0.01)^{6/5}E_{52}^{4/5}n_1\gamma _{100}^{4/5}t_{\mathrm{day}}^{4/5}[(1+z)/2.6]^{1/5}\mathrm{Hz}.$$ (7) We have implicitly assumed $`\nu _a\nu _c`$ since for rapid cooling most of the electrons will be at Lorentz factor $`\gamma _{e,c}`$. For these electrons, absorption at frequencies $`\nu _c`$ falls off rapidly. If we further assume that the time-integrated absorption from newly injected electrons in transition to their final cooled state is small, we will always have $`\nu _a\nu _c`$. Therefore, with $`E_{52}10^3`$, $`n_11`$ and $`t_{\mathrm{day}}1`$, and canonical values of $`ϵ_B`$, $`ϵ_e`$ and $`\gamma _{100}`$, there is a hierarchy in frequencies: $`\nu _a\nu _c<\nu _m`$. 
The peak flux of the synchrotron radiation seen by an observer is at $`\nu _c`$ (Piran 1998): $$F_\nu (\nu _c)=F_{\nu ,max}1.8\times 10^3(ϵ_B/0.01)^{1/2}E_{52}^{8/7}n_1^{5/14}\gamma _{100}^{8/7}t_{\mathrm{day}}^{3/7}[(1+z)/2.6]^{10/7}d_{28}^2\mu \mathrm{Jy},$$ (8) where $`<1`$ is a geometric factor to account for the fact that the emission is from off LOS so we only see an edge of the radiation cone, and $`d_{28}`$ is the luminosity distance in the units of 10<sup>28</sup> cm. Fluxes at other frequencies are (Piran 1998) $$F_{nu}\{\begin{array}{cc}(\nu /\nu _a)^2F_\nu (\nu _a)\hfill & \text{if }\nu <\nu _a\text{;}\hfill \\ (\nu /\nu _c)^{1/3}F_\nu (\nu _c)\hfill & \text{if }\nu _a<\nu <\nu _c\text{;}\hfill \\ (\nu /\nu _a)^{1/2}F_\nu (\nu _c)\hfill & \text{if }\nu _c<\nu <\nu _m\text{;}\hfill \\ (\nu /\nu _a)^{p/2}F_\nu (\nu _m)\hfill & \text{if }\nu >\nu _m\text{.}\hfill \end{array}$$ (9) The spectrum and its evolution is schematically plotted in Figure 3. We require $`\nu _c,\nu _a>8.46`$ GHz at the time of $`t_{\mathrm{day}}1`$ so that a non-detection of radio signal at 4.88 GHz is compatible with a simultaneous detection at 8.46 GHz. This condition is satisfied if $`n_12\times 10^4`$. Assuming at a redshift $`z1.61`$, the luminosity distance to GRB990123 is $`d_{28}4`$ (its order of magnitude is insensitive to different choices of cosmology). The peak flux is then $`F_{\nu ,max}3\times 10^5n_1^{5/14}\mu `$Jy when adopting for other parameters values mentioned above. Scaling down to 8.46 GHz, we find $`F_\nu (8.46\mathrm{GHz})F_{\nu ,max}(8.46\mathrm{GHz}/\nu _c)^23\times 10^3n_1^{31/14}\mu `$Jy.<sup>1</sup><sup>1</sup>1We have assumed $`\nu _a\nu _c`$, which will be the case for $`n_1200`$. To yield a 260 $`\mu `$Jy detection would therefore require $`n_1200`$. Because at this part of the spectrum $`F_\nu \nu ^2`$, a flux of 260$`\pm 32\mu `$Jy at 8.46 GHz implies a flux of 86$`\mu `$Jy at 4.88 GHz, consistent with the $`3\sigma `$ limit of 130$`\mu `$Jy measured at this frequency (Galama et al. 1999). Given $`200n_12\times 10^4`$, the Lorentz factor of the prematurely decelerated portion of the shock at $`t_{\mathrm{day}}1`$ is of order $`𝒪(1)`$. The non-detection of radio emission six hours after the burst may be due to strong absorption (i.e., $`\nu _a`$ too large), or it may simply be that the shock hadn’t yet encountered the cloud. While three days later, this portion of the shock has become very weak, and its emission is further absorbed by the main shock that propagates along LOS. It should be kept in mind that a factor of several below the level of the detected emission might render the emission undetectable. Assuming that the dimension of the cloud is comparable to the size of the fireball $`4\gamma ^2t`$ at $`t_{\mathrm{day}}1`$, we find a mass for the cloud to be $`10^5`$ to $`10^3M_{}`$. We speculate that it may be ejecta from the GRB site. The main portion of the relativistic shock along LOS is not affected by the cloud off LOS. It generates the main afterglow as expected from the standard GRB afterglow model. The temporal decay of this afterglow in a given frequency band follows a powerlaw $`t^{1.1}`$ (Vietri 1997; Waxman 1997; Sari, Piran and Narayan 1998). But as its radiation cone become wider (opening angle $`\theta 1/\gamma `$), the viewing area of an observer is larger. Eventually the area will engulf the portion of the shock that was prematurely decelerated and terminated. The temporal decay of the main afterglow should then be faster than $`t^{1.1}`$. 
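Returning for a moment to the radio fluxes quoted above, the $F_\nu\propto\nu^2$ scaling of the self-absorbed part of the spectrum permits a one-line consistency check. The snippet below is only a back-of-the-envelope helper of ours, with the flux values taken from the text: it rescales the 8.46 GHz detection to 4.88 GHz and compares the result with the reported 3-$\sigma$ upper limit.

```python
f_846, df_846 = 260.0, 32.0      # detected flux at 8.46 GHz, microJy
limit_488 = 130.0                # 3-sigma upper limit at 4.88 GHz, microJy

scale = (4.88 / 8.46) ** 2       # F_nu ∝ nu^2 below the self-absorption frequency
f_488, df_488 = f_846 * scale, df_846 * scale
print(f"implied 4.88 GHz flux: {f_488:.0f} +/- {df_488:.0f} microJy "
      f"(limit: {limit_488:.0f} microJy, consistent: {f_488 < limit_488})")
```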
The transition to a faster decay law is not unique: for example, if the relativistic shock is a narrow jet, the temporal decay of its afterglow steepens when its opening angle $`\theta <1/\gamma `$. Depending on the details of the model it may either steepen by an additional $`t^{3/4}`$ power (Mészáros & Rees 1999), or steepen to $`t^p`$ (Kulkarni et al. 1999). The rate of afterglow decay due to an off-LOS hole in a spherical shock should be more modest than that due to a jet and indeed in this scenario, we should expect the afterglow decay will eventually approaches its initial shallower decay profile, as the influence of the geometric defect becomes increasingly less significant. In summary, we show that the radio “flare” observed in the early afterglow of GRB990123 may be due to a relativistic shock encountering a denser part of the ISM, (with a density between $`200`$ and $`2\times 10^4`$ cm<sup>-3</sup>) off line-of-sight. A transition from a $`t^{1.1}`$ decay to a modestly faster temporal decay is expected in the later stage of the afterglow. This scenario also implies that the relativistic shock that generates the afterglow is unlikely to be beamed by more than a factor of a few. We thank George Fuller, Bob Gould and Art Wolfe for discussions. XS acknowledges support from NSF grant PHY98-00980 at UCSD. GG wishes to thank the Department of Energy for partial support under grant DEFG0390ER40546 and Research Corporation.
# 1 Introduction ## 1 Introduction The growth of high quality compound semiconductors is of great technological importance . Despite the longstanding tradition of molecular beam epitaxy (MBE), it is still a challenging task to improve the growth of high quality thin films and well defined interfaces. In order to optimize MBE growth a detailed knowledge of the relation between microscopic processes and macroscopic properties is very important. Computer simulations are an ideal tool to access this relation between atomistic processes and epitaxial growth. In addition, different and new growth strategies can be easily implemented and tested in a fast and cheap way . In this paper we will investigate the macroscopic effects of two distinct microscopic mechanisms. The term microscopic refers to processes on the atomic scale: e.g. a single diffusion step of an adatom or desorption of an atom. These processes are the ingredients to the computer model used in this paper. This is contrasted to the term macroscopic for effects which are typically measurable in experiments: e.g. the overall mass desorption as can be monitored by the partial pressure , the form and the distribution of three–dimensional structures accessible by scanning tunneling microscopy , or the growth rate as determined by electron diffraction oscillations . The computer simulations employed here are ideally suited to bridge the gap between such macroscopic effects and their underlying microscopic processes, since both scales are accessible. Several strategies have been proposed in the literature to optimize MBE growth: In particular, layer–by–layer growth is most desirable in order to achieve high quality thin films . However, quite often a transition to non–layer–by–layer is observed where three–dimensional (3D) structures such as mounds or pyramids appear. In conventional MBE the time $`t_\times `$ until this growth mode crosses over to 3D–growth has been shown to vary with $`Ft_\times (D/F)^\delta `$ , where $`F`$ stands for the flux and $`D`$ for the diffusion constant of adatoms. Without desorption, Ehrlich–Schwoebel barriers and step edge diffusion (SED) $`\delta =2/3`$ has been observed for epitaxial growth . For metals, several methods have been proposed and tested to achieve and maintain layer–by–layer growth. For instance it has been shown that pulsing the deposition rate or pulsing the temperature leads to a prolonged layer–by–layer regime . Recently, it has been proposed that pulsed glancing–angle sputtering can even lead to “layer–by–layer growth forever” . All these concepts can so far be understood in terms of a typical diffusion length or an enhanced interlayer diffusion at step edges. In this paper we will propose strategies which exploit other specific microscopic processes, namely desorption and SED . As far as we know, no attempt has been made to exploit these processes in order to achieve improved growth. Some preliminary results of our investigation have been published in , and in this paper we describe the investigation in full detail. In Sec. 2 we introduce the solid–on–solid (SOS) model and the microscopic processes. In our computer experiments we first investigated the temperature dependence of the overall growth rate in the layer–by–layer regime (Sec. 3). We are able to correlate this macroscopic property to the microscopic dynamics of the computer model. This allows us to propose a new strategy for layer–by–layer growth. 
If, however, the growth of three–dimensional structures occurs, another strategy is applicable. Using a simplified model of growth we have recently shown that SED plays a crucial role in this regime . These findings allow us to propose an optimized way for the growth of 3D–structures in Sec. 4. Concluding remarks concerning the experimental realisation and a summary will be given in Sec. 5.
## 2 Computational model
Lattice models with the solid–on–solid restriction (SOS) have proved to be a useful tool for studying surface morphology . The model has a long history in the study of the surface roughness transition . Gilmer and Bennema were the first (to our knowledge) to include surface diffusion . Since then it has been used intensively to study epitaxial growth . Here we use its simplest form, where only one kind of particle and a simple cubic lattice are considered. The particles represent single atoms when a comparison with a simple cubic metal is made. However, even compound semiconductors can be modeled, as long as only kinetic features are investigated. E.g. in Ref. the RHEED (reflection high energy electron diffraction) oscillations of GaAs(001) during growth have been quantitatively reproduced. In our simulations we use the Maksym–algorithm of Ref. . At each time step a Monte–Carlo move is carried out. The way in which the event is selected makes it superior to conventional Monte–Carlo techniques (the algorithm partly uses a binary search in the array of possible events). We have used a system of 300 times 300 lattice sites, unless stated otherwise. Besides the SOS–restriction, further simplifications are due to the particular choice of possible events labeled $i$ and the parametrisation of the corresponding rates $\Gamma_i$. We allow jumps to the four nearest neighbor sites (diffusion on a flat surface, attachment and detachment from steps, …) and desorption. The rates depend only on the four nearest neighbor sites, as will be described below. We describe all these processes as Arrhenius–activated,
$$\Gamma_i=\nu_i\exp\left(-\frac{E_i}{\text{k}_\text{B}T}\right),$$ (1)
as is predicted by several theories . One quite often assumes vibration frequencies $\nu_i$ of the order of Debye–frequencies, i.e. $10^{12}$–$10^{14}\,\text{s}^{-1}$. Indeed, in sublimation experiments of CdTe(001) $10^{14}\,\text{s}^{-1}$ has been observed ; vibration frequencies for diffusion are often of the order of $10^{12}\,\text{s}^{-1}$ (measurements for metals , calculations for GaAs(001) , or simulations and calculations for Si(001) ). Hence, it is reasonable to assume that the diffusion as well as the desorption rates of our model share one common prefactor $\nu_i=\nu_0=10^{12}\,\text{s}^{-1}$, which allows us to keep the number of parameters small. The activation energies for the different microscopic processes are parameterized as follows: a diffusion jump of a free adatom has to overcome a barrier $E_B$, and each next in–plane neighbour adds an energy $E_N$. The rate of diffusion jumps which keep the height of the particle unchanged thus becomes $\nu_0\exp\left(-(E_B+nE_N)/\text{k}_\text{B}T\right)$, where $n$ is the number of next in–plane neighbors. Note that the overall rate for diffusion on a flat surface is four times this jump rate due to the four possible directions. Hence, the diffusion constant becomes $D=\nu_0\exp(-E_B/\text{k}_\text{B}T)$ .
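To make the rate hierarchy concrete, the helper below (ours, not the authors' simulation code; numpy assumed) tabulates a few Arrhenius rates from Eq. (1) using the common prefactor $\nu_0=10^{12}\,\text{s}^{-1}$ and the barrier set adopted in the next paragraph ($E_B=0.9$, $E_N=0.25$, $E_S=0.1$, $E_D=1.1$ eV). Exactly which hops the Ehrlich–Schwoebel term is added to is a model-specific bookkeeping detail, so that entry is only indicative.

```python
import numpy as np

K_B = 8.617333e-5      # Boltzmann constant, eV / K
NU0 = 1.0e12           # common attempt frequency, 1 / s

def arrhenius(E_act_eV, T_K):
    return NU0 * np.exp(-E_act_eV / (K_B * T_K))

def sos_rates(T, E_B=0.9, E_N=0.25, E_S=0.1, E_D=1.1):
    return {
        "free-adatom hop (per direction)":   arrhenius(E_B, T),
        "hop with one in-plane neighbour":   arrhenius(E_B + E_N, T),
        "interlayer hop over an ES barrier": arrhenius(E_B + E_S, T),  # indicative only
        "desorption of a free adatom":       arrhenius(E_D, T),
        "diffusion constant D":              arrhenius(E_B, T),
    }

for T in (500.0, 560.0, 620.0):           # temperatures used later in the paper
    print(f"T = {T:.0f} K")
    for name, rate in sos_rates(T).items():
        print(f"  {name:35s} {rate:10.3e} 1/s")
```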
Since we measure all length scales in units of the lattice constant $a$, the factor $a^2$ has been dropped from $D$. At step edges an additional Ehrlich–Schwoebel barrier $E_S$ is considered . However, this barrier is not added for particles on top of elongated islands of only one lattice constant width . The desorption barrier is $E_D$. Again, each next in–plane neighbour contributes $E_N$. The deposition of particles occurs with a rate $F$ measured in monolayers per second (ML/s). During deposition we consider another process which is not Arrhenius–activated. After a deposition site is chosen randomly, we allow the particle to relax to a lower neighboring site. Here, we consider only relaxation to nearest–neighbor sites. Such transient diffusion or downward funneling has been observed in molecular dynamics of simple Lennard–Jones systems and has been related to the reentrant layer–by–layer growth at very low temperatures . In addition it has been shown to play a crucial role for slope selection in mound morphology . We will concentrate on one set of parameters, namely $E_B=0.9\,\text{eV}$, $E_N=0.25\,\text{eV}$, $E_S=0.1\,\text{eV}$, and $E_D=1.1\,\text{eV}$. This particular choice of parameters reproduces some features of CdTe(001) during sublimation and annealing. However, we would like to stress that the findings of our present work are of more general relevance, independent of the specific choice of the energetic parameters.
## 3 Reevaporation during layer–by–layer growth and the Flush Technique
For clarity we will distinguish several processes of desorption. The term desorption will be used explicitly to describe the atomistic process: the desorption of a single atom. Sublimation is reserved to describe the evaporation of a surface left in (perfect) vacuum. Reevaporation or, more precisely, reevaporation during growth will describe the overall desorption rate, mostly due to the desorption of freshly deposited particles during growth. For two of the most important compound semiconductors a decrease of the MBE growth rate with increasing temperature was observed (CdTe(001) and GaAs(001) ). For CdTe(001) the reevaporation rate was found to follow an Arrhenius law with considerably lower values of the activation energy, 0.14–0.30 eV, compared to sublimation (1.55 eV resp. 1.9 eV ). A tempting explanation is to ascribe this low energy to the existence of a physisorbed precursor . However, studies of the sublimation with computer simulations as well as experiments for CdTe(001) showed a strong influence of the morphology. In Ref. we already concluded that in MBE one should expect desorption rates other than those measured by sublimation. Independently, Pimpinelli and Peyla also showed that a physisorbed precursor is not necessary to explain the observed low energies, using kinetic Monte–Carlo simulations as well as simple scaling arguments . In fig. 1, the diamonds represent the reevaporation which we derived from the difference between the applied flux ($F=1$ ML/s) and the measured growth rate (i.e. the final height divided by the simulated time). The data points for $F=4$ ML/s show that the effective energy is independent of the applied flux. The triangles correspond to the sublimation, i.e. the evaporation rate without application of an external flux ($F=0$ ML/s) . Both processes are found to be Arrhenius–activated, however with strikingly different effective energies.
The reevaporation rate during growth corresponds to an activation energy of approximately 0.90 eV, which is even lower than the microscopic desorption energy of $E_D=1.1$ eV. At high temperatures the reevaporation rate saturates and equals the flux of impinging particles. On the contrary, the sublimation energy of approximately 1.73 eV is considerably higher . The relation of the sublimation energy to the microscopic parameters was shown to be approximated by $E_{\text{sub}}\approx 0.61E_B+0.35E_D+2.85E_N+0.44E_S$ . To derive this relation we varied all microscopic energy parameters independently. Applying the same microscopic analysis to the reevaporation during growth of this model we obtain
$$E_{\text{re}}\approx -0.31E_B+0.94E_D+0.51E_N-0.03E_S.$$ (2)
As an example of this microscopic analysis, fig. 2 shows the measured influence of the diffusion barrier $E_B$ and of the desorption barrier $E_D$ on the reevaporation rate. We applied a flux of $F=4$ ML/s. Since the measured reevaporation rate is much lower (less than 1 ML/s) we can be sure to have no saturation effects. Note the opposite sign of the two contributions. The slope measures the prefactor in the above expression for $E_{\text{re}}$ (eq. 2). We want to mention that this result does not agree with the scaling relation obtained by Pimpinelli and Peyla . However, at lower temperatures (not shown) we observe a cross–over to their result with a critical nucleus size of $i^{*}=1$. The crossover itself can be seen in the data point for $F=4$ ML/s at 500 K, which lies above the value of about 0.2 ML/s extrapolated to 500 K from data points at higher temperatures. A detailed investigation of the validity regime of our result and of the relation to the results obtained by Pimpinelli and Peyla is postponed to future work. Besides the different weightings in $E_{\text{re}}$ and $E_{\text{sub}}$, the striking difference (at high as well as low temperatures) is the negative contribution of the diffusion barrier $E_B$ to $E_{\text{re}}$. This result seems to be of general validity and can be explained in the following way: even though the island distance is influenced by $E_B$, the dominant effect of higher diffusion barriers seems to be the reduction of the diffusion length of the adatoms. Consequently, particles have a higher probability to desorb before they stick to an island. This result suggests a strategy to obtain high-quality (layer-by-layer) growth together with high growth rates. Short flushes of particles at the beginning of each monolayer would result in a high density of islands. Afterwards, with a low flux, the particles are likely to hit islands to stick to, which results in a low overall reevaporation rate. The proposed procedure (flush–mode) is drawn schematically in fig. 3. Figure 1 shows that the reevaporation rate indeed reduces by a factor of about two when applying this strategy. The mean flux was 1 ML/s, as for the conventional growth simulations. The profile of the flux was composed as follows: in intervals of one second we deposited a total amount of 0.23 ML within 0.003 s (see fig. 3); afterwards a constant flux of 0.77 ML/s was applied. Owing to the decrease of evaporation, the growth rate increases. The gain is highest at high temperatures (at 620 K the growth rate is doubled), since there the evaporation rate becomes comparable to the applied flux. In addition to higher growth rates, layer–by–layer growth is assisted by the flush–mode.
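The weighted-energy decomposition and the pulsed flux profile just described are easy to restate numerically. The snippet below is our own bookkeeping (not the simulation code): it plugs the model's barrier values into the fitted weights, and since those weights are themselves approximate, the reconstructed energies agree with the quoted values only to within a few hundredths of an eV. The helper function encodes the flush-mode flux profile.

```python
E_B, E_N, E_S, E_D = 0.9, 0.25, 0.1, 1.1       # eV, parameter set of Sec. 2

E_sub = 0.61 * E_B + 0.35 * E_D + 2.85 * E_N + 0.44 * E_S
E_re  = -0.31 * E_B + 0.94 * E_D + 0.51 * E_N - 0.03 * E_S   # Eq. (2)
print(f"E_sub ~ {E_sub:.2f} eV (quoted: ~1.73 eV)")
print(f"E_re  ~ {E_re:.2f} eV (quoted: ~0.90 eV)")

def flush_flux(t, pulse=0.23, pulse_len=0.003, base=0.77, period=1.0):
    """Flush-mode deposition flux in ML/s: `pulse` ML delivered within
    `pulse_len` seconds at the start of every `period`, then the constant
    `base` flux; the time-averaged flux is close to 1 ML/s."""
    return pulse / pulse_len if (t % period) < pulse_len else base
```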
In fig. 4 we compare three different techniques/models of growth:
* (A) conventional growth with $F=1$ ML/s and allowed desorption,
* (B) flush–mode with $F_{\text{const}}=1$ ML/s plus an additional 0.30 ML deposited within 0.003 s every second, with allowed desorption,
* (C) flush–mode without desorption ($E_D=\infty$), $F_{\text{const}}=0.77$ ML/s plus an additional 0.23 ML deposited within 0.003 s every second.
The different fluxes in B and C are chosen in order to achieve the synchronization of the pulses with layer completion. Due to the possible desorption in B, however, synchronization can be achieved only approximately in this case. We investigate these different methods by comparing the surface width
$$w=\sqrt{\left\langle\left(h(x,y)-\langle h\rangle\right)^2\right\rangle},$$ (3)
where $\langle\cdot\rangle$ denotes the average over the surface. Perfect layer–by–layer growth would lead to oscillations of $w$ between zero and 0.5 (at a coverage of half a monolayer). Higher values of $w$ indicate a broader distribution of the heights. After the deposition of 60 ML using the different techniques the surface widths become considerably different (see fig. 4). The flush–mode without desorption (C) is even farther away from perfect layer–by–layer growth than conventional growth (A). The flush–mode in the presence of desorption (B) is superior to both (A) and (C) in the long run and keeps the surface smooth. Looking at the deposition of the first 20 ML, this seems surprising. The oscillations of $w$ with technique (B) are disrupted due to an obvious asynchronization. With (C) the synchronization is perfect, leading to very strong and regular oscillations. Using the conventional technique (A) the oscillations are damped and much less pronounced. To summarize, the proposed flush–technique is useful to improve the growth rate (we achieved a factor of two at high $T$) and to assist layer–by–layer growth. Hereby, the desorption of adatoms is crucial to achieve optimized growth: without desorption the flush–mode is worse than conventional growth, even though strong oscillations are induced. The reevaporation has such an impact because it is height selective, i.e. adatoms on top of existing islands desorb easily whereas adatoms beneath islands are preferentially incorporated into the crystal. Clearly, this height selective behaviour is achieved only when a positive Ehrlich–Schwoebel barrier hinders the particles from being incorporated at step edges from above. We would like to point out that the usage of a chopped flux has been proposed and investigated by Rosenfeld et al. in the framework of the concept of the two mobilities. However, our findings show that only in the presence of desorption is the occurrence of oscillations indeed coupled to a reduction of the surface roughness. In ref. the effect of a chopped flux on island distances was investigated. These findings would allow one to optimize even further, in that one could calculate the minimal flux intensity and the duration of the flush needed in order to achieve an increased island density. Here, we have chosen a safe high flux without explicit use of the results of .
## 4 Optimizing the structure of mounds in 3D–growth
Quite generally, layer–by–layer growth , as well as step flow, is not attainable forever . This can be due to, e.g., Ehrlich–Schwoebel barriers, which are typically positive . This favors new nucleation events on top of existing islands, which leads to 3D–growth sooner or later. In order to optimize MBE growth it is thus also interesting to study the growth of 3D–structures by computer simulations.
We will start with a brief summary of our findings for 3D–growth on the basis of a simplified model of epitaxial growth . This will enable us to introduce the basic concepts. After that, we will show how these results can be used in order to improve 3D–growth (to be specified below). We will test this new strategy with computer simulations of the SOS–model of Sec. 2. The most important simplification we introduced in ref. was an effective description of diffusion and nucleation. Rather than to simulate the simultaneous motion of many adatoms we concentrated on the simulation of individual particles which is a usual technique for simple growth models . Parameters to the model are the diffusion length and in a similar way SED is considered. Even though a similar SED has been introduced in ref. , there, in difference to the present work no search for kink sites was implemented. In MBE the typical length of the step edge diffusion process depends on temperature and flux of arriving particles . On a one–dimensional substrate the theory of island nucleation predicts a typical distance between nucleation centers of the form $$\mathrm{}_{\text{SED}}\left(\frac{d}{f}\right)^{1/4}$$ (4) where $`d`$ is the diffusion constant and $`f`$ the flux of arriving particles . If we apply this theory to the lateral or in–plane growth of a pyramid (concentrating on a slice of one ML thickness) the flux $`f`$ can be identified with the reduced flux per unit length of the step edge $`f=F\mathrm{}_T`$ where $`\mathrm{}_T`$ stands for the terrace width. Within this context $`d`$ becomes the diffusion constant for diffusion along the step edge. The scaling relation (4) for $`\mathrm{}_{\text{SED}}`$ was obtained under the restriction that two atoms (i.e. $`i^{}+1=2`$) already form a stable nucleus and no desorption occurs. We note that for greater values of $`i^{}`$ the correct theoretical result has been derived recently . However, for the model as described in Sec. 2 the assumption of $`i^{}=1`$ is reasonable. When $`\mathrm{}_{\text{SED}}`$ is of the order of the modeled system size (strong SED) the growth is characterized by the formation of square based pyramids with a well defined slope. The step edges are oriented along the lattice coordinates and the surface width was found in Ref. to grow with a power law $$wh^\beta \text{ with }\beta 0.45$$ (5) where $`\beta `$ is called the growth exponent . Typically, one expresses the scaling behaviour in terms of the elapsed time. However, in the context of this paper it is advantageous to use the mean height $`h`$ instead (as will become clear soon). The typical distance between the pyramids (the correlation length) was found to be proportional to $$\xi h^{1/z}\text{ and }z=\alpha /\beta 2.3$$ (6) in accordance to the occurrence of slope selection. More formally this means that the ratio of the typical length scales, $`w`$ and $`\xi `$, remains constant, and hence $`\alpha =1`$. The relatively high growth exponent of 0.45 reflects the fact that the coarsening process is SED–assisted . Due to the strong SED material is moved efficiently towards regions with high densities of kink sites, i.e. towards the contact points of pyramids or mounds. For lower values of $`\mathrm{}_{\text{SED}}`$ the coarsening process is purely noise assisted . Hence, the structures are merging more slowly. If the size of the pyramids exceeds $`\mathrm{}_{\text{SED}}`$ the pyramids loose their perfect shape. The structures become round, and step edges will be fringed . 
It is clear that, due to the coarsening, conventional MBE growth is bound to drive itself into this regime of fringed step edges. Now, we turn to the investigation of how this latter stage in MBE growth can be prevented. The main idea is that, in order to prevent the occurrence of rough step edges, one has to require that always $`\ell _{\text{SED}}\gtrsim \xi `$. In the following we demonstrate how to fulfill this condition by varying the flux $`F`$ of arriving particles. Equally well, one could adapt the growth temperature. However, for this a detailed knowledge of the activation energy of $`d`$ and of the temperature dependence of $`\ell _T`$ is necessary, which is often not available. Equating the expressions (6) and (4) we obtain the height dependence of the flux $$F(h)=ch^{-4/z}$$ (7) where $`c`$ is an appropriate constant. To reformulate this relation in terms of the time we use $`\text{d}h/\text{d}t=F`$ and solve the resulting differential equation, obtaining $$h(t)\propto t^{z/(4+z)}.$$ (8) Reinserting this result into (7) we find that the flux should be varied according to $$F(t)\propto t^{-4/(4+z)}\approx t^{-0.65}$$ (9) where we inserted $`z=2.3`$ according to SED–assisted coarsening . We applied this strategy to the growth of the SOS–model of Sec. 2 at 560 K. Clearly, SED is not a process which is explicitly considered in this model. Typically, atoms with only a single bond to a step edge will detach and diffuse on the terrace. However, the net result is the same: singly bonded atoms are moved to places with higher coordination (kink sites). To prevent the interference of reevaporation we suppressed this process . However, we checked that even with desorption the strategy is still applicable and useful. The flux was chosen as shown in fig. 5. We started with a constant flux of 2 ML/s. After the growth of twenty monolayers we adapted the flux with time $`t`$ according to $`F=F_0/(t/10\text{s})^{0.65}`$. In fig. 6 we compare the resulting evolution of the surface width $`w`$ with a simulation with a constant flux $`F=1`$ ML/s. With the new strategy we obtain a higher growth exponent of $`\beta \approx 1/2`$, compared to conventional growth with $`\beta \approx 1/3`$. These exponents agree well with $`\beta =0.45`$ for strong SED and $`\beta =0.33`$ for intermediate values of $`\ell _{\text{SED}}`$ . The result that $`w`$ grows fast and is described by a high growth exponent under these optimized growth conditions should not, however, be confused with the notion of a fast–roughening, self–affine surface. It just means that the structures merge fast and the mounds are getting high and wide. This can be seen directly in fig. 7. After the deposition of 300 ML under constant flux the structures are small, whereas under optimized growth conditions the resulting structures are larger. Note that, because of the higher initial flux of 2 ML/s, the island density was much higher in the beginning under optimized growth conditions. Nevertheless the SED–assisted coarsening leads to a considerably smaller number of mounds (approximately 10, which should be compared to 20 with the conventional growth). It is clear that in order to obtain larger structures in conventional MBE one just has to grow for longer times. The step edges will become smooth due to the equilibration after the growth has been stopped. However, during growth the step edges do not remain smooth as in our optimized growth mode. Hence, a larger probability for the creation of vacancies or other crystal faults will be present. 
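A minimal sketch of the time-dependent flux protocol quoted above (only the schedule and the resulting mean height are computed; the SOS simulation itself is not reproduced):

```python
import numpy as np

F0, t0 = 2.0, 10.0   # initial flux (ML/s); 20 ML are completed after 10 s

def flux(t):
    """Flux protocol used above: F0 for t <= 10 s, then F0 / (t / 10 s)**0.65."""
    return F0 / np.maximum(np.asarray(t, dtype=float) / t0, 1.0) ** 0.65

# integrating dh/dt = F(t) recovers h ~ t**0.35 at long times,
# i.e. h ~ t**(z/(4+z)) with z ~ 2.3, consistent with Eq. (8)
t = np.linspace(0.0, 1000.0, 200001)
f = flux(t)
h = np.concatenate(([0.0], np.cumsum(0.5 * (f[1:] + f[:-1]) * np.diff(t))))
print("mean height after 1000 s:", round(h[-1], 1), "ML")
```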
After growth stops, such crystal faults can probably be only partially eliminated in the non–optimized growth. We also mention that in the end other processes will of course become important too. In the limit $`t\rightarrow \infty `$ no net growth will be achieved in the optimized mode and the equilibration of the surface will become dominant. ## 5 Conclusion We have investigated the effect of the microscopic dynamics on experimentally accessible macroscopic quantities in MBE growth. Based on simulations of the solid–on–solid model we proposed two optimized growth strategies. Comparing the layer–by–layer growth with sublimation we understood how the desorption of single adatoms comes into play during growth. During growth, freely diffusing adatoms are created by the external flux. During sublimation, however, such adatoms must first be created, e.g. through detachment from steps. This difference manifests itself in the different contributions of the microscopic activation energies to the effective energies. The diffusion barrier $`E_B`$ increases the effective energy of sublimation, whereas it decreases the activation energy of the desorption rate during growth. Since the macroscopic desorption rate is influenced by the typical lifetime of single adatoms, we are able to intervene in the growth process. We showed that a flush–mode is able to prolong the layer–by–layer growth regime and to reduce the desorption rate: applying short pulses of particles we create a high density of islands; afterwards, the monolayer is completed with a low flux. At least for our particular simulations, desorption was crucial to obtain improved growth. Even though the flush–mode always induces strong oscillations, only in combination with desorption does it lead to improved growth. This can be explained by the height–selective behaviour of desorption, i.e. desorption occurs preferentially on top of islands as long as a positive Ehrlich–Schwoebel barrier is present. In experiments one should be able to produce such short flushes using a chopper, or pulsed laser deposition in conjunction with conventional MBE. The method should be very useful for growing planar coherent thin films, e.g. for application in quantum well structures. In addition, such experiments would make it possible to decide whether desorption occurs out of a physisorbed precursor, a question which has been debated in the literature . If so, one should not obtain an improved growth rate using the flush–mode. However, since layer–by–layer growth is unstable, the optimization of 3D–growth might be useful too. Based on recent results concerning the step edge diffusion we proposed to vary the flux of arriving particles in order to maintain smooth step edges already during growth. Reducing the flux according to $`F\propto t^{-0.65}`$ we were able to recover the high growth exponent of $`\beta \approx 0.45`$ measured for the simplified MBE model . The fast coarsening process (since it is SED assisted) yields structures which soon become very large compared to those of conventional MBE. Irrespective of the desired size of the structures, they can be produced under the same (strong SED) conditions, which is accomplished by the variation of $`F`$. Otherwise the MBE growth would drive itself into the regime where $`\ell _{\text{SED}}`$ is smaller than the typical extension of the structures. Thus, our method opens new possibilities for the controlled creation of these self–organized nanostructures by MBE. 
In addition, this strategy should reduce the probability for the creation of vacancies, since in conventional growth the rough edges would be overgrown later. However, this is speculative and cannot be verified within the framework of the solid–on–solid model. Typically, rather low fluxes are used in order to improve the quality of the grown structures. Our result suggests, however, that it is not disadvantageous to apply higher fluxes in the beginning. In the end, when the resulting structures are rather large, it becomes important to reduce the flux in order to adapt it to the smoothing range of the step edge diffusion. This work is supported by the Deutsche Forschungsgemeinschaft through Sonderforschungsbereich 410.
## 1 Introduction In this introductory talk I will address two questions. First, are ultraluminous galaxies of fundamental significance, or just interesting curiosities? Second, why do we care whether starbursts or monsters dominate the energetics of ultraluminous galaxies? I will argue that the answers to these questions are clear when addressed in a global, cosmological context. In what follows, I will define an ultraluminous galaxy to be a galaxy having a bolometric luminosity exceeding 10<sup>12</sup> L for H<sub>0</sub> = 70 km s<sup>-1</sup> Mpc<sup>-1</sup> and a spectral energy distribution that is dominated by rest-frame mid/far infrared emission. ## 2 Are Ultraluminous Galaxies Important? ### 2.1 Motivation We now have a rather complete energetic census of the ‘local’ (z ≲ 0.1) universe. Comparing the luminosity functions of far-IR selected galaxies, optically-selected galaxies, and quasars (cf. Soifer et al 1987) implies that ultraluminous IR-selected galaxies are of comparable energetic significance to quasars of similar bolometric luminosities, but are responsible for only of-order 1% of the far-IR emissivity (luminosity per unit co-moving volume element) and ≈0.3% of the total bolometric emissivity in the local universe. Thus, on simple energetic grounds it might indeed be possible to dismiss ultraluminous galaxies as merely intriguing oddities. The generic response to such a dismissal is to reply that although they are rare, ultraluminous galaxies are excellent local laboratories in which the physics and phenomenology of galaxy building can be studied in far greater detail than at high redshift. Some of the lessons that we have learned from the investigation of local ultraluminous galaxies have potentially wide-ranging implications. These include the role played by galactic mergers in making (some/all?) elliptical galaxies (cf. Schweizer 1997), the apparent efficacy with which such mergers can transport much of the ISM of the merging galaxies into the circumnuclear region (e.g. Mihos & Hernquist 1994), the subsequent triggering of circumnuclear star-formation at a rate approaching the maximum allowed by physical causality (viz. SFR ≈ M<sub>gas</sub>/t<sub>cross</sub> - Heckman 1993), and the resulting heating and metal-enrichment of the inter-galactic medium by galactic ‘superwinds’ that are driven by the collective effect of the millions of supernovae and stellar winds in the starburst (cf. Heckman et al 1996). On the other hand, given the current paradigm of the hierarchical assembly of galaxies in which (to quote Simon White) ‘Galaxy formation is a process rather than an event’, it is a fair and certainly germane question to ask whether typical galaxies ever go through an ultraluminous phase. More precisely: could the ultraluminous phenomenon be an integral part of the formation of galactic spheroids (ellipticals and bulges)? Is Arp 220 really a galactic ‘Rosetta Stone’ or just an ‘(infra)red herring’? The recent stunning advances in the observation of galaxies at high redshift mean that we can finally start to answer these questions directly. ### 2.2 The Lyman Break Galaxies as Ultraluminous Galaxies Dickinson (1998) has recently published an ultraviolet (1500 Å rest-frame) luminosity function for a large sample of galaxies at z ≈ 3 from the Hubble Deep Field and larger but shallower ground-based surveys, selected on the basis of their rest-frame UV spectral energy distributions (the ‘U drop-out’ or ‘Lyman Break’ galaxies). 
Ignoring any correction for dust extinction, this luminosity function would imply that galaxies with apparent UV luminosities exceeding 10<sup>12</sup> L are exceedingly rare in the early universe, with co-moving space densities of-order 10<sup>-4</sup> that of present-day normal Schechter L galaxies. Far be it from me to propose at a conference laden with infrared astronomers that we actually ought to ignore the effects of dust on these results! Indeed, the generation of techniques to correct the Lyman Break galaxies for the effects of extinction has evolved into a virtual cottage industry (e.g. Madau, Pozzetti, & Dickinson 1998; Meurer, Heckman, & Calzetti 1999; Pettini et al 1998; Sawicki & Yee 1998). Daniela Calzetti will give a report from the front lines on this issue later in this conference. To summarize, plausible values for the mean/typical UV extinction suffered by the Lyman Break galaxies range from 1 to 4 magnitudes. The UV color- magnitude relation for the Lyman Break galaxies in which the fainter galaxies are bluer (Dickinson, private communication) is reminscent of the strong dependence of extinction on luminosity seen in local starbursts (Heckman et al 1998). This implies that a correction of the observed UV luminosity function of the Lyman Break galaxies for extinction will change the shape of the function, and not merely its normalization in luminosity. This is clearly seen at low-redshift (Buat & Burgarella 1998). Meurer, Heckman, & Calzetti (1999) have made a rough attempt to correct the Lyman Break luminosity function at z $``$ 3 for the effects of luminosity-dependent extinction. Their results imply that galaxies with intrinsic UV luminosities of 10<sup>12</sup> L are actually rather common at z $``$ 3, with a co-moving space density that is of-order 10<sup>-1</sup> that of present-day Schechter L galaxies. The luminosity/extinction correlation means that the spectral energy distributions of the most luminous Lyman Break galaxies should then be dominated by the infrared, so they would meet my definition of ‘ultraluminous galaxies’. Thus, the co-moving space density of ultraluminous galaxies at z $``$3 would be similar to the space density of M82-level starbursts today. ### 2.3 The SCUBA Sources in Context ISO and especially SCUBA have opened a new window on the early universe and allowed us to make the first direct comparisons of the far-IR properties of the universe of today to the distant past. The presentation of these marvelous new results will constitute a major portion of this conference, so I will keep my remarks brief. While the distribution of the SCUBA sources in redshift is still a matter of on-going investigation (e.g. Lilly et al 1998; Trentham, Blain, & Goldader 1998; Smail et al 1998), it is clear that they constitute a major new population of objects that are energetically significant in a cosmological context. As described above, Meurer, Heckman, & Calzetti (1999) have used empirical methods for correcting the Lyman-Break population at z$``$ 3 for extinction. The extinction-corrected intrinsic UV luminosity function they derive implies that the most luminous Lyman-Break galaxies may overlap the SCUBA sub-mm population in luminosity and space-density. Heckman et al (1998) have shown that local starbursts obey quite strong relations between such fundamental parameters as luminosity, metallicity, extinction, and the mass of the galaxy hosting the starburst. 
More massive galaxies host more metal-rich starbursts, which are in turn more heavily extincted by dust. This probably reflects the well-known mass-metallicity relation for galaxies and the roughly linear dependence of the dust/gas ratio on metallicity. Moreover - and as noted above - the more luminous local starbursts are more heavily extincted by dust, and the UV color-magnitude relation for the Lyman Break galaxies suggests this may also be true at high-redshift. It therefore seems plausible that the SCUBA sources at high-z are the high-luminosity tail of the Lyman-Break population, and probably represent the most metal-rich (dustiest) starbursts occuring in the most massive halos. This idea is certainly consistent with a strong similarity between the high-z SCUBA sources and local ultraluminous galaxies. ## 3 Starbursts Versus Monsters: A Global Inventory Of course, the over-arching theme of this conference is the debate over the nature of the fundamental energy source in ultraluminous galaxies: starburst or monster? In the spirit of the rest of my talk, I’d like to consider the issue of the relative energetic significance of stars vs. monsters from a global perspective. Andy Lawrence developes many of the same themes in his contribution to this conference. First, we can conduct an inventory of the luminous energy present in the universe today. This represents the cumulative effect of the production of luminous energy over the history of the universe (primarily by stellar nuclear-burning and accretion onto supermassive black holes), diminished only by the $`(1+z)`$ stretching of the photons. This inventory is made possible by the recent ultra-deep near-UV-through-near-IR galaxy counts in the Hubble Deep Field (Pozzetti et al 1998) on the one hand, and the landmark detection by COBE of a far-IR/sub-mm cosmic background on the other (Puget et al 1996; Hauser et al 1998; Schlegel, Finkbeiner, & Davis 1998). The total present-day energy density contained in the cosmic IR background is $``$ 6 $`\times `$ 10<sup>-15</sup> erg cm<sup>-3</sup> (Fixsen et al 1998), which is comparable to the total energy density contained in the NUV-through-NIR light due to faint galaxies (Pozzetti et al 1998). The origin of the latter is clear: the light of these faint galaxies is overwhelmingly due to ordinary stars (nuclear fusion). However, the origin of the cosmic IR background is not so clear. As I will outline below, simple ‘from-first-principles’ arguments imply that this luminous energy may have been generated predominantly by either stars or monsters that were deeply shrouded in dust. One obvious way to evaluate whether stellar nucleosynthesis could have been responsible for producing the energy contained in the cosmic IR background is to take an inventory of the byproducts of nuclear burning in the local universe. The recent compilation assembled by Fukugita, Hogan, & Peebles (1998) implies that the baryonic content of galaxies, the intracluster medium, and the general inter-galactic medium is $`\mathrm{\Omega }_B`$ $``$ 4.3 $`\times `$ 10<sup>-3</sup>, 2.6 $`\times `$ 10<sup>-3</sup>, and 1.4 $`\times `$ 10<sup>-2</sup> respectively. 
If we adopt a mean metallicity of 1.0, 0.4, and 0.0 Z for these respective baryonic repositories and use the estimate due to Madau et al (1996) that each gram of metals produced corresponds to the generation of 2.2 $`\times `$ 10<sup>19</sup> ergs of luminous energy, the implied co-moving density of energy produced by nuclear burning is then 2 $`\times `$ 10<sup>-14</sup> erg cm<sup>-3</sup>. If we instead assume that the ratio of metals inside galaxies to those outside galaxies is the same everywhere as it is clusters of galaxies (cf. Renzini 1997), then the total mass of metals today is about twice as large as the above estimate, as is the associated luminous energy. To compare these values to the cosmic IR background, we need to know the mean energy-weighted redshift at which the photons in the IR background originated. Taking $`<z>`$ = 1.5, the resulting observable energy density in the present universe would be in the range 8 to 16 $`\times `$ 10<sup>-15</sup> erg cm<sup>-3</sup>. This is comparable to the sum of the energy contained in the IR plus the NUV-through-NIR backgrounds. Thus, there is no fundamental energetics problem with a stellar origin for the cosmic IR background. What about dusty quasars? At first sight, this does not appear to be a plausible source for the bulk of the cosmic IR background. The cumulative emission from the known population of quasars - selected by optical, radio, or X-ray techniques - has resulted in a bolometric energy density today of about 3$`\times `$ 10<sup>-16</sup> erg cm<sup>-3</sup> (cf. Chokshi & Turner 1992), only about 5% of the cosmic IR background. But, what if there exists a substantial population of objects at high-redshift that are powered by accretion onto supermassive black holes, but which are so thoroughly buried in dust that they radiate primarily in the IR, and have thus far been missed in quasar surveys? That is, could the cosmic IR background be produced by a population of monster-powered ultraluminous galaxies at high redshift? Might the SCUBA sources be our first glimpse of this population? Could this same population of dust-enshrouded AGN be responsible for the bulk of the cosmic hard X-ray background (as Fabian et al 1998 have argued)? One way to assess whether accretion onto supermassive black holes is an energetically feasible source for the observed cosmic IR background is to examine the fossil record in nearby galaxies. The generation of the cosmic IR background by the accretion of matter onto supermassive black holes necessarily implies that the centers of galaxies today will contain the direct evidence for this accretion. Is there enough mass in the form of supermassive black holes in galaxies today to have produced the IR background? Recent dynamical surveys of the nuclei of nearby galaxies strongly suggest that supermassive black holes are common or even ubiquitous, with a mass that is $``$ 0.5% of the stellar mass of the spheroid (bulge or elliptical) within which the black hole resides (Magorrian et al 1998; Richstone et al 1998). The corresponding ratio of black hole mass to spheroid blue luminosity in solar units is roughly 0.045 for a Schecter L elliptical. Fukugita, Hogan, & Peebles (1998) estimate that the present-day blue luminosity density associated with spheroids is 4.6 $`\times `$ 10<sup>7</sup> L Mpc<sup>-3</sup>, so the implied mean density in the form of supermassive black holes is $``$ 2 $`\times `$ 10<sup>6</sup> M Mpc<sup>-3</sup>. 
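As a quick arithmetic check of the number just quoted:

```python
# black-hole mass budget implied by the ratios cited above
m_bh_over_l_b = 0.045   # M_sun per L_sun for a Schechter L* elliptical
l_b_density = 4.6e7     # spheroid blue luminosity density, L_sun / Mpc^3
print(f"rho_BH ~ {m_bh_over_l_b * l_b_density:.1e} M_sun / Mpc^3")   # ~2e6
```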
If we assume that accretion onto a supermassive black hole releases luminous energy with an efficiency $`ϵ`$ = 10% ($`E=ϵMc^2`$), the present-day black hole mass density implies a total production of 1.2 $`\times `$ 10<sup>-14</sup> erg cm<sup>-3</sup> in co-moving coordinates. If the energy-weighted mean redshift at which this was emitted is $`z`$ 2, the present-day luminous energy density is then 4$`\times `$ 10<sup>-15</sup> erg cm<sup>-3</sup>. This is roughly an order-of-magnitude larger than the luminous energy produced by the known quasar population, but matches the energy contained in the cosmic IR background rather well. There are therefore three possible interpretations of this. First, we may have substantially over-estimated the mass of black holes in the nuclei of galaxies today. A recent analysis by van der Marel (1999) yields an mean ratio of black-hole-mass to spheroid luminosity that is a factor of 2 to 3 smaller than the Magorrian et al value. Second, the formation of a supermassive black hole may occur with a mean efficiency for the production of radiant energy that is small (e.g. 1% rather than 10%). Perhaps the quasar phase corresponds to high efficiency and produces most of the radiant energy, but most of the accretion and black hole growth produces very little radiation (e.g. Narayan 1997). Third, maybe the cosmic IR background does have a substantial contribution from dust-enshrouded ‘monsters’. If true, this would imply that over the history of the universe, monsters have produced as much luminous energy as stars! ## 4 Summary When examined from a global, cosmological perspective, the answers to the two questions I posed in the Introduction seem clear: 1. Are ultraluminous galaxies of fundamental significance, or just interesting curiousities? Ultraluminous galaxies are spectacular and fascinating in their own right. They are unique local laboratories that allow the detailed investigation of the physical processes by which galaxies were built and by which the intergalactic medium was heated and chemically-enriched. The most luminous members of the Lyman Break galaxy population at high-redshift are almost certainly ultraluminous systems dominated by far-IR emission and the SCUBA sources at high-z (probably the most metal-rich, dustiest starbursts occuring in the most massive halos) resemble local ultraluminous galaxies. Thus, dusty ultraluminous galaxies have been responsible for a significant fraction of the high-mass star- formation and associated metal production at early times. 2. Why do we care whether starbursts or monsters dominate the energetics of ultraluminous galaxies? We now know that the cosmic IR background contains as much energy as the integrated UV, visible, and NIR light from all the galaxies in the universe. Recent inventories of the by-products of both nuclear burning (metals and post-big-bang He) and of black hole accretion (dark compact objects in galactic nuclei) in the present-day universe imply that either a population of dusty star-forming galaxies or of dust-enshrouded monsters could have readily produced the IR background. Thus, on a global scale, the ‘starburst vs. monster’ debate is of central importance. It is possible that - integrated over cosmic time - accretion onto supermassive black holes has produced as much total radiant energy as nuclear burning in stars. Future multi-wavelength observations of the sources detected by ISO, SCUBA, and SIRTF will go a long ways towards settling this crucial issue. 
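The two inventories of Section 3 can be re-evaluated in a few lines. H<sub>0</sub> = 70 km s<sup>-1</sup> Mpc<sup>-1</sup> is the value adopted in the Introduction; the solar metal mass fraction Z ≈ 0.02 is an additional assumption needed to convert the quoted metallicities into a metal density:

```python
import math

Mpc_cm, Msun_g, c_cms = 3.086e24, 1.989e33, 2.998e10
H0 = 70 * 1.0e5 / Mpc_cm                       # s^-1
G = 6.674e-8                                   # cgs
rho_crit = 3 * H0**2 / (8 * math.pi * G)       # ~9e-30 g cm^-3

# (1) nuclear burning: Omega_B and mean metallicities of galaxies, ICM, IGM
omega_b = [4.3e-3, 2.6e-3, 1.4e-2]
Z = [1.0, 0.4, 0.0]                            # in solar units
Zsun = 0.02                                    # assumed solar metal mass fraction
metals = sum(ob * z * Zsun for ob, z in zip(omega_b, Z)) * rho_crit
E_fusion = 2.2e19 * metals                     # erg cm^-3, co-moving
print(f"fusion:    {E_fusion:.1e} erg/cm^3, observed ~{E_fusion/2.5:.1e} for <z>=1.5")

# (2) accretion: present-day black-hole density ~2e6 M_sun/Mpc^3, efficiency 10%
rho_bh = 2e6 * Msun_g / Mpc_cm**3              # g cm^-3
E_accretion = 0.1 * rho_bh * c_cms**2          # erg cm^-3, co-moving
print(f"accretion: {E_accretion:.1e} erg/cm^3, observed ~{E_accretion/3:.1e} for <z>=2")
```

Both numbers come out close to the values quoted in the text (about 2 × 10<sup>-14</sup> and 1.2 × 10<sup>-14</sup> erg cm<sup>-3</sup> in co-moving coordinates), to be compared with the ≈ 6 × 10<sup>-15</sup> erg cm<sup>-3</sup> of the cosmic IR background.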
Acknowledgments I would like to thank the Local Organizing Committee (particularly Reinhardt Genzel, Dieter Lutz, and Linda Tacconi) and the staff of the Ringberg Castle for making this meeting stimulating, enjoyable, hassle-free, and so civilized! I also congratulate Bob Joseph and Dave Sanders for their spirited debate (although they're both lousy soccer players). This work was supported in part by NASA LTSA grant NAGW-3138.
# References EFI-99-07 hep-ph/9903219 March 1999 PARTICLES IN LOOPS – FROM ELECTRONS TO TOP QUARKS <sup>1</sup><sup>1</sup>1Dedicated to the memory of Professor Hiroshi Suura. Based on a colloquium in his honor at the University of Minnesota, 1 June 1994, updated to February 1999. Jonathan L. Rosner Enrico Fermi Institute and Department of Physics University of Chicago, Chicago, IL 60637 ABSTRACT > This article, in memory of Professor Hiroshi Suura, is devoted to the effects of particles in loops, ranging from quantum electrodynamics to precise tests of the electroweak theory and CP violation. INTRODUCTION I owe an enormous debt to Hiroshi Suura. It was partly work on the subject of this article that led him to bring me to the University of Minnesota, where I spent 13 pleasant years. He was an early collaborator , teaching me the value of clear thinking and careful statements. Throughout the years, he was a constant source of good ideas, sound judgement, and friendly advice. He was responsible for the contacts that led to my first visit to Japan in 1973, during which the generous hospitality my family and I received led us to return time and again to a country for which we have great love and admiration. During one such visit in 1981 I was privileged to meet Hiroshi’s sister and brother-and-law on the occasion of a Japanese Physical Society meeting in Hiroshima at the end of March. I am thus especially honored to be able to pay tribute to Hiroshi’s memory for a similar meeting eighteen years later. I miss him greatly. I shall not discuss Hiroshi’s important contributions to the theory of infrared corrections . This work has been central to a wide variety of experiments in elementary particle physics, particularly those involving electrons. Many of the precise measurements I shall describe could not have been done without it. However, another theme running through Hiroshi’s work and connecting it to the major issues of today’s particle physics is the idea of “particles in loops.” One of his most-quoted results concerns the effect of electron loops in the calculation of the muon’s anomalous magnetic moment $`a_\mu `$ . This leads to a difference between $`a_\mu `$ and the corresponding quantity $`a_e`$ for the electron, which was confirmed in beautiful experiments at CERN and is still the subject of intense scrutiny . Hiroshi once admitted his reluctance to be known for a calculation which took him such a short time. But his key contribution was not only in performing the calculation, but in being able to do so and in knowing what calculation to perform. The effects of “particles in loops” indeed permeate almost all of today’s high energy physics. They have allowed us to make fundamental discoveries about the properties of quarks, to anticipate the charmed quark’s existence and the top quark’s mass, and to understand, at least in part, the violation of CP symmetry. This article briefly reviews those effects. For more technical details (some of which will be updated here) see, e.g., Ref. . Many of the historical references are taken from . In Section II we discuss vacuum polarization and radiative corrections. Section III is devoted to specific effects of quarks and leptons in loops. We review electroweak unification in Section IV, and CP violation in Section V. Some speculations on composite Higgs bosons and composite fermions occupy Sections VI and VII, respectively. Section VIII summarizes. II. VACUUM POLARIZATION AND RADIATIVE CORRECTIONS A. 
Vacuum polarization The large positive charge of a nucleus “polarizes the vacuum.” Virtual electrons are attracted to the nucleus, while virtual positrons are repelled. A test charge at large distances sees the nucleus screened by the electrons, while at short distances it penetrates the screening cloud and sees a larger charge. In quantum electrodynamics this may be thought of as the effect of an electron-positron “loop” in the photon propagator. A direct calculation of this effect finds it to be infinite! However , one can circumvent this difficulty by comparing ratios of effective charges at two different distance scales. Defining the fine-structure constant $`\alpha \equiv e^2/4\pi \hbar c`$ in terms of the charge $`e`$, and momentum scales $`\mu _i=\hbar /r_i`$ in terms of distance scales $`r_i(i=1,2)`$, the lowest-order result is $$\alpha (\mu _1)=\frac{\alpha (\mu _2)}{1-(\alpha /3\pi )\mathrm{ln}(\mu _1^2/\mu _2^2)}$$ (1) and hence $`\alpha (\mu _1)>\alpha (\mu _2)`$ for $`\mu _1>\mu _2`$. The electromagnetic interaction thus becomes stronger at higher momentum scales (shorter distance scales). For an electron bound in hydrogen , vacuum polarization leads to a stronger attraction in a $`2S`$ state than in a $`2P`$ state, leading to a splitting between the levels of $`\mathrm{\Delta }E(2S-2P)=-27`$ MHz. B. The Lamb shift The experimental value of the $`2S-2P`$ splitting in hydrogen was first measured by W. Lamb in 1947 . In addition to the vacuum polarization effect mentioned above, there is a much more substantial shift in the other direction, in which an electron emits and reabsorbs a virtual photon while interacting with the nucleus . The most recent experimental values for the splitting are $`1057.8514\pm 0.0019`$ MHz , $`1057.845\pm 0.009`$ MHz , and $`1057.839\pm 0.012`$ MHz , to be compared with the theoretical calculation of $`1057.838\pm 0.006`$ MHz. C. The electron $`g`$-factor The process in which an electron emits and reabsorbs a virtual photon while interacting with an external field also alters its magnetic moment $`\mu _e`$, expressed in terms of its spin $`\stackrel{\rightarrow }{S_e}`$ via a quantity $`g`$: $`\stackrel{\rightarrow }{\mu _e}=\stackrel{\rightarrow }{S_e}ge/(2m_ec)`$. In the Dirac theory of the electron, $`g=2`$. The correction to this result is $$\frac{g-2}{2}|_e=\frac{\alpha }{2\pi }-0.328476965\left(\frac{\alpha }{\pi }\right)^2+\mathrm{}=(1159652140\pm 27)\times 10^{-12}.$$ (2) The lowest-order term is due to Schwinger ; the corrections have been calculated up to $`𝒪(\alpha ^5)`$. The error is due mainly to uncertainty in $`\alpha `$. The latest experimental result is $$\frac{g-2}{2}|_e=(1159652188\pm 3)\times 10^{-12}.$$ (3) The agreement with theory, and that of the Lamb shift mentioned earlier, are examples of the successful application of quantum field theory to electrodynamics. D. The muon $`g`$-factor The second term on the right-hand side of Eq. (2), $`-0.328\mathrm{}(\alpha /\pi )^2`$, contains a contribution from the process in which an electron emits and reabsorbs a virtual photon while interacting with the external field, and this virtual photon itself is subject to the vacuum polarization effect (1). The virtual photon thus can be affected by any charged particle-antiparticle pair in a loop. The pair providing the major contribution to the electron $`g`$-factor is an electron-positron pair; other heavier particles contribute, but not significantly to this order. For the muon $`g`$-factor, the situation is different. Here, both the $`e`$-loop and the $`\mu `$-loop are important. 
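Before turning to the muon case in detail, the numbers quoted above are easy to verify to the order shown. A short sketch (only the two terms displayed in Eq. (2) and the single electron loop of Eq. (1) are evaluated):

```python
import math

alpha = 1 / 137.035999   # low-energy fine-structure constant (assumed value)

# first two terms of the series in Eq. (2) for the electron anomaly
a_e = alpha / (2 * math.pi) - 0.328476965 * (alpha / math.pi) ** 2
print(f"a_e through O(alpha^2): {a_e:.10f}")
# ~0.0011596380; the remaining ~1.4e-8 comes from the higher-order terms

# one-loop running of alpha, Eq. (1), with an electron loop only
def alpha_running(mu1_sq, mu2_sq, alpha_at_mu2):
    return alpha_at_mu2 / (1 - (alpha_at_mu2 / (3 * math.pi)) * math.log(mu1_sq / mu2_sq))

m_e, M_Z = 0.511e-3, 91.19   # GeV
print("1/alpha(M_Z), e-loop only:", round(1 / alpha_running(M_Z**2, m_e**2, alpha), 1))
# ~134.5; including all charged fermions brings this to the ~128.9 quoted later
```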
The major effect of the $`e`$-loop can be regarded as an effective modification of the leading-order $`\alpha /2\pi `$ correction: $$\frac{\alpha }{2\pi }\rightarrow \frac{\alpha }{2\pi }\left[1-\frac{\alpha }{3\pi }\left(\mathrm{ln}\frac{m_\mu ^2}{m_e^2}-\mathrm{const}\right)\right]^{-1}$$ (4) as dictated by the correction (1). The fine-structure constant “runs” as a function of distance (i.e., momentum) scale. The theoretical expression for the muon $`g`$-factor thus differs from that for the electron at second order in $`\alpha `$ . This observation of Hiroshi’s was a shining example of how particles in loops other than those under direct study can affect measurable physics. We shall see a number of more recent applications of this idea in subsequent sections. The present theoretical expression for the muon $`g`$-factor is $$\frac{g-2}{2}|_\mu =\frac{\alpha }{2\pi }+0.765857388(44)\left(\frac{\alpha }{\pi }\right)^2+\mathrm{}=(11659159.6\pm 6.7)\times 10^{-10},$$ (5) to be compared with the experimental value $`(11659230\pm 85)\times 10^{-10}`$. At this level of accuracy one must consider the effects of not only electrons and muons in loops, but also quarks. A new experiment at Brookhaven National Laboratory seeks to probe $`a_\mu `$ 20 times more precisely, reaching enough sensitivity to probe even the effect of weakly interacting particles in loops . III. QUARKS AND LEPTONS IN LOOPS A. Neutral pion decay The decay of the neutral pion $`\pi ^0`$ is governed by a triangle “anomaly” diagram . The $`\pi ^0`$ dissociates into a quark-antiquark pair which then annihilates into two photons. The process thus counts the number of quarks $`q`$ traveling around the loop, weighted by the product of their coupling to the $`\pi ^0`$ and the square of their charges $`Q(q)`$. Since the $`\pi ^0`$ is represented in the quark model as $`(u\overline{u}-d\overline{d})/\sqrt{2}`$, the amplitude for $`\pi ^0\rightarrow \gamma \gamma `$ thus measures $`S=\mathrm{\Sigma }[Q(u)^2-Q(d)^2]`$, where the sum is taken over the number of quark species (“colors”, if one wishes). For 3 colors of fractionally charged (“Gell-Mann–Zweig”) quarks, $`S=3[(2/3)^2-(1/3)^2]=1`$. An alternative quark model involves integrally charged (“Han–Nambu”) quarks : Two colors of $`u`$ quark have $`Q(u_{1,2})=1`$ while one color has $`Q(u_3)=0`$; two colors of $`d`$ quarks have $`Q(d_{1,2})=0`$ while one color has $`Q(d_3)=-1`$. The amplitude for $`\pi ^0\rightarrow \gamma \gamma `$ turns out to be the same . Hiroshi was intrigued with this possibility , and we had many interesting discussions on the subject. It is interesting that quarks at high density may undergo a color-flavor “locking” which converts them from the Gell-Mann–Zweig to the Han–Nambu variety . It is one of many results in the past year on which I would have enjoyed hearing Hiroshi’s opinion. B. Triangle anomalies and fermion families The triangle anomaly’s contribution to trilinear gauge boson couplings is undesirable in unified theories of the weak and electromagnetic interactions. In order that it vanish, the sum of $`I_{3L}Q^2`$ over all fermions must equal zero. Here $`I_{3L}`$ is “left-handed isospin,” equal to 1/2 for left-handed $`u`$ quarks and neutrinos, $`-1/2`$ for left-handed $`d`$ quarks and charged leptons $`\mathrm{}^{-}`$, and zero for all left-handed antiparticles. This sum vanishes for quarks and leptons within a single “family,” with respective contributions of 2/3, $`-1/6`$, 0, and $`-1/2`$ from, e.g., $`u,d,\nu _e,e^{-}`$. 
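Both anomaly statements above reduce to a few lines of exact arithmetic:

```python
from fractions import Fraction as F

# pi0 -> gamma gamma: S = sum over colors of [Q(u)^2 - Q(d)^2]
def s_pi0(q_u, q_d):
    return sum(qu**2 - qd**2 for qu, qd in zip(q_u, q_d))

gmz = s_pi0([F(2, 3)] * 3, [F(-1, 3)] * 3)   # fractional (Gell-Mann--Zweig) charges
hn = s_pi0([1, 1, 0], [0, 0, -1])            # integral (Han--Nambu) charges
print("pi0 amplitude factor:", gmz, hn)      # both give 1

# per-family anomaly: sum of I_3L * Q^2 over left-handed fermions of one family
family = (
    3 * F(1, 2) * F(2, 3) ** 2       # three colors of u_L:  +2/3
    + 3 * F(-1, 2) * F(-1, 3) ** 2   # three colors of d_L:  -1/6
    + F(1, 2) * 0 ** 2               # nu_L:                  0
    + F(-1, 2) * (-1) ** 2           # e_L:                  -1/2
)
print("sum of I_3L Q^2 per family:", family)  # 0
```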
The need for the charmed quark in the second family $`c,s,\nu _\mu ,\mu ^{}`$ was in part argued on the basis of anomaly cancellation. Definitive evidence for the charmed quark was presented within two years , in the form of a $`c\overline{c}`$ bound state, the $`J/\psi `$ particle. (There had already been indications of charm in cosmic ray events , which were taken very seriously in Japan .) The anomaly cancellation confirmed by the charmed quark’s discovery was short-lived. A third lepton $`\tau `$ was announced within the year . A third pair of quarks $`t,b`$ (proposed earlier to explain CP violation; see Sec. V) was then required to restore the cancellation . The $`b`$ was discovered in 1977 and the $`t`$ in 1994 , both at Fermilab. The high mass of the top quark, $`m_t=174\pm 5`$ GeV/$`c^2`$ , makes it a particularly important player in many loop diagrams, in ways which we now describe. IV. ELECTROWEAK UNIFICATION A. The SU(2) $`\times `$ U(1) gauge theory Fifty years ago it was popular to talk of the “four forces of Nature”: gravity, electromagnetism, the weak force, and the strong force. We sometimes forget that Newton’s theory of gravity itself was a unfication of terrestrial and celestial phenomena, while Maxwell’s theory of electromagnetism, building upon Faraday’s experiments, unified previously distinct electrostatic and magnetic results. During Hiroshi’s career we have seen the successful unification of the weak and electromagnetic interactions . In analogy with the view of electromagnetism as arising from photon exchange, we now view the weak interactions (those responsible, for example, for nuclear beta-decay) as arising from the exchange of charged, massive $`W`$ bosons. The unified theory allows self-consistent calculations of weak processes at high energies and to higher orders of perturbation theory. The prices to pay are that (1) the $`W^\pm `$ must exist (it was discovered in 1983 ), and (2) the simplest version also requires a massive neutral boson, the $`Z^0`$ (also discovered in 1983 ). The exchange of a $`Z`$ leads to new weak charge-preserving interactions, first seen in 1973 . The new theory has the symmetry SU(2) $`\times `$ U(1), broken to U(1) of electromagnetism by the mechanism which gives the $`W^\pm `$ and $`Z^0`$ bosons their masses while leaving the photon massless. The neutral SU(2) boson, $`W^0`$, and the U(1) boson, $`B^0`$, mix with an angle $`\theta `$ to give the massless photon and the massive $`Z^0`$. Since quarks of the same charge can mix with one another, the charge-changing transitions involving $`W`$ emission and absorption connect all quarks of charge 2/3 with all quarks of charge $`1/3`$ through a unitary matrix $`V`$, the Cabibbo-Kobayashi-Maskawa (CKM) matrix. As a result of the unitarity of $`V`$, the couplings of $`Z^0`$ remain diagonal in quark “flavor” even after mixing. The only corrections to the flavor-diagonal nature of neutral weak processes come at higher orders of perturbation theory, through particles in loops. B. Main electroweak corrections A major source of corrections to the electroweak theory, which can now be probed as a result of the precision of varied experiments, is the effect of particles in loops in the photon, $`W`$, and $`Z`$ propagators. All charged fermions can contribute in pairs to the photon charge renormalization (the effect of Eq. (1) and its higher-order generalizations). 
Whereas at long distances the fine structure constant $`\alpha `$ is approximately 1/137.036, when probed at the scale of the $`Z^0`$ mass it is $`\alpha (M_Z)1/128.9`$. This simple correction substantially improves the predictions of the unified theory for the $`W`$ and $`Z`$ masses, given the value of the electroweak mixing parameter $`\mathrm{sin}^2\theta =0.23156\pm 0.00019`$ measured in a wide variety of neutral-current processes . The $`W`$ and $`Z`$ propagators receive large contributions from loops involving the third quark family as a result of the large top quark mass. The prediction of the lowest-order electroweak theory, $`M_W/M_Z=\mathrm{cos}\theta `$, is modified to $$\frac{M_W^2}{M_Z^2}=\rho \mathrm{cos}^2\theta ,\rho 1+\frac{3G_Fm_t^2}{8\pi ^2\sqrt{2}}.$$ (6) Here $`G_F=1.16639\times 10^5`$ GeV<sup>-2</sup> is the Fermi coupling constant, and $`m_t=174\pm 5`$ GeV/$`c^2`$ is the top quark mass. The parameter $`\rho `$ is then about a percent, and multiplies the amplitude of every weak neutral-current process. Consequently, each of these processes probes $`m_t`$, so it was possible to anticipate its value (modulo effects of the Higgs boson, which we discuss next) before it was measured directly. C. The Higgs boson and its effects A consequence of endowing the $`W`$ bosons with mass is that the elastic scattering of longitudinally polarized $`W^+W^{}`$ does not have acceptable high-energy behavior. It would violate the unitarity of the $`S`$-matrix (i.e., would violate probability conservation) at high energies unless a spinless neutral boson (the “Higgs boson”) exists below a mass of $`M_H1`$ TeV/$`c^2`$ . The discovery of such a boson is a prime motivation for multi-TeV hadron colliders such as the Large Hadron Collider (LHC) now under construction at CERN. Searches in $`e^+e^{}`$ collisions at LEP find no evidence for the Higgs boson below nearly 100 GeV/$`c^2`$ , but precision electroweak experiments seem to favor a Higgs mass near this lower limit. Virtual Higgs boson can contribute to loops in the $`W`$ and $`Z`$ propagators, thus affecting not only $`\rho `$ but a parameter $`S`$ which expresses the difference between electroweak results at low momentum transfers and those probed at the higher momentum scale of $`Z^0`$ decays. One can calculate all electroweak observables for nominal values of $`m_t`$ and $`M_H`$ (say, 175 and 300 GeV$`/c^2`$, respectively) and then ask how they deviate from those nominal values, thereby specifying constraints on the parameters $`\rho `$ and $`S`$. Given the observed value of $`M_Z`$, one obtains a nominal value of $`\mathrm{sin}^2\theta =0.2321x_0`$. It is conventional to define $`\mathrm{\Delta }\rho =\alpha T`$, and one then finds $$T\frac{3}{16\pi x_0}\left[\frac{m_t^2(175\mathrm{GeV})^2}{M_W^2}\right]\frac{3}{8\pi (1x_0)}\mathrm{ln}\frac{M_H}{300\mathrm{GeV}}.$$ (7) Note the quadratic dependence on $`m_t`$, but only logarithmic dependence on $`M_H`$. That is why electroweak observables were able to predict a top quark mass (with some uncertainty) despite the absence of information about the Higgs boson mass. The “$`S`$” parameter is logarithmic in both $`m_t`$ and $`M_H`$. As in the case of $`T`$, it can be defined to be zero for nominal values of $`m_t`$ and $`M_H`$, so that deviations of $`S`$ from zero are indicative of new physics. Fits to a wide variety of electroweak parameters are performed periodically as these data become more and more precise. 
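As a quick numerical illustration of the size of the dominant correction in Eq. (6):

```python
import math

G_F = 1.16639e-5   # GeV^-2, value quoted above
m_t = 174.0        # GeV

delta_rho = 3 * G_F * m_t**2 / (8 * math.pi**2 * math.sqrt(2))
print(f"rho - 1 = {delta_rho:.4f}")   # ~0.0095, the 'about a percent' quoted above
```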
Such data include the ratio of charge-preserving to charge-changing deep inelastic cross sections for neutrinos on matter, the $`W`$ mass (measured at LEP and Fermilab), and a host of properties of the $`Z`$ boson, such as its mass, width, branching ratios, and decay asymmetries (measured at LEP and the Stanford Linear Collider). Since the Higgs boson appears in loops, and the top quark mass is fairly well pinned down, such fits can constrain the (logarithm of the) Higgs boson mass. A recent fit finds $`M_H=84_{-51}^{+91}`$ GeV/$`c^2`$ , or $`M_H<280`$ GeV at 95% confidence level. Of course, much of this range is already ruled out by the direct searches mentioned earlier. D. Effects of other new particles; atomic parity violation The $`S`$ and $`T`$ parameters respond differently to new particles. The $`T`$ parameter is affected by the presence of pairs of left-handed fermions with charges differing by one unit (such as $`t`$ and $`b`$) whose masses also differ from one another (as in the case of $`t`$ and $`b`$). However, it is not affected by new degenerate pairs. The $`S`$ parameter, on the other hand, is affected. It is a good probe of new particles in loops, even if these particles hide their contributions to $`T`$ by being degenerate in mass, and no matter how heavy these particles may be . One probe of $`S`$ is almost insensitive to $`T`$ . Atomic transitions can violate mirror symmetry (parity) as a result of the interference of photon and $`Z`$ exchange. The coherent coupling of the $`Z`$ to a nucleus is expressed in terms of the weak charge, given approximately as $`Q_W\approx -\rho (N-Z+4Z\mathrm{sin}^2\theta )`$, where $`N`$ and $`Z`$ are the number of neutrons and protons in the nucleus. Very recently, a new result in atomic cesium has been presented : $`Q_W=-72.06\pm 0.28\pm 0.35`$, where the first error is experimental and the second is theoretical. This result is $`2.5\sigma `$ from the theoretical prediction of $`Q_W=-73.20\pm 0.13`$ . However, the deviation is opposite in sign from that caused by the most naive addition of particles in loops! This result bears watching. The experiment has been pushed about as far as it can go, so it is now incumbent upon the theorists to check their calculations (and the refinements of them in Ref. that reduced the theoretical error so dramatically from previous values). V. CP VIOLATION A. The neutral kaon system The neutral kaon $`K^0`$ and its antiparticle $`\overline{K}^0`$ are an example of a degenerate two-state system, with the degeneracy lifted by coupling to final states. So, too, are the two equal-frequency modes of a circular drum with a single nodal line along the diameter. Any basis may be chosen in which the nodal line for one mode is perpendicular to that for the other. For example, let the $`K^0`$ correspond to the mode with the node at 45 degrees with respect to the $`x`$-axis; then the $`\overline{K}^0`$ will correspond to the orthogonal mode. Now a fly lands on the drum-head somewhere on the $`x`$ axis. The two degenerate states will be mixed and split in such a way that the fly couples to one mode (with the node perpendicular to the $`x`$-axis) and not the other (with the node along the $`x`$-axis). 
The fly is like the $`\pi \pi `$ final state, and the eigenstates are $$K_1=\frac{K^0+\overline{K}^0}{\sqrt{2}}(\pi \pi ),K_2=\frac{K^0\overline{K}^0}{\sqrt{2}}(\overline{)}\pi \pi ).$$ (8) Since the $`\pi \pi `$ system in the decay of the spinless kaons has even $`CP`$, where $`C`$ is charge-reversal and $`P`$ is parity, or space inversion, the states with definite mass and lifetime in the limit of CP conservation are $`K_1`$ and $`K_2`$. The $`K_1`$ is thus much shorter-lived than the $`K_2`$, which has to decay in some other manner than $`\pi \pi `$ . In 1964, Christenson, Cronin, Fitch, and Turlay discovered that the long-lived kaon also decays to $`\pi \pi `$. One could thus represent the states of definite mass and lifetime as $$K_SK_1+ϵK_2,K_LK_2+ϵK_1.$$ (9) The parameter $`ϵ`$ has a magnitude of a bit over $`2\times 10^3`$ and a phase of about $`\pi /4`$. Where does it come from? One possibility was suggested right after the discovery of CP violation: A new “superweak” interaction directly mixes $`K^0`$ and $`\overline{K}^0`$, with a phase which leads to CP violation. However, Kobayashi and Maskawa proposed that phases in the weak couplings of quarks to $`W`$ bosons generate $`ϵ`$ through loop graphs. Three quark families are needed for non-trivial phases. The Kobayashi-Maskawa proposal thus entailed the existence of the top and bottom quarks, later discovered at Fermilab. The loop graphs in question are ones in which, for example, a $`K^0=d\overline{s}`$ undergoes a virtual transition via $`W`$ exchange to a pair $`q_i\overline{q}_j`$, where $`q_i`$ and $`q_j`$ are any charge-2/3 quark: $`u,c,t`$. The $`q_i\overline{q}_j`$ pair can then exchange a $`W`$ of the opposite charge to become $`\overline{K}^0=s\overline{d}`$. The top quark provides the dominant contribution to this process because of its large mass. The Kobayashi-Maskawa (KM) theory of CP violation has recently survived two key tests, the most recent of which seems to have firmly buried the superweak theory. These are results which Hiroshi would have enjoyed. B. CP violation in $`B`$ meson decays The first new result concerns CP violation in the system of neutral $`B`$ mesons, predicted to be large in the KM theory. The same loop diagrams which mix neutral kaons also mix $`B^0=d\overline{b}`$ and $`\overline{B}^0=b\overline{d}`$. The phase of the mixing amplitude is predicted within rather narrow limits by fits to various weak-decay and mixing data. The best sign of CP violation in the $`B`$ meson system was anticipated to be the following asymmetry in rates: $$A(J/\psi K_S)\frac{\mathrm{\Gamma }(\overline{B}^0|_{t=0}J/\psi K_S)\mathrm{\Gamma }(B^0|_{t=0}J/\psi K_S)}{\mathrm{\Gamma }(\overline{B}^0|_{t=0}J/\psi K_S)+\mathrm{\Gamma }(B^0|_{t=0}J/\psi K_S)}0.$$ (10) Here the subscript indicates that the flavor of the neutral $`B`$ is identified at the time of its production; it oscillates between $`B^0`$ and $`\overline{B}^0`$ thereafter as a result of $`B^0`$$`\overline{B}^0`$ mixing. The asymmetry arises from the interference of the mixing amplitude with the decay amplitude. The decay $`B^0J/\psi K_S`$ can occur either directly or through the sequential process $`B^0\overline{B}^0J/\psi K_S`$, which imposes a modulating amplitude on the direct decay. The sign of this modulating amplitude is opposite to that in $`\overline{B}^0B^0J/\psi K_S`$ (interfering with the direct $`\overline{B}^0J/\psi K_S`$ process), and so a difference arises in both time-dependent and time-integrated rates. 
A recent result from the CDF Collaboration at Fermilab observes the asymmetry at about the $`2\sigma `$ level with the value predicted by the KM theory. Both SLAC and KEK are constructing “$`B`$-factories” to observe this asymmetry (and many others) at a compelling statistical level, and many other experiments (e.g., at Cornell, LEP, DESY and Fermilab) may have something to say soon on CP-violation in $`B`$ decays. C. Demise of the superweak theory The second new result concerns the most significant result on the decays of neutral kaons since the discovery that they violated CP in 1964. Since then, all CP-violating effects in the neutral kaon system could be parametrized by the single quantity $`ϵ`$ in Eq. (9). If that were so, one should see no difference between the CP-violating decays $`K_L\pi ^+\pi ^{}`$ and $`K_L\pi ^0\pi ^0`$ when normalized by the corresponding $`K_S`$ rates. Thus, the double ratio $$R\frac{\mathrm{\Gamma }(K_L\pi ^+\pi ^{})/\mathrm{\Gamma }(K_S\pi ^+\pi ^{})}{\mathrm{\Gamma }(K_L\pi ^0\pi ^0)/\mathrm{\Gamma }(K_S\pi ^0\pi ^0)}$$ (11) should equal 1. In the KM theory it can differ from 1 by up to a percent. A “direct” decay amplitude, parametrized by a quantity $`ϵ^{}`$, can violate CP. The double ratio is $`R=1+6\mathrm{Re}(ϵ^{}/ϵ)`$. The superweak theory has no provision for $`ϵ^{}`$. Two previous experiments gave conflicting results on whether $`ϵ^{}`$ was nonzero: $$\mathrm{Re}(ϵ^{}/ϵ)=(7.4\pm 5.9)\times 10^4(\mathrm{Fermilab}\mathrm{E731})\text{[59]},$$ (12) $$\mathrm{Re}(ϵ^{}/ϵ)=(23\pm 6.5)\times 10^4(\mathrm{CERN}\mathrm{NA31})\text{[60]}.$$ (13) A new experiment at Fermilab has now confirmed the CERN result with far more compelling statistics, finding $$\mathrm{Re}(ϵ^{}/ϵ)=(28.0\pm 4.1)\times 10^4(\mathrm{Fermilab}\mathrm{E832})\text{[61]}.$$ (14) The superweak theory is definitively ruled out. The magnitude of the effect is on the high end of the most recent theoretical range , but this may merely represent a shortcoming of methods to estimate hadronic matrix elements rather than any intrinsic limitation of the KM theory. The new result will probably reduce the uncertainty on the parameters of the CKM matrix. D. Alternative sources of CP violation So far the Kobayashi-Maskawa theory of CP violation has survived experimental tests. But what if it is eventually ruled inconsistent or incomplete? Many other theories are lurking in the wings, including superweak contributions to CP violation (clearly not the whole story), effects of right-handed $`W`$ bosons, and multi-Higgs models. These can be tested by a host of forthcoming experiments, including those on rare kaon and $`B`$ meson decays, searches for transverse muon polarization in the decays $`K\pi \mu \nu `$, searches for neutron and electron electric dipole moments, and searches for CP violation in decays of hyperons and charmed particles. The field is very rich and full of experimental opportunities. VI. COMPOSITE HIGGS BOSONS The SU(2) $`\times `$ U(1) electroweak gauge theory must be supplemented by a mechanism for breaking the symmetry. The standard (“Higgs”) mechanism involves the introduction of an SU(2) doublet of complex scalar fields, or four scalar mesons. Three of the four scalars become the longitudinal components of the $`W`$ and $`Z`$, and one remains as the physical Higgs boson. With more than one doublet, there will be additional observable scalar fields in the spectrum. The Higgs fields interact with one another quartically in the Lagrangian. 
In the presence of any physics beyond the electroweak scale, such as arises in “grand unifications” of the electroweak and strong interactions, such a theory cannot be fundamental. New physics must enter at a mass scale of a TeV or less in order that the Higgs boson mass not receive large radiative corrections from the higher mass scale. Independently of grand unified theories, the quartic Higgs interaction itself has undesirable high-energy behavior, so that the only theory which makes sense is the “trivial” one in which the quartic interaction vanishes. One approach to this problem is provided by supersymmetry, which provides a set of “superpartners” to the currently observed particles, differing from them by half a unit of spin. The quartic interaction is then not fundamental, and the superpartners cancel the large radiative corrections. Another approach is to postulate that the Higgs fields themselves are composite. This idea , known as “technicolor,” envisions the Higgs fields as fermion-antifermion pairs, with the new fermions bound by some new superstrong force, in the same way that pions are made of quarks bound by the force of quantum chromodynamics (QCD). In analogy with QCD, which implies a low-energy pion-pion quartic interaction which is not really fundamental, the Higgs boson quartic potential is then just a consequence of some more fundamental underlying theory. Properties of the new technifermions can be learned by an argument based on particles in loops. Their charges must be such as to ensure anomaly cancellation in the decay of a longitudinal $`Z`$ to two photons. If one has a single SU(2) doublet $`(U,D)`$ of technifermions (occurring in some number of “techni”-colors), the vanishing of $`Q(U)^2Q(D)^2`$ then requires $`Q(U)=1/2`$, $`Q(D)=1/2`$. This was the original solution of “minimal technicolor” . It was abandoned because there seems to be no evidence for fundamental fermions with charges $`\pm 1/2`$, and because the minimal model only explains the masses of $`W`$ and $`Z`$, not of quarks and leptons. Attempts to “extend” technicolor to a theory of quark and lepton masses introduce many new particles in loops and thus run afoul of the constraints from precise electroweak experiments mentioned in Sec. IV . In the next section I will propose a solution to which Hiroshi might have been sympathetic, in view of his early efforts to uncover the substructure of particles. VII. COMPOSITE FERMIONS Suppose the minimal techniquarks $`U`$ and $`D`$ of Sec. VII are the carriers of the weak isospin (the SU(2) quantum number) in quarks and leptons. A formula for the charge of quarks and leptons which suggests this identification is $`Q=I_{3L}+I_{3R}+(BL)/2`$, where $`I_{3L}`$ is left-handed isospin, $`I_{3R}`$ is right-handed isospin, $`B`$ is baryon number, and $`L`$ is lepton number. We imagine $`U,D`$ to carry $`I_{3L}`$ and $`I_{3R}`$ since these quantum numbers are naturally correlated with quark or lepton spin. (Note that $`I_{3L}+I_{3R}`$ is always equal to $`+1/2`$ for up quarks and neutrinos and $`1/2`$ for down quarks and charged leptons.) The $`(BL)/2`$ contribution to the charge then has to be carried by “something else”. Let it be a scalar $`\overline{S}_q`$ with charge $`1/6`$ for three colors of quarks or $`\overline{S}_{\mathrm{}}`$ with charge $`1/2`$ for leptons. The scalars thus belong to an SU(4)<sub>color</sub> group first proposed by Pati and Salam . A $`u`$ quark is then $`U\overline{S}_q`$, while an electron is $`D\overline{S}_{\mathrm{}}`$. 
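The charge formula and the composite assignments just described can be checked with exact fractions; the values Q(U) = +1/2, Q(D) = -1/2, Q(S̄<sub>q</sub>) = +1/6 and Q(S̄<sub>ℓ</sub>) = -1/2 used below are the assignments required by the known quark and lepton charges:

```python
from fractions import Fraction as F

def charge(i3l, i3r, b_minus_l):
    """Q = I_3L + I_3R + (B - L)/2, the charge formula quoted above."""
    return i3l + i3r + F(b_minus_l, 2)

# left-handed members of one family (B - L = 1/3 for quarks, -1 for leptons)
print(charge(F(1, 2), 0, F(1, 3)))    # u   -> 2/3
print(charge(F(-1, 2), 0, F(1, 3)))   # d   -> -1/3
print(charge(F(1, 2), 0, -1))         # nu  -> 0
print(charge(F(-1, 2), 0, -1))        # e-  -> -1

# composite picture: techniquark charge plus scalar charge
print(F(1, 2) + F(1, 6), F(-1, 2) + F(-1, 2))   # u = 2/3, e- = -1
```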
Tests of this model (or of others of quark and lepton substructure) are possible at the highest LEP energies, forthcoming Tevatron experiments, and future hadron and lepton colliders. VIII. SUMMARY When this talk was originally given nearly five years ago at Minnesota, the top quark had just been discovered, confirming a prediction based on its role in loop diagrams. Since then there have been great strides in confirming other predictions of loop diagrams, including hints of CP violation in $`B`$ meson decays and the overthrow of the superweak theory of CP violation. Experiments in atomic parity violation suggest that we may not know the full story of effects of particles in loops, but the presence of at least one puzzling result is what makes our field interesting. Hiroshi would have enjoyed the recent developments. On this occasion I extend good wishes to Akiko and to his colleagues, and thank them for the opportunity to honor his memory. ACKNOWLEDGEMENTS I wish to thank S. Pakvasa for helpful comments on the early theory of charm. This work was supported in part by the United States Department of Energy under Contract No. DE FG02 90ER40560.
# Optimized Constant Pressure Stochastic Dynamics ## I Introduction Molecular Dynamics (MD) simulations are a very efficient tool to study the statistical properties of thermodynamic systems, especially at high densities where the acceptance rates of standard Monte Carlo (MC) simulations are small. They are also very well suited to study dynamical properties. MD simulations are most naturally performed within the microcanonical (NVE) ensemble, while the simple Metropolis MC algorithm leads to the canonical (NVT) ensemble. However, it is often desirable to study a system within a different ensemble, and hence methods have been developed to extend both MC and MD methods to practically every ensemble of thermodynamics . For MC methods, this task is relatively straightforward: One starts from the partition function or equilibrium probability distribution function for the pertinent degrees of freedom, and defines a Markov process in the space of the latter. After verifying the condition of detailed balance, and making use of ergodicity (which often can be safely assumed, and in some cases even be shown rigorously), one has established that the process will ultimately produce fluctuations within the proper equilibrium. For example, for a simulation in the isothermal–isobaric (NPT) ensemble, the system is studied in a box with periodic boundary conditions, whose size is allowed to fluctuate. In order to keep the system homogeneous, the particle coordinates are instantaneously adjusted to these box fluctuations, via simple rescaling. We shall here be concerned with the simplest case only, where the box is just a cube of size $`L`$ in each direction. By writing $$\stackrel{}{r}_i=L\stackrel{}{s}_i$$ (1) one introduces reduced coordinates $`\stackrel{}{s}_i`$ in the unit cube instead of the original coordinates $`\stackrel{}{r}_i`$ of the $`N`$ particles in the box of volume $`V=L^d`$, $`d`$ denoting the spatial dimensionality. The abovementioned adjustment of the particle configuration to the box fluctuations is facilitated by updating $`L`$ and the $`\stackrel{}{s}_i`$ independently. The partition function appropriate to the NPT ensemble is then $`Z`$ $`=`$ $`{\displaystyle _0^{\mathrm{}}}𝑑V{\displaystyle d^d\stackrel{}{r}_1\mathrm{}d^d\stackrel{}{r}_N\mathrm{exp}\left(\beta U\beta PV\right)}`$ (2) $`=`$ $`{\displaystyle _0^{\mathrm{}}}𝑑V{\displaystyle d^d\stackrel{}{s}_1\mathrm{}d^d\stackrel{}{s}_NV^N\mathrm{exp}\left(\beta U\beta PV\right)},`$ (3) where $`U`$ denotes the system’s potential energy, $`P`$ is the external pressure, and $`\beta =1/(k_BT)`$. From this one immediately reads off that one has to run a standard Metropolis algorithm on the state space $`(L,\{\stackrel{}{s}_i\})`$, using an effective Hamiltonian $$U_{eff}=U+PVNk_BT\mathrm{ln}V.$$ (4) The MD approach to non–microcanonical ensembles , pioneered by Andersen , Nosé and Hoover , is slightly more involved. Like in MC, one defines an additional dynamical variable whose fluctuations allow to keep the thermodynamically conjugate variable fixed. In our example, this variable is $`L`$, while $`P`$ is fixed. However, the dynamics is not specified via a random process, but rather by Hamiltonian equations of motion in the extended space. This requires the definition of canonically conjugate momenta of the additional variables, and in turn the introduction of artificial masses, which have no direct physical meaning but are adjusted in order to set the time scale of the new fluctuations. 
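Before following the MD route further, a minimal sketch (not taken from the original work) of the Metropolis volume move implied by Eqs. (2)–(4) may be helpful. It assumes a cubic box and a user-supplied routine total_energy(s, L) returning the potential energy for reduced coordinates s and box length L; only the box length changes in the move, the reduced coordinates are left untouched.

```python
# Sketch of the NPT Metropolis volume move based on the effective Hamiltonian, Eq. (4):
#   U_eff = U + P V - N k_B T ln V .
# total_energy(s, L) is an assumed user routine; s are reduced coordinates in the unit cube.
import numpy as np

rng = np.random.default_rng(0)

def volume_move(s, V, total_energy, P, kT, dV_max):
    """Attempt one volume move; returns (new volume, accepted?)."""
    N, d = s.shape
    V_new = V + dV_max * (2.0 * rng.random() - 1.0)
    if V_new <= 0.0:
        return V, False
    L_old, L_new = V ** (1.0 / d), V_new ** (1.0 / d)
    dU = total_energy(s, L_new) - total_energy(s, L_old)
    # change of the effective Hamiltonian of Eq. (4)
    d_ueff = dU + P * (V_new - V) - N * kT * np.log(V_new / V)
    if d_ueff <= 0.0 or rng.random() < np.exp(-d_ueff / kT):
        return V_new, True
    return V, False
```

Note that the volume is sampled uniformly here, which corresponds directly to the measure dV in Eqs. (2)–(3) with the V^N factor absorbed into U_eff.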
The analysis then proceeds by assuming ergodicity in the extended space, such that the statistics is described by a microcanonical ensemble in that space. If the equations of motion have been chosen properly, then the equilibrium distribution function of the algorithm, which is obtained by integrating out the artificial momenta, must coincide with that of the desired ensemble. For constant–pressure simulations, often the picture of a “piston” is employed. In the last few years many contributions on extending constant–pressure methods have been made, treating cases of isotropic boxes as well as non–cubic cells . We will here however be only concerned with the case of an isotropic box, as in the original Andersen method which produces an isobaric–isenthalpic (NPH) ensemble. Stochastic dynamics (SD) in its classical form can be viewed as a simulation method which is somewhere between MC and MD, sharing with the former the stochastic element (which also ensures the ergodicity of the method) and the generation of a canonical (NVT) ensemble, while being based on continuous equations of motion, and momenta, like the latter. Instead of Hamiltonian equations of motion one solves Langevin equations and adjusts the temperature via the balance between friction force and stochastic force (fluctuation–dissipation theorem). It is not surprising that this approach can be combined with the Andersen method in order to produce an NPT ensemble. It is the purpose of this paper to show that the algorithm can be derived in a very simple and straightforward manner, by exploiting the well–known equivalence of the Langevin equation with the Fokker–Planck equation , and avoiding the complicated reasoning of a recent publication . The derivation is outlined in Secs. II, III and IV, where we treat a straightforward generalization of SD to arbitrary Hamiltonian systems, and apply this to the original Andersen NPH method, which is thus modified to an NPT method. We then turn to the question of numerical implementation. Since SD reduces to standard Hamiltonian dynamics in the limit of zero friction, and practical applications are often run for rather small friction, it is useful to use an algorithm which reduces to a time–reversible symplectic integrator (TRSI) in the zero–friction limit. It is well–known that TRSIs are particularly well–suited for Hamiltonian systems , since, except for roundoff errors, they do not prefer a particular direction of time and are hence very stable. Our algorithm is derived and tested in Sec. V, where we apply the method to a simple model system of Lennard–Jones particles, and pay particular attention to the question how the parameters should be chosen for optimum performance. Finally, we conclude in Sec. VI. ## II Generalized Stochastic Dynamics Our starting point is a set of generalized coordinates $`q_i`$ and canonically conjugate momenta $`p_i`$, such that the Hamiltonian equations of motion read $$\dot{q}_i=\frac{}{p_i}\dot{p}_i=\frac{}{q_i},$$ (5) where $``$ is the Hamilton function. 
These equations of motion are generalized to their stochastic versions by adding friction forces with damping parameters $`\gamma _i\left(\left\{q_i\right\}\right)`$ and stochastic forces with noise strengths $`\sigma _i\left(\left\{q_i\right\}\right)`$ (note that a dependence on the generalized coordinates is permitted): $$\dot{q}_i=\frac{}{p_i}\dot{p}_i=\frac{}{q_i}\gamma _i\frac{}{p_i}+\sigma _i\eta _i(t),$$ (6) where the Gaussian white noise $`\eta _i`$ satisfies the usual relations $$\eta _i(t)=0\eta _i(t)\eta _j(t^{})=2\delta _{ij}\delta \left(tt^{}\right).$$ (7) Equivalently, the stochastic process is described by the Fokker–Planck equation which describes the time evolution of the probability distribution function $`\mathrm{\Phi }`$ in the full space of stochastic variables. For an arbitrary set of variables $`x_\nu `$ (in our case both $`q_i`$ and $`p_i`$) it reads $$\frac{\mathrm{\Phi }}{t}=i\widehat{L}_{FP}\mathrm{\Phi }=\underset{\nu }{}\frac{}{x_\nu }D_\nu ^{(1)}\mathrm{\Phi }+\underset{\mu \nu }{}\frac{^2}{x_\mu x_\nu }D_{\mu \nu }^{(2)}\mathrm{\Phi },$$ (8) where the right hand side of the equation defines the Fokker–Planck operator $`\widehat{L}_{FP}`$. Drift and diffusion coefficients $`D_\nu ^{(1)}`$ and $`D_{\mu \nu }^{(2)}`$ are related to the short–time behavior of the process via the Kramers–Moyal expansion : $`D_\nu ^{(1)}`$ $`=`$ $`\underset{\mathrm{\Delta }t0}{lim}{\displaystyle \frac{1}{\mathrm{\Delta }t}}\mathrm{\Delta }x_\nu (\mathrm{\Delta }t)=\underset{\mathrm{\Delta }t0}{lim}{\displaystyle \frac{1}{\mathrm{\Delta }t}}x_\nu (t+\mathrm{\Delta }t)x_\nu (t)`$ (9) $`D_{\mu \nu }^{(2)}`$ $`=`$ $`\underset{\mathrm{\Delta }t0}{lim}{\displaystyle \frac{1}{2\mathrm{\Delta }t}}\mathrm{\Delta }x_\mu (\mathrm{\Delta }t)\mathrm{\Delta }x_\nu (\mathrm{\Delta }t).`$ (10) Straightforward evaluation of these moments for the present case yields directly the Fokker–Planck operator: $`i\widehat{L}_{FP}`$ $`=`$ $`{\displaystyle \underset{i}{}}{\displaystyle \frac{}{q_i}}{\displaystyle \frac{}{p_i}}{\displaystyle \underset{i}{}}{\displaystyle \frac{}{p_i}}\left({\displaystyle \frac{}{q_i}}\gamma _i{\displaystyle \frac{}{p_i}}\right)+{\displaystyle \underset{i}{}}{\displaystyle \frac{^2}{p_i^2}}\sigma _i^2`$ (11) $`=`$ $`{\displaystyle \underset{i}{}}{\displaystyle \frac{}{p_i}}{\displaystyle \frac{}{q_i}}+{\displaystyle \underset{i}{}}{\displaystyle \frac{}{q_i}}{\displaystyle \frac{}{p_i}}+{\displaystyle \underset{i}{}}{\displaystyle \frac{}{p_i}}\left(\gamma _i{\displaystyle \frac{}{p_i}}+\sigma _i^2{\displaystyle \frac{}{p_i}}\right).`$ (12) A canonical ensemble is generated if the Boltzmann distribution is the stationary distribution of the process (due to the stochastic element, the process is usually ergodic, such that only one stationary distribution exists). For the present case one finds $$i\widehat{L}_{FP}\mathrm{exp}\left(\beta \right)=\underset{i}{}\frac{}{p_i}\left(\gamma _i\beta \sigma _i^2\right)\frac{}{p_i}\mathrm{exp}\left(\beta \right),$$ (13) which vanishes if $$\sigma _i^2=k_BT\gamma _i.$$ (14) This relation is the generalized fluctuation–dissipation theorem which friction and noise have to satisfy in order to generate a canonical ensemble. ## III Andersen Extended System The Andersen method uses the box volume $`V=L^d`$ and the scaled coordinates $`\stackrel{}{s}_i`$, see Eqn. 1, as degrees of freedom. 
For the particle velocities one obtains $$\dot{\stackrel{}{r}_i}=L\dot{\stackrel{}{s}_i}+\dot{L}\stackrel{}{s}_i,$$ (15) however, the second term is deliberately omitted in order to achieve independent fluctuations of $`L`$ and $`\stackrel{}{s}_i`$. Hence the method amounts to postulating the Lagrangian $``$ $`=`$ $`{\displaystyle \underset{i}{}}{\displaystyle \frac{L^2}{2}}m_i\dot{\stackrel{}{s}_i}^2{\displaystyle \underset{i<j}{}}v_{ij}(L,\{\stackrel{}{s}_i\})+{\displaystyle \frac{Q}{2}}\dot{V}^2PV,`$ (16) where $`m_i`$ denotes the mass of the $`i`$th particle, $`Q`$ is the artificial piston mass or box mass, and $`v_{ij}`$ is the interaction potential between particles $`i`$ and $`j`$ (the generalization to three– and many–body forces is straightforward). The Hamiltonian is then obtained via Legendre transformation $$=\underset{i}{}\stackrel{}{\pi }_i\dot{\stackrel{}{s}_i}+\mathrm{\Pi }_V\dot{V}=\underset{i}{}\frac{1}{2L^2m_i}\stackrel{}{\pi }_i^2+\underset{i<j}{}v_{ij}+\frac{1}{2Q}\mathrm{\Pi }_V^2+PV,$$ (17) where we have used the canonically conjugate momenta $$\mathrm{\Pi }_V=\frac{}{\dot{V}}=Q\dot{V},\stackrel{}{\pi }_i=\frac{}{\dot{\stackrel{}{s}_i}}=m_iL^2\dot{\stackrel{}{s}_i},$$ (18) such that the Hamiltonian equations of motion read $`\dot{\stackrel{}{s}_i}={\displaystyle \frac{1}{L^2m_i}}\stackrel{}{\pi }_i`$ $`\dot{\stackrel{}{\pi }_i}=L\stackrel{}{f}_i`$ (19) $`\dot{V}={\displaystyle \frac{1}{Q}}\mathrm{\Pi }_V`$ $`\dot{\mathrm{\Pi }}_V=𝒫P,`$ (20) where $`\stackrel{}{f}_i`$ is the force acting on the $`i`$th particle, and $`𝒫`$ abbreviates the “instantaneous” pressure $$𝒫=\frac{L}{dV}\underset{i<j}{}\stackrel{}{f}_{ij}\stackrel{}{s}_{ij}+\frac{1}{dL^2V}\underset{i}{}\frac{1}{m_i}\stackrel{}{\pi }_i^2,$$ (21) $`\stackrel{}{f}_{ij}`$ being the force acting between particle $`i`$ and $`j`$, while $`\stackrel{}{s}_{ij}=\stackrel{}{s}_i\stackrel{}{s}_j`$. Obviously, $``$ is a constant of motion. Apart from the kinetic energy of the piston, and the deviation of the simulated molecular kinetic energy from the true kinetic energy, which are both small corrections , $``$ is just the enthalpy. For this reason, the method produces the NPH ensemble. ## IV Langevin Equation for the NPT Ensemble The idea of Feller et al. was to replace the canonical equations of motion (Eqn. 19) by a Langevin stochastic process. It was designed to avoid oscillations of the box volume (“ringing” of the box). In Ref. an infinite set of harmonic oscillators coupled to the box piston was used to prove the correctness of the approach. However, since the original Andersen method is based on a Hamiltonian system, the results which have been derived in Sec. II can be directly applied. Therefore the stochastic equations of motion read $`\dot{\stackrel{}{\pi }_i}`$ $`=`$ $`L\stackrel{}{f}_i{\displaystyle \frac{\gamma _i}{L^2m_i}}\stackrel{}{\pi }_i+\sqrt{k_BT\gamma _i}\stackrel{}{\eta }_i(t)`$ (22) $`\dot{\mathrm{\Pi }}_V`$ $`=`$ $`𝒫P{\displaystyle \frac{\gamma _V}{Q}}\mathrm{\Pi }_V+\sqrt{k_BT\gamma _V}\stackrel{}{\eta }_V(t)`$ (23) $`\eta _i^\alpha =\eta _V`$ $`=`$ $`0`$ (24) $`\eta _i^\alpha (t)\eta _j^\beta (t^{})`$ $`=`$ $`2\delta _{ij}\delta _{\alpha \beta }\delta (tt^{})`$ (25) $`\eta _V(t)\eta _V(t^{})`$ $`=`$ $`2\delta (tt^{})`$ (26) $`\eta _i^\alpha (t)\eta _V(t^{})`$ $`=`$ $`0,`$ (27) while the equations of motion for $`\stackrel{}{s}_i`$ and $`V`$ remain unchanged. Here $`\alpha `$ and $`\beta `$ denote Cartesian directions, and the fluctuation–dissipation relation has already been taken into account. 
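Both the Hamiltonian equations (19) and the stochastic equations above drive the piston with the instantaneous pressure of Eq. (21). A minimal sketch of its evaluation in the scaled variables follows; it assumes a cubic box with the minimum-image convention and a user-supplied pair_force routine returning the real-space force between a pair of particles.

```python
# Sketch of the instantaneous pressure of Eq. (21) in the scaled variables (s_i, pi_i, L).
# Assumptions: cubic box, minimum-image convention, and a pair_force(r_vec) routine
# returning the real-space force on particle i due to particle j.
import numpy as np

def instantaneous_pressure(s, pi, masses, L, pair_force):
    N, d = s.shape
    V = L ** d
    virial = 0.0
    for i in range(N - 1):
        for j in range(i + 1, N):
            sij = s[i] - s[j]
            sij -= np.rint(sij)          # minimum image in the unit cube
            fij = pair_force(L * sij)    # real-space force from the pair potential
            virial += np.dot(fij, sij)
    kinetic = np.sum(pi ** 2 / masses[:, None]) / L ** 2   # sum_i pi_i^2 / m_i / L^2
    return (L * virial + kinetic) / (d * V)
```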
This method generates the correct NPT ensemble, as is seem from writing down the partition function which naturally arises from the algorithm (cf. Sec. II): $$Z=𝑑\mathrm{\Pi }_Vd^d\stackrel{}{\pi }_1\mathrm{}d^d\stackrel{}{\pi }_N𝑑Vd^d\stackrel{}{s}_1\mathrm{}d^d\stackrel{}{s}_N\mathrm{exp}\left(\beta \right),$$ (28) where one obviously has to use the Hamiltonian of the Andersen method, see Eqn. 17. Integrating out the momenta, one sees directly that this is, apart from unimportant prefactors, identical to Eqn. 2, i. e. the correct partition function of the NPT ensemble. Note that there is still considerable freedom in the choice of the friction parameters. Feller et al. chose $`\gamma _i=0`$ and $`\gamma _V=\text{const.}`$. This is tantamount to coupling only the piston degree of freedom to the heat bath. Since this degree of freedom is tightly coupled to all the others, the method produces the same NPT ensemble as the more general case $`\gamma _i0`$. However, we view this latter case, which also includes a direct coupling of the particles to the heat bath, as more advantageous, not for fundamental reasons, but rather for practical ones: As in standard SD, every degree of freedom is thermostatted individually. For this reason, local instabilities, arising from discretization errors, are efficiently corrected for, without spreading throughout the system. Loosely spoken, a particle which happens to be too “hot” will be “cooled down” by its local friction, while in the opposite case it will be “heated up” by the noise. For this reason, the SD version with $`\gamma _i0`$ allows for a slightly larger time step than the pure Andersen method, while the Feller et al. version ($`\gamma _i=0`$, $`\gamma _V0`$) is not more stable, involving only global thermostatting. We have chosen $$\gamma _i=\gamma _0L^2,$$ (29) while choosing a constant value for $`\gamma _V`$. Then the Langevin equation for $`\stackrel{}{\pi }_i`$ is rewritten as $$\dot{\stackrel{}{\pi }_i}=L\left(\stackrel{}{f}_i\frac{\gamma _0}{m_i}\frac{\stackrel{}{\pi }_i}{L}+\sqrt{k_BT\gamma _0}\stackrel{}{\eta }_i(t)\right);$$ (30) this fits naturally to standard SD, where the friction force is $`\gamma _0\stackrel{}{p}_i/m_i=\gamma _0\stackrel{}{\pi }_i/(Lm_i)`$. Further details on the implementation will be given in Sec. V; there we will also discuss the choice of the parameters $`\gamma _0`$ and $`\gamma _V`$, as well as of the piston mass $`Q`$, which are all irrelevant for the statistical properties of the system, but of great importance for the equilibration properties of the algorithm. ## V Implementation ### A Symplectic Integrator Symplectic time–reversible integrators are known to be extremely useful for molecular dynamics simulations of Hamiltonian systems . This is so because, except for roundoff errors, they conserve the phase–space volume exactly (note the intimate relation to entropy production), and do not mark any particular direction of time (note that a global drift in the algorithm breaks this time–reversal symmetry). Hence they are very stable and allow for a large time step. The most common example (actually, the lowest–order scheme) is the well–known Verlet algorithm in its various formulations . A particularly transparent way to construct these algorithms is based on the Liouville operator $`i\widehat{L}`$, which is just the Fokker–Planck operator in the special case of vanishing friction (see Eqn. 11). Noticing that the formal solution of Eqn. 
8 is just $$\mathrm{\Phi }(t)=\mathrm{exp}\left(i\widehat{L}t\right)\mathrm{\Phi }(0),$$ (31) one focuses directly on the operator $`\mathrm{exp}\left(i\widehat{L}t\right)`$, and decomposes $`\widehat{L}`$ into a sum of simpler operators, $$i\widehat{L}=\underset{k=1}{\overset{n}{}}i\widehat{L}_k,$$ (32) where the imaginary unit is extracted for convenience; $`\widehat{L}`$ as well as each of the $`\widehat{L}_k`$ is self–adjoint. Now, the exact time development within a time step $`\mathrm{\Delta }t`$ is replaced by an approximate one by using the factorization $$e^{i\widehat{L}\mathrm{\Delta }t}e^{i\widehat{L}_1\mathrm{\Delta }t/2}e^{i\widehat{L}_2\mathrm{\Delta }t/2}\mathrm{}e^{i\widehat{L}_{n1}\mathrm{\Delta }t/2}e^{i\widehat{L}_n\mathrm{\Delta }t}e^{i\widehat{L}_{n1}\mathrm{\Delta }t/2}\mathrm{}e^{i\widehat{L}_2\mathrm{\Delta }t/2}e^{i\widehat{L}_1\mathrm{\Delta }t/2}.$$ (33) This scheme is automatically phase–space conserving, since each of the operators is unitary, and time–reversible symmetric, since the inverse operator is just the original operator, evaluated for $`\mathrm{\Delta }t`$. Now, if each of the operators $`\widehat{L}_k`$ is simple enough, the action of $`e^{i\widehat{L}_kt}`$ on a phase space point can be calculated trivially, such that the algorithm is a succession of simple updating steps. Specifically, for the Andersen equations of motion (see Sec. III) we choose the operators $`i\widehat{L}_1`$ $`=`$ $`{\displaystyle \underset{i}{}}L\stackrel{}{f}_i{\displaystyle \frac{}{\stackrel{}{\pi }_i}}`$ (34) $`i\widehat{L}_2`$ $`=`$ $`\left(𝒫P\right){\displaystyle \frac{}{\mathrm{\Pi }_V}}`$ (35) $`i\widehat{L}_3`$ $`=`$ $`{\displaystyle \frac{\mathrm{\Pi }_V}{Q}}{\displaystyle \frac{}{V}}`$ (36) $`i\widehat{L}_4`$ $`=`$ $`{\displaystyle \underset{i}{}}{\displaystyle \frac{\stackrel{}{\pi }_i}{L^2m_i}}{\displaystyle \frac{}{\stackrel{}{s}_i}},`$ (37) resulting in the following updating scheme: 1. $`\stackrel{}{\pi }_i(t)\stackrel{}{\pi }_i(t+\mathrm{\Delta }t/2)=\stackrel{}{\pi }_i(t)+L(t)\stackrel{}{f}_i(t)\mathrm{\Delta }t/2`$ 2. $`\mathrm{\Pi }_V(t)\mathrm{\Pi }_V(t+\mathrm{\Delta }t/2)=\mathrm{\Pi }_V(t)+\left(𝒫P\right)\mathrm{\Delta }t/2`$ (note that for the evaluation of $`𝒫`$, one has to take the old positions $`\stackrel{}{s}_i(t)`$ and the old box size $`L(t)`$, but already the updated momenta $`\stackrel{}{\pi }_i(t+\mathrm{\Delta }t/2)`$) 3. $`V(t)V(t+\mathrm{\Delta }t/2)=V(t)+Q^1\mathrm{\Pi }_V(t+\mathrm{\Delta }t/2)\mathrm{\Delta }t/2`$ 4. $`\stackrel{}{s}_i(t)\stackrel{}{s}_i(t+\mathrm{\Delta }t)=\stackrel{}{s}_i(t)+\frac{\stackrel{}{\pi }_i(t+\mathrm{\Delta }t/2)}{L^2(t+\mathrm{\Delta }t/2)m_i}\mathrm{\Delta }t`$ 5. $`V(t+\mathrm{\Delta }t/2)V(t+\mathrm{\Delta }t)=V(t+\mathrm{\Delta }t/2)+Q^1\mathrm{\Pi }_V(t+\mathrm{\Delta }t/2)\mathrm{\Delta }t/2`$ 6. $`\mathrm{\Pi }_V(t+\mathrm{\Delta }t/2)\mathrm{\Pi }_V(t+\mathrm{\Delta }t)=\mathrm{\Pi }_V(t+\mathrm{\Delta }t/2)+\left(𝒫P\right)\mathrm{\Delta }t/2`$ (for evaluation of $`𝒫`$, one uses $`\stackrel{}{s}_i(t+\mathrm{\Delta }t)`$, $`L(t+\mathrm{\Delta }t)`$, and $`\stackrel{}{\pi }_i(t+\mathrm{\Delta }t/2)`$) 7. $`\stackrel{}{\pi }_i(t+\mathrm{\Delta }t/2)\stackrel{}{\pi }_i(t+\mathrm{\Delta }t)=\stackrel{}{\pi }_i(t+\mathrm{\Delta }t/2)+L(t+\mathrm{\Delta }t)\stackrel{}{f}_i(t+\mathrm{\Delta }t)\mathrm{\Delta }t/2`$ . 
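A direct transcription of steps (1)–(7) into code might look as follows (a sketch, not the authors' implementation; friction and noise are omitted here). The routines forces(s, L) and pressure(s, pi, L) are assumed to supply the real-space forces and the instantaneous pressure of Eq. (21), e.g. by wrapping the sketch given above.

```python
# Sketch of one NPH time step following the updating scheme (1)-(7) in scaled variables.
# forces(s, L) and pressure(s, pi, L) are assumed user routines (the latter could wrap
# the instantaneous_pressure sketch above with masses and pair_force bound).
import numpy as np

def nph_step(s, pi, V, Pi_V, masses, Q, P, dt, d, forces, pressure):
    L = V ** (1.0 / d)
    pi = pi + L * forces(s, L) * dt / 2.0                    # step (1)
    Pi_V = Pi_V + (pressure(s, pi, L) - P) * dt / 2.0        # step (2): old s, old L, new pi
    V_half = V + Pi_V / Q * dt / 2.0                         # step (3)
    L_half = V_half ** (1.0 / d)
    s = s + pi / (L_half ** 2 * masses[:, None]) * dt        # step (4)
    V = V_half + Pi_V / Q * dt / 2.0                         # step (5)
    L = V ** (1.0 / d)
    Pi_V = Pi_V + (pressure(s, pi, L) - P) * dt / 2.0        # step (6): new s, new L, half-step pi
    pi = pi + L * forces(s, L) * dt / 2.0                    # step (7)
    return s, pi, V, Pi_V
```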
It is often convenient to formulate the algorithm in terms of the conventional variables $$\stackrel{}{r}_i(t)=L(t)\stackrel{}{s}_i(t)\stackrel{}{p}_i(t)=L(t)^1\stackrel{}{\pi }_i(t),$$ (38) which are however not canonically conjugate with respect to each other. The pressure, in terms of these variables, is written as $$𝒫=\frac{1}{dV}\underset{i<j}{}\stackrel{}{f}_{ij}\stackrel{}{r}_{ij}+\frac{1}{dV}\underset{i}{}\frac{1}{m_i}\stackrel{}{p}_i^{\mathrm{\hspace{0.17em}2}},$$ (39) and the updating scheme, which now involves various rescaling steps, proceeds as follows: 1. $`\stackrel{}{p}_i^{}=\stackrel{}{p}_i(t)+\stackrel{}{f}_i(t)\mathrm{\Delta }t/2`$ 2. $`𝒫`$ is evaluated using Eqn. 39 with $`\stackrel{}{r}_i(t)`$, $`L(t)`$, and $`\stackrel{}{p}_i^{}`$; then $`\mathrm{\Pi }_V`$ is updated as before: $`\mathrm{\Pi }_V(t+\mathrm{\Delta }t/2)=\mathrm{\Pi }_V(t)+\left(𝒫P\right)\mathrm{\Delta }t/2`$ 3. $`V(t+\mathrm{\Delta }t/2)=V(t)+Q^1\mathrm{\Pi }_V(t+\mathrm{\Delta }t/2)\mathrm{\Delta }t/2`$ 4. $`\stackrel{}{r}_i^{}=\stackrel{}{r}_i(t)+\frac{L^2(t)}{L^2(t+\mathrm{\Delta }t/2)}\frac{\stackrel{}{p}_i^{}}{m_i}\mathrm{\Delta }t`$ 5. $`V(t+\mathrm{\Delta }t)=V(t+\mathrm{\Delta }t/2)+Q^1\mathrm{\Pi }_V(t+\mathrm{\Delta }t/2)\mathrm{\Delta }t/2`$ followed by two rescaling steps: * $`\stackrel{}{r}_i(t+\mathrm{\Delta }t)=\frac{L(t+\mathrm{\Delta }t)}{L(t)}\stackrel{}{r}_i^{}`$ * $`\stackrel{}{p}_i^{\prime \prime }=\frac{L(t)}{L(t+\mathrm{\Delta }t)}\stackrel{}{p}_i^{}`$ 6. $`𝒫`$ is evaluated using Eqn. 39 with $`\stackrel{}{r}_i(t+\mathrm{\Delta }t)`$, $`L(t+\mathrm{\Delta }t)`$, and $`\stackrel{}{p}_i^{\prime \prime }`$; then $`\mathrm{\Pi }_V(t+\mathrm{\Delta }t)=\mathrm{\Pi }_V(t+\mathrm{\Delta }t/2)+\left(𝒫P\right)\mathrm{\Delta }t/2`$ 7. $`\stackrel{}{p}_i(t+\mathrm{\Delta }t)=\stackrel{}{p}_i^{\prime \prime }+\stackrel{}{f}_i(t+\mathrm{\Delta }t)\mathrm{\Delta }t/2`$ . So far the algorithm has been developed for the case without friction and noise. For the case with friction and noise, we simply use the scheme given above, and introduce the following replacements: $`\stackrel{}{f}_i\mathrm{\Delta }t/2`$ $``$ $`\stackrel{}{f}_i\mathrm{\Delta }t/2\gamma _0{\displaystyle \frac{\stackrel{}{p}_i}{m_i}}\mathrm{\Delta }t/2+\sqrt{k_BT\gamma _0\mathrm{\Delta }t}\stackrel{}{z}_i`$ (40) $`\left(𝒫P\right)\mathrm{\Delta }t/2`$ $``$ $`\left(𝒫P\right)\mathrm{\Delta }t/2\gamma _V{\displaystyle \frac{\mathrm{\Pi }_V}{Q}}\mathrm{\Delta }t/2+\sqrt{k_BT\gamma _V\mathrm{\Delta }t}z_V.`$ (41) Here $`z_i^\alpha `$ and $`z_V`$ denote uncorrelated random numbers with zero mean and unit variance; for simplicity we sample them from a uniform distribution via $$z=\sqrt{12}\left(u\frac{1}{2}\right)$$ (42) where $`u`$ is uniformly distributed on the unit interval. The momenta which occur in Eqn. 40 are, for simplicity, taken as $`\stackrel{}{p}_i(t)`$ in step (1), $`\mathrm{\Pi }_V(t)`$ in step (2), $`\mathrm{\Pi }_V(t+\mathrm{\Delta }t/2)`$ in step (6), and $`\stackrel{}{p}_i^{\prime \prime }`$ in step (7). ### B Choice of Parameters We start from the observation that a molecular system is characterized by a typical molecular frequency $`\omega _0`$, which can be viewed as the “Einstein” frequency of oscillations of an atom in its “cage” . 
With use of the intermolecular potential $`v(\stackrel{}{r})`$ one gets $`\omega _0^2={\displaystyle \frac{\rho }{dm}}{\displaystyle d^d\stackrel{}{r}g(\stackrel{}{r})^2v(\stackrel{}{r})},`$ (43) with $`\rho `$ being the particle number density, $`m`$ the mass of the molecules, and $`g(\stackrel{}{r})`$ the pair distribution function. Alternatively, one can define a molecular time scale by the time which a sound wave needs for traveling the nearest neighbor distance . However, both frequencies coincide by order of magnitude. This frequency governs the time step $`\mathrm{\Delta }t`$ which one has to choose in order to keep the MD algorithm stable; a typical rule of thumb says $`\mathrm{\Delta }t=(1/50)(2\pi /\omega _0)`$. Similarly, the piston degree of freedom performs oscillations, if it is simulated with very weak friction in the NVT ensemble. Following Nosé , we can estimate their frequency $`\mathrm{\Omega }_0`$ quite easily. Within a linearized approximation, the isothermal compressibility $$\kappa _T=\frac{1}{V}\frac{V}{P}=\frac{1}{Vk_BT}\left(V^2V^2\right)$$ (44) controls the relation between pressure fluctuations $`\delta 𝒫=𝒫P`$ and volume fluctuations $`\delta V=VV`$ via $$\delta 𝒫=\frac{P}{V}\delta V=\frac{1}{V\kappa _T}\delta V.$$ (45) Therefore, one concludes from Eqn. 19 by trivial insertion $$\frac{d^2}{dt^2}\delta V=\frac{1}{QV\kappa _T}\delta V,$$ (46) which is the equation of motion of a harmonic oscillator with frequency $$\mathrm{\Omega }_0^2=\frac{1}{QV\kappa _T}.$$ (47) Obviously, the piston mass has to be chosen small enough such that the system can adjust its volume sufficiently fast. On the other hand, it cannot be chosen too small, since then $`\mathrm{\Omega }_0`$ becomes too large, see Eqn. 47. Clearly, one does not want $`\mathrm{\Omega }_0`$ to exceed $`\omega _0`$, since otherwise the simulation would need an unnecessarily small time step. The optimum piston mass is thus found from the resonance condition $`\mathrm{\Omega }_0=\omega _0`$ , which yields a quite different value for $`Q`$ (by a factor of $`L^2`$) than Andersen’s original suggestion — this original criterion has turned out to be not correct. The similar frequencies of the molecular oscillator and the box volume lead to a very quick energy transfer between them, resulting in a very efficient equilibration. However, one will often choose a substantially larger value for $`Q`$ in order to separate the time scales, such that the molecular motion on short length and time scales is largely unaffected by the piston motion. Regardless of the precise choice for $`Q`$, one should note that keeping $`\mathrm{\Omega }_0`$ constant implies a scaling of $`Q`$ with the inverse system size. When the coupling to the heat bath with friction and noise is added, the question arises how to choose the damping parameters $`\gamma _0`$ and $`\gamma _V`$. Let us hence study a (deterministic) damped harmonic oscillator, $$m\ddot{x}+\gamma \dot{x}+m\omega _0^2x=0.$$ (48) Obviously, for small $`\gamma `$ the damping can practically be neglected. On the other hand, for $`\gamma m\omega _0`$, damping force and harmonic force are of the same order of magnitude (for a harmonic oscillator, we can estimate the velocity via $`\dot{x}\omega _0x`$). The exact calculation yields $$\gamma _c=2m\omega _0$$ (49) for the “aperiodic limit”. 
This value quantifies the qualitative distinction between “weak” and “strong” damping, denoting the boundary between oscillatory behavior ($`\gamma <\gamma _c`$) and pure relaxational dynamics ($`\gamma >\gamma _c`$). Only in the weak damping case $`\gamma \gamma _c`$, the fastest time scale (which governs the time step) is given by $`\omega _0`$, while for $`\gamma \gamma _c`$ the damping term dominates, requiring a smaller time step. For this reason, $`\gamma `$ values beyond $`\gamma _c`$ are clearly undesirable. Even worse, for $`\gamma \gamma _c`$ the relaxation contains also a very slow component, whose characteristic time is, for large $`\gamma `$, given by $`\gamma /(m\omega _0^2)`$. Therefore, $`\gamma =\gamma _c`$ is clearly the optimum value for fast equilibration. However, for the single–particle damping $`\gamma _0`$ we typically choose a value which is between one and two orders of magnitude smaller than $`\gamma _c`$. This is in accord with the philosophy of simulating the system rather close to its true Hamiltonian dynamics, such that at least on the local scales, both spatially and temporally, the dynamics can be considered as realistic. We hence use the coupling to the heat bath mainly for additional stabilization, deliberately keeping the molecular oscillations in the simulation. On the other hand, there is no analogous argument of “physical realism” for the box motion, which is intrinsically unphysical. One is therefore clearly led to the choice $`\gamma _V=\gamma _{Vc}`$, such that the ringing is just avoided, while the box motions are still sufficiently fast. The same conclusion was obtained by Feller et al. , where however no theoretical background was provided. Using this choice, one is left with only one time scale for the piston motion, given by $`\mathrm{\Omega }_0`$, and this is in turn adjusted according to the needs of the simulation: If one is only interested in statics, then “resonant” coupling is desirable (i. e. $`\mathrm{\Omega }_0\omega _0`$, and also $`\gamma _0\gamma _c`$), at the expense of distorting the motions even on the molecular scale. If, on the other hand, it is desired to realistically simulate the molecular oscillations, one should enforce a separation of time scales by choosing both $`\mathrm{\Omega }_0\omega _0`$ and $`\gamma _0\gamma _c`$. ### C Numerical Test We study a system containing 100 particles interacting via a truncated Lennard–Jones potential whose attractive part is cut off: $$U_{LJ}(r)=4ϵ\left[\left(\frac{\sigma }{r}\right)^{12}\left(\frac{\sigma }{r}\right)^6+\frac{1}{4}\right];r<2^{1/6}\sigma .$$ (50) We choose Lennard–Jones units where the parameters $`\sigma `$ and $`ϵ`$ as well as the particle mass $`m`$ are set to unity. Figure 1 illustrates the problem of the “ringing” in this particular system, by displaying the autocorrelation function of the pressure fluctuations for various simulation parameters. The simulated temperature is $`k_BT=1.0`$, while the pressure was fixed at the rather low value $`P=1.0`$. The mean volume for the 100 particles is $`V=262.7`$, corresponding to a density $`\rho =0.38`$. The compressibility at this state point is $`\kappa _T=0.3`$, such that for a box mass of $`Q=0.1`$ one finds $`\mathrm{\Omega }_00.36`$ for the ringing frequency or $`2\pi /\mathrm{\Omega }_018`$ for the oscillation time. The figure shows that the box indeed oscillates, and that the frequency of the oscillations has been estimated correctly. 
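As a quick numerical check of these estimates (a sketch in Lennard–Jones units, not part of the original analysis), Eq. (47) and the aperiodic-limit criterion can be evaluated directly for the quoted state point:

```python
# Check (in Lennard-Jones units) of the quoted numbers for this state point:
# ringing frequency Omega_0 from Eq. (47), the oscillation period 2*pi/Omega_0,
# the critical piston damping gamma_Vc = 2 Q Omega_0 (cf. Eq. (49)), and the
# "resonant" piston mass Q_opt = 1/(V kappa_T omega_0^2).
import numpy as np

V, kappa_T = 262.7, 0.3      # mean volume and compressibility at this state point
omega_0 = 8.5                # molecular ("Einstein") frequency quoted in the text

for Q in (0.1, 1.0e-4):
    Omega_0 = 1.0 / np.sqrt(Q * V * kappa_T)      # Eq. (47)
    gamma_Vc = 2.0 * Q * Omega_0                  # aperiodic limit for the piston
    print(f"Q = {Q:8.1e}:  Omega_0 = {Omega_0:.3f},"
          f"  2*pi/Omega_0 = {2.0 * np.pi / Omega_0:.1f},  gamma_Vc = {gamma_Vc:.4f}")

Q_opt = 1.0 / (V * kappa_T * omega_0 ** 2)        # resonance condition Omega_0 = omega_0
print(f"Q_opt = {Q_opt:.2e}")
# Expected (cf. the text): Omega_0 ~ 0.36 and gamma_Vc ~ 0.07 for Q = 0.1,
# gamma_Vc ~ 0.002 for Q = 1e-4, and Q_opt ~ 1.8e-4.
```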
The left part of the figure is for pure undamped Andersen dynamics where the dependence on $`Q`$ is shown. Choosing a value of $`Q=0.0001`$ leads to a very fast relaxation of the pressure autocorrelation function. The theoretical prediction for the best box mass for the molecular frequency of $`\omega _08.5`$ in this system leads to $`Q_{opt}=1/(V\kappa _T\omega _0^2)0.00018`$. But as seen in Fig. 2, even for this value of $`Q`$ the oscillations still remain on a very short time scale. Only a value of $`\gamma _V=0.001`$, close to $`\gamma _{Vc}=2Q\mathrm{\Omega }_00.002`$ suppresses the fluctuations efficiently and the autocorrelation function resembles the autocorrelation function in the NVE ensemble. Interestingly, it is also seen that for constant volume the pressure relaxation is considerably slower if the molecular damping is turned on. Conversely, this behavior is practically absent in the constant pressure case, where actually the “best” autocorrelation function was found for $`\gamma _V=0.001`$ (as discussed), combined with some additional molecular friction $`\gamma _0=0.5`$. The statistical accuracy of an observable is given by the ratio between simulation time and the integrated autocorrelation time $`\tau `$, i. e. the value of the time integral over the normalized autocorrelation function . From that perspective, a slow decay with many oscillations is actually not particularly harmful, since the integral value is rather small, due to cancellation. However, the result of Ref. holds only in the asymptotic limit where the simulation time is substantially longer than the decay of the correlation function. Moreover, the numerical integration of an oscillatory function converges only slowly and is hence rather awkward. For these reasons, a simulation algorithm which avoids oscillations is clearly preferable. In order to illustrate this further, Table I lists $`\tau `$ for the autocorrelation functions shown in Fig. 2. The smallest $`\tau `$ is actually found for the pure Andersen NPH simulation. However, turning on the damping increases $`\tau `$ only by less than a factor of two, while the decay time decreases nearly by a factor of ten. The case of a box whose mass has, for reasons of separation of time scales, been chosen substantially larger than the optimum value, is illustrated in the upper right part of Fig. 1 ($`Q=0.1`$, i. e. three orders of magnitude larger than $`Q_{opt}`$). The autocorrelation function decays most rapidly for $`\gamma _V=0.1`$, as theoretically expected ($`\gamma _{Vc}=0.072`$). Compared to undamped dynamics at the same mass, this is a considerable improvement. Nevertheless, this decay is still substantially slower than what one can obtain if also the mass is chosen optimally (Fig. 2) — this is simply the price which is being paid for achieving realistic molecular motion. For practical purposes, the decay obtained for $`Q=0.1`$, $`\gamma _V=0.1`$ is quite acceptable. ## VI Conclusion We have discussed the algorithm of Andersen and the generalization to stochastic piston motion by Feller et al. , generalizing it even further to also include stochastic motion of the particles. We gave a straightforward proof that the NPT ensemble is produced. The implementation by means of a symplectic algorithm is particularly stable and well–suited for MD problems. Another important point, the choice of the right simulation parameters, was studied both theoretically and numerically, and a guideline for their optimum values was given. 
We view this algorithm as a particularly useful realization of the constant–pressure ensemble. ## Acknowledgments Fruitful and critical discussions with Patrick Ahlrichs, Markus Deserno, Alex Bunker and Kristian Müller–Nedebock are gratefully acknowledged.
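As a small practical supplement (not part of the original paper) to the statistical-accuracy discussion of Sec. V C, the integrated autocorrelation time τ of an observable can be estimated from a simulation time series as sketched below; the AR(1) toy series at the end serves only to check the estimator against a known answer.

```python
# Sketch of estimating the normalized autocorrelation function and the integrated
# autocorrelation time tau = integral of C(t) dt, approximated by dt*(1/2 + sum_k C_k).
import numpy as np

def autocorrelation(x):
    """Normalized autocorrelation function of a 1-d time series (FFT-based)."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    f = np.fft.rfft(x, 2 * n)                       # zero-padded transform
    acf = np.fft.irfft(f * np.conj(f))[:n]
    acf /= np.arange(n, 0, -1)                      # unbiased estimate at each lag
    return acf / acf[0]

def integrated_time(x, dt=1.0, t_max=None):
    c = autocorrelation(x)
    k_max = len(c) if t_max is None else min(len(c), int(t_max / dt))
    return dt * (0.5 + np.sum(c[1:k_max]))

# Toy usage: an AR(1) process x_{i+1} = phi*x_i + noise with known autocorrelation phi^k.
rng = np.random.default_rng(1)
phi, n = 0.9, 100_000
x = np.empty(n)
x[0] = 0.0
for i in range(1, n):
    x[i] = phi * x[i - 1] + rng.standard_normal()
print(integrated_time(x, t_max=200))   # should come out close to (1+phi)/(2*(1-phi)) = 9.5
```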
# Galactic cosmic rays & gamma rays: a synthesis ## 1. Introduction We have developed a model which aims to reproduce self-consistently observational data of many kinds related to cosmic-ray origin and propagation: direct measurements of nuclei, antiprotons, electrons and positrons, $`\gamma `$-rays, and synchrotron radiation. These data provide many independent constraints on any model and our approach is able to take advantage of this since it must be consistent with all types of observation. Propagation of primary and secondary nucleons, primary and secondary electrons and positrons are calculated self-consistently. For cosmic rays this approach differs from most others in that the spatial transport is treated numerically. This allows us to include realistic astrophysical information such as the gas distribution and interstellar radiation field. ## 2. Description of the models A numerical method for the calculation of Galactic cosmic-ray propagation in 3D has been developed<sup>2</sup><sup>2</sup>2 For interested users our model is available in the public domain on the World Wide Web, http://www.gamma.mpe-garching.mpg.de/$``$aws/aws.html , as described in detail in Strong & Moskalenko (1998). The basic spatial propagation mechanisms are diffusion and convection, while in momentum space energy loss and diffusive reacceleration are treated. Fragmentation and energy losses are computed using realistic distributions for the interstellar gas and radiation fields. The basic procedure is first to obtain a set of propagation parameters which reproduce the cosmic ray B/C and <sup>10</sup>Be/$`^9`$Be ratios; the same propagation conditions are then applied to primary electrons. Gamma-ray and synchrotron emission are then evaluated with the same model. The models are three dimensional with cylindrical symmetry in the Galaxy, and the basic coordinates are $`(R,z,p)`$, where $`R`$ is Galactocentric radius, $`z`$ is the distance from the Galactic plane, and $`p`$ is the total particle momentum. In the models the propagation region is bounded by $`R=R_h`$, $`z=\pm z_h`$ beyond which free escape is assumed. For a given $`z_h`$ the diffusion coefficient as a function of momentum is determined by B/C for the case of no reacceleration; if reacceleration is assumed then the reacceleration strength (related to the Alfvén speed) is constrained by the energy-dependence of B/C. We include diffusive reacceleration since some stochastic reacceleration is inevitable, and it provides a natural mechanism to reproduce the energy dependence of the B/C ratio without an ad hoc form for the diffusion coefficient (e.g., Seo & Ptuskin 1994). The distribution of cosmic-ray sources is chosen to reproduce (after propagation) the cosmic-ray distribution determined by analysis of EGRET $`\gamma `$-ray data (Strong & Mattox 1996). The primary propagation is computed first, giving the primary distribution as a function of ($`R,z,p`$); then the secondary source functions of nucleons and $`\overline{p}`$, $`e^\pm `$ are obtained from the gas density and cross sections, and finally the secondary propagation is computed. The bremsstrahlung and inverse Compton (IC) $`\gamma `$-rays are computed self-consistently from the gas and radiation fields used for the propagation. ## 3. Evaluation of models In our evaluations we use the B/C data summarized by Webber et al. (1996), from HEAO–3 and Voyager 1 and 2. 
We use the measured <sup>10</sup>Be/$`^9`$Be ratio from Ulysses (Connell 1998) and from Voyager–1,2, IMP–7/8, ISEE–3 as summarized by Lukasiak et al. (1994). In diffusion/convection models with a diffusion coefficient which is a simple power-law in momentum a good fit is not possible; the basic effect of convection is to reduce the variation of B/C with energy, and although this improves the fit at low energies the characteristic peaked shape of the measured B/C cannot be reproduced. Although modulation makes the comparison with the low energy Voyager data somewhat uncertain, the fit is unsatisfactory. The failure to obtain a good fit is an important conclusion since it shows that the simple inclusion of convection cannot solve the problem of the low-energy falloff in B/C. In the absence of convection the limits on the halo size are $`4\mathrm{kpc}<z_h<12\mathrm{kpc}`$. If convection is allowed the lower limit remains but no upper limit can be set, and $`dV/dz<7`$ km s<sup>-1</sup> kpc<sup>-1</sup>. For diffusive reacceleration models, Fig. 1 illustrates the effect on B/C of varying $`v_A`$, from $`v_A=0`$ (no reacceleration) to $`v_A=30`$ km s<sup>-1</sup>, for $`z_h=5`$ kpc. This shows how the initial form becomes modified to produce the characteristic peaked shape. Fig. 2 shows <sup>10</sup>Be/$`^9`$Be for the same models, (a) as a function of energy for various $`z_h`$, (b) as a function of $`z_h`$ at 525 MeV/nucleon corresponding to the Ulysses measurement. Comparing with the Ulysses data point, we again conclude that $`4\mathrm{kpc}<z_h<12`$ kpc. Recently, Webber & Soutoul (1998) and Ptuskin & Soutoul (1998) have obtained $`z_h=24`$ kpc and $`4.9_2^{+4}`$ kpc, respectively, consistent with our results. ## 4. Probes of the interstellar nucleon spectrum: $`\overline{p}`$ and $`e^+`$ Diffuse Galactic $`\gamma `$-ray observations $`>1`$ GeV by EGRET have been interpreted as requiring a harder average nucleon spectrum in interstellar space than that observed directly (Hunter et al. 1997, Gralewicz et al. 1997, Mori 1997, Moskalenko & Strong 1998b,c). A sensitive test of the interstellar nucleon spectra is provided by secondary antiprotons and positrons. Secondary positrons and antiprotons in Galactic cosmic rays are produced in collisions of cosmic-ray particles with interstellar matter<sup>3</sup><sup>3</sup>3 Secondary origin of cosmic-ray antiprotons and positrons is basically accepted, though some other exotic contributors such as, e.g., neutralino annihilation (Bottino et al. 1998, Baltz & Edsjö 1998) are also discussed. . Because they are secondary, they reflect the large-scale nucleon spectrum independent of local irregularities in the primaries and thus provide an essential check on propagation models and also on the interpretation of diffuse $`\gamma `$-ray emission (Moskalenko & Strong 1998a, Moskalenko et al. 1998, Strong et al. 1999). These are an important diagnostic for models of cosmic-ray propagation and provide information complementary to that provided by secondary nuclei. However, unlike secondary nuclei, antiprotons reflect primarily the propagation history of the protons, the main cosmic-ray component. We consider 3 different models which differ mainly in their assumptions about the electron and nucleon spectra (Strong et al. 1999). In model C (“conventional”) the electron and nucleon spectra are adjusted to agree with local measurements. 
Model HN (“hard nucleon spectrum”) uses the same electron spectrum as in model C, but it is adjusted to match the $`\gamma `$-ray data at the cost of a much harder proton spectrum than observed. In model HEMN (“hard electron spectrum and modified nucleon spectrum”) the electron spectrum is adjusted to match the $`\gamma `$-ray emission above 1 GeV via IC emission, relaxing the requirement of fitting the locally measured electrons above 10 GeV, and the nucleon spectrum at low energies is modified to obtain an improved fit to the $`\gamma `$-ray data. (Some freedom is allowed since solar modulation affects direct measurements of nucleons below 20 GeV, and the locally measured nucleon spectrum may not necessarily be representative of the average on Galactic scales either in spectrum or intensity due to details of Galactic structure.) Our calculations of the antiproton/proton ratio, $`\overline{p}/p`$, and secondary positron spectra for these models (with reacceleration) are shown in Fig. 3. In the case of the conventional model, our results (solid lines) agree well with measurements above a few GeV where solar modulation is small and with the antiproton calculations of Simon et al. (1998). The dotted lines in Fig. 3 show the $`\overline{p}/p`$ ratio and positron spectrum for the HN model; the ratio is still consistent with the data at low energies but rapidly increases toward higher energies and becomes $``$4 times higher at 10 GeV. Up to 3 GeV it does not conflict with the data with their large error bars. It is however larger than the point at 3.7–19 GeV (Hof et al. 1996) by about $`5\sigma `$. Clearly we cannot conclude definitively on the basis of this one point, but it does indicate the sensitivity of this test. Positrons also provide a good probe of the nucleon spectrum, but are more affected by energy losses and propagation uncertainties. The predicted positron flux in the HN model is a factor 4 above the Barwick et al. (1998) measurements and hence provides further evidence against the “hard nucleon spectrum” hypothesis. The dashed lines in Fig. 3 show our results for the HEMN model. The predictions are larger than the conventional model but still agree with the antiproton and positron measurements. ## 5. Diffuse Galactic continuum gamma rays Recent results from both COMPTEL and EGRET indicate that IC scattering is a more important contributor to the diffuse emission that previously believed. The puzzling excess in the EGRET data $`>1`$ GeV relative to that expected for $`\pi ^0`$-decay has been suggested to orginate in IC scattering from a hard interstellar electron spectrum (e.g., Pohl & Esposito 1998). Our combined approach allows us to test this hypothesis (Strong et al. 1999)<sup>4</sup><sup>4</sup>4 Our model includes a new calculation of the interstellar radiation field based on stellar population models and IRAS and COBE data. . A “conventional” model, which matches directly measured electron and nucleon spectra and is consistent with synchrotron spectral index data, can fit the observed $`\gamma `$-ray spectrum only in the range 30 MeV – 1 GeV. A hard nucleon spectrum (HN model) can improve the fit $`>1`$ GeV but as described above the high energy antiproton and positron data probably exclude the hypothesis that the local nucleon spectrum differs significantly from the Galactic average. We thus consider the “hard electron spectrum” alternative. 
The electron injection spectral index is taken as –1.7, which after propagation with reacceleration provides consistency with radio synchrotron data (a crucial constraint). Following Pohl & Esposito (1998), for this model we do not require consistency with the locally measured electron spectrum above 10 GeV since the rapid energy losses cause a clumpy distribution so that this is not necessarily representative of the interstellar average. For this case, the interstellar electron spectrum deviates strongly from that locally measured. Because of the increased IC contribution at high energies, the predicted $`\gamma `$-ray spectrum can reproduce the overall intensity from 30 MeV – 10 GeV (Fig. 4 left) but the detailed shape above 1 GeV is still problematic. Fig. 4 (right) illustrates further refinement of this scenario (HEMN model) showing that a good fit is possible (Strong et al. 1999). Fig. 5 shows the model latitude and longitude $`\gamma `$-ray distributions for the inner Galaxy for 1–2 GeV, convolved with the EGRET point-spread function, compared to EGRET Phase 1–4 data (with known point sources subtracted). It shows that the HEMN model with large IC component can indeed reproduce the data. None of these models fits the $`\gamma `$-ray spectrum below $``$30 MeV as measured by the Compton Gamma-Ray Observatory (Fig. 4). In order to fit the low-energy part as diffuse emission, without violating synchrotron constraints (Strong et al. 1999), requires a rapid upturn in the cosmic-ray electron spectrum below 200 MeV. However, in view of the energetics problems (Skibo et al. 1997), a population of unresolved sources seems more probable and would be the natural extension of the low energy plane emission seen by OSSE (Kinzer et al. 1997) and GINGA (Yamasaki et al. 1997). ## 6. Conclusions Our propagation model has been used to study several areas of high energy astrophysics. We believe that synthesizing information from classical cosmic-ray studies with $`\gamma `$-ray and other data leads to tighter constraints on cosmic-ray origin and propagation. We have shown that simple diffusion/convection models have difficulty in accounting for the observed form of the B/C ratio without special assumptions chosen to fit the data, and do not obviate the need for an ad hoc form for the diffusion coefficient. On the other hand we confirm the conclusion of other authors that models with reacceleration account naturally for the energy dependence over the whole observed range. Taking these results together tends to favour the reacceleration picture. We take advantage of the recent Ulysses Be measurements to obtain estimates of the halo size. Our limits on the halo height are $`4\mathrm{kpc}<z_h<12`$ kpc. These limits should be an improvement on previous estimates because of the more accurate Be data, our treatment of energy losses, and the inclusion of more realistic astrophysical details (such as, e.g., the gas distribution) in our model, although it should be noted that the limits are strictly only valid in the context of this particular halo picture. The positron and antiproton fluxes calculated are consistent with the most recent measurements. The $`\overline{p}/p`$ data point above 3 GeV and positron flux measurements seem to rule out the hypothesis, in connection with the $`>1`$ GeV $`\gamma `$-ray excess, that the local cosmic-ray nucleon spectrum differs significantly from the Galactic average (by implication adding support to the “hard electron” alternative). 
It therefore seems probable that the interstellar electron spectrum is harder than that locally measured, but this remains to be confirmed by detailed study of the angular distribution. The low-energy Galactic $`\gamma `$-ray emission is difficult to explain as truly diffuse and a point source population seems more probable.
## References
Baltz, E. A., & Edsjö, J. 1998, Phys. Rev. D, 59, 023511
Barwick, S. W., et al. 1998, ApJ, 498, 779
Bottino, A., et al. 1998, Phys. Rev. D, 58, 123503
Connell, J. J. 1998, ApJ, 501, L59
Gralewicz, P., et al. 1997, A&A, 318, 925
Hof, M., et al. 1996, ApJ, 467, L33
Hunter, S. D., et al. 1997, ApJ, 481, 205
Kinzer, R. L., Purcell, W. R., & Kurfess, J. D. 1997, in AIP Conf. Proc. 410, 4th Compton Symp., ed. C. D. Dermer et al. (New York: AIP), p.1193
Lukasiak, A., et al. 1994, ApJ, 423, 426
Mori, M. 1997, ApJ, 478, 225
Moskalenko, I. V., & Strong, A. W. 1998a, ApJ, 493, 694
Moskalenko, I. V., & Strong, A. W. 1998b, in Proc. 16th European Cosmic Ray Symp., ed. J. Medina (Alcalá: Univ. de Alcalá), p.347 (astro-ph/9807288)
Moskalenko, I. V., & Strong, A. W. 1998c, Astroph. Lett. & Comm. (in Proc. 3rd INTEGRAL Workshop), in press (astro-ph/9811221)
Moskalenko, I. V., Strong, A. W., & Reimer, O. 1998, A&A, 338, L75
Pohl, M., & Esposito, J. A. 1998, ApJ, 507, 327
Ptuskin, V. S., & Soutoul, A. 1998, A&A, 337, 859
Seo, E. S., & Ptuskin, V. S. 1994, ApJ, 431, 705
Simon, M., Molnar, A., & Roesler, S. 1998, ApJ, 499, 250
Skibo, J. G., et al. 1997, ApJ, 483, L95
Strong, A. W., & Mattox, J. R. 1996, A&A, 308, L21
Strong, A. W., & Moskalenko, I. V. 1998, ApJ, 509, 212
Strong, A. W., Moskalenko, I. V., & Reimer, O. 1999, ApJ, submitted (astro-ph/9811296)
Strong, A. W., et al. 1998, Astroph. Lett. & Comm. (in Proc. 3rd INTEGRAL Workshop), in press
Webber, W. R., & Soutoul, A. 1998, ApJ, 506, 335
Webber, W. R., et al. 1996, ApJ, 457, 435
Yamasaki, N. Y., et al. 1997, ApJ, 481, 821
# The Dilute Bose Gas Revised ## Abstract The well–known results concerning a dilute Bose gas with the short–range repulsive interaction should be reconsidered due to a thermodynamic inconsistency of the method being basic to much of the present understanding of this subject. The aim of our paper is to propose a new way of treating the dilute Bose gas with an arbitrary strong interaction. Using the reduced density matrix of the second order and a variational procedure, this way allows us to escape the inconsistency mentioned and operate with singular potentials of the Lennard–Jones type. The derived expansion of the condensate depletion in powers of the boson density $`n=N/V`$ reproduces the familiar result, while the expansion for the mean energy per particle is of the new form:$`\epsilon =2\pi \mathrm{}^2an/m\{1+128/(15\sqrt{\pi })\sqrt{na^3}(15b/8a)+\mathrm{}\}`$, where $`a`$ is the scattering length and $`b0`$ stands for one more characteristic length depending on the shape of the interaction potential (in particular, for the hard spheres $`a=b`$). All the consideration concerns the zero temperature. It is well–known that to investigate a dilute Bose gas of particles with an arbitrary strong repulsion (the strong–coupling regime), one should go beyond the Bogoliubov approach (weak–coupling case) and treat the short–range boson correlations in a more accurate way. An ordinary manner of doing so is the use of the Bogoliubov model with the “dressed”, or effective, interaction potential containing “information” on the short–range boson correlations (see Ref. ). Below it is demonstrated that this manner leads to a loss of the thermodynamic consistency. To overcome this trouble, we propose a new way of investigating the strong–coupling regime which concerns the reduced density matrix of the second order (the 2–matrix) and is based on the variational method. The 2–matrix for the many–body system of spinless bosons can be represented as : $`\rho _2(𝐫_1^{},𝐫_2^{};𝐫_1,𝐫_2)=F_2(𝐫_1,𝐫_2;𝐫_1^{},𝐫_2^{})/\{N(N1)\},`$ where the pair correlation function is given by $$F_2(𝐫_1,𝐫_2;𝐫_1^{},𝐫_2^{})=\psi ^{}(𝐫_1)\psi ^{}(𝐫_2)\psi (𝐫_2^{})\psi (𝐫_1^{}).$$ (1) Here $`\psi (𝐫)`$ and $`\psi ^{}(𝐫)`$ denote the boson field operators. Recently it has been found that for the uniform system with a small depletion of the zero–momentum state the correlation function (1) can be written in the thermodynamic limit as follows : $`F_2(𝐫_1,𝐫_2;𝐫_1^{},𝐫_2^{})=n_0^2\phi ^{}(r)\phi (r^{})`$ (2) $`+2n_0{\displaystyle \frac{d^3q}{(2\pi )^3}n_q\phi _{𝐪/2}^{}(𝐫)\phi _{𝐪/2}(𝐫^{})\mathrm{exp}\{i𝐪(𝐑^{}𝐑)\}},`$ (3) where $`𝐫=𝐫_1𝐫_2,𝐑=(𝐫_1+𝐫_2)/2`$ and similar relations take place for $`𝐫^{}`$ and $`𝐑^{}`$, respectively. In Eq. (3) $`n_0=N_0/V`$ is the density of the particles in the zero–momentum state, $`n_q=a_𝐪^{}a_𝐪`$ stands for the distribution of the noncondensed bosons over momenta. Besides, $`\phi (r)`$ is the wave function of a pair of particles being both condensed. In turn, $`\phi _{𝐪/2}(𝐫)`$ denotes the wave function of the relative motion in a pair of bosons with the total momentum $`\mathrm{}𝐪`$, this pair including one condensed and one noncondensed particles. So, Eq. (3) takes into account the condensate–condensate and supracondensate–condensate pair states and is related to the situation of a small depletion of the zero–momentum one–boson state. 
For the wave functions $`\phi (r)`$ and $`\phi _𝐩(𝐫)`$ we have $`\phi (r)=1+\psi (r),\phi _𝐩(𝐫)=\sqrt{2}\mathrm{cos}(\mathrm{𝐩𝐫})+\psi _𝐩(𝐫)(p0)`$ (4) with the boundary conditions $`\psi (r)0`$ and $`\psi _𝐩(𝐫)0`$ for $`r\mathrm{}.`$ The functions $`\psi (r)`$ and $`\psi _𝐩(𝐫)`$ can explicitly be expressed in terms of the Bose operators $`a_𝐩^{}`$ and $`a_𝐩`$ . In particular, $$\stackrel{~}{\psi }(k)=\psi (r)\mathrm{exp}(i\mathrm{𝐤𝐫})d^3r=a_𝐤a_𝐤/n_0.$$ (5) Having in our disposal the distribution function $`n_k`$ and the set of the pair wave functions $`\phi (r)`$ and $`\phi _𝐩(𝐫)`$, we are able to calculate the main thermodynamic quantities of the system of interest. In particular, the mean energy per particle is expressed in terms of $`n_k`$ and $`g(r)`$ via the well–known formula $`\epsilon ={\displaystyle \frac{d^3k}{(2\pi )^3}T_k\frac{n_k}{n}}+{\displaystyle \frac{n}{2}}{\displaystyle g(r)\mathrm{\Phi }(r)d^3r},`$ (6) where $`T_k=\mathrm{}^2k^2/2m`$ is the one–particle kinetic energy, $`n=N/V`$ stands for the boson density and the relation $$g(r)=F_2(𝐫_1,𝐫_2;𝐫_1,𝐫_2)/n^2.$$ (7) is valid for the pair distribution function $`g(r)`$. The starting point of our investigation is the weak–coupling regime which implies weak spatial correlations of particles and, thus, is characterized by the set of the inequalities $$|\psi (r)|1,|\psi _𝐩(𝐫)|1.$$ (8) Specifically, the Bogoliubov model corresponds to the choice $$|\psi (r)|1,\psi _𝐩(𝐫)=0.$$ (9) Besides, owing to a small depletion of the Bose condensate $`(nn_0)/n`$ we have for the one–particle density matrix $`F_1(r)=\psi ^{}(𝐫_1)\psi (𝐫_2)`$: $$\left|\frac{F_1(r)}{n}\right|=\left|\frac{d^3k}{(2\pi )^3}\frac{n_k}{n}\mathrm{exp}(i\mathrm{𝐤𝐫})\right|\frac{nn_0}{n}1.$$ So, investigating the Bose gas within the Bogoliubov scheme, we have two small quantities: $`\psi (r)`$ and $`F_1(r)/n`$. This enables us to write Eq. (7) with the help of (3) as follows: $$g(r)=1+2\psi (r)+\frac{2}{n}\frac{d^3k}{(2\pi )^3}n_k\mathrm{exp}(i\mathrm{𝐤𝐫}),$$ (10) where we restricted ourselves to the terms linear in $`\psi (r)`$ and $`F_1(r)/n`$ and put $`\psi ^{}(r)=\psi (r)`$ because the pair wave functions can be chosen as real quantities. Equations for $`\stackrel{~}{\psi }(k)`$ and $`n_k`$ can be found varying the mean energy (6) with (10) taken into account. However, previously one should realize an important point, namely: $`n_k`$ and $`\stackrel{~}{\psi }(k)`$ can not be independent variables. Indeed, when there is no interaction between particles, there are no spatial particle correlations either. So, $`\stackrel{~}{\psi }(k)=0`$ and, since the zero–temperature case is considered, all the bosons are condensed, $`n_k=0`$. While “switching on” the interaction results in appearing the spatial correlations and condensate depletion: $`\stackrel{~}{\psi }(k)0`$ together with $`n_k0`$. In the framework of the Bogoliubov scheme $`\stackrel{~}{\psi }(k)`$ is related to $`n_k`$ by the expression $$n_k(n_k+1)=n_0^2\stackrel{~}{\psi }^2(k).$$ (11) Indeed, the canonical Bogoliubov transformation implies that $$a_𝐤=u_k\alpha _𝐤+v_k\alpha _𝐤^{},a_𝐤^{}=u_k\alpha _𝐤^{}+v_k\alpha _𝐤,$$ (12) where $$u_k^2v_k^2=1.$$ (13) At zero temperature $`\alpha _𝐤^{}\alpha _𝐤=0`$ and, using Eqs. (5) and (12) we arrive at $$n_k=v_k^2,\stackrel{~}{\psi }(k)=u_kv_k/n_0.$$ (14) With Eqs. (13) and (14) one can readily obtain (11). 
Now, let us show that all the results on the thermodynamics of a weak–coupling Bose gas can be derived for the Bogoliubov scheme with variation of the mean energy (6) under the conditions (10) and (11). Inserting (10) into (6) and, then, varying the obtained expression, we arrive at $$\delta \epsilon =\frac{d^3k}{(2\pi )^3}\left[\left(T_k+n\stackrel{~}{\mathrm{\Phi }}(k)\right)\frac{\delta n_k}{n}+n\stackrel{~}{\mathrm{\Phi }}(k)\delta \stackrel{~}{\psi }(k)\right].$$ (15) Relation (11) connecting $`\stackrel{~}{\psi }(k)`$ with $`n_k`$ results in $$\delta \stackrel{~}{\psi }(k)=\frac{(2n_k+1)\delta n_k}{2n_0^2\stackrel{~}{\psi }(k)}+\frac{\stackrel{~}{\psi }(k)}{n_0}\frac{d^3q}{(2\pi )^3}\delta n_q,$$ (16) where the equality $$n=n_0+\frac{d^3k}{(2\pi )^3}n_k$$ (17) is taken into consideration. Setting $`\delta \epsilon =0`$ and using Eqs. (15) and (16), we derive the following expression: $`2T_k`$ $`\stackrel{~}{\psi }(k)={\displaystyle \frac{n^2}{n_0^2}}\stackrel{~}{\mathrm{\Phi }}(k)(1+2n_k)`$ (19) $`+2n\stackrel{~}{\psi }(k)\left(\stackrel{~}{\mathrm{\Phi }}(k)+{\displaystyle \frac{n}{n_0}}{\displaystyle \frac{d^3q}{(2\pi )^3}\stackrel{~}{\mathrm{\Phi }}(q)\stackrel{~}{\psi }(q)}\right).`$ Here one should realize that Eq. (19) is able to yield results being accurate only to the leading order in $`(nn_0)/n`$ because the used expression for $`g(r)`$ given by (10) is valid to the next–to–leading order . So, Eq. (19) should be rewritten as $$2T_k\stackrel{~}{\psi }(k)=\stackrel{~}{\mathrm{\Phi }}(k)(1+2n_k)+2n\stackrel{~}{\psi }(k)\mathrm{\Phi }(k).$$ (20) Eq. (20) is an equation of the Bethe–Goldstone type or, in other words, the in–medium Schrödinger equation for the pair wave function. As $`2\stackrel{~}{\mathrm{\Phi }}(k)(n_k+n\stackrel{~}{\psi }(k))`$ is the product of the Fourier transforms of $`\mathrm{\Phi }(r)`$ and $`n(g(r)1)`$, we can rewrite Eq. (20) in the more customary form $$\frac{\mathrm{}^2}{m}^2\phi (r)=\mathrm{\Phi }(r)+n\mathrm{\Phi }(|𝐫𝐲|)\left(g(y)1\right)d^3y.$$ (21) The structure of Eq. (21) is discussed in the papers . Here we only remark that the right–hand side (r.h.s.) of (21) is the in–medium potential of the boson–boson interaction in the weak–coupling approximation. The system of equations (11) and (20) can easily be solved, which leads to the familiar results : $`n_k={\displaystyle \frac{1}{2}}\left({\displaystyle \frac{T_k+n\stackrel{~}{\mathrm{\Phi }}(k)}{\sqrt{T_k^2+2nT_k\stackrel{~}{\mathrm{\Phi }}(k)}}}1\right),`$ (22) $`\stackrel{~}{\psi }(k)={\displaystyle \frac{\stackrel{~}{\mathrm{\Phi }}(k)}{2\sqrt{T_k^2+2nT_k\stackrel{~}{\mathrm{\Phi }}(k)}}}.`$ (23) Now we are able to demonstrate that the investigation of the strong–coupling case based on the Bogoliubov model with the effective boson–boson interaction, results in a loss of the thermodynamic consistency. Indeed, as it was shown in the previous paragraph, any calculating scheme using the basic relations of the Bogoliubov model (10), (11) conclusively leads to Eqs. (20)-(23) provided this scheme does yield the minimum of the mean energy. In this case Eqs. (20)-(23) certainly includes the quantity $`\mathrm{\Phi }(r)`$ which is the “bare” interaction potential appearing in (6). The use of the Bogoliubov model with the effective interaction potential substituted for $`\mathrm{\Phi }(r)`$ can in no way disturb the relations given by (10) and (11). And Eq. (6) is the same in both the weak– and strong–coupling regimes. 
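For orientation, Eq. (22) can be evaluated numerically in the simplest weak-coupling case of a momentum-independent (contact) potential with Fourier transform 4πℏ²a/m. This Born-level choice is only an illustration (the point of the paper is precisely that it fails for singular potentials), but the integrated depletion then reproduces the textbook value (8/(3√π))(na³)^{1/2}. A short sketch in units ℏ = m = 1:

```python
# Sketch (units hbar = m = 1): integrate the Bogoliubov distribution n_k of Eq. (22)
# for a momentum-independent potential Phi(k) = g = 4*pi*a (Born-level contact choice,
# used only as an illustration) and compare the condensate depletion with the
# textbook value (n - n0)/n = (8 / (3 sqrt(pi))) * sqrt(n a^3).
import numpy as np
from scipy.integrate import quad

def depletion_fraction(n, a):
    g = 4.0 * np.pi * a
    def n_k(k):
        T = 0.5 * k * k                                   # one-particle kinetic energy
        return 0.5 * ((T + n * g) / np.sqrt(T * T + 2.0 * n * T * g) - 1.0)
    integrand = lambda k: k * k * n_k(k) / (2.0 * np.pi ** 2)
    val, _ = quad(integrand, 0.0, np.inf, limit=200)
    return val / n

n, a = 1.0, 0.01
print(depletion_fraction(n, a))                           # numerical integral of Eq. (22)
print(8.0 / (3.0 * np.sqrt(np.pi)) * np.sqrt(n * a**3))   # textbook dilute-gas result
```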
Thus, any attempts of replacing $`\mathrm{\Phi }(r)`$ by the effective “dressed” potential without modifications of (10) and (11) results in a calculating procedure which does not really provide the minimum of the mean energy. It is nothing else but a loss of the thermodynamic consistency. Remark that we do not mean, of course, that the t–matrix approach or the pseudopotential method can not be applied in the quantum scattering problem. It is only stated that the usual way of combining the ladder diagrams with the random phase approximation faces the trouble mentioned above. Though our present investigation is limited by the consideration of the Bose systems, the derived result gives a hint that the similar situation is likely to take place in the Fermi case, too. In this connection it is worth noting the problem associated with the lack of self–consistency of the standard method of treating the dilute Fermi gas . The strong–coupling regime is characterized by significant spatial correlations. So, Eq. (9) resulting in (10) is not relevant for an arbitrary strong repulsion between bosons at small separations when we have $`\psi (0)=1,\psi _𝐩(0)=\sqrt{2}`$ (see Refs. ). Therefore, to investigate the strong–coupling regime, Eq. (10) should be abandoned in favor of (3). Expression (3) is accurate to the next–to–leading order in $`(nn_0)/n`$. So, using (3) and (7), we can write $$g(r)=\phi ^2(r)+\frac{2}{n}\frac{d^3q}{(2\pi )^3}n_q\left(\phi _{𝐪/2}^2(𝐫)\phi ^2(r)\right).$$ (24) Let us now perturb $`\stackrel{~}{\psi }(k)`$ and $`n(k)`$. Working to the first order in the perturbation and keeping in mind conditions (11) and (24), from (6) we derive: $$2T_k\stackrel{~}{\psi }(k)=\stackrel{~}{U}(k)(1+2n_k)+2n\stackrel{~}{\psi }(k)\stackrel{~}{U}^{}(k)$$ (25) with $$\stackrel{~}{U}(k)=\phi (r)\mathrm{\Phi }(r)\mathrm{exp}(i\mathrm{𝐤𝐫})d^3r$$ (26) and $`\stackrel{~}{U}^{}(k)={\displaystyle \left(\phi _{𝐤/2}^2(𝐫)\phi ^2(r)\right)\mathrm{\Phi }(r)d^3r}.`$ (27) Using Eqs. (26), (27) as well as the relation $`\psi _𝐤(𝐫)\sqrt{2}\psi (r)(k0)`$ (see the boundary conditions (4)), we obtain $`\stackrel{~}{U}(0)\stackrel{~}{U}^{}(0).`$ This implies that the system of Eqs. (11) and (25) is not able to yield the relation $`n_k1/k(k0)`$ following from the “$`1/k^2`$” theorem of Bogoliubov for the zero temperature . Indeed, let us assume $`n_k\mathrm{}`$ for $`k0.`$ Then, from Eq. (11) at $`n=n_0`$ we find $`n|\stackrel{~}{\psi }(k)|/n_k1`$ when $`k0.`$ On the contrary, Eq. (25) gives $`n|\stackrel{~}{\psi }(k)|/n_k\stackrel{~}{U}(0)/\stackrel{~}{U}^{}(0)1`$ for $`k0.`$ So, consideration of the Bose gas based on Eqs. (3) and (11) does not produce satisfactory results. Nevertheless, it is worth noting that Eq. (25) has an important peculiarity which differentiate it from Eq. (20) in an advantageous way. The point is that in both the limits $`n0`$ and $`k\mathrm{}`$ Eq. (25) is reduced to $$\frac{\mathrm{}^2}{m}^2\phi (r)+\mathrm{\Phi }(r)\phi (r)=0.$$ (28) As it is seen, this is the exact “bare” (not in–medium) Schrödinger equation, other than its Born approximation following from (21). Thus, we can expect the line of our investigation to be right. As it was shown in the previous paragraph, an approach adequate for a dilute Bose gas with an arbitrary strong interaction can not be constructed without modifications of Eq. (11). 
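Equation (28) is just the zero-energy two-body Schrödinger equation, so the scattering length that enters the discussion below can be extracted from it numerically: writing $`\phi =u(r)/r`$, the function $`u`$ grows linearly outside the potential as $`u(ra)`$. The sketch below (ours) integrates Eq. (28) for an illustrative soft-sphere barrier; the potential, its parameters, and the units are our choices, and the extracted scattering length is compared with the textbook analytic value for that barrier.

```python
# Numerical solution of Eq. (28), -(hbar^2/m) grad^2 phi + Phi phi = 0, in the
# s-wave channel: u'' = (m/hbar^2) Phi(r) u with phi = u/r, u(0) = 0, and the
# scattering length a read off from the large-r behaviour u ~ C (r - a).
import numpy as np
from scipy.integrate import solve_ivp

V0, R = 50.0, 1.0                      # soft-sphere barrier height and radius
m_over_hbar2 = 1.0                     # units with hbar^2/m = 1

def rhs(r, y):
    u, du = y
    return [du, m_over_hbar2 * (V0 if r < R else 0.0) * u]

r_max = 20.0 * R
sol = solve_ivp(rhs, [1e-8, r_max], [0.0, 1.0],
                rtol=1e-10, atol=1e-12, max_step=0.1)
u, du = sol.y[0, -1], sol.y[1, -1]
a_num = r_max - u / du                 # from u(r) ~ C (r - a) outside the barrier

kappa = np.sqrt(m_over_hbar2 * V0)
a_exact = R * (1.0 - np.tanh(kappa * R) / (kappa * R))
print(f"a (numerical) = {a_num:.6f},  a (analytic) = {a_exact:.6f}")
```

Returning to the main argument, the necessity of modifying Eq. (11) is supported by the following operator inequality.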
This is also in agreement with a consequence of the relation $$|a_𝐤a_𝐤|^2a_𝐤a_𝐤^{}a_𝐤^{}a_𝐤$$ (29) resulting from the inequality of Cauchy–Schwarz–Bogoliubov $`|\widehat{A}\widehat{B}|^2\widehat{A}\widehat{A}^{}\widehat{B}^{}\widehat{B}.`$ With (5) and (29) one can easily derive $`n_0^2\stackrel{~}{\psi }^2(k)n_k(n_k+1)`$. Thus, it is reasonable to assume that Eq. (11) takes into account only the condensate–condensate channel and ignores the supracondensate–condensate ones. Now the question arises how to find corrections to the r.h.s. of Eq. (11). At present we have no regular procedure allowing us to do this in any order of $`(nn_0)/n`$. However, there exists an argument which makes it possible to realize the first step in this direction. The matter is that the alterations needed have to produce the equation for $`\stackrel{~}{\psi }_𝐩(𝐤)`$ which is reduced to the equation for $`\stackrel{~}{\psi }(k)`$ in the limit $`p0.`$ Though this requirement does not uniquely determine the corrections to Eq. (11), it turns out to be significantly restrictive. In particular, even the simplest variant of correcting Eq. (11) in this way, leads to promising results. Indeed, this variant is specified by the expression $`n_k(n_k+1)=n_0^2\stackrel{~}{\psi }^2(k)+2n_0{\displaystyle \frac{d^3q}{(2\pi )^3}n_q\stackrel{~}{\psi }_{𝐪/2}^2(𝐤)}.`$ (30) Eq. (30) is valid to the next–to–leading order in $`(nn_0)/n`$. So, we may rewrite it as $$n_k(n_k+1)=n^2\stackrel{~}{\psi }^2(k)+2n\frac{d^3q}{(2\pi )^3}n_q(\stackrel{~}{\psi }_{𝐪/2}^2(𝐤)\stackrel{~}{\psi }^2(k)).$$ (31) Perturbing $`\stackrel{~}{\psi }(k)`$ and $`n_k`$ and bearing in mind conditions (24) and (31), (6) gives Eq. (25) again. However, now $`\stackrel{~}{U}^{}(k)`$ obeys the new relation $`\stackrel{~}{U}^{}(k)`$ $`=`$ $`{\displaystyle \left(\phi _{𝐤/2}^2(𝐫)\phi ^2(r)\right)\mathrm{\Phi }(r)d^3r}`$ (33) $`{\displaystyle \frac{d^3q}{(2\pi )^3}\frac{\stackrel{~}{U}(q)\left(\stackrel{~}{\psi }_{𝐤/2}^2(𝐪)\stackrel{~}{\psi }^2(q)\right)}{\stackrel{~}{\psi }(q)}}`$ which significantly differs from (27). Indeed, the choice of the pair wave functions as real quantities implies that operating with integrands in (26) and (33), one can exploit $`\psi _𝐩(𝐫)\sqrt{2}\psi (r)p^2`$ at small $`p`$ . For $`k0`$ this provides $`\stackrel{~}{U}^{}(k)\stackrel{~}{U}(k)=t_k=ck^4+\mathrm{}.`$ Similar to Eq. (20), Eq. (25) can yields results correct only to the leading order in $`(nn_0)/n`$. So, it has to be solved together with (11) where $`n_0^2`$ should be replaced by $`n^2`$, rather than with (31). This leads to the following relation: $`n_k={\displaystyle \frac{1}{2}}\left({\displaystyle \frac{\stackrel{~}{T}_k+n\stackrel{~}{U}(k)}{\sqrt{\stackrel{~}{T}_k^2+2n\stackrel{~}{T}_k\stackrel{~}{U}(k)}}}1\right),`$ (34) $`\stackrel{~}{\psi }(k)={\displaystyle \frac{\stackrel{~}{U}(k)}{2\sqrt{\stackrel{~}{T}_k^2+2n\stackrel{~}{T}_k\stackrel{~}{U}(k)}}},`$ (35) where $`\stackrel{~}{T}_k=T_k+nt_k`$. In the limit $`k0`$ Eq. (35) gives $`n_k(\sqrt{nm\stackrel{~}{U}(0)}/\mathrm{}k1)/2`$, which is fully consistent with the “$`1/k^2`$” theorem of Bogoliubov for the zero temperature . Eqs. (26) and (35) should be solved in a self–consistent manner. So, for $`n0`$ one can derive $$\stackrel{~}{U}(k)=\stackrel{~}{U}^{(0)}(k)(1+8\sqrt{na^3}/\sqrt{\pi }).$$ (36) Here $`\stackrel{~}{U}^{(0)}(k)=\phi ^{(0)}(r)\mathrm{\Phi }(r)\mathrm{exp}(i\mathrm{𝐤𝐫})d^3r`$, where $`\phi ^{(0)}(r)`$ obeys Eq. (28). 
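To the leading order in the gas parameter, Eqs. (34) and (35) reduce to the Bogoliubov form with $`\stackrel{~}{U}(k)\stackrel{~}{U}(0)=4\pi \mathrm{}^2a/m`$ and with $`t_k`$ neglected, and integrating the resulting $`n_k`$ over $`𝐤`$ reproduces the familiar square-root condensate depletion quoted below as Eq. (37). The numerical sketch below (ours) performs that check; the contact-like form of $`\stackrel{~}{U}`$ and the parameter values are purely illustrative.

```python
# Numerical check of the condensate depletion of Eq. (37): integrate the
# occupation numbers n_k of Eq. (34) with T~_k -> T_k and U~(k) -> 4*pi*a
# (units hbar = m = 1) and compare with 8*sqrt(n a^3)/(3*sqrt(pi)).
import numpy as np
from scipy.integrate import quad

a, n = 0.01, 1.0                          # scattering length and density, n*a^3 << 1
U0 = 4.0 * np.pi * a                      # U~(0) = 4*pi*hbar^2*a/m with hbar = m = 1

def n_k(k):
    Tk = 0.5 * k * k
    return 0.5 * ((Tk + n * U0) / np.sqrt(Tk * Tk + 2.0 * n * Tk * U0) - 1.0)

integrand = lambda k: 4.0 * np.pi * k * k * n_k(k) / (2.0 * np.pi) ** 3
depl = (quad(integrand, 1e-8, 10.0, limit=200)[0]
        + quad(integrand, 10.0, 1.0e3, limit=200)[0])

print(depl / n, 8.0 * np.sqrt(n * a ** 3) / (3.0 * np.sqrt(np.pi)))   # nearly equal
```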
Further, substituting $`k=\sqrt{n}y`$ in the integral for the condensate depletion $`(nn_0)/n=1/(2\pi )^3_0^+\mathrm{}𝑑k\mathrm{\hspace{0.17em}4}\pi k^2n_k/n`$, we obtain the familiar result $$(nn_0)/n=8\sqrt{na^3}/(3\sqrt{\pi })+\mathrm{},$$ (37) $`a`$ being the scattering length. Inserting (24), (34) and (35) into Eq. (6) and using (36), in a similar manner we derive $$\epsilon =\frac{2\pi \mathrm{}^2an}{m}\left\{1+\frac{128}{15\sqrt{\pi }}\sqrt{na^3}\left(1\frac{5}{8}\frac{b}{a}\right)+\mathrm{}\right\},$$ (38) where $`b0`$ is one more characteristic length defined as $$b=\frac{1}{4\pi }(\phi ^{(0)}(r))^2d^3r.$$ (39) As it is seen, the well–known result of papers can be derived from (38) with the choice $`b=0`$. However, this approximation is rather crude because the case of the hard–sphere interaction ($`\mathrm{\Phi }(r)=0(r>a)`$ and $`\mathrm{\Phi }(r)\mathrm{}(r<a)`$) is specified by $`b=a`$: $$\epsilon =\frac{2\pi \mathrm{}^2an}{m}\left\{1+\frac{16}{5\sqrt{\pi }}\sqrt{na^3}+\mathrm{}\right\}.$$ (40) In the general case and, in particular, for the singular potentials of the Lennard–Jones type we have $`ab`$. Remark that the last term in the r.h.s. of (30) does not make any contribution into the results given by (37) and (38). However, the next orders in the expansions of the energy and depletion depend on its contribution essentially. Concluding let us take notice of the important points of this Letter once more. It was demonstrated that thermodynamically consistent calculations based on (10) and (11) conclusively result in Eqs. (20)-(23). Therefore, using the Bogoliubov model with the “dressed” interaction does not provide the satisfactory solution of the problem of the strong–coupling Bose gas. As it was shown, when investigating this subject, one should go beyond the Bogoliubov scheme. To do this, we developed the approach reduced to the system of Eqs. (26), (33), (34) and (35). This equations reproduce the well–known result (37) for the condensate depletion and yields the new expansion (38) in powers of $`n`$ for the energy, (40) being the particular case of the hard spheres. One can expect alterations for the excitation spectrum, too. This work was supported by the RFBR Grant No. 97-02-16705.
# Intrinsic Absorption Lines in the Seyfert 1 Galaxy NGC 5548: UV Echelle Spectra from the Space Telescope Imaging SpectrographBased on observations with the NASA/ESA Hubble Space Telescope, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS5-26555. ## 1 Introduction With the advent of the Hubble Space Telescope (HST), it has become clear that intrinsic absorption is a common phemonemon in the UV spectra of Seyfert 1 galaxies. From a study of HST archive spectra, we determined that $``$60% (10/17) of the Seyfert 1 galaxies in our sample have intrinsic absorption lines (Crenshaw et al. 1999). All ten of the Seyferts with absorption showed high ionization lines (N V $`\lambda \lambda `$1238.8, 1242.8; C IV $`\lambda \lambda `$1548.2, 1550.8), in addition to L$`\alpha `$. Low ionization lines were less common: four Seyferts showed detectable Si IV $`\lambda \lambda `$1393.8, 1402.8 absorption, and only one showed Mg II absorption $`\lambda \lambda `$ 2796.3, 2803.5 absorption (NGC 4151). The intrinsic absorption lines are blueshifted by up to 2100 km s<sup>-1</sup> with respect to the narrow emission lines, which indicates net radial outflow of the absorbing gas. At high spectral resolution, the lines often split into distinct narrow components, with widths in the range 20 – 400 km s<sup>-1</sup> (FWHM). Intrinsic absorption is also common in the X-ray spectra of Seyfert galaxies; about half of these objects show “warm absorbers”, characterized by O VII and O VIII absorption edges (Reynolds 1997; George et al. 1998) . We found a clear correspondence between the UV and X-ray absorption in our survey; of the eight Seyferts that were observed by both HST and ASCA, six showed both UV and X-ray absorption and two showed neither (Crenshaw et al. 1999). Mathur and collaborators first established a connection between the UV and X-ray absorbers, and claimed that a single zone of photoionized gas can explain both the observed strengths of O VII and O VIII absorption edges and the UV absorption lines in quasars (Mathur 1994) and the Seyfert galaxy NGC 5548 (Mathur et al. 1995). However, in NGC 4151 (Kriss et al. 1998) and NGC 3516 (Kriss et al. 1996; Crenshaw et al. 1998), multiple zones spanning a wide range in ionization parameter and effective hydrogen column density are needed to explain the wide range in ionization species and large column densities of the UV absorption lines. Since the intrinsic UV absorption in a Seyfert 1 galaxy is typically comprised of multiple kinematic components, an obvious question arises: are these components characterized by different physical conditions? An important related question is: to what extent do the UV and X-ray absorbers arise in the same regions? In order to address these issues, near-simultaneous high-resolution spectra of multiple lines are needed, to determine the column densities of different ions for each component. The only data that have met these requirements are the Goddard High Resolution Spectrograph (GHRS) observations of the C IV and Mg II absorption in NGC 4151 (Weymann et al. 1997; Kriss 1998). The C IV/Mg II column density ratio varies widely in this object, indicating a broad range in ionization parameter among the different kinematic components. 
The Space Telescope Imaging Spectrograph (STIS) on HST offers an important means for investigating the differences in physical conditions among different kinematic components, by providing echelle gratings that cover broad bandpasses in the UV at a resolution of $`\lambda `$/$`\mathrm{\Delta }\lambda `$ $``$ 40,000. To take advantage of this capability, we initiated a STIS Guaranteed Time Observations (GTO) program to obtain echelle spectra of several Seyfert 1 galaxies. Our first target is NGC 5548. ## 2 Observations and Analysis We obtained STIS echelle spectra of the nucleus of NGC 5548 on 1998 March 11 through the 0$`^{\prime \prime }.`$2 x 0$`^{\prime \prime }.`$2 aperture. The observations are described in Table 1, along with previous GHRS observations of the N V and C IV regions obtained $``$2 years earlier (the N V region contains a portion of the L$`\alpha `$ profile). We reduced the STIS echelle spectra using the IDL software developed at NASA’s Goddard Space Flight Center for the STIS Instrument Definition Team (Lindler et al. 1998). The procedures that we followed to identify the Galactic and intrinsic absorption lines and measure the intrinsic lines are given in Crenshaw et al. (1999). Figure 1 shows the regions in the echelle spectra where the intrinsic absorption lines were detected (L$`\alpha `$, N V, and C IV ), and the regions where the strongest low-ionization lines might be expected (Si IV, Mg II). The fluxes are plotted as a function of the radial velocity (of the strongest member for the doublets) relative to the emission-line redshift, z $`=`$ 0.01676, obtained from the NASA/IPAC Extragalactic Database (NED). To obtain velocities relative to the redshift from H I observations (z $`=`$ 0.01717), the velocity scale should be offset by an additional $``$123 km s<sup>-1</sup>. L$`\alpha `$ shows six distinct kinematic components, and the first five components are also seen in the absorption doublets of N V and C IV at essentially the same radial velocities. The Si IV and Mg II regions show no obvious counterparts, but the spectra are important for putting upper limits on the column densities of these ions. Components 1 – 5 are the same as those identified by Crenshaw et al. (1999) in the GHRS spectra. Component 6 can also been seen in the GHRS spectra of L$`\alpha `$, but was not identified in that paper. Mathur, Elvis, and Wilkes (1999) have also identified the velocity components in the GHRS spectra of C IV. Their identifications are the same as ours, except that they identify the dip in the red wing of component 4 as a separate component, and they identify the feature that we claim to be the C IV $`\lambda `$1550.8 line of component 5 with the C IV $`\lambda `$1548.2 line of a component that is slightly redshifted with respect to the systemic redshift. The feature that corresponds to the $`\lambda `$1550.8 line of the redshifted component can be seen in the GHRS spectra (Crenshaw et al. 1999), but was too weak to satisfy our criteria for detection. We cannot identify this component in the STIS N V and C IV regions, but there is a weak feature at $`+`$300 km s<sup>-1</sup> in the STIS L$`\alpha `$ region (Figure 1) that may correspond to Mathur et al.’s redshifted component. Due to the uncertainty about the existence of this component, we will not consider it further in this paper. Comparing the STIS spectra with the GHRS spectra, we find that the absorption components are resolved in both. 
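For reference, the offset between the two velocity zero-points quoted above follows directly from the difference of the redshifts, and an observed wavelength converts to a radial velocity in either frame in the usual way. A short sketch (ours; the example wavelength is made up) is given below.

```python
# Velocity zero-point offset between the emission-line redshift (z = 0.01676)
# and the H I redshift (z = 0.01717), plus a wavelength -> velocity conversion.
# At these small redshifts c * dz is adequate (the (1+z) correction is ~2 km/s).
c = 2.99792458e5                                   # km/s

z_em, z_HI = 0.01676, 0.01717
print(f"offset = {c * (z_em - z_HI):+.0f} km/s")   # ~ -123 km/s

def radial_velocity(lam_obs_A, lam_rest_A, z_sys):
    """Velocity of a feature relative to the systemic redshift z_sys (km/s)."""
    return c * (lam_obs_A / (lam_rest_A * (1.0 + z_sys)) - 1.0)

# illustrative example: a component of O VI 1031.93 observed at 1047.0 A
print(f"v = {radial_velocity(1047.0, 1031.93, z_em):+.0f} km/s")
```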
The higher resolution of the STIS spectra makes the velocity structure in some of the components, such as the dip in the red wing of component 4, more distinguishable. We note that the short-wavelength GHRS spectrum does not include the blue wing of the L$`\alpha `$ and the absorption from component 1. The GHRS spectrum of C IV has a substantially higher SNR, since the exposure time was $``$3 times that of the STIS spectrum. Thus, the C IV absorption for component 1 is not apparent in the STIS spectrum, and we can only give an upper limit on its column density (see below). Table 2 gives the radial velocity centroids, widths (FWHM), and covering factors in the line of sight for each component. The values and uncertainties that we present are averages and standard deviations from individual lines; measurements of the individual lines and the Galactic lines will be given in another paper (Sahu et al. 1999). The methods for determining the measurement errors are described in Crenshaw et al. (1999), and include uncertainties due to different reasonable placements of the underlying emission. For each component, two lower limits are given for the covering factor in the line of sight: C<sub>los</sub>, which is the fraction of total emission (continuum plus broad-line emission) that is occulted, and C$`{}_{}{}^{BLR}{}_{los}{}^{}`$, which is the fraction of broad-line emission that is occulted (assuming the entire continuum source is occulted). These lower limits have been determined from the residual intensities in the L$`\alpha `$ cores (see Crenshaw et al. 1999). A comparison of the measurements in Table 2 with those from the GHRS spectra (Crenshaw et al. 1999) shows that, to within the errors, there have been no changes in the velocity centroids or widths of the components over $``$2 years. The lower limits to the covering factors are essentially the same as those obtained from the GHRS spectra. For components 1 –5, both the total covering factor, C<sub>los</sub>, and the BLR covering factor, C$`{}_{}{}^{BLR}{}_{los}{}^{}`$, are greater than one-half and in some cases, close to one. We can conclude that each of the components is likely to be outside of the BLR and comparable in size to or larger than the BLR in the plane of the sky. Component 6 shows weak L$`\alpha `$ absorption, is not detected in the other lines, and is located close to the systemic velocity of the host galaxy, which suggests that the gas responsible for this component is associated with the interstellar medium of the host galaxy. We will not discuss this component further. In Table 3, we give the column densities of L$`\alpha `$, N V, and C IV for each component. Since we only have lower limits to the covering factors, these values were determined by assuming that C<sub>los</sub> $`=`$ 1, and integrating the optical depths as a function of radial velocity across each component (see Crenshaw et al. 1999 for a discussion of the effects of C<sub>los</sub> $``$ 1 on the measured column densities). The blending of the L$`\alpha `$ components in Figure 1 indicates that they are more saturated than the other lines, and therefore the column densities are affected by the deblending of the components and by errors in the removal of scattered light. The uncertainties in the L$`\alpha `$ columns in Table 3 are due to measurement errors, and do not include these effects. 
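The column densities of Table 3 follow from integrating the apparent optical depth over radial velocity. The sketch below (ours) implements the standard apparent-optical-depth relation, N = 3.768 x 10<sup>14</sup> / (f lambda\[Å\]) times the integral of tau(v) dv\[km s<sup>-1</sup>\]; the Gaussian profile and the quoted oscillator strength are illustrative assumptions, and with C<sub>los</sub> = 1 (as in the text) the result is a lower limit if the true covering factor is smaller.

```python
# Apparent-optical-depth estimate of an ionic column density:
# N = 3.768e14 / (f * lambda[A]) * Integral tau(v) dv[km/s],
# evaluated with simple trapezoidal integration over an illustrative profile.
import numpy as np

def aod_column(v, tau, f_osc, lam_A):
    return 3.768e14 / (f_osc * lam_A) * np.trapz(tau, v)

v = np.linspace(-300.0, 300.0, 601)                     # km/s
tau = 1.5 * np.exp(-0.5 * ((v + 100.0) / 40.0) ** 2)    # made-up component

# O VI 1031.93 A with oscillator strength f ~ 0.133 (illustrative input)
print(f"N(O VI) ~ {aod_column(v, tau, 0.133, 1031.93):.2e} cm^-2")
```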
Table 3 shows that there have been significant decreases in the N V column densities for components 1 and 3, and a possible decrease in the C IV column density for component 3, since the GHRS observations from $``$2 years earlier. These changes are also apparent in a comparison of Figure 1 with the GHRS spectra in Crenshaw et al. (1999). Since we only have an upper limit for the C IV column density of component 1, we have no information on its variability. The other components have not changed, given the uncertainties. To compare our results with those obtained for the X-ray warm absorber in NGC 5548, we use the ASCA observations of this Seyfert on 1993 July 27 (Reynolds 1997; George et al. 1998). From the optical depths of the O VII and O VIII absorption edges in Reynolds and the ionization cross sections, we calculate column densities of 1.0 x 10<sup>18</sup> and 1.6 x 10<sup>18</sup> cm<sup>-2</sup> for O VII and O VIII, respectively. ## 3 Photoionization Models We have generated photoionization models to investigate the physical conditions in the absorption components. The details of the photoionization code are described in Kraemer et al. (1994, and references therein). Most of the input parameters for these models (e.g., solar abundances) are identical to those used in our previous study of the narrow emission line region (NLR) in NGC 5548 (Kraemer et al. 1998). We have modeled the ionizing continuum as a broken power-law, $`L_\nu `$$`=`$K$`\nu ^\alpha `$, with the following spectral indices: $`\alpha `$ $`=`$ $``$1.0 ( h$`\nu `$ $`<`$ 13.6 eV), $`\alpha `$ $`=`$ $``$1.4 (13.6 eV $``$ h$`\nu `$ $`<`$ 1300 eV), and $`\alpha `$ $`=`$ $``$0.9 ( h$`\nu `$ $``$ 1300 eV). Note that we have modified the value of $`\alpha `$ above 1.3 keV from that used in Kraemer et al. (1998), based on Reynolds’s (1997) fit to the 2 – 10 keV continuum. The parameters that we varied in generating our photoionization models were the ionization parameter U (the number of ionizing photons per hydrogen atom at the illuminated face of the cloud), and the effective hydrogen column density N<sub>eff</sub> (i.e., the neutral plus ionized hydrogen column). The ionic column densities predicted by the models do not depend on the atomic density, n<sub>H</sub>. For simplicity we chose a fixed value of n<sub>H</sub> $`=`$ 5 x 10<sup>5</sup> cm<sup>-3</sup>; at the distance of our innermost component in the NLR models ($``$1 pc, see Kraemer et al. 1998), this yields a value of U $`=`$ 0.60 , which is approximately correct for producing the observed ratios of N V to C IV. We generated a single model for each kinematic component, by varying U until the N V/C IV column ratio was matched. We then adjusted N<sub>eff</sub> to fit the ionic column densities. We varied these parameters until the predicted columns matched the observed values to within the errors. Due to our concerns about the L$`\alpha `$ column densities (see the previous section), we did not use these columns to constrain the models. The model values for components 2 – 5 are listed in Table 4. Since we only have an upper limit to the C IV column density for component 1 in the STIS data, we will treat that case separately. The predicted column densities for Si IV and Mg II are well below detectability in all cases; the largest value computed for Si IV was $``$ 4 x 10<sup>9</sup>cm<sup>-2</sup> (component 4), and the Mg II column was effectively zero for all components. 
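For completeness, the conversion from the ASCA edge optical depths to the O VII and O VIII columns quoted above is simply N = tau<sub>edge</sub>/sigma<sub>threshold</sub>. The sketch below (ours) illustrates it; the threshold cross sections and, in particular, the edge optical depths entered here are representative values that we supply only for illustration, since the text quotes just the resulting columns.

```python
# Column density from a photoabsorption edge: N = tau_edge / sigma_threshold.
# Cross sections (~2.4e-19 cm^2 for O VII, ~9.9e-20 cm^2 for O VIII) and the
# edge optical depths below are illustrative inputs, not measured values.
sigma = {"O VII": 2.4e-19, "O VIII": 9.9e-20}   # cm^2 at threshold (approx.)
tau   = {"O VII": 0.24,    "O VIII": 0.16}      # assumed ASCA edge depths

for ion in sigma:
    print(f"N({ion}) ~ {tau[ion] / sigma[ion]:.1e} cm^-2")
# -> N(O VII) ~ 1.0e+18 cm^-2 and N(O VIII) ~ 1.6e+18 cm^-2, as in Section 2
```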
Despite our concerns about the observed H I columns, the observed and predicted values are reasonably close; the predicted values tend to be higher (by up to a factor of $``$2.4 for component 4). One possible explanation is that our L$`\alpha `$ measurements are indeed affected by saturation effects; another is that the abundances are greater than solar by a factor of two <sup>1</sup><sup>1</sup>1Since this is the first time that we have run models in this high-ionization regime, we compared our calculations with those from CLOUDY90 (Ferland et al. 1998). The ionic column densities for the high ionization lines were quite similar, and the H I columns from CLOUDY were a factor of $``$2 higher than ours (due to a different treatment of the cooling), which indicates that our overprediction of the H I columns is not a model artifact.. The values of U and N<sub>eff</sub> that we obtain for these components are well below those associated with typical X-ray warm absorbers (U $`=`$ 1 – 10, N<sub>eff</sub> $`=`$ 10<sup>21</sup> – 10<sup>23</sup> cm<sup>-2</sup>; Reynolds 1997; George et al. 1998). To investigate this issue further, we computed column densities for O VII and O VIII, which are also listed in Table 4. The total predicted columns for these ions are 1.6 x 10<sup>17</sup>cm<sup>-2</sup> and 9.9 x 10<sup>15</sup>, whereas the observed values are $``$6 and $``$160 times higher for O VII and O VIII, respectively, which shows that components 2 – 5 do not contribute significantly to the X-ray warm absorber. For component 3, the N V and C IV column densities have decreased by about the same amount in the STIS data (factor of $``$1.6) compared to the GHRS observations (Table 2). This would indicate that the ionization parameter was essentially the same on these two occasions. Thus, the most likely explanation for the absorption changes is that the effective column density changed, due to bulk motion of some of the absorbing gas out of the line of sight. A possible discrepancy with this interpretation is that the H I column density did not change appreciably, but we have already noted the difficulties involved in determining the H I columns from L$`\alpha `$. For component 1, we generated a model based on the GHRS column densities. Table 5 shows these values. Due to the large N V/C IV ratio, the ionization parameter and effective column density for this component are much higher than those for the other components. The predicted O VII and O VIII column densities are very close to the observed values, suggesting that this component is likely to be responsible for the X-ray warm absorber. Our values of U and N<sub>eff</sub> for this component are similar to those determined for the warm absorber in NGC 5548 by Reynolds (1997) and George et al. (1998). With the caveat that none of the UV or X-ray observations are simultaneous, we conclude that component 1 is likely to be the X-ray warm absorber. To investigate the column density variations in component 1, we generated two additional models for the STIS data, as shown in Table 5. For the first model, we assumed that U did not change between the two observations, and for the second model, we assumed that N<sub>eff</sub> remained constant. Both models match the observed N V column density, predict a low C IV column, and overpredict the neutral hydrogen column. 
Thus, the difference in ionic column densities between the GHRS and STIS data for Component 1 could be simply explained by: 1) an increase of U by a factor of $``$ 1.4, or 2) a decrease of N<sub>eff</sub> by a factor of $``$ 4 along the line of sight. We prefer the latter explanation, since the discrepancy between the observed and predicted neutral hydrogen columns is similar to that for the other components, but cannot rule out the former. ## 4 Conclusions We have obtained UV echelle spectra of NGC 5548 with STIS, and have confirmed the presence of five kinematic components of intrinsic absorption in the lines of L$`\alpha `$, C IV, and N V. An additional L$`\alpha `$ component is present near the systemic redshift, and is likely to be associated with the interstellar medium in the host galaxy. These components have not changed in their radial velocity coverage since the GHRS observations $``$2 years earlier. The column densities of N V and C IV in component 3 have decreased, and the column density of N V in component 1 has decreased over the two year interval (only an upper limit is available for component 1’s C IV column in the STIS data). We have used photoionization models to examine the physical conditions in the gas responsible for the intrinsic absorption. The ionization parameters and effective column densities for components 2 – 5 are lower than those associated with X-ray warm absorbers, and the predicted O VII and O VIII column densities for these components are too small to make a significant contribution to the X-ray absorption. From the GHRS observations of N V and C IV, we obtain much higher values of U (2.4) and N<sub>eff</sub> (6.5 x 10<sup>21</sup> cm<sup>-2</sup>) for component 1. Our predicted O VII and O VIII columns for component 1 match the previous observed values. We conclude that this component is likely to be responsible for the X-ray warm absorber. This component also has the highest outflow velocity and exhibits the strongest variability. Since the GHRS C IV and N V spectra were obtained $``$6 months apart, and the ASCA observations were obtained $``$3 years earlier, these conclusions need to be checked with simultaneous UV and X-ray observations. We find that the decrease in the N V and C IV column densities in component 3 is due to a change in the total column density of the absorption, rather than a change in ionization, since the N V/C IV ratio did not change significantly. This is the most likely explanation for the decrease in N V column in component 1 as well, although we cannot rule out a change in ionization. Assuming that bulk motion of the gas across the line of sight is responsible for the variations, complete coverage of the BLR, and a diameter of the N V emitting region of $``$4 light days (Korista et al.), we find a transverse velocity of v<sub>T</sub> $``$ 1650 km s<sup>-1</sup>. If the variations are due to bulk motion, these two components have transverse velocities which are comparable to their outflow velocities. We thank Ian George for helpful discussions on the X-ray absorption. D.M.C. and S.B.K. acknowledge support from NASA grant NAG 5-4103. Support for this work was provided by NASA through grant number NAG5-4103 and grant number AR-08011.01-96A from the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS5-26555.
# First-principles and semi-empirical calculations for bound hole polarons in KNbO3 ## I Introduction It is well understood now that point defects play an important role in the electro-optical and non-linear optical applications of KNbO<sub>3</sub> and related ferroelectric materials. In particular, reduced KNbO<sub>3</sub> crystals containing oxygen vacancies reveal fast photorefractive response to short-pulse excitations which could be used for developing fast optical correlators. The prospects of the use of KNbO<sub>3</sub> for the light frequency doubling are seriously affected by the presence of unidentified defects responsible for induced infrared absorption. The photorefractive effect, important in particular for holographic storage, is also well known to depend on the presence of impurities and defects. Most of as-grown ABO<sub>3</sub> perovskite crystals are non-stoichiometric and thus contain considerable amount of vacancies. The so-called $`F`$ and $`F^+`$ centers (an O vacancy, V$`_\text{O}`$, which traps two or one electron, respectively), belong to the most common defects in oxide crystals. In electron-irradiated KNbO<sub>3</sub>, a broad absorption band observed around 2.7 eV at room temperature has been tentatively ascribed to the $`F`$-type centers. These two defects were the subject of our recent ab initio and semi-empirical calculations. A transient optical absorption band at 1.2 eV has been associated recently, in analogy with other perovskites, with a hole polaron (a hole bound, probably, to a K vacancy). The ESR study of KNbO<sub>3</sub> doped with Ti<sup>4+</sup> gives a proof that holes could be trapped by such negatively charged defects. For example, in BaTiO<sub>3</sub>, the hole polarons bound to Na and K alkali ions replacing Ba and thus forming a negatively charged site attracting a hole, have also been found. Cation vacancies are the most likely candidates for pinning polarons. In irradiated MgO, they are known to trap one or two holes giving rise to the V<sup>-</sup> and V<sup>0</sup> centers which are nothing but bound hole polaron and bipolaron, respectively. The results of the experimental studies of hole polarons in alkali halides and ferroelectric perovskites reveal two different forms of atomic structure of polarons: atomic one (one-site), when a hole is localized on a single atom, and molecular-type (two-site), when a hole is shared by two atoms forming a quasi-molecule. In the present study, we simulate hole polarons associated with a K vacancy in KNbO<sub>3</sub>, using an ab initio density functional theory (DFT)-based method in combination with a semi-empirical treatment based on the Hartree-Fock (HF) formalism, employing the periodic boundary conditions and the supercell geometry in both cases. ## II Methods The motivation for using the DFT-based and HF-based calculation methods in parallel is to combine strong sides of both in a single study. The DFT is expected to be able to provide good description of the ground state, i.e. to deliver reasonable relaxation energies and ground-state geometry. In the HF approach, the relaxation energies are generally less accurate because of the neglection of correlation effects. On the other hand, the HF formalism is straightforwardly suited for the evaluation of excitation energies, because the total energies can be calculated for any (ground-state or excited) electronic configuration on equal footing, that is generally not the case in the DFT. 
Practical experience shows that HF and DFT results often exhibit similar qualitative trends in the description of dielectric properties but quantitatively lie on opposite sides of experimental data, thus effectively setting error bars for a theoretical prediction. Our ab initio DFT treatment is based on the full-potential linearized muffin-tin orbital (LMTO) formalism, previously applied with success to the study of structural instability and lattice dynamics in pure KNbO<sub>3</sub>. For the study of defects, we used the version of LMTO as implemented by van Schilfgaarde and Methfessel, that was earlier used in our simulations for the $`F`$-center in KNbO<sub>3</sub> (Ref. ; see also for more details of the calculation setup there). The exchange-correlation has been treated in the local density approximation (LDA), as parametrized by Perdew and Zunger. The supercell LMTO approach has been earlier successfully used for the simulation of defects in KCl and MgO. In the present case we used the $`2\times 2\times 2`$ supercells, i.e. the distance between repeated point defects was $``$8 Å. As a consequence of the large number of eigenstates per k-point in a reduced Brillouin zone (BZ) of the supercell and of the metallicity of the doped system, it was essential to maintain a quite dense mesh for the k-integration by the tetrahedron method over the BZ. Specifically, clear trends in the total energy as function of atomic displacements were only established at $`10\times 10\times 10`$ divisions of the BZ (i.e., 186 irreducible k-points for a one-site polaron). In the HF formalism, a semi-empirical Intermediate Neglect of the Differential Overlap (INDO) method, modified for ionic and partly ionic solids, has been used. The INDO method has been successfully applied for the simulation of defects in many oxides. The calculations have been performed with periodic boundary conditions in the so-called large unit cell (LUC) model, i.e., for $`𝐤`$=0 in the appropriately reduced BZ. When mapped on the conventional BZ of the compound in question, this accounts for the band dispersion effects and allows to incorporate effectively the $`𝐤`$-summation on a relatively fine mesh over the BZ. Due to the robustness of the summation procedure, the total energy dependence on the atomic displacements was found satisfactorily smooth for any supercell size (one should keep in mind, however, that the numerical results may remain somehow dependent on the LUC in question). In the present work, $`4\times 4\times 4`$ supercells (320 atoms) were used, that is the extension of our preliminary INDO study with the $`2\times 2\times 2`$ supercells. The detailed analysis of the INDO parametrisation for KNbO<sub>3</sub> is presented in Ref. , and the application of the method to the $`F`$-center calculations – in Ref. . We restricted ourselves to a cubic phase of KNbO<sub>3</sub>, with the lattice constant $`a_0`$=4.016 Å. In the vacancy-containing supercell, the relaxation of either one (for the one-site polaron) or two neighboring (for the two-site polaron) O atoms, amongst twelve closest to the K vacancy, has been allowed for, and the changes in the total energy (as compared to the unrelaxed perovskite structure with a K atom removed) have been analyzed. Also, we studied the fully symmetric relaxation pattern (breathing of twelve O atoms) around the vacancy. Different relaxation patterns considered in the present study are shown in Fig. 1. The positions of more distant atoms in the supercell were kept fixed. 
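For orientation, the geometry underlying the relaxation patterns of Fig. 1 follows directly from the ideal cubic perovskite lattice: the twelve O ions nearest the K site lie at a distance a<sub>0</sub>/sqrt(2), and in the 2 x 2 x 2 supercell the periodic images of the K vacancy are separated by 2a<sub>0</sub>, i.e. about 8 Å. A short bookkeeping sketch (ours, assuming unrelaxed ideal positions) is given below.

```python
# Nearest-neighbour O shell around a K site in cubic KNbO3 (a0 = 4.016 A):
# with K at the cell corner, the O ions occupy (1/2, 1/2, 0)-type positions,
# giving 12 neighbours at a0/sqrt(2); a 2x2x2 supercell separates periodic
# images of a K vacancy by 2*a0.
import itertools, math

a0 = 4.016                                            # lattice constant, A
shell = [p for p in itertools.product((-0.5, 0.0, 0.5), repeat=3)
         if sorted(map(abs, p)) == [0.0, 0.5, 0.5]]
print(len(shell), "O neighbours at", round(a0 * math.sqrt(0.5), 3), "A")  # 12, 2.84
print("vacancy image separation in the 2x2x2 cell:", 2 * a0, "A")         # 8.032
```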
## III Results The removal from the supercell of a K atom with its 7 electrons contributing to the valence band (VB) produces slightly different effects on the electronic structure, as described within the DFT and in the HF formalism. Acording to the LMTO result, the Fermi energy lowers, and the system becomes metallic (remaining non-magnetic). Therefore, no specific occupied localized state is associated with the vacancy. The local density of states (DOS) at the sites of interest is shown in Fig. 2. As is typical for LDA calculations, the one-electron band gap in KNbO<sub>3</sub> comes out underestimated ($``$2 eV) as compared to the experimental optical gap ($``$3.3 eV). The removal of a K$`4s`$ electron amounts to adding a hole which forms a localized state at $``$10 eV above the Fermi level, i.e. above the unoccupied Nb$`4d`$ band. In the $`2p`$-DOS of O atoms neighboring the vacancy, a quasi-local state (that effectively screens the hole) is visible just below the Fermi level. Apart from that, the O$`2p`$-DOS is largely unaffected by the presence of vacancy, and the changes in the DOS of more distant sites (K, Nb) are negligible as compared with those in the perfect crystal. As the cubic symmetry is lifted by allowing a non-uniform relaxation of O atoms, the “screening” quasi-local state is clearly localized at the atom closest to the vacancy. At the same time, the hole state becomes smeared out in energy. This amounts to the bonding being established between the hole and the screening charge on one of its neighbors. In the INDO treatment, the one-electron optical gap is overestimated, as is typical for the HF calculations ($``$6 eV, see Ref. ), but the $`\mathrm{\Delta }`$SCF gap for the triplet state is 2.9 eV, close to the experiment. The quasi-local “screening” state is described by a wide band close to the VB top. This is consistent with the LDA description. But the INDO calculation also suggests, and this differs from the LDA, that the removal of an electron leaves an unpaired electron state split-off at $``$1eV above the VB band top. In case of asymmetrical O relaxation, the molecular orbital associated with this state is centered at the displaced O atom, being a combination of the $`2p_x`$, $`2p_y`$ functions of the latter in the setting when the plane spanned by their lobes passes through the vacancy site. The same applies qualitatively to the two-site polaron, with the only difference that the localized state is formed from the $`2p`$ orbitals of both O atoms approaching the vacancy, with a corresponding symmetry lowering. The localized hole state is also present in the HF description but lies much lower than the corresponding state in the LDA, forming a 0.9 eV -wide band located $``$ 0.2 eV below the conduction band bottom (see Fig. 3). Differently from the DFT-based approaches which address in principle only the ground-state electron density, the HF method provides a possibility to evaluate excitation energies by means of the so-called $`\mathrm{\Delta }`$SCF formalism, i.e. as the difference of total energies in relaxed ground-state and excited states. In agreement with the Schirmer’s theory for the small-radius polarons in ionic solids, the optical absorption corresponds to a hole transfer to the state delocalized over nearest oxygens. The absorption energies due to the electron transition from the quasi-local states near the VB top (1, Fig. 3) into the vacant polaron band (2, Fig. 
3) for one-site and two-site polarons are close (Table I), and both are twice smaller than the experimental value for a hole polaron trapped by the Ti impurity. This shows that the optical absorption energy of small bound polarons can be strongly dependent on the defect involved. Another important observation is that the $`\mathrm{\Delta }`$SCF energy for absorption turns out to be considerably smaller than the estimate based on the difference of one-electron energies. In spite of generally observed considerable degree of covalency in KNbO<sub>3</sub> and contrary to a delocalized character of the $`F`$ center state, the one-site polaron state remains well localized at the displaced O atom, with only a small contribution from atomic orbitals of other O ions but none from K or Nb ions. Although there are some differences in the description of the (one-particle) electronic structure within the DFT- and HF-based methods, the trends in the total energy driving the structure optimization remain essentially the same. In both approaches, both one-site and two-site configurations of the hole polaron are much more energetically favorable than the fully symmetric (breathing mode) relaxation of twelve O atoms around the K vacancy. This is in line with what is known about small-radius polarons in other ionic solids and is caused by the fact that the lattice polarization induced by a point charge is much larger than that due to a delocalized charge. In the case of one-site polaron, a single O<sup>-</sup> ion is displaced towards the K vacancy by 1.5 % of the lattice constant (LMTO) or by 3% (INDO) – see Fig. 4. The INDO calculations show that simultaneously, 11 other nearest oxygens surrounding the vacancy tend to be slightly displaced outwards the vacancy. In the two-site (molecular) configuration, a hole is shared by the two O atoms which approach each other – by 0.5% (LMTO) or 3.5% (INDO) – and both shift towards a vacancy – by 1.1% (LMTO) or 2.5% (INDO). The lattice relaxation energies (which could be associated with the experimentally measurable hole thermal ionization energies) are presented in Table I. In both methods the two-site configuration of a polaron is lower in energy. A comparison of the present, 320-atomic INDO calculation with a preliminary calculation using 40–atomic LUC and self–consistency in the $`\mathrm{\Gamma }`$ point of the BZ only shows that the optical absorption energies are changed inconsiderably, unlike the lattice relaxation energies. The latter now are much smaller and thus in better agreement with the LMTO calculation. ## IV Conclusions In this pilot study we focused on the quantitative models of hole polarons in KNbO<sub>3</sub>. The main conclusion is that both one-center and two-center configurations are energetically favorable and close in energy (with a slight prefernce for the two-center configuration), as follows from the numerical simulation results by two different theoretical methods. The calculated optical absorption energies and the spatial distribution of relevant electronic states could provide guidelines for more direct experimental identification of the defects in question. The calculated hole polaron absorption ($``$1 eV) is close to the observed short-lived absorption band energy; hence this band could indeed arise due to a hole polaron bound to a cation vacancy. 
Further detailed study is needed to clarify whether such hole polarons are responsible for the effect of the blue light-induced infrared absorption reducing the second-harmonic generation efficiency in KNbO<sub>3</sub>. As compared with the DFT results, the INDO (as is generally typical for the HF-based methods) systematically gives larger atomic displacements and relaxation energies. The DFT results are more reliable in what regards the ground state of polarons, whereas the use of the HF formalism was crucial for the calculation of their optical absorption and hence possible experimental identification. ###### Acknowledgements. This study was partly supported by the DFG (a grant to E. K.; the participation of A. P. and G. B. in the SFB 225), Volkswagen Foundation (grant to R. E.), and the Latvian National Program on New Materials for Micro- and Optoelectronics (E. K.). Authors are greatly indebted to L. Grigorjeva, D. Millers, M. R. Philpott, A. I. Popov and O. F. Schirmer for fruitful discussions.
# ORFEUS II and IUE Spectroscopy of EX Hydrae ## 1 Introduction Among the classes of cataclysmic variables (CVs), there is one in which the magnetic field of the white dwarf is strong enough to influence the flow of material lost by the secondary. These magnetic CVs are subdivided into the spin-synchronous ($`P_{\mathrm{spin}}P_{\mathrm{orb}}`$) polars (AM Her stars) and the spin-asynchronous ($`P_{\mathrm{spin}}<P_{\mathrm{orb}}`$) intermediate polars (DQ Her stars). In polars, the accreting matter is channeled along the magnetic field lines for most of its trajectory from the secondary’s inner Lagrange point to the white dwarf surface, while in intermediate polars (IPs), accretion is moderated by a disk. Although the disk maintains an essentially Keplerian velocity profile at large radii, it is truncated at small radii where the magnetic stresses become large enough to remove the disk angular momentum in a radial distance which is small compared to the distance to the white dwarf; the material then leaves the disk and follows the magnetic field lines down to the white dwarf surface in a manner similar to that of polars. In either class of magnetic CVs, the velocity of the flow as it nears the white dwarf is highly supersonic \[$`v=3600(M_{\mathrm{wd}}/0.5\mathrm{M}_{})^{1/2}(R_{\mathrm{wd}}/10^9\mathrm{cm})^{1/2}\mathrm{km}\mathrm{s}^1`$\], hence to match boundary conditions, the flow must pass through a strong shock far enough above the white dwarf surface for the hot \[$`kT=16(M_{\mathrm{wd}}/0.5\mathrm{M}_{})(R_{\mathrm{wd}}/10^9\mathrm{cm})^1`$ keV\], post-shock flow to be decelerated by pressure forces and settle on the white dwarf surface. Magnetic CVs are therefore strong X-ray sources modulated at the spin period of the white dwarf. For additional details of magnetic CVs, see Cropper (1990), Patterson (1994), and Warner (1995); for recent results, see the volumes by Buckley & Warner (1995) and Hellier & Mukai (1999). EX Hydrae is a bright ($`V13`$), high-inclination ($`i=77^{}\pm 1^{}`$), eclipsing IP with an orbital period of 98.26 minutes and a white dwarf spin period of 67.03 minutes. The mass of the white dwarf is measured by both dynamical (Hellier (1996)) and X-ray spectroscopic (Fujimoto & Ishida (1997)) methods to be $`M_{\mathrm{wd}}=0.49\mathrm{M}_{}`$, while details of the accretion geometry are established by the optical and X-ray observations of Hellier et al. (1987) and Rosen, Mason, & Córdova (1988). In the resulting “accretion curtain” model of EX Hya specifically and IPs in general, the spin-phase modulations are the result of the angular offset between the spin and magnetic dipole pole axes and the consequent strong azimuthal asymmetry of the flow of material from the disk to the surface of the white dwarf. Because of absorption by this accretion curtain, the spin-phase light curves peak when the upper pole points away from the observer—when the blueshift of the emission lines is maximum. In addition to this spin-phase modulation, binary-phase modulations are produced by partial eclipses by the secondary and by the bulge on the edge of the accretion disk. EX Hya has been studied extensively in the X-ray and optical wavebands (in addition to the references above, see, e.g., Siegel et al. (1989); Rosen et al. (1991); Allan, Hellier, & Beardmore (1998); Mukai et al. (1998)), but less so at UV and FUV wavelengths. 
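The two scalings quoted in §1, the free-fall velocity of the accreting matter and the corresponding strong-shock temperature, follow from v<sub>ff</sub> = (2GM/R)<sup>1/2</sup> and kT<sub>s</sub> = (3/16) mu m<sub>H</sub> v<sub>ff</sub><sup>2</sup>. The sketch below (ours) reproduces the 3600 km s<sup>-1</sup> and 16 keV figures for M = 0.5 M<sub>sun</sub> and R = 10<sup>9</sup> cm; the mean molecular weight mu ~ 0.62 adopted for a fully ionized solar-abundance plasma is our assumption.

```python
# Free-fall velocity and strong-shock temperature at the white dwarf surface:
# v_ff = sqrt(2 G M / R) and k T_s = (3/16) mu m_H v_ff^2 (cgs constants below).
import math

G, Msun, m_H, keV = 6.674e-8, 1.989e33, 1.673e-24, 1.602e-9
mu = 0.62                                   # assumed mean molecular weight

def vff_kms(M_wd=0.5, R_wd=1.0e9):
    return math.sqrt(2.0 * G * M_wd * Msun / R_wd) / 1.0e5

def kT_shock_keV(M_wd=0.5, R_wd=1.0e9):
    v = vff_kms(M_wd, R_wd) * 1.0e5
    return (3.0 / 16.0) * mu * m_H * v**2 / keV

print(f"v_ff ~ {vff_kms():.0f} km/s,  kT_s ~ {kT_shock_keV():.0f} keV")
# -> roughly 3600 km/s and 16 keV, matching the scalings quoted in Section 1
```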
Despite the 174 International Ultraviolet Explorer (IUE) spectra of EX Hya in the archive, the discussion of these UV data has been limited to the papers by Bath, Pringle, & Whelan (1980, written before EX Hya was recognized as an IP), Krautter & Buchholz (1990, a 2-page paper in a conference proceedings), and the statistical studies of Verbunt (1987), la Dous (1991), and Mauche, Lee, & Kallman (1997). Greeley et al. (1997) recently described and modeled UV–FUV spectra of this source acquired with the Hopkins Ultraviolet Telescope (HUT) during the Astro-2 mission in 1995 March. To extend the effort of documenting the phenomenology and understanding the accretion geometry of EX Hya, we here analyze and discuss five FUV spectra of this source acquired in 1996 December during the flight of the Orbiting and Retrievable Far and Extreme Ultraviolet Spectrograph–Shuttle Pallet Satellite (ORFEUS-SPAS) II mission. These spectra are superior those of HUT because of the higher spectral resolution and more extensive phase coverage, but suffer from the narrower bandpass. To help offset this shortcoming, we make use of an extensive set of IUE spectra of EX Hya obtained in 1995 June by K. Mukai. For completeness and ease of reference, we also present and describe the EUV/soft X-ray spin- and binary-phase light curves of EX Hya measured by EUVE in 1994 May–June (Hurwitz et al. (1997)). AAVSO measurements during and near the times of these observations demonstrate that in each instance EX Hya was at its quiescent optical magnitude of $`V13`$ (Mattei (1998)). ## 2 EUVE Photometry As described in detail by Hurwitz et al. (1997), EX Hya was observed by EUVE for 150 kiloseconds beginning on 1994 May 26.<sup>1</sup><sup>1</sup>1For the record, note that Hurwitz et al. (1997) erroneously report that the EUVE observation of EX Hya began on 1994 May 29; this date is actually the midpoint of the observation. Similarly, the dates referred to in §2.1 (§2.3) of that paper are not the first, but the $`48`$th ($`70`$th) observed binary eclipse (spin maximum). The resulting deep survey ($`\lambda \lambda 70`$–200 Å) spin- and binary-phase count rate light curves are shown in Figure 1, where the ephemerides of Hellier & Sproats (1992) have been employed to convert HJD photon arrival times to white dwarf spin and binary phases.<sup>2</sup><sup>2</sup>2The sinusoidal term in the binary ephemeris of Hellier & Sproats (1992) is ignored here and elsewhere in this paper because it is uncertain and because it has a full range of only 48 seconds or 0.008 binary cycles; at the midpoint of the EUVE observation, the correction amounts to $`41`$ seconds or $`0.007`$ binary cycles. Due to the $`30`$% efficiency of EUVE observations, the photons from whence these light curves were constructed were acquired over an interval of 6.5 days (from HJD 2449498.89420 until 2449505.42824). This observation was therefore sufficiently long to avoid the spin- and binary-period aliasing typical of low-Earth-orbit satellite observations of EX Hya, including the ORFEUS and IUE observations described below. The binary-phase EUVE light curve of EX Hya is shown in the middle and lower panels of Figure 1 and is seen to manifest a broad dip at $`\varphi _{98}0.85_{0.25}^{+0.15}`$ and a narrow eclipse at $`\varphi _{98}0.97`$. The dip is understood to be due to the passage through the line of sight of the bulge on the edge of the accretion disk caused by the impact of the accretion stream. 
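Converting HJD photon arrival times to spin and binary phases, as done for Figure 1, is a fold on a linear ephemeris, phi = frac((t - T<sub>0</sub>)/P). The sketch below (ours) shows the bookkeeping; the epochs T<sub>0</sub> are placeholders, since the Hellier & Sproats (1992) ephemeris constants themselves are not reproduced in the text.

```python
# Folding heliocentric arrival times on a linear ephemeris phi = frac((t-T0)/P).
# The epochs T0 below are placeholders; the periods are those given in Section 1.
import numpy as np

P_spin_d = 67.03 / (24.0 * 60.0)       # 67.03 min in days
P_orb_d  = 98.26 / (24.0 * 60.0)       # 98.26 min in days
T0_spin = T0_orb = 2449000.0           # placeholder epochs (HJD)

def fold(hjd, t0, period):
    return np.mod((np.asarray(hjd) - t0) / period, 1.0)

hjd = np.array([2449498.89420, 2449505.42824])   # start/end of the EUVE data
print("spin phases:  ", fold(hjd, T0_spin, P_spin_d))
print("binary phases:", fold(hjd, T0_orb, P_orb_d))
```

We return now to the broad dip in the binary-phase light curve.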
With a residual intensity of approximately 0.13, the optical depth of the bulge is $`\tau 2.0`$ at $`\lambda 90`$ Å, the peak of the effective area curve of the deep survey instrument. If the occulting material is neutral and has solar abundances (specifically, one He atom for every ten H atoms), the inferred column density is $`N_\mathrm{H}7.4\times 10^{19}\mathrm{cm}^2`$. Such a column is essentially transparent ($`\tau 0.01`$) above 1.2 keV, consistent with the fact that the dip is seen only in soft X-rays (e.g., Rosen, Mason, & Córdova (1988)). The narrow eclipse is understood to be due to the grazing occultation of the EUV/soft X-ray emission region by the secondary. Fitting a linear background minus a Gaussian to the interval $`\varphi _{98}=0.95`$–0.99, we find that the eclipse is centered at $`\varphi _{98}=0.9714\pm 0.0003`$, has a $`\mathrm{FWHM}=0.007\pm 0.001`$ or $`38\pm 6`$ seconds, and a full width of $`\mathrm{\Delta }\varphi _{98}0.01`$ or 60 seconds; the residual intensity at mid-eclipse is consistent with zero at $`0.014\pm 0.013\mathrm{counts}\mathrm{s}^1`$. In contrast, the eclipse by the secondary of the hard X-ray emission region is significantly wider ($`\mathrm{\Delta }\varphi _{98}0.03`$ or 180 seconds) and partial (eclipse depth = 30–60%; Beuermann & Osborne (1988); Rosen et al. (1991); Mukai et al. (1998)). The centroid of the hard X-ray eclipse was recently measured with RXTE by Mukai et al. (1998) to be centered at $`\varphi _{98}=0.98`$, reinforcing the impression that the binary ephemeris of Hellier & Sproats (1992) may need to be updated. The spin-phase EUVE light curve of EX Hya is shown in the upper panel of Figure 1 and is seen to vary more sharply than a sine wave, but it is nonetheless reasonably well approximated by the sinusoidal function $`A+B\mathrm{sin}2\pi (\varphi _{67}\varphi _0)`$ with $`A=0.158\pm 0.001\mathrm{counts}\mathrm{s}^1`$, $`B=0.105\pm 0.002\mathrm{counts}\mathrm{s}^1`$, and $`\varphi _0=0.790\pm 0.002`$. The relative pulse amplitude is therefore $`B/A=67\%\pm 1\%`$ and the light curve peaks at $`\varphi _{67}=0.040\pm 0.002`$. This phasing again is formally different from the ephemeris of Hellier & Sproats (1992), but it establishes to sufficient accuracy for the present purposes that the EUV/soft X-ray light curve peaks at $`\varphi _{67}0`$. ## 3 ORFEUS Spectroscopy The FUV spectra of EX Hya were acquired with the Berkeley spectrograph in the ORFEUS telescope during the flight of the ORFEUS-SPAS II mission in 1996 November–December. The general design of the spectrograph is discussed by Hurwitz & Bowyer (1986, 1996), while calibration and performance of the ORFEUS-SPAS II mission are described by Hurwitz et al. (1998); for the present purposes, it is sufficient to note that the spectra cover the range $`\lambda \lambda =910`$–1210 Å and that the mean instrument profile $`\mathrm{FWHM}0.33`$ Å, hence $`\lambda /\mathrm{\Delta }\lambda 3000`$. Acquisition of the ORFEUS exposures was complicated by the fact that the satellite period (91 min) nearly equals the binary orbital period and four thirds of the white dwarf spin period. After consulting with B. Greeley it was decided to concentrate on the spin period, with observations every satellite orbit for 6 orbits, but practical considerations resulted in the coverage shown in Figure 1 and detailed in Table 1, which lists the HJD of the start of the exposures, the length of the exposures, and the range of binary and spin phases assuming the ephemerides of Hellier & Sproats (1992). 
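The pulse parameters quoted above come from fitting the folded count-rate curve with the function A + B sin 2pi(phi - phi<sub>0</sub>). A generic least-squares version of such a fit is sketched below (ours); the synthetic "data" are generated only to make the example self-contained, and the same machinery applies to the FUV continuum, line-flux, and radial-velocity fits of §3.

```python
# Least-squares fit of A + B*sin(2*pi*(phi - phi0)) to a folded light curve,
# as used for the EUVE spin pulse; the data here are synthetic.
import numpy as np
from scipy.optimize import curve_fit

def pulse(phi, A, B, phi0):
    return A + B * np.sin(2.0 * np.pi * (phi - phi0))

rng = np.random.default_rng(0)
phi = np.linspace(0.0, 1.0, 40, endpoint=False)
rate = pulse(phi, 0.158, 0.105, 0.790) + rng.normal(0.0, 0.01, phi.size)

popt, pcov = curve_fit(pulse, phi, rate, p0=[0.15, 0.1, 0.8])
perr = np.sqrt(np.diag(pcov))
for name, val, err in zip(("A", "B", "phi0"), popt, perr):
    print(f"{name} = {val:.3f} +/- {err:.3f}")
# the relative pulse amplitude is B/A and the curve peaks at phase phi0 + 0.25
```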
Figure 2 shows the background-subtracted and flux-calibrated ORFEUS spectra binned to a resolution of 0.1 Å and smoothed with a 5-point triangular filter. Relatively strong residual geocoronal emission lines of H I $`\lambda 1025.7`$ (Lyman $`\beta `$), He I $`\lambda 584.3`$ (at 1168.7 Å in second order), N I $`\lambda 1134`$, $`\lambda 1200`$, and O I $`\lambda 988.7`$ have been subtracted from these spectra by fitting Gaussians in the neighborhood ($`\pm 5`$ Å) of each line. The remaining geocoronal emission lines are all very weak and contaminate only a limited number of discrete ($`\mathrm{FWHM}0.8`$ Å) portions of the spectra. These FUV spectra are generally consistent with the HUT spectra acquired 1995 March (Greeley et al. (1997)), with emission lines of O VI $`\lambda \lambda 1032`$, 1038 and C III $`\lambda 977`$, $`\lambda 1176`$ superposed on a nearly flat continuum. The broad and variable emission feature at $`\lambda 990`$ Å is likely N III, but the flux and position of this feature are uncertain because it coincides with a strong increase in the background at $`\lambda 1000`$ Å which renders noisy the short-wavelength end of these spectra. To quantify the continuum flux density variations of the FUV spectra of EX Hya, we measured the mean flux density at $`\lambda =1010\pm 5`$ Å. This choice for the continuum bandpass is somewhat arbitrary, but it avoids the noisy portion of the spectra shortward of $`\lambda 1000`$ Å and the broad weak bump between the O VI and C III $`\lambda 1176`$ emission lines. Ordered by spin phase, the mean flux density in this bandpass is $`f_{1010}(10^{12}\mathrm{erg}\mathrm{cm}^2\mathrm{s}^1\mathrm{\AA }^1)=0.181`$, 0.163, 0.130, 0.171, and 0.236. Of the spin and binary phases, it appears that these flux density variations occur on the spin phase, since as shown in Figure 3 they are reasonably well fitted ($`\chi ^2/\mathrm{dof}=6.4/2`$ assuming 5% errors in the flux densities) by $`f_{1010}(10^{12}\mathrm{erg}\mathrm{cm}^2\mathrm{s}^1\mathrm{\AA }^1)=A+B\mathrm{sin}2\pi (\varphi _{67}\varphi _0)`$ with $`A=0.192\pm 0.005`$, $`B=0.049\pm 0.007`$, and $`\varphi _0=0.743\pm 0.023`$; on the binary phase, the fit is significantly poorer ($`\chi ^2/\mathrm{dof}=28.8/2`$). The relative FUV continuum pulse amplitude is therefore $`B/A=25\%\pm 4\%`$ and the light curve peaks at $`\varphi _{67}=0.01\pm 0.020`$, consistent with the EUV/soft X-ray light curve. Since the ORFEUS bandpass is too narrow to meaningfully constrain the effective temperature, it is not possible to uniquely determine the cause of these continuum flux density variations: they could be due to variations in the effective temperature, variations in the effective size of the emission region, or some combination of these. Assuming $`M_{\mathrm{wd}}=0.49\mathrm{M}_{}`$ ($`R_{\mathrm{wd}}=9.8\times 10^8`$ cm), $`d=100`$ pc, and that the entire white dwarf surface radiates with a blackbody spectrum, the effective temperature varies with phase according to $`T_{\mathrm{eff}}(\mathrm{kK})=27.2+1.3\mathrm{sin}2\pi (\varphi _{67}0.743)`$. If, as for AM Her (Mauche & Raymond (1998)), we instead assume that we are seeing a 20 kK white dwarf with 37 kK spot, the apparent projected area of the spot varies with binary phase according to $`f=0.058+0.018\mathrm{sin}2\pi (\varphi 0.743)`$. To demonstrate that such two-temperature blackbody models do a good job of matching the ORFEUS spectra, we show in Figure 2 a series of $`20+37`$ kK blackbody models superposed on the data. 
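The effective temperatures quoted above follow from equating the measured 1010 Å flux density to pi B<sub>lambda</sub>(T)(R<sub>wd</sub>/d)<sup>2</sup> for R<sub>wd</sub> = 9.8 x 10<sup>8</sup> cm and d = 100 pc, the assumptions stated in the text. The sketch below (ours, with standard cgs constants) evaluates that relation and inverts it by root finding; it recovers T<sub>eff</sub> ~ 27 kK for the mean flux density of 0.192 x 10<sup>-12</sup> erg cm<sup>-2</sup> s<sup>-1</sup> Å<sup>-1</sup>.

```python
# Blackbody flux density at Earth, f_lambda = pi * B_lambda(T) * (R_wd/d)^2,
# at 1010 A, and its inversion to an effective temperature.
import numpy as np
from scipy.optimize import brentq

h, c, k_B, pc = 6.626e-27, 2.998e10, 1.381e-16, 3.086e18   # cgs
R_wd, d = 9.8e8, 100.0 * pc
lam = 1010e-8                                              # 1010 A in cm

def f_lambda(T):
    """Flux density in erg cm^-2 s^-1 A^-1 from a full-surface blackbody."""
    B = 2.0 * h * c**2 / lam**5 / np.expm1(h * c / (lam * k_B * T))
    return np.pi * B * (R_wd / d)**2 * 1e-8                # per A, not per cm

print(f"f_1010(27.2 kK) = {f_lambda(2.72e4):.3e}")          # ~0.19e-12
T_fit = brentq(lambda T: f_lambda(T) - 0.192e-12, 1.0e4, 1.0e5)
print(f"T_eff for 0.192e-12: {T_fit / 1e3:.1f} kK")
```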
Accompanying the continuum flux variations are variations in the flux and radial velocity of the emission lines. In what follows, we concentrate on the emission lines longward of $`\lambda =1000`$ Å, where the spectra and hence the line fluxes and positions are not adversely affected by the high background and consequent low signal-to-noise ratio. Inspection of Figure 2 reveals that the spectra in the neighborhood of the C III $`\lambda 1176`$ emission line are sufficiently simple to allow fits with a linear continuum plus a Gaussian (5 free parameters), while the broad and narrow components of the O VI $`\lambda \lambda 1032`$, 1038 doublet require at a minimum a linear continuum plus four Gaussians. To constrain the fits of the O VI line, we fix the separation of the doublet components at the laboratory value and require the widths of each component to be the same (for a total of 10 free parameters). Figure 4 shows the success we have had with the fits of the O VI lines, with both the data and the models binned to a resolution of 0.1 Å and smoothed with a 5-point triangular filter. Thanks to the high spectral resolution of the Berkeley spectrograph, the lines are cleanly resolved into narrow and broad components, and the model produces reasonable fits of these complex line profiles. The most significant deviation of the fits relative to the data is in the last spectrum ($`\varphi _{67}=0.767`$–1.152 or $`\varphi _{98}=0.663`$–0.926), where the broad emission component of the doublet is cut by a pair of slightly blueshifted ($`v<300\mathrm{km}\mathrm{s}^{-1}`$) narrow absorption features. A similar absorption component is present at the same phase in the C III emission lines, and it is likely not a coincidence that this absorption is strongest at the same binary phases where the EUV flux deficit is strongest ($`\varphi _{98}\approx 0.85\pm 0.1`$). For the present, it is sufficient to note that the presence of this absorption component does not appear to significantly affect the fits of the emission lines. The fitting parameters for the broad and narrow components of the O VI emission line have been converted into physical quantities (flux, velocity, FWHM) and are listed in Table 2. The velocities of the broad and narrow components of the line were fit with a sinusoidal function of the form $`v=\gamma +K\mathrm{sin}2\pi (\varphi -\varphi _0)`$, whereby it became apparent that the velocity of the broad component of the line varies with the spin phase while that of the narrow component varies with the binary phase. The parameters of these fits are shown in Table 3, and the data and the best-fit radial velocity curves are shown in Figure 5. Maximum blueshift of the broad component of the O VI line occurs at $`\varphi _{67}=0.05\pm 0.07\approx 0`$, while maximum blueshift of the narrow component of the line occurs at $`\varphi _{98}=0.30\pm 0.02\approx 0.25`$. As shown in Figure 6, these radial velocities anticorrelate nicely with the flux in the two components of the line.
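Before turning to the fitted fluxes in detail, the constrained O VI fit described above (linear continuum plus two doublets with tied separations and widths, 10 free parameters) can be set up as in the sketch below. The spectrum generated here is synthetic, and the amplitudes, centroids, and widths are invented purely for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

DLAM = 1037.617 - 1031.926   # O VI doublet laboratory separation (Angstrom)

def ovi_model(lam, a, b, fb1, fb2, mub, sigb, fn1, fn2, mun, sign):
    """Linear continuum plus broad and narrow O VI doublets (10 free parameters).
    Within each doublet the two lines share a width, and their separation is
    fixed at the laboratory value, as described in the text."""
    g = lambda x, f, mu, s: f * np.exp(-0.5 * ((x - mu) / s) ** 2)
    cont = a + b * (lam - 1035.0)
    broad = g(lam, fb1, mub, sigb) + g(lam, fb2, mub + DLAM, sigb)
    narrow = g(lam, fn1, mun, sign) + g(lam, fn2, mun + DLAM, sign)
    return cont + broad + narrow

# Synthetic example spectrum; all "truth" parameters are invented.
lam = np.arange(1020.0, 1050.0, 0.1)
truth = (1.0, 0.0, 3.0, 2.0, 1031.5, 3.0, 1.0, 0.7, 1031.9, 0.4)
flux = ovi_model(lam, *truth) + np.random.default_rng(0).normal(0.0, 0.05, lam.size)

p0 = (1.0, 0.0, 2.0, 2.0, 1032.0, 2.0, 1.0, 1.0, 1032.0, 0.5)
popt, pcov = curve_fit(ovi_model, lam, flux, p0=p0)
print(np.round(popt, 2))   # recovers the synthetic parameters to within the noise
```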
Specifically, the broad-line flux varies as $`f_{\mathrm{O}\mathrm{VI},\mathrm{b}}(10^{-12}\mathrm{erg}\mathrm{cm}^{-2}\mathrm{s}^{-1})=A+B\mathrm{sin}2\pi (\varphi _{67}-\varphi _0)`$ with $`A=3.2\pm 0.2`$, $`B=1.8\pm 0.2`$, and $`\varphi _0=0.79\pm 0.04`$ (hence peaks at $`\varphi _{67}=0.04\pm 0.04\approx 0`$), while the narrow-line flux varies as $`f_{\mathrm{O}\mathrm{VI},\mathrm{n}}(10^{-12}\mathrm{erg}\mathrm{cm}^{-2}\mathrm{s}^{-1})=C+D\mathrm{sin}2\pi (\varphi _{98}-\varphi _0)`$ with $`C=0.33\pm 0.03`$, $`D=0.12\pm 0.03`$, and $`\varphi _0=0.95\pm 0.07`$ (hence peaks at $`\varphi _{98}=0.20\pm 0.07\approx 0.25`$). The behavior of the C III $`\lambda 1176`$ emission line is less straightforward. While the flux in the line clearly correlates with spin phase according to $`f_{\mathrm{C}\mathrm{III}}(10^{-12}\mathrm{erg}\mathrm{cm}^{-2}\mathrm{s}^{-1})=E+F\mathrm{sin}2\pi (\varphi _{67}-\varphi _0)`$ with $`E=1.38\pm 0.09`$, $`F=0.75\pm 0.10`$, and $`\varphi _0=0.84\pm 0.03`$ (hence peaks at $`\varphi _{67}=0.09\pm 0.03\approx 0`$), the radial velocity ranges between $`\pm 200\mathrm{km}\mathrm{s}^{-1}`$ within the errors and can be fit satisfactorily on either the spin or binary phase with the parameters shown in Table 3. If the C III radial velocity varies with spin phase, it has maximum blueshift at $`\varphi _{67}=0.26\pm 0.08`$, i.e., $`\mathrm{\Delta }\varphi _{67}=0.21\pm 0.11`$ after that of the O VI broad component, while if the C III radial velocity varies with binary phase, it has maximum blueshift at $`\varphi _{98}=0.14\pm 0.08`$, i.e., $`\mathrm{\Delta }\varphi _{98}=0.15\pm 0.09`$ before that of the O VI narrow component. The former alternative is favored by the broad width of the line and the strong flux variation on the spin phase. However, because of the long exposures and the relatively poor phase coverage, and because the line is broad and typically rather weak, it is not possible with the existing data to usefully constrain the phasing of the radial velocity variations of the C III $`\lambda 1176`$ emission line.

## 4 IUE Spectroscopy

As mentioned in the introduction, to date little has been done with the large number of IUE spectra of EX Hya in the archive. While a full analysis of these UV data is beyond the scope of the present work, it is nonetheless useful to analyze a subset of the existing spectra to extend the bandpass for which we have phase-resolved spectroscopic information for EX Hya. The 1995 March HUT spectra of EX Hya (Greeley et al. (1997)) of course cover the UV and FUV wavebands simultaneously, but those observations are restricted to some extent by the narrow range of spin ($`\varphi _{67}=0.05`$–0.50) and binary phases ($`\varphi _{98}=0.09`$–0.40) sampled. Of the 174 IUE spectra in the archive, 124 are short-wavelength spectra ($`\lambda \lambda =1150`$–1950 Å) obtained through the large aperture (i.e., are photometric). Of this subset, there is a continuous set of 45 spectra (SWP 55063–55107) with exposure times of 10 minutes obtained by K. Mukai over an interval of 1.3 days beginning on 1995 June 23. For the 41 spectra available from the IUE archive, Table 4 lists the sequence numbers, the HJD of the start of the exposures, and the range of binary and spin phases assuming the ephemerides of Hellier & Sproats (1992). Unfortunately, even this extensive and continuous set of IUE spectra suffers from aliasing between the spin and binary periods. Specifically, the phases of the exposures in this sequence satisfy $`\varphi _{67}(0.4\varphi _{98})\pm 0.2`$.
During the portion of the orbit unaffected by the dip ($`\varphi _{98}=0.0`$–0.7), there were 14 spectra obtained during spin maximum ($`\varphi _{67}=0.8`$–1.2), but only 7 spectra were obtained during spin minimum ($`\varphi _{67}=0.3`$–0.7); during the dip ($`\varphi _{98}=0.75`$–0.95), there were 6 spectra obtained during spin minimum, but none were obtained during spin maximum. To populate this portion of phase space, we extracted from the archive all (5) short-wavelength large-aperture spectra satisfying the constraint \[$`\varphi _{98}=0.75`$–0.95, $`\varphi _{67}=0.8`$–1.2\]. The relevant details of these spectra are included in Table 4. From this subset of 32 IUE spectra of EX Hya, we produced the four mean phase-resolved spectra shown in Figure 7. From brightest to dimmest, the spectra were obtained during: (1) spin maximum away from the dip, (2) spin maximum during the dip, (3) spin minimum during the dip, and (4) spin minimum away from the dip. Like the HUT spectra, these spectra reveal emission lines of He II, C II–IV, N V, Si III–IV, and Al III superposed on a blue continuum. The most spectacular aspect of these spectra is the widths of the lines; the FWHM of the C IV line for instance is approximately 14 Å or $`2700\mathrm{km}\mathrm{s}^1`$ compared to 7–10 Å or 1400–$`1900\mathrm{km}\mathrm{s}^1`$ for other magnetic CVs. These mean spectra demonstrate the following effects on the UV lines and continuum as a function of spin and binary phase. First consider the effect of the dip. During spin maximum, the dip does not significantly affect the continuum or the He II line, but the flux in the other lines decreases by 30%–40%, with the red wings of the lines affected preferentially. During spin minimum, the continuum increases by roughly 20% during the dip, but there is little if any effect on the lines. Next consider variations on the spin phase. Away from the dip, the continuum decreases by roughly 40% going from spin maximum to spin minimum. The effect on the lines is much more pronounced: the flux in the N V and Si IV lines decreases by roughly 60%, the flux in the C IV line decreases by roughly 80%, and the He II line disappears altogether. The lines also markedly change shape: the N V and Si IV lines become less centrally peaked, while the blue side of the C IV line is preferentially suppressed. During the dip, the continuum decreases by roughly 20% going from spin maximum to spin minimum, the flux in the C IV line decreases by roughly 60%, again with most of the action on the blue side of the line, and again the He II line disappears altogether. To quantify the variations of the UV lines and continuum as a function of spin and binary phase, we attempted to fit the flux density of the individual spectra in the neighborhood of the C IV line ($`\lambda =1550\pm 50`$ Å) with a number of analytic functions. Ideally, the C IV line in these IUE spectra would be modeled the same way as O VI line in the ORFEUS spectra, with a linear continuum plus four Gaussians, but the IUE spectral resolution is insufficient to resolve the C IV line into its doublet components or to separately distinguish the emission and absorption components. The O VI narrow emission and absorption components are relatively weak, so there is some hope of successfully modeling the C IV line with a linear continuum plus one or two Gaussians (with 5 or 8 degrees of freedom, respectively). 
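Returning briefly to the phase selection used to build the four mean spectra above: binning by spin and binary phase requires some care with intervals that wrap past phase 1.0 (e.g., spin maximum, $`\varphi _{67}=0.8`$–1.2). A minimal sketch of the selection logic, with hypothetical per-spectrum phases standing in for the Table 4 values, is:

```python
import numpy as np

def in_phase_bin(phi, lo, hi):
    """True where phase phi (reduced to [0,1)) falls in [lo, hi], allowing the
    interval to wrap past 1.0 (e.g. spin maximum, phi_67 = 0.8-1.2)."""
    phi = np.mod(phi, 1.0)
    lo, hi = lo % 1.0, hi % 1.0
    return (phi >= lo) & (phi <= hi) if lo <= hi else (phi >= lo) | (phi <= hi)

# Hypothetical per-spectrum phases standing in for the Table 4 values.
phi98 = np.array([0.12, 0.55, 0.82, 0.90, 0.33, 0.78])
phi67 = np.array([0.95, 0.45, 0.10, 0.60, 1.05, 0.35]) % 1.0

dip = in_phase_bin(phi98, 0.75, 0.95)
spin_max = in_phase_bin(phi67, 0.8, 1.2)
spin_min = in_phase_bin(phi67, 0.3, 0.7)

groups = {"spin max, out of dip": ~dip & spin_max,
          "spin max, in dip":     dip & spin_max,
          "spin min, in dip":     dip & spin_min,
          "spin min, out of dip": ~dip & spin_min}
for name, mask in groups.items():
    print(f"{name:<22s} {int(mask.sum())} spectra")
```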
Of these, the simpler (single-Gaussian) model faithfully measures the continuum flux density, but the overall fits are poor and the line parameters unreliable because a single Gaussian is incapable of reproducing the strongly asymmetric shape of the line during spin minimum. Good fits result if a second Gaussian (either in emission or absorption) is included in the model, but again the line parameters are unreliable because they tend to wander in their exploration of $`\chi ^2`$ space. After some experimenting, it was found that the most robust and reliable line parameters resulted from a model consisting of a linear continuum plus two Gaussians whose widths were fixed at 4.0 Å; specifically, $`f_\lambda =f_1+f_2\lambda ^{}+f_3\mathrm{exp}(-[\lambda ^{}-\lambda _1]^2/2\sigma ^2)+f_4\mathrm{exp}(-[\lambda ^{}+\lambda _2]^2/2\sigma ^2)`$, where $`\sigma =4.0`$ Å, $`\lambda ^{}=\lambda -\lambda _0`$, and $`\lambda _0=1549.48`$ Å, the optically thick mean of the laboratory wavelengths of the C IV doublet. The spin-phase behavior of the resulting flux in the C IV emission line is shown in Figure 8 for binary phases during and away from the dip. In both cases, the C IV flux peaks at $`\varphi _{67}\approx 0`$, but the amplitude and mean level of the oscillation are a strong function of binary phase. Excluding the anomalously low flux points shown by the diamonds, away from the dip the C IV flux varies as $`f_{\mathrm{C}\mathrm{IV},\mathrm{out}}(10^{-12}\mathrm{erg}\mathrm{cm}^{-2}\mathrm{s}^{-1})=A+B\mathrm{sin}2\pi (\varphi _{67}-\varphi _0)`$, where $`A=5.6\pm 0.2`$, $`B=4.0\pm 0.3`$, and $`\varphi _0=0.76\pm 0.1`$ (hence peaks at $`\varphi _{67}=0.01\pm 0.01\approx 0`$), whereas during the dip the flux varies as $`f_{\mathrm{C}\mathrm{IV},\mathrm{in}}(10^{-12}\mathrm{erg}\mathrm{cm}^{-2}\mathrm{s}^{-1})=C+D\mathrm{sin}2\pi (\varphi _{67}-\varphi _0)`$, where $`C=3.9\pm 0.2`$, $`D=1.6\pm 0.2`$, and $`\varphi _0=0.76\pm 0.04`$ (hence peaks at $`\varphi _{67}=0.01\pm 0.04\approx 0`$). The spin-phase variation of the C IV continuum flux density (specifically, the model flux density at $`\lambda =1549.48`$ Å) is shown in Figure 9. At least away from the dip, there is a tendency for the continuum flux density to be higher near $`\varphi _{67}=0`$ and lower near $`\varphi _{67}=0.5`$, but there is considerable scatter in the data at any spin phase. The C IV line widths and radial velocities also follow from this parameterization of the spectra, although indirectly: the radial velocity is $`v=c(\lambda _1-\lambda _2)/\lambda _0`$ and the line width (strictly, the separation of the two Gaussians) is $`w=c(\lambda _1+\lambda _2)/\lambda _0`$. The most obvious variation is that of $`w`$, which varies with spin phase as $`w(10^3\mathrm{km}\mathrm{s}^{-1})=\gamma +K\mathrm{sin}2\pi (\varphi _{67}-\varphi _0)`$, with $`\gamma =1.9\pm 0.03`$, $`K=0.34\pm 0.04`$, and $`\varphi _0=0.20\pm 0.02`$ (hence peaks at $`\varphi _{67}=0.45\pm 0.02\approx 0.5`$), but this variation is caused by the model’s fitting of the single-peaked line profiles (i.e., small Gaussian separations) during spin maximum and the double-peaked line profiles (i.e., large Gaussian separations) during spin minimum, and does not translate into a variation in the net width of the line: the FWZI of the line is instead reasonably constant (with only a few exceptions, within 10%) at 30 Å or $`5700\mathrm{km}\mathrm{s}^{-1}`$. Like the C III $`\lambda 1176`$ line (but unlike the O VI line) in the ORFEUS spectra, there is no radial velocity variation of the C IV line apparent on the white dwarf spin phase.
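For concreteness, the C IV parameterization described above, together with the derived velocity and width, can be written as the short sketch below; the Gaussian offsets used in the example are arbitrary and are not fits to the IUE spectra.

```python
import numpy as np

C_KMS = 2.99792458e5        # speed of light in km/s
LAM0 = 1549.48              # optically thick mean of the C IV doublet (Angstrom)
SIGMA = 4.0                 # fixed Gaussian width (Angstrom)

def civ_model(lam, f1, f2, f3, f4, lam1, lam2):
    """Linear continuum plus two fixed-width Gaussians, one redward (+lam1) and
    one blueward (-lam2) of line center, as in the parameterization above."""
    lp = lam - LAM0
    return (f1 + f2 * lp
            + f3 * np.exp(-(lp - lam1) ** 2 / (2.0 * SIGMA ** 2))
            + f4 * np.exp(-(lp + lam2) ** 2 / (2.0 * SIGMA ** 2)))

def velocity_and_width(lam1, lam2):
    """Radial velocity and Gaussian separation ('width'), both in km/s."""
    return C_KMS * (lam1 - lam2) / LAM0, C_KMS * (lam1 + lam2) / LAM0

# Illustrative evaluation (arbitrary offsets, not a fit to the IUE spectra):
lam = np.linspace(1500.0, 1600.0, 201)
model = civ_model(lam, 2.0, 0.0, 3.0, 3.0, 7.5, 7.5)
v, w = velocity_and_width(7.5, 7.5)
print(f"peak model flux density = {model.max():.2f}")
print(f"v = {v:+.0f} km/s, w = {w:.0f} km/s")   # symmetric Gaussians -> v = 0
```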
However, as seen already in Figure 7, there is a tendency for the line to shift toward the blue during the dip (although the scatter in the individual measurements is large and the velocity difference is formally consistent with zero \[$`v=680\pm 710\mathrm{km}\mathrm{s}^1`$ during the dip compared to $`v=180\pm 1600\mathrm{km}\mathrm{s}^1`$ away from the dip, where the errors are the square root of the sample variance relative to the weighted mean\]). If the C IV radial velocity were as large as that of the O VI broad component, its amplitude would be roughly 1.7 Å, which is comparable to the width of the wavelength bins in the IUE spectra. Evidently, the complexity of the C IV line combined with the low spectral resolution and modest signal-to-noise ratio of the IUE spectra preclude centroiding the line to determine, or place useful limits on, its radial velocity. ## 5 Discussion Before wading into details, it is useful to compare the mean ORFEUS spectrum of EX Hya with that of AM Her assembled from the six spectra obtained days earlier with the same instrument (Mauche & Raymond (1998), hereafter MR98). These mean FUV spectra are shown in Figure 10, where the spectrum of AM Her has been multiplied by $`(75/100)^2`$ to account for the relative distance to the two sources. Given that EX Hya (an IP with a truncated accretion disk) and AM Her (a polar without a disk) are physically such different sources, it is amazing that their FUV spectra are so similar. First, the level of the FUV continua are nearly identical. Second, the shapes of the FUV continua are nearly indistinguishable, even so far as (1) the absence of Lyman absorption lines and (2) the presence of the broad weak bump between the O VI and C III $`\lambda 1176`$ emission lines. Third, both sources have C III, N III, and O VI emission lines with comparable widths and intensities; indeed, the intrinsic flux in the C III $`\lambda 977`$ and N III $`\lambda 991`$ emission lines are nearly identical. Fourth, both sources show broad and narrow component structure in the O VI emission line. With the exception of the absence of the He II $`\lambda 1085`$ emission line in the spectrum of EX Hya, the differences between these spectra are in the details. First, the broad (narrow) component of the O VI line of EX Hya is stronger (weaker) than that of AM Her. Second, the C III $`\lambda 1176`$ emission line of EX Hya is brighter and broader than that of AM Her. Next consider the constraints imposed on the location of the continuum and emission-line regions by the phase-resolved ORFEUS spectra. First consider the broad-line region. MR98 identified the accretion funnel as the source of the broad component of the O VI emission line in their ORFEUS spectra of AM Her. Consistent with simple expectations, in AM Her spin maximum of the FUV continuum and X-ray light curves occurs when the upper pole points toward the observer—when the redshift of the O VI broad component is the highest. Similarly, we identify the accretion funnel as the source of the O VI broad emission component in our ORFEUS spectra of EX Hya, but, consistent with the accretion curtain model, spin maximum of the FUV continuum and X-ray light curves occurs when the upper pole points away the observer—when the blueshift of the O VI broad component is the highest. This geometry implicates the accretion funnel itself as the source of the FUV continuum flux, not a separate intermediate-temperature spot on the surface of the white dwarf. 
If this is the case for both sources, it solves the problem of the absence of Lyman absorption lines in their FUV spectra. Next consider the narrow-line region. MR98 identified the irradiated face of the secondary as the source of the narrow component of the O VI emission line in their ORFEUS spectra of AM Her. The secondary cannot be the site of the narrow-line region in EX Hya, however, because maximum blueshift of the O VI narrow emission component occurs when the white dwarf, not the secondary, is moving toward the observer. Indeed, the radial velocity solution of the O VI narrow emission component ($`K=85\pm 9\mathrm{km}\mathrm{s}^1`$, $`\varphi _0=0.54\pm 0.02`$; Table 3) is consistent with that of the Balmer line wings in the optical ($`K=69\pm 9\mathrm{km}\mathrm{s}^1`$, $`\varphi _0=0.53\pm 0.03`$; Hellier et al. (1987)), so we identify the white dwarf itself as the source of the O VI narrow emission component. With a $`\mathrm{FWHM}200\mathrm{km}\mathrm{s}^1`$, the O VI narrow emission component may ultimately prove to be better than the Balmer lines (for which $`\mathrm{FWHM}2000\mathrm{km}\mathrm{s}^1`$) for determining the radial velocity of the white dwarf in EX Hya. Delving further into details, it is possible to combine information from the ORFEUS and IUE spectra of EX Hya to constrain the physical condition of the FUV and UV line-emitting plasma. First consider the broad-line region. In dense gas illuminated by hard X-rays (model 4 of Kallman & McCray (1982)), O VI exists over a range of ionization parameters $`\xi L/nr^240`$–70 and temperatures $`T40`$–130 kK. With $`L2\times 10^{32}\mathrm{erg}\mathrm{s}^1`$ (Allan, Hellier, & Beardmore (1998)), $`n\genfrac{}{}{0pt}{}{>}{}2\times 10^{11}\mathrm{cm}^3`$ for $`r5\times 10^9\mathrm{cm}5R_{\mathrm{wd}}`$. For the inferred range of ionization parameters, the dominant ionization stages of He, C, N, and Si are He II–III, C VI–VII, N VI–VII, and Si VI–XI, respectively, so if the observed lower ionization species dominate they must be produced in gas which is denser and/or lies further from the source of the ionizing flux. The observed line ratios may be affected by finite optical depths in the resonance lines, but modulo this effect the mean C III $`\lambda 977/\lambda 1176`$ line ratio of $`1.4`$ requires $`n\genfrac{}{}{0pt}{}{>}{}10^{11}\mathrm{cm}^3`$ and $`T\genfrac{}{}{0pt}{}{>}{}80`$ kK (Keenan et al. (1992); Keenan (1997)) and the mean Si III $`\lambda 1300/\lambda 1890`$ line ratio of $`\genfrac{}{}{0pt}{}{>}{}10`$ requires $`n\genfrac{}{}{0pt}{}{>}{}10^{12}\mathrm{cm}^3`$ and $`T\genfrac{}{}{0pt}{}{>}{}60`$ kK (Nussbaumer (1986)). The He II $`\lambda 1085/\lambda 1640`$ line ratio presents a puzzle. During spin maximum, the strength of the $`\lambda 1640`$ line in the IUE spectra is $`f_{1640}1.5\times 10^{12}\mathrm{erg}\mathrm{cm}^2\mathrm{s}^1`$ (consistent with the HUT measurement), and the strength of the $`\lambda 1085`$ line in the ORFEUS spectra is $`f_{1085}\genfrac{}{}{0pt}{}{<}{}0.2\times 10^{12}\mathrm{erg}\mathrm{cm}^2\mathrm{s}^1`$ (a factor of $`\genfrac{}{}{0pt}{}{>}{}2`$ less than the uncertain HUT estimate), so the He II $`\lambda 1085/\lambda 1640`$ line ratio is $`\genfrac{}{}{0pt}{}{<}{}0.13`$. For case B recombination, this ratio is $`>0.13`$ for the full range of densities ($`n=10^2`$$`10^{14}\mathrm{cm}^3`$) and temperatures ($`T=10`$–100 kK) tabulated by Storey & Hummer (1995), and is $`>0.17`$ for $`n>10^{10}\mathrm{cm}^3`$ and $`T<100`$ kK. 
Variability could explain this discrepancy, but if it does not we must appeal to some process which preferentially destroys $`\lambda 1085`$ line photons and/or enhances $`\lambda 1640`$ line photons. If the population of the $`n=2`$ level is high enough to render the $`\lambda 1085`$ transition optically thick, there is a branching ratio of order one half to convert $`\lambda 1085`$ photons into a combination of He II Balmer ($`\lambda 1216`$, $`\lambda 1640`$), Paschen ($`\lambda 4686`$), and Brackett photons; this process would roughly double the flux of $`\lambda 1640`$ photons and thereby resolve the discrepancy. Simultaneously, it is possible for the He II Balmer $`\beta `$ transition to be pumped by H I Lyman $`\alpha `$ photons, which generates $`\lambda 1640`$ and $`\lambda 4686`$ line photons when the ion decays. The strength of the $`\lambda 4686`$ line in the phase-averaged optical spectrum of Hellier et al. (1987) is uncertain because the absolute flux calibration is uncertain, but the measured value is $`f_{4686}0.2\times 10^{12}\mathrm{erg}\mathrm{cm}^2\mathrm{s}^1`$, so the He II $`\lambda 4686/\lambda 1640`$ line ratio is $`0.13`$. This ratio is consistent with the case B line ratios of Storey & Hummer, but it is unfortunately not diagnostic, given the uncertain calibration of the optical spectrum. Nonetheless, the available evidence points to the $`\lambda 1085`$ line flux being lower than expected, but since there are many ways for this to come about, in the absence of a detailed model it does not constrain the plasma conditions. Next consider the narrow-line region, or more specifically why the irradiated face of the secondary of AM Her produces a narrow O VI emission line, while that of EX Hya does not. Shielding of the secondary by the accretion disk may play a role, but we argue that the fundamental reason is that the ionization parameter is simply too low. Although the luminosity of the hard component of the X-ray spectra of both AM Her and EX Hya is $`L2\times 10^{32}\mathrm{erg}\mathrm{s}^1`$ (Ishida et al. (1997); Allan, Hellier, & Beardmore (1998)), AM Her also has a soft component in its X-ray spectrum with $`L2\times 10^{33}\mathrm{erg}\mathrm{s}^1`$ (Paerels et al. (1996)). The ten times lower net ionizing luminosity of EX Hya is exacerbated by the lower efficiency of photoionization by hard X-rays, but is ameliorated by the factor of two smaller distance from the white dwarf to the face of the secondary. Specifically, whereas the ionization parameter of the irradiated face of the secondary of AM Her is $`\xi 2\times 10^{33}\mathrm{erg}\mathrm{s}^1/2\times 10^{10}\mathrm{cm}^3/(5.3\times 10^{10}\mathrm{cm})^2=36`$, for which O VI dominates for dense gas illuminated by a mixture of hard and soft X-rays (model 5 of Kallman & McCray (1982)), that of EX Hya is $`\xi 2\times 10^{32}\mathrm{erg}\mathrm{s}^1/2\times 10^{10}\mathrm{cm}^3/(2.7\times 10^{10}\mathrm{cm})^2=14`$, for which O I dominates for dense gas illuminated by hard X-rays alone (model 4 of Kallman & McCray (1982)). For O VI to dominate in EX Hya, the plasma must lie closer to the source of the ionizing flux. To satisfy the phasing of the radial velocity of the O VI narrow emission component, the narrow-line region must be closer to the white dwarf than the center of mass of the binary ($`r<6.7\times 10^9\mathrm{cm}`$), hence $`n\genfrac{}{}{0pt}{}{>}{}10^{11}\mathrm{cm}^3`$, consistent with the density derived above for the broad-line region. 
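The two ionization-parameter estimates in this comparison are simple arithmetic and can be checked directly (cgs units throughout):

```python
# Ionization parameter xi = L / (n r^2) at the irradiated face of the secondary,
# with the luminosities, densities, and distances quoted above.
def xi(L, n, r):
    return L / (n * r**2)

xi_amher = xi(2e33, 2e10, 5.3e10)   # AM Her: soft X-ray luminosity dominates
xi_exhya = xi(2e32, 2e10, 2.7e10)   # EX Hya: hard X-ray luminosity only
print(f"xi(AM Her) ~ {xi_amher:.0f}, xi(EX Hya) ~ {xi_exhya:.0f}")   # ~36, ~14
```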
Based on the ratio $`R`$ of the O VI line intensities shown in Table 2, the narrow-line region is optically thick ($`R=0.92\pm 0.29`$), while the broad-line region is more likely optically thin ($`R=1.8\pm 1.0`$). ## 6 Summary Using EUVE photometry and ORFEUS and IUE spectroscopy, we have presented a detailed picture of the behavior of EX Hya in the vacuum ultraviolet. Consistent with its behavior in the optical, and hence consistent with the accretion curtain model of EX Hya, we find that the FUV and UV continuum flux densities, the FUV and UV broad emission line fluxes, and the radial velocity of the O VI broad emission component all vary on the spin phase of the white dwarf, with the maximum of the FUV and UV continuum and broad emission line flux light curves coincident with maximum blueshift of the broad O VI emission component. On the binary phase, we find that the strong eclipse of the EUV flux by the bulge on the edge of the accretion disk is accompanied by narrow and relatively weak absorption components of the FUV emission lines and 30%–40% eclipses of all the UV emission lines except He II $`\lambda 1640`$, while the UV continuum is largely unaffected. Furthermore, both the flux and radial velocity of the O VI narrow emission component vary with binary phase. From the relative phasing of the FUV and UV continuum light curves and the FUV emission-line radial velocities, we identify the accretion funnel as the source of the FUV and UV continuum and the O VI broad emission component, and the white dwarf as the source of the O VI narrow emission component. The irradiated face of the secondary of EX Hya does not produce the narrow O VI emission component observed in ORFEUS spectra of AM Her because the ionization parameter (the X-ray luminosity) is too low. Various lines of evidence imply that the density of both the broad- and narrow-line regions is $`n\genfrac{}{}{0pt}{}{>}{}10^{11}\mathrm{cm}^3`$, but the O VI line ratios imply that the narrow-line region is optically thick while the broad-line region is more likely optically thin. As in AM Her, it is likely that the velocity shear in the broad-line region allows O VI photons to escape, rendering the gas effectively optically thin. We thank all those involved with making the ORFEUS-SPAS II mission a success: the members of the satellite and instrument teams at Institute for Astronomy and Astrophysics, University of Tübingen; Space Science Laboratory, University of California, Berkeley; and Landessternwarte Heidelberg-Königstuhl; the flight operations team; and the crew of STS-80. Special thanks (and apologies) are due to K. Mukai for acquiring the extensive set of IUE spectra used herein. F. Keenan is warmly thanked for supplying C III level population and line intensity data. Conversations and correspondence with B. Greeley, C. Hellier, K. Long, J. Raymond, and M. Sirk are gratefully acknowledged. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under contract No. W-7405-Eng-48.
# GRAVITATIONAL COLLAPSE OF FLUID BODIES AND COSMIC CENSORSHIP: ANALYTIC INSIGHTS ## 1 Introduction The investigation on the “final fate” of gravitational collapse of initially regular distributions of matter is one of the most active field of research in contemporary general relativity. It is, indeed, known that under fairly general hypothesis solutions of the Einstein equations with “physically reasonable” matter can develop into singularities . The key problem that still remains unsolved is the nature of such singularities. The main open issue is whether the singularities, which arise as the end point of collapse, can actually be observed. Roger Penrose , was the first to propose the idea, known as cosmic censorship conjecture: does there exist a cosmic censor who forbids the occurrence of naked singularities, clothing each one in an absolute event horizon? This conjecture can be formulated in a “strong” sense (in a “reasonable” spacetime we cannot have a naked singularity) or in a weak sense (even if such singularities occur they are safely hidden behind an event horizon, and therefore cannot communicate with far-away observers). Since Penrose’s proposal there have been various attempts to prove the conjecture (see and references therein). Unfortunately, no such attempts have been successful so far. As a consequence, the research in this field turned to more tractable objectives. In particular, one would like to understand what happens in simple systems, like spherically symmetric ones (interestingly enough, even this apparently innocuous problem is far from being completely solved, although, as we shall see, a general pattern does seem to arise). Our aim here is to overview only models which have a clear physical interpretation. Therefore, we shall require satisfaction of the weak energy condition as well as existence of a singularity free initial data surface. Moreover, we shall take into consideration solutions of the Einstein field equations which are physically meaningful in terms of a (phenomenological) equation of state of the matter. In this respect it is worth mentioning that material schemes having a well defined microscopical interpretation, like the Vlasov-Einstein system, would be closer to such a requisite . However, very little is known on the dynamics of such models (a numerical investigation has been recently carried out ). In presence of excellent general reviews on gravitational collapse and cosmic censorship , we have focused our attention on a specific issue namely, the investigation of analytical models describing the gravitational collapse of massive stars. Therefore we are not going to address here many related important topics. These include Vaidya spacetimes , radiation shells (see and references therein), gravitational collapse of scalar fields , critical behaviour in numerical relativity , stability of Cauchy horizon in Reissner-Nordström spacetimes , the Hoop conjecture among others . ## 2 Einstein equations for spherical collapse What is known analytically in gravitational collapse is essentially restricted to spherical symmetry (one exception is given by the Szekeres “quasi-spherical” spacetimes , but the results are very similar to those holding in Tolman-Bondi models – See Section 4.2.4). Therefore we discuss the mathematical structure of the Einstein field equations describing collapse of a deformable body only in the spherically symmetric case. 
For perfect fluids this structure is well known ; we present here a more general case which takes into account anisotropic materials as well . Also, we consider only non-dissipative processes, since very little is known in the dissipative case . The general, spherically symmetric, non-static line element in comoving coordinates $`t,r,\theta ,\phi `$ can be written in terms of three functions $`\nu ,\eta ,Y`$ of $`r`$ and $`t`$:
$$ds^2=-e^{2\nu }dt^2+\eta ^{-1}dr^2+Y^2(d\theta ^2+\mathrm{sin}^2\theta d\phi ^2).$$ (1)
Throughout this paper, we shall assume that the collapsing body is “materially spherically symmetric” in the sense that all the physical “observables” do not depend on angles. The matter density of the material (baryon number density) is then given by
$$\rho =\rho _0(r)Y^{-2}\sqrt{\eta },$$ (2)
where $`\rho _0(r)`$ is an arbitrary (positive) function. As in any theory of continuous media, to choose a specific material one has to specify the internal energy $`ϵ`$. This function depends on the parameters characterizing the state of strain of the body (for a recent review of relativistic elasticity theory see ). It can be shown that such parameters can be identified with $`Y`$ and $`\eta `$ in the comoving frame (in other words, any deformation is described “gravitationally”). Therefore, we introduce as equation of state of the material a positive function
$$ϵ=\psi (r,Y,\eta ).$$
Here the explicit dependence on $`r`$ takes into account possible inhomogeneities. The energy-momentum tensor can be readily calculated and the result is a diagonal tensor of the form $`T_\nu ^\mu =\mathrm{diag}(-ϵ,\mathrm{\Sigma },\mathrm{\Pi },\mathrm{\Pi })`$. Here the stress-strain relations (i.e. the relations giving the radial stress $`\mathrm{\Sigma }`$ and the tangential stress $`\mathrm{\Pi }`$ in terms of the constitutive function) are given by
$$\mathrm{\Sigma }=2\eta \frac{\partial \psi }{\partial \eta }-\psi ,$$ (3)
$$\mathrm{\Pi }=-\frac{1}{2}Y\frac{\partial \psi }{\partial Y}-\psi .$$ (4)
We shall always use the word stress rather than pressure since both $`\mathrm{\Sigma }`$ and $`\mathrm{\Pi }`$ can in principle be negative (tensions) without violating the energy conditions. Different materials correspond to different choices of the function $`\psi `$. It is, however, worth mentioning that the material scheme most widely used in astrophysical applications is the perfect fluid, which can be characterized as a material whose function of state depends on the number density only ($`ϵ=\stackrel{~}{\psi }(\rho )`$). In this case both stresses coincide:
$$\mathrm{\Pi }=\mathrm{\Sigma }=\rho \frac{d\stackrel{~}{\psi }}{d\rho }-\stackrel{~}{\psi }:=p,$$
where $`p`$ is the isotropic pressure. Two particular cases are worth mentioning in this scheme. One is that of a linear pressure-density relation ($`p=kϵ`$); the corresponding equation of state is $`\stackrel{~}{\psi }(\rho )=\rho (1+A\rho ^k)`$ where $`A`$ is a constant. The other one is the dust model, for which $`p=0`$. In this case the energy is distributed proportionally to the mass ($`\stackrel{~}{\psi }(\rho )=\rho `$). As soon as one allows anisotropy to occur, other interesting models appear (see for a review on the role of anisotropy in relativistic astrophysics). Recently, a particular anisotropic model has been singled out (for previous investigations on this kind of models see references in ). In this model, one assumes that the radial stress identically vanishes.
The key role is played by equation (3), which shows that the dependence of $`\psi `$ on $`\eta `$ must be a multiplicative dependence on $`\sqrt{\eta }`$ only. Therefore, materials with vanishing radial stresses can be characterized, using equation (2), via equations of state of the form
$$\psi (r,\eta ,Y)=\rho h(r,Y),$$ (5)
where $`h`$ is a positive, but otherwise arbitrary, function. Once an equation of state has been chosen, the Einstein field equations become a closed system; in spherical symmetry there are three independent equations for the three variables $`\nu `$, $`\eta `$ and $`Y`$. It has proven, however, to be very useful to write the field equations as a system of four differential equations. This is done by introducing the mass function defined as
$$m(r,t)=\frac{Y}{2}\left(1-Y^{\prime 2}\eta +\dot{Y}^2e^{-2\nu }\right),$$ (6)
where a prime and a dot denote derivatives with respect to $`r`$ and $`t`$ respectively. The mass function is arbitrary (positive) and allows us to write the field equations in the following form (four compatible equations for three variables):
$$m^{\prime }=4\pi ϵY^2Y^{\prime },$$ (7)
$$\dot{m}=-4\pi \mathrm{\Sigma }Y^2\dot{Y},$$ (8)
$$Y^{\prime }\dot{\eta }=-2\eta (\dot{Y}^{\prime }-\dot{Y}\nu ^{\prime }),$$ (9)
$$\mathrm{\Sigma }^{\prime }=-(ϵ+\mathrm{\Sigma })\nu ^{\prime }-2(\mathrm{\Sigma }-\mathrm{\Pi })(Y^{\prime }/Y).$$ (10)

## 3 Physical reasonability and initial data

It is easy to produce new “solutions” of the Einstein field equations in “matter”. Indeed, just pick a metric at will and claim that this is a “solution” referring to the calculated energy momentum tensor. Of course, what one has to do to remove the quotation marks from the above statements is to check the physical reasonability of the results. First of all, the weak energy condition must be imposed on the energy momentum tensor. This condition requires $`T_{\mu \nu }u^\mu u^\nu \ge 0`$ for any non-spacelike $`u^\mu `$ and implies, besides positivity of $`ϵ`$, non-negativeness of $`ϵ+\mathrm{\Sigma }`$ and $`ϵ+\mathrm{\Pi }`$. Due to equations (3) and (4), such conditions are equivalent to differential inequalities on the function $`\psi `$, namely, $`\partial \psi /\partial \eta \ge 0`$ and $`\partial \psi /\partial Y\le 0`$. Imposing the weak energy condition per se does not assure physical reasonability, since there is no guarantee that the energy momentum tensor can be deduced from a field theoretic description of matter. What is needed is the satisfaction of a suitable equation of state. We shall require, in addition, the equation of state to be locally stable (this last requirement could be relaxed in the presence of rotationally-induced stress). The local stability condition requires the (local) equilibrium state of the material to be unstrained. In the “comoving picture” this amounts to saying that the energy must have an absolute minimum at the flat-space values of the metric. Let us collect the functions describing admissible equations of state in spherical symmetry in a set
$$𝚿=\{\psi \in C^2(\mathrm{R}_+^3,\mathrm{R}_+):\psi (r,r,1)=\mathrm{min}\psi (r,Y,\eta ),\ \partial \psi /\partial \eta \ge 0,\ \partial \psi /\partial Y\le 0\}.$$
We shall always assume that the value of $`\psi `$ at the minimum (which in general can be a function of $`r`$) has been rescaled to unity (this can be done without loss of generality). A solution of the Einstein field equations describes the collapse of an initially regular distribution of matter only if the spacetime admits a spacelike hypersurface ($`t=0`$, say) which carries regular initial data.
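As an aside, the stress–strain relations (3)–(4) and the characterization (5) can be checked symbolically. The sketch below (using sympy) assumes the forms of equations (2)–(4) exactly as written above, verifies that a representative power-law perfect fluid gives $`\mathrm{\Sigma }=\mathrm{\Pi }=p`$, and verifies that $`\psi =\rho h(r,Y)`$ gives $`\mathrm{\Sigma }=0`$ identically.

```python
import sympy as sp

r, Y, eta, A, g = sp.symbols('r Y eta A g', positive=True)
rho0 = sp.Function('rho0')(r)
rho = rho0 * sp.sqrt(eta) / Y**2                      # matter density, eq. (2)

def stresses(psi):
    """Radial and tangential stresses from the constitutive function psi."""
    Sigma = 2 * eta * sp.diff(psi, eta) - psi              # eq. (3)
    Pi = -sp.Rational(1, 2) * Y * sp.diff(psi, Y) - psi    # eq. (4)
    return sp.simplify(Sigma), sp.simplify(Pi)

# Perfect fluid with a representative power-law function of state,
# psi = A*rho**g: both stresses reduce to p = rho d(psi)/d(rho) - psi.
Sigma_pf, Pi_pf = stresses(A * rho**g)
p = A * (g - 1) * rho**g
print(sp.simplify(Sigma_pf - p), sp.simplify(Pi_pf - p))   # expect: 0 0

# Vanishing radial stress: psi = rho*h(r, Y) (eq. (5)) gives Sigma = 0 identically.
h = sp.Function('h')(r, Y)
print(sp.simplify(stresses(rho * h)[0]))                   # expect: 0
```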
Regularity of the initial data means that the metric, its inverse, and the second fundamental form all have to be continuous at $`t=0`$. On the initial hypersurface we use the scaling freedom of the $`r`$ coordinate to set $`Y(r,0)=r`$. We call a set of initial data complete if it is minimal, in the sense that no part of its content can be gauged away by a coordinate transformation. We will now prove that a complete set of initial data for equations (7)-(10), at fixed equation of state, is composed of a pair of functions. Physically, such functions describe the initial distribution of energy density $`ϵ_0=ϵ(r,0)`$ and of velocity $`V_0=e^{-\nu (r,0)}\dot{Y}(r,0)`$. It is sometimes convenient to parameterize these two distributions in terms of two other functions $`\{F(r),f(r)\}`$, where $`F(r)=m(r,0)`$ is the initial distribution of mass and $`f(r)`$ is called the “energy function”. The relationship between the two sets $`\{F,f\}`$ and $`\{ϵ_0,V_0\}`$ is given by the following formulae:
$$F^{\prime }=4\pi r^2ϵ_0,\qquad f=V_0^2-2F/r.$$ (11)
The function $`F`$ has to be non-negative, with $`\underset{r\to 0}{lim}F(r)/r^3`$ finite and non-vanishing (so that the central density $`ϵ_0(0)`$ is finite and non-zero), while $`f`$ has to be greater than $`-1`$ to preserve the signature of the metric (see equation (13) below), with $`\underset{r\to 0}{lim}f(r)=0`$. The proof that only two arbitrary functions of $`r`$ are a complete set of initial data at fixed equation of state is implicitly contained in many papers on spherical collapse (see e.g. ). It seems, however, that a complete proof has never been published in detail, so we take this occasion to give it. We denote by a subscript the initial value of each quantity appearing in the Einstein field equations. We know $`F=m_0`$, $`V_0=e^{-\nu _0}\dot{Y}_0`$. Since $`Y_0=r`$ and $`\psi `$ is a known function of $`r,Y`$ and $`\eta `$, its initial value $`\psi _0`$ is also known - as well as the initial values of the stresses due to formulae (3) and (4) - as a function of $`r`$ and $`\eta _0`$ ($`\psi _0=\psi (r,r,\eta _0)`$). Evaluation of equation (7) at $`t=0`$ now gives $`\psi _0=F^{\prime }/(4\pi r^2)`$, from which the value of $`\eta _0`$ can be extracted algebraically. At this point, the remaining three field equations can be used to evaluate the remaining data, i.e., $`\dot{m}_0`$, $`\nu _0^{\prime }`$ and $`\dot{\eta }_0`$:
$$\dot{m}_0=-4\pi \mathrm{\Sigma }_0r^2V_0,\qquad \nu _0^{\prime }=-\frac{\mathrm{\Sigma }_0^{\prime }}{\psi _0+\mathrm{\Sigma }_0}-2\frac{\mathrm{\Sigma }_0-\mathrm{\Pi }_0}{r(\psi _0+\mathrm{\Sigma }_0)},\qquad \dot{\eta }_0=-2\eta _0e^{\nu _0}V_0^{\prime }.$$ (12)
This completes the proof. In what follows, we consider only solutions which can be interpreted as models of collapsing stars, i.e. isolated objects rather than “universes”. This is possible only if the metric matches smoothly with the Schwarzschild vacuum solution (the matching between two metrics is smooth if both the first and the second fundamental form are continuous on the matching surface).

## 4 Classification and nature of singularities

To understand the collapsing scenarios as well as the nature of the singularities, one would like to analyse exact solutions of the Einstein field equations. However, it goes without saying that the non-linearity of such equations makes them essentially intractable, even in spherical symmetry, without further simplifying hypotheses (like e.g. vanishing shear or acceleration, see ).
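A minimal numerical sketch of this construction—building $`ϵ_0`$ and $`V_0`$ from a chosen pair $`\{F,f\}`$ via equation (11)—is given below; the profiles are illustrative only and are not taken from any of the models discussed later.

```python
import numpy as np

# Building the physical initial data {eps_0, V_0} from a chosen pair {F, f}
# via equation (11).  The profiles below are illustrative only.
r = np.linspace(1e-3, 1.0, 1000)
F = 0.1 * r**3 * (1.0 - 0.5 * r**3)          # F >= 0, F ~ r^3 near the center
f = -0.05 * r**2                              # f > -1 everywhere, f -> 0 as r -> 0

eps0 = np.gradient(F, r) / (4.0 * np.pi * r**2)   # F' = 4 pi r^2 eps_0
V0 = -np.sqrt(f + 2.0 * F / r)                    # f = V_0^2 - 2F/r; V_0 < 0 for collapse

print(f"central density ~ {eps0[0]:.4f},  max |V_0| = {np.abs(V0).max():.4f}")
print("energy density non-negative on the data:", bool(np.all(eps0 >= 0.0)))
print("no trapped surfaces initially (2F/r < 1):", bool(np.all(2.0 * F / r < 1.0)))
```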
The mathematical structure of spherical collapse discussed in the previous Section shows that there is a one-to-one correspondence between solutions and choices of triplets ($`s`$, say) of functions $`s=\{F(r),f(r),\psi (r,\eta ,Y)\}`$ (we are, of course, identifying solutions modulo gauge transformations). We denote the set of solutions parameterized in this way by $`𝒮`$. Since in this parameterization we have already taken into account regularity as well as physical admissibility, the whole physical content of the cosmic censorship problem can be translated in the mathematical terms of predicting the endstate of any choice of $`s𝒮`$. ### 4.1 Non-singular solutions. The existence of non-singular, non-static solutions of the Einstein field equations is not forbidden by the singularity theorems (we refer the reader to for a recent review on singularity theorems and further references) and a gravitational collapse can lead to a bouncing back, at a finite area radius, without singularity formation. This phenomenon can be recursive, producing an eternally oscillating, globally regular solution . Speaking very roughly, one can prepare regular initial data in such a way that the region of possible trapped surface formation is disconnected from the data. In this way the remaining hypothesis of the singularity theorems can be satisfied without singularity formation (see also ). Much more exotic than the oscillating solutions, are the globally regular solutions that describe non-singular blackholes . These are matter objects “sitting” inside their Schwarzschild radii. Their finite extension replaces the central singularity with a matter-filled, non singular region. In this case trapped surfaces are obviously present and therefore the strong energy condition must be violated (a simple argument shows that also the dominant energy condition is necessarily violated). In any case, such strange objects are not ruled out by general relativity as far as only the weak energy condition and an equation of state are required . However, as far as we are aware, it is not known if a fully dynamic solution exists which could eventually lead to this exotic end state. ### 4.2 Singular solutions We now move to the case in which a singularity is formed in the future of a regular initial data set. The singularities of spherically symmetric matter filled spacetimes can be recognized from divergence of the energy density and curvature scalars, like e.g. the Kretschmann scalar $`R^{\mu \nu \rho \sigma }R_{\mu \nu \rho \sigma }`$. Essentially, these singularities can be of two kinds. We shall call shell crossing singularities those at which $`Y^{}`$ vanishes ($`Y0`$), and shell focusing singularities those at which $`Y`$ vanishes. To the two kind of singularities correspond two curves $`t_{sc}(r)`$ and $`t_f(r)`$ in the $`r`$-$`t`$ plane, defined by $`Y^{}(r,t_{sc}(r))=0`$ and by $`Y(r,t_f(r))=0`$ respectively. Physically, the shell crossing curve gives the time at which two neighbouring shells of matter intersect each other, while the shell focusing curve identifies at which time the shell labeled $`r`$ “crushes to zero size”. Of these two kinds of singularities, the one of physical interest is obviously that occurring first in the sense that, at fixed $`r`$, one has $`t_f>t_{sc}`$ or vice versa. It would be very interesting, therefore, to carry out a study of the field equations in order to obtain conditions for shell crossing avoidance in spherical spacetimes. 
This study has been up to now carried out only for dust spacetimes . From the point of view of censorship, the nature of a singularity in an asymptotically flat, initially regular spacetime can be one of the following. First of all, a singularity can be spacelike, like e.g. the Schwarzschild singularity or the singularity occurring at the endstate of the collapse of a Oppenheimer–Snyder dust cloud (see Section 4.2.2 below). These singularities lie in the future of all possible observers and therefore are strongly censored (i.e. allowed by strong cosmic censorship). If a singularity is not strongly censored, then it is naked, i.e. visible to some observer. However, two different cases can occur, namely the singularity can be locally or globally naked. A singularity is locally naked if light signals can emerge from it but fall back without reaching any asymptotic observer. A singularity of this kind will be visible only to observers who have crossed the horizon; therefore, the weak cosmic censorship holds for such endstates. An example of a spacetime containing a locally naked singularity is provided by the Kerr spacetime with mass greater than the angular momentum per unit mass (or by the Reissner–Nordström spacetime with mass greater than charge). Finally, a singularity is globally naked if light-rays emerging from it can reach an asymptotic observer. #### 4.2.1 Shell crossing singularities. The first explicit example showing formation of a naked singularity was found as a shell crossing singularity in a spherical dust cloud . It can be shown that these singularities are timelike and are always locally naked. Some definitions have been proposed to put singularities “in the order of increasing seriousness” . Essentially, what is done is to check the behaviour of the invariants of the Riemann tensor in the approach to the singularity. According to such criteria, the shell crossing singularities turn out to be “weak” at least as compared with the shell focusing singularities. This “weakness’ is considered by some authors as an hint of a possible extension of the spacetime . However, in spite of their “weakness”, there is at the moment no available general proof of extendibility of spacetimes through a shell cross (although some encouraging results exist, see ). The unique exception is a paper by Papapetrou and Hamoui . In this paper the authors claim to have explicitly found the extension in the case of “degenerate” shell crossing singularities, i.e. when the curve $`t_{sc}(r)`$ degenerates to an “instant of time” $`t_{sc}=T=\mathrm{const}.`$ In this case, it is easy to check that the crossing happens at a “point” $`r=\mathrm{const}.`$ rather than at a “3-space” and this is the key to their treatment. However, some results of this paper are unclear from the physical point of view. What is actually available is only a continuous extension of shell crossing singularities exists in the dust case . We shall show this in a slightly more general case. Integrating equation (9) formally with respect to time we can write $$\eta ^1=\frac{Y^2}{1+f}\mathrm{\Omega }^2,$$ (13) where $`f`$ is the energy function and $`\mathrm{\Omega }(r,t)=\mathrm{exp}\left(_0^t\frac{\nu ^{}\dot{Y}}{Y^{}}𝑑\stackrel{~}{t}\right)`$. 
Changing variable from $`r`$ to $`Y=Y(r,t)`$ one gets the metric
$$ds^2=\frac{\mathrm{\Omega }^2}{1+f}\left[\left(\dot{Y}^2-(1+f)\mathrm{\Omega }^{-2}e^{2\nu }\right)dt^2-2\dot{Y}dYdt+dY^2\right]+Y^2(d\theta ^2+\mathrm{sin}^2\theta d\phi ^2).$$
If $`\mathrm{\Omega }`$ is finite and non-vanishing at $`t=t_{sc}(r)`$, the above metric is continuous with continuous inverse at such a surface. This happens if $`\nu ^{\prime }`$ vanishes (dust case) or, more generally, if $`\nu ^{\prime }`$ goes to zero at least as fast as $`Y^{\prime }`$ at the shell cross (this happens, for instance, in the case of vanishing radial stresses). In a recent paper , Szekeres and Lun have shown that there exists a system of coordinates in which the metric is of class $`C^1`$ as $`t\to t_{sc}^{-}`$. However, again, this result per se does not show extendibility of the spacetime (see also ).

#### 4.2.2 Non-central shell focusing singularities

It is important to distinguish the central shell focusing singularity, i.e. that occurring at $`r=0`$, from the other focusing singularities, since in many cases it is easy to prove that non-central singularities are censored. A necessary condition for the visibility of a “point” $`r`$ is that the condition $`1-\frac{2m(r,t)}{Y(r,t)}>0`$, implying absence of trapped surfaces (see next Sub–section), is satisfied. Since the apparent horizon is the boundary of the region containing trapped surfaces, the above condition implies that the time of formation of the apparent horizon $`t_{ah}(r)`$, defined as $`2m(r,t_{ah}(r))=Y(r,t_{ah}(r))`$, must not be before $`t_f(r)`$, i.e. $`t_{ah}(r)\ge t_f(r)`$. Now suppose $`m(r,t_f(r))`$ to be different from zero (as we have seen, this is not the case at the central singularity, where $`m`$ has to vanish as $`r^3`$). Then $`1-2m(r,t)/Y(r,t)`$ goes to minus infinity as $`t`$ tends to $`t_f(r)`$, so that the singularity is covered. This shows that all the naked shell focusing singularities are necessarily massless, in the sense that $`m`$ has to vanish there . It follows that any non-central singularity will certainly be censored if the mass is an increasing function with respect to $`t`$. Now equation (8) gives $`\dot{m}=-4\pi \mathrm{\Sigma }Y^2\dot{Y}`$ and, since $`\dot{Y}<0`$ during collapse, we conclude that non-central singularities are always covered if the radial stress $`\mathrm{\Sigma }`$ is non-negative (in particular, all non-central singularities occurring in dust as well as in models with only tangential stress are covered, since the mass does not depend on time in this case). In the presence of radial tensions, the question is still open. It is known that a perfect fluid with $`p=kϵ`$ exhibits naked non-central singularities if $`k<-1/3`$ .

#### 4.2.3 Central singularities: the root equation

The first explicit examples of the formation of a naked shell focusing singularity were provided by Eardley , Eardley and Smarr and by Christodoulou . Since then the techniques to study the nature of the singularities in spherically symmetric spacetimes have been developed by many authors (see references in ) and finally settled by Dwivedi and Joshi . The key idea is the following: if the singularity is visible, at least locally, there must exist light signals coming out from it. Therefore, by investigating the behaviour of radial null geodesics near the singularity, one can try to find out if outgoing null curves meet the singularity in the past.
On such radial null geodesics, the derivative of $`Y(r,t)`$ reads
$$\frac{dY}{dr}=Y^{\prime }+\eta ^{-1/2}\dot{Y}e^{-\nu }.$$
Using (6) and (13) in the above equation we obtain
$$\frac{dY}{dr}=Y^{\prime }\left[1-\sqrt{1+\frac{\mathrm{\Omega }^2}{1+f}\left(\frac{2m}{Y}-1\right)}\right],$$ (14)
where $`Y^{\prime }`$ has to be understood as a known function of $`Y`$ and $`r`$. If the singularity is naked, equation (14) must have at least one solution with definite, outgoing tangent at $`r=0`$, i.e. a solution of the kind $`Y=X_0r^\alpha `$ where $`\alpha >1`$ and $`X_0`$ is a positive constant. Clearly, this behaviour is possible only if the necessary condition $`1-2m/Y>0`$ is satisfied. Indeed $`Y^{\prime }`$ is equal to one, and therefore positive, on the initial data surface. If no shell crossing occurs, it remains positive, so that the right hand side of equation (14) cannot remain positive if $`2m/Y-1`$ changes sign. Once the necessary condition is satisfied, one has to check if both $`X_0`$ and $`\alpha `$ exist such that the solution of equation (14) is of the specified form near the singularity. On using L’Hospital’s rule we have
$$X_0=\underset{r\to 0}{lim}\left(\frac{1}{\alpha r^{\alpha -1}}\frac{dY}{dr}\right)_{Y=X_0r^\alpha }.$$ (15)
Using again (14), this equation becomes an algebraic relation for $`X_0`$ at fixed $`\alpha `$. If a positive definite $`X_0`$ exists, the singularity is naked.

#### 4.2.4 The dust case

The general exact solution of the Einstein field equations is known in the most simple case of vanishing pressure (dust) . In this case, from equation (10), one gets $`\nu ^{\prime }=0`$ (more precisely, $`e^\nu `$ is an arbitrary function of $`t`$ only which can be rescaled to unity without loss of generality). Then $`\mathrm{\Omega }=1`$ and it follows from equation (13) that $`\eta ^{-1}=Y^{\prime 2}/(1+f)`$. The mass is constant in time ($`m=F(r)`$) due to equation (8) with $`\mathrm{\Sigma }=0`$. Therefore (6) can be written as a Kepler-like equation ($`\dot{Y}^2=f+2F/Y`$), which is integrable in parametric form for $`f\ne 0`$ and in closed form for $`f=0`$. Finally, the density can be read off from (7) as $`ϵ=F^{\prime }/(4\pi Y^2Y^{\prime })`$. A great deal of effort has been devoted to understanding the nature of the central singularity in this solution, and we now know the complete spectrum of possible endstates of the dust evolution in dependence of the initial data. We recall here what happens in the case of marginally bound solutions ($`f=0`$), since this case is sufficiently general to illustrate the tendency and simple enough to be recalled in a few lines. For marginally bound dust, the solutions $`s=\{F,0,1\}`$ can be uniquely characterized by the expansion of the function $`F(r)`$ at $`r=0`$ or, and that is the same, by the expansion of $`ϵ_0=F^{\prime }/(4\pi r^2)`$. Using this expansion in the root equation, it is not difficult to check the following results:

* If the first non–vanishing term corresponds to $`n=1`$ or $`n=2`$, equation (15) always has a real positive root: the singularity is naked;
* If the first non–vanishing term is $`n=3`$, the root equation reads
$$2x^4+x^3+\xi x-\xi =0,$$ (16)
where $`x^2=X`$ and $`\xi =F_3/F_0^{5/2}`$. From the theory of quartic equations one can show that this equation admits a real positive root if $`\xi \le \xi _c=-(26+15\sqrt{3})/2\simeq -25.99`$ (a numerical check of this critical value is sketched below). Therefore, $`\xi _c`$ is a critical parameter: at $`\xi =\xi _c`$ a “phase transition” occurs and the endstate of collapse turns from a naked singularity to a blackhole.
* If $`n>3`$ the limit in equation (15) diverges: the singularity is covered.
In particular, this case contains the solution first discovered by Oppenheimer and Snyder describing a homogeneous dust cloud. The naked singularities mentioned above are locally naked. It can, however, be shown that if locally naked singularities occur in dust spacetimes, then spacetimes containing globally naked singularities can be build up from these by matching procedures. The Penrose diagrams corresponding to the three different cases are shown in Figure 1. The above results can be extended to the general case of collapsing dust clouds, so that the final fate of the dust solutions $`s=\{F,f,1\}`$ is completely known. The final fate depends on a parameter which is a combination of coefficients of the expansions of $`F`$ and $`f`$ near $`r=0`$, and a structure similar to that of marginally bound collapse arises (see for details). #### 4.2.5 Vanishing radial stresses Recently, the general solution for spherically symmetric dust has been extended to the case in which only the radial stress vanishes . The solution can be reduced to quadratures using a system of coordinates first introduced by Ori for charged dust. One of the new coordinate is the mass $`m`$ which is constant in time (due to (8) with $`\mathrm{\Sigma }=0`$), the other coordinate is the “area radius” $`Y`$. In such coordinates the metric reads $$ds^2=\mathrm{\Gamma }^2\left(1\frac{2m}{Y}\right)dm^2+2\sqrt{1+f}\frac{\mathrm{\Gamma }}{hu}dYdm\frac{1}{u^2}dY^2+Y^2(d\theta ^2+\mathrm{sin}^2\theta d\phi ^2),$$ (17) where $$u=\sqrt{1+\frac{2m}{Y}+\frac{1+f}{h^2}},\mathrm{\Gamma }=g(m)+\frac{h}{u^2\sqrt{1+f}}\frac{u}{m}𝑑Y,$$ (18) and the function $`g(m)`$ is arbitrary. The problem of understanding the nature of the singularities for such solutions is essentially still open. It is, indeed, possible to write the root equation in explicit form, but this equation contains a sort of “non-locality” due to the integral entering in the definition of $`\mathrm{\Gamma }`$. As a result only a few special cases have been analysed so far . Among the solutions with tangential stresses a particularly interesting one is the Einstein cluster . This is a spherically symmetric cluster of rotating particles. The motion of the particles is sustained by the angular momentum $`L`$ whose average effect is to introduce a non vanishing tangential stress in the energy-momentum tensor. The corresponding equation of state has the form $`h(m,Y)=\sqrt{1+L^2(m)/Y^2}`$. Therefore, a solution is uniquely identified by the choice of three arbitrary functions of $`m`$ only, namely $`F,f`$ and $`L^2`$ (for $`L=0`$ one recovers dust). It turns out that the final state “at fixed dust background” (i.e. for fixed $`F,f`$) depends on the expansion of $`L^2`$ near $`m=0`$ ($`L\beta m^y`$, say) . Considering, for simplicity, the marginally bound case, one finds that for $`4/3<y<2`$ either the singularity does not form (the system bounces back) or a blackhole is formed. The threshold of naked singularity formation lies at $`y=2`$, where a $`2`$-parameter structure very similar to that of dust occurs. At $`y=7/3`$ a sort of transition takes place and the evolution of the model is such that only the critical branch is changed, un-covering a part of the blackhole region in the corresponding dust spacetime; the non-critical branch is the same as in dust spacetimes. Finally for $`y>7/3`$ the evolution always leads to the same end state of the corresponding dust solution. 
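As a quick numerical check of the critical parameter quoted for the marginally bound dust case, the sketch below scans the root equation (16) for real positive roots and locates the transition; it assumes only the quartic and the value of $`\xi _c`$ as given above.

```python
import numpy as np

# Check of the critical parameter for marginally bound dust: the quartic
# 2x^4 + x^3 + xi*x - xi = 0 (eq. (16)) acquires a real positive root -- the
# naked-singularity branch -- once xi falls below xi_c = -(26 + 15*sqrt(3))/2.
def has_positive_real_root(xi, tol=1e-7):
    roots = np.roots([2.0, 1.0, 0.0, xi, -xi])
    return bool(np.any((np.abs(roots.imag) < tol) & (roots.real > tol)))

xi_c = -(26.0 + 15.0 * np.sqrt(3.0)) / 2.0
for xi in (-10.0, -20.0, xi_c + 0.01, xi_c - 0.01, -30.0, -100.0):
    print(f"xi = {xi:9.4f}   naked branch: {has_positive_real_root(xi)}")

# Bisect for the transition and compare with the analytic value.
lo, hi = -30.0, -20.0            # root exists at lo, none at hi
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if has_positive_real_root(mid) else (lo, mid)
print(f"transition near xi = {lo:.6f}   (analytic xi_c = {xi_c:.6f})")
```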
#### 4.2.6 Self-similar collapse A spherically symmetric spacetime is self-similar if it admits an homothetic vector $`\xi `$, i.e. a vector satisfying $`_\xi g_{\mu \nu }=2g_{\mu \nu }`$. In the comoving frame the dimensionless variables $`\nu `$, $`\lambda `$ and $`Y/r`$ depend only on the “similarity variable” $`z:=r/t`$, and the Einstein field equations become ordinary differential equations (we refer the reader to for a complete treatment of self-similar solutions). Being governed by ordinary differential equations, self-similar spherical collapse can be analyzed with the powerful techniques of dynamical systems theory . The analysis of the singularities forming in self-similar spacetimes has been done by many authors for different equations of state, like dust , barotropic perfect fluids , radiation (Vaidya) shells and in general cases . The picture arising resembles the dust case in the sense that both naked singularities and blackholes can form depending on the values of the parameters characterizing the solution. A thoroughly review of these and other features of self-similar solutions can be found in ; here we limit ourself to stress that naked singularities exist in self-similar solutions with pressure, thereby showing that the phenomenon of naked singularities formation cannot be considered as an artifact of dust (i.e. vanishing stresses) solutions. #### 4.2.7 General stresses The problem of predicting the final fate of an initially regular distribution of matter supported by an arbitrary distribution of stress-energy (including e.g. the case of isotropic perfect fluids but also anisotropic crystalline structures which are thought to form at extremely high densities), is still open even in the spherically symmetric case. First of all, one has to take into account the fact that there is a high degree of uncertainty in the properties of the equation of state at very high densities. Recently, Christodoulou initiated the analysis of a simple model composed by a dust (“soft”) phase for energy density below a certain value $`\overline{ϵ}`$ and a stiff (“hard”) phase for $`ϵ>\overline{ϵ}`$ (in the hard phase the pressure is given by $`p=ϵ\overline{ϵ}`$). Although the details of the collapse in presence of general matter fields are still largely unknown, it is very unlikely that the “embarrassing” examples of naked singularity formation like those occurring in dust can eventually be eliminated with the “addition” of stresses. Indeed, it is reasonable to think that a sector of naked singularities exists in the choice of initial data for any fixed equation of state . The main issues that have, therefore, to be addressed are the genericity and the stability of naked singularity formation. Both the above italicized terms have a somewhat intuitive meaning that is, however, difficult to express in mathematical terms. Regarding genericity, one can mean that the set of initial data leading to naked singularities is not of measure zero. For instance, it has been shown that among the dust solutions $`s=\{F,f,1\}`$ naked singularities are generic in the sense that at a fixed density profile $`F`$, one can always choose energy functions $`f`$ leading to blackholes or naked singularities. A generalization of such a result would be that the naked singularities are generic - in this specific sense - at fixed, but arbitrary, equation of state $`\psi `$ (there is some convincing evidence for this, see ). The issue of stability is even more delicate than that of genericity. 
Indeed, any exact solution of a physical theory must survive to small but arbitrary perturbations in order to serve as a candidate for describing nature. In the case of exact solutions of the Einstein field equations, the notion of stability is very delicate due to to the gauge invariance of the theory with respect to spacetime diffeomorphisms. Recently, some evidence of stability of dust naked singularities against perturbations has been obtained . ## 5 Discussion: a picnic on the side of the road It is well known that, if the mass of a collapsing object does not fall below the neutron star limit ($`3M_{}`$), no physical process is able to produce enough pressure to balance the gravitational pull so that continued gravitational collapse must occur. It is widely believed that the final state of this process is a blackhole. However, what general relativity actually predicts, in the cases which have been analysed so far, is that either a blackhole or a naked singularity is formed, depending on the initial distribution of density and velocity and on the constitutive nature of the collapsing matter. One may raise the objection that most of the known analytical results could be an artifact of spherical symmetry. However, from the numerical point of view some evidence that this is not the case is coming up. Therefore, as singularity theorems showed that the singularities occurring in collapse are generic and not any artifact of symmetry, a similar situation may hold for the nature of singularities as well. One may at this point ask if and when, a cosmic censorship theorem holds in nature. An answer to such a question could be in the negative. However, it remains to understand the physics underlying the end state of gravitational collapse with respect to the choice of initial data at fixed matter model. In a famous novel (that inspired the film Stalker by A. Tarkovsky) a short visit of extraterrestrial life on the earth occurs. The gap between the two civilizations is so high that human beings are, with respect to the “garbage” left by the visitors, like ants exploring what remains on the side of the road after a human picnic. Something they find is useful, something useless, something even dangerous, but anyway everything is obscure and difficult to understand, looking like the weak shadow of a wonderful abyss of knowledge. Our present cosmic censorship understanding resembles this situation. Indeed, we are getting a variety of mathematical hints with somewhat obscure physical meanings. For instance: the condition on $`\xi `$ recalled in Section 4.2.4, the constraints arising in the gravitational collapse of Einstein clusters, the dimensionless numbers arising in Choptuik’s numerical results , the condition on the radiation flux which arises in Vaidya collapse . Such “numbers” should presumably be the remnant weak shadow of a general theorem when its hypotheses are enormously restricted by the choice of the equation of state and of the adopted symmetries. To get rid of this puzzle appears to be one of the most exciting objectives of future research in classical relativity theory. ## Acknowledgements Many discussions with Elisa Brinis and Pankaj Joshi are gratefully acknowledged. S. J. thanks the ICSC World Laboratory (Lausanne, Switzerland) for the Chandrasekhar Memorial Fellowship (1998-99).
# High Energy Astrophysics ## I The Relativistic Universe Our universe is dominated by objects emitting radiation via thermal processes. The blackbody spectrum dominates, be it from the microwave background, the sun or the accretion disks around neutron stars. This is the ordinary universe, in the sense that anything on an astronomical scale can be considered ordinary. It is tempting to think of the thermal universe as THE UNIVERSE and certainly it accounts for much of what we see. However to ignore the largely unseen, non-thermal, relativistic, universe is to miss a major component and one that is of particular interest to the physicist, particularly the particle physicist. The relativistic universe is pervasive but largely unnoticed and involves physical processes that are difficult to emulate in terrestrial laboratories. The most obvious local manifestation of this relativistic universe is the cosmic radiation, whose origin, 86 years after its discovery, is still largely a mystery (although it is generally accepted, but not proven, that much of it arises in shock waves from galactic supernova explosions). The existence of a steady rain of particles, whose power law spectrum attests to their non-thermal origin and whose highest energies extend far beyond that achievable in man-made particle accelerators, attests to the strength and reach of the forces that power this strange relativistic radiation. If thermal processes dominate the ”ordinary” universe, then truly relativistic processes illuminate the ”extraordinary” universe and must be studied, not just for their contribution to the universe as a whole but as the denizens of unique cosmic laboratories where physics is demonstrated under conditions to which we can only extrapolate. The observation of the extraordinary universe is difficult, not least because it is masked by the dominant thermal foreground. In places, we can see it directly such as in the relativistic jets emerging from AGNs but, even there, we must subtract the foreground of thermal radiation from the host elliptical galaxy. Polarization leads us to identify the processes that emit the radio, optical and X-ray radiation as synchrotron emission from relativistic particles, probably electrons, but polarization is not unique to B synchrotron radiation and the interpretation is not always unambiguous. The hard, power-law, spectrum of many of the non-thermal emission processes immediately suggests the use of the highest radiation detectors to probe such processes. Hence hard X-ray and $`\gamma `$-ray astronomical techniques must be the observational disciplines of choice for the exploration of the relativistic universe. Because the earth’s atmosphere has the equivalent thickness of a meter of lead for this radiation, its exploitation had to await the development of space platforms for X-ray and $`\gamma `$-ray telescopes. Although the primary purpose of the astronomy of hard photons is the search for new sources, be they point-like, extended or diffuse, it opens the door to the investigation of more obscure phenomenon in high energy astrophysics and even in cosmology and particle physics. Astronomy at energies up to 10 GeV has made dramatic progress since the launch of the Compton Gamma Ray Observatory in 1991 and that work has been summarized . Beyond 10 GeV it is difficult to efficiently study $`\gamma `$-rays from space vehicles, both because of the sparse fluxes which necessitate large collection areas and the high energies which make containment a serious problem. 
The development of techniques whereby $`\gamma `$-rays of energy 100 GeV and above can be studied from the ground, using indirect, but sensitive, techniques is relatively new and has opened up a new area of high energy photon astronomy with some exciting possibilities and some preliminary results. The latter include the detection of TeV photons from supernova remnants and from the relativistic jets in AGNs. Such observations seriously constrain the models for such sources and in many cases lead to the development of a new paradigm. There remains the possibility that the annihilation lines from neutralinos might be discovered in the GeV-TeV region, that the evaporation of primordial black holes might be manifest by the emission of bursts of TeV photons, that the infrared density of intergalactic space might be probed by its absorbing effect on TeV photons from distant sources, and even (in some models) that the fundamental quantum gravity energy scale might be constrained by the observation of short-term TeV flares in extragalactic sources. ## II Detection Technique The techniques of ground-based Very High Energy (VHE) $`\gamma `$-ray astronomy are not new but only achieved credibility in the late eighties with the detection of the Crab Nebula. The most sensitive technique, the atmospheric Cherenkov imaging technique, is the one that has been most successful and is now in use at some eight observatories. Its history and present status has been reviewed elsewhere . It is an optical ”telescope” technique and thus suffers the usual limitations associated with optical astronomy: limited duty cycle, weather dependence, limited field of view. But it also has the advantage that it is relatively inexpensive because it uses the same detector technology (photomultipliers) as optical astronomy, the same optical reflectors that borrow from solar energy investigations, and the same pulse processing techniques that are routinely used in high energy particle physics. In addition the Cherenkov technique operates in an energy regime where the physics of particle interactions is relatively well understood and where there exist advanced Monte Carlo programs for the simulation of particle cascades. In recent years, VHE $`\gamma `$-ray astronomy has been dominated by two advances in technique: the development of the atmospheric Cherenkov imaging technique, which led to the efficient rejection of the hadronic background, and the use of arrays of atmospheric Cherenkov telescopes to measure the energy spectra of $`\gamma `$-ray sources. The former is exemplified by the Whipple Observatory 10m telescope (Figure 1) with more modern versions CAT, the French telescope in the Pyrenees, and the Japanese-Australian CANGAROO telescope in Woomera, Australia. The most significant examples of the latter are the five telescope array of imaging telescopes on La Palma in the Canary Islands which is run by the Armenian-German-Spanish collaboration, HEGRA, and the four, soon to be seven, Telescope Array in Utah which is operated by a group of Japanese institutions. These techniques are relatively mature and the results from observations with overlapping telescopes are in good agreement. Vigorous observing programs are now in progress at all of these facilities; the vital observing threshold has been achieved whereby both galactic and extragalactic sources have been reliably detected. Many exciting results are anticipated as more of the sky is observed with this generation of telescopes. 
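As a rough illustration of the physics the technique rests on, the Cherenkov condition $`\mathrm{cos}\theta _c=1/(n\beta )`$ fixes both the small emission angle and the energy threshold of the shower electrons. The sketch below assumes a nominal refractive index of air near sea level; the numbers are illustrative and not taken from the text.

```python
import math

n = 1.0003          # assumed refractive index of air near sea level
m_e = 0.511         # electron rest energy [MeV]

# Maximum Cherenkov angle (beta -> 1): cos(theta_c) = 1/n
theta_c = math.degrees(math.acos(1.0 / n))

# Emission requires beta > 1/n, i.e. gamma > 1/sqrt(1 - 1/n^2)
gamma_thr = 1.0 / math.sqrt(1.0 - 1.0 / n**2)
E_thr = gamma_thr * m_e   # total-energy threshold for electrons [MeV]

print(f"Cherenkov angle in air : {theta_c:.2f} deg")   # ~1.4 deg
print(f"Electron threshold     : {E_thr:.0f} MeV")     # ~21 MeV
```

The degree-scale emission angle and the low electron threshold are what make the large light pool on the ground, and hence the very large collection areas of the technique, possible.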
## III Galactic Sources It is a measure of the maturity of this new discipline that the existence and study of galactic sources of TeV radiation is now considered ordinary and relatively uncontroversial. This is a dramatic change from only a decade ago when the existence of any galactic sources at all was hotly contested. These sources were always variable and difficult to confirm or refute ; it was not until the observation of steady sources, in particular, the observation of the Crab Nebula (which has become the standard candle), that the relative sensitivity of the different techniques could be assessed and some standards of credibility set. The Crab Nebula has been observed by some eight independent groups and no evidence for variability has been detected. It has been seen at energies from 200 GeV to more than 50 TeV and accurate energy spectra have been determined . Originally predicted by Gould as a TeV energy source based on a Compton-synchrotron model, the complete $`\gamma `$-ray spectrum can now be fitted by an updated version of the same model . The variable parameter in this model is the magnetic field which is set by the TeV observations at 16$`\pm `$1 nanotesla, somewhat smaller than the value estimated from the equipartition of energy. In practice, recent optical observations reveal a complex structure at the center of the nebula (where the TeV photons are believed to originate) and more sophisticated models are certainly called for. VHE $`\gamma `$-rays have also been detected from other galactic sources. All of these detections are of sources with negative declinations, best seen in the Southern Hemisphere where there are fewer VHE observatories and hence the detections have largely been by one group. The exception is the $`\gamma `$-ray pulsar PSR1706-44 which was discovered by the CANGAROO group and confirmed by the Durham group ; both of these groups operate from Australia. The source is detected by EGRET at MeV-GeV energies as 100% pulsed. There is no evidence in the TeV signal for pulsations but there is weak evidence that the pulsar is in a plerion which may be the source of the TeV $`\gamma `$-rays. The CANGAROO group also report the detection of an unpulsed TeV signal from a location close to the Vela pulsar ; the position coincides with the birthplace of the pulsar and hence the signal may originate in a weak plerion left after the ejection of the pulsar. Another interesting result is the detection of Cen X-3 by the Durham group . Perhaps the most surprising (and controversial) result is the detection of a TeV source that is coincident with one part of the shell of the supernova remnant, SN1006 . X-ray observations had shown that there is non-thermal emission from two parts of the shell that is consistent with synchrotron emission from electrons with energy up to 100 TeV; hence the TeV $`\gamma `$-ray detection is not a surprise. The TeV emission is consistent with inverse Compton emission from electrons which have been shock accelerated in the shell. However it is not clear why it should be seen from only one region. Because this represents the first direct detection of SNR shell emission this result, when confirmed, has great significance. Not only can the magnetic field be estimated but also the acceleration time; these two parameters are very important for shock acceleration theory. More sensitive observations may reveal the detailed energy spectrum, whether or not the source is extended, and the relative strength of the TeV emission from each shell. 
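A rough consistency check on the SN1006 statement above is that electrons of energy around 100 TeV do radiate synchrotron photons in the X-ray band. The sketch below assumes an illustrative magnetic field of 10 μG; neither the field value nor the numerical prefactor is taken from the text.

```python
# Characteristic synchrotron photon energy for an electron of given energy.
B_gauss = 10e-6           # assumed magnetic field [G] (illustrative)
E_e_TeV = 100.0           # electron energy [TeV], as quoted for SN1006

m_e_eV = 0.511e6          # electron rest energy [eV]
nu_cyc = 2.8e6 * B_gauss  # non-relativistic cyclotron frequency [Hz] (2.8 MHz per G)

gamma = E_e_TeV * 1e12 / m_e_eV
nu_c = 1.5 * gamma**2 * nu_cyc   # characteristic synchrotron frequency [Hz]
E_ph_keV = 4.136e-18 * nu_c      # Planck constant h = 4.136e-18 keV s

print(f"gamma ~ {gamma:.2e}, E_sync ~ {E_ph_keV:.1f} keV")  # a few keV, i.e. X-rays
```

With these assumed numbers the characteristic photon energy comes out at a few keV, consistent with the non-thermal X-ray emission from the shell being traced back to electrons of up to about 100 TeV.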
Ideally, of course, one would like to see direct evidence from VHE $`\gamma `$-ray astronomy of emission from hadron collisions in SNR shells. These SNRs are widely believed to be the source of the hadronic cosmic rays seen in the solar system (at least up to proton energies of 100 TeV) which fill the galaxy. However this canonical model mostly rests on circumstantial evidence and it is highly desirable to find the smoking gun that would clinch the issue. Supernovae certainly have sufficient energy and their occurrence rate is about right; also there is a known mechanism associated with shock fronts to explain acceleration. Hence when EGRET detected a small number of $`\gamma `$-ray sources at GeV energies which appeared to coincide with known SNRs , it was widely believed that the cosmic ray origin problem had been solved. However Drury et al. had shown that the $`\gamma `$-ray spectrum of such sources should be rather flat power-laws that would extend to TeV energies. Extensive observations by the Whipple collaboration have failed to find any evidence for TeV emission . The upper limits are shown in Figure 2 along with the EGRET points. More elaborate models have been constructed that can be made to fit the observations . It is also possible that the EGRET source/SNR identifications are in error since the sources are not strong and the galactic $`\gamma `$-ray plane is a confused region at MeV/GeV energies. Either way, it would be reassuring for theories of cosmic ray origins to see definite detections from some shell-type SNRs where the emission is consistent with $`\pi `$ production in the shell. The next generation of VHE detectors should provide these definitive observations. ## IV Extragalactic Sources ### A Relativistic Jets One of the most surprising results to come from VHE $`\gamma `$-ray astronomy has been the discovery of TeV-emitting blazars. Unlike the observation of galactic supernovae such as the Crab Nebula, which are essentially standard candles, the light-curves of blazars are highly variable. In Figure 4 the nightly averages of the TeV flux from Markarian 421 (Mkn 421) in 1995 are shown as observed at the Whipple Observatory . Although AGN variability was a feature of the AGNs observed by EGRET on the Compton Gamma Ray Observatory at energies from 30 MeV to 10 GeV, the weaker signals (because of the finite collection area) do not allow such detailed monitoring, particularly on short time-scales. Active galactic nuclei (AGN) are the most energetic on-going phenomena that we see in extragalactic astronomy. The canonical model of these objects is that they contain massive black holes (often at the center of elliptical galaxies) surrounded by accretion disks and that relativistic jets emerge perpendicular to the disks; these jets are often the most prominent observational feature. Blazars are an important sub-class of AGNs because they seem to represent those AGNs which have one of their jets aligned in our direction. Observations of such objects are therefore unique. The VHE $`\gamma `$-ray astronomer is thus in the position of the particle physicist who is offered the opportunity to observe the accelerator beam, either head-on or from the side. For the obvious reason that there is more energy transferred in the forward direction the particle physicist usually chooses to put his most important detectors directly in the direction of the beam (or close to it) and its high energy products. 
While such observations give the best insight into the energetic processes in the jet, they do not give the best pictorial representation. Hence just as it is difficult to visualize the working of a cannon by looking down its barrel, it is difficult to get a picture of the jet by looking at it head-on. Observations at right angles to the jet give us our best low energy view of the jet phenomenon and indeed provide us with the spectacular optical pictures of jets from nearby AGNs (such as M87). ### B Sources Mkn 421 is the closest example of an AGN which is pointing in our direction. It is a BL Lac object, a sub-class of blazars, so-called because they resemble the AGN, BL Lacertae which is notorious because of the lack of emission lines in its optical spectrum. Because such objects are difficult, and somewhat uninteresting, for the optical astronomer they were largely ignored until they were found to be also strong and variable sources of X-rays and $`\gamma `$-rays. Mkn 421 achieved some notoriety largely because it was the first extragalactic source to be identified as a TeV $`\gamma `$-ray emitter . At discovery, its average VHE flux was $``$ 30% of the VHE flux from the Crab Nebula. Markarian 501 (Mkn 501), which is similar to Mkn 421 in many ways, was detected as a VHE source by the Whipple group in May 1995 . It was only 8% of the level of the Crab Nebula and was near the limit of detectibility of the technique at that time. The discovery was made as part of an organized campaign to observe objects that were similar to Mkn 421 and were at small redshifts. This same campaign later yielded the detection of the BL Lac object, 1ES 2344+514 which is also close (z = 0.044). Recently the Durham group has announced the detection of the BL Lac object, PKS2155-304 which is also at a small redshift (z = 0.116). A more controversial, but potentially more important detection, is that of 3C 66A reported by the Crimean group . These sources are summarized in Table I. Whereas the first two sources have been seen by a number of groups, the last three are reported by only one group and require confirmation. ### C Variability Perhaps the most exciting aspect of these detections is the observation of variability on time-scales from minutes to hours. The very large collection areas ($`>10,000m^2`$) associated with atmospheric Cherenkov Telescopes is ideally suited for the investigation of short term variability. The VHE emission from the two best observed sources, Mkn 421 and Mkn 501 (Figure 4), varies by a factor of a hundred. Although many hundreds of hours have now been devoted to their study, the variations are so complex that it is still difficult to characterize their emissions. It has been suggested that for Mkn 421 the emission is consistent with a series of short flares above a baseline that falls below the threshold of the Whipple telescope (Figure 4); the average flare duration is one day or shorter. The most important observations of Mkn 421 were in May, 1996 when it was found to be unusually active. On May 7, a flare was observed with the largest flux ever recorded from a VHE source. The observations began when the flux was already several times that of the Crab Nebula and it continued to rise over the next two hours before levelling off (Fig. 5). Observations were terminated as the moon rose but the following night it was observed at its quiescent level. 
One week later (May 15) a smaller, but shorter, flare was detected; in this case the complete flare was observed and the doubling time in the rise and fall was $`\sim `$ 15 minutes. This is the shortest time variation seen in any extragalactic $`\gamma `$-ray source at energies $`>`$ 10 MeV (apart from in a $`\gamma `$-ray burst). Mkn 501 is also variable, but as at other wavelengths, the characteristic time seems longer. Its baseline emission has varied by a factor of 15 over four years (Figure 4). Hour-scale variability has also been detected but its most important time variation characteristic appears to be the slow variations seen over the five months in 1997.

### D Spectrum

The atmospheric Cherenkov signal is essentially calorimetric and hence it should be possible to derive the $`\gamma `$-ray energy spectrum from the observed light pulse spectrum. In practice it is more difficult because, unless an array of detectors is used, the distance to the shower core (impact parameter) is unknown. Although the extraction of a spectrum from even a steady and relatively strong source such as the Crab Nebula required considerable effort and the development of new techniques, it was relatively easy to measure the spectra of Mkn 421 and Mkn 501 in their high state because the signal was so strong. The general features of the spectra derived from the Whipple observations are in agreement with those derived at the HEGRA telescopes . The May 7, 1996 flare of Mkn 421 provided an excellent data base for the extraction of a spectrum; the data can be fit by a simple power-law ($`dN/dE\propto E^{-2.6}`$). There is no evidence of a cutoff up to energies of 5 TeV (Figure 6). Because of the possibility of a high energy cutoff due to intergalactic absorption there is considerable interest in the highest energy end of the spectrum. Large zenith angle observations at Whipple and observations by HEGRA confirm the absence of a cutoff out to 10 TeV. The generally high state of Mkn 501 throughout 1997 gives data from the Whipple telescope that can be best fit by a curved spectrum of the form $`dN/dE\propto E^{-2.20-0.45\mathrm{log}_{10}E}`$ (Figure 6). Here the spectrum extends to at least 10 TeV. The curvature in the spectrum could be caused by the intrinsic emission mechanism or by absorption in the source. Since Mkn 421 and Mkn 501 are virtually at the same redshift it is unlikely that it could be due to intergalactic absorption since Mkn 421 does not show any curvature .

### E Multiwavelength Observations

The astrophysics of the $`\gamma `$-ray emission from the jets of AGNs is best explored using multiwavelength observations. These are difficult to organize and execute because of the different observing constraints on radio, optical, X-ray, space-based $`\gamma `$-ray and ground-based $`\gamma `$-ray observatories. Of necessity observations are often incomplete and, when complete coverage is arranged, the source does not always cooperate by behaving in an interesting way! The first multiwavelength campaign on Mkn 421 coincided with a TeV flare on May 14-15, 1994 and showed some evidence for correlation with the X-ray band; however no enhanced activity was seen in EGRET . A year later, in a longer campaign, there was again correlation between the TeV flare and the soft X-ray and UV data but with an apparent time lag of the latter by one day (Figure 7). The variability amplitude is comparable in the X-ray and TeV emission ($`\sim `$ 400%) but is smaller in the EUV ($`\sim `$200%) and optical ($`\sim `$20%) bands.
In April, 1998 there was again a correlation seen between an X-ray flare observed by SAX and Whipple; in this case the TeV flare was much shorter (a few hours) compared to the X-ray (a day) . The first multiwavelength campaign on Mkn501 was undertaken when the TeV signal was seen to be at a high level. The surprising result was that the source was detected by the OSSE experiment on CGRO in the 50-150 kev band (Figure 7). This was the highest flux ever recorded by OSSE from any blazar (it has not detected Mkn 421) but the amplitude of the X-ray variations ($``$200%) was less than those of the TeV $`\gamma `$-rays ($``$400%) . ### F Multiwavelength Power Spectra Because of the strong variability in the TeV blazars it is difficult to represent their multiwavelength spectra. In Figure 8 we show the fluxes plotted as power ($`\nu \times F_\nu `$) from Mkn 421 and Mkn 501 during flaring as well as the average fluxes. Both sources display the two peak distribution characteristic of Compton-synchrotron models, e.g., the Crab Nebula. Whereas the synchrotron peak in Mkn 421 occurs near 1 keV, that of Mkn 501 occurs beyond 100 keV which is the highest seen from any AGN. In 1998 the synchrotron spectrum peak in Mkn 501 shifted back to 5 keV and the TeV flux fell below the X-ray flux. ### G Implications The sample of VHE emitting AGNs is still very small but it is possible to draw some conclusions from their properties (summarized in Table I). * The first three objects, all detected by the Whipple group, are the three closest BL Lacs in the northern sky. Some 20 other BL Lacs have been observed with z $`<`$ 0.10 without detectable emission. This could be fortuitous, because they are standard candles and these are closest (but the distance differences are small), or because they suffer the least absorption (but there is no cutoff apparent in their spectra). * All of the objects are BL Lacs; because such objects do not show emission lines and therefore probably do not have strong optical/infrared absorption close to the source, it is suggested that BL Lacs are preferentially VHE emitters. * Four of the five sources are classified as XBLs which indicates that they are strong in the X-ray region and that the synchrotron spectrum most likely peaks in that range (and that the Compton spectrum peaks in the VHE $`\gamma `$-ray range). The fifth, 3C 66A, is an RBL, like many of the blazars detected by EGRET; it is believed that these blazars have synchrotron spectra that peak at lower energies and Compton spectra that peak in the HE $`\gamma `$-ray region. * Only three (Mkn 421, PKS 2155-304 and 3C 66A) are listed in the Third EGRET Catalog; there is a weak detection reported by EGRET for Mkn 501. * If 3C 66A is confirmed (and to a lesser extent PKS 2155-305), then the intergalactic absorption is significantly less than had been suggested from galactic evolution models. * There is evidence for variability in all of the sources. The rapid variability seen in Mkn 421 indicates that the emitting region is very small which might suggest it is close to the black hole. In that case the local absorption must be very low (low photon densities). It seems more likely that the region is well outside the dense core. There are three basic classes of model considered to explain the high energy properties of BL Lac jets: Synchrotron Self Compton (SSC), Synchrotron External Compton (SEC) and Proton Cascade (PC) Models. In the first two the progenitor particles are electrons, in the third they are protons. 
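One way to see why the rapid variability noted above implies a very small emitting region is the causality bound $`R\mathrm{}c\mathrm{\Delta }t\delta /(1+z)`$ on the size of the emitting region. The sketch below uses the 15-minute doubling time quoted earlier for Mkn 421; the Doppler factors are assumed, illustrative values, not measurements.

```python
c = 3.0e8            # speed of light [m/s]
dt = 15 * 60         # doubling time of the May 15 flare [s]
z = 0.031            # redshift of Mkn 421

for delta in (1, 10, 25):            # assumed Doppler factors (illustrative)
    R = c * dt * delta / (1 + z)     # causality bound on the emitting-region size [m]
    print(f"delta = {delta:2d}: R < {R:.1e} m  (~{R / 1.5e11:.1f} AU)")
```

Even for sizeable Doppler factors the implied region is only of order tens of astronomical units, far smaller than the parsec-scale structures seen in AGN jets, which is why the flares constrain where and how the TeV photons are produced.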
VHE $`\gamma `$-ray observations have constrained the types of models that are likely to produce the $`\gamma `$-ray emission but still do not allow any of them to be eliminated. For instance, the correlation of the X-ray and the VHE flares is consistent with the first two models where the same population of electrons radiate the X-rays and $`\gamma `$-rays. There is little evidence for the IR component in BL Lac objects which would be necessary in the SEC models as the targets for Compton-scattering, so this particular type of model may not be likely for these objects. The PC models which produce the $`\gamma `$-ray emission through $`e^+e^{}`$ cascades also have great difficulty explaining the rapid cooling observed in the TeV emission from Mkn 421. Also the high densities of unbeamed photons near the nucleus, such as the accretion disk or the broad line region, are required to initiate the cascades and these cause high pair opacities to TeV $`\gamma `$-rays . Significant information comes from the multiwavelength campaigns (although thus far these have been confined to Mkn 421 and Mkn 501). Simultaneous measurements constrain the magnetic field strength ($`B`$) and Doppler factor ($`\delta `$) of the jet when the electron cooling is assumed to be via synchrotron losses. The correlation between the VHE $`\gamma `$-rays and optical/UV photons observed in 1995 from Mkn 421 indicates both sets of photons are produced in the same region of the jet; $`\delta >5`$ is required for the VHE photons to escape significant pair-production losses . If the VHE $`\gamma `$-rays are produced in the synchrotron-self-Compton process, $`\delta =1540`$ and $`B=0.030.9`$G for Mrk 421 , and $`\delta <15`$ and $`B=0.080.2`$G for Mkn 501 , . On the other hand by assuming protons produce the $`\gamma `$-rays in Mkn 421, Mannheim derives $`\delta =16`$ and $`B=90`$G. The Mkn 421 values of $`\delta `$ and $`B`$ are extreme for blazars, but they are still within allowable ranges and are consistent with the extreme variability of Mkn 421. ## V Intergalactic Absorption Thus far it has not been possible to make a direct measurement of the infrared background radiation at wavelengths more than 3.5 microns and less than 140 microns. This is unfortunate since the background potentially contains valuable information for cosmology, galaxy formation and particle physics. The problem for direct measurement is the presence of foreground local and galactic sources. However the infrared background can make its presence felt by the absorption it produces on the spectra of VHE $`\gamma `$-ray sources when they are at great distances. The absorption is via the $`\gamma \gamma e^+e^{}`$ process, the physics of which is well understood. The maximum absorption occurs when the product of the energy of the two photons ($`\gamma `$-ray and infrared) is approximately equal to the product of the rest masses of the electron-pair. Hence a 1 TeV $`\gamma `$-ray is most heavily absorbed by 0.1eV (1.2 micron) infrared photon in head-on collisions. The importance of this effect for VHE and UHE $`\gamma `$-ray astronomy was first pointed out by Nikishov ; its potential for making an indirect measurement of the infrared background was pointed out by Gould and Schreder and, more recently, in the aftermath of the EGRET detections of AGNs, by Stecker and de Jager . 
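A short numerical sketch of the kinematics just described may help. It assumes the head-on threshold condition $`E_\gamma ϵ(m_ec^2)^2`$, i.e. the pair is just produced at rest in the centre-of-momentum frame; the remark that the cross-section peaks somewhat above threshold is a standard approximation, not a number taken from the text.

```python
m_e = 0.511e6   # electron rest energy [eV]

def threshold_soft_photon_eV(E_gamma_TeV):
    """Minimum target-photon energy for gamma-gamma -> e+e- in a head-on collision."""
    E_gamma = E_gamma_TeV * 1e12          # gamma-ray energy [eV]
    return m_e**2 / E_gamma               # from E_gamma * eps * (1 - cos(pi)) >= 2 m_e^2

for E in (0.3, 1.0, 10.0):                # gamma-ray energies in TeV
    eps = threshold_soft_photon_eV(E)
    lam_um = 1.24 / eps                   # wavelength [micron], lambda(um) ~ 1.24 / E(eV)
    print(f"E_gamma = {E:5.1f} TeV -> eps_th ~ {eps:.2f} eV (lambda ~ {lam_um:.1f} um)")

# The cross-section peaks a few times above threshold, so TeV photons mainly
# probe the near- to mid-infrared background, as described in the text.
```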
At the redshift of the AGNs detected at VHE energies to date (0.03 to 0.5) if the infrared density has the value assumed in some models , the effect is appreciable and should be apparent in carefully measured energy spectra in the range 1 to 50 TeV. Ideally for such a measurement the intrinsic emission spectrum of the $`\gamma `$-rays from the distant source should be known. In practice this is not the case although thus far all the AGNs detected in the GeV-TeV range appear to have very smooth power-law spectra. Biller et al. have made a conservative derivation of upper limits on the infrared spectrum based on the measured $`\gamma `$-ray spectrum from 0.5 to 10 TeV from Mkn 421 and Mkn 501 by the Whipple and HEGRA groups. These upper limits apply to infrared energies from 0.025 to 1.0 eV; they are the best upper limits over this range. At some wavelengths, these limits are as much as an order of magnitude below the upper limits set by the DIRBE/COBE satellite (see Figure 9). The infrared densities are calculated such that they do not cause the shape of the observed VHE spectrum to deviate from the bounds set from the VHE measurements. This approach has the effect of anchoring the lower energy TeV data to the appropriate infrared upper limits and then extending these bounds so that they are consistent with those based on the shape and extent of the AGN spectra at the higher energies. Thus the maximum energy density in each interval of infrared energy is determined; these limits are plotted in Figure 9 where a maximum energy of 10 TeV is considered; also shown are the upper limits from other methods. These upper limits do not conflict with the predictions of the infrared background based on detailed models of galactic evolution . They do however allow some more cosmological possibilities to be eliminated. In particular in one scenario, density fluctuations in the early universe (z $``$ 1000) could have produced very massive objects which would collapse to black holes at later times and could explain the dark matter. However although undetectable now, they would have produced an amount of infrared radiation that would have exceeded the above limits . These limits also place some constraints on radiative neutrino decay. ## VI Gamma Ray Bursts The contribution of TeV observations to the physics of $`\gamma `$-ray bursts is at once the most speculative and most important (potentially) of all the scientific topics considered here. As yet, there is no positive detection of TeV photons during or immediately after a classical $`\gamma `$-ray burst (GRB) (although there is one tantalizing but unconfirmed observation ). However since there is no turn over seen in the spectra of GRBs detected by EGRET at energies $`>`$ 30 GeV, there is the potential for interesting observations at VHE energies. The observed EGRET spectra are power laws with differential indices 1.95$`\pm `$0.25 . The sensitivity of current ACTs is such that sources with spectral indices $``$ 2 would be easily detectable even for fluences as low as $`5\times 10^8`$ ergs/cm<sup>2</sup> . Although only four of the very bright BATSE bursts were seen by EGRET, these were the brightest to occur within the field of view of EGRET and there is nothing to suggest that all bursts might not have GeV-TeV components. In fact, EGRET was not a very sensitive detector for GRBs both because of its limited collection area and its deadtime. There are now several models that suggest that TeV emission may be a strong feature of GRBs , . 
There are however several negative factors concerning the possible detection of GRBs by ACTs. The narrow field of view combined with the low duty-cycle (clear, dark nights) lessens the chance of the serendipitous detection of the TeV component of a GRB. If the GRBs are truly cosmological (as they appear to be), then intergalactic absorption by pair production on infra-red photons must come into play at some point, steepening the apparent spectra. However the next generation of ACTs will have reduced energy thresholds, better flux sensitivities and rapid slew capabilities; these features combined with the more accurate source locations anticipated with the launch of HETE-2 may provide TeV detections at the rate of a few per year. In addition, the water Cherenkov detector, MILAGRO, will have all sky coverage (although reduced sensitivity below 1 TeV) and will have guaranteed coverage of some bursts detected by satellites. A feature of the EGRET GRB observations was that there was evidence for delayed emission (up to 1.5 hours) from the burst site . This may indicate a different component at these energies. Some models predict that this delayed emission could persist for days and could hence be easily observed with narrow field of view instruments. The detection of a TeV $`\gamma `$-ray component in a GRB would be a serious parameter for the emission models, in particular the Lorentz bulk motion in the source would be constrained. It would also be an independent distance indicator since the source would have to show absorption if the redshift was $`>`$ 0.1. ## VII Neutralinos The best candidate for the cold dark matter component is the neutralino, the lightest supersymmetric particle. These particles annihilate with the emission of a single $`\gamma `$-ray line whose energy is equal to that of the neutralino mass; however other annihilation modes are possible and there may be a $`\gamma `$-ray continuum. There are limits on the possible masses from cosmology and from accelerator experiments but the range from 30 GeV to 3 TeV is allowed. The upper part of this range would be accessible for study by ground-based $`\gamma `$-ray telescopes. The neutralinos, if they exist, would be expected to cluster to the center of galaxies and might be detectable by their $`\gamma `$-ray emission, either as a line or a continuum. Detailed numerical simulations indicate that there may be a strong density enhancement towards the centers of galaxies such as our own. Hence the Galactic Center is a prime candidate for observations. This hypothesis is given some credence by the detection of a somewhat extended $`\gamma `$-ray source at the Galactic Center at energies above 300 MeV. In a recent paper, Bergstrom, Ullio and Buckley have estimated the flux from the annihilation radiation of neutrinos in the Galactic Center using the most recent models of the galactic mass distribution. The predicted line has a relative width of 10<sup>-3</sup>. Neither space nor ground-based detectors have energy resolution of this quality (even in the next generation of detectors) but the intensity of the line is such that it might be detectable even with relatively crude energy resolution. ## VIII Quantum Gravity Some quantum gravity models predict the refractive index of light in vacuum to be dependent on the energy of the photon. This effect, originating from the polarization of space-time, causes an energy- dependance to the velocity of light. 
Effectively, the quantum fluctuations are on distance scales near the Planck length ($`L_P\sim 10^{-33}`$ cm), corresponding to time-scales of $`1/E_P`$, where $`E_P`$ is the Planck mass ($`\sim 10^{19}`$ GeV). Different models of quantum gravity give widely differing predictions for the amount of time dispersion. In one model the first order time dispersion is given by: $$\mathrm{\Delta }t\approx \xi \frac{E}{E_{QG}}\frac{L}{c}$$ (1) where $`\mathrm{\Delta }t`$ is the time delay relative to propagation at the velocity of light, $`c`$, $`\xi `$ is a model-dependent factor of order 1, $`E`$ is the energy of the observed photons, $`E_{QG}`$ is the quantum energy scale, and $`L`$ is the distance from the source. In most models $`E_{QG}\simeq E_P`$ but, in recent work in the context of string theory, it can be as low as $`10^{16}`$ GeV . Recently it has been suggested that astrophysical observations of transient high energy emission from distant sources might be used to measure (or limit) the quantum gravity energy scale. Amelino-Camelia et al. suggested that BATSE observations of GRBs would provide a powerful method of probing this fundamental constant if variations on time-scales of milliseconds could be measured in the MeV signal in a GRB which was measured to be at a cosmological distance. Such time-scales and distances have been measured in GRBs but so far not in the same GRB. The absence of time dispersion in flares of TeV $`\gamma `$-rays from AGNs at known distances provides an even more sensitive measure. Biller et al. have used the sub-structure observed in the 15 minute flare in Mkn 421 observed by the Whipple group on April 15, 1996 to derive a lower limit on $`E_{QG}`$. On a time-scale of 280 seconds there is weak (2$`\sigma `$) evidence for correlated variability in two energy ranges: 300 GeV to 2 TeV and $`>`$ 2 TeV. For a Hubble Constant of 85 km/s/Mpc, the distance $`L`$ is $`1.1\times 10^{16}`$ light-seconds. This gives a lower limit to $`E_{QG}`$ of $`>4\times 10^{16}`$GeV assuming $`\xi `$ is $`\sim `$ 1. This is the most convincing lower limit on $`E_{QG}`$ to date. Because VHE $`\gamma `$-ray astronomy is still in its infancy and the exposure time on AGNs still limited, it is likely that much more sensitive measurements will lead to better limits on $`E_{QG}`$ as a new generation of detectors comes on-line and permits the detection of shorter time-variations and/or more distant sources.

## IX Future Prospects

It is clear that to fully exploit the potential of ground-based $`\gamma `$-ray astronomy the detection techniques must be improved. This will happen by extending the energy coverage of the technique and by increasing its flux sensitivity. Ideally one would like to do both but in practice there must be trade-offs. Reduced energy threshold can be achieved by the use of larger but cruder mirrors and this approach is currently being exploited using existing arrays of solar heliostats (STACEE and CELESTE). A German-Spanish project (MAGIC) to build a 17m aperture telescope using state-of-the-art technology has also been proposed. These projects may achieve thresholds as low as 20-30 GeV where they will effectively close the current gap in the $`\gamma `$-ray spectrum from 20 to 200 GeV. Ultimately this gap will be covered by GLAST, the next generation $`\gamma `$-ray space telescope (which will use solid-state detectors) which is scheduled for launch in 2005 by an international collaboration.
Extension to even higher energies can be achieved by the atmospheric Cherenkov telescopes working at large zenith angles and by particle arrays at very high mountain altitudes. An interesting telescope that will soon come on line and will complement these techniques is the MILAGRO water Cherenkov detector in New Mexico which will operate 24 hours a day with wide field of view and will have good sensitivity to $`\gamma `$-ray bursts and transients. VERITAS, with seven 10 m telescopes arranged in a hexagonal pattern with 80 m spacing, will aim for the middle ground, with its primary objective being high sensitivity in the 100 GeV to 10 TeV range. It will be located in southern Arizona and will be the logical development of the Whipple telescope. It is hoped to begin construction in 1999 and to complete the array by 2004. The German-French HESS (initially four, and eventually perhaps sixteen, 10m class telescopes) will be built in Namibia and the Japanese NEW CANGAROO array (with three to four telescopes in Australia) will have similar objectives. In each case the arrays will exploit the high sensitivity of the imaging ACT and the high selectivity of the array approach. The relative flux sensitivities of the present and next generation of VHE telescopes as a function of energy are shown in Figure 10, where the sensitivities of the wide field detectors are for one year and for the ACT for 50 hours; in all cases a 5$`\sigma `$ point source detection is required. The VERITAS sensitivity is derived from Monte Carlo simulations using the Whipple telescope as a baseline . The projected sensitivities of MAGIC, HESS, New CANGAROO and VERITAS are somewhat similar and we will refer to them collectively as Next Generation Gamma Ray Telescopes (NGGRTs). It is apparent from this figure that on the low energy side, the NGGRTs such as VERITAS will complement the GLAST mission (launch date 2005) and will overlap with STACEE and CELESTE which will be coming on line in 1999. At its highest energy, they will overlap with the Tibet Air Shower Array. It will cover the same energy range as MILAGRO but with greater flux sensitivity; however the wide field coverage of MILAGRO will permit the detection of transient sources which, once detected, can be monitored by VERITAS. As a Northern Hemisphere telescope VERITAS will complement the coverage of neutrino sources discovered by AMANDA and ICE CUBE at the South Pole. Finally if the sources of ultra-high energy cosmic rays discovered by HiRes and Auger are localized to a few degrees, VERITAS will be the most powerful instrument for their further localization and identification. ### A Science to come #### a AGNs By measuring the high energy end of the spectra for several EGRET sources. The NGGRTs can help determine what particles produce the $`\gamma `$-ray emission in blazars (electrons should show cut-offs which correlate with lower energy spectra, protons would not show a simple correlation). In addition, the recent efforts to unify the different classes of blazar into different manifestations of the same object type can be tested. In addition the infrared background will be probed by the detection of sources over a range of redshifts. #### b SNRs The existing data clearly indicate that in order to resolve the contributions of the various $`\gamma `$-ray emission mechanisms, one needs more accurate measurements over a more complete range of energies. The NGGRTs and GLAST will be a powerful combination to address these issues. 
The excellent angular resolution of the NGGRTs will allow detailed mapping of the emission in SNRs. The sensitivity and energy resolution, combined with observations at lower $`\gamma `$-ray and X-ray energies help to elucidate the $`\gamma `$-ray emission mechanism. This may lead to direct the confirmation or elimination of SNRs as the source of cosmic rays. #### c Gamma-ray pulsars The detection of VHE $`\gamma `$-rays would be decisive in favoring the outer gap model over the polar cap model. Six pulsars are detected at EGRET energies and their high energy emission is already seriously constrained by the VHE upper limits. The detection of a pulsed $`\gamma `$-ray signal above 50 GeV would be a major breakthrough. #### d Unidentified galactic EGRET sources The legacy of EGRET may be more than 70 unidentified sources, many of which are in the Galactic plane. The positional uncertainty of these sources make identifications with sources at longer wavelengths unlikely. In the galactic plane, probable sources are SNRs and pulsars, particularly in regions of high IR density (e.g., OB associations), but some may be new types of objects. The NGGRT should have the sensitivity and low energy threshold necessary to detect many of these objects. Detailed studies of these objects with the excellent source location capability of the NGGRTs could lead to many identifications with objects at longer wavelengths. Variability in these objects would be easily identified and measured with the NGGRT. Acknowledgements: Research in VHE $`\gamma `$-ray astronomy at the Whipple Observatory is supported by a grant from the U.S.D.O.E. Helpful comments from Mike Catanese, Stephen Fegan and Vladimir Vassiliev are also acknowledged.
# Naked Singularity of the Vaidya-deSitter Spacetime and Cosmic Censorship Conjecture

## 1 Introduction

Recently a detailed examination of several gravitational collapse scenarios has shown the development of locally naked singularities in a variety of cases such as the collapse of radiation shells, spherically symmetric self-similar collapse of perfect fluid, collapse of a spherical inhomogeneous dust cloud , spherical collapse of a massless scalar field and other physically relevant situations. It is indeed remarkable that in all these cases families of non-spacelike geodesics emerge from the naked singularity; consequently these cases can be considered to be serious examples of locally naked singularities of strong curvature type, as can be verified in each individual case separately. Such studies are expected to lead us to a proper formulation of the Cosmic Censorship Hypothesis. Note that all the scenarios considered so far (see for details) are spherically symmetric and asymptotically flat, and that the singularity obtained is locally naked. We may then ask if the occurrence of a locally naked singularity in these cases is an artefact of the special symmetry. Or, since the real universe has no genuine asymptotically flat objects, whether the local nakedness of the singularity in these cases is, in some way, a manifestation of the asymptotic flatness of the solutions considered. The question of special symmetry playing any crucial role in these situations is a hard one to settle and this possibility cannot be ruled out easily. However, the question of asymptotic flatness playing any special role in the development of a locally naked singularity, at least in the collapse of radiation shells, is an easy one to settle since the Vaidya metric in an expanding background is already known . It is the purpose of this paper to investigate the collapse of radiation shells in an expanding deSitter background, to find out whether a locally naked singularity occurs in this situation, and to compare it with the similar collapse in the asymptotically flat case. We refer the reader to for the details of the latter situation and also for references pertaining to it. We should point out, for the benefit of those interested in the end result, that our conclusion is that the locally naked singularity of the Vaidya-deSitter metric is the same as that obtained in the asymptotically flat case. Therefore, asymptotic flatness of the solutions considered so far does not manifest itself in the nakedness of the singularity arising in these situations. This result then supports the view that the asymptotic observer should not be given any special role in the formulation of the cosmic censorship hypothesis, as will be discussed later.

## 2 Outgoing Radial Null Geodesics of the Vaidya-deSitter Metric

The Vaidya-deSitter metric, or the Vaidya metric in a deSitter background, is $$ds^2=-\left[1-\frac{2m(v)}{r}-\mathrm{\Lambda }\frac{r^2}{3}\right]dv^2+2dvdr+r^2d\mathrm{\Omega }^2$$ (1) where $`d\mathrm{\Omega }^2=d\theta ^2+\mathrm{sin}^2\theta d\varphi ^2`$, $`v`$ is the advanced time coordinate as is appropriate for the collapse situation, $`\mathrm{\Lambda }`$ is the cosmological constant and $`m(v)`$ is called the mass function. In this form the metric (1) describes the collapse of radiation. The radiation collapses at the origin $`r=0`$.
As is well-known, the energy-momentum tensor for the radial influx of radiation is: $$T_{\alpha \beta }=\rho U_\alpha U_\beta =\frac{1}{4\pi r^2}\frac{dm}{dv}U_\alpha U_\beta $$ (2) where the null 4-vector $`U_\alpha `$ satisfies $$U_\alpha =\delta _\alpha ^v,U_\mu U^\mu =0$$ and represents the radial inflow of radiation, in the optic limit, along the world-lines $`v=constant`$. Clearly, for the weak energy condition $`\left(T_{\alpha \beta }U^\alpha U^\beta \ge 0\right)`$ we require $$\frac{dm}{dv}\ge 0$$ (3) to be satisfied. Now, let us consider the situation of radially injected flow of radiation in an initially empty region of the deSitter universe. The radiation is injected into the spacetime at $`v=0`$ and, hence, we have $`m(v)=0`$ for $`v<0`$ and the metric is that of a pure deSitter universe. \[Therefore, the inside of the radiation shells, to begin with, is an empty region of the deSitter metric and not the flat Minkowski metric.\] The metric for $`v=0`$ to $`v=T`$ is the Vaidya-deSitter metric representing a Schwarzschild field of growing mass $`m(v)`$ embedded in a deSitter background. The first radiation shell collapses at $`r=0`$ at time $`v=0`$. The subsequent shells collapse at $`r=0`$ successively till $`v=T`$ when, finally, there is a singularity of total mass $`m(T)=m_o`$ at $`r=0`$. For $`v>T`$, all the radiation is assumed to have collapsed and the spacetime to have settled to the Schwarzschild field of constant mass $`m(T)=m_o`$ embedded in a deSitter background . To simplify the calculations, we choose $`m(v)`$ as a linear function $$2m(v)=\lambda v,\qquad \lambda >0$$ (4) This linear mass-function was introduced by Papapetrou in the asymptotically flat case of the Vaidya metric. Hence, in our case, the Vaidya-Papapetrou-deSitter spacetime is described by the following mass function for the metric (1): $$m(v)=0\ (v<0:\ \text{pure deSitter}),\qquad 2m(v)=\lambda v\ (0<v<T:\ \text{Vaidya-deSitter}),\qquad m(v)=m_o\ (v>T:\ \text{Schwarzschild-deSitter})$$ (5) We note at the outset that the Vaidya-deSitter spacetime for the linear mass-function as in (5) is not homothetically Killing, unlike the asymptotically flat Vaidya metric. In fact, the line element (1) does not admit any proper conformal Killing symmetries. Consider the geodesic equations of motion for the Vaidya-deSitter metric as in (1). Let the tangent vector of a geodesic be $$K^\alpha =\frac{dx^\alpha }{dk}\equiv (\dot{v},\dot{r},\dot{\theta },\dot{\varphi })$$ (6) or, equivalently, $$K_\alpha \equiv g_{\alpha \beta }K^\beta =(K_v,K_r,K_\theta ,K_\varphi )\equiv (g_{vv}K^v+K^r,K^v,r^2K^\theta ,r^2\mathrm{sin}^2\theta K^\varphi )$$ Then, the geodesic equations can be obtained from the Lagrangian $$2\mathcal{L}=K_\alpha K^\alpha $$ (7) For our purpose here it is sufficient to consider only the radially outgoing, future-directed, null geodesics originating at the singularity. Such geodesics can be obtained directly from the above Lagrangian as the following equation: $$\frac{dv}{dr}=\frac{2}{1-\frac{2m}{r}-\frac{\mathrm{\Lambda }r^2}{3}}$$ (8) which, for the linear mass function as in (5), is: $$\frac{dv}{dr}=\frac{2}{1-\lambda \frac{v}{r}-\frac{\mathrm{\Lambda }r^2}{3}}$$ (9) Now for the geodetic tangent to be uniquely defined and to exist at the singular point, $`r=0`$, $`v=0`$, of equation (9) the following must hold $$\underset{v\to 0,r\to 0}{lim}\frac{v}{r}=\underset{v\to 0,r\to 0}{lim}\frac{dv}{dr}=X_o$$ (10) say, and when the limit exists, $`X_o`$ is real and positive.
In this last situation, we obtain a future-directed, non-spacelike geodesic originating from the singularity $`r=0`$, $`v=0`$ if we further demand that $`2\mathcal{L}\le 0`$. Then, the singularity will, at least, be locally naked. On the other hand, if there is no real and positive $`X_o`$, then there is no non-spacelike geodesic from the singularity to any observer and, hence, the singularity is not visible to any observer. Then, we may show that the singularity is covered by a null hypersurface (the horizon) and the spacetime is a black hole spacetime. Then, as we approach the singular point of the differential equation (9) we have, using equations (9) and (10), $$2-X_o+\lambda X_o^2=0$$ (11) after suitable rearrangement of the terms. Thus for the real values of the tangent to a radially outgoing, null, future-directed geodesic originating in the singularity we obtain $$X_o=a_\pm =\frac{1\pm \sqrt{1-8\lambda }}{2\lambda }$$ (12) Clearly, we require $$\lambda \le \frac{1}{8}$$ (13) for $`X_o=\lim_{v\to 0,\,r\to 0}v/r`$ to be real in the situation considered. Note that equation (11) is the same as that obtained by Dwivedi & Joshi when the metric (1) is asymptotically flat, i.e., $`\mathrm{\Lambda }=0`$. Consequently the values $`a_\pm `$ for the geodetic tangent and the condition (13) for these values to be real are the same as those obtained for the asymptotically flat situation when the mass function $`m(v)`$ is linear in $`v`$ as in equation (4). ## 3 Discussion The present-day picture of gravitational collapse is that a sufficiently massive body compressed into too small a volume undergoes an unavoidable collapse leading to a singularity in the very structure of the spacetime. Of course, the deduction that a singularity will form as a result of such collapse tacitly assumes that we disregard those principles of the still-elusive quantum theory of gravity which alter the nature of the spacetime from that given by the classical theory of gravitation - the general theory of relativity. Within the limits of applicability of classical general relativity, we characterize such unavoidable collapse by demanding the existence of a point or of a hypersurface, called the trapped surface, whose future lightcone begins to reconverge in every direction along the cone. The deduction that a spacetime singularity will form is then obtained from the well-known Hawking-Penrose Singularity Theorems. These theorems require further physically reasonable assumptions such as the positivity of energy and total pressure, the absence of closed timelike curves and some notion of the genericity of the collapse situation. However, note that the existence of a trapped surface does not imply the absence of a naked singularity, nor does its absence imply the presence of one. The assumption of a trapped surface (or some other equivalent assumption) is, however, required to infer the occurrence of the spacetime singularity. \[ See for further details on this and other related issues. \] Now, our notion of the classical black hole situation is that of a spacetime singularity completely covered by an absolute event horizon. Unfortunately, the chronology of the developments related to the now-famous black hole solutions emphasized the observers at future null infinity, $`\mathcal{I}^+`$, in earlier ideas of the cosmic censor. We note that there is no theory concerning what happens as a result of the appearance of a spacetime singularity.
And, hence, the observer witnessing any such singularity will not be able to account for the observed physical behaviour of processes involving the singularity in any manner whatsoever. The cosmic censorship is then necessary to avoid precisely such situations. The black hole solutions, while emphasizing the role of observers at the future null infinity, led us into demanding that in the region between the absolute event horizon - the boundary $`I^+[^+]`$ of the past of $`^+`$ \- and the set of observers at infinity, $`^+`$, no spacetime singularity occurs. However, it is not hard to imagine a situation in which an observer and a collapsing body, both, are within a larger trapped surface. Thus, no information reaches $`^+`$ from this region. But, that trapped observer would be able to witness the forming spacetime singularity. We are, in essence, discussing here the case of a locally naked singularity. For such an observer, however, it would be impossible to account for the physical behaviour of systems involving the singularity since there is no theory for that. The purpose of a cosmic censor, being that of avoiding precisely such unpredictable physical situations for legitimate observers, is then lost on its formulation in terms of the observers at infinity since any such formulation cannot help the above observer. It is for avoiding such situations that we require some reasonable formulation of the cosmic censorship which does not single out the set of observers at infinity. One such formulation is that of Strong Cosmic Censorship as given by Penrose . Since our main interest here is to explore the role of asymptotic flatness in the development of a naked singularity in the situation of collapsing radiation shells, further analysis than that presented in Section 2 is not necessary to draw definite conclusions about it. The very fact that we have obtained a condition for the occurrence of a naked singularity in the collapse of radiation shells in an expanding background which is the same as that obtained when the background is non-expanding and asymptotically flat establishes that it is not the asymptotic flatness of the solutions considered that manifests, in some sense, in the development of a locally naked singularity. In other words, whether the spacetime is asymptotically flat or not does not make any difference to the occurrence of a locally naked singularity. This is evident in at least the situation of collapsing radiation shells as considered here. Furthermore the example considered above shows that the asymptotic observer has no role to play in the occurrence or non-occurrence of a naked singularity in the collapse of radiation shells. This means that the same asymptotic observer cannot have any special role to play in the formulation of the Cosmic Censorship Hypothesis which is being envisaged as a basic principle of nature, a physical law. Also, the above result is then consistent with the viewpoint that if the cosmic censorship is to be any basic principle of nature then it has to operate at a local level. Hence, no special role can be given to any set of observers in the formulation of such a basic principle; since the general theory of relativity as a theory of gravitation provides no fundamental length scale. 
Then, the present result unequivocally supports Penrose’s Strong Cosmic Censorship Hypothesis which, in essence, states that singularities should not be visible to any observer or, equivalently, no observer sees a singularity unless and until it is actually encountered. Acknowledgements : We are grateful to Ramesh Tikekar for discussions and to an anonymous referee for critical reading of the manuscript and helpful suggestions.
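As a purely numerical illustration of the condition derived in Section 2 (not part of the original argument), the sketch below evaluates the tangents $`a_\pm `$ of equation (12) for a sample slope $`\lambda <1/8`$ and integrates the outgoing null geodesic equation (9) outwards from just outside the singular point; the values $`\lambda =0.05`$, $`\mathrm{\Lambda }=0.1`$ and the starting data are arbitrary choices of this sketch.

```python
import numpy as np

lam, Lam = 0.05, 0.1          # mass-function slope (lambda < 1/8) and cosmological constant
disc = 1.0 - 8.0*lam
a_minus = (1.0 - np.sqrt(disc)) / (2.0*lam)
a_plus  = (1.0 + np.sqrt(disc)) / (2.0*lam)
print("a_- = %.4f, a_+ = %.4f" % (a_minus, a_plus))

# both roots should satisfy 2 - X + lam*X^2 = 0 (equation (11))
for X in (a_minus, a_plus):
    print("residual of eq. (11):", 2.0 - X + lam*X**2)

# crude Euler integration of dv/dr = 2/(1 - lam*v/r - Lam*r^2/3) outwards from near r = 0
r, v, dr = 1.0e-6, a_minus*1.0e-6, 1.0e-4
while r < 1.0:
    v += dr * 2.0 / (1.0 - lam*v/r - Lam*r**2/3.0)
    r += dr
print("geodesic reaches r = %.2f with v/r = %.3f" % (r, v/r))
```

For $`\lambda >1/8`$ the discriminant is negative and no real tangent exists, in line with condition (13); for the sample value above the ray escapes to finite radius with $`v/r`$ staying close to $`a_-`$.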
# Hadronic Three Jet Production at Next-to-Leading Order ## I Introduction In this talk I will discuss recent work in constructing a next-to-leading order event generator for hadronic three jet production. This is the first calculation of three jet production at this order to include all parton sub-processes. Previous studies have only included the contributions of pure gluon scattering contribution. As my preliminary results show, the generator is now working properly and is ready to perform phenomenological studies. ## II Motivation When interpreting experimental data, one would like to have some understanding of the uncertainty associated with theoretical expectations. In QED and the weak interactions this is not a big problem because the couplings are sufficiently weak that higher order corrections are generally quite small. In QCD, however, the coupling is quite strong ($`\alpha _s`$ is still of order $`1/8`$ at the scale of the $`Z`$ boson mass) and it is difficult to obtain a reliable estimate of the theoretical uncertainty. Typically, one characterizes theoretical uncertainty by the dependence on the renormalization scale $`\mu `$. Since we don’t actually know how to choose $`\mu `$ or even a range of $`\mu `$, the uncertainty associated with scale dependence is somewhat arbitrary. There seems no way around this other than to calculate to higher order where the scale dependence is expected to be smaller. However, one often obtains other improvements besides reduced scale dependence by going to higher order. Sometimes one finds the next-to-leading order (NLO) correction to be very large. A notorious example is Higgs production at hadron colliders where the NLO corrections to the leading order gluon fusion process are of the order of $`100\%`$. Such large corrections often come from opening up new channels that are forbidden at leading order (LO), but one must still be concerned about the question of perturbative convergence. Regardless of the nominal scale dependence, one simply does not know how to trust a calculation when the perturbative corrections are large. Even if the overall NLO correction is relatively small, there may be regions of phase space, typically near the boundaries of the allowed region for the LO process, where NLO corrections are large. In these regions, NLO calculations are effectively of leading order and suffer from the large scale dependence associated with leading order. It is only in those regions of phase space where the NLO corrections are well behaved (as determined by the ratio of the NLO to LO terms) that one has confidence in the reliability of the calculation and can begin to believe the uncertainty estimated from scale dependence and it is only when one has a reliable estimate of the theoretical uncertainty that comparisons to experiment are meaningful. A next-to-leading order three jet calculation will have many phenomenological applications. One of the most important will be to perform a purely hadronic extraction of $`\alpha _s`$ via the ratio of two-jet to three-jet production. Because these processes have the same production mechanisms, such an extraction should be relatively free of parton distribution uncertainties. Since hadron machines produce events at all accessible energy scales, it should be possible to measure the running of $`\alpha _s`$ and thereby the QCD $`\beta `$ function which depends upon the strongly interacting matter content accessible at each scale. This calculation will also be useful for studying jet algorithms. 
One would like to have a flexible jet algorithm that makes it easy to compare experimental results with theoretical calculations. The presence of as many as four final state partons permits more complicated clustering conditions and tests the details of the algorithms. In our pure gluon study , we found that the iterative cone algorithms commonly used at hadron colliders have an intrinsic infrared sensitivity that precludes their direct implementation in fixed order calculations. With Run II at the Tevatron fast approaching, it would be desirable to settle on an algorithm suitable for both theory and experiment. Other applications include the study of energy flow within jets and background studies for new phenomena searches. Finally, this entire calculation is but a part of an eventual next-to-next-to-leading order (NNLO) calculation of two-jet production. That calculation is still a long way off. Not only are the two-loop virtual corrections unknown, but even the higher-order contributions to real emission are unknown. Still, the NNLO calculation will eventually be needed to compare to the high statistics data that will be collected at the Tevatron and the LHC. ## III Methods The NLO three jet calculation consists of two parts: two to three parton processes at one-loop (the virtual terms) and two to four parton processes (the real emission terms) at tree-level. Both of these contributions are infrared singular; only the sum of the two is infrared finite and meaningful. The virtual contributions are infrared singular because of loop momenta going on-shell. The virtual singularities take the form of single and double poles in the dimensional regulator $`ϵ`$ multiplying the Born amplitude. The real emission contributions are singular when two partons become collinear or when a gluon becomes very soft. The Kinoshita-Lee-Nauenberg (KLN) theorem guarantees that the infrared singularities cancel for sufficiently inclusive processes when the real and virtual contributions are combined. The parton sub-processes involved are $`ggggg`$ , $`\overline{q}qggg`$ , $`\overline{q}q\overline{Q}Qg`$ , and processes related to these by crossing symmetry, all computed to one-loop, and $`gggggg`$, $`\overline{q}qgggg`$, $`\overline{q}q\overline{Q}Qgg`$, and $`\overline{q}q\overline{Q}Q\overline{Q^{}}Q^{}`$ and the crossed processes computed at tree-level. Quark-antiquark pairs $`Q\overline{Q}`$ may or may not have the same flavor as the $`q\overline{q}`$ and $`Q^{}\overline{Q^{}}`$ pairs. Previous NLO three jet calculations have worked in the approximation of pure gluon scattering, using only the $`ggggg`$ and $`gggggg`$ processes. This is the first calculation to include all parton sub-processes. In order to implement the kinematic cuts necessary to compare a calculation to experimental data one must compute the cross section numerically. Thus, it is not sufficient to know that the singularities drop out in the end, we must find a way of canceling them before we start the calculation. The crucial issue in obtaining and implementing the cancelation is resolution. The real emission process is infrared singular in precisely those regions where the individual partons cannot all be resolved (even in principle, ignoring the complication of hadronization, showering, etc.) because of collinear overlap or by becoming too soft to detect. 
If we impose some resolution criterion, we can split the real emission calculation into two parts, the “hard emission” part in which all of the partons are well resolved and the “infrared” part in which one or more partons are unresolved. The hard emission part is computed in the normal way by means of Monte Carlo integration. The infrared part is treated differently, making use of the fact that matrix elements have well defined factorization properties in both soft and collinear infrared limits. In terms of color-ordered helicity amplitudes, $`_n(\mathrm{},1^{\lambda _1},2^{\lambda _2},\mathrm{})\stackrel{12}{}Split_{\lambda _c}(1^{\lambda _1},2^{\lambda _2})_{n1}(\mathrm{},c^{\lambda _c},\mathrm{})`$ (1) (2) $`_n(\mathrm{},1^{\lambda _1},s^{\lambda _s},2^{\lambda _2},\mathrm{})\stackrel{k_s0}{}Soft(1,s^{\lambda _s},2)_{n1}(\mathrm{},1^{\lambda _1},2^{\lambda _2},\mathrm{}),`$ (3) where $`Split`$ and $`Soft`$ are universal functions depending only on the momenta, helicities and particle types involved. The $`Split`$ functions are in a sense the square roots of the Altarelli-Parisi splitting functions. In computing the infrared part, we replace the full two-to-four parton matrix elements with their infrared factorized limits. We then integrate out the unresolved parton by integrating (in dimensional regularization) the $`Split`$ and $`Soft`$ functions over the unresolved region of phase space, resulting in single and double poles multiplying two-to-three parton Born matrix elements. These terms, as the KLN theorem says they must, have exactly the right pole structure to cancel the infrared poles of the virtual contribution. By analytically combining unresolved real emission with the virtual terms, we obtain a finite contribution that can be integrated numerically. Several different methods of implementing this infrared cancelation have successfully employed in various NLO calculations. The method we use is the “subtraction improved” phase space slicing method . The phase space slicing method uses a resolution criterion $`s_{min}`$, which is a cut on the two parton invariant masses, $$s_{ij}=2E_iE_j(1\mathrm{cos}\theta _{ij}).$$ (4) If partons $`i`$ and $`j`$ have $`s_{ij}>s_{min}`$ they are said to be resolved from one another. (Which is not to say that a jet clustering algorithm will not put them into the same jet.) If $`s_{ij}<s_{min}`$ partons $`i`$ and $`j`$ are said to be unresolvable. One advantage of the $`s_{min}`$ criterion is that it simultaneously regulates both soft ($`E_i0`$ or $`E_j0`$) and collinear ($`\mathrm{cos}\theta _{ij}1`$) emission. In the rearrangement of terms, the infrared region of phase space is where any two parton invariant mass is less than $`s_{min}`$. These regions are sliced out of the full two-to-four body phase space, partially integrated and then added to the two-to-three body integral. Because the infrared integral is bounded by $`s_{min}`$, the integrations over $`Split`$ and $`Soft`$ terms depend explicitly on $`s_{min}`$. In fact, in the cancelation of the virtual singularities, the $`1/ϵ`$ terms are replaced by $`\mathrm{ln}s_{min}`$ terms and the $`1/ϵ^2`$ terms by $`\mathrm{ln}^2s_{min}`$ terms. The hard real emission term is also $`s_{min}`$ dependent because the boundary of the sliced out region depends on $`s_{min}`$. Because $`s_{min}`$ is an arbitrary parameter the sum of the virtual and real emission terms must be $`s_{min}`$ independent. 
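To make the slicing criterion concrete, the following toy sketch (illustrative only, not the program used in the calculation) builds a few approximately massless four-momenta by hand and flags which pairs fall below a given $`s_{min}`$ using $`s_{ij}=2E_iE_j(1-\mathrm{cos}\theta _{ij})`$; the momenta and the value $`s_{min}=10`$ GeV<sup>2</sup> are invented for the example.

```python
import numpy as np
from itertools import combinations

def smin_pairs(momenta, s_min):
    """Return (i, j, s_ij, resolved?) for all pairs of four-momenta (E, px, py, pz)."""
    out = []
    for (i, p), (j, q) in combinations(enumerate(momenta), 2):
        cos_ij = np.dot(p[1:], q[1:]) / (np.linalg.norm(p[1:]) * np.linalg.norm(q[1:]))
        s_ij = 2.0 * p[0] * q[0] * (1.0 - cos_ij)
        out.append((i, j, s_ij, s_ij > s_min))
    return out

# toy 4-parton final state (GeV)
partons = [np.array([120.00, 120.00,  0.00, 0.0]),
           np.array([ 80.00, -79.00, 12.60, 0.0]),
           np.array([  0.04,   0.00, -0.04, 0.0]),    # very soft gluon
           np.array([ 40.00,  39.99,  0.90, 0.0])]    # nearly collinear with parton 0

for i, j, s_ij, resolved in smin_pairs(partons, s_min=10.0):
    print("pair (%d,%d): s_ij = %10.2f GeV^2  ->  %s"
          % (i, j, s_ij, "resolved" if resolved else "unresolved"))
```

The wide-angle hard pairs come out resolved, while the pairs involving the soft gluon and the collinear pair fall below the cut; in the full calculation it is these unresolved regions that are sliced out, partially integrated analytically, and recombined with the virtual terms.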
Thus, we have rearranged the calculation, trading a cancelation of infrared poles in $`ϵ`$ for a cancelation of logarithms of $`s_{min}`$. This provides an important cross check on our calculation. If we can demonstrate that our calculated cross section is $`s_{min}`$ independent we can be confident that we have correctly implemented the infrared cancelation. While the NLO cross section is formally independent of $`s_{min}`$, there are several practical considerations to choosing the value properly. As $`s_{min}`$ becomes smaller, the infrared approximations of the matrix elements becomes more accurate. However, the overriding concern is the numerical convergence of the calculation. Two terms, each diverging like $`\mathrm{ln}^2s_{min}`$ must be added with the logs canceling. As $`s_{min}`$ is made small, the logarithm becomes large and the individual terms, real and virtual, become larger in magnitude. The sum however, the NLO cross section, is unchanged. Thus, as $`s_{min}`$ becomes small, it becomes harder to engineer the cancelation to the precision to which one would like to compute the cross section. Based on this consideration, we would like to make $`s_{min}`$ as large as possible. There is an absolute upper limit imposed by the constraints of jet finding. We cannot make $`s_{min}`$ so large that it begins to interfere with jet clustering, say, by declaring unresolvable a pair of partons that a sensible jet clustering algorithm would say are separated from one another. Another problem with large values of $`s_{min}`$, alluded to before and which actually sets in at a lower scale than jet clustering interference, is that as $`s_{min}`$ is made larger the infrared approximations used in the slicing region become less precise. The “subtraction improved” part of our method involves the handling of the slicing region. As originally implemented, the infrared region was completely sliced out of the two-to-four integration and the full two-to-four matrix element was replaced by its soft or collinear limit. A better approximation is to leave the infrared regions in the “hard” phase space integral, but to compute only the difference between the true and approximate matrix elements in those regions. In our gluonic three jet production study we found that the subtraction improvement allowed us to use substantially larger values of $`s_{min}`$ than would have been possible with just phase space slicing. ## IV Preliminary Results The results presented below were computed for the following kinematic configurations: the $`\overline{p}p`$ center of mass energy is $`1800`$ GeV; the leading jet is required to have at least $`100`$ GeV in transverse energy, $`E_T`$, and there must be two additional jets with at least $`50`$ GeV of transverse energy; all jets must lie in the pseudorapidity range $`4.0<\eta _J<4.0`$. The CTEQ3M parton distribution functions and the EKS jet clustering algorithm (modified for three jet configurations as in reference ) were used. Figure 1 shows the computed next-to-leading order three jet cross section as a function of the resolution parameter $`s_{min}`$. Also shown is the leading order calculation. We see that the NLO result is stable over a wide range of values of $`s_{min}`$. This stability indicates that we are correctly implementing the infrared cancelation. Further calculations at larger values of $`s_{min}`$ are needed to actually determine the limit of the region of stability. In the lower plot, we see the statistical uncertainty on each point. 
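The numerical consequence of this cancelation can be mimicked with a toy model (purely illustrative; the coefficients, the hard scale and the error model below are invented and have nothing to do with the actual matrix elements): two Monte Carlo style estimates whose means contain opposite $`\mathrm{ln}^2s_{min}`$ and $`\mathrm{ln}s_{min}`$ terms are added, the per-sample spread being taken proportional to each term's magnitude, and the sum stays flat while the error of the combination grows as $`s_{min}`$ is lowered.

```python
import numpy as np

rng = np.random.default_rng(7)

def toy_estimate(mean, rel_err, n):
    """Monte Carlo style estimate: sample mean and its statistical error."""
    sample = rng.normal(mean, abs(mean) * rel_err, n)
    return sample.mean(), sample.std() / np.sqrt(n)

sigma0, a, b, Q2 = 2.3, 0.9, 0.4, 1.0e4        # invented "physical" sum, log coefficients, hard scale
for s_min in (10.0, 3.0, 1.0, 0.3):
    L = np.log(s_min / Q2)
    real_mean    = 0.5 * sigma0 + a * L**2 + b * L    # hard real-emission piece
    virtual_mean = 0.5 * sigma0 - a * L**2 - b * L    # virtual plus sliced piece
    re, dre = toy_estimate(real_mean, 0.01, 100)
    vi, dvi = toy_estimate(virtual_mean, 0.01, 100)
    print("s_min = %5.2f: sum = %6.3f +- %.3f" % (s_min, re + vi, np.hypot(dre, dvi)))
```

The printed sum stays at the invented value while its uncertainty grows as the two pieces become individually larger, which is the qualitative behaviour described above.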
As $`s_{min}`$ becomes small, it becomes increasingly difficult to calculate $`\sigma _{jjj}`$ to the desired precision. For instance, at $`s_{min}=10`$ GeV<sup>2</sup>, the real and virtual components are $`16.763\pm 0.008`$ and $`14.468\pm 0.005(nb)`$ respectively, while at $`s_{min}=1`$ GeV<sup>2</sup>, they are $`29.739\pm 0.035`$ and $`27.312\pm 0.010`$. To obtain the same absolute uncertainty on the sum of these numbers, the relative uncertainty on each of the components at $`s_{min}=1`$ GeV<sup>2</sup> must be one half that required at $`s_{min}=10`$ GeV<sup>2</sup>. Since the statistical uncertainty scales like the square root of number of points evaluated, it takes roughly four times as long to obtain a precise calculation at $`s_{min}=1`$ GeV<sup>2</sup> as it does at $`s_{min}=10`$ GeV<sup>2</sup>. We also see that the size of the NLO correction is of order $`15\%`$. This gives us confidence in the perturbative stability of the calculation. Together, these two observations indicate that we are performing a reliable calculation of the three jet cross section. Further tests using a variety of modern parton distribution functions, values of $`\alpha _s`$, renormalization scales, etc. are needed to obtain a clearer picture of the theoretical uncertainty associated with the calculation. Figure 2 shows the transverse energy spectrum of the leading jet in $`E_T`$. There is no indication of any large correction appearing in the jet spectrum. The dominant feature of the NLO spectrum is that it is somewhat softer than the LO spectrum. That is, NLO predicts that the spectrum falls more quickly with growing transverse energy than LO. This is explained in part by the fact that NLO opens up the available phase space by allowing a fourth jet in the final state. This same softening trend is also observed in the transverse energy spectrum of the second leading jet, shown in figure 3. Both jet spectra were computed at $`s_{min}=7.9`$ GeV<sup>2</sup>. ## V Conclusions We have successfully built a next-to-leading order event generator for inclusive three jet production at hadron colliders. This is the first NLO calculation of this process to include all parton sub-processes. Our results indicate that we are correctly canceling the infrared singularities and therefore obtaining reliable results. With this calculation we will be able to study many interesting phenomena within QCD. ## Acknowledgements W.B.K. wishes to thank L. Dixon and A. Signer for their assistance in verifying the one-loop matrix elements and Z. Bern for the use of the UCLA High Energy Physics Group’s computing cluster. This work was supported by the US Department of Energy under grants DE-AC02-76CH03000 (W.T.G.) and DE-AC02-98CH10886 (W.B.K.).
# Novel Phenomena in Dilute Electron Systems in Two Dimensions For the past two decades, all two-dimensional systems of electrons were believed to be insulating in the limit of zero temperature. We review recent experiments that provide evidence for an unexpected transition to a conducting phase at very low electron densities. The nature of this phase is not understood, and is currently the focus of intense theoretical and experimental attention. BACKGROUND A two-dimensional (2D) system of electrons or “holes” (a hole, or missing electron, behaves like a positively charged electron) is one in which the positions of the electrons and their motion are restricted to a plane. Physical realizations can be found in very thin films, sometimes at the surface of bulk materials, in “quantum well” systems such as GaAs/AlGaAs that are specifically engineered for this purpose, and in the silicon metal-oxide-semiconductor field-effect transistors described below. Two-dimensional electron systems have been studied for nearly forty years , and have yielded a number of important discoveries of physical phenomena that directly reflect the quantum mechanical nature of our world. These include the integer Quantum Hall Effect (QHE), which reflects the quantization of electron states by a magnetic field, and the fractional Quantum Hall Effect, which is a manifestation of the quantum mechanics of many electrons acting together in a magnetic field to yield curious effects like fractional (rather than whole) electron charges. For nearly two decades it was believed that in the absence of an external magnetic field ($`H=0`$) all two-dimensional systems of electrons are insulators in the limit of zero temperature. The true nature of the conduction was expected to be revealed only at sufficiently low temperatures; in materials such as highly conducting thin films, this was thought to require unattainably low temperatures in the $`\mu `$Kelvin range. Based on a scaling theory for non-interacting electrons, these expectations were further supported by theoretical work for weakly interacting electrons . Confirmation that two-dimensional systems of electrons are insulators in zero field was provided by a beautiful series of experiments in thin metallic films and silicon metal-oxide-semiconductor field-effect transistors, where the conductivity was shown to display weak logarithmic corrections leading to infinite resistivity in the limit of zero temperature. It was therefore quite surprising when recent experiments in silicon metal-oxide-semiconductor field-effect transistors suggested that a transition from insulating to conducting behavior occurs with increasing electron density at a very low critical density, $`n_c10^{11}`$ cm<sup>-2</sup>. These experiments were performed on unusually high quality samples, allowing measurements at considerably lower electron densities than had been possible in the past. First viewed with considerable skepticism, the finding was soon confirmed for silicon metal-oxide-semiconductor field-effect transistors fabricated in other laboratories, and then for other materials, including p-type SiGe structures , p-type GaAs/AlGaAs heterostructures , and n-type AlAs and GaAs/AlGaAs heterostructures . It was soon realized that the low electron (and hole) densities at which these observations were made correspond to a regime where the energy of the repulsive Coulomb interactions between the electrons exceeds the Fermi energy (roughly, their kinetic energy of motion) by an order of magnitude or more. 
For example, at an electron density $`n_s=10^{11}`$ cm<sup>-2</sup> in silicon metal-oxide-semiconductor field-effect transistors, the Coulomb repulsion energy, $`U_ce^2(\pi n_s)^{1/2}/ϵ`$, is about 10 meV while the Fermi energy, $`E_F=\pi n_s\mathrm{}^2/2m^{}`$, is only 0.55 meV. (Here $`e`$ is the electronic charge, $`ϵ`$ is the dielectric constant, and $`m^{}`$ is the effective mass of the electron). Rather than being a small perturbation, as has been generally assumed in theoretical work done to date, interactions instead provide the dominant energy in these very dilute systems. EXPERIMENTS The inset to Fig. 1(a) shows a schematic diagram of the band structure of a silicon metal-oxide-semiconductor field-effect transistor consisting of a thin-film metallic gate deposited on an oxide layer adjacent to lightly p-doped silicon, which serves as a source of electrons. A voltage applied between the gate and the oxide-silicon interface causes the conduction and valence bands to bend, as shown in the diagram, creating a potential minimum which traps electrons in a two-dimensional layer perpendicular to the plane of the page. The magnitude of the applied voltage determines the degree of band-bending and thus the depth of the potential well, allowing continuous control of the number of electrons trapped in the two-dimensional system at the interface. For a very high-mobility (low disorder) silicon metal-oxide-semiconductor field-effect transistor, the resistivity is shown at several fixed temperatures as a function of electron density in Fig. 1(a). There is a well defined crossing at a “critical” electron density, $`n_c`$, below which the resistivity increases as the temperature is decreased, and above which the reverse is true. This can be seen more clearly in Fig. 1(b) where the resistivity is plotted as a function of temperature for various fixed electron densities. A resistivity that increases with decreasing temperature generally signals an approach to infinite resistance at $`T=0`$, that is, to insulating behavior; a resistivity that decreases as the temperature is lowered is characteristic of a metal if the resistivity tends to a finite value, or a superconductor or perfect conductor if the resistivity tends to zero. The crossing point of Fig. 1(a) thus signals a transition from insulating behavior below $`n_s<n_c`$ to conducting behavior at higher densities ($`n_s>n_c`$). Similar behavior obtains in other materials at critical densities determined by material parameters such as effective masses and dielectric constants. The value of the resistivity at the transition (the “critical resistivity” $`\rho _c`$) in all systems remains on the order of $`h/e^2`$, the quantum unit of resistivity. The electrons’ spins play a crucial role in these low-density materials, as demonstrated by their dramatic response to a magnetic field applied parallel to the plane of the two-dimensional system. We note that an in-plane magnetic field couples only to the electron spins and does not affect their orbital motion. The parallel-field magnetoresistance is shown for a silicon metal-oxide-semiconductor field-effect transistor in Fig. 2 for electron densities spanning the critical density $`n_c`$ at a temperature of 0.3 K. The resistivity increases by more than an order of magnitude with increasing field, saturating to a new value in fields above $`2`$ or $`3`$ Tesla above which the spins are presumably fully aligned. 
The total change in resistance is larger at lower temperatures and for higher mobility samples, exceeding two or three order of magnitude in some cases. Although first thought to be associated only with the suppression of the conducting phase, the fact that very similar magnetoresistance is found for electron densities above and below the zero-field critical density indicates that this is a more general feature of dilute two-dimensional electron systems. SOME OPEN QUESTIONS Strongly interacting systems of electrons in two dimensions are currently the focus of intense interest, eliciting a spate of theoretical attempts to account for the presence and nature of the unexpected conducting phase. Most postulate esoteric new states of matter, such as a low-density conducting phase first considered by Finkelshtein, a perfect metallic state , non-Fermi liquid behavior , and several types of superconductivity . A number of relatively more mundane suggestions have been advanced that attribute the unusual behavior seen in Figs. 1 and 2 to effects that are essentially classical in nature. These include a vapor/gas separation in the electron system, temperature- and field-dependent filling and emptying of charge traps unavoidably introduced during device fabrication at the oxide-silicon interface , and temperature-dependent screening associated with such charged traps . Although some may strongly advocate a particular view, all would agree that no consensus has been reached. A great deal more experimental information will be required before the behavior of these systems is understood. Information will surely be obtained in the near future from NMR, tunneling studies, optical investigations, and other techniques. One crucially important question that needs to be resolved by experiment is the ultimate fate of the resistivity in the conducting phase in the limit of zero temperature. Data in all 2D systems showing the unusual metal-insulator transition indicate that, following the rapid (roughly exponential) decrease with decreasing temperature shown in Fig. 1(b), the resistivity levels off to a constant, or at most weakly temperature-dependent, value. The temperature at which this leveling off occurs decreases, however, as the transition is approached. The question is whether the resistivity of dilute two-dimensional systems tends to a finite value or zero in the zero-temperature limit as the transition is approached. If the resistivity remains finite, this would rule out superconductivity or perfect conductivity. The question may then revert to whether localization of the electrons reasserts itself at very low temperatures, yielding an insulator as originally expected. There are well-known experimental difficulties associated with cooling the electron system to the same temperature as the lattice and bath (that is, the temperature measured by the thermometer), and these experiments will require great skill, care and patience. An equally important issue is the magnetic response of the electron system. Superconductors expel magnetic flux and are strongly diamagnetic, while Finkelshtein’s low-density phase would give a strongly paramagnetic signal. There are very few electrons in a low-density, millimeter-sized, $`100\AA `$-thick layer, and measurements of the magnetization will be exceedingly difficult. 
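The scale of the interaction-to-kinetic-energy ratio quoted earlier is easy to reproduce; the short sketch below (not from the paper) evaluates $`U_c=e^2(\pi n_s)^{1/2}/ϵ`$ and $`E_F=\pi n_s\mathrm{\hbar }^2/2m^{}`$ for a silicon MOSFET, assuming an effective mass $`m^{}0.19m_e`$ and an average interface dielectric constant $`ϵ7.7`$; both parameter values are assumptions of the illustration rather than numbers quoted in the text.

```python
import numpy as np

# constants (SI)
e, eps0, hbar, m_e = 1.602e-19, 8.854e-12, 1.055e-34, 9.109e-31

m_eff = 0.19 * m_e      # assumed transverse effective mass in Si
eps_r = 7.7             # assumed average of Si (11.5) and SiO2 (3.9) dielectric constants
meV = 1.602e-22         # 1 meV in Joules

for n_s_cm2 in (1.0e11, 1.0e12):
    n_s = n_s_cm2 * 1.0e4                                                # cm^-2 -> m^-2
    U_c = e**2 * np.sqrt(np.pi * n_s) / (4.0 * np.pi * eps0 * eps_r)     # SI form of e^2 sqrt(pi n_s)/eps
    E_F = np.pi * n_s * hbar**2 / (2.0 * m_eff)                          # Fermi energy (two valleys)
    print("n_s = %.0e cm^-2:  U_c = %5.2f meV,  E_F = %5.2f meV,  ratio = %4.1f"
          % (n_s_cm2, U_c / meV, E_F / meV, U_c / E_F))
```

With these assumptions the Coulomb energy dominates the Fermi energy by roughly an order of magnitude at $`n_s10^{11}`$ cm<sup>-2</sup>, consistent with the estimate in the text; the precise numbers shift with the assumed effective mass and dielectric constant.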
In closing, we address a crucial question regarding the nature of the apparent, unexpected zero-field metal-insulator transition: do these experiments signal the presence of unanticipated phases and new phenomena in strongly interacting two-dimensional electron systems, or can the observations be explained by invoking classical effects such as recharging of traps in the oxide or temperature-dependent screening? Some recent experiments suggest the former. A Princeton-Weizmann collaboration has demonstrated that the magnetic field-induced phase transition between integer Quantum Hall Liquid and insulator (the QHE-I transition) evolves smoothly and continuously to the metal-insulator transition in zero magnetic field discussed in this paper, raising the possibility that the two transitions are closely related. This conjecture is supported by the strong similarity between the temperature dependence of the resistivity in zero magnetic field and in the Quantum Hall Liquid phase. Additional insight can be obtained from a comparison of the “critical” resistivity, $`\rho _c`$, at the zero-field metal-insulator transition and the critical resistivity, $`\rho _{QHEI}`$, at the QHE-I transition measured for the same sample . Fig. 3(a) shows values of the zero-field critical resistivity, $`\rho _c`$, for a number of samples of different 2D electron and hole systems: $`\rho _c`$ varies by an order of magnitude, between approximately $`10^4`$ and $`10^5`$ Ohm, and exhibits no apparent systematic behavior. In contrast, Fig. 3(b) shows that the ratio $`\rho _c/\rho _{QHEI}`$ is close to unity when measured on the same sample for three different materials. Since the QHE-I transition is clearly a quantum phase transition, this suggests that the zero-field metal-insulator transition is a quantum phase transition as well. The intriguing relationship between critical resistivities for these two transitions shown for only a very few samples in Fig. 3(b) clearly needs further confirmation. Future work will surely resolve whether, and what, exciting and unanticipated physics is required to account for the puzzling and fascinating recent observations in two dimensions. AKNOWLEDGMENTS We are grateful to P. T. Coleridge for sharing his data with us prior to publication. M. P. Sarachik thanks the US Department of Energy for support under grant No. DE-FG02-84-ER45153. M. P. S. and S. V. Kravchenko acknowledge support by NSF grant DMR 98-03440. FIGURE CAPTIONS Figure 1: (a): Resistivity as a function of electron density for the two-dimensional system of electrons in a high-mobility silicon metal-oxide-semiconductor field-effect transistor. The different curves correspond to different temperatures. Note that at low densities the resistivity increases with decreasing temperature (insulating behavior), while the reverse is true for higher densities (conducting behavior). The inset shows a schematic diagram of the electron bands to illustrate how a two-dimensional layer is obtained (see text). (b): Resistivity as a function of temperature for the two-dimensional system of electrons in a silicon MOSFET. Different curves are for different electron densities. Figure 2: For different electron densities, the resistivity at $`0.3`$ Kelvin is plotted as a function of magnetic field applied parallel to the plane of the two-dimensional system of electrons in a silicon MOSFET. The top three curves are insulating while the lower curves are conducting in the absence of a magnetic field. 
The response to parallel field is qualitatively the same in the two phases, varying continuously across the transition. Figure 3: (a): The critical resistivity, $`\rho _c`$, which separates the conducting and insulating phases in zero magnetic field is shown for several 2D systems for which the transition occurs at different electron (or hole) densities, shown along the $`x`$-axis. Although the critical resistivity is of the order of the quantum unit of resistivity, $`h/e^226`$ kOhm, it varies by about a factor of 10. (b): For several materials, measurements of $`\rho _c`$ and $`\rho _{QHEI}`$ on the same sample yield ratios $`\rho _c/\rho _{QHEI}`$ that are near unity. Here $`\rho _c`$ is the critical resistivity separating the conducting and insulating phases in the absence of magnetic field and $`\rho _{QHEI}`$ is the critical resistivity at the transition from the Quantum Hall Liquid to the insulator in finite magnetic field. Data were obtained for p-GaAs/AlGaAs heterostructures from Refs. , for n-GaAs/AlGaAs from Ref. , and for p-SiGe from Refs. .
# DESY 99-036 Multi-bosonic algorithms for dynamical fermion simulations (talk given at the workshop on Molecular Dynamics on Parallel Computers, February 1999, NIC, Jülich; to appear in the proceedings) ## 1 Introduction The numerical simulation of quantum field theories with fermions is an interesting and difficult computational problem. The basic difficulty is the necessary calculation of very large determinants, the determinants of fermion matrices, which can only be achieved by some stochastic procedure with the help of auxiliary bosonic “pseudofermion” fields. In hybrid Monte Carlo algorithms the number of pseudofermion fields corresponds to the number of fermion field components. The evolution of the pseudofermion fields in the updating process is realized by discretized molecular dynamics equations. The error implied by the finite length of discretization steps is corrected for by a global accept-reject decision which involves a fermion matrix inversion. The ingredients of the two-step multi-bosonic algorithm , which will be considered in this contribution, are somewhat different but still in a general sense similar. The number of pseudofermion fields is multiplied by the order of a polynomial approximation of some negative power $`x^{-\alpha }`$ of the fermion matrix. These auxiliary bosonic fields are updated according to a multi-bosonic action by using simple methods known from bosonic quantum field theories, such as heatbath and overrelaxation. The error of the polynomial approximation is also corrected here in a global accept-reject decision. This is realized in a so-called noisy correction step by using a better polynomial approximation, which realizes a kind of generalized inversion of the fermion matrix. In my talk I review some recent developments of the multi-bosonic algorithms. The performance of the two-step multi-bosonic algorithm is illustrated in a recent large scale Monte Carlo simulation of the supersymmetric Yang-Mills theory , which is being performed at NIC, Jülich. This application shows that the algorithm is able to cope with the difficulties arising at nearly zero gluino mass in reasonably large physical volumes. ## 2 Multi-bosonic actions The multi-bosonic representation of the fermion determinant is based on the approximation $$\left|\det (Q)\right|^{N_f}=\left\{\det (Q^{\dagger }Q)\right\}^{N_f/2}\simeq \frac{1}{\det P_n(Q^{\dagger }Q)}$$ (1) where $`N_f`$ (Dirac-) fermion flavours are considered and the polynomial $`P_n`$ satisfies $$\lim_{n\to \infty }P_n(x)=x^{-N_f/2}$$ (2) in an interval $`x\in [ϵ,\lambda ]`$. This interval is chosen such that it covers the spectrum of the squared hermitian fermion matrix $`\stackrel{~}{Q}^2`$. The hermitian matrix $`\stackrel{~}{Q}`$ is defined from the original fermion matrix $`Q`$ by $$\stackrel{~}{Q}\equiv \gamma _5Q=\stackrel{~}{Q}^{\dagger }$$ (3) where $`Q`$ is assumed here to satisfy the relation $$Q^{\dagger }=\gamma _5Q\gamma _5.$$ (4) Note that in (1) only the absolute value of the determinant is taken, which leaves out its sign (or phase). This can be taken into account at the evaluation of expectation values by reweighting.
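A minimal numerical sketch of the approximation (1)-(2) (illustrative only; a simple grid least-squares fit is used here as a stand-in for the optimized polynomials constructed in section 4, and the matrix, interval and order are invented): fit a polynomial to $`x^{-N_f/2}`$ on an interval covering the spectrum of a small random hermitian "fermion matrix" and compare $`\left|\det Q\right|^{N_f}`$ with $`1/\det P_n(Q^{\dagger }Q)`$.

```python
import numpy as np

rng = np.random.default_rng(1)
N, n_f, order = 6, 1, 16                      # matrix size, flavour number, polynomial order

# toy hermitian "fermion matrix" with a safely positive Q^dagger Q spectrum
A = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
Q = np.eye(N) + 0.05 * (A + A.conj().T)
QdQ = Q.conj().T @ Q

w = np.linalg.eigvalsh(QdQ)
eps, lam = 0.5 * w.min(), 2.0 * w.max()       # approximation interval covering the spectrum

# grid least-squares fit of x^(-N_f/2); not the optimized construction of section 4
x = np.linspace(eps, lam, 2000)
P = np.polynomial.Chebyshev.fit(x, x ** (-0.5 * n_f), order)

print("|det Q|^Nf          =", abs(np.linalg.det(Q)) ** n_f)
print("1 / det P_n(Q^dg Q) =", 1.0 / np.prod(P(w)))   # det P_n from the eigenvalues of Q^dagger Q
```

For a modest condition number the two numbers agree to several digits, which is the content of the limit (2).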
For the multi-bosonic representation of the determinant one uses the roots of the polynomial $`r_j,(j=1,\mathrm{},n)`$ $$P_n(Q^{}Q)=P_n(\stackrel{~}{Q}^2)=r_0\underset{j=1}{\overset{n}{}}(\stackrel{~}{Q}^2r_j).$$ (5) Assuming that the roots occur in complex conjugate pairs, one can introduce the equivalent forms $$P_n(\stackrel{~}{Q}^2)=r_0\underset{j=1}{\overset{n}{}}[(\stackrel{~}{Q}\pm \mu _j)^2+\nu _j^2]=r_0\underset{j=1}{\overset{n}{}}(\stackrel{~}{Q}\rho _j^{})(\stackrel{~}{Q}\rho _j)$$ (6) where $`r_j(\mu _j+i\nu _j)^2`$ and $`\rho _j\mu _j+i\nu _j`$. With the help of complex scalar (pseudofermion) fields $`\mathrm{\Phi }_{jx}`$ one can write $$detP_n(Q^{}Q)^1\underset{j=1}{\overset{n}{}}det[(\stackrel{~}{Q}\rho _j^{})(\stackrel{~}{Q}\rho _j)]^1$$ $$[d\mathrm{\Phi }]\mathrm{exp}\left\{\underset{j=1}{\overset{n}{}}\underset{xy}{}\mathrm{\Phi }_{jy}^+[(\stackrel{~}{Q}\rho _j^{})(\stackrel{~}{Q}\rho _j)]_{yx}\mathrm{\Phi }_{jx}\right\}.$$ (7) The exponent in (7) is the (negative) multi-bosonic action. Since it is quadratic in $`\stackrel{~}{Q}`$, its locality properties are inherited from the fermion matrix $`Q`$. For instance, if $`Q`$ has only nearest neighbour interactions then the multi-bosonic action (7) extends up to next-to-nearest neighbours. The multi-bosonic representation of the fermion determinant (7) can be used for a Monte Carlo procedure in terms of the pseudofermion fields $`\mathrm{\Phi }_{jx}`$. The difficulty for small fermion masses in large physical volumes is that the condition number $`\lambda /ϵ`$ becomes very large ($`10^410^6`$) and very high orders $`n=𝒪(10^3)`$ are needed for a good approximation. This requires large storage and the autocorrelation becomes bad since it is proportional to $`n`$. An additional question is how to control the systematic errors introduced by the polynomial approximation in (1). In principle one has to perform the limit to infinite order of the approximation $`n\mathrm{}`$, which means in practical terms a need to investigate the dependence of the results on $`n`$. Several solutions for eliminating the systematical errors due to the finite order of approximation $`n`$ in (1) are possible. For instance, one can calculate a correction factor from the eigenvalues of $`\stackrel{~}{Q}^2`$ and introduce a corresponding global Matropolis accept-reject step in the updating . The necessary calculation of the eigenvalues leads, however, to an algorithm growing with the square of the number of lattice points. A better solution is to apply a noisy correction step which is especially simple in case of $`N_f=2`$ flavours when an iterative inversion is sufficient . This can be generalized to an arbitrary number of flavours in the two-step multi-bosonic scheme which will be discussed in detail in the next section. The special case of $`N_f=1`$ flavours can be dealt with in a non-hermitian version applied directly to $`Q`$, insted of $`Q^{}Q`$ in (1). This works well for heavy fermion masses when the spectrum of $`Q`$ can be covered by en ellipse but it would be very cumbersome for small fermion masses where the eigenvalues are surrounding zero. Another possibility to perform the corrections of the systematic errors of the polynomial approximation is reweighting in the expectation values. For the special case of $`N_f=2`$ this has been introduced in the polynomial hybrid Monte Carlo scheme . 
The case of arbitrary $`N_f`$ can be solved by an appropriate polynomial approximation, which can also be implemented in the two-step multi-bosonic approach (see section 3.2). Other approaches for eliminating the systematic errors are possible, for instance, by choosing some specific ways of optimizing the approximate polynomials (see ). It is also possible to combine the multi-bosonic idea with other methods of dynamical fermion simulations. To review all attempts would take too much time and most of the proposals were not yet tested in really large scale simulations. In what follows I shall concentrate on the two-step multi-bosonic scheme which proved to be efficient in recent simulations of SU(2) Yang-Mills theory with light gluinos . ## 3 Two-step multi-bosonic algorithm The dynamical fermion algorithm directly using the multi-bosonic representation in (7) has difficulties with large storage requirements and long autocorrelations. One can achieve substantial improvements on both these problems by introducing a two-step polynomial approximation . In this two-step approximation scheme (2) is replaced by $$\underset{n_2\mathrm{}}{lim}P_{n_1}^{(1)}(x)P_{n_2}^{(2)}(x)=x^{N_f/2},x[ϵ,\lambda ].$$ (8) The multi-bosonic representation is only used for the first polynomial $`P_{n_1}^{(1)}`$ wich provides a first crude approximation and hence the order $`n_1`$ can remain relatively low. The correction factor $`P_{n_2}^{(2)}`$ is realized in a stochastic noisy correction step with a global accept-reject decision during the updating process (see section 3.1). In order to obtain an exact algorithm one has to consider in this case the limit $`n_2\mathrm{}`$. For very small fermion masses it turned out more practicable to fix some large $`n_2`$ and perform another small correction in the evaluation of expectation values by reweighting with a still finer polynomial (see section 3.2). ### 3.1 Update correction: global accept-reject The idea to use a stochastic correction step in the updating , instead of taking very large polynomial orders $`n`$, was proposed in the case of $`N_f=2`$ flavours in . $`N_f=2`$ is special because the function to be approximated is just $`x^1`$ and $`P_{n_2}^{(2)}(x)`$ can be replaced by the calculation of the inverse of $`xP_{n_1}^{(1)}(x)`$. For general $`N_f`$ one can take the two-step approximation scheme introduced in where the two-step multi-bosonic algorithm is described in detail. The theory of the necessary optimized polynomials is summarized in section 4 following . In the two-step approximation scheme for $`N_f`$ flavours of fermions the absolute value of the determinant is represented as $$\left|det(Q)\right|^{N_f}\frac{1}{detP_{n_1}^{(1)}(\stackrel{~}{Q}^2)detP_{n_2}^{(2)}(\stackrel{~}{Q}^2)}.$$ (9) The multi-bosonic updating with $`n_1`$ scalar pseudofermion fields is performed by heatbath and overrelaxation sweeps for the scalar fields and Metropolis sweeps for the gauge field. After a Metropolis sweep for the gauge field a global accept-reject step is introduced in order to reach the distribution of gauge field variables $`[U]`$ corresponding to the right hand side of (9). 
The idea of the noisy correction is to generate a random vector $`\eta `$ according to the normalized Gaussian distribution $$\frac{e^{\eta ^{}P_{n_2}^{(2)}(\stackrel{~}{Q}[U]^2)\eta }}{[d\eta ]e^{\eta ^{}P_{n_2}^{(2)}(\stackrel{~}{Q}[U]^2)\eta }},$$ (10) and to accept the change $`[U^{}][U]`$ with probability $$\mathrm{min}\{1,A(\eta ;[U^{}][U])\},$$ (11) where $$A(\eta ;[U^{}][U])=\mathrm{exp}\left\{\eta ^{}P_{n_2}^{(2)}(\stackrel{~}{Q}[U^{}]^2)\eta \eta ^{}P_{n_2}^{(2)}(\stackrel{~}{Q}[U]^2)\eta \right\}.$$ (12) The Gaussian noise vector $`\eta `$ can be obtained from $`\eta ^{}`$ distributed according to the simple Gaussian distribution $$\frac{e^{\eta ^{}\eta ^{}}}{[d\eta ^{}]e^{\eta ^{}\eta ^{}}}$$ (13) by setting it equal to $$\eta =P_{n_2}^{(2)}(\stackrel{~}{Q}[U]^2)^{\frac{1}{2}}\eta ^{}.$$ (14) In order to obtain the inverse square root on the right hand side of (14), we can proceed with polynomial approximations in two different ways. The first possibility was proposed in with $`x\stackrel{~}{Q}^2`$ as $$P_{n_2}^{(2)}(x)^{\frac{1}{2}}R_{n_3}(x)x^{N_f/4}S_{n_s}[P_{n_1}^{(1)}(x)].$$ (15) Here $$S_{n_s}(P)P^{\frac{1}{2}}$$ (16) is an approximation of the function $`P^{\frac{1}{2}}`$ on the interval $`P[\lambda ^{N_f/2},ϵ^{N_f/2}]`$. The polynomial approximations $`R_{n_3}`$ and $`S_{n_s}`$ can be determined by the same general procedure as $`P_{n_1}^{(1)}`$ and $`P_{n_2}^{(2)}`$. It turns out that these approximations are “easier” in the sense that for a given order higher precisions can be achieved than, say, for $`P_{n_1}^{(1)}`$. Another possibility to obtain a suitable approximation for (14) is to use the second decomposition in (6) and define $$P_{n_2}^{(1/2)}(\stackrel{~}{Q})\sqrt{r_0}\underset{j=1}{\overset{n_2}{}}(\stackrel{~}{Q}\rho _j),P_{n_2}^{(2)}(\stackrel{~}{Q}^2)=P_{n_2}^{(1/2)}(\stackrel{~}{Q})^{}P_{n_2}^{(1/2)}(\stackrel{~}{Q}).$$ (17) Using this form, the noise vector $`\eta `$ necessary in the noisy correction step can be generated from the gaussian vector $`\eta ^{}`$ according to $$\eta =P_{n_2}^{(1/2)}(\stackrel{~}{Q})^1\eta ^{},$$ (18) where $`P_{n_2}^{(1/2)}(\stackrel{~}{Q})^1`$ can be obtained as $$P_{n_2}^{(1/2)}(\stackrel{~}{Q})^1=\frac{P_{n_2}^{(1/2)}(\stackrel{~}{Q})^{}}{P_{n_2}^{(2)}(\stackrel{~}{Q}^2)}P_{n_3}(\stackrel{~}{Q}^2)P_{n_2}^{(1/2)}(\stackrel{~}{Q})^{}.$$ (19) In the last step $`P_{n_3}`$ denotes a polynomial approximation for the inverse of $`P_{n_2}^{(2)}`$ on the interval $`[ϵ,\lambda ]`$. Note that this last approximation can also be replaced by an iterative inversion of $`P_{n_2}^{(2)}(\stackrel{~}{Q}^2)`$. However, tests showed that the inversion by a least-squares optimized polynomial approximation is much faster because, for a given precision, less matrix multiplications have to be performed. In the simulation with light dynamical gluinos mainly the second form in (18)-(19) has been used. The first form could, however, be used as well. In fact, for very high orders $`n_2`$ or on a 32-bit computer the first scheme is better from the point of view of rounding errors. The reason is that in the second scheme for the evaluation of $`P_{n_2}^{(1/2)}(\stackrel{~}{Q})`$ we have to use the product form in terms of the roots $`\rho _j`$ in (17). Even using the optimized ordering of roots defined in , this is numerically less stable than the recursive evaluation according to (25), (31). If one uses the first scheme both $`P_{n_2}^{(2)}`$ in (12) and $`R_{n_3}`$ in (14)-(15) can be evaluated recursively. 
Nevertheless, on a 64-bit machine both methods work well and in case of (19) the determination of the least-squares optimized polynomials is somewhat simpler. The global accept-reject step for the gauge field can be performed after full sweeps over the gauge field links. A good choice for the order $`n_1`$ of the first polynomial $`P_{n_1}^{(1)}`$ is such that the average acceptance probability of the noisy correction be near 90%. One can decrease $`n_1`$ and/or increase the acceptance probability by updating only some subsets of the links before the accept-reject step. This might be useful on lattices larger than the largest lattice $`12^324`$ considered in . ### 3.2 Measurement correction: reweighting The multi-bosonic algorithms become exact only in the limit of infinitely high polynomial orders: $`n\mathrm{}`$ in (2) or, in the two-step approximation scheme, $`n_2\mathrm{}`$ in (8). Instead of investigating the dependence on the polynomial order by performing several simulations, it is practically better to fix some high order for the simulation and perform another correction in the “measurement” of expectation values by still finer polynomials. This is done by reweighting the configurations in the measurement of different quantities. In case of $`N_f=2`$ flavours this kind of reweighting has been used in within the polynomial hybrid Monte Carlo scheme. As remarked above, $`N_f=2`$ is special because the reweighting can be performed by an iterative inversion. The general case can, however, also be treated by a further polynomial approximation. The measurement correction for general $`N_f`$ has been introduced in . It is based on a polynomial approximation $`P_{n_4}^{(4)}`$ which satisfies $$\underset{n_4\mathrm{}}{lim}P_{n_1}^{(1)}(x)P_{n_2}^{(2)}(x)P_{n_4}^{(4)}(x)=x^{N_f/2},x[ϵ^{},\lambda ].$$ (20) The interval $`[ϵ^{},\lambda ]`$ can be chosen, for instance, such that $`ϵ^{}=0,\lambda =\lambda _{max}`$, where $`\lambda _{max}`$ is an absolute upper bound of the eigenvalues of $`Q^{}Q=\stackrel{~}{Q}^2`$. In this case the limit $`n_4\mathrm{}`$ is exact on an arbitrary gauge configuration. For the evaluation of $`P_{n_4}^{(4)}`$ one can use $`n_4`$-independent recursive relations (see section 4), which can be stopped by observing the convergence of the result. After reweighting the expectation value of a quantity $`A`$ is given by $$A=\frac{A\mathrm{exp}\{\eta ^{}[1P_{n_4}^{(4)}(Q^{}Q)]\eta \}_{U,\eta }}{\mathrm{exp}\{\eta ^{}[1P_{n_4}^{(4)}(Q^{}Q)]\eta \}_{U,\eta }},$$ (21) where $`\eta `$ is a simple Gaussian noise like $`\eta ^{}`$ in (13). Here $`\mathrm{}_{U,\eta }`$ denotes an expectation value on the gauge field sequence, which is obtained in the two-step process described in the previous subsection, and on a sequence of independent $`\eta `$’s. The expectation value with respect to the $`\eta `$-sequence can be considered as a Monte Carlo updating process with the trivial action $`S_\eta \eta ^{}\eta `$. The length of the $`\eta `$-sequence on a fixed gauge configuration can be, in principle, arbitrarily chosen. In praxis it can be optimized for obtaining the smallest possible errors. The application of the measurement correction is most important for quantities which are sensitive for small eigenvalues of the fermion matrix $`Q^{}Q`$. The polynomial approximations are worst near $`x=0`$ where the function $`x^{N_f/2}`$ diverges. 
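The noise average underlying (21) can be checked in isolation on a small fixed matrix: for complex Gaussian noise distributed as $`e^{\eta ^{\dagger }\eta }`$ times the usual normalization, the average of $`\mathrm{exp}\{\eta ^{\dagger }[1-P]\eta \}`$ equals $`1/\det P`$. The sketch below is illustrative only, with an arbitrary positive definite matrix standing in for $`P_{n_4}^{(4)}(Q^{\dagger }Q)`$; the size and the number of noise vectors are invented.

```python
import numpy as np

rng = np.random.default_rng(3)
N, n_noise = 4, 200000

# arbitrary positive definite matrix standing in for P4(Q^dagger Q); eigenvalues kept near 1
B = rng.normal(size=(N, N))
P = np.eye(N) + 0.1 * (B + B.T)

# complex Gaussian noise with density proportional to exp(-eta^dagger eta)
etas = (rng.normal(size=(n_noise, N)) + 1j * rng.normal(size=(n_noise, N))) / np.sqrt(2.0)
expo = np.real(np.einsum('ki,ij,kj->k', etas.conj(), np.eye(N) - P, etas))

print("noise average of reweighting factor:", np.exp(expo).mean())
print("1 / det(P)                         :", 1.0 / np.linalg.det(P))
```

With these settings the two numbers agree at the per-cent level; in the actual algorithm the same identity supplies the missing factor $`1/\det P_{n_4}^{(4)}(Q^{\dagger }Q)`$ in the measured expectation values.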
In the exact effective gauge action, including the fermion determinant, the configuration with a small eigenvalue $`\mathrm{\Lambda }`$ are suppressed by $`\mathrm{\Lambda }^{N_f/2}`$. The polynomials at finite order are not able to provide such a strong suppression, therefore in the updating sequence of the gauge fields there are more configurations with small eigenvalues than needed. The exceptional configurations with exceptionally small eigenvalues have to be supressed by the reweighting. This can be achieved by choosing $`ϵ^{}=0`$ and a high enough order $`n_4`$. It is also possible to take some non-zero $`ϵ^{}`$ and determine the eigenvalues below it exactly. Each eigenvalue $`\mathrm{\Lambda }<ϵ^{}`$ is taken into account by an additional reweighting factor $`\mathrm{\Lambda }^{N_f/2}P_{n_1}^{(1)}(\mathrm{\Lambda })P_{n_2}^{(2)}(\mathrm{\Lambda })`$. The stochastic correction in (21) is then restricted to the subspace orthogonal to these eigenvectors. Instead of $`ϵ^{}>0`$ one can also keep $`ϵ^{}=0`$ and project out a fixed number of smallest eigenvalues. Let us note that, in principle, it would be enough to perform just a single kind of correction. But to omit the reweighting does not pay because it is much more comfortable to investigate the (small) effects of different $`n_4`$ values on the expectation values than to perform several simulations with increasing values of $`n_2`$. Without the updating correction the whole correction could be done by reweighting in the measurements. However, in practice this would not work either. The reason is that a first polynomial with relatively low order does not sufficiently suppress the exceptional configurations. As a consequence, the reweighting factors would become too small and would reduce the effective statistics considerably. In addition, the very small eigenvalues are changing slowly in the update and this would imply longer autocorrelations. A moderate surplus of gauge configurations with small eigenvalues may, however, be advantageous because it allows for an easier tunneling among sectors with different topological charges. For small fermion masses on large physical volumes this is expected to be more important than the prize one has to pay for it by reweighting, provided that the reweighting has only a moderate effect. ## 4 Least-squares optimized polynomials The basic ingredient of multi-bosonic fermion algorithms is the construction of the necessary optimized polynomial approximations. The least-squares optimization provides a general and flexible framework which is well suited for the requirements of multi-bosonic algorithms . In the first part of this section the necessary basic formulae are collected. In the second part a simple example is considered: in case of an appropriately chosen weight function the least-squares optimized polynomials for the approximation of the function $`x^\alpha `$ are expressed in terms of Jacobi polynomials. ### 4.1 Definition and basic relations The general theory of least-squares optimized polynomial approximations can be inferred from the literature . Here we introduce the basic formulae in the way it has been done in for the specific needs of multi-bosonic fermion algorithms. We shall keep the notations there, apart from a few changes which allow for more generality. We want to approximate the real function $`f(x)`$ in the interval $`x[ϵ,\lambda ]`$ by a polynomial $`P_n(x)`$ of degree $`n`$. 
The aim is to minimize the deviation norm $$\delta _n\left\{N_{ϵ,\lambda }^1_ϵ^\lambda 𝑑xw(x)^2\left[f(x)P_n(x)\right]^2\right\}^{\frac{1}{2}}.$$ (22) Here $`w(x)`$ is an arbitrary real weight function and the overall normalization factor $`N_{ϵ,\lambda }`$ can be chosen by convenience, for instance, as $$N_{ϵ,\lambda }_ϵ^\lambda 𝑑xw(x)^2f(x)^2.$$ (23) A typical example of functions to be approximated is $`f(x)=x^\alpha /\overline{P}(x)`$ with $`\alpha >0`$ and some polynomial $`\overline{P}(x)`$. The interval is usually such that $`0ϵ<\lambda `$. For optimizing the relative deviation one takes a weight function $`w(x)=f(x)^1`$. It turns out useful to introduce orthogonal polynomials $`\mathrm{\Phi }_\mu (x)(\mu =0,1,2,\mathrm{})`$ satisfying $$_ϵ^\lambda 𝑑xw(x)^2\mathrm{\Phi }_\mu (x)\mathrm{\Phi }_\nu (x)=\delta _{\mu \nu }q_\nu .$$ (24) and expand the polynomial $`P_n(x)`$ in terms of them: $$P_n(x)=\underset{\nu =0}{\overset{n}{}}d_{n\nu }\mathrm{\Phi }_\nu (x).$$ (25) Besides the normalization factor $`q_\nu `$ let us also introduce, for later purposes, the integrals $`p_\nu `$ and $`s_\nu `$ by $$q_\nu _ϵ^\lambda 𝑑xw(x)^2\mathrm{\Phi }_\nu (x)^2,p_\nu _ϵ^\lambda 𝑑xw(x)^2\mathrm{\Phi }_\nu (x)^2x,s_\nu _ϵ^\lambda 𝑑xw(x)^2x^\nu .$$ (26) It can be easily shown that the expansion coefficients $`d_{n\nu }`$ minimizing $`\delta _n`$ are independent from $`n`$ and are given by $$d_{n\nu }d_\nu =\frac{b_\nu }{q_\nu },$$ (27) where $$b_\nu _ϵ^\lambda 𝑑xw(x)^2f(x)\mathrm{\Phi }_\nu (x).$$ (28) The minimal value of $`\delta _n^2`$ is $$\delta _n^2=1N_{ϵ,\lambda }^1\underset{\nu =0}{\overset{n}{}}d_\nu b_\nu .$$ (29) The above orthogonal polynomials satisfy three-term recurrence relations which are very useful for numerical evaluation. The first two of them with $`\mu =0,1`$ are given by $$\mathrm{\Phi }_0(x)=1,\mathrm{\Phi }_1(x)=x\frac{s_1}{s_0}.$$ (30) The higher order polynomials $`\mathrm{\Phi }_\mu (x)`$ for $`\mu =2,3,\mathrm{}`$ can be obtained from the recurrence relation $$\mathrm{\Phi }_{\mu +1}(x)=(x+\beta _\mu )\mathrm{\Phi }_\mu (x)+\gamma _{\mu 1}\mathrm{\Phi }_{\mu 1}(x),(\mu =1,2,\mathrm{}),$$ (31) where the recurrence coefficients are given by $$\beta _\mu =\frac{p_\mu }{q_\mu },\gamma _{\mu 1}=\frac{q_\mu }{q_{\mu 1}}.$$ (32) Using these relations on can set up a recursive scheme for the computation of the orthogonal polynomials in terms of the basic integrals $`s_\nu `$ defined in (26). 
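Before specializing to Jacobi polynomials, it is perhaps useful to note that the minimization of (22) can also be carried out by brute force, discretizing the integral and solving the resulting weighted least-squares problem numerically. The sketch below does this for the relative deviation from a fractional inverse power, i.e. f(x) = x^(-alpha) with alpha > 0 and weight w(x) = x^alpha, which is the case needed for fractional inversion; the values of alpha, epsilon, lambda, the order and the grid size are arbitrary choices for the example.

```python
import numpy as np
from numpy.polynomial import Polynomial

def lsq_optimized_poly(f, w, eps, lam, n, npts=4000):
    # Discretized version of the minimization of eq. (22): find the degree-n
    # polynomial minimizing sum_i w(x_i)^2 [f(x_i) - P_n(x_i)]^2 on [eps, lam].
    x = np.linspace(eps, lam, npts)
    # Polynomial.fit minimizes sum_i (w_i * (y_i - P(x_i)))^2, i.e. exactly the
    # discretized weighted square deviation (up to the constant grid spacing).
    p = Polynomial.fit(x, f(x), n, w=w(x))
    delta2 = np.trapz(w(x) ** 2 * (f(x) - p(x)) ** 2, x)
    norm = np.trapz(w(x) ** 2 * f(x) ** 2, x)   # N_{eps,lam} of eq. (23)
    return p, np.sqrt(delta2 / norm)             # P_n and delta_n

# example: relative deviation from x^(-alpha), weight w(x) = x^alpha
alpha, eps, lam, n = 0.25, 1e-4, 4.0, 16
p, delta_n = lsq_optimized_poly(lambda x: x ** (-alpha),
                                lambda x: x ** alpha, eps, lam, n)
print(f"delta_{n} = {delta_n:.3e}")
```

For production runs the recursive scheme described in the text is preferable, since the expansion coefficients $`d_\nu `$ do not depend on the order and the degree can be raised without refitting, but such a brute-force fit is a convenient cross-check.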
Defining the polynomial coefficients $`f_{\mu \nu }(0\nu \mu )`$ by $$\mathrm{\Phi }_\mu (x)=\underset{\nu =0}{\overset{\mu }{}}f_{\mu \nu }x^{\mu \nu }$$ (33) the above recurrence relations imply the normalization convention $$f_{\mu 0}=1,(\mu =0,1,2,\mathrm{}),$$ (34) and one can easily show that $`q_\mu `$ and $`p_\mu `$ satisfy $$q_\mu =\underset{\nu =0}{\overset{\mu }{}}f_{\mu \nu }s_{2\mu \nu },p_\mu =\underset{\nu =0}{\overset{\mu }{}}f_{\mu \nu }\left(s_{2\mu +1\nu }+f_{\mu 1}s_{2\mu \nu }\right).$$ (35) The coefficients themselves can be calculated from $`f_{11}=s_1/s_0`$ and (31) which gives $`f_{\mu +1,1}`$ $`=`$ $`f_{\mu ,1}+\beta _\mu ,`$ $`f_{\mu +1,2}`$ $`=`$ $`f_{\mu ,2}+\beta _\mu f_{\mu ,1}+\gamma _{\mu 1},`$ $`f_{\mu +1,3}`$ $`=`$ $`f_{\mu ,3}+\beta _\mu f_{\mu ,2}+\gamma _{\mu 1}f_{\mu 1,1},`$ $`\mathrm{}`$ $`f_{\mu +1,\mu }`$ $`=`$ $`f_{\mu ,\mu }+\beta _\mu f_{\mu ,\mu 1}+\gamma _{\mu 1}f_{\mu 1,\mu 2},`$ $`f_{\mu +1,\mu +1}`$ $`=`$ $`\beta _\mu f_{\mu ,\mu }+\gamma _{\mu 1}f_{\mu 1,\mu 1}.`$ (36) The polynomial and recurrence coefficients are recursively determined by (34)-(4.1). The expansion coefficients for the optimized polynomial $`P_n(x)`$ can be obtained from (27) and $$b_\mu =\underset{\nu =0}{\overset{\mu }{}}f_{\mu \nu }_ϵ^\lambda 𝑑xw(x)^2f(x)x^{\mu \nu }.$$ (37) ### 4.2 A simple example: Jacobi polynomials The approximation interval $`[ϵ,\lambda ]`$ can be transformed to some standard interval, say, $`[1,1]`$ by the linear mapping $$\xi =\frac{2x\lambda ϵ}{\lambda ϵ},x=\frac{\xi }{2}(\lambda ϵ)+\frac{1}{2}(\lambda +ϵ).$$ (38) A weight factor $`(1+\xi )^\rho (1\xi )^\sigma `$ with $`\rho ,\sigma >1`$ corresponds in the original interval to the weight factor $$w^{(\rho ,\sigma )}(x)^2=(xϵ)^\rho (\lambda x)^\sigma .$$ (39) Taking, for instance, $`\rho =2\alpha ,\sigma =0`$ this weight is similar to the one for relative deviation from the function $`f(x)=x^\alpha `$, which would be just $`x^{2\alpha }`$. In fact, for $`ϵ=0`$ these are exactly the same and for small $`ϵ`$ the difference is negligible. 
The advantage of considering the weight factor in (39) is that the corresponding orthogonal polynomials are simply related to the Jacobi polynomials , namely $$\mathrm{\Phi }_\nu ^{(\rho ,\sigma )}(x)=(\lambda ϵ)^\nu \nu !\frac{\mathrm{\Gamma }(\rho +\sigma +\nu +1)}{\mathrm{\Gamma }(\rho +\sigma +2\nu +1)}P_\nu ^{(\sigma ,\rho )}\left(\frac{2x\lambda ϵ}{\lambda ϵ}\right).$$ (40) Our normalization convention (34) implies that $$q_\nu ^{(\rho ,\sigma )}=(\lambda ϵ)^{\rho +\sigma +2\nu +1}\nu !\frac{\mathrm{\Gamma }(\rho +\nu +1)\mathrm{\Gamma }(\sigma +\nu +1)\mathrm{\Gamma }(\rho +\sigma +\nu +1)}{\mathrm{\Gamma }(\rho +\sigma +2\nu +1)\mathrm{\Gamma }(\rho +\sigma +2\nu +2)}.$$ (41) The coefficients of the orthogonal polynomials are now given by $$f_{\mu \nu }^{(\rho ,\sigma )}=\underset{\omega =0}{\overset{\nu }{}}(ϵ)^{\nu \omega }(ϵ\lambda )^\omega \left(\begin{array}{c}\mu \omega \\ \nu \omega \end{array}\right)\left(\begin{array}{c}\mu \\ \omega \end{array}\right)\frac{\mathrm{\Gamma }(\rho +\mu +1)\mathrm{\Gamma }(\rho +\sigma +2\mu \omega +1)}{\mathrm{\Gamma }(\rho +\mu \omega +1)\mathrm{\Gamma }(\rho +\sigma +2\mu +1)}.$$ (42) In particular, we have $$f_{\mu 0}^{(\rho ,\sigma )}=1,f_{11}^{(\rho ,\sigma )}=ϵ(\lambda ϵ)\frac{(\rho +1)}{(\rho +\sigma +2)}.$$ (43) The coefficients $`\beta ,\gamma `$ in the recurrence relation (31) can be derived from the known recurrence relations of the Jacobi polynomials: $$\beta _\mu ^{(\rho ,\sigma )}=\frac{1}{2}(\lambda +ϵ)+\frac{(\sigma ^2\rho ^2)(\lambda ϵ)}{2(\rho +\sigma +2\mu )(\rho +\sigma +2\mu +2)},$$ $$\gamma _{\mu 1}^{(\rho ,\sigma )}=(\lambda ϵ)^2\frac{\mu (\rho +\mu )(\sigma +\mu )(\rho +\sigma +\mu )}{(\rho +\sigma +2\mu 1)(\rho +\sigma +2\mu )^2(\rho +\sigma +2\mu +1)}.$$ (44) In order to obtain the expansion coefficients of the least-squares optimized polynomials one has to perform the integrals in (37). As an example, let us consider the function $`f(x)=x^\alpha `$ when the necessery integrals can be expressed by hypergeometric functions: $$_ϵ^\lambda 𝑑x(xϵ)^\rho (\lambda x)^\sigma x^{\mu \nu \alpha }=$$ $$=(\lambda ϵ)^{\rho +\sigma +1}\lambda ^{\mu \nu \alpha }\frac{\mathrm{\Gamma }(\rho +1)\mathrm{\Gamma }(\sigma +1)}{\mathrm{\Gamma }(\rho +\sigma +2)}F(\alpha \mu +\nu ,\sigma +1;\rho +\sigma +2;1\frac{ϵ}{\lambda }).$$ (45) Let us now consider, for simplicity, only the case $`ϵ=0`$, when we obtain $$b_\mu ^{(\rho ,\sigma )}=(1)^\mu \lambda ^{1+\rho +\sigma +\mu \alpha }\frac{\mathrm{\Gamma }(\rho +\sigma +\mu +1)\mathrm{\Gamma }(\alpha +\mu )\mathrm{\Gamma }(\rho \alpha +1)\mathrm{\Gamma }(\sigma +\mu +1)}{\mathrm{\Gamma }(\rho +\sigma +2\mu +1)\mathrm{\Gamma }(\alpha )\mathrm{\Gamma }(\rho +\sigma \alpha +\mu +2)}.$$ (46) Combined with (27) and (41) this leads to $$d_\mu ^{(\rho ,\sigma )}=(1)^\mu \lambda ^{\mu \alpha }\frac{\mathrm{\Gamma }(\rho +\sigma +2\mu +2)\mathrm{\Gamma }(\alpha +\mu )\mathrm{\Gamma }(\rho \alpha +1)}{\mu !\mathrm{\Gamma }(\rho +\mu +1)\mathrm{\Gamma }(\alpha )\mathrm{\Gamma }(\rho +\sigma \alpha +\mu +2)}.$$ (47) These formulae can be used, for instance, for fractional inversion. For the parameters $`\rho ,\sigma `$ the natural choice in this case is $`\rho =2\alpha ,\sigma =0`$ which corresponds to the optimization of the relative deviation from the function $`f(x)=x^\alpha `$. As we have seen in section 4.1, the optimized polynomials are the truncated expansions of $`x^\alpha `$ in terms of the Jacobi polynomials $`P^{(2\alpha ,0)}`$. 
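For ε = 0 the whole construction can therefore be written down in closed form. The sketch below sums the truncated expansion of x^(-alpha) using the map (38), the normalization (40) and the coefficients (47), evaluated with log-Gamma functions for numerical stability; the signs follow the usual conventions for the monic orthogonal polynomials (alternating-sign coefficients and inverse powers of λ), and the values of alpha, lambda and the order are arbitrary example choices.

```python
import numpy as np
from scipy.special import eval_jacobi, gammaln

def jacobi_expansion_x_minus_alpha(x, alpha, lam, n):
    # Truncated least-squares expansion of x**(-alpha) on [0, lam] for the
    # relative-deviation weight, rho = 2*alpha, sigma = 0 (eqs. 38, 40, 47).
    rho, sigma = 2.0 * alpha, 0.0
    x = np.asarray(x, dtype=float)
    xi = 2.0 * x / lam - 1.0                     # eq. (38) with eps = 0
    P = np.zeros_like(x)
    for mu in range(n + 1):
        # magnitude of d_mu from eq. (47), computed via log-Gammas
        log_d = (gammaln(rho + sigma + 2 * mu + 2) + gammaln(alpha + mu)
                 + gammaln(rho - alpha + 1) - gammaln(mu + 1)
                 - gammaln(rho + mu + 1) - gammaln(alpha)
                 - gammaln(rho + sigma - alpha + mu + 2))
        d_mu = (-1.0) ** mu * lam ** (-mu - alpha) * np.exp(log_d)
        # Phi_mu in terms of the Jacobi polynomial P_mu^{(sigma,rho)}, eq. (40)
        log_c = (gammaln(mu + 1) + gammaln(rho + sigma + mu + 1)
                 - gammaln(rho + sigma + 2 * mu + 1))
        phi_mu = lam ** mu * np.exp(log_c) * eval_jacobi(mu, sigma, rho, xi)
        P += d_mu * phi_mu
    return P

alpha, lam, n = 0.5, 2.0, 32
x = np.linspace(1e-3, lam, 2000)
# size of the relative deviation x^alpha * P_n(x) - 1 on the grid
print(np.max(np.abs(x ** alpha * jacobi_expansion_x_minus_alpha(x, alpha, lam, n) - 1.0)))
```

Comparing the output of this closed-form sum with the brute-force fit of the previous sketch is a useful consistency check on both.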
The Gegenbauer polynomials proposed in for fractional inversion correspond to a different choice, namely $`\rho =\sigma =\alpha \frac{1}{2}`$. This is because of the relation $$C_n^\alpha (x)=\frac{\mathrm{\Gamma }(n+2\alpha )\mathrm{\Gamma }(\alpha +\frac{1}{2})}{\mathrm{\Gamma }(2\alpha )\mathrm{\Gamma }(n+\alpha +\frac{1}{2})}P_n^{(\alpha \frac{1}{2},\alpha \frac{1}{2})}(x).$$ (48) Note that for the simple case $`\alpha =1`$ we have here the Chebyshev polynomials of second kind: $`C_n^1(x)=U_n(x)`$. The special case $`\alpha =\frac{1}{2}`$ is interesting for the numerical evaluation of the zero mass lattice action proposed by Neuberger . In this case, in order to obtain the least-squares optimized relative deviation with weight function $`w(x)=x`$, the function $`x^{\frac{1}{2}}`$ has to be expanded in the Jacobi polynomials $`P^{(1,0)}`$. Note that this is different both from the Chebyshev and the Legendre expansions applied in . The former would correspond to take $`P^{(\frac{1}{2},\frac{1}{2})}`$, the latter to $`P^{(0,0)}`$. The corresponding weight functions would be $`[x(\lambda x)]^{\frac{1}{2}}`$ and $`1`$, respectively. As a consequence of the divergence of the weight factor at $`x=0`$, the Chebyshev expansion is not appropriate for an approximation in an interval with $`ϵ=0`$. This can be immediately seen from the divergence of $`d_\mu ^{(\frac{1}{2},\frac{1}{2})}`$ at $`\alpha =\frac{1}{2}`$ in (47). The advantage of the Jacobi polynomials appearing in these examples is that they are analytically known. The more general least-squares optimized polynomials defined in the previous subsection can also be numerically expanded in terms of them. This is sometimes more comfortable than the entirely numerical approach. ## 5 Outlook The two-step multi-bosonic algorithm has been shown to work properly in a recent large scale numerical simulation of light dynamical gluinos . Since gluinos are Majorana fermions the effective number of (Dirac-) fermion flavours is in this case $`N_f=\frac{1}{2}`$. This simulation is demanding because it deals with nearly zero and negative fermion masses in reasonably large physical volumes. It turned out possible to investigate the expected first order phase transition at zero gluino mass. The algorithm was able to cope with the metastability of phases near the phase transition. Up to now the investigated physical volumes were not very large: in case of the largest ($`12^324`$) lattice the product of the spatial extension times the square root of the string tension was $`L\sqrt{\sigma }2.4`$. The experience shows, however, that simulations with light gluinos and $`L\sqrt{\sigma }5`$ or larger would be feasible with reasonable effort. Another important conclusion in is that the inclusion of the sign of the fermion determinant, actually Pfaffian for Majorana fermions, can be achieved by determining the spectral flow of the hermitian fermion matrix $`\stackrel{~}{Q}`$ as a function of increasing hopping parameter. An interesting application of the two-step multi-bosonic algorithm is the numerical simulation of QCD. Up to now no serious effort was taken in this direction but first tests will be performed soon. A particularly relevant aspect for QCD is the possibility to deal with an odd number of quark flavours ($`N_f`$). The popular hybrid Monte Carlo algorithm can only be applied for even $`N_f`$. 
For non-even $`N_f`$ one can use finite step-size molecular dynamics algorithms like the one in , but then the extrapolation to zero step size is an additional difficulty.
# Nature and evolution of Damped Lyman alpha systems ## 1. Introduction The redshifted Ly$`\alpha `$ absorptions observed in the spectra of QSOs originate in intervening clouds with wide-ranging HI column densities. When $`N`$(HI) $``$ 2 $`\times `$ 10<sup>20</sup> atoms cm<sup>-2</sup> the Ly $`\alpha `$ line shows a broad profile with extended ’radiation damping’ wings. Damped Ly $`\alpha `$ (DLA) absorption lines are always accompanied by narrow metal lines at the same redshift, $`z_{\mathrm{abs}}`$. These absorption line systems are quite rare, the number per unit redshift interval being the lowest among all types of QSO absorbers ($`n(z)0.2`$ at $`z_{\mathrm{abs}}2`$; Wolfe et al. 1995). About a hundred DLA systems are currently known as a result of several surveys, most of them performed in the optical spectral range (Wolfe et al. 1986, 1995). Owing to the dramatic drop of QSO counts at high $`z`$, only a reduced number of absorbers have been detected at $`z>4`$ (Storrie-Lombardi et al. 1996a). At $`z1.65`$ Ly $`\alpha `$ absorptions can only be observed with space-born UV telescopes and the number of systems identified is quite limited (Lanzetta, Wolfe & Turnshek 1995; Jannuzi et al. 1998 and refs. therein; Turnshek 1998). DLA systems have also been detected as redshifted 21 cm absorption in the continuum of radio loud quasars (Carilli et al. 1996 and refs. therein). Several reasons suggest that DLA systems originate in interstellar clouds within galaxies located in the direction of the QSO: (i) the high values of HI column density, typical of the interstellar medium of gas-rich galaxies; (ii) the presence of low ionization species of metals, observed in Galactic HI regions; (iii) the line-of-sight velocity dispersion, consistent with the typical values expected for galactic disks; (iv) the evolution of the comoving mass density of gas in DLA absorbers, which is suggestive of gas consumption due to star formation (Wolfe et al. 1995). DLA systems have low metallicities, typically $`Z/Z_{}10^1`$ and, in some cases, as low as $`Z/Z_{}`$ 10<sup>-2</sup> (Pettini et al. 1997a, 1999). Therefore the galaxies hosting DLA clouds must be chemically young and, in some cases, must be in the very first stages of their chemical enrichment. Studies of DLA absorbers allow us to probe young galaxies at high redshift from the observation of only one line of sight though each galaxy. This kind of investigation is complementary to studies of Ly-break galaxies, which allow us to probe high redshift galaxies from the observation of their integrated emission. The advantage of DLA absorption studies is the intrinsic brightness of the background QSO which can be used to obtain spectra of unparalleled resolution and signal-to-noise ratio at a given redshift. The quality of these spectra allows us to study the chemical and physical properties of young galaxies in unrivaled detail. Even if the link between DLA absorbers and intervening galaxies is commonly accepted, there is no general agreement on the nature of the galaxies hosting the DLA clouds (also called DLA galaxies hereafter). The traditional working hypothesis is that DLA galaxies are the progenitors of present-day spirals (Wolfe et al. 1986), but alternative interpretations have also been proposed (e.g. Tyson 1988). 
While it is possible to study the evolution of DLA systems per se, understanding the nature of DLA galaxies is fundamental to put the phenomenon in the general context of galactic evolution and to constrain theories of structure formation. The observational clues to the nature of DLA galaxies are summarized in the next section of this contribution. Selection effects are considered in Section 3, while the evolution properties are discussed in Section 4. Finally, the results are summarized in Section 5. ## 2. Clues to the nature of DLA galaxies One approach to cast light on the nature of DLA galaxies is to estimate the number of galaxies of a specific morphological type $`T`$ expected along a random line of sight. At $`z0`$ the number per unit distance interval is $$n_{}^T=\mathrm{\Phi }_{}^T(M)<A_{}^T(M)>𝑑M$$ (1) where $`\mathrm{\Phi }_{}^T`$ is the optical luminosity function of the galaxies of type $`T`$ in terms of the absolute magnitude $`M`$; $`A_{}^T`$ is the effective cross section of the column density contour $`N_{\mathrm{HI}}N_{\mathrm{min}}=2\times 10^{20}`$ cm<sup>-2</sup> and the angled brackets indicate a weighted average over all possible galaxian inclinations. Estimates of the HI content within galaxies at the present epoch indicate that most of the observed absorptions should originate in spirals (Rao & Briggs 1993). This prediction, however, is not confirmed by studies of candidate DLA galaxies at low redshift (Section 2.1). Considering the lack of understaning at $`z=0`$ it is clear that estimating the fraction $`n_{}^T/_Tn_{}^T`$ at high $`z`$ is highly speculative until we know the effects of evolution and merging on morphology and galaxian sizes. Observations are the key to understanding the nature of DLA galaxies. Spectroscopic data are used to study chemical and physical properties. Imaging data are used to study the morphology of candidate DLA galaxies. Imaging and spectroscopic data have a poor redshift overlap: while the imaging is most effective at $`z1`$ — the confusion with the QSO source is more critical at high $`z`$ — the spectroscopy is mostly performed at $`z_{\mathrm{abs}}1.65`$. ### 2.1. Morphology of candidate DLA galaxies The galaxies responsible for the DLA absorption can be identified by studying the field of the background QSO. Galaxies with impact parameter compatible with the expected extension of the HI disk are considered as candidate absorbers. The impact parameter, $`\rho `$ ($`h_{}^1`$ kpc), is estimated at the redshift of the absorber and, in order to confirm the identification, one should also measure the redshift of the galaxy and check if $`z_{\mathrm{gal}}=z_{\mathrm{abs}}`$. When galaxies are not detected within a reasonable value of $`\rho `$, an upper limit to the (surface) brightness of the intervening galaxy can still be derived. The results of searches for DLA galaxies in QSO fields are summarized in Table 1. Although the sample is limited and only a few galaxies have a redshift measurement, an important conclusion can already be derived: DLA galaxies at $`z1`$ show a variety of morphological types (S0, spirals, dwarfs) and different levels of surface brightness, including low surface brightness (LSB) galaxies. In other words, the population of DLA galaxies is not dominated by any specific type of galaxies and, in particular, spirals constitute a small fraction of the sample, contrary to the predictions based on the HI content of nearby galaxies (Rao & Briggs 1993). 
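To make the weighting in (1) concrete, the sketch below evaluates the integral for a single galaxy type using a Schechter form for the luminosity function and a Holmberg-type scaling of the HI radius with luminosity; the Schechter parameters, the fiducial radius and the scaling index are illustrative assumptions only, not values derived from the surveys discussed here.

```python
import numpy as np
from scipy.integrate import quad

# Toy evaluation of eq. (1): n_* = Integral Phi_*(L) <A_*(L)> dL,
# written in terms of l = L / L_* (all parameter values are assumptions)
phi_star = 0.015                 # Mpc^-3, Schechter normalization
faint_slope = -1.2               # Schechter faint-end slope
R_star, t = 0.030, 0.4           # HI radius of an L_* galaxy (Mpc) and
                                 # Holmberg-type index, R = R_* (L/L_*)^t

def schechter(l):
    return phi_star * l ** faint_slope * np.exp(-l)

def mean_cross_section(l):
    # randomly inclined thin disk: <A> = <cos i> * pi R^2 = pi R^2 / 2
    return 0.5 * np.pi * (R_star * l ** t) ** 2

n_star, _ = quad(lambda l: schechter(l) * mean_cross_section(l), 0.03, 10.0)
print(f"expected DLA absorbers per Mpc of path: {n_star:.2e}")
```

Multiplying n_* by the absorption distance per unit redshift turns this into an incidence rate n(z); the point of the exercise is only that the integral is weighted towards large cross-sections, not towards the galaxies that dominate the luminosity or the gas content.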
The selection effects discussed in Section 3 may be responsible for this unexpected result. ### 2.2. Elemental abundances Abundances of DLA systems can be measured with accuracy and are already available for about 50 systems (Lu et al. 1996; Prochaska & Wolfe 1999; Pettini et al. 1999; see also refs. in Vladilo 1998). The HI column density can be easily constrained within $`\pm `$ 0.1 dex, or even better, by fitting the damping wings of the Ly $`\alpha `$ profile. The most common metals show unsaturated transitions which allow column densities to be accurately determined. Ionization corrections are generally negligible thanks to the presence of neutrals or ions with IP $``$ 13.6 eV which are dominant ionization stages in HI regions. Dust probably represents the main source of uncertainty in abundance determinations since an unknown fraction of the elements is probably depleted into dust grains (Section 2.3). Studies of the intrinsic abundances of DLA systems in the presence of dust have been performed by Lauroesch et al. (1996) and by Kulkarni, Fall & Truran (1997). In these studies the dust-to-gas ratio, $`k`$, is considered a free parameter with same value in all systems. However, the level of depletion scales with $`k`$, and it is essential to estimate $`k`$ for each DLA cloud in order to properly correct the abundances (Vladilo 1998). Abundances of metal-poor Galactic stars are often used as a reference for DLA studies since they reflect the abundance pattern of the Milky-Way gas at the time in which the first stellar generations were formed. By comparing abundances of DLA systems at redshift $`z`$ with abundances of Galactic stars formed at look-back time $`t(z)`$, we can test whether DLA galaxies undergo a chemical evolution similar to that of the Milky Way. The comparison between the two sets of abundances can also be made at a given metallicity, which measures the level of chemical enrichment attained by each system. #### Metallicities. The absolute abundance of zinc, \[Zn/H\]<sup>1</sup><sup>1</sup>1 We adopt the usual notation \[X/Y\] $``$ log (X/Y) – log (X/Y) , is generally used to study the metallicities in DLA systems since zinc is expected to be unaffected by dust depletion (Pettini et al. 1997a; 1999). Observed metallicities span the interval $`2<[\mathrm{Zn}/\mathrm{H}]<0`$, with a column-density weighted mean value \[$`<`$Zn/H$`>`$\] $`1`$. The metallicity distribution is different from that found in the stellar populations of the Milky Way, a result that casts doubts on the relationship between high-$`z`$ DLA systems and present-day spirals (Pettini et al. 1997b; see, however, Wolfe & Prochaska 1998). #### Iron-peak abundances. Studies of metal-poor stars in the Galaxy indicate that iron-peak elements trace each other with approximate solar ratios (Ryan, Norris, & Beers 1996). Deviations from the solar pattern can be present, but are generally negligible at the metallicity level of DLA absorbers. The \[Zn/Fe\], \[Cr/Fe\], and \[Ni/Fe\] ratios measured in DLA systems show significant deviations from the solar pattern, inconsistent with those observed in metal-poor stars (Lu et al. 1996; Pettini et al. 1997a). All these ratios follow the differential dust depletion pattern observed in the Milky Way interstellar gas (Savage & Sembach 1996), suggesting that the observed abundances are dominated by dust depletion (Section 2.3). The \[Mn/Fe\] ratio is underabundant, consistent with that observed in metal-poor stars (Lu et al. 
1996); however part of this effect can also be ascribed to dust (Vladilo 1998). #### \[$`\alpha `$/Fe\] ratios. The ratio between $`\alpha `$ and iron-peak elements is a well-known tracer of galactic evolution. In the Milky Way it decreases from the value \[$`\alpha `$/Fe\] $``$ +0.5 typical of metal-poor stars, down to \[$`\alpha `$/Fe\] = 0 at higher metallicities (Wheeler, Sneden & Truran 1989). The temporal delay between the metal injection from SNae Type II, rich in $`\alpha `$ elements, and SNae Type Ia, rich in iron-peak elements can explain the decrease of \[$`\alpha `$/Fe\] in the course of evolution (Matteucci 1991 and refs. therein). The \[Si/Fe\] and \[S/Fe\] ratios in DLA systems show overabundances which resemble Milky-Way metal-poor abundances (Lu et al. 1996). However, the result in itself is not conclusive since the \[Si/Fe\] and \[S/Fe\] ratios are also enhanced in the nearby ISM as a consequence of differential depletion (Savage & Sembach 1996). A way to obtain the intrinsic \[$`\alpha `$/Fe\] ratio is to select elements with negligible ISM depletions, such as sulphur and zinc, which trace the $`\alpha `$ and the iron-peak elements, respectively. The few available measurements give \[S/Zn\] $``$ 0 (Molaro, Centurión, & Vladilo 1998), suggesting that the enhancement of the \[Si/Fe\] and \[S/Fe\] ratios is due to differential depletion. This suggestion is confirmed by a re-analysis of abundances in DLA systems corrected for depletion (Vladilo 1998): the resulting \[Si/Fe\] and \[S/Fe\] values are approximately solar (Fig. 1), consistent with the \[S/Zn\] results. The Magellanic Clouds (Wheeler et al. 1989) and BCGs (Thuan, Izotov & Lipovetsky 1995) are examples of galaxies with \[$`\alpha `$/Fe\] $``$ 0 at low metallicity. In general, any galaxy with low SFR at early epochs will be able to produce \[$`\alpha `$/Fe\] $``$ 0 at low metallicity because the onset of Type Ia SNae will occur before the galaxy has time to attain solar metallicity (Matteucci 1991). #### Nitrogen. Nitrogen abundances can be used to probe the early stages of chemical evolution. However, the relative importance of different production mechanisms — i.e. primary versus secondary production — is not fully understood (Matteucci, Molaro, & Vladilo 1997). Nitrogen abundances have been measured in about ten DLA systems (Molaro et al. 1996; Lu, Sargent & Barlow 1998; Centurión et al. 1998). When the effects of dust are considered, a substantial fraction of \[N/Fe\] and \[N/S\] ratios are lower than those observed in Galactic metal-poor stars (Centurión et al. 1998). It is not possible to explain all the observations with a unique production mechanism: some cases suggest a secondary behaviour (i.e. the nitrogen ratios increase with metallicity), whereas others show evidence of primary production (i.e. the ratios are approximately constant with metallicity). Nitrogen ratios in DLA systems show similarities with those measured in nearby metal-poor galaxies, such as the \[N/O\] ratios in dwarf irregulars (Kobulnicky & Skillman 1996; van Zee et al. 1996) and the \[N/Fe\] ratios in BCGs (Thuan et al. 1995). In one or two cases there is evidence for an extremely high \[N/$`\alpha `$\] ratio, well above the values found in any astrophysical site (Molaro et al. 1996). ### 2.3. Dust The first evidence for dust in DLA systems was provided by a study of QSOs with and without foreground DLA absorption (Pei, Fall & Bechtold 1991). 
The optical spectral indices of the two samples are significally different and indicate an enhanced reddening of the QSOs with intervening absorption. The dust-to-gas ratios, $`k`$, derived from this statistical study are typically between 5% and 20% of the Galactic value. The observation of different images of gravitationally lensed QSOs with foreground DLA absorption is a powerful technique to study the dust properties of the intervening galaxy. The only case investigated up to now, namely the $`z_{\mathrm{abs}}`$ = 1.3911 system toward QSO 0957+561, shows clear evidence of differential dust reddening between the two adjacent images of the QSO (Zuo et al. 1997). The derived $`k`$ is between 40% and 70% of the Galactic value. As mentioned above, also the abundances of iron-peak elements provide evidence for dust in DLA systems since the \[Zn/Fe\], \[Cr/Fe\], and \[Ni/Fe\] ratios qualitatively follow the dust depletion pattern seen in the nearby interstellar gas (Savage & Sembach 1996). The observed pattern can be quantitatively explained by assuming that the dust in DLA systems has same the composition as in the Milky Way, but with a different value of $`k`$ in each system. In fact, one can estimate the dust-to-gas ratio in each system from the condition that the intrinsic iron-peak abundance ratios are solar (Vladilo 1998). The resulting dust-to-gas ratios show a large spread among different DLA absorbers, with $`k`$ values mostly distributed between 2% and 25% of the Galactic value, consistent with the range found by Pei et al. (1991). In a given DLA system, dust-to-gas ratios estimated from different pairs of iron-peak elements yield consistent results, as expected from the basic assumption of the method. Dust-to-gas ratios estimated in this way are well correlated with metallicity (Fig. 2 in Vladilo 1998). The existence of such a correlation and the evidence for dust obscuration described in Section 3.1 indicate that the $`k`$ values estimated indirectly from the iron-peak abundances are indeed related to dust present in DLA systems. ### 2.4. Kinematics Kinematical properties are derived from the study of the line-of-sight velocity dispersion of the systems, determined from the profiles of unsaturated metal lines. The HI gas is traced by low ions, such as Si<sup>+</sup> or Fe<sup>+</sup>, which have very similar profiles in each system and often show multiple absorption components. Since the line of sight samples the absorbers along random directions, a large number of observations is required to test models of DLA kinematics. The most extensive collection of velocity profiles has been obtained by Prochaska & Wolfe (1997, 1998) by means of Keck observations. The velocity widths, $`\mathrm{\Delta }V`$, measured above a given threshold of optical depth, range from about 50 km s<sup>-1</sup> up to 300 km s<sup>-1</sup>. When multiple components are present, the most intense one is generally found at one edge of the profile. These ”leading-edge” profiles can be naturally produced by the intersection of a rotating disk with an exponential gas density distribution. Analysis of the full set of profiles indicates a consistency with models of fast-rotating ($`V_{\mathrm{rot}}250`$ km s<sup>-1</sup>), thick disks, a result supporting a relationship between high-$`z`$ DLA systems and present-day spirals; models of slow-rotating, low-mass galaxies are instead ruled out (Prochaska & Wolfe 1997, 1998). 
However, these conclusions are obtained by assuming that all DLA systems are drawn from a homogeneous population of galaxies, an assumption probably incorrect, given the observational evidence shown in Table 1. In addition, the conclusion that fast rotating disks are the only viable explanation for the observed data has been disproved by Haehnelt et al. (1998). According to these authors, irregular protogalactic clumps can reproduce the velocity profile distribution equally well. The few profiles with extremely high values of $`\mathrm{\Delta }V`$ can be explained by occasional alignment of clumps at the same redshift. An argument against the hypothesis that DLA systems rotate at $`V_{\mathrm{rot}}250`$ km s<sup>-1</sup> comes from an analysis of profile asymmetries performed by Ledoux et al. (1998). According to these authors, there is evidence for regular rotating disks only up to $`\mathrm{\Delta }V_{\mathrm{rot}}120`$ km s<sup>-1</sup>, while at higher velocities the kinematics is more complex. ### 2.5. Spin temperature For DLA systems that lie in front of a radio loud quasar it is possible to observe the 21 cm absorption in addition to the Ly $`\alpha `$ line. An analysis of both spectral ranges yields, under suitable assumptions, the harmonic mean along the line of sight of the spin temperature of the gas, $`<T_s>`$. The typical values found in DLA systems — $`<T_s>10^3`$ K (de Bruyn, O’Dea & Baum 1996; Carilli et al. 1996) — are generally much larger than those observed in the disk of the Galaxy or in nearby spiral galaxies (Braun 1997 and refs. therein). The higher spin temperatures probably indicate that DLA galaxies have a larger fraction of warm gas than nearby spirals. One approach to understanding this difference is through variation of the interstellar pressure: the fraction of warm gas is expected to be higher in regions where the mean pressure is lower (Dickey 1995). Since the mean pressure is determined, in part, by the gravitational potential, a high fraction of warm gas could be a signature of a gravitational potential lower than in the Milky Way disk. ## 3. Selection effects Selection effects can alter the fraction of specific types of galaxies, or particular regions of galaxies, detected in surveys of DLA systems. Recognizing the role of such effects is fundamental for a correct interpretation of the nature and evolution of DLA galaxies. ### 3.1. QSO obscuration The absorbers with the highest dust content will obscure the background QSO and will be missed from magnitude limited samples. This effect of QSO extinction was first investigated by Pei et al. (1991). The possibility of determining the dust-to-gas ratios $`k`$ in individual systems allows us to estimate the importance of the effect (Vladilo 1998). Evidence for QSO obscuration comes from an inspection of Fig. 2: DLA systems for which the product $`𝒟=kN_{\mathrm{HI}}`$ exceeds a critical threshold are not observed. $`𝒟`$ is an estimate of the dust content along the line of sight and the tilted line in Fig. 2 represents the $`𝒟`$ value that yields an extinction of 1 magnitude of the QSO in the observer’s frame. Absorbers above this line have probably not been detected because they obscure the QSO by more than 1 magnitude. Since dust and metals are strictly linked, one expects that systems with high metallicity and high column density are missed due to the same selection bias. This effect has indeed been reported by Boissé et al. (1998). 
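A rough way to attach numbers to the obscuration criterion is to scale the local Galactic extinction-per-column calibration by k, which gives a rest-frame visual extinction A_V of roughly (k/k_Gal) N_HI / (1.9 × 10^21 cm^-2 mag^-1). The sketch below applies this estimate for a few combinations of dust-to-gas ratio and column density; the calibration constant is the standard local value, and since the observed optical band samples the rest-frame ultraviolet at the typical absorber redshifts, where the extinction is a few times larger, these numbers are only lower bounds on the obscuration actually suffered by the QSO.

```python
NH_PER_AV_GAL = 1.9e21   # cm^-2 mag^-1, local Galactic N_HI / A_V calibration

def rest_frame_A_V(k_over_kGal, N_HI):
    # dust content D = k * N_HI converted to rest-frame visual extinction;
    # the shift of the observed band into the rest-frame UV is ignored here
    return k_over_kGal * N_HI / NH_PER_AV_GAL

for k, logN in [(0.02, 21.0), (0.10, 21.0), (0.25, 21.3), (1.00, 20.7)]:
    print(f"k/k_Gal = {k:4.2f}, log N_HI = {logN:.1f}  ->  A_V = "
          f"{rest_frame_A_V(k, 10 ** logN):.2f} mag")
```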
As a consequence of QSO obscuration, DLA absorbers with higher and higher dust content (or metallicity) are only detectable at lower and lower values of column density. In particular, present-day spirals with solar metallicity (and hence $`k/k_{\mathrm{Gal}}1`$) can be missed when $`N_{\mathrm{HI}}10^{20.7}`$ cm<sup>-2</sup>, according to the trend shown in Fig. 2. Dwarf or LSB galaxies, which are characterized by lower metallicities and dust content, should be less affected by this selection bias. In addition, LSB galaxies should be less affected because the column density perpendicular to the disk is typically lower than in high surface brightness (HSB) galaxies. ### 3.2. Surface brightness Differences in surface brightness between galaxies can be understood if LSB galaxies are hosted in dark halos with values of the spin parameter, $`\lambda `$, larger than those of HSB galaxies (see Jimenez, Bowen & Matteucci 1999 and refs. therein). The cross-sections of individual galactic disks in equilibrium scale as $`\lambda ^2`$ and therefore will be dominated by objects with large angular momentum (Mo, Mao & White 1999). As a result, LSB galaxies are expected to dominate the cross-section for DLA absorption. On the other hand, HSB galaxies are expected to dominate the rates of star formation and metal production. ### 3.3. Galactocentric distance The probability of detecting a galaxy in the interval of galactocentric distances \[$`r,r+dr`$\] is, in general, a function of $`r`$. For a galactic disk seen face on, the differential cross section for DLA absorption is $`dA_{}2\pi rdr`$, until $`N_{\mathrm{HI}}N_{\mathrm{min}}`$. Therefore, galactic regions with larger $`r`$ have a higher probability of detection, unless the galaxy is seen exactly edge on. Any property which shows spatial gradients will be affected by this bias. In particular, our understanding of chemical evolution properties of DLA systems will be biased since external regions are less chemically evolved than inner regions. ### 3.4. Gravitational lensing The galaxy hosting the DLA system can act as a gravitational lens on the image of the background QSO. Smette, Claeskens & Surdej (1997) have developed a formalism to compute the effects of gravitational lensing (’by-pass’ effect and ’amplification bias’) on the observed number density of DLA systems. The ’by-pass’ effect causes the line of sight to avoid the central part of the intervening galaxy and to decrease its effective cross-section for absorption. The ’amplification bias’ boosts the apparent magnitude of the QSO and therefore increases the fraction of QSOs with foreground galaxies in magnitude-limited samples. The ’amplification bias’ acts in the opposite direction of dust obscuration. However, in order to predict the overall effect one should model dust obscuration and gravitational lensing in a self-consistent way. It is interesting to note that both the ’by-pass’ effect, and the ’galactocentric distance’ bias conspire to exclude from the surveys the inner regions of galaxies, i.e. the most chemically evolved regions. ## 4. Evolution of DLA systems Detecting evolution effects in DLA systems is difficult for several reasons. First, the low number of absorbers identified at low redshift represents a severe limitation, since $`z1.65`$ corresponds to a look-back time of about 2/3 of the age of the universe. Second, the selection effects mentioned in the previous section imply that the observed samples are biased. 
Third, the samples are not homogeneous, since they can include galaxies of different morphological types $`T`$, absolute magnitudes $`M`$, and redshifts of formation $`z_f`$; moreover the line of sight crosses the galaxy at a random radius $`r`$. Therefore, observations of any physical quantity $`Q`$ at different redshifts $`z_{\mathrm{abs}}`$ will yield a data set of the type $`Q=Q(z_{\mathrm{abs}};T,M,z_f;r)`$, where the redshift dependence of $`Q`$ will be disguised by fluctuations induced by the other variables. ### 4.1. Number density and comoving mass density The number of absorbers per unit redshift interval is given by the product of the absorber cross section, $`A_{}`$, times the number density of absorbers per comoving volume, $`\mathrm{\Phi }_{}`$. The expression for a standard Friedmann universe, $`n(z)=\mathrm{\Phi }_{}A_{}cH_{}^1(1+z)(1+2q_{}z)^{1/2}`$, is usually replaced with $$n(z)=n_{}(1+z)^\gamma ,$$ (2) where $`\gamma `$ is determined from the best fit to the empirical data points. In absence of intrinsic evolution $`\gamma `$ = 1/2 ($`q_{}=0.5`$) or $`\gamma `$ = 1 ($`q_{}=0.`$). The work by Lanzetta et al. (1995) yields $`\gamma =1.15\pm 0.55`$, consistent with no evolution. From a combined sample including DLA systems at $`z4`$, Storrie-Lombardi, Irwin & McMahon (1996b) find $`\gamma =1.3\pm 0.5`$, also consistent with no evolution. However, these authors do find evolution for the absorbers with $`N_{\mathrm{HI}}>10^{21}`$ cm<sup>-2</sup>, which show a decline at $`z3.5`$. By combining the sample of DLA systems with the constraints on present-day galaxies, Rao, Turnshek & Briggs (1995) find $`\gamma =2.27\pm 0.25`$. However, this result should not be taken as evidence for evolution until we understand clearly the link between DLA absorbers and nearby galaxies. The mean comoving mass density of gas in DLA systems is given by $$\mathrm{\Omega }_{\mathrm{dla}}(z)=\frac{H_{}\mu m_\mathrm{H}}{c\rho _{\mathrm{crit}}}_{N_{\mathrm{min}}}^{\mathrm{}}Nf(N,z)𝑑N,$$ (3) where $`\mu `$ is the mean molecular weight of the gas, $`\rho _{\mathrm{crit}}`$ is the current critical density of the universe and $`f(N,z)`$ is the column density distribution function (see Storrie-Lombardi, McMahon, & Irwin 1996c and refs. therein). Analysis of a sample including systems at $`z>4`$ indicates that $`\mathrm{\Omega }_{\mathrm{dla}}(z)`$ increases from $`z=0`$ to $`z2.5`$ and apparently declines at $`z>3.5`$ (Storrie-Lombardi et al. 1996c). In the lowest redshift bin $`\mathrm{\Omega }_{\mathrm{dla}}(z)`$ is roughly equal to the comoving density of neutral gas derived from 21-cm emission surveys of nearby galaxies, i.e. $`\mathrm{\Omega }_{\mathrm{dla}}(z0.64)\mathrm{\Omega }_{21\mathrm{c}\mathrm{m}}(0)`$. At the peak value, $`\mathrm{\Omega }_{\mathrm{dla}}(z)`$ is marginally consistent with the mass density in stars in nearby galaxies, i.e. $`\mathrm{\Omega }_{\mathrm{dla}}(z2.5)\mathrm{\Omega }_{\mathrm{star}}(0)`$. These two facts suggest that the evolution of $`\mathrm{\Omega }_{\mathrm{dla}}(z)`$ from the peak value to the present-day value is governed by gas consumption due to star formation, as suggested by Wolfe et al. (1995). However, if the inequality $`\mathrm{\Omega }_{\mathrm{dla}}(z)<\mathrm{\Omega }_{\mathrm{star}}(0)`$ holds true, we are missing part of the gas responsible for the formation of present-day stars. This could be a consequence of the QSO obscuration bias, which affects the absorbers with highest column densities, i.e. 
the absorbers that give a dominant contribution to $`\mathrm{\Omega }_{\mathrm{dla}}(z)`$ (Eq.3). A new study of DLA systems at low redshift suggests that $`\mathrm{\Omega }_{\mathrm{dla}}(z)`$ could be roughly constant from $`z0.4`$ up to $`z3`$, with $`\mathrm{\Omega }_{\mathrm{dla}}(z)>\mathrm{\Omega }_{21\mathrm{c}\mathrm{m}}(0)`$ (Turnshek 1998). This finding would imply that the bulk of star formation took place only relatively recently. However, the result must be considered with caution, since it is based on a very low number of systems and may also be affected by the gravitational lensing bias (Turnshek 1998). Jimenez, Bowen & Matteucci (1999) have recently computed model predictions of $`\mathrm{\Omega }_{\mathrm{dla}}(z)`$ for galaxies with low and high levels of surface brightness. According to these authors, HSB galaxies consume neutral gas at a rate which is too fast to explain the observed evolution of $`\mathrm{\Omega }_{\mathrm{dla}}(z)`$; instead LSB galaxies provide a good fit to the data published by Storrie-Lombardi et al. (1996c). ### 4.2. Metallicity and abundance ratios Metallicities are expected to increase in the course of evolution. Measurements of the \[Zn/H\] ratio are now available for 40 DLA systems, including 10 absorbers at $`z<1.5`$ (Pettini et al. 1999). The analysis of this sample does not reveal evidence for an increase with time: the column-density weighted mean metallicity is not significantly higher at $`z<1.5`$ than at earlier epochs. However, the lack of detection of evolution could be due to the inhomogeneity of the sample and/or to the presence of some selection bias. The QSO obscuration bias may be responsible for missing systems of higher $`Z/Z_{}`$ and higher column density (Section 3.1), which are expected to give an important contribution to the metallicity at low redshift. The sample is inhomogeneous for studying the metallicity evolution in the sense that $`Z/Z_{}`$ depends on the SFR and on $`z_f`$, two parameters that vary in different types of galaxies; in addition $`Z/Z_{}`$ can vary with the galactocentric distance $`r`$ in a given galaxy. Even if evolution is not directly detected from the data, model predictions of metallicity evolution obtained for different types of galaxies yield results consistent with the observations (Lindner, Fritze & Fricke 1998; Jimenez et al. 1999). The models by Jimenez et al. (1999) show explicitely the dependence of the metallicity on $`z_f`$ and on $`r`$. According to these models, LSB galaxies formed at $`z_f=4`$ (but not later) fit well the DLA metallicities; HSB disks can account for the observed data only if they form continuously in the interval $`1z_f4`$. The \[$`\alpha `$/Fe\] ratio is expected to decrease in the course of chemical evolution (Section 2.2). Measurements of the \[Si/Fe\] ratio are available for almost 30 DLA systems (Lu et al. 1996; Prochaska & Wolfe 1999), and for part of them it is possible to perform the correction for dust depletion (Vladilo 1998). The mean corrected value is not significantly lower at $`z<2`$, where $`<`$\[Si/Fe\]$`>=+0.01\pm 0.07`$ dex, than at $`z>2`$, where $`<`$\[Si/Fe\]$`>=+0.10\pm 0.09`$ dex (Fig. 1). The lack of evolution could be due to the inhomogeneity of the sample or to the presence of some selection bias, as in the case of the metallicity. In fact, all the selection effects that alter the study of the metallicities will also affect the \[$`\alpha `$/Fe\] ratio, which evolves with metallicity. 
However, the \[$`\alpha `$/Fe\] ratios do not show evidence for evolution even when plotted versus $`Z/Z_{}`$ (Vladilo 1998). ### 4.3. Dust-to-gas ratios The dust-to-gas ratios $`k`$ estimated from the iron-peak abundances do not show any trend with redshift. This is consistent with the fact that $`k`$ is very well correlated with $`Z/Z_{}`$ (Fig. 2 in Vladilo 1998) and the metallicity does not evolve with redshift. Since metallicity is an indicator of chemical evolution, the good correlation between $`k`$ and $`Z/Z_{}`$ can be considered as evidence for evolution in DLA systems. The regular increase of the dust content with metallicity, however, constrasts with the lack of any correlation with redshift. We deduce that DLA systems do evolve in a regular fashion, but they attain a given level of metallicity (or dust-to-gas ratio) at different redshifts. This conclusion confirms that the sample of DLA systems must include galaxies with different formation redshifts $`z_f`$ and/or different SFRs. ### 4.4. Kinematics From the analysis of a sample of 16 absorbers, Ledoux et al. (1998) find that the maximum $`\mathrm{\Delta }V`$ at a given $`z`$ increases at lower redshifts. This result, if confirmed, would indicate that neutral regions exhibit increasingly faster motions with cosmic time. However, analysis of the set of 28 measurements of $`\mathrm{\Delta }V`$ obtained by Prochaska & Wolfe (1997, 1998) does not confirm the existence of such a trend. Wolfe & Prochaska (1998) find that the maximum $`\mathrm{\Delta }V`$ measured at a given \[Zn/H\] increases with metallicity. According to these authors this effect can be explained by the passage of the lines of sight through rotating disks with radial gradients in metallicity. An alternative explanation is that DLA galaxies with higher metallicities exhibit faster motions than DLA galaxies with lower metallicities. This interpretation would fit nicely in a general trend of increasing metallicity with increasing mass (i.e., velocity dispersion). In any case, the statistics are still insufficient to confirm the existence of this trend. ### 4.5. Spin temperature As mentioned in Section 2.5, the spin temperatures measured in high-$`z`$ DLA systems are higher than those measured in present-day spirals. The difference could be ascribed, in principle, to an effect of evolution. However, recent measurements in two DLA systems at $`z_{\mathrm{abs}}=0.221`$ and $`z_{\mathrm{abs}}=0.091`$ yield $`<T_s>10^3\mathrm{K}`$, consistent with the values found at high redshifts (Chengalur & Kanekar 1999). This suggests that evolution may not be crucial and confirms that DLA galaxies have properties intrinsically different from those observed in nearby spirals. ## 5. Summary The nature of the galaxies hosting DLA clouds is still a subject of debate. However, some important conclusions can be inferred by comparing the results obtained from different observations. At low redshifts, candidate DLA galaxies in the fields of the background QSOs show a variety of morphological types and different levels of surface brightness. Spirals are not the dominant contributors, contrary to the predictions based on the HI content of the nearby universe. At high redshift, the hypothesis that DLA absorbers originate in (proto)spirals is not supported by spectroscopic studies of metallicities, abundance ratios, dust-to-gas ratios and spin temperatures. 
In particular, \[$`\alpha `$/Fe\] ratios and nitrogen abundances hint at an origin in galaxies with properties similar to those observed in nearby, low-mass galaxies. Studies of kinematics are consistent with an origin both in massive disks (proto-spirals) and in low-mass galaxies. Evolution effects are generally not detected in DLA systems. A possible exception is the number density of $`N_{\mathrm{HI}}>10^{21}`$ cm<sup>-2</sup> absorbers, which seems to decline at $`z`$ 3.5. The comoving mass density $`\mathrm{\Omega }_{\mathrm{dla}}(z)`$ apparently peaks at $`z2.5`$ and decreases at lower redshifts, but this decrease is not corroborated by recent observations. At the peak value $`\mathrm{\Omega }_{\mathrm{dla}}(z)<\mathrm{\Omega }_{\mathrm{star}}(0)`$, suggesting that we are missing part of the gas responsible for the formation of present-day stars. Model predictions of LSB galaxies seem to better fit $`\mathrm{\Omega }_{\mathrm{dla}}(z)`$ than models of HSB galaxies. Metallicities, abundances ratios, dust-to-gas ratios, line-of-sight velocity dispersions, and spin temperatures do not show evidence of redshift evolution. As a consequence, the differences between the properties of present-day spirals and those of high-$`z`$ DLA systems cannot be ascribed to evolutionary effects: DLA galaxies appear to be intrinsically different from nearby spirals. Dust production follows metal production in a very regular fashion in DLA systems. While this regular behaviour is proof of evolution, the lack of any correlation between metallicity and redshift suggests that DLA galaxies attain a given level of metallicity at different cosmic epochs, i.e. DLA galaxies must have different formation redshifts $`z_f`$ and/or different SFRs. Several selection effects conspire to bias the observed population of DLA absorbers. In particular, high column density clouds located in environments with relatively high metallicity can be missed owing to the QSO obscuration effect. This bias tends to decrease the fraction of spirals detected in the surveys. In general, the contribution of the most chemically evolved galactic regions tends to be underestimated. On the other hand, selection effects tend to favour the detection of LSB galaxies and the fraction of low-mass galaxies does not seem to be underestimated. In spite of their small sizes, low-mass galaxies can be detected if the faint end of the luminosity function is sufficiently steep (Tyson 1988), a condition supported by results obtained at low $`z`$ (Zucca et al. 1997). In conclusion, DLA absorbers appear to be associated with a composite population of galaxies without strong effects of evolution and with a prominent representation of low-mass and/or LSB galaxies. A significant number of massive galaxies can be detected in absorption by increasing the statistics of DLA surveys and by pushing the observational limits down to fainter magnitudes in order to contrast selection effects. ## References Boissé, P., Le Brun, V., Bergeron, J., & Deharveng, J.M. 1998, A&A, 333, 841 Braun, R., 1997, ApJ, 484, 637 Carilli, C.L., Lane, W., de Bruyn, A.G., Braun, R., Miley, G.K. 1996, AJ, 111, 1830 Centurión, M., Bonifacio, P., Molaro, P., & Vladilo, G. 1998, ApJ, 509, 620 Chengalur, J.N., & Kanekar, N. 1999, MNRAS, 302, L29 de Bruyn, A.G., O’Dea, C.P., Baum, S.A. 1996, A&A, 305, 450 Dickey, J.M. 1995, in The Physics of the Interstellar and Intergalactic Medium, eds. A. Ferrara et al., A.S.P. Con. Ser. Vol. 80, 357 Haehnelt, M.G., Steinmetz, M., & Rauch, M. 
1998, ApJ, 495, 647 Jannuzi, B.T., Bahcall, J.N., Bergeron, J., Boksenberg, A., Hartig, G.F., Kirhakos, S., Sargent, W.L.W., Savage, B.D., Schneider, D.P., Turnshek, D.A., Weymann, R.J., & Wolfe, A.M. 1998, ApJS, 118, 1 Jimenez, R., Bowen, D.V., & Matteucci, F. 1999, ApJ, 514, L83 Kulkarni, V.P., S.M., Fall, & J.W. Truran, 1997, ApJ, 484, L7 Lanzetta, K.M., Wolfe, A.M., Altan, H., Barcons, X., Chen, H.-W., Fernández-Soto, A., Meyer, D.M., Ortiz-Gil, A., Savaglio, S., Webb, J.K., & Yahata, N. 1997, AJ, 114, 1337 Lanzetta, K.M., Wolfe, A.M., & Turnshek, D.A. 1995, ApJ, 440, 435 Lauroesch, J.T., Truran, J.W., Welty, D.E., & York, D.G. 1996, PASP, 108, 641 Le Brun, V., Bergeron, J., Boissé, P., & Deharveng, J.M. 1997, A&A, 321, 733 Ledoux, C., Petitjean, P., Bergeron, J., Wampler, E.J., & Srianand, R. 1998, A&A, 337, 51 Lindner, U., Fritze-Von Alvensleben, U., & Fricke, K.J. 1998, A&A, 341, 709 Lu L., Sargent W.L.W., Barlow T.A. 1998, ApJ, 115, 55 Lu, L., Sargent, W.L.W., Barlow, T.A., Churchill, C.W., & Vogt, S. 1996, ApJS, 107, 475 Matteucci, F. 1991, in SN1987A and other Supernovae, ed. I.J. Danziger & K.Kjär, ESO Proc. No. 37, 703 Matteucci, F., Molaro, P., & Vladilo, G. 1997, A&A, 321, 45 Miller, E.D., Knezek, P.M., & Bregman, J.N. 1999, ApJ, 510, L95 Mo, H.J., Mao, S., & White, S.D.M. 1999, MNRAS, 304, 175 Molaro, P., Centurión, M., & Vladilo, G. 1998, MNRAS, 293, L37 Molaro, P., D’Odorico, S., Fontana, A., Savaglio, S., & Vladilo, G. 1996, A&A, 308, 1 Pei, Y.C., & Fall, S.M. 1995, ApJ, 454, 69 Pei, Y.C., Fall, S.M., & Bechtold, J. 1991, ApJ, 378, 6 Pettini, M., Ellison, S.L., Steidel, C.C., & Bowen, D.V. 1999, ApJ, 510, 576 Pettini, M., King, D.L., Smith, L.J., & Hunstead, R.W. 1997a, ApJ, 478, 536 Pettini, M., Smith, L.J., King, D.L., & Hunstead, R.W. 1997b, ApJ, 486, 665 Prochaska, J.X., & Wolfe, A.M. 1997, ApJ, 474, 140 Prochaska, J.X., & Wolfe A.M. 1998, ApJ, 507, 113 Prochaska, J.X., & Wolfe A.M. 1999, ApJ, in press (astro-ph/9810381) Rao, S.M., & Briggs, F. 1993, ApJ, 419, 515 Rao, S.M., & Turnshek, D.A. 1998, ApJ, 500, L115 Ryan, S.G., Norris, J.E., & Beers, T.C. 1996, ApJ, 471, 254 Savage B.D., & Sembach K.R. 1996, Ann. Rev. Astron. Astrophys., 34, 279 Smette, A., Claeskens, J.-F., & Surdej, J., 1997, New Astronomy 2, 53 Steidel, C.C., Dickinson, M., Meyer, D.M., Adelberger, K.L., & Sembach, K.R. 1997, ApJ, 480, 568 Storrie-Lombardi, L.J., Irwin, M.J., & McMahon, R.G. 1996b, MNRAS, 282, 1330 Storrie-Lombardi, L.J., McMahon, R.G., & Irwin, M.J. 1996c, MNRAS, 283, L79 Storrie-Lombardi, L.J., McMahon, R.G., Irwin, M.J., & Hazard, C. 1996a, ApJ, 468, 121 Thuan, T.X., Izotov, Y.I., & Lipovetsky, V.A. 1995, ApJ, 445, 108 Turnshek, D.A. 1998, in Structure and Evolution of the Intergalactic Medium from QSO absorption Line Systems, ed. P. Petitjean, & S. Charlot (Paris:Editions Frontieres), 263 Tyson, N.D. 1988, ApJ, 329, L57 van Zee, L., Haynes, P.M., Salzer, J.J., & Broeils, A. 1996, AJ, 112, 129 Vladilo, G. 1998, ApJ, 493, 583 Vladilo, G., Centurión, M., Falomo, R., & Molaro, P. 1997, A&A, 327, 47 Vladilo, G., Molaro, P., & Centurión, M. 1999, Proc. of the meeting The Birth of Galaxies, Chateau de Blois, France, June 28th - July 4th (1998), in press. Wheeler, J.C., Sneden, C., & Truran, J.W.Jr. 1989, Ann. Rev. Astron. Astrophys., 27, 279 Wolfe, A.M., Prochaska, J.X. 1998, ApJ, 494, L15 Wolfe, A.M., Lanzetta, K.M., Foltz, C.B., & Chaffee, F.H. 1995, ApJ, 454, 698 Wolfe, A.M., Turnshek, D.A., Smith, H.E., & Cohen, R.D. 
1986, ApJS, 61, 249 Zucca, E., Zamorani, G., Vettolani, G., Cappi, A., Merighi, R., Mignoli, M., Stirpe, G. M., MacGillivray, H., Collins, C., Balkowski, C., Cayatte, V., Maurogordato, S., Proust, D., Chincarini, G., Guzzo, L., Maccagni, D., Scaramella, R., Blanchard, A., & Ramella, M. 1997, A&A, 326, 477 Zuo, L., Beaver, E.A., Burbidge, E.M., Cohen, R.S., Junkkarinen, V.T., & Lyons, R.W., 1997, ApJ, 477, 568
## 1 Introduction The study of low mass dileptons has recently received considerable interest. This has been triggered by the observation of enhanced production of dileptons with invariant mass around 400-500 MeV in relativistic heavy ion collisions by the CERES collaboration . This enhancement has been studied in various approaches, ranging from thermal models to complicated transport models. All those calculations include the known hadronic decay channels into lepton pairs and, in addition, dilepton production via re-interaction of particles, most prominently pion annihilation. They find that pion annihilation accounts for a large part of the observed enhancement, while other channels such as the pion-rho scattering or the Dalitz decay of the $`\mathrm{a}_1`$-meson are less important (see e.g. ). In ref. a large variety of initial conditions for the hadronic fireball has been considered under the constraint that the final state hadronic spectra are in agreement with experiment. Surprisingly little variation has been found in the resulting dilepton spectra (see fig. 1). Certain initial conditions would agree with the lower end of the sum of statistical and systematic errors of the CERES data for the sulfur on gold reactions. In , in medium modifications of pions and of the pion nuclear form factor in a pion gas have been considered and have been found to be small. The conclusion of and many other works (see for a list of references) is that in order to reach the central data points of the $`\mathrm{S}+\mathrm{Au}`$ measurement, additional in medium modifications need to be considered. Most of the attention has been received by the suggestion of Li et al. that a dropping of the mass of the $`\rho `$-meson with density – following, with some modifications, the original conjecture of Brown and Rho – can reproduce the central data points. On the other hand, Rapp et al. have extended the work of to include also the effect of baryons on the in medium modification of pions. The present status of those considerations is that the in medium change of the pion dispersion relation leads only to a small enhancement, whereas the inclusion of baryon resonances which couple directly to the rho meson appears to be able to increase the yield substantially. Most important here is the $`\mathrm{N}^{}(1520)`$ resonance, as first pointed out by the Giessen group . The p-wave resonances, which were first considered by Friman and Pirner, appear to play a lesser role. We should note, however, that the calculation of Steele et al. , although similar in spirit, finds a much smaller effect due to baryons. In this contribution we want to revisit the CERES data, in particular those for the system $`\mathrm{Pb}+\mathrm{Au}`$ . These data have been analyzed to provide not only an invariant mass spectrum but also transverse momentum spectra and thus may give new insight into the relevant production mechanisms. Recently, new (preliminary) results from the ’96 run have been shown . These data have much improved statistics as compared to the published data from the ’95 run. We will also present arguments concerning the importance of baryons. According to the work of Rapp et al. , baryons seem to be the most important source for the low mass enhancement. In contradiction to that, our estimates in found the baryons to be irrelevant. ## 2 The $`\mathrm{Pb}+\mathrm{Au}`$ data In this section we present some new results for the dilepton spectra for $`\mathrm{Pb}+\mathrm{Au}`$. 
The calculation is similar to that carried out in and we refer to this reference for details. A new element is the inclusion of the channel $`\pi +\rho \pi +e^+e^{}`$ . Using vector dominance this process is related to the elastic $`\pi +\rho \pi +\rho `$ scattering, which gives rise to the a collisional broadening of the rho meson, as first discussed in . We have attempted to include the effect of the collisional broadening into our transport model, by calculating the collisional width as a function of the local pion density. This certainly is a crude method and needs to be refined in the future. While there is some reduction of strength below the rho-omega peak due to the collisional broadening of the rho, the overall effect is small and the difference to the previous calculations are well within the theoretical uncertainties as discussed in . In fig. 2 we show the resulting invariant mass spectra together with the CERES data , where only the statistical errors are shown. We have also included the preliminary data from the ’96 run (full circles) . We find a reasonable overall agreement, especially with the new, preliminary data. In fig. 3 we show the invariant mass spectra for the tranverse momentum interval $`p_t<400\mathrm{MeV}`$ (left panel) and $`p_t>400\mathrm{MeV}`$ (right panel). Again the agreement is quite reasonable. Some discrepancy might possibly be around the rho-omega peak where our calculation overshoots the data somewhat. However, as already pointed out in the strength around the rho-omega peak is dominated by the omega decay contribution, which depends on the abundance of omegas in the final state. This, however, is not very well constrained by other data. Finally in fig. 4 we compare the prediction from for central Pb+Au collisions with the central ’96 data . Again good agreement with the data is found. Also shown in this figure is the invariant mass spectrum after in medium modification in a pion gas are taken into account. These in medium modification also include the effects of chiral symmetry restoration on the coupling of the photon to the rho meson . Notice, that it will be extremely difficult to extract those effects from the data. To summarize this section, provided the still preliminary ’96 data remain unchanged it seems the the Pb+Au data are consistent with a simply hadronic scenario without any large in medium modifications. As already pointed out, in medium modification due to the presence of a pion gas are small and thus their presence/absence can not be decided on from the present data. This situation could be improved considerably if the mass resolution would allow to separate out the omega decay. This could then be ‘subtracted’ and the effects of chiral symmetry restoration should lower the remaining spectrum by about a factor of two around the rho-peak . The recently completed upgrade of the CERES detector should allow for exactly that. But certainly the present Pb+Au data to not seem to call for sizable corrections as they were predicted by the dropping rho mass scenario . ## 3 The role of baryons The work of Rapp et. al has emphasized the role of baryons as a possible source for additional dileptons. They consider in medium modifications of the current-current correlator, the imaginary part of which is directly related to the dilepton production rate. Following the suggestion of the Giessen group Rapp et al. also find that the contribution of the $`N^{}(1520)`$-hole diagram, depicted in fig. 3 is the most important one. 
However, as illustrated in fig. 3, the imaginary part of this diagram is nothing but the Dalitz - decay of the $`N^{}(1520)`$. The contribution of the Dalitz decays of baryons has already been estimated in ref. . In this estimate, the formula for the branching ratio of photon to Dalitz decay of the Delta has been extended to higher masses. In order to arrive at an conservative estimate of an upper limit the fraction of higher lying resonances has purposely been overestimated by a factor of two. The photon decay width has been chosen to be $`1\mathrm{MeV}`$, which again is on the large side. The resulting dilepton spectrum has then been compared with that of the omega, in order to minimize the effect of the detector acceptance. The ratio of these yields is shown in fig. 5. The baryons hardly contribute half as much as the omega Dalitz. Considering the contribution of the omega Dalitz as shown in fig. 2 it seems that the baryons are anything but irrelevant. So what is the difference between this estimate and the results of Rapp et al. Several possibilities come to mind: * The $`N^{}(1520)`$ has a considerably larger Dalitz decay width than the the extrapolation from the Delta decay width would predict. This might be possible, in particular, because the $`N^{}(1520)`$ couples rather strongly to the $`\rho `$-meson. This is presently being investigated and we find that in both a relativistic as well as a nonrelativistic description the Dalitz decay is well in line with fig. 5. Similar results are also found by . * Rapp et al. sum the RPA-type Dyson-series for these diagrams. So in principle there could be collective effects, which are ignored in the simple calculation of the Dalitz decay. It appears, however, rather unlikely that at the temperatures under consideration, collectivity can play an important role. * Another source of discrepancy is the baryon to pion ratio assumed in the calculations. Using the freeze out parameters of we find a pion to baryon ratio for $`Pb+Au`$ of about 3, whereas a ratio of close to 6 is observed in experiments. The estimate of , on the other hand was based on a realistic pion/baryon ratio. All this points are specific to the environment created in a CERN energy heavy ion collision. At lower energies or in proton/pion nuclear reactions the density effects could very well be large. This will be investigated in the near future by the HADES detector at the GSI. ## 4 Conclusions At this point it is very hard to draw any firm conclusions as the new CERES data are still preliminary. Taking these data at face value, however, it appears that no or only small in medium corrections are needed in order to explain the data. This would be somehow unfortunate, although, as shown in in medium modifications due to the presence of pions indeed give rise only to small corrections. Certainly, all these calculations are rather unconstraint. For instance the number of omegas can be chosen within a considerably wide range. While these uncertainties have already been addressed, a measurement with a mass resolution which is sufficient to constrain the number of omegas in the final state would reduce these uncertainties to a large extent. This would then provide the basis for the search for the more subtle effects which, one would think, should be there. ## 5 Acknowledgments I would like to thank my collaborators C.M. Ko, C. Gale and A. Kanti Dutt-Mazumder with whom I am presently working the the issues addressed in this contribution. 
This work was supported by the Director, Office of Energy Research, Office of High Energy and Nuclear Physics, Division of Nuclear Physics, and by the Office of Basic Energy Sciences, Division of Nuclear Sciences, of the U.S. Department of Energy under Contract No. DE-AC03-76SF00098.
no-problem/9903/hep-ph9903499.html
ar5iv
text
# NEARLY DEGENERATE NEUTRINO MASSES AND NEARLY DECOUPLED NEUTRINO OSCILLATIONS<sup>1</sup> <sup>1</sup>Talk given by one of us (H.F.) in the 17th International Workshop on Weak Interactions and Neutrinos, Cape Town, South Africa, January 1999. ## References
no-problem/9903/cond-mat9903078.html
ar5iv
text
# Comparing different approaches to model the atomic structure of a ternary decagonal quasicrystal (Comparing different decagonal models) ## 1 Introduction From the beginning, the existence of quasicrystals has provoked questions about the physical driving force which results in a highly ordered but very complicated structure. Especially, the controversy whether truly deterministic quasicrystals can exist in real systems or whether quasicrystals always contain some inherent random disorder has stimulated much research. The first approach assumes quasicrystals to be stabilized energetically, while the second proposes them to be high–temperature phases due to an entropic contribution. The quasicrystal structure has often been described and analysed in the framework of quasiperiodic tilings, composed of two or more constituent “unit cells”. Recently, a new approach was made by the covering proposed by Gummelt . By assuming special overlap rules between prototiles of decagonal symmetry, it was shown that the resulting covering of these decagons is equivalent to a deterministic Penrose tiling. Later, Jeong and Steinhardt gave a simpler proof and further discussed the physical implications of this cluster approach. Especially, they showed that the Penrose tiling corresponds to a maximum density of overlapping decagonal clusters. The structure of existing quasicrystals has been analysed experimentally. In the case of decagonal quasicrystals, structure determinations were performed e.g. by Steurer et al. . This data was also the base for some further theoretical structure modeling . Experimental analyses have been complicated by the existence of competing structures such as approximants or microcrystals which strongly resemble quasicrystals. Furthermore, a problem of all experimental investigations so far, whether electron microscopy or x–ray diffraction, is that it is not possible to distinguish between the different TM atoms in the ternary Al–Co–Cu or Al–Co–Ni decagonal quasicrystals on which research has focussed. Concerning the overlap rules of Gummelt, the common experiment–based models do not provide any crystal chemical motivation. First attempts to bridge the gap between atomistic structure models and the mathematical algorithm of Gummelt were made recently . However, these models were explicitly designed to impose the Gummelt rules, but without any energetic calculations. This letter attempts to clarify the relationship of a specific structure model for a decagonal quasicrystal , based on an atomic decoration of a tiling, to the prototile framework. Specific features of the atomic arrangement prevent a strict equivalency of both approaches and can be therefore employed to discriminate between them. Recent experimental data will be discussed and used for comparison with the theoretical models. ## 2 Comparing tiling and covering approaches ### 2.1 Relationship between HBS tiling and prototile covering In a first step, the hexagon, boat and star (HBS) tiling will be connected to the overlap rule of decagonal clusters . For this purpose, the Gummelt decagon (including its decoration) can be subdivided into a boat and two hexagons. If we now allow overlaps of the decagons of type A (for the nomenclature see ref. ), the following cases can arise in the HBS framework: i) Two hexagons fall onto each other – no problem. ii) A hexagon falls onto a boat – then the hexagon should be rejected, leaving only the boat as part of the tiling. 
iii) Two boats overlap, forming a star – then both boats are to be replaced by a star. (Briefly: if two tiles overlap, they should be replaced by the tile which encompasses both. Note that larger and smaller tile always differ by a bowtie shape.) In the case of the type B overlap, the configuration should be replaced by that shown in Fig. 1. Using these rules, we can replace a Gummelt covering by an HBS tiling. Checking the allowed surroundings of a decagon (cf. fig. 14 in ref. ), it can be easily verified that these rules do not lead to any conflicts or ambiguities. As each surrounding in the prototile picture transforms into a different arrangement of HBS tiles (which always contains a star tile), we can conclude that these rules even describe a one–to–one correspondence between Gummelt covering and a HBS tiling. However, as the covering is equivalent to a Penrose tiling , the HBS tiling also has to follow specific matching rules. ### 2.2 The atomic decoration of Cockayne and Widom Recently, Cockayne and Widom (CW) presented a ternary model for a decagonal quasicrystal based on Monte–Carlo simulations and consisting of an HBS tiling supposed to impose matching rules which would equivalently allow an description in terms of a Penrose lattice. The atomic arrangement consists of interconnected decagonal clusters of about 2 nm diameter. Its atomic decoration shows remarkable similarities to the schematic Gummelt decoration (Fig. 2). Following the above argumentation, one could assume the CW model could be equivalently described as a overlap of decagonal prototiles. In fact, if an HBS tiling is constructed using the the CW decoration and the above strategy, only a few atoms do not match when placing a boat on top of a hexagon or a star on top of a boat. Part of the incongruencies can be resolved by allowing the two types of TM atoms in a zigzag chain to change places, but not all of them. In the following, the slight but important differences between the CW model and the prototile approach will be elucidated. First, the CW decoration breaks the symmetry both of the boat and the star tiles. In the case of the boat tile, the asymmetry of positions 1 and 2 in Fig. 2 could be resolved, although this would require also changes of all other positions marked with a cross. Furthermore, to symmetricise the boat while allowing a Gummelt overlap of type A would require not only position 3 but also position 4 to be occupied with a Cu atom. Together with the Co atom in between, this would result in a zipper–like interconnection of two zigzag chains of TM atoms. It is doubtful that such an arrangement would be stable. An even more ”dramatic” conflict concerns the bottleneck region of the bowtie. There, the two vertices (designated by 5 and 6 in Fig. 2) approach each other too closely for pentagonal Al clusters to be centered around both of them. Similar difficulties arise also with the decorated star tile. Therefore, the pentagonal Al clusters at the vertices and the zigzag chains are stumbling stones to any attempt to construct a decagonal cluster following the overlap rules exactly. In fact, these two features are also absent from such atomic models . In the case of the model of Steinhardt et al. this is all the more remarkable as part of the authors already proposed these zigzag chains in a previous paper . The covering forced by the Gummelt decoration is a deterministic Penrose tiling . 
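The replacement rules of Sect. 2.1, namely that an overlapping hexagon is absorbed by a boat and that two overlapping boats are replaced by a star (the larger and smaller tile always differing by a bowtie), can be encoded in a few lines. The sketch below works with tile names only, carries no geometry, and is merely a bookkeeping illustration of those rules:

```python
# Type A overlap rules from Sect. 2.1: when two HBS tiles overlap,
# keep the tile that encompasses both (larger and smaller tile differ by a bowtie).
RANK = {"hexagon": 0, "boat": 1, "star": 2}

def resolve_overlap(tile_a, tile_b):
    """hexagon+hexagon -> hexagon, hexagon+boat -> boat, boat+boat -> star."""
    if tile_a == tile_b == "boat":
        return "star"
    return tile_a if RANK[tile_a] >= RANK[tile_b] else tile_b

print(resolve_overlap("hexagon", "hexagon"))  # hexagon
print(resolve_overlap("hexagon", "boat"))     # boat
print(resolve_overlap("boat", "boat"))        # star
```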
Assuming the constituting cluster to be preferred on energetic reasons, the approach gives a physical mechanism for producing stable deterministic quasicrystals . On the other hand, the energetic stability of the cluster has been only postulated so far. In the specific proposal , the cluster is built of several hundred atoms, which makes it difficult to imagine that slightly different clusters will not be energetically degenerate. One could argue that such a cluster remains essentially unaffected by the exchange or displacement of a few of the about hundred atoms inside. Then, the CW model can be considered as a covering of a single decagonal prototile following the overlap rules of Gummelt, if only the atomic decoration of the prototile is not strictly fixed but slight changes and reshufflements are allowed. Therefore, the results of CW would come close to provide us with a crystal chemical motivation for the cluster decoration of the Gummelt model. However, the conceptual beauty of the prototile approach has been lost. In contrast, the CW model already contains an argument for stability, namely matching rules induced by ordering of TM atoms. This offers an explanation at a more fundamental level than having to postulate special stability of a specific cluster composed of hundreds of atoms. ## 3 Comparison of the structure models with experimental results In the following, some experimental results will be reviewed and compared with the above approaches. Tab. 1 contains a summary of the major differences between the models (including the often–discussed Burkov model ) and their comparison with the experiments. ### 3.1 Atomic positions in the 2 nm cluster A very peculiar feature of the CW model is the lack of a central atom in the 2 nm cluster. Instead, the center point is surrounded by triangular patterns of TM zigzag chains and pentagonal clusters comprising mainly Al atoms. This contradicts an established consensus that the 2 nm cluster should have perfect tenfold symmetry which is at the base of the common structural models . High–resolution transmission electron microscopy (HRTEM) images usually also show a tenfold cluster symmetry . Only in the case of decagonal Al–Pd–Mn , a triangular pattern in the center of the cluster was reported. However, it was interpreted to indicate columns in between the atomic positions, retaining the existence of a central atom. On the other hand, HRTEM micrographs contain strong dynamical contributions and phase contrast due to the microscope lens system. Therefore, the patterns are not easy to interprete. These disadvantages can be avoided by using the high–angle annular dark–field method which directly uses the electrons scattered from an incident beam into an annular detector. The resulting high–resolution images directly show the atomic structure. Using this method, images of the decagonal phase with composition Al<sub>72</sub>Co<sub>8</sub>Ni<sub>20</sub> have been obtained recently . As the scattering power of the Al atoms is five times less than that of the TM atoms, the latter show up prominently. Their positions fit remarkably well to the CW model. Even the triangular arrangement in the center of a 2 nm cluster column predicted by CW can be clearly recognized in the experimental images. The HBS tiles of CW are indicated by broken lines in fig. 3a of ref. . Matching rule violations are not observed. 
The vertices of the tiling show only a weak scattering contrast, indicating an Al atomic position, which is surrounded by a pentagonal arrangement of strong scatterers. Saitoh et al. explain this arrangement by a cluster (named S and marked with a star) motivated by its resemblance to clusters found in the Al<sub>13</sub>Fe<sub>4</sub>–type approximant. However, to account for the experimental observation, the central TM atom had to be arbitrarily substituted by Al, which gives unrealistic bond lengths. Furthermore, the tips of the stars indicating clusters of type S always overlap, as the bright dots forming these pentagons seem to be slightly elongated. This should not occur with the cluster proposed by Saitoh et al., but fits perfectly well to the TM zigzag chains in the model of ref. . As the two TM atomic positions are only 1.5 Å apart, they could not be directly resolved due to the experimental resolution of about 2 Å. The importance of the experimental findings of Saitoh et al. was already pointed out by Steinhardt et al. . Although their model is able to reproduce the observed positions of strong scatterers quite well by single TM atoms, it does not offer any explanation for their elongated shape. It was demonstrated that the model is also able to reproduce conventional HRTEM reasonably well, but the same is claimed by part of the coauthors for a different model . Here, the comparison is made with images interpreted originally to support a cluster model with tenfold symmetry . These contradictions only reflect the well–known fact that there is no unique relation between an HRTEM image and a structure model (even in the case of simple crystalline structures). Unfortunately, literature data do not provide a complete set of microscope parameters and imaging conditions which would be necessary to compare with image simulations of the CW model. Therefore, a more quantitative comparison of the CW model with standard HRTEM images has to be left for some future research. Meanwhile, another investigation using the same material and the same method has been published . This time, the resolution of the microscope was sufficient to resolve the zigzag chains of TM atoms (called “buckled columns” in this reference), thereby giving further evidence for the correctness of the CW approach and refuting the model of Steinhardt et al. . According to the study of Yan et al. , the material is composed from two types of clusters, one with the characteristic triangular pattern in the center, the other where the decagonal symmetry is retained even in the central part. The latter would contradict both the CW and the Steinhardt et al. model. However, it should not be forgotten that all HRTEM images inherently project all atomic positions along the direction of observation. The second type of cluster can be easily explained by assuming clusters of the first type, but stacked on top of each other rotated by multiples of 36. What has been called chemical disorder could also be interpreted as stacking disorder. These rotations, if acting in an ordered way, could also explain superstructures along the decagonal axis. Indeed, higher periods of 0.8, 1.2 and 1.6 nm were experimentally observed . If acting in a random way, they constitute a source of entropy. As every structure determination, whether by HRTEM or by Patterson analysis, contains an averaging, it also can be suggested why the majority of models contain a cluster of perfect decagonal symmetry . 
### 3.2 Further comparison So far, the comparison was focussed only on the atomic positions projected along the decagonal axis. In the following, other features will be checked. The experimentally investigated phase contains a $`10_5`$ screw axis which is correctly reproduced by both models of CW and Steinhardt et al. Steinhardt et al. calculate a composition of Al<sub>72.5</sub>Co<sub>8.9</sub>Ni<sub>18.6</sub> for their model, which is equal to the experimental composition within the error margins. Starting from the composition Al<sub>64.8</sub>Co<sub>19.6</sub>Cu<sub>15.6</sub> of the CW model and bearing in mind that Cu substitutes for Al and Co in equal proportions, we obtain a pseudo–binary Al<sub>72.6</sub>TM<sub>27.4</sub> phase, which also compares remarkably well with the composition Al<sub>72</sub>TM<sub>28</sub> of the experimental studies . However, the Monte–Carlo simulations of binary Al–Co would suggest a quite different structure for the decagonal phase of such a composition . The quasicrystal of Saitoh et al. also does not correspond to the basic Co–rich but to the basic Ni–rich phase. Both variants are connected by several superstructures along the Al<sub>72</sub>Co<sub>x</sub>Ni<sub>28-x</sub>–line . As Co and Ni would be treated similarly by the Monte Carlo approach, one would not expect any variations with $`x`$. Therefore, the rôle of transition metals seems to be more subtle. It should be admitted, however, that there remains a major aspect of the CW model which still awaits experimental verification, namely the alternation of the two different TM atoms along a zigzag chain. As Co and Cu or Ni cannot be distinguished by electron microscopy or standard x–ray techniques, this test will be rather difficult to perform. Some caution should be applied in generalizing the results obtained on the above composition to “the” decagonal phase. It is by now commonly accepted that even in one alloy system different variants exist . Their structures might be more different than previously assumed. In this respect it is interesting to recall the neutron scattering experiments performed on Al<sub>72.5</sub>Co<sub>11</sub>Ni<sub>16.5</sub> exhibiting superstructure reflections which, however, were concluded not to be due to a Co/Ni ordering but a true structural feature . ### 3.3 Thermodynamical considerations Experimentally, the basic–Ni decagonal structure was observed to be stable only as a high–temperature phase , indicating an entropic contribution which stabilizes the structure. On the other hand, the prototile approach was advanced mainly to provide a plausible mechanism for energetic stabilization via maximizing the density of clusters assumed to have minimum energy . Therefore, the prototile idea lacks experimental support also from the thermodynamic viewpoint. Although the CW model was also derived by energetic arguments, it might be more flexible to accomodate random components. The concept of random tilings is able to provide an entropic term for stability. However, a recent tiling analysis showed the high–temperature basic Ni–rich phase to be a perfect Penrose tiling. (This contrasts to a variant with superlattice ordering where even single phasons could be identified by HRTEM .) The evident entropic term was attributed to chemical disorder without specifying its exact origin. 
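The pseudo-binary conversion quoted in Sect. 3.2 above is a one-line calculation: with Cu substituting for Al and Co in equal proportions, half of the Cu content is counted with Al and half with the TM atoms. A quick check, using only the numbers given in the text:

```python
al, co, cu = 64.8, 19.6, 15.6            # CW model composition Al-Co-Cu (at.%)
al_eff = al + cu / 2                     # Cu counted half as Al ...
tm_eff = co + cu / 2                     # ... and half as transition metal
print(f"Al{al_eff:.1f}TM{tm_eff:.1f}")   # Al72.6TM27.4, to be compared with Al72TM28
```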
In the following some mechanisms will be listed which can introduce disorder easily and in a into the CW model without changing the underlying tiling: i) Switching between the two mirror–related variants of the boat tile, or rotating the star tile which also results in non–equivalent atomic configurations due to the symmetry breaking of the decoration; ii) rotating the 2 nm cluster, as mentioned above; iii) exchange of Co and Cu in the zigzag chains. ## 4 Summary An algorithm is presented which transforms a covering with decagonal prototiles and the overlap rules of Gummelt into an HBS tiling. The atomic decoration of an HBS tiling proposed Cockayne and Widom, although it resembles the Gummelt decoration very much, contains features which do not allow it to be interpreted as a reasonable realization of such a covering in the strict sense. Only if minor variations in the constituting decagonal cluster are allowed, the approaches could be reconciled. Experimental investigations render support to the major features of the model of Cockayne and Widom, but contradict the competing proposal of Steinhardt et al. Tab. 1 contains a brief overview of the comparison. Chemical disorder has been proposed as a means to obtain an entropic stabilization which is suggested by the experimentally determined phase diagram. ## Acknowledgment The author gratefully thanks Eric Cockayne for fruitful discussions and valuable comments. The project was supported by the Deutsche Forschungsgemeinschaft (grant no. Wi1645/1). ## Table | | Cockayne and | Steinhardt | Burkov | Saitoh | Yan | | --- | --- | --- | --- | --- | --- | | | Widom (CW) | et al. | | et al. | et al. | | construction | HBS tiling | covering | covering / | | | | | | | tiling | | | | central atom | no | no | yes | no | no | | tenfold symmetry | | | | | sometimes | | in cluster center | broken | broken | perfect | broken | broken | | zigzag chains | | | | | | | of TM atoms | yes | no | no | not resolved | yes | | Co/Cu ordering | yes | yes | no | ? | ? | | screw axis | $`10_5`$ | $`10_5`$ | $`10_5`$ | $`10_5`$ | $`10_5`$ | | composition | Al<sub>64.8</sub>Co<sub>19.6</sub>Cu<sub>15.6</sub> | Al<sub>72.5</sub>Co<sub>8.9</sub>Ni<sub>18.6</sub> | Al<sub>60</sub>TM<sub>40</sub> | Al<sub>72</sub>Co<sub>8</sub>Ni<sub>20</sub> | Al<sub>72</sub>Co<sub>8</sub>Ni<sub>20</sub> | | | = Al<sub>72.6</sub>TM<sub>27.4</sub> | | | | | | stabilisation | energetic | energetic | | (entropic ) | (entropic ) | Tab. 1: Comparison of special features of the different structure models (left part), especially those suitable for comparison with the experimental results (right part). ## Figure captions Fig. 1: Part of an HBS tiling corresponding to a type B overlap of two decagonal clusters. The boundaries of the original decagonal clusters are indicated by broken lines. Fig. 2: Atomic cluster of 2 nm diameter taken from ref. (fig. 2) and formed by a boat and two hexagons of the CW model. The Al atoms at the vertices of the HBS tiling and the few Al atoms marked by a cross are located at $`z=0\text{ or }0.5`$ whereas all other atoms occupy positions at $`z=0.25\text{ or }0.75`$ (in units of the $`c`$–axis periodicity). The cluster is superposed on the Gummelt decoration (indicated by thick lines) of a decagonal prototile, showing large similarities. The relevance of the atoms designed by numbers 1 to 6 is discussed in the text. Fig. 1 Fig. 2
no-problem/9903/astro-ph9903374.html
ar5iv
text
# Turbulence in differentially rotating flows ## 1 Introduction Differential rotation is observed in various astrophysical objects, from planets to galaxies, and one suspects that it gives rise to turbulence, since shear flows are liable to hydrodynamical and MHD instabilities. When these instabilities are of the linear type, they are relatively easy to study by slightly perturbing the equilibrium state. But some of them occur only at finite amplitude, in which case the answer must be sought in computer simulations or laboratory experiments, with their inherent limitations. There has been some debate recently on whether a keplerian disk, which is linearly stable (Rayleigh 1916), may be unstable to finite amplitude perturbations. It may look as if this question presents little interest, since it has been proved that a very weak magnetic field suffices to render such a disk linearly unstable (Chandrasekhar 1960; Balbus and Hawley 1991). However the properties of angular momentum transport depend sensitively on which instability dominates in the considered regime, and a finite amplitude instability can overpower the linear instability which is the first to occur, as the relevant control parameter increases. One example is the Couette-Taylor flow, with the outer cylinder at rest and the inner cylinder rotating with angular velocity $`\mathrm{\Omega }`$. When $`\mathrm{\Omega }`$ is increased, the transport of angular momentum first scales as $`\mathrm{\Omega }^{3/2}`$, but thereafter it varies as $`\mathrm{\Omega }^2`$, once the flow has become fully turbulent (Taylor 1936), as if the initial linear instability were superseded by a stronger shear instability (see also Lathrop et al. 1992). By extrapolating the $`\mathrm{\Omega }^{3/2}`$ law to high $`\mathrm{\Omega }`$ one would clearly underestimate the transport. In the present article, we take as a working assumption that any differentially rotating flow experiences, at high Reynolds number, the turbulent regime observed in the Couette-Taylor (CT) experiment, and that this turbulence will then transport angular momentum in the same way as in that experiment. The CT flow has been chosen as reference because it is the simplest flow to realize in the laboratory, with both shear and rotation that can be varied independently. We examine whether the experimental data suggest a prescription for the angular momentum transport, which may be used to model astrophysical objects. A similar approach has been taken by Zeldovich (1981), but our conclusions will differ from his (see Appendix). ## 2 The Couette-Taylor experiment The CT experimental apparatus consists of two coaxial rotating cylinders of radius $`R_1`$ and $`R_2`$ separated by a gap $`\mathrm{\Delta }R=R_2-R_1`$, which is filled with a fluid of viscosity $`\nu `$. The cylinders can rotate with different angular velocities $`\mathrm{\Omega }_1`$ and $`\mathrm{\Omega }_2`$; their height is in general much larger than their radius, to minimize the effect of the boundaries. The Reynolds number is usually defined in terms of the differential rotation $`\mathrm{\Delta }\mathrm{\Omega }=|\mathrm{\Omega }_2-\mathrm{\Omega }_1|`$ and by taking the gap width as characteristic length: $$Re=\frac{\mathrm{\Delta }\mathrm{\Omega }R\mathrm{\Delta }R}{\nu },$$ (1) where $`R`$ is the mean radius $`R=(R_1+R_2)/2`$.
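For later reference, definition (1) is trivial to evaluate; the following sketch (the numerical values are illustrative placeholders, not data from the experiments discussed here) computes $`Re`$ for a given apparatus:

```python
def reynolds_number(delta_omega, r1, r2, nu):
    """Reynolds number of eq. (1): Re = delta_Omega * R * delta_R / nu,
    with R the mean radius and delta_R the gap width."""
    r_mean = 0.5 * (r1 + r2)   # R = (R1 + R2) / 2
    delta_r = r2 - r1          # delta_R = R2 - R1
    return delta_omega * r_mean * delta_r / nu

# Illustrative placeholder values only (SI units), water-like viscosity:
print(reynolds_number(delta_omega=10.0, r1=0.10, r2=0.14, nu=1.0e-6))
```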
### 2.1 Critical parameters and transition to turbulence When the inner cylinder is rotating and the outer one is at rest, angular momentum decreases outward and the flow is linearly unstable for Reynolds numbers higher than $`Re_c=41.2\sqrt{R/\mathrm{\Delta }R}`$ (for narrow gap, cf. Taylor 1923). This well known instability takes first the shape of steady toroidal, axisymmetric cells (the Taylor vortices); it is very efficient in transporting angular momentum, whose gradient is strongly reduced. At higher $`Re`$ a wavy pattern appears and after a series of bifurcations the flow becomes fully turbulent. This case has been studied by many experimental teams, and it is extremely well documented (see Andereck et al. 1986). It has also been modeled successfully in three-dimensional numerical simulations (Marcus 1984; Coughlin & Marcus 1996). In the opposite case, when the outer cylinder is rotating and the inner one is at rest, the angular momentum increases outward and the flow is linearly stable. The only theoretical prediction concerning the non-linear behavior is that by Serrin (1959), later refined by Joseph and Munson (1970), who established that the flow is stable against finite amplitude perturbations below $`Re=2\pi ^2`$ (in the narrow gap limit). To this date, no numerical simulation has been able to demonstrate the finite amplitude instability. But this instability does occur in the laboratory, and it has been described already by Couette (1890). It was studied in detail by Wendt (1933) and Taylor (1936), who showed that for Reynolds numbers exceeding $`Re_c\mathrm{2\hspace{0.17em}10}^3`$, the flow becomes unstable and immediately displays turbulent motions. The critical Reynolds number depends on whether the angular speed is increased or decreased in the experiment, a typical property of finite amplitude instabilites. Moreover, it is sensitive to gap width, as demonstrated by Taylor. Figure 1 displays results from Wendt and Taylor: $`Re_c`$ is roughly constant below $`\mathrm{\Delta }R/R=1/20`$, but above it increases as $`(\mathrm{\Delta }R/R)^2`$, as was already noticed by Zeldovich (1981), a behavior for which an explanation was proposed by Dubrulle (1993). In the latter regime one can define another critical Reynolds number $`Re_c^{}`$ involving, instead of gap width, the gradient of angular velocity; since $$Re_c=\frac{R^3}{\nu }\frac{\mathrm{\Delta }\mathrm{\Omega }}{\mathrm{\Delta }R}\left(\frac{\mathrm{\Delta }R}{R}\right)^2=Re_c^{}\left(\frac{\mathrm{\Delta }R}{R}\right)^2,$$ (2) the instability condition becomes $$Re^{}=\frac{R^3}{\nu }\frac{\mathrm{\Delta }\mathrm{\Omega }}{\mathrm{\Delta }R}Re_c^{}\mathrm{6\hspace{0.17em}10}^5.$$ (3) We see that two conditions must be satisfied for the finite amplitude instability to occur: the first $`ReRe_c`$ is the classical criterion of shear instability, valid also for plane parallel flows, whereas the second $`Re^{}Re_c^{}`$, involving what we shall call the gradient Reynolds number, is genuine to differential rotation. In addition, to trigger the instability the strength of the perturbation must exceed a certain threshold, which presumably also depends on $`Re`$ or $`Re^{}`$. ### 2.2 Transport of angular momentum In the turbulent regime, the torque measured by Taylor scales approximately as $`G(\mathrm{\Omega }_2)^n`$ for a given gap width, where the exponent $`n`$ tends to 2 for large $`\mathrm{\Omega }_2`$. The measurements made by Wendt confirm that scaling with $`n2`$. 
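The two thresholds discussed above, the linear one $`Re_c=41.2\sqrt{R/\mathrm{\Delta }R}`$ for the case with the inner cylinder rotating and the finite amplitude condition $`Re^{}\ge Re_c^{}\simeq \mathrm{6\hspace{0.17em}10}^5`$ on the gradient Reynolds number, can be checked side by side. The sketch below is purely illustrative; the apparatus values are placeholders and only the critical numbers quoted in the text enter:

```python
def linear_threshold(r_mean, delta_r):
    """Narrow-gap critical Reynolds number Re_c = 41.2 * sqrt(R / delta_R)
    for the linearly unstable case (inner cylinder rotating, outer at rest)."""
    return 41.2 * (r_mean / delta_r) ** 0.5

def gradient_reynolds(r_mean, delta_omega, delta_r, nu):
    """Gradient Reynolds number of eq. (3): Re* = R**3 * (delta_Omega / delta_R) / nu."""
    return r_mean ** 3 * (delta_omega / delta_r) / nu

RE_STAR_CRIT = 6.0e5   # critical value quoted in eq. (3)

# Placeholder apparatus (not one of the historical experiments):
r1, r2, nu, domega = 0.10, 0.14, 1.0e-6, 5.0
r_mean, delta_r = 0.5 * (r1 + r2), r2 - r1
re_star = gradient_reynolds(r_mean, domega, delta_r, nu)
print("Re_c =", linear_threshold(r_mean, delta_r))
print("Re*  =", re_star, "finite-amplitude criterion met:", re_star >= RE_STAR_CRIT)
```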
This scaling suggests that the transport of angular momentum may be considered as a diffusive process, and that the mean turbulent viscosity $`\overline{\nu }_t`$ increases linearly with $`\mathrm{\Omega }_2`$, or $`\mathrm{\Delta }\mathrm{\Omega }`$. It is then natural to examine whether this viscosity may be expressed as $$\overline{\nu }_t=\alpha R\mathrm{\Delta }\mathrm{\Omega }\mathrm{\Delta }R$$ (4) with $`\alpha `$ being a constant of order unity, since the largest turbulent eddies would have a size $`\mathrm{\Delta }R`$ and a peripheral velocity $`R\mathrm{\Delta }\mathrm{\Omega }/2`$. The parameter $`\alpha `$ is easily derived from the torque measurements, and the surprising result is that it decreases with gap width (fig. 2). For the smallest gaps, $`\alpha `$ scales as the inverse of $`\mathrm{\Delta }R/R`$, but the slope steepens farther as if the scaling would tend asymptotically to $$\alpha \propto \left(\frac{\mathrm{\Delta }R}{R}\right)^{-2}\quad \text{for}\quad \left(\frac{\mathrm{\Delta }R}{R}\right)\to 1.$$ (5) (Comparing Wendt’s experimental data of his two largest gaps, one deduces an exponent $`\delta \mathrm{ln}\alpha /\delta \mathrm{ln}(\mathrm{\Delta }R/R_2)=-1.83.`$) We may therefore conclude that, in the limit of large gap, the mean turbulent viscosity actually scales as $$\overline{\nu }_t=\alpha \left(\frac{\mathrm{\Delta }R}{R}\right)^2R^3\frac{\mathrm{\Delta }\mathrm{\Omega }}{\mathrm{\Delta }R}=\beta ^{}R^3\frac{\mathrm{\Delta }\mathrm{\Omega }}{\mathrm{\Delta }R},$$ (6) with $`\beta ^{}\simeq \mathrm{4\hspace{0.17em}10}^{-6}`$. This strongly suggests that the local value of the turbulent viscosity is then independent of gap width, and that it is determined only by the local shear: $$\nu _t=\beta r^3\left|\frac{d\mathrm{\Omega }}{dr}\right|,$$ (7) $`r`$ being the radial coordinate. In principle, one should be able to verify this prescription for $`\nu _t`$ by examining the rotation profiles measured by Taylor and Wendt. According to (7), the conservation of angular momentum requires that its flux $`F`$, given by $$F=\left[\nu +\left|\beta r^3\frac{d\mathrm{\Omega }}{dr}\right|\right]r^2\frac{d\mathrm{\Omega }}{dr},$$ (8) varies as $`1/r`$ between the cylinders. Therefore $`r^3(d\mathrm{\Omega }/dr)`$ should be constant in the turbulent part of the profile (as it is in the laminar flow). But this constancy can be expected only if the transport of angular momentum is achieved by the viscous and turbulent stresses alone. That is not the case in Taylor’s experiment: as acknowledged by him, an Ekman circulation is induced by the ends of his apparatus, although he tries to minimize the boundary effects by choosing a large aspect ratio (height/radius). Moreover his rotation profiles are deduced from pressure measurements made with a Pitot tube located at half height of the cylinders, where the radial return flow has its maximum intensity. Consequently, the torque inferred from these profiles is about half of that measured directly at the inner cylinder. The aspect ratio is less favorable in Wendt’s experiment, but there the top boundary is a free surface, and most of his measurements have been made with the bottom boundary split in two annuli, attached respectively to the inner and the outer cylinders, which reduces drastically the circulation and renders his results more reliable.
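A direct numerical transcription of the local prescription (7) is straightforward; the sketch below simply evaluates it for a power-law rotation profile (the profile parameters are placeholders, while the default $`\beta `$ anticipates the value derived below from Wendt’s torque measurement):

```python
def turbulent_viscosity(r, domega_dr, beta=1.5e-5):
    """Local prescription of eq. (7): nu_t = beta * r**3 * |dOmega/dr|.
    The default beta is the value obtained below from Wendt's torque data, eq. (9)."""
    return beta * r ** 3 * abs(domega_dr)

# Example with a power-law profile Omega(r) = Omega0 * (r / r0) ** (-q),
# so that dOmega/dr = -q * Omega / r (all numbers are placeholders).
omega0, r0, q = 10.0, 0.12, 1.5
r = 0.12
omega = omega0 * (r / r0) ** (-q)
print(turbulent_viscosity(r, -q * omega / r))
```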
We examined Wendt’s rotation profiles obtained for the largest gap width ($`\mathrm{\Delta }R/R=0.38`$) and with 4 different speeds of the inner cylinder ($`0\le \mathrm{\Omega }_1\le \mathrm{\Omega }_2/2`$); we found that in the bulk of the fluid, these profiles are compatible with the constancy of the gradient Reynolds number, as predicted by (8). However we cannot rule out a mild variation of $`\beta `$ within the profile. Unfortunately, Wendt gives the torque only for the case where the inner cylinder is at rest; we draw from it the following value of the coefficient $`\beta `$: $$\beta =1.5\pm \mathrm{0.5\hspace{0.17em}10}^{-5}.$$ (9) The estimated uncertainty reflects that of the velocity measurements: $`\beta `$ results from second derivation of the shape of the fluid surface. ## 3 Are keplerian flows unstable? More precisely, the question we address is whether the Couette flow is unstable when the angular velocity decreases outwards and the angular momentum increases outwards, as in keplerian flow. There is no definite answer yet, because this regime has not been explored at high enough Reynolds number. But some information can be gleaned from Wendt’s study. He reports the results of experiments where the two cylinders rotate such that $`\mathrm{\Omega }_2R_2^2=\mathrm{\Omega }_1R_1^2`$. At low Reynolds number, this setup enforces a laminar flow of constant angular momentum (neutral flow), but at high Reynolds number this flow becomes turbulent for two of the three gaps used by Wendt. The angular velocity and angular momentum profiles for one of these turbulent flows are reproduced in fig.3. (According to Wendt’s data, $`\mathrm{\Omega }_2R_2^2`$ actually exceeds $`\mathrm{\Omega }_1R_1^2`$ by half a percent.) The profiles clearly demonstrate the flow instability, with angular momentum being transported down the angular velocity gradient, which becomes somewhat flatter far enough from the boundaries, whereas the angular momentum profile steepens substantially. In the turbulent bulk of the flow $`q=-d\mathrm{ln}\mathrm{\Omega }/d\mathrm{ln}r\simeq 1.4`$, compared to the initial $`q=2`$. We recall that $`q=1.5`$ in keplerian flow, and that in the numerical simulations performed by Balbus et al. (1996, 1998), the instability is lost already at about $`q=1.95`$. A crude estimate of the parameter $`\beta `$ in the viscosity formulation (7) indicates that the size of the turbulent eddies is much smaller than the gap width: $`\ell =\sqrt{\beta }R\simeq \mathrm{\Delta }R/100`$. The corresponding values of $`Re^{}`$ for the three experiments are reported in fig.4 together with the critical line of fig.1. They are located respectively above and below this line for the unstable and stable flows. The data are too scarce to locate precisely the critical line of these “neutral” flows, but we can conclude that the critical gradient Reynolds number $`Re^{}`$ then lies between $`\mathrm{2\hspace{0.17em}10}^5`$ and $`\mathrm{6\hspace{0.17em}10}^5`$. ## 4 Discussion The firmest result of our analysis of the Couette-Taylor experiment is that the criterion for finite amplitude instability may be expressed in terms of the gradient Reynolds number $`Re^{}=R^3(\mathrm{\Delta }\mathrm{\Omega }/\mathrm{\Delta }R)/\nu `$, where the critical Reynolds number $`Re_c^{}\simeq \mathrm{6\hspace{0.17em}10}^5`$ is independent of the width of the gap between cylinders, for wide enough gap.
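The eddy-size estimate quoted in Sect. 3 is easy to reproduce from the value of $`\beta `$ in eq. (9); the check below uses only numbers already given in the text:

```python
import math

beta = 1.5e-5        # eq. (9)
gap_ratio = 0.38     # largest gap studied by Wendt, delta_R / R

print(math.sqrt(beta))        # eddy size l = sqrt(beta) * R, in units of R: ~3.9e-3
print(gap_ratio / 100.0)      # delta_R / 100, in units of R:               ~3.8e-3
```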
The turbulent transport of angular momentum then seems also to be independent of gap width; it proceeds always down the angular velocity gradient, as confirmed by the behavior of the initially “neutral” flows examined by Wendt. Though the experimental evidence is somewhat less compelling, we have established empirically an expression which links the turbulent viscosity to the local shear. The value of $`Re_c^{}`$, and that of $`\beta `$ in eq.(7), have been derived from Wendt’s experiment with the inner cylinder at rest, and it is not obvious that these parameters would be the same for different ratios of cylinder speeds. Also, the linear scaling $`\nu _t/\nu =\beta Re^{}`$ may be valid only for those moderate gradient Reynolds numbers which could be reached in the laboratory. Nevertheless, it is tempting to apply this expression (7) to accretion disks, as an alternative to the commonly used prescription $`\nu _t=\alpha c_sH`$, where $`H`$ is the scale height of the disk and $`c_s`$ the local sound speed (Shakura & Sunyaev 1973). Some caution is required because this viscosity has been derived from experiments performed with an incompressible fluid. Moreover (7) implies that the eddies which dominate in the transport of angular momentum have a size of order $`\ell \simeq \sqrt{\beta }r`$, independent of the strength of the local shear, and that their velocity is of order $`\ell rd\mathrm{\Omega }/dr`$. When applying this prescription to a compressible flow, one has to make sure that this velocity is smaller than the sound speed and, in the case of an accretion disk, that the size $`\ell `$ of the eddies, which are three-dimensional, does not largely exceed the scale height $`H`$. The behavior of “neutral” flows demonstrates that the shear instability always transports angular momentum down the angular velocity gradient, which means outward for accretion disks. Note that in a keplerian disk our expression is equivalent to $$\nu _t=\beta ^{}r^2\mathrm{\Omega }\quad \text{with}\quad \beta ^{}=\frac{3}{2}\beta .$$ (10) Such a prescription has been suggested originally by Lynden-Bell and Pringle (1974), and recently it was used again by Duschl et al. (1998). As a test, it is being applied to the modelling of accretion discs in active galactic nuclei (Huré & Richard 1999). The reader may wonder why we have only used experimental results dating from the thirties, namely those of Wendt (1933) and Taylor (1936). The reason is that no one, since them, has studied to such an extent the regime of outward increasing angular momentum.<sup>1</sup><sup>1</sup>1For instance, the Reynolds numbers explored by Coles (1965) are one order of magnitude lower than those reached by Wendt, which explains why he did not encounter the finite amplitude instability of neutral flows. We suspect that it is because the flow then becomes turbulent at once, without undergoing a series of bifurcations associated with enticing patterns. But we hope that experimentalists will turn again to this classical problem, which is of such great interest for geophysical and astrophysical fluid dynamics, and that they will explore the rotation regimes for which the data are so incomplete. In the meanwhile, the quest will continue to detect the finite amplitude instability in computer simulations. ###### Acknowledgements. We thank our anonymous referee for his sharp remarks, which incited us to strengthen our case and to dig even deeper into Wendt’s experimental data. References Andereck C.D., Liu S.S., Swinney H.L. 1986, J. Fluid Mech.
164, 155 Balbus S.A., Hawley J.F 1991, ApJ 376, 214 Balbus S.A., Hawley J.F., Stone J.M. 1996, ApJ 467, 76 Balbus S.A., Hawley J.F., Winters W.F. 1998, submitted to ApJ (astro-ph/9811057) Chandrasekhar S. 1960, Proc. Nat. Acad. Sci. 46, 53 Coles D. 1965, J. Fluid Mech. 21, 385 Coughlin K., Marcus P. 1996, Phys. Rev. Letters. 77, 2214 Dubrulle B. 1993, Icarus 106, 59 Duschl W.J., Strittmatter P.A., Biermann P.L. 1998, AAS Meet. 192, #66.17 Huré J.-M., Richard D. 1999 (preprint) Joseph D.D., Munson B.R. 1970, J. Fluid Mech. 43, 545 Kato S., Yoshizawa A., 1997, Publ. Astron. Soc. Japan, 49, 213 Lathrop D.P., Fineberg J., Swinney H.L. 1992, Phys. Rev. Letters., 68, 1515 Lynden-Bell D., Pringle J.E. 1974, MNRAS 168, 603 Marcus P.S. 1984, J. Fluid Mech. 146, 45 & 65 Rayleigh, Lord 1916, Proc. Roy. Soc. London A 93, 148 Serrin J. 1959, Arch. Ration. Mech. Anal. 3, 1 Shakura N.I., Sunyaev R.A. 1973, A&A 24, 337 Taylor G.I. 1923, Phil. Trans. Roy. Soc. London A 223, 289 Taylor G.I. 1936, Proc. Roy. Soc. London A 157, 546 Wendt F., 1933, Ing. Arch. 4, 577 Zeldovich Y.B. 1981, Proc. Roy. Soc. London A 374, 299 Appendix On Zeldovich’s analysis of Taylor’s results. Zeldovich (1981) had a similar goal in mind when he analyzed the results of Taylor’s experiment. But he started from the idea that the turbulent flow was governed by the epicyclic frequency $`N_\mathrm{\Omega }`$ and the turnover frequency $`\omega `$, where $$N_\mathrm{\Omega }^2=\frac{1}{r^3}\frac{d}{dr}(r^2\mathrm{\Omega })^2\text{and}\omega ^2=\left(r\frac{d\mathrm{\Omega }}{dr}\right)^2,$$ since they measure respectively the stability of the flow and the strength of the shear. He defines a non-dimensional parameter $`Ty=N_\mathrm{\Omega }^2/\omega ^2`$, akin to the Richardson number used in stratified shear flow, and he seeks the confirmation of his intuition that $`Ty`$ be constant in Taylor’s turbulent rotation profiles, in which case the angular velocity would obey a power law $$\mathrm{\Omega }r^q.$$ His best fit yields $`q=5.5`$. However these profiles are contaminated by the Ekman circulation mentioned above, and they differ marquedly from those obtained by Wendt. A consequence of this constant $`Ty`$ would be that the parameter $`\alpha `$ in (4) would vary as $$\alpha \left(1\frac{\mathrm{\Delta }R}{R}\right)^{2q+4}=\left(1\frac{\mathrm{\Delta }R}{R}\right)^{15}$$ a property which is not substantiated by the combined results of Taylor and Wendt, as can be seen in Fig. 2.
no-problem/9903/math9903055.html
ar5iv
text
# Untitled Document ON THE $`z`$-DEGREE OF THE KAUFFMAN POLYNOMIAL OF A TANGLE DECOMPOSITION Mark E. Kidwell and Theodore B. Stanford Mathematics Department United States Naval Academy 572 Holloway Road Annapolis, MD 21402 mek@nadn.navy.mil stanford@nadn.navy.mil 0. Introduction. In 1987, the elder author produced an upper bound on the degree of the then-new Brandt-Lickorish-Millett-Ho polynomial in terms of the crossing number and the length of the longest bridge in a link diagram. This result extends immediately to the $`z`$-degree of the two-variable Kauffman polynomial (in any of its forms; we shall use the Dubrovnik version $`D(\lambda ,z)`$). The length of the longest bridge (that is, the longest consecutive string of overcrossings) in a diagram represents some measure of how nonalternating the diagram is. For a fixed crossing number, the greatest $`z`$-degree will occur when the diagram is prime and alternating, which is when the longest bridge (and every bridge) has length $`1`$. A bridge of length $`>1`$ contributes to a quicker unravelling of the link under the skein relations, and lowers the $`z`$-degree. More precisely, it is shown in that the $`z`$-degree of the polynomial is less than or equal to $`N-B`$, where $`N`$ is the crossing number and $`B`$ is the length of the longest bridge in a given link diagram. It is furthermore shown that if the link diagram is prime, reduced, and alternating, then the $`z`$-degree is $`N-1`$. For a diagram with more than one long bridge, one would like to find an inequality that considers more than just the single longest bridge. For example, it was pointed out by Thistlethwaite that if a link diagram is composite, then the longest bridge in each factor counts toward lowering the $`z`$-degree of $`D`$. On the other hand, there are numerous examples (such as $`8_{19}`$–$`8_{21}`$ from the table in Rolfsen ) of non-alternating diagrams with $`N`$ crossings and two or more bridges of length $`2`$ and with $`z`$-degree (of the Kauffman polynomial) of $`N-2`$. Thus it is necessary to find some method of keeping the bridges separate from each other while computing the skein tree. We will accomplish this by cutting the link diagram into tangles, and considering the longest bridge in each tangle. We obtain an upper bound on the $`z`$-degree in terms of the number of crossings in the diagram, and the lengths of each of these longest separated bridges. The Dubrovnik polynomial of a tangle may be defined as a linear combination, over an appropriate ring, of simple tangles. We bound the $`z`$-degree of the polynomials in this linear combination in terms of the crossing number and length of the longest bridge in each tangle. Our result for links follows by closing up the tangles. Yokota has shown that if $`L`$ is represented by a reduced alternating diagram with $`n`$ crossings, then the span of the Kauffman polynomial in the other variable ($`\lambda `$ or $`a`$) is equal to $`n`$. Our result concerns not the span of $`z`$ but the degree. (The highest negative power of $`z`$ that occurs is always one less than the number of components in the link.) Acknowledgement: The authors would like to thank Joan Birman for her indelible contributions to their careers. 1. The Bound. By a tangle we mean a planar rectangular diagram, with overcrossings and undercrossings labeled in the usual way, containing any number of closed “circle” components and exactly two “arc” components. Two of the arc endpoints are on the top of the rectangle and two on the bottom.
Equivalence of tangles is up to regular isotopy. A bridge is a maximal segment of a tangle containing no undercrossings. The length of a bridge is the number of crossings at which it overcrosses. Following Morton and Traczyk , we shall work over the ring $`\mathrm{\Lambda }^{}`$ generated over $`Z`$ by $`\lambda ^{\pm 1},z`$, and $`\delta `$ with the single relation $$\lambda ^{-1}-\lambda =z(\delta -1)$$ $`(*)`$ At the end of our calculations, we will use this relation to eliminate $`\delta `$ at the expense of introducing $`z^{-1}`$ (but not lowering the $`z`$-degree). The module $`M_2`$ is defined to be the set of all $`\mathrm{\Lambda }^{}`$-linear combinations of tangles modulo the following local relations, where $`T\sqcup \mathrm{unknot}`$ means the addition of a single unknotted and unlinked circle component to $`T`$, and the meaning of the rest of the symbols is indicated in Figure 1. (i) $`T^+-T^{-}=z(T^0-T^{\mathrm{\infty }})`$ (ii) $`T^{\mathrm{right}}=\lambda ^{-1}T`$; $`T^{\mathrm{left}}=\lambda T`$ (iii) $`T\sqcup \mathrm{unknot}=\delta T`$ $`T^+`$ $`T^{-}`$ $`T^0`$ $`T^{\mathrm{\infty }}`$ $`T^{\mathrm{left}}`$ $`T^{\mathrm{right}}`$ Figure 1 Morton and Traczyk prove that the tangles called $`P`$, $`Q`$, and $`R_1`$ in Figure 2 form a free $`\mathrm{\Lambda }^{}`$-basis for $`M_2`$. We find it easier to control the $`z`$-degree by working with $`P,Q,R_1`$, and $`R_2`$. For $`f\in \mathrm{\Lambda }^{}`$, we understand the $`z`$-degree of $`f`$ to be the minimum $`z`$-degree over all polynomials in the equivalence class of $`f`$. We shall also sometimes refer to the $`z`$-degree of the Kauffman polynomial of a link or diagram as simply the $`z`$-degree of the link or diagram. $`P`$ $`Q`$ $`R_1`$ $`R_2`$ Figure 2 As in , there are a number of bad situations that must be dealt with before we can proceed to the main argument. Call a bridge $`b`$ in a tangle $`T`$ improper if any of the following is true. Examples are shown in Figure 3. (a) The bridge $`b`$ consists of a full circle with length $`B>0`$. (b) One or both of the crossings at which the bridge ends also has $`b`$ as an overcrossing. (c) The bridge $`b`$ starts and ends at the same crossing and has length $`B>1`$. (d) The bridge begins and ends at an endpoint of the tangle and has length $`B>1`$. (a) (b) (c) (d) Figure 3 Lemma 1.1. If a given diagram of a tangle $`T`$ has $`N`$ crossings and contains an improper bridge $`b`$ of length $`B`$, then the diagram can be altered by type II and III Reidemeister moves to a diagram with $`N^{}`$ crossings and a bridge $`b^{}`$ of length $`B^{}`$ in such a way that $`N^{}<N`$ and $`N^{}-B^{}\le N-B`$. Proof: For improper bridges in (a), (b), and (c) above, the proof is identical to Lemma 0 in . In the case of an overcrossing arc, we may move the arc to the far right of the tangle, leaving at most one crossing on that arc. The two possible cases are shown in Figure 4. The reduction in crossing number is matched by a reduction in the length of the bridge. Figure 4 Theorem 1.2. Let $`T`$ be a tangle with $`N`$ crossings and a bridge of length $`B`$. Considered as an element of $`M_2`$, $`T`$ can be written as a $`\mathrm{\Lambda }^{}`$-linear combination $$f_1(z,\lambda ^{\pm 1},\delta )P+f_2(z,\lambda ^{\pm 1},\delta )Q+f_3(z,\lambda ^{\pm 1},\delta )R_1+f_4(z,\lambda ^{\pm 1},\delta )R_2$$ where the $`z`$-degree of each $`f_i`$ is less than or equal to $`N-B`$. Proof: The theorem is true for any tangle with crossing number $`0`$ or $`1`$ simply by replacing each disjoint circle with a $`\delta `$ and a loop with $`\lambda ^{\pm 1}`$.
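As a concrete illustration of the ring $`\mathrm{\Lambda }^{}`$ and of this base case, the following Python/SymPy sketch (the circle and loop counts are invented placeholders, not read off any figure) builds the coefficient $`\delta ^c\lambda ^w`$ of a crossingless tangle and then eliminates $`\delta `$ via relation (\*), confirming that no positive powers of $`z`$ appear:

```python
import sympy as sp

z, lam, delta = sp.symbols('z lambda delta')

# Relation (*):  lambda**(-1) - lambda = z * (delta - 1),
# hence  delta = (lambda**(-1) - lambda) / z + 1.
delta_eliminated = (1 / lam - lam) / z + 1

# Base case of Theorem 1.2: a crossingless tangle contributes delta**c * lambda**w,
# with c disjoint circles and w the signed number of right/left loops (placeholders below).
coeff = delta ** 2 * lam ** (-1)

coeff_no_delta = sp.expand(coeff.subs(delta, delta_eliminated))
print(coeff_no_delta)
# Eliminating delta introduces z**(-1) and z**(-2) terms but no positive power of z,
# so the z-degree of the coefficient is not raised, as claimed in the text.
```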
If the set of counterexamples to the theorem is nonempty, let $`T`$ be such a tangle diagram with minimal crossing number $`N`$; among all such tangles with minimal crossing number, let $`T`$ be one with maximal longest bridge length $`B`$. Let $`b`$ be one of the longest bridges in $`T`$. By Lemma 1.1, $`b`$ cannot be improper; thus it must have at least one endpoint $`c`$ in the interior of $`T`$. If we perform the skein operation (i) at $`c`$, the two smoothings $`T^0`$ and $`T^{\mathrm{\infty }}`$ will have smaller crossing number than $`T`$, while the new $`T^{\pm 1}`$ will have a longer bridge. None of these tangles can be counterexamples to the theorem. Thus the $`z`$-degree of any coefficient of $`T^0`$ or $`T^{\mathrm{\infty }}`$ is at most $`(N-1)-B`$, and the $`z`$-degree of any coefficient of the changed tangle $`T^{\pm 1}`$ is at most $`N-(B+1)`$. Thus there can be no coefficient of $`zT^{\mathrm{\infty }}`$ or $`zT^0`$ or $`T^{\pm 1}`$ with $`z`$-degree high enough for $`T`$ to be a counterexample to the theorem. If one should want to write $`T`$ as a $`\mathrm{\Lambda }^{}`$-linear combination of the basis tangles $`P,Q,R_1`$, one can do so at the expense of adding $`1`$ to the $`z`$-degree of the $`P`$ and $`Q`$ coefficients. Note that $`N-B=0`$ in each of the tangles $`P,Q,R_1`$, so performing a skein move to replace $`R_2`$ with $`zP-zQ+R_1`$ adds $`1`$ to the $`z`$-degree and accomplishes nothing as far as crossings and bridges are concerned. We now consider a wiring diagram in the plane as defined in Morton . The endpoints of several disjoint tangles are joined by non-crossing arcs. See for example Figure 5. One obtains a link by inserting a tangle into every box of a wiring diagram. Figure 5 Theorem 1.3. Let $`L`$ be a link diagram written as a wiring diagram with $`k`$ tangles $`\{T_i\}_1^k`$. Let tangle $`T_i`$ have $`N_i`$ crossings and a longest bridge of length $`B_i`$. Then the Dubrovnik polynomial $`D_L(\lambda ^{\pm 1},\delta ,z)`$ has $`z`$-degree less than or equal to $`k-1+{\displaystyle \sum _{i=1}^{k}}(N_i-B_i)`$. Proof: Apply the skein relations inside of each tangle $`T_i`$ to obtain a linear combination of $`P,Q,R_1`$, and $`R_2`$ with $`z`$-degree bounded by $`N_i-B_i`$. The result is that the Dubrovnik polynomial has been written as a $`\mathrm{\Lambda }^{}`$-linear combination of the polynomials of at most $`4^k`$ links, with each coefficient of the combination having $`z`$-degree at most $`\sum _{i=1}^k(N_i-B_i)`$. Each of these $`4^k`$ links has at most $`k`$ crossings and, unless it has no crossings at all, a bridge of length at least $`1`$. By the main theorem of , each link has a Dubrovnik polynomial with $`z`$-degree at most $`k-1`$. It is customary to use relation (\*) to replace the variable $`\delta `$ in $`\mathrm{\Lambda }^{}`$ with $`(\lambda ^{-1}-\lambda )z^{-1}+1`$. Since this replacement introduces no positive powers of $`z`$ and leaves one constant term, it has no effect on the $`z`$-degree of a polynomial in $`\mathrm{\Lambda }^{}`$. It is also irrelevant to the $`z`$-degree of $`D`$, whether one uses the regular isotopy invariant or the ambient isotopy invariant version of the polynomial, since these differ only by a power of $`\lambda `$. 2. Examples from Rational Tangles. Consider the closed chain $`L`$ represented by the two different diagrams in Figure 6.
The seven-component link $`L`$ must have crossing number at least $`14`$, since each component has linking number $`\pm 1`$ with exactly two other components and so each component must have at least four crossings. Therefore the diagrams in Figure 6 are minimal crossing diagrams. If we apply Theorem 1.3 to the top diagram, we find two bridges of length $`2`$ which can be separated into two separate tangles, and we therefore obtain an upper bound of $`14-2-1=11`$ for the $`z`$-degree of this link. However, the bottom diagram may be decomposed into three tangles, each containing a bridge of length $`2`$, and so we get an upper bound of $`14-3-1=10`$. The $`z`$-degree of $`L`$ is in fact $`10`$.
Figure 6
More generally, let $`L`$ be a chain similar to the one in Figure 6 but with $`p+q`$ components. Let $`p`$ be the number of positive linkages (consecutive components “positively” linked) and $`q`$ be the number of negative linkages. For example, the top diagram of Figure 6 has four positive linkages followed by three negative linkages, whereas the bottom diagram alternates between positive and negative. Arranging the linkages in positive-negative pairs, we find that Theorem 1.3 gives a bound of $`N-\mathrm{min}(p,q)-1`$ for the $`z`$-degree of $`L`$ (where $`\mathrm{min}(p,q)`$ is the smaller of the two numbers $`p`$ and $`q`$). This bound is in fact exact. However, grouping all the positive linkages together and all the negative linkages together, we get a bound of $`N-3`$, which is far from exact if $`\mathrm{min}(p,q)`$ is large. If $`p=0`$ or $`q=0`$, then $`L`$ is an alternating link, and $`N-\mathrm{min}(p,q)-1`$ reduces to the $`N-1`$ of . Consider the case $`p>q>0`$, so that $`N-\mathrm{min}(p,q)-1=N-q-1=2(p+q)-q-1=2p+q-1`$. Apply the skein relation (i) to one of the crossings in a positive linkage, with $`L=L^+`$. Then $`L^{-}`$ is a connected sum of $`p+q`$ Hopf links, and so has degree $`p+q-1<2p+q-1`$. $`L^{\mathrm{\infty }}`$ is isotopic (picking up a factor of $`\lambda `$, of course) to a chain with $`N-2`$ crossings, $`p-1`$ positive linkages, and $`q`$ negative linkages, so it has $`z`$-degree $`N-q-3`$. Applying the move in Figure 7, $`L^0`$ is isotopic (picking up $`\lambda ^{-1}`$) to a chain with $`p`$ positive linkages and $`q-1`$ negative linkages. Inductively, we see that the $`L^0`$ term has the largest $`z`$-degree, and that the $`z`$-degree of $`L`$ is therefore $`N-q-1`$.
Figure 7
The case $`p<q`$ follows similarly. The case $`p=q`$ is more complicated to do by this method because in applying the skein relation (i), one encounters the case that more than one of the three replacement terms have maximal degree. We will see that the formula holds in the $`p=q`$ case below. First, however, we want to point out that the chains described so far are all prime links, so that the reduction in $`z`$-degree obtained (reduction below the crossing number) is not just the connected sum effect (as mentioned in the introduction) in disguise. For suppose that such a link $`L`$ has a two-sphere $`S`$ which intersects $`L`$ in exactly two points. Those two points must be on the same component $`K`$. $`L-K`$ cannot be split, no matter what $`K`$ is (linking number considerations again), so every component of $`L-K`$ lies on the same side of $`S`$. On the other side of $`S`$ must be only a single arc of $`K`$, which cannot be knotted because none of the components of $`L`$ are individually knotted.
More examples may be obtained by considering chains where two consecutive components may be twisted together any even number of times. Such a chain with $`k`$ components may be indexed by $`k`$ nonzero even integers $`(m_1,m_2,\mathrm{\dots },m_k)`$. The integers describe the linking between consecutive components. For example, the diagram indexed by $`(2,4,4,2)`$ is shown in Figure 8. The same arguments used above show that such a diagram has minimal crossing number and represents a prime link. If the $`m_i`$ are permuted, then even if the represented link changes, the Kauffman polynomial does not, since permutations may be accomplished by mutations. Let $`p`$ be the number of positive $`m_i`$ (which correspond to positive linkages) and let $`q`$ be the number of negative $`m_i`$ (which correspond to negative linkages), so that $`k=p+q`$. If the positive and negative linkages are arranged in alternating fashion, then once again Theorem 1.3 gives us a bound of $`N-\mathrm{min}(p,q)-1`$. Again, we shall see that this bound is exact. Arranging the positive linkages all together, however, again gives a bound of $`N-3`$.
Figure 8
We now define $`R_1`$ to be a positive rational tangle. Moreover, we will say that any tangle built up out of a positive rational tangle $`T`$ and $`R_1`$ in either of the two ways shown in Figure 9 is also a positive rational tangle. We define a negative rational tangle similarly, building with $`R_2`$ instead of $`R_1`$. The term rational comes from Conway , and in fact all rational tangles are either positive or negative (in our sense), depending on whether their associated continued fractions (in Conway’s sense) represent positive or negative numbers. It is clear that a positive or negative rational tangle is alternating. A positive tangle that can be written as in Figure 9a will be called vertical, and one that can be written as in Figure 9b will be called horizontal. We define $`R_1`$ to be neither horizontal nor vertical. We define horizontal and vertical negative tangles similarly.
Figure 9: (a) a vertical tangle; (b) a horizontal tangle.
Another way of stating Theorem 1.2 is that any tangle may be written as a polynomial in $`z`$ of degree $`N-B`$, where the coefficients of the polynomial are themselves polynomials in $`\lambda ^{\pm 1}`$, $`P`$, $`Q`$, $`R_1`$, and $`R_2`$. For rational tangles, we have the following:
Theorem 2.1. A positive rational tangle with $`N`$ crossings may be uniquely written as a polynomial in $`z`$ of degree $`N-1`$, where the coefficients are polynomials in $`\lambda ^{\pm 1}`$, $`P`$, $`Q`$, and $`R_1`$. If $`N>1`$, then the $`z^{N-1}`$ coefficient of a vertical tangle is $`\pm (R_1-\lambda Q)`$, and the $`z^{N-1}`$ coefficient of a horizontal tangle is $`\pm (R_1-\lambda ^{-1}P)`$. A negative rational tangle may be uniquely written as a polynomial in $`z`$ of degree $`N-1`$, where the coefficients are polynomials in $`\lambda ^{\pm 1}`$, $`P`$, $`Q`$, and $`R_2`$. If $`N>1`$, then the $`z^{N-1}`$ coefficient of a vertical tangle is $`\pm (R_2-\lambda ^{-1}Q)`$, and the $`z^{N-1}`$ coefficient of a horizontal tangle is $`\pm (R_2-\lambda P)`$.
Proof: The uniqueness follows from the result of Morton and Traczyk , that $`P`$, $`Q`$, and $`R_1`$ form a free basis for $`M_2`$. (It is clear that $`R_1`$ may be replaced by $`R_2`$ without affecting this statement.) Beyond that, it is a simple induction argument, applying the skein relation (i) to $`R_1`$ in Figure 9 for the positive case (and similarly for the negative case).
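As a small worked illustration of the basis change used in these arguments, consider eliminating $`R_2`$ in favor of the Morton–Traczyk basis. The identifications below ($`R_1=T^+`$, $`R_2=T^{-}`$, $`Q=T^0`$, $`P=T^{\mathrm{\infty }}`$) are assumptions on our part, made because Figures 1 and 2 are not reproduced here; with them, applying the skein relation (i) to the single crossing of $`R_2`$ gives
$$R_1-R_2=z(Q-P),\qquad \text{so}\qquad R_2=zP-zQ+R_1,$$
which is exactly the substitution quoted after the proof of Theorem 1.2: eliminating $`R_2`$ costs one power of $`z`$ in the $`P`$ and $`Q`$ coefficients while leaving the $`R_1`$ coefficient unchanged.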
Now it follows easily that if $`L`$ is constructed by wiring $`p`$ positive and $`q`$ negative vertical rational tangles together in a horizontal line, the $`z`$-degree of $`L`$ is $`N-\mathrm{min}(p,q)-1`$. We replace each rational tangle with a linear combination as in the proof of Theorem 1.3. If $`p>q`$, then the largest $`z`$-degree in the result is obtained from the $`R_1`$ term in each positive tangle and the $`Q`$ term in each negative tangle. The leading $`z`$-coefficient will be $`\pm \lambda ^{-q}`$ times the leading $`z`$-coefficient of the $`(2,p)`$ torus link (which is built up of $`p`$ copies of $`R_1`$ and $`q`$ copies of $`Q`$). If $`q>p`$, then the largest $`z`$-degree is obtained from the $`R_2`$ term in each negative tangle and the $`Q`$ term in each positive tangle. The leading $`z`$-coefficient will be $`\pm \lambda ^p`$ times the leading $`z`$-coefficient of the $`(2,q)`$ torus link. If $`p=q`$, then both of the terms just described are of maximal degree, but because they differ by $`\lambda ^{2p}`$ they cannot cancel.
References.
J.H. Conway, An enumeration of knots and links, and some of their algebraic properties, 1970 Computational Problems in Abstract Algebra (Proc. Conf., Oxford, 1967), 329–358. Pergamon, Oxford.
M.E. Kidwell, On the degree of the Brandt-Lickorish-Millett-Ho polynomial of a link, Proceedings of the American Mathematical Society 100 (1987), 755–762.
H.R. Morton, Invariants of links and 3-manifolds from skein theory and from quantum groups, Topics in Knot Theory (Erzurum, 1992), 107–155. NATO ASI Series C, 399, Kluwer Academic Publishers, 1993.
H.R. Morton and P. Traczyk, Knots and algebras, Contribuciones Matematicas en homenaje al professor D. Antonio Plans Sanz de Bremond (ed. E. Martin-Peinador and A. Rodez Usan), University of Zaragoza (1990) 201–220.
D. Rolfsen, Knots and Links, Mathematics Lecture Series, volume 7. Publish or Perish, Inc., Wilmington, DE, 1976.
M.B. Thistlethwaite, Kauffman’s polynomial and alternating links, Topology 27 (1988), 311–318.
Y. Yokota, The Kauffman polynomial of alternating links, Topology and its Applications 65 (1995), 229–236.
# Reply to: “Interlayer Josephson vortices in the high-𝑇_𝑐 superconducting cuprates”

Farid raises the issue of whether the Clem-Coffey solution is really appropriate to describe interlayer Josephson vortices in layered superconductors. We used this result to quantitatively analyze our images of interlayer vortices in the high-temperature layered cuprate superconductor Tl-2201 in order to determine the interlayer penetration depth, $`\lambda _c`$. The length scales that appear in this model are the interlayer spacing $`s`$, the in-plane penetration depth $`\lambda _a`$, and $`\lambda _c`$. For most cuprate superconductors, $`s`$ is a bit over 10 Å, $`\lambda _a`$ is 0.1–0.2 microns, and $`\lambda _c`$ can be microns and depends strongly on the detailed chemical composition of the material, varying greatly with small changes in oxygen doping. In the usual description of an interlayer Josephson vortex, the core extends a distance $`s`$ perpendicular to the layers, and $`s\lambda _c/\lambda _a`$ along the layers : the field outside the core is described by the well-known anisotropic London model. For the sake of completeness in our paper, we fit our data using the Coffey-Clem model, which includes an approximate solution for the vortex core. Because our experiment is only sensitive to magnetic structure on micron length scales, the key features of this model are, first, that the length scale for the vortex core is less than a micron, and second, that the magnetic fields outside the vortex core are described by the anisotropic London model. Any other model with these two characteristics would give the same result for $`\lambda _c`$ within experimental error. Farid points out the lack of an exact solution for the difficult nonlinear problem of the structure of the vortex core, and implicitly speculates that the correct solution may turn out to influence the magnetic structure on length scales much larger than $`s\lambda _c/\lambda _a`$ and even much larger than $`\lambda _c`$. If this speculation is correct, our interpretation that our images of interlayer Josephson vortices are a direct measurement of the c-axis penetration depth, $`\lambda _c\approx 20`$ microns in optimally doped Tl-2201 , will be only one piece of a large body of related experimental and theoretical work that will need to be reevaluated. We look forward to the opportunity to fit our data to a theory including an exact treatment of both the vortex core and the spreading associated with the superconductor-vacuum interface. On the basis of related experimental evidence, it seems unlikely to us that this exact theoretical solution will result in a qualitative reevaluation of our results. First, since our article was published, two independent groups using different optical techniques have reported $`\lambda _c`$ = 17 microns and $`\lambda _c`$ = 12 microns in optimally doped Tl-2201. These results are both independent of the vortex structure. Second, we imaged vortices in the much-studied cuprate superconductor LSCO, and found $`\lambda _c\approx 5`$ microns . This unpublished result is consistent with measurements by several other techniques . Therefore, if the exact theoretical solution indicates that the size of an interlayer Josephson vortex is much larger than the interlayer penetration depth, it will contradict the combined experimental results of several groups on these two materials and will require a new understanding of the optical as well as the magnetic properties of layered superconductors.
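To make the length scales quoted above concrete, here is a small numerical sketch (not taken from the original Comment or Reply); the specific values of $`s`$, $`\lambda _a`$, and $`\lambda _c`$ are representative numbers from the text and are assumptions for illustration only.

```python
# Sketch: extent of an interlayer Josephson vortex core in a layered
# superconductor, using the scales quoted in the text (assumed values).

s        = 12e-10   # interlayer spacing, "a bit over 10 Angstroms" (m)
lambda_a = 0.15e-6  # in-plane penetration depth, 0.1-0.2 microns (m)
lambda_c = 20e-6    # interlayer penetration depth for Tl-2201 (m)

# In the usual description, the core extends a distance s perpendicular
# to the layers and s * lambda_c / lambda_a along the layers.
core_perp  = s
core_along = s * lambda_c / lambda_a

print(f"core extent perpendicular to layers: {core_perp*1e6:.4f} microns")
print(f"core extent along the layers:        {core_along*1e6:.3f} microns")
# With these numbers the core is ~0.16 microns along the layers, i.e. well
# below the micron-scale sensitivity of the scanning SQUID measurement.
```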
It is a minor point that Farid misquotes our result as $`\lambda _c\approx 22`$ microns in Tl-2201, when our article stated “… we find $`\lambda _c=19\pm 2`$ microns. There are larger systematic errors, which we estimate to be $`<`$30% …”. Incidentally, we have since studied what we view as the most likely source of systematic error, namely the spreading of the vortex at the superconductor-vacuum interface, and refined our estimate to $`\lambda _c=18\pm 3`$ microns . Farid’s observation that fitting our data to the Clem-Coffey model does not give the correct values for $`s`$ and $`\lambda _a`$ should be taken as a limitation of the data, and not necessarily the theory. Our measurements are made a few microns away from the sample surface. The detector is an 8 micron SQUID, fabricated with micron line widths and shielded leads which may cause some distortion of the magnetic field . The data points are spaced every 1 micron, and the scan-to-scan $`x`$-position is irreproducible on a 0.1-micron length scale due to the tradeoffs required to get a large area scan. It is unreasonable to expect a sensible answer about any structure on the 0.001 micron length scale or even the 0.1 micron length scale, no matter how exact one’s theoretical model. Finally, Farid comments that the predicted relationship between $`E_J^0`$ and $`\lambda _c`$ , which we refuted through our measurement of $`\lambda _c`$, may not do “justice to the ILT theory.” We are grateful for a chance to comment on this issue. Based on a comparison of the published predictions of one of the theory’s authors with the available experimental data, ILT is not sufficient to explain the high critical temperature and condensation energy in the cuprate superconductors Tl-2201 and Hg-1201 . Some other mechanism must therefore be in operation, perhaps in addition to the ILT mechanism. In this sense, ILT is not “the” theory of cuprate superconductivity. The interlayer tunneling model remains a creative and influential set of ideas which may correctly describe some aspects of cuprate superconductivity, as intriguing new evidence suggests . We thank J. Berlinsky, J. R. Clem, and V. Kogan for useful discussions.
# Measurement of 𝑩^𝟎-𝑩̄^𝟎 Flavor Oscillations Using Jet-Charge and Lepton Flavor Tagging in 𝒑𝒑̄ Collisions at √𝒔=1.8 TeV

## I Introduction

In the Standard Model of electroweak interactions , the quark mass eigenstates are related to the weak eigenstates via the unitary $`3\times 3`$ Cabibbo-Kobayashi-Maskawa (CKM) matrix $`V_{\mathrm{CKM}}`$ . The nine elements of this matrix, $`V_{ij}`$, where $`i=\mathrm{u},\mathrm{c},\mathrm{t}`$ and $`j=\mathrm{d},\mathrm{s},\mathrm{b}`$, are completely determined by three angles and a phase. A nonzero phase gives CP violation in the weak interaction. Measurements of decays of hadrons containing $`b`$ quarks ($`B`$ hadrons) are of great interest because they determine the magnitudes of five of the nine elements of $`V_{\mathrm{CKM}}`$ as well as the phase.

### A The Unitarity Triangle

The unitarity of $`V_{\mathrm{CKM}}`$ leads to nine unitarity relationships, one of which is of particular interest: $$V_{\mathrm{ud}}V_{\mathrm{ub}}^{*}+V_{\mathrm{cd}}V_{\mathrm{cb}}^{*}+V_{\mathrm{td}}V_{\mathrm{tb}}^{*}=0.$$ (1) In the complex plane, this sum of three complex numbers is a triangle, commonly referred to as the Unitarity Triangle. Measurements of the weak decays of $`B`$ hadrons and the already known CKM matrix elements determine the magnitudes of the three sides of the Unitarity Triangle, and $`CP`$ asymmetries in $`B`$ meson decays determine the three angles. The primary goal of $`B`$ physics in the next decade is to measure precisely both the sides and angles of this triangle and test consistency within the Standard Model. We can use several approximations to express eq. 1 in a more convenient form. The elements $`V_{\mathrm{ud}}\simeq 1`$ and $`V_{\mathrm{cd}}\simeq -\lambda =-\mathrm{sin}\theta _\mathrm{C}`$, where $`\theta _\mathrm{C}`$ is the Cabibbo angle, are well measured. Although the elements $`V_{tb}`$ and $`V_{ts}`$ are not well measured, the theoretical expectations are that $`V_{tb}\simeq 1`$ and $`V_{ts}\simeq -V_{cb}^{*}`$. With these assumptions, eq. 1 becomes $$\frac{V_{\mathrm{ub}}^{*}}{\lambda V_{\mathrm{cb}}^{*}}-1-\frac{V_{\mathrm{td}}}{\lambda V_{\mathrm{ts}}}=0.$$ (2) Measurement of $`\mathrm{\Delta }m_d`$, the subject of this paper, directly impacts the determination of $`V_{\mathrm{td}}/V_{\mathrm{ts}}`$.

### B Determining $`V_{td}`$ and $`V_{ts}`$ From Neutral $`B`$ Meson Flavor Oscillations

Second-order weak processes transform a neutral $`B`$ meson into its antiparticle: $`B^0\leftrightarrow \overline{B}^0`$, giving a probability for a $`B^0`$ to decay as a $`\overline{B}^0`$ that oscillates with time. The frequency of these oscillations is the mass difference $`\mathrm{\Delta }m_d`$ between the $`B`$ mass eigenstates, which are linear combinations of the flavor eigenstates $`B^0`$ and $`\overline{B}^0`$. The mass difference is proportional to $`|V_{tb}^{*}V_{td}|^2`$, so in principle, a measurement of $`\mathrm{\Delta }m_d`$ determines this product of CKM matrix elements. In practice, however, large theoretical uncertainties limit the precision of $`V_{td}`$. The same problem exists in determining $`V_{ts}`$ from $`\mathrm{\Delta }m_s`$, the frequency of $`B_s^0`$-$`\overline{B}_s^0`$ oscillations. These theoretical uncertainties are reduced in determining the ratio $`|V_{td}/V_{ts}|^2`$ from $`\mathrm{\Delta }m_d/\mathrm{\Delta }m_s`$. Unfortunately, at this time, attempts to measure $`\mathrm{\Delta }m_s`$ have only led to lower limits.
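For completeness, the algebra behind eq. 2 above can be written in one line; this is a reconstruction that assumes the sign conventions stated in Sec. I A ($`V_{\mathrm{cd}}\simeq -\lambda `$ and $`V_{ts}\simeq -V_{cb}^{*}`$):
$$V_{\mathrm{ub}}^{*}-\lambda V_{\mathrm{cb}}^{*}+V_{\mathrm{td}}=0\;\Longrightarrow \;\frac{V_{\mathrm{ub}}^{*}}{\lambda V_{\mathrm{cb}}^{*}}-1+\frac{V_{\mathrm{td}}}{\lambda V_{\mathrm{cb}}^{*}}=0\;\Longrightarrow \;\frac{V_{\mathrm{ub}}^{*}}{\lambda V_{\mathrm{cb}}^{*}}-1-\frac{V_{\mathrm{td}}}{\lambda V_{\mathrm{ts}}}=0,$$
where the first equality is eq. 1 with $`V_{\mathrm{ud}}\simeq 1`$, $`V_{\mathrm{cd}}\simeq -\lambda `$, and $`V_{tb}\simeq 1`$; the second divides through by $`\lambda V_{\mathrm{cb}}^{*}`$; and the third uses $`V_{\mathrm{cb}}^{*}\simeq -V_{ts}`$.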
The determination of $`\mathrm{\Delta }m_s`$ is a key future measurement of $`B`$ hadrons, since $`|V_{td}/V_{ts}|^2`$ determines the magnitude of one of the sides of the Unitarity Triangle, as expressed in eq. 2. Measurements of $`\mathrm{\Delta }m_d`$ and $`\mathrm{\Delta }m_s`$ require determining the initial flavor of the $`B`$ meson, that is, whether the $`B`$ meson contained a $`b`$ quark or a $`\overline{b}`$ antiquark. Flavor determination is also crucial in the measurement of $`CP`$ violation in the decays of neutral $`B`$ mesons to $`CP`$ eigenstates. This paper describes a measurement of $`\mathrm{\Delta }m_d`$ using data collected by the CDF experiment from $`p\overline{p}`$ collisions with a center-of-mass energy of 1.8 TeV produced by the Fermilab Tevatron. In addition, the measurement of $`\mathrm{\Delta }m_d`$ is used to demonstrate the performance of two methods (described below) of identifying the flavor of B hadrons in the environment of $`p\overline{p}`$ collisions. These methods of flavor identification will be important in the measurement of CP violation in the decays of neutral $`B`$ mesons and in the study of $`\mathrm{\Delta }m_s`$ .

### C Previous Measurements of $`\mathrm{\Delta }m_d`$ from CDF

The CDF collaboration has exploited the large $`b`$ quark cross-section at the Fermilab Tevatron to make several precision measurements of the properties of $`B`$ hadrons, including lifetimes and masses . Although the $`b`$ quark production cross-section at the Tevatron is large, the $`p\overline{p}`$ inelastic cross-section is three orders of magnitude larger, so specialized triggers are required to collect large samples of B hadrons. To date, the triggers that have been utilized are based on leptons (electrons and muons) or dileptons. Some analyses use the semileptonic decays of $`B`$ hadrons, $`B\to \ell \nu X`$; some use the semileptonic decays of charmed particles from $`B`$ hadron decay (e.g. $`B\to DX`$, followed by $`D\to \ell \nu Y`$); and some use leptonic $`J/\psi `$ decays ($`B\to J/\psi X`$, followed by $`J/\psi \to \mu ^+\mu ^{-}`$). Previous measurements of $`\mathrm{\Delta }m_d`$ from CDF were based on data samples collected with a dimuon trigger and single-lepton triggers ($`\ell =e,\mu `$). The single-lepton triggers were used to partially reconstruct approximately 6,000 $`B^0`$ mesons via their semileptonic decays $`B^0\to \ell ^+\nu D^{(*)}X`$ (in this paper, reference to a particular decay sequence implies the charge-conjugate sequence as well). The analysis reported in this paper uses the same data sample collected with this trigger, but increases the number of $`B`$ mesons by over an order of magnitude by inclusively reconstructing $`B`$ hadrons that decay semileptonically. The inclusive reconstruction is made possible by the relatively long lifetime of $`B`$ hadrons: the decay point of the $`B`$ hadron is typically separated from the production point (the primary vertex) by a couple of millimeters. The inclusive reconstruction is based on identifying this decay point by associating the trigger lepton with other charged decay products to reconstruct a secondary vertex.
### D Method of Measuring $`\mathrm{\Delta }m_d`$

The oscillation frequency $`\mathrm{\Delta }m_d`$ can be found from either a time independent measurement (that is, from the total number of $`B^0`$’s that decay as $`\overline{B}^0`$’s) or from a time dependent measurement (that is, from the rates that a state that is pure $`B^0`$ at $`t=0`$ decays as either a $`\overline{B^0}`$ or $`B^0`$ as a function of proper decay time $`t`$). The latter technique has better sensitivity and allows a simultaneous study of the tagging methods since the amplitude of the oscillation depends on the effectiveness of the tagging method. The expected rate is $$𝒫(B^0\to B^0)=\frac{1}{2\tau _B}e^{-t/\tau _B}(1+\mathrm{cos}(\mathrm{\Delta }m_dt));$$ (3) $$𝒫(B^0\to \overline{B}^0)=\frac{1}{2\tau _B}e^{-t/\tau _B}(1-\mathrm{cos}(\mathrm{\Delta }m_dt)),$$ (4) where $`\tau _B`$ is the mean lifetime of the two mass eigenstates of the $`B^0`$ (the difference in lifetime of these two eigenstates is very small and has been neglected), and $`t`$ is the proper decay time of the $`B^0`$ in its rest frame. To measure this rate, we need to make three measurements: (1) the proper decay time, (2) the $`B`$ flavor at decay, and (3) the produced $`B`$ flavor. We determine the proper time by measuring the distance from the production point to the decay point in the laboratory frame combined with an estimate of the $`B^0`$ momentum. The flavor at decay is determined from the charge of the lepton, assuming it comes from semileptonic $`B`$ decay. In our measurement of $`\mathrm{\Delta }m_d`$ using $`B^0\to \ell ^+\nu D^{(*)}X`$ decays, the flavor at production was identified using a same-side tagging technique based on the electric charge of particles produced in association with the $`B^0`$. This flavor tag has also been applied to a sample of $`B_d^0/\overline{B}_d^0\to J/\psi K_S^0`$ decays to measure the CP asymmetry. To identify the flavor at production in this analysis, we rely on the fact that the dominant production mechanisms of $`b`$ quarks in $`p\overline{p}`$ collisions produce $`b\overline{b}`$ pairs. The flavors of the $`B`$ hadrons are assumed to be opposite at the time of production. In this paper, we identify the flavor of the other $`B`$ hadron using two techniques: the soft-lepton flavor tag (SLT) and the jet-charge flavor tag (JCT). Several precise measurements of $`\mathrm{\Delta }m_d`$ have been published by experiments operating on the $`\mathrm{\Upsilon }(4S)`$ and $`Z^0`$ resonances. The measurement of $`\mathrm{\Delta }m_d`$ presented here is competitive in precision and in addition quantifies the performance of the flavor tags, which are crucial for future measurements of $`CP`$ violation in the decays of $`B`$ mesons and the measurement of $`\mathrm{\Delta }m_s`$ at a hadron collider. The jet-charge flavor tag is a powerful technique in studies of neutral $`B`$ meson flavor oscillations by experiments operating on the $`Z^0`$ resonance . This analysis is the first application of the jet-charge flavor tag in the environment of a hadron collider.

## II The CDF Detector

The data sample used in this analysis was collected from 90 $`\mathrm{pb}^{-1}`$ of $`p\overline{p}`$ collisions recorded with the CDF detector at the Fermilab Tevatron. The CDF detector is described in detail elsewhere . We summarize here the features of the detector that are important for this analysis.
The CDF coordinate system has the $`z`$ axis pointing along the proton momentum, with the $`x`$ axis located in the horizontal plane of the Tevatron storage ring, pointing radially outward, so that the $`y`$ axis points up. The coordinates $`r`$, $`\varphi `$, and $`\theta `$ are the standard cylindrical coordinates. The CDF spectrometer consists of three separate detectors for tracking charged particles: the silicon vertex detector (SVX), the vertex detector (VTX), and the central tracking chamber (CTC), which are immersed in a magnetic field of 1.4 Tesla pointed along the $`+z`$ axis. The SVX consists of four concentric cylinders of single-sided silicon strip detectors positioned at radii between 3 cm and 8 cm from the beam line. The strips are oriented parallel to the beam axis and have a pitch of 60 $`\mu `$m in the inner three layers and 55 $`\mu `$m on the outermost layer. The SVX is surrounded by the VTX, which is used to determine the $`z`$ coordinate of the $`p\overline{p}`$ interaction (the primary vertex). Surrounding the SVX and VTX is the CTC. The CTC is a drift chamber that is 3.2 m long with 84 layers of sense wires located between a radius of 31 cm and 133 cm. The sense wires are organized into five axial superlayers and four stereo superlayers with a stereo angle of $`3^{\circ }`$. The momentum resolution of the spectrometer is $`\delta p_T/p_T=[(0.0009(\mathrm{GeV}/c)^{-1}p_T)^2+(0.0066)^2]^{\frac{1}{2}}`$, where $`p_T`$ is the component of momentum transverse to the $`z`$ axis ($`p_T=p\mathrm{sin}\theta `$). Charged particle trajectories reconstructed in the CTC that are matched to strip-clusters in the SVX have an impact parameter resolution of $`\delta d_0=\left[13+(40\mathrm{GeV}/c)/p_T\right]\mu `$m, where the impact parameter $`d_0`$ is the distance of closest approach of the trajectory to the beam axis in the plane perpendicular to the beam axis. The outer 54 layers of the CTC are instrumented to record the ionization ($`dE/dx`$) of charged tracks. Surrounding the CTC are the central electromagnetic calorimeter (CEM) and the central hadronic calorimeter (CHA). The CEM has strip chambers (CES) positioned at shower maximum, and a preshower detector (CPR) located at a depth of one radiation length. Beyond the central calorimeters lie two sets of muon detectors. To reach these two detectors, particles produced at the primary vertex with a polar angle of $`90^{\circ }`$ must traverse material totaling 5.4 and 8.4 pion interaction lengths, respectively. The trigger system consists of three levels: the first two levels are implemented in hardware. The third level consists of software reconstruction algorithms that reconstruct the data, including three-dimensional track reconstruction in the CTC using a fast algorithm that is efficient only for $`p_T>1.4`$ GeV/$`c`$.

## III Data Sample Selection

The sample selection begins with data from the inclusive $`e`$ and $`\mu `$ triggers. At Level 2, both of these triggers require a track with $`p_T>7.5`$ GeV/$`c`$ found by the central fast tracker (CFT) , a hardware track processor that uses fast timing information from the CTC as input. The resolution of the CFT is $`\delta p_T/p_T=0.035(\mathrm{GeV}/c)^{-1}p_T`$. In the case of the electron trigger, the CFT track must be matched to a cluster in the electromagnetic calorimeter, with transverse energy $`E_T>8.0`$ GeV, where $`E_T=E\mathrm{sin}\theta `$, and $`E`$ is the energy of the calorimeter cluster.
In the case of the muon trigger, the CFT track must be matched to a reconstructed track-segment in both sets of muon detectors. In the third level of the trigger, more stringent electron and muon selection criteria, which are similar to the selection criteria described in Section III A, are applied. The inclusive electron data set contains approximately 5.6 million events and the inclusive muon data set contains approximately 2.0 million events. These data are dominated by leptons from the decay of heavy flavors ($`b\mathrm{}`$ and $`c\mathrm{}`$) and hadrons that mimic the lepton signal. ### A Electron and Muon Identification Electron candidates are identified using information from both the calorimeters and the tracking detectors. The electron calorimeter cluster in the CEM must have $`E_T>6`$ GeV. The longitudinal shower profile of this cluster is required to be consistent with an electron shower with a leakage energy from the CEM into the CHA of less than 4%. The lateral shower profile of the CEM cluster has to be consistent with the profile determined from test beam electrons. A track with $`p_t>6`$ GeV/$`c`$ must match the electron calorimeter cluster. This match is based on a comparison of the track position with the calorimeter cluster position determined in the CES: the difference between the extrapolated position of the track and the position of the cluster centroid must satisfy $`r|\mathrm{\Delta }\phi |<1.5`$ cm and $`|\mathrm{\Delta }z\mathrm{sin}\theta |<3`$ cm. To identify muons, we require a match between the extrapolated CTC track and the track segment in the muon chamber in both the $`r`$-$`\phi `$ and $`r`$-$`z`$ view. The uncertainty in this match is taken into account and is dominated by multiple scattering in the detector material. The transverse muon momentum must satisfy $`p_T>6`$ GeV/$`c`$. Finally, to ensure optimal resolution of the $`B`$ hadron decay point, the electron and muon candidate tracks have to be reconstructed in the SVX detector. ### B Jet Reconstruction Further analysis of the data sample is based on the charged particle jets in the event. Charged particles (instead of the more commonly used calorimeter clusters) are used to form jets in order to keep the electron and muon samples as similar as possible. These jets are found using a cone clustering algorithm. Tracks with $`p_T>1.0`$ GeV/$`c`$ are used as jet seeds. If two seeds are within $`\mathrm{\Delta }R<0.7`$ , the momenta of the seeds are added together to form a new seed. After all possible seed merging, lower momentum tracks ($`0.4<p_T<1.0`$ GeV/$`c`$) that are within $`\mathrm{\Delta }R<0.7`$ of a seed are added in to form the final jets. The trigger lepton is always associated to a jet, and below we refer to this jet as the trigger-lepton jet. A jet can consist of a single track with $`p_T>1`$ GeV/$`c`$. ### C Secondary Vertex Reconstruction In order to reconstruct the time of decay in the $`B`$ rest frame (the proper time), we must measure the point of decay with respect to the primary interaction in the lab and estimate the momentum of the $`B`$. Since the SVX provides only coordinates in the plane transverse to the beam axis, the measurement of the separation between the point of decay and the primary vertex is done only in the $`x`$-$`y`$ plane. We refer to this separation as the decay length $`L_{xy}`$. 
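Returning to the charged-particle jet clustering described in Sec. III B above, the following is a minimal Python sketch of such a seeded cone algorithm. It is not the CDF code: the track representation is assumed, and the seed merging is approximated by proximity to the jet's leading track rather than by the summed-momentum seed merging described in the text. Only the cuts quoted above ($`p_T>1.0`$ GeV/$`c`$ seeds, $`0.4<p_T<1.0`$ GeV/$`c`$ attached tracks, $`\mathrm{\Delta }R<0.7`$) are taken from the paper.

```python
import math

def delta_r(eta1, phi1, eta2, phi2):
    """Separation in eta-phi space, with phi wrap-around."""
    dphi = abs(phi1 - phi2)
    if dphi > math.pi:
        dphi = 2.0 * math.pi - dphi
    return math.hypot(eta1 - eta2, dphi)

def cluster_tracks(tracks, cone=0.7, seed_pt=1.0, soft_pt=0.4):
    """tracks: list of (pt, eta, phi) tuples.  Returns a list of jets,
    each jet being the list of tracks assigned to it."""
    # Seeds: tracks above 1 GeV/c, taken in decreasing pt order.
    seeds = sorted([t for t in tracks if t[0] > seed_pt], reverse=True)
    jets = []
    for t in seeds:
        # Attach the seed to an existing jet if it lies within the cone of
        # that jet's leading track; otherwise it starts a new jet (a jet may
        # therefore consist of a single track above 1 GeV/c).
        for jet in jets:
            if delta_r(jet[0][1], jet[0][2], t[1], t[2]) < cone:
                jet.append(t)
                break
        else:
            jets.append([t])
    # Add the lower-momentum tracks (0.4-1.0 GeV/c) to a jet within the cone.
    for t in tracks:
        if soft_pt < t[0] <= seed_pt:
            for jet in jets:
                if delta_r(jet[0][1], jet[0][2], t[1], t[2]) < cone:
                    jet.append(t)
                    break
    return jets
```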
Only the component of the $`B`$ momentum transverse to the beam axis ($`p_T^B`$) is needed to calculate the proper time at decay, since the decay length is measured in the $`x`$-$`y`$ plane. The positions of the $`p\overline{p}`$ interactions or “primary vertices” are distributed along the beam direction according to a Gaussian with a width of $`30`$ cm. In the plane transverse to the beam axis, these interactions follow a distribution that is a Gaussian with a width of $`25\mu `$m in both the $`x`$ and $`y`$ dimensions. To reconstruct the primary event vertex, we first identify its $`z`$-position using the tracks reconstructed in the VTX detector. When projected back to the beam axis, these tracks determine the longitudinal location of the primary interaction with a precision of about 0.2 cm along the beam direction. If there is more than one reconstructed primary vertex in an event, the trigger lepton is associated with the primary vertex closest in $`z`$ to the intercept of the trigger lepton with the beam line. The transverse position of the primary vertex is determined for each event by a weighted fit of all tracks with a $`z`$ coordinate within 5 cm of the $`z`$-vertex position of the primary vertex associated with the trigger lepton. The tracks used in this fit are required to have been reconstructed in the SVX detector. First all tracks are forced to originate from a common vertex. The position of this vertex is constrained by the transverse beam envelope described above. Tracks that have large impact parameters with respect to this vertex are removed, and the fit is repeated. This procedure is iterated until all tracks are within the required impact parameter requirement. At least five tracks must be used in the determination of the transverse position of the primary vertex or we use the nominal beam-line position. The primary vertex coordinates transverse to the beam direction have an uncertainty in the range 10-35 $`\mu `$m, depending on the number of tracks and the event topology. The reconstruction of the $`B`$ decay point (referred to below as the secondary vertex) in the trigger-lepton jet is based on the technique developed to identify jets formed by $`b`$ quarks coming from $`t`$ quark decay . Some modifications to this technique were necessary to maintain good efficiency for reconstructing the $`B`$ hadron decay point in our data sample, since the $`B`$ hadrons in this sample have substantially lower $`p_T`$ than the $`B`$ hadrons from top quark decay. The search for a secondary vertex in the trigger-lepton jet is a two stage process. In both stages, tracks in the jet are selected for reconstruction of a secondary vertex based on the significance of their impact parameter with respect to the primary vertex, $`d_0/\sigma _{d_0}`$, where $`\sigma _{d_0}`$ is the estimate of the uncertainty on $`d_0`$. The uncertainty $`\sigma _{d_0}`$ includes contributions from both the primary vertex and the track parameters. The first stage requires at least three candidate tracks for the reconstruction of the secondary vertex. The trigger lepton is always included as a candidate, whether or not it satisfies the $`d_0/\sigma _{d_0}`$ requirement. Tracks consistent with coming from the decay $`K_S^0\pi ^+\pi ^{}`$ or $`\mathrm{\Lambda }^0p\pi ^{}`$ are not used as candidate tracks. Two candidate tracks are constrained to pass through the same space point to form a seed vertex. 
If at least one additional candidate track is consistent with intersecting this seed vertex, then the seed vertex is used as the secondary vertex. If the first stage is not successful in finding a secondary vertex, the second stage is attempted. More stringent track requirements (on $`d_0/\sigma _{d_0}`$ and $`p_T`$, for example) are imposed on the candidate tracks. All candidate tracks satisfying these stricter criteria are constrained to pass through the same space point to form a seed vertex. This vertex has an associated $`\chi ^2`$. Candidate tracks that contribute too much to the $`\chi ^2`$ are removed, and a new seed vertex is formed. This procedure is iterated until a seed vertex remains that has at least two associated tracks and an acceptable value of $`\chi ^2`$. The trigger lepton is one of the tracks used to determine the trigger-lepton jet secondary vertex in 96% of the events. The decay length of the secondary vertex $`L_{xy}`$ is the projection of the two-dimensional vector pointing from the primary vertex to the secondary vertex on the jet axis (defined by the sum of all the momenta of the tracks included in the jet); if the cosine of the angle between these two vectors is positive (negative), then $`L_{xy}`$ is positive (negative). Secondary vertices from the decay of $`B`$ hadrons are expected to have positive $`L_{xy}`$, while negative $`L_{xy}`$ vertices usually result from random combinations of mismeasured tracks. To reduce the background from these false vertices, we require $`|L_{xy}/\sigma _{L_{xy}}|>2.0`$, where $`\sigma _{L_{xy}}`$ is the estimated uncertainty on $`L_{xy}`$. We require a secondary vertex to be associated with the trigger-lepton jet. This requirement leaves us with 243,800 events: 114,665 from the electron data sample and 129,135 from the muon data sample. The fraction of events with $`L_{xy}<0`$ is 4.5% in the electron data sample and 5.7% in the muon-trigger sample. For reasons we discuss later, only events with $`L_{xy}>0`$ are used in the determination of $`\mathrm{\Delta }m_d`$ and the study of the performance of the flavor tags. The distribution of the number of jets with total transverse momentum $`p_T>5`$ GeV/$`c`$ is shown in the upper plot of Figure 1. Approximately 60% of the events contain a second jet in addition to the trigger-lepton jet. The lower plot in Figure 1 shows the difference in azimuth between the trigger-lepton jet and the jet with the highest $`p_T`$ in these events. A large fraction of these jets are back-to-back with the trigger-lepton jet as expected from the lowest-order processes that produce $`b\overline{b}`$ pairs. We search for secondary vertices in the other jets in the events as well. If an additional secondary vertex is found in one of these other jets, we classify this event as a “double-vertex” event. If only the single secondary vertex associated with the trigger-lepton jet is found, this event is classified as a “single-vertex” event. The distinction between single-vertex and double-vertex events is important in applying the jet-charge flavor tag as described below. ### D Determination of the $`B`$ Hadron Flavor The next step in the analysis is to identify the flavor at production of the $`B`$ hadron that produced the trigger lepton. We accomplish this by identifying the flavor of the other $`B`$ hadron produced in the collision. We refer to this other $`B`$ hadron as the “opposite $`B`$” in the text below. 
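As an illustration of the signed decay length defined at the beginning of this subsection, the sketch below projects the primary-to-secondary vertex displacement onto the jet axis and applies the sign convention described above. The two-dimensional data layout is an assumption for illustration; this is not the CDF reconstruction code.

```python
import math

def signed_lxy(primary_xy, secondary_xy, jet_axis_xy):
    """Signed transverse decay length L_xy.

    primary_xy, secondary_xy: (x, y) positions of the primary and secondary
    vertices; jet_axis_xy: (px, py), the summed transverse momentum of the
    tracks in the jet.  L_xy is positive when the displacement points along
    the jet axis and negative when it points backwards.
    """
    dx = secondary_xy[0] - primary_xy[0]
    dy = secondary_xy[1] - primary_xy[1]
    ax, ay = jet_axis_xy
    norm = math.hypot(ax, ay)
    # Scalar projection of the displacement on the unit jet-axis vector;
    # its sign follows the sign of the cosine between the two vectors.
    return (dx * ax + dy * ay) / norm

# Example: a vertex displaced ~250 microns roughly along the jet direction.
print(signed_lxy((0.0, 0.0), (0.020, 0.015), (4.0, 3.0)))  # cm, ~ +0.025 cm
```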
We first search for an additional lepton coming from the semileptonic decay of this opposite $`B`$. Because the $`p_T`$ of this lepton is not biased by the trigger, it is typically much smaller than the $`p_T`$ of the trigger lepton, so we call this method of flavor identification the “soft-lepton tag” or SLT. The soft lepton can be either an electron or a muon. The lepton selection criteria are similar to the selection criteria described in Section III A and Reference , with additional selection criteria that use $`dE/dx`$ and pulse height in the CPR and CES. The soft lepton must have a track with $`p_T>2`$ GeV/$`c`$, and the invariant mass of the soft lepton and the trigger lepton must be greater than 5 GeV/$`c^2`$. This requirement removes soft leptons coming from sequential semileptonic decays of charm particles produced in the decay of the $`B`$ hadron producing the trigger lepton. The lepton identification criteria restrict the acceptance of the soft leptons to a pseudorapidity of $`|\eta |<1.0`$, where $`\eta =-\mathrm{ln}(\mathrm{tan}(\frac{\theta }{2}))`$. Approximately 5.2% of the 243,800 events contain a soft lepton candidate. If a soft lepton is not found, we try to identify the jet produced by the opposite $`B`$. We calculate a quantity called the jet charge $`Q_{\mathrm{jet}}`$ of this jet: $$Q_{\mathrm{jet}}=\frac{\sum _iq_i(\vec{p}_i\cdot \widehat{a})}{\sum _i\vec{p}_i\cdot \widehat{a}},$$ (5) where $`q_i`$ and $`\vec{p}_i`$ are the charge and momentum of the $`i^{\mathrm{th}}`$ track in the jet and $`\widehat{a}`$ is a unit vector defining the jet axis. For $`b`$-quark jets, the sign of the jet charge is on average the same as the sign of the $`b`$-quark that produced the jet, so the sign of the jet charge may be used to identify the flavor at production of the $`B`$ hadron producing the trigger lepton. If a second jet in the event other than the trigger-lepton jet has a secondary vertex, then we use this jet to calculate the jet charge. Double-vertex events with a jet-charge flavor tag are referred to as JCDV events. If only the trigger-lepton jet contains a secondary vertex, we search for a jet with $`\mathrm{\Delta }\varphi >\pi /2`$ with respect to the trigger lepton and $`p_T>5`$ GeV/$`c`$. If there is more than one jet satisfying the above criteria, we choose the jet with the highest $`p_T`$. Single-vertex events with a jet-charge flavor tag are referred to as JCSV events. Approximately 7.5% of the 243,800 events above are JCDV events and approximately 42% are JCSV events.

## IV Data Sample Composition

The events in our selected data sample come from three sources: $`b\overline{b}`$ production, $`c\overline{c}`$ production, and light quark or gluon production. In each event, the trigger lepton may be a true lepton or it may be a hadron that mimics the experimental signature of a lepton (a fake lepton). The secondary vertex in the trigger-lepton jet may be a true vertex due to the decay of heavy flavor ($`b`$ or $`c`$) or a random combination of erroneously reconstructed tracks that appear to form a vertex that is displaced from the primary interaction (a fake vertex). Light quark or gluon jets produce false vertices with $`L_{xy}>0`$ with the same probability as false vertices with $`L_{xy}<0`$. The small fraction ($`<`$6%) of events with $`L_{xy}<0`$ indicates that this background is small. In the analysis, we assume that all events come from heavy flavor ($`b\overline{b}`$ and $`c\overline{c}`$) production.
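As a concrete illustration of the jet-charge definition in eq. 5 above, the following sketch computes $`Q_{\mathrm{jet}}`$ from a list of charged tracks; the data layout and the numerical example are assumptions for illustration.

```python
def jet_charge(tracks, axis):
    """Jet charge of eq. 5: sum_i q_i (p_i . a_hat) / sum_i (p_i . a_hat).

    tracks: list of (charge, (px, py, pz)) for the tracks in the jet;
    axis:   (ax, ay, az), a vector along the jet direction (normalized
            internally, so only its direction matters).
    """
    norm = sum(a * a for a in axis) ** 0.5
    a_hat = tuple(a / norm for a in axis)
    num = 0.0
    den = 0.0
    for q, p in tracks:
        proj = sum(pi * ai for pi, ai in zip(p, a_hat))  # p_i . a_hat
        num += q * proj
        den += proj
    return num / den

# A jet whose leading tracks are positive has Q_jet > 0 on average:
tracks = [(+1, (5.0, 0.5, 2.0)), (-1, (1.0, 0.2, 0.5)), (+1, (2.0, -0.1, 1.0))]
print(jet_charge(tracks, (8.0, 0.6, 3.5)))   # ~ +0.74
```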
The probability of a light quark or gluon event producing a fake vertex and a fake lepton is negligible, although in the evaluation of the systematic uncertainties, we take into account the possible effects of a small amount of non-heavy-flavor background. Below, we describe how we determine the fraction of our samples that are due to $`b\overline{b}`$ production, $`c\overline{c}`$ production, and fake leptons.

### A Simulation of Heavy Flavor Production and Decay

To understand the composition of our data, we use Monte Carlo samples of $`b\overline{b}`$ and $`c\overline{c}`$ production. Version 5.6 of the PYTHIA Monte Carlo generator was used to generate high-statistics $`b\overline{b}`$ and $`c\overline{c}`$ samples. The $`b\overline{b}`$ and $`c\overline{c}`$ pairs are generated through processes of order up to $`\alpha _s^2`$ such as $`gg\to 𝐪\overline{𝐪}`$ and $`q\overline{q}\to 𝐪\overline{𝐪}`$, where $`𝐪=`$ $`b`$ or $`c`$. Processes of order $`\alpha _s^3`$, such as gluon splitting, where $`gg\to gg`$ is followed by $`g\to 𝐪\overline{𝐪}`$, are not included, but initial and final state gluon radiation is included. The $`b`$ and $`c`$ quarks are hadronized using the Peterson fragmentation function with the parameters $`ϵ_b=0.006`$ and $`ϵ_c=0.06`$. The bottom and charm hadrons were decayed using version 9.1 of the CLEO Monte Carlo QQ . Events with a lepton with $`p_T>6`$ GeV/$`c`$ were accepted based on an efficiency parameterization of the CFT trigger that depends on the lepton $`p_T`$. The accepted events were passed through a simulation of the CDF detector that is based on parameterizations and simple models of the detector response that are functions of the particle kinematics. After the simulation of the CDF detector, the Monte Carlo events were treated as if they were real data.

### B Sources of Trigger-Electrons

The trigger-electrons in the sample can come from three sources: heavy flavor decay ($`b\to e`$, $`b\to c\to e`$, and $`c\to e`$), photon conversion ($`\gamma \to e^+e^{-}`$ or $`\pi ^0\to \gamma e^+e^{-}`$), or hadrons that fake the electron signature in the detector. The contribution from heavy flavor decay is discussed in section IV D. We attempt to identify and reject photon conversions by searching for the partner of the trigger-electron. We search for an oppositely charged track that forms a good, zero opening angle vertex with the trigger-electron. The $`dE/dx`$ of this track, as measured in the CTC, must be consistent with the electron hypothesis. We removed 2% of the electron trigger sample that was identified as photon conversions. We estimate that about 1% of the remaining events contain a trigger-electron from a photon conversion that was not identified. To determine the fraction of events that contain a hadron that fakes an electron, we fit the trigger-electron $`dE/dx`$ spectrum for its $`e`$, $`\pi `$, $`K`$, and $`p`$ content. We found the non-electron fraction of the sample to be (0.6 $`\pm `$ 0.5)%, where the uncertainty is statistical only. Since this background is small, we neglect it in the remainder of the analysis.

### C Sources of Trigger-Muons

The trigger muons in the sample can come from heavy flavor decay ($`b\to \mu `$, $`b\to c\to \mu `$, and $`c\to \mu `$), $`\pi `$ and $`K`$ decay, and from hadrons that penetrate the absorbing material in front of the muon chambers. The contribution from heavy flavor decay is discussed in section IV D. To study the properties of fake muon events, we used a control sample of events that only required a high $`p_T`$ CFT track in the trigger.
The trigger track was treated like a trigger lepton, and the jet containing the trigger track was required to contain a secondary vertex. The $`L_{xy}`$ distribution of the control sample was very similar to the heavy flavor Monte Carlo $`L_{xy}`$ distributions and the $`L_{xy}`$ of the signal data samples. We conclude from this comparison that due to the secondary vertex requirement most of the fake muon events in the data are events from heavy flavor production. As described in Appendix B, we estimate the fraction of events with a fake trigger muon whose dilution is zero by comparing the flavor tagging performance of the $`e`$ and $`\mu `$ trigger data. The estimated fraction of events with fake muons that have zero dilution is $`(12\pm 6)`$%. ### D Fraction of Data Sample due to Heavy Flavor Production and Decay We determine the fraction of events in the data due to $`b\overline{b}`$ and $`c\overline{c}`$ production using two kinematic quantities: the trigger-lepton $`p_T^{\mathrm{rel}}`$ and the invariant mass $`m^{\mathrm{cl}}`$ of the cluster of secondary vertex tracks. The quantity $`p_T^{\mathrm{rel}}`$ is defined as the magnitude of the component of the trigger-lepton momentum that is perpendicular to the axis of the trigger-lepton jet. The trigger lepton is removed from the jet, and the jet axis is recalculated to determine $`p_T^{\mathrm{rel}}`$. To calculate $`m^{\mathrm{cl}}`$, we assign the pion mass to all of the tracks used to form the secondary vertex (except the trigger lepton). We include the trigger lepton even if it is not attached to the secondary vertex. These kinematic quantities are effective in discriminating between $`b\overline{b}`$ and $`c\overline{c}`$ events because of the significant mass difference between the hadrons containing $`b`$ and $`c`$ quarks ($`3`$ GeV/$`c^2`$). Template $`p_T^{\mathrm{rel}}`$ and $`m^{\mathrm{cl}}`$ distributions were obtained from the $`b\overline{b}`$ and $`c\overline{c}`$ Monte Carlo samples. The $`p_T^{\mathrm{rel}}`$ and $`m^{\mathrm{cl}}`$ distributions for the data were fit to the sum of the $`b\overline{b}`$ and $`c\overline{c}`$ Monte Carlo templates, where the normalization for each template was a free parameter. The $`e`$ and $`\mu `$ trigger data were fit separately, and the data for each trigger were divided according to flavor tag. The three categories were: soft-lepton (SLT), jet-charge single-vertex (JCSV), and jet-charge double-vertex (JCDV). The results of the $`m^{\mathrm{cl}}`$ and $`p_T^{\mathrm{rel}}`$ fits were averaged to obtain the nominal values for the fraction of events from $`b\overline{b}`$ production ($`F_{b\overline{b}}`$). The fits are shown in Fig. 2 and Fig. 3, and Table I gives the nominal values of $`F_{b\overline{b}}`$ for the $`e`$ and $`\mu `$ trigger data. The data are mostly ($`>90\%`$) from $`b\overline{b}`$ production. ## V Method of Measuring the Flavor Tag $`ϵD^2`$ and $`\mathrm{\Delta }m_d`$ As outlined in the introduction, to measure $`\mathrm{\Delta }m_d`$ we compare the flavor of the $`B^0`$ meson when it was produced to the flavor of the $`B^0`$ meson when it decays as a function of the proper decay time of the meson. ### A Reconstruction of the $`B`$ Proper Decay Time $`t`$ To reconstruct the time of decay in the $`B`$ rest frame ($`t`$), we must combine the two-dimensional decay length ($`L_{xy}`$) with the component of the $`B`$ momentum in the $`x`$-$`y`$ plane ($`p_T^B`$). 
The proper time is $$t=\frac{L_{xy}m_B}{cp_T^B}$$ (6) where $`m_B`$ is the mass of the $`B^0`$ and $`c`$ is the speed of light. The proper decay length is the proper time multiplied by the speed of light ($`ct`$). We do not observe all of the decay products of the $`B`$: the neutrino from the semileptonic decay is not detected, as well as other neutral decay products and charged decay products that may not have been associated with the secondary vertex. This means that $`p_T^B`$ is not known and must be estimated based on observed quantities and the $`b\overline{b}`$ Monte Carlo. The momentum in the transverse plane of the cluster of secondary-vertex tracks, $`p_T^{\mathrm{cl}}`$, and the cluster invariant mass, $`m^{\mathrm{cl}}`$, are the observed quantities used in the estimation of $`p_T^B`$. The trigger lepton is included in the calculation of $`p_T^{\mathrm{cl}}`$ and $`m^{\mathrm{cl}}`$, even if it is not attached to the secondary vertex, since we assume that it is a $`B`$ decay product. The estimate of the $`B`$ hadron transverse momentum $`p_T^B`$ for an event is $$p_T^B=\frac{p_T^{\mathrm{cl}}}{\langle K\rangle }$$ (7) where $`\langle K\rangle `$ is the mean of the distribution of $`K=p_T^{\mathrm{cl}}/p_T^B`$ determined with the $`b\overline{b}`$ Monte Carlo. Figure 4 shows $`K`$ distributions for two ranges of $`m^{\mathrm{cl}}`$ and two ranges of $`p_T^{\mathrm{cl}}`$ in the $`e`$-trigger $`b\overline{b}`$ Monte Carlo sample. For higher $`m^{\mathrm{cl}}`$ and $`p_T^{\mathrm{cl}}`$ values, the $`K`$ distribution has a higher mean and a narrower width: a larger fraction of the $`B`$ momentum is observed so the observed $`p_T^{\mathrm{cl}}`$ is a more precise estimate of the $`B`$ momentum. To take advantage of the $`m^{\mathrm{cl}}`$ and $`p_T^{\mathrm{cl}}`$ dependence of the $`K`$ distribution, we bin the data in four ranges of $`m^{\mathrm{cl}}`$ and four ranges of $`p_T^{\mathrm{cl}}`$ for a total of 16 $`K`$ distributions. Different sets of $`K`$ distributions are used for the $`e`$ and $`\mu `$ trigger data. Figure 5 shows the reconstructed proper decay length ($`ct`$) distributions for the $`e`$ and $`\mu `$ trigger data. The plots of the data are compared to the expected shape from $`b\overline{b}`$ and $`c\overline{c}`$ production, where the $`b\overline{b}`$ and $`c\overline{c}`$ distributions were combined using $`F_{b\overline{b}}`$ in Table I and setting the fraction of $`c\overline{c}`$ events to $`F_{c\overline{c}}=1-F_{b\overline{b}}`$. The lack of events near $`ct=0`$ is due to the transverse decay length significance requirement $`|L_{xy}/\sigma _{L_{xy}}|>2`$. The exponential fall-off of the data in $`ct`$ agrees with the Monte Carlo prediction. There is, however, an excess of events with $`ct<0`$ in the data. This excess is due in part to backgrounds and higher-order processes not included in the Monte Carlo. These backgrounds include electrons from photon conversions and fake muons. The higher-order processes include gluon splitting to $`b\overline{b}`$ pairs, which can produce a pair of $`B`$ hadrons that are close in $`\mathrm{\Delta }R`$. In these events, the decay products of both $`B`$ hadrons may be included in the same jet. This leads to secondary vertices that include tracks from both $`B`$ hadrons, resulting in a specious measurement of $`L_{xy}`$ that may be positive or negative. To verify this, we generated a sample of $`b\overline{b}`$ events using the ISAJET event generator, which includes gluon splitting. This sample showed an increased fraction of reconstructed vertices with negative $`L_{xy}`$.
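To summarize the proper-time reconstruction of eqs. 6 and 7 in one place, here is a schematic calculation. The numerical inputs are illustrative assumptions, and the single $`\langle K\rangle `$ value stands in for the sixteen $`m^{\mathrm{cl}}`$- and $`p_T^{\mathrm{cl}}`$-binned distributions actually used in the analysis.

```python
M_B = 5.279   # B0 mass in GeV/c^2

def proper_decay_length(lxy_cm, pt_cluster, mean_k):
    """ct in cm, following eqs. 6 and 7:
       p_T^B is estimated as p_T^cl / <K>, and ct = L_xy * m_B / p_T^B."""
    pt_b = pt_cluster / mean_k          # estimated B transverse momentum
    return lxy_cm * M_B / pt_b

# Illustrative event: L_xy = 0.20 cm, p_T^cl = 12 GeV/c, <K> = 0.85 (assumed).
print(proper_decay_length(0.20, 12.0, 0.85))   # ~0.075 cm of proper decay length
```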
### B Determination of the $`B`$ Flavor at Decay

The flavor of the $`B`$ hadron associated with the trigger lepton at the time of decay is identified by the trigger-lepton charge, assuming the lepton is from a semileptonic $`B`$ decay. A $`B`$ hadron that contains an anti-$`b`$ quark ($`\overline{b}`$) will give a positively charged lepton in a semileptonic decay. The trigger lepton can also originate from the decay of a charmed hadron produced in the decay of the $`B`$ hadron, e.g. $`B\to DX`$, $`D\to \ell X`$, which produces “wrong-sign” trigger leptons, or $`B\to J/\psi X`$, $`J/\psi \to \ell ^+\ell ^{-}`$, which produces both wrong-sign and right-sign trigger leptons. We refer to trigger leptons from these sources as sequential leptons. The Monte Carlo $`b\overline{b}`$ sample is used to determine the fraction of trigger leptons, $`f_{\mathrm{seq}}`$, from these sequential decays. We find $`f_{\mathrm{seq}}`$ is 9.4% in the electron sample and 13.6% in the muon sample. Approximately 75% of these sequential decays, i.e., 7.0% and 10.2%, produce wrong-sign leptons with the charge opposite to the charge from direct semileptonic decay. We assign a systematic uncertainty of 25% of its value to the fraction of sequential leptons, based on uncertainties on the branching fractions included in the CLEO Monte Carlo QQ and on measurements of these branching fractions at the $`\mathrm{\Upsilon }(4S)`$ and the $`Z^0`$ resonance.

### C Determination of the $`B`$ Flavor at Production

To determine the flavor of the trigger-lepton $`B`$ at the time of production, we attempt to identify the flavor of the other $`B`$ in the event, and assume that the original flavor of the trigger-lepton $`B`$ is opposite that of the other $`B`$. As described previously, we use two methods to obtain the flavor of the other $`B`$ in the event: soft-lepton tagging (SLT) and jet-charge tagging (JCT). The jet-charge tag has two sub-classes: jet-charge double-vertex (JCDV) and jet-charge single-vertex (JCSV). The soft-lepton method is the most effective (i.e., it has the highest probability of producing a correct tag) but the least efficient. The jet-charge methods are less effective, but more efficient. The presence of a secondary vertex in the jet used for the jet charge greatly enhances its effectiveness.

#### 1 Quantifying the Statistical Power of the Flavor Tags

We quantify the statistical power of the flavor tagging methods with the product $`ϵD^2`$, where $`ϵ`$ is the efficiency for applying the flavor tag, and $`D`$ is the dilution, which is related to the probability that the tag is correct ($`P_{\mathrm{tag}}`$) by $$D=2P_{\mathrm{tag}}-1.$$ (8) We measure $`ϵD^2`$ in our data. To illustrate the statistical significance of the product $`ϵD^2`$, we discuss an asymmetry measurement with two types of events, $`a`$ and $`b`$, where the flavor tagging method identifies whether the event is of type $`a`$ or type $`b`$. Type $`a`$ and type $`b`$ could be “mixed” and “unmixed” decays of a neutral $`B`$ meson, for example. The measured asymmetry $`A_{\mathrm{meas}}`$ is $$A_{\mathrm{meas}}=\frac{N_a-N_b}{N_a+N_b},$$ (9) where $`N_a`$ and $`N_b`$ are the number of events that are tagged as type $`a`$ and type $`b`$, respectively. The true asymmetry $`A`$ is $$A=\frac{N_a^0-N_b^0}{N_a^0+N_b^0},$$ (10) where $`N_a^0`$ and $`N_b^0`$ are the true number of events of type $`a`$ and type $`b`$, respectively, in the sample.
The efficiency is $$ϵ=\frac{N_a+N_b}{N_a^0+N_b^0}.$$ (11) The true asymmetry is related to the measured asymmetry by $$A=\frac{1}{D}A_{\mathrm{meas}};$$ (12) and the statistical uncertainty on the true asymmetry is $$\sigma _A=\sqrt{\frac{1-D^2A^2}{ϵD^2T}},$$ (13) where $`T`$ is the total number of events in the sample, $`T=N_a^0+N_b^0`$. The statistical power of different flavor tagging methods varies as $`ϵD^2`$.

#### 2 Measuring the Dilution of the Flavor Tags

We measure the dilution of the flavor tags from our data sample. We start by defining a raw dilution, $`D_{\mathrm{raw}}`$: $$D_{\mathrm{raw}}=\frac{N_{\mathrm{RS}}-N_{\mathrm{WS}}}{N_{\mathrm{RS}}+N_{\mathrm{WS}}},$$ (14) where $`N_{\mathrm{RS}}`$ and $`N_{\mathrm{WS}}`$ are the number of right-sign and wrong-sign events, respectively. Right-sign (wrong-sign) means that the charge of the trigger lepton is opposite to (the same as) the charge of the soft lepton or jet charge. If the charge of the trigger lepton unambiguously identified the flavor at production of the $`B`$ hadron, then $`D_{\mathrm{raw}}`$ would be equal to the true dilution $`D`$ of the flavor tag. All wrong-sign events would result from the flavor tag being incorrect. However, since some wrong-sign events result from the trigger lepton coming from a $`B`$ meson that mixed or from a sequential decay, $`D_{\mathrm{raw}}`$ is an underestimate of the true dilution of the flavor tag. Backgrounds from $`c\overline{c}`$ production and fake leptons further complicate the interpretation. Nevertheless, the true dilution is approximately related to $`D_{\mathrm{raw}}`$ by a scale factor $`N_D`$: $$D=N_DD_{\mathrm{raw}}.$$ (15) The form of Equation 15 is derived in Appendix A. We use this estimation of the true dilution to estimate the probability that the flavor tag is correct on an event-by-event basis: $$P_{\mathrm{tag}}=\frac{1}{2}\left(1+N_DD_{\mathrm{raw}}\right).$$ (16) This probability is used in the measurement of $`\mathrm{\Delta }m_d`$ as described in Section V D. The dilution normalization $`N_D`$ is determined simultaneously with $`\mathrm{\Delta }m_d`$.

#### 3 Dilution for the Soft-lepton Tag

To maximize the effectiveness of the soft-lepton flavor tag, the data are binned in the $`p_T^{\mathrm{rel}}`$ of the soft lepton. The $`p_T^{\mathrm{rel}}`$ of the soft lepton is defined in the same way as the $`p_T^{\mathrm{rel}}`$ of the trigger lepton. The same-sign and opposite-sign soft-lepton $`p_T^{\mathrm{rel}}`$ distributions are shown in Figure 6, where the sign comparison is between the trigger-lepton charge and soft-lepton charge. The $`p_T^{\mathrm{rel}}<0`$ bin is for the case where the soft lepton is the only track in the jet, so that $`p_T^{\mathrm{rel}}=0`$. If neither $`B`$ decayed in a mixed state and if both leptons are from semileptonic decay, the charge of the trigger lepton would be opposite the charge of the soft lepton. Figure 7 shows the raw dilution $`D_{\mathrm{raw}}`$ for the soft-lepton tagged events. The raw dilution is derived from the number of same-sign and opposite-sign events in each bin of $`p_T^{\mathrm{rel}}`$. The dilution is lower for low $`p_T^{\mathrm{rel}}`$ because fake leptons and leptons from sequential semileptonic decay tend to have relatively low $`p_T^{\mathrm{rel}}`$ values. The $`p_T^{\mathrm{rel}}`$ dependence of $`D_{\mathrm{raw}}`$ was parameterized using the form $$D_{\mathrm{raw}}(p_T^{\mathrm{rel}})=A\left(1-e^{-p_T^{\mathrm{rel}}+B}\right),$$ (17) where $`A`$ and $`B`$ are parameters determined from the data.
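As a numerical illustration of eqs. 8–14 (the numbers below are invented for illustration and are not the measured values), the raw dilution, the event-by-event tag probability, and the statistical weight $`ϵD^2`$ can be computed as follows.

```python
import math

def raw_dilution(n_rs, n_ws):
    """D_raw of eq. 14 from right-sign and wrong-sign counts."""
    return (n_rs - n_ws) / (n_rs + n_ws)

def tag_probability(d_raw, n_d):
    """P_tag of eq. 16, with the dilution normalization N_D."""
    return 0.5 * (1.0 + n_d * d_raw)

def sigma_asymmetry(true_a, eff, dilution, n_total):
    """Statistical uncertainty on the true asymmetry, eq. 13."""
    return math.sqrt((1.0 - dilution**2 * true_a**2) / (eff * dilution**2 * n_total))

# Illustrative (assumed) numbers: 6000 right-sign vs 4000 wrong-sign tags,
# a tagging efficiency of 40%, N_D = 1.2, and 100000 events in total.
d_raw  = raw_dilution(6000, 4000)      # 0.20
d_true = 1.2 * d_raw                   # eq. 15
print(d_raw, d_true, tag_probability(d_raw, 1.2))
print("epsilon*D^2 =", 0.40 * d_true**2)
print("sigma_A     =", sigma_asymmetry(0.1, 0.40, d_true, 100000))
```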
The average raw dilution is used for isolated soft leptons that have no measurement of $`p_T^{\mathrm{rel}}`$. The form of Equation 17 is empirical and was found to describe the shape of $`D_{raw}`$ as a function of $`p_T^{\mathrm{rel}}`$ well in the Monte Carlo. Parameters $`A`$ and $`B`$ are measured separately for the $`e`$ and $`\mu `$ trigger data because the fraction of trigger leptons from sequential decay and the trigger lepton purity are different for the $`e`$ and $`\mu `$ data, which affects the raw dilution. Parameters $`A`$ and $`B`$ are also measured separately for the soft-$`e`$ and soft-$`\mu `$ tags because the fractions of fake soft leptons may be different for soft electrons and soft muons. The fitted values of $`A`$ and $`B`$ and the average raw dilution for isolated (no $`p_T^{\mathrm{rel}}`$) soft leptons for soft-$`e`$ and soft-$`\mu `$ tags in the $`e`$ and $`\mu `$ trigger data are listed in Table II. The values of $`A`$ and $`B`$ determined from the data are assumed to describe the SLT raw dilution as a function of $`p_T^{\mathrm{rel}}`$ for $`b\overline{b}`$ events. This is only an approximation since a small fraction (less than $`10\%`$) of the data are $`c\overline{c}`$ events. A $`c\overline{c}`$ event in which $`c\mathrm{}^+\nu s`$ and $`\overline{c}\mathrm{}^{}\overline{\nu }\overline{s}`$ can produce opposite-sign events in which one of the leptons is associated with a secondary vertex, and the other lepton produces a soft-lepton tag. The soft-lepton tags in $`c\overline{c}`$ events have a much softer $`p_T^{\mathrm{rel}}`$ spectrum than soft-lepton tags in $`b\overline{b}`$ events. The Monte Carlo predicts that $`c\overline{c}`$ events affect the values of $`A`$ and $`B`$ by an amount less than the statistical uncertainties on the fitted values of $`A`$ and $`B`$. The effects of the approximation above are accounted for since the values of $`A`$ and $`B`$ are varied by their statistical uncertainties in the determination of the systematic uncertainties on $`\mathrm{\Delta }m_d`$ and the dilution normalization parameters. #### 4 Dilution for the Jet-charge Tag Figure 8 shows the jet charge distributions for single-vertex and double-vertex events. The data have been divided into events with a positively or negatively-charged trigger lepton ($`\mathrm{}`$). There is an anticorrelation between the sign of the jet charge and the trigger-lepton charge on average. The degree of separation between the $`\mathrm{}^+`$ and $`\mathrm{}^{}`$ distributions is related to the raw dilution of the jet-charge flavor tag, shown in Figure 9 as a function of the magnitude of the jet charge $`|Q_{\mathrm{jet}}|`$. For double-vertex events, the presence of the second secondary vertex increases the probability that the jet selected for the calculation of the jet charge is in fact the other $`B`$ in the event. This translates into a significantly higher raw dilution for double-vertex events. The $`|Q_{\mathrm{jet}}|`$ dependence of $`D_{\mathrm{raw}}`$ is used to predict the probability that the jet-charge tag is correct on an event-by-event basis, just as $`p_T^{\mathrm{rel}}`$ is used for soft leptons. The $`|Q_{\mathrm{jet}}|`$ dependence of $`D_{\mathrm{raw}}`$ in the data in Figure 9 was parameterized with the form $$D_{\mathrm{raw}}(|Q_{\mathrm{jet}}|)=|Q_{\mathrm{jet}}|D_{\mathrm{max}}$$ (18) excluding events with $`|Q_{\mathrm{jet}}|=1`$ (the rightmost data point). For events with $`|Q_{\mathrm{jet}}|=1`$, the average $`|Q_{\mathrm{jet}}|=1`$ dilution is used. 
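A minimal sketch of how the jet-charge parameterization is used per event is given below: Equation 18 supplies the raw dilution for $`|Q_{\mathrm{jet}}|<1`$, the average raw dilution is substituted at $`|Q_{\mathrm{jet}}|=1`$, and Equation 16 converts the result into a tag probability. The slope, the $`|Q_{\mathrm{jet}}|=1`$ dilution and $`N_D`$ below are placeholders, not the values listed in Table III.

```python
def d_raw_jct(q_jet_abs, d_max, d_raw_at_one):
    # Equation 18 for |Q_jet| < 1; events at |Q_jet| = 1 are excluded from the linear
    # parameterization and take the average raw dilution instead.
    if q_jet_abs >= 1.0:
        return d_raw_at_one
    return q_jet_abs * d_max

# Illustrative values only; the fitted slopes and |Q_jet| = 1 dilutions appear in Table III.
D_MAX, D_AT_ONE, N_D_JCT = 0.30, 0.25, 1.9

for q in (0.2, 0.6, 1.0):
    d = d_raw_jct(q, D_MAX, D_AT_ONE)
    print(f"|Q_jet| = {q:.1f}: D_raw = {d:.3f}, P_tag = {0.5 * (1 + N_D_JCT * d):.3f}")   # Equation 16
```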
The slope $`D_{\mathrm{max}}`$ is determined separately for the single-vertex and double-vertex events in the $`e`$ and $`\mu `$ trigger data, respectively. These slopes and the average raw dilution for $`|Q_{\mathrm{jet}}|=1`$ are listed in Table III. We use equation 16, with $`D_{\mathrm{raw}}`$ estimated for each event using equation 17 for SLT events and equation 18 for JCT events, in the determination of $`\mathrm{\Delta }m_d`$ to effectively discriminate between high and low quality flavor tags. ### D Unbinned Maximum Likelihood Fit We use an unbinned maximum likelihood fit to simultaneously determine $`\mathrm{\Delta }m_d`$ and the dilution normalization $`N_D`$. The effectiveness ($`ϵD^2`$) of each flavor tag is derived from the measured $`ϵ`$, $`D_{\mathrm{raw}}`$, and the dilution normalization ($`N_D`$) from the fit. The fraction of $`B^0`$ mesons that decay in a mixed state as a function of the proper time at decay is given by $$F_{\mathrm{mix}}(t)=\frac{1}{2}\left(1\mathrm{cos}(\mathrm{\Delta }m_dt)\right).$$ (19) In a pure $`B_d^0`$ sample with perfect flavor tagging and proper time resolution, $`F_{\mathrm{mix}}(t)`$ would be equivalent to the fraction of same-sign events, comparing the sign of the flavors at decay and production. If the flavor tag is imperfect, equation 19 becomes $$F_{\mathrm{mix}}(t)=P_{\mathrm{tag}}\frac{1}{2}\left(1\mathrm{cos}(\mathrm{\Delta }m_dt)\right)+P_{\mathrm{mistag}}\frac{1}{2}\left(1+\mathrm{cos}(\mathrm{\Delta }m_dt)\right),$$ (20) where $`P_{\mathrm{tag}}`$ ($`P_{\mathrm{mistag}}`$) is the probability that the flavor tag is correct (incorrect). Using the relations $`P_{\mathrm{tag}}`$ $`=`$ $`{\displaystyle \frac{1}{2}}(1+N_DD_{\mathrm{raw}});`$ (21) $`P_{\mathrm{mistag}}`$ $`=`$ $`{\displaystyle \frac{1}{2}}(1N_DD_{\mathrm{raw}});`$ (22) equation 20 reduces to $$F_{\mathrm{mix}}(t)=\frac{1}{2}\left(1N_DD_{\mathrm{raw}}\mathrm{cos}(\mathrm{\Delta }m_dt)\right).$$ (23) Although $`N_D`$ and $`\mathrm{\Delta }m_d`$ are correlated, the basic concept is that $`N_D`$ is determined from the amplitude of the oscillation in the same-sign fraction and $`\mathrm{\Delta }m_d`$ is determined from the frequency. We determine $`\mathrm{\Delta }m_d`$ and $`N_D`$ by minimizing the negative log-likelihood: $$\mathrm{ln}=\underset{i}{\overset{n_{\mathrm{SS}}}{}}\mathrm{ln}(𝒫_{\mathrm{SS}}^i)+\underset{j}{\overset{n_{\mathrm{OS}}}{}}\mathrm{ln}(𝒫_{\mathrm{OS}}^j),$$ (24) where $`𝒫_{\mathrm{SS}}`$ is the probability density for events that are tagged as same-sign, and $`𝒫_{\mathrm{OS}}`$ is the probability density for events that are tagged as opposite-sign. Each event has three inputs into the likelihood: 1. the assignment as same-sign or opposite-sign, which is based on the comparison of the sign of the SLT or JCT flavor tag with the charge of the trigger lepton, 2. the estimated probability that the SLT or JCT flavor tag is correct, using equation 16, with $`D_{\mathrm{raw}}`$ from equation 17 for the SLT flavor tag or from equation 18 for the JCT flavor tag, and 3. the decay distance $`L_{xy}`$. In addition to the above three inputs, we use $`p_T^{\mathrm{cl}}`$ and $`m^{\mathrm{cl}}`$ to select the $`K`$ distribution that is used in the determination of the reconstructed proper decay-distance. The construction of the probability densities requires several parameters. These parameters are listed in Table IV. Both $`𝒫_{\mathrm{SS}}`$ and $`𝒫_{\mathrm{OS}}`$ are the sum of several terms. 
First they have a term for the $`B^0`$ signal: $`𝒫_{\mathrm{OS}}(B^0)=𝒫_{\mathrm{tag}}𝒫_{\mathrm{nomix}}+𝒫_{\mathrm{mistag}}𝒫_{\mathrm{mix}};`$ (25) $`𝒫_{\mathrm{SS}}(B^0)=𝒫_{\mathrm{tag}}𝒫_{\mathrm{mix}}+𝒫_{\mathrm{mistag}}𝒫_{\mathrm{nomix}};`$ (26) where $`𝒫_{\mathrm{tag}}`$ and $`𝒫_{\mathrm{mistag}}`$ are given by equations 21 and 22, respectively, and $`𝒫_{\mathrm{nomix}}`$ and $`𝒫_{\mathrm{mix}}`$ are given by equations 3 and 4, respectively. Next there are terms for the other $`B`$ hadrons, including terms for both $`B^+`$ and $`b`$ baryons, as well as a term for $`B_s^0`$. The terms for $`B^+`$ are $`𝒫_{\mathrm{OS}}(B^+)=𝒫_{\mathrm{tag}}{\displaystyle \frac{1}{\tau _{B^+}}}e^{t/\tau _{B^+}};`$ (27) $`𝒫_{\mathrm{SS}}(B^+)=𝒫_{\mathrm{mistag}}{\displaystyle \frac{1}{\tau _{B^+}}}e^{t/\tau _{B^+}};`$ (28) where $`\tau _{B^+}`$ is the lifetime of the $`B^+`$, and $`t`$ is the proper decay-time. The terms for $`b`$ baryons are similar, except that $`\tau _{B^+}`$ is replaced by the lifetime of the $`b`$ baryons, $`\tau _{\mathrm{baryon}}`$. The terms for $`B_s^0`$ are similar to the terms for $`B^0`$, except that $`\mathrm{\Delta }m_d`$ is replaced by $`\mathrm{\Delta }m_s`$, which is assumed to be very large (i.e. beyond our experimental sensitivity) so that these terms effectively look like $$𝒫_{\mathrm{OS}}(B_s^0)=𝒫_{\mathrm{SS}}(B_s^0)=\frac{1}{2\tau _{B_s^0}}e^{t/\tau _{B_s^0}}.$$ (29) The values of the lifetimes of the $`B`$ hadrons and the value of $`\mathrm{\Delta }m_s`$ used in the probability densities are listed in Table IV. Each term for the various $`B`$ hadrons is multiplied by the expected relative contribution, $`f_{B^0}`$, $`f_{B^+}`$, $`f_{B_s^0}`$, and $`f_{\mathrm{baryon}}`$, of these various hadrons to the data sample. The production fractions are renormalized to take into account the different semileptonic branching fractions of the various $`B`$ hadrons: the semileptonic widths are assumed to be identical, so the semileptonic branching fractions are scaled to agree with the relative lifetimes. In addition to the terms for direct semileptonic decay, there are terms that take into account the contribution from sequential decays. The fraction of sequential decays $`f_{\mathrm{seq}}`$ is listed in Table IV; 75% of these sequential decays produce leptons with a sign opposite to direct semileptonic decay. The terms that take into account the contribution from $`c\overline{c}`$ events are similar to the terms for $`B^+`$, except that we use an assumed flavor tagging dilution and an effective lifetime $`\tau _{c\overline{c}}`$. The true flavor tagging dilution is not known a priori for both bottom and charm decays. Just as for $`B`$ decays, we do not consider the Monte Carlo reliable for predicting the JCT or SLT dilution for charm decays, so we use assumed values for the charm dilution and varied them by the maximum possible amount for the systematic uncertainties. For the JCT, we expect the dilution for charm decays to be worse than for bottom decays, therefore we assume $`D_{c\overline{c}}/D_{b\overline{b}}=0.5`$ for the JCT, and vary $`D_{c\overline{c}}/D_{b\overline{b}}`$ from 0 to 1 in the evaluation of the systematic errors. The ratio $`D_{c\overline{c}}/D_{b\overline{b}}`$ is used to rescale the predicted $`b\overline{b}`$ dilution for the event, based on $`|Q_{jet}|`$ using Equation 18, to give the predicted $`c\overline{c}`$ dilution for the event. 
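The structure of the $`B^0`$ signal terms can be made explicit with a short sketch, in which Equations 3 and 4 are taken to be the usual exponential decay modulated by the mixing oscillation (their exact form is defined earlier in the text) and the lifetime, $`\mathrm{\Delta }m_d`$, $`N_D`$ and $`D_{\mathrm{raw}}`$ values are placeholders; the check confirms that the resulting same-sign fraction reproduces Equation 23.

```python
import numpy as np

tau_B0, dm_d = 1.55, 0.47        # ps and 1/ps; illustrative values only
N_D, D_raw   = 1.8, 0.25         # placeholder dilution inputs for one event

def p_nomix(t):                  # assumed form of Equation 3
    return np.exp(-t / tau_B0) / tau_B0 * 0.5 * (1.0 + np.cos(dm_d * t))

def p_mix(t):                    # assumed form of Equation 4
    return np.exp(-t / tau_B0) / tau_B0 * 0.5 * (1.0 - np.cos(dm_d * t))

p_tag    = 0.5 * (1.0 + N_D * D_raw)     # Equation 21
p_mistag = 0.5 * (1.0 - N_D * D_raw)     # Equation 22

def p_os(t):                     # Equation 25
    return p_tag * p_nomix(t) + p_mistag * p_mix(t)

def p_ss(t):                     # Equation 26
    return p_tag * p_mix(t) + p_mistag * p_nomix(t)

t = np.linspace(0.0, 6.0, 13)    # proper decay time in ps
f_ss = p_ss(t) / (p_ss(t) + p_os(t))
print(np.allclose(f_ss, 0.5 * (1.0 - N_D * D_raw * np.cos(dm_d * t))))   # Equation 23
```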
The SLT dilution for charm decays could be anything from 0 to 1 depending on the fraction of fake soft leptons in $`c\overline{c}`$ events. Unlike bottom decays, the SLT dilution for charm decays does not fall near $`p_T^{\mathrm{rel}}=0`$ due to soft leptons from sequential decays, therefore we assume $`D_{c\overline{c}}=0.5`$, independent of $`p_T^{\mathrm{rel}}`$ for the SLT, and vary $`D_{c\overline{c}}/D_{b\overline{b}}`$ from 0 to 1 in the evaluation of the systematic errors. The effective lifetime $`\tau _{c\overline{c}}=1.53`$ ps is determined from the $`c\overline{c}`$ Monte Carlo samples, where the proper time at decay was reconstructed using the $`K`$ distributions from the $`b\overline{b}`$ Monte Carlo and equation 6. The relative fractions of $`b\overline{b}`$ ($`F_{b\overline{b}}`$) and $`c\overline{c}`$ ($`F_{c\overline{c}}`$), which can be determined from Table I, multiply the terms in the likelihood corresponding to $`b\overline{b}`$ and $`c\overline{c}`$ production, respectively. Finally, for the case of the $`\mu `$-trigger data, a term for fake muons was included with the $`B^0`$ lifetime and zero dilution. The relative fraction of fake muons $`F_{\mathrm{fake}\mu }^{\mathrm{\hspace{0.25em}0}}`$ (discussed in Appendix B) is listed in Table IV. In this case, $`F_{b\overline{b}}`$ and $`F_{c\overline{c}}`$ are scaled by $`1F_{\mathrm{fake}\mu }^{\mathrm{\hspace{0.25em}0}}`$. The probability densities $`P_{\mathrm{SS}}(t)`$ and $`P_{\mathrm{OS}}(t)`$ are functions of the true proper time at decay $`t`$. We take into account the experimental resolution on $`t`$ by convoluting $`P_{\mathrm{SS}}(t)`$ and $`P_{\mathrm{OS}}(t)`$ with $`L_{xy}`$ and $`p_T^B`$ resolution functions. Distributions of $`\delta L_{xy}L_{xy}(\mathrm{measured})L_{xy}(\mathrm{true})`$ from the $`e`$ and $`\mu `$ trigger $`b\overline{b}`$ Monte Carlo samples are parameterized with the sum of three Gaussians, a $`\delta L_{xy}>0`$ exponential, and a $`\delta L_{xy}<0`$ exponential. The $`\delta L_{xy}`$ distributions and their parameterizations are shown in Figure 10 for the $`e`$ and $`\mu `$ trigger data. The $`K`$ distributions, like those shown in Figure 4, are used as $`p_T^B`$ resolution functions. The $`L_{xy}`$ and $`p_T^B`$ resolution functions describe the $`t`$ smearing for $`b\overline{b}`$ events from direct production ($`ggb\overline{b}`$ and $`q\overline{q}b\overline{b}`$). They do not describe backgrounds such as photon conversion events (for the $`e`$ trigger) or events with a fake trigger muon. They also do not describe $`b\overline{b}`$ events from gluon splitting ($`gggg`$ followed by $`gb\overline{b}`$), which tend to have worse $`t`$ resolution due to secondary vertices that include decay products from both $`B`$ hadrons. While the total amounts of these backgrounds are reasonably small, they become more important near $`ct=0`$ and for very large $`ct`$ values (beyond several $`B`$ lifetimes). For these reasons, we only use events with $`0.02\mathrm{cm}<ct<0.30\mathrm{cm}`$ in the fit for $`\mathrm{\Delta }m_d`$ and the dilution normalization $`N_D`$. ## VI Fit Results The free parameters in the unbinned maximum likelihood fit are the mass difference $`\mathrm{\Delta }m_d`$ and the dilution normalization factors $`N_D`$. There are six dilution normalization factors: 1. 
$`N_{D,\mathrm{JCSV}}^e`$ and $`N_{D,\mathrm{JCSV}}^\mu `$: the dilution normalization factors for the jet-charge flavor tag in the case that the jet does not contain a secondary vertex for the electron-trigger and muon-trigger data samples, respectively. 2. $`N_{D,\mathrm{JCDV}}^e`$ and $`N_{D,\mathrm{JCDV}}^\mu `$: the dilution normalization factors for the jet-charge flavor tag in the case that the jet does contain a secondary vertex for the electron-trigger and muon-trigger data samples, respectively. 3. $`N_{D,\mathrm{SLT}}^e`$ and $`N_{D,\mathrm{SLT}}^\mu `$: the dilution normalization factors for the soft-lepton flavor tag for the electron-trigger and muon-trigger data samples, respectively. To determine the dilution normalization factors needed to calculate the flavor tag $`ϵD^2`$ values (see Section VI C), the data are grouped into four subsamples: 1. $`e`$-trigger, soft-lepton flavor tag, 2. $`e`$-trigger, jet-charge flavor tag, 3. $`\mu `$-trigger, soft-lepton flavor tag, and 4. $`\mu `$-trigger, jet-charge flavor tag. There is some overlap of events in these four subsamples as some events have both a SLT and a JCT. About 20% of the events with soft-lepton tags did not pass the single-lepton Level 2 trigger and came, instead, from a dilepton trigger. In order for the SLT efficiency to be well defined, we require that the SLT events pass the single-lepton trigger in data subsamples 1 and 3, which are used to determine $`N_{D,\mathrm{SLT}}^e`$, $`N_{D,\mathrm{SLT}}^\mu `$, and the SLT efficiencies needed for calculating the SLT $`ϵD^2`$. The fit results for the individual flavor taggers are given in Table V. The dilution normalization factors are given for two cases: $`\mathrm{\Delta }m_d`$ free to float in the fit and $`\mathrm{\Delta }m_d`$ fixed to the world average ( 0.474 $`\mathrm{}\mathrm{ps}^1`$). The value of $`\mathrm{\Delta }m_d`$ is held fixed to the world average because $`\mathrm{\Delta }m_d`$ and the $`N_D`$ factors are correlated. The correlation coefficients between $`\mathrm{\Delta }m_d`$ and the $`N_D`$ constants range from 0.55 to 0.81. Fixing $`\mathrm{\Delta }m_d`$ reduces the statistical uncertainty on the dilution normalization constants and removes any bias from statistical fluctuations that pull $`\mathrm{\Delta }m_d`$ high or low. The $`N_D`$ factors determined with $`\mathrm{\Delta }m_d`$ fixed to the world average are used in the calculation of the flavor tag $`ϵD^2`$ values. The $`N_D`$ factors in Table V are consistent with our expectations from the composition of the data (see Appendix A). To determine $`\mathrm{\Delta }m_d`$, we fit all four data subsamples simultaneously. For events with both a SLT and a JCT, we use the SLT because it has significantly higher average dilution. The results of the simultaneous fit of the $`e`$ and $`\mu `$ trigger data using both flavor tagging methods are listed in Table VI. We find $`\mathrm{\Delta }m_d=0.500\pm 0.052\mathrm{}\mathrm{ps}^1`$, where the uncertainty is statistical only. This is consistent with the world average value (0.464 $`\pm `$ 0.018 $`\mathrm{}\mathrm{ps}^1`$. Using the SLT tag in doubly flavor tagged events removes events with higher-than-average dilution from the JCT. This results in lower values of $`N_D`$ for the jet-charge flavor tag than found in the fits of the individual samples listed in Table V. Figure 11 shows the fraction of same-sign events as a function of the reconstructed proper decay length $`ct`$. The points with error bars are the data. 
The curve is a representation of the results of the fit for $`\mathrm{\Delta }m_d`$ and $`N_D`$. Figure 11 has been included to illustrate the clear evidence of $`B_d^0`$ mixing in the data. However, it does not contain all of the information that goes into the unbinned likelihood fit. In Figure 11, all events are treated equally. In the fit, events are effectively weighted based on their estimated dilution. ### A Check of Fitting Procedure To check our fitting procedure, we used a fast Monte Carlo which generated hundreds of data samples, each with the same statistics, tagging dilution, and $`t`$ resolution as the real data. Figure 12 shows the fit results of 400 fast Monte Carlo samples, representing the SLT flavor tagged, $`e`$-trigger data. The top row of plots show distributions of the fitted values of $`\mathrm{\Delta }m_d`$ and $`N_D`$ for the 400 samples. The arrows indicate the values of $`\mathrm{\Delta }m_d`$ and $`N_D`$ with which the samples were generated. The mean of each distribution is consistent with the generation value. The middle row of plots show the distributions of the statistical uncertainty on $`\mathrm{\Delta }m_d`$ and $`N_D`$. The arrows indicate the statistical uncertainty from the fit to the SLT flavor tagged, $`e`$-trigger data. The statistical uncertainties on $`\mathrm{\Delta }m_d`$ and $`N_D`$ for the data are near the most probable values from the fast Monte Carlo samples. The bottom row of plots show the distributions of the deviation of the fitted value from the generation value divided by the statistical uncertainty for $`\mathrm{\Delta }m_d`$ and $`N_D`$. These distributions have a mean of zero and unit width, which confirms that $`\mathrm{\Delta }m_d`$ and $`N_D`$ are unbiased and that the statistical uncertainty is correct. ### B Systematic Uncertainties To determine the systematic uncertainty on $`\mathrm{\Delta }m_d`$, the fixed input parameters in Table IV were varied by the amounts listed in this Table. To determine the systematic uncertainty associated with the dilution parameterizations, the parameters describing the $`D_{\mathrm{raw}}`$ dependence on $`|Q_{\mathrm{jet}}|`$ and $`p_T^{\mathrm{rel}}`$ of the soft lepton were varied by their statistical uncertainties. A systematic uncertainty to account for the possible presence of fake secondary vertices from non-heavy flavor backgrounds as well as heavy flavor events (gluon splitting) was determined using a combination of data and Monte Carlo. The observed excess in the data over the combined contribution of $`b\overline{b}`$ and $`c\overline{c}`$ (see Figure 5) was used to define the shape of this background. This shape was included in the likelihood function with a dilution that varied from 0 to $`D_{b\overline{b}}`$. The systematic uncertainty assigned to the variation of each input parameter was the shift in $`\mathrm{\Delta }m_d`$ from the fit result with the nominal values of the input parameters. The total systematic uncertainty is the sum in quadrature of these shifts. The same procedure was applied to the four data subsamples ($`e`$ SLT, $`e`$ JCT, $`\mu `$ SLT, $`\mu `$ JCT) to determine the systematic uncertainties for the dilution normalization factors. The procedure described above was checked for the largest individual contributions to the systematic uncertainty using fast Monte Carlo samples. Samples generated with a variation on one of the input parameters (e.g. $`\tau _{B^+}/\tau _{B^0}=1.02+0.05`$) were fit using the nominal parameter (e.g. $`\tau _{B^+}/\tau _{B^0}=1.02`$). 
The average bias on the fitted values for the fast Monte Carlo samples was consistent with the deviation on the fitted values observed when the data are fit with a fixed parameter variation. Table VII lists the individual contributions to the systematic uncertainty on $`\mathrm{\Delta }m_d`$ ($`\pm 0.043\mathrm{}\mathrm{ps}^1`$). The largest single contribution ($`\pm 0.032\mathrm{}\mathrm{ps}^1`$) is the unknown soft-lepton flavor tag dilution for $`c\overline{c}`$ events. The SLT $`c\overline{c}`$ dilution was varied over the full possible range (0 to 1). If the assumed $`c\overline{c}`$ dilution in the fit is lower (higher) than its true value, the fraction of same-sign events at small $`t`$ will appear lower (higher), which biases $`\mathrm{\Delta }m_d`$ low (high). The second largest contribution ($`\pm 0.021\mathrm{}\mathrm{ps}^1`$) is the uncertainty on the lifetime ratio of the $`B^+`$ and the $`B^0`$ mesons. If the value of $`\tau _{B^+}/\tau _{B^0}`$ in the fit is higher (lower) than its true value, the fraction of same-sign events at short $`t`$ values will appear larger (smaller), which biases $`\mathrm{\Delta }m_d`$ high (low). The third largest contribution ($`\pm 0.009\mathrm{}\mathrm{ps}^1`$) is the uncertainty on the SLT raw dilution parameterization: $`D_{raw}`$ as a function of soft-lepton $`p_T^{\mathrm{rel}}`$. The size of the variation for the SLT is shown by the dashed curves in Figure 7. The raw dilution for events with no $`p_T^{\mathrm{rel}}`$ measurement was independently varied by the statistical uncertainty on the raw dilution for no-$`p_T^{\mathrm{rel}}`$ events. Tables VIII and IX list the contributions to the systematic uncertainty on the dilution normalization factors for the $`e`$ and $`\mu `$ trigger data respectively. As for the systematic uncertainty on $`\mathrm{\Delta }m_d`$, the largest contribution comes from the unknown dilution for $`c\overline{c}`$ events. The fraction of $`B_s`$ events, which we assume are half same-sign and half-opposite sign, and the fraction of events where the trigger lepton is from a sequential decay both affect the assignment of same-sign events. If these fractions are low (high), more (less) same-sign events will be attributed to mistags, thus they are strongly coupled to the dilution normalization. The uncertainty on the raw dilution parameterizations also has a large effect on the dilution normalization systematic uncertainty. ### C Flavor Tag $`ϵD^2`$ The measurement of the statistical power $`ϵD^2`$ of the flavor tagging methods is done in two steps. First, the raw $`ϵD^2`$ is calculated using the raw dilution rather than the true dilution. Then, the raw $`ϵD^2`$ is rescaled by $`N_D^2`$, which translates the raw dilution to the true dilution. The $`N_D`$ factor for each flavor tagging method in the $`e`$ and $`\mu `$ samples is determined in the unbinned maximum likelihood fit with $`\mathrm{\Delta }m_d`$ fixed to the world average (0.474 $`\mathrm{}\mathrm{ps}^1`$). These $`N_D`$ values are given in Table V, where the first uncertainty is statistical and the second systematic. The raw or uncorrected value of $`ϵD^2`$ is obtained by summing the $`ϵD_{\mathrm{raw}}^2`$ values in the bins of either $`p_T^{\mathrm{rel}}`$ for the SLT or $`|Q_{\mathrm{jet}}|`$ for the JCT. The efficiency for each bin is defined as the number of events in the bin divided by the total number of events before flavor tagging. 
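The bookkeeping behind the raw and true $`ϵD^2`$ values can be written out explicitly as below; the bin contents and the value of $`N_D`$ are placeholders rather than the measured quantities listed in Table X.

```python
# Hypothetical (tagged events, raw dilution) in bins of pt_rel or |Q_jet|
bins = [(1200, 0.10), (900, 0.18), (700, 0.24), (500, 0.28), (300, 0.31)]
n_before_tagging = 40000           # events in the sample before the flavor tag is applied
N_D = 1.8                          # dilution normalization from the likelihood fit

eps_total   = sum(n for n, _ in bins) / n_before_tagging
raw_eps_d2  = sum((n / n_before_tagging) * d**2 for n, d in bins)   # sum of eps_i * D_raw,i^2
true_eps_d2 = N_D**2 * raw_eps_d2                                   # rescale raw -> true dilution

print(f"efficiency = {eps_total:.1%}, raw eps*D^2 = {raw_eps_d2:.3%}, true eps*D^2 = {true_eps_d2:.3%}")
```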
Table X lists the total efficiency, raw $`ϵD^2`$, dilution normalization, and true $`ϵD^2`$ for each of the flavor tagging methods. Taking the average of the $`e`$ and $`\mu `$ trigger data, we find $`ϵD^2`$ to be ($`0.78\pm 0.12\pm 0.08`$)% for the JCT and ($`0.91\pm 0.10\pm 0.11`$)% for the SLT where the first uncertainty is statistical and the second systematic. These $`ϵD^2`$ values are about one order of magnitude lower than typical flavor tagging techniques employed on the $`Z^0`$ resonance , however the large $`b\overline{b}`$ cross section in $`p\overline{p}`$ collisions at $`\sqrt{s}=1.8`$ TeV yields $`b\overline{b}`$ samples that are about one order of magnitude larger at CDF than those collected on the $`Z^0`$ resonance. The statistical uncertainty for our measurement of $`\mathrm{\Delta }m_d`$ is still competitive with similar measurements on the $`Z^0`$ resonance, since our smaller $`ϵD^2`$ is compensated by our larger sample size. The values of $`ϵD^2`$ for these flavor tags depend on the data sample in which they are used. In particular, during next run of the Tevatron, we will collect large samples of $`B^0/\overline{B}^0J/\psi K_\mathrm{S}^0`$ (for the precise measurement of the $`CP`$ asymmetry parameter $`\mathrm{sin}(2\beta )`$) and hadronic $`B_s^0`$ decays (for the precise determination of $`\mathrm{\Delta }m_s`$). The triggers used to collect these data samples will be different from the inclusive lepton trigger used to collect the data for this analysis. As a result, the $`B`$ hadron production properties (e.g., $`p_T`$ of the $`B`$) are different, and this affects $`ϵD^2`$. Despite these differences, the results in this paper demonstrate that both the jet-charge and the soft-lepton flavor tagging methods are viable in the environment of $`p\overline{p}`$ collisions. ## VII Summary We have measured $`\mathrm{\Delta }m_d`$ using soft-lepton and jet-charge flavor tagging methods. This is the first application of jet-charge flavor tagging in a hadron-collider environment. The flavor at decay was inferred from the charge of the trigger lepton, which was assumed to be the product of semileptonic $`B`$ decay. The initial flavor was inferred from the other $`B`$ in the event, either using the charge of a soft lepton or the jet charge of the other $`B`$. The proper-time at decay for each event was determined from a partial reconstruction of the decay vertex of the $`B`$ that produced the trigger lepton and an estimate of the $`B`$ momentum. The value of $`\mathrm{\Delta }m_d`$ was determined with an unbinned maximum likelihood fit of the same-sign and opposite-sign proper-time distributions (comparing the sign of the trigger-lepton charge and the flavor tag). The statistical power of the flavor tagging methods was measured in the unbinned maximum likelihood fit by fitting for a scale factor $`N_D`$, for each of the flavor tagging methods, which is the ratio of the raw dilution and the true dilution. We find $`\mathrm{\Delta }m_d=0.500\pm 0.052\pm 0.043\mathrm{}\mathrm{ps}^1`$, where the first uncertainty is statistical and the second systematic. This is consistent with the world average value of $`0.464\pm 0.018\mathrm{}\mathrm{ps}^1`$ and competitive in precision with other individual measurements of $`\mathrm{\Delta }m_d`$. We quantify the statistical power of the flavor tagging methods with $`ϵD^2`$, which is the tagging efficiency multiplied by the square of the dilution. 
We find $`ϵD^2`$ to be ($`0.78\pm 0.12\pm 0.08`$)% for the jet-charge flavor tag and ($`0.91\pm 0.10\pm 0.11`$)% for the soft-lepton flavor tag, where the first uncertainty is statistical and the second systematic. These $`ϵD^2`$ are much lower than what has been achieved in experiments on the $`Z^0`$ resonance, however we have demonstrated that the much higher $`b\overline{b}`$ cross-section at the Tevatron ($`p\overline{p}`$, $`\sqrt{s}=1.8`$ TeV) can be used to compensate for the disadvantage in $`ϵD^2`$. The jet-charge and soft-lepton flavor tagging techniques will be important tools in the study of $`CP`$ violation in the up-coming run of the Tevatron. ## Acknowledgments We thank the Fermilab staff and the technical staffs of the participating institutions for their vital contributions. This work was supported by the U.S. Department of Energy and National Science Foundation; the Italian Istituto Nazionale di Fisica Nucleare; the Ministry of Education, Science and Culture of Japan; the Natural Sciences and Engineering Research Council of Canada; the National Science Council of the Republic of China; the Swiss National Science Foundation; and the A. P. Sloan Foundation. ## A The Dilution Normalization Factor The true dilution $`D`$ of a flavor tagging method is defined as $$D=2P_{\mathrm{tag}}1,$$ (A1) where $`P_{\mathrm{tag}}`$ is the probability that the flavor tag is correct. An equivalent expression for $`D`$ is $$D=\frac{N_TN_M}{N_T+N_M},$$ (A2) where $`N_T`$ ($`N_M`$) is the number of correct (incorrect) tags in a sample of $`N_{\mathrm{total}}=N_T+N_M`$ events. The raw dilution is defined as $$D_{\mathrm{raw}}=\frac{N_{OS}N_{SS}}{N_{OS}+N_{SS}},$$ (A3) where $`N_{OS}`$ ($`N_{SS}`$) is the number of opposite-sign (same-sign) events in the sample, comparing the trigger-lepton charge with either the sign of the soft-lepton charge or the jet charge. If the data were pure $`b\overline{b}`$ with no $`B`$ mixing and all of the trigger leptons were from direct $`B`$ decay, all opposite (same) sign events would be correctly (incorrectly) flavor-tagged. That is, we would have $`N_T=N_{OS}`$, $`N_M=N_{SS}`$, and $`D=D_{\mathrm{raw}}`$. There are, however, several things in the data that break the $`N_T=N_{OS}`$ and $`N_M=N_{SS}`$ assumptions. They are * $`B`$ mixing: If the trigger lepton is from a $`B`$ hadron that decays in a state opposite its original flavor, the trigger-lepton charge will have the “wrong” sign. In this case, events with the correct flavor tag are same-sign. * Sequential decays: The charge of trigger leptons from sequential $`B`$ decay ($`bc\mathrm{}sX`$) is opposite that of direct $`B`$ decay. For trigger leptons that are from sequential decay, events with the correct flavor tag are same-sign, if the trigger-lepton $`B`$ did not mix. * $`c\overline{c}`$ events: Events from $`c\overline{c}`$ production may have a non-zero dilution that is not the same as the dilution from $`b\overline{b}`$ events. * Fake leptons: The $`e`$-trigger data has essentially no fake trigger electrons. However, about 12% of the $`\mu `$-trigger data have a hadron that faked a muon, whose charge is random (see Appendix B). 
If there were no fake leptons, the number of opposite-sign and same-sign events from $`b\overline{b}`$ production are given by $`N_{OS}^{b\overline{b}}`$ $`=`$ $`(1f_{\mathrm{seq}}^{\mathrm{ws}})\left[(1\overline{\chi }^{})N_T^{b\overline{b}}+\overline{\chi }^{}N_M^{b\overline{b}}\right]`$ (A5) $`+f_{\mathrm{seq}}^{\mathrm{ws}}\left[\overline{\chi }^{}N_T^{b\overline{b}}+(1\overline{\chi }^{})N_M^{b\overline{b}}\right]`$ $`N_{SS}^{b\overline{b}}`$ $`=`$ $`(1f_{\mathrm{seq}}^{\mathrm{ws}})\left[\overline{\chi }^{}N_T^{b\overline{b}}+(1\overline{\chi }^{})N_M^{b\overline{b}}\right]`$ (A7) $`+f_{\mathrm{seq}}^{\mathrm{ws}}\left[(1\overline{\chi }^{})N_T^{b\overline{b}}+\overline{\chi }^{}N_M^{b\overline{b}}\right]`$ where $`f_{\mathrm{seq}}^{\mathrm{ws}}`$ is the fraction of trigger leptons in $`b\overline{b}`$ events that are from sequential decay in which the trigger-lepton charge has the “wrong” sign, $`\overline{\chi }^{}`$ is the effective<sup>*</sup><sup>*</sup>* It is an “effective” probability because our secondary vertexing method is inefficient for low values of $`t`$, which causes $`\overline{\chi }^{}`$ to be larger than $`\overline{\chi }`$ . probability that the $`B`$ hadron that produced the trigger lepton decayed in a mixed state, and $`N_{OS}^{b\overline{b}}`$ ($`N_{SS}^{b\overline{b}}`$) is the number of same-sign (opposite-sign) $`b\overline{b}`$ events. For events from $`c\overline{c}`$ production we have $`N_{OS}^{c\overline{c}}`$ $`=`$ $`N_T^{c\overline{c}}`$ (A8) $`N_{SS}^{c\overline{c}}`$ $`=`$ $`N_M^{c\overline{c}}`$ (A9) where $`N_{OS}^{c\overline{c}}`$ ($`N_{SS}^{c\overline{c}}`$) is the number of same-sign (opposite-sign) $`c\overline{c}`$ events. Using Equations A3, A5, A7, A8, and A9 the raw dilution can be written as $$D_{\mathrm{raw}}=\frac{(12\overline{\chi }^{})(12f_{\mathrm{seq}}^{\mathrm{ws}})(N_T^{b\overline{b}}N_M^{b\overline{b}})+(N_T^{c\overline{c}}N_M^{c\overline{c}})}{N_T^{b\overline{b}}+N_M^{b\overline{b}}+N_T^{c\overline{c}}+N_M^{c\overline{c}}}$$ (A10) If a fraction of the events ($`F_{\mathrm{fake}\mathrm{}}^{\mathrm{\hspace{0.25em}0}}`$) have a fake trigger lepton whose charge-sign is random, these events will have a raw dilution of zero since the number of same-sign fake-lepton events will equal the number of opposite-sign fake-lepton events. Taking fake leptons into account gives $$D_{\mathrm{raw}}=(1F_{\mathrm{fake}\mathrm{}}^{\mathrm{\hspace{0.25em}0}})\frac{(12\overline{\chi }^{})(12f_{\mathrm{seq}}^{\mathrm{ws}})(N_T^{b\overline{b}}N_M^{b\overline{b}})+(N_T^{c\overline{c}}N_M^{c\overline{c}})}{N_T^{b\overline{b}}+N_M^{b\overline{b}}+N_T^{c\overline{c}}+N_M^{c\overline{c}}}$$ (A11) Using Equation A2, we define the true flavor tagging dilution in $`b\overline{b}`$ and $`c\overline{c}`$ events as $$D_{b\overline{b}}\frac{N_T^{b\overline{b}}N_M^{b\overline{b}}}{N_T^{b\overline{b}}+N_M^{b\overline{b}}}$$ (A12) and $$D_{c\overline{c}}\frac{N_T^{c\overline{c}}N_M^{c\overline{c}}}{N_T^{c\overline{c}}+N_M^{c\overline{c}}}$$ (A13) respectively. The fraction of events from $`b\overline{b}`$ and $`c\overline{c}`$ production are defined by $$F_{b\overline{b}}\frac{N_T^{b\overline{b}}+N_M^{b\overline{b}}}{N_T^{b\overline{b}}+N_M^{b\overline{b}}+N_T^{c\overline{c}}+N_M^{c\overline{c}}}$$ (A14) and $$F_{c\overline{c}}\frac{N_T^{c\overline{c}}+N_M^{c\overline{c}}}{N_T^{b\overline{b}}+N_M^{b\overline{b}}+N_T^{c\overline{c}}+N_M^{c\overline{c}}}$$ (A15) respectively. 
Combining Equations A10, A12, A13, A14, A15 gives $`D_{\mathrm{raw}}`$ $`=`$ $`(1F_{\mathrm{fake}\mathrm{}}^{\mathrm{\hspace{0.25em}0}})\left[(12\overline{\chi }^{})(12f_{\mathrm{seq}}^{\mathrm{ws}})F_{b\overline{b}}D_{b\overline{b}}+F_{c\overline{c}}D_{c\overline{c}}\right]`$ (A16) $`D_{\mathrm{raw}}`$ $`=`$ $`\left\{(1F_{\mathrm{fake}\mathrm{}}^{\mathrm{\hspace{0.25em}0}})\left[(12\overline{\chi }^{})(12f_{\mathrm{seq}}^{\mathrm{ws}})F_{b\overline{b}}+F_{c\overline{c}}{\displaystyle \frac{D_{c\overline{c}}}{D_{b\overline{b}}}}\right]\right\}D_{b\overline{b}}`$ (A17) $`D_{\mathrm{raw}}`$ $`=`$ $`{\displaystyle \frac{1}{N_D}}D_{b\overline{b}}`$ (A18) where we have defined the dilution normalization factor $`N_D`$ as $$\frac{1}{N_D}(1F_{\mathrm{fake}\mathrm{}}^{\mathrm{\hspace{0.25em}0}})\left[(12\overline{\chi }^{})(12f_{\mathrm{seq}}^{\mathrm{ws}})F_{b\overline{b}}+F_{c\overline{c}}\frac{D_{c\overline{c}}}{D_{b\overline{b}}}\right].$$ (A19) Equation A19 can be used to calculate the expected values for the $`N_D`$ parameters. For this calculation, we will assume * $`\overline{\chi }^{}0.20`$ from the Monte Carlo. * $`f_{\mathrm{seq}}^{\mathrm{ws}}(e\mathrm{trigger})=0.07`$ and $`f_{\mathrm{seq}}^{\mathrm{ws}}(\mu \mathrm{trigger})=0.10`$ using $`f_{\mathrm{seq}}^{\mathrm{ws}}=0.75\times f_{\mathrm{seq}}`$ and the values in Table IV. * The $`F_{b\overline{b}}`$ values are given in Table I. We also use $`F_{c\overline{c}}=1F_{b\overline{b}}`$. * For the JCT, we assume $`D_{c\overline{c}}/D_{b\overline{b}}(\mathrm{JCT})=0.5`$. Using the average SLT dilution and the assumption that $`D_{c\overline{c}}(\mathrm{SLT})=0.5`$, we estimate $`D_{c\overline{c}}/D_{b\overline{b}}(\mathrm{SLT})1.3`$. Using the numbers above, we find * $`N_{D,\mathrm{SLT}}^e1.8`$. * $`N_{D,\mathrm{SLT}}^\mu 2.1`$. * $`N_{D,\mathrm{JCSV}}^eN_{D,\mathrm{JCDV}}^e1.9`$. * $`N_{D,\mathrm{JCSV}}^\mu N_{D,\mathrm{JCDV}}^\mu 2.4`$. ## B Fraction of Fake Trigger Muons As is stated in Section IV C, we believe that most of the fake trigger-muons in the data are from heavy flavor decay. There may be some correlation on average between the sign fake muon charge and the $`B`$ flavor at decay, however we assume that this correlation is smaller than that of real trigger-muons. Fake muon events are divided into two groups: 1. Fake muon events whose dilution is the same as the dilution for real muons. 2. Fake muon events whose dilution is zero. We treat group 1 as if they are real trigger-muon events. We treat group 2 as if they are $`b\overline{b}`$ events with a flavor tagging dilution of zero. We determine the fraction of events with fake muons that have zero dilution in the $`\mu `$-trigger data by assuming that the true flavor tagging dilution is the same in the $`e`$ and $`\mu `$ trigger data. The raw dilution ($`D_{\mathrm{raw}}`$), which assumes all opposite-sign (same-sign) events are tags (mistags), is different for the $`e`$ and $`\mu `$ trigger data for the following reasons: 1. The fraction of real trigger-leptons in $`b\overline{b}`$ events that are not from direct $`b\mathrm{}`$ decay ($`f_{\mathrm{seq}}`$) is 9.4% for the $`e`$-trigger data and 13.6% for the $`\mu `$-trigger data. 2. The fraction of events from $`c\overline{c}`$ production $`F_{c\overline{c}}`$ is slightly different (see Table I). 3. We estimate that only 1% of the trigger electrons are fake, while, as shown below, about 10% of the $`\mu `$-trigger events contain a fake muon. 
We can correct $`D_{\mathrm{raw}}`$ for (1) and (2) using the equation $$D_{\mathrm{raw}}^{}=\frac{D_{\mathrm{raw}}}{F_{b\overline{b}}(12f_{\mathrm{seq}}^{\mathrm{ws}}+F_{c\overline{c}}D_{c\overline{c}}/D_{b\overline{b}})}$$ (B1) where $`f_{\mathrm{seq}}^{\mathrm{ws}}`$ is the fraction of non $`b\mathrm{}`$ decays that have the “wrong” sign. The Monte Carlo gives $`f_{\mathrm{seq}}^{\mathrm{ws}}=0.75f_{\mathrm{seq}}`$. The values of $`D_{\mathrm{raw}}^{}`$ for the SLT, JCSV, and JCDV flavor tagging methods in the $`e`$ and $`\mu `$-trigger data are given in Table XI. The weighted average of $`D_{\mathrm{raw}}^{}(e)/D_{\mathrm{raw}}^{}(\mu )`$ for the SLT, JCSV, and JCDV flavor tagging methods gives 1.14 $`\pm `$ 0.08. The fraction of events with fake muons that have zero dilution can be extracted using $$F_{\mathrm{fake}\mu }^{\mathrm{\hspace{0.25em}0}}=1\frac{1}{D_{\mathrm{raw}}^{}(e)/D_{\mathrm{raw}}^{}(\mu )}.$$ (B2) Equation B2 gives $`F_{\mathrm{fake}\mu }^{\mathrm{\hspace{0.25em}0}}=12\pm 6`$ %. The relatively large uncertainty on $`F_{\mathrm{fake}\mu }^{\mathrm{\hspace{0.25em}0}}`$ gives a significant systematic uncertainty on the dilution normalization $`N_D`$ for the flavor tags (see Table IX), however, the contribution to the systematic uncertainty on $`\mathrm{\Delta }m_d`$ is relatively small.
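As a quick arithmetic check of Equation B2, using the weighted-average ratio quoted above and simple first-order propagation of its uncertainty:

```python
ratio, ratio_err = 1.14, 0.08            # D'_raw(e) / D'_raw(mu), weighted average over the tags

f_fake = 1.0 - 1.0 / ratio               # Equation B2
f_fake_err = ratio_err / ratio**2        # first-order propagation of the ratio uncertainty

print(f"F_fake_mu^0 = ({100*f_fake:.0f} +/- {100*f_fake_err:.0f})%")   # (12 +/- 6)%, as quoted
```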
# A Survey for Cool White Dwarfs and the age of the Galactic Disc ## 1 Introduction A reliable age for the local Galactic Disc places a valuable constraint on both globular cluster ages and cosmological models. A number of independent methods of investigating this problem have been employed in the past (eg. Jimenez 1998 and references therein), resulting in a broad consensus that the lower limit for the Disc age lies between 8 and 12 Gyr. Potentially one of the most reliable means of estimating the Disc age is via cool white dwarf (CWD) stars. These estimates use the idea, first proposed by Schmidt (1959), that in a galaxy of finite age there will be a temperature beyond which the oldest, coolest white dwarfs (WDs) have not had time to cool. This predicted cut-off in the luminosity function (LF) of WDs, if satisfactorily observed, can then be used in conjunction with WD cooling models to derive the Disc age. CWDs are difficult to find, being both extremely faint and of similar colour to the numerous K and M-type dwarfs; and are almost exclusively discovered by means of their proper motion. The cut-off in the WDLF was observed (Liebert, Dahn & Monet 1988, hereafter LDM) after thorough follow up observations of CWD candidates drawn from the Luyten Half Second (LHS) Catalogue (Luyten 1979). Although at that time a Disc age of $`9.3\pm 2`$ Gyr was derived from this sample (Winget et al. 1987), further observations and improvements in model atmospheres (Bergeron, Ruiz & Leggett 1997, hereafter BRL) and theoretical LFs (Wood 1992, 1995) has prompted a recent redetermination of the Disc age for the same sample (Leggett, Ruiz & Bergeron 1998, hereafter LRB), yielding a value of $`8\pm 1.5`$ Gyr. While the existence of the cut-off in the LDM WDLF has not been challenged by subsequent observational work, the details of its precise position and shape have. A sample of CWDs found using common proper motion binaries (CPMBs), again culled from the Luyten surveys, suggest that there are $`5`$ times more very faint CWDs than found by LDM (Oswalt et al. 1996, hereafter OSWH). A Disc age of $`9.5\pm 1`$ Gyr was found using this sample, and the factor of $`5`$ increase in the faintest WDs has been confirmed by an independent search for CWDs in the south (Ruiz and Takamiya 1995). Until now, the proper motion catalogues used to extract samples of CWDs have been produced by ‘blink’ comparison. While these surveys have clearly been successful in picking up individual stars of low luminosity and high proper motion, the use of such a subjective survey technique raises worries concerning completeness. The advent of high precision micro-densitometers such as SuperCOSMOS (Hambly 1998 and references therein) allow proper motions and magnitudes to be calculated objectively using a series of plates in the same field. Hambly et al. (1997) have recently discovered in Taurus perhaps the coolest known WD using just such a procedure. This object, WD 0346+246, should certainly appear in the LHS Catalogue, and would then presumably have been included in the LDM sample since it fulfills the given criteria. This object was discovered serendipitously from work in a particular $`6\times 6^{}`$ Schmidt field, and therefore casts further doubt on the completeness of the Luyten catalogue. 
In this work we exploit both the extensive collection of over 300 plates in ESO/SERC field 287 and the power of the SuperCOSMOS measuring machine to produce a deep, complete multi-colour proper motion survey from which a sample of CWDs has been extracted. Much attention was given to the critical question of completeness, with specific reference to the choice of survey limits and the sources of potential contaminants. The paper is organised as follows: the plate database and reduction of the digitally scanned data is described in Section 2; the choice of survey limits to combat sample contamination and the method of WD sample selection are discussed in Section 3; in Section 4 the important question of the high proper motion limit is addressed by independent tests; some follow up observations of our WD candidates are presented in Section 5; stellar parameters are calculated for our sample in Section 6, which also includes a discussion of potential contaminant populations; the WDLF from our sample is presented in Section 7, and the derived Disc ages discussed in Section 8. ## 2 Schmidt Plate Data Reduction In the course of a long term quasar variability study (eg. Véron & Hawkins 1995) over 300 Schmidt plates have been taken in ESO/SERC field 287. Figure 1 shows the distribution over time of the plate collection in this field in the two principal passbands available, $`\mathrm{B}_\mathrm{J}`$ and $`\mathrm{R}_\mathrm{F}`$ . Additional plate material exists in the U (19 plates), V (11 plates) and I (40 plates) passbands. All these plates have been digitally scanned by either the SuperCOSMOS plate measuring machine or its predecessor COSMOS (MacGillivray and Stobie, 1984). As described in Appendix A, a CWD survey utilising the reduced proper motion (RPM) population discrimination technique requires photometry in at least two passbands and proper motion data. Although for most stellar types a survey of this sort will be proper motion limited, for extremely intrinsically faint objects such as cool degenerates the photometric survey limits also become important. A principal concern in this work is therefore to maximise the survey depth. The technique of stacking digitised Schmidt plates has been in existence for some time (eg. Hawkins 1991; Kemp & Meaburn 1993; Schwartzenberg et al. 1996), and has recently been thoroughly investigated with SuperCOSMOS (Knox et al. 1998). The database presented in Figure 1 is an obvious candidate for stacking in a proper motion survey, since at least 4 plates are available in most years and the use of 4 plate stacks rather than single plate data will yield deeper photometric survey limits with no reduction in the proper motion time baseline. The data has therefore been grouped into 4 plate stacks according to Table 1. In order to track objects through epochs and calculate relative proper motions, all the stacks have been shifted to a common co-ordinate system corresponding to a measure of the high quality $`\mathrm{B}_\mathrm{J}`$ plate J3376. This is achieved by a global transformation followed by local transformations (translation, rotation and scale), performed by splitting the stack area into a grid of 16x16 small areas (Hawkins 1986). Every object in each area is used to define the local transformation (since fainter objects are more numerous, they essentially define the astrometric coordinate system). Objects are then paired up between epochs. 
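The local plate-to-reference transformation described above can be sketched as a simple least-squares problem. The snippet below fits a translation, rotation and scale mapping the objects in one grid cell onto the reference frame; the toy positions, noise level and transform parameters are invented for illustration, and the production software naturally differs in detail.

```python
import numpy as np

def fit_similarity(xy_stack, xy_ref):
    # Model: x' = a*x - b*y + tx,  y' = b*x + a*y + ty, with a = s*cos(theta), b = s*sin(theta)
    x, y = xy_stack[:, 0], xy_stack[:, 1]
    A = np.zeros((2 * len(x), 4))
    A[0::2] = np.column_stack([x, -y, np.ones_like(x), np.zeros_like(x)])   # x' equations
    A[1::2] = np.column_stack([y,  x, np.zeros_like(x), np.ones_like(x)])   # y' equations
    params, *_ = np.linalg.lstsq(A, xy_ref.reshape(-1), rcond=None)
    a, b, tx, ty = params
    return np.hypot(a, b), np.arctan2(b, a), tx, ty     # scale, rotation (rad), shifts

# Toy demonstration: positions on one stack, mapped to the reference frame with a
# known transform plus centroiding noise, then recovered by the fit.
rng = np.random.default_rng(0)
stack = rng.uniform(0.0, 2000.0, size=(500, 2))
s, th, shift = 1.0005, np.deg2rad(0.01), np.array([3.0, -2.0])
R = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
ref = s * stack @ R.T + shift + rng.normal(0.0, 0.3, stack.shape)
print(fit_similarity(stack, ref))      # recovers ~ (1.0005, 1.7e-4 rad, 3.0, -2.0)
```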
The software used in this work operates using a fixed ‘box size’ (13arcecs) in which it looks for a pair, thus in the first instance a high proper motion limit is imposed on the survey – this issue will be addressed in detail later. The pairing procedure yields a list of x and y coordinates for each object, one set for each stack the object is found on. Calculating proper motions is then simply a matter of performing a linear regression fit to each object’s x and y coordinates as a function of time. However, erroneous pairing inevitably occurs between stacks, and we therefore wish to perform some form of bad point rejection to reduce contamination by spurious proper motions. In order to reject deviant points (and calculate parameters such as $`\sigma _\mu `$ and $`\chi ^2`$) an estimate of the error associated with each measure of position is required. We assume this error is simply a function of magnitude and that it will vary from stack to stack, but not across the survey area. This error is calculated using the deviation of an object’s position on a particular stack from the mean position over the 20 stacks used, and is determined over 10 magnitude bins. A $`3\sigma `$ iterative rejection procedure is implemented to reject spurious pairings or high proper motion objects which are not reflecting the true positional errors sought. The calculated errors are much as one might expect: decreasing for brighter objects until factors such as saturation and blended images makes positional measures more uncertain. A straight line fit can now be applied to the x and y data for each object, the gradient of which is taken to be the measured proper motion, $`\mu _x`$ and $`\mu _y`$ respectively. An example is shown in Figure 2, the points showing the deviation at each epoch from the average object position with error bars calculated as above. Deviant points arising from spurious pairings often lie far from the other data and will give rise to spurious high proper motion detections if not removed. We therefore iteratively remove points lying $`3\sigma `$ from the fitted line. This can occasionally lead to further problems if there are several bad points associated with the object, and the result of several iterations can be a larger spurious motion detection. This source of contamination is generally eliminated by insisting sample objects are detected on virtually every stack. The validity of the positional error estimation scheme described above has been verified by confirming that scatter plots of log reduced $`\chi ^2`$ as a function of magnitude cluster around zero for all magnitudes in all regions of the survey area. Instrumental magnitudes are calculated for every object detected on each stack in the standard COSMOS/SuperCOSMOS fashion (Beard et al. 1990 and references therein). Briefly, an object detection is defined by a given number of interconnected pixels with intensity above a given threshold (eg. 8 interconnected pixels with intensity above a $`2.5\sigma `$ sky noise threshold for SuperCOSMOS data). An object’s instrumental magnitude is then calculated as the log of the sum of the intensity above background across the object area. This quantity varies monotonically with true magnitude, and is therefore suitable for use in constructing calibration curves using a CCD sequence. A sequence of $`200`$ stars with CCD magnitudes measured in a variety of passbands exists in field 287 (Hawkins et al. 
1998) , yielding U, B, V, R and I photometry to a typical accuracy of 0.15 magnitudes (see Section 5.2). Significantly smaller errors are theoretically obtainable from photographic material, and the larger uncertainties we find appear to be caused by systematic deviations of sequence objects from the calibration curve. This is not a colour or field effect, and is probably caused by differences in detection media. ## 3 Survey limits and Sample Selection The ‘catalogue’ resulting from the implementation of the procedure described in the previous section consists of astrometric and photometric measures for over 200,000 objects. Criteria for inclusion in this preliminary sample is merely detection in both $`\mathrm{B}_\mathrm{J}`$ and R passbands (since these are required for construction of the reduced proper motion diagram (RPMD)) and a measure of proper motion in both these passbands. It is from these objects that an uncontaminated proper motion sample is to be drawn; and we require well defined universal survey limits so that space densities can be calculated from the final survey sample. Number count plots from this survey data increase linearly with increasing magnitude, as shown for the R data in Figure 3, before dropping precipitously. This cut-off is attributed to the survey detection limit, and the position of the turnover is used to determine photometric survey limits. The limits used are 21.2 in R and 22.5 in B. The proper motion distribution for all objects in our survey area detected on at least 15 stacks in both B and R is shown in Figure 4. Low proper motions are generally an artifact of measuring machine error, thus the distribution indicates a typical error in measured proper motions of $`10\mathrm{m}\mathrm{a}\mathrm{s}/\mathrm{yr}`$. Our criteria for choosing a survey proper motion limit are elimination of contaminant spurious proper motions from the sample and , with this in mind, maximising the size of the final proper motion sample extracted. The deterioration of positional accuracy with magnitude means that fainter objects also have more uncertain proper motion determinations. The peak of the proper motion distribution therefore moves to higher proper motions with object samples drawn from progressively fainter magnitude bins. For this reason, we consider the survey proper motion limit as a function of magnitude. Three independent methods of determining an appropriate proper motion limit have been investigated for this dataset: 1. analysis of proper motion error distribution 2. cumulative proper motion number counts 3. RPMD inspection all of which are described here. The characteristics of the ‘noise’ in the proper motion distribution can be analysed by assuming all objects are ‘zero proper motion objects’ and the calculated proper motions arise purely from random measurement error. The ‘true’ values of $`\mu _x`$ and $`\mu _y`$ measured by linear regression are therefore zero, and the random measurement errors give rise to a normal error distribution in $`\mu _x`$ and $`\mu _y`$ about zero with common $`\sigma `$. We are of course interested in the total proper motion, which follows a Rayleigh distribution of the form $$P(\mu )=\frac{\mu }{\sigma ^2}e^{\frac{\mu ^2}{2\sigma ^2}}.$$ (1) This survey utilises two independent measures of proper motion in the same field, one from the stacks of B plates and the other from the R stacks. A useful means of reducing the final proper motion survey limit will be to compare these two measures and reject inconsistent motions. 
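Before turning to the two-band comparison, the single-band error behaviour itself can be illustrated with a quick simulation of Equation 1, which also shows why the peak of Figure 4 is read as the typical per-component measurement error; the 10 mas/yr value adopted below is simply the figure suggested by that distribution.

```python
import numpy as np

rng = np.random.default_rng(1)
sigma = 10.0                                     # mas/yr, per-component positional error
mu_x = rng.normal(0.0, sigma, 100_000)           # spurious motions of zero-motion objects
mu_y = rng.normal(0.0, sigma, 100_000)
mu = np.hypot(mu_x, mu_y)

counts, edges = np.histogram(mu, bins=120, range=(0.0, 60.0), density=True)
centres = 0.5 * (edges[:-1] + edges[1:])
rayleigh = centres / sigma**2 * np.exp(-centres**2 / (2.0 * sigma**2))   # Equation 1

print("mode of |mu| (expected near sigma):", centres[np.argmax(counts)])
print("largest deviation from Equation 1:", np.abs(counts - rayleigh).max())
```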
Comparison of independent measures of proper motion needs to be incorporated into this analysis if it is to be useful in predicting sample contamination later. The measured B and R motions and their associated error distributions are therefore characterised $$P(\mu _b)=A_b\mu _be^{C_b\mu _b^2}$$ (2) and $$P(\mu _r)=A_r\mu _re^{C_r\mu _r^2}$$ (3) respectively. The algorithm used to select sample objects on the basis of proper motions will use three sequential cuts to eliminate spurious motions. Firstly, the averaged proper motion must exceed the proper motion limit $`\mu _{lim}`$ , ie: $$\frac{\mu _r+\mu _b}{2}>\mu _{lim}.$$ (4) Secondly, the difference in proper motion must not exceed a second survey parameter $`(\mathrm{\Delta }\mu )`$: $$|\mu _b\mu _r|<(\mathrm{\Delta }\mu ),$$ (5) and finally the difference in position angle must not exceed a third survey parameter $`(\mathrm{\Delta }\varphi )`$: $$|\varphi _j\varphi _r|<(\mathrm{\Delta }\varphi ).$$ (6) An object must satisfy all three of these criteria to be included in the sample. The desired outcome of this analysis is the ability to predict the expected contamination from spurious motions for a survey with parameters $`\mu _{lim}`$, $`(\mathrm{\Delta }\mu )`$ and $`(\mathrm{\Delta }\varphi )`$ given the ‘zero proper motion object’ error distribution described by equations 2 and 3. In order to do this we consider the combined probability distribution $`P(\mu _b)P(\mu _r)`$ on the $`\mu _b`$$`\mu _r`$ plane. The survey criteria described above effectively limit the selected sample to a specific area on the $`\mu _b`$$`\mu _r`$ plane, shown in Figure 5. Given a realistic estimate of the (normalised) $`P(\mu _b)`$ and $`P(\mu _r)`$ distributions an integration over the hatched region is Figure 5 yields an estimate for the fraction of objects, X, belonging to the error distribution likely to contaminate the sample. The integral $`X={\displaystyle _{J_1}^{J_2}}{\displaystyle _{2\mu _{lim}\mu _b}^{\mu _b+(\mathrm{\Delta }\mu )}}P(\mu _b)P(\mu _r)d\mu _rd\mu _b+`$ (7) $`{\displaystyle _{J_2}^{\mathrm{}}}{\displaystyle _{2\mu _b(\mathrm{\Delta }\mu )}^{\mu _b+(\mathrm{\Delta }\mu )}}P(\mu _b)P(\mu _r)d\mu _rd\mu _b`$ where $$J_1=\frac{2\mu _{lim}(\mathrm{\Delta }\mu )}{2}$$ (8) and $$J_2=\frac{2\mu _{lim}+(\mathrm{\Delta }\mu )}{2}$$ (9) is easily calculated numerically for arbitrary $`\mu _{lim}`$ and $`(\mathrm{\Delta }\mu )`$. Since this analysis concerns zero proper motion objects whose spurious motion arises purely from machine measurement error, we may assume no preferred position angle, and the resulting fraction X will be cut by a further $`(2(\mathrm{\Delta }\varphi ))/360`$ ($`(\mathrm{\Delta }\varphi )`$ in degrees) by the position angle selection criterion, ie. $$X_{final}=\frac{X}{2(\mathrm{\Delta }\varphi )/360}.$$ (10) The calculation described above requires knowledge of the P($`\mu _b`$) and P($`\mu _r`$) ‘zero proper motion’ error distributions. If equation 7 is to be used to calculate numbers of contaminants it is important that $`P(\mu _b)`$ and $`P(\mu _r)`$ can be simultaneously rescaled from normalised distributions. The B and R data are therefore paired so that $`P(\mu _b)`$ and $`P(\mu _r)`$ contain the same objects and thus the same number of objects. For the purposes of this analysis this proper motion data is thought of as consisting of two distinct distributions: the ‘zero proper motion object’ error distribution, on which a distribution of real proper motions is superposed. 
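Once normalised error distributions of this form are in hand, the contamination estimate of Equations 7 and 10 amounts to integrating their product over the region selected by Equations 4 and 5 and then applying the position-angle factor of $`2(\mathrm{\Delta }\varphi )/360`$, random position angles being assumed for spurious motions. The sketch below performs that integration numerically on a grid; the widths of the error distributions are placeholders rather than the fitted values.

```python
import numpy as np

def contaminant_fraction(sigma_b, sigma_r, mu_lim, d_mu, d_phi_deg, mu_max=200.0, n=1000):
    mu = np.linspace(0.0, mu_max, n)
    db, dr = np.meshgrid(mu, mu, indexing="ij")
    # Normalised Rayleigh error distributions in B and R (Equations 2 and 3)
    p_b = db / sigma_b**2 * np.exp(-db**2 / (2.0 * sigma_b**2))
    p_r = dr / sigma_r**2 * np.exp(-dr**2 / (2.0 * sigma_r**2))
    keep = (0.5 * (db + dr) > mu_lim) & (np.abs(db - dr) < d_mu)    # Equations 4 and 5
    step = mu[1] - mu[0]
    x = np.sum(p_b * p_r * keep) * step**2
    return x * (2.0 * d_phi_deg / 360.0)         # position-angle criterion, Equation 6

x = contaminant_fraction(sigma_b=10.0, sigma_r=10.0, mu_lim=50.0, d_mu=50.0, d_phi_deg=90.0)
print(f"fraction of the error distribution passing the cuts: {x:.2e}")
```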
Any attempt to determine the error distribution from the measured distribution must use only the low proper motion data where the random errors of interest dominate systematics introduced by real proper motions. In the example to be shown here the first 15 bins (1mas/yr bins) of the normalised proper motion distribution is taken to be representative of the error distribution. The first 15 data points in both B and R are fitted with an assumed error distribution like equation 3. These fits are shown along with the measured data in Figure 6, with the real data rising above the error fits representing real proper motions. Since the error distributions exclude real motions they must be renormalised before use in equation 7. The rescaling factor to be used in calculating numbers of contaminants is taken to be the number of objects contained in the error distributions. This number will in general be different for the B and R data, so the average is used. Numerical methods can now be used to predict the number of contaminant spurious proper motions for a range of $`\mu _{lim}`$, $`(\mathrm{\Delta }\mu )`$ and $`(\mathrm{\Delta }\varphi )`$ using equations 7 and 10. The result of a series of calculations is shown in Figure 7. The predicted contamination falls rapidly below one object as $`\mu _{lim}`$ exceeds $`42\mathrm{m}\mathrm{a}\mathrm{s}/\mathrm{yr}`$. Care must be taken however that the error distributions are not overly sensitive to the number of points in the measured $`\mu _b`$ and $`\mu _r`$ distributions used in their calculation. The error distributions were therefore recalculated using the first 30 bins in the measured $`\mu _b`$ and $`\mu _r`$ distribution and the predicted contamination plot redrawn using these new distributions . The two calculations prediction of the $`\mu _{lim}`$ at which the survey contamination drops to below 1 object are consistent to within $`3\mathrm{m}\mathrm{a}\mathrm{s}/\mathrm{yr}`$. Presented in Table 2 are the results of calculating the $`\mu _{lim}`$ at which the predicted number of contaminant objects falls below one for a range of magnitude cuts, $`(\mathrm{\Delta }\mu )`$ and derived $`P(\mu _b),P(\mu _r)`$ distributions. Table 2 indicates that the $`(\mathrm{\Delta }\mu )`$ survey parameter has a small bearing on the $`\mu _{lim}`$ where $`N_{contam}1`$. The $`(\mathrm{\Delta }\mu )`$ criterion can therefore be relaxed substantially to ensure no real motions are rejected. A similar argument is applicable to the $`(\mathrm{\Delta }\varphi )`$ parameter, where the small advantage gained by tightening the $`(\mathrm{\Delta }\varphi )`$ criterion again becomes increasingly outweighed by the potential for rejection of real proper motions. To conclude comment on this analysis, any $`\mu _{lim}`$ greater than $`45`$mas/yr should eliminate contamination arising from normally distributed positional measurement errors, although thus far no account has been taken of spurious motions arising from other sources (eg. erroneous pairings). The second means of investigating the proper motion limit is via proper motion number counts. Assuming the local Disc has constant stellar density and that Disc kinematics give rise to an approximate inverse correlation between distance and proper motion, a plot of log cumulative number count (from large to small $`\mu `$) versus log $`\mu `$ should follow a straight line of gradient $`3`$ (ie. $`\mathrm{log}N3\mathrm{log}\mu `$). 
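The expected gradient can be checked with a short simulation of the stated assumptions, namely a uniform space density and proper motions inversely proportional to distance for a fixed tangential-velocity distribution; the velocity distribution and distance limit used below are arbitrary illustrations.

```python
import numpy as np

rng = np.random.default_rng(2)
n_stars = 200_000
d = 500.0 * rng.uniform(0.0, 1.0, n_stars) ** (1.0 / 3.0)     # pc, uniform space density
v_t = rng.normal(40.0, 15.0, n_stars).clip(5.0, None)         # km/s, illustrative kinematics
mu = 1000.0 * v_t / (4.74 * d)                                 # mas/yr

log_mu = np.linspace(1.6, 2.5, 15)                             # roughly 40 to 300 mas/yr
log_n = np.log10([np.sum(mu > 10.0**lm) for lm in log_mu])     # cumulative counts
slope = np.polyfit(log_mu, log_n, 1)[0]
print(f"slope of log N(>mu) versus log mu: {slope:.2f}")       # close to -3
```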
In principle a proper motion limit for this survey could be obtained by determining the point at which our measured proper motions deviate from this relation due to the existence of spurious motions. In Figure 8 the cumulative number counts are plotted as a histogram. If the points between $\log\mu=1.9$ and $\log\mu=2.5$ are fit with a straight line the resulting gradient is $(-2.998\pm 0.071)$, in excellent agreement with the idealised predicted slope of $-3$. The fit is shown as a dashed line. This finding compares favourably with a similar analysis of Luyten (1969, 1974, 1979) and Giclas (1971, 1978) common proper motion binary stars, which on an analogous plot describe a straight line of significantly shallower gradient than the expected $-3$, indicating increasing incompleteness with decreasing proper motion (Oswalt & Smith 1995). Towards lower proper motions our data rise above the line, indicating the onset of contamination at $\mu\sim 50$ mas/yr, lending credence to the findings of the previous analysis. The final means of assessing the effects of varying $\mu_{lim}$ is the RPMD. The RPMD is, for all sample objects lacking follow up observations, the sole means of stellar population discrimination. For this reason it is worthwhile inspecting the RPMD of samples produced using various survey limits. The principal concern is that the white dwarf locus be as distinct as possible at all colours while maximising the sample size. It can be argued that there is little purpose in lowering $\mu_{lim}$ for bright objects to anything near the level of potential contamination. Consider searching for an intrinsically faint star such as a reasonably cool white dwarf, with an absolute R magnitude of $\sim 14$, amongst survey objects with apparent R magnitudes as faint as $\sim 19$. Such an object, having a (conservative) tangential velocity of 40 km/s, would have a proper motion of $\sim 80$ mas/yr – well beyond the predicted contamination limit. This argument becomes even more forceful for both intrinsically fainter and apparently brighter objects, implying that a conservative $\mu_{lim}$ is desirable for all but the faintest survey objects. Figure 9 is a RPMD produced with a $\mu_{lim}$ of 50 mas/yr at every magnitude. The main sequence and white dwarf loci are visible, as is a dense group of objects between the two at $H_R\sim 19.5$. This group of objects probably consists of two sub-groups. Firstly of course, the most likely sample contaminants are faint objects creeping through just over the proper motion limit at this point. Secondly, one would expect a large population of objects to lie on the RPMD where the $\mu$ and R distributions are most populated, ie. at $\mu_{lim}$ and at faint R (an object with $\mu=50$ mas/yr and R=21 has $H_{\rm R}=19.5$), and it should therefore not be automatically assumed that every object lying at this point in the RPMD is suspect. A cause for concern, however, is the way this population bridges the gap between the main sequence and the white dwarf population, a property one would expect from a contaminant locus rather than the detection of bona fide proper motions. If a more conservative limit of $\mu_{lim}$=60 mas/yr is adopted for objects with $R>20.5$ the RPMD (Figure 11) looks more promising, with the 'contaminant locus' all but gone, leaving only the expected mild confusion (Evans 1992) between populations at their faint extremity.
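A minimal sketch of the reduced proper motion calculation and the magnitude-dependent proper motion cut discussed here is given below; the numbers reproduce the worked example in the text, and the cut values are the survey parameters quoted above.

```python
import numpy as np

def reduced_proper_motion(m, mu_mas_yr):
    """H = m + 5 log10(mu) + 5, with mu converted to arcsec/yr."""
    return m + 5.0 * np.log10(mu_mas_yr / 1000.0) + 5.0

def passes_pm_cut(mu_mas_yr, R):
    # Magnitude-dependent proper motion limit: 60 mas/yr for the faintest
    # objects (R > 20.5), 50 mas/yr otherwise.
    mu_lim = 60.0 if R > 20.5 else 50.0
    return mu_mas_yr > mu_lim

# The worked example from the text: mu = 50 mas/yr, R = 21 gives H_R ~ 19.5,
# and such an object fails the stricter faint-end limit.
print(reduced_proper_motion(21.0, 50.0))
print(passes_pm_cut(50.0, 21.0))
```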
The adoption of an extremely conservative $\mu_{lim}$ of 80 mas/yr for all objects leads to an even better defined RPMD (Figure 10), where the white dwarf population discrimination is almost without exception unambiguous. The survey parameters to be used to select the preliminary sample (ie. a sample subject to further object-by-object scrutiny and potential rejection, with the possibility of further objects being included in the sample which lie just outside the RPM cut on one or both RPMDs) have been chosen with reference to the findings in this section. Error analysis and number counts suggest a $\mu_{lim}$ of 50 mas/yr, a $(\Delta\mu)$ of 50 mas/yr and a $(\Delta\varphi)$ of 90 degrees should essentially eliminate contamination arising from normally distributed measurement errors. It was found, however, that the white dwarf locus is insufficiently distinct in the RPMD obtained using these survey parameters. A slight restriction of $\mu_{lim}$ for objects with $R>20.5$ does much to solve the population discrimination problem. Therefore the survey parameters given above, with the exception of a $\mu_{lim}$ of 60 mas/yr for objects with $R>20.5$, appear to be an acceptable compromise between sample maximisation and potential contamination and population discrimination problems. The RPM can be expressed in terms of tangential velocity, $V_T$, and absolute magnitude, M:
$$H=M+5\log_{10}V_T-3.379.$$ (11)
Evans (1992) has produced theoretical RPMDs by using expected $5\log_{10}V_T$ distributions and absolute magnitude-colour relations for various populations. Although these theoretical predictions were made for specific fields and incorporated error estimates peculiar to that work, they serve as a useful guide to the expected distribution of stellar populations on the RPMD. The RPMD white dwarf population locus is found to be an unambiguous population discriminator for all colours blue-wards of $(O-E)\sim 1.8$, with an increasing chance of contamination from the spheroid main sequence population red-wards of this colour. Transforming from $(O-R)$ to $(B-R)$ (Evans, 1989 - equation 13), spheroid population contamination should become a problem red-wards of $(B-R)\sim 1.6$, which in turn corresponds to $(B_J-R)\sim 1.2$. Indeed on inspection of Figure 11, the white dwarf locus does begin to become confused at this colour. This contamination occurs solely from the direction of small RPM, and the RPMD can still be used with some confidence as a population discriminator for objects with high measured RPM red-wards of $(B_J-R)\sim 1.2$. In order to accommodate these contamination considerations, the preliminary survey white dwarf sample is defined as those objects blue-wards (in both plots) of the lines in Figure 11. The lines cut the top of the white dwarf locus at $(B_{\rm J}-R)\sim 1.2$ but allow slightly redder objects with higher RPMs into the sample. The white dwarf locus is unambiguous blue-wards of $(B_{\rm J}-R)\sim 1.2$. Every object in the sample must appear in at least 15 stacks in each passband. The positional data as a function of time have been scrutinised for every object selected as a white dwarf candidate, and those with dubious motions rejected. While such a process may seem rather arbitrary, it was necessary to incorporate this screening stage in the sample extraction because simple automated rejection algorithms such as the $3\sigma$ rejection routine used here cannot be guaranteed to eliminate spurious motions.
Some examples are shown in Figures 12 and 13. All three objects shown successfully satisfied all the survey criteria. The object plotted at the top of Figures 12 and 13 (KX27) shows a clear, genuine motion in both x and y in both passbands and was included in the final sample without hesitation. The middle object (KX18) has larger positional uncertainties and a smaller overall motion, but still shows consistent, smooth motions and was also included. The final object shows evidence of large non-linear deviations in the last four epochs of the x measures in both passbands. Although the bad-point rejection algorithm has removed at least one datum from each x plot (as shown by the multiple straight line fits), this object shows no evidence of proper motion based on the first 16 data points and certainly cannot be considered a reliable proper motion object candidate. This object, along with 9 others, was rejected from the final WD sample. These rejected objects tended either to have large offsets from a positional distribution otherwise consistent with zero motion at either the first or last few epochs, as is the case with the rejected object described above; or the positional measures had an unusually large scatter around the mean position, indicating the error in position was larger than the object's magnitude would suggest. The reasons for these larger errors may be unusual image morphology or the effect of proximity to a neighbouring object. While this final rejection procedure is unsatisfactory in terms of its lack of objectivity, it is certainly preferable to inclusion of such objects in the final sample, or the introduction of extremely stringent survey limits which would doubtless exclude genuinely interesting objects. The digitised images have also been inspected. The final sample consists of 56 objects which fully satisfy the photometric, proper motion and RPM/colour survey limits. A further two objects, which satisfy the photometric and proper motion limits but fall marginally outside the RPM/colour cut shown in Figure 11, have also been included after favourable follow up observations detailed in Section 5.
## 4 Very high proper motion, faint object sensitivity limits
In the previous Section, we discussed in some detail the checks made to establish a clean astrometric and photometric catalogue. The availability of plates over such a wide epoch range as detailed in Table 1 allowed us to search for faint and/or very high proper motion objects in this field. This is important for a number of reasons. For example, it is crucial to firmly establish what the upper limit of detectable proper motion is for the methods used, and also to check if significant numbers of objects have been missed because of this limit or because of the pairing algorithm used. Also, it is important to check at fainter magnitudes for very dim, high proper motion objects since it is the coolest (and therefore faintest) objects that constrain the age determination based on the turn-over in the WDLF. It is also interesting to search for very faint, high proper motion halo WDs in the light of the current debate concerning the origin of the dark lensing bodies detected in microlensing experiments (eg. Isern et al. 1998 and references therein). We performed three experiments to investigate high proper motion and/or faint objects: 1. Within the restricted epoch range 1992 to 1996, ie.
five separate epochs, each consisting of a stack of four R plates, we used completely independent software employing a 'multiple-pass' pairing technique aimed specifically at detecting high proper motion objects. This software has successfully detected a very cool, high proper motion degenerate WD 0346+246 elsewhere (Hambly, Smartt & Hodgkin 1997). The pairing algorithm described previously used a $200\mu$m search radius over 19 yr resulting in an upper limit of $\sim 1$ arcsec yr$^{-1}$, whereas in the multiple-pass test we used a maximum search radius of $650\mu$m over a 4 yr baseline, theoretically enabling detection of objects with annual motions as high as $\sim 10$ arcsec. The highest proper motion object detected in the catalogue was relatively bright (${\rm R}\sim 14$), with $\mu\sim 0.8$ arcsec yr$^{-1}$. This experiment revealed two $\mu\sim 0.8$ arcsec yr$^{-1}$ objects, the one mentioned above and another slightly fainter object (${\rm R}\sim 15$); no objects were found with motions larger than this. The colours and reduced proper motions of the two objects indicate that they are M-type dwarfs. The second object was undetected in the catalogue due to a spurious pairing, an increasingly likely scenario for high proper motion objects detected without a multiple pass algorithm since they move substantially from their master frame position. We note that all the expected objects having $\mu>0.2$ arcsec yr$^{-1}$ detected in the catalogue were also found by this procedure. 2. Using the procedure described in the previous Section, but with a relaxed minimum number of epochs, high proper motion objects were searched for. With a $200\mu$m pairing requirement over a maximum epoch separation of 7 yr, the upper limit of proper motion was $\sim 1.9$ arcsec yr$^{-1}$. The two objects having $\mu\sim 0.8$ arcsec yr$^{-1}$ found in the previous experiment were also recovered here; again, no objects were found with motions larger than this. 3. To investigate the possibility of fainter objects, we stacked up the R band material in groups of 16 plates at epochs 1980, 1983 to 1986, 1987 to 1991 and 1992 to 1996. Obviously, over any individual four year period an object having a proper motion greater than $\sim 1$ arcsec yr$^{-1}$ will have an extended image and will not be detected to the same level of faintness as a stationary star; nonetheless, the $0.75^{\rm m}$ increase in depth afforded by going from 4 to 16 plate stacks (eg. Knox et al. 1998) at least allows us to investigate the possible existence of objects having $\mu\sim 0.5$ arcsec yr$^{-1}$ down to ${\rm R}\sim 23$ (100% complete to ${\rm R}\sim 22$) over an area of 25 square degrees. In this experiment, all the objects expected from the catalogue were recovered; in addition, one star was found having ${\rm R}\sim 20$, $\mu=0.47$ arcsec yr$^{-1}$ at ${\rm PA}=179^{\circ}$ and RA,DEC = 21h30m8.553s, $-44^{\circ}$46'24.09" (J2000.0). This object is the M-type dwarf 'M20' discovered in the photometric survey of Hawkins & Bessell (1988) and has ${\rm B}_{\rm J}\sim 23$. The faintness in the blue passband is the reason that this object is absent from the catalogue. Once more, no other high proper motion, fainter stars were found. These three experiments allow us to be confident that there is no large population of objects having $\mu\gtrsim 1$ arcsec yr$^{-1}$ down to faintness limits of ${\rm R}\sim 22$ and ${\rm B}_{\rm J}\sim 23$.
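The upper proper motion limits quoted in these experiments follow directly from the pairing search radius and the epoch baseline. The sketch below assumes the nominal UK Schmidt plate scale of roughly 67.1 arcsec/mm (an assumption, not stated above) and recovers the quoted figures to within rounding.

```python
# Convert a pairing search radius on the plate into a maximum detectable
# annual proper motion, given an epoch baseline.
PLATE_SCALE = 67.1 / 1000.0   # arcsec per micron -- assumed nominal UK Schmidt value

def mu_max(search_radius_um, baseline_yr):
    return search_radius_um * PLATE_SCALE / baseline_yr

print(mu_max(200.0, 19.0))   # ~0.7 arcsec/yr : catalogue pairing over the full baseline
print(mu_max(650.0, 4.0))    # ~10.9 arcsec/yr : multiple-pass search
print(mu_max(200.0, 7.0))    # ~1.9 arcsec/yr : relaxed minimum-epoch search
```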
Furthermore, the cut-off in the WD sequence seen in the reduced proper motion diagrams is real, and not an artefact of incompleteness.
## 5 Follow up Observations
While the RPMD technique is a powerful population discriminator, it is desirable to obtain follow up observations of a sub-set of sample members. The principal motivation for this is to explicitly demonstrate the applicability of our survey technique by confirming the WD status of the sample objects via spectroscopy. Spectroscopic observations red-wards of the WD cut in the RPMD may also be used to investigate the possibility of ultra-cool white dwarfs existing in the sample; and as a corollary to this, such observations allow clearer population delineation in the RPMD. In addition, photometric observations through standard filters ensure confidence in photographic-to-standard photometry transformations, provide useful independent checks of stellar parameters (eg $T_{\mathrm{eff}}$) derived from fits to photometry and may in future allow alternative population discrimination via eg. colour-colour plots.
### 5.1 Spectroscopy
This project was allocated 3 nights of observing time in 1996 between the 8th and 10th of August, and a further 3 nights in 1997 between the 5th and 7th of August on the 3.9m Anglo-Australian Telescope. A spectral coverage of 4000-7500$\mathrm{\AA}$ was obtained with the RGO spectrograph and 300B grating in conjunction with the Tek CCD. Various standards were observed throughout each usable night and CuAr lamp exposures used for wavelength calibration. The data were reduced following standard procedures within the IRAF environment (IRAF is distributed by the National Optical Astronomy Observatories, which is operated by the Association of Universities for Research in Astronomy Inc., under contract with the National Science Foundation of the United States of America). The strategy behind these observations was to define as clearly as possible the constituents of the lower portion of the RPMD, from the blue end of the WD locus down to the extremely high H objects below the M stars. The spectroscopically observed proper motion sample objects are shown in Figure 14. Both the unambiguous bluer region of the WD locus and the portion of the RPMD lying red-ward of our WD cut generally consist of fairly bright objects. These regions have been probed spectroscopically, albeit more sparsely than the cool WD region consisting of mostly very faint objects. A combination of poor weather conditions at the AAT and the faintness of our CWD sample has rendered high signal-to-noise spectra of these objects unobtainable thus far. The bluest objects have clearly defined Hydrogen Balmer lines with equivalent widths equal to or in excess of those typical for DA white dwarfs of similar colour (Greenstein and Liebert, 1990). The objects red-wards of our sample cut below the main sequence (MS) show spectra clearly distinct from cool WDs. Both subdwarfs and high velocity M dwarfs have been identified in this region of the RPMD. A star lying near the cut-off region of the RPMD clearly showing the absence of strong metal features that would be present even in a low-metallicity subdwarf is very likely a CWD. We apply this line of argument to our data by selecting from the recent models of Hauschildt et al. (1998) a spectrum appropriate to a subdwarf (metallicity $[\mathrm{M/H}]\sim -2$) of similar effective temperature to a given CWD sample object spectrum.
The model spectrum is smoothed to the approximate resolution of the AAT spectra and multiplied through by a synthetic noise spectrum commensurate with the AAT CWD spectrum in question. The only features likely to be visible after this procedure are the 5200$\mathrm{\AA}$ MgI feature and the 4300$\mathrm{\AA}$ CH G-band, and it is in these regions that we look for any evidence that our CWD sample objects are in fact subdwarfs. This procedure has been undertaken for all CWD candidate spectra. Of these, the spectra of the three objects lying within the region of serious potential subdwarf contamination are displayed, and we restrict comment on the investigation of the bluer objects to the statement that none show any evidence of subdwarf-like spectral features. KX58, displayed in Figure 15, is a convincing CWD candidate, showing a smooth continuum spectrum with no suggestion of the metal features apparent in the model subdwarf spectra. The noisier spectrum of KX57 in Figure 16 also shows no evidence of subdwarf features, although poorer signal-to-noise makes the identification less certain. The object 18d lies significantly red-ward of our CWD cut-off on the RPMD, and unfortunately its spectrum (Figure 17) is extremely noisy. While it is difficult to draw any conclusions from such poor data, the dip at 5200$\mathrm{\AA}$ is a reasonable indication that this object is a subdwarf or MS star. Thus the position of 18d on the RPMD, in conjunction with the spectroscopic evidence, means this object does not warrant inclusion in our CWD sample. To summarise the findings of our spectroscopic survey, the only objects showing notable deviation from expected WD spectra are the objects below the M dwarf portion of the RPMD (diamonds in Figure 14) and the more ambiguous case of object 18d discussed above.
### 5.2 Photometry
CCD photometry of a subsample of our CWD sample was obtained between the 29th of July and the 4th of August 1997 on the 1m telescope of the South African Astronomical Observatory in Sutherland. Johnson-Cousins V, R, I photometry was obtained for all program stars on the Tek (512x512) CCD, with B measures also acquired for sufficiently bright objects. E-region standards were observed continuously through each usable night. Observed magnitudes with associated errors are displayed in Table 3. These observed magnitudes provide an independent check on the accuracy of the SuperCOSMOS photographic photometry, and we use the deviations of $\mathrm{m}_{\mathrm{photographic}}$ from $\mathrm{m}_{\mathrm{CCD}}$ to obtain errors on the B, V, R, I photographic photometry of 0.17, 0.14, 0.13 and 0.16 mag respectively. The CCD photometry also allowed tighter estimates of effective temperature to be derived for observed objects, although they did not provide the hoped-for useful constraints on log g.
## 6 Sample Analysis
Previous studies of CWD samples (eg LDM, BRL) have often benefited from a comprehensive and wide ranging observational database, including high quality spectra, optical and IR photometry and parallaxes. These observations, in conjunction with detailed WD models, allow determinations of stellar parameters such as effective temperature, log g, atmospheric composition, mass and bolometric luminosity. However, since this is a relatively new project and is concerned with stars of unusually faint apparent magnitude, such a database does not yet exist for this sample.
It is therefore necessary for us to restrict our analysis, in the first place by exploiting the homogeneity of WD masses by assuming a common typical log g for our entire CWD sample (the 60 stars with measured log g in BRL have a mean surface gravity $\overline{\log g}=8.099\pm 0.044$), and secondly by treating the atmospheric constituent of each star as an unknown parameter whose influence on the resulting WDLF must be determined later. Bergeron et al. (1995) have published a detailed grid of model predictions for Johnson-Cousins U, B, V, R, I (and IR) photometry and bolometric corrections as a function of effective temperature and log g. Making our assumption that log g is always equal to 8, values for effective temperature and bolometric luminosity assuming both a H and He atmosphere can be calculated for every sample object. Fitting for $T_{\mathrm{eff}}$ is achieved by interpolating the model grid at 10K intervals for the colour indices $(U-B)$, $(B-R)$, $(V-R)$ and $(V-I)$. We then evaluate $\chi^2$ at each $T_{\mathrm{eff}}$ interval using all available colour indices for the object in question. While the photographic photometry is effective in adequately constraining $T_{\mathrm{eff}}$, smaller errors are obtainable with the SAAO CCD photometry, and it is used where available. The resulting $\chi^2$, $T_{\mathrm{eff}}$ distribution yields a fitted value for $T_{\mathrm{eff}}$ and an estimate of the associated error, which can be used to read off interpolated values of absolute V magnitude and bolometric luminosity. This procedure is performed for each object for both an assumed H and He atmosphere. Distance moduli obtained from these fits allow calculation of tangential velocities via the SuperCOSMOS proper motion measures. The distribution of derived tangential velocities is consistent with expectations for a sample of Disc stars (Evans 1992), and shows no evidence of contamination from high velocity halo objects. A summary of the results of the fitting procedure is shown in Table 4, including $T_{\mathrm{eff}}$ and $M_{\mathrm{bol}}$ with associated errors and $V_{\mathrm{tan}}$ values. Space density values for each object are also presented in Table 4; these are discussed in the following section. An independent check on the validity of our initial CWD sample parameters (SuperCOSMOS photometry and astrometry) can be made by comparing known CWD samples with our data on the RPMD. Since the RPMD is the original means of population discrimination it is also interesting to overplot known subdwarf samples on our RPMD to further address the question of potential contamination. BRL published observations of a sample of 110 CWDs, including most of the coolest known degenerates. Extensive lists of extreme subdwarfs are less easily obtainable. Ryan (1989) used a RPM criterion to extract over 1000 subdwarf candidates from the NLTT catalogue. Accurate B and R photometry was published for these objects, providing a useful means of delineating the bluer portion of the WD RPMD locus. Monet et al. (1992) identified a subset of 17 extreme subdwarfs from their CCD parallax program also involving Luyten catalogue stars. Although only V, I photometry is published for these stars, we use the photometry published in Ryan (1989) to define colour transformations allowing the 17 subdwarfs from Monet et al. (1992) to be plotted on the $(B-R)$, RPMD planes.
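A minimal sketch of the $\chi^2$ fitting procedure described above is given below. The model grid here is a purely illustrative stand-in for the Bergeron et al. (1995) log g = 8 grid, interpolated to 10 K steps; only the structure of the fit is meant to be representative.

```python
import numpy as np

def fit_teff(observed, errors, model_grid):
    """Chi-square fit of T_eff over a pre-interpolated model colour grid.

    observed, errors : dicts of the available colour indices, e.g. {'B-R': 1.45}
    model_grid       : dict with a 'Teff' array plus one array per colour index,
                       all sampled on the same T_eff grid.
    """
    teff = model_grid['Teff']
    chi2 = np.zeros_like(teff, dtype=float)
    for index, value in observed.items():
        chi2 += ((model_grid[index] - value) / errors[index]) ** 2
    best = np.argmin(chi2)
    ok = chi2 <= chi2[best] + 1.0          # crude 1-sigma range from delta(chi2) = 1
    return teff[best], teff[ok].min(), teff[ok].max()

# Illustrative (non-physical) linear colour-Teff relations, 10 K sampling:
teff = np.arange(4000.0, 12000.0, 10.0)
grid = {'Teff': teff, 'B-R': 3.0 - 2.0e-4 * teff, 'V-I': 2.0 - 1.5e-4 * teff}
print(fit_teff({'B-R': 1.45, 'V-I': 0.90}, {'B-R': 0.13, 'V-I': 0.10}, grid))
```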
These objects define a portion of the RPMD marginally red-wards of the faintest CWDs where the most extreme contaminants may be expected to lie. Figure 18 shows the two RPMDs with the four samples plotted. There are several points to be made concerning this plot. Firstly, our CWD sample and the BRL sample of previously known CWDs lie on the same region of the diagram, providing further confirmation of the validity of our survey procedure. It can also be seen that the BRL sample contains redder, cooler stars than our sample. This may be expected since the BRL sample is rather eclectic and contains some of the coolest WDs known, whereas our sample is drawn from a rigidly defined survey in a particular ESO/SERC field. We note also that the cool portion of the BRL sample does not extend into the portion of the RPMD beyond our population discrimination cut-off shown in Figure 11, which may be interpreted as indicating that we are not failing to sample portions of the RPMD containing CWDs (but see Section 8 below). Both subdwarf samples lie in clearly distinct regions of the RPMD to our sample, although the cooler subdwarfs are all too red to directly assess contamination of the CWD sample from the direction of small RPM. However the subdwarf RPM locus is not predicted to deviate significantly from a straight line in the CWD colour regime (Evans 1992), and if we take the high H extent of the two subdwarf samples plotted to be indicative of the limit of the extreme subdwarf locus on the RPMD, the dashed lines plotted on Figure 18 should be a good guide to the limit of the subdwarf locus for the intermediate colour range. It may then immediately be seen that the vast majority of our CWD sample is safely within the WD region of the RPMD. The two redder borderline stars have reasonable spectroscopic confirmation of their WD status (Figures 16 and 15), leaving only one potentially dubious object.
## 7 The WDLF
In order to construct a WDLF, space densities must be calculated for a survey limited by both apparent magnitude and proper motion. The CWD survey sample presented in Table 4 consists of stars with widely varying intrinsic brightness and tangential velocity, and is therefore not volume-limited (since for example intrinsically bright objects are sampled out to greater distances than the coolest, faintest stars). The standard solution to this problem, Schmidt's (1968, 1975) $1/V_{max}$ estimator, has been extensively studied with specific reference to the WDLF (Wood & Oswalt 1998). The $1/V_{max}$ method assigns each sample object a survey volume defined by the maximum distance $d_{max}$ the object could have and still satisfy the survey limit criteria. For this survey, an object at distance $d$ with proper motion $\mu$ and magnitudes $B$ and $R$ has
$$d_{max}=\min\left[d\frac{\mu}{\mu_{lim}},\,d\,10^{0.2(R_{lim}-R)},\,d\,10^{0.2(B_{lim}-B)}\right].$$ (12)
For the simple case of uniform stellar space density, the survey field solid angle $\Omega$ can then be used to calculate the corresponding $V_{max}$ ($=\Omega d_{max}^3/3$). However, many of the objects in our CWD sample yield a $d_{max}$ comparable to the scale height of the Disc, and we follow the method of Tinney, Reid & Mould (1993) in generalising our calculation of $V_{max}$ to allow for the truncation of the survey volume by the scale height effect.
In this prescription, the volume is found by integrating over an exponentially decreasing density law (Stobie, Ishida & Peacock 1989) with scale height h at galactic latitude b, yielding the modified expression for the volume out to a distance d:
$$V=\Omega\frac{h^3}{\sin^3 b}\left\{2-(\xi^2+2\xi+2)e^{-\xi}\right\},$$ (13)
where $\xi=d\sin b/h$ (Equation 9 in Tinney et al. 1993). Each object is thought of as a sampling of the survey volume $V_{max}$, and thus contributes a space density of $V_{max}^{-1}$ to its particular LF bin. We adopt the convention of LDM in assigning the uncertainty in each space density contribution as being equal to that contribution (ie. $1\pm 1$ objects per sampling). We may therefore construct a LF simply by summing the individual space density contributions in each luminosity bin, with the error obtained by summing the error contributions in quadrature. A true reflection of the LF's observational uncertainties should also allow for the uncertainties inherent in photometric fits to model atmospheres described in the previous section. The errors in $M_{\mathrm{bol}}$ detailed in Table 4 suggest the introduction of horizontal error bars in any observational LF is necessary. When converting our sample $M_{\mathrm{bol}}$ values into luminosity units via
$$M_{\mathrm{bol}}=-2.5\log(L/L_{\odot})+4.75$$ (14)
we also calculate the upper and lower 1 sigma luminosity uncertainties for each object. For a bin containing N objects we then combine eg. the upper luminosity 1 sigma uncertainties, $\sigma_u$, using
$$\sigma_U=\sqrt{\frac{\sum_i^N\sigma_{u_i}^2}{N}}$$ (15)
to yield an estimate for the horizontal error bar $\sigma_U$, with an analogous procedure for the lower luminosity error bounds. In addition, to give the most realistic estimate for the LF in magnitude bins containing a very few objects, we plot the binned data at the mean luminosity of the objects in the bin, rather than at the mid-point of that bin. Table 5 gives the LF calculated in this fashion for integer magnitude bins. Only the cool end of the LF is given here ($M_{\mathrm{bol}}>12.25$), and a field size of 0.0086 steradians and Disc scale height of 300 parsecs were used in calculating space densities. Columns 3 and 4 give the plotted (Figure 19) LF with upper and lower error bounds in parenthesis. The hot WD data point not detailed in Table 5 represents the 6 stars with $-3<\log L/L_{\odot}<-2$. These hotter objects tend to have large errors in fitted $M_{\mathrm{bol}}$, and the resulting large horizontal error bars make broader binning appropriate. It is necessary to choose either a pure hydrogen or helium atmosphere for each object to construct the LF. We use our 'best guess' atmospheres for the LF described here: for each object we choose the atmosphere with the lower $\chi^2$ model fit to the photometry, with the exception of objects with $6000>T_{\mathrm{eff,H}}>5000$ which are deemed to be occupying the 'non-DA gap' at this temperature (BRL) and are therefore automatically designated a pure H atmosphere. Note however that the photometry does not adequately constrain atmospheric composition and this 'best guess' LF is only one arbitrary realisation of the data (see Section 8 below). The standard means of assessing the completeness of a sample analysed using the $1/V_{max}$ method is to calculate the mean value of the ratio of $V_{obs}$, the volume out to an object, to $V_{max}$.
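The calculation of $d_{max}$ (equation 12) and of the scale-height-corrected survey volume (equation 13) might be sketched as follows; the galactic latitude and the photometric limits used here are placeholders rather than the actual survey values.

```python
import numpy as np

OMEGA = 0.0086                              # survey solid angle in steradians (from the text)
H_SCALE = 300.0                             # adopted Disc scale height in pc
SIN_B = abs(np.sin(np.radians(-48.0)))      # galactic latitude of the field -- assumed here

def survey_volume(d):
    """Generalised volume out to distance d (pc), equation 13."""
    xi = d * SIN_B / H_SCALE
    return OMEGA * (H_SCALE / SIN_B) ** 3 * (2.0 - (xi * xi + 2.0 * xi + 2.0) * np.exp(-xi))

def d_max(d, mu, R, B, mu_lim=50.0, R_lim=21.0, B_lim=22.5):
    """Maximum distance satisfying the survey limits, equation 12.
    The photometric limits are placeholders, not the survey values."""
    return min(d * mu / mu_lim,
               d * 10.0 ** (0.2 * (R_lim - R)),
               d * 10.0 ** (0.2 * (B_lim - B)))

# Each star contributes 1/Vmax to its luminosity bin:
print(1.0 / survey_volume(d_max(d=40.0, mu=120.0, R=18.5, B=20.0)))
```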
A complete survey evenly sampling the survey volume should yield $\langle V_{obs}/V_{max}\rangle=\frac{1}{2}$. This is generally not the case for published CWD samples, although some authors have incorporated completeness corrections into their analyses to account for the effects of the original survey incompleteness (OSWH). The $\langle V_{obs}/V_{max}\rangle$ calculated for this sample is 0.495 or 0.496 choosing either all H or He atmospheres. From a complete sample containing 58 objects we expect $\langle V_{obs}/V_{max}\rangle=0.5\pm 0.038$, indicating our sample is consistent with being drawn from a complete survey. It should be emphasised however that this result cannot be regarded as proof of completeness, since clearly an incomplete survey sample may also exhibit $\langle V_{obs}/V_{max}\rangle\approx\frac{1}{2}$. The total space density determined from our 'best guess' sample is $4.16\times 10^{-3}$ WDs per cubic parsec, approximately 25% greater than that found by LRB. These results are certainly consistent, since the simulations of Wood and Oswalt (1998) predict errors of $\sim 50\%$ in total space density estimates from samples of 50 CWDs using the $1/V_{max}$ technique, and additional uncertainties are introduced by our lack of knowledge of the WD atmospheric constituents. Our findings reiterate that WDs represent only a small fraction ($\sim 1\%$) of the local dynamically estimated mass density. Interestingly, we do not confirm the much higher total WD space densities found recently by two independent studies. We note however that these studies (Ruiz & Takamiya 1995, Festin 1998) make only a tentative claim to detection of a high WD space density due to the small samples ($\mathrm{N}<10$) involved. A third study (OSWH) searched exclusively for WDs in CPMBs, and found a total space density of $5.3\times 10^{-3}$ per cubic parsec for these objects. At this space density, and given our survey area and the resolution of the COSMOS data, we would not expect to find any CPMB WDs. It is therefore not surprising that no object in our sample is a CPMB member, since the survey technique is only sensitive to lone WDs or double degenerate binary systems.
## 8 Discussion
An estimate of the Disc age may be obtained by comparing our WDLF with expectations from theoretical models. We compare our data with two sets of models, which are available in the form of curves at integer 1 Gyr Disc age intervals. Since we are fitting to a cut-off in space density, the lack of detected objects beyond our faintest observational bin assumes added significance. We can calculate the probability of detecting zero objects in the next faintest luminosity bin because we have well defined survey limits: at a given faint luminosity the proper motion survey limit is irrelevant and the survey is sampling a known volume defined by the photometric survey limits. This volume was calculated using B and R magnitudes for very cool WDs from the recent models of Hansen (1998), which combined with the photometric survey limits yield a $d_{\mathrm{max}}$ for both an H and an He atmosphere WD. The minimum $d_{\mathrm{max}}$ defines the survey volume at that magnitude. We have assumed the LF at any given luminosity consists of equal numbers of H and He WDs, in which case the H WD survey volume provides the best constraint on the LF and is adopted for fitting. Using Poisson statistics, the probability of detecting zero objects in a volume V in which a model predicts space density $\rho$ is simply $e^{-\rho V}$.
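The completeness statistic and the Poisson probability used in the fitting can be written compactly; the numbers in the example call are illustrative only.

```python
import numpy as np

def mean_v_on_vmax(v_obs, v_max):
    """<V_obs/Vmax> and its expected scatter 1/sqrt(12N) for a complete sample."""
    ratio = np.asarray(v_obs) / np.asarray(v_max)
    return ratio.mean(), 1.0 / np.sqrt(12.0 * ratio.size)

def prob_zero_detection(rho, volume):
    """Poisson probability of detecting no objects where a model predicts density rho."""
    return np.exp(-rho * volume)

# e.g. a model space density of 1e-4 pc^-3 in an illustrative 5000 pc^3 faint-bin volume:
print(prob_zero_detection(1.0e-4, 5000.0))   # ~0.61
```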
We assume the errors on the data points are normally distributed, and derive best fits by maximising the probability of the dataset for each curve and comparing the fits for the various Disc ages. The various inputs to the first set of WDLFs we use, those of Wood, were described in detail in Wood (1992). Careful consideration was given to the various inputs to the WDLF, such as initial mass function, star formation rates and initial-final mass relation. The Wood WD evolutionary models for the WDLFs have since been updated (Wood 1995, OSWH), and consist of mixed carbon-oxygen core WDs with hydrogen and helium surface layer masses of $10^{-4}$ and $10^{-2}\,\mathrm{M}_{\odot}$ respectively, and also utilise revised opacities and neutrino rates. It is these more recent model WDLFs that are used here. The best fit of our 'best guess' LF to the Wood model WDLFs is for a Disc age of 9 Gyr, and is shown in Figure 20. Although this gives a reasonable first indication of the Disc age implied by our sample, further investigation is necessary since the photographic photometry fitting procedure used to estimate $T_{\mathrm{eff}}$ (as described in Section 6) does not reliably constrain the atmospheric constituent of the stars in our sample. We have addressed this question by constructing a large number of WDLF 'realisations' from our sample data, each time giving each star a 50% probability of having either a H or He atmosphere. Every resulting WDLF was fit to the models and the best fit Disc age recorded. We have also used this analysis to assess the effect of binning on the fitting procedure. Figure 21 (a) displays the results of fits to 1000 realisations binned in 1.0 $M_{\mathrm{bol}}$ bins and a further 1000 in 0.75 $M_{\mathrm{bol}}$ bins. These results give a fairer indication of the Disc age and associated errors than Figure 20, which is effectively one arbitrarily picked realisation. The second set of theoretical WDLFs we use are described in García-Berro et al. (1997) (henceforth GB models). These LFs include the expectation that the progenitors of the faintest WDs are likely to have been massive stars, since these stars evolve more quickly and the resulting (Oxygen-Neon) massive WDs also cool faster. These models also include a predicted delay in Carbon-Oxygen WD cooling induced by the separation of C and O at crystallization (Hernanz et al. 1994). The incorporation of these considerations into the theoretical WDLF leads to a broader predicted peak at the tail of the LF. We find a best fit of 11 Gyr to our sample, as shown in Figure 22. Again, we have investigated the effect of our poor knowledge of our sample's atmospheric compositions in the same way as above. The results, shown in Figure 21 (b), highlight the effect that variations in binning can have on the fits for this second set of models. The major contributor to this problem seems to be that the model curves are sufficiently indistinct in the region of our sample dataset (as can be seen clearly in Figure 22) that changes in observational LF binning can have a significant bearing on the result of the fit. This effect is apparent in Figure 21 (b), where a bin size of 0.75 $M_{\mathrm{bol}}$ yields a strong preference for a Disc age of 14 Gyr, in contrast to the 12-13 Gyr Disc age selected by the other binning regime (note also that 14 Gyr was the oldest curve available for fitting). Although it is difficult to resolve this matter satisfactorily with the current data set, there are two pertinent points to raise.
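The Monte Carlo treatment of the unknown atmospheric compositions might be sketched as below. The least-squares comparison is a simplified stand-in for the likelihood fit described above, and all numerical inputs are synthetic placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def realise_lf(mbol_H, mbol_He, inv_vmax_H, inv_vmax_He, bins):
    """One WDLF realisation: each star is given an H or He atmosphere with 50% probability."""
    pick_H = rng.random(len(mbol_H)) < 0.5
    mbol = np.where(pick_H, mbol_H, mbol_He)
    weight = np.where(pick_H, inv_vmax_H, inv_vmax_He)
    lf, _ = np.histogram(mbol, bins=bins, weights=weight)
    return lf

def best_fit_age(lf, model_curves):
    """Pick the Disc age whose model curve best matches the realised LF (least squares in log space)."""
    scores = {}
    mask = lf > 0
    for age, curve in model_curves.items():
        scores[age] = np.sum((np.log10(lf[mask]) - curve[mask]) ** 2)
    return min(scores, key=scores.get)

# Synthetic example: 58 stars, two model curves labelled by age in Gyr.
bins = np.arange(12.25, 17.25, 1.0)
mH = rng.uniform(12.5, 16.5, 58)
mHe = mH + rng.normal(0.0, 0.2, 58)
w = np.full(58, 7.0e-5)
models = {9: np.array([-3.6, -3.4, -3.3, -3.5]), 11: np.array([-3.7, -3.5, -3.3, -3.2])}
ages = [best_fit_age(realise_lf(mH, mHe, w, w, bins), models) for _ in range(1000)]
print(np.bincount(ages).argmax())   # most frequently selected age
```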
The problem does not apply to the Wood models, the fits to which are extremely difficult to dislodge from the 9-11 Gyr region indicated by Figure 21 (a). Secondly, Wood and Oswalt (1998), as a result of their Monte Carlo analysis of the WDLF, recommend choosing a binning in which the crucial lowest luminosity bin contains $\sim 5$ objects, giving a reasonable compromise between the requirements of good signal to noise in the final bin and having that bin as faint as possible to provide maximum information on the position of the cut off. This means a bin size of 1 $M_{\mathrm{bol}}$ for our data set, for which the fitted Disc age is well constrained for both models. A brief investigation of the effect of altering the Disc scale height in Equation 13 revealed that for any scale height between 250 and 400 pc the alterations in space densities for a LF binned as in Table 5 are restricted to a few hundredths in $\log\varphi$. Variations at this level have a negligible effect on the fits to model LFs. This behaviour is to be expected, since it is objects with large $d_{max}$ which are most affected by changes in the scale height, and these make the smallest contributions to the space density when using the $1/V_{max}$ prescription. An overview of all the various fittings to both sets of models points firmly to a Disc age between 9 and 11 Gyr using the Wood models, and an age of 12-14 Gyr using the GB models. This discrepancy is expected (LRB, García-Berro et al. 1997) and indicates the extent of errors introduced by uncertainties in the WD crystallisation process; the Wood model Disc age should be a reliable lower limit however, and Figure 21 demonstrates the difficulty in obtaining a Disc age of less than 9 Gyr from our data. We note that the Wood models seem to represent our data better, the GB models not following the peak in the observed LF. Wood (1992) reported that the uncertainties in the inputs to model WDLFs lead to a further $\sim 1$ Gyr contribution to the total error. Further insight into the error associated with the Disc age may be gained by considering the extremities of the distribution of atmosphere types within the sample. The 50% probability assigned to the H and He atmosphere is adopted in the absence of strong evidence for a dominant atmospheric type amongst the crucial coolest WDs (see eg. the coolest bin in Figure 1 of BRL), and has no physical basis. Adopting all H atmospheres leads to high space density estimates in the final LF bins and arbitrarily old Disc age estimates. More interestingly, a LF composed of all He atmosphere WDs is still incompatible with Disc ages below 8 Gyr, regardless of the model WDLF used. This reiterates the important point that the cosmologically interesting lower Disc age limit appears to be 8 Gyr, and that even a Disc as young as this must be considered unlikely (Figure 21). Our adopted Disc age estimation is therefore $10^{+3}_{-1}$ Gyr. There are at least two possible effects arising from our present lack of extensive follow up observational data that could affect the derived Disc age. First, as may be seen in Figure 14 there is a small group of objects lying just beyond our RPMD cut-off that have not been included in our CWD sample. Although the distribution of known WDs on the RPMD (the BRL objects in Figure 18) indicates a population of CWDs is not expected in this region, the reason for this may be that many of the known CWDs have been themselves selected on the basis of RPM criteria (Luyten 1970).
It may be that the region of the RPMD just beyond our cut-off has not been adequately investigated before, since in the presence of noisier data from blink comparators and eye-measure photometry it would be hopelessly confused. These considerations argue strongly for a wholesale spectroscopic survey, using multi-fibre instruments, of the entire population below $H_R=19$ for complete confidence that our sample does not exclude any CWDs. For the present, the possibility that a few more CWDs exist in our catalogue but do not satisfy our RPMD survey criteria cannot be ruled out. Such objects would certainly be very cool, resulting in a higher Disc age estimate. The second effect concerns the question of mass. The difficulty is amply demonstrated by the case of CWD ESO 439-26 (Ruiz et al. 1995), which was observed to have a luminosity fainter by 1 magnitude than the WDLF cut off. Analysis of the object's optical energy distribution in conjunction with a measure of trigonometric parallax allowed the authors to conclude that its low luminosity was in fact due to its large mass, or small radius. Again, ideally it would be desirable to obtain parallaxes and CCD optical and IR photometry of our entire sample, obviating the need to assume a mass of $0.6M_{\odot}$ (and allowing the surface composition to be constrained). In summary, our WD sample has passed every test for completeness applied to it. The calculated space density of WDs is slightly higher than that for the LDM sample; however we do not detect the much higher total space densities found by more recent authors (Ruiz & Takamiya 1995, Festin 1998). Our WDLF yields a Disc age estimate of $10^{+3}_{-1}$ Gyr, older but still consistent with previous estimates (Winget 1987, LRB, OSWH). In the context of current cosmochronometry, our Disc age estimate is consistent with current Globular cluster age estimates of 13-14 Gyr (Vandenberg 1998). Finally, when combined only with a conservative 1 Gyr value for the halo–Disc formation interval (Burkert, Truran & Hensler 1992, Pitts & Tayler 1992) and a further 1 Gyr for the big bang–halo formation interval, a 10 Gyr Disc excludes $\Omega=1$, $\Lambda=0$ cosmologies based on current estimates for $H_{0}$ of 60-80 km s$^{-1}$ Mpc$^{-1}$ (Freedman 1998).
## 9 Conclusions
1. Using a large collection of COSMOS/SuperCOSMOS digitised Schmidt plate data in ESO/SERC field 287 we have extracted a sample of proper motion objects. Number counts indicate this sample is complete down to the survey proper motion limit, which was chosen with care to exclude contaminant spurious proper motion measures.
2. A sample of cool white dwarfs has been culled from our proper motion objects using the reduced proper motion technique. By overplotting samples of known extreme subdwarfs we show our CWD sample is unlikely to be contaminated by other stellar population groups. We have confirmed the WD status of a number of our sample with AAT spectroscopy. The sample passes the $(V/V_{max})$ completeness test.
3. We calculate a total WD space density of $4.16\times 10^{-3}$ WDs per cubic parsec using Schmidt's $(1/V_{max})$ method. Careful comparison of luminosity functions constructed from our sample and theoretical models indicates an age for the local Galactic Disc of $10^{+3}_{-1}$ Gyr, older than previous estimates using this technique.
## Acknowledgments
We would like to thank M. Wood and E. García-Berro for access to theoretical WDLFs, and P. Bergeron, P. Hauschildt and B. Hansen for supplying model atmosphere predictions. Thanks also to S.
Ryan for supplying lists of subdwarfs, and to Andy Taylor for useful discussions concerning statistics. This work would not have been possible without the time and expertise of the SuperCOSMOS and UK Schmidt Library staff. Particular thanks to Harvey MacGillivray, Mike Read and Sue Tritton. Richard Knox acknowledges a PPARC postgraduate studentship.
## Appendix A Reduced Proper Motion Diagrams
The reduced proper motion (RPM) is defined by
$$H=m+5\log_{10}\mu+5$$ (16)
where m is apparent magnitude and $\mu$ proper motion. A reduced proper motion diagram (RPMD) is a plot of colour against RPM. It is a powerful way of combining proper motions and photometry to distinguish stellar population groups. Equation 16 can be re-written using the relationships $m=M-5+5\log d$ and $\mu=V_T/4.74d$ to give
$$H=M+5\log_{10}V_T-3.379$$ (17)
where M is the absolute magnitude, $V_T$ the transverse velocity (in $\mathrm{km\,s}^{-1}$) and d the distance (in parsecs). Since M and $V_T$ are both intrinsic properties of the star, so too is H. To see the significance of H, suppose that every star had an identical $V_T$; H would then clearly be simply M plus a constant, and the distribution of a particular population group in H at a particular colour would depend solely on the spread of the population's colour-magnitude relation at that colour. Of course there is a distribution in $5\log_{10}V_T$ for each population, but since the tangential velocities are distributed around a most probable value the RPM serves as an estimate of M, ie. $M=a+bH$ (a and b constants). The resulting locus for each population in the RPMD is then the convolution of its colour magnitude distribution with its $5\log_{10}V_T$ distribution over the diagram's colour range. To allow population discrimination in some colour region we therefore only require that the various population loci do not overlap significantly in that region of the RPMD. In effect the RPMD is analogous to the Hertzsprung-Russell diagram, and in both plots the white dwarf population is quite distinct in most colours.
# AN AUGMENTED SPACE RECURSION STUDY OF THE ELECTRONIC STRUCTURE OF ROUGH EPITAXIAL OVERLAYERS
## 1 Introduction
Magnetism at surfaces, overlayers and interfaces has evoked much interest in recent times. The chemical environment of an atom at a surface or overlayer is very different from the bulk. The difference in environment, existence of surface states and hybridization of the states of the overlayer with those of the substrate can give rise to a wide variety of new and interesting material and magnetic properties. This wide variety has the potential for being the basis of surface materials design. This is the underlying reason for the absorbing theoretical interest in this field. In this communication we wish to argue that the Augmented Space Recursion (ASR) introduced by us earlier is one of the most suitable techniques for the study of rough overlayers and interfaces. First principles all electron techniques for the determination of the electronic structure based on the local spin density approximation (LSDA) have made reasonably accurate quantitative calculations possible. Originally the most popular of the methods was the parametrized tight-binding or the linear combination of atomic orbitals (LCAO) method. However, the fact that the parametrized hamiltonian is, in general, never transferable and that the basis does not have sufficient variational freedom, has led to the eclipse of such methods for quantitative calculations; in particular of properties as sensitive to these assumptions as the magnetic moment. There have been attempts at resuscitating the LCAO by introducing ideas of environment dependent parametrization. The generally accepted quantitative techniques include the Augmented Plane Wave (APW) and its linearized version (LAPW) and the Korringa-Kohn-Rostoker (KKR) and its linearized version (LMTO). The two basically related methods come both in the Full Potential versions, where no assumption is made about the shape of the charge density or the potential, or in the spherically symmetrized Muffin-tin Potential versions. The electrons may be treated either semi-relativistically or fully relativistically. In addition, Andersen and co-workers have proposed a tight-binding LMTO (TB-LMTO) where the real space representation of the hamiltonian is sparse. Which of the two basic methods we choose is often a matter of taste and history. Moreover, how far we wish to go down the ladder of different approximations is guided by the accuracy required and the computational burden we are prepared to face. We would not like to comment on this, other than justifying the specific technique we have chosen for ourselves. The other important aspect of the problem is the loss of translational symmetry perpendicular to the surface. This aspect has been dealt with by different authors in different ways:
1. finite slab calculations, which assume that finite size effects are negligible
2. supercell calculations, where the translational symmetry is restored. Each supercell has a replica of the finite system and the assumption is that the supercells are large enough so as not to affect one another
3. the slab Green function method, where the translational symmetry parallel to the surface is utilized and the perpendicular direction is treated in real space. The embedding method of Inglesfield and coworkers belongs to this group, where the Green function of the semi-infinite solid is calculated by downfolding onto this semi-infinite subspace.
4.
the fully real space based Recursion method, which does not require any translational symmetry and was originally developed for dealing with surfaces and interfaces.
Overlayers produced by molecular beam epitaxy and other vapor deposition techniques are, by and large, rough. Local probes, such as STM techniques, reveal steps, islands and pyramid-like structures. Moreover, there is always interdiffusion between the overlayer and the substrate leading to a disordered alloy like layer at the interface. This brings in the last important aspect of the problem: roughness or disorder parallel to the surface. A majority of the theoretical work done on surfaces and overlayers so far has assumed flat layers. These calculations generally involve the use of surface Green functions, G(k,z), which allow breaking of translational symmetry perpendicular to the surface, but presume such symmetry parallel to it. Roughness has been introduced in overlayers by randomly alloying them with empty spheres. Such alloying has been assumed to be homogeneous and has been treated within a mean field or the coherent potential approximation (CPA). Attempts at going beyond the CPA have not been generally successful. One of the more successful approaches in this direction is the Augmented Space Formalism (ASF) and techniques basically based on it, like the travelling cluster approximation (TCA). Let us now justify why we wish to introduce the Augmented Space Recursion based on the TB-LMTO as an attractive method for the study of rough surfaces, overlayers or interfaces. The CPA has proven to be an accurate approximation in a very large body of applications. Why then do we wish to go beyond it? We should recall that the CPA is exact when the local coordination is infinite. Its accuracy is inversely proportional to the local coordination. We therefore expect the CPA to be comparatively less accurate at a surface as compared with the bulk calculations. Further, the CPA basically describes homogeneous randomness. It cannot accurately take into account clustering, short-ranged ordering or local lattice distortions, of the kind we expect to encounter in the rough surfaces produced experimentally. The ASF allows us to describe exactly such situations, without violating the so called "herglotz" properties which the approximated averaged Green function must possess. We shall combine the ASF with the Recursion method to calculate the configuration averaged Green functions. We should note that the Augmented Space Theorem is exact and the approximation lies entirely in terminating the recursion-generated continued fraction. Analyticity preserving "terminators" have been introduced by Haydock and Nex and Lucini and Nex. Recently Ghosh et al. have discussed the convergence of the Augmented Space Recursion and indicated how to generate physical quantities within a prescribed error window. The Recursion method, being entirely in real space, does not require any translational symmetry and is ideally suited for systems with inhomogeneous disorder. However, for the Recursion method to be a practicable computational technique, we must choose a basis of representation in which the effective hamiltonian is sparse, i.e. short ranged in real space. The best choice of a computationally simple yet accurate basis is the TB-LMTO. This is what we describe in this communication. However, the screened-KKR would also be a more quantitatively accurate choice. We would require the energy dependent extension of the Recursion method.
This has been developed recently and its application to the screened-KKR will be described in a subsequent communication. To illustrate the method we shall take a well studied example: that of Fe deposited on the (100) surface of a Ag substrate. The lattice parameter of bcc Fe, the most commonly known ferromagnet, matches the nearest neighbour distance on the (100) surface of fcc Ag (half the face diagonal), a very good non magnetic electrical conductor. This favours epitaxial deposition of bcc Fe on Ag(100), manifesting interesting magnetic properties. Before describing the methodology in some detail we need to clarify the following point: in order to describe inhomogeneous disorder we have taken recourse to the Generalized Augmented Space Theorem. This generalized ASF does take into account short-ranged order through the Warren-Cowley parameter and yields an analytic herglotz approximation. In a recent publication the authors make the strange statement that the generalized ASF yields negative densities of states and quote the work of Razee and Prasad. The statement is untrue and the misconception should be cleared up. A careful reading of the quoted article will show that in applying the generalized ASF Razee and Prasad use the Radon-Nikodym transform and write the joint density of states of the hamiltonian parameters $\mathcal{P}(\{\epsilon_i\})$ as $\left(\prod_i p(\epsilon_i)\right)\Phi(\{\epsilon_i\})$. For homogeneous disorder $\Phi(\{\epsilon_i\})$ is unity, while for inhomogeneous disorder the authors expand the function as an infinite series involving various correlation functions between the $\{\epsilon_i\}$ (the simplest two site correlation can be written in terms of the Warren-Cowley parameter). They then truncate this series after a few terms. This extra approximation cannot guarantee the preservation of the herglotz analytic properties and is the cause of the observed negative density of states in some energy regimes. The generalized ASF described by Mookerjee and Prasad does not take recourse to such an approximation and has been shown to be exact in the referenced paper. Approximation then arises entirely due to the recursion termination, which has been shown to preserve the herglotz analytic properties.
## 2 The Generalized Augmented Space Theorem
In this section we shall describe the generalized Augmented Space Formalism. The hamiltonian is a function of a set of random variables $\{n_i\}$ which are not independent, so that the joint probability distribution can be written in terms of the conditional probability densities of the individual variables as:
$$p(\{n_i\})=p(n_1)\prod_{k>1}p\left(n_k|n_{k-1},n_{k-2},\ldots,n_1\right)$$
Each random variable $n_k$ has associated with it its own configuration space $\Phi_k$ and, in the case of correlated disorder, a set of operators $\{M_k^{\lambda_{k-1},\lambda_{k-2},\ldots,\lambda_1}\}$ whose spectral densities are the conditional probability densities of the random variable, dependent on the configurations of the previously labelled ones. The $\lambda_k$ label the configurations of the variable $n_k$. The configuration space of the set of random variables is the product $\Psi=\prod_k^{\otimes}\Phi_k$.
What the generalized augmented space theorem proved was that, if we define operators on this full configuration space, $$\stackrel{~}{M}_k=\underset{\lambda _1}{}\underset{\lambda _2}{}\mathrm{}\underset{\lambda _{k1}}{}P_1^{\lambda _1}P_2^{\lambda _2}\mathrm{}P_{k1}^{\lambda _{k1}}M_k^{\lambda _{k1},\lambda _{k2},\mathrm{}\lambda _1}II\mathrm{}$$ then the configuration average of any function of the hamiltonian is given exactly by : $$<<(\{n_k\})>>=F^0|\stackrel{~}{}\left(\{\stackrel{~}{M}_k\}\right)|F^0$$ (1) The average state $`|F^0`$ is defined by : $`|F^0`$ $`=`$ $`{\displaystyle \underset{k}{}}|f_k^0`$ $`|f_k^0`$ $`=`$ $`{\displaystyle \underset{\lambda _k}{}}\sqrt{\omega _{\lambda _k}^{\lambda _1,\lambda _2\mathrm{}\lambda _{k1}}}|\lambda _k`$ where the numbers under the root sign are the conditional probability weights for the various configurations of the variable $`n_k`$. In our model, the random variables are the occupation variables of a site by two different kind of atoms . The simplest model is one that assumes that the occupation of the nearest neighbours of a site depends on its own occupation. The probability densities are given by : $`p(n_1)`$ $`=`$ $`x\delta (n_11)+y\delta (n_1)`$ $`p(n_2|n_1=1)`$ $`=`$ $`(x+\alpha y)\delta (n_21)+(1\alpha )y\delta (n_2)`$ $`p(n_2|n_1=0)`$ $`=`$ $`(1\alpha )x\delta (n_21)+(y+\alpha x)\delta (n_2)`$ Where x and y are the concentrations of the constituents and $`\alpha `$ is the Warren-Cowley short-ranged order parameter. $`\alpha `$=0 refers to the completely random case, when the various operators $`M_k^{\lambda _{k1},\mathrm{}\lambda _1}`$ become independent of the superscripts and the generalized augmented space theorem reduced to the usual augmented space theorem. $`\alpha <\mathrm{\hspace{0.33em}0}`$ indicates tendency towards ordering alternately, while $`\alpha >\mathrm{\hspace{0.33em}0}`$ indicates tendency towards segregation. The representations of the corresponding operators required are the following : $$M_1=\left(\begin{array}{cc}\text{x}& \sqrt{xy}\\ \sqrt{xy}& \text{y}\end{array}\right)$$ $$M_2^1=\left(\begin{array}{cc}\text{x+}\alpha \text{y}& \sqrt{(1\alpha )y(x+\alpha y)}\\ \sqrt{(1\alpha )y(x+\alpha y)}& \text{(1-}\alpha \text{)y}\end{array}\right)$$ $$M_2^0=\left(\begin{array}{cc}\text{(1-}\alpha \text{)x}& \sqrt{(1\alpha )x(y+\alpha x)}\\ \sqrt{(1\alpha )x(y+\alpha x)}& \text{y+}\alpha \text{x}\end{array}\right)$$ $$P_1^0=\left(\begin{array}{cc}\text{x}& \sqrt{xy}\\ \sqrt{xy}& \text{y}\end{array}\right)$$ $$P_1^1=\left(\begin{array}{cc}\text{y}& \sqrt{xy}\\ \sqrt{xy}& \text{x}\end{array}\right)$$ ## 3 TB-LMTO-ASR formulation Our system consists of a semi-infinite Ag substrate with layers of Fe atoms on the (100) surface. We shall describe the hamiltonian of the electrons within a tight- binding linearized muffin-tin orbitals basis (TB-LMTO). As described earlier, we shall take care of the charge leakage into the vacuum by layers of empty spheres containing charge but no atoms. We shall roughen the topmost layer by randomly alloying the Fe atoms with empty spheres. We shall allow for short-ranged order in the alloying. Segregation will imply that the Fe atoms and empty spheres cluster together forming islands and clumps. Ordering on the other hand will imply that Fe atoms like to be surrounded by empty spheres and vice versa. The details of the description of the effective augmented space hamiltonian has been described at length in an earlier paper . 
We shall indicate the generalization of that result when nearest neighbour short-ranged order is introduced as described above. $`\stackrel{~}{H}`$ $`=`$ $`𝐇_1\stackrel{~}{𝐈}+𝐇_2{\displaystyle \underset{k}{}}𝐏_k𝐏_{}^k+𝐇_3{\displaystyle \underset{k}{}}𝐏_k\{𝐓_{}^k+𝐓_{}^k\}`$ $`+𝐇_4{\displaystyle \underset{k}{}}{\displaystyle \underset{k^{}}{}}𝐓_{kk^{}}𝐈++\alpha 𝐇_2{\displaystyle \underset{mϵN_1}{}}𝐏_m𝐏_{}^1\{𝐏_{}^m𝐏_{}^m\}+`$ $`+𝐇_5{\displaystyle \underset{mϵN_1}{}}𝐏_m𝐏_{}^1\{𝐓_{}^m+𝐓_{}^m\}++𝐇_6{\displaystyle \underset{mϵN_1}{}}𝐏_m𝐏_{}^1\{𝐓_{}^m+𝐓_{}^m\}+`$ $`+\alpha 𝐇_2{\displaystyle \underset{mϵN_1}{}}𝐏_m\{𝐓_{}^1+𝐓_{}^1\}\{𝐏_{}^m𝐏_{}^m\}+`$ $`+𝐇_7{\displaystyle \underset{mϵN_1}{}}𝐏_m\{𝐓_{}^1+𝐓_{}^1\}\{𝐓_{}^2+𝐓_{}^2\}`$ where, $`N_1`$ are the set of nearest neighbours of the site labeled 1 on the surface and for calculations of averaged local densities of states at a constituent labeled by $`\lambda `$ we have $`𝐇_1`$ $`=`$ $`A(C/\mathrm{\Delta })\mathrm{\Delta }_\lambda \left(EA(1/\mathrm{\Delta })\mathrm{\Delta }_\lambda 1\right)`$ $`𝐇_2`$ $`=`$ $`B(C/\mathrm{\Delta })\mathrm{\Delta }_\lambda EB(1/\mathrm{\Delta })\mathrm{\Delta }_\lambda `$ $`𝐇_3`$ $`=`$ $`F(C/\mathrm{\Delta })\mathrm{\Delta }_\lambda EF(1/\mathrm{\Delta })\mathrm{\Delta }_\lambda `$ $`𝐇_4`$ $`=`$ $`\left(\mathrm{\Delta }_\lambda \right)^{1/2}S_{RR^{}}\left(\mathrm{\Delta }_\lambda \right)^{1/2}`$ $`𝐇_5`$ $`=`$ $`F(C/\mathrm{\Delta })\mathrm{\Delta }_\lambda \left[\sqrt{(1\alpha )x(x+\alpha y)}+\sqrt{(1\alpha )y(y+\alpha x)}1\right]`$ $`𝐇_6`$ $`=`$ $`F(C/\mathrm{\Delta })\mathrm{\Delta }_\lambda \left[y\sqrt{(1\alpha )(x+\alpha y)/x}+x\sqrt{(1\alpha )(y+\alpha x)/y}1\right]`$ $`𝐇_7`$ $`=`$ $`F(C\mathrm{\Delta })\mathrm{\Delta }_\lambda \left[\sqrt{(1\alpha )y(x+\alpha y)}\sqrt{(1\alpha )x(y+\alpha x)}\right]`$ $`A(Z)`$ $`=`$ $`xZ_A+yZ_B`$ $`B(Z)`$ $`=`$ $`(yx)\left(Z_AZ_B\right)`$ $`F(Z)`$ $`=`$ $`\sqrt{xy}\left(Z_AZ_B\right)`$ The C, $`\mathrm{\Delta }`$ and S are matrices in angular momenta, the first two being diagonal. We note first of all that when the short-ranged order disappears and $`\alpha `$ = 0, the terms H<sub>5</sub> to H<sub>7</sub> also becomes zero and the hamiltonian reduces to the standard one described earlier . This effective hamiltonian is sparse in the TB-LMTO basis, but as the expressions show there is an energy dependence in the first three terms. This compels us to carry out recursion at every energy step. However, Ghosh et. al. have shown that the corresponding energy dependence of the continued fraction coefficients is very weak and if we carry out recursions at a few selected seed energies across the spectrum, we may obtain accurate results by spline fitting the coefficients over the spectrum. For the self-consistent calculations we require to calculate the partial (atom projected) density of states at various sites in different layers. This is done by running the recursion starting from sites in different layers. We shall assume that after 5 layers from the surface bulk values are obtained. We checked that this is indeed the case, by comparing the results for the 5-th layer and a full bulk calculation. The Fermi-energy of the system is that of the bulk substrate which we have taken from the bulk calculations. In all cases we have used upto seven shells in augmented space and terminated the recursion after 8-10 steps of recursion. We have used the terminator proposed by Lucini and Nex . 
As discussed in an earlier paper , we have made sure that the moments of the densities of states converges with the number of augmented space shells and recursions within a preassigned error range, which is consistent with the errors made in the TB-LMTO approximations. We have made the recursive calculations LDA self-consistent. For this we had to obtain the radial solutions of the Schödinger equation involving the spherically symmetric LDA potential $$V_p^\lambda (r)=2\frac{Z^\lambda }{r}+V_p^{\lambda ,H}\left[\rho ^\lambda (r)\right]+V_p^{\lambda ,XC}\left[\rho ^\lambda (r)\right]+\underset{L}{}\underset{q}{}M_{pq}^LQ_q^L$$ $`\lambda `$ labels the type of atom, Z<sup>λ</sup> its atomic number, p labels the particular layer. The second term in the equation is the Hartree potential, which is obtained by solving the Poisson equation with the layer and atom projected charge densities. The third term is the exchange-correlation term. For this term we have used the Barth-Hedin form. In the last term $$Q_p^L=\underset{\lambda }{}x_p^\lambda \left\{\frac{\sqrt{4\pi }}{2\mathrm{}+1}_0^sY_L(\widehat{r})|r|^{\mathrm{}}\rho _p^\lambda (r)𝑑rZ^\lambda \delta _{\mathrm{},0}\right\}$$ Here $`\lambda `$ for the overlayer is either Fe or empty sphere and the concentrations x$`{}_{p}{}^{}{}_{}{}^{\lambda }`$ is either x or(1-x). For the substrate $`\lambda `$ refers only to Ag and its concentration is 1, while for the charge layers outside the overlayer $`\lambda `$ refers to the empty sphere and its concentration is also 1. This last term describes the effect of redistribution of charge near the surface which is particularly important for surface electronic structure. This charge density near the surface is far from spherically symmetric. We have taken both the monopole ($`\mathrm{}`$=0,m=0) and the dipole ($`\mathrm{}`$=1,m=0) contributions. We have also averaged the multipole moments in each layer and used the technique described by Skriver and Rosengaard to evaluate the matrices $`M_{pq}^L`$ by an Ewald technique. ## 4 Results and Discussion. In order to compare our results with calculations carried out earlier, we shall first carry out calculations on a (100) surface of bcc Fe. Earlier Wang and Freeman had used the LCAO method for the study of the same system. The FP-LAPW had been used by Ohnishi et.al. also the study the (100) surface of bcc Fe. The bulk lattice parameter was chosen (as in the case of Ohnishi et.al.) to be 5.4169 a.u. At this stage no lattice relaxation was considered. The results quoted below were for the semi-relativistic self-consistent LSDA TB-LMTO both supercell and ASR. The following table compares the magnetic moment per atom for the three different methods quoted above : | | S | S-1 | S-2 | B | | --- | --- | --- | --- | --- | | Wang and Freeman | 3.01 | 1.69 | 2.13 | 2.16 | | Ohnishi et.al. | 2.98 | 2.35 | 2.39 | 2.25 | | Sanyal et.al.<sup>(a)</sup> | 2.86 | 2.16 | 2.38 | 2.17 | | Sanyal et.al.<sup>(b)</sup> | 2.99 | 2.17 | 2.38 | 2.27 | Table 1 Magnetic moments in bohr-magnetons/atom <sup>(a)</sup> supercell and <sup>(b)</sup> ASR calculations Our central layer magnetic moment per atom is close to the bulk value given by Wang and Freeman and slightly lower than that given by Ohnishi et.al.. All three methods exhibit Friedel oscillations in the magnetic moment, although Wang and Freeman’s oscillations are larger than both Ohnishi et.al. and our work. Our magnetic moment at the surface layer is rather small as compared to the earlier works. 
However, in these initial calculations (shown as (a) in the table) we have not taken into account surface relaxation. Local lattice relaxation can be easily taken into account within the TB-LMTO-ASR . We refer the reader to the details of the relaxation method in the reference mentioned. A 7-8 $`\%`$ relaxation of the surface layer leads to a surface magnetic moment of 2.99 $`\mu _B/atom`$ which is in good agreement with both the earlier works (shown as (b) in Table 1). We shall now turn to the study of Fe (100) on the (100) surface of fcc Ag substrate. We shall carry out the calculations using two different techniques. First, we shall use the Tight Binding Linearised Muffin Tin Orbital (TB-LMTO) method with a minimal (s,p,d) basis set for Fe and Ag sites in a tetragonal supercell. Both spin polarized as well as non-spin polarized calculations were performed on a Fe/Ag multilayer containing a monolayer of Fe, a monolayer of empty spheres above them and four Ag layers as the substrate. The empty spheres take care of the charge leakage into the vacuum across the free surface. The results of the calculation show that spin polarization yields a lower total ground state energy as compared with the unpolarised case by $``$0.092 eV/atom suggesting that the ground state is magnetic . All the Fe layers have ferromagnetically arranged moments with interface Fe layers having a magnetic moment of $`2.86\mu _B`$ (bulk value 2.27 $`\mu _B`$). Also Fe induces a ferromagnetic moment in Ag at the interface of $`0.012\mu _B`$ per atom. The calculation also suggests Friedel oscillations in net valence charge in Ag as one goes from interface to bulk in Ag. This is because of moment spillage into the empty spheres. Such moment spillage outside the surface has also been observed by Ohnishi et.al. . We shall refine our calculations in three steps. First we shall introduce the local lattice relaxation technique within the TB-LMTO-ASR to relax the surface layer. We shall inflate the interlayer distance between the surface layer and the one just below it. Figure 1 shows the variation of the magnetic moment at the surface layer as a function of the percentage lattice dilatation at the surface. The minimum of the total energy occurs at around 7.5$`\%`$ dilatation. Here the moment carried by the monolayer of Fe is 3.17 $`\mu _B/atom`$, which is not very far from the value of 3.1 $`\mu _B/atom`$ quoted by Blügel based on FP-LAPW calculations . Next we shall begin with a planar monolayer of Fe on Ag and roughen the monolayer by alloying it with empty spheres. We shall now use the self-consistent ASR for obtaining the electronic density of states and local magnetization as a function of the concentration of alloying and the short-range order parameter. We shall begin the LDA-self-consistency by using, to start with, the converged potential parameters from the supercell calculations on planar surfaces and the equilibrium lattice distances i.e. with a 7.5 $`\%`$ surface lattice dilatation. With this starting point the self-consistency is reached much faster than otherwise. Figure 2(a) shows the local density of states at a point in the bulk Ag substrate (full lines) and that for an Ag atom on the 100 surface of fcc Ag (without the deposited Fe overlayer) (dotted lines), obtained by a eight-step recursion process. We have checked that the recursion does converge in the sense suggested by Haydock and Ghosh et.al. 
of the convergence of integrals of the form $$_{\mathrm{}}^E\mathrm{\Phi }(E^{})n(E^{})𝑑E^{}$$ where $`\mathrm{\Phi }(E)`$ is a well-behaved, monotonic function in the integration range. The Fermi-energy or the chemical potential is calculated from the bulk and is shown in the Figure 2(a). As expected we notice that the d-band width decreases at the surface. This is expected, as the surface atoms are less coordinated than the bulk (eight on the 100 surface as against twelve in the bulk). There is also a redistribution of spectral weight in the band. It is clear that the amount of charge in a Wigner-Seitz sphere around a surface atom is less than that around a bulk atom. This extra charge leaks out into the so-called empty-spheres, which carry no atoms but only this leaked charge. By the time we go down about four layers below the surface, we begin to get local densities indistinguishable from the bulk results. Figure 2(b) Shows the local density of states for the up and down electrons in the Fe overlayer. This is for a perfectly planar overlayer on the 100 surface. As is usual in either bulk Fe or Fe overlayers on noble metals, the majority occupied spin band (here up) shows much more structure than the minority occupied one (here down). Since the Ag d-bands centered round –0.5 ryd do not overlap with either of the Fe d-bands around –0.2 ryd and –0.1 ryd, there is no significant hybridization of these two, which usually leads to a widening of the Fe d-bands and consequent lowering of the local magnetic moment. The Fermi-energy is that of the bulk Ag and is shown in the figure. We now alloy the overlayer with empty spheres and re-converge the self-consistent ASR. In Figure 3 we show the local magnetic moment on a Fe atom in the rough overlayer as a function of the Fe concentration in that layer (dotted line) with 7.5 $`\%`$ surface dilatation. For concentration x=1 of Fe we obtain the local magnetic moment corresponding to that of Figure 2(b). The value of 3.17 $`\mu _B/atom`$ is a considerable enhancement on bulk bcc iron local magnetic moment. The agreement with the supercell calculations is very close. Blügel has argued that this can be inferred from the Stoner criterion because of the narrowing of the overlayer d-bands as compared with the bulk. As we alloy the overlayer with empty spheres, the local magnetic moment on an Fe atom increases, until in the extreme case it approaches that of an isolated Fe atom at $`>`$ 3.6 $`\mu _B/atom`$. Again we much understand this from Blügel’s argument. We find that the empty spheres hardly inherit any induced magnetization, as a result as the concentration of empty spheres increase, the average coordination of Fe atoms decrease, thus increasing the magnetic moment. In the extreme limit we obtain the case of an Fe impurity atom sitting in a sea of empty spheres. Its magnetic moment approaches that of a free Fe atom. The only difference is caused by its hybridization with the Ag substrate. Figure 3 also shows (full lines) the averaged magnetization in the overlayer. This is defined by : x M<sub>Fe</sub> \+ y M<sub>ES</sub>. Since M<sub>ES</sub> is negligible, this average overlayer magnetization decreases almost linearly with x and vanishes at x=0. The two types of magnetization shown in the figure are measured by local magnetic probes and global magnetization experiments. Figure 4 shows the local magnetization at atoms in different layers . We clearly see that there is an induced magnetization in the Ag atoms of the topmost substrate layers. 
Magnetization oscillates layer wise into the bulk. Figure 5 shows the variation of the local magnetic moment at a Fe site (dotted line) and the averaged magnetic moment in the overlayer as a function of the Warren-Cowley short-ranged order parameter for (a) x=0.9 and (b) x= 0.75 . We note that when the Warren-Cowley parameter indicates phase segregation the magnetic moment shows an increase. We may understand this behaviour from the following argument : For $`\alpha >`$ 0 the tendency is towards phase segregation. Islands of Fe (in our case, clusters of nearest neighbour atoms) precipitate in a sea of empty spheres (particularly in the low Fe concentration regime). This situation mimics the islands and pyramids observed in actual MBE deposited surfaces. A simple calculation with an isolated five atom nearest neighbour cluster sitting on the surface shows that the local density of states on the cluster is much narrower than a homogeneous distribution of Fe atoms on the surface. This leads to a larger magnetic moment/atom on the cluster. The maximum enhancement of the magnetic moment due to short-ranged clustering is around 3 $`\%`$. Clustering enhancement of magnetic moment competes with the ‘poisoning’ effect. Interfaces are never sharp, there is always an interdiffusion of substrate atoms into the surface layer and vice versa. In our final calculation we have taken a perfectly planar (non rough) monolayer of Fe on the (100) surface of fcc Ag and allowed upto 10$`\%`$ interdiffusion of Fe and Ag atoms in the surface layer and the one just below it. The surface layer is then an alloy Fe<sub>x</sub>Ag<sub>1-x</sub> and the next layer an alloy Ag<sub>x</sub>Fe<sub>1-x</sub>. The following table shows the magnetic moments in the surface layer for different values of x. | x | Averaged Mag. Mom. | Fe Mag. Mom. | Ag Mag. Mom. | | --- | --- | --- | --- | | 0.95 | 3.02 | 3.18 | 0.014 | | 0.90 | 2.86 | 3.17 | 0.017 | Table 2 Lowering of Surface magnetism due to ‘poisoning’ by substrate All magnetic moments are in ($`\mu _B/atom`$) We notice that the depletion of magnetic moment due to poisoning by the substrate is about 4.5$`\%`$. In an actual experimental situation both the enhancement effects due to surface lattice dilatation and clustering and the depletion effect due to poisoning are present simultaneously. We have a handle on the determination of the lattice dilatation. Surface roughness may be probed with local techniques like the STM . If we could determine the amount of interdiffusion, we would be in a position to quantitatively predict the surface magnetic moment. The conclusion of this communication is to suggest that the Augmented Space Recursion coupled with any first principles and accurate technique which yields a sparse hamiltonian representation (like the TB-LMTO or the screened KKR) can take into account surface roughness, short-ranged clustering, surface dilatation and interdiffusion effects accurately and it would be an useful methodology to adopt. ACKNOWLEDGEMENTS A.M., G.P.D. and B.S. should like to thank the ICTP and its Network Project and the DST, India for financial support of this work. P.B. would like to thank the C.S.I.R., India for its financial assistance. The collaborative project between the University of Warwick and the S.N.Bose National Centre is also gratefully acknowledged. We should like to thank I. Dasgupta and T. Saha-Dasgupta whose bulk LDA-self- consistent codes formed the basis of this surface generalization. 
FIGURE CAPTIONS Surface magnetic moment (bohr-magnetons/atom) as a function of $`\%`$ surface dilatation (dilatation of the distance between the surface overlayer and the next layer in the substrate) Local density of states at a Ag atom in the bulk (dotted line) and on the (100) surface (full lines) Local density of states at a Fe atom in an overlayer on the (100) surface of a Ag substrate. Both the up spin and the down spin densities are shown. Local magnetic moment (dotted line) and averaged magnetic moment (full line) on a Fe atom in a rough overlayer on the (100) surface of a Ag substrate. Roughness is modelled by an alloy of Fe and empty spheres. The magnetic moments are shown as a function of the concentration of Fe in this model alloy. Results are for 7.5$`\%`$ surface dilatation. Oscillation of magnetic moment on different layers of a Fe overlayer on the (100) surface of a Ag substrate. Surface magnetic moment (bohr-magnetons/atom) as a function of the Warren-Cowley short-ranged order parameter for (a) 90$`\%`$ Fe 10$`\%`$ Empty spheres and (b) 75$`\%`$ Fe 25$`\%`$ Empty spheres in the surface overlayer with 7.5$`\%`$ surface dilatation.
no-problem/9903/hep-ph9903409.html
ar5iv
text
# References Physical Review D 60, 117503 (1999) Are the reactions $`\gamma \gamma VV^{}`$ a challenge for the factorized Pomeron at high energies ? N.N. Achasov and G.N. Shestakov Laboratory of Theoretical Physics, S.L. Sobolev Institute for Mathematics, 630090, Novosibirsk 90, Russia Abstract We would like to point to the strong violation of the putative factorized Pomeron exchange model in the reactions $`\gamma \gamma VV^{}`$ in the high-energy region where this model works fairly well in all other cases. PACS number(s): 12.40.Nn, 13.60.Le, 13.65.+i The factorized Pomeron exchange model is one of the most well-grounded and good working phenomenological models in high energy physics. Currently this model is particularly used in analyses of the DESY $`ep`$ collider HERA and CERN $`e^+e^{}`$ collider LEP2 data on $`\gamma p`$ and $`\gamma \gamma `$ interactions (see, for example, Refs. \[1-5\]). About five years ago, immediately after the ARGUS observation of $`\gamma \gamma \rho ^0\varphi `$ , we intended to publish a work entitled “Is the reaction $`\gamma \gamma \rho ^0\varphi `$ a challenge for the factorization model at high energies ?” As a result, there appeared the paper: “Estimate of $`\sigma (\gamma \gamma VV^{})`$ at high energies” , in which, on the basis of the factorization model, the cross section for the reaction $`\gamma \gamma \rho ^0\varphi `$ was estimated in the range $`11.5W_{\gamma \gamma }18.4`$ GeV (where $`W_{\gamma \gamma }`$ is the $`\gamma \gamma `$ center-of-mass energy): $`\sigma (\gamma \gamma \rho ^0\varphi )=(1.22.4)`$ nb. We obtained the estimate taking into account all possible combinations of the existing sets of the data on the reactions $`\gamma p\rho ^0p`$ and $`\gamma p\varphi p`$, in the incident photon laboratory energy range from 70 to 180 GeV, in the factorization relation for the $`\gamma \gamma \rho ^0\varphi `$, $`\gamma p\rho ^0p`$, $`\gamma p\varphi p`$, and $`pppp`$ cross sections . A comparison of this estimate with the ARGUS data, $`\sigma (\gamma \gamma \rho ^0\varphi )=(0.16\pm 0.16)`$ nb for $`3.25W_{\gamma \gamma }3.5`$ GeV, has shown that between 3.5 and 11.5 GeV the $`\gamma \gamma \rho ^0\varphi `$ reaction cross section can increase by an order of magnitude. Nothing of the kind has yet occurred in elastic and quasielastic reactions with the Pomeron exchange and with particles involving light quarks. Therefore, such an unusually strong rise of $`\sigma (\gamma \gamma \rho ^0\varphi )`$ expected from the factorization model and from the ARGUS data would be essentially a real challenge for our current ideas about the dynamics of quasi-two-body reactions. Why is the $`\gamma \gamma \rho ^0\varphi `$ cross section so small near 3.5 GeV ? In Ref. we concluded that either we faced a new physical phenomenon in the reaction $`\gamma \gamma \rho ^0\varphi `$ or the ARGUS data were underestimated for some reason. In Ref. we also applied the factorization model to other reactions $`\gamma \gamma VV^{}`$ ($`V(V^{})=\rho ^0,\omega ,\varphi `$). In particular, for the $`\rho ^0\rho ^0`$ and $`\rho ^0\omega `$ channels in the range $`11.5W_{\gamma \gamma }18.4`$ GeV, we obtained the following estimates: $`\sigma (\gamma \gamma \rho ^0\rho ^0)=(9.921)`$ nb and $`\sigma (\gamma \gamma \rho ^0\omega )=(1.93.8)`$ nb. 
Note that the central values of our estimates for $`\sigma (\gamma \gamma \rho ^0\rho ^0)`$, $`\sigma (\gamma \gamma \rho ^0\omega )`$, and $`\sigma (\gamma \gamma \rho ^0\varphi )`$ are in excellent agreement with the similar ones obtained in Ref. for the other purpose. Here we want once again to question the factorization model for the reactions $`\gamma \gamma VV^{}`$ in connection with the imposing data obtained by the L3 Collaboration on the reaction $`\gamma \gamma \rho ^0\rho ^0`$, which has been reported at the International Workshop on $`e^+e^{}`$ Collisions from $`\varphi `$ to $`J/\psi `$ in Novosibirsk . Figure 1 shows the cross section for the process $`\gamma \gamma \pi ^+\pi ^{}\pi ^+\pi ^{}`$ measured by the L3 Collaboration in the energy range from 0.75 to 4.9 GeV . For $`W_{\gamma \gamma }<2`$ GeV, $`\sigma (\gamma \gamma \pi ^+\pi ^{}\pi ^+\pi ^{})`$ is rather large and is strongly dominated by $`\rho ^0\rho ^0`$ production . Let us now look at the high $`W_{\gamma \gamma }`$ region. For $`4.5W_{\gamma \gamma }4.9`$ GeV, as is clear from the L3 data shown in Fig. 1, $`\sigma (\gamma \gamma \rho ^0\rho ^0)`$ is certainly less than 1.5 nb. Thus, for the reaction $`\gamma \gamma \rho ^0\rho ^0`$ one can repeat exactly the same statements which have been done in Ref. and mentioned above in connection with the data on $`\rho ^0\varphi `$ production and the factorization model prediction. However, we now assess the situation of the factorization model as more critical. The fact is that the L3 Collaboration has already measured the rate of $`\gamma \gamma \rho ^0\rho ^0`$ events up to $`W_{\gamma \gamma }=10`$ GeV . If the $`\gamma \gamma \rho ^0\rho ^0`$ cross section does not increase approximately by an order of magnitude with increasing $`W_{\gamma \gamma }`$ from 5 to 10 GeV, then it will signify that the factorization model for the reaction $`\gamma \gamma \rho ^0\rho ^0`$ is a failure in the energy region where this works fairly well in other cases. A failure of the factorization should be expected not only in the $`\rho ^0\rho ^0`$ and $`\rho ^0\varphi `$ channels but in the $`\rho ^0\omega `$, $`\omega \omega `$, $`\omega \varphi `$, and $`\varphi \varphi `$ ones, too, because, at high energies, the reactions $`\gamma \gamma \rho ^0\rho ^0`$, $`\gamma \gamma \rho ^0\omega `$, $`\gamma \gamma \rho ^0\varphi `$, $`\gamma \gamma \omega \omega `$, $`\gamma \gamma \omega \varphi `$, and $`\gamma \gamma \varphi \varphi `$ are due to have similar mechanisms. Thus, it may happen that either the $`\gamma \gamma \rho ^0\rho ^0`$ reaction cross section reaches the magnitude expected on the basis of the factorization model only at still higher energies, and there is a need to look for a specific dynamical reason for so defiant a phenomena in the formation mechanism of the Pomeron exchange for quasi-two-body reactions, or the L3 detection efficiency for the process $`\gamma \gamma \rho ^0\rho ^0`$, which is small at high $`W_{\gamma \gamma }`$ , has been, however, overestimated by an order of magnitude. Both of these possibilities are thus extremely important and require an immediate elucidation. However, it seems almost improbable that the same accident has occurred in measuring the two different reactions $`\gamma \gamma \rho ^0\varphi `$ and $`\gamma \gamma \rho ^0\rho ^0`$ with the two different detectors ARGUS and L3, respectively. We would like to thank V. Schegelsky for the discussion of the L3 data. FIGURE CAPTION Fig. 
1 The L3 preliminary data on the $`\gamma \gamma \pi ^+\pi ^{}\pi ^+\pi ^{}`$ cross section (open circles) and the ARGUS data on the $`(J^P,|J_z|)=(2^+,\mathrm{\hspace{0.17em}2})`$ partial cross section for $`\gamma \gamma \rho ^0\rho ^0`$ (full squares) and $`\gamma \gamma \rho ^+\rho ^{}`$ (open triangles).
no-problem/9903/adap-org9903006.html
ar5iv
text
# Neutral Evolution of Mutational Robustness ## I Introduction Kimura’s contention that a majority of genotypic change in evolution is selectively neutral has gained renewed attention with the recent analysis of evolutionary optimization methods and the discovery of neutral networks in genotype-phenotype models for RNA secondary structure and protein structure . It was found that collections of mutually neutral genotypes, which are connected via single mutational steps, form extended networks that permeate large regions of genotype space. Intuitively, a large degeneracy in genotype-phenotype maps, when combined with the high connectivity of (high-dimensional) genotype spaces, readily leads to such extended neutral networks. This intuition is now supported by recent theoretical results . In in vitro evolution of ribozymes, mutations responsible for an increase in fitness are only a small minority of the total number of accepted mutations . This indicates that, even in adaptive evolution, the majority of point mutations is neutral. The fact that only a minority of loci is conserved in sequences evolved from a single ancestor similarly indicates a high degeneracy in ribozymal genotype-phenotype maps . Neutrality is also implicated in experiments where RNA sequences evolve a given structure starting from a range of different initial genotypes . More generally, neutrality in RNA and protein genotype-phenotype maps is indicated by the observation that their structures are much better conserved during evolution than their sequences . Given the presence of neutral networks that preserve structure or function in sequence space, one asks, How does an evolving population distribute itself over a neutral network? Can we detect and analyze structural properties of neutral networks from data on biological or in vitro populations? To what extent does a population evolve toward highly connected parts of the network, resulting in sequences that are relatively insensitive to mutations? Such mutational robustness has been observed in biological RNA structures and in simulations of the evolution of RNA secondary structure . However, an analytical understanding of the phenomenon, the underlying mechanisms, and their dependence on evolutionary parameters—such as, mutation rate, population size, selection advantage, and the topology of the neutral network—has up to now not been available. Here we develop a dynamical model for the evolution of populations on neutral networks and show analytically that, for biologically relevant population sizes and mutation rates, a population’s distribution over a neutral network is determined solely by the network’s topology. Consequently, one can infer important structural information about neutral networks from data on evolving populations, even without specific knowledge of the evolutionary parameters. Simulations of the evolution of a population of RNA sequences, evolving on a neutral network defined with respect to secondary structure, confirm our theoretical predictions and illustrate their application to inferring network topology. ## II Modeling Neutrality We assume that genotype space contains a neutral network of high, but equal fitness genotypes on which the majority of a population is concentrated and that the neighboring parts of genotype space consist of genotypes with markedly lower fitness. The genotype space consists of all sequences of length $`L`$ over a finite alphabet $`𝒜`$ of $`A`$ symbols. 
The neutral network on which the population moves can be most naturally regarded as a graph $`G`$ embedded in this genotype space. The vertex set of $`G`$ consists of all genotypes that are on the neutral network; denote its size by $`|G|`$. Two vertices are connected by an edge if and only if they differ by a single point mutation. We will investigate the dynamics of a population evolving on this neutral network and analyze the dependence of several population statistics on the topology of the graph $`G`$. With these results, we will then show how measuring various population statistics enables one to infer $`G`$’s structural properties. For the evolutionary process, we assume a discrete-generation selection-mutation dynamics with constant population size $`M`$. Individuals on the neutral network $`G`$ have a fitness $`\sigma `$. Individuals outside the neutral network have fitnesses that are considerably smaller than $`\sigma `$. Under the approximations we use, the exact fitness values for genotypes off $`G`$ turn out to be immaterial. Each generation, $`M`$ individuals are selected with replacement and with probability proportional to fitness and then mutated with probability $`\mu `$. These individuals form the next generation. This dynamical system is a discrete-time version of Eigen’s molecular evolution model . Our analysis can be directly translated to the continuous-time equations for the Eigen model. The results remain essentially unchanged. Although our analysis can be extended to more complicated mutation schemes, we will assume that only single point mutations can occur at each reproduction of an individual. With probability $`\mu `$ one of the $`L`$ symbols is chosen with uniform probability and is mutated to one of the $`A1`$ other symbols. Thus, under a mutation, a genotype $`s`$ moves with uniform probability to one of the $`L(A1)`$ neighboring points in genotype space. ### A Infinite-Population Solution The first step is to solve for the asymptotic distribution of the population over the neutral network $`G`$ in the limit of very large population sizes. Once the (infinite) population has come to equilibrium, there will be a constant proportion $`P`$ of the population located on the network $`G`$ and a constant average fitness $`f`$ in the population. Under selection the proportion of individuals on the neutral network increases from $`P`$ to $`\sigma P/f`$. Under mutation a proportion $`\nu `$ of these individuals remains on the network, while a proportion $`1\nu `$ falls off the neutral network to lower fitness. At the same time, a proportion $`Q`$ of individuals located outside $`G`$ mutate onto the network so that an equal proportion $`P`$ ends up on $`G`$ in the next generation. Thus, at equilibrium, we have a balance equation: $$P=\frac{\sigma }{f}\nu P+Q.$$ (1) In general, the contribution of $`Q`$ to $`P`$ is negligible. As mentioned above, we assume that the fitness $`\sigma `$ of the network genotypes is substantially larger than the fitnesses of those off the neutral network and that the mutation rate is small enough so that the bulk of the population is located on the neutral network. Moreover, since their fitnesses are smaller than the average fitness $`f`$, only a fraction of the individuals off the network $`G`$ produces offspring for the next generation. Of this fraction, only a small fraction mutates onto the neutral network $`G`$. Therefore, we neglect the term $`Q`$ in Eq. 
(1) and obtain: $$\frac{\sigma }{f}\nu =1.$$ (2) This expresses the balance between selection expanding the population on the network and deleterious mutations reducing it by moving individuals off. Under mutation an individual located at genotype $`s`$ of $`G`$ with vertex degree $`d_s`$ (the number of neutral mutant neighbors) has probability $$\nu _s=1\mu \left(1\frac{d_s}{(A1)L}\right)$$ (3) to remain on the neutral network $`G`$. If asymptotically a fraction $`P_s`$ of the population is located at genotype $`s`$, then $`\nu `$ is simply the average of $`\nu _s`$ over the asymptotic distribution on the network: $`\nu =_{sG}\nu _sP_s/P`$. As Eq. (3) shows, the average $`\nu `$ is simply related to the population neutrality $`d=_{sG}d_sP_s/P`$. Moreover, using Eq. (2) we can directly relate the population neutrality $`d`$ to the average fitness $`f`$: $$d=L(A1)\left[1\frac{\sigma f}{\mu \sigma }\right].$$ (4) Despite our not specifying the details of $`G`$’s topology, nor giving the fitness values of the genotypes lying off the neutral network, one can relate the population neutrality $`d`$ of the individuals on the neutral network directly to the average fitness $`f`$ in the population. It may seem surprising that this is possible at all. Since the population consists partly of sequences off the neutral network, one expects that the average fitness is determined in part by the fitnesses of these sequences. However, under the assumption that back mutations from low-fitness genotypes off the neutral network onto $`G`$ are negligible, the fitnesses of sequences outside $`G`$ only influence the total proportion $`P`$ of individuals on the network, but not the average fitness in the population. Equation (4) shows that the population neutrality $`d`$ can be inferred from the average fitness and other parameters—such as, mutation rate. However, as we will now show, the population neutrality $`d`$ can also be obtained independently, from knowledge of the topology of $`G`$ alone. The asymptotic equilibrium proportions $`\{P_s\}`$ of the population at network nodes $`s`$ are the solutions of the simultaneous equations: $$P_s=(1\mu )\frac{\sigma }{f}P_s+\frac{\mu }{L(A1)}\underset{t[s]_G}{}\frac{\sigma }{f}P_t,$$ (5) where $`[s]_G`$ is the set of neighbors of $`s`$ that are also on the network $`G`$. Using Eq. (4), Eq. (5) can be rewritten as: $$dP_s=\left(𝐆\stackrel{}{P}\right)_s,$$ (6) where $`𝐆`$ is the adjacency matrix of the graph $`G`$: $$𝐆_{st}=\{\begin{array}{cc}1\hfill & t[s]_G,\hfill \\ 0\hfill & \text{otherwise}.\hfill \end{array}$$ (7) Since $`𝐆`$ is nonnegative and the neutral network $`G`$ is connected, the adjacency matrix is irreducible. Therefore, the theorems of Frobenius-Perron for nonnegative irreducible matrices apply. These imply that the proportions $`P_s`$ of the limit distribution on the network are given by the principal eigenvector of the graph adjacency matrix $`𝐆`$. Moreover, the population neutrality is equal to $`𝐆`$’s spectral radius $`\rho `$: $`d=\rho `$. In this way, one concludes that asymptotically the population neutrality $`d`$ is independent of evolutionary parameters ($`\mu `$, $`L`$, $`\sigma `$) and of the fitness values of the genotypes off the neutral network. It is a function only of the neutral network topology as determined by the adjacency matrix $`𝐆`$. This fortunate circumstance allows us to consider several practical consequences. 
Note that knowledge of $`\mu `$, $`\sigma `$, and $`f`$ allows one to infer a dominant feature of $`G`$’s topology, namely, the spectral radius $`\rho `$ of its adjacency matrix. In in vitro evolution experiments in which biomolecules are evolved (say) to bind a particular ligand , by measuring the proportion $`\nu `$ of functional molecules that remain functional after mutation, we can now infer the spectral radius $`\rho `$ of their neutral network. In other situations, such as in the bacterial evolution experiments of Ref. , it might be more natural to measure the average fitness $`f`$ of an evolving population and then use Eq. (4) to infer the population neutrality $`d`$ of viable genotypes in sequence space. ### B Blind and Myopic Random Neutral Walks In the foregoing we solved for the asymptotic average neutrality $`d`$ of an infinite population under selection and mutation dynamics and showed that it was uniquely determined by the topology of the neutral network $`G`$. To put this result in perspective, we now compare the population neutrality $`d`$ with the effective neutralities observed under two different kinds of random walk over $`G`$. The results illustrate informative extremes of how network topology determines the population dynamics on neutral networks. The first kind of random walk that we consider is generally referred to as a blind ant random walk. An ant starts out on a randomly chosen node of $`G`$. Each time step it chooses one of its $`L(A1)`$ neighbors at random. If the chosen neighbor is on $`G`$, the ant steps to this node, otherwise it remains at the current node for another time step. It is easy to show that this random walk asymptotically spends equal amounts of time at all of $`G`$’s nodes . Therefore, the network neutrality $`\overline{d}`$ of the nodes visited under this type of random walk is simply given by: $$\overline{d}=\underset{sG}{}\frac{d_s}{|G|}.$$ (8) It is instructive to compare this with the effective neutrality observed under another random walk, called the myopic ant. An ant again starts at a random node $`sG`$. Each time step, the ant determines the set $`[s]_G`$ of network neighbors of $`s`$ and then steps to one at random. Under this random walk, the asymptotic proportion $`P_s`$ of time spent at node $`s`$ is proportional to the node degree $`d_s`$ . It turns out that the myopic neutrality $`\widehat{d}`$ seen by this ant can be expressed in terms of the mean $`\overline{d}`$ and variance $`\mathrm{Var}(d)`$ of node degrees over $`G`$: $$\widehat{d}=\overline{d}+\frac{\mathrm{Var}(d)}{\overline{d}}.$$ (9) The network and myopic neutralities, $`\overline{d}`$ and $`\widehat{d}`$, are thus directly given in terms of local statistics of the distribution of vertex degrees, while the population neutrality $`d`$ is given by $`\rho `$, the spectral radius of $`G`$’s adjacency matrix. The latter is an essentially global property of $`G`$. ## III Mutational Robustness With these cases in mind, we now consider how different network topologies are reflected by these neutralities. In prototype models of populations evolving on neutral networks, the networks are often assumed to be or are approximated as regular graphs . If the graph $`G`$ is, in fact, regular, each node has the same degree $`d`$ and, obviously, one has $`d=\overline{d}=\widehat{d}=d`$. In more realistic neutral networks, one expects $`G`$’s neutralities to vary over the network. 
When this occurs, the population neutrality is typically larger than the network neutrality: $`d=\rho >\overline{d}`$. This difference quantifies precisely the extent to which a population seeks out the most connected areas of the neutral network. Thus, a population will evolve a mutational robustness that is larger than if the population were to spread uniformly over the neutral network. Additionally, the mutational robustness tends to increase during the transient phase in which the population relaxes towards the its asymptotic distribution. Assume, for instance, that initially the population is located entirely off the neutral network $`G`$ at lower fitness sequences. At some time, a genotype $`sG`$ is discovered by the population. To a rough approximation, one can assume that the probability of a genotype $`s`$ being discovered first is proportional to the number of neighbors, $`L(A1)d_s`$, that $`s`$ has off the neutral network. Therefore, the population neutrality $`d_0`$ when the population first enters the neutral network $`G`$ is approximately given by: $$d_0=\overline{d}\frac{\mathrm{Var}(d)}{L(A1)\overline{d}}.$$ (10) Therefore, we define the excess robustness $`r`$ to be the relative increase in neutrality between the initial neutrality and (asymptotic) population neutrality: $$r\frac{dd_0}{d_0}.$$ (11) For networks that are sparse, i.e. $`\overline{d}L(A1)`$, this is well approximated by $`r(d\overline{d})/\overline{d}`$. Note that, while $`r`$ is defined in terms population statistics, the preceding results have shown that this robustness is only a function of $`G`$’s topology and should thus be considered a property of the network. ## IV Finite-Population Effects Our analysis of the population distribution on the neutral network $`G`$ assumed an infinite population. For finite populations, it is well known that sampling fluctuations converge a population and this raises a question: To what extent does the asymptotic distribution $`P_s`$ still describe the distribution over the network for small populations? As a finite population diffuses over a neutral network , one might hope that the time average of the distribution over $`G`$ is still given by $`P_s`$. Indeed, the simulation results shown below indicate that for moderately large population sizes, this seems to be the case. However, a simple argument shows that this cannot be true for arbitrarily small populations. Assume that the population size $`M`$ is so small that the product of mutation rate and population size is much smaller than $`1`$; i.e. $`M\mu 1`$. In this limit the population will, at any point in time, be completely converged onto a single genotype $`s`$ on the neutral network $`G`$. With probability $`M\mu `$ a single mutant will be produced at each generation. This mutant is equally likely to be one of the $`L(A1)`$ neighbors of $`s`$. If this mutant is not on $`G`$, it will quickly disappear due to selection. However, if the mutant is on the neutral network, there is a probability $`1/M`$ that it will take over the population. When this happens, the population will effectively have taken a random-walk step on the network, of exactly the kind followed by the blind ant. Therefore, for $`M\mu 1`$, the population neutrality will be equal to the network neutrality: $`d=\overline{d}`$. In this regime, $`r0`$ and excess mutational robustness will not emerge through evolution. 
The extent to which the initial population neutrality approaches $`d`$ is determined by the extent to which evolution on $`G`$ is dominated by sampling fluctuations. In neutral evolution, population convergence is generally only a function of the product $`M\mu `$ . Thus, as the product $`M\mu `$ ranges from values much smaller than $`1`$ to values much larger than $`1`$, we predict that the population neutrality $`d`$ shifts from the network neutrality $`\overline{d}`$ to the infinite-population neutrality, given by $`𝐆`$’s spectral radius $`\rho `$. ## V RNA Evolution on Structurally Neutral Networks The evolution of RNA molecules in a simulated flow reactor provides an excellent arena in which to test the theoretical predictions of evolved mutational robustness. The replication rates (fitnesses) were chosen to be a function only of the secondary structures of the RNA molecules. The secondary structure of RNA is an essential aspect of its phenotype, as documented by its conservation in evolution and the convergent in vitro evolution toward a similar secondary structure when selecting for a specific function . RNA secondary structure prediction based on free energy minimization is a standard tool in experimental biology and has been shown to be reliable, especially when the minimum free energy structure is thermodynamically well defined . RNA secondary structures were determined with the Vienna Package , which uses the free energies from . Free energies of dangling ends were set to $`0`$. The neutral network $`G`$ on which the population evolves consists of all RNA molecules of length $`L=18`$ that fold into a particular target structure. A target structure (Fig. 1) was selected that contains sufficient variation in its neutrality to test the theory, yet is not so large as to preclude an exhaustive analysis of its neutral network topology. Using only single point mutations per replication, purine-pyrimidine base pairs {G-C, G-U, A-U} can mutate into each other, but not into pyrimidine-purine {C-G, U-G, U-A} base pairs. The target structure contains $`6`$ base pairs which can each be taken from one or the other of these two sets. Thus, the approximately $`2\times 10^8`$ sequences that are consistent with the target’s base pairs separate into $`2^6=64`$ disjoint sets. Of these, we analyzed the set in which all base pairs were of the purine-pyrimidine type and found that it contained two neutral networks of $`51,028`$ and $`5,169`$ sequences that fold into the target structure. Simulations were performed on the largest of the two. The exhaustive enumeration of this network showed that it had a network neutrality of $`\overline{d}=12.0`$ with standard deviation $`\sqrt{Var(d)}3.4`$, a maximum neutrality of $`d_s=24`$, and a minimum of $`d_s=1`$. The spectral radius of the network’s $`51028\times 51028`$ adjacency matrix was $`\rho 15.7`$. The theory predicts that, when $`M\mu 1`$, the population neutrality should converge to this value. The simulated flow reactor contained a population of replicating and mutating RNA sequences . The replication rate of a molecule depends on whether its calculated minimum free energy structure equals that of the target: Sequences that fold into the target structure replicate on average once per time unit, while all other sequences replicate once per $`10^4`$ time units on average. During replication the progeny of a sequence has probability $`\mu `$ of a single point mutation. 
Selection pressure in the flow reactor is induced by an adaptive dilution flow that keeps the total RNA population fluctuating around a constant capacity $`M`$. Evolution was seeded from various starting sequences with either a relatively high or a relatively low neutrality. Independent of the starting point, the population neutrality converges to the predicted value, as shown in Fig. 2. Subsequently, we tested the finite-population effects on the population’s average neutrality at several different mutation rates. Figure 3 shows the dependence of the asymptotic average population neutrality on population size $`M`$ and mutation rate $`\mu `$. As expected, the population neutrality depends only on the product $`M\mu `$ of population size and mutation rate. For small $`M\mu `$ the population neutrality increases with increasing $`M\mu `$, until $`M\mu 500`$ where it saturates at the predicted value of $`d15.7`$. Since small populations do not form a stationary distribution over the neutral net, but diffuse over it , the average population neutrality at each generation may fluctuate considerably for small populations. Theoretically, sampling fluctuations in the proportions of individuals at different nodes of the network scale inversely proportional to the square root of the population size. We therefore expect the fluctuations in population neutrality to scale as the inverse square root of the population size as well. This was indeed observed in our simulations. Finally, the fact that $`r0.31`$ for this neutral network shows that under selection and mutation, a population will evolve a mutational robustness that is $`31`$ percent higher than if it were to spread randomly over the network. ## VI Conclusions We have shown that, under neutral evolution, a population does not move over a neutral network in an entirely random fashion, but tends to concentrate at highly connected parts of the network, resulting in phenotypes that are relatively robust against mutations. Moreover, the average number of point mutations that leave the phenotype unaltered is given by the spectral radius of the neutral network’s adjacency matrix. Thus, our theory provides an analytical foundation for the intuitive notion that evolution selects genotypes that are mutationally robust. Perhaps surprisingly, the tendency to evolve toward highly connected parts of the network is independent of evolutionary parameters—such as, mutation rate, selection advantage, and population size (as long as $`M\mu 1`$)—and is solely determined by the network’s topology. One consequence is that one can infer properties of the neutral network’s topology from simple population statistics. Simulations with neutral networks of RNA secondary structures confirm the theoretical results and show that even for moderate population sizes, the population neutrality converges to the infinite-population prediction. Typical sizes of in vitro populations are such that the data obtained from experiments are expected to accord with the infinite-population results derived here. It seems possible then to devise in vitro experiments that, using the results outlined above, would allow one to obtain information about the topological structure of neutral networks of biomolecules with similar functionality. We will present the extension of our theory to cases with multiple-mutation events per reproduction elsewhere. We will also report on analytical results for a variety of network topologies that we have studied. 
Finally, here we focused only on the asymptotic distribution of the population on the neutral network. But how did the population attain this equilibrium? The transient relaxation dynamics, such as that shown in Fig. 2, can be analyzed in terms of the nonprincipal eigenvectors and eigenvalues of the adjacency matrix $`𝐆`$. Since the topology of a graph is almost entirely determined by the eigensystem of its adjacency matrix, one should in principle be able to infer the complete structure of the neutral network from accurate measurements of the transient population dynamics. Acknowledgments We thank the participants of the Santa Fe Institute workshop on Evolutionary Dynamics for stimulating this work, which was supported in part at SFI by NSF under grant IRI-9705830, Sandia National Laboratory, and the Keck Foundation. M.H. gratefully acknowledges support from a fellowship of the Royal Netherlands Academy of Arts and Sciences.
no-problem/9903/astro-ph9903194.html
ar5iv
text
# A2111: A 𝑧=0.23 Butcher-Oemler Cluster with a Non-isothermal Atmosphere and Normal Metallicity ## 1 Introduction Both optical (Geller & Beers 1982) and X-ray (Jones & Forman 1992) morphological studies of galaxy clusters indicate that a significant fraction of nearby clusters have substructures that are possibly due to mergers. Temperature maps derived from spectro-spatial X-ray observations are a necessary complement to the X-ray, optical, and radio imaging data in the sense that hydro-dynamical simulations of subcluster mergers show that heating of the cluster atmosphere may be present in a recent post-merger system even when evidence of a merger is not visible in the X-ray surface brightness morphology (Evrard, Metzler, & Navarro 1996). On the other hand, there are clusters such as A2256 which exhibit structure in all three wavebands consistent with a merger yet the temperature map obtained with ASCA indicates a quiescent dynamical state (Markevitch 1996). Spatially resolved spectroscopy can thus help us to find hot spots similar to those seen in the simulations (Roettiger, Loken, & Burns 1997; Evrard, Metzler, & Navarro 1996), which together with the optical, X-ray imaging, and radio observations, provide a detailed description of the dynamical state of the cluster. Such spectral analysis has been carried out for a number of clusters with data from ASCA, which has a broad energy coverage and modest spatial resolution. While the X-ray spectroscopic evidence of merger may be difficult to obtain for some clusters (e.g., A2256) in others the asymmetric X-ray morphology and temperature structure are consistent with those seen in simulations of subcluster merger. Such examples are A754 (Henriksen & Markevitch 1996), the Coma cluster (Honda et al. 1997), and A1367 (Donnelly et al. 1998). As violent events, subcluster mergers may also affect the evolution of galaxies. Relevant processes include ram-pressure (White et al. 1991), the tidal effect from the cluster potential (Henriksen & Byrd 1996), and “galaxy harassment” (Oemler, Dressler, & Butcher 1997). Consequently, properties of cluster galaxies may be intimately connected to the changing dynamical state and galaxy environment of clusters (e.g., Kauffmann 1995; Oemler, Dressler, & Butcher 1997). Clusters at early epochs ($`z0.2`$) tend to contain higher fractions of blue galaxies — the Butcher-Oemler effect. HST observations indicate that the effect results from a high rate of star formation in spiral galaxies (Dressler et al. 1994; Couch et al. 1994) and from a high fraction of disturbed galaxy systems (Oemler, Dressler, & Butcher 1997). Based on a study of 10 Butcher-Oemler clusters, Wang & Ulmer (1997) have revealed a correlation between the blue galaxy fraction and the X-ray isophote ellipticity. A2111 at $`z=0.23`$ is one of the clusters in the sample and contains a high fraction of blue galaxies ($`f_b=0.16`$). Based on ROSAT PSPC and HRI observations, Wang, Ulmer, & Lavery (1997; hereafter WUL) have further reported that A2111 has a highly asymmetric X-ray morphology and the X-ray centroid and ellipticity shift with spatial scale, which suggests that the cluster may be undergoing a merger. In this paper, we present a spatial-spectral analysis, using an ASCA observation, complemented by the ROSAT PSPC data of A2111. This analysis enables us to search for spatial and spectral signatures of a merger over a broad energy band and to compare the metal abundance of A2111 with nearby clusters. 
Throughout the paper, H<sub>0</sub> = 50 km sec<sup>-1</sup> Mpc<sup>-1</sup> is used, and 90% confidence error bars are quoted on all quantities. ## 2 Observations and Analysis A2111 was observed on January 15-16, 1997 with ASCA for 30,000 seconds. Data was obtained with both the GIS and SIS; each has two sensors. The GIS has a higher effective area at higher energies ($`>`$ 5 keV) than the SIS so that use of all 4 data sets is optimum for studies of multi-component emission. The data were filtered using the REV2 criteria utilized by the ASCA Data Processing Center. Data were excluded under the following conditions: with a radiation belt monitor (RBM) count $`>`$ 100 cts/s, during earth occultation or at low elevation angle to the Earth ($`<`$ 5 degrees for the GIS and $`<`$ 10 degrees for the SIS), when the pointing was not stable (deviation of $`>`$ 0.01 degrees), during South Atlantic Anomaly passage, and when the cutoff rigidity (COR) was $`>`$ 6 GeV/c. Additionally, the SIS was required to be $`>`$ 20 degrees to the bright earth and were cleaned to remove hot pixels. The resulting good exposure times are given in Table 1 The ROSAT PSPC observations have been discussed in WUL. Briefly, the observations have an exposure of 7511s, a spatial resolution of $``$0.5’, and about 7 overlapping energy bands in the 0.1-2 keV range. To obtain an emission weighted spectrum for the cluster, we first conducted a joint fit to the spectra from the ROSAT PSPC and the ASCA GIS and SIS detectors. Extracted from a region within 6’ from the assumed cluster centroid at $`15^h39^m36^s.554;+34^{}25^{}31^{\prime \prime }.16`$ (R.A.; Dec.; J2000), the spectra include essentially all of the cluster emission. Background for the PSPC, taken from source-free regions of the image, is calculated from 4 circular regions of radius 7.3’ located at: (15:40:56.408, +34:50:13.32), (15:36:41.344, +34:27:59.74), (15.38:16.532, +33:51:05.97), and (15:42:17.971, +34:10:45.85). The SIS data was taken in 1-ccd mode and the cluster essentially fills the chip, we thus utilized blank sky, deep ASCA observations taken at high Galactic latitudes for background subtraction. The GIS background was extracted similarly to avoid uncertainties related to vignetting, shadowing of the instrument supports, and gain variations with radius from the detector center.The energy bands used are: 0.1 - 2 keV for the PSPC, 0.3 - 10 keV for the SIS, and 0.7 - 10 keV for the GIS. We adopted the Raymond & Smith thermal plasma model. The two GIS normalizations were fixed to have the same emission integral, as were the two SIS normalizations. The redshift of the cluster was taken to be 0.23. The abundance, column density, and temperature were left as free parameters giving a total of 5 free parameters. We fit this model to the 2 GIS data sets with free normalizations and found that the normalizations were essentially identical, as expected. This test was repeated for the 2 SIS data sets yielding the same result justifying tying the normalizations as described above. The fit to all 5 data sets is not acceptable with a reduced $`\chi ^2=344.2`$ for 311 degrees of freedom. The data and the best model fit are presented in the top panel of Fig. 1 and the residuals are shown in the bottom panel. While the above analysis was not sensitive to any temperature structure in A2111, we measured the ICM temperature of the cluster in two regions, with radii 0-3’ and 3-6’. 
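Before continuing with the two-region spectral analysis, here is an aside on the screening criteria listed earlier in this section, expressed as a simple boolean mask. This is only an illustrative sketch: the column names are hypothetical placeholders rather than actual ASCA FITS keywords, and the cutoff-rigidity cut is written in the usual sense of retaining high-rigidity (low-background) intervals.

```python
import numpy as np

def good_time_mask(ev, instrument="GIS"):
    """Illustrative REV2-style screening of an event/housekeeping table `ev`
    (a dict of numpy arrays with hypothetical column names)."""
    elev_min = 5.0 if instrument == "GIS" else 10.0     # min. elevation above Earth limb (deg)
    mask = (
        (ev["rbm_count"] <= 100.0)                      # radiation-belt monitor < 100 cts/s
        & (ev["elevation"] >= elev_min)                 # not Earth-occulted / low elevation
        & (ev["pointing_dev"] <= 0.01)                  # stable pointing (deg)
        & (~ev["in_saa"])                               # outside South Atlantic Anomaly
        & (ev["cutoff_rigidity"] >= 6.0)                # keep high-rigidity intervals (GeV/c)
    )
    if instrument == "SIS":
        mask &= ev["bright_earth_angle"] >= 20.0        # SIS: > 20 deg from bright Earth
    return mask
```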
Further dividing the regions was not practical due to the limited extent of the cluster compared to the XRT+GIS PSF and the limited counting statistics of the ASCA observation. The temperature measurement used a PSF modeling technique described in Markevitch (1996) and Takahashi et al. (1995). This technique has been successfully used in similar analyses for several relatively low redshift clusters (see references in Markevitch, Sarazin, & Henriksen 1997). Briefly, the PSPC image is used as a model surface brightness template which is convolved with the ASCA mirror effective area and PSF to produce model spectra in the two regions. The ROSAT image was flat fielded, background subtracted, and rotated to match the GIS roll angle. The PSPC energy range used is 0.5 - 2.0 keV and the emission measure is corrected to the ASCA energy band. Since the PSPC is used as a surface brightness template to get the EM for the spectral fits, a slightly higher band (0.5 - 2 keV) was used to better match the GIS and SIS. Channels for each data set are grouped to contain at least 20 counts. The model PSF, which is based on GIS observations of Cyg X-1 at various radii from the detector center (Takahashi et al. 1995), is increasingly uncertain at low energies so the minimum energy used was 1.5 keV to minimize this uncertainty. To maintain greater than approximately 20 counts in any fitted energy bin, the data were grouped to give energy bins of 1.5-2.5, 2.5-4., 4.-7. keV in the SIS and 1.5-2.5, 2.5-3., 3.-5., 5.-7., 7.-11 keV in the GIS. Markevitch (1996) discussed in detail various consistency checks performed in validating the use of the method for ASCA data. Since the A2111 observation and the deep, blank sky observations used in the background subtraction were taken at different times, their COR values are different. We thus subtracted the background using blank sky images, each at a specific COR value, weighted to the amount of source data obtained at that COR value. The SIS background image was normalized by exposure time. A 20% systematic error in the SIS and a 5% error in the GIS background normalization were included in the fitting procedure. The SIS error was estimated at 20% based on a day-to-day variation in the GIS background of $``$20% for a specific COR value. By using a composite background consisting of GIS observations with the same COR values as the data, the GIS background was better determined and the error in the normalization was estimated at 5% (Markevitch 1996 and references within). There errors were then added in quadrature with the random errors. Table 1 presents the resulting number of background subtracted counts in each of the regions from each detector integrated over the full energy band. We simultaneously fitted the four spectra from each region using the Raymond & Smith model while fixing the abundance at the best fit value from the single region fit, 0.25 Solar. We fixed the column density at the measured 21 cm value, 1.9$`\times `$10<sup>20</sup> cm<sup>-2</sup>, because the data used, $`>`$1.5 keV, are insensitive to the exact value. Confidence intervals on temperature were estimated by the following procedure. A Monte-Carlo simulation of the number of counts in each energy band of the spectra, assuming a Gaussian distribution of counts around the observed value, was carried out and the spectra were fitted to obtain the best-fit temperature. 
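A minimal sketch of the Monte-Carlo error estimate just described (and quantified in the next paragraph) is given below. The routine `fit_temperature` stands in for the actual PSF-corrected spectral fit and is not defined here; the Gaussian scatter about the observed counts, the 200 realisations, and the ~5% systematic terms follow the text.

```python
import numpy as np

def mc_temperature_range(counts, model_counts, fit_temperature,
                         n_sims=200, sys_frac=0.05):
    """Monte-Carlo 90% confidence range on the fitted temperature.
    `counts` are observed counts per energy bin; `fit_temperature(c)` refits
    the spectral model to simulated counts `c` and returns a best-fit kT."""
    rng = np.random.default_rng(0)
    sigma = np.sqrt(np.maximum(counts, 1.0))            # Gaussian approximation to counting errors
    sigma = np.hypot(sigma, sys_frac * model_counts)    # add ~5% PSF / effective-area systematics
    kT = np.array([fit_temperature(rng.normal(counts, sigma)) for _ in range(n_sims)])
    half_width = 1.645 * kT.std()                        # 90% range from the variance of the distribution
    return kT.mean() - half_width, kT.mean() + half_width
```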
Two hundred simulated spectra were fit and the variance of the distribution of best-fitting temperatures was calculated to obtain the 90% confidence range. Systematic errors of 5% each for the PSF and effective area are included in the error simulation. The best fit models and GIS2 data are presented in Figs. 2 and 3 for the 0-3’ and 3-6’ regions respectively. ## 3 Results and Discussion We present in Figs. 4 and 5 the exposure-corrected SIS and GIS contour maps. Both maps show an overall elongation of the cluster X-ray morphology. The X-ray intensity distribution in the SIS map is very clumpy, compared to that in the GIS map. Similar features also appear in the PSPC data. The statistical significance of individual features is marginal, however. But the overall clumpiness of the X-ray distribution is real, since the SIS and GIS maps were smoothed in exactly the same way to have the same noise level. The PSF of the ASCA X-ray telescopes (XRT) has a relatively sharp core (FWHM of $`\sim `$50 arcsec) but broad, energy dependent wings which extend to a half-power diameter of 3 arcmin. The intrinsic spatial broadening of the SIS is negligible compared to that of the XRT. The GIS has its own PSF characterized by broad low energy wings which adds to the XRT PSF. Thus, the clumpy structure appears much more clearly in the SIS map. The clumpy X-ray morphology may arise from the presence of multiple components of the ICM. Assuming an approximate pressure balance, the temperature inhomogeneity could naturally result in large emission measure differences in the ICM, which is manifested in the X-ray emission. The results from our spectral modeling are summarized in Table 2. The emission weighted temperature for the cluster derived from a joint fit of the ROSAT and ASCA data is 4.9 - 5.9 keV (90% limit), which overlaps the results reported by WUL based solely on the PSPC (2.1 - 5.3 keV). The 90% confidence range on the column density, 1.03 - 1.36$`\times `$10<sup>20</sup> cm<sup>-2</sup>, is slightly below that measured from 21 cm, 1.9$`\times `$10<sup>20</sup> cm<sup>-2</sup>. However, refitting the model with the column density fixed at the Galactic value gives a $`\chi ^2`$ of 378.2 for 312 degrees of freedom, an increase in $`\chi ^2`$ of 33.4. The preference for a column density below Galactic for the single temperature component model may be due to the overall poor fit of an isothermal model. Our spatial-spectral analysis of A2111 further suggests that the average temperature of 6.46$`\pm `$0.87 keV in the central region ($`r<1`$ Mpc) of A2111 is significantly higher than 3.10$`\pm `$1.19 keV in its surroundings, r = 3 - 6’. A higher temperature in the central region is consistent with the results from simulations after a subcluster has passed through the core of the main cluster (Roettiger, Loken, & Burns 1993), supporting the hypothesis that A2111 is undergoing a merger. Using only the PSPC data, WUL found that if the column density is fixed at the Galactic value, the subcomponent has a higher temperature than the rest of the cluster. This is consistent with the heating in the ASCA temperature map. Thus, A2111 is the first intermediate redshift cluster which contains a large blue galaxy fraction for which the optical, X-ray imaging, and X-ray spectral data are all consistent with the interpretation of a merger. As a test of the robustness of the ASCA PSF correction, Donnelly et al.
(1998) applied two independent methods of correcting for the ASCA PSF (one of which was used for this paper) in analyzing the A1367 data. Similar features in the derived temperature maps were obtained using each method. The only cluster with a similar redshift for which a similar spatial-spectral study using ASCA data has been conducted is the z = 0.2 cluster, A2163 (Markevitch 1996). In A2163, the temperature drops with radius out to 3 Mpc, consistent with a polytropic index ($`\gamma `$) of 1.9. The cluster atmosphere is apparently convectively unstable after a very recent major merger. The temperature profile for A2111 is less steep than that of A2163; the equivalent $`\gamma `$ is 1.45 (using the density parameters from WUL: $`\beta `$ = 0.54 and core radius = 0.21 Mpc); perhaps this cluster has passed the stage of convective instability or involves less massive subclusters. Alternatively, a temperature drop with radius may reflect a more centrally concentrated gravitational potential of the cluster rather than shock heating from a merger. However, the case for A2111 being a merger candidate is strengthened by the spectral and spatial results taken together. The best fit abundance of A2111, 0.11 - 0.39 solar (90% confidence), is typical of low z clusters and is consistent with the studies of large samples (Allen & Fabian 1998; Mushotzky & Loewenstein 1997) which indicate that metallicity in galaxy clusters is essentially constant out to z $`\sim `$0.3. A2111 is unlike the nearby clusters which show a similar abundance because it has a high frequency of star forming galaxies, while the nearby clusters do not. While increased star formation will increase the metallicity of the interstellar medium and subsequently enrich the intergalactic medium by a variety of processes, including ram-pressure and tidal stripping, the similarity in abundance argues against this episodic star formation having a significant effect on the overall metallicity of the cluster gas. In conclusion, our ASCA data show that A2111 has an elongated and clumpy X-ray morphology as well as a relatively high temperature core, compared to the surrounding regions. These results, together with the apparent substructure observed in the ASCA and ROSAT observations, strongly suggest that the cluster is undergoing a merger, which is likely responsible for the observed large blue galaxy fraction of the cluster. Future X-ray observations will be necessary to study the relationship between possible element abundance gradients in the intergalactic medium and the blue galaxy distribution in the cluster. MJH thanks the National Science Foundation for support through Grant No. AST-9624716, and QDW acknowledges support from NASA for his research.
no-problem/9903/hep-ph9903310.html
ar5iv
text
# Possible Sources of a Transient Natural Neutrino Flux ## Acknowledgements I would like to thank M. Messier for making his transparencies of the DPF talk available to me. D. Casper, J. Learned and M. Messier have been helpful in dispelling my reservations about systematic errors and assuring me that errors in the live time are negligible. I thank M. Weinstein for some discussions about possible transient celestial neutrino sources. I. Bigi and P. Harrison have been a source of inspiration and encouragement. I thank J. Poirier for a careful reading of the manuscript.
no-problem/9903/astro-ph9903308.html
ar5iv
text
# A Multiphase Model for the Intracluster Medium ## 1 Introduction The existence of non-baryonic or “dark” matter on very large scales in the universe is inferred from a number of observations, including X–ray and gravitational lensing observations of galaxy clusters. Observations suggest that the baryonic component of clusters predominantly consists of hot, diffuse, intergalactic medium (ICM) which emits X–rays by scattering of electrons in the Coulomb fields of electrons and ions, i.e., thermal bremsstrahlung. The X–ray observations determine the ICM mass content in a model dependent fashion. Recent analysis of the flux limited Edge sample employs the standard, isothermal $`\beta `$–model and finds a mean ICM mass fraction $`f_{ICM}=0.212\pm 0.006`$ (Mohr, Mathiesen & Evrard 1998; see also White & Fabian 1996; David, Forman & Jones 1996) within the virial regions of 27 nearby clusters with X–ray temperatures above 5 keV. This value is several times larger than that expected in an Einstein–deSitter universe with the observed light element abundances (White et al. 1993). One way to reconcile the cluster observations with a universe having critical mass density $`\mathrm{\Omega }_m=1`$ is to suspect that the standard model treatment of the ICM posseses substantial systematic errors. Gunn & Thomas (1996, hereafter GT96), motivated by models of cooling flows (Nulsen 1986; Thomas 1988) propose that a multi-phase ICM structure exists throughout the cluster atmosphere. A given macroscopic volume element contains gas at a range of densities and temperatures which are assumed to be in pressure equilibrium. Fixing the gas mass within this volume, the emission measure of a multiphase gas will increase as the clumping factor $`C<\rho ^2>/<\rho >^2`$. But since we observe luminosity, not gas mass, the implication is that clumped gas requires less total mass $`M_{gas}1/\sqrt{C}`$ in a given volume to produce a fixed X–ray emissivity. The standard analysis of the cluster plasma assumes that it exists in a single thermodynamic phase at any location within the cluster. In most cases an isothermal, “beta” model (Cavaliere & Fusco-Femiano 1976) is used to describe the cluster plasma electron density for a spherically symmetric atmosphere. Under these assumptions, the observed azimuthal X–ray surface brightness profile determines the volume emissivity at radius $`r`$ from the cluster center $$\xi (r)\rho ^2(r)\mathrm{\Lambda }_X(T_X)=\xi _0\left(1+\frac{r^2}{r_c^2}\right)^{3\beta +\frac{1}{2}}.$$ (1) Here $`\xi _0`$ is the central value of the X–ray emissivity, $`r_c`$ is the core radius of the X–ray emission, $`\mathrm{\Lambda }_X(T_X)`$ is the (suitably normalized) plasma emission function at a temperature $`T_X`$ over a prescribed X–ray bandwidth. The temperature $`T_X`$ is determined from X–ray spectral measurements by, for example, fitting the observed spectrum to a thermal bremsstrahlung model. With observations and plasma emission model in hand, one then constructs the gas mass density $`\rho (r)=(\xi _0/\mathrm{\Lambda }_X(T_X))^{1/2}(1+r^2/r_c^2)^{3\beta /2}`$ and integrates outward from the origin to define enclosed gas mass. The total (baryonic plus non-baryonic) mass within a radius $`r`$ is inferred from assuming that the plasma is in hydrostatic equilibrium, supported against gravity entirely by thermal pressure. 
The fluid equation of hydrostatic equilibrium then sets the total, gravitating mass $$M_{\mathrm{tot}}(r)=-\frac{r^2}{G}\frac{1}{\rho }\frac{dP}{dr},$$ (2) which for the fiducial, single-phase, isothermal $`\beta `$–model cluster gives $$M_{\mathrm{tot},\mathrm{s}}(r)=\frac{3\beta }{G}\frac{k_BT_X}{\mu m_p}\frac{r^3/r_c^2}{1+r^2/r_c^2}.$$ (3) In this paper, we extend the multiphase ICM model first proposed by GT96 to incorporate radial variability in the multiphase structure. Radial variability is a natural expectation. Since both cooling timescales ($`\propto \rho ^{-1}`$ if nearly isothermal) and local gravitational timescales ($`\propto \rho ^{-1/2}`$) increase outward from the cluster core, the timescale for development of multiphase structure should also be larger at the virial surface than in the core of a cluster. We introduce the theoretical model in §2 below. In §3, we examine the effects of a multiphase structure on the mean intracluster gas fraction $`f_{ICM}`$ inferred from X–ray observations and consider observable implications for the Sunyaev–Zeldovich (SZ) effect and X–ray spectroscopy of the ICM. For the latter, we examine two specific signatures — the excess (relative to the single-phase case) in central SZ decrement and an X-ray spectral hardness ratio — for the case of a “Coma-like” cluster. We show how the pairing of X–ray spectroscopy and SZ image can be used to estimate the magnitude of systematic error introduced into estimates of ICM gas fraction by assuming the standard $`\beta `$–model. ## 2 Theory GT96 argue that if a spectrum of plasma density fluctuations were generated in a cluster at a substantial fraction of a Hubble time in the past, then its densest phases would cool and be removed from the plasma. Following Nulsen (1986), they argued that this would produce a power-law spectrum of fluctuations $`f(\rho )\propto \rho ^\gamma `$ at the present, formed from a narrow range of phases that were initially tuned to have cooling times comparable to a Hubble time. However, this argument ignores the stochastic nature of gravitational clustering in hierarchical models of structure formation. In such models, clusters grow largely by mergers of proto-cluster candidates embedded within the large–scale filamentary network. It is suspected that strong mergers may, through plasma turbulence, effectively “reinitialize” density fluctuations in the ICM. Since the time since the last major merger is a random variable in a coeval population, a volume limited sample will contain clusters whose multi-phase structures are at different stages of development. This idea is consistent with observed properties of the local X–ray cluster population, in which a range of central cooling flow behavior is present (Fabian 1994). Because of this and other uncertainties in the dynamical development of multiphase structure, we postulate a log-normal form for the multiphase density perturbations. We do not attempt a formal justification for this choice; it is motivated largely by a condition of “reasonableness” and the fact that it simplifies calculations below. The formalism requires only low order moments of the distribution, so the model can be recalculated for arbitrary $`f(\rho )`$. We postulate the existence of plasma density fluctuations in a spherically symmetric cluster atmosphere which: (i) are isobaric at a given radius, (ii) produce a volume emission profile consistent with equation (1), and (iii) exhibit an isothermal emission weighted temperature with radius. 
The first item is based on a hydrostatic assumption and the remainder impose observed constraints on the X–ray image and emission weighted temperature profile. Although isothermality extending to $`r_{200}`$ — the radius within which the mean total mass density is 200 times the critical density — may not be supported by observations (Markevitch et al. 1998) or simulations (Frenk et al. 1998), temperature drops of only 10–20% are allowed within $`r_{200}/3`$ (Irwin, Bregman & Evrard 1998). Since the observables we stress in the analysis are core dominated, our results are not particularly sensitive to departures from isothermality which may exist near $`r_{200}`$. ### 2.1 The multiphase distribution function We assume a log-normal form for the cluster plasma density phase distribution $`f(\rho )d\rho `$, the fraction of a volume element at a radius $`r`$ that contains plasma of density between $`\rho `$ and $`\rho +d\rho `$, $$f(\rho )d\rho =\frac{1}{\sqrt{2\pi }\sigma (r)}\mathrm{exp}\left(-\frac{\mathrm{ln}^2[\rho /\rho _0(r)]}{2\sigma ^2(r)}\right)\frac{d\rho }{\rho }.$$ (4) The quantity $`\rho _0(r)`$ is a reference density and $`\sigma ^2(r)`$ is the variance of the distribution. Since the core radius presents a characteristic scale in the X–ray image, we take a form $$\sigma ^2(r)=\sigma _c^2(1+r^2/r_c^2)^{-ϵ},$$ (5) for the variance, with $`r_c`$ the core radius of the beta-model density profile described earlier, where $`\sigma _c`$ and $`ϵ`$ are free parameters which set the magnitude and radial dependence of the multiphase structure. We consider such a parameterization in order to couple the magnitude of density fluctuations to the likelihood that the local conditions have allowed cooling to amplify them. A simple parameterization is one in which the variance scales with the inverse of the local cooling time, $`\sigma ^2(r)\propto \tau _{\mathrm{cool}}^{-1}(r)`$. An isothermal atmosphere (for which $`\tau _{\mathrm{cool}}(r)\propto \rho ^{-1}(r)`$) will have $`\sigma ^2(r)\propto \rho (r)`$, implying $`ϵ=3\beta /2=1`$ for the characteristic $`\beta =2/3`$ value seen in X–ray images. We consider values in the range $`ϵ=0`$–$`1`$. The limit $`ϵ\rightarrow \infty `$ represents a multiphase structure existing purely within the cluster core. In the limit $`\sigma _c\rightarrow 0`$, we recover a single-phase plasma for any value of $`ϵ`$, while the limit $`ϵ\rightarrow 0`$ yields the multiphase model results (no position dependence) of GT96. The definition of $`\rho _0(r)`$ is now absorbed into the specification of the mean density at radius $`r`$ $$\langle \rho (r)\rangle \equiv \int \rho f(\rho )\,d\rho =\rho _0(r)\mathrm{exp}\left(\frac{1}{2}\sigma ^2(r)\right),$$ (6) where $`\langle \cdot \rangle `$ represents an ensemble average of volume elements on a spherical shell of radius $`r`$. A useful equation is a generalization of equation (6) to higher moments, namely $$\langle \rho ^q\rangle =\int \rho ^qf(\rho )\,d\rho =\langle \rho \rangle ^q\mathrm{exp}\left(\frac{q(q-1)}{2}\sigma ^2(r)\right).$$ (7) ### 2.2 A multiphase “isothermal $`\beta `$–model” cluster We now impose some observational constraints on the model. Assuming a power–law emissivity function $$\mathrm{\Lambda }_X(T)\propto T^\alpha ,$$ (8) the requirement that the emission weighted temperature profile be isothermal at temperature $`T_X`$ implies that the condition $$T_X\equiv \frac{\langle \rho ^2T^{1+\alpha }\rangle }{\langle \rho ^2T^\alpha \rangle }$$ (9) holds at all cluster radii. 
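The moment relation of equation (7) is easy to check numerically; the short sketch below implements the radial variance of equation (5) and the log-normal moments, and verifies equation (7) against draws from the distribution itself. The parameter values are arbitrary.

```python
import numpy as np

def sigma2(r, sigma_c, eps, r_c):
    """Radial variance of the log-normal phase distribution, eq. (5)."""
    return sigma_c**2 * (1.0 + (r / r_c) ** 2) ** (-eps)

def lognormal_moment(q, mean_rho, sig2):
    """<rho^q> = <rho>^q exp(q(q-1) sigma^2 / 2), eq. (7)."""
    return mean_rho**q * np.exp(0.5 * q * (q - 1.0) * sig2)

# Check eq. (7) by direct sampling: ln(rho) is normal with mean ln<rho> - sigma^2/2
rng = np.random.default_rng(1)
s2, mean_rho, q = 0.8, 1.0, 1.5
rho = np.exp(rng.normal(np.log(mean_rho) - 0.5 * s2, np.sqrt(s2), 10**6))
print(np.mean(rho**q), lognormal_moment(q, mean_rho, s2))   # the two numbers should nearly agree
```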
Under an ideal gas assumption $`P=(\rho /\mu m_p)k_BT`$, with $`m_p`$ the proton mass and $`\mu `$ the mean molecular weight, equations (9) and (7) can be used to define the local gas pressure in the multiphase medium $$P(r)=\frac{k_BT_X}{\mu m_p}\langle \rho (r)\rangle \mathrm{exp}[(1-\alpha )\sigma ^2(r)].$$ (10) We now equate the known emission profile of the cluster, equation (1), to the ensemble-averaged value of the emissivity $$\xi _0\left(1+\frac{r^2}{r_c^2}\right)^{-3\beta }=\frac{\mathrm{\Lambda }_0}{m_p^2}\left(\frac{\mu m_p}{k_B}\right)^\alpha \langle \rho (r)^{2-\alpha }\rangle P^\alpha (r).$$ (11) The combination of equations (9) and (11) is the canonical “isothermal $`\beta `$–model” assumption. From an observer’s perspective, a multiphase cluster in these two measures is indistinguishable from the single-phase case. Equation (11) can be rearranged to give $`\langle \rho (r)\rangle `$ $`=`$ $`\left(\frac{\xi _0m_p^2}{\mathrm{\Lambda }_X(T_X)}\right)^{1/2}\left(1+\frac{r^2}{r_c^2}\right)^{-\frac{3}{2}\beta }`$ (12) $`\times \mathrm{exp}\left(\frac{(\alpha -1)(\alpha +2)}{4}\sigma ^2(r)\right).`$ Although this now defines the characteristic density $`\rho _0`$ used in equation (4), it is better to identify the limit $`\sigma ^2(r)\rightarrow 0`$ as the single-phase density. Following GT96, we introduce a multiphase “correction factor” for the gas mass $`C_\rho (r)`$ which relates the mean gas density in the multiphase case $`\rho _m(r)`$ to its single-phase value $`\rho _s(r)`$ $$\rho _m(r)\equiv \langle \rho (r)\rangle =C_\rho (r)\rho _s(r).$$ (13) Equation (12) then implies $$C_\rho (r)=\mathrm{exp}\left(\frac{(\alpha -1)(\alpha +2)}{4}\sigma ^2(r)\right).$$ (14) A similar exercise for the gas pressure $$P(r)\equiv C_P(r)\left(\frac{k_BT_X}{\mu m_p}\right)\rho _s(r)$$ (15) yields $$C_P(r)=\mathrm{exp}\left(\frac{(1-\alpha )(2-\alpha )}{4}\sigma ^2(r)\right).$$ (16) Note that for values of the X–ray emission exponent $`\alpha <1`$, the multiphase gas mass is lower than that of the single-phase model while the multiphase pressure is greater than the single-phase pressure. This arises because the high-density phases are more efficient in producing a given X–ray power (provided the emission is only a weak function of temperature). Since the emission weighted $`T_X`$ reflects the temperature in higher than average density regions, the pressure at all radii is increased over the single-phase case. 
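The two correction factors have simple closed forms, so a direct transcription is enough to see their opposite behaviour for bremsstrahlung-like emission. A minimal sketch, with illustrative values of α and σ²:

```python
import numpy as np

def c_rho(sig2, alpha=0.5):
    """Gas-density correction factor C_rho(r), eq. (14)."""
    return np.exp(0.25 * (alpha - 1.0) * (alpha + 2.0) * sig2)

def c_pressure(sig2, alpha=0.5):
    """Pressure correction factor C_P(r), eq. (16)."""
    return np.exp(0.25 * (1.0 - alpha) * (2.0 - alpha) * sig2)

# For alpha = 0.5 and sigma^2 = 1: less apparent gas, more pressure than single phase
print(c_rho(1.0), c_pressure(1.0))    # ~0.73 and ~1.21
```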
The cluster gas mass for the multiphase model within a radius $`r`$ is given by $$M_{\mathrm{gas},\mathrm{m}}(r)=4\pi \int _0^rC_\rho (r^{\prime })\rho _s(r^{\prime })r^{\prime 2}\,dr^{\prime },$$ (17) so that the enclosed gas mass for the multiphase model differs from the single-phase case by the factor $`C_{\mathrm{gas}}(r)`$ $`\equiv `$ $`\frac{M_{\mathrm{gas},\mathrm{m}}(r)}{M_{\mathrm{gas},\mathrm{s}}(r)}`$ (18) $`=`$ $`\frac{\int _0^rC_\rho (r^{\prime })\rho _s(r^{\prime })r^{\prime 2}\,dr^{\prime }}{\int _0^r\rho _s(r^{\prime })r^{\prime 2}\,dr^{\prime }}.`$ The total mass of the cluster within a radius $`r`$ is determined from the hydrostatic equilibrium, equation (2), which when combined with equations (12)–(16) gives $`M_{\mathrm{tot},\mathrm{m}}(r)`$ $`=`$ $`-\left(\frac{r^2}{G}\right)\left(\frac{C_P(r)P_s^{\prime }(r)+P_s(r)C_P^{\prime }(r)}{C_\rho (r)\rho _s(r)}\right)`$ (19) $`=`$ $`\frac{C_P(r)}{C_\rho (r)}M_{\mathrm{tot},\mathrm{s}}(r)+\frac{r^2}{G}\frac{k_BT_X}{\mu m_p}\left|\frac{C_P^{\prime }(r)}{C_\rho (r)}\right|.`$ The total cluster mass for the multiphase model differs from that of the single-phase model by the factor $$C_{\mathrm{tot}}(r)=\frac{C_P(r)}{C_\rho (r)}+\frac{r^2}{GM_{\mathrm{tot},\mathrm{s}}(r)}\frac{k_BT_X}{\mu m_p}\left|\frac{C_P^{\prime }(r)}{C_\rho (r)}\right|.$$ (20) For bremsstrahlung emission ($`\alpha \sim 0.5`$), the gas mass is decreased and the total mass increased in the multiphase case, implying the enclosed gas fraction at radius $`r`$ is lower than that for a single-phase medium by the factor $`C_\mathrm{b}(r)=C_{\mathrm{gas}}(r)/C_{\mathrm{tot}}(r)`$. Figure 1 plots the correction factor for the cluster gas mass, $`C_{\mathrm{gas}}`$, for a few choices of the controlling parameters $`\sigma _c`$ and $`ϵ`$. For purposes of illustration, we take structural parameters representative of rich clusters, namely $`r_c/r_{200}=0.1`$ and $`\beta =2/3`$ (Neumann & Arnaud 1999), and assume pure bremsstrahlung emission, $`\alpha =1/2`$. The effect of the radial falloff of the multiphase structure on gas mass estimates is substantial. Density variations with large central rms perturbations $`\sigma _c\sim 2.0`$ produce a substantial (factor $`\sim 3`$) relative correction to the gas mass in the cluster core, but the effect on the total virial gas mass (mass within $`r_{200}`$) is reduced to $`\sim 25\%`$ if $`ϵ=\frac{1}{2}`$ and only $`\sim 10\%`$ if $`ϵ=1`$. Degeneracies exist in the virial gas correction factor; a relatively weak multiphase plasma distributed throughout the cluster can produce an effect that is similar to a plasma with strong density variations concentrated toward the center of the cluster (cf. $`\{\sigma _c,ϵ\}`$ combinations of $`\{1,\frac{1}{2}\}`$ and $`\{2,1\}`$). The correction factor for the total cluster mass, $`C_{\mathrm{tot}}`$, for the same multiphase parameters is shown in Figure 2. By steepening pressure gradients, the multiphase effects increase the total cluster mass derived from equation (19). Once again, weaker multiphase effects distributed throughout the cluster can yield a total mass within $`r_{200}`$ that is similar to a cluster plasma with stronger density variations concentrated in the cluster center. However, such concentrated multiphase effects will produce a total mass profile that is steeper (cf. $`(\sigma _c,ϵ)`$ of $`(2,\frac{1}{2})`$ vs. $`(2,1)`$). 
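Equations (17)–(20) only require one-dimensional integrals over the single-phase profile, so the virial-scale gas-mass correction is straightforward to evaluate numerically. The sketch below does this for C_gas with the illustrative structural parameters quoted above (r_c/r_200 = 0.1, β = 2/3, α = 1/2); C_tot follows the same pattern with the derivative of C_P.

```python
import numpy as np
from scipy.integrate import quad

BETA, ALPHA = 2.0 / 3.0, 0.5                     # beta-model slope and emissivity exponent

def sig2(x, sigma_c, eps):                        # x = r / r_c
    return sigma_c**2 * (1.0 + x**2) ** (-eps)    # eq. (5)

def c_rho(s2):
    return np.exp(0.25 * (ALPHA - 1.0) * (ALPHA + 2.0) * s2)   # eq. (14)

def rho_s(x):                                     # single-phase density, arbitrary normalisation
    return (1.0 + x**2) ** (-1.5 * BETA)

def c_gas(x_max, sigma_c, eps):
    """Enclosed gas-mass correction factor of eq. (18) out to x_max = r / r_c."""
    num = quad(lambda x: c_rho(sig2(x, sigma_c, eps)) * rho_s(x) * x**2, 0.0, x_max)[0]
    den = quad(lambda x: rho_s(x) * x**2, 0.0, x_max)[0]
    return num / den

# Correction within r_200 = 10 r_c for a few of the parameter pairs discussed above
for sigma_c, eps in [(1.0, 0.5), (2.0, 0.5), (2.0, 1.0)]:
    print(f"sigma_c={sigma_c}, eps={eps}: C_gas(r200) = {c_gas(10.0, sigma_c, eps):.2f}")
```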
Observations of strong gravitational lensing could be used to break this parameter degeneracy, to the extent that the hydrostatic assumption is valid in the cluster core. ## 3 Consequences We now turn to the issue of the effect of multiphase structure on inferred ICM gas fractions and cluster observables. For the latter, we consider the effects of multiphase plasma on a cluster’s X–ray spectrum and the Sunyaev-Zel’dovich microwave decrement through a line-of-sight taken through the center of the cluster. All of the results we discuss assume a standard structure model with core radius for the broadband X–ray emissivity (equation (1)) of $`r_c=0.1r_{200}`$, exponent $`\beta =2/3`$, and, for creation of X–ray spectra, an emission-weighted X–ray temperature $`T_X=10^8`$ K. Unless otherwise stated, we employ a value of $`\alpha =0.36`$ for the exponent of the plasma emission function, derived from a Raymond-Smith code as described below. Since we ignore galaxies in our modeling, the ICM gas fraction is synonymous with the cluster baryon fraction. We use the terms interchangeably below, but it must be remembered that the stellar content of cluster galaxies and intracluster light presents an absolute lower limit to the baryon content of clusters. ### 3.1 Baryon fraction bias The effects of increased total mass and decreased gas mass shown in Figures 1 and 2 combine multiplicatively to reduce the cluster baryon fraction. The magnitude of the effect within the virial radius is shown in Figure 3, where we show contours of $`C_\mathrm{b}(r_{200})`$, the baryon reduction factor, in the $`\{\sigma _c,ϵ\}`$ plane. The baryon reduction effect peaks at high $`\sigma _c`$ and low $`ϵ`$. At $`ϵ=0`$, the uniform, multiphase structure of GT96 is recovered, with magnitude $$C_\mathrm{b}=\frac{C_\rho ^2}{C_\mathrm{P}}=\mathrm{exp}\left(\frac{(\alpha -1)(\alpha +6)}{4}\sigma _c^2\right).$$ (21) To reduce the baryon fraction by factors $`C_\mathrm{b}\gtrsim 2`$ requires density variations of magnitude $`\sigma _c\gtrsim 1`$. In the following discussion, we highlight a set of five specific models, listed in Table 1. Models A and B have small ($`\sim 20\%`$) baryon corrections, models C and D have large baryon bias $`C_\mathrm{b}\sim 2`$ and model E is an extreme model in which the baryon fraction is reduced by an order of magnitude. ### 3.2 Sunyaev-Zel’dovich Effect The SZ effect is produced by inverse-Compton scattering of cosmic microwave background (CMB) radiation off thermally excited electrons in the hot ICM plasma (see Birkinshaw 1998 for a recent review). We calculate the central Comptonization parameter of the (nonrelativistic) thermal SZ effect $$y(0)=\int n_e(l)\sigma _T\frac{k_BT(l)}{m_ec^2}\,dl$$ (22) where the integral $`dl`$ is along a narrow line of sight through the center of the spherical cluster. Here $`n_e=\rho /\mu _em_p`$ is the electron number density and $`\sigma _T`$ the Thomson cross section. Since the plasma phases are assumed isobaric, the product $`n_e(r)T(r)`$ is constant, and no phase integral is necessary in the multiphase case. Deviation of the $`y`$–decrement from that of a single-phase plasma is caused by the alteration of the overall pressure profile in the cluster. The fractional deviation of the central Comptonization parameter $`\mathrm{\Delta }y/y_s=[y_m(0)-y_s(0)]/y_s(0)`$ in the multiphase model with respect to the single-phase model is shown in Figure 4. 
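A sketch of the line-of-sight integral in equation (22) for a beta-model atmosphere is given below. The pressure-boost function passed in plays the role of C_P(r) from equation (16); the constants are SI values and the cluster parameters are illustrative assumptions (the fractional deviation printed at the end does not depend on the density normalisation).

```python
import numpy as np
from scipy.integrate import quad

SIGMA_T = 6.652e-29    # Thomson cross section (m^2)
K_B     = 1.381e-23    # J/K
ME_C2   = 8.187e-14    # electron rest energy (J)

def y_central(n_e0, T_X, r_c, r_max, C_P=lambda r: 1.0, beta=2.0/3.0):
    """Central Comptonization parameter, eq. (22), through a beta-model cluster.
    C_P(r) is the multiphase pressure correction (C_P = 1 gives the single phase)."""
    integrand = lambda l: (n_e0 * (1.0 + (l / r_c) ** 2) ** (-1.5 * beta)
                           * C_P(l) * K_B * T_X)
    return 2.0 * SIGMA_T / ME_C2 * quad(integrand, 0.0, r_max)[0]

# Illustrative multiphase model: alpha = 0.5, sigma_c = 1, eps = 0.5, r_c ~ 0.2 Mpc
r_c, sigma_c, eps = 6.2e21, 1.0, 0.5
C_P = lambda r: np.exp(0.25 * 0.5 * 1.5 * sigma_c**2 * (1.0 + (r / r_c) ** 2) ** (-eps))
y_s = y_central(3.0e3, 1.0e8, r_c, 10.0 * r_c)
y_m = y_central(3.0e3, 1.0e8, r_c, 10.0 * r_c, C_P)
print("fractional SZ enhancement:", (y_m - y_s) / y_s)
```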
In the case of a uniform, multiphase structure ($`ϵ=0`$), the fractional change in $`y`$ follows from the pressure correction factor, equation (16), $$\mathrm{\Delta }y/y_s=\mathrm{exp}\left(\frac{(1-\alpha )(2-\alpha )}{4}\sigma _c^2\right)-1.$$ (23) Figure 4 shows data for the case $`\alpha =0.36`$, approximately the slope of the $`2`$–$`10`$ keV luminosity versus temperature, derived from a Raymond–Smith plasma code assuming a one–third solar abundance of metals. Similar values result for the case $`\alpha =0.5`$. The two models with $`\sim 20\%`$ baryon diminution have modest, but potentially discernible, SZ effects. Model A has a central decrement enhanced by $`\sim 10\%`$ while model B is enhanced by $`\sim 25\%`$ over the single-phase case. The latter is similar to the $`\sim 30\%`$ effect for model C, one of the large baryon fraction diminution models. The other factor 2 baryon fraction model — model D — has a central value of $`y`$ increased by $`\sim 50\%`$ over the standard $`\beta `$–model. The extreme model E has a signal enlarged by a factor 2 over the standard case. Even in the single-phase case, there is inherent uncertainty in predicting the SZ effect amplitude from X-ray observations which arises from uncertainty in the physical distance to the cluster. At low redshifts, the distance error is completely due to uncertainty in the Hubble constant. Given a cluster with fixed X–ray properties, a fractional deviation in central SZ decrement $`\mathrm{\Delta }y/y_s`$ due to a multiphase medium could, instead, be interpreted as a distance effect. This would imply a fractional error in the Hubble constant $$\mathrm{\Delta }H_0/H_0=-2\mathrm{\Delta }y/y_s.$$ (24) For example, in a universe with true Hubble constant of $`65\mathrm{km}\mathrm{s}^{-1}\mathrm{Mpc}^{-1}`$, observations of a multiphase model A cluster would yield a value $`52\mathrm{km}\mathrm{s}^{-1}\mathrm{Mpc}^{-1}`$ and model B would produce an estimate of $`42`$. The other highlighted models would produce even lower estimates of $`\mathrm{H}_0`$. Note that this result has the opposite sense of correction for the SZ decrement compared to other estimates of the SZ effect with multiphase gas (e.g., Holzapfel et al. 1997). This is because the gas pressure for isobaric density fluctuations is greater than that for a single-phase medium, whereas other models with adiabatic density fluctuations, such as those present in SPH calculations without cooling (Inagaki, Suginohara & Suto 1995), have a pressure lower than that of the single-phase gas. ### 3.3 X-ray spectra Spectroscopic analysis of the X–ray emission provides an alternative, independent diagnostic of multiphase structure. We calculate expected X–ray spectra from the multiphase plasma in two ways. We first consider a simple model for the X–ray bremsstrahlung continuum from the cluster, using an emission function of a purely hydrogen plasma, with $`\epsilon (E,T)\propto T^{-\frac{1}{2}}e^{-\frac{E}{kT}}`$ and a Gaunt factor of unity. The plasma emission function is then $`\mathrm{\Lambda }(T)=K_0T^{1/2}`$, with $`K_0`$ an arbitrary normalization amplitude. Second, a more detailed spectrum is calculated, using the Raymond-Smith plasma emission code (Raymond and Smith 1977) with metal abundances one-third of the solar value (Allen 1973). The former approach highlights continuum behavior while the latter allows the study of the behavior of X–ray lines between $`0.5`$ and $`9`$ keV. 
We generate the composite spectrum emitted by the multiphase cluster atmosphere by numerically integrating the weighted emission over the density distribution at a particular radius, then integrating over the cluster volume $`V=\frac{4\pi }{3}r_{200}^3`$. Gas beyond the virial radius $`r_{200}`$ is ignored. The appearance of the continuum plasma X–ray emission in the multiphase case can differ substantially from that of the isothermal, single-phase plasma. In Figure 5, we show the $`0.05`$–$`100`$ keV bolometric X–ray spectra of three multiphase models (A, C and E) along with the single-phase case. All spectra are normalized to yield the same emission weighted temperature of $`10^8\text{ K}`$. We assume all phases are optically thin. The most important effect on the spectra is the appearance of both low-energy ($`E\ll k_BT_X`$) and high-energy ($`E\gg k_BT_X`$) enhancements of the spectrum with increasing magnitude of multiphase effects. This shape change arises from the blending of gas at temperatures both below and above the fiducial $`10^8\text{ K}`$ value. In the limit of extreme multiphase strength (model E), the bremsstrahlung spectrum approaches power-law like behavior. A more complete X-ray spectrum of clusters is calculated with the Raymond-Smith plasma emission code. In particular, the use of such a code allows investigation of line emission as a diagnostic of multiphase structure. Figure 6 shows the simulated emission, derived from a Raymond-Smith code, between $`0.1`$–$`15`$ keV photon energies for a plasma with an assumed metallicity equal to one–third of solar abundance. Along with the rise of the low-energy continuum, the other prominent effect of increased multiphase structure is the strengthening of low-energy ($`\sim 1`$ keV) emission lines. To highlight line versus continuum effects, we plot both zero and one–third solar metallicity predictions for the emission for each model shown. The complex of lines between $`0.5`$ and $`1.5`$ keV presents a useful diagnostic for multiphase structure. Included in this region of the spectra are the Fe L-shell lines, as well as H-like and He-like emission from N, O, Ne and Mg. For example, weak baryon bias models (A and B) are readily distinguished by this emission signature, as are the more strongly multiphase models, C and D. In contrast to the low energy lines, the strength of the 7 keV iron complex is almost unaffected by multiphase structure. These lines originate in hot phases very close to the fiducial temperature of $`10^8\text{ K}`$. The emission weighted temperature constraint imposed on the models requires that the contribution to the total emission from phases near the fiducial temperature cannot vary by large factors. Hence the hot emission lines do not vary significantly among the multiphase models. Given the very different behavior of the low and high energy line emission, we investigate the behavior of a hardness ratio $$\mathcal{H}=\frac{L_{tot}[0.6\text{–}1.5\,\mathrm{keV}]}{L_{tot}[6.6\text{–}7.5\,\mathrm{keV}]}$$ (25) in the multiphase model plane. For the single phase case, $`\mathcal{H}=2.69`$. Contours of constant $`\mathcal{H}`$ in the $`\{\sigma _c,ϵ\}`$ plane are shown in Figure 7. From this figure, it is clear that even moderate signal-to-noise spectra could produce useful constraints among models with similar baryon fraction correction factors. Models A and B differ in $`\mathcal{H}`$ by nearly a factor 2. Models B and C are nearly degenerate in this measure, but inspection of Figure 6 shows that model C is more continuum dominated at low energies while model B has a larger line contribution. 
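The continuum part of this calculation is straightforward to sketch. The code below mixes pure-hydrogen bremsstrahlung from isobaric log-normal phases at a single radius and evaluates a continuum-only version of the hardness ratio of equation (25); it ignores the radial integration, line emission, and the exact temperature normalisation of equation (10), so its numbers only indicate the trend, not the Raymond-Smith values quoted in the text.

```python
import numpy as np

def phase_grid(sig2, n_phases=61):
    """Density grid (in units of rho_0) and volume weights for the log-normal pdf, eq. (4)."""
    s = np.sqrt(sig2)
    u = np.linspace(-4.0 * s, 4.0 * s, n_phases)          # u = ln(rho / rho_0)
    w = np.exp(-u**2 / (2.0 * sig2))
    return np.exp(u), w / w.sum()

def composite_spectrum(E, kT_x, sig2):
    """Volume-weighted bremsstrahlung continuum from isobaric phases (T ~ 1/rho);
    the reference phase is assigned kT_x, an approximation to eq. (10)."""
    rho, w = phase_grid(sig2)
    T = kT_x / rho
    return sum(wi * ri**2 * Ti**-0.5 * np.exp(-E / Ti) for ri, Ti, wi in zip(rho, T, w))

E = np.logspace(np.log10(0.1), np.log10(15.0), 400)        # keV
spec = composite_spectrum(E, kT_x=8.6, sig2=1.0)            # kT_X ~ 8.6 keV ~ 1e8 K
soft = (E > 0.6) & (E < 1.5)
hard = (E > 6.6) & (E < 7.5)
print("continuum-only hardness ratio:",
      np.trapz(spec[soft], E[soft]) / np.trapz(spec[hard], E[hard]))
```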
### 3.4 Limiting the baryon fraction bias The concluding sentence of GT96 expresses a view “that the intracluster medium is much more complex than most people have hitherto assumed and that there is sufficient uncertainty in its modeling to permit a critical density, Einstein-deSitter universe”. Within the context of the expanded version of their model which we develop here, we can ask whether this opinion is supported by recent data. A number of high quality measurements of the Hubble constant from SZ and X–ray observations have been made recently (Myers et al. 1997; Birkinshaw & Hughes 1994; Jones 1995; Hughes & Birkinshaw 1998; Holzapfel et al. 1997). Hughes & Birkinshaw (1998) present an ensemble value $`\mathrm{H}_0=47\pm 7\mathrm{km}\mathrm{s}^1\mathrm{Mpc}^1`$ from these studies. For a true value $`\mathrm{H}_0=65\mathrm{km}\mathrm{s}^1\mathrm{Mpc}^1`$, supported by Type Ia supernovae (Hamuy et al. 1995; Riess et al. 1996), expanding photosphere of Type II supernovae (Schmidt et al. 1994) gravitational lens time delays (Kundi et al. 1997; Schechter et al. 1997), this ensemble average is low by 28%. Assuming that a multiphase structure is at least partly responsible for this biased estimate — other effects may lead to an underestimate at the $`510\%`$ level (Cen 1998) — then a bound on the SZ decrement enhancement $`\mathrm{\Delta }y/y_s\mathrm{}<0.14`$ results. Comparing the contours in Figures 3 and 4, this limit restricts baryon fraction diminution factors to be modest, $`0.75\mathrm{}<C_\mathrm{b}1`$. ### 3.5 Caveats and extensions The model we present contains a number of simplifying assumptions. It is important to note that the model is, in principle, falsifiable. Given a known Hubble constant, likelihood analyses of SZ observations and X–ray spectra will independently identify preferred regions in the $`\{\sigma _c,ϵ\}`$ plane. If these regions are consistent, the observations can be combined to yield a best estimate location $`\{\widehat{\sigma }_c,\widehat{ϵ}\}`$, and an estimate of the baryon fraction bias $`C_\mathrm{b}`$ (Figure 3). Inconsistent constraints may imply a need to relax one or more of the following assumptions. Lack of spherical symmetry. With rare exceptions, cluster X–ray images are close to round. Most have axial ratios $`b/a\mathrm{}>0.8`$ (Mohr et al. 1995). Such small deviations from spherical symmetry lead to scatter, but little bias, in determinations of $`\mathrm{H}_0`$ from SZ+X–ray analysis (Sulkanen, 1999). Given supporting evidence for a multiphase ICM in a cluster of moderate ellipticity, the spherical model introduced here could be extended to prolate or oblate spheroids. A more profitable approach might be to include multiphase structure in the deprojection method discussed by Zaroubi et al. (1998). Non-isothermal emission weighted temperature profiles. There is indication from ASCA observations (Markevitch et al. 1998) that the emission weighted ICM temperature declines substantially within the virial radius. However, ROSAT colors rule out a temperature drops of $`12/20\%`$ for $`5/10\mathrm{keV}`$ clusters within one-third of $`r_{200}`$ (Irwin, Bregman & Evrard 1999). It is straightforward to include a radial temperature gradient $`T_X(r)`$ into the analysis, entering into the definition of the pressure profile, equation 10. Non-lognormal distribution of density fluctuations. 
The chosen form of the density distribution is motivated by simplicity and by the observation that non–linear gravity on a Gaussian random density field characteristically generates a log-normal pdf (Cole & Weinberg 1994). The results are sensitive to low order moments of the density distribution. We await observations and future numerical simulations including cooling and galaxy-gas interactions in a three-dimensional setting to shed light on the appropriate form of the density fluctuation spectrum. Non-isobaric equation of state. This may be the most readily broken of our model assumptions. The cluster environment is very dynamic. During large mergers, the behavior of the gas in the inner regions of infalling subclusters is essentially adiabatic (Evrard 1990; Navarro, Frenk & White 1993). During quiescent periods between mergers, a cluster atmosphere may stabilize and develop the assumed isobaric perturbations during a cooling flow phase (Thomas et al. 1986). In the transition period, isobaric perturbations in the core may co-exist with adiabatic perturbations near the virial radius. Empirical constraints will come from improved spectroscopic imaging. Binding mass estimates under hydrostatic equilibrium. As noted in §2.2, the radial dependence of our model multiphase distribution can lead to a total mass profile $`M_{tot,m}(r)`$ that is steeper than that determined for a single-phase gas. Galaxy cluster lensing observations could be used to test the mass distribution predicted by multiphase models (see Figure 2). This provides another independent constraint on the admissible region of the multiphase $`\{\sigma _c,ϵ\}`$ plane. ## 4 Summary and Discussion We present a spherically symmetric, multiphase model of the intracluster medium in galaxy clusters. The model assumes the existence of a lognormal distribution of isobaric density and temperature fluctuations at any radius. The radially dependent variance of the density fluctuations $`\sigma ^2(r)`$ is subject to two empirical constraints: 1) that the broadband X–ray emissivity profile matches observations and 2) that the X–ray emission-weighted temperature is constant with radius. We calculate the bias introduced in cluster gas mass fraction estimates when a single-phase model is applied to a multiphase atmosphere. As derived by GT96, the standard analysis of the X–ray observations with a single-phase assumption will overestimate the baryon fraction in the multiphase case. Examining observable effects on the central Sunyaev-Zel’dovich decrement as well as X–ray spectroscopy, we demonstrate how, within the context of this model, the bias can be recovered by existing and future observations. Large values of the clumping factor $`𝒞`$, and hence large reductions in the cluster baryon fraction, are not favored by current observations. Models with high values of $`\sigma _c`$ produce a nearly power-law X–ray bremsstrahlung continuum and bias estimates of the Hubble constant. An ensemble mean value of $`47\pm 7\mathrm{km}\mathrm{s}^{-1}\mathrm{Mpc}^{-1}`$ (Hughes & Birkinshaw 1998) arising from recent SZ+X–ray analysis, when compared to an assumed value of $`65\mathrm{km}\mathrm{s}^{-1}\mathrm{Mpc}^{-1}`$, suggests that clumping overestimates ICM gas fractions by only $`\lesssim 20\%`$. Spatially resolved X–ray spectroscopy, particularly of line emission in cooler regions (0.1–3 keV), will provide tests of the multiphase model. Data from the upcoming AXAF and XMM missions will be particularly valuable. 
## Acknowledgments This work is supported by the National Science Foundation through the 1998 physics REU summer program at the University of Michigan and through grant AST-9803199. We acknowledge NASA support through grant NAG5-7108 and NSF through grant AST-9803199. M.E.S. thanks NASA’s Interagency Placement Program, the University of Michigan Department of Astronomy, and the University of Michigan Rackham Visiting Scholars Program.
no-problem/9903/astro-ph9903223.html
ar5iv
text
# Cosmic microwave background map making with data between 10 and 90 GHz ## 1 Introduction When analysing data from cosmic microwave background (CMB) experiments it is important to be able to distinguish features originating from the CMB and those that originate from foregrounds. With the sensitivity of experiments improving it is becoming increasingly important to subtract these foregrounds from the data before any cosmological interpretation is made. Using multi-frequency observations it should be possible to extract information on both the CMB and foregrounds sources. The multi-maximum entropy method (multi-MEM) can be used to analyse multi-frequency observations to obtain constraints on the various foregrounds that affect CMB experiments. Hobson et al. (1998) use this method to analyse simulated data from the Planck satellite experiment and Jones et al. (1999a) apply it to simulated data from the MAP satellite. In applying this technique the different frequency observations must be sensitive to the same structures (i.e. they have an overlapping window function) as well as observing the same region. Fortunately the Tenerife beam-switching experiments have an overlapping window function and common observing region with the COBE satellite. This constitutes a region with a frequency range of 10 GHz to 90 GHz and, therefore, offers enough data to allow Galactic subtraction from CMB data. We present the results of this joint analysis in the form of a CMB map and Galactic foreground map at 10 GHz. ## 2 Tenerife data The observing strategy and telescope design of the Tenerife experiments have been described elsewhere (see Gutierrez et al. 1995 and references therein). The data to be used in the analysis here have been presented in a companion paper (Gutierrez et al. 1999) and consist of scans at constant declination from a set of beam-switching experiments. To make use of a large range in frequency coverage it is only possible to use the data between Declinations $`32.5^{}`$ and $`42.5^{}`$ where there is data at both 10 GHz and 15 GHz. The Declination strips are separated by $`2.5^{}`$ and so there are 5 scans in the final analysis (it is noted that the technique described below does not require the same number of declinations at each frequency but as we require a Galactic separation we are limited by our coverage at the lowest frequency where the Galactic emission dominates). We also concentrate our analysis at high Galactic latitudes away from strong Galactic emission (RA $`160^{}260^{}`$) where the CMB signal will be more dominant). The final stacked data scans consist of continuous observations binned in $`1^{}`$ intervals in Right Ascension (RA) and the final average noise per bin is $`130\mu `$K at 10 GHz and $`54\mu `$K at 15 GHz (corresponding to $`50\mu `$K and $`20\mu `$K per beam respectively). The beamwidths are $`4.9^{}`$ and $`5.2^{}`$ FWHM at 10 GHz and 15 GHz, respectively. We also include the 33 GHz ($`5.0^{}`$ FWHM and $`55\mu `$K error per bin), Dec. $`40^{}`$ Tenerife data that was presented in Hancock et al. (1994) as an extra constraint. No other data at this frequency is currently available. ## 3 COBE data The COBE satellite window function overlaps that of the Tenerife window function (Watson et al. 1992) and so it should see the same features. The sensitivity of the four-year COBE data is also comparable to that of the Tenerife data and so it forms a useful constraint on the level of CMB, relative to the Galactic emission, contained in the two data sets. 
We extract the region overlapping that observed by the Tenerife 10 GHz experiment at high Galactic latitude (RA $`160260^{}`$, Dec. $`32.542.5^{}`$) which corresponds to 200 COBE pixels at each frequency. As the noise in the 30 GHz COBE channel is much larger than that in the other two channels (53 GHz and 90 GHz) and also much larger than in the Tenerife data we opt not to use this data in the analysis. Our final data set therefore consists of data covering RA $`160260^{}`$, Dec. $`32.542.5^{}`$ at 10, 15, 33, 53 and 90 GHz covering almost an order of magnitude in frequency. ## 4 Multi-MEM analysis technique The multi-MEM technique has been presented elsewhere (Hobson et al. 1998). Here we will only outline the implementation of the technique to the data set considered in this paper. As has been previously shown (Jones et al. 1998) the Tenerife data set contains long term baseline variations which can be simultaneously subtracted from the data when applying the MEM technique. This is still done with the multi-MEM technique and the only difference between the single channel MEM presented in Jones et al. (1998) and the multi-MEM technique considered here is that the $`\chi ^2`$ is now a sum over each of the frequency channels and the entropy is a sum over each of the reconstructed maps (in this case the CMB and Galactic foreground maps). The information that the multi-MEM technique requires is only the frequency spectra of the channels to be reconstructed (although this can be relaxed if a search over spectral indices is performed as in Section 5.1). The maximum entropy result was found using a Newton-Raphson iteration method until convergence was obtained (usually about 120 iterations were used although convergence was reached within $`60`$ iterations). ### 4.1 The choice of $`\alpha `$ and $`m`$ The choice of the Bayesian parameter $`\alpha `$ and the model parameter $`m`$ in the Maximum Entropy method have often been treated as a ‘guesswork’. In Fourier space it is possible to calculate the Bayesian value for the $`\alpha `$ parameter but in real space this becomes much more complicated as it involves the inversion of large matrices (see Hobson et al. 1998). In the past $`m`$ was chosen to be the rms of the signal expected in the reconstruction and $`\alpha `$ was chosen so that the final value of $`F=\chi ^2\alpha S`$ was approximately equal to the number of data points, $`N`$. Actually $`F`$ was chosen to be just below $`N+2\sqrt{N}`$ which is the confidence limit on $`F`$ calculated by using the degrees of freedom on $`\chi ^2`$ as a function of the reconstruction. Here we investigate the behaviour of the Monte-Carlo reconstructions with varying $`\alpha `$ and $`m`$ and show that this approximation is actually a very good one. Figures 1 and 2 show the variation of $`F`$ with $`\alpha `$ and $`m`$ respectively. Also shown is the average error on the reconstructions. As is seen the minimum in the errors on the reconstruction for $`\alpha `$ occurs when $`F`$ is within the allowed range for classic MEM ($`F=N\pm 2\sqrt{N}`$ where $`N`$ is the number of data points). The minimum value for $`F`$ when varying $`m`$ occurs where $`m=50\mu `$K which is the rms of the expected reconstruction. Therefore, our initial ‘guesses’ at $`\alpha `$ and $`m`$ are properly justified and are the values used in the following analysis. 
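A toy version of the quantities being tuned here is sketched below: the maximum-entropy objective F = χ² − αS for a linear (beam-switched) response, and the selection of α so that the converged F sits just below N + 2√N. The entropy shown is the standard positive-image form and the solver is left as a placeholder; the real multi-MEM analysis sums χ² over frequency channels, sums the entropy over the CMB and Galactic maps, and uses an entropy form that allows negative fluctuations.

```python
import numpy as np

def mem_objective(x, data, response, noise, alpha, model):
    """F = chi^2 - alpha*S for a map `x` observed through a linear response matrix."""
    chi2 = np.sum(((data - response @ x) / noise) ** 2)
    entropy = np.sum(x - model - x * np.log(x / model))     # positive-image entropy, S <= 0
    return chi2 - alpha * entropy

def choose_alpha(trial_alphas, F_at_solution, n_data):
    """Keep the alpha whose converged F lies just below the bound N + 2*sqrt(N)."""
    bound = n_data + 2.0 * np.sqrt(n_data)
    good = [(a, F_at_solution(a)) for a in trial_alphas]
    good = [(a, F) for a, F in good if F < bound]
    return max(good, key=lambda pair: pair[1])              # F as close to the bound as possible
```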
## 5 Application to the data The multi-MEM technique was applied to the full Tenerife and COBE data set assuming that there were two sources for the fluctuations seen in the data. The spectral dependencies of these two sources were that of CMB and that of free-free emission (although this was later relaxed, see Section 5.1). Simultaneous long term baseline removal was performed on the Tenerife maps and any features with a period longer than $`25^{}`$ (corresponding to features that are outside the Tenerife window function) were subtracted. The convergence of the MEM algorithm is shown in Figure 3. Figure 4 shows the CMB reconstruction obtained and Figure 5 shows the Galactic foreground reconstruction. As can be seen the two maps clearly have different features. The sky coverage is limited by the smallest survey (10 GHz) although the single declination at 33 GHz (Dec. $`40^{}`$) is included in the analysis as an extra constraint. The rms signals at 10 GHz for the CMB and Galactic maps are $`42\mu `$K and $`36\mu `$K respectively. Taking a free-free spectral index this corresponds to a Galactic signal of $`15\mu `$K at 15 GHz and $`3\mu `$K at 33 GHz which implies that the Galactic foreground is negligible in both cases (when added in quadrature). The error on the CMB and Galactic reconstruction is $`40\mu `$K and $`30\mu `$K per $`1^{}`$ pixel respectively (compared with errors of $`46\mu `$K and $`35\mu `$K on the reconstructions of the separate 10 GHz and 15 GHz channels respectively presented in Gutierrez et al. 1999). This error was calculated from the variance over 300 Monte-Carlo simulations of the MEM analysis. The reconstruction contains individual features at $`5^{}`$ resolution (the maximum resolution of the Tenerife experiments) and the error over one of these features is therefore $`9\mu `$K for the CMB map and $`7\mu `$K for the Galactic map. ### 5.1 Spectral index determination The analysis performed assumed that the Galactic foreground was free-free emission with a temperature spectral index of -2.1. Clearly, this is an assumption which may be incorrect as the dominant foreground in this region could be from another source. Therefore, it is necessary to vary this spectral index to see how the foreground and CMB reconstructions change. This has been done over a range for the spectral index of 0.0 to -4.0. Figure 6 shows the variation in $`\chi ^2`$ (comparing the predicted data to the real data) over this range of spectral index. As can be seen there is a broad minimum above a spectral index of $`-2`$ which implies that in this range very little is changed in the two reconstructions. This is because the main information on the Galactic channel occurs in the 10 GHz data and the higher frequency data act as upper limits. Therefore, it appears as though the Galactic foreground fits the data well with a spectral index of $`-2`$ which corresponds to free-free emission (this spectral index corresponds to the point at which the Galactic contribution at the higher frequencies is just below the upper limit). Changes in the CMB reconstruction were well below the noise when the spectral index was varied for the Galactic channel. ## 6 Identification of features The main purpose of using the multi-MEM technique is to allow clean separation of CMB sources from foreground emission. This can be checked by comparing the two maps with existing surveys and previous predictions. 
## 6 Identification of features

The main purpose of using the multi-MEM technique is to allow clean separation of CMB sources from foreground emission. This can be checked by comparing the two maps with existing surveys and previous predictions. It is very difficult to compare the features in the maps with known Galactic features, as very few surveys at similar frequencies and scales cover the Tenerife region of sky. However, there is a survey at 5 GHz which uses an interferometer and has been used to create a map of Galactic fluctuations covering this region (Jones et al. 1999b, hereafter JB98). Also, the 408 MHz (Haslam, Salter, Stoffel & Wilson 1982) and 1420 MHz (Reich & Reich 1988) surveys cover this region, but the artefacts within the 1420 MHz survey make a meaningful comparison difficult (JB98). By comparison with the survey presented in JB98, some common features are observed. For example, the Galactic feature at RA $`175^{}`$, Dec. $`32.5^{}`$ was seen in the 5 GHz and the 408 MHz surveys. The point source at Dec. $`30^{}`$, RA $`200^{}`$ appeared to be extended in the 5 GHz survey, and there appears to be a Galactic feature in the same region which could account for this extension. However, there are features which appear in the Galactic map here and not in the lower frequency surveys, although this could be due to the different angular scale dependencies. For example, there is a large feature at RA $`230^{}`$, Dec. 35°–40°. This only shows up in the 5 GHz map at very small amplitude, and this could be due to the interferometer resolving out the feature. This feature was observed at Dec. $`40^{}`$ in an $`8^{}`$ FWHM beam at 10 GHz using the predecessor to the Tenerife experiments (Davies et al. 1987). The CMB map is even more difficult to compare with other maps, as the only two surveys which have covered this region are the COBE and Tenerife experiments. Therefore, no comparison is possible at this time, although it is possible to check previous predictions made by the COBE and Tenerife teams. The main predictions about the CMB were the feature at Dec. $`40^{}`$, RA $`185^{}`$ (Hancock et al. 1994) and the positive-negative-positive feature (smoothed to $`10^{}`$ scale in RA) at Dec. $`35^{}`$, RA 200°–250° (Bunn, Hoffman & Silk 1996 and Gutierrez et al. 1997), both of which are clearly seen here (the second being a combination of negative and positive fluctuations). The other features in this map are all potentially CMB in origin but must wait until other experiments with overlapping window functions have surveyed this region before being assigned unambiguously.

## 7 Conclusions and further work

The reconstructed maps using data between 10 GHz and 90 GHz were presented. Features of Galactic and cosmological origin were identified. The CMB map is very similar to the reconstructed map using the 15 GHz Tenerife data alone (Gutierrez et al. 1999), and so it is possible to use that data alone as a constraint on the CMB to put limits on the cosmological parameters, as has been done in the past. It was found that the Galactic contamination in this frequency range is consistent with free-free emission (if the upper limit set by the 15 GHz data is a true limit), although no constraint on the relative level of free-free and synchrotron emission was possible. Taking all of the Galactic foreground to be free-free emission, it was found that the 15 GHz data were contaminated by $`15\pm 3\mu `$K, which is very small when added in quadrature to the CMB signal of $`42\pm 9\mu `$K. If the foreground were all synchrotron emission, this value would be even lower. At 33 GHz the free-free component would contribute a signal of $`3.0\pm 0.6\mu `$K, which is negligible. There are many future applications of this technique.
The multi-MEM technique has already been applied to simulated Planck and MAP satellite data. We are presently collating other CMB and Galactic surveys to put further constraints on the foregrounds and on the CMB itself at different angular scales, covering a wider frequency range to reduce the errors obtained here. The only requirement this method places on producing real CMB maps is that there be enough frequency coverage from experiments with overlapping window functions observing the same region of sky. We are also combining data from experiments with different window functions to put direct constraints on the spatial power spectrum of the CMB and Galactic fluctuations. We are currently working on applying this technique to the spherical sky, and a full likelihood analysis, as well as tests for non-Gaussianity within the data, will be presented soon.

## Acknowledgements

AWJ acknowledges King’s College, Cambridge, for support in the form of a Research Fellowship.
no-problem/9903/nucl-th9903021.html
ar5iv
text
# Can We Extract Lambda-Lambda Interaction from Two-Particle Momentum Correlation?

## 1 INTRODUCTION

Extracting hyperon-hyperon interactions is one of the most challenging current problems in nuclear physics. It provides us with an opportunity to verify various ideas on baryon-baryon interactions, such as the flavor SU(3) symmetry. Hyperon-hyperon interactions may also modify the properties of neutron stars, in which abundant hyperons are expected to exist. However, it is very difficult to determine them experimentally. For example, only three double $`\mathrm{\Lambda }`$ nuclei have been found in the past 35 years, in which only the low-energy $`{}^{1}S_{0}`$ $`\mathrm{\Lambda }`$-$`\mathrm{\Lambda }`$ interaction is accessible. Therefore, other ways to extract hyperon-hyperon interactions have long been sought. One of these ways is opened up by a recent measurement of the invariant mass spectrum of the $`\mathrm{\Lambda }`$-$`\mathrm{\Lambda }`$ pair in the $`{}^{12}\text{C}(K^-,K^+\mathrm{\Lambda }\mathrm{\Lambda })`$ reaction by Ahn et al. (KEK-E224 collaboration) . The invariant mass spectra, or two-particle momentum correlations, have been widely used to evaluate the source size when the final state interactions are negligible or well known. On the other hand, it may be possible to extract information on unknown strong interactions if we have realistic and reliable source functions. In this work, we apply the correlation function technique to the above $`\mathrm{\Lambda }`$-$`\mathrm{\Lambda }`$ invariant mass spectrum, and try to determine the $`\mathrm{\Lambda }`$-$`\mathrm{\Lambda }`$ interaction at low energies, by using a source function generated by the IntraNuclear Cascade (INC) model , which reproduces the inclusive $`(K^-,K^+)`$ spectra well.

## 2 INVARIANT MASS SPECTRUM IN <sup>12</sup>C($`K^-,K^+`$) REACTION

The inclusive spectra of the $`(K^-,K^+)`$ reaction on various targets can be explained well by the INC model , and the reaction mechanism predicted in this model was verified experimentally recently . In this work, we use this INC model, including the baryon mean field potential effects, to generate a hyperon source function. In Fig. 2, we compare the calculated $`K^+`$ momentum spectra in inclusive $`(K^-,K^+)`$ and coincidence $`(K^-,K^+\mathrm{\Lambda }Y)`$ reactions with data . The calculated results explain the data well, except for an underestimate of two-lambda production at around $`P_{K^+}\simeq 1.1`$ GeV/c. This underestimate is concentrated in the low invariant mass region of $`\mathrm{\Lambda }`$-$`\mathrm{\Lambda }`$, as shown in Fig. 2. This is as expected, since the correlation caused by the final state interaction modifies the invariant mass spectrum most effectively in the low mass region. The enhancement of the two-particle momentum spectrum has been studied extensively in the context of correlation functions. For example, in the correlation function formula , the probability to find a particle pair at momenta $`\vec{p}_1`$ and $`\vec{p}_2`$ reads $$P(\vec{p}_1,\vec{p}_2)=\int d^4x_1d^4x_2\,S_{12}(\vec{p}_1,x_1,\vec{p}_2,x_2)\left|\psi ^{(-)}(\vec{r}_{12};\vec{k})\right|^2,$$ (1) where $`\vec{r}_{12}`$ is the relative distance of the two particles at particle creation, $`S_{12}`$ denotes the source function, and the wave function $`\psi ^{(-)}`$ is chosen to have the outgoing relative momentum $`\vec{k}=(\vec{p}_1-\vec{p}_2)/2\hbar `$.
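The following sketch only illustrates the source averaging in Eq. (1): it draws pair separations from an assumed isotropic Gaussian source and averages |ψ|², with the final-state interaction switched off so that ψ is just the symmetrised plane wave appropriate to the spin-singlet (spatially symmetric) state. The real analysis uses the INC-generated source and the interacting wave function of Eq. (2) below; the source radius used here is an arbitrary placeholder.

```python
import numpy as np

# Toy source-averaged |psi|^2 for identical particles in the spin-singlet state,
# no final-state interaction: |psi|^2 = 1 + cos(2 k.r).

def correlation(k_fm, R_fm=2.5, n_samples=200_000, rng=np.random.default_rng(1)):
    # relative separation of the two emission points for a Gaussian source of radius R
    r12 = rng.normal(0.0, np.sqrt(2.0) * R_fm, size=(n_samples, 3))
    kvec = np.array([0.0, 0.0, k_fm])          # relative momentum along z (fm^-1)
    psi2 = 1.0 + np.cos(2.0 * (r12 @ kvec))    # |sqrt(2) cos(k.r)|^2
    return psi2.mean()

for k in (0.02, 0.05, 0.1, 0.2, 0.5):          # fm^-1
    print(k, correlation(k))                    # -> 2 as k -> 0, -> 1 at large k
```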
In this work, we limit ourselves to a single channel description of $`\mathrm{\Lambda }\mathrm{\Lambda }`$, and assume that the two $`\mathrm{\Lambda }`$ particles are in their spin singlet state. Then, the relative wave function of $`\mathrm{\Lambda }`$-$`\mathrm{\Lambda }`$ at small relative momentum is simplified as follows: $$\psi ^{(-)}(\vec{r};\vec{k})\simeq \sqrt{2}\left[\mathrm{cos}(kr\mathrm{cos}\theta )-j_0(kr)+e^{i\delta _0}u_0(r;k)\right].$$ (2) Based on these formulae, we have calculated the $`\mathrm{\Lambda }`$-$`\mathrm{\Lambda }`$ invariant mass spectrum including the correlation effects. We have used phenomenological $`\mathrm{\Lambda }`$-$`\mathrm{\Lambda }`$ potentials parametrized in a two-range Gaussian form, $`V_{\mathrm{\Lambda }\mathrm{\Lambda }}(r)=v_l\mathrm{exp}(-r^2/\mu _l^2)+v_s\mathrm{exp}(-r^2/\mu _s^2)`$. As shown by the thin histogram in Fig. 2, the enhancement at low invariant masses can be reproduced with an appropriate potential, although the second “peak” around 25 MeV is difficult to explain. At higher invariant masses the correlation effects are washed out by the oscillation of $`|\psi |^2`$ in the source, which spreads up to around $`r_{12}\simeq 5`$ fm. We have searched for the $`\chi ^2`$ local minima in the $`(v_l,v_s)`$ plane for a given longer range parameter, $`\mu _l=0.6`$–$`1.0`$ fm (Table 1). In Fig. 4, we compare the extracted $`\mathrm{\Lambda }`$-$`\mathrm{\Lambda }`$ potentials with the Nijmegen models. From this comparison, model ND (NF) with a hard-core radius $`r_c\simeq 0.50(0.46)`$ fm, and model NSC89 with a cutoff mass $`M_{cut}\simeq 920`$ MeV, are expected to give a reasonable enhancement in the $`\mathrm{\Lambda }`$-$`\mathrm{\Lambda }`$ invariant mass spectrum.

## 3 ARE TWO LAMBDAS BOUND?

As shown in Table 1, all the best fit parameters (trg-a) give a negative scattering length ($`\delta _0\simeq -ak`$), implying that there is no bound state. However, there is another local minimum (trg-b) in the region where there is a bound state, $`a>0`$. This double-well structure appears because the enhancement is described by the wave function squared. In the long wave approximation, the wave function becomes essentially a constant, and the correlation effect can then be factorized as follows , $$P(\vec{p}_1,\vec{p}_2)\simeq 2F(k)P_c(\vec{p}_1,\vec{p}_2),\qquad P_c=\int d^4x_1d^4x_2\,S_{12},\qquad F(k)=(1-a/b)^2-ck^2,$$ (3) where $`b`$ denotes the intrinsic range. There are two solutions for $`a`$ giving the same enhancement at low energies, $`a\simeq b\left(1\pm \sqrt{F(0)}\right)`$. When the spectrum is enhanced at low energies, $`F(0)>1`$, one of them is negative and the other becomes positive. In order to overcome this problem and to distinguish these two solutions, it may be helpful to use relativistic heavy-ion collisions, where the source size is much larger than that in the $`(K^-,K^+)`$ reaction and the long wave approximation would not work. For example, the scattering wave function has at least one node in the range $`r\lesssim a`$ when a bound state exists, which suppresses the contribution at around $`r\simeq a`$ in two-particle correlations. In Fig. 4, we show the calculated correlation function in relativistic heavy-ion collisions with the source function generated by using a recently developed cascade model, JAM . As expected from the above discussion, two-particle correlations are suppressed at small momenta with trg10b (with a bound state) compared with trg10a (no bound state).
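To make the double-valued solution discussed above concrete, the long-wave relation F(0) = (1 − a/b)² of Eq. (3) can be inverted for the scattering length; the two roots a = b(1 ± √F(0)) correspond to the unbound (trg-a-like) and bound (trg-b-like) branches. The numbers below are arbitrary placeholders, not the fitted E224 values.

```python
import numpy as np

def scattering_length_solutions(F0, b_fm):
    # two scattering lengths giving the same low-energy enhancement F(0)
    return b_fm * (1.0 - np.sqrt(F0)), b_fm * (1.0 + np.sqrt(F0))

F0, b = 4.0, 1.4                     # assumed enhancement at k=0 and intrinsic range (fm)
a_unbound, a_bound = scattering_length_solutions(F0, b)
print(a_unbound, a_bound)            # e.g. -1.4 fm (no bound state) and +4.2 fm (bound)
```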
## 4 Summary

We have analyzed the $`\mathrm{\Lambda }`$-$`\mathrm{\Lambda }`$ invariant mass spectrum in the $`{}^{12}\text{C}(K^-,K^+\mathrm{\Lambda }\mathrm{\Lambda })`$ reaction by using the classical source function given by the IntraNuclear Cascade (INC) model, combined with the correlation function formula, which takes account of the $`\mathrm{\Lambda }`$-$`\mathrm{\Lambda }`$ interaction. Within a single channel description of $`\mathrm{\Lambda }`$-$`\mathrm{\Lambda }`$, and under the assumption that the two lambdas are in their spin-singlet state, we limit the allowed range of the $`\mathrm{\Lambda }`$-$`\mathrm{\Lambda }`$ scattering parameters. We have also discussed the possibility of using relativistic heavy-ion collisions as one way to distinguish between $`\mathrm{\Lambda }`$-$`\mathrm{\Lambda }`$ potentials which reproduce the invariant mass spectrum of $`\mathrm{\Lambda }`$-$`\mathrm{\Lambda }`$ in $`(K^-,K^+)`$ equally well but have opposite signs of the scattering length.

The authors would like to thank Dr. J. K. Ahn and Prof. J. Randrup for fruitful discussions. This work was supported in part by the Grant-in-Aid for Scientific Research (Nos. 07640365, 08239104 and 09640329) from the Ministry of Education, Science and Culture, Japan.
no-problem/9903/hep-ph9903295.html
ar5iv
text
# Jets and fragmentation

## 1 Introduction

Recent experimental results from HERA at DESY show that the hadronic final state in deeply inelastic scattering can be studied with high precision. The results include the measurement of the strong coupling constant $`\alpha _s(Q^2)`$ by means of the (2+1)-jet rate $`R_{2+1}=\sigma _{2+1}/\sigma _{\text{tot}}`$ and event shapes , a direct determination of the gluon density , and the measurement of momentum fraction distributions for charged particles . The latter indicates that it may be possible to study scaling violations of fragmentation functions, the virtuality $`Q^2`$ of the photon being the relevant scale for the fragmentation process. In this review I will concentrate on three selected topics:

* NLO calculations for jet quantities: To exploit the increased experimental precision, reliable theoretical predictions in next-to-leading order (NLO) of QCD perturbation theory are required. In this proceedings contribution I give an overview of recent developments in NLO calculations for deeply-inelastic processes. The main improvement during the last two to three years was that universal Monte Carlo programs have become available which permit the numerical calculation of any (2+1)-jet-like infrared-safe observable in NLO.
* Matching of DIS and photoproduction: For photoproduction ($`Q^2\approx 0`$) and deeply inelastic scattering (DIS, $`Q^2\gg \mathrm{\Lambda }_{\text{QCD}}^2`$) it is well known how to calculate cross sections systematically in perturbation theory. Recently, a formalism has been developed which permits calculations in the transition region $`Q^2\approx \mathrm{\Lambda }_{\text{QCD}}^2`$.
* One-particle-inclusive processes: The measurements of momentum fraction distributions mentioned above show a very good agreement between experimental data and theoretical predictions in next-to-leading order for moderately large $`Q^2`$ and $`x_p`$. However, the theoretical prediction breaks down both for small $`Q^2`$ and small $`x_p`$.

For lack of space, I had to leave out many, if not most, interesting topics. For more detailed information, I would like to refer the interested reader to the proceedings of the DIS 98 workshop in Brussels .

## 2 NLO calculations for jet quantities

One of the basic problems of perturbative QCD calculations is that experimentally hadrons are observed in the final state, while theoretical calculations yield results for partons. Moreover, not all observables can be calculated in perturbation theory in a meaningful way. In principle, there are two possibilities:

* Infrared-safe observables, which are constructed such that all soft and collinear singularities cancel among real and virtual corrections or can be absorbed into redefined parton densities. The same observable is then evaluated both for parton final states (theory prediction) and hadron final states (experimental data), possibly after the experimental data have been corrected for systematic errors.
* Alternatively, additional non-perturbative objects can be introduced, for instance fragmentation functions, which allow for a study of one-particle-inclusive processes. Fragmentation functions have to be measured and parametrized experimentally, and may serve to hide final-state collinear singularities which do not cancel because of integrations over restricted phase space regions.

In this section I consider the first possibility; one-particle-inclusive processes will be treated in Section 4.
In QCD perturbation theory, expectation values for parton observables are calculated as a phase space integral of a product of a differential parton cross section $`\sigma ^{(n)}(p_1,\ldots ,p_n)`$ for $`n`$-parton final states and an observable $`𝒪^{(n)}(p_1,\ldots ,p_n)`$: $$\langle 𝒪\rangle =\sum _n\int \text{dPS}^{(n)}\,\sigma ^{(n)}(p_1,\ldots ,p_n)\,𝒪^{(n)}(p_1,\ldots ,p_n).$$ (1) In next-to-leading-order calculations, there are three contributions to be included: $$\langle 𝒪\rangle =\sigma _{\text{Born}}𝒪^{(n-1)}+\sigma _{\text{virtual}}𝒪^{(n-1)}+\sigma _{\text{real}}𝒪^{(n)}.$$ (2) Here $`\sigma _{\text{Born}}`$ is the lowest order cross section, $`\sigma _{\text{virtual}}`$ are the virtual and $`\sigma _{\text{real}}`$ are the real corrections. If the Born term has $`n-1`$ final-state partons, $`\sigma _{\text{virtual}}`$ will also have $`n-1`$ and $`\sigma _{\text{real}}`$ will have $`n`$ final-state partons. Infrared singularities arise in $`\sigma _{\text{virtual}}`$ in the loop integrations and in $`\sigma _{\text{real}}`$ in the phase space integration over $`\text{dPS}^{(n)}`$. As already mentioned in the introduction, theoretical predictions for partons in the final state are infrared-finite only for a special class of observables. The technical requirement for infrared-safe observables is that they behave well under soft and collinear limits: $`𝒪^{(n)}(p_1,\ldots ,p_i,\ldots ,p_n)\underset{p_i\to 0}{\longrightarrow }𝒪^{(n-1)}(p_1,\ldots ,\widehat{p}_i,\ldots ,p_n),`$ (3) $`𝒪^{(n)}(p_1,\ldots ,p_i,\ldots ,p_j,\ldots ,p_n)\underset{p_i\parallel p_j}{\longrightarrow }𝒪^{(n-1)}(p_1,\ldots ,\widehat{p}_i,\ldots ,\widehat{p}_j,\ldots ,p_n,p_i+p_j).`$ Momenta denoted by $`\widehat{p}`$ are to be omitted. The main technical problem is the extraction of the infrared singularities from the real corrections. It turns out that this can be done in an observable-independent way, such that it is possible to build Monte-Carlo programs which are able to integrate arbitrary infrared-safe observables. This can be done because the structure of QCD cross sections in kinematical limits is known: the factorization theorems of QCD state that the structure of the parton cross section $`\sigma _{\text{real}}`$ for collinear and soft limits is of the form of a product of a singular kernel $`K`$ and the Born cross section $`\sigma ^{(n-1)}`$: $$\sigma ^{(n)}\underset{\text{soft/collinear}}{\longrightarrow }K\,\sigma ^{(n-1)}.$$ (4) The product of $`\sigma _{\text{real}}`$ and $`𝒪^{(n)}`$ thus behaves in a simple way: the cross section goes over into a kernel $`K`$ and the Born cross section, and the observable approaches the corresponding observable for Born term kinematics. The kernel $`K`$ is independent of the phase space variables of the $`(n-1)`$-particle phase space, and thus the phase space integration over the corresponding variables can be performed analytically.
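As a hypothetical numerical illustration of the two conditions in Eq. (3), the snippet below checks them for a deliberately simple infrared-safe observable, the scalar sum of transverse momenta with respect to the z axis; real (2+1)-jet analyses use JADE or $`k_T`$ jet rates and event shapes, but those observables must pass exactly the same two tests.

```python
import numpy as np

# Toy check of infrared safety: soft limit (one momentum scaled to zero) and
# collinear limit (one momentum split into two parallel ones) must reproduce
# the observable evaluated on the reduced / original configuration.

def observable(momenta):                 # momenta: (n, 4) array of (E, px, py, pz)
    return np.sum(np.hypot(momenta[:, 1], momenta[:, 2]))   # scalar sum of p_T

def random_massless(n, rng):
    p3 = rng.normal(size=(n, 3))
    E = np.linalg.norm(p3, axis=1)
    return np.column_stack([E, p3])

rng = np.random.default_rng(2)
p = random_massless(4, rng)

# soft limit: parton 0 -> 0, compare with the 3-parton observable
soft = np.vstack([1e-8 * p[:1], p[1:]])
print(observable(soft), observable(p[1:]))

# collinear limit: parton 0 -> two partons with momentum fractions z and 1-z
z = 0.3
coll = np.vstack([z * p[:1], (1.0 - z) * p[:1], p[1:]])
print(observable(coll), observable(p))
```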
### 2.1 Calculations

Particularly interesting for phenomenological applications are processes with 2+1 jets in the final state (which means that 2 jets are produced from the hard scattering cross section, plus the remnant jet of the incident photon). First of all, in leading order of QCD perturbation theory these processes are of $`𝒪\left(\alpha _s\right)`$, and are thus suitable for a measurement of the strong coupling constant $`\alpha _s`$. Moreover, the gluon density enters in leading order in the so-called boson–gluon-fusion process. Therefore, this process can also be used to measure the gluon density . By now there are several calculations for (2+1)-jet processes available with corresponding weighted Monte-Carlo programs:

* PROJET : The jet definition is restricted to the modified JADE jet clustering scheme; the program is based on the calculation published in Refs. .
* DISJET : Again the jet definition is restricted to the modified JADE scheme; the program is based on the calculation in Refs. .
* MEPJET : This is a program for the calculation of arbitrary observables which uses the phase-space-slicing method. The corresponding calculation uses the Giele–Glover formalism for the analytical calculation of the IR-singular integrals of the real corrections, and the crossing-function technique to handle initial-state singularities. The latter requires the calculation of “crossing functions” for each set of parton densities.
* DISENT : This program is based on the subtraction method. The subtraction term is defined by means of the dipole formalism (the subtraction term is written as a sum over dipoles, an “emitter” formed from two of the original partons and a “spectator” parton; besides the factorization theorems of perturbative QCD, the main ingredient is an exact factorization formula for the three-particle phase space, which allows for a smooth mapping of an arbitrary 3-parton configuration onto the various singular contributions).
* DISASTER++ : This is a C++ class library (the acronym stands for “Deeply Inelastic Scattering: All Subtractions Through Evaluated Residues”; most of the program is written in C++, and a FORTRAN interface is available, thus there is no problem to interface the class library to existing FORTRAN code). The subtraction method is employed, and the construction of the subtraction term resembles the method of Ref. , i.e. it is obtained by the evaluation of the residues of the cross section in the soft and collinear limits. Double counting of soft and collinear singularities is avoided by means of a general partial fractions method.
* JetViP : This program implements the calculation of , which extends the previous calculations into the photoproduction limit $`Q^2\to 0`$. The calculation has been done by means of the phase space slicing method. Up to now, the polarization of the virtual photon is restricted to be longitudinal or transverse.

The two basic approaches which are employed to extract the infrared singularities from the real corrections are the phase-space-slicing method and the subtraction method.

* The phase-space-slicing method splits up the full parton phase space into two regions: a region $`R`$ where all partons can be resolved, and a region $`U`$ where two or more partons are unresolved. This split is usually achieved by means of a technical cut parameter $`s_{\text{min}}`$. Two partons with momenta $`p_1`$ and $`p_2`$ are unresolved if their invariant mass $`2p_1p_2`$ is smaller than $`s_{\text{min}}`$ and resolved if it is larger. The integration over the resolved region $`R`$ can be performed safely by Monte Carlo integration, because all infrared singularities are cut out by the phase space cut. The integration over the unresolved region $`U`$ is divergent and cannot be performed numerically, but because of the constraint $`2p_1p_2<s_{\text{min}}`$ the cross section factorizes (see Eq. 4) in the limit $`s_{\text{min}}\to 0`$. This contribution is approximated by this limit.
The integration over the singular region can be done analytically, and the divergent parts can be extracted. In the limit $`s_{\text{min}}\to 0`$ the sum of the two integrals over $`R`$ and $`U`$ should approach the integral over the full phase space. It should be kept in mind that this convergence has to be checked explicitly by varying $`s_{\text{min}}`$ and looking for a plateau in this variable.
* A calculation using the subtraction method defines a subtraction term $`S`$ which makes the integral $`\int \text{dPS}\left(\sigma ^{(n)}𝒪^{(n)}-S\right)`$ finite. The original integral is, as an exact identity, rewritten as $$\int \text{dPS}\,\sigma ^{(n)}𝒪^{(n)}=\int \text{dPS}\left(\sigma ^{(n)}𝒪^{(n)}-S\right)+\int \text{dPS}\,S.$$ (5) The first integral can be done by a Monte Carlo integration. For the term $`S`$, the factorization from Eq. 4 holds exactly. As for the phase-space-slicing method, the second term is integrated analytically. No technical cut-off has to be introduced (strictly speaking, this is not quite correct: a dimensionless cut $`t_{\text{cut}}`$ of the order of $`10^{-10}`$ to $`10^{-12}`$ is used to avoid phase space regions where the subtraction no longer works because of the finite precision of floating point numbers).

Both methods have their merits and their drawbacks. The phase-space-slicing method is technically simple and can be easily implemented once the matrix elements for the real and virtual corrections are known. The main problem is the residual dependence on the technical cut $`s_{\text{min}}`$. The independence of numerical results from variations of this cut has to be checked; moreover, the integration over the region $`R`$ mentioned above requires very high statistics, because the integration region is close to the singular limit. The subtraction method does not require a technical cut, but the construction of the subtraction term $`S`$ is usually quite involved. If this can be afforded, the subtraction method is the method of choice.

### 2.2 Program comparisons

It is interesting to compare the available universal Monte Carlo programs numerically to check whether all available calculations are consistent. Experimental papers usually contain statements that the programs “agree on the one per cent level”. A closer investigation, however, reveals that a statement of this kind is not correct. The three programs MEPJET, DISENT and DISASTER++ have been compared in Ref. for the modified JADE jet clustering algorithm in the E-scheme for several choices of physical and unphysical parton densities (by “unphysical” I mean parton densities of the form $`q(x)=(1-x)^\alpha `$ and $`g(x)=(1-x)^\alpha `$, where $`\alpha `$ is some power; these are introduced to have a more stringent test on the hard scattering matrix elements). The result is that DISENT 0.1 and DISASTER++ 1.0 agree well, with discrepancies of the MEPJET results. Presently this is studied in the framework of the HERA Monte Carlo workshop at DESY (http://home.cern.ch/~graudenz/heramc.html). There does not yet exist a systematic comparison of the JetViP program with MEPJET, DISENT and DISASTER++.
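Before turning to the matching of DIS and photoproduction, the standard one-dimensional toy model below may help to visualise the difference between the two methods of Section 2.1. The integrand is an arbitrary smooth test function, not an actual matrix element; x plays the role of the soft/collinear variable whose lower end produces the 1/ε pole that cancels against the virtual correction, and x_min mimics the s_min plateau check.

```python
import numpy as np
from scipy.integrate import quad

def F(x):
    # smooth test function standing in for |M|^2 times the observable
    return (1.0 + x) ** 2

# subtraction method: subtract the soft limit F(0) point by point;
# the finite remainder is I = int_0^1 dx/x [F(x) - F(0)]
I_sub, _ = quad(lambda x: (F(x) - F(0.0)) / x, 0.0, 1.0)

# phase-space slicing: integrate only above x_min and add the analytic soft
# piece F(0)*log(x_min); the result must become independent of x_min
for x_min in (1e-1, 1e-2, 1e-4, 1e-6):
    I_res, _ = quad(lambda x: F(x) / x, x_min, 1.0)
    I_slice = I_res + F(0.0) * np.log(x_min)
    print(x_min, I_slice, I_sub)          # I_slice -> I_sub as x_min -> 0
```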
## 3 Matching of DIS and photoproduction

For large photon virtuality $`Q^2`$, the coupling of the exchanged virtual photon in a lepton–nucleon scattering process is exclusively pointlike. Extending this kind of calculation down to $`Q^2\to 0`$ leads to the problem that the photon propagator diverges. Instead, it is possible to calculate the cross section for the scattering of a quasi-real photon and a nucleon, where the flux of quasi-real photons is described by a Weizsäcker–Williams approximation. For small $`Q^2`$, in addition to the cross section contribution from the pointlike coupling, a resolved contribution has to be added, because the quasi-real photon may fluctuate into a hadronic state, which in turn interacts strongly with the incident nucleon. This process is modelled by means of parton densities $`f_{i/\gamma }`$ of the virtual photon. The assumption of a photon structure is also required in order to treat collinear singularities arising from the splitting of the real photon via its pointlike coupling into a collinear quark-antiquark pair. This collinear singularity does not cancel against the virtual corrections, but is absorbed into the $`f_{i/\gamma }`$. Typically, the infrared singularities are regularized by dimensional regularization; the singularities then show up as poles in $`ϵ`$, where the space-time dimension is set to $`d=4-2ϵ`$. This type of calculation has recently been extended to the case of exchanged photons with moderate $`Q^2`$ in Refs. . Here, because $`Q^2`$ is finite, strictly speaking there is no collinear singularity, and therefore no poles in $`ϵ`$ related to the photon splitting arise. However, the integral over the phase space of the quark-antiquark pair yields a logarithm in $`Q^2`$. Because this logarithm may be large, and can therefore spoil perturbation theory, it has to be resummed. This is done by absorbing it into the redefined parton densities of the photon. The corresponding renormalization group equation then takes care of the resummation. Depending on the factorization scale for the virtual photon, the resolved contribution can be surprisingly large even for fairly large $`Q^2`$, compared with the “standard” DIS calculation for the pointlike coupling. This seems to be in contradiction with the statement that the resolved contribution should die out for increasing $`Q^2`$. There are two reasons for this: (a) The choice of the factorization scale $`\mu `$ for the resolved photon dictates the size of the resolved contribution. The parton density of the virtual photon is exactly zero for $`\mu =Q`$. The factorization scale employed in Refs. is $`\sqrt{Q^2+E_T^2}`$ ($`E_T`$ is the transverse energy of the produced jets), which makes sure that even at large $`Q^2`$ there is always a resolved contribution. (b) In the full NLO calculation, there are four different matrix elements which contribute: the direct process in LO and NLO (with the dangerous logarithm in $`Q^2`$ subtracted), and the resolved process in LO and NLO. It is expected that the sum of the first three processes reproduces the result of the standard calculations, and this is indeed the case: the logarithm that has been subtracted for the direct coupling is added up again via the parton density of the photon in the LO resolved contribution. The difference comes from the resolved contribution in NLO: the corresponding parton subprocess, which is of $`𝒪\left(\alpha _s^3\right)`$, is one order in $`\alpha _s`$ higher than the “standard” calculation. Thus, this contribution could be considered as part of the NNLO correction to the Born term for (2+1)-jet production. Differences between the two approaches are therefore expected.
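A rough numerical illustration of point (a), under the assumption (my reading of the scale choice quoted above) that the logarithm absorbed into the virtual-photon parton densities is ln(μ²/Q²): with μ² = Q² + E_T² this logarithm equals ln(1 + E_T²/Q²), which stays finite for any finite E_T, so the resolved contribution never switches off completely as Q² grows.

```python
import numpy as np

# Illustrative scan only: size of ln(mu^2/Q^2) for mu^2 = Q^2 + E_T^2.
for Q2 in (2.0, 10.0, 50.0, 200.0):        # GeV^2
    for ET2 in (25.0, 100.0):              # GeV^2
        print(Q2, ET2, np.log(1.0 + ET2 / Q2))
```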
## 4 One-particle-inclusive processes

The comparison of $`x_p`$-distributions (the variable $`x_p`$ is defined as the fraction $`2E/Q`$, where $`E`$ is the energy of an observed particle in the current hemisphere of the Breit frame) for charged particle production between experimental data and the NLO program CYCLOPS leads to severe discrepancies for small values of $`Q^2`$ or small $`x_p`$. For large $`Q^2`$ and large $`x_p`$, data and theory agree nicely. Where does this discrepancy come from? The theoretical prediction is made in the fragmentation function picture: the cross section for the inclusive production of charged particles is obtained by a convolution of the hard scattering cross section calculated in perturbative QCD with fragmentation functions which have been obtained from fits to $`e^+e^-`$ data. Fragmentation functions depend on the momentum fraction $`z`$ of the parent parton carried by the observed particle. An assumption in this picture is that the mass of the observed particle can be neglected relative to any other scale of the process, in particular relative to its momentum. The variable $`z`$ can thus be defined either by means of fractions of energies or fractions of momenta. In the real world, the observed particle has a mass, and this thus gives rise to an uncertainty in the theoretical description. It is clear that mass effects will be important if $`x_p=𝒪(2m_\pi /Q)`$, $`m_\pi `$ being a typical hadronic mass. It turns out that excluding data points with a value of $`x_p`$ close to or smaller than this leads to a good agreement between data and theory. A different argument in terms of rapidities of partons and observed particles has been given in Ref. . During the Durham workshop, a quantitative estimate of power corrections of order $`1/Q^2`$ to the fixed-order NLO prediction was made. Y. Dokshitser and B. Webber proposed a factor $`1/(1+4\mu ^2/(x_pQ)^2)`$, depending on a mass parameter $`\mu `$, to be multiplied with the NLO cross section; this factor, together with a fit of $`\mu `$, is able to describe the experimental data fairly well (see the contribution to these proceedings by P. Dixon, D. Kant, and G. Thompson).
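A minimal sketch of the proposed correction factor (the value of μ below is an arbitrary placeholder, not the fitted one):

```python
def power_correction(x_p, Q_GeV, mu_GeV=0.5):
    # multiplicative correction to the fixed-order NLO x_p distribution
    return 1.0 / (1.0 + 4.0 * mu_GeV**2 / (x_p * Q_GeV) ** 2)

for Q in (5.0, 10.0, 50.0):
    for x_p in (0.05, 0.2, 0.8):
        print(Q, x_p, power_correction(x_p, Q))
# the factor approaches 1 at large Q^2 or large x_p and suppresses the region
# x_p of order 2*m_pi/Q, where hadron-mass effects invalidate the NLO picture
```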
## 5 Summary

I have discussed three topics related to hadronic final states at HERA. For the basic processes, theoretical predictions are available in next-to-leading-order accuracy. Independent calculations permit the comparison of results, and a few problems with Monte Carlo programs have already been fixed. What is still missing are calculations for $`W`$ and $`Z`$-exchange in the subtraction formalism for jet cross sections. This is likely to become available in the near future. Moreover, a calculation for transverse momentum spectra of charged particles has not yet been done. The calculation for the transition region of DIS and photoproduction fills a gap in the theoretical description of lepton–nucleon scattering. However, it is not yet clear whether the parton densities for virtual photons are process-independent beyond NLO, such that they can be measured in one process and used for predictions in a different one. The necessity to introduce a power correction term for one-particle-inclusive distributions already at fairly large values of $`Q^2`$ shows that the calculation of fixed-order QCD corrections is not sufficient for a good description of experimental data. Unfortunately, a power correction term introduces an additional mass parameter, which cannot be calculated from first principles.

## 6 Acknowledgements

I would like to thank the organizers of the 3rd UK Phenomenology Workshop on HERA Physics for the invitation to Durham. St. John’s College, where the workshop took place, really is an amazing place. Discussions with P. Dixon, G. Kramer, B. Pötter, M. Seymour and G. Thompson are gratefully acknowledged.

## 7 References
no-problem/9903/cond-mat9903141.html
ar5iv
text
# Growth of nanostructures by cluster deposition : experiments and simple models ## I INTRODUCTION Growth of new materials with tailored properties is one of the most active research directions for physicists. As pointed out by Silvan Schweber in his brilliant analysis of the evolution of physics after World War II : ”An important transformation has taken place in physics : As had previously happened in chemistry, an ever larger fraction of the efforts in the field \[are\] being devoted to the study of novelty \[creation of new structures, new objects and new phenomena\] rather than to the elucidation of fundamental laws and interactions \[…\] Condensed matter physics has indeed become the study of systems that have never before existed.” Among these new materials, those presenting a structure controlled down to the nanometer scale are being extensively studied . There are different ways to build up nanostructured systems : atomic deposition , mechanical milling , chemical methods , gas-aggregation techniques …Each of these techniques has its own advantages, but, as happens with atomic deposition techniques, the requisites of control (in terms of characterization and flexibility) and efficiency (in terms of quantity of matter obtained per second) are generally incompatible. As a physicist wishing to understand the details of the processes involved in the building of these nanostructures, I will focus in this review on a carefully controlled method : low energy cluster deposition . Clusters are large ”molecules” containing typically from 10 to 2000 atoms, and have been studied for their specific physical properties (mostly due to their large surface to volume ratio) which are size dependent and different from both the atoms and the bulk material . By depositing preformed clusters on a substrate, one can build nanostructures of two types : in the submonolayer range, separated (and hopefully ordered) nanoislands, and for higher thicknesses, thin films or cluster assembled materials (CAM). The main advantage of the cluster deposition technique is that one can carefully control the building block (i.e. the cluster) and characterize the growth mechanisms. By changing the size of the incident clusters one can change the growth mechanisms and the characteristics of the materials. For example, it has been shown that by changing the mean size of the incident carbon clusters, one can modify the properties of the carbon film, from graphitic to diamond-like . This review is organized as follows. First, I present briefly the interest of nanostructures, both in the domain of nanoislands arranged on a substrate and as nanostructured, continuous films. I also review the different strategies employed to deposit clusters on a substrate : by accelerating them or by achieving their soft-landing. The scope of this section is to convince the reader that cluster deposition is a promising technique for nanostructure growth in a variety of domains, and therefore deserves a careful study. In Section III, models for cluster deposition are introduced. These models can also be useful for atomic deposition in some simple cases, namely when aggregation is irreversible. The models are adapted here to the physics of cluster deposition. In this case, reevaporation from the substrate can be important (as opposed to the usual conditions of Molecular Beam Epitaxy), cluster-cluster aggregation is always irreversible (as opposed to the possibility of bond breaking for atoms ) and particle-particle coalescence is possible. 
After a brief presentation of Kinetic Monte-Carlo (KMC) simulations, I show how the submonolayer regime can be studied in a wide variety of experimental situations : complete condensation, growth with reevaporation, nucleation on defects, formation of two and three dimensional islands …Since I want these models to be useful for experimentalists, Section V is entirely devoted to the presentation of a strategy on how to analyze experimental data and extract microscopic parameters such as diffusion and evaporation rates. I remind the reader that a simple software simulating all these situations is available at no cost on simple request to the author. Section VI analyzes in detail several experiments of cluster deposition. These studies serve as examples of the recipes given in Section V to analyze the data and also to demonstrate that clusters can have surprisingly large mobilities (comparable to atomic mobilities) on some substrates. A first interpretation of these intriguing results at the atomic level is given in Section VII, where the kinetics of cluster-cluster coalescence is also studied. The main results of this Section are that high cluster mobilities can be achieved provided the cluster does not find an epitaxial arrangement on the substrate and that cluster-cluster coalescence can be much slower than predicted by macroscopic theories. A note on terminology : The structures formed on the surface by aggregation of the clusters are called islands. This is to avoid the possible confusion with the terms usually employed for atomic deposition where the clusters are the islands formed by aggregation of atoms on the surface. Here, the clusters are preformed in the gas phase before deposition. I use coverage for the actual portion of the surface covered by the islands and thickness for the total amount of matter deposited on the surface (see also Table I). ## II INTEREST AND BUILDING OF NANOSTRUCTURES Before turning to the heart of this paper - the growth of nanostructures by cluster deposition - I think it is appropriate to show why one wants to obtain nanostructures at all and how these can be prepared experimentally. Due to the technological impetus, a tremendous amount of both experimental and theoretical work has been carried out in this field, and it is impossible to summarize every aspect of it here. For a recent and rather thorough review, see Ref. where the possible technological impact of nanostructures is also addressed. Actually, several journals are entirely devoted to this field . The reader is also referred to the enormous number of World Wide Web pages (about 6000 on nanostructures), especially those quoted in Ref. . A short summary of the industrial interest of nanostructures and introductory reviews on the interest of ”Nanoscale and ultrafast devices” or ”Optics of nanostructures” have appeared recently. There are two distinct (though related) domains where nanostructures can be interesting for applications. The first stems from the desire of miniaturization of electronic devices. Specifically, one would like to grow organized nanometer size islands with specific electronic properties. As a consequence, an impressive quantity of deposition techniques have been developed to grow carefully controlled thin films and nanostructures from atomic deposition . While most of these techniques are complex and keyed to specific applications, Molecular Beam Epitaxy (MBE) has received much attention from the physicists , mainly because of its (relative) simplicity. 
The second subfield is that of nanostructured materials , as thin or thick films, which show (mechanical, catalytic, optical) properties different from their microcrystalline counterparts . I will now briefly review the two subfields, since cluster deposition can be used to build both types of nanostructures. Moreover, some of the physical processes studied below (such as cluster-cluster coalescence) are of interest for both types of structure.

### A Organized nanoislands

There has been a growing interest in the fabrication of organized islands of nanometer dimensions. One of the reasons is the obvious advantage of miniaturizing electronic devices, both for device speed and density on a chip (for a simple and enjoyable introduction to the progressive miniaturization of electronic devices, see Ref. ). But it should be noted that at these scales, shrinking the size of the devices also changes their properties, owing to quantum confinement effects. Specifically, semiconductor islands smaller than the Bohr diameter of the bulk material (from several nm to several tens of nm) show interesting properties: as their size decreases, their effective bandgap increases. The possibility to tailor the electronic properties of a given material by playing on its size has generated a high level of interest in the field of these quantum dots . But quantum dots are not the only driving force for obtaining organized nanoislands. Isolated nanoparticles are also interesting as model catalysts (see Refs. and Chapter 12 of Ref. ). Clearly, using small particles increases the specific catalytic area for a given volume. More interestingly, particles smaller than 4-5 nm in diameter might show specific catalytic properties, different from the bulk , although the precise mechanisms are not always well identified (Chapter 12 of Ref. ). One possibility is the increase, for small particle sizes, of the proportion of low coordination atoms (corners, kinks), whose electronic (and therefore catalytic) properties are expected to be different from those of bulk atoms. For even smaller particles (1-2 nm), the interaction with the substrate can significantly alter their electronic properties . Recently, there have been attempts at organizing the isolated islands to test the consequences on the catalytic properties . Obtaining isolated clusters on a surface can also be interesting for studying their properties. For example, Schaefer et al. have deposited isolated gold clusters onto a variety of substrates to investigate the elastic properties of single nanoparticles by Atomic Force Microscopy (AFM). Let me now briefly turn to the possible ways of obtaining such organized nanoislands. Deposition of atoms on carefully controlled substrates is the main technique used presently by physicists to try to obtain a periodic array of nanometer islands of well-defined sizes. A striking example of organized nanoislands is given in Fig. 1. These triangular islands have been grown on the dislocation network formed by the second Ag atomic layer on Pt(111). Beautiful as these triangles are, they have to be formed by nucleation and growth on the substrate, and therefore the process is highly dependent on the interaction of the adatoms with the substrate (energy barriers for diffusion, possibility of exchange of adatoms and substrate atoms, …). This drastically limits the range of possible materials that can be grown by this method.
However, the growth of strained islands by heteroepitaxy is under active study, since stress is a force which can lead to order, and even a tunable order, as observed for example in the system $`PbSe/Pb_{1-x}Eu_xTe`$ (see Refs. for further details on stress). In this review, I will focus on an alternative approach to form nanoislands on substrates: instead of growing them by atom-atom aggregation on the substrate, a process which dramatically depends on the idiosyncrasies of the substrate and its interaction with the deposited atoms, one can prepare the islands (as free clusters) before deposition and then deposit them. It should be noted that the cluster structure can be extensively characterized prior to deposition by several in-flight techniques such as time-of-flight spectrometry, photo-ionization or fragmentation . Moreover, the properties of these building blocks can be adjusted by changing their size, which also affects the growth mechanisms, and therefore the film morphology . A clear example of the possibility to change the film morphology by varying only the mean cluster size was given a few years ago by Fuchs et al. (Fig. 2), and this study has been completed recently by Brechignac’s group for larger cluster sizes . There are several additional reasons for depositing clusters. First, these are grown in extreme nonequilibrium conditions, which makes it possible to obtain metastable structures or alloys. It is true that islands grown on a substrate are generally not in equilibrium either, but the quenching rate is very high in a beam, and the method is more flexible since one avoids the effects of nucleation and growth on a specific substrate. For example, PdPt alloy clusters - which are known to have interesting catalytic properties - can be prepared with a precise composition (corresponding to the composition of the target rod, see below) and variable size, and then deposited on a surface . The same is true for SiC clusters, where one can modify the electronic properties of the famous $`C_{60}`$ clusters by introducing Si atoms in a controlled way before deposition . This makes it possible to tune the properties of the films within a certain range by choosing the preparation conditions of the preformed clusters. It might also be anticipated that cluster nucleation is less sensitive to impurities than atomic nucleation. Atomic island growth can be dramatically affected by them, as exemplified by the celebrated case of the different morphologies of Pt islands grown on Pt(111), which were actually the result of CO contamination at an incredibly low level: $`10^{-10}`$ mbar . Instead, clusters, being larger entities, might interact less specifically with the substrate and its impurities. There is still no systematic way of organizing the clusters on a surface. One could try to pin them on selected sites such as defects, or to encapsulate the clusters with organic molecules before deposition in order to obtain ordered arrays on a substrate .

### B Nanostructured materials

Although my main focus in this review is the understanding of the first stages of growth, it is worth pointing out the interest of thicker nanostructured films (for a recent review of this field, see Ref. ).
The precise reasons for this are currently being investigated, but one can cite the presence of a significant fraction (more than 10 %) of atoms in configurations different from the bulk configuration, for example in grain boundaries . It is reasonable to suppose that both dislocation generation and mobility may become significantly difficult in nanostructured films . For example, recent studies of the mechanical deformation properties of nanocrystalline copper have shown that high strain can be reached before the appearance of plastic deformation. A review of the effects of nanostructuration on the mechanical response of solids is given by Weertman and Averback in chapter 13 of Ref. . Another interesting property of these materials is that their crystalline order is intermediate between that of the amorphous materials (first neighbor order) and of crystalline materials (long range order). It is given by the size of the crystalline cluster, which can be tuned. For example, for random magnetic materials, by varying the size of the clusters, and consequently of the ferromagnetic domain, one can study the models of amorphous magnetic solids . ### C How can one deposit clusters on surfaces? After detailing the potential interests of nanostructures, I now address the practical preparation methods by cluster deposition. Two main variants have been explored. Historically, the first idea has been to produce beams of accelerated (ionized) clusters and take advantage of the incident kinetic energy to enhance atomic mobility even at low substrate temperatures. This method does not lead in general to nanostructured materials, but to films similar to those obtained by atomic deposition, with sometimes better properties. A more recent approach is to deposit neutral clusters, with low energy to preserve their peculiar properties when they reach the surface. The limit between the two methods is roughly at a kinetic energy of 0.1 to 1 eV/atom. #### 1 Accelerated clusters The Japanese group of Kyoto University was the first to explore the idea of depositing clusters with high kinetic energies (typically a few keV) to form thin films . The basic idea of the Ionized Cluster Beam (ICB) technique is that the cluster breaks upon arrival and its kinetic energy is transferred to the adatoms which then have high lateral (i.e. parallel to the substrate) mobilities on the surface. This allows in principle to achieve epitaxy at low substrate temperatures, which is interesting to avoid diffusion at interfaces or other activated processes. Several examples of good epitaxy by ICB have been obtained by Kyoto’s group : Al/Si which has a large mismatch and many other couples of metals and ceramics on various crystalline substrates such as Si(100), Si(111) …Molecular Dynamics (MD) simulations have supported this idea of epitaxy by cluster spreading . The reader is referred to Yamada’s reviews for an exhaustive list of ICB applications, which also includes high energy density bombardment of surfaces to achieve sputter yields significantly higher than obtained from atomic bombardment . However, the physics behind these technological successes is not clear. In fact, the very presence of a significant fraction of large clusters in the beam seems dubious . There is some experimental evidence offered by Kyoto’s group to support the effective presence of a significant fraction of large clusters in the beam, but the evidence is not conclusive. In short, it is difficult to make a definite judgement about the ICB technique. 
There is no clear proof of the presence of clusters in the beam and the high energy of the incident particles renders difficult any attempt of modelling. Kyoto’s group has clearly shown that ICB does lead to good quality films in many cases but it is not clear how systematic the improvement is when compared to atomic deposition techniques. Haberland’s group in Freiburg has developed recently a different technique called Energetic Cluster Impact (ECI) where a better controlled beam of energetic clusters is deposited on surfaces . Freiburg’s group has shown that accelerating the clusters leads to improvements in some properties of the films : depositing slow clusters (energy per atom 0.1 eV) produces metal films which can be wiped off easily, but accelerating them before deposition (up to 10 eV per atom) results in strongly adhering films . MD simulations of cluster deposition have explained qualitatively this behavior : while low energetic clusters tend to pile up on the substrate leaving large cavities, energetic clusters lead to a compact film (Fig. 3). It is interesting to note that, even for the highest energies explored in the MD simulations (10 eV per atom), no atoms were ejected form the cluster upon impact. The effect of film smoothening is only due to the flattening of the cluster when it touches the substrate. Some caution on the interpretation of these simulations in needed because of the very short time scales which can be simulated (some ps). Similar MD simulations of the impact of a cluster with a surface at higher energies have also been performed . Recently, Palmer’s group has studied the interaction of Ag clusters on graphite for various incident kinetic energies (between 15 and 1500 eV). They have shown that, for small ($`Ag_3`$) clusters, the probability of a cluster penetrating the substrate or not critically depends on its orientation relative to the substrate. #### 2 Low energy clusters Another strategy to grow nanostructures with cluster beams consists in depositing low energy particles . Ideally, by depositing the clusters with low kinetic energies, one would like to conserve the memory of the free cluster phase to form thin films with original properties. Since the kinetic energy is of the order of 10 eV per cluster , i.e. a few meV per atom which is negligible compared to the binding energy of an atom in the cluster, no fragmentation of the clusters is expected upon impact on the substrate. Fig. 3 suggests that the films are porous , which is interesting to keep one of the peculiarities of the clusters : their high surface/volume ratio which affects all the physical (structural, electronic) properties as well as the chemical reactivity (catalysis). Concerning deposition of carbon clusters, experiments as well as simulations have shown that the carbon clusters preserve their identity in the thick film. Another interesting type of nanostructured film grown by cluster deposition is the growth of cermets by combining a cluster beam with an atomic beam of the encapsulating material . The point is that the size of the metallic particles is determined by the incident cluster size and the concentration by the ratio of the two fluxes. Then, these two crucial parameters can be varied independently, in contrast to the cermets grown from atomic beams and precipitation upon annealing. 
Cluster beams are generated by different techniques: Multiple Expansion Cluster Source (MECS, ), gas-aggregation …All these techniques produce a beam of clusters with a distribution of sizes, with a dispersion of about half the mean size. For simplicity, I will always refer to this mean size. In gas-aggregation techniques, an atomic vapor obtained from a heated crucible is mixed with an inert gas (usually Ar or He) and the two are cooled by adiabatic expansion, resulting in supersaturation and cluster formation. The mean cluster size can be monitored by the different source parameters (such as the inert gas pressure) and can be measured by a time of flight mass spectrometer. For further experimental details on this technique, see Refs. . To produce clusters of refractory materials, a different evaporation technique is needed: laser vaporization . A plasma, created by the impact of a laser beam focused on a rod of the material to be deposited, is thermalized by injection of a high pressure He pulse (typically, 3-5 bars during 150 to 300 $`\mu `$s), which permits the cluster growth. The mean cluster size is governed by several parameters such as the helium flow, the laser power and the delay time between the laser shot and the helium pulse. As a consequence of the pulsed laser shot, the cluster flux reaching the surface is not continuous but chopped. Typical values for the chopping parameters are: an active portion of the period of $`100\mu s`$ and a chopping frequency $`f=10Hz`$.

#### 3 Other approaches

Alternatively, one can deposit accelerated clusters onto a buffer layer which acts as a “mattress” to dissipate the kinetic energy. This layer is then evaporated, which leads to cluster soft-landing onto the substrate . The advantage of this method is that it is possible to select the mass of the ionized clusters before deposition. However, it is difficult with this technique to reach high enough deposition rates to grow films in reasonable times. Vitomirov et al. deposited atoms onto a rare-gas buffer layer: the atoms first clustered on top of and within the layer, which was afterwards evaporated, allowing the clusters to reach the substrate. Finally, deposition of clusters from a Scanning Tunneling Microscope (STM) tip has been shown to be possible, both theoretically and experimentally .

## III MODELS OF PARTICLE DEPOSITION

I describe in this section simple models which help to understand the first stages of film growth by low energy cluster deposition. These models can also be useful for understanding the growth of islands from atomic beams in the submonolayer regime in simple cases, namely (almost) perfect substrates, irreversible aggregation, etc., and they have made it possible to understand and quantify many aspects of the growth: for a review of analyses of atomic deposition with this kind of model, see Refs. . The models described below are similar to previous models of diffusing particles that aggregate, but such “cluster-cluster aggregation” (CCA) models do not incorporate the possibility of continual injection of new particles via deposition, an essential ingredient for thin film growth. Given an experimental system (substrate and cluster chemical nature), how can one predict the growth characteristics for a given set of parameters (substrate temperature, incoming flux of clusters …)? A first idea - the “brute-force” approach - would be to run a Molecular Dynamics (MD) simulation with ab-initio potentials for the particular system one wants to study.
It should be clear that such an approach is bound to fail since the calculation time is far too large for present-day computers. Even using empirical potentials (such as Lennard-Jones, Embedded Atom or Tight-Binding) will not do, because there is an intrinsically large time scale in the growth problem : the mean time needed to fill a significant fraction of the substrate with the incident particles. An estimate of this time is fixed by $`t_{ML}`$, the time needed to fill a monolayer : $`t_{ML}\sim 1/F`$ where $`F`$ is the particle flux expressed in monolayers per second (ML/s). Typically, the experimental values of the flux are lower than 1 ML/s, leading to $`t_{ML}\gtrsim 1`$ s. Therefore, there is a time span of about 13 decades between the typical vibration time ($`10^{-13}`$ s, the lower time scale for the simulations) and $`t_{ML}`$, rendering hopeless any "brute-force" approach.

There is a rigorous way of circumventing this time span problem : the idea is to "coarsen" the description by defining elementary processes, an approach somewhat reminiscent of the usual (length, energy) renormalization of particle physics . One "sums up" all the short time processes (typically, atomic thermal vibrations) into effective parameters (transition rates) valid for a higher level (longer time) description. I will now briefly describe this rigorous approach and then proceed to show how it can be adapted to cluster deposition.

### A Choosing the elementary processes

Voter showed that the interatomic potential for any system can be translated into a finite set of parameters, which then provides the exact dynamic evolution of the system. Recently, the same idea has been applied to Lennard-Jones potentials by using only two parameters. The point is that this coarse-grained, lattice-gas approach needs orders of magnitude less computer power than the MD dynamics described above. One can understand the basic idea with the following simple example : for the MD description of the diffusion of an atom by hopping, one has to follow its motion in detail at the picosecond scale, where the atom mainly oscillates at the bottom of its potential well. Only rarely on this time scale will the atom jump from site to site, which is what one is actually interested in. Voter showed that, provided some conditions are met concerning the separation of these two time scales, and restricting the motion to a regular (discrete) lattice (see for more details), one can replace this "useless" information by an effective parameter taking into account all the detailed motion of the atom within the well (including the correlations between the motions of the atom and its neighbors) and allowing a rapid evaluation of its diffusion rate.

Unfortunately, this rigorous approach is not useful for cluster deposition, because the number of atomic degrees of freedom (configurations) is too high. Instead, one chooses - from physical intuition - a "reasonable" set of elementary processes, whose magnitudes are used as free parameters. This allows one to understand the role of each of these elementary processes during the growth and then to fit their values from experiments (Fig. 4). These are the models which I will study in this paper, with precise examples of parameter fits (see section VI). Examples of such fits from experimental data for atomic deposition include homoepitaxial growth of GaAs(001) , of Pt(100) or of several metal(100) surfaces . Of course, fitting is not very reliable when there are too many (almost free) parameters.
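In practice, each such effective parameter is an activated rate of the Arrhenius form $`\nu =\nu _0\exp (-E_a/k_BT)`$, and a process only matters for the growth if it acts many times before a monolayer is completed. The following minimal sketch illustrates this bookkeeping; the attempt frequency, barriers and flux are purely illustrative assumptions and are not taken from any of the systems discussed in this review.

```python
import math

# Illustrative only: convert hypothetical activation barriers into rates
# nu = nu0 * exp(-E_a / kT) and compare the resulting frequency of events
# with the monolayer filling time t_ML ~ 1/F.
K_B = 8.617e-5          # Boltzmann constant (eV/K)
NU0 = 1.0e13            # attempt frequency (1/s), assumed
T = 300.0               # substrate temperature (K), assumed
F = 1.0e-2              # deposition flux (ML/s), assumed
t_ml = 1.0 / F          # time needed to fill one monolayer (s)

for e_a in (0.3, 0.6, 1.0):                  # hypothetical barriers (eV)
    rate = NU0 * math.exp(-e_a / (K_B * T))  # events per second
    events_per_ml = rate * t_ml              # how often the process acts per ML
    verdict = "relevant" if events_per_ml > 1.0 else "negligible"
    print(f"E_a = {e_a:.1f} eV -> rate = {rate:.2e}/s, "
          f"{events_per_ml:.2e} events per ML ({verdict})")
```

Processes whose rate is much smaller than $`1/t_{ML}`$ can thus be dropped from the coarse-grained description, which is the practical content of the hierarchy of time scales invoked below.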
An interesting alternative are intermediate cases, where parameters are determined from known potentials but with a simplified fitting procedure taking into account what is known experimentally of the system under study : see Ref. for a clear example of such a possibility. ### B Predicting the growth from the selected elementary processes To be able to adjust the values of the elementary processes from experiments, one must first predict the growth from these processes. The oldest way is to write ”rate-equations” which describe in a mean-field way the effect of these processes on the number of isolated particles moving on the substrate (called monomers) and islands of a given size. The first author to attempt such an approach for growth was Zinsmeister in 1966, but the general approach is similar to the rate-equations first used by Smoluchovsky for particle aggregation . In the seventies, many papers dealing with better mean-field approximations and applications of these equations to interpret experimental systems were published. The reader is referred to the classical reviews by Venables and co-workers and Stoyanov and Kaschiev for more details on this approach. More recently, there have been two interesting improvements. The first is by Villain and co-workers which have simplified enormously the mathematical treatment of the rate-equations, allowing one to understand easily the results obtained in a variety of cases . Pimpinelli et al. have recently published a summary of the application of this simplified treatment to many practical situations using a unified approach . The second improvement is due to Bales and Chrzan who have developed a more sophisticated self-consistent rate-equations approach which gives better results and allows to justify many of the approximations made in the past. However, these analytical approaches are mean-field in nature and cannot reproduce all the characteristics of the growth. Two known examples are the island morphology and the island size distribution . The alternative approach to predict the growth are Kinetic Monte-Carlo (KMC) simulations. KMC simulations are an extension of the usual Monte Carlo algorithm and provide a rigorous way of calculating the dynamical evolution of a complicated system where a large but finite number of random processes occur at given rates. KMC simulations are useful when one chooses to deal with only the slowest degrees of freedom of a system, these variables being only weakly coupled to the fast ones, which act as a heat bath . The ”coarsened” description of film growth (basically, diffusion) given above is a good example , but other applications of KMC simulations include interdiffusion in alloys, slow phase separations …The principle of KMC simulations is straightforward : one uses a list of all the possible processes together with their respective rates $`\nu _{pro}`$ and generates the time evolution of the system from these processes taking into account the random character of the evolution. For the simple models of film growth described below, systems containing up to 4000 x 4000 lattice sites can be simulated in a reasonable time (a few hours), which limits the finite size effects usually observed in this kind of simulation. Let me now discuss in some detail the way KMC simulations are implemented to reproduce the growth, once a set of processes has been defined, with their respective $`\nu _{pro}`$ taking arbitrary values or being derived from known potentials. 
There are two main points to discuss here : the physical correctness of the dynamics and the calculation speed.

Concerning the first point, it should be noted that, originally , Monte Carlo simulations aimed at the description of the equations of state of a system. Then, "the MC method performs a "time" averaging of a model with (often artificial) stochastic kinetics \[…\] : time plays the role of a label characterizing the sequential order of states, and need not be related to the physical times" . One should therefore be cautious about the precise Monte Carlo scheme used for the simulation when attempting to describe the kinetics of a system, as in KMC simulations. For example, there are doubts about some simulation work carried out using Kawasaki dynamics. This point is discussed in great detail in Ref. .

Let me now address the important problem of the calculation speed. One could naively think of choosing a time interval $`\mathrm{\Delta }t`$ smaller than all the relevant times in the problem, and then repeating the following procedure :

(1) choose one particle randomly

(2) choose randomly one of the possible processes for this particle

(3) calculate the probability $`p_{pro}`$ of this process happening during the time interval $`\mathrm{\Delta }t`$ ($`p_{pro}=\nu _{pro}\mathrm{\Delta }t`$)

(4) throw a random number $`p_r`$ and compare it with $`p_{pro}`$ : if $`p_r<p_{pro}`$ perform the process, if not go to the next step

(5) increase the time by $`\mathrm{\Delta }t`$ and go to (1)

This procedure leads to the correct kinetic evolution of the system but might be extremely slow if there is a large range of probabilities $`p_{pro}`$ for the different processes (and therefore some $`p_{pro}\ll 1`$). The reason is that a significant fraction of the loops leads to rejected moves, i.e. to no evolution at all of the system. Instead, Bortz et al. have proposed a clever approach to eliminate all the rejected moves and thus dramatically reduce the computational times. The point is to choose not the particles but the processes, according to their respective rates and the number of possible ways of performing each process (called $`\mathrm{\Omega }_{pro}`$). This procedure can be represented schematically as follows :

(1) update the list of the possible ways of performing the processes $`\mathrm{\Omega }_{pro}`$

(2) randomly select one of the processes, weighting the probability of selection by the process rate $`\nu _{pro}`$ and $`\mathrm{\Omega }_{pro}`$ : $`p_{pro}=(\nu _{pro}\mathrm{\Omega }_{pro})/\left(\sum _{processes}\mathrm{\Omega }_{pro}\nu _{pro}\right)`$

(3) randomly select a particle for performing this process

(4) move the particle

(5) increase the time by $`dt=\left(\sum _{processes}\mathrm{\Omega }_{pro}\nu _{pro}\right)^{-1}`$

(6) go to (1)

A specific example of such a scheme for cluster deposition is given below (Section III C). Note that the new procedure implies a less intuitive increment of time, and that one has to create (and constantly update) a list of all the $`\mathrm{\Omega }_{pro}`$, but the acceleration of the calculations is worth the effort. A serious limitation of KMC approaches is that one has to assume a finite number of local environments (to obtain a finite number of parameters) : this confines KMC approaches to regular lattices, thus preventing a rigorous consideration of elastic relaxation, stress effects …everything that affects not only the number of first or second nearest neighbors but also their precise position.
Indeed, considering the precise position as in MD simulations introduces a continuous variable and leads to an infinite number of possible configurations or processes. Stress effects can be introduced approximately in KMC simulations, for example by allowing a variation of the bonding energy of an atom to an island as a function of the island size (the stress depending on the size), but it is unclear how meaningful these approaches are (see also Refs. ). I should quote here a recent proposition inspired on the old Frenkel-Kontorova model which allows to incorporate some misfit effects in rapid simulations. It remains to explore whether such an approach could be adapted to the KMC scheme. ### C Basic elementary processes for cluster growth What is likely to occur when clusters are deposited on a surface ? I will present here the elementary processes which will be used in cluster deposition models : deposition, diffusion and evaporation of the clusters and their interaction on the surface (Figs. 5 and 6). The influence of surface defects which could act as traps for the particles is also addressed. A simple physical rationale for choosing only a limited set of parameters is the following (see Fig. 7). For any given system, there will be a ”hierarchy” of time scales, and the relevant ones for a growth experiment are those much lower than $`t_{ML}1/F`$. The others are too slow to act and can be neglected. The hierarchy of time scales (and therefore the relevant processes) depend of course on the precise system under study. It should be noted that for cluster deposition the situation is somewhat simpler than for atom deposition since many elementary processes are very slow. For example, diffusion of clusters on top of an already formed island is very low , cluster detachment from the islands is insignificant and edge diffusion is not an elementary process at all since the cluster cannot move as an entity over the island edge (as I will discuss in section VII B, the equivalent process is cluster-cluster coalescence by atomic motion). Let me now discuss in detail each of the elementary processes useful for cluster deposition. The first ingredient of the growth, deposition, is quantified by the flux $`F`$, i.e. the number of clusters that are deposited on the surface per unit area and unit time. The flux is usually uniform in time, but in some experimental situations it can be pulsed, i.e. change from a constant value to 0 over a given period. Chopping the flux can affect the growth of the film significantly , and I will take this into account when needed (Section VI B 3). The second ingredient is the diffusion of the clusters which have reached the substrate. I assume that the diffusion is brownian, i.e. the particle undergoes a random walk on the substrate. To quantify the diffusion, one can use both the usual diffusion coefficient $`D`$ or the diffusion time $`\tau `$, i.e. the time needed by a cluster to move by one diameter. These two quantities are connected by $`D=d^2/(4\tau )`$ where $`d`$ is the diameter of the cluster. Experiments show that the diffusion coefficient of a cluster can be surprisingly large, comparable to the atomic diffusion coefficients. The diffusion is here supposed to occur on a perfect substrate. Real surfaces always present some defects such as steps, vacancies or adsorbed chemical impurities. The presence of these defects on the surface could significantly alter the diffusion of the particles and therefore the growth of the film. 
I will include here one simple kind of defect, a perfect trap for the clusters which permanently prevents them from moving.

A third process which could be present in growth is re-evaporation of the clusters from the substrate after a time $`\tau _e`$. It is useful to define $`X_S=\sqrt{D\tau _e}`$, the mean diffusion length on the substrate before desorption.

The last simple process I will consider is the interaction between the clusters. The simplest case is when aggregation is irreversible and particles simply remain juxtaposed upon contact. This occurs at low temperatures. At higher temperatures, cluster-cluster coalescence will be active (Fig. 6). Thermodynamics teaches us that coalescence should always happen, but without specifying the kinetics. Since many clusters are deposited on the surface per unit time, kinetics is crucial here to determine the shape of the islands formed on the substrate. A complete understanding of the kinetics is still lacking, for reasons that I will discuss later (Section VII B). I note that the shape of the clusters and the islands on the surface need not be perfectly spherical, even in the case of total coalescence. Their interaction with the substrate can lead to half spheres or even flatter shapes depending on the contact angle. Contrary to what happens for atomic deposition, a cluster touching an island forms a huge number of atom-atom bonds and will not detach from it. Thus, models including reversible particle-particle aggregation are not useful for cluster deposition.

The specific procedure to perform a rapid KMC simulation of a system (linear size L) when deposition, diffusion and evaporation of the monomers are included is the following (a minimal sketch of this loop is given below). The processes are : deposition of a particle ($`\nu _{depo}=F`$, $`\mathrm{\Omega }_{depo}=L^2`$ (it is possible to deposit a particle on each site of the lattice)), diffusion of a monomer ($`\nu _{diff}=1/\tau `$, $`\mathrm{\Omega }_{diff}=\rho L^2`$ where $`\rho `$ is the monomer density on the surface) and evaporation of a monomer ($`\nu _{evap}=1/\tau _e`$, $`\mathrm{\Omega }_{evap}=\rho L^2`$). For each loop, one calculates two quantities, $`p_{drop}=F/(F+\rho (\frac{1}{\tau _e}+\frac{1}{\tau }))`$ and $`p_{dif}=(\rho /\tau )/(F+\rho (\frac{1}{\tau _e}+\frac{1}{\tau }))`$. Then, one throws a random number p ($`0<p<1`$) and compares it to $`p_{drop}`$ and $`p_{dif}`$. If $`p<p_{drop}`$, a particle is deposited at a random position; if $`p>p_{drop}+p_{dif}`$, a monomer (randomly selected) is removed; otherwise a randomly chosen monomer is moved. After each of these possibilities, one checks whether an aggregation has taken place (which modifies the number of monomers on the surface, and therefore the number of possible diffusion or evaporation moves), increases the time by $`dt=1/(FL^2+\rho L^2(\frac{1}{\tau _e}+\frac{1}{\tau }))`$ and goes to the next loop.

The usual game for theoreticians is to combine these elementary processes and predict the growth of the film. However, experimentalists are interested in the reverse strategy : from (a set of) experimental results, they wish to understand which elementary processes are actually present in their growth experiments and what the magnitude of each of them is, which is what physicists call understanding a phenomenon. The problem, of course, is that with so many processes, many combinations will reproduce the same experiments (see specific examples below). Then, some clever guesses are needed to first identify which processes are present.
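The following minimal Python sketch implements the loop just described. It is only an illustration under simple assumptions (square lattice with periodic boundaries, pure juxtaposition, only monomers mobile, deposition onto an already occupied site simply rejected); the values of L, F, $`\tau `$ and $`\tau _e`$ are arbitrary and chosen for illustration, not fitted to any experiment.

```python
import random

L = 200                      # lattice size (sites per side), illustrative
F = 1e-4                     # flux (clusters per site per unit time), illustrative
TAU, TAU_E = 1.0, 1e4        # diffusion and evaporation times, illustrative
COVERAGE_MAX = 0.10          # stop after roughly 0.1 ML has landed

occupied = [[False] * L for _ in range(L)]   # any particle (monomer or island)
monomers = set()                             # positions of isolated, mobile clusters
t, landed = 0.0, 0

def neighbors(x, y):
    return [((x + 1) % L, y), ((x - 1) % L, y), (x, (y + 1) % L), (x, (y - 1) % L)]

def aggregate_if_touching(x, y):
    """Irreversible aggregation: a monomer touching an occupied site sticks forever."""
    if any(occupied[i][j] for i, j in neighbors(x, y)):
        monomers.discard((x, y))
        for i, j in neighbors(x, y):         # touching monomers stop moving too
            monomers.discard((i, j))

while landed < COVERAGE_MAX * L * L:
    rho = len(monomers) / (L * L)            # monomer density
    total = F + rho * (1.0 / TAU_E + 1.0 / TAU)
    p_drop = F / total
    p_dif = (rho / TAU) / total
    p = random.random()
    if p < p_drop:                           # deposition on a random site
        x, y = random.randrange(L), random.randrange(L)
        if not occupied[x][y]:
            occupied[x][y] = True
            monomers.add((x, y))
            landed += 1
            aggregate_if_touching(x, y)
    elif p < p_drop + p_dif and monomers:    # diffusion of a random monomer
        x, y = random.choice(tuple(monomers))
        nx, ny = random.choice(neighbors(x, y))
        if not occupied[nx][ny]:
            occupied[x][y] = False
            monomers.discard((x, y))
            occupied[nx][ny] = True
            monomers.add((nx, ny))
            aggregate_if_touching(nx, ny)
    elif monomers:                           # evaporation of a random monomer
        x, y = random.choice(tuple(monomers))
        occupied[x][y] = False
        monomers.discard((x, y))
    t += 1.0 / (F * L * L + rho * L * L * (1.0 / TAU_E + 1.0 / TAU))

print(f"time = {t:.1f}, landed coverage = {landed / (L * L):.3f}, "
      f"monomers left = {len(monomers)}")
```

Counting the islands produced by such a loop as a function of the deposited thickness should reproduce, at least qualitatively, the kind of kinetics shown in Fig. 8.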
If, for example, the saturation island density does not change when the flux (or the substrate temperature) is varied, one can guess that nucleation mostly occurs on defects of the surface. In view of these difficulties, the next section is devoted to predicting the growth when the microscopic processes (and their values) are known. Then, in Section V, I propose a detailed procedure to identify and quantify the microscopic processes from the experiments. Finally, Section VI reviews the experimental results obtained for cluster deposition.

## IV PREDICTING GROWTH WITH COMPUTER SIMULATIONS

The scope of this section is to find formulas or graphs to deduce the values of the microscopic processes (diffusion, evaporation …) from the observed experimental quantities (island density, island size histograms …). The "classical" studies have focused on the evolution of the concentration of islands on the surface as a function of time, and especially on the saturation island density, i.e. the maximum of the island density observed before reaching a continuous film. The reason is of course that this quantity can both be calculated from rate-equations and measured experimentally by conventional microscopy. I will show other interesting quantities such as island size distributions which are measurable experimentally and have been recently calculated by computer simulations .

I will study the two limiting cases of pure juxtaposition and total coalescence (which are similar to two and three dimensional growth in atomic deposition terminology) separately. Experimentally, the distinction between the two cases can be made by looking at the shape of the supported islands : if they are circular (and larger than the incident clusters) they have been formed by total coalescence; if they are ramified, by pure juxtaposition (see several examples below, section VI). In both cases, I analyze how the growth proceeds when different processes are at work : diffusion, evaporation, defects acting as traps, island mobility …In the simulations, I often take the diffusion time $`\tau `$ to be the unit time : in this case, the flux is equivalent to the normalized flux $`\varphi `$ (see the Table) and the evaporation time corresponds to $`\tau _e/\tau `$. The growth is characterized by the kinetics of island formation, the value of the island concentration at saturation $`N_{sat}`$ (i.e. the maximum value reached before island-island coalescence becomes important) and the corresponding values of the thickness $`e_{sat}`$ and condensation coefficient $`C_{sat}`$, useful when evaporation is important (the condensation coefficient is the ratio of the matter actually present on the substrate over the total number of particles sent onto the surface (also called the thickness $`e=Ft`$), see Table I). I also give the island size distributions corresponding to each growth hypothesis. These have proven useful as a tool for experimentalists to distinguish between different growth mechanisms . By the size of an island, I mean the surface it occupies on the substrate. For "two dimensional" islands (i.e. formed by pure juxtaposition), this is the same as the island mass, i.e. its number of monomers. For "three dimensional" islands (formed by total coalescence), their projected surface is the easiest quantity to measure by microscopy. It should be noted that for three dimensional islands, their projected surface for a given mass depends on their shape, which is assumed here to be pyramidal (close to a half-sphere).
It has been shown that by normalizing the size histograms, one obtains a "universal" size distribution independent of the coverage, the flux or the substrate temperature over a large range of their values.

### A Pure juxtaposition : growth of one cluster thick islands

I first study the formation of the islands in the limiting case of pure juxtaposition. This is done for several growth hypotheses. The rate-equations treatment is given in Appendix A.

#### 1 Complete condensation

Let me start with the simplest case where only diffusion takes place on a perfect substrate (no evaporation). Fig. 8a shows the evolution of the monomer (i.e. isolated clusters) and island densities as a function of deposition time. We see that the monomer density rapidly grows, leading to a rapid increase of the island density by monomer-monomer encounter on the surface. This goes on until the islands occupy a small fraction of the surface, roughly 0.1% (Fig. 9a). Then, islands efficiently capture the monomers, whose density decreases. As a consequence, it becomes less probable to create more islands, and their number increases more slowly. When the coverage reaches a value close to 15% (Fig. 9b), coalescence starts to decrease the number of islands. The maximum number of islands at saturation $`N_{sat}`$ is thus reached for coverages around 15%. Concerning the dependence of $`N_{sat}`$ on the model parameters, it has been shown that the maximum number of islands per unit area formed on the surface scales as $`N_{sat}\propto (F/D)^{1/3}`$ . Recent simulations and theoretical analysis have shown that the precise relation is $`N_{sat}=0.53(F\tau )^{0.36}`$ for the ramified islands produced by pure juxtaposition (Fig. 10). It should be noted that if cluster diffusion is vanishingly small, the above relation does not hold : instead, film growth proceeds as in the percolation model , by random paving of the substrate. An experimental example of such a situation has been given in Ref. .

#### 2 Evaporation

What happens when evaporation is also included ? Fig. 8b shows that now the monomer density becomes roughly a constant, since it is now mainly determined by the balance of deposition and evaporation. As expected, the constant concentration equals $`F\tau _e`$ (solid line). Then the number of islands increases linearly with time (the island creation rate is roughly proportional to the square of the monomer concentration, see Appendix A). One can also notice that only a small fraction (1/100) of the monomers effectively remain on the substrate, as shown by the low condensation coefficient value at early times. This can be understood by noting that the islands grow by capturing only the monomers that are deposited within their "capture zone" (the region comprised between two circles of radii $`R`$ and $`R+X_S`$). The other monomers evaporate before reaching the islands. When the islands occupy a significant fraction of the surface, they rapidly capture the monomers. This has two effects : the monomer density starts to decrease, and the condensation coefficient starts to increase. Shortly after, the island density saturates and starts to decrease because of island-island coalescence. Fig. 10 shows the evolution of the maximum island density in the presence of evaporation. A detailed analysis of the effect of monomer evaporation on the growth is given in Ref.
, which also discusses the regime of "direct impingement" arising when $`X_S\lesssim 1`$ : islands are formed by direct impingement of incident clusters as first neighbors of previously adsorbed clusters, and grow by direct impingement of clusters on the island boundary. A summary of the results obtained in the various regimes spanned as the evaporation time $`\tau _e`$ decreases is given in Appendix A.

#### 3 Defects

I now briefly treat the influence of a very simple kind of defect : a perfect trap for the diffusing particles. If a particle enters such a defect site, it becomes trapped at this site forever. If such defects are present on the surface they will affect the growth of the film only if their number is higher than the number of islands that would have been created without defects (for the same values of the parameters). If this is indeed the case, monomers will be trapped by the defects at the very beginning of the growth and the number of islands equals the number of defects, whatever the diffusivity of the particles. The kinetics of island formation is dramatically affected by the presence of defects, the saturation density being reached almost immediately (Fig. 11).

#### 4 Island mobility

The consequences of the mobility of small islands have not received much attention. One reason is that it is difficult to include island mobility in rate-equations treatments. A different (though related!) reason is that (atomic) islands are expected to be almost immobile in most homoepitaxial systems. However, several studies have shown the following consequences of island mobility for the pure juxtaposition case and in the absence of evaporation. First, the saturation island density is changed : one obtains $`N_{sat}=0.3(F/D)^{0.42}`$ (Fig. 10) if all islands are mobile, with a mobility inversely proportional to their size . Second, the saturation island density is reached for very low coverages (Fig. 11 and Ref. ). This can be explained by a dynamical equilibrium between island formation and coalescence taking place at low coverages thanks to island diffusion. If only monomers are able to move, islands can coalesce (static coalescence) only when the coverage is high enough (roughly 10-15%, ). Then, the saturation island density is reached in this case for those coverages. Instead, when islands can move, the so-called dynamical coalescence starts from the beginning of the growth and the balance is established at very low coverages . Third, the island size distribution is sharpened by the mobility of the islands . To my knowledge, there is no prediction concerning the growth of films with evaporation when islands are mobile.

#### 5 Island size distributions

Fig. 12 shows the evolution of the rescaled island size distributions as a function of the evaporation time for islands formed by juxtaposition . Size distributions are normalized by the mean island size in the following way : one defines $`p(s/s_m)=n_s/N_t`$ as the probability that a randomly chosen island has a surface $`s`$ when the average surface per island is $`s_m=\theta /N_t`$, where $`n_s`$ stands for the number of islands of surface $`s`$, $`N_t`$ for the total number of islands and $`\theta `$ for the coverage of the surface (a minimal sketch of this rescaling is given below). It is clear that the distributions are significantly affected by the evaporation, smaller islands becoming more numerous when evaporation increases.
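In practice, the rescaled distributions of Figs. 12, 14 and 15 can be built directly from the island areas measured on micrographs. The short sketch below does this for a list of areas, following the normalization $`p(s/s_m)=n_s/N_t`$ defined above; the data set used here is randomly generated and purely illustrative, standing in for the areas one would extract from real images.

```python
import random
from collections import Counter

# Illustrative data: in a real analysis `areas` would hold the projected
# island areas (in sites or nm^2) measured on a micrograph.
random.seed(0)
areas = [random.expovariate(1.0 / 40.0) + 1.0 for _ in range(500)]

n_islands = len(areas)              # N_t, total number of islands
s_mean = sum(areas) / n_islands     # s_m, average surface per island

# Histogram of the reduced size s/s_m; p = n_s / N_t as defined in the text.
bin_width = 0.25
counts = Counter(int((s / s_mean) / bin_width) for s in areas)
for k in sorted(counts):
    reduced_size = (k + 0.5) * bin_width
    p = counts[k] / n_islands
    print(f"s/s_m = {reduced_size:4.2f}   p = {p:.3f}")
```

Plotting p against s/s_m for data taken at several coverages, fluxes or temperatures should collapse the histograms onto a single curve whenever the rescaling discussed above holds.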
This trend can be qualitatively understood by noting that new islands are created continuously when evaporation is present, while nucleation rapidly becomes negligible in the complete condensation regime. The reason is that islands are created (spatially) homogeneously in the last case, because the positions of the islands are correlated (through monomer diffusion), leaving virtually no room for further nucleation once a small portion of the surface is covered ($`\theta 0.05`$). In the limit of strong evaporation, islands are nucleated randomly on the surface, the fluctuations leaving large regions of the surface uncovered. These large regions can host new islands even for relatively large coverages, which explains that there is a large proportion of small ($`s<s_m`$) islands in this regime. ### B Total coalescence : growth of three dimensional islands If clusters coalesce when touching, the results are slightly different from those given in the preceding section, mainly because the islands occupy a smaller portion of the substrate at a given thickness. Therefore, in the case of complete condensation for example, saturation arises at a higher thickness (Fig. 11) even if the coverage is approximately the same (matter is ”wasted” in the dimension perpendicular to the substrate). However, the main qualitative characteristics of the growth correspond to those detailed in the preceding section. Fig. 13 shows the evolution of the maximum island density in that case, where the three-dimensional islands are assumed to be roughly half spheres (actually, pyramids were used in these simulations which were originally intended for atomic deposition ). The analytical results obtained from a rate-equations treatment are given in Appendix B. If the islands are more spherical (i.e. the contact angle is higher), a simple way to adapt these results on the kinetic evolution of island concentration (Fig. 11) is to multiply the thickness by the appropriate form factor, 2 for a sphere for example. Indeed, if islands are spherical, the same coverage is obtained for a thickness double than that obtained for the case of half-spheres (there are two identical half spheres). This is a slight approximation since one has to assume that the capture cross section (which governs the growth) is identical for the two shapes : this is not exactly true but is a very good approximation. Fig 14 shows the evolution of the rescaled island size distributions for three dimensional islands (pyramids) in presence of evaporation. I recall that size means here the projected surface of the island, a quantity which can be measured easily by electron microscopy. We note the same trends as for the pure juxtaposition case. Fig 15 shows the evolution of the rescaled island size distributions for pyramidal islands nucleating on defects. Two main differences can be noted. First, the histograms are significantly narrower than in the preceding case, as had already been noted in experimental studies . This can be understood by noting that all islands are nucleated at almost the same time (at the very beginning of growth). The second point is that the size distributions are sensitive to the actual coverage of the substrate, in contrast with previous cases. In other words, there is no perfect rescaling of the data obtained at different coverages, even if rescaling for different fluxes or diffusion times has been checked. ### C Other growth situations I briefly address in this paragraph other processes which have not been analyzed here. 
A possible (but difficult to study) process is a long range interaction between particles (electrostatic or mediated by the substrate). There is some experimental evidence of this kind of interaction for the system Au/KCl(100) but, to my knowledge, it has never been incorporated in growth models. Chemical impurities adsorbed on the substrate can change the growth in conventional vacuum, and these effects are extremely difficult to understand and control . Of course, many other possible processes have not been addressed in this review, such as the influence of strain, or of extended defects such as steps or vacancy islands …

## V HOW TO ANALYZE EXPERIMENTAL DATA

Figures 10, 13 constitute in some sense "abacuses" from which one can determine the value of the microscopic parameters (diffusion, evaporation) if the saturation island density is known. The problem is : does the measured island density correspond to the defect concentration of the surface or to homogeneous nucleation? If the latter is true : which curve should be used to interpret the data? In other words, is evaporation present in the experiments and what is the magnitude of $`\tau _e`$? I will now give some tricks to first find out which processes are relevant and then how they can be quantified.

Let's concentrate first on the presence of defects. One possibility is to look at the evolution of $`N_{sat}`$ with the flux. As already explained, if this leaves the saturation density unaffected, nucleation is occurring on defects. A similar test can be performed by changing the substrate temperature, but there is the nagging possibility that this changes the defect concentration on the surface. It is also possible to study the kinetics of island nucleation, i.e. look at the island concentration as a function of thickness or coverage. The presence of defects can be detected by the fact that the maximum island density is reached at very low coverages (typically less than 1%, see Fig. 11) and/or by the fact that the nucleation rate (i.e. the derivative of the island density) scales as the flux and not as the square of the flux : see Section 3 of Ref. for more details. One should be careful however to check that all the islands, even those containing a few particles, are visible in the microscope images. This is a delicate point for atomic deposition but should be less restrictive for clusters since each cluster already has a diameter typically larger than a nanometer. Of course all this discussion assumes that the defects are of the "ideal" kind studied here, i.e. perfect traps. If particles can escape from the defects after some time, the situation is changed, but I am unaware of studies on this question.

The question of evaporation is more delicate. First, one should check whether particle re-evaporation is important. In principle, this can be done by measuring the condensation coefficient, i.e. the amount of matter present on the surface as a function of the amount of matter brought by the beam. If possible, this measure leaves no ambiguity. Otherwise, the kinetics of island creation is helpful. If the saturation is reached at low thicknesses ($`e_{sat}\lesssim 0.5`$ ML), this means that evaporation is not important. Another way of detecting particle evaporation is by studying the evolution of the saturation island density with the flux : in the case of 2D growth (Fig. 10), the exponent is 0.36 when evaporation is negligible but roughly 0.66 when evaporation significantly affects the growth .
There are similar differences for 3d islands : the exponent changes from 0.29 to 0.66 (Fig. 13). Suppose now that one finds that evaporation is indeed important : before being able to use Fig. 10 or Fig. 13, one has to know the precise value of $`\tau _e`$. One way to find out is to make a precise fit of the kinetic evolution of the island density or the condensation coefficient (see Section VI B 2 for an example). In the next paragraph, I show how to find $`\tau _e`$ if one knows only the saturation values of the island density and the thickness.

As a summary, here is a possible experimental strategy to analyze the growth. First, get a series of micrographs of submonolayer films as a function of the thickness. The distinction between the pure juxtaposition and total coalescence cases can easily be made by comparing the size of the supported islands to the (supposedly known) size of the incident clusters. Also, if the islands are spherical, this means that coalescence has taken place; if they are ramified, that the clusters have only juxtaposed upon contact. Of course, all the intermediate cases are possible (see the case of gold clusters below). One can calculate the ratio of deposited thickness over the coverage : if this ratio is close to 1, islands are flat (i.e. one cluster thick), otherwise three dimensional (unless there is evaporation). From these micrographs, it is possible to measure the island density as a function of the thickness. Fig. 11 should now be helpful to distinguish between the different growth mechanisms. For example, if the saturation island density is obtained for large thicknesses (typically more than 1 ML), then evaporation is certainly relevant and trying to measure the condensation coefficient is important to confirm this point.

It is clear from Figs. 10, 13 that the knowledge of $`N_{sat}`$ alone cannot determine $`\tau _e`$ since many values of $`\tau `$ and $`\tau _e`$ can lead to the same $`N_{sat}`$. In the 2D case, the values of the microscopic parameters can be obtained by noting that the higher the evaporation rate, the higher the amount of matter "wasted" for film growth (i.e. re-evaporated). One therefore expects that the smaller $`\tau _e`$, the higher $`e_{sat}`$, which is confirmed by Fig. 16a. Therefore, from the (known) value of $`e_{sat}`$, one can determine the value of the evaporation parameter $`\eta =F\tau X_S^6`$ (Fig. 16a). Once $`\eta `$ is known, $`X_S`$ is determined from Fig. 16b since $`N_{sat}`$ is known. $`F\tau `$ can afterwards be determined (from $`X_S`$ and $`\eta `$). This is only valid for $`X_S\gg 1`$ , a condition always fulfilled in experiments. The 3d case is more difficult since the same strategy (measuring $`N_{sat}`$ and $`e_{sat}`$) fails. The reason is that in the limit of high evaporation, $`e_{sat}`$ goes as $`e_{sat}\propto N_{sat}^{-1/2}`$, thus bringing no independent information on the parameters . The same is true for the condensation coefficient at saturation $`C_{sat}`$, which is a constant, i.e. independent of the value of $`\tau _e`$ or the normalized flux (see Fig. 17b). This counterintuitive result (one would think that the higher the evaporation rate, the smaller the condensation coefficient at saturation) can be understood by noting that in this limit, islands only grow by direct impingement of particles within them and therefore $`X_S`$ (or $`\tau _e`$) has no effect on the growth.
Fortunately, in many experimental situations, the limit of high evaporation is not reached and one ”benefits” from (mathematical) crossover regimes where these quantities do depend on the precise values of $`\tau _e`$. Figs. 17 give the evolutions of $`C_{sat}`$ and $`e_{sat}`$ as a function of $`N_{sat}`$ for different values of $`\tau _e`$ and F. Then, knowing $`e_{sat}`$ and $`N_{sat}`$ leads to an estimation of $`\tau _e`$ from Fig. 17a which can be confirmed with Fig. 17b provided $`C_{sat}`$ is known. To conclude, let me note that a saturation thickness much smaller than 1ML can also be attributed to island mobility. This is a subtle process and it is difficult to obtain any information on its importance. We note that interpreting data as not affected by island diffusion when it is actually present leads to errors on diffusion coefficients of one order of magnitude or more depending on the value of $`F\tau `$ (see Fig. 10). Finally, one should be careful in interpreting the $`N_t`$ vs. thickness curves since most observations are not made in real time (as in the computer simulations) and there can be post-deposition evolutions (see for example Ref. for such complications in the case of atomic deposition). ## VI EXPERIMENTAL RESULTS I review in this section the experimental results obtained these last years for low-energy cluster deposition, mainly in the submonolayer regime. The scope is double : first, I want to give some examples on how to analyze experiments (as indicated in Section V) and second, I will show that from a comparison of experiments and models one can deduce important physical quantities characterizing the interaction of a cluster with a surface (cluster diffusivity) and with another cluster (coalescence). The following can be read with profit by those interested only in atomic deposition as examples of interpretation since these elementary processes are relevant for some cases of atomic deposition. One should be careful that some mechanisms which are specific to atomic deposition (transient mobility, funnelling, …) are not discussed here (see ). Also, growth without cluster diffusion has to be interpreted in the framework of the percolation model as indicated above . Before analyzing experimental data, it is important to know how to make the connection between the units used in the programs and the experimental ones (see also Table I). In the program, the unit length is the diameter of a cluster. In the experiments, it is therefore convenient to use as a surface unit the site, which is the projected surface of a cluster $`\pi d^2/4`$ where $`d`$ is the mean incident cluster diameter. The flux is then expressed as the number of clusters reaching the surface per second per site (which is the same as ML/s) and the island density is given per site. The thickness is usually computed in cluster monolayers (ML), obtained by multiplying the flux by the deposition time. The coverage - the ratio of the area covered by the supported islands over the total area - has to be measured on the micrographs. ### A A simple case : $`Sb_{2300}`$ clusters on graphite HOPG I start with the case of antimony clusters containing 2300 ($`\pm 600`$) atoms deposited on graphite HOPG since here the growth has been thoroughly investigated . I first briefly present the experimental procedure and then the results and their interpretation in terms of elementary processes. 
#### 1 Experimental procedure

As suggested in the preceding Section, various samples are prepared for several film thicknesses, incident fluxes and substrate temperatures. For films grown on Highly Oriented Pyrolytic Graphite (HOPG), before deposition at room temperature, freshly cleaved graphite samples are annealed at 500 °C for 5 hours in the deposition chamber (where the pressure is $`10^{-7}`$ Torr) in order to clean the surface. The main advantage of conveniently annealed HOPG graphite is that its surfaces consist mainly of defect-free large terraces ($`\sim 1\mu m`$) between steps. It is also relatively easy to observe these surfaces by electron or tunneling microscopy . Therefore, deposition on graphite HOPG is a good choice to illustrate the interplay between the different elementary processes which combine to lead to the growth. After transfer in air, the films are observed by Transmission Electron Microscopy (TEM) (with JEOL 200CX or TOP CON electron microscopes operating at 100 kV in order to improve the contrast of the micrographs).

#### 2 Results

Fig. 18a shows a general view of the morphology of the antimony submonolayer film for $`e=0.14`$ ML and $`T_s=353K`$. A detailed analysis of this kind of micrograph shows that the ramified islands are formed by the juxtaposition of particles which have the same size distribution as the free clusters of the beam. From this, we can infer two important results. First, clusters do not fragment upon landing on the substrate, as indicated in the introduction. Second, antimony clusters remain juxtaposed upon contact and do not coalesce to form larger particles (option (a) of Fig. 6). From a qualitative point of view, Fig. 18a also shows that the clusters are able to move on the surface. Indeed, since the free clusters are deposited at random positions on the substrate, it is clear that, in order to explain the aggregation of the clusters into those ramified islands, one has to admit that the clusters move on the surface. How can this motion be quantified? Can we admit that diffusion and pure juxtaposition are the only important physical phenomena at work here? Fig. 19a shows the evolution of the island density as a function of the deposited thickness. We see that the saturation island density $`N_{sat}`$ is reached for $`e\simeq 0.15`$ ML. This indicates that evaporation or island diffusion are not important in this case. Therefore, we guess that the growth should be described by a simple combination of deposition, diffusion of the incident clusters and juxtaposition. This has been confirmed in several ways. I only give three different confirmations, directing the reader to Ref. for further details. First, a comparison of the experimental morphology and that predicted by models including only deposition, diffusion and pure juxtaposition shows a very good agreement (Fig. 18b). Second, Fig. 19b shows that the saturation island density accurately follows the prediction of the model when the flux is varied. I recall that if the islands were nucleated on defects of the surface, the density would not be significantly affected by the flux. Having carefully checked that the experiments are well described by the simple DDA model, I can confidently use Fig. 10 to quantify the diffusion of the clusters. As detailed in Ref. , one first measures the saturation island density for different substrate temperatures. The normalized fluxes ($`F\tau `$) are obtained from Fig. 10. Knowing the experimental fluxes, one can derive the diffusion times and coefficients.
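As an illustration of this last step, the sketch below inverts the relation $`N_{sat}=0.53(F\tau )^{0.36}`$ quoted in Section IV A 1 and then uses $`D=d^2/(4\tau )`$ to obtain a diffusion coefficient. The numerical inputs (saturation density, flux, cluster diameter) are purely illustrative and are not the measured data of the experiments discussed here; they are merely chosen so that the output falls in the range quoted below.

```python
# Illustrative inversion of N_sat = 0.53 (F*tau)^0.36 (pure juxtaposition,
# complete condensation), followed by D = d^2 / (4*tau).
# All numbers are hypothetical, not the actual Sb_2300 measurements.
N_SAT = 6.0e-4          # saturation island density (islands per site), assumed
FLUX = 1.0e-3           # incident flux (clusters per site per second), assumed
DIAMETER_CM = 5.0e-7    # incident cluster diameter (cm), i.e. ~5 nm, assumed

f_tau = (N_SAT / 0.53) ** (1.0 / 0.36)   # normalized flux F*tau from the abacus
tau = f_tau / FLUX                       # diffusion time (s): time to move one diameter
diff_coeff = DIAMETER_CM ** 2 / (4.0 * tau)

print(f"F*tau = {f_tau:.2e}")
print(f"tau   = {tau:.2e} s")
print(f"D     = {diff_coeff:.2e} cm^2/s")
```

With these assumed inputs the procedure returns a diffusion coefficient of order $`10^{-8}cm^2/s`$, which is the order of magnitude discussed next.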
The result is a surprisingly high mobility of $`Sb_{2300}`$ on graphite, with diffusion coefficients of the same order of magnitude as the atomic ones, i.e. $`10^{-8}cm^2s^{-1}`$ (Fig. 19c). The magnitude of the diffusion coefficient is so high that we wondered whether there was any problem in the interpretation of the data, despite the very good agreement between experiments and growth models described above. For example, one could think of a linear diffusion of the incoming clusters, induced by the incident kinetic energy of the cluster in the beam (the cluster could "slide" on the graphite surface). This seems unrealistic for two reasons : first, in order to explain the low island density obtained in the experiments (see above), it should be assumed that the cluster, which has a low kinetic energy (less than 10 eV), can travel at least several thousand nanometers before being stopped by friction with the substrate. This would imply that the diffusion is just barely influenced by the substrate, which only slows down the cluster. In this case, it is difficult to explain the large changes observed in the island density when the substrate temperature varies. Second, we have deposited antimony clusters on a graphite substrate tilted by $`30^o`$ from its usual position (i.e. perpendicular to the beam axis). A linear diffusion of the antimony clusters arising from their incident kinetic energy would then lead to anisotropic islands (they would grow differently in the direction of tilt and in the perpendicular direction). Experiments show that there is no difference between usual and tilted deposits. Therefore we can confidently believe that $`Sb_{2300}`$ clusters perform a very rapid Brownian motion on graphite surfaces. A similar study has been carried out for $`Sb_{250}`$ on graphite, showing the same order of magnitude for the mobility of the clusters . The microscopic mechanisms that could explain such a motion will be presented in section VII.

### B Other experiments

In this subsection, I try to analyze data obtained in previous studies . I provide possible (i.e. not in contradiction with any of the data) explanations, with the respective values of the microscopic processes. I stress that the scope here is not to make precise fits of the data, but rather to identify the microscopic processes at work and obtain good guesses about their respective values.

#### 1 Slightly accelerated $`Ag_{160}`$ clusters on HOPG

Palmer's group has investigated the growth of films by $`Ag_{160}`$ cluster deposition. Fig. 20 shows the ramified morphology of a submonolayer deposit. Although no precise fit is possible given the limited experimental data, the island density and size show that $`Ag_{160}`$ clusters are mobile on HOPG.

#### 2 $`Sb_{36}`$ on a-C

Small antimony clusters are able to move on amorphous carbon, as demonstrated by Fig. 21, and by the fact that the films are dramatically affected by changing the incident flux . Fig. 21a shows that these small clusters gather into large islands and coalesce upon contact. The island density is shown in Fig. 21b. The maximum is reached for a very high thickness ($`e\simeq 1.8`$ ML), which can only be explained by supposing that there is significant re-evaporation of $`Sb_{36}`$ clusters from the surface. Evaporation of small antimony clusters ($`Sb_n`$ with $`n\lesssim 100`$) from a-C substrates has also been suggested by other authors . A fit, using $`\tau _e/\tau =20`$ deduced from Fig.
17a and $`F\tau =10^{-5}`$ for $`F=6\times 10^{-3}`$ clusters $`site^{-1}s^{-1}`$, leads to $`\tau \simeq 2\times 10^{-3}`$ s, $`D=2\times 10^{-12}cm^2/s`$, $`\tau _e=0.04`$ s and $`X_S\simeq 6`$ nm before evaporation, and to a condensation coefficient of 0.2 when the maximum island density is reached. However, some authors have argued that the condensation coefficient is not so low. It is interesting to try a different fit of the data, in better agreement with this indication, to give an idea of the uncertainties of the fits. For this, I assume that the deposited islands are spherical (solid line of Fig. 21b) and use the procedure described in Section IV. Here I have taken $`F\tau =3\times 10^{-6}`$ for $`F=6\times 10^{-3}`$ clusters $`site^{-1}s^{-1}`$, leading to $`\tau \simeq 5\times 10^{-4}`$ s and $`\tau _e=0.04`$ s, corresponding to $`D=8\times 10^{-12}cm^2/s`$ and $`X_S\simeq 11`$ nm before evaporation, and a condensation coefficient of 0.5 when the maximum island density is reached. Note that the condensation coefficient is, as expected, higher than in the previous fit and that the agreement with the experimental island densities at the lowest thicknesses is better. Comparing the two fits, it can be seen that the difference in the diffusion coefficient is a factor of 4, and a factor of 2 in $`X_S`$. This means that the orders of magnitude of the values of the microscopic mechanisms can be trusted despite the lack of a comprehensive experimental investigation. Similar studies have made it possible to obtain the diffusion and evaporation characteristic times for other clusters deposited on amorphous carbon. For $`Bi_{90}`$, one finds $`D\simeq 3\times 10^{-13}cm^2s^{-1}`$ and $`X_S\simeq 8`$ nm, and for $`In_{100}`$ : $`D\simeq 4\times 10^{-15}cm^2s^{-1}`$ and strong coalescence (the incident clusters are liquid).

#### 3 $`Au_{250}`$ on graphite

Fig. 22 shows the morphology of a gold submonolayer film obtained by deposition of $`Au_{250}`$ ($`\pm 100`$) clusters prepared by a laser source on graphite in a UHV chamber for different substrate temperatures. The structures are strikingly similar to those obtained in the $`Sb_{2300}`$ case : large, ramified islands. We can conclude that $`Au_{250}`$ clusters do move on graphite, and that they do not completely coalesce. A more careful examination of the island morphology indicates that the size of the branches is not the same as the size of the incident clusters, as was the case for $`Sb_{2300}`$. Here the branches are larger, meaning that there is a partial coalescence, limited by the kinetics of the growth. This is a very interesting experimental test for the coalescence models that are presented later. I first try to estimate the diffusion coefficient of the gold clusters. We have to be careful here because the incident flux is chopped at the laser frequency, roughly 10 Hz. The active portion of the period (i.e. when the flux is "on") is $`100\mu s`$. An analysis of the growth in the presence of a chopped flux has been reported elsewhere . Fig. 23a shows the values of $`N_{sat}`$ as a function of the diffusion time $`\tau `$ in these experimental conditions ($`F_i`$=6 ML/s) for two hypotheses : only the monomers move, or islands up to the pentamer move too (island mobility is supposed to be inversely proportional to their mass). Note that there is a range of diffusion times (up to two orders of magnitude) which lead to the same island saturation value, a strange situation in homogeneous nucleation : see Refs. for details. Given the experimental island densities, the diffusion coefficients under both hypotheses are shown in Fig.
23b. The values of the diffusion coefficient seem too high, especially in the case of exclusive monomer diffusion, but there is no experimental evidence of island mobility for the moment. I note however that since the incident clusters do significantly coalesce, it is not unreasonable to assume that the smallest islands (which are spherical as the incident clusters) can move too. We are presently carrying additional tests (on cluster reevaporation or non brownian cluster diffusion) to confirm the observation of such high diffusion coefficients. #### 4 $`Au_{250}`$ on $`NaCl`$ Given the surprising high mobility of $`Au_{250}`$ ($`\pm 100`$) on HOPG, it was worth testing gold cluster mobility on other substrates. I present here recent results obtained by depositing $`Au_{250}`$ clusters on $`NaCl`$ . The high island density (Fig. 24a) shows that gold clusters are not very mobile on this substrate, with an upper limit on the diffusion coefficient of $`D10^{15}cm^2/s`$. This is in agreement with the low mobilities observed by other authors in the seventies. The diffraction pattern (Fig. 24b) is similar to that obtained in Figs. 15 c and d of Ref. . The authors interpreted their results with the presence of multi-twinned Au particles with two epitaxial orientations : Au(111)/NaCl(100) and Au(100)/NaCl(100). This is reasonable taking into account the interatomic distances for these orientations : $`d_{Au_Au(111)}`$=0.289nm, $`d_{Na_Cl(100)}`$=0.282nm and $`d_{Au_Au(100)}`$=0.408nm $`1/2d_{Na_Cl(100)}`$=0.398nm (along the face diagonal). These preliminary results suggest that epitaxy may prevent clusters from moving rapidly on a surface, a result which has also be observed by other groups (see next section). They also show that, at least in this case, forming the clusters on the surface by atomic aggregation or depositing preformed clusters does not change the orientation nor the diffusion of the clusters on the surface. Work is in progress to determine the precise atomic structure of the clusters, their orientation on the substrate and their diffusion at higher temperatures . ## VII TOWARDS A PICTURE OF CLUSTER DIFFUSION AND COALESCENCE AT THE ATOMIC SCALE In the preceding sections I have tried to analyze the growth with the help of two main ingredients : diffusion of the clusters on the surface and their interaction. I have taken the diffusion as just one number quantifying the cluster motion, without worrying about the microscopic mechanisms which could explain it. For atomic diffusion, these mechanisms have been extensively studied and are relatively well-know. In the (simplest) case of compact (111) flat surfaces, diffusion occurs by site to site jumps over bridge sites (the transition state). Therefore, diffusion is an activated process and plotting the diffusion constant vs. the temperature yields the height of the barrier, which gives information about the microscopics of diffusion. This kind of simple interpretation is not valid for cluster diffusion. It is always possible to infer an ”activation” energy from an Arrhenius plot (see Fig. 19c) but the meaning of this energy is not clear since the precise microscopic diffusion mechanism is unknown. Similarly, cluster-cluster coalescence (Fig. 6) has been supposed to be total or null (i.e. pure juxtaposition) but without considering the kinetics nor the intermediate cases which can arise (see the experimental results for gold on graphite for example). 
In this section, I describe some preliminary results which can shed some light on the microscopic mechanisms leading to cluster diffusion or coalescence.

### A Diffusion of the clusters

Before turning to the possible microscopic mechanisms, one must investigate whether cluster diffusion is indeed such a general phenomenon. Let me now review the available experimental data concerning the diffusion of three-dimensional (3D) clusters. I already presented in the previous section several examples of high cluster mobilities over surfaces. In the case of $`Sb_{2300}`$ on graphite, mobilities as high as $`D=10^{-8}cm^2s^{-1}`$ are obtained at room temperature, and similar values can be inferred for Ag cluster deposition . On a-C substrates, diffusion is not that rapid, but has to be taken into account to understand the growth. More than twenty years ago, the Marseille group carefully studied the mobility of nanometer-size gold crystallites on ionic substrates (MgO, KCl, NaCl). By three different methods, they proved that these 3D clusters - grown by atomic deposition at room temperature - are significantly mobile at moderately high temperatures ($`T\simeq 350`$ K). The three different methods were : direct observation under the electron microscope beam , and comparison of abrupt concentration profiles or of the radial distribution functions before and after annealing. All these results are carefully reviewed in Ref. . I will focus here on the last method . Fig. 25 shows the radial distribution functions of the gold clusters obtained just after deposition (the flat curve) and after annealing a similar deposit for a few minutes at 350 K (the oscillating curve) (Fig. 4 of Ref. ). The flat curve is a standard as-grown radial distribution function (see for example Ref. ). The other curve is significantly different from the first, although the cluster size distribution remains identical (Fig. 25). This shows that gold clusters move as an entity on KCl(100) at 350 K, since the conservation of the size distribution rules out atomic exchange between islands (the EC mechanism presented below, Section VII A 1). From the shape of the radial distribution function some features of the cluster-cluster interaction could be derived, mainly that it is a repulsive interaction. The detailed interaction mechanisms are not clear . A different study showed that the clusters were mobile only for a limited amount of time (several minutes), and then stopped. It turns out that clusters stop as soon as they reach an epitaxial orientation on the substrate. Indeed, the gold (111) planes can orient on the KCl(100) surface, reaching a stable, minimum energy configuration (for more details on the epitaxial orientations of gold clusters on $`NaCl`$, see Refs. ). Therefore, 3D cluster diffusion might be quite a common phenomenon, at least when there is no epitaxy between the clusters and the substrate.

What are the possible microscopic mechanisms? Unfortunately for the field of cluster deposition, recent theoretical and experimental work has focused mainly on one atom thick, two-dimensional islands, whose diffusion mechanisms might be different from those of 3D islands. The focus on 2D islands is due to the technological impetus provided by applications of atomic deposition, notably MBE, for which one wants to achieve flat layers. Let's briefly review the current state of the understanding of 2D island diffusion to see what inspiration we can draw for 3D cluster diffusion.
#### 1 2D island diffusion mechanisms There are two main types of mechanisms proposed to account for 2D island diffusion : single adatom motion and collective (simultaneous) atom motion. It should be noted that small islands (less than $``$ 15 atoms) are likely to move by specific mechanisms, depending on the details of the island geometry and atomic energy barriers . Therefore I concentrate here on larger 2D islands. ##### a Individual mechanisms The most common mechanism invoked to account for 2D island diffusion has been that of individual atomic motion. By individual I mean that the movement of the whole island can be decomposed in the motion of uncorrelated single atom moves. There are two main examples of such a diffusion : evaporation-condensation (EC) and periphery-diffusion (PD). Theoretical investigations on these individual mechanisms have generated much interest since it was conjectured that this diffusion constant $`D_{ind}`$ is proportional to the island number of atoms (island mass) to some power which depends on the precise mechanism (EC or PD) causing island diffusion but not on temperature or the chemical nature of the system. If true, this conjecture would prove very useful, for it would allow to determine experimentally the mechanism causing island migration by measuring the exponent and some details of the atom diffusion energetics by measuring how $`D_{ind}`$ depends on temperature. Unfortunately, recent studies have shown that this prediction is too simplistic, as I show now for the two different mechanisms. (i) Periphery diffusion Fig. 26 shows the elementary mechanism leading to island diffusion via atomic motion on the edge of the island (label PD). Assuming that \- each atomic jump displaces the center of mass of the island by a distance of order 1/N (where N is the number of atoms of the island), \- each edge atom (density $`n_s`$) jumps with a rate $`kexp(E_{ed}/k_BT)`$ where $`E_{ed}`$ is the activation energy for jumping from site to site along the border and $`k_B`$ is Boltzmann constant One obtains : $$D_{ind}kn_s1/N^2exp(E_{ed}/k_BT)N^{3/2}$$ (1) if one postulates that $`n_s`$, the mean concentration of edge atoms is proportional to the perimeter of the island (i.e. to $`N^{1/2}`$). This equation allows in principle to determine the edge activation energy by measuring the temperature dependence of $`D_{ind}`$. However, recent experiments and Kinetic Monte-Carlo simulations have suggested that Eq. 1 is wrong. First, the size exponent is not universal but depends on the precise energy barriers for atomic motion (and therefore on the chemical nature of the material) and, second, the measured activation energy does not correspond to the atomic edge diffusion energy. The point is that the limiting mechanism for island diffusion is corner breaking, for islands would not move over long distances simply by edge diffusion of the outer atoms . Further studies are needed to fully understand and quantify the PD mechanism. (ii) Evaporation-Condensation An alternative route to diffusion is by exchange of atoms between the island and a 2D atomic gas. This is the usual mechanism leading to Ostwald ripening . Atoms can randomly evaporate from the island and atoms belonging to the 2D gas can condensate on it (Fig. 26). This leads to fluctuations in the position of the island center of mass, which are difficult to quantify because of the possible correlations in the atomic evaporation and condensation. 
Indeed, an atom which has just evaporated form an island is likely to condensate on it again, which cannot be accounted by a mean-field theory of island-gas exchange of atoms . The latter leads to a diffusion coefficient scaling as the inverse radius of the island , while correlations cause a slowing down of diffusion, which scales as the inverse square radius of the island . Experimentally, Wen et al. have observed by STM the movement of Ag 2D islands on Ag(100) surfaces. They measured a diffusivity almost independent of the island size, which rules out the PD mechanism and roughly agrees with their calculation of the size dependence of the EC mechanism. Since this calculation has been shown to be only approximate, further theoretical and experimental work is needed to clarify the role of EC in 2D island diffusion. However, the work by Wen et al. has convincingly shown that these islands move significantly and that, for silver, island diffusion is the main route to the evolution of the island size distribution, contrary to what was usually assumed (Ostwald ripening exclusively due to atom exchange between islands, via atom diffusion on the substrate). ##### b Collective diffusion mechanisms These individual mechanisms lead in general to relatively slow diffusion of the islands (of order $`10^{17}cm^2/s`$ at room temperature ). For small clusters, different (and faster) mechanisms such as dimer shearing, involving the simultaneous displacement of a dimer, have been proposed . More generally, Hamilton et al. have proposed a different mechanism, also involving collective motions of the atoms, which leads to fast island motion. By collective I mean that island motion is due to a simultaneous (correlated) motion of (at least) several atoms of the island. Specifically, Hamilton et al. proposed that dislocation motion could cause rapid diffusion of relatively small (5 to 50 atoms) homoepitaxial islands on fcc(111) surfaces. Fig. 27 shows the basic idea : a row of atoms move simultaneously from fcc to hcp sites, thus allowing the motion of the dislocation and consequently of the island center of mass. Alternative possibilities suggested by Hamilton et al. for dislocation mediated island motion are the ”kink” mechanism (the same atomic row moves by sequential but correlated atomic motion) or the ”gliding” mechanism studied below, where all the atoms of the island move simultaneously. Molecular Dynamics simulations, together with a simple analytical approach suggest that for the smallest islands ($`N<20`$) the gliding mechanism is favored, for intermediate sizes ($`20<N<100`$) the dislocation motion has the lowest activation energy, while for the largest studied islands ($`N>100`$) the preferential mechanism is that of ”kink” dislocation motion. It is interesting to quote at this point recent direct observations of cluster motion by field ion microscopy . Fig. 28 shows successive images of a compact $`Ir_{19}`$ cluster moving on Ir(111). By a careful study, the authors have ruled out the individual atomic mechanisms discussed above as well as the dislocation mechanism. Instead, they suggest that gliding of the cluster as a whole is likely to explain the observed motion . Hamilton later studied the case of heteroepitaxial, strained islands . He has shown that - due to the misfit between the substrate and the island structures - there can exist islands for which introducing a dislocation does not cost too much extra energy. 
These metastable misfit dislocations would propagate easily within the islands, leading to ”magic” island sizes with a very high mobility . #### 2 3D island diffusion mechanisms For the 3D clusters, the three microscopic mechanisms presented above are possible in principle. However, as noted above, the individual atom mechanisms lead to a diffusivity smaller than the diffusion of $`Sb_{2300}`$ on graphite by several orders of magnitude. These mechanisms have also been ruled out for the diffusion of gold crystallites on ionic substrates . Several tentative explanations based on the gliding of the cluster as a whole over the substrate have been proposed . Reiss showed that, for a rigid crystallite which is not in epitaxy on the substrate, the activation energy for rotations might be weak, simply because during a rotation, the energy needed by atoms that have to climb up a barrier is partially offset by the atoms going into more stable positions. Therefore, the barrier for island diffusion is of the same order as that for an atom, as long as the island does not reach an epitaxial orientation. Kern et al. allowed for a partial rearrangement of the interface between the island and the substrate when there is a misfit. The interface would be composed of periodically disposed zones in registry with the substrate, surrounded with perturbed (”amorphous”) zones, weakly bound to the substrate. This theory - similar in spirit to the dislocation theory proposed by Hamilton for 2D islands - leads to reasonable predictions but is difficult to test quantitatively. To clarify the microscopic mechanisms of 3D cluster diffusion, I now present in detail Molecular Dynamics (MD) studies carried out recently . These simulations aimed at clarifying the generic aspects of the question rather than modeling a particular case. Both the cluster and the substrate are made up of Lennard-Jones atoms , interacting through potentials of the form : $`V(r)=4ϵ\left(\left(\frac{\sigma }{r}\right)^{12}\left(\frac{\sigma }{r}\right)^6\right)`$. Empirical potentials of this type, originally developed for the description of inert gases, are now commonly used to model generic properties of condensed systems. Lennard-Jones potentials include only pair atom-atom interaction and ensure a repulsive interaction at small atomic distances and an attractive interaction at longer distances, the distance scale being fixed by $`\sigma `$ and the energy scale by $`ϵ`$. For a more detailed discussion of the different interatomic potentials available for MD simulations and their respective advantages and limitations, see Ref. . The substrate is modeled by a single layer of atoms on a triangular lattice, attached to their equilibrium sites by weak harmonic springs that preserve surface cohesion. The Lennard-Jones parameters for cluster atoms, substrate atoms and for the interaction between the substrate and the cluster atoms are respectively $`(ϵ_{cc},\sigma _{cc})`$, $`(ϵ_{ss},\sigma _{ss})`$ and $`(ϵ_{sc},\sigma _{sc})`$. $`ϵ_{cc}`$ and $`\sigma _{cc}`$ are used as units of energy and length. $`ϵ_{sc}`$, $`\sigma _{ss}`$ and $`T`$, the temperature of the substrate, are the control parameters of the simulation. The last two parameters are then constructed by following the standard combination rules : $`ϵ_{ss}=\sigma _{ss}^6`$ and $`\sigma _{sc}=\frac{1}{2}\left(\sigma _{cc}+\sigma _{ss}\right)`$. 
Finally, the unit of time is defined as $`\tau =(M\sigma _{cc}^2/ϵ_{cc})^{1/2}`$, where $`M`$ is the mass of the atoms which is identical for cluster and substrate atoms. The simulation uses a standard molecular dynamics technique with thermostatting of the surface temperature . In these simulations, the clusters take the spherical cap shape of a solid droplet (Fig. 29) partially wetting the substrate. The contact angle, which can be defined following reference , is roughly independent of the cluster size (characterized by its number of atoms $`n`$, for $`50<n<500`$. This angle can be tuned by changing the cluster-substrate interaction. For large enough $`ϵ_{sc}`$, total wetting is observed. The results presented below have been obtained at a reduced temperature of 0.3 for which the cluster is solid. This is clearly visible in Fig. 29, where the upper and lower halves of the cluster, colored white and grey at the beginning of the run, clearly retain their identity after the cluster center of mass has moved over 3 lattice parameters. Hence the cluster motion appears to be controlled by collective motions of the cluster as a whole rather than by single atomic jumps. The MD simulations have confirmed that one of the most important parameters for determining the cluster diffusion constant is the ratio of the cluster lattice parameter to the substrate lattice parameter. The results for the diffusion coefficient are shown in Fig. 30a. When the substrate and cluster are commensurate ($`\sigma _{ss}=\sigma _{cc}1`$), the cluster can lock into a low energy epitaxial configuration. A global translation of the cluster would imply overcoming an energy barrier scaling as $`n^{2/3}`$, the contact area between the cluster and the substrate. In that case diffusion will be very slow, unobservable on the time scale of the MD simulations. What is interesting to note is that even small deviations from this commensurate case lead to a measurable diffusion on the time scale of the MD runs. This can be understood from the fact that the effective potential in which the center of mass moves is much weaker, as the cluster atoms, constrained to their lattice sites inside the rigid solid cluster, are unable to adjust to the substrate potential (see above, Reiss model ). The effect is rather spectacular : a 10% change on the lattice parameter induces an increase of the diffusion coefficient by several orders of magnitude. Finally, I show in Fig. 30b the effect of cluster size on the diffusion constant for different lattice parameter values. As the number $`n`$ of atoms in the cluster is varied between $`n=10`$ and $`n=500`$, the diffusion constant decreases, roughly following a power law $`Dn^\alpha `$. This power law exponent $`\alpha `$ depends significantly on the mismatch between the cluster and the substrate lattice parameters. For high mismatches ($`\sigma _{ss}=0.7,0.8`$), $`\alpha `$ is close to $`0.66`$. As the diffusion constant is inversely proportional to the cluster-substrate friction coefficient, this result is in agreement with a simple ”surface of contact” argument yielding $`Dn^{2/3}`$. On the other hand, when the lattice mismatch is equal to $`0.9`$, one obtains $`\alpha 1.4`$, although the shape of the cluster, characterized by the contact angle, does not appreciably change. It is instructive to follow the trajectory followed by the cluster center of mass (Fig. 31). In the runs with a large mismatch (Fig. 31a), this trajectory is ”brownian-like”, with no apparent influence of the substrate. 
This is consistent with the simple ”surface of contact” argument. Instead, when the mismatch is small (Fig. 31b), the center of mass of the cluster follows a ”hopping-like” trajectory, jumping from site to site on the honeycomb lattice defined by the substrate. When $`\sigma _{ss}=\sqrt{3}/2`$, there seems to be a transition between the two regimes around $`n=200`$. It is interesting to consider the interpretation of cluster motion in terms of dislocation displacement within the cluster, a mechanism which has been proposed to explain rapid 2D cluster diffusion (see the discussion in Section VII A 1). For this, one can ”freeze” the internal degrees of freedom of the cluster deposited on a thermalized substrate. The center of mass trajectory is integrated using the quaternion algorithm . Surprisingly, the diffusion constant follows the same power law as in the free cluster case . This result proves that the diffusion mechanism in this case cannot be simply explained in terms of dislocation migration within the cluster as proposed to explain the diffusion of 2D islands in . As the substrate atoms are tethered to their lattice site, strong elastic deformations or dislocations within the lattice are also excluded. Hence, the motor for diffusion is here the vibrational motion of the substrate, and its efficiency appears to be comparable to that of the internal cluster modes. Very recently, U. Landmann performed MD simulations of diffusion of large gold clusters on HOPG substrates . He finds high cluster mobility, in agreement with the preceding simulations. His studies show that cluster diffusion in this case proceeds by two different mechanisms : long (several cluster diameters) linear ”flights” separated by relatively slow diffusive motion as observed in the preceding simulations. Further work is needed to ascertain the atomic mechanisms leading to this kind of motion. #### 3 Discussion What are the (partial) conclusions which can be drawn from these studies of cluster diffusion? I think that the main parameter determining the mobility of 3D islands on a substrate is the possible epitaxy of the cluster on the substrate. Indeed, if the island reaches an epitaxial orientation, it is likely to have a mobility limited by the individual atomic movements, which give a small diffusion constant (of order $`10^{17}cm^2s^1`$ at room temperature). Diffusivities of this magnitude will not affect the growth of cluster films during typical deposition times, and clusters can be considered immobile. The effect of these kind of diffusion rates can only be seen by annealing the substrates at higher temperatures or for long times. According to Hamilton , dislocations could propagate even for epitaxial islands, but it is likely that this mechanism is more important in the case of heteroepitaxial islands which I now proceed to discuss. Indeed, if the island is not in epitaxy on the substrate, high mobilities can be observed because the cluster sees a potential profile which is not very different from that seen by a single atom. It should be noted that this non-epitaxy can be obtained when the two lattice parameters (of the substrate and the island) are very different, or also when they are compatible if there is relative misorientation. The latter has been observed for gold on ionic substrates and mobility is relatively high until the crystallites reach epitaxy. The MD simulations presented above show that, for Lennard-Jones potentials, only homoepitaxy prevents clusters from moving rapidly on a surface. 
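As a concrete illustration of how the numbers quoted above are obtained from such simulations, here is a minimal sketch of the analysis one would run on the centre-of-mass trajectories: the diffusion constant from the mean-square displacement, and the size exponent of D ∝ n^(-α) from a log-log fit. The synthetic trajectory and the D(n) values below are placeholders (assumptions for the example), not the actual MD results.

```python
import numpy as np

def diffusion_constant(com_xy, dt):
    """Estimate D from the centre-of-mass mean-square displacement,
    <r^2(t)> = 4 D t for two-dimensional diffusion."""
    n = len(com_xy)
    lags = np.arange(1, n // 4)                     # well-sampled lags only
    msd = np.array([np.mean(np.sum((com_xy[l:] - com_xy[:-l]) ** 2, axis=1))
                    for l in lags])
    return np.polyfit(lags * dt, msd, 1)[0] / 4.0   # slope / 4

# Synthetic random walk standing in for an MD centre-of-mass trajectory.
rng = np.random.default_rng(1)
dt, D_true = 0.01, 1.0e-2                           # Lennard-Jones units
steps = rng.normal(scale=np.sqrt(2 * D_true * dt), size=(20000, 2))
print("D estimate:", diffusion_constant(np.cumsum(steps, axis=0), dt))

# Size dependence D ~ n^(-alpha): invented values for D(n), fitted on a
# log-log scale exactly as one would do with the simulation output.
sizes = np.array([50, 100, 200, 500])
Ds = np.array([2.0e-5, 1.3e-5, 8.0e-6, 4.3e-6])
alpha = -np.polyfit(np.log(sizes), np.log(Ds), 1)[0]
print("alpha estimate:", round(alpha, 2))
```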
It should be noted that relaxation of the cluster or the substrate - which would favor a locking of the cluster in an energetically favorable position at the expense of some elastic energy - has not been observed in these LJ simulations, nor has dislocation propagation. This is probably realistic for the low interaction energies which correspond to metal clusters on graphite. It could also be argued that dislocation motion is more difficult in 3D clusters than in 2D islands since the upper part of the particle (absent in 2D islands) tends to keep a fixed structure. Another important parameter is the cluster-substrate interaction : one can think that a large attractive interaction (for metal on metal systems for example) can induce an epitaxial orientation and prevent the cluster from diffusing, even in the heteroepitaxial case. The differences between the diffusion of clusters grown on a substrate by atom deposition and aggregation and those previously formed in a beam and deposited must also be investigated. One could anticipate that islands formed by atom aggregation on the substrate would accommodate easily to the substrate geometry, whereas preformed clusters might keep their (metastable) configuration. However, it is not at all clear that island nucleation and epitaxy are simultaneous phenomena, for it has been observed that islands can form in a somewhat arbitrary configuration and subsequently orient on the substrate after diffusion and rotation (see Ref. ). ### B Cluster-Cluster coalescence What happens now when two clusters meet? If they remain simply juxtaposed, morphologies similar to Fig. 18a are observed. In this case, the incident clusters have retained their original morphology, and the supported particles are identical to them, even if they are in contact with many others after cluster diffusion. It is clear, by looking for example to Fig. 22 that this is not always the case. In these cases, the supported islands are clearly larger than the incident clusters : some coalescence has taken place. How can one understand and predict the size of the supported particles? Which are the relevant microscopic parameters? This section tries to answer these questions, which are of dramatic interest for catalysis, since the activity of the deposits crucially depends on its specific area and therefore on the sintering process (See for example Refs. ). I will first briefly examine the classical theory for sphere-sphere coalescence (i.e. ignoring the effect of the substrate) and then review recent molecular dynamics simulations which suggest that this classical theory may not be entirely satisfactory for nanoparticles. #### 1 Continuum theory of coalescence The standard analysis of kinetics of sintering is due to Mullins and Nichols . The ”motor” of the coalescence is the diffusion of atoms of the cluster (or island) surface from the regions of high curvature (where they have less neighbors and therefore are less bound) towards the regions of lower curvature. The precise equation for the atom flux is $$\stackrel{}{J}_s=\frac{D_s\gamma \mathrm{\Omega }\nu }{k_BT}\stackrel{}{_sK}$$ (2) where $`D_s`$ is the surface diffusion constant (supposed to be isotropic), $`\gamma `$ the surface energy (supposed to be isotropic too), $`\mathrm{\Omega }`$ the atomic volume, $`\nu `$ the number of atoms per unit surface area, $`k_B`$ Boltzmann’s constant, T the temperature and K the surface curvature (K=1/$`R_1`$ \+ 1/$`R_2`$) where $`R_1`$ and $`R_2`$ are the principal radii of curvature). 
For sphere-sphere coalescence, an order of magnitude estimation of the shape changes induced by this flux is : $$\frac{\partial n}{\partial t}\approx 2B\frac{\partial ^2K}{\partial s^2}(y=s=0)$$ (3) where $`dn`$ is the outward normal distance traveled by a surface element during $`dt`$, $`s`$ the arc length and $`B=D_s\gamma \mathrm{\Omega }^2\nu /k_BT`$ (the $`z`$ axis is taken as the axis of revolution). For this geometry, Eq. 3 becomes (Fig. 32) : $$\frac{\partial l}{\partial t}\approx \frac{B}{l^3}\left(1-\frac{l}{R}\right)$$ (4) where I have made an order of magnitude estimation of the second derivative of the curvature : $`\partial K/\partial s\approx (K(R)-K(l))/l`$ and similarly $`\partial ^2K/\partial s^2\approx (1-l/R)/l^3`$ (see Fig. 32). Integrating Eq. 4 leads to $$l\approx \left(r^4+4Bt\right)^{1/4}\text{ for }l\ll R$$ (5) Eq. 5 gives an estimation of the coalescence kinetics for two spheres of radius r and R. However, despite its plausibility, Eq. 5 has to be used with care. First, the calculation leading to it from the expression of the flux (Eq. 2) is only approximate. More importantly, Eq. 2 assumes isotropic surface tension and diffusion coefficients. While this approximation may be fruitful for large particles (in the $`\mu `$m range ), it is clearly wrong for clusters in the nanometer range. These are generally facetted as a result of anisotropic surface energies. This has two important consequences : first, since the particles are not spherical, the atoms do not feel a uniform curvature. For those located on the planar facets, the curvature is even 0, meaning that they will not tend to move away spontaneously. This effect should significantly reduce the atomic flux. Second, the diffusion is hampered by the edges between the facets, which induce a kind of ”Schwoebel” effect . Then, the effective mass transfer from one end of the cluster to the other may be significantly lower than expected from the isotropic curvatures used in Eq. 2. For these anisotropic surfaces, a more general formula which takes into account the dependence of $`\gamma `$ on the crystallographic orientation should be used (see for example Ref. ). However, this formula is of limited practical interest for two reasons. First, the precise dependence of the surface energy on the crystallographic orientation is difficult to obtain. Second, as a system of two touching facetted clusters does not in general show any symmetry, the solution to the differential equation is hard to find. One possibility currently explored is to assume a simple analytical form for the anisotropy of 2D islands and integrate numerically the full (anisotropic) Mullins’ equations. #### 2 Molecular Dynamics simulations of coalescence Since continuum theories face difficulties in characterizing the evolution of nanoparticle coalescence, it might be useful to perform molecular dynamics (MD) studies of this problem. Several studies have been performed, showing that two distinct and generally sequential processes lead to coalescence for particles in the nanometer range : plastic deformation and slow surface diffusion . Zhu and Averback have studied the first stages (up to 160 ps) of the coalescence of two single-crystal copper nanoparticles (diameter 4.8 nm). Fig. 33 presents four stages of the coalescence process, demonstrating that plastic deformation takes place (see the arrows indicating the sliding planes) and that a relative rotation of the particles occurs during this plastic deformation (c and d). During the first 5 ps, the deformation is elastic, until the elastic limit (roughly 0.8 nm) is reached : after this, since the shear stress (Fig.
34) is very high, dislocations are formed and glide on (111) planes in the $`<110>`$ direction, as usually seen in fcc systems. Fig. 34 also shows that after 40ps (i.e. Fig. 33c) the stress on the glide plane is much smaller and dislocation motion is less important : the two particles rotate until a low-energy grain boundary is found (Fig. 33d). This intial stage of the coalescence, where the two particles reorient and find a low-energy configuration, is very rapid, but does not lead in general to thorough coalescence. An interesting exception might have been found by Yu and Duxbury : their MD simulations showed that for very small clusters (typically less than 200 atoms) coalescence is abrupt provided the temperature is sufficiently close to the melting temperature. They argue that this is due to a (not specified) ”nucleation process” : plastic deformation is a tempting possibility. For larger clusters, the subsequent stages are much slower and imply a different mechanism : atom diffusion on the surface of the particles. The intial stages of this diffusion-mediated coalescence have been studied by Lewis et al. . The point was to study if Mullins’ (continuum) predictions were useful in this size domain. In Lewis et al. simulations, the embedded-atom method (EAM) was used to simulate the behavior of unsupported gold clusters for relatively long times ($``$ 10ns). Evidently, an important role of the substrate in the actual coalescence of supported clusters is to ensure thermalization, which is taken care of here by coupling the system to a fictitious “thermostat” . One therefore expects these coalescence events to be relevant to the study of supported clusters in the case where they are loosely bound to the substrate, e.g., gold clusters on a graphite substrate. Strong interaction of the clusters with the substrate may be complicated and lead to cluster deformation even for clusters deposited at low energies, for example if the cluster wets the substrate . Fig. 35 shows the evolution of the ratio $`x/R`$, where $`x`$ is the radius of the neck between the two particles. After an extremely rapid approach of the two clusters due to the mechanisms studied above (plastic deformation), a slow relaxation to the spherical shape starts (Fig. 36). The time scale for the slow sphericization process is difficult to estimate from Fig. 35, but it would appear to be of the order of a few hundred ns or more. This number is substantially larger than one would expect on the basis of phenomenological theories of the coalescence of two soft spheres. Indeed, Ref. predicts a coalescence time for two identical spheres $`\tau _c=k_BTR^4/(CD_s\gamma a^4)`$, where $`D_s`$ is again the surface diffusion constant, $`a`$ the atomic size, $`\gamma `$ the surface energy, $`R`$ the initial cluster radius, and $`C`$ a numerical constant ($`C=25`$ according to Ref. ); taking $`D_s\mathrm{5\; 10}^{10}m^2s^1`$ (the average value found in the simulations, see ), $`R=30`$ Å, $`\gamma 1Jm^2`$, and $`a=3`$ Å, this yields a coalescence time $`\tau _c`$ of the order of 40 ns. The same theories, in addition, make definite predictions on the evolution of the shape of the system with time. In particular, in the tangent-sphere model, the evolution of the ratio $`x/R`$ is found to vary as $`x/R(t/\tau _c)^{1/6}`$ for values of $`x/R`$ smaller than the limiting value $`2^{1/3}`$. In Fig. 35, the prediction of this simple model (full line) is compared with the results of the present simulations. 
There is no agreement between model and simulations. The much longer coalescence time observed has been attributed to the presence of facets on the initial clusters, which persist (and rearrange) during coalescence. The facets can be seen in the initial and intermediate configurations of the system in Fig. 36; the final configuration of Fig. 36 shows that the cluster is more spherical (at least from this viewpoint), and that new facets are forming. That diffusion is slow can in fact be seen from Fig. 36: even after 10 ns, at a temperature which is only about 200 degrees below melting for a cluster of this size, only very few atoms have managed to diffuse a significant distance away from the contact region. The precise role of the facets in the coalescence process is a subject of current interest. Experiments have shown that shape evolution is very slow in presence of facets for 3D crystallites (see for example ) and recent experiments and computer simulations on 2D islands suggest that the presence of facets can be effective in slowing down the coalescence process. Clearly, more work is needed to get a quantitative understanding of nanoparticles coalescence, and to evaluate the usefulness of Mullins’ approach, especially if one manages to include the crystalline anisotropy (see also Refs. ). ### C Island morphology Now I can turn on to the prediction of one of the essential characteristics of cluster films : the size of the supported particles. As I have already mentioned in the introduction, the size of the nanoparticles controls many interesting properties of the films. Therefore, even an approximate result may be useful, and this is what I obtain in this section. The experiments shown above demonstrate that the supported particles can have a variety of sizes, from that of the incident clusters ($`Sb_{2300}`$/HOPG, Section VI A) up to many times this size (for example $`Au_{250}`$/HOPG, Section VI B 3). To understand how the size of the supported particles is determined, one can look at a large circular island to which clusters are arriving by diffusion on the substrate (Fig. 32). There are two antagonist effects at play here. One is given by thermodynamics, which commands that the system should try to minimize its surface (free) energy. Therefore one expects the clusters touching an island to coalesce with it, leading to compact (spherical) domains. The other process, driving the system away from this minimization is the continuous arrival of clusters on the island edge. This kinetic effect tends to form ramified islands. What is the result of this competition? Since there is a kinetically driven ramification process, it is essential to take into account the kinetics of cluster-cluster coalescence, as sketched in the previous section. I will use Eq. 5 even if it is only approximate, to derive an upper limit for the size of the compact domains grown by cluster deposition. It is an upper limit since, as pointed out in the previous section, coalescence for facetted particles could be slower than predicted by Eq. 5, hence diminishing the actual size of these domains. We first need an estimate of the kinetics of the second process : the impinging of clusters on the big island. A very simple argument is used here (see also for a similar analysis for atomic growth) : since the number of clusters reaching the surface is $`F`$ per unit surface per second and the total number of islands is $`N_t`$ per unit surface, each island receives in average a cluster every $`t_r=N_t/F`$. 
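Before putting the coalescence time and the arrival time together, the neck-growth law itself (Eqs. 4-5) and the order-of-magnitude coalescence time quoted above can be checked with a few lines of code. This is only a sketch: the values of B, r and R are arbitrary illustrative numbers, and the material parameters in the last block are those quoted in the text for gold, taken here as assumptions.

```python
import numpy as np

def neck_radius(t, r, R, B, steps=20000):
    """Integrate Eq. 4, dl/dt = (B/l^3)(1 - l/R), from l(0) = r
    (explicit Euler; adequate for this smooth, monotonic solution)."""
    l, dt = r, t / steps
    for _ in range(steps):
        l += dt * (B / l**3) * (1.0 - l / R)
    return l

# Illustrative numbers only (nm and s); B plays the role of
# D_s*gamma*Omega^2*nu/(k_B T) of Eq. 2.
r, R, B = 1.0, 10.0, 50.0
for t in (0.01, 0.1, 1.0):
    # numerical solution vs. the closed form of Eq. 5, valid for l << R
    print(t, neck_radius(t, r, R, B), (r**4 + 4 * B * t) ** 0.25)

# Tangent-sphere estimate tau_c = k_B T R^4 / (C D_s gamma a^4), with the
# gold values quoted above: D_s ~ 5e-10 m^2/s, R = 30 A, gamma ~ 1 J/m^2,
# a = 3 A, C = 25, k_B T taken near 1000 K (~1.4e-20 J, an assumption).
kBT, C, Ds, gamma = 1.4e-20, 25.0, 5e-10, 1.0
R_m, a = 30e-10, 3e-10
print("tau_c ~", kBT * R_m**4 / (C * Ds * gamma * a**4), "s")  # ~1e-8 s
```

The last line gives a time of order 10 ns, i.e. the tens of nanoseconds quoted above, to be compared with the much longer sphericization times actually observed in the simulations.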
We are now in a position to quantify the degree of coalescence in a given growth experiment. Let me suppose that a cluster touches a large island at t=0. If no cluster impinges on the island before this cluster completely coalesces (in a time $`\tau _c`$ according to Eq. 5), then the islands are compact (circular). Instead, if a cluster touches the previous cluster before its total coalescence has taken place, it will almost freeze up the coalescence of the previous cluster. The reason is that now the atoms on the (formerly) outer surface of the first cluster do not feel curvature since they have neighbors on the second cluster. The mobile atoms are now those of the second cluster (see Fig. 32, label a) and the coalescence takes a longer time to proceed (the atoms are farther from the big island). Then, if $`t_r\ll \tau _c`$, the islands formed on the surface are ramified. For intermediate cases, the size $`R_c`$ of the compact domains can be estimated from Eq. 5 as $`R_c=x(\stackrel{~}{t}_r)`$ where $`\stackrel{~}{t}_r`$ takes into account the fact that, to freeze the coalescence of a previous cluster, one cluster has to touch the island at roughly the same point : $`\stackrel{~}{t}_r\approx t_r\,2\pi R/r`$ and $$R_c^4=r^4+4B\frac{2\pi R_c}{r}\frac{N_t}{F}$$ (6) Eq. 6 describes the limiting cases ($`B\to \infty `$ or $`B\to 0`$) correctly. The problem with the intermediate cases is to obtain a reliable estimate of the (average) atomic surface self-diffusion. For gold, Chang and Thiel give activation energies which vary between 0.02 eV on compact facets and 0.8 eV on more open surfaces. One solution is to go the other way round and estimate $`D_s`$ from the experimental data and Eq. 6. From Fig. 22, estimating $`R_c`$ from the thickness of the island arms, and using the experimental values for r (0.85 nm) and the fact that since the flux is pulsed (see Section VI B 3), the time between two successive arrivals of clusters is approximately the time between two pulses (0.1 s), and not $`N_t/F`$, one obtains $`D_s\approx 3\times 10^{-3}cm^2s^{-1}\mathrm{exp}(-0.69eV/(k_BT))`$, which seems a sensible value. Despite the difficulty of defining average diffusion coefficients, one can use Eq. 6 to obtain a reasonable guess for the size of the compact domains by assuming that $`D_s`$ is thermally activated : $`D_s(T)=D_0\mathrm{exp}(-E_a/(k_BT))`$ with a prefactor $`D_0=10^{-3}cm^2s^{-1}`$ and an activation energy $`E_a`$ taken as a fraction of the bonding energy between atoms (proportional to $`k_BT_f`$). One obtains $$B=10^{11}\mathrm{exp}(-4.6T_f/T)nm^4/s$$ (7) Inserting this value in Eq. 6 leads to Fig. 37 where the size of the compact domains is plotted as a function of $`T/T_f`$. The important feature is that as long as $`T/T_f\le 1/4`$, the incident particles do not merge. Note that this $`1/4`$ is sensitive to the assumed value of $`D_0`$, but only via its logarithm. Again, this estimation of $`R_c/r`$ is an upper limit since coalescence could be slower than predicted by Eq. 5. What would happen now if the incident clusters were liquid? An experimental example of this liquid coalescence is given by the deposition of $`In_{100}`$ on a-C (see above). A rough guess of the coalescence time is given by a hydrodynamics argument : the driving force of the deformation is the surface curvature $`\gamma /R^2`$ where $`\gamma `$ is the liquid surface tension and R the cluster radius.
This creates a velocity field which one can estimate using the Navier-Stokes equation : $`\eta \mathrm{\Delta }v=\gamma /R^2`$ where $`\eta `$ is the viscosity and v is the velocity of the fluid. This leads to $`\tau _c(liquid)R/v\eta R/\gamma `$. Inserting reasonable values for both $`\eta `$ (.01 Pa s) and $`\gamma `$ (1 J $`m^2`$) leads to $`\tau _c(liquid)0.01R`$ which gives $`\tau _c(liquid)=10ps`$ for $`R1nm`$. This is the good order of magnitude of the coalescence times found in simulations of liquid gold clusters ($`\tau _c(liquid)80ps`$ ). Now, since $`\tau _c(liquid)t_r`$ ($`t_r0.1s`$, see above) cluster-cluster coalescence is almost instantaneous, which would lead to $`R_c\mathrm{}`$. In fact, $`R_c`$ is limited in this case by static coalescence between the big islands formed during the growth. The reason is that the big islands may be solid or pinned by defects leading to a slow coalescence. The analysis is similar here to what has been done for atomic deposition . ### D Thick films The preceding section has studied the first stages of the growth, the submonolayer regime, which interests researchers trying to build nanostructures on the surface. I attempt here a shorter study of the growth of thick films, which are known to be very different from the bulk material in some cases . The main reason for this is their nanostructuration, as a random stacking of nanometer size crystallites. Therefore, it is interesting to understand how the size of these crystallites is determined and how stable the nanostructured film is. One can anticipate that the physical mechanism for cluster size evolution is, as in the submonolayer case, sintering by atomic diffusion. For thick films however, surface diffusion can only be effective before a given cluster has been ”buried” by the subsequent deposited clusters. Thus, most of the size evolution takes place during growth, for after the physical routes to coalescence (bulk or grain boundary diffusion) are expected to be much slower. Studies of compacted nanopowders have shown that nanoparticles are very stable against grain growth. Siegel explains this phenomenon in the following way. The two factors affecting the chemical potential of the atoms, and potentially leading to structure evolutions are local differences in cluster size or in curvature. However, for the relatively uniform grain size distributions and flat grain boundaries observed for cluster assembled materials , these two factors are not active, and there is nothing telling locally to the atoms in which direction to migrate to reduce the global energy. Therefore, the whole structure is likely to be in a deep local (metastable) minimum in energy, as observed in closed-cell foams. The stability of such structures has been confirmed by several computer simulations which have indicated a possible mechanism of grain growth at very high ($`T/T_f0.8`$) temperatures : grain boundary amorphization or melting . What determines the size of the supported particles during the growth? For thick films, a reasonable assumption is that a cluster impinging on a surface already covered by layer of clusters does not diffuse, because it forms strong bonds with the layer of the deposited clusters. This hypothesis has been checked for the growth of $`Sb_{2300}`$ on graphite . 
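Before adapting these estimates to thick films in the next paragraphs, here is a small numerical illustration of the submonolayer result, Eqs. 6-7 (the kind of curve shown in Fig. 37). It is only a sketch: the coalescence mobility B(T) is taken as the thermally activated form of Eq. 7, and the arrival time t_r and cluster radius r are assumptions inspired by the gold/HOPG numbers quoted above; as noted in the text, the position of the "no merging" threshold is sensitive to these choices (through their logarithm).

```python
import numpy as np

def compact_size(T_over_Tf, r=1.0, t_r=0.1):
    """Upper limit for the compact-domain radius R_c from Eq. 6,
    R_c^4 = r^4 + 4 B (2 pi R_c / r) t_r, solved by fixed-point
    iteration, with B(T) = 1e11 * exp(-4.6 Tf/T) nm^4/s (Eq. 7).
    r in nm, t_r (time between two arrivals on the same island) in s."""
    B = 1e11 * np.exp(-4.6 / T_over_Tf)
    Rc = r
    for _ in range(200):
        Rc = (r**4 + 4.0 * B * (2.0 * np.pi * Rc / r) * t_r) ** 0.25
    return Rc

for x in (0.15, 0.20, 0.25, 0.30, 0.40):
    print(f"T/Tf = {x:.2f}   R_c/r = {compact_size(x):.1f}")
```

For low reduced temperatures the iteration returns R_c/r close to 1 (juxtaposition of unchanged incident clusters), while above the threshold R_c grows rapidly, which is the qualitative content of Fig. 37.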
There are two main differences with submonolayer growth : first, an impinging cluster has more than one neighbor and the sphere-sphere kinetics is not very realistic; second the relevant time for ramification I have used in the preceding section is no more useful here since clusters do not move. As a first approximation, to obtain an upper limit in the size of the domains, we can use the same coalescence kinetics and take a different ”ramification” time : the average time for the arrival of a cluster touching another is roughly $`t_f1/(Fd^2)`$ where $`d=2r`$ is the diameter of the cluster. If the same formula (Eq. 5) is used, one finds $$R_{c}^{}{}_{}{}^{4}=r^4+\frac{B}{Fr^2}$$ (8) The results obtained using the same approximation as in the preceding section for $`B`$ (Eq. 7) are shown in Fig. 37. Experimentally, there are observations for deposition of Ni and Co clusters . The size of the crystallites is comparable to the size of the incident (free) clusters. This is compatible with Eq. 8 since the $`T_f`$ of these elements is very high ($`1800K`$). Therefore, Eq. 8 predicts that films grown at $`T=300K`$, ($`T/T_f0.17`$) should keep a nanostructuration with $`R_cr`$, as is observed experimentally . I stress again that a structure obtained with cluster deposition with this characteristic size is not likely to recrystallize in the bulk phase (thereby loosing its nanophase properties) unless brought to temperatures close to $`T_f`$ . ## VIII CONCLUSIONS, PERSPECTIVES What are the principal ideas presented in this paper? First, useful models to analyze the first stages of thin film growth by cluster deposition have been presented in detail (Section III). These models are useful at a fundamental level, and I have shown in Section VI how many experimental results concerning submonolayer growth can be interpreted by combining these few simple processes (deposition, diffusion, evaporation …). Specifically, by comparing the experimental evolution of the island density as a function of the number of deposited particles to the predictions of computer simulations, one can obtain quantitative information about the relevant elementary processes. Second, the quantitative information on diffusion has shown that large clusters can move rapidly on the surface, with diffusion constants comparable to the atomic ones. A first attempt to understand this high diffusivity at the atomic level is given in Section VII : the conclusion is that rapid cluster diffusion might be quite common, provided the cluster and the substrate do not find an epitaxial arrangement. Concerning cluster-cluster coalescence, it has been suggested that this process can be much slower than predicted by the usual sintering theories , probably because of the cluster facets. Third, despite all the approximations involved in its derivation, Fig. 37 gives an important information on the morphology of the film : an upper limit for the ratio of the size of the compact domains over the size of the incident clusters. This helps understanding why cluster deposition leads to nanostructured films provided the deposition temperature is low compared to the fusion temperature of the material deposited ($`T_sT_f/4`$). Clearly, further experimental and theoretical work is needed in order to confirm (or invalidate) Fig. 37. Ii is clear that we still need to understand many aspects of the physics of cluster deposition. Possible investigation directions include the following, given in an arbitrary order. 
First, the coalescence of nanoparticles has yet to be understood and quantified. This is a basic question for both submonolayer and thick materials. Second, one has to characterize better the interactions between clusters and the substrate, and especially its influence on the cluster diffusion. It is also important to investigate the possible interactions between the clusters, which could dramatically affect the growth. Obtaining ordered arrays of nanoparticles is a hot topic at this moment. A possibility is the pinning of the clusters on surface ”defects” which demands a better understanding of cluster interaction with them. Another idea is to use the self-organization of some bacteria to produce an ordered array on which one could arrange the clusters (See Ref. , especially Chap. 5). Clearly, investigating the interaction of clusters with biological substrates is not an easy task, but it is known that practical results are not always linked to a clear understanding of the underlying mechanisms … Acknowledgments : This article could never be written without all the experimental and theoretical work carried out in our group in Lyon and in collaboration with other groups. On the experimental side, the Center for the study of small clusters (”Centre pour l’étude des petits agrégats”) gathers researchers from solid state physics (”Département de Physique des Matériaux”, DPM), the gas phase (”Laboratoire de Spectrométrie Ionique et Moléculaire”, LASIM) and catalysts (”Institut de Recherche sur la Catalyse”, IRC). I therefore acknowledge all their researchers for their help, and especially those who have done part of the research presented here : Laurent Bardotti, Michel Broyer, Bernard Cabaud, Francisco J. Cadete Santos Aires, Véronique Dupuis, Alain Hoareau, Michel Pellarin, Brigitte Prével, Alain Perez, Michel Treilleux and Juliette Tuaillon. On the theoretical side, this work was carried out in collaboration with Jean-Louis Barrat, Pierre Deltour and Muriel Meunier (DPM, Lyon), my friend Hernán Larralde (Instituto de Física de Cuernavaca, Mexico), Laurent Lewis (Université de Montréal, Canada) and Alberto Pimpinelli (Université Blaise Pascal Clermont-2, France). I am happy to thank Claude Henry (CRMC2, Marseille) and Horia Metiu (University of California) for a careful reading of the manuscript, Jean-Jacques Métois (CRMC2, Marseille) for interesting discussions, Michael Moseler (Freiburg University) for sending me Fig. 3 and Simon Carroll for Fig. 20. e-mail address : jensen@dpm.univ-lyon1.fr Appendix A Regimes predicted by rate-equations calculations for the growth of 2D islands with evaporation. These predictions agree with the computer simulations presented in this paper and are relevant for both cluster and atomic deposition (see Ref. for more details). I will here rapidly recall how the rate-equations can be written , and then turn on to the different regimes which can be derived from them. The rate equation describing the time evolution of the density $`\rho `$ of monomers on the surface is, to lowest relevant orders in F: $$\frac{d\rho }{dt}=F(1\theta )\frac{\rho }{\tau _e}F\rho 2\sigma _o\rho \sigma _iN_t$$ (9) The first term on the right hand size denotes the flux of monomers onto the island free surface, ($`\theta `$ is the island coverage discussed below). The second term represents the effect of evaporation, i.e. monomers evaporate after an average time $`\tau _e`$. 
The third term is due to the possibility of losing monomers by effect of direct impingement of a deposited monomers right beside a monomer still on the surface to form an island. This “direct impingement” term is usually negligible, and indeed will turn out to be very small in this particular equation, but the effect of direct impingement plays a crucial role in the kinetics of the system in the high evaporation regimes. The last two terms represent the loss of monomers by aggregation with other monomers and with islands respectively. The factors $`\sigma _o`$ and $`\sigma _i`$ are the “cross sections” for encounters and are calculated in Refs. . The number $`N_t`$ of islands will be given by: $$\frac{dN_t}{dt}=F\rho +\sigma _o\rho $$ (10) where the first term represents the formation of islands due to direct impingement of deposited monomers next to monomers already on the surface, and the second term accounts for the formation of islands by the encounter of monomers diffusing on the surface. For the island coverage $`\theta `$ i.e. the area covered by all the islands per unit area, one has: $$\frac{d\theta }{dt}=2\left[F\rho +\sigma _o\rho \right]+\sigma _iN_t+JN_t$$ (11) The term in brackets represents the increase of coverage due to formation of islands of size 2 (i.e. formed by two monomers) either by direct impingement or by monomer-monomer aggregation. The next term gives the increase of coverage due to the growth of the islands as a result of monomers aggregating onto them by diffusion, and the last term represents the growth of the islands due to direct impingement of deposited monomers onto their boundary, or directly on the island. This last term depends on $`X_S^{}`$, the desorption length of monomers diffusing on top of the islands . In all the simulations presented here (Section III), I have taken $`X_S^{}=0`$. The total surface coverage is given by $`\theta +\rho \theta `$ except at very short times. The cross sections can be evaluated in the quasistatic approximation, which consists in assuming that $`R`$ does not vary in time and that the system is at a steady state. One finds $$\sigma _i=2\pi RD\left(\frac{dP}{dr}\right)_{r=R}=2\pi D\rho \left(\frac{R}{X_S}\right)\frac{K_1(R/X_S)}{K_0(R/X_S)}$$ (12) The cross section for monomer-monomer encounters $`\sigma _o`$ is obtained from the same formula substituting $`R`$ by the monomer radius, and $`D`$ by $`2D`$ as corresponds to relative diffusion. After some additional approximations, one finds three principal regimes which are spanned as the evaporation time $`\tau _e`$ decreases. They have been called : complete condensation regime where evaporation is not important, diffusion regime where islands grow mainly by diffusive capture of monomers and finally direct impingement regime where evaporation is so important that islands can grow only by capturing monomers directly from the vapor. Within each of these regimes, there are several subregimes characterized by the value of $`X_S^{}`$. I use $`ł_{CC}(F\tau )^{1/6}`$, the island-island distance at saturation when there is no evaporation and $`R_{sat}`$ as the maximum island radius, reached at the onset of coalescence. 
complete condensation $`X_Sł_{CC}`$ $$N_{sat}F^{1/3}\tau ^{1/3}\mathrm{for}\mathrm{any}\mathrm{X}_\mathrm{S}^{}$$ (13) diffusive growth $`1X_Sł_{CC}`$ $$N_{sat}\{\begin{array}{cc}(FX_S^2\tau _e)^{2/3}(X_S+X_S^{})^{2/3}\hfill & \text{if }X_S^{}R_{sat}\text{ (a)}\hfill \\ & \\ F\tau _eX_S^2\hfill & \text{if }X_S^{}R_{sat}\text{ (b)}\hfill \end{array}$$ (14) with $`R_{sat}(X_S+X_S^{})^{1/3}(FX_S^2\tau _e)^{1/3}`$, which gives for the crossover between regimes (a) and (b) : $`X_S^{}(crossover)(FX_S^2\tau _e)^{1/2}`$. direct impingement growth $`X_S1`$ $$N_{sat}\{\begin{array}{cc}(F\tau _e)^{2/3}\hfill & \text{if }X_S^{}1\text{ (a)}\hfill \\ & \\ (F\tau _e)^{2/3}X_{S}^{}{}_{}{}^{2/3}\hfill & \text{if }1X_S^{}R_{sat}\text{ (b)}\hfill \\ & \\ F\tau _e\hfill & \text{if }X_S^{}R_{sat}\text{ (c)}\hfill \end{array}$$ (15) with $`R_{sat}(F\tau _e)^{1/3}X_{S}^{}{}_{}{}^{1/3}`$, which gives for the crossover between regimes (a) and (b) : $`X_S^{}(crossover)(F\tau _e)^{1/2}`$. Appendix B I present here the summary of the different limits of growth of 3d islands in presence of evaporation and/or defects. These results are derived in detail in Ref. from the resolution of rate-equations similar to those presented in Appendix A. For each regime, I give in the order the saturation island density $`N_{sat}`$, the thickness at saturation $`e_{sat}`$ (i.e. the thickness when the island density first reaches its saturation value), the thickness at coalescence $`e_c`$ (i.e. the thickness when the island density starts to decrease due to island-island coalescence), and the scaling kinetics of the mean island radius as a function of time before the saturation island density is reached. I use $`l_{CC}=(F\tau )^{1/7}`$ for 3d islands and $`X_S=\sqrt{\tau _e/\tau }`$. Clean substrate (no defects) high evaporation : $`X_Sl_{CC}l_{def}`$ $`N_{sat}[F\tau _e(1+X_s^2)]^{2/3}`$ $`e_{sat}e_c[F\tau _e(1+X_s^2)]^{1/3}`$ $`RFt`$ low evaporation : $`l_{CC}X_Sl_{def}`$ or $`l_{CC}l_{def}X_S`$ $`N_{sat}\left(\frac{F}{D}\right)^{2/7}`$ $`e_{sat}e_c\left(\frac{D}{F}\right)^{1/7}`$ $`R(FDt^2)^{1/9}t^{2/9}`$ Dirty substrate (many defects) high evaporation : $`X_Sl_{def}l_{CC}`$ $`N_{sat}c`$ $`e_{sat}\frac{1}{[1+X_s^2]}`$ $`e_c\frac{1}{c^{1/2}}`$ $`RFt`$ low evaporation : $`l_{def}X_Sl_{CC}`$ or $`l_{def}l_{CC}X_S`$ $`N_{sat}c`$ $`e_{sat}c`$ $`e_c\frac{1}{c^{1/2}}`$ $`R\left(\frac{Ft}{c}\right)`$ for $`tc/F`$, i.e. before saturation $`R(Ft/c)^{1/3}`$ between saturation and coalescence ($`c/Ft1/Fc^{1/2}`$). Table I Principal symbols and terms used in this paper. The natural length unit in the model corresponds to the mean diameter of an incident cluster. 
| Symbols and terms | Units, Remarks | | --- | --- | | island | structure formed on the surface by aggregation of clusters | | $`n`$ | number of atoms of the cluster | | $`d`$ | cluster diameter in nm, $`d=d_0n^{1/3}`$ where $`d_0`$ depends on the element | | site | area occupied by a cluster on the surface site=$`\pi d^2/4`$ | | ML | monolayer : the amount of matter needed to cover uniformly the | | | substrate with one layer of clusters (1 cluster per site) | | F | Impinging flux expressed in monolayers (or clusters per site) per second | | $`\tau `$ | Diffusion time : mean time needed for a cluster to make a ”jump” | | | between two sites (in seconds) | | $`\tau _e`$ | evaporation time : mean time before a monomer evaporates from the surface | | $`X_S`$ | mean diffusion length on the substrate before desorption : $`X_S=\sqrt{D\tau _e}`$ | | $`\varphi `$ | Normalized flux ($`\varphi =F\tau `$) expressed in clusters per site | | D | Diffusion coefficient expressed in $`cm^2s^1`$ (D=site/4$`\tau `$) | | e | mean thickness of the film, $`e=Ft`$ where $`t`$ is the deposition time | | $`\theta `$ | coverage; fraction of the substrate covered by the clusters | | $`N_t`$ | island density on the surface, expressed per site | | $`N_{sat}`$ | saturation (maximum) island density on the surface, expressed per site | | $`\rho `$ | density of isolated clusters on the surface, expressed per site | | $`C_{sat}`$ | condensation coefficient (ratio of matter actually present on | | | the substrate over the thickness) at saturation | | $`ł_{CC}`$ | the island-island distance at saturation when there is no evaporation |
# Growth and relations in graded rings ## 1 Introduction Let $`k`$ be a field. We will call a vector $`k`$–space, a $`k`$–algebra, or a $`k`$–algebra module graded, if it is $`𝐙_+`$–graded and finite–dimensional in every component. For such a space $`V`$ (in particular, $`V`$ may be an algebra or a module) we denote by $`V(x)`$ its Hilbert series $`\sum _{i\ge 0}\text{dim }V_ix^i`$. A graded algebra $`A=A_0\oplus A_1\oplus \dots `$ is called connected, if its zero component $`A_0`$ is $`k`$; a connected algebra is called standard, if it is generated by $`A_1`$ and a unit. The word ”algebra” below denotes an associative graded algebra. All inequalities between Hilbert series are coefficient–wise, i. e., we write $`\sum _ia_it^i\le \sum _ib_it^i`$ iff $`a_i\le b_i`$ for all $`i\ge 0`$. We are interested in the following situation. Suppose $`A`$ is a graded algebra, $`\alpha \subset A`$ is a subset consisting of homogeneous elements such that $`\alpha `$ minimally generates an ideal $`I\subset A`$, and $`B=A/I`$. What can we say about the relations between the Hilbert series $`A(t),B(t)`$, and the generating function $`\alpha (t)`$? If $`A`$ is a free associative algebra, then a partial answer is given by the Golod–Shafarevich theorem (see [GSh] or Corollary 2 below). In particular, it follows that if the number of elements of $`\alpha `$ of every degree is sufficiently small, then $`B`$ is infinite-dimensional [GSh] (moreover, if in this case $`A`$ is standard, then $`B`$ grows exponentially [P]). On the other hand, V. E. Govorov (see [Gov] or Theorem 5 below) gives some estimates for the number of relations $`\mathrm{\#}\alpha `$, whenever $`B(t)`$ is known (if all elements of $`\alpha `$ have the same degree). Our first goal is to demonstrate that the equality cases in the Golod–Shafarevich theorem and in both of Govorov’s inequalities coincide (Theorem 6). Sets $`\alpha `$ such that equality holds are called strongly free [A1], or inert [HL]; they are non-commutative analogues of regular sequences in commutative ring theory [A1], [Gol]. If $`A`$ is free, then a set $`\alpha \subset A`$ of homogeneous elements is strongly free iff $$\text{gl. dim }A/id(\alpha )\le 2.$$ In the general case of an arbitrary connected algebra $`A`$, the property of being strongly free is defined by certain conditions on Hilbert series [A1]. E. g., for every set of homogeneous elements $`\alpha \subset A`$ the following inequality holds: $$(B\ast k\langle \alpha \rangle )(t)\ge A(t);$$ the equality holds iff $`\alpha `$ is strongly free (where the star denotes a free product of algebras). As for our question in the case of an arbitrary algebra $`A`$, we need to consider an asymptotic characteristic of the algebra’s growth. General graded algebras (even finitely generated ones) may have exponential growth, so, for such an algebra $`A`$, $`\text{GK–dim }A=\infty `$. Let us introduce the following analogue of Gelfand–Kirillov dimension: if $`a_i=\text{dim }A_i`$, then define the exponent of growth of $`A`$ by $$p(A)=inf\{q>0\,|\,\exists c>0\;\forall n\ge 0:\;a_n\le cq^n\}.$$ At least for a finitely generated algebra $`A`$, $`p(A)`$ is finite. If $`A(t)`$ is known, it is clear how to compute $`p(A)`$: if $`r(A)`$ is the radius of convergence of the series $`A(t)`$, then $$p(A)=\overline{lim}\sqrt[n]{a_n}=r(A)^{-1}.$$ Notice that $`p(A)`$ depends on the grading on $`A`$ (for example, if $`A=k\langle x,y\rangle `$ with $`\text{deg }x=\text{deg }y=d`$, then $`p(A)=\sqrt[d]{2}`$ depends on $`d`$), so it is not a “dimension” in the usual sense. It is proved in [A2] that over a field of zero characteristic there is no algorithm to decide whether or not a given quadratic subset of a standard free algebra is strongly free. Using this fact and our criteria for strongly free sets, we prove in Section 4 the following. Let $`R`$ be a finitely presented standard $`s`$–generated algebra with a set of relations $`\alpha `$. Then for some particular $`s`$ and some rational numbers $`q,r`$ there is no algorithm to decide, when $`\alpha `$ is given, whether or not $`R(q)=r`$. Moreover, for some integer $`n`$ there is no algorithm to decide, when $`\alpha `$ is given, whether or not $`p(R)=n`$. It means that even in the simplest case, when the algebra $`A`$ is free and standard, the asymptotical version of our question is undecidable in general. In Section 5 below we introduce the concept of an extremal algebra: specifically, a graded algebra $`A`$ is called extremal, if for any proper quotient $`B=A/I`$ $$p(B)<p(A).$$ If $`A`$ is an extremal algebra, then it is prime (Theorem 13), and $`p(A)`$ is finite (Proposition 12). Any free product of two connected algebras is extremal (Theorem 15), as is any algebra that includes a strongly free set (Corollary 17). Using extremality, we generalize Govorov’s inequality to an arbitrary connected algebra $`A`$ (Theorem 19): if $`A(t)`$ and $`B(t)`$ are known, we obtain an estimate for the generating function $`\alpha (t)`$. Also, we find a new characterization of strongly free sets in terms of the algebras’ growth (Proposition 18, Theorem 19): if a set $`\alpha `$ is strongly free, then not only the Hilbert series $`A(t)`$, but also the exponent of growth $`p(A)`$ is as small as possible. This characterization generalizes that of D. Anick [A1, theorem 2.6]. I am grateful to Professor E. S. Golod for fruitful discussions.
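As a small computational aside (not part of the argument of this paper), the exponent of growth can be estimated directly from the first coefficients of a Hilbert series. The sketch below, in Python, uses a crude finite truncation of the limsup; the free algebra on two generators of degree d, for which p(A) = 2^(1/d) as noted above, serves as a check.

```python
def growth_exponent_estimate(coeffs, tail=20):
    """Crude finite estimate of p(A) = limsup a_n^(1/n) from the first
    Hilbert-series coefficients a_0, a_1, ..., using the last `tail`
    nonzero terms.  This is only an approximation of the limsup."""
    vals = [a ** (1.0 / n) for n, a in enumerate(coeffs) if n > 0 and a > 0]
    return max(vals[-tail:])

# Free algebra k<x, y> with deg x = deg y = d:
# dim A_n = 2^(n/d) when d divides n, and 0 otherwise, so p(A) = 2^(1/d).
def free_algebra_coeffs(d, n_max):
    return [2 ** (n // d) if n % d == 0 else 0 for n in range(n_max + 1)]

for d in (1, 2, 3):
    est = growth_exponent_estimate(free_algebra_coeffs(d, 60))
    print(d, est, 2 ** (1.0 / d))
```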
It is proved in \[A2\] that over a field of zero characteristic there is no algorithm to decide whether or not a given quadratic subset of standard free algebra is strongly free. Using this fact and our criteria for strongly free sets, we proved in Section 4 the following. Let $`R`$ be a finitely presented standard $`s`$–generated algebra with a set of relations $`\alpha `$. Then for some particular $`s,\alpha `$ for some rational numbers $`q,r`$ there is no algorithm to decide, when $`\alpha `$ is given, whether or not $`R(q)=r`$. Moreover, for some integer $`n`$ there is no algorithm to decide, when $`\alpha `$ is given, whether or not $`p(R)=n`$. It means that in the simplest case that the algebra $`A`$ is free and standard, even the asymptotical version of our question is undecidable in general. In Section 5 below we introduce a concept of extremal algebra: specifically, a graded algebra $`A`$ is called extremal, if for any proper quotient $`B=A/I`$ $$p(B)<p(A).$$ If $`A`$ is an extremal algebra, then it is prime (Theorem 13), and $`p(A)`$ is finite (Proposition 12). Any free product of two connected algebras is extremal (Theorem 15), as is any algebra that includes a strongly free set (Corollary 17). Using extremality, we generalize Govorov’s inequality for an arbitrary connected algebra $`A`$ (Theorem 19): if $`A(t)`$ and $`B(t)`$ are known, we obtain an estimate for the generating function $`\alpha (t)`$. Also, we find a new characterizing of strongly free sets in terms of algebras’ growth (Proposition 18, Theorem 19): if a set $`\alpha `$ is strongly free, then not only the Hilbert series $`A(t)`$, but also the exponent of growth $`p(A)`$ is as small as possible. This characterizing generalizes the one of D. Anick \[A1, theorem 2.6\]. I am grateful to Professor E. S. Golod for fruitful discussions. ## 2 Golod–Shafarevich theorem and homology Let $`R=R_0R_1\mathrm{}`$ be a standard associative algebra. Suppose a set $`\{x_1,\mathrm{},x_s\}`$ is a basis in the space $`R_1`$. Then $`R=F/I`$, where $`F=kx_1,\mathrm{},x_s=T(R_1)`$ is a free associative algebra, $`IF`$ is a two–sided ideal generated by homogeneous elements of degrees $``$ 2. Let $`\alpha =\{f_1,f_2,\mathrm{}\}`$ be a minimal system of generators of $`I`$, and let $`u=k\alpha I/PI+IP`$ be a vector space generated by $`\alpha `$ (here $`P=F_1F_2\mathrm{}`$ is an augmentation ideal of $`F`$). Suppose $$0k\stackrel{d_0}{}RH_0\stackrel{d_1}{}RH_1\stackrel{d_2}{}\mathrm{}$$ (1) be a minimal free left $`R`$–resolution of $`k`$; here $`H_i\text{Tor }_i^R(k,k)=H_i(R),`$ i. e. $$H_0=k,H_1R_1,H_2u$$ etc. Let $`\mathrm{\Omega }^i=\text{Coker }d_{i1}`$ be the $`i`$–th syzygy module. Since the Hilbert series of a tensor product is a product of the factors’ Hilbert series, using the Euler formula for the exact sequence $$0k\stackrel{d_0}{}RH_0\stackrel{d_1}{}\mathrm{}\stackrel{d_i}{}RH_i\mathrm{\Omega }^i0$$ we obtain the following ###### Proposition 1 There is an equality of formal power series $$R(x)(1sx+u(x)H_3(x)+\mathrm{}+(1)^iH_i(x))=1+(1)^i\mathrm{\Omega }^i(x).$$ In particular, there are inequalities $$R(x)(1H_0(x)+\mathrm{}H_{2i1}(x))1,$$ $$R(x)(1H_0(x)+\mathrm{}+H_{2i}(x))1;$$ equalities hold iff $`\text{gl. dim }R2i1`$ (respectively, $`\text{gl. dim }R2i`$). ###### Corolary 2 (Golod–Shaferevich theorem) There is an inequality $$R(x)(1sx+u(x))1.$$ (2) Equality holds iff $`\text{gl. dim }R2`$. The inequality is proved in \[GSh\], the equality condition is proved in \[A1\]. 
A non-empty set $`\alpha `$ such that the equality above holds is called strongly free, or inert in $`F`$; since in our case $`\alpha `$ is not empty, these properties are equivalent to the equality $`\text{gl. dim }R=2`$. The equality $`R(x)(1sx+u(x))=1\mathrm{\Omega }^3(x)`$ shows that for a finitely presented algebra $`R`$ we can compute the Hilbert series $`R(x)`$ whenever $`\mathrm{\Omega }^3(x)`$ is known. So it is interesting to study the module $`\mathrm{\Omega }^3`$. ###### Proposition 3 Denote by $`L=F\alpha F`$ the left ideal generated by $`\alpha `$. There is an isomorphism of left $`R`$–modules $$\mathrm{\Omega }^3\text{Tor }_1^F(R,P/L).$$ Proof Suppose $`\{u_1,\mathrm{},u_s|\text{deg }u_i=1\}`$ is a basis of the space $`H_1`$ and $`\{r_1,r_2,\mathrm{}|\text{deg }r_i=\text{deg }f_i\}`$ is a basis of $`H_2`$. Then we may assume that the map $`d_2`$ in the resolution (1) is given by the following way: if $`f_i=\underset{j=1}{\overset{n}{}}a_i^jx_j,`$ then $`d_2(r_i)=\underset{j=1}{\overset{n}{}}\overline{a_i^j}u_j,`$ where the overbar denotes the image of an element of $`F`$ in $`R`$. On the other hand, since any left ideal in a free algebra is a free module, taking the long sequence of graded $`\text{Tor }_{}^F(R,P/L)`$ we obtain: $$0\text{Tor }_1^F(R,P/L)R\underset{F}{}L\stackrel{\varphi }{}R\underset{F}{}PR\underset{F}{}P/L0.$$ Here $`R\underset{F}{}LRu,`$ $`R\underset{F}{}PRR_1.`$ So the map $`\varphi `$ induced by the inclusion $`LP`$ coinsides with the map $`d_2`$ above. Therefore, $$\text{Tor }_1^F(R,P/L)\text{Ker }d_2\mathrm{\Omega }^3.$$ ###### Corolary 4 The non-empty set $`\alpha `$ is strongly free iff $`\text{Tor }_1^F(R,P/L)=0`$. Remark All results of this section hold for an arbitrary connected algebra $`R`$. The only change is to replace in all formulae the term $`sx`$ by a generating function in the algebra’s generators $`_{i1}t^{\text{deg }x_i}`$. ## 3 Estimates for the number of relations We keep the notation of the previous section. Suppose that the ideal $`I`$ is minimally generated by $`t`$ ($`t>0`$) elements of degree $`l`$, i. e., $`\alpha =\{f_1,\mathrm{},f_t|\text{deg }f_i=l\}`$. Let $`a_i`$ denote the dimension of the space $`R_i`$: $$R(x)=\underset{i0}{}a_ix^i.$$ In the situation above, V. E. Govorov proved the following theorem. ###### Theorem 5 (\[Gov\]) The series $`R(x)`$ converges for $`x=s^1`$. The following inequalities hold: $$t\frac{s^l}{R(s^1)}$$ (3) and $$t\frac{sa_{n1}a_n}{a_{nl}}$$ (4) for all possible $`n`$. Equality holds in (4) for all possible $`n`$ if and only if $`\text{Tor }_1^F(R,P/L)=0`$. The following theorem shows that the equality hold simultaneously in (3), (4), and the Golod–Shafarevich theorem. ###### Theorem 6 The following conditions are equivalent: (i) The set $`\alpha `$ is strongly free. (ii) Equality holds in (3). (iii) Equality holds in (4) for all possible $`n`$. Proof The implications $`(i)(iii)`$ follow immediately from Corollary 4 and Theorem 5. Let us prove $`(i)(ii)`$. Let us take $`x=s^1`$ in (2). Since $`u(x)=tx^l,`$ we have the inequality $$R(s^1)ts^l1,$$ which is equivalent to (3). So, if $`\alpha `$ is strongly free, then equality holds in (3). Conversely, if $`\alpha `$ is not strongly free, then the inequality of formal power series (2) is strict, i. e., the following inequality holds $$R(x)(1sx+tx^l)1+ax^n,$$ where $`n0,a>0`$. Put $`x=s^1`$. We obtain $$R(s^1)ts^l1+as^n>1,$$ or $$t>\frac{s^l}{R(s^1)}.$$ So, the inequality (3) is strict too. 
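The same two monomial examples also illustrate Theorem 6 numerically (again an editorial illustration; the truncation orders are arbitrary): for the strongly free relation xy both of Govorov's bounds (3) and (4) are saturated, whereas for x² bound (3) stays strictly below t and bound (4) drops below t for n ≥ 3.

```python
# Editorial illustration of Govorov's bounds (3) and (4) for R = (free algebra on x, y)/(w),
# a single monomial relation w of degree l, with s = 2 generators and t = 1 relation.
from itertools import product

def quotient_dims(relation, nmax):
    return [sum(1 for w in product("xy", repeat=n) if relation not in "".join(w))
            for n in range(nmax + 1)]

def govorov_bounds(relation, nmax=18):
    s, l, t = 2, len(relation), 1
    a = quotient_dims(relation, nmax)
    # bound (3): t >= s^l / R(1/s); here R(1/s) is approximated by a truncated sum
    R_at_inv_s = sum(a[n] / s ** n for n in range(nmax + 1))
    bound3 = s ** l / R_at_inv_s
    # bound (4): t >= (s*a_{n-1} - a_n) / a_{n-l}, shown for n = l, ..., 9
    bound4 = [round((s * a[n - 1] - a[n]) / a[n - l], 3) for n in range(l, 10)]
    print(relation, " t =", t, " bound (3) ~", round(bound3, 3), " bound (4):", bound4)

if __name__ == "__main__":
    govorov_bounds("xy")  # both bounds come out at t = 1: the equality case of Theorem 6
    govorov_bounds("xx")  # bound (3) < 1, and bound (4) falls below 1 for n >= 3
```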
## 4 Radii of convergence of Hilbert series: non-existence of algorithms For a graded algebra $`A`$, let $`r(A)`$ denotes the radius of convergence of the series $`A(t)`$. The following properties of radii of convergence are clear and well known. ###### Proposition 7 Let $`A`$ be a graded algebra. (i) $`r(A)=\mathrm{}`$ iff $`A`$ is finite-dimensional. (ii) If $`A`$ is finitely generated, then $`r(A)>0`$. (iii) $`r(A)=1`$ iff $`A`$ is not finite-dimensional and $`A`$ has sub-exponential growth. (iv) If $`B`$ is either a subalgebra or a quotient algebra of A with the induced grading, then $`r(B)r(A)`$. (v) If $`r(A)>0,`$ then $`\underset{tr(A)}{lim}A(t)=\mathrm{}`$. So, if $`A`$ is connected, then the function $`f(t)=A(t)^1`$ is continuous on $`[0,r(A)]`$ with $`f(0)=1,f(r(A))=0.`$ Under the notation of previous sections, let $`l=2`$. It is shown by D. Anick \[A2, Theorem 3.1\] that over a field of zero characteristic for some positive integers $`s`$ and $`t`$, there is no algorithm which, when given a set $`\alpha F`$ of $`t`$ homogeneous quadratic elements, always decides in a finite number of steps whether or not $`\alpha `$ is strongly free. We will call such a pair of integers $`(s,t)`$ undecidable. ###### Lemma 8 Let $`l`$ be a positive integer, and let $`(s,t)`$ be an undecidable pair. Then the pair $`(s+l,t)`$ is undecidable. Proof Let $`G=Fkx_{s+1},\mathrm{},x_{s+l}`$ be a free algebra of rank $`s+l`$. Then a set $`\alpha F`$ is strongly free in $`F`$ iff it is strongly free in $`G`$. So there is no algorithm to recognize it. ###### Lemma 9 Let $`s`$ be an even integer. If the pair $`(s,t)`$ is undecidable, then the pair $`(2s,s^2/4)`$ is undecidable too. Proof By \[A4\], a quadratic strongly free set in $`F`$ consting of $`q`$ elements does exist iff $`4qs^2`$. So $`ts^2/4`$. Let $`\beta `$ is a quadratic strongly free set in the algebra $`G=kx_{s+1},\mathrm{},x_{2s}`$ consisting of $`s^2/4t`$ elements. Then the set $`\alpha \beta `$ is strongly free in the algebra $`FG=kx_1,\mathrm{},x_{2s}`$ if and only if the set $`\alpha `$ is strongly free in $`F`$. ###### Corolary 10 For large enough integer $`d`$, the pair $`(4d,d^2)`$ is undecidable. ###### Theorem 11 Let $`\text{char }k=0`$. Let us denote by $`F_s`$ the free associative algebra of rank $`s`$ with standard grading. (i) Let $`s,t`$ be an undecidable pair of integers. Then there is no algorithm which, when given a set $`\alpha F_s`$ of $`t`$ homogeneous quadratic elements, always decides whether or not the equality $$R(s^1)=\frac{s^2}{t}$$ holds, where $`R=F_s/\text{id }(\alpha )`$. (ii) For some positive integers $`s`$, $`t`$, and $`q`$, there is no algorithm which, when given a set $`\gamma F_s`$ of $`t`$ homogeneous quadratic elements, always decides whether or not the equality $$r(R)=q^1$$ holds, where $`R=F_s/\text{id }(\gamma )`$. For large enough integer $`d`$, we can put $`s=64d`$, $`t=241d^2`$, and $`q=60d`$. Proof The statement (i) follows from Theorem 6 and Corollary 10. To prove (ii), let $`d`$ be an integer such that the pair $`(60d,15^2d^2)`$ is undecidable. Let $`\alpha F_{60d}`$ be a quadratic set consisting of $`15^2d^2`$ elements, and let $`B=F_{60d}/\text{id }(\alpha )`$. Let us denote by $`A`$ the standard algebra $`k\{1,y_1,\mathrm{},y_{4d}\}=ky_1,\mathrm{},y_{4d}|y_iy_j=0,1i,j4d`$. Then the algebra $`C=kx_1,\mathrm{}x_{64d}/\text{id }(\gamma )`$ is isomorphic to $`AB,`$ where $`\gamma =\alpha \{x_ix_j=0|60d+1i,j64d\}`$. 
So $`C`$ is an algebra with $`s=64d`$ generators and $`t=(4d)^2+15^2d^2=241d^2`$ quadratic relations. It is sufficient to proof that the set $`\alpha `$ is strongly free in $`F_{60d}`$ if and only if $`r(C)=(60d)^1`$. We have $$C^1(x)=B^1(x)+A^1(x)1=B^1(x)+\frac{1}{1+4dx}1=B^1(x)\frac{4dx}{1+4dx}.$$ If $`\alpha `$ is strongly free in $`F_{60d}`$, then by Corollary 2 we have $`B^1(x)=160dx+225d^2x^2,`$ so $$C^1(x)=160dx+225d^2x^2\frac{4dx}{1+4dx}=\frac{(160dx)(115d^2x^2)}{1+4dx}.$$ By Proposition 7, $`(v)`$, we obtain $`r(C)=(60d)^1`$. Now let $`\alpha `$ is not strongly free in $`F_{60d}`$. By Theorem 6, we have $`B^1((60d)^1)<225d^2/(60d)^2=1/16`$. So $$C^1((60d)^1)=B^1((60d)^1)\frac{4d(60d)^1}{1+4d(60d)^1}<1/16\frac{1}{1+1/(4d(60d)^1)}=0.$$ We obtain $`r(C)(60d)^1`$, contradicting Proposition 7, $`(v)`$. ## 5 Algebras of extremal growth Now we introduce the following concept. ###### Definition 1 A graded algebra $`A`$ is said to be extremal, if for every nonzero homogeneous ideal $`IA`$ we have $`r(A/I)>r(A)`$. We will discuss some properties of extremal algebras. ###### Proposition 12 If $`A`$ is an extremal algebra, then $`0<r(A)<\mathrm{}`$. Proof Let us proof the other inequality. Suppose that $`r(A)=0`$. Let $`aA`$ be a nonzero homogeneous element of degree $`d>0`$, let $`I`$ be an ideal generated by $`a`$, and let $`B=A/I`$. Since $`A`$ is extremal, $`r(B)>0`$. Let $`F=kc|\text{deg }c=d`$. By the obvious inequality of Hilbert series $$(BF)(t)A(t),$$ $`r(BF)`$ must be equal to 0. On the other hand, $$(BF)(t)^1=b(t)^1dt,$$ where $`b(t)`$ is equal to $`B(t)`$ (resp., $`B(t)+1`$), if $`B`$ is unitary (resp., non-unitary, i. e. $`B_0=0`$). Since the right side is an analytical function in a neighborhood of zero, and this function takes $`0`$ into $`1`$, then in a neighborhood of zero its image does not contain $`0`$. So the function $`(BF)(t)`$ is analitycal in a neighborhood of zero. Therefore $`r(BF)>0`$. ###### Theorem 13 Let $`A`$ be an extremal algebra. Then $`A`$ is prime. Proof Obviously, it is sufficient to prove that for any two nonzero homogeneous ideals $`I`$ and $`J`$ of $`A`$, $`I.J0`$. Without loss of generality, we can assume that $`J`$ is a principal ideal generated by an element $`a`$ of a degree $`h`$. Suppose $`I.J=0`$. Let $`B=A/I,C=A/J,`$ and let $`A=IV,`$ there $`V`$ is a graded vector space. We have $$J=ka+Aa+aA+AaA=ka+Va+aV+VaV.$$ Therefore, there is an inequality of Hilbert series $$J(t)t^h+2t^hV(t)+t^hV(t)^2,$$ or $$A(t)C(t)t^h(B(t)+1)^2.$$ So, we have $$A(t)C(t)+t^h(B(t)+1)^2.$$ Thus, for radii of convergence we obtain $$r(A)\mathrm{min}\{r(B),r(C)\},$$ contradicting the extremality of $`A`$. Remark In fact, we have proved that for any two homogeneous ideals $`I,J`$ of a graded algebra $`A`$, if $`I.J=0,`$ then $`r(A)=\mathrm{min}\{r(A/I),r(A/J)\}.`$ ###### Corolary 14 Let $`A`$ be a (non-graded) locally finite filtered algebra such that the associated graded ring $`\text{gr }A`$ is extremal. Then $`A`$ is prime. Now, let us consider examples of extremal algebras. In fact, the extremality of nontrivial free algebras is proved by V. E. Govorov in \[Gov\]. We add the following ###### Theorem 15 Let $`A,B`$ be non-trivial connected algebras such that $`r(A)>0`$ and $`r(B)>0`$. Then the algebra $`AB`$ is extremal. Proof of Theorem 15 Let $`C=AB`$. By the formula $$C(t)^1=A(t)^1+B(t)^11,$$ the function $`C(t)^1`$ is analytical and nonzero in a neighborhood of zero, so $`r(C)>0`$. For $`t(0,r(C)]`$, we have $`0A(t)^1<1`$ and $`0B(t)^1<1`$. 
Since $`A(r(C))^1+B(r(C))^11=0,`$ we obtain $`A(r(C))^1>0`$ and $`B(r(C))^1>0`$; hence $`r(C)<\mathrm{min}\{r(A),r(B)\}`$. It follows from the standard Gröbner bases arguments that we may assume the algebras $`A`$ and $`B`$ to be monomial. (Indeed, if we fix an order on monomials, then, denoting by $`\overline{R}`$ the associated monomial algebra of an algebra $`R`$, we have $`\overline{C}=\overline{A}\overline{B}`$; moreover, if $`I`$ is a homogeneous ideal in $`C`$, then there exists an ideal $`J\overline{C}`$ such that $`r(C/I)r(\overline{C}/J)`$.) Now, let $`A=kX/I`$ and $`B=kY/J`$, there $`X,Y`$ are homogeneous sets minimally generating algebras $`A`$ and $`B`$, and $`I,J`$ are ideals generated by monomials of elements of $`X`$ and $`Y`$. If $`A`$ and $`B`$ are two–dimensional, i. e., $`Akx|x^2=0`$ and $`Bkx|x^2=0`$, then there is nothing to prove. So, we can assume that $`\text{dim }B3`$. Let $`SC`$ be a nonzero principal ideal generated by a non-empty monomial $`m`$: it is sufficient to prove that $`r(C/S)>r(C)`$. We will say that a non-empty monomial $`a`$ is an overlap of two monomials $`b,c`$ if there are non-empty monomials $`f,g`$ such that $`b=fa,c=ag`$. Now we need the following ###### Lemma 16 Let $`A,B`$ be connected monomial algebras such that $`\text{dim }_kB3`$, where $`B`$ is minimally generated by the set $`Y=\{y_i\}_{i\mathrm{\Gamma }}`$, and let $`S`$ be a monomial ideal in the algebra $`C=AB`$. Then $`S`$ contains a nonzero monomial $`p`$ with the following properties: all monomials $`p_{ij}=y_ipy_j,i,j\mathrm{\Gamma }`$ are nonzero, and, moreover, for any two monomials $`p_{ij}`$ and $`p_{kl}`$, there are no overlaps in the case $`jk`$ and there is the unique overlap $`y_j`$ in the case $`j=k`$. Proof of Lemma 16 It is obvious that $`S`$ contains a nonzero monomial $`n`$ such that $`n=xn^{}x`$, where $`n^{}`$ is a monomial, $`xX`$, where $`X`$ is the set of generators of $`A`$. To construct such a monomial $`p`$, let us consider two cases. Case 1 Let $`\mathrm{\#}Y=1`$, i. e., $`Y=\{y\}`$. Since $`\text{dim }B3`$, then $`y^20`$. Let $`l0`$ be the largest integer satisfying $`n=(xy^2)^ln_1`$, where $`n_1`$ is a monomial. Put $`p=y(xy^2)^qn_1(yx)^q`$, where $`q>\mathrm{max}\{l,\text{len }n_1+3\}`$; then $`p_{11}=y^2(xy^2)^qn_1(yx)^qy`$. Assume that a monomial $`a`$ is an overlap for the pair $`p_{11},p_{11}`$. Then there exist non-empty monomials $`c,d`$ such that $`p_{11}=ca=ad`$. Hence $`a`$ has the form $`(y^2x)^qf(xy)^q,`$ where $`f`$ is a monomial; therefore, $`\text{len }c=\text{len }d=\text{len }p_{11}\text{len }a<q`$. Since $`p_{11}=y^2(xy^2)^qn_1(yx)^qy=c(y^2x)^qf(xy)^q`$, $`c`$ has the form $`(y^2x)^r`$ for some $`r>0`$. By the maximality of $`l`$, we have $`r=1`$, so $`\text{len }c=\text{len }d=3`$. On the other hand, it follows from the equality $`p_{11}=ad`$ that $`d`$ has the form $`(xy)^l`$ for some $`l`$, so $`\text{len }d`$ must be even. Case 2 Let $`\mathrm{\#}Y2`$, i. e., $`Y=\{y_1,y_2,\mathrm{}\}`$. Put $`p=(xy_1)^qn(y_2x)^q`$, where $`q>\text{len }n`$. For some monomials $`p_{ij}`$ and $`p_{kl}`$, suppose $`a`$ is an overlap such that $`\text{len }a2`$. Then there exists an overlap of monomials $`p,p`$. It means that the set $`\{p\}`$ is not combinatorial free, so, it is not strongly free in a free associative algebra generated by the set $`XY`$ \[A1\]. 
By \[HL, Proposition 3.15\], this means that there exist a non-empty monomial $`a`$ and a monomial $`b`$ such that $$p=aba$$ (at least in the monomial case, the proof in \[HL\] did not really use the assumption $`\text{char }k=0`$). Then $`a`$ has the form $`a=(xy_1)^qc(y_2x)^q`$ for a monomial $`c`$, so $`\text{len }a4q`$ and $`\text{len }p8q,`$ contradicting the choice of $`q`$. Returning to the general proof, let $`PC`$ be an ideal generated by all of the monomials $`p_{ij}`$, and let $`D=C/P`$. Since $`PS`$, it is sufficient to prove that $`r(D)>r(C)`$. To prove this, we will compute the homology of the algebra $`D`$ and obtain its Hilbert series as the Euler characteristic. Recall how to compute homologies of a monomial algebra (see details in \[A3\]; we use the terminology of \[U\]). Suppose $`F`$ is a free associative algebra generated by a set $`X`$, $`IF`$ is an ideal minimally generated by a set of monomials $`U`$, and $`M`$ is the quotient algebra $`F/I`$. Let us define a concept of a chain of a rank $`n`$ and its tail. For $`n=0`$, every generator $`xX`$ is called a chain of rank $`0`$; it coinsides with its tail. For $`n>0,`$ a monomial $`f=gt`$ is called a chain of rank $`n`$ and $`t`$ is called its tail, if the following conditions hold: (i) $`g`$ is a chain of rank $`n1`$; (ii) if $`r`$ is a tail of $`g`$, then $`rt=vu,`$ where $`v,u`$ are monomials and $`uU`$; (iii) excluding the word $`u`$ as the end, there are no subwords of $`rt`$ lying in $`U`$. Let us denote by $`C_n^M`$ the set of chains of rank $`n`$; for example, $`C_0^M=X`$ and $`C_1^M=U`$. Then for all $`n0`$ there are the following isomorphisms of graded vector spaces: $$kC_n^M\text{Tor }_{n+1}^M(k,k).$$ Letting $`c_j^i`$ denote the number of chains of degree $`i`$ having a rank $`j`$, consider the generating function $$C^M(s,t)=\underset{i0}{}\underset{j0}{}c_j^is^jt^i.$$ Arguing as in Proposition 1, taking the Euler characteristic of the minimal resolution, we obtain $$M(t)^1=1C^M(1,t);$$ the formal power series in the right side does exist since every vector space $`\text{Tor }_n^M(k,k)`$ is concentrated in degrees $`n`$. By definition, for all $`i0,`$ we have $$C_i^C=C_i^AC_i^B$$ and $`C_0^D=C_0^C`$. Therefore, $$C^C(s,t)=C^A(s,t)+C^B(s,t),$$ so $$C^D(s,t)=C^A(s,t)+C^B(s,t)+C^{}(s,t),$$ there the set $`C^{}`$ consists of chains that have a subword $`p`$. Thus the set $`C_0^{}`$ is empty, and $`C_1^{}=\{p_{ij}\}`$. Let us prove that the set $`C^{}`$ consists of all monomials of the form $$c_1pc_2p\mathrm{}pc_n$$ (5) for $`n2`$, where $`c_1,\mathrm{},c_nC^B`$ . It is clear that all these monomials are chains of $`C^{}`$. Let us prove the converse. Since $`C_0^B=Y`$, this is obvious for chains of rank $`1`$. Now, let $`f=gtC_n^{}`$, where $`g`$ is a chain of lesser rank and $`t`$ is a tail of the chain $`f`$. Let $`r`$ be the tail of $`g`$. By induction, we may assume that $`gC^B`$ or $`g`$ has the form (5); in the second case, $`r`$ is the tail of $`c_n`$, or has the form $`py_i,`$ where $`y_i=c_nY`$. If $`t`$ is a word of the alphabet $`Y`$, then $`gtC^B`$, or $`c_ntC^B`$, so $`f`$ has the desired form. Otherwise, $`t`$ must contain a subword equal to $`p`$; hence, $`t=py_j`$ for some $`j`$. Thus, $$f=c_1pc_2p\mathrm{}pc_npy_j.$$ Now, let us compute the generating function. Notice that if a chain $`f`$ has the form (5), then the rank of $`f`$ is equal to $`k+n1,`$ where $`k`$ is the sum of ranks of the chains $`c_1,\mathrm{},c_n`$. Let $`\text{deg }p=b`$. 
By (5), we have $$C^{}(s,t)=\underset{i1}{}(st^b)^i\left(C^B(s,t)\right)^{i+1}=\frac{st^b\left(C^B(s,t)\right)^2}{1st^bC^B(s,t)}.$$ Put $`q(t)=1B(t)^1=C^B(t,1)`$. Obviously, for $`0<tr(B)`$, we have $`q(t)>0`$. We obtain $$D(t)^1=C(t)^1C^{}(1,t),$$ hence, $$D(r(C))^1=C^{}(1,r(C))=\frac{r(C)^bq(r(C))^2}{1+r(C)^bq(r(C))}>0.$$ Since $`r(D)r(C)`$ and $`D(r(C))>0`$, we obtain $`r(D)>r(C)`$. This completes the proof ot Theorem 15. ###### Corolary 17 Let $`A`$ be a connected algebra such that $`r(A)>0`$. If there exists a strongly free set in $`A`$, then $`A`$ is extremal. Proof By \[A1, Lemma 2.7\], any subset of a strongly free set is strongly free; so, there is a strongly free element $`fA`$. Let $`L`$ be the ideal generated by $`f`$, and let $`B=A/L`$. If $`A`$ is generated by $`f`$, then, since every strongly free set generates a free subalgebra, $`A=kf`$; hence, every proper quotient of $`A`$ is finitely-dimensional, so $`A`$ is extremal. Otherwise, the algebra $`B`$ is not trivial, so the algebra $`C=Bkg|\text{deg }g=\text{deg }f`$ is extremal by Theorem 15. By \[A1, Section 2\], there is an isomorphism of graded vector spaces $`\rho :CA`$ having the following properties: (i) the restriction of $`\rho `$ to $`B`$ is a right inverse to the canonical projection $`AB`$; (ii) $`\rho (g)=f`$, and $`\rho (a_1ga_2\mathrm{}ga_n)=\rho (a_1)f\rho (a_2)\mathrm{}f\rho (a_n)`$. Suppose $`mA`$ is an arbitrary homogeneous element, $`m=\rho (c)`$, and $`I=AmA`$ is the ideal generated by $`m`$. We need to prove that $`r(A/I)>r(A)`$. Indeed, let $`c^{}=gcg`$, let $`m^{}=fmf=\rho (c^{})`$, and let $`JA`$ (respectively, $`KC`$) be the ideal generated by $`m^{}`$ (resp., $`c^{}`$). For every $`a,bC`$ we get $$\rho (ac^{}b)=\rho (agcgb)=\rho (a)f\rho (c)f\rho (b)=\rho (a)m^{}\rho (b).$$ Therefore $`\rho (K)J`$, so $`(A/J)(t)(C/K)(t)`$. We obtain $$r(A/I)r(A/J)r(C/K)>r(C)=r(A).$$ ## 6 How a quotient algebra may grow? Suppose that $`A`$ is a connected algebra such that $`r(A)>0,`$ $`SA`$ is a non-empty set of homogeneous elements minimally generating an ideal $`I=ASA`$, and $`B=A/I`$. Let $`C=BkS`$, and let $$D=A/II/I^2I^2/I^3\mathrm{}$$ with the induced grading; then $`D(t)=A(t)`$. By \[HL, Theorem 2.4\], we have a epimorphism $$f:CD,$$ which is an isomorphism iff the set $`S`$ is strongly free. Since $`C`$ is either a free algebra or a free product of non-trivial algebras, it is extremal; so, we obtain the following ###### Proposition 18 Using the notation above, $$r(A)r(C).$$ Equality holds if and only if the set $`S`$ is strongly free. Now, by the formula for the Hilbert series of a free product, we have $$C(t)^1=B(t)^1S(t).$$ Since $`C`$ is extremal, the series $`B(t)`$ and $`S(t)`$ converge for $`t[0,r(C)]`$, so $$B(r(C))^1S(r(C))=0,$$ or $$B(r(C))S(r(C))=1.$$ Since $`r(A)r(C)`$, we have $$B(r(A))S(r(A))1$$ (where $`\mathrm{}>1`$); the equality holds iff $`r(A)=r(C)`$. Thus we obtain: ###### Theorem 19 Using our notation, $$B(r(A))S(r(A))1,$$ and the following conditions are equivalent: (i) the equality above holds; (ii) $`r(A)=r(C)`$; (iii) the set $`SA`$ is strongly free. In particular, if the set $`S`$ consists of $`t`$ elements of degree $`l`$, then we have $$tB(r(A))^1r(A)^l,$$ where equality holds iff $`S`$ is strongly free. This estimate generalizes Govovrov’s inequality (3). Remark Suppose that $`\text{char }k=0`$, the algebra $`A`$ is free of rank $`s`$, and $`\alpha `$ is a set of $`t`$ quadratic elements. 
If the pair $`(s,t)`$ is undecidable, then $`C`$ is a finitely presented and connected (but non-standard) algebra such that there is no algorithm to decide whether or not $`r(C)=1/s`$.
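As a concrete companion to this remark (an editorial illustration; it assumes the Hilbert series of the strongly free case rather than deciding anything), one can locate r(C) numerically for the d = 1 instance of the construction used in the proof of Theorem 11: with A(t) = 1 + 4t and B(t) determined by 1 − 60t + 225t² in the strongly free case, the free-product formula C(t)⁻¹ = A(t)⁻¹ + B(t)⁻¹ − 1 has its smallest positive zero exactly at 1/60, i.e. r(C) = 1/60; when the quadratic set fails to be strongly free the zero moves away from 1/60, and Theorem 11 says that this dichotomy cannot be decided algorithmically.

```python
# Editorial illustration for the d = 1 instance of the construction in Theorem 11:
# C = A * B with A(t) = 1 + 4t (all products of the four extra generators vanish)
# and B = F_60 / (225 quadratic relations).  Using the free-product formula
#   C(t)^{-1} = A(t)^{-1} + B(t)^{-1} - 1,
# we locate r(C) as the smallest positive zero of C(t)^{-1}.

def C_inverse(t, strongly_free=True):
    A_inv = 1.0 / (1.0 + 4.0 * t)
    if strongly_free:
        B_inv = 1.0 - 60.0 * t + 225.0 * t ** 2   # strongly free case (Corollary 2)
    else:
        # toy deformation standing in for a non-strongly-free quotient (illustrative only)
        B_inv = 1.0 - 60.0 * t + 200.0 * t ** 2
    return A_inv + B_inv - 1.0

def smallest_positive_zero(f, lo=1e-9, hi=0.02, steps=400):
    """Coarse scan for the first sign change, then bisection."""
    prev_t, prev_v = lo, f(lo)
    for k in range(1, steps + 1):
        t = lo + (hi - lo) * k / steps
        v = f(t)
        if prev_v > 0.0 >= v:
            a, b = prev_t, t
            for _ in range(60):
                m = 0.5 * (a + b)
                if f(m) > 0.0:
                    a = m
                else:
                    b = m
            return 0.5 * (a + b)
        prev_t, prev_v = t, v
    return None

if __name__ == "__main__":
    print(smallest_positive_zero(C_inverse), 1.0 / 60.0)          # ~0.0166667 = 1/60
    print(smallest_positive_zero(lambda t: C_inverse(t, False)))  # the zero moves off 1/60
```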
# A lattice NRQCD calculation of the 𝐵⁰-𝐵̄⁰ mixing parameter 𝐵_𝐵 ## I Introduction The constraints on the unitarity triangle of the Cabibbo-Kobayashi-Maskawa (CKM) matrix can provide us with one of the most crucial information on the physics beyond the Standard Model . However, due to large theoretical or experimental uncertainties, the current bound is too loose to test the Standard Model or new physics. The $`B^0`$-$`\overline{B^0}`$ mixing sets a constraint on $`|V_{td}V_{tb}^{}|`$ through the currently available experimental data on the mass difference between two neutral $`B`$ mesons $`\mathrm{\Delta }M_B`$= 0.477$`\pm `$0.017 ps<sup>-1</sup> . The experimental achievement is impressive as the error is already quite small $``$ 4%. Theoretical calculation to relate $`\mathrm{\Delta }M_B`$ to $`|V_{td}V_{tb}^{}|`$, on the other hand, involves a large uncertainty in the $`B`$ meson matrix element $`\overline{B^0}|𝒪_L|B^0`$, which requires a method to calculate the non-perturbative QCD effects. Lattice QCD is an ideal tool to compute such non-perturbative quantities from first principles. There has been a number of calculations of the $`B`$ meson decay constant $`f_B`$ and the $`B`$-parameter $`B_B`$ ($`B_B`$ describes the matrix element through $`\overline{B^0}|𝒪_L|B^0`$ = $`(8/3)B_Bf_B^2M_B^2`$). The calculation of $`f_B`$ is already matured at least in the quenched approximation . Major systematic errors are removed by introducing the non-relativistic effective actions and by improving the action and currents. Remaining $`a`$ (lattice spacing) dependent systematic error is confirmed to be small, and in some papers the continuum extrapolation is made. A recent review summarized the value of $`f_B`$ as $`f_B`$= 165$`\pm `$20 MeV within the quenched approximation. An essential ingredient of these calculations is the use of the non-relativistic effective actions. Since the $`b`$-quark mass in lattice unit $`am_b`$ is large for the typical lattices used for simulations, the relativistic (Wilson-type) actions could suffer from a large discretization error of order $`am_b`$ or $`(am_b)^2`$. The non-relativistic QCD (NRQCD) , on the other hand, is formulated as an expansion in $`𝒑/m_Q`$. In the heavy-light meson system, where the typical spatial momentum scale is $`\mathrm{\Lambda }_{QCD}`$, the error from the truncation of higher order terms is controllable. The calculation of $`f_B`$ is now available to order $`1/m_b^2`$, and it is known that the contribution of $`O(1/m_b)`$ is significant ($`20\%`$) while that of $`O(1/m_b^2)`$ is small ($`3\%`$. In addition, the calculations based on the Fermilab approach for heavy quark agree with the NRQCD results including their $`1/m_Q`$ dependence. These results make us confident about the non-relativistic effective action approaches in Lattice QCD. Now that the computation of $`f_B`$ is established, the next goal is to apply the similar technique to the computation of $`B_B`$. The lattice calculations of the $`B`$-parameter have been done in the infinitely heavy quark mass (static) limit , and the results are in reasonable agreement with each other. There is, however, some indication that the $`1/m_b`$ correction would be non-negligible from the study with relativistic quark actions . Their results show that there is small but non-zero negative slope in the $`1/m_Q`$ dependence of $`B`$-parameter, but it is not conclusive because of the possible systematic uncertainty associated with the relativistic quark action for heavy quark. 
The purpose of this work is to study the $`1/m_Q`$ dependence of $`B_B`$ by explicitly calculating it with the NRQCD action at several values of $`1/m_Q`$. Our result confirms the previous works : there is a small negative slope in $`B_B`$. In addition, we find that the slope comes from the large $`1/m_Q`$ dependence of $`B_N^{lat}`$ and $`B_S^{lat}`$, which are matrix elements of non-leading operators. Qualitative explanations for the observed $`1/m_Q`$ dependence of the lattice matrix elements are given in the Discussion section using the vacuum saturation approximation. The perturbative matching of the continuum and lattice operators introduces a complication into the analysis. Since the one-loop coefficients for the four-quark operators are not yet known for the NRQCD action, we use the coefficients in the static limit in Refs. . The systematic error associated with this approximation is of order $`\alpha _s/(am_Q)`$ and is expected to be small compared to the $`1/m_Q`$ correction itself. This and other systematic errors are discussed in detail in the Discussion section. This paper is organized as follows. In the next section, we summarize our matching procedure. The simulation method is described in section III, and our analysis and results are presented in section IV. The results are compared with the vacuum saturation approximation on the lattice in section V, and we estimate the remaining uncertainties in section VI. Finally, our conclusion is given in section VII. An early result of this work was presented in Ref. .

## II Perturbative matching

In this section, we give our notation and describe the perturbative matching procedure. The mass difference between two neutral $`B_q`$ mesons is given by $$\mathrm{\Delta }M_{B_q}=|V_{tb}^{*}V_{tq}|^2\frac{G_F^2m_W^2}{16\pi ^2M_{B_q}}S_0(x_t)\eta _{2B}\left[\alpha _s(\mu )\right]^{-6/23}\left[1+\frac{\alpha _s(\mu )}{4\pi }J_5\right]\langle \overline{B_q^0}|𝒪_L(\mu )|B_q^0\rangle ,$$ (1) where $`q=d`$ or $`s`$ and $`S_0(x_t)`$ ($`x_t=m_t^2/m_W^2`$) and $`\eta _{2B}`$ are the so-called Inami-Lim function and the short-distance QCD correction, respectively. Their explicit forms can be found in Ref. . $`𝒪_L(\mu )`$ is a $`\mathrm{\Delta }B`$=2 operator $$𝒪_L(\mu )=\overline{b}\gamma _\mu (1-\gamma _5)q\overline{b}\gamma _\mu (1-\gamma _5)q,$$ (2) renormalized in the $`\overline{\mathrm{MS}}`$ scheme with the naive dimensional regularization (NDR). $`J_{n_f}`$ is related to the anomalous dimension at the next-to-leading order with $`n_f`$ active flavors as $$J_{n_f}=\frac{\gamma ^{(0)}\beta _1}{2\beta _0^2}-\frac{\gamma ^{(1)}}{2\beta _0},$$ (3) where $$\begin{array}{cc}\beta _0=11-\frac{2}{3}n_f,& \beta _1=102-\frac{38}{3}n_f,\\ \gamma ^{(0)}=4,& \gamma ^{(1)}=-7+\frac{4}{9}n_f.\end{array}$$ (4) $`n_f`$=5 when $`\mu `$ is greater than or equal to the $`b`$ quark mass. The $`B`$-parameter $`B_{B_q}`$ is defined through $$B_{B_q}(\mu )=\frac{\langle \overline{B_q^0}|𝒪_L(\mu )|B_q^0\rangle }{\frac{8}{3}\langle \overline{B_q^0}|A_\mu |0\rangle \langle 0|A_\mu |B_q^0\rangle },$$ (5) where $`A_\mu `$ denotes the axial-vector current $`\overline{b}\gamma _\mu \gamma _5q`$. The renormalization invariant $`B`$-parameter is defined by $$\widehat{B}_{B_q}=\left[\alpha _s(\mu )\right]^{-\frac{6}{23}}\left[1+\frac{\alpha _s(\mu )}{4\pi }J_5\right]B_{B_q}(\mu ),$$ (6) which does not depend on the arbitrary scale $`\mu `$ up to the next-to-leading order. The scale $`\mu `$ is conventionally set at the scale of the $`b`$-quark mass, $`\mu =m_b`$.
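Since Eqs. (3), (4) and (6) are used repeatedly below, the following short sketch (an editorial illustration; the value of $`\alpha _s(m_b)`$ plugged in at the end is an assumed input, not a result of this paper) evaluates $`J_5`$ and the multiplicative factor that converts $`B_{B_q}(m_b)`$ into the renormalization invariant $`\widehat{B}_{B_q}`$.

```python
import math

def rg_coefficients(nf):
    """Beta- and gamma-function coefficients of Eq. (4)."""
    beta0 = 11.0 - 2.0 * nf / 3.0
    beta1 = 102.0 - 38.0 * nf / 3.0
    gamma0 = 4.0
    gamma1 = -7.0 + 4.0 * nf / 9.0
    return beta0, beta1, gamma0, gamma1

def J(nf):
    """Eq. (3): J_{nf} = gamma0*beta1/(2*beta0^2) - gamma1/(2*beta0)."""
    beta0, beta1, gamma0, gamma1 = rg_coefficients(nf)
    return gamma0 * beta1 / (2.0 * beta0 ** 2) - gamma1 / (2.0 * beta0)

def b_hat_factor(alpha_s_mb, nf=5):
    """Eq. (6): factor converting B_B(m_b) into the RG-invariant B-hat (exponent 6/23 for nf = 5)."""
    beta0, _, gamma0, _ = rg_coefficients(nf)
    return alpha_s_mb ** (-gamma0 / (2.0 * beta0)) * (1.0 + alpha_s_mb / (4.0 * math.pi) * J(nf))

if __name__ == "__main__":
    print(round(J(5), 3))                 # ~1.627
    # alpha_s(m_b) ~ 0.21 is an assumed input, used here for illustration only
    print(round(b_hat_factor(0.21), 3))   # ~1.54: B-hat is roughly 1.5 x B_B(m_b)
```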
In order to calculate the matrix element $`\overline{B_q^0}|𝒪_L(m_b)|B_q^0`$ on the lattice, we have to connect the operator $`𝒪_L(m_b)`$ defined in the continuum renormalization scheme with its lattice counterpart. The matching coefficients can be obtained by requiring the perturbative quark scattering amplitudes at certain momentum with continuum $`O_L`$ operator and with lattice four fermi operators should give identical results. At the one-loop level the matching gives the following relation $`𝒪_L(m_b)`$ $`=`$ $`\left(1+{\displaystyle \frac{\alpha _s}{4\pi }}[4\mathrm{ln}(a^2m_b^2)+D_L14]\right)𝒪_L^{lat}(1/a)`$ (8) $`+{\displaystyle \frac{\alpha _s}{4\pi }}D_R𝒪_R^{lat}(1/a)+{\displaystyle \frac{\alpha _s}{4\pi }}D_N𝒪_N^{lat}(1/a)+{\displaystyle \frac{\alpha _s}{4\pi }}D_S𝒪_S^{lat}(1/a),`$ where $`𝒪_{\{L,R,N,S\}}^{lat}`$ denotes the naive local operators defined on the lattice in which the light quarks are not rotated. Their explicit forms are the following, $`𝒪_R`$ $`=`$ $`\overline{b}\gamma _\mu (1+\gamma _5)q\overline{b}\gamma _\mu (1+\gamma _5)q,`$ (9) $`𝒪_N`$ $`=`$ $`\overline{b}\gamma _\mu (1\gamma _5)q\overline{b}\gamma _\mu (1+\gamma _5)q+\overline{b}\gamma _\mu (1+\gamma _5)q\overline{b}\gamma _\mu (1\gamma _5)q`$ (11) $`+2\overline{b}(1\gamma _5)q\overline{b}(1+\gamma _5)q+2\overline{b}(1+\gamma _5)q\overline{b}(1\gamma _5)q,`$ $`𝒪_S`$ $`=`$ $`\overline{b}(1\gamma _5)q\overline{b}(1\gamma _5)q.`$ (12) Unfortunately, the one-loop coefficients $`D_{\{L,R,N\}}`$ for the NRQCD heavy and $`O(a)`$-improved light quark action are not known yet. In this work, we use the one-loop coefficients in the static limit , $$D_L=21.16,D_R=0.52,D_N=6.16,D_S=8.$$ (13) The systematic error associated with this approximation is at most $`\alpha _s/(am_Q)`$, since the NRQCD action’s $`m\mathrm{}`$ limit agrees with the static action. The numerical size of the error is discussed later. The matching of the axial-vector current appearing in the denominator of Eq. (5) can be done in a similar manner $`A_0`$ $`=`$ $`\left(1+{\displaystyle \frac{\alpha _s}{4\pi }}[2\mathrm{ln}(a^2m_b^2)+D_A{\displaystyle \frac{8}{3}}]\right)A_0^{lat}(1/a),`$ (14) where $`A_0^{lat}`$ is defined on the lattice, and the matching coefficient $`D_A`$ in the static limit $`D_A=13.89`$. In calculating the ratio of Eq. (5) a large cancellation of perturbative matching corrections takes place between the numerator and denominator, since the large wave function renormalization coming from the tadpole contribution in the lattice theory is the same. To make this cancellation explicit we consider the matching of a ratio $`B_B(m_b)=𝒪_L(m_b)/(8/3)A_0^2`$ itself, $`B_B(m_b)`$ $`=`$ $`Z_{L/A^2}(m_b;1/a)B_L^{lat}(1/a)+Z_{R/A^2}(m_b;1/a)B_R^{lat}(1/a)`$ (16) $`+Z_{N/A^2}(m_b;1/a)B_N^{lat}(1/a)+Z_{S/A^2}(m_b;1/a)B_S^{lat}(1/a),`$ where $`B_{\{L,R,N,S\}}^{lat}(1/a)=𝒪_{\{L,R,N,S\}}^{lat}(1/a)/(8/3)A_0^{lat}(1/a)^2`$ and the $`B`$ and $`\overline{B}`$ states are understood for the expectation values $`\mathrm{}`$ as in Eq. (5). Then the coefficients become $`Z_{L/A^2}(m_b;1/a)`$ $`=`$ $`\left(1+{\displaystyle \frac{\alpha _s}{4\pi }}(D_L2D_A{\displaystyle \frac{26}{3}})\right),`$ (17) $`Z_{R/A^2}(m_b;1/a)`$ $`=`$ $`{\displaystyle \frac{\alpha _s}{4\pi }}D_R,`$ (18) $`Z_{N/A^2}(m_b;1/a)`$ $`=`$ $`{\displaystyle \frac{\alpha _s}{4\pi }}D_N,`$ (19) $`Z_{S/A^2}(m_b;1/a)`$ $`=`$ $`{\displaystyle \frac{\alpha _s}{4\pi }}D_S.`$ (20) The Eqs. (17)-(20) are used in the following analysis to obtain $`B_B(m_b)`$. 
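As a companion to Eqs. (16)-(20) (an editorial illustration), the sketch below assembles the four matching factors, reading Eq. (17) as $`1+\frac{\alpha _s}{4\pi }(D_L-2D_A-26/3)`$ and taking the one-loop constants at the values printed above; the coupling is set to the $`\alpha _V(1/a)`$ value quoted in the next paragraph, and the $`B_X^{lat}`$ entries used in the final combination are placeholders, not measured values.

```python
import math

def matching_factors(alpha_s, D_L, D_R, D_N, D_S, D_A):
    """One-loop factors of Eqs. (17)-(20) for the ratio B_B = O_L / ((8/3) A_0^2)."""
    a = alpha_s / (4.0 * math.pi)
    return {
        "L": 1.0 + a * (D_L - 2.0 * D_A - 26.0 / 3.0),
        "R": a * D_R,
        "N": a * D_N,
        "S": a * D_S,
    }

def combine(Z, B_lat):
    """Eq. (16): B_B(m_b) = sum over X of Z_{X/A^2} * B_X^lat(1/a)."""
    return sum(Z[x] * B_lat[x] for x in Z)

if __name__ == "__main__":
    # one-loop constants as printed in the text (static limit); alpha_V(1/a) at beta = 5.9
    Z = matching_factors(alpha_s=0.270, D_L=21.16, D_R=0.52, D_N=6.16, D_S=8.0, D_A=13.89)
    print({k: round(v, 3) for k, v in Z.items()})
    # placeholder lattice B-parameters, for illustration only (not measured values)
    B_lat = {"L": 0.9, "R": 0.9, "N": 1.0, "S": 0.6}
    print(round(combine(Z, B_lat), 3))   # combined value for the placeholder inputs
```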
Numerical values of $`Z_{\{L,R,N,S\}/A^2}(m_b;1/a)`$ are given in Table I for the lattice parameters in our simulation. For the coupling constant $`\alpha _s`$ in Eqs. (17)-(20) we use the V-scheme coupling with $`q^{}=\pi /a`$ or $`q^{}=1/a`$. At $`\beta `$=5.9 those are $`\alpha _V(\pi /a)`$=0.164 and $`\alpha _V(1/a)`$=0.270. The tadpole improvement does not make any effect on the ratio of Eq. (16), since the tadpole contribution cancels between the numerator and denominator. The $`b`$-quark mass scale $`m_b`$ is set to 5 GeV as usual. In the previous works in the static approximation , the leading and the next-to-leading logarithmic corrections are resummed to achieve better control in the running from $`m_b`$ to $`1/a`$. In this paper we use the one-loop formula without the resummation for simplicity. This does not introduce significant error, since the mass scale difference between $`m_b`$ and $`1/a`$ is small and the effect of the resummation is not important. In Appendix A we compare the $`Z`$ factors with and without the resummation. We determine the heavy-light pseudo-scalar meson mass $`M_P`$ from the binding energy $`E^{\mathrm{bin}}`$ measured in the simulation using a formula $$aM_P=Z_mam_QE_0+aE^{\mathrm{bin}},$$ (21) where $`Z_m`$ and $`E_0`$ are the renormalization constant for the kinetic mass and the energy shift, respectively. Both have been calculated perturbatively by Davies and Thacker and by Morningstar . Since the precise form of their NRQCD action is slightly different from ours, we performed the perturbative calculations for our action. Our results for the coefficient $`A`$ and $`B`$ in the perturbative expansion $`E_0`$ $`=`$ $`\alpha _V(q^{})A,`$ (22) $`Z_m`$ $`=`$ $`1+\alpha _V(q^{})B,`$ (23) are summarized in Table II. ## III Simulation Details Our task is to compute the ratios $`𝒪_{\{L,R,N,S\}}^{lat}(1/a)/(8/3)A_0^{lat}(1/a)^2`$ using lattice NRQCD with the lattice spacing $`a`$. In this section we describe our simulation method to obtain them. We performed the numerical simulation on a $`16^3\times 48`$ lattice at $`\beta `$= 5.9, for which the inverse lattice spacing fixed with the string tension is $`1/a`$= 1.64 GeV. In the quenched approximation we use 250 gauge configurations, each separated by 2,000 pseudo-heat bath sweeps. For the light quark we use the $`O(a)`$-improved action at $`\kappa `$=0.1350, 0.1365. The clover coefficient is set to be $`c_{\mathrm{sw}}=1/u_0^3`$, where $`u_0P_{plaq}^{1/4}`$=0.8734 at $`\beta =5.9`$. The critical $`\kappa `$ value is $`\kappa _c=`$0.1401, and $`\kappa _s`$ corresponding to the strange quark mass determined from the $`K`$ meson mass is $`\kappa _s`$=0.1385. For the heavy quark and anti-quark we use the lattice NRQCD action with the tadpole improvement $`U_\mu U_\mu /u_0`$. The precise form of the action is the same as the one we used in the previous work . We use both of the $`O(1/m_Q)`$ and $`O(1/m_Q^2)`$ actions in parallel in order to see the effect of the higher order contributions. The heavy (anti-)quark field in the relativistic four-component spinor form is constructed with the inverse Foldy-Wouthuysen-Tani (FWT) transformation defined at the tadpole improved tree level as in Ref. . The heavy quark masses and the stabilization parameters are $`(am_Q,n)`$=(10.0,2), (5.0,2), (3.0,2), (2.6,2) and (2.1,3). These parameters approximately cover a mass scale between $`4m_b`$ and $`m_b`$. We label the time axis of our lattice as $`t=[24,23]`$. 
The heavy quark and anti-quark propagators are created from a local source located at the origin ($`t`$=0 on our lattice) and evolve into opposite temporal directions. The light quark propagator is also solved with the same source location and with a Dirichlet boundary condition at $`t=24`$ and $`t=23`$. The $`B`$ and $`\overline{B}`$ mesons are constructed with local sink operators. Thus we have the four-quark operators at the origin and extract the matrix elements from the following three-point correlation function $$C_X^{(3)}(t_1,t_2)=\underset{\stackrel{}{x}_1}{}\underset{\stackrel{}{x}_2}{}0|A_{0}^{lat}{}_{}{}^{}(t_1,\stackrel{}{x}_1)O_X^{lat}(0,\stackrel{}{0})A_{0}^{lat}{}_{}{}^{}(t_2,\stackrel{}{x}_2)|0,$$ (24) where $`X`$ denotes $`L`$, $`R`$, $`N`$ or $`S`$. Because of a symmetry under parity transformation, $`C_L^{(3)}(t_1,t_2)`$ and $`C_R^{(3)}(t_1,t_2)`$ should exactly coincide in infinitely large statistics. Therefore we explicitly average them before the fitting procedure we describe below. To obtain the ratios $`B_X^{lat}(1/a)`$ we also define the following two-point functions $`C^{(2)}(t_1)`$ $`=`$ $`{\displaystyle \underset{\stackrel{}{x}}{}}0|A_0^{lat}(t_1,\stackrel{}{x})A_{0}^{lat}{}_{}{}^{}(0,\stackrel{}{0})|0,`$ (25) $`C^{(2)}(t_2)`$ $`=`$ $`{\displaystyle \underset{\stackrel{}{x}}{}}0|A_0^{lat}(0,\stackrel{}{0})A_{0}^{lat}{}_{}{}^{}(t_2,\stackrel{}{x})|0,`$ (26) and consider a ratio $$\frac{C_X^{(3)}(t_1,t_2)}{\frac{8}{3}C^{(2)}(t_1)C^{(2)}(t_2)}|t_i|1\frac{\overline{P^0}|𝒪_X^{lat}(1/a)|P^0}{\frac{8}{3}\overline{P^0}|A_0^{lat}(1/a)|00|A_0^{lat}(1/a)|P^0}=B_X^{lat}(1/a),$$ (27) where $`P^0`$ denotes a heavy-light pseudo-scalar meson. The ground state $`P^0`$ meson is achieved in the large $`|t_i|`$ regime. Although we use the local operator for the sinks at $`t_1`$ and $`t_2`$, the ground state extraction is rather easier for finite $`am_Q`$ than in the static approximation, since the statistical error is much smaller for NRQCD . This is another advantage of introducing the $`1/m_Q`$ correction. The physical $`B_B(m_b)`$ is obtained by extrapolating and interpolating each $`B_X^{lat}(1/a)`$ to the physical $`B`$ meson with $`\kappa `$ and $`m_Q`$, respectively before combining them as Eq. (16). The final result for $`B_B(m_b)`$ may also be obtained by combining the ratio of correlation functions before a constant fit. Namely we use the relation $$B_P(m_b;t_1,t_2)=\underset{X=L,R,N,S}{}Z_{X/A^2}(m_b,1/a)\frac{C_X^{(3)}(t_1,t_2)}{\frac{8}{3}C^{(2)}(t_1)C^{(2)}(t_2)}|t_i|1B_P(m_b).$$ (28) Since the statistical fluctuation in the individual $`B_X^{lat}(1/a)`$ is correlated, the error is expected to be smaller with this method (We use the jackknife method for error estimation). Following Ref. we refer to this method as the “combine-then-fit” method, while the usual one as Eq. (27) is called the “fit-then-combine” method in the rest of the paper. ## IV Simulation Results We describe the simulation results in this section. The results are from the $`O(1/m_Q)`$ action unless we specifically mention. ### A Heavy-light meson mass The binding energy of the heavy-light meson is obtained from a simultaneous fit of two two-point correlation functions. The numerical results are listed in Table III for each $`am_Q`$ and $`\kappa `$. Extrapolation of the light quark mass to the strange quark mass or to the chiral limit is performed assuming a linear dependence in $`1/\kappa `$. The meson mass is calculated using the perturbative expression Eq. (21). 
The results with $`\alpha _V(\pi /a)`$ and with $`\alpha _V(1/a)`$ are also given in Table III. ### B $`B_X^{lat}(1/a)`$ and $`B_P(m_b)`$ Figures 1 and 2 show the $`t_1`$ dependence of $`B_P(m_b;t_1,t_2)`$ in the “combine-then-fit” method. The perturbative matching of the continuum and lattice theory is done with the $`V`$-scheme coupling $`\alpha _V(q^{})`$ at $`q^{}`$=$`\pi /a`$ (left) and $`1/a`$ (right). Their difference represents the effect of $`O(\alpha _s^2)`$. The signal is rather noisier at $`am_Q`$=5.0 (Fig. 1) than at $`am_Q`$=2.6 (Fig. 2) from the same reason as in the static limit. But, still, a reasonably good signal is observed even for large $`am_Q`$. A plateau in the $`t_1`$ dependence is reached around $`t_1`$= 8 $``$ 11 for both $`t_2`$ = $``$10 and $``$15. To be conservative we take $`|t_1|`$ as well as $`|t_2|`$ greater than 10 for the fitting region. All data points $`(t_1,t_2)`$ in 10 $`|t_1|`$ 13 and in 10 $`|t_2|`$ 13 are fitted with constant to obtain the result for $`B_P(m_b)`$. We confirm that except for the heaviest quark the results are stable within about one standard deviation under a change of the fitting region by at most two $`t_i`$ steps in the forward and backward direction. The numerical results are listed in Table IV. The light quark mass ($`1/\kappa `$) dependence of $`B_P(m_b)`$ is presented in Fig. 3. Since its dependence is quite modest, we assume a linear dependence on $`1/\kappa `$ and extrapolate the results to the strange quark mass and to the chiral limit as shown in the plot. Results of the extrapolation are also listed in Table IV. ### C $`1/M_P`$ dependence In Fig. 4 we plot $`B_P(m_b)`$ in the chiral limit, namely $`B_{P_d}(m_b)`$, as a function of $`1/M_P`$ in the physical unit. We take $`q^{}`$=$`\pi /a`$ (circles) and $`1/a`$ (squares) for the scale of $`\alpha _V`$. Regardless of the choice of the coupling, we observe a small but non-zero negative slope in $`1/M_P`$, which supports the previous results by Bernard, Blum and Soni and by Lellouch and Lin using the relativistic fermions. To investigate the origin of the observed $`1/M_P`$ dependence, we look into the contributions of the individual operators $`𝒪_{\{L,R,N,S\}}^{lat}`$ through the “fit-then-combine” method with the same fitting region as before. We list the results for each $`B_X^{lat}`$ in Table IV. Figure 5 shows the $`1/M_P`$ dependence of $`B_L^{lat}(1/a)`$(=$`B_R^{lat}(1/a)`$), $`B_N^{lat}(1/a)`$ and $`B_S^{lat}(1/a)`$. While no significant $`1/M_P`$ dependence is observed in $`B_L^{lat}(1/a)`$, $`B_N^{lat}(1/a)`$ and $`B_S^{lat}(1/a)`$ have strong slope. Since their sign is opposite and the sign of the matching factors $`Z_{N/A^2}(m_b;1/a)`$ and $`Z_{S/A^2}(m_b;1/a)`$ (see Table I) is the same, a partial cancellation takes place giving a small negative slope for $`B_{P_d}(m_b)`$. We also make a comparison of the results of the $`1/m_Q`$ action (circles) with those of the $`1/m_Q^2`$ action (triangles) in Fig. 5. There is a small difference between the two results in $`B_N^{lat}(1/a)`$ and in $`B_S^{lat}(1/a)`$ toward large $`1/M_P`$ ($`1/M_P`$ 0.2 GeV<sup>-1</sup>), which is consistent with our expectation that the difference is an $`O(\mathrm{\Lambda }_{QCD}/m_Q)^2`$ effect. Previous results in the static approximation by Ewing et al. (diamond) , Gimenéz and Martinelli (triangle) and Christensen, Draper and McNeile (circle) are plotted with filled symbols at $`1/M_P=0`$. 
Although the $`\beta `$ value and the light quark action employed (the $`O(a)`$-improved action is used in Refs. and unimproved action in Ref. ) are different, all the results are in good agreement with each other. A quadratic extrapolation (dashed line) using our $`1/m_Q^2`$ NRQCD result also does agree with these previous static results. ### D Result for $`B_B(m_b)`$ Combining the data for $`B_X^{lat}(1/a)`$ discussed above, we obtain $`B_{P_d}(m_b)`$ with the “fit-then-combine” method. We confirm that the difference in numerical results from both methods are completely negligible. Figure 6 shows the results of “fit-then-combine” method using $`\alpha _V(1/a)`$ with both actions. The comparison with the static results is also made in this plot, where only the statistical error in each calculation is considered and the same matching procedure as ours are applied. We again observe a consistent result. Interpolating the above NRQCD results to the physical $`B`$ meson mass, we obtain the physical $`B_{B_d}(m_b)`$ $$B_{B_d}(m_b)=\{\begin{array}{c}0.78(3)(q^{}=\pi /a)\\ 0.72(3)(q^{}=1/a)\end{array}$$ (29) for the $`O(1/m_Q)`$ action, and $$B_{B_d}(m_b)=\{\begin{array}{c}0.78(2)(q^{}=\pi /a)\\ 0.71(3)(q^{}=1/a)\end{array}$$ (30) for the $`O(1/m_Q^2)`$ action. The quoted error is statistical only. For the ratio of $`B_{B_s}/B_{B_d}`$, we obtain $`B_{B_s}/B_{B_d}`$ = 1.01(1) for $`q^{}=\pi /a`$ and $`B_{B_s}/B_{B_d}`$ = 1.02(1) for $`q^{}=1/a`$ from both actions. ## V Discussion The strong $`1/M_P`$ dependence in $`B_X^{lat}(1/a)`$ observed in Fig. 5 can be roughly understood using the vacuum saturation approximation (VSA) on the lattice as explained below. Here it should be noted that the terminology of VSA we use here does not immediately mean $`B_B(m_b)=1`$. The VSA for $`B_{L,R}^{lat}`$ is unity by construction. This is true even for finite $`1/M_P`$, and its prediction is shown by a straight line in Fig. 5(a). The NRQCD data is located slightly below the line ($``$0.9), but the mass dependence is well reproduced by the VSA. For $`B_N^{lat}`$ and $`B_S^{lat}`$, we require a little algebra to explain their mass dependence under the VSA. Using the Fierz transformation and inserting the vacuum, we obtain $`\overline{P^0}|𝒪_N^{lat}|P^0`$ $`=`$ $`{\displaystyle \frac{8}{3}}\overline{P^0}|\overline{b}\gamma _\mu \gamma _5q|00|\overline{b}\gamma _\mu \gamma _5q|P^0{\displaystyle \frac{16}{3}}\overline{P^0}|\overline{b}\gamma _5q|00|\overline{b}\gamma _5q|P^0,`$ (31) $`\overline{P^0}|𝒪_S^{lat}|P^0`$ $`=`$ $`{\displaystyle \frac{5}{3}}\overline{P^0}|\overline{b}\gamma _5q|00|\overline{b}\gamma _5q|P^0,`$ (32) where $`|P^0`$ denotes a heavy-light pseudo-scalar meson at rest, and $`0|\overline{b}\gamma _\mu \gamma _5q|P^0`$ is related to the pseudo-scalar decay constant $$\overline{P^0}|A_0^{lat}(1/a)|0=0|A_0^{lat}(1/a)|P^0=f_PM_P.$$ (33) Let us now consider a decomposition of the $`b`$-quark field $`\overline{b}`$ into the two-component non-relativistic quark $`Q^{}`$ and anti-quark $`\chi `$ fields. 
Up to $`O(1/m_Q^2)`$ we have $`\overline{b}\gamma _5q`$ $`=`$ $`(Q^{}0)\left(1+{\displaystyle \frac{\stackrel{}{\gamma }\stackrel{}{D}}{2m_Q}}\right)\gamma _5q(0\chi )\left(1+{\displaystyle \frac{\stackrel{}{\gamma }\stackrel{}{D}}{2m_Q}}\right)\gamma _5q,`$ (34) $`\overline{b}\gamma _0\gamma _5q`$ $`=`$ $`(Q^{}0)\left(1{\displaystyle \frac{\stackrel{}{\gamma }\stackrel{}{D}}{2m_Q}}\right)\gamma _5q+(0\chi )\left(1{\displaystyle \frac{\stackrel{}{\gamma }\stackrel{}{D}}{2m_Q}}\right)\gamma _5q,`$ (35) and then $`\overline{P^0}|\overline{b}\gamma _5q|0`$ $`=`$ $`\overline{P^0}|(Q^{}0)\left(1+{\displaystyle \frac{\stackrel{}{\gamma }\stackrel{}{D}}{2m_Q}}\right)\gamma _5q|0`$ (36) $`=`$ $`\overline{P^0}|\overline{b}\gamma _0\gamma _5q|0+2\overline{P^0}|(Q^{}0){\displaystyle \frac{\stackrel{}{\gamma }\stackrel{}{D}}{2m_Q}}\gamma _5q|0,`$ (37) $`0|\overline{b}\gamma _5q|P^0`$ $`=`$ $`0|(0\chi )\left(1+{\displaystyle \frac{\stackrel{}{\gamma }\stackrel{}{D}}{2m_Q}}\right)\gamma _5q|P^0`$ (38) $`=`$ $`0|\overline{b}\gamma _0\gamma _5q|P^0+20|(0\chi ){\displaystyle \frac{\stackrel{}{\gamma }\stackrel{}{D}}{2m_Q}}\gamma _5q|P^0.`$ (39) By defining $`\delta f_P`$ as $$\overline{P^0}|(Q^{}0)\frac{\stackrel{}{\gamma }\stackrel{}{D}}{2m_Q}\gamma _5q|0=0|(0\chi )\frac{\stackrel{}{\gamma }\stackrel{}{D}}{2m_Q}\gamma _5q|P^0\delta f_PM_P,$$ (40) we obtain $`\overline{P^0}|𝒪_N^{lat}|P^0`$ $`=`$ $`{\displaystyle \frac{8}{3}}f_P^2M_P^2\left(18{\displaystyle \frac{\delta f_P}{f_P}}\right),`$ (41) $`\overline{P^0}|𝒪_S^{lat}|P^0`$ $`=`$ $`{\displaystyle \frac{5}{3}}f_P^2M_P^2\left(14{\displaystyle \frac{\delta f_P}{f_P}}\right).`$ (42) In our previous work we denoted $`\delta f_P`$ as $`\delta f_P^{(2)}`$. Thus the VSA for $`B_N^{lat}`$ and for $`B_S^{lat}`$ read $`B_N^{lat(\mathrm{VSA})}`$ $`=`$ $`18{\displaystyle \frac{\delta f_P}{f_P}},`$ (43) $`B_S^{lat(\mathrm{VSA})}`$ $`=`$ $`{\displaystyle \frac{5}{8}}\left[14{\displaystyle \frac{\delta f_P}{f_P}}\right],`$ (44) neglecting the higher order contribution of order $`1/m_Q^2`$. Results for $`\delta f_P/f_P`$ is available at $`\beta `$=5.8 in Ref. . We plot them in Fig. 5(b) and 5(c) by crosses, which show a qualitative agreement with the measured values. Di Pierro and Sachrajda pointed out that the value of several $`B`$-parameter-like matrix elements of the $`B`$ meson is explained by the VSA surprisingly well in the static limit. Here we find that the $`1/m_Q`$ dependence of $`B_X^{lat}(1/a)`$ can also be reproduced qualitatively. This result suggests that the vacuum saturation is a reasonable qualitative picture for the heavy-light meson. It does not mean, however, that the VSA works quantitatively for $`B_B(\mu )`$, and careful lattice studies are necessary for precise calculation of the $`B`$-parameters. ## VI Remaining uncertainties and the final result To estimate the systematic uncertainties in lattice calculations is a difficult task. In our case it is even more true, since we have a simulation result only at a single $`\beta `$ value. However we attempt to do it, giving a dimension counting of missing contributions. The following sources of systematic errors are possible: * discretization error: Both of the heavy and light quark actions are $`O(a)`$-improved at tree level, and there is no discretization error of $`O(a\mathrm{\Lambda }_{QCD})`$. The leading error is of $`O(a^2\mathrm{\Lambda }_{QCD}^2)`$ and of $`O(\alpha _sa\mathrm{\Lambda }_{QCD})`$. 
The second one is from the missing one-loop perturbative correction in the $`O(a)`$-improvement (its matching coefficient has been already obtained in Ref. ). We naively estimate the size of them to be $`O(a^2\mathrm{\Lambda }_{QCD}^2)O(a\mathrm{\Lambda }_{QCD}\alpha _s)`$ 5%, assuming $`1/a`$1.6 GeV, $`\mathrm{\Lambda }_{QCD}`$300 MeV and $`\alpha _s`$0.3. * perturbative error: The operator matching of the continuum and lattice $`\mathrm{\Delta }B`$=2 operators are done at one-loop level. Thus the $`O(\alpha _s^2)`$ correction is another source of error. In addition, we use the one-loop coefficient for the static lattice action, though our simulation has been done with the NRQCD action. The error in this mismatch is as large as $`O(\alpha _s/(am_Q))`$ and $`O(\alpha _s\mathrm{\Lambda }_{QCD}/m_Q)`$. The size of these contributions is $`O(\alpha _s^2)`$ $``$ $`O(\alpha _s/(am_Q))`$ $``$ 10% and $`O(\alpha _s\mathrm{\Lambda }_{QCD}/m_Q)`$ $``$ 2%. * relativistic error: Since we have performed a set of simulations with the $`O(1/m_Q)`$ action and the $`O(1/m_Q^2)`$ action, we can estimate the error in the truncation of the non-relativistic expansion. As we have shown, the difference between the results with $`O(1/m_Q)`$ and $`O(1/m_Q^2)`$ is small ($``$ 2%) around the $`B`$ meson mass. Then the higher order ($`O(1/m_Q^3)`$) effect is negligible. * chiral extrapolation: We have only two light quark $`\kappa `$ values. Then the linear behavior in the chiral extrapolation is nothing but an assumption. Although the light quark mass dependence is small and the assumption is a reasonable one, we conservatively estimate the error from the difference between the data at our lightest $`\kappa `$ value ($`\kappa `$=0.1365) and $`\kappa _c`$. It leads 3% for $`B_{B_d}(m_b)`$. * quenching error: All results are obtained in the quenched approximation. Study of the sea quark effect is left for future work. Taking them into account, we obtain the following values as our final results from the quenched lattice, $`B_{B_d}(m_b)`$ $`=`$ $`0.75(3)(12),`$ (45) $`{\displaystyle \frac{B_{B_s}}{B_{B_d}}}`$ $`=`$ $`1.01(1)(3),`$ (46) where the first error is statistical and the second a sum of all systematic errors in quadrature. In estimating the error of the ratio $`B_{B_s}/B_{B_d}`$ we consider the error from chiral extrapolation only, assuming that other uncertainties cancel in the ratio. The above result is related to the scale invariant $`B`$-parameter $`\widehat{B}_{B_d}`$ as $$\widehat{B}_{B_d}=\{\begin{array}{c}\left[\alpha _s(m_b)\right]^{6/23}B_{B_d}(m_b)=1.12(4)(18)\hfill \\ \left[\alpha _s(m_b)\right]^{6/23}\left[1+\frac{\alpha _s(m_b)}{4\pi }J_5\right]B_{B_d}(m_b)=1.15(5)(18)\hfill \end{array},$$ (47) using the leading and next-to-leading formula, respectively, where we use $`\mathrm{\Lambda }_{QCD}^{(5)}`$=0.237 GeV and the two-loop $`\beta `$-function. ## VII Conclusion In this paper we investigate the $`O(\mathrm{\Lambda }_{QCD}/m_Q)`$ and $`O(\mathrm{\Lambda }_{QCD}^2/m_Q^2)`$ effects on the $`B`$-parameter. We find that there is no significant mass dependence in the leading operator contribution $`B_L^{lat}(1/a)`$, while the mixing contributions $`B_N^{lat}(1/a)`$ and $`B_S^{lat}(1/a)`$ have large $`O(\mathrm{\Lambda }_{QCD}/m_Q)`$ corrections. The $`O(\mathrm{\Lambda }_{QCD}^2/m_Q^2)`$ correction for each $`B_X^{lat}(1/a)`$ is, however, reasonably small for the $`B`$ meson as we naively expected. 
The observed $`1/m_Q`$ dependence is qualitatively understood using the vacuum saturation approximation for the lattice matrix elements. The lattice NRQCD calculation predicts the small but non-zero negative slope in the mass dependence of $`B_P(m_b)`$ and about 10% reduction from static limit to the physical $`B`$ meson. In the present analysis, we combine lattice simulation for finite heavy quark mass with the mass independent matching coefficients determined in the static limit. The dominant uncertainty is, therefore, arising from the finite mass effects in the perturbative matching coefficients. For more complete understanding of the $`1/m_Q`$ dependence, matching coefficients with the finite heavy quark mass are necessary. ## Acknowledgment We would like to thank S. Tominaga for useful discussion. Numerical calculations have been done on Paragon XP/S at Institute for Numerical Simulations and Applied Mathematics in Hiroshima University. We are grateful to S. Hioki for allowing us to use his program to generate gauge configurations. T.O. is supported by the Grants-in-Aid of the Ministry of Education (No. 10740125). H.M. would like to thank the JSPS for Young Scientists for a research fellowship. ## A In this appendix, we compare our perturbative matching by simple one-loop formula with the renormalization group (RG) improved ones used in the previous static calculations . Since the matching procedure for determining the RG improved coefficients is given in Refs. in detail (see also Refs. ), we just show the results appropriate for our actions and definition of operator. Considering the matching of a ratio $`B_B(m_b)=𝒪_L(m_b)/(8/3)A_0^2`$ again as in section II, the RG improved versions of $`Z_{X/A^2}(m_b;1/a)`$ are as follows, $`Z_{L/A^2}(m_b;1/a)`$ $`=`$ $`Z_L^{cont}\left(1+{\displaystyle \frac{\alpha _s}{4\pi }}(D_L2D_A)\right),`$ (A1) $`Z_{R/A^2}(m_b;1/a)`$ $`=`$ $`Z_L^{cont}\times {\displaystyle \frac{\alpha _s}{4\pi }}D_R,`$ (A2) $`Z_{N/A^2}(m_b;1/a)`$ $`=`$ $`Z_L^{cont}\times {\displaystyle \frac{\alpha _s}{4\pi }}D_N,`$ (A3) $`Z_{S/A^2}(m_b;1/a)`$ $`=`$ $`Z_S^{cont},`$ (A4) and $`Z_L^{cont}`$ $`=`$ $`\left(1+{\displaystyle \frac{\alpha _s(m_b)}{4\pi }}({\displaystyle \frac{26}{3}})\right)\left(1+{\displaystyle \frac{\alpha _s(1/a)\alpha _s(m_b)}{4\pi }}(0.043)\right)`$ (A6) $`+{\displaystyle \frac{\alpha _s(m_b)}{4\pi }}(8)\left(\left({\displaystyle \frac{\alpha _s(m_b)}{\alpha _s(1/a)}}\right)^{8/25}1\right){\displaystyle \frac{1}{4}},`$ $`Z_S^{cont}`$ $`=`$ $`{\displaystyle \frac{\alpha _s(m_b)}{4\pi }}(8)\left({\displaystyle \frac{\alpha _s(m_b)}{\alpha _s(1/a)}}\right)^{8/25}.`$ (A7) Numerical values of $`Z_{\{L,R,N,S\}/A^2}(m_b;1/a)`$ are given in Table I together with those by the simple one-loop formula, where we use the V-scheme coupling as $`\alpha _s`$ appearing in Eqs. (A1)-(A4) while the couplings in Eqs. (A6) and (A7) are defined in the continuum $`\overline{\mathrm{MS}}`$ scheme with $`\mathrm{\Lambda }_{\overline{\mathrm{MS}}}^{(4)}`$=344 MeV, which corresponds to $`\mathrm{\Lambda }_{\overline{\mathrm{MS}}}^{(5)}`$= 237 MeV. Now assuming each $`B_X^{lat}`$ is of $`O(1)`$, the dominant effects of resummation arise from $`Z_L/A^2`$ and $`Z_S/A^2`$. Since, however, its difference is at most 5% level and the effects from $`Z_L/A^2`$ and $`Z_S/A^2`$ are destructive, the total effect amount to less than 3%. 
To be specific, using our data extrapolated to the static limit (see Table IV) to calculate $`B_B^{stat}(m_b)`$, we obtain the results tabulated in Table V from the two matching procedures. In this case the effect of resummation is almost negligible.
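As a closing numerical cross-check of Eq. (47) (an editorial illustration; the standard two-loop expression for the running coupling is used here, which need not coincide exactly with the authors' prescription), running $`\alpha _s`$ to $`m_b`$=5 GeV with $`\mathrm{\Lambda }_{QCD}^{(5)}`$=0.237 GeV and applying the leading and next-to-leading conversion factors to the central value $`B_{B_d}(m_b)`$=0.75 reproduces the quoted values of roughly 1.12 and 1.15.

```python
import math

def alpha_s_2loop(mu, Lambda, nf):
    """Standard two-loop running coupling (an approximation to the paper's prescription)."""
    beta0 = 11.0 - 2.0 * nf / 3.0
    beta1 = 102.0 - 38.0 * nf / 3.0
    L = math.log(mu ** 2 / Lambda ** 2)
    return 4.0 * math.pi / (beta0 * L) * (1.0 - beta1 * math.log(L) / (beta0 ** 2 * L))

def J(nf):
    beta0 = 11.0 - 2.0 * nf / 3.0
    beta1 = 102.0 - 38.0 * nf / 3.0
    return 4.0 * beta1 / (2.0 * beta0 ** 2) - (-7.0 + 4.0 * nf / 9.0) / (2.0 * beta0)

if __name__ == "__main__":
    a_mb = alpha_s_2loop(5.0, 0.237, nf=5)            # ~0.22
    lo = a_mb ** (-6.0 / 23.0)                        # leading-order factor
    nlo = lo * (1.0 + a_mb / (4.0 * math.pi) * J(5))  # next-to-leading factor
    B = 0.75                                          # central value quoted in Sec. VI
    print(round(a_mb, 3), round(lo * B, 2), round(nlo * B, 2))   # -> ~1.12 and ~1.15
```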
# Search for the Standard Model Higgs Boson at LEP

## I Introduction

The objectives of my talk are to present current LEP limits on the mass of the Standard Model (SM) Higgs boson using data from 1998 taken at a center-of-mass energy of 189 GeV, and to predict, using all of the data expected to be taken by LEP through the year 2000, the discovery potential and the final Higgs boson mass limit, assuming no evidence is found. Although the Standard Model has had tremendous success in explaining all known particle physics measurements, the exact mechanism by which the masses of the vector bosons and fermions are generated is still not understood. The Glashow-Weinberg-Salam theory describes a mechanism which generates particle mass through interactions of the particle with a scalar field . This Higgs field would manifest itself as a neutral spin-0 boson called the Higgs boson. The Glashow-Weinberg-Salam theory predicts all aspects of the Higgs boson except for the Higgs boson mass. Electroweak observables do, however, depend upon the Higgs boson mass through logarithmic corrections. Recent precision measurements of the top quark mass, $`\mathrm{sin}^2\theta _W`$, and the W boson mass indicate a light Higgs mass near 100 GeV/$`c^2`$ and a 95% confidence level upper mass limit of 262 GeV/$`c^2`$ . Although highly uncertain, this electroweak determination is exciting since it is at the threshold of the current mass limits on the Higgs boson.

## II Higgs Boson Production and Decay

Since the coupling strength of the Higgs boson to other particles is proportional to the particle’s mass, the Higgs boson is produced by coupling to heavy particles, of which the heaviest known particle produced at LEP2 is the Z boson. As a consequence, the main production mechanism for Higgs bosons at LEP2 is the Higgs-strahlung process $`\text{e}^+\text{e}^{-}\rightarrow \text{Z}^{*}\rightarrow \text{H}\text{Z}`$, where the Higgs boson is emitted from the Z boson line . The other Higgs boson production channels at LEP2 are the WW and ZZ fusion processes, which produce a final state with a pair of electron neutrinos or electrons, respectively. In these fusion processes, the Higgs boson is formed in the collision of two quasi-real W or Z bosons radiated from the electron and positron beams. Interference between the production processes with electron neutrinos or electrons in the final state is taken into account . The radiatively corrected cross sections for the Higgs-strahlung process and the sum of the two fusion processes, including the interference terms, are shown in Figure 1 as a function of the Higgs boson mass for a center-of-mass energy of 188.6 GeV. The rapid fall-off in the cross section for the Higgs-strahlung process at a Higgs boson mass of 95 GeV/$`c^2`$ is due to the diminishing phase space available to produce both the heavy Higgs boson and an on-shell Z boson. In the Higgs boson search region of interest, from masses of about 85 GeV/$`c^2`$ to 100 GeV/$`c^2`$, the decay of the Higgs boson into pairs of top quarks, Z bosons, or W bosons is not kinematically accessible. Consequently, the Higgs boson, which couples to mass, will most likely decay into the next heaviest set of particles, which are the b quarks, $`\tau `$ leptons, and c quarks in order of decreasing mass. Figure 2 shows the nearly mass independent branching ratios of the decay of the Higgs boson as a function of its mass. The dominant decay to b quarks comprises about 85% of all Higgs boson decays, while the decay to $`\tau `$ leptons contributes another 8% to the total branching fraction.
The searches at LEP2 consider the Higgs boson decaying to b quark pairs or $`\tau `$ lepton pairs only, as these two channels comprise 93% of the total Higgs branching fraction. ## III Higgs Boson Topologies The four LEP collaborations have searched for the Standard Model Higgs boson in all of the possible decays of the Higgs boson ($`\text{b}\overline{\text{b}}`$, $`\tau ^+\tau ^{-}`$) and the Z boson ($`\text{q}\overline{\text{q}}`$, $`\nu \overline{\nu }`$, $`\text{e}^+\text{e}^{-}`$, $`\mu ^+\mu ^{-}`$, $`\tau ^+\tau ^{-}`$). Most of the backgrounds with large cross sections are easily reduced, leaving the more difficult backgrounds which, fortunately, have cross sections not much larger than the Higgs boson signal. The most difficult background for all topologies is the ZZ final state. This irreducible background occurs when one Z boson decays to $`\text{b}\overline{\text{b}}`$ or $`\tau ^+\tau ^{-}`$ and $`m_\text{H}`$ is approximately equal to $`m_\text{Z}`$. The high purity of the selections and the large amount of collected luminosity per experiment allow this difficult ZZ barrier to be overcome. ### A Four Jet Topology The four jet final state, where the Higgs boson decays to b quarks and the Z boson decays to any quark pair, comprises 64.6% of the Higgs boson final states. Difficult backgrounds for this topology include four jet WW events with four well-defined, isolated jets. This background is significantly reduced by b-tagging the two jets in the event associated with the Higgs boson. Another difficult background arises from b quark pair production with a radiated high energy gluon. B-tagging is not effective for this background, and kinematics must be used to distinguish jets arising from the Z boson decay in the signal from jets produced by the high energy gluon. The typical selection efficiency for this channel is about 40%. ### B Missing Energy Topology The missing energy topology, where the Higgs boson decays to b quarks and the Z boson decays to neutrinos, comprises 20.0% of the Higgs boson final states. A difficult background for this topology arises from b quark pair production with two or more high energy initial state radiated photons going undetected down the beam. Cuts on kinematic variables like the acoplanarity of the b jets and the transverse momentum of the event are used to remove this difficult background, which is both b-tagged and has a large missing mass. The typical selection efficiency for the missing energy channel is about 40%. ### C Lepton Pair Topology The leptonic final state, where the Z boson decays to either an electron or muon pair, comprises only 6.7% of the total Higgs boson final states, but this final state achieves high purity with the ability to precisely reconstruct the Z boson. This allows the reduction of all backgrounds except the irreducible ZZ final state. The high purity of the channel also allows sensitivity to the decay of the Higgs boson to both b quarks and $`\tau `$ leptons, which are typically reconstructed as the recoil to the lepton pair. The typical selection efficiency for this channel is about 75%, where the largest losses are due to the charged particle tracking acceptances of the detectors and the insensitivity of the selection to off-shell Z bosons. ### D Tau Pair Topology The most difficult channel is the final state containing two jets and a pair of $`\tau `$ leptons.
This final state arises from either the decay of the Higgs boson to b quarks and the Z boson to $`\tau `$ leptons or the decay of the Higgs boson to $`\tau `$ leptons and the Z boson to any quark pair. The combined branching fraction of these two final states is 8.7%. When no b-tagging can be applied, as in the second case where the Z boson decays into all quark flavors, a difficult background arises from the $`\text{W}^+\text{W}^{-}`$ process where one W decays to a $`\tau `$ and a neutrino. Since $`\tau `$ identification is difficult, another charged particle in the event is often misidentified as the decay of a second $`\tau `$ lepton, and rejection of these types of events relies upon tight kinematic constraints requiring the final state to be consistent with HZ production. The typical selection efficiency for this channel is generally less than 30%. ## IV Limits on the Higgs Boson Mass Both the expected and observed Higgs boson lower mass limits at 95% confidence level for each of the LEP experiments are summarized in Table I. These preliminary results include all of the data collected at a center-of-mass energy of 189 GeV and were made immediately after the end of the physics data taking period of 1998. Consequently, all of these limits are considered highly preliminary and likely to change. Figure 3 shows, as an illustration, the preliminary expected and observed limits from the DELPHI collaboration as a function of the Higgs boson mass. The intersection of the limits with the 5% line defines the exclusion region with 95% confidence. The expected and observed limits from the four collaborations are in fair agreement, indicating the lack of a signal from the Standard Model Higgs boson. The similar expected limits for the four collaborations indicate the similar capabilities of the different detectors. Variations in the observed limits are mostly due to statistical fluctuations in the small event samples. The low ALEPH observed limit could be an indication of a Higgs boson signal, but, considering the limits of the other experiments, it is most likely due to an unlucky fluctuation that prevents the ALEPH limit from overcoming the ZZ final state barrier. ## V Prospects for the End of LEP By the end of the LEP program in the year 2000, each experiment is expected to have collected about 200 $`\text{pb}^{-1}`$ of data with a center-of-mass energy of 200 GeV. Figure 4 shows the discovery potential and expected limit for the Standard Model Higgs boson as a function of the luminosity for each experiment. The figure assumes that the limits from all four LEP collaborations will be combined. The figure is an overly optimistic expectation for LEP performance, since it is made assuming a center-of-mass energy of 205 GeV, which is probably unattainable. To correct for this too-high center-of-mass energy, a few GeV/$`c^2`$ should be subtracted from the quoted Higgs boson masses to obtain realistic expectations. The figure indicates that with 200 $`\text{pb}^{-1}`$ of data per experiment, LEP2 should be able to discover a Standard Model Higgs boson with a mass less than about 105 GeV/$`c^2`$. Assuming no new evidence for the Higgs boson, exclusion limits at 95% confidence level will set a lower limit on the Standard Model Higgs boson mass of about 110 GeV/$`c^2`$.
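The scaling of these prospects with luminosity can be illustrated with a simple counting argument; this is not the likelihood-ratio combination actually used by the LEP collaborations, and all numbers in the sketch are assumptions.

```python
import math

# Hedged illustration: for a background-free counting experiment that observes
# zero events, a signal hypothesis is excluded at confidence level CL when the
# expected signal s satisfies exp(-s) <= 1 - CL, i.e. s >= -ln(1 - CL) ~ 3 at 95%.
def signal_needed(cl=0.95):
    return -math.log(1.0 - cl)

s95 = signal_needed()
print(f"expected signal events needed for 95% CL exclusion: {s95:.2f}")

# Luminosity required for an assumed signal cross section and selection efficiency.
sigma_pb = 0.1    # assumed HZ cross section times branching ratio, in pb
efficiency = 0.4  # assumed selection efficiency
print(f"required luminosity under these assumptions: {s95/(sigma_pb*efficiency):.0f} pb^-1")
```

In practice backgrounds, several search channels, and the combination of the four experiments modify this picture, which is why Figure 4 is based on a full simulated analysis rather than on a counting estimate of this kind.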
# Pseudogap Formation in an Electronic System with 𝑑-wave Attraction at Low-density ## I Introduction In the underdoped high-$`T_c`$ superconductors (HTSC), pseudogap (PG) behavior has been widely observed in experiments such as NMR, specific heat, and photoemission. All these phenomena can be basically understood as caused by the suppression of low-energy spectral weight in the temperature range $`T_cT\mathrm{\Delta }_{\mathrm{PG}}`$, where $`T_c`$ is the superconducting transition temperature and $`\mathrm{\Delta }_{\mathrm{PG}}`$ is a characteristic energy scale for the PG formation. This occurs both in the spin- and charge-excitation spectra. As a consequence, the problem is reduced to the clarification of the origin of this spectral-weight suppression, namely, the physical origin of $`\mathrm{\Delta }_{\mathrm{PG}}`$. The angle-resolved photoemission (ARPES) spectrum, which is sensitive to the momentum dependence of the PG, has revealed that the PG phenomenon itself exhibits a $`d`$-wave symmetry which is smoothly connected to the $`d`$-wave superconducting gap. Moreover, the locus of the minimum gap position in momentum space traces the shape of the Fermi surface. From these results, it can be inferred that the energy scale for PG formation is closely related to the superconducting correlation. Then, one of the possible explanations for the PG behavior involves the discussion of possible “precursors” of the Cooper-pair formation above $`T_c`$. Certainly there are other possible scenarios that also lead to PG formation such as spinon-pairing, antiferromagnetic spin fluctuation, and fermion-boson model, but in this paper the focus will be precisely on the development of a PG at strong coupling due to the formation of electron bound-states at a temperature scale larger than the one corresponding to long-range superconducting pairing. Along this scenario, much effort has been devoted to the investigation of the PG phenomena. However, there are few results in the literature leading to PG with $`d`$-wave symmetry, while the PG in the $`s`$-wave superconductor has been intensively investigated on the basis of the negative-$`U`$ Hubbard model. The popularity of the $`s`$-wave calculations as opposed to the more realistic $`d`$-wave case is mainly due to technical issues. The quantum Monte Carlo (QMC) simulation provides accurate information on the negative-$`U`$ Hubbard model and with these results the validity of other diagrammatic method such as the self-consistent $`t`$-matrix approximation (SCTMA) can be checked. However, for the model with $`d`$-wave attraction (or the nearest-neighbor attraction), QMC calculations are difficult mainly due to sign-problems in the simulations and also because phase separation could occur for a model with an attractive potential that acts at finite distances, contrary to what occurs in the attractive Hubbard model where the attraction is only on-site. In spite of these potential difficulties, here the $`d`$-wave PG is studied in order to contribute to the investigation of the energy scale $`\mathrm{\Delta }_{\mathrm{PG}}`$ in HTSC. For this purpose, here an effective model with $`d`$-wave separable attraction is analyzed, focussing our efforts into the low-density regime, for the following reasons. First, from a physical point of view, the low carrier density region is important because the underdoped HTSC regime as a first approximation can be described as a low-density gas of holes in an antiferromagnetic background. 
Previous numerical studies have shown that holes in such an environment behave like quasiparticles with a bandwidth renormalized to be of order $`J`$, the Heisenberg exchange coupling. Second, now from a technical viewpoint, it is known that the SCTMA gives reliable results in the dilute limit. Then, the behavior of the spectral function can be safely investigated in the low-density region. For these reasons in the present paper the average electron filling will actually be at most $`10\%`$. Note that our “electrons” below will simply represent fermions interacting through an attractive potential, and thus they can be thought of as “holes” in the context of HTSC. As mentioned before, the preformed $`s`$-wave pairing features in the negative-$`U`$ Hubbard model have been widely investigated as a prototype for PG formation in the underdoped HTSC. Besides the technical aspects already discussed, this seems to be based on the assumption that the difference $`s`$ vs $`d`$ in the pairing symmetry does not play an essential role in the PG formation. This may seem correct by observing the gap-like structure in the total density of states (TDOS), because it appears around the Fermi level irrespective of the pairing symmetry, although the actual detailed shape is different. However, recalling that the main features for the PG formation in the underdoped HTSC have been revealed using ARPES technique, the structure in the individual one-particle spectral function should play a crucial role. In fact, important differences between $`s`$\- and $`d`$-wave symmetry fairly clearly appear in the spectral function described below in our study. In this paper, it is reported that the $`\mathrm{\Delta }_{\mathrm{PG}}`$ scale agrees with the binding energy of the Cooper-pair irrespective of the pairing symmetry. The main difference between $`s`$\- and $`d`$-wave symmetry, appears in the momenta of preformed-pair electrons, $`𝐊`$ and $`𝐊`$. For the $`s`$-wave symmetry, $`𝐊`$ is always determined by the band structure. Namely, in the dilute limit, it is given by the momentum at the bottom of the band, $`𝐤^{}`$. Since the attraction is uniform in momentum space, $`𝐊`$ is determined only by the kinetic energy for the $`s`$-wave case. On the other hand, for a strong attraction with $`d`$-wave symmetry, $`𝐊`$ is not given by $`𝐤^{}`$, but is located at $`(\pi ,0)`$ and $`(0,\pi )`$, because the attraction becomes maximum at those momenta. Such a competition between the band structure and the strong attractive interaction leads to interesting features in the $`d`$-wave PG, while the $`s`$-wave PG simply follows the band structure. This paper is organized as follows. In section II, a general formalism to calculate the electronic self-energy in the SCTMA on the real-frequency axis is present. For the investigation of the PG structure for the $`d`$-wave attraction, a technical trick called “the $`s`$-$`d`$ conversion” is introduced. Section III is devoted to the results obtained with the formalism of Sec. II. Two types of band structures are considered with $`𝐤^{}=(0,0)`$ and $`(\pi ,0)`$, respectively. In section IV, the results are discussed. Finally in section V, after providing some comments, the main results of this paper are summarized. Throughout this paper, units such that $`\mathrm{}=k_\mathrm{B}=1`$ are used. ## II Formulation ### A Hamiltonian Let us consider a simple model in which electrons are coupled with each other through a separable attractive interaction. 
The symmetry of the electron pair is contained in the attractive term of the model, but it is not necessary to write it explicitly in most of the formulation of this section, although it will become important for the discussion on the PG. The model Hamiltonian is written as $`H`$ $`=`$ $`{\displaystyle \sum _{𝐤\sigma }}(\epsilon _𝐤-\mu )c_{𝐤\sigma }^{\dagger }c_{𝐤\sigma }`$ (1) $`+`$ $`{\displaystyle \sum _{𝐤,𝐤^{},𝐪}}V_{𝐤,𝐤^{}}c_𝐤^{\dagger }c_{𝐤+𝐪}c_{𝐤^{}}^{\dagger }c_{𝐤^{}-𝐪},`$ (2) where $`c_{𝐤\sigma }`$ is the annihilation operator for an electron with momentum $`𝐤`$ and spin $`\sigma `$, $`\epsilon _𝐤`$ the one-electron energy, $`\mu `$ the chemical potential, and $`V_{𝐤,𝐤^{}}`$ the pair-interaction between electrons. The electron dispersion is expressed as $$\epsilon _𝐤=-2t(\mathrm{cos}k_x+\mathrm{cos}k_y)-4t^{}\mathrm{cos}k_x\mathrm{cos}k_y,$$ (3) where $`t`$ and $`t^{}`$ are the nearest and next-nearest neighbor hopping amplitudes, respectively. The pair-interaction is written as $$V_{𝐤,𝐤^{}}=Vf_𝐤f_𝐤^{},$$ (4) where $`f_𝐤`$ is the form factor characterizing the symmetry of the Cooper-pair. Note that a positive value of $`V`$ denotes an attractive interaction throughout this paper. ### B Self-consistent $`t`$-matrix approximation Now let us calculate the spectral function using the SCTMA. Since this method becomes exact in the two-particle problem, it is expected to give a reliable result in the low-density region. In fact, this expectation has been already checked in the attractive Hubbard model by comparing SCTMA results against QMC simulations. Therefore the reliability of the SCTMA may also be expected for the non-$`s`$-wave attractive interaction, even though the direct comparison with QMC results is quite difficult in this case. Consider first for completeness the imaginary-axis representation. In this formulation, the one-particle Green’s function $`G`$ is given by $$G(𝐤,i\omega _n)=\frac{1}{i\omega _n-(\epsilon _𝐤-\mu )-\mathrm{\Sigma }(𝐤,i\omega _n)},$$ (5) where $`\omega _n=\pi T(2n+1)`$, $`n`$ is an integer, and $`T`$ the temperature. In the SCTMA, the self-energy $`\mathrm{\Sigma }(𝐤,i\omega _n)`$ is obtained with the use of the $`t`$-matrix given by the infinite sum of particle-particle (p-p) ladder diagrams, as shown in Fig. 1. More explicitly, $`\mathrm{\Sigma }`$ is expressed as $`\mathrm{\Sigma }(𝐤,i\omega _n)`$ $`=`$ $`f_𝐤^2T{\displaystyle \sum _{n^{}}}{\displaystyle \sum _{𝐤^{}}}\mathrm{\Gamma }(𝐤+𝐤^{},i\omega _n+i\omega _n^{})`$ (6) $`\times `$ $`G(𝐤^{},i\omega _n^{}),`$ (7) where $`\mathrm{\Gamma }(𝐪,i\nu _m)`$ is the $`t`$-matrix, given by $$\mathrm{\Gamma }(𝐪,i\nu _m)=\frac{V^2\varphi (𝐪,i\nu _m)}{1-V\varphi (𝐪,i\nu _m)}.$$ (8) Here $`\nu _m=2\pi Tm`$, with $`m`$ an integer, and $`\varphi (𝐪,i\nu _m)`$ is the p-p ladder, defined by $`\varphi (𝐪,i\nu _m)`$ $`=`$ $`T{\displaystyle \sum _n}{\displaystyle \sum _𝐤}f_𝐤^2G(𝐤,i\omega _n)`$ (9) $`\times `$ $`G(𝐪-𝐤,i\nu _m-i\omega _n).`$ (10) Note that the Hartree term is neglected in the self-energy because it should be considered as included in the band structure. The Green’s function can be calculated self-consistently using Eqs. (5)-(9). The chemical potential is determined by $$n/2=T\sum _n\sum _𝐤e^{i\omega _n\eta }G(𝐤,i\omega _n),$$ (11) where $`n`$ is the average electron number density per site, and $`\eta `$ is an infinitesimal positive number.
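As a small illustration of Eqs. (3) and (4) (not part of the original formulation), the dispersion and the $`d`$-wave form factor can be tabulated on the same $`64\times 64`$ momentum grid used below; this makes explicit how the band minimum moves from $`(0,0)`$ at $`t^{}=0`$ to $`(\pi ,0)`$ and $`(0,\pi )`$, where the $`d`$-wave attraction is strongest, once a negative $`t^{}`$ is switched on. A minimal sketch, with $`t=1`$ and the choice $`t^{}=-t`$ taken as illustrative:

```python
import numpy as np

# Sketch of Eqs. (3)-(4): tight-binding dispersion and d-wave form factor
# on a 64x64 grid (t = 1; the value t' = -t below is illustrative).
N, t = 64, 1.0
k = 2*np.pi*np.arange(N)/N
kx, ky = np.meshgrid(k, k, indexing="ij")
f_d = np.cos(kx) - np.cos(ky)                      # d-wave form factor f_k

def dispersion(tp):
    return -2*t*(np.cos(kx) + np.cos(ky)) - 4*tp*np.cos(kx)*np.cos(ky)

for tp in (0.0, -t):
    eps = dispersion(tp)
    i, j = np.unravel_index(np.argmin(eps), eps.shape)
    print(f"t' = {tp:+.1f}t : band minimum at k = ({kx[i, j]:.2f}, {ky[i, j]:.2f}), "
          f"|f_k| there = {abs(f_d[i, j]):.2f}")
```

For $`t^{}=0`$ the minimum sits at $`(0,0)`$, where $`f_𝐤`$ vanishes; for $`t^{}=-t`$ it sits at $`(\pi ,0)`$, where $`|f_𝐤|=2`$.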
In order to obtain results on the real-frequency axis, Padé approximants for the numerical analytic continuation from the imaginary-axis data are frequently used. However, in general, it is difficult to control the accuracy of the calculation by this procedure. In this paper, our efforts are focused on the direct calculation of the Green’s function on the real-frequency axis. In this context, a self-consistent calculation for the spectral function $$A(𝐤,\omega )=-(1/\pi )\mathrm{Im}G(𝐤,\omega ),$$ (12) is carried out, where the retarded Green’s function $`G(𝐤,\omega )`$ is given by $$G(𝐤,\omega )=\frac{1}{\omega -(\epsilon _𝐤-\mu )-\mathrm{\Sigma }(𝐤,\omega )}.$$ (13) The imaginary part of the retarded self-energy is expressed as $`\mathrm{Im}\mathrm{\Sigma }(𝐤,\omega )`$ $`=`$ $`f_𝐤^2{\displaystyle \sum _{𝐤^{}}}{\displaystyle \int d\omega ^{}[f_\mathrm{F}(\omega ^{})+f_\mathrm{B}(\omega +\omega ^{})]}`$ (14) $`\times `$ $`A(𝐤^{},\omega ^{})\mathrm{Im}\mathrm{\Gamma }(𝐤+𝐤^{},\omega +\omega ^{}),`$ (15) where $`f_\mathrm{F}(x)=1/(e^{x/T}+1)`$ and $`f_\mathrm{B}(x)=1/(e^{x/T}-1)`$. The real-part of $`\mathrm{\Sigma }`$ is obtained through the use of $`\mathrm{Im}\mathrm{\Sigma }`$ in the Kramers-Kronig (KK) relation $$\mathrm{Re}\mathrm{\Sigma }(𝐤,\omega )=\mathrm{p}.\mathrm{v}.\int \frac{d\omega ^{}}{\pi }\frac{\mathrm{Im}\mathrm{\Sigma }(𝐤,\omega ^{})}{\omega ^{}-\omega },$$ (16) where p.v. means the principal-value integral. The $`t`$-matrix is $$\mathrm{\Gamma }(𝐪,\omega )=\frac{V^2\varphi (𝐪,\omega )}{1-V\varphi (𝐪,\omega )},$$ (17) where $`\mathrm{Im}\varphi (𝐪,\omega )`$ is given by $`\mathrm{Im}\varphi (𝐪,\omega )`$ $`=`$ $`\pi {\displaystyle \sum _𝐤}{\displaystyle \int d\omega ^{}f_𝐤^2\mathrm{tanh}\frac{\omega ^{}}{2T}A(𝐤,\omega ^{})}`$ (18) $`\times `$ $`A(𝐪-𝐤,\omega -\omega ^{}),`$ (19) and the real part of $`\varphi (𝐪,\omega )`$ is also obtained using the KK relation. The electron number is obtained through $$n/2=\sum _𝐤\int d\omega A(𝐤,\omega )f_\mathrm{F}(\omega ),$$ (20) and the spectral function must satisfy the sum-rule $$1=\sum _𝐤\int d\omega A(𝐤,\omega ).$$ (21) This will be a check for the accuracy of the numerical results presented here. In the actual calculation, the fast Fourier transformation is applied to accelerate the procedure. The first Brillouin zone is divided into a $`64\times 64`$ lattice and the frequency integration is replaced by a discrete sum in the range $`-25t<\omega <25t`$, dividing it into $`512`$ small intervals. As a consequence, the energy resolution is about $`0.1t`$, indicating the order of magnitude of the lowest temperature at which our calculations can be reliably carried out. When the relative difference between two successive iterations for $`A(𝐤,\omega )`$ is less than $`0.01`$ at each $`(𝐤,\omega )`$, the iteration loop is terminated. As for the sum-rule, it is systematically found to be satisfied within $`1\%`$. This value is the limitation for the accuracy of the present calculation, because the integral equation with a singular kernel is solved by replacing the integration procedure by a simple discrete summation. ### C $`s`$-$`d`$ conversion For a separable potential with $`d`$-wave symmetry, $`f_𝐤`$ is given by $$f_𝐤=\mathrm{cos}k_x-\mathrm{cos}k_y.$$ (22) In this case, due to the prefactor $`f_𝐤^2`$ in Eq. (14), $`\mathrm{\Sigma }`$ always vanishes along the lines $`k_x=\pm k_y`$, leading to a delta-function contribution in the spectral function.
In order to avoid this singularity, a self-consistent calculation for the $`d`$-wave case was first attempted by imposing anti-periodic and periodic boundary conditions for the $`k_x`$- and $`k_y`$-directions, respectively. However, it was not always possible to obtain a converged self-consistent solution in this case. Actually, it was quite difficult to control such convergence even if the temperature $`T`$ was slowly decreased from the high-temperature region in which a stable solution was obtained, or if the coupling $`V`$ was adiabatically increased from the weak-coupling region. This difficulty is caused by the fact that $`\mathrm{\Sigma }`$ becomes negligibly small in the region around $`k_x\simeq \pm k_y`$ for the $`d`$-wave case, even if the strong-coupling value for $`V`$ is set as high as $`V=8t`$. If $`\mathrm{Im}\mathrm{\Sigma }`$ becomes smaller than the energy resolution in the present calculation, which is about $`0.1t`$, the sharp peak structure in the spectral function around $`k_x\simeq \pm k_y`$ is not correctly included in the self-consistent calculation. This leads to a spurious violation of the sum-rule, indicating that technical problems appear in reaching a physically meaningful solution for $`d`$-wave symmetry at low temperatures. In order to avoid this difficulty, a continuous change from $`s`$- to $`d`$-wave symmetry is here considered by introducing a mixing parameter $`\alpha `$ such that $$f_𝐤^2=(1-\alpha )+\alpha (\mathrm{cos}k_x-\mathrm{cos}k_y)^2.$$ (23) Our calculations start at $`\alpha =0`$, i.e., for the pure $`s`$-wave case, in which a stable solution can be obtained easily in the SCTMA. Then, $`\alpha `$ is gradually increased such that the $`d`$-wave case is approached. If a physical quantity for the $`d`$-wave model is needed, an extrapolation is made by using the calculated results for the quantities of interest in the range $`0\le \alpha <1`$. ## III Results In this section, our results calculated with the use of the real-axis formalism are shown. Here the magnitude of the interaction $`V`$ is fixed as $`V=8t`$. ### A Case of $`t^{}=0`$ Let us consider first the band structure with $`𝐤^{}=(0,0)`$. In Fig. 2(a), the total density of states $`\rho (\omega )`$ is shown, given by $`\rho (\omega )=\sum _𝐤A(𝐤,\omega )`$. The whole curve for the TDOS is not presented in this figure, because its shape at a larger scale is quite similar to the non-interacting case. At $`\alpha =0`$, a gap-like feature at the Fermi level can be observed, although it is shallow. This result has been already reported in numerous previous papers using several techniques. With the increase of $`\alpha `$, the gap structure gradually becomes narrower and at the same time deeper. The TDOS extrapolated to $`\alpha =1`$ using the results for the values of $`\alpha `$ in the figure is not shown, because it becomes unphysically negative in some energy region. However, this is not a serious problem, because such a behavior is only an artifact due to the extrapolation using a small number of $`\alpha `$-results and it will disappear if $`\alpha `$ approaches unity very slowly and calculations with higher-energy resolution are performed. This problem is not present in the studies at $`t^{}\ne 0`$ in the next subsection. Thus, this is a small complication that can be solved with more CPU and memory-intensive studies than reported here. In order to understand the observed changes in the PG behavior with the increase of $`\alpha `$, special attention must be given to the spectral function $`A(𝐤,\omega )`$.
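Before turning to the spectral functions, the $`s`$-$`d`$ conversion of Eq. (23) is easy to visualize numerically; the short sketch below (with illustrative values of $`\alpha `$ and a placeholder observable for the extrapolation step) shows that the prefactor stays finite on the nodal lines $`k_x=\pm k_y`$ for any $`\alpha <1`$, which is what stabilizes the iteration.

```python
import numpy as np

# Sketch of the s-d conversion, Eq. (23).  On the nodal line k_x = k_y the
# d-wave part vanishes, so f_k^2 reduces to 1 - alpha and only reaches zero
# in the pure d-wave limit alpha = 1.
def f2(kx, ky, alpha):
    return (1.0 - alpha) + alpha*(np.cos(kx) - np.cos(ky))**2

for alpha in (0.0, 0.4, 0.8, 1.0):
    print(f"alpha = {alpha:.1f} : f_k^2 on the nodal line = {f2(0.7, 0.7, alpha):.2f}")

# Extrapolation to alpha = 1 from stable solutions at alpha < 1, as described above;
# 'observable' is a placeholder for a quantity computed in the self-consistent loop.
alphas = np.array([0.0, 0.2, 0.4, 0.6, 0.8])
observable = 1.0 - 0.5*alphas + 0.1*alphas**2
fit = np.polyfit(alphas, observable, 2)
print("extrapolated value at alpha = 1:", np.polyval(fit, 1.0))
```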
Let us first analyze the result at $`𝐤=(0,0)`$, shown in Fig. 2(b), in which two peaks are observed. The large peak above the Fermi level is due to the quasi-particle (QP) contribution, because if the interaction is gradually decreased, it continuously changes into the expected non-interacting $`\delta `$-function peak. Thus here it will be called “the QP peak”. However, note that another structure can be observed below the Fermi level, although it has only a small weight. As will be discussed in the next subsection, this originates from the peak structure in $`\mathrm{Im}\mathrm{\Gamma }`$. In this sense, it can be called “the resonant peak” due to the formation of the bound pair. When $`\alpha `$ is increased, the QP peak becomes sharper and the position of the resonant peak is shifted to the right side in Fig. 2(b), while the weight decreases. At $`\alpha =1`$, the resonant peak will likely disappear and only the $`\delta `$-function QP peak will occur, since the self-energy vanishes due to the prefactor $`f_𝐤^2`$ in Eq. (14). Although the weight for the resonant peak in $`A(𝐤,\omega )`$ with $`𝐤=(0,0)`$ decreases with the increase of $`\alpha `$, it is actually transferred to another $`A(𝐤,\omega )`$ with $`𝐤\ne (0,0)`$. Then, let us next turn our attention to $`A(𝐤,\omega )`$ with $`𝐤=(\pi ,0)`$, shown in Fig. 2(c). In this case, a QP peak is also observed, but the position is higher than that at $`𝐤=(0,0)`$. The difference between the positions of those QP peaks is about $`4t`$, namely, equal to $`\epsilon _{(\pi ,0)}-\epsilon _{(0,0)}`$. It should be noted that another peak structure grows below the Fermi level with the increase of $`\alpha `$. The position roughly agrees with the lower edge of the PG structure in the TDOS, suggesting that the PG structure for $`d`$-wave originates from $`A(𝐤,\omega )`$ around $`𝐤=(\pi ,0)`$. Let us summarize this subsection. Pseudogap features appear in the density of states both for $`s`$- and $`d`$-wave models, but their origin is quite different. For the $`s`$-wave case, this structure is mainly due to the preformed pair of electrons around the point $`𝐤=𝐤^{}=(0,0)`$ at the bottom of the band. On the other hand, for the $`d`$-wave case, it originates from the pair of electrons at other $`𝐤`$-points, especially $`𝐤=(\pi ,0)`$. In the case of strong attraction such as $`V=8t`$, those electrons can exploit the effect of the attractive interaction, in spite of the loss of the kinetic energy. In other words, this difference is due to the competition between the kinetic and the interaction effects. ### B Case of $`t^{}\ne 0`$ From the result for $`t^{}=0`$, in order to obtain a large PG structure for $`d`$-wave symmetry, it is necessary to consider the band structure in which the $`𝐤^{}`$’s are located at $`(\pm \pi ,0)`$ and $`(0,\pm \pi )`$. The reason is that electrons around $`𝐤=𝐤^{}`$ can exploit the kinetic as well as the pairing energy due to the strong attractive interaction. As for the value of $`t^{}`$, it is here typically chosen as $`t^{}=-t`$, but the results do not depend crucially on such a choice. In the TDOS shown in Fig. 3(a), no structure around the Fermi level is observed for the $`s`$-wave case, but a peak appears below the Fermi level with the increase of $`\alpha `$. It can be regarded as a sign of PG formation, but this interpretation becomes much clearer if $`A(𝐤,\omega )`$ is investigated. In Fig. 3(b), the change of $`A(𝐤,\omega )`$ at $`𝐤=𝐤^{}=(\pi ,0)`$ is depicted when $`\alpha `$ is increased.
In the pure $`s`$-wave case, a large QP peak can be observed, but it is difficult to find a resonant peak below the Fermi level. On the other hand, when the $`d`$-wave is approached by increasing $`\alpha `$, the QP peak is gradually destroyed and the resonant peak grows strongly below the Fermi level. Then, the PG structure is much larger compared to that at $`t^{}=0`$. Note that in this case, the extrapolation for the TDOS at $`\alpha =1`$ is quite successful, contrary to what occurs at $`t^{}=0`$, because the large size of the PG allows us to perform the calculation at a high temperature such as $`T=2t`$, a situation in which the structure in the TDOS is smoother than at $`t^{}=0`$. For the case $`t^{}=-t`$, weight transfer in $`A(𝐤,\omega )`$ is observed with the increase of $`\alpha `$, but it occurs between the QP and the resonant peaks at $`𝐤=(\pi ,0)`$. In order to confirm this idea, $`A(𝐤,\omega )`$ at $`𝐤=(0,0)`$ was studied as shown in Fig. 3(c). As expected, only the sharpening of the QP peak is observed as $`\alpha `$ is varied, because the strength of the attractive interaction at $`𝐤=(0,0)`$ becomes weak with the increase of $`\alpha `$. Note that a finite width for the QP remains at $`\alpha =1`$, but it is only a numerical artifact. Actually at $`t^{}=-t`$, electrons around $`𝐤=(0,0)`$ do not take part in the PG formation even for the $`s`$-wave case. Since electrons around $`𝐤=𝐤^{}`$ gain both the kinetic and potential energies, the PG structure is determined only by those electrons. ## IV Discussion ### A Energy scale for pseudogap From the results in the previous section, in addition to the QP peak, a resonant peak below the Fermi level in $`A(𝐤^{},\omega )`$ has been observed, although its appearance depends on the value of $`t^{}`$ and the symmetry of the pair interaction. These two peaks define the PG structure in $`A(𝐤^{},\omega )`$ and also in the TDOS, although in the latter it is often difficult to observe due to the smearing effects of the sum over momentum of the individual one-particle spectral functions. Based on these observations, in this paper the PG energy $`\mathrm{\Delta }_{\mathrm{PG}}`$ is defined by the width between the QP and the resonant peaks in $`A(𝐤^{},\omega )`$. Note that for $`t^{}=0`$ and $`\alpha =1`$, the weight for the resonant peak in $`A(𝐤^{},\omega )`$ will vanish, but in the limit $`\alpha \rightarrow 1`$, its position approaches the lower peak of the PG structure. In order to elucidate the physical meaning of our $`\mathrm{\Delta }_{\mathrm{PG}}`$, the imaginary part of the self-energy is analyzed at $`𝐤=𝐤^{}`$, because its structure has a direct effect on the spectral function, given by $`A(𝐤,\omega )`$ (24) $`=-{\displaystyle \frac{1}{\pi }}{\displaystyle \frac{\mathrm{Im}\mathrm{\Sigma }(𝐤,\omega )}{[\omega -(\epsilon _𝐤-\mu )-\mathrm{Re}\mathrm{\Sigma }(𝐤,\omega )]^2+[\mathrm{Im}\mathrm{\Sigma }(𝐤,\omega )]^2}}.`$ (25) For an intuitive explanation, it is not convenient to analyze the full self-consistent solution for $`\mathrm{Im}\mathrm{\Sigma }(𝐤,\omega )`$. Rather the essential information can be obtained by simply evaluating Eq. (14) replacing the renormalized Green’s function $`G`$ with the non-interacting Green’s function $`G_0`$. Then, $`A(𝐤^{},\omega )`$ on the right-hand side of Eq. (14) becomes $`\delta (\omega -\epsilon _{𝐤^{}}+\mu )`$ and $`\mathrm{Im}\mathrm{\Gamma }`$ is obtained with the use of the p-p ladder diagrams composed of two $`G_0`$-lines.
Furthermore, only the contribution from the preformed pair with momentum zero for the center of mass is considered. Namely, only $`\mathrm{Im}\mathrm{\Gamma }`$ with $`𝐤+𝐤^{}=\mathrm{𝟎}`$ is taken into account in Eq. (14). Due to the above simplifications, $`\mathrm{Im}\mathrm{\Sigma }`$ at $`𝐤=𝐤^{}`$ can be shown to be $`\mathrm{Im}\mathrm{\Sigma }(𝐤^{},\omega )`$ $`\simeq `$ $`f_{𝐤^{}}^2[f_\mathrm{F}(\epsilon _{𝐤^{}}-\mu )+f_\mathrm{B}(\omega +\epsilon _{𝐤^{}}-\mu )]`$ (26) $`\times `$ $`\mathrm{Im}\mathrm{\Gamma }(\mathrm{𝟎},\omega +\epsilon _{𝐤^{}}-\mu ).`$ (27) If it is assumed that $`\mathrm{Im}\mathrm{\Gamma }`$ has a peak at $`\omega =\mathrm{\Omega }`$, then $`\mathrm{Im}\mathrm{\Sigma }(𝐤^{},\omega )`$ shows a peak structure around $`\omega \simeq \mathrm{\Omega }-(\epsilon _{𝐤^{}}-\mu )`$. Here the weight of the peak will not be discussed, but it will have a small finite value if the thermal factor is taken into account. Therefore in the spectral function at $`𝐤=𝐤^{}`$, besides the sharp QP peak at $`\omega =\epsilon _{𝐤^{}}-\mu `$, another peak appears around $`\omega \simeq \mathrm{\Omega }-(\epsilon _{𝐤^{}}-\mu )`$ due to the peak-structure in $`\mathrm{Im}\mathrm{\Sigma }(𝐤^{},\omega )`$, indicating that the size of the PG feature is given by $`\mathrm{\Delta }_{\mathrm{PG}}=|2(\epsilon _{𝐤^{}}-\mu )-\mathrm{\Omega }|`$ in this simple approximation. Now let us estimate the value of $`\mathrm{\Omega }`$. Since $`\mathrm{\Omega }`$ is the energy at which $`\mathrm{\Gamma }`$ acquires its maximum value, it can be obtained from the condition $`1-V\mathrm{Re}\varphi _0(\mathrm{𝟎},\mathrm{\Omega })=0,`$ (28) where $`\varphi _0`$ is the p-p ladder set with two $`G_0`$-lines, explicitly given by $`\varphi _0(𝐪,\omega )=\sum _𝐤f_𝐤^2{\displaystyle \frac{f_\mathrm{F}(\epsilon _𝐤-\mu )-f_\mathrm{F}(-\epsilon _{𝐪-𝐤}+\mu )}{\omega +i\eta -(\epsilon _{𝐪-𝐤}+\epsilon _𝐤-2\mu )}}.`$ (29) In the dilute case in which the chemical potential $`\mu `$ is situated below the lower band-edge $`\epsilon _{𝐤^{}}`$ and in the temperature region $`T\ll \epsilon _{𝐤^{}}-\mu `$, Eq. (28) reduces to $`1+V\sum _𝐤f_𝐤^2{\displaystyle \frac{1}{\mathrm{\Omega }-2(\epsilon _𝐤-\mu )}}=0,`$ (30) which is just the equation to obtain the binding energy $`\mathrm{\Delta }`$ of the Cooper-pair in the two-particle problem. Since $`\mathrm{\Delta }`$ is defined as the difference between the two-particle bound-state energy $`\mathrm{\Omega }`$ and twice the one-particle energy $`\epsilon _{𝐤^{}}-\mu `$, it is given by $`\mathrm{\Delta }=2(\epsilon _{𝐤^{}}-\mu )-\mathrm{\Omega }.`$ (31) Then, from this analysis it is found that $`\mathrm{\Delta }_{\mathrm{PG}}=\mathrm{\Delta }`$, as intuitively expected. ### B Quantitative comparison between $`\mathrm{\Delta }`$ and $`\mathrm{\Delta }_{\mathrm{PG}}`$ Although the discussion in the previous subsection is too simple to address the fully renormalized self-consistent solution, the results reported in Sec. III will become more reasonable if the relevant energy scales are correctly addressed. In order to understand this, let us make a direct comparison between the analytic value for $`\mathrm{\Delta }`$ and $`\mathrm{\Delta }_{\mathrm{PG}}`$ evaluated from the energy difference between the two peaks in $`A(𝐤^{},\omega )`$. By solving Eqs. (30) and (31), the binding energy for the $`s`$- and $`d`$-wave cases with $`t^{}=0`$ and $`t^{}=-t`$ is obtained, which is shown in Fig. 4(a). In the strong-coupling region $`V\gtrsim 8t`$, all curves are proportional to $`V`$.
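Equation (30) is also convenient for a direct numerical check; a minimal sketch (with an assumed per-site normalization of the momentum sum and illustrative values of $`V`$, $`\mu `$, and $`t^{}`$) solves it by bisection for $`\mathrm{\Omega }`$ below the two-particle continuum and returns the binding energy of Eq. (31):

```python
import numpy as np

# Sketch: solve 1 + V * sum_k f_k^2 / (Omega - 2(eps_k - mu)) = 0  (Eq. (30))
# by bisection, then Delta = 2*min(eps_k - mu) - Omega  (Eq. (31)).
# t = 1; V, mu, t' and the per-site normalization of the sum are assumptions.
N, t, tp, V, mu = 64, 1.0, -1.0, 8.0, -4.5
k = 2*np.pi*np.arange(N)/N
kx, ky = np.meshgrid(k, k, indexing="ij")
eps = -2*t*(np.cos(kx) + np.cos(ky)) - 4*tp*np.cos(kx)*np.cos(ky)
f2 = (np.cos(kx) - np.cos(ky))**2        # d-wave; set f2 = 1 for the s-wave case
xi = eps - mu                            # one-particle energies, mu below the band edge

def lhs(omega):
    return 1.0 + (V/N**2)*np.sum(f2/(omega - 2*xi))

lo, hi = 2*xi.min() - 50*t, 2*xi.min() - 1e-8   # bracket below the continuum
for _ in range(200):
    mid = 0.5*(lo + hi)
    if lhs(mid) > 0.0:   # lhs decreases with omega: root lies above mid
        lo = mid
    else:
        hi = mid
Omega = 0.5*(lo + hi)
print(f"Omega = {Omega:.3f} t,  binding energy Delta = {2*xi.min() - Omega:.3f} t")
```

The analogous $`s`$-wave calculation only requires replacing the form factor, which is how curves like those in Fig. 4(a) can be generated.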
In the weak-coupling region, it is difficult to obtain an accurate value numerically, because the binding is exponentially small in this region. Especially for the $`d`$-wave case with $`t^{}=0`$, it was not possible to obtain any finite value in the region $`V\lesssim 7t`$. However, when negative $`t^{}`$ is introduced, the binding energy for the $`d`$-wave pair is much enhanced, while the $`s`$-wave binding energy is not much affected by $`t^{}`$. This result can be understood once again as caused by the competition between the band structure and the attractive interaction at $`𝐤=𝐤^{}`$. For the $`s`$-wave case, since the attractive interaction is isotropic in momentum space, the $`V`$ dependence of $`\mathrm{\Delta }`$ is not so sensitive to the position of $`𝐤^{}`$. However, for the $`d`$-wave symmetry, the situation is drastically different. For the band structure with $`𝐤^{}=(0,0)`$, it is quite difficult for electrons around $`𝐤=𝐤^{}`$ to form a pair, because the attraction does not work at $`𝐤=𝐤^{}`$. Thus, in the weak-coupling region the binding energy is vanishingly small. If $`V`$ becomes as large as the bandwidth, $`8t`$, electron pairs at $`𝐤\ne 𝐤^{}`$ begin to affect the binding energy and the value of $`\mathrm{\Delta }`$ becomes comparable to $`t`$. On the other hand, for the band structure with $`𝐤^{}=(\pi ,0)`$, electrons around $`𝐤=𝐤^{}`$ easily form a pair because of the large strength of the attraction at that point. This sensitivity of the $`d`$-wave binding energy to the band structure is consistent with that of the $`d`$-wave PG observed in the spectral function. Now let us compare our PG energy $`\mathrm{\Delta }_{\mathrm{PG}}`$ with $`\mathrm{\Delta }`$. In Fig. 4(b), those quantities are depicted as a function of $`\alpha `$. Note that these $`\mathrm{\Delta }_{\mathrm{PG}}`$’s are estimated from the $`A(𝐤^{},\omega )`$’s in Figs. 2(b) and 3(b). In the region $`\alpha <0.4`$ for $`t^{}=-t`$, the values of $`\mathrm{\Delta }_{\mathrm{PG}}`$ are not shown, because the resonant peak could not be observed for the parameters used in Fig. 3(b). For the case of $`t^{}=0`$, $`\mathrm{\Delta }_{\mathrm{PG}}`$ traces the curve of the binding energy, though there is a small deviation between them. On the other hand, for the case of $`t^{}=-t`$, the deviation is larger, particularly around $`\alpha \simeq 0.6`$, but $`\mathrm{\Delta }_{\mathrm{PG}}`$ approaches $`\mathrm{\Delta }`$ in the $`d`$-wave case. Thus, from our analysis, it is clear that the energy scale for the PG structure is simply the pair binding energy. ## V Comments and summary In this paper, pseudogap features in a model for $`d`$-wave superconductivity have been observed. An important observation to start the discussion is that implicitly it has been assumed in the results reported thus far that $`\mathrm{\Delta }_{\mathrm{PG}}`$ is larger than the superconducting transition temperature $`T_c`$. Otherwise, the results found in our work may be confused with the superconducting gap expected below $`T_c`$. It is necessary to check this assumption, but it is a very hard task to calculate the true value of $`T_c`$. Then, in order to provide an upper limit for $`T_c`$, the critical temperature is simply evaluated within the mean-field approximation.
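The mean-field estimate just mentioned amounts to solving the gap equation quoted below (Eq. (32)) for the temperature at which it is first satisfied; a minimal sketch, with $`t=1`$ and an assumed chemical potential standing in for the density constraint used in the paper:

```python
import numpy as np

# Sketch: bisection solution of the mean-field condition of Eq. (32), as written.
# The paper fixes the density n and determines mu; here mu is simply assumed.
N, t, tp, V, mu = 64, 1.0, -1.0, 8.0, -3.9
k = 2*np.pi*np.arange(N)/N
kx, ky = np.meshgrid(k, k, indexing="ij")
xi = -2*t*(np.cos(kx) + np.cos(ky)) - 4*tp*np.cos(kx)*np.cos(ky) - mu
f2 = (np.cos(kx) - np.cos(ky))**2

def condition(T):
    # (right-hand side of Eq. (32)) - 1; a decreasing function of T
    return (V/N**2)*np.sum(f2*np.tanh(xi/(2.0*T))/xi) - 1.0

T_lo, T_hi = 1e-3, 20.0*t
if condition(T_lo) > 0.0 > condition(T_hi):
    for _ in range(60):
        T_mid = 0.5*(T_lo + T_hi)
        if condition(T_mid) > 0.0:
            T_lo = T_mid
        else:
            T_hi = T_mid
    print(f"T_c^MF ~ {0.5*(T_lo + T_hi):.3f} t")
else:
    print("no mean-field transition for these parameters")
```

This kind of estimate is what produces the $`T_c^{\mathrm{MF}}`$ curves of Fig. 5; the values quoted there, not this sketch, are the ones used in the comparison with $`\mathrm{\Delta }_{\mathrm{PG}}`$.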
It is expected that the true $`T_c`$ will be lower than the mean-field value $`T_c^{\mathrm{MF}}`$, which is obtained from the well-known gap equation $`1=V\sum _𝐤f_𝐤^2{\displaystyle \frac{\mathrm{tanh}[(\epsilon _𝐤-\mu )/(2T_c^{\mathrm{MF}})]}{\epsilon _𝐤-\mu }}.`$ (32) In Fig. 5, $`T_c^{\mathrm{MF}}`$ for $`d`$-wave pairing with $`t^{}=0`$ and $`t^{}=-t`$ is shown as a function of $`n`$. For $`t^{}=0`$, the calculation for the spectral function shown in Sec. III has been done at $`n=0.02`$ and $`T=0.5t`$, and the point $`(n,T)=(0.02,0.5)`$ is located above the curve of $`T_c^{\mathrm{MF}}`$, in agreement with our assumption. Also for $`t^{}=-t`$, it is found from the figure that the temperature $`T=2t`$ used for $`t^{}\ne 0`$ is larger than $`T_c^{\mathrm{MF}}`$, even at $`n=0.2`$. Clearly the temperatures analyzed in the present paper are above the superconducting critical temperature. Also note that $`\mathrm{\Delta }_{\mathrm{PG}}`$ for $`d`$-wave pairing is larger than $`T_c^{\mathrm{MF}}`$. In particular, for the case of $`t^{}=-t`$, it is about three times larger than $`T_c^{\mathrm{MF}}`$. This fact clearly suggests the appearance of a pseudogap temperature region, $`T_c\lesssim T\lesssim \mathrm{\Delta }_{\mathrm{PG}}`$, for $`d`$-wave superconductor models. Let us now briefly comment on the imaginary-axis calculation with the Padé approximation. Some attempts were made to obtain the PG structure in the imaginary-axis formalism directly for the $`d`$-wave symmetry, but it was difficult to observe it in our results. It might be possible to obtain it, if much more effort was made on the imaginary-axis calculations, particularly on the Padé approximation. However, when the $`s`$-$`d`$ conversion trick is also applied to the imaginary-axis calculation, a clear sign of the PG just below the Fermi level can be easily observed. Although the results of the real- and imaginary-axis calculations do not agree perfectly with each other, the position of the peak in the imaginary-axis result is found to be located just at the lower edge of the PG structure obtained in the real-axis calculation. In the absence of the real-axis results, such a small signal of the PG structure may be missed, because it could be regarded as a spurious result due to the Padé approximation. Finally, let us discuss the possible relation of our PG to that observed in the ARPES experiments. In our result, the PG is characterized by the binding energy of the Cooper-pair, which is of the order of $`t`$ in our models except for a numerical factor. If $`t`$ is taken as a typical value for HTSC, it becomes of the order of a sizable fraction of an eV, which is larger than the observed value in the ARPES experiments. However, from the viewpoint of the $`t`$-$`J`$ model, which is expected to contain at least part of the essential physics for the underdoped HTSC, the effective hopping is renormalized to be of order $`J`$, not $`t`$, where $`J`$ ($`\sim 1000`$ K) is the antiferromagnetic exchange interaction between nearest-neighbor spins. With this consideration the order of magnitude of our PG energy becomes more reasonable. In summary, the pseudogap structure has been investigated in the low-density region for the separable potential model with $`s`$- as well as $`d`$-wave symmetry.
After special technical attention was given to particular features of the $`d`$-wave potential that make some of the calculations unstable, it has been revealed that the effect of the Cooper-pair symmetry on the PG structure manifests itself in the change of the weight of the resonant peak in $`A(𝐤^{},\omega )`$. Moreover, it has been clearly shown that the energy scale for the PG structure is just the pair binding energy, which is certainly larger than $`T_c`$. ###### Acknowledgements. The authors thank Alexander Nazarenko for many useful discussions. T.H. has been supported by the Ministry of Education, Science, Sports, and Culture of Japan. E.D. is supported by grant NSF-DMR-9814350.
# Cosmic Chemical Evolution ## 1 Introduction One of the greatest successes of the Big Bang theory is that its prediction that the primordial baryonic matter is almost entirely composed of hydrogen and helium with a trace amount of a few other light elements is in detailed agreement with current observations (e.g., Schramm & Turner 1998). The heavier elements, collectively called “metals”, are thought to be made at much later times through nucleosynthesis in stars. Metals are ubiquitous in the universe in virtually all environments that have been observed, including regions outside of galaxies, the intergalactic medium (“IGM”), ranging from the metal rich intracluster medium to low metallicity Lyman alpha clouds. However, metallicity (the ratio of the amount of mass in metals to the total baryonic mass for a given region, $`M_{metals}/M_{baryons}`$, divided by $`0.02`$ for the Sun, $`\mathrm{Z}_{}`$) is observed to be very variable. For example, metallicity reaches as high as ten (in units of the solar value, where value unity corresponds to $`M_{metals}/M_{baryons}=2\%`$) in central regions of active galactic nuclei (Mushotzky, Done, & Pounds 1993; Hamann 1997; Tripp, Lu, & Savage 1997) but is as low as $`10^{-3}`$ for some halo stars in our own galaxy (Beers 1999). Disparity in metallicity values is also seen at high redshift. For instance, metallicity in damped $`\mathrm{Ly}\alpha `$ systems is as high as $`0.5`$ and as low as $`0.01`$ at redshift $`z`$ (Prochaska & Wolfe 1997), whereas it is about 0.01 in moderate column density $`\mathrm{Ly}\alpha `$ clouds at $`z\sim 3`$ (Tytler et al. 1995; Songaila & Cowie 1996). Low column density Lyman alpha clouds at $`z\sim 2`$–3 appear to have still lower metallicity (Lu et al. 1998; Tytler & Fan 1994). The question that naturally arises then is: When were the metals made and why are they distributed as observed? Can we understand the strong dependence of $`Z/\mathrm{Z}_{}`$ on the gas density (at redshift zero) and the comparable dependence of $`Z/\mathrm{Z}_{}`$ on redshift for regions of a given overdensity? While these are well-posed questions, addressing them directly is a formidable computational problem: it requires both a large dynamic range, to ensure that a fair piece of the universe is modeled, and sufficiently realistic modeling of the physics, including gasdynamics, galaxy formation, galactic winds and metal enrichment. After years of continuous improvement of both numerical techniques and physical modeling, coupled with rapid increase in computer power, we have now reached the point where this important question can at last be addressed in a semi-quantitative fashion using numerical simulations. ## 2 Model The results reported on here are based on a new computation of the evolution of the gas in a cold dark matter model with a cosmological constant; the model is normalized to both the microwave background temperature fluctuations measured by COBE (Smoot et al. 1992) on large scales (Bunn & White 1997) and the observed abundance of clusters of galaxies in the local universe (Cen 1998), and it is close to both the concordance model of Ostriker & Steinhardt (1995) and the model indicated by the recent high redshift supernova results (Riess et al. 1998). The relevant model parameters are: $`\mathrm{\Omega }_0=0.37`$, $`\mathrm{\Omega }_b=0.049`$, $`\mathrm{\Lambda }_0=0.63`$, $`\sigma _8=0.80`$, $`H_0=70`$ km/s/Mpc, $`n=0.95`$ and a $`25\%`$ tensor mode contribution to the CMB fluctuations on large scales.
Two simulations with box sizes of $`L_{box}=(100,50)h^1`$Mpc are made, each having $`512^3`$ cells and $`256^3`$ dark matter particles, with the mean baryonic mass in a cell being $`(1.0\times 10^8,1.3\times 10^7)h^1\mathrm{M}_{}`$ and the dark matter particle mass being $`(5.3\times 10^9,6.6\times 10^8)h^1\mathrm{M}_{}`$, respectively, in the two simulations. Output was rescaled to $`\mathrm{\Omega }_b=0.037`$ to match the latest observations (Burles & Tytler 1998). The results shown are mostly based on the large box, while the small box is used to check resolution effects. The description of the numerical methods of the cosmological hydrodynamic code and input physical ingredients can be found elsewhere (Cen & Ostriker 1999a,b). To briefly recapitulate, we follow three components separately and simultaneously: dark matter, gas and galaxies, where the last component is created continuously from the former two during the simulations in places where real galaxies are thought to form, as dictated mostly by local physical conditions. Self-consistently, feedback into the IGM from young stars in the “galaxies” is allowed, in three related forms: supernova thermal energy output, UV photon output and mass ejection from the supernova explosions. The model reproduces the observed UV background as a function of redshift and the redshift distribution of star formation (“Madau Plot”; Nagamine, Cen & Ostriker 1999), among other diagnostics. Metals are followed as a separate variable (analogous to the total gas density) with the same hydrocode. We did not fit to the observed distributions and evolution of metals, but assumed a specific efficiency of metal formation; the computed results were subsequently rescaled to an adopted “yield” (Arnett 1996), the percentage of stellar mass that is ejected back into the IGM as metals, of $`0.02`$ (from an input value of 0.06). A word about the resolution of the simulation is appropriate here. The conclusions drawn in this paper are not significantly affected by the finite resolution, as verified by comparing the two simulations. Let us give an argument for why this is so. Although our numerical resolution is not sufficient to resolve any internal structure of galaxies, the resolution effect should affect different regions with different large-scale overdensities more or less uniformly, since our spatial resolution is uniform and our mass resolution is good even for dwarf galaxies. In other words, galaxy formation in our simulations is not significantly biased against low density regions. Thus, the distribution of the identified galaxies as a function of large-scale overdensity in the simulation would be similar if we had a much better resolution. It is the distribution of the galaxies that determines the distribution of metals, which is the subject of this paper. Needless to say, we cannot model in detail the ejection of metals from galaxies into the IGM, and this ignorance is hidden in the adopted “yield” coefficient. However, once the metals get out of galaxies, their dynamics is followed accurately. Changing the adopted yield by some factor would change all quoted metallicities by the same factor, but it would not alter any statements about the spatial and temporal distribution of metals.
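The quoted mass resolutions follow directly from the box parameters; as a quick consistency check (assuming only the standard value of the critical density), one can verify them as follows:

```python
# Consistency check of the quoted mass resolutions (illustrative, not the paper's code).
rho_crit = 2.775e11            # critical density in h^2 M_sun / Mpc^3 (standard value)
Omega_0, Omega_b = 0.37, 0.049 # total and baryonic density parameters used in the runs

for L_box, label in ((100.0, "large box"), (50.0, "small box")):
    cell = L_box/512.0                              # gas cell size in h^-1 Mpc
    spacing = L_box/256.0                           # dark matter interparticle spacing
    m_gas = rho_crit*Omega_b*cell**3                # mean baryonic mass per cell, h^-1 M_sun
    m_dm = rho_crit*(Omega_0 - Omega_b)*spacing**3  # dark matter particle mass, h^-1 M_sun
    print(f"{label}: baryonic cell mass ~ {m_gas:.1e}, DM particle mass ~ {m_dm:.1e} (h^-1 M_sun)")
```

This reproduces the $`1.0\times 10^8`$ ($`1.3\times 10^7`$) and $`5.3\times 10^9`$ ($`6.6\times 10^8`$) $`h^1\mathrm{M}_{}`$ figures quoted above for the large (small) box.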
## 3 Results Figure 1 shows the evolution of metallicity averaged over the entire universe (dot-dashed curve) and four regions with four different overdensities, $`\delta _\rho =(10^3,10^2,20,0)`$, smoothed by a Gaussian window of comoving size $`0.3h^1`$Mpc, respectively, that approximately correspond to clusters of galaxies, Lyman limit and damped $`\mathrm{Ly}\alpha `$ systems, moderate column density $`\mathrm{Ly}\alpha `$ clouds and very low column density $`\mathrm{Ly}\alpha `$ clouds, at $`z=(3,1.0,0.5,0)`$. The overdensity of each class of objects is defined using a Gaussian smoothing window of radius $`0.3^1`$Mpc, which corresponds to a mean mass of $`6\times 10^{10}h^1M_{}`$. If we assume that the DLAs are progenitors of the present day large disk galaxies, their mass may be in the range $`5\times 10^{12}1\times 10^{13}h^1M_{}`$. Therefore a choice of overdensity of $`100`$ seems appropriate. For the moderate column density Lyman alpha clouds, the choice is somewhat less certain but small variations do not drastically change the results. For the very low column density Lyman alpha clouds, the choice of the mean density should be adequate since the density fluctuations for these objects are small thus their density should be close to the mean. For the clusters of galaxies we can use overdensity of $`10^3`$ or $`3\times 10^3`$ and it makes no difference to the results. Note that a given class of objects is chosen to have a fixed comoving overdensity, not to have a fixed physical density. This choice is made because the decrease of a factor $`50100`$ of the observed meta-galactic radiation field from $`z3`$ to $`z0`$ (Haardt & Madau 1996), and the increase of the comoving size of structure with time at a fixed comoving density as $`(1+z)^{1/2}`$ (Cen & Simcoe 1997) approximately compensate for the decrease of physical density so a fixed comoving density approximately corresponds to a fixed column density at different redshifts. This applies for the last three classes of objects. For the first class of objects (clusters of galaxies) either choice gives comparable results, due to the fact that metallicity saturates at the highest density (see below). Several trends are clear. First, metallicity is a strong function of overdensity in the expected sense: high density regions have higher metallicity. Second, and more surprisingly, the evolution of metallicity itself is a strong function of overdensity: high density regions evolve slowly with redshift, whereas the metallicity in low density regions decreases rapidly towards high redshift. Finally, the overall metallicity evolution averaged globally differs from that of any of the constituent components. Therefore any given set of cosmic objects (including stars or $`\mathrm{Ly}\alpha `$ forest) cannot be representative of the universal metallicity at all times, although at a given epoch one may be able to identify a set of objects that has metallicity close to the universal mean. For example, at $`z=3`$, regions with overdensity $`20`$ (which roughly correspond to Lyman alpha clouds of column density of $`10^{14.015.0}`$cm<sup>-2</sup>) have metallicities very close to the global mean, while at $`z=0`$, regions with overdensity of one hundred (which roughly correspond to Lyman limit and damped Lyman alpha systems) has metallicity very close to the global mean. 
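The overdensities used to define these classes are obtained by convolving the simulated density field with the Gaussian window mentioned above; a generic sketch of that smoothing operation (the grid size, box size, and random field below are placeholders, not the simulation data) is:

```python
import numpy as np

# Generic Gaussian smoothing of an overdensity field via FFT (placeholder data).
N, L, R = 128, 25.0, 0.3                      # grid cells per side, box size and window radius (h^-1 Mpc)
delta = np.random.standard_normal((N, N, N))  # stand-in for the simulated overdensity field

kfreq = 2.0*np.pi*np.fft.fftfreq(N, d=L/N)    # comoving wavenumbers in h Mpc^-1
kx, ky, kz = np.meshgrid(kfreq, kfreq, kfreq, indexing="ij")
window = np.exp(-0.5*(kx**2 + ky**2 + kz**2)*R**2)   # Fourier transform of the Gaussian window

delta_smoothed = np.fft.ifftn(np.fft.fftn(delta)*window).real
print("rms overdensity before / after smoothing:", delta.std(), delta_smoothed.std())
```

Thresholding the smoothed field at the values listed above ($`10^3`$, $`10^2`$, 20, and 0) then selects the four classes of regions.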
It has been the conventional wisdom to expect that, as all “metals” are produced (but not, on balance, destroyed) by stars, the metal abundance should increase with time or decrease with increasing redshift. What we see from Figure 1 is that there is another trend which is as strong as or stronger than this. Since stars are relatively overabundant in the densest regions, metallicity is a strongly increasing function of density at any epoch. This trend is observed within galaxies (with the central parts being the most metal rich) but it is also true when one averages over larger regions. The gas in dense clusters of galaxies is far more metal rich than the general IGM at $`z=0`$. This trend is shown in another way in Figure 2, where metallicity is plotted as a function of overdensity at four redshifts. Let us now examine the individual components more closely in Figure 3, with panels (a,b,c,d) showing the metallicity distributions for regions of overdensity $`(10^3,10^2,20,0)`$, respectively, at four redshifts $`z=(3,1,0.5,0)`$. We examine each panel in turn. In panel (a) we see that there is almost no evolution from redshift zero (thick solid curve) to redshift one (dotted curve) for the metallicity of the intracluster gas. The narrowness of the computed distributions fits observations very well for clusters locally and at low redshift. But we predict that the metallicity of clusters at redshift $`z=3`$ will be somewhat lower than that of their low redshift counterparts, by a factor of about three, with the characteristic metallicity declining to $`Z\sim 0.1\mathrm{Z}_{}`$. Second, examining panel (b) for regions with overdensity $`10^2`$, which roughly correspond to Lyman limit and damped Lyman alpha systems, it is seen that the median metallicity increases only slightly from $`z=3`$ to $`z=0.5`$, but a large range of metallicity, spanning approximately a factor of $`30`$, is expected at any redshift, in very good agreement with observations over the entire redshift range considered. Next, panel (c) shows the integral distributions for regions with overdensity $`20`$, which correspond to moderate column density Lyman alpha clouds with column density $`10^{14}`$–$`10^{15}`$cm<sup>-2</sup>. We see that the median metallicity increases by a factor of about $`10`$ from redshift $`z=3`$ to $`z=0`$, but with a broad tail towards the low metallicity end at all redshifts, again in good agreement with observations. Davé et al. (1998) concluded that the metallicity for regions of overdensity of $`10`$ at $`z\sim 3`$ is $`10^{-2.5}`$ from analysis of CIV absorption lines, consistent with our results here. Finally, panel (d) shows regions with overdensity $`0`$ (i.e., at the mean density) corresponding to the very low column density Lyman alpha clouds. The observations are upper bounds. But it appears that the bulk of the regions with such low density indeed has quite low metallicity, consistent with observations. Davé et al. (1998) derived an upper bound on the metallicity for near mean density regions at $`z\sim 3`$ of $`10^{-3}`$ from analysis of OVI absorption lines, in agreement with our results. ## 4 Conclusion In the simulation examined in this paper high density regions reach approximately solar metallicity first, with lower density regions approaching that level at later epochs, and at all epochs the variation of $`Z`$ with density is comparable to or larger than the variation with redshift at a given overdensity. This saturation of metallicity has a natural physical explanation.
Regions where the peaks of long and short waves fortuitously coincide have the highest initial overdensity and the earliest significant star formation; but, when the long waves break, high temperature shocks form (as in the centers of clusters of galaxies), so that further new galaxy formation and infall onto older systems ceases, star formation declines (Blanton et al. 1999), and the metallicity stops increasing. Observationally, we know that, in the highest density, highest metallicity and highest temperature regions of the rich clusters of galaxies, new star formation has essentially ceased by redshift zero. As a side note, that fact that metallicity depends as strongly on density as on time implies that stellar metallicity need not necessarily (anti-)correlate with the stellar age. For example, young stars may be relatively metal poor, as supported by recent observations (Preston, Beers & Shectman 1994; Preston 1994), simply because these young stars may have formed out of relatively lower density regions where metallicity was low. The picture presented here is, in principle, quite testable. For example, Steidel and co-workers (Steidel 1993; Steidel et al. 1994) and Savage et al. (1994) and others have found that metal line systems observed along a line of sight to a distant quasar are invariably associated with galaxies near the line of sight at the redshift of the metal line system. One would expect, on the basis of the work presented here, that there would be a strong statistical trend associating higher metallicity systems to closer galaxies, since for these the typical overdensity is larger. Figure 4 shows surface density contours on a slice of $`50\times 50\times 10h3`$Mpc<sup>3</sup> for galaxies (filled red; at a surface density of $`31`$ times the mean surface density of galaxies), metals (green; at a metallicity of $`0.16\mathrm{Z}_{}`$) and warm/hot gas (Cen & Ostriker 1999a) with $`T=10^510^7`$K (blue; at a surface density of $`6.8`$ times the mean surface density of warm/hot gas). Each respective contour contains 90% of the total mass of the respective component. We see that most of the green contours contain red spots, each within a region of size approximately $`1h^1`$Mpc; i.e., one would expect to see a normal galaxy associated with a metal line system within a projected distance of $`1h^1`$Mpc. It is also seen from Figure 4 that metal rich gas is generally embedded in the warm/hot gas. This may manifest itself observationally as spectral features that seem to arise from multiple phase gas at a similar redshift along the line of sight. Recent observations appear to have already hinted this; Lopez et al. (1998), using combined observations of quasar absorption spectra from Hubble Space Telescope and other ground-based telescopes, noted that some C IV clouds are surrounded by large highly ionized low-density clouds. Finally, it may be pointed out that most of the metals are in over-dense regions and these regions are generally relatively hot: $`>10^5`$Kelvin. Therefore, they should be observable in the EUV and soft X-ray emitting warm/hot gas (Cen & Ostriker 1999a). The work is supported in part by grants NAG5-2759 and AST93-18185, ASC97-40300. We thank Ed Jenkins, Rich Mushotzky, Jim Peebles, David Spergel, Michael Strauss and Todd Tripp for discussions. Fig. 
1.— The average metallicities, averaged over the whole universe (dot-dashed curve) and over regions of overdensity $`10^3`$ (thick solid curve), overdensity $`10^2`$ (thin solid curve), overdensity $`10`$ (dotted curve) and overdensity $`0`$ (dashed curve), respectively, as a function of redshift. Fig. 2.— The average metallicities as a function of overdensity at four redshifts. The variances are $`1\sigma `$. Fig. 3.— Panel (a) shows the differential metallicity distribution for regions with overdensity $`10^3`$ (clusters of galaxies) at four different redshifts, $`z=0`$ (thick solid curve), $`z=0.5`$ (thin solid curve), $`z=1`$ (dotted) and $`z=3`$ (dashed curve) \[the same convention will be used for panels (b,c,d)\]. Also shown as symbols are observations from various sources: the open circle from Mushotzky & Lowenstein (1997; ML97) showing that there is almost no evolution in the intracluster metallicity from $`z=0`$ to $`z\sim 0.3`$ at around one-third of solar, the open triangles from Mushotzky et al. (1996; M96) showing the metallicities of four individual nearby clusters (Abell 496, 1060, 2199 and AWM 7), the open square from Tamura et al. (1996; T96) showing the metallicity of the intracluster gas of Abell 1060, and the filled triangle from Arnaud et al. (1994; A94) showing the metallicity of the intracluster gas of the Perseus cluster. All metallicities are measured in \[Fe/H\]. Panel (b) shows the differential metallicity distribution for regions with overdensity $`10^2`$. The open triangle from Lu et al. (1996; Lu96) shows the result from an extensive analysis of a large database of damped Lyman alpha systems with $`0.7<z<4.4`$. The horizontal range on the open triangle does not indicate the error bar on the mean; rather, it shows the range of metallicities of the observed damped Lyman alpha systems as given by Lu96. The open circle from Pettini et al. (1998; P98) is due to an analysis of ten damped Lyman alpha systems at $`z<1.5`$; here the horizontal range indicates the error on the mean. The open square due to Prochaska & Wolfe (1998; PW98) is from an analysis of 19 damped Lyman alpha systems at $`z>1.5`$; the horizontal range indicates the error on the mean. Finally, the two solid dots are from Prochaska & Wolfe (1997; PW97), from an analysis of two damped Lyman alpha systems at $`z\sim 2.0`$, one having extremely low metallicity and the other extremely high metallicity. All metallicities are measured in \[Zn/H\]. Vertical position in panel (b) is without significance. Panel (c) shows the cumulative metallicity distribution for regions with overdensity $`20`$. The symbols are observations: the open circle from SC96<sup>6</sup> for Lyman alpha clouds at $`z\sim 3`$ with column density $`N>3\times 10^{14}`$cm<sup>-2</sup>, the open triangle from Rauch et al. (1997; R97) for Lyman alpha clouds at $`z\sim 3`$ with column density $`N>3\times 10^{14}`$cm<sup>-2</sup>, the solid dot from Barlow & Tytler (1998; BT98) for Lyman alpha clouds at $`z\sim 0.5`$ with column density $`N>3\times 10^{14}`$cm<sup>-2</sup>, and the solid triangle from Shull et al. (1998; S98) for Lyman alpha clouds at $`z\sim 0`$ with column density $`N=(3\text{–}10)\times 10^{14}`$cm<sup>-2</sup>. Panel (d) shows the cumulative metallicity distribution for regions with overdensity $`0`$ (i.e., mean density). The open circle is the upper limit for Lyman clouds with column density $`N=10^{13.5}`$–$`10^{14.0}`$cm<sup>-2</sup> at redshift $`z=2.2`$–$`3.6`$ from Lu et al. (1998; Lu98). 
The open triangle is the upper limit for Lyman clouds with column density $`N=10^{13.0}`$–$`10^{14.0}`$cm<sup>-2</sup> at redshift $`z\sim 3`$ from Tytler & Fan (1994; TF94). The model seems consistent with observations of low column density Lyman alpha clouds at high redshift. Fig. 4.— Surface density contours on a slice of $`50\times 50\times 10h^{-3}`$Mpc<sup>3</sup> for galaxies (filled red; at a surface density of $`31`$ times the mean surface density of galaxies), metals (green; at a metallicity of $`0.16\mathrm{Z}_{\odot }`$) and warm/hot gas<sup>12</sup> with $`T=10^5`$–$`10^7`$K (blue; at a surface density of $`6.8`$ times the mean surface density of warm/hot gas). Each respective contour contains 90% of the total mass of the respective component.
no-problem/9903/cond-mat9903026.html
ar5iv
text
# The random magnetic flux problem in a quantum wire ## I Introduction The concepts of scaling and of the renormalization group have provided crucial insights into the localization properties of a quantum particle in a random but static environment. Beyond a typical length scale depending on the microscopic details of the disorder, the localization problem can be described by an effective field theory that is uniquely specified by the dimensionality of space and the fundamental symmetries of the microscopic Hamiltonian. Correspondingly, the disorder is said to belong to the orthogonal, unitary and symplectic ensembles, depending on whether time reversal symmetry and spin-orbit coupling are present or not. However, not all disordered systems belong to one of these three standard symmetry classes. One example is the Integer Quantum Hall Effect, for which the scaling theory in the unitary universality class cannot explain the observed jumps in the Hall resistance, since it predicts that all states are localized in two-dimensions. Instead, a new scaling theory was proposed for the Integer Quantum Hall Effect, where, in addition to the longitudinal conductivity that controls the scaling flow in the unitary ensemble, the Hall conductivity appears as a second parameter. In this paper we consider a different example. It is the so-called random flux model, which describes the localization properties of a particle moving in a plane perpendicular to a static magnetic field of random amplitude and vanishing mean. In the literature, different points of view have been offered with regard to the localization properties and the appropriate symmetry class of the random flux problem. In Refs. it has been claimed that, since the magnetic field has a vanishing mean, the only effect of the random magnetic field is to break time reversal invariance, and hence that the localization properties are those of the standard unitary symmetry class. On the other hand, Zhang and Arovas have argued that this argument might be too naive and that a scaling theory closely related to that of the Kosterlitz-Thouless transition controls the localization properties of the random magnetic flux problem. They predicted that states are localized in the tails of the spectrum whereas close to the center of the band a line of critical points of the Kosterlitz-Thouless type is formed. Related point of views can be found in Refs. . Finally, it has been proposed in Ref. that the random flux model shows critical behavior at the band center $`\epsilon =0`$ only, whereas its localization properties are those of the unitary ensemble for energies $`\epsilon 0`$. In the third scenario, the behavior at $`\epsilon =0`$ is governed by an additional symmetry, the so-called chiral or particle-hole symmetry. The chiral symmetry can also be found in the related problem of a particle hopping on a lattice with random (real) hopping amplitudes. In the one-dimensional version of this problem, it is well established that the ensemble-averaged density of states diverges at the band center $`\epsilon =0`$ and that the ensemble averaged conductance decays algebraically with the length $`L`$ of the system. For comparison, in the unitary symmetry class, the density of states is continuous at $`\epsilon =0`$, while the conductance decays exponentially with $`L`$. (The one-dimensional random-hopping problem has been studied in many incarnations, cf. Refs. .) 
For two-dimensional systems, the effect of the chiral symmetry was studied by Gade and Wegner (see also Refs. ). They argued that the presence of the chiral symmetry results in three new symmetry classes, called chiral orthogonal, chiral unitary, and chiral symplectic. For disordered systems with chiral unitary symmetry, all states are localized except at the singular energy $`\epsilon =0`$ at which the average density of states diverges. The relevance of the chiral unitary symmetry class to the random flux problem was pointed out by Miller and Wang. (Only the chiral unitary class is of relevance, since time-reversal symmetry is broken in the random flux model.) For the two-dimensional random-flux problem, sufficiently accurate numerical data are notoriously hard to obtain. Although a consensus has emerged that states are localized in the tails of the spectrum, it is impossible to decide solely on the basis of numerical simulations whether states are truly delocalized upon approaching the center of the band, or only deceptively appear so as the localization length is much larger than the system sizes that are accessible to the current computers. Moreover, it is very easy to smear out a diverging density of states in a numerical simulation (compare Refs. and ). In short, no conclusion has been reached in the debate about the localization properties of the two-dimensional random flux problem. Here, we focus on the simpler problem of the random flux problem on a lattice and in a quasi one-dimensional geometry of a (thick) quantum wire with weak disorder, and restrict our attention to transport properties, notably the conductance $`g`$. For a wire geometry, numerical simulations can be performed with very high accuracy, and very good statistics can be obtained. Moreover, precise theoretical predictions for the transport properties can be made, both for the unitary symmetry class, and for the chiral unitary symmetry class. The wire geometry allows us to quantitatively compare the analytical predictions for the various symmetry classes and the numerical simulations for the random flux model. This comparison shows that, away from the critical energy $`\epsilon =0`$, the $`L`$-dependence of the average and variance of the conductance $`g`$ are those of the unitary ensemble. At the band center $`\epsilon =0`$, $`g`$ and $`\text{var}g`$ are given by the chiral unitary ensemble. Hence, we unambiguously show that in a quasi one-dimensional geometry, the localization properties of the random flux model are described by the third scenario above, in which the $`\epsilon =0`$ is a special point, governed by a separate symmetry class. Although our theory is limited to a quasi one-dimensional geometry, it does show the importance of the chiral symmetry at the band center $`\epsilon =0`$ and may thus contribute to the debate about the localization properties of the random flux problem in higher spatial dimensions. This paper was motivated by two recent works. First, in a recent paper, one of the authors computed $`g`$ and $`\text{var}g`$ numerically for the random flux model in a wire geometry to a very high accuracy. While for nonzero energies $`\epsilon `$, the result was found to agree with analytical calculations for the unitary symmetry class, for $`\epsilon =0`$ a clear difference with the unitary symmetry class was observed. 
Second, for the chiral symmetry classes, a scaling equation for the distribution of the transmission eigenvalues in a quasi one-dimensional geometry was derived and solved exactly in the chiral unitary case by Simons, Altland, and two of the authors. This scaling equation is the chiral analogue of the so-called Dorokhov-Mello-Pereyra-Kumar (DMPK) equation, which describes the three standard symmetry classes and was solved exactly in the unitary case by Beenakker and Rejaei. However, for the chiral unitary case, analytical results for the $`L`$ dependence of $`g`$ and $`\text{var}g`$ were lacking, so that a comparison between the theory and the numerical results of Ref. was not possible. In the present work this gap is bridged. In a wire geometry, the chiral unitary universality class undergoes a striking even-odd effect first noticed by Miller and Wang: The conductance $`g`$ decays exponentially with the length $`L`$ if the number of channels $`N`$ is even, while critical behavior is shown if $`N`$ is odd, even in the limit of large $`N`$ that we consider here. In the latter case, the average conductance $`g`$ decays algebraically, while the conductance fluctuations are larger than the mean. We analyze how the even-odd effect follows from the exact solution of the Fokker-Planck equation of Ref. and compare with numerical simulations of the random flux model. We close the introduction by pointing out that the random flux problem is also relevant to some strongly correlated electronic systems. In both the Quantum Hall Effect at half-filling and high $`T_c`$ superconductivity, strong electronic correlations can be implemented by auxiliary gauge fields. In this context, the random flux problem captures the contributions from the static transverse gauge fields. Notice that the chiral symmetry is not required on physical grounds both for the Quantum Hall Effect at half-filling and for high $`T_c`$ superconductivity. Another area of applicability for our results is the passive advection of a scalar field and non-Hermitean quantum mechanics. Finally, the striking sensitivity of the localization properties in the random flux problem to the parity of the number $`N`$ of channels is remarkably similar to that of the low energy sector of a single antiferromagnetic spin-$`N/2`$ chain to the parity of $`N`$, on the one hand, or to the sensitivity of the low energy sector of $`N`$ coupled antiferromagnetic spin-1/2 chains to the parity of $`N`$, on the other hand. The paper is organized as follows. The random flux problem in a wire geometry is defined in section II. The average and variance of the conductance are calculated analytically in section III. Analytical predictions are compared to the numerical simulations in section IV. We conclude in Sec. V. ## II The random magnetic flux model In the random flux model one considers a spinless electron on a rectangular lattice in the presence of a random magnetic field with vanishing mean. The magnetic field is perpendicular to the plane in which the electron moves. In this paper, we study the random flux model in a wire geometry and for weak disorder. This system is described by the Hamiltonian $`\psi _{m,j}`$ $`=`$ $`t[\psi _{m+1,j}+\psi _{m1,j}]`$ (3) $`t(1\delta _{j,N})e^{i\theta _{m,j}}\psi _{m,j+1}`$ $`t(1\delta _{j,1})e^{i\theta _{m,j1}}\psi _{m,j1},`$ where $`\psi _{m,j}`$ is the wavefunction at the lattice site $`(m,j)`$, labeled by the chain index $`j=1,\mathrm{},N`$ and by the column index $`m`$, see Fig. 1(a). 
The Peierls phases $`\theta _{m,j}`$ result from the flux $`\mathrm{\Theta }_{m,j}=\theta _{m+1,j}\theta _{m,j}`$ through the plaquette between the sites $`(m,j)`$, $`(m+1,j)`$, $`(m+1,j+1)`$, and $`(m,j+1)`$. (The flux $`\mathrm{\Theta }_{m,j}`$ does not uniquely determine all the phases along all the bonds. We have used this freedom to choose the nonzero phases along the transverse bonds only.) We consider a system with Hamiltonian (3) where the phases $`\mathrm{\Theta }_{m,j}`$ take random values in a disordered strip $`0<m<M`$ only, and are zero outside. We assume that the disordered region is quasi one-dimensional, i.e., $`MN1`$, corresponding to a thick quantum wire. In the disordered region, the Peierls phases $`\theta _{m,j}`$ are chosen at random in such a way that the magnetic flux $`\mathrm{\Theta }_{m,j}=\theta _{m+1,j}\theta _{m,j}`$ is uniformly distributed in $`[p\pi ,p\pi ]`$ with $`0<p1`$. To be precise, with $`\theta _{m,j}`$ given, $`\theta _{m+1,j}`$ is chosen from the interval $`[\theta _{m,j}p\pi ,\theta _{m,j}+p\pi ]`$ with uniform probability $`1/2p\pi `$. The parameter $`p`$ controls the strength of disorder. We assume weak disorder, i.e., $`p1`$. The boundary conditions in the transverse directions that are implied by the Hamiltonian (3) are “open”, i.e., there are no bonds between the chains $`j=1`$ and $`j=N`$. In this case, $``$ has a special discrete symmetry, called the particle-hole or chiral symmetry: Under the transformation $`\psi _{m,j}(1)^{m+j}\psi _{m,j}`$, one has $``$. Hence, for each realization of the random magnetic flux, the chiral symmetry ensures that there exists an eigenstate of $``$ with energy $`\epsilon `$ for each eigenstate of $``$ with energy $`+\epsilon `$. Note that the band center $`\epsilon =0`$ is a special point. The chiral symmetry is broken by the addition of a random on-site potential to the Hamiltonian (3). Another way to break the chiral symmetry is to add bonds between the chains $`j=1`$ and $`j=N`$ and to impose periodic boundary conditions in the transverse direction for $`N`$ odd. The presence of the chiral symmetry may have dramatic consequences for charge transport through the disordered wire, as we shall see in more detail in the next sections. In order to find the conductance $`g`$ of the disordered region with the random flux, we first compute the transfer matrix $``$. To the left and to the right of the disordered region, the wavefunction $`\psi _{m,j}`$ that solves the Schrödinger equation $`\psi =\epsilon \psi `$ can be written as a sum of plane waves moving to the right $`(+)`$ and to the left $`()`$, $`\psi _{j,m}`$ $`=`$ $`{\displaystyle \underset{\nu =1}{\overset{N_c}{}}}{\displaystyle \underset{\pm }{}}c_{\nu ,\pm }^L{\displaystyle \frac{e^{\pm ik_\nu m}}{\mathrm{sin}k_\nu }}\mathrm{sin}{\displaystyle \frac{\nu j\pi }{N+1}},m<0,`$ $`\psi _{j,m}`$ $`=`$ $`{\displaystyle \underset{\nu =1}{\overset{N_c}{}}}{\displaystyle \underset{\pm }{}}c_{\nu ,\pm }^R{\displaystyle \frac{e^{\pm ik_\nu m}}{\mathrm{sin}k_\nu }}\mathrm{sin}{\displaystyle \frac{\nu j\pi }{N+1}},m>M.`$ where $`\mathrm{cos}k_\nu =\epsilon /2t\mathrm{cos}[\nu \pi /(N+1)]`$. The prefactor $`1/\mathrm{sin}k_\nu `$ is chosen such that an equal current is carried in each channel. The number $`N_c`$ is the total number of propagating channels at the energy $`\epsilon `$, i.e., the total number of real wavevectors $`k_\nu `$. 
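The lattice model and the flux prescription above can be made concrete with a few lines of code. The following is a minimal sketch (Python/NumPy; it is not the production code used for the simulations of Sec. IV): it draws one realization of the Peierls phases, builds the Hamiltonian (3) as a small dense matrix for an isolated $`N\times M`$ sample with hopping amplitudes taken as $`-t`$ (the overall sign does not affect the check), and verifies that conjugation with the staggered sign $`(-1)^{m+j}`$ reverses the sign of the Hamiltonian, so that the spectrum is symmetric about $`\epsilon =0`$ for every disorder realization. The choice of the first-column phases is an assumption of the sketch (for the isolated sample it amounts to a gauge choice).

```python
import numpy as np

rng = np.random.default_rng(0)

def random_flux_hamiltonian(N, M, p, t=1.0):
    """One realization of the Hamiltonian (3) on an N x M lattice (open
    transverse boundaries).  Peierls phases sit on the transverse bonds only;
    the flux Theta_{m,j} = theta_{m+1,j} - theta_{m,j} is uniform in [-p*pi, p*pi]."""
    theta = np.zeros((M, N - 1))
    theta[0] = rng.uniform(-p * np.pi, p * np.pi, N - 1)   # first-column choice of the sketch
    for m in range(1, M):
        theta[m] = theta[m - 1] + rng.uniform(-p * np.pi, p * np.pi, N - 1)

    idx = lambda m, j: m * N + j                           # site (m, j) -> matrix index
    H = np.zeros((M * N, M * N), dtype=complex)
    for m in range(M):
        for j in range(N):
            if m + 1 < M:                                  # longitudinal bond, no phase
                H[idx(m, j), idx(m + 1, j)] = H[idx(m + 1, j), idx(m, j)] = -t
            if j + 1 < N:                                  # transverse bond with Peierls phase
                H[idx(m, j), idx(m, j + 1)] = -t * np.exp(1j * theta[m, j])
                H[idx(m, j + 1), idx(m, j)] = -t * np.exp(-1j * theta[m, j])
    return H

N, M, p = 5, 8, 0.2
H = random_flux_hamiltonian(N, M, p)

# chiral (particle-hole) symmetry: C H C = -H with C = diag[(-1)^(m+j)]
C = np.diag([(-1.0) ** (m + j) for m in range(M) for j in range(N)])
print(np.allclose(C @ H @ C, -H))                # True

# consequence: eigenvalues come in +/- pairs around the band center
ev = np.linalg.eigvalsh(H)
print(np.allclose(np.sort(ev), np.sort(-ev)))    # True
```

Because the Hamiltonian only contains bonds between sites of opposite sublattice parity, the check succeeds for any choice of the phases; adding a random on-site potential to the matrix breaks it, as stated above.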
We are interested in the transport properties for $`\epsilon `$ close to $`0`$, where $`N_c=N`$, and ignore the distinction between $`N_c`$ and $`N`$ henceforth. The coefficients $`c_{\nu ,\pm }^L`$ and $`c_{\nu ,\pm }^R`$ are related by the transfer matrix $``$ \[see Fig. 1(b)\], $$\left(\genfrac{}{}{0pt}{}{c_{\nu ,+}^R}{c_{\nu ,}^R}\right)=\underset{\nu ^{}=1}{\overset{N}{}}_{\nu ,\nu ^{}}\left(\genfrac{}{}{0pt}{}{c_{\nu ^{},+}^L}{c_{\nu ^{},}^L}\right).$$ (4) Note that $`_{\nu ,\nu ^{}}`$ is a $`2\times 2`$ matrix in Eq. (4). Current conservation requires $$\mathrm{\Sigma }_3^{}=\mathrm{\Sigma }_3,$$ (5) where $`\mathrm{\Sigma }_3=\sigma _3𝟙_{}`$, $`\sigma _3`$ being the Pauli matrix and $`𝟙_{}`$ the $`N\times N`$ unit matrix. In addition, at the special point $`\epsilon =0`$, the chiral symmetry of the Hamiltonian (3) results in the additional symmetry $$\mathrm{\Sigma }_1\mathrm{\Sigma }_1=,$$ (6) where $`\mathrm{\Sigma }_1=\sigma _1𝟙_{}`$. The eigenvalues of $`^{}`$, which occur in inverse pairs $`\mathrm{exp}(\pm 2x_j)`$, determine the transmission eigenvalues $`T_j=1/\mathrm{cosh}^2x_j`$ and hence the dimensionless conductance $`g`$ through the Landauer formula $$g=\underset{j=1}{\overset{N}{}}T_j=\underset{j=1}{\overset{N}{}}\frac{1}{\mathrm{cosh}^2x_j}.$$ (7) In the absence of disorder, all exponents $`x_j`$ are zero, and conduction is perfect, $`g=N`$. On the other hand, transmission is exponentially suppressed if all $`x_j`$’s are larger than unity. The smallest $`x_j`$ determines the localization properties of the quantum wire. For the quasi one-dimensional geometry $`MN1`$ that we consider here and on length scales much larger than the mean free path associated to the random magnetic field, the microscopic details of the microscopic Hamiltonian $``$ should no longer be important. Rather, the crucial ingredients are the symmetries of $``$. For nonzero energy, the only symmetry of $``$ is given by current conservation, Eq. (5). In this case, for quasi one-dimensional systems with sufficiently weak disorder, the probability distribution $`P(x_1,\mathrm{},x_N;L)`$ of the parameters $`x_j`$ is governed by the so-called Dorokhov-Mello-Pereyra-Kumar (DMPK) equation, $`\mathrm{}{\displaystyle \frac{P}{L}}`$ $`=`$ $`{\displaystyle \frac{1}{4N}}{\displaystyle \underset{j=1}{\overset{N}{}}}{\displaystyle \frac{}{x_j}}\left[J{\displaystyle \frac{}{x_j}}(J^1P)\right],`$ (9) $`J`$ $`=`$ $`{\displaystyle \underset{k>j}{}}|\mathrm{sinh}^2x_j\mathrm{sinh}^2x_k|^2{\displaystyle \underset{k}{}}|\mathrm{sinh}(2x_j)|.`$ (10) Here $`L=Ma`$ is the length of the disordered region, $`a`$ being the lattice constant. The mean free path $`\mathrm{}`$ depends on the disorder strength and on the details of the microscopic model. The derivation of Eq. (1) assumes $`\mathrm{}\lambda `$, $`\lambda `$ being the wave length at the Fermi energy. The initial condition corresponding to perfect transmission at $`L=0`$ is $`P(x_1,\mathrm{},x_N;0)=_j\delta (x_j)`$. The Fokker-Planck equation (1) describes the unitary symmetry class. For $`\epsilon =0`$, in addition to current conservation, the chiral symmetry (6) has to be taken into account. In Ref. 
it was shown that for weak disorder ($`p1`$) the distribution $`P(x_1,\mathrm{},x_N;L)`$ satisfies again a Fokker-Planck equation of the form (1), but with a different Jacobian $`J`$, $`\mathrm{}{\displaystyle \frac{P}{L}}`$ $`=`$ $`{\displaystyle \frac{1}{2N}}{\displaystyle \underset{j=1}{\overset{N}{}}}{\displaystyle \frac{}{x_j}}\left[J{\displaystyle \frac{}{x_j}}(J^1P)\right],`$ (12) $`J`$ $`=`$ $`{\displaystyle \underset{k>j}{}}|\mathrm{sinh}(x_jx_k)|^2.`$ (13) This Jacobian describes the chiral unitary symmetry class. As was shown in Ref. , and as we shall see in more detail in the next section, as a result of the replacement of the Jacobian (10) by the Jacobian (13), the statistical distribution and the $`L`$-dependence of the conductance $`g`$ at energy $`\epsilon =0`$ is quantitatively and qualitatively different from that away from $`\epsilon =0`$. In Ref. it was shown that there exists a quantum critical point induced by the randomness when $`N`$ is odd within the chiral unitary symmetry class. Away from zero energy, the transport properties of the disordered wire are those expected from the standard unitary symmetry class. A derivation of Eq. (II) is given in Appendix A. The physical picture underlying Eqs. (1) and (II) is that the parameters $`x_j`$ undergo a “Brownian motion” as the length $`L`$ of the disordered region is increased. The Jacobian $`J`$ describes the “interaction” between the parameters $`x_j`$ in this Brownian motion process. The key difference between the unitary case and the chiral unitary case is the presence of an interaction with “mirror imaged” eigenvalues $`x_j`$ in Eq. (10), which is absent in Eq. (13). To see this, we note that both for the unitary and for the chiral unitary cases, the Jacobian $`J`$ vanishes if a parameter $`x_j`$ coincides with $`x_k`$, $`kj`$. However, in the unitary case (10), $`J`$ also vanishes if $`x_j`$ coincides with a mirror image $`x_k`$, $`kj`$, or if $`x_j=0`$ (i.e., $`x_j`$ coincides with its own mirror image). The vanishing of the Jacobian $`J`$ implies a repulsion of the parameters $`x_j`$ in the underlying Brownian motion process. Hence, whereas $`x_j`$ feels a repulsion from the other $`N1`$ parameters $`x_k`$, $`kj`$, in the chiral unitary case (II), $`x_j`$ feels an additional repulsion from the $`N1`$ mirror images $`x_k`$, $`kj`$, and from its own mirror image $`x_j`$ in the standard unitary case (1). It can be shown that the parameters $`x_j`$ repel each other by a constant force in the large-$`L`$ limit, irrespective of their separation. This long-range repulsion results in the so-called “crystalization of transmission eigenvalues”: The fluctuations of the parameters $`x_j`$ are much smaller than the spacings between their average positions. Away from zero energy, i.e., in the unitary symmetry class, all $`x_j`$ can be chosen positive because of repulsion from their mirror images, and their average positions are $$x_j=(2j1)L/2N\mathrm{},j=1,\mathrm{},N.$$ (14) In the chiral unitary symmetry class, the $`x_j`$ can be both positive and negative since there is no repulsion from the mirror images, and one has $$x_j=(N+12j)L/N\mathrm{},j=1,\mathrm{},N.$$ (15) In the unitary symmetry class and in the chiral unitary class with even $`N`$ the net force on each parameter $`x_j`$ is finite, and they grow linearly with the length $`L`$. Hence, by Eq. (7), the conductance $`g`$ is exponentially suppressed for $`LN\mathrm{}`$. 
However, for the chiral disordered wire with an odd number of channels $`N`$, the net force on the middle eigenvalue $`x_{(N+1)/2}`$ vanishes: it remains in the vicinity of the origin and the conductance is not exponentially suppressed. Thus, the quantum wire with random flux with an odd number $`N`$ of channels goes through a quantum critical point at zero energy whereas it remains non-critical for an even number $`N`$ of channels. A more quantitative description of this even-odd effect is developed in the next section. ## III Moments of the conductance ### A Method of bi-orthonormal functions To calculate the moments of the conductance $`g`$, we make use of the exact solution of the Fokker-Planck equation (II), $`P(x_1,\mathrm{},x_N;L)`$ $``$ $`{\displaystyle \underset{j=1}{\overset{N}{}}}e^{\frac{N\mathrm{}}{2L}x_j^2}`$ (17) $`\times {\displaystyle \underset{j<k}{}}(x_jx_k)\mathrm{sinh}(x_jx_k).`$ The proportionality constant is fixed by normalization of the probability distribution. A derivation of Eq. (17) is presented in Appendix B. The moments of $`g`$ can be computed from the $`n`$-point correlation functions $`R_n(x_1,\mathrm{},x_n;L)=`$ (18) $`{\displaystyle \frac{N!}{(Nn)!}}{\displaystyle _{\mathrm{}}^+\mathrm{}}𝑑x_{n+1}\mathrm{}{\displaystyle _{\mathrm{}}^+\mathrm{}}𝑑x_NP(x_1,\mathrm{},x_N;L),`$ (19) and the Landauer formula (7). For example, the first and second moments of $`g`$ are $`g`$ $`=`$ $`{\displaystyle _{\mathrm{}}^+\mathrm{}}𝑑x{\displaystyle \frac{R_1(x;L)}{\mathrm{cosh}^2x}},`$ (21) $`g^2`$ $`=`$ $`{\displaystyle _{\mathrm{}}^+\mathrm{}}𝑑x_1{\displaystyle _{\mathrm{}}^+\mathrm{}}𝑑x_2{\displaystyle \frac{R_2(x_1,x_2;L)}{\mathrm{cosh}^2x_1\mathrm{cosh}^2x_2}}`$ (23) $`+{\displaystyle _{\mathrm{}}^+\mathrm{}}𝑑x{\displaystyle \frac{R_1(x;L)}{\mathrm{cosh}^4x}}.`$ Here we compute $`R_n(x_1,\mathrm{},x_n;L)`$ using the method of bi-orthonormal functions developed by Muttalib and Frahm for a disordered wire in the unitary symmetry class. The idea is to construct, for any given $`N`$ and $`L`$, a function $`K_L(x,y)`$ with the following properties, $`{\displaystyle _{\mathrm{}}^+\mathrm{}}𝑑xK_L(x,x)=N,`$ (25) $`{\displaystyle _{\mathrm{}}^+\mathrm{}}𝑑yK_L(x,y)K_L(y,z)=K_L(x,z),`$ (26) $`P(\{x_i\};L)=c_Ndet\left[K_L(x_i,x_j)\right]_{i,j=1,\mathrm{},N}.`$ (27) If such a function exists, it is known from random matrix theory that $`c_N=1/N!`$ and $`R_n(\{x_i\};L)`$ $`=`$ $`\mathrm{det}\left[K_L(x_i,x_j)\right]_{i,j=1,\mathrm{},n}.`$ (28) Our construction of the function $`K_L(x,y)`$ starts with a representation of $`P(x_1,\mathrm{},x_N;L)`$ in Eq. (17) as a product of two determinants. Making use of the identities $`{\displaystyle \underset{j<k}{}}(x_kx_j)`$ $`=`$ $`det\left[x_k^{j1}\right]_{j,k=1,\mathrm{},N},`$ $`{\displaystyle \underset{j<k}{}}\mathrm{sinh}(x_kx_j)`$ $`=`$ $`det\left[{\displaystyle \frac{1}{2}}e^{(N+12j)x_k}\right]_{j,k=1,\mathrm{},N},`$ we find $`P(\{x_i\};L)`$ $``$ $`\mathrm{det}\left[\varphi _j(x_k)\right]_{j,k=1,\mathrm{},N}`$ (31) $`\times \mathrm{det}\left[\eta _j(x_k)\right]_{j,k=1,\mathrm{},N},`$ where $`\varphi _j(x)=x^{j1},`$ (32) $`\eta _j(x)=e^{\frac{N\mathrm{}}{2L}x^2+(N+12j)x}.`$ (33) Note that the way we write $`P`$ as a product of two determinants in Eq. (III A) is not unique. In particular, we are free to replace the sets of functions $`\{\varphi _j\}`$ and $`\{\eta _j\}`$ by an arbitrary set of linear combinations $`\{\stackrel{~}{\varphi }_j\}`$ and $`\{\stackrel{~}{\eta }_j\}`$. 
This freedom is crucial for the construction of the function $`K_L(x,y)`$, as we shall see below. Since the product of two determinants equals the determinant of the product of the corresponding matrices and since transposition of a matrix leaves the determinant unchanged, it is tempting to identify $`K_L(x,y)`$ with $`_{j=1}^N\varphi _j(x)\eta _j(y)`$. In this way, Eq. (27) is satisfied. However, with this choice, the remaining two conditions (25) and (26) are not obeyed. This problem can be solved by making use of the above-mentioned freedom to replace the sets of functions $`\{\varphi _j\}`$ and $`\{\eta _j\}`$ by linear combinations $`\{\stackrel{~}{\varphi }_j\}`$ and $`\{\stackrel{~}{\eta }_j\}`$. One easily verifies that if we choose these linear combinations such that they are bi-orthonormal, $`{\displaystyle _{\mathrm{}}^+\mathrm{}}𝑑x\stackrel{~}{\varphi }_j(x)\stackrel{~}{\eta }_k(x)=\delta _{jk},j,k=1,\mathrm{},N,`$ (34) all three conditions (III A) are met if we set $$K_L(x,y)=\underset{j=1}{\overset{N}{}}\stackrel{~}{\varphi }_j(x)\stackrel{~}{\eta }_j(y).$$ (35) The construction of the bi-orthonormal functions $`\stackrel{~}{\varphi }_j`$ and $`\stackrel{~}{\eta }_j`$ is done below. First, we define the set $`\{\stackrel{~}{\eta }_j(x)\}`$, $`j=1,\mathrm{},N`$, by completing the square in the exponent of $`\eta _j(x)`$ and then normalizing $`\eta _j(x)`$, $$\stackrel{~}{\eta }_j(x)=\sqrt{\frac{1}{2\pi \sigma }}e^{\left(x\epsilon _j\sigma \right)^2/2\sigma },$$ (36) where we abbreviated $`\sigma =L/N\mathrm{},\epsilon _j`$ $`=`$ $`N+12j.`$ (37) The functions $`\stackrel{~}{\varphi }_j`$, being linear combinations of $`\varphi _j(x)=x^{j1}`$, are polynomials themselves, too. Their (maximal) degree is $`N1`$. As a first step towards their construction, we define the polynomials $$p_j(x)=\sqrt{\frac{1}{2\pi \sigma }}_{\mathrm{}}^{\mathrm{}}𝑑y(iy/\sigma )^{j1}e^{(y+ix)^2/2\sigma },$$ (38) which satisfy the special property $$_{\mathrm{}}^{\mathrm{}}𝑑xp_j(x)\stackrel{~}{\eta }_k(x)=(\epsilon _k)^{j1}.$$ (39) Notice that $`p_j(x)`$ is of degree $`j1`$. According to Eq. (39), the overlap matrix between the polynomials $`p_j`$ and the Gaussians $`\stackrel{~}{\eta }_j`$ is independent of $`L`$. Construction of bi-orthonormal functions $`\stackrel{~}{\varphi }_j`$ and $`\stackrel{~}{\eta }_j`$ is thus achieved by choosing $`L`$-independent linear combinations of the polynomials $`p_j`$ that diagonalize the overlap matrix (39). This is done using the Lagrange interpolation polynomials $$L_m(x)=\underset{nm}{}\frac{x\epsilon _n}{\epsilon _m\epsilon _n},$$ (40) which are of degree $`N1`$ and obey $`L_m(\epsilon _n)=\delta _{m,n}`$. We infer that the desired polynomials $`\stackrel{~}{\varphi }_j(x)`$ are given by $$\stackrel{~}{\varphi }_j(x)=\sqrt{\frac{1}{2\pi \sigma }}_{\mathrm{}}^+\mathrm{}𝑑yL_j(iy/\sigma )e^{\left(y+ix\right)^2/2\sigma }.$$ (41) Putting everything together, we find that $`K_L(x,z)`$ $`=`$ $`{\displaystyle \frac{1}{2\pi \sigma }}{\displaystyle \underset{j=1}{\overset{N}{}}}{\displaystyle _{\mathrm{}}^+\mathrm{}}𝑑yL_j(iy/\sigma )`$ (43) $`\times \mathrm{exp}\left[{\displaystyle \frac{\left(y+ix\right)^2+\left(z\epsilon _j\sigma \right)^2}{2\sigma }}\right].`$ Now, moments of the conductance $`g`$ can be calculated with the help of Eq. (28). 
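As a numerical sanity check on this construction, the functions (36) and (41) can be evaluated by quadrature and the defining properties (25) and (34) of the kernel verified directly. The sketch below is only illustrative and rests on two assumptions: the Gaussian exponents in (36) and (41) are read as the normalized ones, $`-(\mathrm{})^2/2\sigma `$, and the $`y`$ integral in (41) is rewritten, by the contour shift $`y\to u-ix`$, as a Gaussian average of the Lagrange polynomial at a complex argument, which is numerically benign.

```python
import numpy as np

def phi_eta(N, sigma, x, nu=2001):
    """phi_tilde_j(x) [Eq. (41)] and eta_tilde_j(x) [Eq. (36)] on a grid x.
    Sketch only: the y-integral of (41) is evaluated after the contour shift
    y -> u - i*x, i.e. as a Gaussian average of L_j((x + i*u)/sigma)."""
    eps = N + 1 - 2.0 * np.arange(1, N + 1)        # epsilon_j = N + 1 - 2j
    u = np.linspace(-10 * np.sqrt(sigma), 10 * np.sqrt(sigma), nu)
    gauss = np.exp(-u ** 2 / (2 * sigma)) / np.sqrt(2 * np.pi * sigma)
    w = (x[:, None] + 1j * u[None, :]) / sigma     # complex argument of the Lagrange polynomial
    phi = np.empty((N, x.size))
    eta = np.empty((N, x.size))
    for j in range(N):
        Lj = np.ones_like(w)
        for n in range(N):
            if n != j:
                Lj = Lj * (w - eps[n]) / (eps[j] - eps[n])
        phi[j] = np.trapz(Lj.real * gauss[None, :], u, axis=1)
        eta[j] = np.exp(-(x - eps[j] * sigma) ** 2 / (2 * sigma)) / np.sqrt(2 * np.pi * sigma)
    return phi, eta

N, sigma = 4, 1.3                                   # sigma = L / (N l)
x = np.linspace(-30.0, 30.0, 6001)
phi, eta = phi_eta(N, sigma, x)

# bi-orthonormality, Eq. (34):  integral dx phi_j(x) eta_k(x) = delta_jk
overlap = np.trapz(phi[:, None, :] * eta[None, :, :], x, axis=2)
print(np.allclose(overlap, np.eye(N), atol=1e-3))   # True

# normalization, Eq. (25):  integral dx K_L(x, x) = N, with K_L(x, x) = sum_j phi_j(x) eta_j(x)
print(np.trapz((phi * eta).sum(axis=0), x))         # approximately 4.0
```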
In particular, we find that the average and variance of $`g`$ are given by $`g`$ $`=`$ $`{\displaystyle _{\mathrm{}}^+\mathrm{}}𝑑x{\displaystyle \frac{K_L(x,x)}{\mathrm{cosh}^2x}},`$ (44) $`\text{var}g`$ $`=`$ $`{\displaystyle _{\mathrm{}}^+\mathrm{}}𝑑x_1{\displaystyle _{\mathrm{}}^+\mathrm{}}𝑑x_2{\displaystyle \frac{K_L(x_2,x_1)K_L(x_1,x_2)}{\mathrm{cosh}^2x_1\mathrm{cosh}^2x_2}}`$ (46) $`+{\displaystyle _{\mathrm{}}^+\mathrm{}}𝑑x{\displaystyle \frac{K_L(x,x)}{\mathrm{cosh}^4x}}.`$ ### B Average conductance After some shifts of integration variables and with the help of the Fourier transform of $`\mathrm{cosh}^2x`$, $`{\displaystyle _{\mathrm{}}^+\mathrm{}}𝑑x{\displaystyle \frac{e^{iyx}}{\mathrm{cosh}^2x}}`$ $`={\displaystyle \frac{\pi y}{\mathrm{sinh}(\pi y/2)}}`$ (48) $`=2{\displaystyle \underset{k=1}{\overset{\mathrm{}}{}}}\left(1+{\displaystyle \frac{y^2}{4k^2}}\right)^1,`$ we obtain from Eqs. (43) and (44) an expression for the average conductance $`g`$ at the energy $`\epsilon =0`$ that involves one integration and one (finite) summation only, $`g`$ $`=`$ $`{\displaystyle \underset{m=1}{\overset{N}{}}}c_me^{\epsilon _m^2\sigma /2},`$ (49) $`c_m`$ $`=`$ $`{\displaystyle _{\mathrm{}}^+\mathrm{}}𝑑y{\displaystyle \frac{L_m(\epsilon _miy)ye^{(y+i\epsilon _m)^2\sigma /2}}{2\mathrm{sinh}(\pi y/2)}},`$ (50) where, as before, $`\sigma =L/N\mathrm{}`$. In the limit $`N1`$ at fixed $`\sigma `$ (the so-called thick-wire limit), Eq. (49) can be further simplified. Hereto we use the second identity of Eq. (48) to cancel the Lagrange interpolation polynomial in the coefficient $`c_m`$, $`c_m`$ $`=`$ $`{\displaystyle _{\mathrm{}}^+\mathrm{}}{\displaystyle \frac{dy}{\pi }}e^{y^2\sigma /2}{\displaystyle \underset{k\mathrm{\Lambda }_m}{}}\left(1{\displaystyle \frac{iy+\epsilon _m}{2k}}\right)^1,`$ (51) $`\mathrm{\Lambda }_m`$ $`=`$ $`𝐙\{m+1,\mathrm{},m+N\}.`$ (52) In the limit $`N\mathrm{}`$, only $`m`$’s close to $`(N+1)/2`$ contribute to $`g`$. For those $`m`$, we may replace the infinite product on the r.h.s. of Eq. (52) by unity, and find $`c_m=(2/\pi \sigma )^{1/2}`$. Hence, for $`N1`$ even, $$g=\sqrt{\frac{2}{\pi \sigma }}\vartheta _2(0|2i\sigma /\pi )\sqrt{\frac{2}{\pi \sigma }}\underset{\genfrac{}{}{0pt}{}{m𝐙}{m\mathrm{odd}}}{}e^{m^2\sigma /2},$$ (54) whereas for $`N1`$ odd, $$g=\sqrt{\frac{2}{\pi \sigma }}\vartheta _3(0|2i\sigma /\pi )\sqrt{\frac{2}{\pi \sigma }}\underset{\genfrac{}{}{0pt}{}{m𝐙}{m\mathrm{even}}}{}e^{m^2\sigma /2}.$$ (55) Here $`\vartheta _2`$ and $`\vartheta _3`$ are the Jacobi’s theta functions. The dramatic difference between even and odd channel numbers discovered in Refs. follows immediately from Eqs. (49) or (III B) in the regime $`LN\mathrm{}`$. For even $`N`$, each term in the summation decays exponentially with $`L`$. The exponential decay of $`g`$ is governed by the slowest decaying terms in the summation in Eq. (49), i.e., the contributions from $`\epsilon _m=\pm 1`$, i.e., from $`m=N/2`$ or $`m=N/2+1`$. Hence for $`LN\mathrm{}`$ we find $$g\sqrt{\frac{8\xi }{\pi L}}e^{L/2\xi },\xi =N\mathrm{},N\text{ even}.$$ (56) Eq. (56) allows us to identify $`\xi `$ as the localization length. For odd $`N`$, there is one term in the summation (49) that does not decay exponentially with $`L`$. It is the contribution from the channel with $`\epsilon _m=0`$, $`m=(N+1)/2`$. 
In this case, we again define $`\xi =N\mathrm{}`$, although it is now merely a crossover length scale, to be the characteristic length scale above which the slow algebraic decay of $`g`$ sets in, $$g\sqrt{\frac{2\xi }{\pi L}},N\text{ odd},L\xi .$$ (57) In Fig. 2 we have shown the average conductance for $`N=1,2,3,4`$ as a function of $`L/\xi `$ and the asymptotic result for large $`N`$. To study the average conductance in the diffusive regime $`\mathrm{}L\xi `$, we use the Poisson summation formula $`{\displaystyle \underset{m𝐙}{}}\delta (x2m1)={\displaystyle \frac{1}{2}}{\displaystyle \underset{n𝐙}{}}e^{i\pi n(x1)},`$ (59) $`{\displaystyle \underset{m𝐙}{}}\delta (xm)={\displaystyle \underset{n𝐙}{}}e^{2\pi inx},`$ (60) to convert Eq. (III B) into $`g`$ $`=`$ $`{\displaystyle \frac{\xi }{L}}`$ (62) $`+\{\begin{array}{cc}\frac{2\xi }{L}\underset{n=1}{\overset{\mathrm{}}{}}(1)^ne^{\pi ^2n^2\xi /2L},\hfill & N1\text{ even,}\hfill \\ & \\ \frac{2\xi }{L}\underset{n=1}{\overset{\mathrm{}}{}}e^{\pi ^2n^2\xi /2L},\hfill & N1\text{ odd.}\hfill \end{array}`$ Hence the even-odd effect is non-perturbative in $`L/\xi `$ and we see that $`\xi =N\mathrm{}`$ is the characteristic length scale at which the even-odd effect shows up. Whereas the leading terms are identical, the first non-perturbative correction to $`g`$ differs by a sign for even and odd $`N`$. In Fig. 3 we plot the average conductance for $`N1`$ as a function of $`L/\xi `$ and compare with the unitary symmetry class, which is appropriate for energies $`\epsilon 0`$. In the unitary symmetry class, $`g`$ decays exponentially, irrespective of the parity of $`N`$, but with a different localization length $`\xi _\mathrm{u}`$, $$ge^{L/2\xi _\mathrm{u}},\xi _\mathrm{u}=2N\mathrm{}.$$ (63) The unitary symmetry class is appropriate for the random flux model if the energy $`\epsilon `$ is nonzero. Hence, moving the energy $`\epsilon `$ away from zero causes a factor $`2`$ increase in the localization length if the number of channels is even, and a dramatic decrease in the average conductance if $`N`$ is odd. ### C Variance of the conductance Proceeding as in the previous subsection, we find from Eqs. (43), (46), and (48), $`\mathrm{var}g`$ $`=`$ $`{\displaystyle \underset{m,n=1}{\overset{N}{}}}c_{m,n}e^{(\epsilon _m^2+\epsilon _n^2)\sigma /2}`$ (65) $`+{\displaystyle \underset{m=1}{\overset{N}{}}}c_m^{}e^{\epsilon _m^2\sigma /2},`$ with the coefficients $`c_{m,n}`$ $`=`$ $`{\displaystyle _{\mathrm{}}^{\mathrm{}}}𝑑y_1{\displaystyle \frac{y_1L_m(\epsilon _niy_1)e^{(y_1+i\epsilon _n)^2\sigma /2}}{2\mathrm{sinh}\left(\pi y_1/2\right)}}`$ $`\times {\displaystyle _{\mathrm{}}^{\mathrm{}}}𝑑y_2{\displaystyle \frac{y_2L_n(\epsilon _miy_2)e^{(y_2+i\epsilon _m)^2\sigma /2}}{2\mathrm{sinh}\left(\pi y_2/2\right)}},`$ $`c_m^{}`$ $`=`$ $`{\displaystyle _{\mathrm{}}^{\mathrm{}}}𝑑y{\displaystyle \frac{(y^3+4y)L_m(\epsilon _miy)e^{(y+i\epsilon _m)^2\sigma /2}}{12\mathrm{sinh}(\pi y/2)}}.`$ We plot $`\mathrm{var}g`$, which is computed from Eq. (65) for $`N=1,2,3,4`$, in Fig. 4, together with the thick wire limit $`N1`$. The even-odd effect is clearly seen when $`L/\xi 1`$. In the limit $`N\mathrm{}`$ at fixed $`N\mathrm{}/L`$ further simplifications are possible. 
We find $`\text{var}g`$ $`=`$ $`{\displaystyle \underset{m=\mathrm{}}{\overset{\mathrm{}}{}}^{}}{\displaystyle \underset{n=\mathrm{}}{\overset{\mathrm{}}{}}^{}}f_{m,n}f_{n,m}+{\displaystyle \underset{m=\mathrm{}}{\overset{\mathrm{}}{}}^{}}f_m^{},`$ (66) $`f_{m,n}`$ $`=`$ $`\sqrt{{\displaystyle \frac{2}{\pi \sigma }}}e^{m^2\sigma /2}`$ (68) $`+{\displaystyle \frac{1}{2}}\left[(mn)\text{erf}\left(m\sqrt{\sigma /2}\right)|mn|\right],`$ $`f_m^{}`$ $`=`$ $`\sqrt{{\displaystyle \frac{1}{18\pi \sigma }}}\left(4m^2+\sigma ^1\right)e^{m^2\sigma /2},`$ (69) where the primed summations are restricted to even (odd) $`m`$ and $`n`$ for $`N`$ odd (even). The error function $`\mathrm{erf}(x)`$ is defined as $`\mathrm{erf}(x)={\displaystyle \frac{2}{\sqrt{\pi }}}{\displaystyle _0^x}𝑑te^{t^2}.`$ For $`L\xi `$ Eq. (69) simplifies to $$\mathrm{var}g\{\begin{array}{cc}\sqrt{\frac{2\xi }{\pi L}}e^{L/2\xi },\hfill & N1\text{ even,}\hfill \\ & \\ \sqrt{\frac{8\xi }{9\pi L}},\hfill & N1\text{ odd.}\hfill \end{array}$$ (70) The variance of the conductance decays exponentially for large even $`N`$ with the same decay length as the average $`g`$, while $`\text{var}g`$ decays algebraically for large odd $`N`$. Note that $`g`$ and $`\text{var}g`$ decay with the same power of $`L`$. After some tedious algebra starting from Eq. (69) to extract an expression well suited for an asymptotic expansion in small $`L`$, we find for the diffusive regime $`\mathrm{}L\xi `$, $`\mathrm{var}g`$ $`=`$ $`\{\begin{array}{cc}\frac{2}{15}+\frac{\pi ^2}{3}\left(\frac{\xi }{L}\right)^3e^{\frac{\pi ^2\xi }{2L}}+\mathrm{},\hfill & N1\text{ even ,}\hfill \\ & \\ \frac{2}{15}\frac{\pi ^2}{3}\left(\frac{\xi }{L}\right)^3e^{\frac{\pi ^2\xi }{2L}}+\mathrm{},\hfill & N1\text{ odd.}\hfill \end{array}`$ (71) Again, we see that the difference between even and odd channel numbers shows up in terms that are non-perturbative in $`L/\xi `$. The leading term $`2/15`$ in $`\mathrm{var}g`$ is universal and twice the value of its counterpart for a disordered quantum wire in the unitary ensemble. Hence, moving the energy $`\epsilon `$ away from zero decreases the conductance fluctuations by a factor two in the diffusive regime. The factor two decrease of $`\mathrm{var}g`$ upon breaking the chiral symmetry is reminiscent of the factor two difference for the conductance fluctuations between the standard orthogonal and unitary symmetry classes. The enhancement of the conductance fluctuations at $`\epsilon =0`$ had been observed previously in numerical simulations of the two-dimensional random flux problem by Ohtsuki et al. Figure 5 contains a plot of $`\mathrm{var}g`$ versus $`L/\xi `$, and offers a comparison with the unitary symmetry class. In the unitary symmetry class, $`\text{var}g`$ takes the universal value $`1/15`$ in the diffusive regime $`LN\mathrm{}`$, while $`\text{var}g\mathrm{exp}(L/2\xi _\mathrm{u})`$, $`\xi _\mathrm{u}=2N\mathrm{}`$, in the localized regime $`LN\mathrm{}`$. ## IV Numerical simulations In this section we present numerical simulations for the conductance $`g`$ in the random flux model (3). The average and variance of $`g`$ were studied previously by Avishai et al. and by Ohtsuki et al. for the random flux model in a square geometry. However, for a comparison with the theory of Sec. III and to identify the symmetry class it is necessary to study a wire geometry and large system sizes. This is done below. 
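For reference in the comparisons that follow, the thick-wire formulas of Sec. III are elementary to tabulate. The sketch below (Python/NumPy, written for this text rather than taken from the paper) evaluates the average conductance of the chiral unitary class from the large-$`N`$ sums (54) and (55), cross-checks them against the resummed form (62), and recovers the asymptotics (56) and (57); here $`s`$ stands for $`L/\xi `$ with $`\xi `$ the length scale defined in Eqs. (56) and (57).

```python
import numpy as np

def g_chiral(s, parity, mmax=400):
    """Average conductance in the chiral unitary class for N >> 1, Eqs. (54)/(55); s = L/xi."""
    m = np.arange(-mmax, mmax + 1)
    m = m[m % 2 == 1] if parity == "even" else m[m % 2 == 0]
    return np.sqrt(2.0 / (np.pi * s)) * np.exp(-m ** 2 * s / 2.0).sum()

def g_poisson(s, parity, nmax=400):
    """The same quantity from the Poisson-resummed series (62)."""
    n = np.arange(1, nmax + 1)
    sign = (-1.0) ** n if parity == "even" else np.ones(nmax)
    return 1.0 / s + (2.0 / s) * (sign * np.exp(-np.pi ** 2 * n ** 2 / (2.0 * s))).sum()

for s in (0.1, 1.0, 3.0, 6.0):
    ge, go = g_chiral(s, "even"), g_chiral(s, "odd")
    # the two series representations of the same function agree
    assert np.isclose(ge, g_poisson(s, "even")) and np.isclose(go, g_poisson(s, "odd"))
    print(f"L/xi = {s:4.1f}   <g>_even = {ge:8.5f}   <g>_odd = {go:8.5f}")

# localized-regime asymptotics, Eqs. (56) and (57)
s = 8.0
print(g_chiral(s, "even"), np.sqrt(8 / (np.pi * s)) * np.exp(-s / 2))   # exponential decay, even N
print(g_chiral(s, "odd"), np.sqrt(2 / (np.pi * s)))                     # algebraic decay, odd N
```

In the diffusive regime the two parities coincide at the Ohmic value $`\xi /L`$, while for lengths of a few $`\xi `$ the printed numbers display the even–odd splitting plotted in Fig. 3.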
For each disorder configuration, we calculate the conductance using the Landauer formula (7), which we use in the more conventional form $$g=\underset{\mu ,\nu =1}{\overset{N_c}{}}|𝐭_{\mu ,\nu }|^2,$$ (73) Here $`N_c`$ is the number of propagating channels in the leads and $`𝐭`$ is the $`N_c\times N_c`$ transmission matrix, which relates the amplitudes of the incoming and outgoing waves on the left and the right of the disordered sample. The eigenvalues $`T_j`$ of the matrix $`\mathrm{𝐭𝐭}^{}`$ are the same as in Eq. (7). (The simulations are aimed at energies $`\epsilon `$ close to zero, where $`N_c`$ equals $`N`$. Hence, as before, we drop the notational distinction between $`N_c`$ and $`N`$). The transmission matrix $`𝐭`$ is computed through the recursive Green function method. In this method, $`N\times N`$ matrix Green functions $`F_{jk}`$ for reflection and $`G_{jk}`$ for transmission through the disordered region are computed using the recursive rule, $`F(m+1)`$ $`=`$ $`[\epsilon H^mt^2F(m)]^1,`$ (74) $`G(m+1)`$ $`=`$ $`tG(m)F(m+1),`$ (75) where the matrix elements of $`H^m`$ are $`H_{j,k}^m`$ $`=`$ $`t(1\delta _{j,N})e^{i\theta _{m,j}}\delta _{j+1,k}`$ $`t(1\delta _{j,1})e^{i\theta _{m,j1}}\delta _{j1,k}.`$ The initial conditions at $`m=0`$ are those of a Green function at the edge of an isolated perfect lead: $`F_{jk}(0)`$ $`=`$ $`G_{jk}(0)`$ (76) $`=`$ $`{\displaystyle \frac{2}{N+1}}{\displaystyle \underset{\nu =1}{\overset{N}{}}}e^{ik_\nu }\mathrm{sin}{\displaystyle \frac{\nu j\pi }{N+1}}\mathrm{sin}{\displaystyle \frac{\nu k\pi }{N+1}},`$ (77) where $`\mathrm{cos}k_\nu =\epsilon /2t\mathrm{cos}[\nu \pi /(N+1)]`$, see Sec. II. The scattering channels are those modes with real wavevectors $`k_\nu `$. The Green function that we need is obtained by taking into account the perfect lead boundary condition on the right of the disordered region, $`F(M)`$ $`=`$ $`\left[F(0)^1t^2F(M1)\right]^1,`$ $`G(M)`$ $`=`$ $`tG(M1)F(M).`$ The matrix Green function $`G(M)`$ describes the propagation from $`m=0`$ to $`m=M`$. The absolute value of the transmission matrix element at energy $`\epsilon `$ is then given by $`|𝐭_{\mu ,\nu }|^2`$ $`=`$ $`4\mathrm{sin}k_\mu \mathrm{sin}k_\nu `$ $`\times \left|{\displaystyle \frac{2}{N+1}}{\displaystyle \underset{j,k=1}{\overset{N}{}}}G_{jk}(M)\mathrm{sin}{\displaystyle \frac{\mu j\pi }{N+1}}\mathrm{sin}{\displaystyle \frac{\nu k\pi }{N+1}}\right|^2.`$ This procedure is repeated for each disorder configuration, and the average and variance of the conductance are obtained by taking an average over $`2\times 10^4`$ samples. The transverse boundary conditions are those of Eq. (3), i.e., open boundaries, unless explicitly indicated otherwise. We present the numerical results as a function of $`L/\xi `$, where $`\xi `$ is the characteristic length entering Eq. (56) and Eq. (57). We determine $`\xi `$ by comparing the numerical data for $`L\xi `$ to the asymptotic results (56) and (57). Figure 6 shows the average and variance of the conductance at $`\epsilon =0`$ of the random flux model (3) with $`N=15`$ and $`N=16`$ and disorder strength $`p=0.2`$. When $`L\xi `$, $`g`$ decreases algebraically for $`N=15`$ whereas it decays exponentially for $`N=16`$. This is precisely the even-odd effect that we discussed at length in the last section. We find excellent agreement between the numerical data and the theory of Sec. III, which is indicated by the solid (odd $`N`$) and dashed (even $`N`$) lines in the figure. 
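The recursion just described is compact enough to sketch in full. The code below is an illustrative reimplementation written for this text, not the program used to produce Figs. 6–8: it follows the same left-to-right Green function sweep and the transmission formula (73), but adopts the standard conventions for a hopping amplitude $`-t`$ (dispersion $`\mathrm{cos}k_\nu =-\epsilon /2t-\mathrm{cos}[\nu \pi /(N+1)]`$ and lead surface Green function $`-e^{ik_\nu }/t`$); these conventions may differ from the equations above by phases that drop out of $`|t_{\mu \nu }|^2`$. It assumes $`\epsilon `$ close to zero, so that all $`N`$ channels propagate.

```python
import numpy as np

rng = np.random.default_rng(1)

def conductance(N, M, p, eps=0.0, t=1.0):
    """Landauer conductance of one random-flux realization via a left-to-right
    recursive Green function sweep (illustrative sketch, not the paper's code)."""
    nu = np.arange(1, N + 1)
    U = np.sqrt(2.0 / (N + 1)) * np.sin(np.outer(nu, nu) * np.pi / (N + 1))  # transverse modes
    cosk = -eps / (2 * t) - np.cos(nu * np.pi / (N + 1))
    assert np.all(np.abs(cosk) < 1.0), "sketch assumes all N channels propagate"
    k = np.arccos(cosk)
    sigma_lead = U @ np.diag(-t * np.exp(1j * k)) @ U.T   # lead self-energy t^2 * g_surface
    gamma = U @ np.diag(2 * t * np.sin(k)) @ U.T          # Gamma = i (Sigma - Sigma^dagger)

    # transverse Peierls phases: flux per plaquette uniform in [-p*pi, p*pi]
    theta = np.zeros((M, N - 1))
    theta[0] = rng.uniform(-p * np.pi, p * np.pi, N - 1)
    for m in range(1, M):
        theta[m] = theta[m - 1] + rng.uniform(-p * np.pi, p * np.pi, N - 1)

    def H_slice(m):                                       # intra-column Hamiltonian H^m
        Hm = np.zeros((N, N), dtype=complex)
        Hm[np.arange(N - 1), np.arange(1, N)] = -t * np.exp(1j * theta[m])
        return Hm + Hm.conj().T

    I = np.eye(N)
    A = G = None
    for m in range(M):
        left = sigma_lead if m == 0 else t * t * A        # everything to the left as a self-energy
        right = sigma_lead if m == M - 1 else 0.0         # right lead attached at the last column
        A = np.linalg.inv(eps * I - H_slice(m) - left - right)
        G = A if m == 0 else (-t) * G @ A                 # Green function from column 1 to column m
    # Eq. (73) in Fisher-Lee form: g = Tr[Gamma_L G Gamma_R G^dagger]
    return float(np.trace(gamma @ G @ gamma @ G.conj().T).real)

gs = np.array([conductance(N=5, M=100, p=1.0) for _ in range(500)])
print(gs.mean(), gs.var())   # to be compared, after fitting xi, with Eqs. (57) and (70)
```

As in the text, extracting the crossover length from such data requires fitting the large-$`L`$ behaviour to the asymptotic forms (56) and (57); the sketch only illustrates the sweep itself.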
The characteristic length $`\xi `$ that governs the crossover to the slow algebraic decay of Eq. (57) is estimated to be $`280a`$ for $`N=15`$. The localization length $`\xi `$ that governs the exponential decay of Eq. (56) is estimated to be $`283a`$ for $`N=16`$. As in the case of the average conductance, for $`\text{var}g`$, the even-odd effect can be clearly seen for $`L\xi `$, where the numerical data coincide with the analytic result in the large $`N`$ limit, Eq. (69). The slight discrepancy at very small $`L`$ happens at $`MN`$ and may be understood as a crossover from quasi one-dimensional to quasi two-dimensional behavior. This type of one- to two-dimensional crossover was reported in a numerical work by Tamura and Ando. The Fokker-Planck approach employed in sections II and III is specifically devised for a quasi one-dimensional geometry and is therefore inapplicable to describe the regime $`MN`$. In Figs. 7 and 8 we consider the dependence of $`g`$ and $`\text{var}g`$ on $`N`$, $`p`$, and $`\epsilon `$. Figure 7(a) shows the average and variance of $`g`$ for odd $`N`$ at $`\epsilon =0`$ for two choices of $`N`$ and $`p`$. We see that the numerical data show fairly good agreement with the analytic large $`N`$ result (solid lines) for the three cases we examined. For larger disorder strength $`p`$, the deviations from the analytical result (69) is more prominent, the stronger disorder data being closer to the onset of quasi two-dimensional behavior for small $`L`$. The agreement between the numerical data for $`p=1`$ and the theory of Sec. III for $`L\xi `$ is remarkable, in view of the fact that the theory was derived under the assumption of weak disorder, whereas $`p=1`$ corresponds to the strongest possible disorder in the random flux model. Figure 7(b) shows $`g`$ and $`\text{var}g`$ for the random flux model (3) at $`\epsilon =0`$ for an even number $`N`$ of channels. We show the data for three cases $`(N,p)=(16,1)`$, (16,0.2), and (32,0.2). In the last example we used periodic boundary conditions in the transverse direction instead of the open boundary conditions of Eq. (3). Since $`N`$ is even, the periodic boundary conditions do not destroy the chiral symmetry, so that the system remains in the chiral unitary symmetry class. We see that the results of numerical simulations are indistinguishable from the theoretical curves (solid lines) for both $`g`$ and $`\mathrm{var}g`$ except in the quasi two-dimensional regime $`MN`$. We conclude from Fig. 7(a) and 7(b) that the localization properties of the random flux model at $`\epsilon =0`$ are governed by the chiral unitary universality class, independent of the disorder strength. In figure 8 we show some results where the chiral symmetry is broken. In this case, charge transport is no longer governed by the Fokker-Planck equation (II) for the chiral unitary symmetry class, but by the Fokker-Planck equation (1) that is valid for the standard unitary class. In the figure, numerical results are shown for three cases away from the critical energy as well as for one case where the chiral symmetry does not exist because of the periodic boundary condition imposed for odd $`N`$. With the exception of very short lengths, where the system becomes quasi two-dimensional, all the data for $`g`$ and for $`\text{var}g`$ agree with the theoretical prediction for the unitary class. 
The results indicate that the small nonzero energy $`\epsilon =0.02t`$ is sufficient to cause a change from the chiral unitary symmetry class at $`\epsilon =0`$ to the standard unitary symmetry class. Another interesting feature to note is that the localization length $`\xi `$ in Fig. 8 is roughly twice as large as that in the chiral case (Fig. 7). (For example, compare the two cases $`\epsilon =0`$ and $`\epsilon =0.02t`$ for $`N=16`$ and $`p=1`$, where we find $`\xi =27.2a`$ and $`\xi _\mathrm{u}=50.6a`$, respectively.) This behavior was observed earlier in Refs. . This result is consistent with the analytic result that $`\xi `$ differs by a factor of 2 between the chiral ($`\xi =N\mathrm{}`$) and the unitary class ($`\xi _\mathrm{u}=2N\mathrm{}`$), assuming that the mean free path determined by the short-distance physics is identical in the two classes. (For the numerical results we may expect that the mean free path should not have strong energy dependence on the scale of $`|\epsilon |<0.1t`$.) ## V Conclusion In this paper we studied transport properties of a particle on a rectangular lattice in the presence of uncorrelated random fluxes of vanishing mean. This problem is commonly known as the random flux problem. We considered a wire geometry and weak disorder and showed that the symmetries of the random flux problem have dramatic consequences on the statistical distribution of the conductance $`g`$. If the energy $`\epsilon `$ is away from the band center $`\epsilon =0`$, the system belongs to the standard unitary symmetry class, while at $`\epsilon =0`$, transport is governed by an additional symmetry of the random flux model, the particle-hole or chiral symmetry. We have compared numerical simulations of the average and variance of the conductance $`g`$ in the random flux model in a thick quantum wire to analytical calculations for the standard unitary and the chiral unitary symmetry classes, and found good agreement for $`\epsilon 0`$ and $`\epsilon =0`$, respectively. There are important differences between the conductance distribution in the chiral unitary symmetry class and the standard unitary symmetry class, both in the diffusive and the localized regime. These differences are summarized in Table I. The most striking feature of the chiral unitary symmetry class is the even-odd effect: If the number of channels $`N`$ in the wire is even, the average conductance $`g`$ decays exponentially with length $`L`$ in the localized regime $`LN\mathrm{}`$, whereas for odd $`N`$, the decay of $`g`$ is algebraic. The sensitivity to the chiral symmetry in transport properties is very strong. For example, removing the chiral symmetry by a change in boundary condition is sufficient to change the universality class to the standard unitary one, even in the thick-wire limit $`N1`$. Although our theory is limited to a quasi one-dimensional geometry and cannot account for the crossover from one to two dimensions, it does show the importance of the chiral symmetry to the transport properties of the random flux problem. Taking the prominent role played by symmetry for the random flux model in quasi one-dimension as a guideline, we speculate that a similar picture is appropriate for the two-dimensional random flux problem. 
Hence, following Gade and Wegner, and Miller and Wang we expect that the localization properties of the two-dimensional random flux problem are controlled by the unitary symmetry class away from the band center $`\epsilon =0`$, whereas the band center $`\epsilon =0`$ plays the role of a critical energy. The random flux problem would thus share with the Integer Quantum Hall Effect, and with the problem of Dirac fermions in a random vector potential the existence of a single critical energy that lies between energies with localized states. There are however two important differences with the Integer Quantum Hall Effect. First, there is no symmetry that fixes the value of the critical energy in the Integer Quantum Hall Effect, while the chiral symmetry of the random flux model implies that criticality occurs at the band center $`\epsilon =0`$. Second, in contrast to the smooth density of states in the Integer Quantum Hall Effect, one expects that the density of states in the random flux problem is singular at $`\epsilon =0`$. Such a singularity of the density of states at $`\epsilon =0`$ was observed in the single chain random hopping problem, is suggested by the numerical simulations of Refs. , and is consistent with Gade’s analysis of the two-dimensional non-linear-$`\sigma `$ model with chiral symmetry, and with exact results on the problem of Dirac fermions in a random vector potential. (The latter problem shares the same chiral symmetry as the random flux problem although it differs from the random flux problem in that the magnetic fluxes are strongly correlated on all length scales.) ###### Acknowledgements. We are indebted to A. Altland, K. M. Frahm, B. I. Halperin, D. K. K. Lee, P. A. Lee, M. Sigrist, N. Taniguchi and X.-G. Wen for useful discussions. AF and CM would like to thank P. A. Lee and R. Morf for their kind hospitality at MIT and PSI, respectively, where parts of this work were completed. PWB acknowledges support by the NSF under grant nos. DMR 94-16910, DMR 96-30064, and DMR 97-14725. CM acknowledges a fellowship from the Swiss Nationalfonds. AF is supported by a Monbusho grant for overseas research and is grateful to the condensed-matter group at Stanford University for hospitality. The numerical calculations were performed on workstations at the Yukawa Institute, Kyoto University. ## A Derivation of the Fokker-Planck equation In this paper, we described the transport properties of a quantum wire in the chiral unitary symmetry class in terms of its transfer matrix $``$. Our theoretical analysis was focused on a solution of the Fokker-Planck equation (II) that governs the $`L`$-evolution of the probability distribution $`P(x_1,\mathrm{},x_N;L)`$ of the eigenvalues $`e^{\pm 2x_j}`$ of $`^{}`$. A derivation of this Fokker-Planck equation from a different microscopic model was presented in Ref. . Here we present an alternative derivation of Eq. (II) that is closer in spirit to derivations of the Fokker-Planck equation for the unitary class existing in the literature. For the statistical distribution of the parameters $`x_j`$, the symmetries of the transfer matrix $`(\epsilon )`$ are of fundamental importance. For the random flux model, there are two symmetries (cf. Sec. II): $`(\epsilon )\mathrm{\Sigma }_3^{}(\epsilon )=\mathrm{\Sigma }_3,`$ $`\text{flux conservation},`$ (A1) $`\mathrm{\Sigma }_1(\epsilon )\mathrm{\Sigma }_1=(\epsilon ),`$ $`\text{chiral symmetry}.`$ (A2) Here the transfer matrix $``$ is defined in Eq. 
(4) and $`\mathrm{\Sigma }_j=\sigma _j𝟙_{}`$, where $`\sigma _j`$ is the Pauli matrix ($`j=1,3`$) and $`𝟙_{}`$ is the $`N\times N`$ unit matrix. Because of flux conservation (A1) $`(\epsilon )`$ can be parameterized as $``$ $`=`$ $`\left(\begin{array}{cc}_{11}& _{12}\\ _{21}& _{22}\end{array}\right)`$ (A3) $`=`$ $`\left(\begin{array}{cc}𝒰& 0\\ 0& 𝒰^{}\end{array}\right)\left(\begin{array}{cc}\mathrm{cosh}X& \mathrm{sinh}X\\ \mathrm{sinh}X& \mathrm{cosh}X\end{array}\right)\left(\begin{array}{cc}𝒱& 0\\ 0& 𝒱^{}\end{array}\right),`$ (A4) where $`𝒰`$, $`𝒰^{}`$, $`𝒱`$, and $`𝒱^{}`$ are $`N\times N`$ unitary matrices and $`X`$ is a diagonal matrix containing the parameters $`x_j`$ on the diagonal. We are interested in the case of zero energy, when the chiral symmetry (A2) results in the further constraints $`𝒰=𝒰^{}`$ and $`𝒱=𝒱^{}`$. Notice that in this case, with the parameterization (A4), the parameters $`x_j`$ are uniquely determined by $``$. This is an important difference with the unitary symmetry class, where each $`x_j`$ is only defined up to a sign. As a result, in the unitary class, the distribution $`P(x_1,\mathrm{},x_N;L)`$ has to be symmetric under a transformation $`x_jx_j`$ for each $`j`$ individually, while no such symmetry requirement exists in the chiral unitary class. As the length $`L`$ of the disordered region is increased (see Fig. 9), the parameters $`x_j`$, $`j=1,\mathrm{},N`$ are subjected to a Brownian motion process: As $`L`$ is increased by an amount $`\delta L`$, the parameters $`x_j`$ will undergo a (random) shift $`x_jx_j+\delta x_j`$. We first seek the appropriate Langevin equations that describe the statistical distribution of the increments $`\delta x_j`$. Hereto we note that the transfer matrix $`\widehat{}(0;L+\delta L)`$ is the product of the individual transfer matrices $`(0;L)`$ and $`^{}(0;\delta L)`$ for wires of length $`L`$ and $`\delta L`$, respectively: $$\widehat{}=^{}.$$ (A5) We also use that the matrix $$2_{11}_{12}^{}=𝒰\mathrm{sinh}(2X)𝒰^{}$$ (A6) is hermitian and has eigenvalues $`\mathrm{sinh}2x_j`$, $`j=1,\mathrm{},N`$. Hence we find that $`2\widehat{}_{11}\widehat{}_{12}^{}`$ $`=`$ $`𝒰\left(\mathrm{sinh}2X+2\mathrm{\Delta }\right)𝒰^{},`$ (A7) $`\mathrm{\Delta }`$ $`=`$ $`𝒰^{}\left(\widehat{}_{11}^{}\widehat{}_{12}^{}_{11}^{}_{12}^{}\right)𝒰.`$ (A8) Making use of the symmetries of $`^{}`$ and of the parameterization (A4) we can rewrite $`\mathrm{\Delta }`$ as $`\mathrm{\Delta }`$ $`=`$ $`\mathrm{cosh}X𝒱_{12}^{}_{12}^{}𝒱^{}\mathrm{sinh}X`$ (A12) $`+\mathrm{sinh}X𝒱_{12}^{}_{12}^{}𝒱^{}\mathrm{cosh}X`$ $`+\mathrm{cosh}X𝒱_{11}^{}_{12}^{}𝒱^{}\mathrm{cosh}X`$ $`+\mathrm{sinh}X𝒱_{12}^{}_{11}^{}𝒱^{}\mathrm{sinh}X.`$ We take the length $`\delta L`$ of the added slice small compared to the mean free path $`\mathrm{}`$. Within the thin slice the disorder is assumed to be uncorrelated beyond a length scale of the order of the lattice spacing $`a\delta L`$. In this case one has $`^{}=1+𝒪(\delta L)^{1/2}`$, so that the matrix $`\mathrm{\Delta }`$ is of order $`(\delta L)^{1/2}`$ itself and we can treat it in perturbation theory. 
As a result, we find that the addition of the slice of width $`\delta L`$ results in the change $`\mathrm{sinh}2\widehat{x}_j\mathrm{sinh}2x_j`$ $`=`$ $`2\mathrm{\Delta }_{jj}+4{\displaystyle \underset{kj}{}}{\displaystyle \frac{\mathrm{\Delta }_{jk}\mathrm{\Delta }_{kj}}{\mathrm{sinh}2x_j\mathrm{sinh}2x_k}}`$ (A14) $`+𝒪(\delta L^{3/2}),`$ or equivalently $`\delta x_j`$ $`=`$ $`{\displaystyle \frac{\mathrm{\Delta }_{jj}}{\mathrm{cosh}2x_j}}{\displaystyle \frac{\mathrm{\Delta }_{jj}^2\mathrm{sinh}2x_j}{\mathrm{cosh}^32x_j}}`$ (A17) $`+2{\displaystyle \underset{kj}{}}{\displaystyle \frac{\mathrm{\Delta }_{jk}\mathrm{\Delta }_{kj}}{(\mathrm{sinh}2x_j\mathrm{sinh}2x_k)\mathrm{cosh}2x_j}}`$ $`+𝒪(\delta L^{3/2}).`$ It remains to find the first two moments of $`\mathrm{\Delta }_{jk}`$. Hereto we make an ansatz for the distribution of the transfer matrix $`^{}`$. Because $`^{}`$ is close to $`1`$, it is natural to parameterize it in terms of its generator, $$^{}=\mathrm{exp}𝒜.$$ (A18) From the symmetry requirements (A1) and (A2) we deduce that $`𝒜`$ has the form $$𝒜=iV𝟙_\mathrm{𝟚}+𝕎\sigma _\mathrm{𝟙},$$ (A19) where $`V`$ and $`W`$ are hermitian $`N\times N`$ matrices. We choose a convenient statistical distribution of $`^{}`$ by assuming that $`V`$ and $`W`$ have independent, Gaussian distributions with zero mean and with variance $`V_{ij}V_{kl}=W_{ij}W_{kl}=\delta _{il}\delta _{jk}{\displaystyle \frac{\delta L}{N\mathrm{}}}.`$ (A20) Then we find that the first two moments of $`\mathrm{\Delta }`$ are given by $`\mathrm{\Delta }_{jk}`$ $`=`$ $`\delta _{jk}\mathrm{sinh}(2x_j){\displaystyle \frac{\delta L}{\mathrm{}}},`$ $`\mathrm{\Delta }_{jk}\mathrm{\Delta }_{kj}`$ $`=`$ $`\mathrm{cosh}^2(x_j+x_k){\displaystyle \frac{\delta L}{N\mathrm{}}}.`$ Combining this with Eq. (A17), we conclude that under addition of a narrow slice of width $`\delta L\mathrm{}`$, the parameters $`x_j`$ undergo a shift $`x_jx_j+\delta x_j`$ with $`\delta x_j_{\delta L}={\displaystyle \frac{\delta L}{N\mathrm{}}}{\displaystyle \underset{kj}{}}\mathrm{coth}(x_jx_k),`$ (A22) $`\delta x_j\delta x_k_{\delta L}={\displaystyle \frac{\delta L}{N\mathrm{}}}\delta _{jk},`$ (A23) all higher moments vanishing to first order in $`\delta L`$. Equation (A) is equivalent to the Fokker-Planck equation (II). ## B Solution to the Fokker-Planck equation In this appendix, we present an exact solution for the Fokker-Planck equation (II), closely following the exact solution of the DMPK equation in the unitary symmetry class by Beenakker and Rejaei. We start by rewriting Eq. (II) as $`\mathrm{}{\displaystyle \frac{P}{L}}`$ $`=`$ $`{\displaystyle \frac{1}{2N}}{\displaystyle \underset{j=1}{\overset{N}{}}}{\displaystyle \frac{}{x_j}}\left[{\displaystyle \frac{P}{x_j}}+2P\left({\displaystyle \frac{\mathrm{\Omega }}{x_j}}\right)\right],`$ (B2) $`\mathrm{\Omega }`$ $`=`$ $`{\displaystyle \frac{1}{2}}{\displaystyle \underset{j<k}{}}\mathrm{ln}|\mathrm{sinh}(x_jx_k)|^2,`$ (B3) where the initial condition is $`P(x_1,\mathrm{},x_N;0)={\displaystyle \underset{j=1}{\overset{N}{}}}\delta (x_j).`$ (B4) The key step towards the exact solution of Eq. 
(B) is the transformation $$P(\{x_j\};L)=\left[\underset{j<k}{}\mathrm{sinh}(x_jx_k)\right]\mathrm{\Psi }(\{x_j\};L),$$ (B5) which changes the Fokker-Planck equation (B) into a Schrödinger equation, $`\mathrm{}{\displaystyle \frac{\mathrm{\Psi }}{L}}`$ $`=`$ $`{\displaystyle \frac{1}{2N}}{\displaystyle \underset{j=1}{\overset{N}{}}}{\displaystyle \frac{^2\mathrm{\Psi }}{x_j^2}}+{\displaystyle \frac{1}{2N}}\mathrm{\Psi }{\displaystyle \underset{j=1}{\overset{N}{}}}\left[\left({\displaystyle \frac{\mathrm{\Omega }}{x_j}}\right)^2{\displaystyle \frac{^2\mathrm{\Omega }}{x_j^2}}\right]`$ (B6) $`=`$ $`{\displaystyle \frac{1}{2N}}{\displaystyle \underset{j=1}{\overset{N}{}}}{\displaystyle \frac{^2\mathrm{\Psi }}{x_j^2}}+U\mathrm{\Phi }.`$ (B7) Here $`U=(N1)(N2)/6+(N1)/2`$. Thus, $`\mathrm{\Psi }(x_1,\mathrm{},x_N;L)`$ obeys a Schrödinger equation in imaginary time $`L`$ that describes $`N`$ identical free particles on the line, $`\mathrm{}<x<\mathrm{}`$. (For comparison, in the unitary symmetry class, one finds that $`\mathrm{\Psi }`$ obeys a Schrödinger equation for $`N`$ identical particles moving in the presence of a potential $`\mathrm{sinh}^22x`$ which repels the $`x`$’s away from the origin.) Since the probability distribution $`P(x_1,\mathrm{},x_N;L)`$ is symmetric under a permutation of the $`x_j`$’s, it follows from Eq. (B5) that $`\mathrm{\Psi }(x_1,\mathrm{},x_N;L)`$ must be antisymmetric, i.e., it must describe the imaginary-time evolution of $`N`$ identical fermions. At $`L=0`$, the initial condition (B4) implies that all $`x_j`$ coincide at the origin. Hence, at $`L=0`$, the transformation (B5) is singular. We avoid this problem by starting with the initial condition $`P(\{x_j\};0|\{y_k\})`$ $`=`$ $`{\displaystyle \frac{1}{N!}}{\displaystyle \underset{\sigma }{}}{\displaystyle \underset{j=1}{\overset{N}{}}}\delta (x_jy_{\sigma (j)}),`$ (B8) $`y_j=ϵ(j1),`$ (B9) where all the initial values are different, and send $`ϵ`$ to zero at the end of the calculation. The summation is over all permutations $`\sigma `$ of $`1,\mathrm{},N`$. To solve Eq. (B7), we denote by $`G(x;L|y)`$ the single-particle Green function of the diffusion equation obeying $$\mathrm{}\frac{G}{L}=\frac{1}{2N}\frac{^2G}{x^2},G(x;0|y)=\delta (xy).$$ (B10) Solution of Eq. (B10) yields $$G(x;L|y)=\sqrt{\frac{N\mathrm{}}{2\pi L}}e^{\frac{N\mathrm{}}{2L}(xy)^2}.$$ (B11) Then the Slater determinant $`\mathrm{\Psi }(\{x_j\};L|\{y_k\})`$ $`=`$ $`{\displaystyle \frac{1}{N!}}\mathrm{det}\left[G(x_j;L|y_k)\right]_{j,k=1,\mathrm{},N}`$ (B13) $`\times e^{UL/\mathrm{}}`$ is antisymmetric in $`x_1,\mathrm{},x_N`$ and obeys the Schrödinger equation (B7). Using the inverse of the transformation (B5), we obtain that $`P(\{x_j\};L|\{y_k\})=`$ (B14) $`\mathrm{\Psi }(\{x_j\};L|\{y_k\}){\displaystyle \underset{j<k}{}}{\displaystyle \frac{\mathrm{sinh}(x_jx_k)}{\mathrm{sinh}(y_jy_k)}}`$ (B15) is the solution to the Fokker-Planck equation (B) with the regularized initial condition (B8). We finally take the limit $`ϵ0`$. This limit must be treated with care in view of the denominator of Eq. (B15). With the help of $`\mathrm{det}\left[e^{\frac{N\mathrm{}}{2L}(x_jy_k)^2}\right]_{j,k=1,\mathrm{},N}=`$ (B16) $`e^{\underset{j=1}{\overset{N}{}}\frac{N\mathrm{}}{2L}x_j^2}\left({\displaystyle \frac{N\mathrm{}ϵ}{2L}}\right)^{\frac{N(N1)}{2}}{\displaystyle \underset{j<k}{}}(x_jx_k)+𝒪(ϵ^2),`$ (B17) the singularity $`ϵ^{N(N1)/2}`$ coming from the denominator in Eq. (B15) is cancelled. We thus recover Eq. (17).
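Both the stochastic increments (A22)-(A23) and the regularized closed-form solution (B11)-(B15) are easy to evaluate for small N. The sketch below is only an illustration (a plain Euler scheme for the Brownian motion and a small but finite ε as in Eq. (B8); all x-independent normalization factors are dropped), so that histograms of the simulated x_j can be compared with the exact weights.

```python
import numpy as np

def evolve_x(N=4, ell=1.0, L_total=10.0, dL=1.0e-3, eps=0.02, rng=None):
    """Euler iteration of the increments (A22)-(A23), lengths in units of the mean free path ell.

    The initial values x_j(0) = eps*(j-1) regularize the coth singularity at
    coinciding x_j, in the spirit of Eq. (B8); eps is small but nonzero.
    """
    rng = rng if rng is not None else np.random.default_rng()
    x = eps * np.arange(N, dtype=float)
    for _ in range(int(round(L_total / dL))):
        diff = x[:, None] - x[None, :]
        np.fill_diagonal(diff, 1.0)             # dummy entries, removed two lines below
        coth = 1.0 / np.tanh(diff)
        np.fill_diagonal(coth, 0.0)             # drop the k = j term of Eq. (A22)
        x += (dL / (N * ell)) * coth.sum(axis=1) \
             + np.sqrt(dL / (N * ell)) * rng.standard_normal(N)
    return np.sort(x)

def diffusion_kernel(x, y, L, N, ell=1.0):
    """Single-particle Green function of Eq. (B11)."""
    return np.sqrt(N * ell / (2.0 * np.pi * L)) * np.exp(-N * ell * (x - y) ** 2 / (2.0 * L))

def p_unnormalized(xs, L, ell=1.0, eps=1.0e-2):
    """Regularized solution (B13)-(B15) at small eps, up to an x-independent normalization."""
    xs = np.asarray(xs, dtype=float)
    N = xs.size
    ys = eps * np.arange(N)
    value = np.linalg.det(diffusion_kernel(xs[:, None], ys[None, :], L, N, ell))
    for j in range(N):
        for k in range(j + 1, N):
            value *= np.sinh(xs[j] - xs[k]) / np.sinh(ys[j] - ys[k])
    return value

# e.g. compare samples of the Brownian motion with relative weights from the closed form
samples = np.array([evolve_x(rng=np.random.default_rng(s)) for s in range(100)])
print(samples.mean(axis=0))
print(p_unnormalized([-1.5, -0.3, 0.5, 1.4], L=10.0)
      / p_unnormalized([-0.5, -0.2, 0.2, 0.5], L=10.0))
```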
# Monitoring the Stability of the ALEPH Vertex Detector ## 1 Introduction The ALEPH Silicon Vertex Detector (VDET) has an active length of 40 cm and consists of two concentric cylindrical layers of 144 micro-strip silicon detectors of $`5.3\times 6.5\mathrm{c}m^2`$ with double-sided readout. Six of them are glued together and instrumented with readout electronics on each end to form the VDET elementary unit (face). The inner layer ($`6.3\mathrm{c}m`$ radius) is formed by 9 faces, the outer layer ($`10.5\mathrm{c}m`$ radius) consists of 15 faces. Strips on $`p^+`$-side run parallel to $`z`$ axis in the ALEPH reference system, allowing $`r\varphi `$ coordinates measurement; $`n`$-side strips, running normal to $`z`$ axis, allow $`z`$ coordinate measurement. A detailed description of VDET can be found elsewhere . ## 2 Laser system VDET features a laser system to monitor mechanical stability with respect to the external tracking devices. A large movement ($`>20\mu \mathrm{m}`$) of the VDET during data-taking could degrade significantly the $`b`$-tagging performance. It is thus very important to keep this aspect under strict control. This is especially true at LEP2, where the detector alignment is performed on an initial sample of Z events and monitoring using the high energy data itself is difficult due to the reduced event rates. The system uses infrared light ($`\lambda =904\mathrm{n}m`$) from two pulsed laser diodes with an output power of $`6\mathrm{W}`$ each and a pulse length of $`50\mathrm{n}s`$. The light is distributed via optical fibres to prisms and lenses attached to the inner wall of the Inner Tracking Chamber (ITC, the closest outer detector). The lenses focus 44 light spots on 14 of the 15 VDET outer faces<sup>1</sup><sup>1</sup>1One face is not equipped because the mechanical structure nearby does not allow the optic installation., normally 3 spots per face, two spots close to the ends of the face and one spot about in the middle of the face. All laser beams are nominally parallel to the $`xy`$ plane; laser beams at the VDET ends are normal to the silicon surface while laser beams that point to the central wafers have an azimuthal incident angle close to 45 degrees, in order to be sensitive to movements normal to the face plane. Information on the VDET displacements with respect to the ITC are obtained by monitoring the laser beam impact position on the silicon wafers (laser spot) versus time: the $`r\varphi `$ spot position is sensitive only to displacements in the $`xy`$ plane and the $`z`$ spot position is sensitive only to displacements along the $`z`$ direction. An $`xy`$ section of the VDET indicating the nominal laser beam positions is shown in Fig. 1. In Fig. 2 a sketch of the typical laser spot positions on a face is shown. For a detailed description of the VDET laser system see . The laser system operates during standard data taking: the laser trigger, synchronized with the beam-crossing signal, is fired once per minute (approximately once per $`100`$ physics events) and the laser event is acquired as a standard physics event. After the installation 5 spots out of 44 spots were absent, probably due to misalignment of the optics or breakage of the optical fibre. For the remaining spots, the efficiency is essentially 100% due to the large pulse height of a spot cluster. During 1997 (1998) $`62000`$ ($`129000`$) laser events were collected. 
## 3 Analysis of displacements Each laser spot position is reconstructed event-by-event using a standard centre-of-gravity algorithm. A deviation, $`\mathrm{\Delta }`$, is defined for each event and for each spot as the difference between the actual spot position and a nominal position. The raw deviation for a typical spot as a function of time shows various features: a long term effect (Fig. 3 (a)), a medium term effect, present after quite long shutdown periods (Fig. 3 (b)), and a short term effect with a timescale of a few hours (Fig. 3 (c)). ### 3.1 Short and Medium Term Effects The $`r\varphi `$ and $`z`$ deviations of the three spots of a face are plotted in Fig. 4 (a) and (b) for a four day period after a long shutdown. Also shown is a time chart of the VDET temperature over the same period (Fig. 4 (c)). The $`r\varphi `$ side shows clear short and medium term effects correlated with the temperature variations. In order to reduce the probability of radiation damage, VDET is completely ON only when LEP is in “stable beams”. The $`r\varphi `$ central spot, which is at $`45^{}`$ and therefore sensitive to radial motion, shows the biggest deviation and is consistent with the expected face bowing due to the bimetallic effect (the face has a kevlar and carbon fibre beam glued on the silicon to ensure mechanical rigidity). The medium term effect is visible in Fig. 4 (a) for the deviation of the central spot. It seems that “equilibrium” is only reached after about two days after a long shutdown. This medium term effect is also thought to be due to the face bowing. During standard running, OFF periods are short ($`1÷2`$ hours) and recovery from thermal expansion is partial; during long shutdowns (more than $`1`$ day) the recovery is complete and after that a certain time is needed to reach again the normal warming-cooling cycle. The maximum bowing sagitta $`S`$ and the time constant $`\tau `$ involved in these temperature related phenomena can be estimated by fitting the following parametrisation over the 45 spot deviation $`\mathrm{\Delta }`$ versus time $`t`$: $$\mathrm{\Delta }(t)=S\left(1e^{(tt_0)/\tau }\right).$$ (1) The fits have been done over a typical ON period just after a long shutdown (with $`t_0=0`$) and over a standard ON period far from a long shutdown (with free $`t_0`$). Depending on the face, the sagitta ranges from $`5`$ to $`12\mu \mathrm{m}`$ with a mean of $`8\mu \mathrm{m}`$ and the time constant $`\tau `$ varies from $`130`$ to 280 min with a mean of 160 min. For the tracking performance the short term displacements are small enough to be neglected, especially as they mainly affect the radial direction. For the short term case the width of the residual distribution of data points with respect to the parametrisation in Eq. 1 is an estimate of the spatial resolution for a single event. Thanks to the large pulse height and the fact the cluster extends over more than one readout strip, the resolution is typically $`0.5`$ to $`1.5\mu \mathrm{m}`$. ### 3.2 Long-term Effects The long term effect is visible in Fig. 5, where the spot deviations with respect to initial values are plotted versus time for the entire 1998 data taking period. Some spots are not displayed because they are missing or are not used in the analysis due to inadequate pulse height or unusual cluster profile. The $`r\varphi `$ side deviations show a systematic long term drift that depends on face number (see Fig. 7 for face numbering convention). 
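As a supplement to the short-term analysis of Sec. 3.1, the saturation law of Eq. (1) is a standard nonlinear least-squares problem. The sketch below is purely schematic: the input points are generated from the model itself with the typical values quoted above (sagitta of about 8 μm, time constant of about 160 min, roughly 1 μm single-event resolution) rather than taken from real spot measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def bowing(t, S, tau, t0):
    """Saturation law of Eq. (1): deviation approaches the sagitta S with time constant tau."""
    return S * (1.0 - np.exp(-(t - t0) / tau))

rng = np.random.default_rng(42)
t = np.linspace(0.0, 420.0, 22)                                   # minutes after switch-on
dev = bowing(t, 8.0, 160.0, 0.0) + rng.normal(0.0, 1.0, t.size)   # synthetic 45-degree-spot deviations [um]

# after a long shutdown t0 is fixed to zero (as in the text); otherwise it is left free
popt, pcov = curve_fit(lambda tt, S, tau: bowing(tt, S, tau, 0.0), t, dev, p0=(5.0, 100.0))
errs = np.sqrt(np.diag(pcov))
print(f"S = {popt[0]:.1f} +- {errs[0]:.1f} um,  tau = {popt[1]:.0f} +- {errs[1]:.0f} min")
```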
The size of the $`xy`$ displacements are as large as $`20\mu \mathrm{m}`$ and are larger than the single hit resolution on a charged track ($`10\mu \mathrm{m}`$). On the contrary, the $`z`$ side deviations are smaller and quite similar. During 1997 and 1998 this unexpected behaviour has been studied, parametrised and an alignment correction has been implemented. The laser spots do not provide enough information to fully reconstruct the displacements of each individual face, nevertheless the observed deviations do not seem to be due to an independent motion of the faces. The observed long term deviations are thus parametrised assuming that VDET is moving as a rigid body with respect to the initial position, given by the alignment procedure with tracks performed at the beginning of data taking in a run at the $`\mathrm{Z}`$ resonance. The deviation of a spot is expressed as a function of the standard parameters used to describe the motion of rigid bodies, consisting of 3 angles and 3 translation vectors. These are then extracted by a fit procedure that minimises the squared differences between the observed and the predicted deviation of the spots. The fit has been performed, with 3 out of the 6 parameters fixed to zero: the two rotation angles about the $`x`$ and $`y`$ axes, having found negligible displacement in $`xz`$ and $`yz`$ planes, and the translation along $`z`$ because the observed $`z`$ displacement is more consistent with a deformation rather than a global displacement. The result is shown in Fig. 6, where the three free parameters ($`x`$ translation, $`y`$ translation and $`z`$ rotation) are plotted versus time starting from the initial $`\mathrm{Z}`$ run. A pictorial view of the corresponding VDET displacement is shown in Fig. 7; it is consistent with a rotation around the bottom face. For the fit, the deviations have been averaged over a period of two days. The errors on the parameters are estimated “a posteriori” by forcing the $`\chi _{min}^2`$ to be equal to the number of degrees of freedom; the resulting uncertainty on the single spot deviation is plotted in Fig. 8 versus time. Although the single spot spatial resolution is better than that given in Fig. 8, the rigid body assumption does not take into account possible structural deformations, misalignment of the fibres or independent motion of the faces. The time dependent motion of the VDET extracted from the laser system has also been independently confirmed using charged tracks in data acquired at high energy and also during a run at the $`\mathrm{Z}`$ resonance taken at the end of the 1998 data taking. To cope with the low statistics during the high energy running, the alignments were performed averaging over a $`1`$ month period. The values for the alignment parameters so obtained are also plotted in Fig. 6 and are generally in good agreement. In the $`x`$ translation, the alignment points are shifted with respect to laser points from about day 60. This is correlated with a beam loss into the Time Projection Chamber field cage that caused a measurable (10$`\mu \mathrm{m}`$) relative displacement of the TPC with respect to the ITC. This emphasises that the alignment obtained directly from the data is also sensitive to any time dependence of the alignment of the other tracking chambers. Based on the alignment parameters extracted from the laser system, a time dependent correction to the initial $`\mathrm{Z}`$ alignment has been applied to all the 1998 data. Fig. 
9 shows the distribution of the sum of the impact parameters of the two muons in dimuon events before and after applying the alignment correction. Without the correction the mean of the distribution is shifted by $`13\mu \mathrm{m}`$, when the correction is applied the distribution is centered close to zero as expected. The observed rotation of the VDET may be explained as follows: the VDET is supported by two “feet” which slot into two long metal rails located at the top and bottom of the ITC cylinder. It seems that the VDET is rotating around the lower foot (small cross in Fig. 7). Although the lower foot is rigid (made of metal), the upper foot had to be “springy” (made of plastic) in order to allow for distance variations between the top and bottom rails as the VDET slides along the rails during the installation process. It is thought that the plastic foot suffers some small deformation. ## 4 Conclusions The laser based alignment system for the ALEPH vertex detector is operational. It allows reliable and high precision monitoring of the VDET position during data taking. It has allowed the study of small temperature related motions and revealed a long term rotation of the VDET. An alignment correction based on the information from this system has been successfully applied.
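As a supplement to the rigid-body fit described above, the sketch below solves the corresponding least-squares problem for the three free parameters (translations along x and y and the rotation about z) from a set of r-φ spot deviations; since the model is linear in these parameters the fit reduces to a single matrix solve. The spot layout, sign convention and numerical values used here are invented for the example and do not reproduce the real VDET geometry.

```python
import numpy as np

def fit_rigid_body(pos, direction, dev):
    """Linear least-squares estimate of (dx, dy, phi_z) from r-phi spot deviations.

    pos       : (n, 2) nominal spot positions (x, y)
    direction : (n, 2) unit vectors of the local r-phi measurement direction
    dev       : (n,)   measured deviations along `direction` (same length unit as pos)

    A small rigid-body motion shifts the point (x, y) by (dx - phi_z*y, dy + phi_z*x);
    each spot only measures the projection of this shift on its own direction.
    """
    x, y = pos[:, 0], pos[:, 1]
    nx, ny = direction[:, 0], direction[:, 1]
    A = np.column_stack([nx, ny, x * ny - y * nx])
    params, *_ = np.linalg.lstsq(A, dev, rcond=None)
    return params, dev - A @ params

# invented layout: spots on the outer-layer radius, measured along the tangential direction
phi = np.linspace(0.0, 2.0 * np.pi, 14, endpoint=False)
pos = 105.0e3 * np.column_stack([np.cos(phi), np.sin(phi)])        # 10.5 cm expressed in microns
direction = np.column_stack([-np.sin(phi), np.cos(phi)])
dx, dy, phi_z = 15.0, -8.0, 2.0e-4                                 # [um], [um], [rad]
dev = (direction * np.column_stack([dx - phi_z * pos[:, 1],
                                    dy + phi_z * pos[:, 0]])).sum(axis=1)
print(fit_rigid_body(pos, direction, dev)[0])                      # recovers (15, -8, 2e-4)
```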
# Comparison of superconductivity in 𝑆⁢𝑟₂⁢𝑅⁢𝑢⁢𝑂₄ and copper oxides \[ ## Abstract To compare the superconductivity in strongly correlated electron systems with the antiferromagnetic fluctuations in the copper oxides and with the ferromagnetic fluctuations in $`Sr_2RuO_4`$ a $`tJI`$ model is proposed. The antiferromagnetic coupling $`J`$ results in the superconducting state of $`d_{x^2y^2}`$ symmetry and the ferromagnetic coupling $`I`$ results in the spin-triplet $`p`$-type state. The difference in the gap anisotropies provides the large difference in $`T_c`$ values, for the typical values of the coupling constants: $`T_c1K`$ for the ruthenate and $`T_c100K`$ for the cuprates. PACS numbers: 71.27.+a, 74.25 \] More then a decade of intensive research of the cuprate superconductors and related systems has raised fundamental challenges to our understanding of the mechanism of high-temperature superconductivity (SC). One of the most important question is what is so specific in copper oxides, is it the unique chemistry of the planar $`CuO`$ bond that determines the high value of $`T_c`$ ? The discovery of SC in $`Sr_2RuO_4`$ with $`T_c1K`$ is of a particular interest because it has a similar crystal structure to the parent compound $`La_2CuO_4`$, of one of the best studied families of the cuprate superconductors, $`La_{2x}Sr_xCuO_4`$, but has four valence electrons (for $`Ru^{4+}`$) instead of one hole per formula unit. It is generally believed that comparison of normal and SC properties of the cuprates and the ruthenate will give more deeper understanding of the nature of high-$`T_c`$ SC. While the normal state of doped cuprates looks like almost antiferromagnetic Fermi-liquid , the normal state of $`Sr_2RuO_4`$ is characterised by the strong ferromagnetic fluctuations . Properties of SC state are also different: the singlet pairing with major contribution of the $`d_{x^2y^2}`$ symmetry was suggested for the cuprates , while the triplet pairing with $`p`$-type symmetry similar to the $`{}_{}{}^{3}HeA_1`$ phase is proposed for $`Sr_2RuO_4`$ . The triplet SC in $`Sr_2RuO_4`$ is induced by the ferromagnetic spin fluctuations . To compare the SC in $`Sr_2RuO_4`$ and cuprates we have proposed here a $`tJI`$ model containing both an indirect antiferromagnetic coupling $`J`$ and a direct ferromagnetic coupling $`I`$ between neighboring cations. This model is based on the electronic structure calculations. An important difference from the cuprates is that relevant orbitals to the states near the Fermi energy are $`Rudϵ(d_{xy},d_{yz},d_{xz})`$ and $`Op\pi `$, instead of $`Cud_{x^2y^2}`$ and $`Op\sigma `$ states. Due to $`\sigma `$-bonding in the cuprates a strong $`pd`$ hybridization takes place resulting in the strong antiferromagnetic coupling $`J`$, a direct $`d_{x^2y^2}CuCu`$ overlapping is negligible. In $`Sr_2RuO_4`$ with $`\pi `$-bonding the $`RuORu`$ 180-degree antiferromagnetic superexchange coupling is weak while a direct $`d_{xy}RuRu`$ overlapping is not small. That is why we add the Heisenberg type direct $`RuRu`$ exchange interaction to the Hamiltonian of the $`tJ`$ model. The strong electron correlations are common features of the charge carriers both in cuprates and $`Sr_2RuO_4`$ in our model. These correlations for the cuprates are well known . The importance of electron correlations for $`Sr_2RuO_4`$ follows from the high value of the effective mass of electrons in the $`\gamma `$-band obtained by the quantum oscillations measurements . 
We have found the different solutions for SC state in $`tJI`$ model: with the singlet $`d`$-type pairing governed by the antiferromagnetic coupling $`J`$ and with the triplet $`p`$-type pairing induced by the ferromagnetic coupling $`I`$. The equations for $`T_c`$ in both states are similar. Nevertheless the same absolute value of the coupling constants results in quite different $`T_c`$ values, $`T_c^{(p)}1K`$ and $`T_c^{(d)}100K`$ for typical values of parameters. The gap anisotropy is responsible for the large difference in the $`T_c`$ values. For the $`p`$-type pairing the $`𝐤`$-dependence of the gap provides cancellation of the singular van-Hove contribution of the two-dimensional density of states, while the $`𝐤`$-dependence of the $`d`$-type gap results in the significant contribution of the van-Hove singularity. The Hamiltonian of the $`tJI`$ model is written in the form $`H={\displaystyle \underset{𝐟\sigma }{}}(\epsilon \mu )X_𝐟^{\sigma \sigma }t{\displaystyle \underset{𝐟\delta \sigma }{}}X_𝐟^{\sigma 0}X_{𝐟+\delta }^{0\sigma }+J{\displaystyle \underset{𝐟\delta }{}}K_{𝐟,𝐟+\delta }^{()}`$ (1) $`I{\displaystyle \underset{𝐟\delta }{}}K_{𝐟,𝐟+\delta }^{(+)},`$ (2) $`K_{\mathrm{𝐟𝐠}}^{(\pm )}=\stackrel{}{S}_𝐟\stackrel{}{S}_𝐠\pm {\displaystyle \frac{1}{4}}n_𝐟n_𝐠,X_𝐟^{}+X_𝐟^{}+X_𝐟^{00}=1.`$ (3) Here the Hubbard X-operators $`X_𝐟^{pq}=|pq|`$ are determined in the reduced Hilbert space containing empty states $`|0`$ and single-occupied states $`|\sigma `$, $`\sigma =`$ or $`\sigma =`$. The X-operators algebra exactly takes into accounts the constrain condition that is one of the important effects of the strong electron correlations. The $`\stackrel{}{S}_𝐟`$ and $`n_𝐟`$ operators in (2) are the usual spin and number of particles operators at the site $`𝐟`$, $`\delta `$ is the vector between n.n.. For the cuprates $`JI`$, but for $`Sr_2RuO_4`$ $`IJ`$. To get SC the copper oxides should be doped while $`Sr_2RuO_4`$ is self-doped. According to the band structure calculations the electron $`\alpha `$-band in $`Sr_2RuO_4`$ is half-filled, the hole $`\beta `$-band has $`n_0=0.28`$ holes and the electron $`\gamma `$-band with $`d_{xy}`$ contribution is more then half-filled, $`n_\gamma =1+n_0`$. The strong electron correlations split the $`\gamma `$-band into filled lower Hubbard band (LHB) with $`n_e=1`$and partially filled upper Hubbard band (UHB) with the electron concentration $`n_e=n_0`$. We use the hole representation where the electron UHB transforms in the hole LHB with hole concentration $`n_h=1n_0`$. All other bands ( $`\alpha `$ and $`\beta `$ ) are treated here as an electron reservoir. Observation of a square flux-line lattice in $`Sr_2RuO_4`$ allows to suggest that SC resides mainly on the $`\gamma `$ band . For the cuprates the quasiparticle is a hole in the electron LHB with the electron concentration $`n_e=1n_0`$, for $`La_{2x}Sr_xCuO_4`$ $`n_0=x`$. There are many ways to get the mean-field solutions for SC state, we have used the irreducible Green function method projecting the higher-order Green functions onto subspace of normal $`X_𝐤^{0\sigma }X_𝐤^{\sigma 0}`$ and abnormal $`X_𝐤^{\sigma 0}X_𝐤^{\sigma 0}`$ Green functions coupled via the Gorkov system of equations. Three different solutions have been studied: singlet $`s`$\- and $`d`$\- and triplet $`p`$-types. 
The gap equation has the form for $`s`$-state $`1=\frac{1}{N}{\displaystyle \underset{𝐩}{}}{\displaystyle \frac{2\omega _𝐩+(2g\lambda )\omega _𝐩^2}{2E_{𝐩0}}}\mathrm{tanh}\left({\displaystyle \frac{E_{𝐩0}}{2\tau }}\right)`$ (4) and for the $`p,d`$-states $`{\displaystyle \frac{1}{\alpha _l}}={\displaystyle \frac{1}{N}}{\displaystyle \underset{𝐩}{}}{\displaystyle \frac{\psi _l^2(𝐩)}{2E_{𝐩l}}}\mathrm{tanh}\left({\displaystyle \frac{E_{𝐩l}}{2\tau }}\right).`$ (5) Here for $`s,p,d`$-states $`E_{𝐩l}=\sqrt{c^2(n_0)(\omega _𝐩m)^2+\left|\mathrm{\Delta }_{𝐩l}\right|^2},`$ (6) $`m=\left[{\displaystyle \frac{\mu ϵ}{zt}}+(g+\lambda ){\displaystyle \frac{1n_0}{2}}\right]/c(n_0),`$ (7) where $`\omega _𝐩=\gamma _𝐩=(1/z)\underset{\delta }{}\mathrm{exp}(i𝐩\delta )`$, $`m`$ is a dimensionless chemical potential, $`c(n_0)=(1+n_0)/2`$, $`g=J/t,\lambda =I/t`$ and $`\tau =k_BT/zt`$ is a dimensionless temperature. The SC gaps are equal to $`\mathrm{\Delta }_{𝐤0}=[2+(2g\lambda )\omega _𝐤]\mathrm{\Delta }_0,\mathrm{\Delta }_0=\frac{1}{N}{\displaystyle \underset{𝐩}{}}\omega _𝐩B_𝐩/c(n_0),`$ (8) $`B_𝐩=X_𝐩^0X_𝐩^0`$ (9) for $`s`$-type $`(l=0)`$ and $`\mathrm{\Delta }_{𝐤l}=\alpha _l\psi _l(𝐤){\displaystyle \frac{1}{N}}{\displaystyle \underset{𝐩}{}}\psi _l(𝐩)B_𝐩/c(n_0)`$ (10) for the $`p`$\- and $`d`$\- states. The coupling constants and the gap anisotropy in the $`l`$-th channel are given by $`\alpha _p=\lambda ,\psi _p(𝐤)={\displaystyle \frac{1}{2}}(\mathrm{sin}k_x+\mathrm{sin}k_y),`$ (11) $`\alpha _d=(2g\lambda ),\psi _d(𝐤)={\displaystyle \frac{1}{2}}(\mathrm{cos}k_x\mathrm{cos}k_y).`$ (12) Here we have considered only the two-dimensional square lattice with the lattice parameter $`a=1`$. The equation for the chemical potential has the form $`1n_0=\frac{1}{N}{\displaystyle \underset{𝐤\sigma }{}}X_𝐤^{\sigma 0}X_𝐤^{0\sigma }.`$ (13) An important effect of the strong electron correlation is the constrain condition excluding double-occupied states $`\frac{1}{N}{\displaystyle \underset{𝐤}{}}B_𝐤=\frac{1}{N}{\displaystyle \underset{𝐤}{}}X_𝐤^0X_𝐤^0=0.`$ (14) The first term in (4) for the singlet $`s`$-type pairing parameter is proportional to $`2tz`$ and appears due to the kinematic mechanism of pairing . The $`s`$-type solution does not satisfy to the constrain condition (14) while for $`p`$\- and $`d`$-type it is fulfilled. The equation for $`T_c`$ in $`p`$\- and $`d`$-states is given by $`{\displaystyle \frac{2c(n_0)}{\alpha _l}}=\frac{1}{N}{\displaystyle \underset{𝐩}{}}{\displaystyle \frac{\psi _l^2(𝐩)}{\left|\omega _𝐩m\right|}}\mathrm{tanh}\left({\displaystyle \frac{c(n_0)\left|\omega _𝐩m\right|}{2\tau _c^{(l)}}}\right).`$ (15) The same equation for the $`d_{x^2y^2}`$-pairing has been derived by the diagram technique for the $`tJ`$ model . At the numerical solution of the equations (15) more then $`10^6`$ points of the Brillouine zone have been taken. Results of $`T_c(n_0)`$ computations are shown in the Fig.1 and 2 for several values of the coupling constants $`\alpha _l`$. These results have revealed the remarkable difference in $`T_c`$ values: $`T_c^{(p)}T_c^{(d)}`$ when $`\alpha _p=\alpha _d`$. The moderate values of $`\alpha 0.40.5`$ and $`zt0.5eV`$ result in $`T_c^{(p)}1K,T_c^{(d)}100K`$. It is clear from equations (11),(12) that the $`p`$-type SC is formed by the ferromagnetic interaction, that is the case of $`Sr_2RuO_4`$, and the $`d`$-type SC is induced by the antiferromagnetic interaction in copper oxides. To understand why $`T_c^{(d)}T_c^{(p)}`$ we have analysed the eqn. (15) analytically. 
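Equation (15) is also straightforward to solve numerically. The sketch below is our own illustration: it uses a much coarser k-grid than the more than 10^6 points quoted above, and it treats the dimensionless chemical potential m as an input parameter instead of fixing it through Eqs. (13)-(14).

```python
import numpy as np
from scipy.optimize import brentq

def tau_c(alpha_l, n0, m, channel="d", nk=512):
    """Solve the linearized gap equation (15) for tau_c = k_B T_c / (z t).

    alpha_l : coupling constant (alpha_p = lambda, alpha_d = 2g - lambda)
    n0      : carrier concentration entering c(n0) = (1 + n0)/2
    m       : dimensionless chemical potential, treated here as an input parameter
    channel : "p" or "d" pairing symmetry
    Returns 0 if no solution is found down to tau ~ 1e-6 on this grid.
    """
    c = 0.5 * (1.0 + n0)
    k = (np.arange(nk) + 0.5) * 2.0 * np.pi / nk - np.pi
    kx, ky = np.meshgrid(k, k, indexing="ij")
    omega = 0.5 * (np.cos(kx) + np.cos(ky))                  # gamma_p on the square lattice
    psi = (0.5 * (np.sin(kx) + np.sin(ky)) if channel == "p"
           else 0.5 * (np.cos(kx) - np.cos(ky)))
    e = np.abs(omega - m)

    def rhs(tau):
        return np.mean(psi ** 2 / np.maximum(e, 1e-12) * np.tanh(c * e / (2.0 * tau)))

    target = 2.0 * c / alpha_l
    if rhs(1e-6) < target:
        return 0.0
    return brentq(lambda tau: rhs(tau) - target, 1e-6, 10.0)

# same coupling and filling in both channels; here the d-wave tau_c comes out much larger,
# illustrating the difference between the two pairing symmetries discussed in the text
print(tau_c(0.6, 0.1, 0.0, "d"), tau_c(0.6, 0.1, 0.0, "p"))
```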
Using integration over the constant energy surfaces $`\omega _𝐤=\omega `$ it can be rewritten like $`{\displaystyle \frac{2c(n_0)}{\alpha _l}}={\displaystyle \underset{1}{\overset{+1}{}}}{\displaystyle \frac{\psi _l^2(\omega )}{\left|\omega m\right|}}\mathrm{tanh}\left({\displaystyle \frac{c(n)\left|\omega m\right|}{2\tau _c}}\right)𝑑\omega ,`$ (16) $`\psi _l^2(\omega )={\displaystyle \frac{1}{(2\pi )^2}}{\displaystyle \underset{\left(\sigma _\omega \right)}{}}{\displaystyle \frac{\psi _l^2(𝐤)}{\left|_𝐤\omega _𝐤\right|}}𝑑\sigma _\omega .`$ (17) The sum rule for the $`\psi _l^2(\omega )`$ functions is the same for $`l=p`$ and $`l=d`$ : $`\frac{1}{N}{\displaystyle \underset{𝐤}{}}\psi _l^2(𝐤)={\displaystyle \underset{1}{\overset{+1}{}}}\psi _l^2(\omega )𝑑\omega =1/4.`$ (18) For the $`p`$-state $`{\displaystyle \frac{\psi _p^2(𝐤)}{\left|_𝐤\omega _𝐤\right|}}=\left|_𝐤\omega _𝐤\right|+{\displaystyle \frac{\mathrm{sin}k_x\mathrm{sin}k_y}{\sqrt{\mathrm{sin}^2k_x+\mathrm{sin}^2k_y}}}`$ (19) the second term in (19) gives zero contribution to the integral (17) and $`\psi _p^2(\omega )`$ is rather small with smooth energy dependence, $`\psi _p^2(\omega )(2/\pi ^2)(1\left|\omega \right|^{1.61})`$. For the $`d`$-state $`{\displaystyle \frac{\psi _d^2(𝐤)}{\left|_𝐤\omega _𝐤\right|}}={\displaystyle \frac{1}{2}}{\displaystyle \frac{(\mathrm{cos}k_x\mathrm{cos}k_y)^2}{\sqrt{\mathrm{sin}^2k_x+\mathrm{sin}^2k_y}}}`$ (20) has the same singularity as the van-Hove singularity in the density of states $`\rho (\omega )`$. The result of calculation is $`\psi _d^2(\omega )=(1\omega ^2)\rho (\omega )2\psi _p^2(\omega ),`$ (21) $`\rho (\omega )={\displaystyle \frac{1}{\pi }}\left({\displaystyle \frac{1}{2}}{\displaystyle \frac{1}{\pi }}\right)\mathrm{ln}(\left|\omega \right|).`$ (22) The comparison of $`\psi _p^2(\omega )`$ and $`\psi _d^2(\omega )`$ has shown that the van-Hove singularity is cancelled in the $`p`$-state and does not cancelled in the $`d`$-state (Fig 3). In conclusion we have presented the model of strongly correlated electrons in two dimensional lattice that allows to consider the cuprates ($`JI`$) and $`Sr_2RuO_4`$ $`(JI)`$ on the same footing. The singlet SC in the $`s`$-state is absent in the strong correlation limit, the triplet $`p`$-pairing occurs due to the ferromagnetic fluctuations and the singlet $`d`$-pairing is induced by the antiferromagnetic fluctuations. The reason why $`T_c`$ in the cuprates is much higher then in $`Sr_2RuO_4`$ is the different gap anisotropies. For the $`p`$-state $`𝐤`$-dependence of the gap results in the cancellation of the van-Hove singularity while for the $`d`$-state the gap anisotropy permits large van-Hove singularity contribution in the equation for $`T_c`$. For the question what is so specific in the copper oxides for high-$`T_c`$ superconductivity the possible answer may be as follows: it is the planar $`CuO\sigma `$-bonding resulting in the strong antiferromagnetic $`CuCu`$ interaction, that induced the singlet pairing with $`d_{x^2y^2}`$ symmetry. We thank N.M.Plakida for useful discussions and I.O.Baklanov for numerical calculations. The work was supported by the Russian Federal Program ”Integration of high school education and science”, grant N69. \***** \*Present address: L.V.Kirensky Institute of Physics, Krasnoyarsk, 660036, Russia. Electronic address: sgo@post.krascience.rssi.ru \- - - - - Y.Maeno, H.Hasimoto, K.Yoshida, S.Nisshizaki, T.Fujita, F.Lichtenberg, Nature 372, 532 (1994). D.Pines, Physica B 163, 78 (1990). T.Oguchi, Phys. Rev. 
B 51, 1385 (1995).
N. E. Bickers, D. J. Scalapino, R. T. Scalettar, Int. J. Mod. Phys. B 1, 687 (1987).
T. M. Rice, M. Sigrist, J. Phys. Cond. Matter 7, L643 (1995).
I. I. Mazin, D. J. Singh, Phys. Rev. Lett. 79, 733 (1997).
J. B. Goodenough, Magnetism and the Chemical Bond (John Wiley and Sons, N.Y.-London, 1963).
E. Dagotto, Rev. Mod. Phys. 66, 763 (1994).
A. P. Mackenzie, S. R. Julian, A. J. Diver et al., Phys. Rev. Lett. 76, 3786 (1996).
D. J. Singh, Phys. Rev. B 52, 1358 (1995).
T. M. Riseman, P. G. Kealey, E. M. Forgan, A. P. Mackenzie, L. M. Galvin, A. W. Tyger, S. L. Lee, C. Ages, D. McK. Paul, C. M. Aegerter, R. Cubitt, Z. Q. Hao, T. Akima, Y. Maeno, Nature (London) 396, 242 (1998).
S. V. Tyablikov, Methods of Quantum Theory of Magnetism (2nd edition, Moscow, Nauka, 1975).
R. O. Zaitsev, V. A. Ivanov, Fiz. Tverdogo Tela (Sov. Solid State Physics) 29, 2554 (1987).
N. M. Plakida, V. Yu. Yushanhai, I. V. Stasyuk, Physica C 160, 80 (1989).
Yu. A. Izyumov, M. I. Katsnelson, Yu. N. Skryabin, Magnetism of Itinerant Electrons (Moscow, Nauka, 1994).
# The Effect of Neutral Atoms on Capillary Discharge Z-Pinch \[ ## Abstract We study the effect of neutral atoms on the dynamics of a capillary discharge Z-pinch, in a regime for which a large soft-x-ray amplification has been demonstrated. We extended the commonly used one-fluid magneto-hydrodynamics (MHD) model by separating out the neutral atoms as a second fluid. Numerical calculations using this extended model yield new predictions for the dynamics of the pinch collapse, and better agreement with known measured data. \] Z-pinch collapse has been extensively studied since the late 50s, being a simple and effective way of producing hot and dense plasma. In this process, an electric current flowing through a plasma column, interacts with its self magnetic field, and the resulting force contracts the plasma column in the radial direction. Today Z-pinch plasma is widely used for various applications such as high power radiation sources and neutron sources . An exciting new application of Z-pinch plasma was recently demonstrated by Rocca et. al. . In this work, large amplification of soft-x-ray light was obtained in Ne-like Ar and S plasma, created by a fast ($`40`$ ns) Z-pinch discharge inside a capillary. Compared with the alternative approach of laser driven excitation , the capillary discharge has the advantage of allowing for compact (table-top), efficient and simpler soft-x-ray lasers. In this paper we study the role of neutral atoms in the dynamics of a capillary discharge Z-pinch, in the regime for which soft-x-ray amplification was demonstrated. The commonly used one-fluid magneto-hydrodynamics (MHD) model assumes that all the particles in the plasma are charged, and drift together. We, however, show that for the case discussed here, large portions of the plasma contain an appreciable amount of neutral atoms. Since these are not affected by the electro-magnetic forces, but only by the much weaker mechanical forces, they flow with much smaller velocities than the ions and the electrons. To account for this effect, we extend the one-fluid MHD model by introducing a separate fluid for the neutral atoms (in addition to the standard electrons-ions fluid). Results of calculations using this extended model give new predictions for the dynamics of the pinch collapse, with some features in better resemblance with the measured data. This confirms our previously reported estimates . We start with the standard one-fluid two-temperature MHD model, commonly used for numerical calculations of Z-pinch processes . It considers hydrodynamic flow including shock waves, heat conduction, heat exchange (between ions and electrons), magnetic field dynamics, magnetic forces, Ohmic heating, radiative cooling and ionization. We use a simple ionization model, and assume a quasi steady state, taking into account collisional ionization, and 2-Body and 3-Body recombination. Since the plasma is assumed to be optically thin, ionization and excitation by radiation are neglected. The latter assumption should hold at least to the end of the collapse. This model is incorporated into our numerical code, SIMBA, where the equations of motion of the system (see ) are solved in a Lagrangean mesh , assuming one-dimensional axial symmetry. Shown to be remarkably stable , and having a high length-to-diameter ratio (of 50-500), the capillary discharge Z-pinch experiment is naturally described in the framework of this 1-D MHD model. 
Previously reported works , have indicated that taking into account ablation of the plastic capillary wall is necessary for the correct description of the pinch dynamics. According to this the calculation should thus be extended to include a narrow region of the plastic capillary wall. However, it was also shown in that even with this effect taken into account, good agreement with the measured data still requires some major artificial adjustments of the plasma transport coefficients. We have repeated these calculations using the same one-fluid MHD model, and found them to agree with the reported results. In particular, we also find that the measured data is reproduced by one-fluid MHD calculations only when artificial adjustments are introduced, as demonstrated in Fig. (1). The figure displays the calculated radius of the collapsing Ar plasma as a function of time in a capillary discharge Z-pinch. The parameters of the calculations are those used for soft-x-ray amplification experiments : initial Ar density of $`\rho _0=1.710^6\text{g.}/\text{cm}^3`$ or $`n_02.510^{16}\text{atoms}/\text{cm}^3`$, initial temperature of $`T_00.5`$eV, and a maximum current of 39kA, with its peak at t=32ns . The figure also presents some measured data, of the radius of soft-x-ray source, as a function of time, taken from . Since the radii of the soft-x-ray source and that of the collapsing Ar plasma are related, it is clear that there are disagreements between the calculated and measured data: For example, the calculated pinch peak is about 10ns earlier than the measured one. It is shown in Fig. (1) that multiplying the classical electrical conductivity by a factor of 15, results in a good agreement with the measured instant of the pinch peak, however at the same time it also spoils the agreement with measured collapse velocity. We notice that both calculations do not properly reproduce the initial stages of the collapse, which is delayed by about 10-15ns. According to , reproducing the whole stages of the measured collapse requires more artificial adjustments in the plasma transport parameters, up to 20-40 times their classical values. This need for artificial adjustments of plasma parameters in one-dimensional one-fluid MHD calculations cannot be explained by two- or three- dimensional effects in the modeled experiment: The work of Bender et. al. has proven a perfect azimuthal ($`\varphi `$-direction) symmetry in this same capillary discharge Z-pinch, and the demonstrated amplification gain indicates a very good Z-direction symmetry. In order to better understand the dynamics of the pinch collapse, we have focused our study on the importance and the role of neutral atoms in this process. The one-fluid MHD model assumes that the plasma consists of two components: electrons and effective single-type ions, with their charge being the average charge of all the differently charged ions in the plasma, including the neutral atoms. In addition, these two components are assumed to flow together, as a single fluid. This assumptions are reasonable for regimes for which at least one of the two following conditions is fulfilled: (i) All the atoms in the plasma are ionized, or, (ii) The neutral atoms are strongly coupled to the charged particles, and hence follow them in the same single fluid. Fig. (2) presents the percentage of neutral atoms as a function of electron temperature in Argon plasma, based on our ionization model. 
According to this figure, a plasma of electron temperature lower than 2-3 eV contains an appreciable amount of neutrals. In Carbon plasma, which is a typical representative of the ablated capillary wall, the picture is similar. Our MHD calculations show that the Ar plasma starts to heat up above 2-3 eV only 5 ns after the beginning of the pinch, and its central region stays below this temperature for the next 25 ns . Major portions of the plastic wall plasma remain below 2-3eV even after the pinch collapses at the axis. The percentage of neutral atoms in the plasma is hence far from being negligible. We thus conclude that condition (i) does not hold. The plasma contains three different components: electrons, ions, and neutral atoms. We now turn to check whether or not condition (ii) is satisfied, by examining the couplings between these different ingredients of the plasma. The electrons and ions, being charged particles, are coupled through Coulomb forces. A measure of the strength of this coupling is given by the plasma frequency, $`\omega _P`$. For the case discussed here, $`1/\omega _P10^510^3`$ns, which is negligible compared to the typical pinch collapse times of $`\tau _{\text{pinch}}40`$ns. This means that the coupling between the electrons and ions is very strong, and that they practically drift together, as a single fluid. The neutral atoms, however, are coupled to the charged particles only by collisions, and may thus flow separately, as a second fluid. We therefore assume two fluids, one of charged-species (electrons and ions) and the other of neutral-species (atoms), with flow velocities $`u_i`$ and $`u_a`$ respectively. The collisional momentum transfer between these two fluids is evaluated assuming a hard spheres approximation: We regard the two fluids as two clouds of hard spheres, drifting through one another. In that case, the collision frequency per unit volume equals: $$\nu _{ai}^{coll}=\alpha r_a^2n_an_i\left|u_au_i\right|$$ (1) and the collisional momentum transfer rate, per unit volume, is thus $$F_{ai}^{coll}=\alpha r_a^2m_an_an_i\left|u_au_i\right|\left(u_au_i\right),$$ (2) where $`\alpha `$ is a coefficient of about $`2\pi `$. Here $`r`$ stands for the particle radius, $`m`$ for its mass, and $`n`$ stands for the number density. The indices $`a,i`$ denote atoms and ions respectively. Later on we will use the index $`e`$ for electrons. The force in Eq. (2), $`F_{ai}^{coll}`$, depends quadratically on the velocity difference between the charged-species and the neutral-species fluids. This coupling thus restrains the separation between the two fluids. Taking reasonable densities of $`n_a10^{16},n_i10^{15}`$ ($`10\%`$ ionization), and an appreciable velocity difference of $`\left|u_au_i\right|10^6`$cm/s, we get for Ar plasma a collisional coupling term of the order of $`10^6\text{dyn}/\text{cm}^3`$. This is 2-3 orders-of-magnitude less then the estimated magnetic ($`\stackrel{}{j}\times \stackrel{}{B}/c`$) and hydrodynamic ($`P`$) forces. We conclude that in the regime discussed here, both of the above conditions for the validity of the one-fluid MHD fail to be satisfied. The two fluids are indeed expected to flow separately. However, they exchange mass, momentum and energy due to exchange of particles (by ionization and recombination) and due to atoms-ions collisions. By $`S_a(r,t)`$ we denote the mass sink (per unit volume, per unit time) in the neutral-species fluid due to ionization of neutral atoms ($`S_a0`$). 
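For orientation, the quoted order of magnitude of the collisional coupling is easy to reproduce. In the short check below the Ar atomic radius is an assumed value of about 1.9 Å, since it is not specified in the text.

```python
import math

# Order-of-magnitude check of the collisional coupling of Eq. (2), CGS units.
alpha = 2.0 * math.pi        # hard-sphere coefficient quoted below Eq. (2)
r_a   = 1.9e-8               # Ar atomic radius [cm]  (assumed value, not given in the text)
m_a   = 40.0 * 1.66e-24      # Ar atomic mass [g]
n_a   = 1.0e16               # neutral density [1/cm^3]
n_i   = 1.0e15               # ion density [1/cm^3], i.e. about 10% ionization
du    = 1.0e6                # velocity difference |u_a - u_i| [cm/s]

F_coll = alpha * r_a**2 * m_a * n_a * n_i * du**2
print(f"F_ai^coll ~ {F_coll:.1e} dyn/cm^3")   # of order 1e6, as quoted in the text
```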
$`S_a`$ plays a role of a source in the charged-species fluid. Similarly, $`S_i(r,t)`$ denotes the mass sink in the charged-species fluid, due to recombination of ions<sup>+1</sup> ($`S_i0`$). The total mass transfer from the neutral-species fluid into the charged-species fluid due to ionization and recombination is thus $`S_aS_i`$. To account for the exchange of mass, momentum and energy between the two fluids the standard one-fluid MHD for the charged-species fluid (see for example) are amended, and new, separate equations for the neutral-species fluid are added. The revised mass equation for the charged-species fluid is then (we use cylindrical coordinates and assume $`\frac{}{\varphi }=0,\frac{}{z}=0`$): $$\frac{d}{dt}\left(\rho _i+\rho _e\right)+\frac{\left(\rho _i+\rho _e\right)}{r}\frac{}{r}\left(ru_i\right)=\left(S_aS_i\right),$$ (3) where $`\rho `$ stands for mass density, and $`\frac{d}{dt}\frac{}{t}+u`$ is the comoving derivative. The separate mass equation for the neutral-species fluid is then: $$\frac{d}{dt}\left(\rho _a\right)+\frac{\rho _a}{r}\frac{}{r}\left(ru_i\right)=\left(S_aS_i\right)$$ (4) The revised momentum equation for the charged species fluid is: $`\left(\rho _i+\rho _e\right){\displaystyle \frac{d}{dt}}u_i={\displaystyle \frac{}{r}}\left(P_e+P_i\right)+{\displaystyle \frac{\stackrel{}{j}\times \stackrel{}{B}}{c}}`$ (5) $`+F_{ai}^{coll}+S_a\left(u_au_i\right),`$ (6) where $`P`$ stands for pressure, $`\stackrel{}{j}`$ for current density and $`\stackrel{}{B}`$ for magnetic field. $`F_{ai}^{coll}`$ is the collisional momentum exchange between the neutral-species fluid and the charged-species fluid, given in Eq. (2). The momentum equation of the neutral-species fluid should then be: $$\rho _a\frac{d}{dt}u_a=\frac{}{r}\left(P_a\right)F_{ai}^{coll}+S_i\left(u_iu_a\right)$$ (7) Similarly, the one-fluid MHD ion-energy equation ) is also properly amended, and separate atom-energy equation for the neutral-species fluid is introduced. Collisions between the two fluids, as well as particles exchange due to ionization and recombination are considered in these equations in the same manner as in the mass and momentum equations. The MHD electron-energy equation is left unchanged. These equations were incorporated into our SIMBA code. For simplicity, and in order to emphasize the effect introduced by separating the neutral atoms from the charged-species fluid, we assume, in the following calculations, that the capillary wall is also made of Argon. The other pinch parameters are left unchanged, however we now use the classical transport coefficients , without any artificial adjustments. Fig. (3) shows the effect of the neutral-species fluid on the calculated outer boundary of the collapsing Ar plasma. It is clearly indicated that the effect of the neutral component in the capillary discharge Z-pinch is not negligible. When the neutral-species fluid is included the collapse seems to be delayed, however after it starts it is more rapid. This trend seems to better resemble the data presented in Fig. (1), where it was shown that compared to one-fluid MHD calculations the measured collapse is delayed, and after it starts the collapse rate is much higher. We have also examined the effect of neutral atoms on the electron density distribution during the pinch. In Fig. (4), the calculated spatial distribution of electron density at time=25ns is plotted, with and without the neutral-species fluid. 
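A zero-dimensional caricature of the momentum equations (5)-(7), keeping only the magnetic drive and the collisional coupling and discarding pressure gradients, particle-exchange sources and all spatial structure, already shows the neutral fluid lagging far behind the charged fluid. In the toy below the drive f_mag and the atomic radius are assumed numbers, chosen only to respect the orders of magnitude discussed above.

```python
import math

# Zero-dimensional toy of the two-fluid velocity separation, CGS units throughout.
alpha, r_a = 2.0 * math.pi, 1.9e-8      # hard-sphere coefficient; assumed Ar atomic radius [cm]
m_a = 40.0 * 1.66e-24                   # Ar atomic mass [g]
n_a, n_i = 1.0e16, 1.0e15               # number densities [1/cm^3]
rho_a, rho_i = m_a * n_a, m_a * n_i     # mass densities [g/cm^3]
f_mag = 1.0e9                           # assumed j x B force density [dyn/cm^3], 2-3 decades above F^coll

u_i = u_a = 0.0
dt, t_end = 1.0e-12, 5.0e-9             # time step and duration [s]
for _ in range(int(round(t_end / dt))):
    F = alpha * r_a**2 * m_a * n_a * n_i * abs(u_a - u_i) * (u_a - u_i)   # Eq. (2)
    u_i += dt * (f_mag + F) / rho_i     # charged fluid: driven, dragged by collisions, cf. Eq. (6)
    u_a += dt * (-F) / rho_a            # neutral fluid: accelerated only by collisions, cf. Eq. (7)
print(f"u_i = {u_i:.2e} cm/s,  u_a = {u_a:.2e} cm/s after {t_end * 1e9:.0f} ns")
```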
Both models predict a collapsing plasma sheath, and show some ablated material from the capillary wall. However, when the neutral-species fluid is taken into account, the collapsing plasma sheath is wider and less dense, compared to the predictions of the standard one-fluid MHD model. We like to offer a qualitative explanation for the results presented in Fig. (3),(4). In the one-fluid MHD model, the atoms and ions are assumed to flow together with the electrons. The magnetic forces, which are dominant in this case, thus accelerate the whole plasma body. In reality, however, only the ions flow together with the electrons, while the neutral atoms flow separately. Since the plasma is initially mostly neutral, the magnetic forces act only on a small fraction of the total mass, which is then rapidly accelerated inwards. Most of the Ar stays outside, almost at rest. While the process evolves, more atoms get ionized, and join the charged-species fluid. This effect is seen in Fig. (3) as a delay in the collapse. At any given spatial and temporal point, the magnetic forces act on a “freshly” ionized matter, almost at rest. The resulting acceleration is thus more gradual, leading to a wider and less dense plasma sheath, as seen from Fig. (4). In conclusion, we have shown that the effect of neutral atoms on the dynamics of the capillary discharge Z-pinch is not negligible. We have demonstrated that separating out the neutral atoms as a second fluid produces a different pinch collapse dynamics, with some features similar to the measured data. It is expected that the improved modeling of the pinch collapse dynamics will yield a better understanding of capillary discharge X-ray lasers, since the amplification gain, as well as the propagation and refraction of radiation in the lasing media are both dominated by the details of the plasma state. Acknowledgments: We gratefully acknowledge the help of A. Birenboim, J. Nemirovsky, and J. Falcovitz for their advice and useful suggestions. This work was partially supported by the Fund for Encouragement of Research in the Technion.
# References Proton Polarization Shifts in Electronic and Muonic Hydrogen R. Rosenfelder <sup>1</sup><sup>1</sup>1E-mail address: rosenfelder@psi.ch Paul Scherrer Institute, CH-5232 Villigen PSI, Switzerland ## Abstract The contribution of virtual excitations to the energy levels of electronic and muonic hydrogen is investigated combining a model-independent approach for the main part with quark model predictions for the remaining corrections. Precise values for the polarization shifts are obtained in the long-wavelength dipole approximation by numerically integrating over measured total photoabsorption cross sections. These unretarded results are considerably reduced by including retardation effects in an approximate way since the average momentum transfer (together with the mean excitation energy) turns out to be larger than usually assumed. Transverse and seagull contributions are estimated in a simple harmonic oscillator quark model and found to be non-negligible. Possible uncertainties and improvements of the final results are discussed. PACS numbers: 12.20.Ds, 12.39.Jh, 14.20.Dh, 36.10.Dr 1. There has been tremendous progress in the laser spectroscopy of hydrogen and deuterium atoms which now are even sensitive to small nuclear and proton structure effects. One of these - traditionally the least understood - is the virtual excitation of the nucleus which acts back on the bound lepton. While the effect in deuterium is comparatively large and has been evaluated theoretically with increasing sophistication and reliability , the proton polarization shifts have not received much attention up to now. Khriplovich and Sen’kov have estimated the shift in the electronic $`1S`$-state as $`71\pm 11\pm 7`$ Hz using values for the static proton polarizabilities and assuming a mean excitation energy of $`300`$ MeV. They attribute the quoted errors to the use of a relativistic approximation for the electron and to the experimental values of the polarizabilities. However, experience gained in many decades of nuclear polarization calculations has told us that the use of an average excitation energy (the “closure approximation”) can be a considerable source of uncertainty unless one calculates it precisely. In addition, one has to make sure that other ingredients to the polarization shift (higher multipoles, transverse excitations) are well under control before a definite answer can be given. It is the purpose of this note to re-evaluate the polarization shift without the questionable use of a mean excitation energy and other simplifying assumptions. Since an experiment is in progress at PSI to measure the Lamb shift in muonic hydrogen we will also evaluate the polarization shift for this case. Actually, with the experimental accuracies achievable in the near future, it turns out that the proton polarization shifts are of much greater importance for muonic than for electronic hydrogen. 2. For light electronic and muonic atoms the energy shift due to virtual excitations can be written as an integral over the forward virtual Compton amplitude. This quantity in turn may be expressed by its imaginary part, i.e. the structure functions $`W_{1/2}(\nu ,Q^2)`$ which are measurable in the inclusive reaction $`e+pe^{}+X`$. 
In the absence of detailed experimental information for all relevant values of momentum transfer $`Q`$ and energy transfer $`\nu `$ it is customary to apply the long wavelength (or unretarded dipole) approximation which should be valid for $`\overline{Q}r^2^{1/2}1`$ where $`r^2^{1/2}`$ is the root-mean-square radius of the proton and $`\overline{Q}`$ an average momentum transfer. In this limit it is possible to express the structure functions by the experimentally measured photoabsorption cross section $`\sigma _\gamma (\nu )`$. Bernabéu and Ericson have derived in this way the following expression for the energy shift $$\mathrm{\Delta }E_{nl}=\frac{\alpha }{2\pi ^2}\frac{|\psi _{nl}(0)|^2}{m}_0^{\mathrm{}}𝑑\nu \sigma _\gamma (\nu )f\left(\frac{\nu }{2m}\right)$$ (1) with $$f\left(\frac{\nu }{2m}\right)=\frac{8m^2}{\pi }_0^{\mathrm{}}𝑑Q^2\frac{1}{Q^2}_0^Q𝑑\xi \sqrt{Q^2\xi ^2}\frac{Q^2+2\xi ^4/Q^2}{(\nu ^2+\xi ^2)(Q^4+4m^2\xi ^2)}.$$ (2) Here $`\alpha `$ is the fine-structure constant, $`m`$ the lepton mass and $`\psi _{nl}(0)`$ the lepton wave function at the origin which is only non-zero for $`S`$-states. The correction factor accounting for the variation of the wave function over the proton radius can be safely neglected at this level of accuracy. We first note that it is possible to perform all integrations in eq. (2) <sup>1</sup><sup>1</sup>1This has also been noted before . and to obtain $$f(x)=22\mathrm{ln}(4x)+\frac{1+2x^2}{x^2}\left[\sqrt{x^2+x}\mathrm{ln}\frac{\sqrt{x^2+x}+x}{\sqrt{x^2+x}x}+g(x)\right]$$ (3) with $$g(x)=\sqrt{|x^2x|}\left[\mathrm{\Theta }(1x)\mathrm{\hspace{0.25em}2}\mathrm{arctan}\frac{\sqrt{xx^2}}{x}\mathrm{\Theta }(x1)\mathrm{ln}\frac{x+\sqrt{x^2x}}{x\sqrt{x^2x}}\right].$$ (4) The function $`f(x)`$ has quite different limits for small and large arguments: $`f(x)`$ $``$ $`\pi x^{3/2}\mathrm{for}x0`$ (5) $``$ $`{\displaystyle \frac{5}{4x^2}}\left[\mathrm{ln}(4x)+{\displaystyle \frac{19}{30}}\right]\mathrm{for}x\mathrm{}`$ (6) which makes a crucial difference between nuclear polarization shifts, say, in muonic deuterium and in electronic hydrogen. Krhriplovich and Sen’kov have only retained the leading logarithm in eq. (6) which is justified considering the much greater error which comes from pulling out the logarithmic term evaluated at the excitation energy of the $`\mathrm{\Delta }(1232)`$ and expressing the remaining integral in terms of the sum of polarizabilities $$\overline{\alpha }+\overline{\beta }=\frac{1}{2\pi ^2}_0^{\mathrm{}}𝑑\nu \frac{\sigma _\gamma (\nu )}{\nu ^2}.$$ (7) 3. In view of the well-known deficiencies of the closure approximation we have decided to evaluate the polarization shift by integrating numerically over the experimentally measured photon absorption cross section. We have taken the recent Mainz data from $`\nu =200800`$ MeV, the older Daresbury data from $`\nu =8004215`$ MeV and the parametrization $`\sigma _\gamma (\nu )=96.6\mu \mathrm{b}+70.2\mu \mathrm{b}\mathrm{GeV}^{1/2}/\sqrt{\nu }`$ above $`4215`$ MeV. Below $`200`$ MeV four angular distributions in $`\gamma p\pi ^+n`$ (from Fig. 1 in ref. ) have been converted to total cross sections and the threshold behaviour has been parametrized as $`\sigma _\gamma (\nu )=18.3\mu \mathrm{b}\mathrm{MeV}^{1/2}\sqrt{\nu \nu _{\mathrm{th}}}`$. The numerical integration was done by simple Simpson integration over linearly interpolated data points and the whole procedure was checked by evaluating eq. (7). 
We obtained $$\overline{\alpha }+\overline{\beta }=\mathrm{\hspace{0.25em}13.75}10^4\mathrm{fm}^3$$ (8) in good agreement with a recent analysis but smaller than the value $`14.210^4\mathrm{fm}^3`$ used in Ref. . This is due to the fact that the older cross sections which have been analysed in Ref. are systematically higher than the new Mainz data. The unretarded results for the electronic (e) and muonic ($`\mu `$) polarization shifts are then $`\mathrm{\Delta }E_{nS}^e`$ $`=`$ $`{\displaystyle \frac{106.5}{n^3}}\mathrm{Hz}`$ (9) $`\mathrm{\Delta }E_{nS}^\mu `$ $`=`$ $`{\displaystyle \frac{202}{n^3}}\mu \mathrm{eV}.`$ (10) In view of the small electron mass it is not surprising that the relativistic approximation (6) (including the constant term) agrees with the exact result to more than 4 digits. Equation (9) is larger than the estimate of ref. because the integral over virtual excitations gets contributions well above the $`\mathrm{\Delta }`$-resonance. This can be easily seen from any graph of the total photoabsorption cross section versus photon energy but made more quantitative by asking for the value of the mean excitation energy $`\overline{\nu }`$ which – when substituted into the logarithm of eq. (6) – gives the same result for the shift. We find $`\overline{\nu }410`$ MeV both for electronic and muonic hydrogen. Therefore predictions assuming that only the $`\mathrm{\Delta }`$ isobar contributes to virtual excitations of the proton are not very reliable. In the muonic case the approximation (6) still gives more than $`96`$ % of the exact numerical result whereas the non-relativistic approximation (5) overestimates it by more than $`40`$ %. This is, of course, consistent with the fact that the mean excitation energy is nearly four times larger than the muon mass. 4. The large value of the mean excitation energy also casts some doubt on the use of the unretarded dipole approximation. For example, in a simple constituent harmonic oscillator quark model one naively expects an associated mean momentum transfer of $`\overline{Q}\sqrt{M_{\mathrm{quark}}\overline{\nu }}350`$ MeV. This will have an appreciable effect when inserted in the elastic form factor which, e. g. in the dipole approximation is given by $`F_0(Q^2)1/(1+Q^2/Q_0^2)^2`$ with $`Q_0^2=0.71\mathrm{GeV}^2`$. Inelastic transition form factors to low-lying resonances have approximately the same $`Q`$-dependence apart from threshold factors which are characteristic for the specific angular momentum of the resonance (see below for a non-relativistic example). Therefore one obtains a rough estimate for the effect of retardation when the square of the elastic form factor $`F_0^2(Q^2)`$ is inserted in the $`Q^2`$-integral of eq. (2). Instead of the dipole form factor we have employed a more realistic parametrization of the charge form factor of the proton given in eq. (9) and table 3 of ref. . A careful numerical evaluation of the remaining double integral then gives for the retarded polarization shifts $`\mathrm{\Delta }E_{nS}^e`$ $`=`$ $`{\displaystyle \frac{88.9}{n^3}}\mathrm{Hz}`$ (11) $`\mathrm{\Delta }E_{nS}^\mu `$ $`=`$ $`{\displaystyle \frac{112}{n^3}}\mu \mathrm{eV}.`$ (12) Use of the dipole form factor changes the numerical values to $`89.3`$ and $`114`$ for electron and muon, respectively. Equation (11) is nearly $`20\%`$ smaller in magnitude than the unretarded result and the polarization shift in muonic hydrogen gets almost halved due to retardation effects. 
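The sum rule (7) provides a convenient check of such an integration chain. The sketch below implements only the machinery: it uses the threshold and high-energy parametrizations quoted above (with an assumed pion-photoproduction threshold of about 145 MeV, which is not quoted in the text, and with the threshold form crudely extended up to 200 MeV), and it leaves a placeholder for the tabulated cross sections between 200 and 4215 MeV, which would have to be supplied from the experimental data.

```python
import numpy as np
from scipy.integrate import simpson

HBARC = 197.327          # MeV fm
MUB_TO_FM2 = 1.0e-4      # 1 microbarn = 1e-4 fm^2
NU_TH = 145.0            # pion photoproduction threshold [MeV]; approximate, not quoted in the text

def sigma_threshold(nu):
    """Threshold parametrization quoted in the text (nu in MeV, sigma in microbarn)."""
    return 18.3 * np.sqrt(np.clip(nu - NU_TH, 0.0, None))

def sigma_high(nu):
    """High-energy parametrization quoted in the text: 96.6 ub + 70.2 ub GeV^1/2 / sqrt(nu)."""
    return 96.6 + 70.2 / np.sqrt(nu / 1000.0)

def baldin_piece(nu, sigma):
    """(1/2 pi^2) * integral of sigma/nu^2, as in Eq. (7), returned in fm^3."""
    return HBARC * MUB_TO_FM2 / (2.0 * np.pi ** 2) * simpson(sigma / nu ** 2, x=nu)

nu_lo = np.linspace(NU_TH, 200.0, 300)        # crude: threshold form used up to 200 MeV
nu_hi = np.geomspace(4215.0, 2.0e5, 400)      # tail of the high-energy parametrization
total = baldin_piece(nu_lo, sigma_threshold(nu_lo)) + baldin_piece(nu_hi, sigma_high(nu_hi))
# nu_mid, sig_mid = np.loadtxt("sigma_gamma_p_200_4215.dat", unpack=True)  # hypothetical data file
# total += baldin_piece(nu_mid, sig_mid)
print(f"alpha+beta without the 200-4215 MeV data: {total * 1e4:.2f} x 1e-4 fm^3")
```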
This reduction can be translated into a mean momentum transfer which – when inserted into the square of the elastic form factor – cuts the unretarded values by just this amount. In this way one obtains $`\overline{Q}\simeq 180`$ and $`340`$ MeV/c for the electronic and muonic case respectively. 5. At present this seems to be the best model-independent estimate for the polarization shifts in hydrogen but it still neglects higher multipoles and relies on a standard but not very well tested procedure to correct the unretarded dipole approximation. Contrary to the deuteron case where excellent potential models are available, a reliable (relativistic) model of the nucleon does not exist to calculate all these contributions. Here we take the simple non-relativistic harmonic oscillator quark model to estimate them. In this model it is easy to evaluate analytically the longitudinal (inelastic) structure function $$S_L(\nu ,q)=\underset{N=1}{\overset{\mathrm{\infty }}{\sum }}\delta \left(\nu -N\omega -\frac{q^2}{6M_{\mathrm{quark}}}\right)\frac{y^N}{N!}e^{-y}$$ (13) as well as the transverse one <sup>2</sup><sup>2</sup>2The structure functions $`S_{L/T}`$ are linear combinations of the usual $`W_{1/2}`$ and are more convenient in the framework of ref. which uses the energy transfer $`\nu `$ and the three-momentum transfer $`q\equiv |𝐪|`$ as variables whereas ref. employs the invariants $`\nu `$ and $`Q^2`$. $$S_T(\nu ,q)=\frac{q^2}{2M_{\mathrm{quark}}^2}S_L(\nu ,q)+\frac{2\omega }{3M_{\mathrm{quark}}}\underset{N=1}{\overset{\mathrm{\infty }}{\sum }}\delta \left(\nu -N\omega -\frac{q^2}{6M_{\mathrm{quark}}}\right)\frac{y^{N-1}}{(N-1)!}e^{-y}.$$ (14) Here $`y=(qb)^2/3`$ where $`b`$ is the oscillator length and $`\omega =1/(M_{\mathrm{quark}}b^2)`$ the harmonic oscillator frequency. In this model the excitation spectrum consists of sharp lines at $`N\omega `$ shifted by the recoil energy of the proton and the elastic form factor is gaussian ($`F_0^2(q)=\mathrm{exp}(-y)`$) so that the proton rms radius is directly given by the oscillator length $`b`$. It should be noticed that the first and the second term in eq. (14) come from excitations by the spin and the convection current, respectively. Note also that the above structure functions fulfill Siegert’s theorem $`\mathrm{lim}_{q\to 0}S_L(\nu ,q)/q^2=\mathrm{lim}_{q\to 0}S_T(\nu ,q)/(2\nu ^2)=\sigma _\gamma (\nu )/(4\pi ^2\alpha \nu )`$ and that all excitations are indeed multiplied by the square of the elastic form factor. In the low-$`q`$ limit the first excited state exhausts the dipole absorption cross section which is not a very realistic feature of the model. Other shortcomings are the well-known inability of the harmonic oscillator quark model to reproduce both the empirical rms-radius ($`\langle r^2\rangle ^{1/2}=0.86`$ fm ) and the polarizabilities (7) if the constituent quark mass is fixed to $`M_{\mathrm{quark}}=M_{\mathrm{proton}}/3`$. We also make this choice because the masses of the constituents should add up to the total mass in a consistent non-relativistic treatment. In addition, this value gives the correct recoil energy and leads to a reasonable result for the magnetic moment of the proton. This leaves only the harmonic oscillator length as free parameter which we have fixed in such a way that the results from the unretarded dipole approximation are obtained. In this way the quark model results can be directly compared with the shifts evaluated in the model-independent approach. It is quite obvious that a non-relativistic model becomes inadequate for excitation energies and momenta of the order of the constituent mass. 
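A minimal numerical transcription of the oscillator-model ingredients, using the longitudinal strengths of eq. (13) as reconstructed above: for a given three-momentum transfer the excitation spectrum is a set of discrete lines, and the elastic plus inelastic strengths sum to one. The parameter values are the ones quoted in the caption of Table 1 below; the choice $`q=350`$ MeV/c is only meant to be representative of the mean momentum transfer discussed earlier.

```python
import numpy as np
from math import factorial

M_QUARK = 312.8      # MeV, constituent quark mass used in the text
B = 0.657            # fm, oscillator length fitted for electronic hydrogen (Table 1 caption)
HBARC = 197.327      # MeV fm
OMEGA = HBARC**2 / (M_QUARK * B**2)   # MeV, harmonic oscillator frequency

def oscillator_lines(q, n_max=20):
    """Excitation energies nu_N (MeV) and longitudinal strengths y^N/N! e^-y for momentum transfer q (MeV/c)."""
    y = (q * B / HBARC) ** 2 / 3.0
    nus = np.array([N * OMEGA + q**2 / (6.0 * M_QUARK) for N in range(1, n_max + 1)])
    strengths = np.array([y**N / factorial(N) * np.exp(-y) for N in range(1, n_max + 1)])
    return nus, strengths

q = 350.0  # MeV/c
nus, s = oscillator_lines(q)
print(f"omega = {OMEGA:.0f} MeV, recoil = {q**2 / (6 * M_QUARK):.0f} MeV")
print(f"inelastic strength = {s.sum():.4f}, elastic F0^2 = {np.exp(-(q * B / HBARC)**2 / 3):.4f}")
# completeness check: elastic plus inelastic strengths add up to one
```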
However, here our aim only is to obtain a rough estimate for the remaining corrections beyond the retarded dipole approximation and for this purpose the harmonic oscillator quark model may not be totally useless. To obtain a quantitative estimate we have inserted the analytic expressions (13, 14) into the formulae in the Appendix of ref. (which keep the relativistic kinematics for the lepton), summed over oscillator shells up to $`N_{\mathrm{max}}=20`$ and integrated numerically over the momentum transfer up to $`q_{\mathrm{max}}=3000`$ MeV. Special care has to be exercised because of the apparent singularity in the transverse weight function at $`q=0`$ which is canceled by the seagull contribution. The latter one is required for a gauge-invariant treatment of non-relativistic systems. Other numerical difficulties arise from the very different scales of electron mass and mean excitation energy which requires very high numerical accuracy (up to $`60\times 72`$ gaussian points compared to $`6\times 72`$ in the muonic case) and from the slow convergence ($`1/N_{\mathrm{max}}^2`$) of the sum over excitations for the spin current which peaks at high momentum, i.e. high $`N`$. The latter problem has been overcome by an analytic resummation and the overall numerical stability has been checked by varying $`q_{\mathrm{max}}`$, $`N_{\mathrm{max}}`$ and the number of gaussian integration points. The results of the calculation are collected in Table 1. As expected the $`b`$-values are too small to account for the proton radius while the $`\omega `$-values ($`\simeq 280`$ MeV) seem reasonable for low-lying nucleonic excitations. A simple way to cure these deficiencies is to attribute the gaussian form factor to the quark core and to introduce an extra form factor $`1/(1+\beta ^2q^2)`$ for the meson cloud which surrounds the core and brings the rms-radius of the model in agreement with the experimental value . Although this is rather ad hoc and theoretically not very appealing we have included this variant also in Table 1. Note that the unretarded dipole approximation (and therefore the value obtained in ref. ) also includes some transverse excitations : eq. (6) would read $`(\mathrm{ln}4x+1)/x^2`$ if only longitudinal excitations are kept. While the longitudinal excitations (including all multipolarities) are seen to dominate, virtual transverse excitations induced by the currents cannot be neglected since the mean momentum transfer is of the order of the constituent quark mass. This is particularly important for the spin current in muonic hydrogen because its contribution grows with momentum transfer.

| | electronic hydrogen \[Hz\] (A) | electronic hydrogen \[Hz\] (B) | muonic hydrogen \[$`\mu `$eV\] (A) | muonic hydrogen \[$`\mu `$eV\] (B) |
| --- | --- | --- | --- | --- |
| unretarded dipole (input) | -106.4 | -106.4 | -201.8 | -201.8 |
| retarded dipole | -94.3 | -92.1 | -131.6 | -121.1 |
| full longitudinal | -78.2 | -75.9 | -120.2 | -108.8 |
| spin current | -7.2 | -3.8 | -39.1 | -20.9 |
| convection current + seagull | -19.2 | -18.9 | -16.9 | -15.4 |
| total | -104.6 | -98.6 | -176.2 | -145.1 |
| correction to retarded dipole | -10.3 | -6.5 | -44.6 | -24.0 |

Table 1 : Polarization shifts to the $`1S`$ level in electronic and muonic hydrogen evaluated in the harmonic oscillator quark model. The parameters ($`M_{\mathrm{quark}}=312.8`$ MeV, $`b=0.657`$ fm for electronic hydrogen, $`b=0.674`$ fm for muonic hydrogen) have been fitted to give the unretarded dipole approximation and the corresponding results are given under the heading (A). 
In case (B) an additional meson-cloud form factor has been introduced to reproduce the experimental proton radius. The ad-hoc introduction of the meson-cloud form factor reduces all contributions and brings the retarded dipole approximation more in accord with the calculation employing realistic form factors. 6. In conclusion, we have evaluated the proton polarization shifts in electronic and muonic hydrogen in a fairly model-independent way by integrating over the experimental photoabsorption cross section and accounting for retardation by use of the empirical elastic form factor. The remaining contributions (mostly from transverse excitations) have been estimated in a simple harmonic oscillator quark model and are therefore rather model-dependent and uncertain. Since it is physics in the resonance region which dominates these contributions, the theoretical situation will probably remain so unless better experimental information from inclusive $`ep`$-scattering in this region is available. For our final values we add the corrections (B) listed in Table 1 to eqs. (11, 12) and assign an error to them which covers the values obtained in case (A). This seems reasonable and prudent in view of the mentioned uncertainties and the inadequacy of the harmonic oscillator quark model. In addition, we take the difference between the retarded dipole result obtained with realistic form factors and the one with the gaussian form factor for the core and monopole form factor for the meson cloud as error estimate for the model-independent contribution. Adding the errors linearly we obtain in this way our final result for the polarization shifts $`\mathrm{\Delta }E_{nS}^e`$ $`=`$ $`-{\displaystyle \frac{95\pm 7}{n^3}}\mathrm{Hz}`$ (15) $`\mathrm{\Delta }E_{nS}^\mu `$ $`=`$ $`-{\displaystyle \frac{136\pm 30}{n^3}}\mu \mathrm{eV}.`$ (16) The first value is one order of magnitude below the present experimental accuracy ($`840`$ Hz) in the $`2S-1S`$ transition whereas the planned Lamb shift experiment in muonic hydrogen aims for a precision which is comparable to the uncertainty in $`\mathrm{\Delta }E_{2S}^\mu `$. Incidentally, the proton polarization contribution to this Lamb shift has nearly the same magnitude as the hadronic vacuum-polarization correction which, however, is more precisely known. A more accurate evaluation of the former contribution is therefore needed for a better determination of the proton radius from the muonic Lamb shift experiment. Note added: After submission of the manuscript additional calculations of the muonic polarization shift using different methods have been reported. Faustov and Martynenko obtain nearly the same value as reported in this paper whereas Pachucki’s number is slightly lower. Acknowledgements: I would like to thank David Taqqu for inspiring questions and many discussions. Thanks to Valeri Markushin for a critical reading of the manuscript. Useful correspondence with I. B. Khriplovich and R. A. Sen’kov about their calculation is acknowledged. Finally I am indebted to Simon Eidelman for additional information and to Krzysztof Pachucki for sending me a draft of his manuscript before publication.
## 1 Introduction For quite some time, measurements of solar neutrinos have indicated a suppression compared to the expectations of the standard solar model (SSM) . This suppression may be explained by assuming that $`\nu _e`$ from the Sun undergo vacuum oscillations as they travel to the Earth . Recent data from the Super–Kamiokande (SuperK) experiment also exhibit a seasonal variation above that expected from the $`1/(\mathrm{distance})^2`$ dependence of the neutrino flux, which if verified would be a clear signal of vacuum oscillations. In this Letter we determine the best fit vacuum oscillation parameters to the combined solar neutrino data, including the 708–day SuperK observations. For oscillations to an active neutrino there are subtle changes in the allowed regions compared to fits with earlier SuperK data . Oscillations to a sterile neutrino are disfavored. We examine the possibility that the solar $`hep`$ neutrino flux is enhanced compared to the SSM, and find that the best fits are only marginally changed. We also find a solution with very low $`\delta m^2\simeq 6\times 10^{-12}`$ eV<sup>2</sup> for oscillations to a sterile neutrino or with an $`hep`$ enhancement. ## 2 Fitting procedure The fitting procedure is described in detail in Ref. . The solar data used in the current fit are the <sup>37</sup>Cl (1 data point) and <sup>71</sup>Ga (2 data points) capture rates , the latest SuperK electron recoil energy spectrum with $`E_e`$ in the range 5.5 to 20 MeV (18 data points), and the latest seasonal variation data from SuperK for $`E_e`$ in the range 11.5 to 20 MeV (8 data points). For $`\nu _e`$ oscillations to an active neutrino ($`\nu _\mu `$ or $`\nu _\tau `$) we take into account the neutral current interactions in the SuperK experiment. We fold the SuperK electron energy resolution in the oscillation predictions for the $`E_e`$ distribution. For all rates that are annual averages we integrate over the variation in the Earth-Sun distance. We do not include the SuperK day/night ratio, since it is unity for vacuum oscillations. To allow for uncertainty in the SSM prediction of the <sup>8</sup>B neutrino flux, we include as a free parameter $`\beta `$, the <sup>8</sup>B neutrino flux normalization relative to SSM prediction. In our fits with non–standard $`hep`$ neutrinos we also include an arbitrary $`hep`$ normalization constant, $`\gamma `$. For the SSM predictions we adopt the results of Ref. . ## 3 Solutions with oscillations to an active neutrino Figure 1a shows the 95% C.L. allowed regions for the combined radiochemical (<sup>37</sup>Cl and <sup>71</sup>Ga) capture rates and the SuperK electron recoil spectrum, including its normalization. For the 95% C.L. region in a three–parameter fit ($`\delta m^2,\mathrm{sin}^22\theta ,\beta `$) we include all solutions with $`\chi ^2<\chi _{min}^2+8`$. On the boundary curves of the fit to the radiochemical data, the faster oscillations are due to the lower energy neutrinos (mainly $`pp`$ and <sup>7</sup>Be), and the slower oscillations are due to the <sup>8</sup>B neutrinos. The five regions allowed by the SuperK spectrum data correspond roughly to having a mean Earth–Sun distance equal to (in increasing order of $`\delta m^2`$) $`\frac{1}{2}`$, $`\frac{3}{2}`$, $`\frac{5}{2}`$, $`\frac{7}{2}`$, or $`\frac{9}{2}`$ oscillation wavelengths for a typical <sup>8</sup>B neutrino energy. Following the notation of Ref. , we label these regions A, B, C, D, and E, in order of ascending $`\delta m^2`$. 
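The following schematic shows the structure of such a fit; it is not the analysis of this Letter. The survival probability is the standard two-flavor vacuum oscillation formula, but the folding over neutrino spectra, cross sections, detector resolution and the Earth-Sun distance is replaced by a single effective energy per data set, and the three "data" values are placeholders, so only the bookkeeping (a grid scan over $`\delta m^2`$ and $`\mathrm{sin}^22\theta `$ with a free <sup>8</sup>B normalization $`\beta `$) is meaningful.

```python
import numpy as np

def survival_prob(E_MeV, L_km, dm2_eV2, sin2_2theta):
    """Two-flavor vacuum oscillation survival probability."""
    return 1.0 - sin2_2theta * np.sin(1.267 * dm2_eV2 * L_km / (E_MeV * 1e-3)) ** 2

# Hypothetical stand-ins for measured rates (relative to the SSM) and their errors.
data = np.array([0.33, 0.55, 0.47])
err  = np.array([0.03, 0.06, 0.05])
E_eff = np.array([8.0, 0.5, 10.0])      # MeV, crude effective energies per data set
L_AU = 1.496e8                          # km, mean Earth-Sun distance

def chi2(dm2, s22, beta):
    pred = beta * survival_prob(E_eff, L_AU, dm2, s22)
    return np.sum(((data - pred) / err) ** 2)

# coarse grid scan over the oscillation parameters and the 8B normalization beta
dm2_grid = np.logspace(-11, -9, 60)
s22_grid = np.linspace(0.3, 1.0, 36)
beta_grid = np.linspace(0.6, 1.2, 25)
best = min(((chi2(d, s, b), d, s, b) for d in dm2_grid for s in s22_grid for b in beta_grid))
print("chi2_min = %.1f at dm2 = %.2e eV^2, sin^2 2theta = %.2f, beta = %.2f" % best)
```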
In Figure 1b we show the allowed regions for the combined radiochemical and SuperK spectrum data (the solid curve). Also shown in Fig. 1b is the region excluded by the seasonal SuperK data at 68% C.L. (we show the 68% C.L. region since almost none of the parameter space is excluded by the seasonal variation at 95% C.L.). Finally, in Fig. 1c we show the 95% C.L. allowed regions obtained with all of the data (radiochemical, SuperK spectrum, and SuperK seasonal). Only regions A, C, and D are allowed at 95% C.L. by the combined data set. The best fits in each of the subregions are shown in Table 1. Region B, which was allowed in previous fits , is now excluded. The overall best fit parameters are in region C $$\delta m^2=4.42\times 10^{-10}\mathrm{eV}^2,\mathrm{sin}^22\theta =0.93,\beta =0.78,$$ (1) with $`\chi ^2/DOF=33.8/26`$, which corresponds to a goodness–of–fit of 14% (the goodness–of–fit is the probability that a random repeat of the given experiment would observe a greater $`\chi ^2`$, assuming the model is correct). The next best fit is in region $`D`$ with $$\delta m^2=6.44\times 10^{-10}\mathrm{eV}^2,\mathrm{sin}^22\theta =1.00,\beta =0.80,$$ (2) with $`\chi ^2/DOF=36.7/26`$, which corresponds to a goodness–of–fit of 8%. Finally, region A is also allowed, with best fit $$\delta m^2=6.5\times 10^{-11}\mathrm{eV}^2,\mathrm{sin}^22\theta =0.70,\beta =0.94,$$ (3) with $`\chi ^2/DOF=38.4/26`$, which corresponds to a goodness–of–fit of 6%. Regions C and D are consistent with bi–maximal or nearly bi–maximal three–neutrino mixing models that can describe both solar and atmospheric neutrino data. Regions B and E are excluded at 95% C.L. We see from Table 1 that generally the higher $`\delta m^2`$ solutions fit the $`E_e`$ spectrum better. This is evident in Fig. 2, which shows the $`E_e`$ spectrum predictions for the solutions in Table 1, and the latest SuperK data . The higher $`\delta m^2`$ solutions do worse for the radiochemical experiments, primarily because the seasonal variation in the oscillation argument is larger in these cases for the <sup>7</sup>Be neutrinos, causing more smearing of the oscillation probability. The annual–averaged suppression of <sup>7</sup>Be neutrinos is about 93% for solution A, 66% for solutions B and C, and 57% for solutions D and E. The best fit in each region generally lies near a local minimum for the <sup>7</sup>Be neutrino contribution, as might be expected from model independent analyses without oscillations that indicate suppression of <sup>7</sup>Be contributions . Results for the seasonal variation of the solutions in Table 1 are shown in Fig. 3, along with the latest SuperK data . Also shown is the best fit for no oscillations with arbitrary <sup>8</sup>B neutrino normalization; the latter has a small seasonal dependence due to the variation of the Earth-Sun distance. Current seasonal data do not provide a strong constraint, but clearly solutions A, C, and D fit the data better than the no–oscillation curve. Also included in Table 1 is a solution with very low $`\delta m^2`$, which we call solution Z, that was pointed out in Ref. . It is excluded at 95% C.L., but marginally allowed at 99% C.L. (as is solution B). It corresponds roughly to a mean Earth–Sun distance equal to $`\frac{1}{2}`$ of the oscillation wavelength (maximal suppression) for <sup>7</sup>Be neutrinos. 
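For orientation, the sketch below evaluates the seasonal modulation implied by the region C best fit of eq. (1) for a single representative <sup>8</sup>B neutrino energy of 10 MeV, using the first-order eccentric-orbit approximation for the Earth-Sun distance. A real prediction would of course fold this over the <sup>8</sup>B spectrum and the detector response; the perihelion date used here is approximate.

```python
import numpy as np

DM2, S22 = 4.42e-10, 0.93      # region C best fit quoted in eq. (1)
A_KM, ECC = 1.496e8, 0.0167    # Earth orbit semi-major axis (km) and eccentricity
E_MEV = 10.0                   # representative 8B neutrino energy

def earth_sun_distance(day):
    """Approximate Earth-Sun distance (km); perihelion taken near day 4 of the year."""
    return A_KM * (1.0 - ECC * np.cos(2.0 * np.pi * (day - 4.0) / 365.25))

def p_ee(day):
    L = earth_sun_distance(day)
    return 1.0 - S22 * np.sin(1.267 * DM2 * L / (E_MEV * 1e-3)) ** 2

for d in range(0, 365, 30):
    print(f"day {d:3d}:  P(nu_e -> nu_e) = {p_ee(d):.3f}")
```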
Although it does very well in suppressing the <sup>7</sup>Be neutrinos (by about 94% after accounting for seasonal averaging), it does not show a significant seasonal effect beyond that provided by the variation of the Earth-Sun distance (see Fig. 3b). ## 4 Solutions with oscillations to a sterile neutrino A similar analysis may be made with solar $`\nu _e`$ oscillating into a sterile neutrino species. Since the sterile neutrino does not interact in any of the detectors, it is harder to reconcile the differing rates of the <sup>37</sup>Cl and SuperK experiments . The best fit parameters in each of the regions are shown in Table 2. For sterile neutrinos the overall best fit parameter is again in region C with $`\chi ^2/DOF=42.4/26`$, which is excluded at 97.7% C.L. Therefore oscillations to sterile neutrinos are highly disfavored. Interestingly, the fit for solution Z is comparable to the other best–fit solutions in the sterile case because its strong suppression of <sup>7</sup>Be neutrinos helps account for the difference between the <sup>37</sup>Cl and SuperK rates (note the $`\chi ^2`$ values for the <sup>37</sup>Cl data in Table 2). ## 5 Solutions with non-standard contributions from $`hep`$ neutrinos Recently it has been speculated that the rise in the SuperK $`E_e`$ spectrum at higher energies could be due to a larger than expected $`hep`$ neutrino flux contribution . While the maximum energy of the <sup>8</sup>B neutrinos is about 15 MeV, the $`hep`$ neutrinos have maximum energy of 18.8 MeV. In the SSM the total flux of <sup>8</sup>B neutrinos is about 2000 times that of the $`hep`$ neutrinos , and the $`hep`$ contribution to the SuperK experiment is negligible. However, there is a large uncertainty in the low energy cross section for the reaction $`{}_{}{}^{3}\mathrm{He}+p\to {}_{}{}^{4}\mathrm{He}+e^++\nu _e`$ in which the $`hep`$ neutrinos are produced, and an $`hep`$ flux much larger than the SSM value may not be unreasonable . A large enhancement of the $`hep`$ contribution could in principle account for the rise in the $`E_e`$ spectrum at higher energies seen by SuperK. The $`hep`$ flux normalization $`\gamma `$ can be determined once there are sufficient events in the region $`E_e>15`$ MeV. SuperK measurements of the $`E_e`$ spectrum in the range 17–25 MeV already place the upper limit $`\gamma <8`$ at 90% C.L. , assuming no oscillations. It should first be noted that although an enhanced $`hep`$ contribution without neutrino oscillations can provide a good fit to the SuperK data for $`5.5<E_e<14`$ MeV, it cannot also account for the <sup>37</sup>Cl and <sup>71</sup>Ga rates even with arbitrary <sup>8</sup>B, $`hep`$, and <sup>7</sup>Be neutrino flux normalizations. The overall best fit in this case occurs with no <sup>7</sup>Be contribution, and has $`\chi ^2/DOF=53.7/26`$, which is excluded at 99.9% C.L. The contributions of the $`pp`$ neutrinos, plus the reduced amount of <sup>8</sup>B neutrinos needed to explain SuperK, give a rate for the radiochemical experiments that is too high, even when the <sup>7</sup>Be contribution is ignored. One can also ask what happens if an enhanced $`hep`$ contribution is combined with neutrino oscillations , although the motivation for enhancing the $`hep`$ neutrino flux is not strong here since vacuum oscillations already can explain the rise in the SuperK $`E_e`$ spectrum. Table 3 shows the best fits in each region when an arbitrary $`\gamma `$ is allowed. 
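The different treatment of active and sterile oscillations in SuperK can be made concrete with one line of arithmetic: for active oscillations the elastic-scattering rate is roughly $`P_{ee}+r(1-P_{ee})`$, where $`r\approx 0.16`$ is the approximate ratio of the $`\nu _{\mu ,\tau }e`$ to $`\nu _ee`$ cross sections at <sup>8</sup>B energies, while for sterile oscillations the second term is absent. The value of $`r`$ and the sample survival probabilities below are approximate and only meant to illustrate why the sterile case has more trouble with the <sup>37</sup>Cl/SuperK comparison.

```python
R_NC = 0.16   # approximate sigma(nu_mu,tau e) / sigma(nu_e e) at 8B energies (assumption)

def superk_suppression(p_ee, sterile=False):
    """Elastic-scattering rate relative to the SSM for a given survival probability."""
    return p_ee if sterile else p_ee + R_NC * (1.0 - p_ee)

for p in (0.3, 0.5, 0.7):
    print(f"P_ee = {p:.1f}:  active {superk_suppression(p):.2f}   sterile {superk_suppression(p, True):.2f}")
```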
The overall best fit parameters in this case are again in region C with $`\chi ^2/DOF=32.8/25`$, which corresponds to a goodness–of–fit of 14%. The fits for most of the regions are not significantly improved from the standard $`hep`$ flux case, and the fitted oscillation parameters are little changed. The only exceptions are solutions E and Z, which originally could not explain the SuperK spectrum as well, but with the addition of extra $`hep`$ neutrinos can now also provide a reasonable fit to all of the data. Regions A, C, D, E, and Z are all allowed at 95% C.L. when an $`hep`$ enhancement is included. However, the preferred values of $`\gamma `$ exceed the current bound from SuperK, so that the role of an $`hep`$ flux enhancement appears to be minimal. ## 6 Summary and discussion The latest SuperK solar neutrino data suggest there is a seasonal variation in the solar neutrino flux. The hypothesis that solar $`\nu _e`$ undergo vacuum oscillations to an active neutrino species provides a consistent explanation of all the solar data, with a best fit given by oscillation parameters $`\delta m^2=4.42\times 10^{10}`$ eV<sup>2</sup> and $`\mathrm{sin}^22\theta =0.93`$. Oscillations to sterile neutrinos are ruled out at 97.7% C.L., and fits with an enhanced $`hep`$ neutrino flux do not significantly alter the fit results. The existence of vacuum neutrino oscillations can be confirmed with more data from SuperK on the seasonal variation of the <sup>8</sup>B neutrino flux. The spectrum and seasonal variations of <sup>8</sup>B neutrinos can also be measured in the Sudbury Neutrino Observatory (SNO) and ICARUS experiments. The line spectrum of the <sup>7</sup>Be neutrinos gives larger seasonal variations than <sup>8</sup> and these may be observable with increased statistics in <sup>71</sup>Ga experiments, or in the BOREXINO experiment, for which <sup>7</sup>Be neutrinos provide the dominant signal. Accurate measurements of the seasonal variation in these experiments should be able to distinguish between the different vacuum oscillation scenarios , providing a unique solution to the solar neutrino problem. ## Acknowledgements We thank S. Pakvasa for a stimulating discussion and B. Balantekin for useful conversations. This work was supported in part by the U.S. Department of Energy, Division of High Energy Physics, under Grants No. DE-FG02-94ER40817 and No. DE-FG02-95ER40896, and in part by the University of Wisconsin Research Committee with funds granted by the Wisconsin Alumni Research Foundation.
# Correlations in the Bond–Future Market ## 1 Introduction Among social and economical disciplines, the analysis of financial markets is particularly suitable for a rigorous mathematical formulation. More important, technological advances in computer science applied to financial trading make great amounts of data available. It is therefore possible, with great reliability, to match real–world information with theories, conjectures, and hypotheses, thus falsifying them in the spirit of the scientific method. Indeed, financial time series are the outcome of a many–agent interaction: The realm of statistical physics. It is not a surprise that nowadays an increasing number of physicists is working on problems of statistical finance , . One of the problems of practical interest is to investigate the existence of correlations between different asset time series . In principle, this information could be used in order to make profits; in practice, this possibility is almost always cancelled by transaction costs. Here, we present a simple method to determine whether two financial time series are correlated. In particular, we have analyzed the time series of bund and btp futures exchanged at the London International Financial Futures and options Exchange (liffe), during the period October 1991—January 1994, when the bund–future market opened earlier than the btp–future one. The overnight returns of both assets are mapped onto a one–dimensional symbolic–dynamics random walk: The “bond walk” . The paper is structured as follows: In section 2 we introduce the analysis tools and present the results. Section 3 is devoted to the exploration of possible investment strategies using the information contained in correlations. Finally, in section 4 we draw our conclusions. ## 2 Analysis and Results In figure 1.a, the time evolution of the bund future as well as the btp future closing prices is plotted as a function of the trading days, for the period October 1991—January 1994. At that time the bund–future market opened earlier than the btp–future one. As a side remark, we notice that the volatility of the btp–future price is higher than that of the bund–future price, which could be due to the lower liquidity of the btp–contract market. In figure 2, the logarithmic overnight returns $`r_b(n)=\mathrm{log}\left({\displaystyle \frac{P_b^\mathrm{o}(n)}{P_b^\mathrm{c}(n-1)}}\right),b=\text{bund}\text{,}\text{btp}`$ are shown; $`P_b^o`$, and $`P_b^c`$ are the opening and the closing price. Here also, the greater volatility of the btp contract is evident. In this paper, we are not interested in the absolute value of the overnight variations, but only in their signs $`u_b(n)=\mathrm{sign}_0\left(r_b(n)\right)`$, where $`\mathrm{sign}_0`$ coincides with the usual sign function except for the prescription $`\mathrm{sign}_0(0)\equiv 0`$. Let us, now, introduce the bond walk displacement as follows: $`\mathrm{\ell }_b(n)={\displaystyle \sum _{m=1}^{n}}u_b(m).`$ In figure 3.a, the displacements $`\mathrm{\ell }_{\text{bund}}`$ and $`\mathrm{\ell }_{\text{btp}}`$ are shown. This procedure visually enhances the correlation between the two price series, which becomes clearer in figure 3.b, where the two–dimensional random walk is now on a square lattice. The zero–lag value of the crosscorrelation between $`u_{\text{bund}}`$ and $`u_{\text{btp}}`$ quantitatively measures how similar the two dynamics are. 
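For concreteness, a few lines of Python implementing the bond-walk construction just defined (overnight log-returns, their signs, and the cumulative displacement). The price series used here are invented and serve only to show the bookkeeping.

```python
import numpy as np

def bond_walk(open_prices, close_prices):
    """Overnight returns r(n) = log(P_open(n)/P_close(n-1)), their signs u(n)
    with sign(0) = 0, and the cumulative displacement l(n)."""
    open_prices = np.asarray(open_prices, dtype=float)
    close_prices = np.asarray(close_prices, dtype=float)
    r = np.log(open_prices[1:] / close_prices[:-1])   # overnight returns
    u = np.sign(r)                                    # np.sign(0.0) is already 0
    return r, u, np.cumsum(u)

# toy example with made-up prices
opens  = [100.0, 100.4, 100.1, 100.9, 100.8]
closes = [100.2, 100.3, 100.5, 100.7, 101.0]
r, u, ell = bond_walk(opens, closes)
print(u, ell)
```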
Indeed in figure 3.c, we find that the estimate of the crosscorrelation $`C_{\text{bund},\text{btp}}(0)`$ is significantly different from zero. Figures 3.d and 3.e show that in each bond walk the autocorrelation function vanishes for any lag different from zero: Therefore there are neither long nor short range correlations in these walks. Correlations have been computed by using the unbiased estimator given in Ref. . In order to correctly describe the statistical correlations between the two bond walks, it is necessary to take into account the joint probability distribution or, equivalently, the disjoint probability distributions as well as the conditional probabilities. In Table 1, we give an estimate of the joint and conditional probabilities (in brackets) in terms of the empirical frequencies. In figure 4, we show the results of a Monte Carlo simulation drawn from the joint probability distribution $`p(u_{\text{bund}}\mathrm{and}u_{\text{btp}})`$. This simulation has been implemented as follows: At each tick, we randomly choose the bund move according to the last column of Table 1; then, the btp move is selected following Table 1. For instance, suppose that the extracted bund move is upwards, then the probabilities for btp move are given by the third row of Table 1. The results of the simulation are shown in figure 4. In this case the zero–lag crosscorrelation value is significantly (and correctly) different from zero. ## 3 Gambling The previous analysis shows that the overnight signs of the two considered bond futures are crosscorrelated. One can now think to exploit this “prior information” to test the possibility of making profits. This is what we develop in this section, where the low (high) probability of opposite (equal) overnight signs (see Table 1) is used to build “automatic investor” profiles. Each profile corresponds to a precise investment strategy, fulfilling certain rules compatible with the future–market ones . At the first investment day, a margin account is created and filled with an initial margin for any contract opened . In our case, on the first day, before the closing time, two btp–future contracts, a short and a long position<sup>1</sup><sup>1</sup>1A short (long) position is a contract for selling (buying) a security at a certain future delivery date; in our case the security is a Treasury bond., are opened. Thus, at the beginning of each trading day, either the short or the long position is closed, depending on the chosen strategy. Before the closing time of the same day, the closed position is opened again<sup>2</sup><sup>2</sup>2As a technical remark, we point out that, at the end of each day, all positions must be updated on the margin account for marking to market. The margin account must be fed when it becomes lower than the maintenance margin.. We call aggressive the profile for which, a positive (negative) bund–future overnight return implies the closure of the btp–future long (short) position; for zero returns no position is closed. The prudent automatic investor, on the other hand, closes the convenient morning position only if the bund–future return exceeds a certain threshold. If we define $`u_b^\epsilon (n)=\mathrm{sign}_\epsilon \left(r_b(n)\right)`$, where $`\mathrm{sign}_\epsilon `$ coincides with the usual sign function except for the prescription $`\mathrm{sign}_\epsilon (x)0`$ for $`|x|\epsilon `$, we can use the $`\epsilon `$ parameter to characterize the “aggressiveness” of the investor. 
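The Monte Carlo procedure described above can be sketched as follows. Table 1 itself is not reproduced in this text, so the marginal and conditional probabilities in the snippet are placeholders with the right normalization, not the measured values; only the two-step sampling scheme (bund sign from its marginal, btp sign from the corresponding conditional row) reflects the construction described in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
moves = np.array([-1, 0, 1])

# Placeholder numbers standing in for Table 1 (not reproduced in the text):
p_bund = np.array([0.42, 0.10, 0.48])          # marginal distribution of the bund overnight sign
p_btp_given_bund = np.array([                  # conditional distributions p(btp | bund), one row per bund move
    [0.70, 0.10, 0.20],   # bund = -1
    [0.30, 0.40, 0.30],   # bund =  0
    [0.20, 0.10, 0.70],   # bund = +1
])

def simulate(n_days):
    """Draw the bund sign from its marginal, then the btp sign from the conditional row."""
    u_bund = rng.choice(moves, size=n_days, p=p_bund)
    u_btp = np.array([rng.choice(moves, p=p_btp_given_bund[m + 1]) for m in u_bund])
    return u_bund, u_btp

ub, ut = simulate(584)
print("zero-lag crosscorrelation ~", np.corrcoef(ub, ut)[0, 1])
```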
In figure 5.a, the aggressive ($`\epsilon =0`$) and a prudent ($`\epsilon =0.001`$) investor performances are shown. It is not easy to place an order exactly at the opening price. However, suppose you know the bund–future sign variation half an hour before the opening time of the btp market, then you can immediately phone your broker telling him/her what to do, thus increasing the possibility of closing your chosen position at the opening price. Indeed, in our calculation we assume that transactions are costless and happen exactly at the opening and closing prices. This assumption is quite strong when thinking of a real operation order. In figure 5.a two other curves appear: The lotto-gambler and the ideal one. The lotto–gambler curve is built assuming the closure of the short position, the closure of the long one or neither of the two operations based on a trinomial probability distribution obtained by the past information on the btp–future contract. This algorithm is developed in the spirit of a ‘technical–analysis’ attitude, where predictability of equity returns from past returns is assumed . In formulæ, the plotted yielding curves, $`Y`$, are defined as follows: $`Y(n)=Y(n-1)-u(n)V_{\text{btp}}{\displaystyle \frac{P_{\text{btp}}^\mathrm{c}(n)-P_{\text{btp}}^\mathrm{o}(n)}{100}},`$ (1) where $`u(n)`$ is $`u_{\text{bund}}^{0.001}(n)`$ for a prudent investor, $`u_{\text{bund}}(n)`$ for the aggressive investor, and $`\mathrm{rnd}(n)`$ for the lotto gambler investor, and where $`\mathrm{rnd}(n)=-1,0,+1`$ with probability $`p_{-1}(n),p_0(n),p_{+1}(n)`$ respectively. The probabilities $`p_{(\cdot )}(n)`$ are built using only the past information, i.e. only using the distribution of $`u_{\text{btp}}(m<n)`$. The quantity $`V_{\text{btp}}`$ is the contract value fixed to $`250,000,000`$ itl by liffe. In the ideal profile, we exploit the out–of–the–rule possibility of opening a btp–future position –at the closing time of the previous day– in the time between the opening of the bund market and the opening of the btp market, and of closing the same position immediately after this time; the position will be long (short), if the bund overnight is positive (negative) and no operation is done for zero overnight returns. In figure 5.b, the plot of the annualized percentages is presented $`\mathrm{\Pi }(n)={\displaystyle \frac{\alpha }{n}}\left({\displaystyle \frac{Y(n)}{Y(0)}}-1\right),`$ where $`\alpha `$ is given by the product of 254 (trading days per year) and 100 (percentage magnification) and $`Y(0)`$ equals the initial margin. To open a future position, only this initial margin is necessary. Though not practically achievable, the ideal profile is the realization which better takes into account the presence of correlations, giving yields, in the long run, four times greater than the other profiles. The explanation of this fact is as follows: The ideal profile is the only one where the information contained in the overnight crosscorrelation is fully exploited. In the other cases, this information is only partially used due to market rules. ## 4 Discussion and Conclusions In this paper, we have studied the correlations between bond walks for bund and btp time series. We have found a situation similar to the one in experiments with correlated photons . If the two walks are separately analyzed, their statistical properties can be described by random walks with trinomial probability transitions. However, if we consider crosscorrelations, we find that the two walks are not independent one from the other. 
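Returning for a moment to the strategy definitions of section 3, a compact implementation of the yield curve is sketched below, using eq. (1) with the sign convention as reconstructed above (a dropped operator in the source makes the overall sign of the daily mark-to-market term an assumption on our part). The price and return series are made up, and margin calls and transaction costs are ignored.

```python
import numpy as np

V_BTP = 250_000_000   # ITL, contract value quoted in the text
EPS = 0.001           # threshold of the 'prudent' profile

def strategy_signs(r_bund_overnight, eps=0.0):
    """u(n) = sign_eps of the bund overnight return (aggressive: eps = 0, prudent: eps > 0)."""
    u = np.sign(r_bund_overnight)
    u[np.abs(r_bund_overnight) <= eps] = 0
    return u

def yield_curve(u, btp_open, btp_close, y0):
    """Running account value: Y(n) = Y(n-1) - u(n) V (P_c(n) - P_o(n)) / 100."""
    pnl = -u * V_BTP * (btp_close - btp_open) / 100.0
    return y0 + np.cumsum(pnl)

# toy data, prices in the usual percent-of-par convention
r_bund  = np.array([0.002, -0.0005, 0.0015, -0.003])
p_open  = np.array([99.50, 99.60, 99.40, 99.80])
p_close = np.array([99.45, 99.70, 99.55, 99.60])
print(yield_curve(strategy_signs(r_bund, EPS), p_open, p_close, y0=5e8))
```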
In this case, of course, there are neither quantum entanglements nor non–local quantum effects. It is likely that the operators in the btp–future market simply check the bund–future overnight sign and behave accordingly. In the second part of the paper, we have investigated the possibility of exploiting the above correlation in order to realize a profit. Various strategies have been explored and it seems that, using the information contained in overnight correlations could lead to non–irrelevant yields. Indeed, nowadays the two markets open at the same time, thus eliminating these profit potentialities. Which is the origin of the behaviour of the yield curves? In equation (1), there is a profit if there are positive correlations between the two bond walks. In the case of the aggressive investor, a negative correlation always determines a loss, whereas this is not the case in the prudent case. Therefore, in periods of strong positive correlations both strategies lead to profits, which are greater in the aggressive case; in periods characterized by weaker correlations, there can be either profits or losses depending on the absolute value of price variations. Finally, in a period of anticorrelations, the aggressive investor systematically loses money, whereas the prudent investor loses money only if bund price variations exceed a threshold. One may ask whether the observed positive correlations giving rise to profits are due to random fluctuations. If one takes into account the full data set (N = 584 points), a two–factor linear regression analysis of the data plotted in figure 3.b gives a correlation coefficient $`r=0.89`$. The null hypothesis of no correlations can be checked by a $`t`$-Student’s statistics test and it is rejected even for a 99.5 % confidence interval, being $`t=49`$. However, a careful inspection of figure 3.b shows that three definite regions can be distinguished: Region I, including the first 150 points (from 19/09/1991 to 23/04/1992), region III covering the last 264 points (from 22/12/1992 to 11/01/1994), and region II in the middle. In region I and III, positive correlations are strong. In particular, in region III the correlation coefficient is $`r=0.98`$ with $`t=79`$, whereas in region I $`r=0.63`$ and $`t=10`$. In both regions the null hypothesis of no correlation is rejected for a 99.5 % confidence interval. In region II, on the contrary, the null hypothesis cannot be rejected at a 99.5 % confidence level. In fact, $`r=0.18`$ and $`t=2.5`$. An intriguing point is the origin of the observed correlations; it is also interesting to understand why there is a temporal window of weaker correlations, during 1992. One reason for the presence of positive correlations is the strong link between the German and the Italian bond–markets. Indeed the Italian an German economies were deeply interwoven, and the values of the two currencies were related by the European Exchange–Rate Mechanism (ERM). As for the second question, one should notice that, due to speculative pressure, the Italian currency had to be devaluated thus leaving the ERM in 1992. The method described in this paper can be easily generalized to investigate multiple correlations between assets. For instance, correlations of t–bond (U.S. government bonds) futures, bund and btp futures could be considered. Moreover, it is possible to use zero–lag two–point crosscorrelations of asset walks to measure distances in a hierarchical analysis of markets . 
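The significances quoted above follow from the usual relation between the correlation coefficient and the Student variable, $`t=r\sqrt{(N-2)/(1-r^2)}`$. The snippet below reproduces the quoted $`t`$ values to within rounding; the size of region II is inferred here as $`584-150-264=170`$ points, which is an assumption on our part.

```python
import numpy as np

def t_statistic(r, n):
    """Student t for the null hypothesis of zero correlation, with n-2 degrees of freedom."""
    return r * np.sqrt((n - 2) / (1.0 - r * r))

for r, n, label in [(0.89, 584, "full sample"), (0.63, 150, "region I"),
                    (0.18, 170, "region II"), (0.98, 264, "region III")]:
    print(f"{label}: t = {t_statistic(r, n):.1f}")
```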
## 5 Acknowledgments We gratefully acknowledge fruitful discussions with Marina Resta. We are indebted to Massimo Riani for discussion, support, and encouragement. The bund– and btp–future data are available at liffe (www.liffe.com).
# Yaroslavl State University Preprint YARU-HE-99/01 hep-ph/9903210 Comment on the paper: “Neutrino pair production by a virtual photon in an external magnetic field” by Zhukovskii et al. ## Abstract We point out some serious mistakes in the investigation of Zhukovskii et al. Both the amplitude and the probability of the process were calculated wrongly, that is, the problem of the neutrino pair production by a virtual photon in an external magnetic field is still unsolved. In the recent paper an attempt was made to investigate the virtual photon decay into the neutrino pair in an external magnetic field with a strength much smaller than the critical value, in the frame of the standard model with neutrino mixing. However, the results of the calculations were incorrect. The basic formula (1) for the process amplitude was written in a rather slipshod manner, namely, the summation over the lepton flavor $`(a=e,\mu ,\tau )`$ was defined inadequately, besides the quark contribution into a part of the amplitude, which is diagonal with respect to a neutrino type (proportional to $`\delta _{ij}`$), was not taken into account. It is also seen from Eq.(1) that the authors extended incorrectly their result for the crossed process $`\nu _i\to \nu _j\gamma `$ $`(i\ne j)`$, where the charged current contribution was presented only <sup>1</sup><sup>1</sup>1In our opinion, Eq.(4) of the paper for the matrix element contains an extra factor 2 which is repeated in the commented formula (1)., on the case $`i=j`$, where the $`Z`$ exchange contribution was also presented. Really, Eq.(1) demonstrates manifestly that the authors have “discovered” a new type of the effective local Lagrangian of the neutrino interaction with charged leptons of a chiral type (when only left-handed charged leptons interact). However, as has been known up to now , the Lagrangian of such an interaction has the form $`\mathcal{L}_{\nu e}^{(Z)}={\displaystyle \frac{G_F}{\sqrt{2}}}[\overline{\nu }\gamma ^\alpha (1-\gamma _5)\nu ][\overline{e}\gamma _\alpha (g_V^{\nu e}-g_A^{\nu e}\gamma _5)e],`$ (1) where $$g_V^{\nu e}=-\frac{1}{2}+2\mathrm{sin}^2\theta _W,g_A^{\nu e}=-\frac{1}{2}.$$ By this means both left-handed and right-handed charged leptons take part in the interaction $`(g_V^{\nu e}\ne g_A^{\nu e})`$. Our next remark concerns the procedure of calculations using the effective local Lagrangian of weak interactions. The matter is that taking the local limit leads to the appearance of two problems: the amplitude acquires an ultraviolet divergence in this limit, and the triangle axial-vector anomaly as well. It is most easily seen if the amplitude is expanded into a series in terms of the external magnetic field as it is shown in Fig. 1, where the dashed lines designate the external field. The zero term in this expansion $`\mathcal{M}^{(0)}=\mathcal{M}(B=0),`$ (2) contains the ultraviolet divergence, while the term linear in the external field $`\mathcal{M}^{(1)}=B{\displaystyle \frac{d\mathcal{M}}{dB}}|_{B=0}`$ (3) contains the triangle anomaly because of the presence of the axial-vector coupling in the effective weak Lagrangian. 
To obtain a correct expression for the amplitude one should perform a two-step subtraction procedure $`\mathcal{M}_{corr}=\left(\mathcal{M}-\mathcal{M}^{(0)}-\mathcal{M}^{(1)}\right)+\stackrel{~}{\mathcal{M}}^{(0)}+\stackrel{~}{\mathcal{M}}^{(1)},`$ (4) where the correct zero-field term $`\stackrel{~}{\mathcal{M}}^{(0)}`$ and the term $`\stackrel{~}{\mathcal{M}}^{(1)}`$ linear in the field should be found independently without taking the local limit and with taking account of the neutrino interaction via $`W`$ and $`Z`$ bosons with all charged fermions, both leptons and quarks. For example, the expression for $`\stackrel{~}{\mathcal{M}}^{(1)}`$ can be obtained from the amplitude of the Compton-like process $`\nu +\gamma ^{*}\to \nu +\gamma ^{*}`$ where the field tensor of one photon is replaced by the external field tensor. The subtraction procedure of this kind is performed in our paper . Since the authors of the commented paper do not take the triangle anomaly problem into account, their result would be incorrect even if the proper Lagrangian of the lepton-neutrino interaction were used. Finally, the statement of the authors about the applicability of their Eqs.(3), (4) for an analysis of a real photon decay into the neutrino pair is radically incorrect. This can be seen from kinematical arguments. Really, these formulas were written for a photon with a space-like momentum, while the total momentum of a neutrino pair is always time-like. In such a case this process is kinematically forbidden not only in vacuum but in the constant external magnetic field as well. To summarize: In our opinion, the problem of the neutrino pair production by a virtual photon in an external magnetic field is still unsolved.
# WOMBAT & FORECAST: Making Realistic Maps of the Microwave Sky ## 1. Introduction Cosmic Microwave Background (CMB) anisotropy observations during the next decade will yield data of unprecedented quality and quantity. Determination of cosmological parameters to the precision that has been forecast (Jungman et al. 1996, Bond, Efstathiou, & Tegmark 1997, Zaldarriaga, Spergel, & Seljak 1997, Eisenstein, Hu, & Tegmark 1998) will require significant advances in analysis techniques to handle the large volume of data, subtract foregrounds, and account for systematics. We must ensure that these techniques do not introduce biases into the estimation of cosmological parameters. The Wavelength-Oriented Microwave Background Analysis Team (WOMBAT, http://astro.berkeley.edu/wombat, see also Gawiser et al 1998) will produce state-of-the-art foreground simulations, using all available information about frequency and spatial dependence. Phase information (detailed spatial morphology) offers the possibility of improving upon techniques that only use the angular power spectrum of the foregrounds to account for their distribution. Most techniques assume the frequency spectra of the components is constant across the sky, but we will provide information on the spatial variation of each component’s spectral index whenever possible. This reflects our actual sky; with the high precision expected from future CMB maps we must test our techniques on as realistic a map as possible. A second advantage is the construction of a common, comprehensive database for all known CMB foregrounds, including uncertainties. These models provide the starting point for the WOMBAT Challenge, in which we will generate maps for various cosmological models and offer them to the community for analysis without revealing the input parameters. The WOMBAT Challenge promises to shed light on several open questions in CMB data analysis: What are the best foreground subtraction techniques? Will they allow instruments such as MAP and Planck to achieve the precision in $`C_{\mathrm{}}`$ reconstruction which has been advertised, or will errors increase significantly due to foreground uncertainties? Perhaps most importantly, do some CMB analysis methods produce biased estimates of the cosmological parameters? ## 2. Microwave Foregrounds There are four major expected sources of Galactic emission at microwave frequencies: thermal emission from dust, electric or magnetic dipole emission from spinning dust grains (Draine & Lazarian 1998a,1998b), free-free emission from ionized hydrogen, and synchrotron radiation from electrons accelerated by the Galactic magnetic field. Good spatial templates exist for thermal dust emission (Schlegel, Finkbeiner, & Davis 1998 \[SFD\]) and synchrotron emission (Haslam et al. 1982), although the $`0.^{}5`$ resolution of the Haslam maps means that smaller-scale structure must be simulated. Extrapolation to microwave frequencies is possible using maps which account for spatial variation of the spectra (Finkbeiner, Schlegel, & Davis 1999; Platania et al. 1998). A spatial template for free-free emission based on observations of H$`\alpha `$ (Smoot 1998, Marcelin et al. 1998) can be created in the near future by combining WHAM observations (Haffner, Reynolds, & Tufte 1998) with the southern celestial hemisphere H$`\alpha `$ Sky Survey (McCullough 1998). While it is known that there is an anomalous component of Galactic emission at 15-40 GHz (Kogut et al. 1996, Leitch et al. 1997, de Oliveira-Costa et al. 
1997) partially correlated with dust morphology, it is not yet clear whether this is spinning dust grain emission or free-free emission somehow uncorrelated with H$`\alpha `$ observations. In fact, spinning dust emission per se has yet to be observed, so uncertainties in its amplitude are tremendous. A template for this “anomalous” component will have large uncertainties. Three nearly separate categories of galaxies will also generate foreground emission: radio-bright galaxies, low-redshift IR-bright galaxies, and high-redshift IR-bright galaxies. The anisotropy from these foregrounds is predicted by Toffolatti et al. (1998) using models of galaxy evolution to produce source counts, and updated models calibrated to recent SCUBA observations are available (Blain, Ivison, Smail, & Kneib 1998, Scott & White 1998). For the high-redshift SCUBA galaxies, no spatial template is available, so a simulation with realistic clustering will be necessary. Scott & White (1998) and Toffolatti et al. (1998) have used very different estimates of clustering, so this issue will need to be looked at more carefully. Limits on anisotropy generated by high-redshift galaxies and as-yet-undiscovered types of point sources are given by Gawiser, Jaffe, & Silk (1998) using recent observations over a wide range of frequencies. Their upper limit of $`\mathrm{\Delta }T/T=10^{-5}`$ for a 10′ beam at 100 GHz is a sobering result. The 5319 brightest low-redshift IR galaxies detected at 60$`\mu `$m are in the IRAS 1.2 Jy catalog (Fisher et al. 1995) and can be extrapolated to 100 GHz with a systematic uncertainty of a factor of a few (Gawiser & Smoot 1997). Sokasian, Gawiser, & Smoot (1998) have compiled a catalog of 2200 bright radio sources, some of which have been observed at 90 GHz and fewer still above 200 GHz. They have developed a method to extrapolate spectra with a factor of two uncertainty at 90 GHz. Secondary CMB anisotropy is generated as CMB photons are scattered after the original last-scattering surface. The most important of these effects occurs as the shape of the blackbody spectrum is altered through inverse Compton scattering by the thermal Sunyaev-Zel’dovich (1972; SZ) effect. Simulations have been made of the impact of SZ in large-scale structure (Persi et al. 1995), clusters (Aghanim et al. 1997) and groups (Bond & Myers 1996). The brightest 200 X-ray clusters known from the XBACS catalog can be used to incorporate the locations of the strongest SZ sources (Refregier, Spergel, & Herbig 1998). In Figure 1, we show an example of some of the foreground maps we will use: the CMB itself (a realization of standard CDM constrained to the COBE/DMR results, courtesy of E. Scannapieco), the SFD dust map, the Haslam synchrotron map, and the IR and radio source catalog amassed by Gawiser et al. Outside of the galactic plane, the morphology of each component is quite distinct. ## 3. Reducing Foreground Contamination Various methods have been proposed for reducing foreground contamination. For point sources, it is possible to mask pixels which represent positive $`5\sigma `$ fluctuations since these are highly unlikely for Gaussian-distributed CMB anisotropy and can be assumed to be caused by point sources. This technique can be improved somewhat by filtering (Tegmark & de Oliveira-Costa 1998; see Tenorio et al. 1998 for a different technique using wavelets). 
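As an example of the kind of frequency extrapolation the foreground templates require, the sketch below scales a synchrotron brightness-temperature template from the 408 MHz Haslam frequency to 90 GHz with a pixel-dependent power-law index. The pixel values and spectral indices are hypothetical, and the conversion from antenna to thermodynamic temperature is omitted; this is not the WOMBAT pipeline, only the standard scaling it builds on.

```python
import numpy as np

NU_HASLAM = 0.408   # GHz, frequency of the Haslam synchrotron survey

def scale_synchrotron(T_haslam_K, beta, nu_GHz):
    """Power-law extrapolation of a synchrotron brightness-temperature map.
    T_haslam_K and beta may be full-sky arrays (beta varying pixel by pixel)."""
    return T_haslam_K * (nu_GHz / NU_HASLAM) ** beta

# hypothetical pixel values: a few tens of K at 408 MHz, spectral indices around -2.9
T408 = np.array([20.0, 35.0, 12.0])          # K, antenna temperature
beta = np.array([-2.8, -3.0, -2.9])          # per-pixel spectral index
print(scale_synchrotron(T408, beta, 90.0) * 1e6, "microK at 90 GHz")
```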
Sokasian, Gawiser, & Smoot (1998) demonstrate that using prior information from good catalogs may allow the masking of pixels which contain sources brighter than $`1\sigma `$. For the 90 GHz MAP channel, this could reduce the residual radio source contamination by a factor of two. Galactic foregrounds with well-understood spectra can be projected out of multi-frequency observations on a pixel-by-pixel basis (Dodelson & Kosowsky 1995, Brandt et al. 1994). The methods for foreground subtraction which have the greatest level of sophistication and have been tested most thoroughly ignore the known locations on the sky of some foreground components. Multi-frequency Wiener filtering uses assumptions about the spatial power spectra and frequency spectra of the components to perform a separation in spherical harmonic or Fourier space (Tegmark & Efstathiou 1996; Bouchet et al. 1995,1997,1998; Knox 1998). However, it does not include any phase information. The MaxEnt Method (Hobson et al. 1998a) can add phase information on diffuse Galactic foregrounds in small patches of sky but treats extragalactic point sources as an additional source of instrument noise, with good results for simulated Planck data (Hobson et al. 1998b) and worrisome systematic difficulties for simulated MAP data (Jones, Hobson, & Lasenby 1998). Both methods have difficulty if pixels are masked due to strong point source contamination or the spectral indices of the foregrounds are not well known (Tegmark 1998). Since residual contamination can increase uncertainties and bias parameter estimation, it is important to reduce it as much as possible. Current analysis methods usually rely on cross-correlating the CMB maps with foreground templates at other frequencies (see de Oliveira-Costa et al. 1998; Jaffe, Finkbeiner, & Bond 1999). It is clearly superior to have localized information on extrapolation of these templates to the observed frequencies; otherwise this cross-correlation only identifies the emission-weighted average spectral index of the foreground. When a known foreground template is subtracted from a CMB map, it is inevitable that the amplitude used will be slightly different from the true value. This leads to off-diagonal structure in the “noise” covariance matrix of the remaining CMB map, as opposed to the contributions of expected CMB anisotropies which give diagonal contributions to the covariance matrix of the $`a_{\ell m}`$. Thus incomplete foreground subtraction, like $`1/f`$ noise, can introduce correlations into the covariance matrix of the $`a_{\ell m}`$. These complicate the likelihood analysis necessary for parameter estimation (Knox 1998), but phase information should reduce inaccuracies in foreground subtraction. ## 4. The WOMBAT Challenge Our purpose in conducting a “hounds and hares” exercise is to simulate the process of analyzing microwave maps as accurately as possible. We will make our knowledge of the various foreground components available, and each best-fit foreground map will be accompanied by its uncertainties and possible systematic errors. Each simulation of a foreground will incorporate a realization of those uncertainties. Very little is known about the locations of high-redshift IR-bright galaxies and SZ-bright clusters, so WOMBAT will provide simulations of these components. The rough characteristics of these high-redshift sources, but not their locations, will be revealed. This simulates the real observing process in a way not achieved by previous work. 
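The pixel-by-pixel projection idea mentioned above (Dodelson & Kosowsky 1995; Brandt et al. 1994) amounts to a generalized least-squares inversion of a frequency mixing matrix in each pixel. The toy version below uses three invented channels, invented component spectra and a diagonal noise matrix purely to show the algebra; it is not Wiener filtering or MaxEnt, which additionally use spatial power spectra or entropy priors.

```python
import numpy as np

# Frequencies (GHz) of a hypothetical 3-channel experiment and an assumed mixing matrix:
# columns = components (CMB, dust-like, synchrotron-like), rows = channels.
nu = np.array([30.0, 90.0, 220.0])
A = np.column_stack([
    np.ones(3),               # CMB taken as flat across channels in this toy example
    (nu / 220.0) ** 1.7,      # toy dust scaling
    (nu / 30.0) ** -3.0,      # toy synchrotron scaling
])
N = np.diag([1.0, 0.5, 2.0])  # per-channel noise variances (arbitrary units)

def separate(d):
    """Generalized least-squares estimate of the component amplitudes in one pixel:
    s_hat = (A^T N^-1 A)^-1 A^T N^-1 d."""
    Ninv = np.linalg.inv(N)
    cov = np.linalg.inv(A.T @ Ninv @ A)
    return cov @ (A.T @ Ninv @ d)

true_s = np.array([70.0, 15.0, 8.0])          # microK: CMB, dust, synchrotron amplitudes
d = A @ true_s + np.random.default_rng(1).normal(0, [1.0, 0.7, 1.4])
print("recovered amplitudes:", separate(d))
```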
One of the biggest challenges in real-world observations is being prepared for surprises, both instrumental and astrophysical (see Scott 1998 for an eloquent discussion); we will include a few in our maps. We will release our maps for the community to subtract the foregrounds and extract cosmological information. The WOMBAT Challenge is scheduled to begin on March 15, 1999 and will offer participating groups four months to analyze the maps and report their results.<sup>1</sup><sup>1</sup>1see http://astro.berkeley.edu/wombat for timeline, details for participants, and updates We will produce simulations analogous to high-resolution balloon observations (e.g. MAXIMA and BOOMERANG; see Hanany et al. 1998 and de Bernardis & Masi 1998) and the MAP satellite (10<sup>6</sup> pixels at 13 resolution for a full-sky map)<sup>2</sup><sup>2</sup>2http://map.gsfc.nasa.gov. We plan to use the HEALPIX package of pixelization and analysis routines<sup>3</sup><sup>3</sup>3http://www.tac.dk/~healpix. We provided a calibration map of CMB anisotropy with a disclosed angular power spectrum in January 1999 so that participants could test the download procedure and become familiar with HEALPIX. Groups who participate will be asked to provide us with a summary of their analysis techniques. They may choose to remain anonymous in our comparison of the results but are encouraged to publish their own conclusions. In Figure 2, we show a very simple example of what we will produce. It extrapolates the maps and catalogs of Figure 1 to frequencies of 10–300 GHz. At low freqeuencies, the maps (away from the galactic plane) are dominated by synchrotron emission, at 90 GHz by the CMB itself, and at 300 GHz by dust (and by extragalactic point sources which are not easily visible at this resolution). Visually, some sort of separation of the components seems simple, but doing it at the high precision necessary (and claimed) for CMB parameter determination to “unprecedented accuracy” remains a challenge. ## 5. FORECAST The other thrust of the microwave mapmaking effort is to aid in the planning of future CMB anisotropy missions. We will enable quick and easy access to the foreground maps, combined with our best-guess extrapolations to experimental frequencies. Because uncertain extrapolation is involved, we will also provide errors on the results. Given specific information about the observing strategy, observers will be able to quickly call up predictions for their experiment’s contamination by foreground emission. ## 6. Conclusions Undoubtedly the most important scientific contribution that WOMBAT will make is the production of realistic full-sky maps of all major microwave foreground components with estimated uncertainties. These maps are needed for foreground subtraction and estimation of residual foreground contamination in present and future CMB anisotropy observations. With FORECAST, instrumental teams will be able to conduct realistic simulations without needing to assume overly idealized models for the foregrounds. By combining various realizations of these foreground maps within the stated uncertainties with a simulation of the intrinsic CMB anisotropies, we will produce the best simulations so far of the microwave sky. We can test the resilience of CMB analysis methods to surprises such as unexpected foreground amplitude or spectral behavior, correlated instrument noise, and CMB fluctuations from non-gaussian or non-inflationary models. 
Cosmologists need to know if such surprises can lead to the misinterpretation of cosmological parameters. Perhaps the greatest advance we offer is the ability to evaluate the importance of studying the detailed locations of foreground sources. It may turn out that techniques which use phase information are needed in order to reduce foreground contamination to a level which does not seriously bias the estimation of cosmological parameters. Combining various techniques may lead to improved foreground subtraction methods, and we hope that a wide variety will be tested by the participants in the WOMBAT Challenge. ### Acknowledgments. We thank Rob Crittenden (IGLOO) and Kris Gorski, Eric Hivon, and Ben Wandelt (HEALPIX) for making pixelization schemes available to the community. We appreciate helpful conversations with Nabila Aghanim, Giancarlo de Gasperis, Alex Refregier, David Schlegel, and Philip Stark. Some of the work described here is done under the auspices of the COMBAT collaboration supported by NASA AISRP grant NAG-3941 and NASA LTSA grant NAG5-6552. ## References Aghanim, N., De Luca, A., Bouchet, F. R., Gispert, G., & Puget, J. L. 1997, A&A, 325, 9, astro-ph/9705092 Blain, A. W., Ivison, R. J., Smail, I., & Kneib, J. P. 1998, to appear in Wide-field surveys in cosmology, Proc. XIV IAP meeting, astro-ph/9806063 Bond, J. R., Efstathiou, G., & Tegmark, M. 1997, MNRAS291, L33 Bond, J. R., & Myers, S. T. 1996, ApJS, 103, 63 Bouchet, F. R., Gispert, R., Boulanger, F., & Puget, J. L. 1997, in Bouchet F. R., Gispert R., Guiderdoni, B., Tran Thanh Van J., eds., Proc. 36th Moriond Astrophysics Meeting, Microwave Anisotropies, Editions Frontiere, Gif-sur-Yvette, p. 481 Bouchet, F. R., Gispert, R. & Puget, J. L. 1995, in “Unveiling the Cosmic Infrared Background,” AIP Conference Proceedings 348, Baltimore, MD, USA, ed. E. Dwek, p.255 Bouchet, F. R., Prunet, S., & Sethi, S. K. 1998, astro-ph/9809353 Brandt, W. N., Lawrence, C. R., Readhead, A. C. S., Pakianathan, J. N., & Fiola, T. M. 1994, ApJ, 424, 1 de Bernardis, P., Masi, S. 1998, to appear in “Fundamental parameters in Cosmology,” Rencontres de Moriond, astro-ph/9804138 de Oliveira-Costa, A., et al. 1997, ApJ, 482, L17 de Oliveira-Costa, A., Tegmark, M., Page, L. A., & Boughn, S. P. 1998, ApJ, in press, astro-ph/9807329 Dodelson, S. & Kosowsky, A. 1995, Phys.Rev.Lett, 75, 604 Draine, B. T. & Lazarian, A., 1998a, ApJ, 494, L19 Draine, B. T. & Lazarian, A., 1998b, astro-ph/9807009 Eisenstein, D. J., Hu, W., & Tegmark, M. 1998, astro-ph/9807130 Finkbeiner, D., Schlegel, D., Davis, M. 1999, in preparation Fisher, K. B., Huchra, J. P., Strauss, M. A., Davis, M., Yahil, A., & Schlegel, D. 1995, ApJS, 100, 69 Gawiser, E. et al, 1998, astro-ph/9812237. Gawiser, E., Jaffe, A., & Silk, J. 1998, astro-ph/9811148 Gawiser, E. & Smoot, G. F. 1997, ApJ, 480, L1 Haffner, L. M., Reynolds, R. J., & Tufte, S. L. 1998, ApJ, 501, L83 Hanany, S. et al. 1998, Proc. of 18th Texas Symposium on Relativistic Astrophysics and Cosmology, ed. A. V. Olinto, J. A. Frieman, & D. N. Schramm, World Scientific, p.255 Haslam, C. G. T., Salter, C. J., Stoffel, H., & Wilson, W. E. 1982, A&AS, 47, 1 Hobson, M. P., Jones, A. W., Lasenby, A. N., & Bouchet, F. R. 1998a, MNRAS, in press Hobson, M. P., Barreiro, R. B., Toffolatti, L., Lasenby, A. N., Sanz, J. L., Jones, A. W., & Bouchet, F. R. 1998b, astro-ph/9810241 Jaffe, A., Finkbeiner, D., & Bond, J. R. 1999, in preparation Jones, A. W., Hobson, M. P., & Lasenby, A. N. 
1998, astro-ph/9810236 Jungman, G., Kamionkowski, M., Kosowsky, A., & Spergel, D. N. 1996, Phys.Rev.D, 54, 1332 Knox, L. 1998, astro-ph/9811358 Kogut, A. et al. 1996, ApJ, 464, L5 Leitch, E. M., Readhead, A. C. S., Pearson, T. J., Myers, S. T. 1997, ApJ, 486, L23, astro-ph/9705241 Marcelin, M., Amram, P., Bartlett, J. G., Valls-Gabaud, D., & Blanchard, A. 1998, A&A, 338, 1 McCullough, P. 1998, elsewhere in this volume Persi, F. M., Spergel, D. N., Cen, R., & Ostriker, J. P. 1995, ApJ, 442, 1 Platania, P. et al., 1997, ApJ, 505, 473 Refregier, A., Spergel, D. N, & Herbig, T., 1998, astro-ph/9806349 Schlegel, D., Finkbeiner, D., & Davis, M. 1998, ApJ, 500, 525 Scott, D., 1998, to appear in Proc of the MPA/ESO Conference: “Evolution of Large-Scale Structure: from Recombination to Garching,” ed. A. J. Banday et al, 1998, astro-ph/9810330 Scott, D., & White, M. 1998, astro-ph/9808003 Smoot, G. F. 1998, astro-ph/9801121 Sokasian, A., Gawiser, E., & Smoot, G.F. 1998, astro-ph/9811311 Sunyaev, R.A. & Zel’dovich, Ya.B. 1972, Comments Ap. Space Sci. 4, 173 Tegmark, M. 1998, ApJ, 502, 1 Tegmark, M. & de Oliveira-Costa, A. 1998, ApJ, 500, L83 Tegmark, M. & Efstathiou, G. 1996, MNRAS, 281, 1297 Tenorio, L., Lineweaver, C., Hanany, S., & Jaffe, A., 1998, in preparation Toffolatti, L. et al. 1998, MNRAS, 297, 117
no-problem/9903/astro-ph9903469.html
ar5iv
text
# Relation between Kilohertz QPOs and Inferred Mass Accretion Rate in 4 LMXBs ## 1 Introduction In the past 3 years the Rossi X-ray Timing Explorer (RXTE) has discovered kilohertz quasi-periodic oscillations (kHz QPOs) in the persistent flux of 19 low-mass X-ray binaries (LMXB; see for a review). In almost all cases the power density spectra of these sources show twin kHz peaks that, as a function of time, gradually move up and down in frequency, typically over a range of several hundred Hz. Initially, data from various sources seemed to indicate that the separation $`\mathrm{\Delta }\nu `$ between the twin kHz peaks remained constant even as the peaks moved up and down in frequency. In some sources a third, nearly-coherent, oscillation has been detected during type-I X-ray bursts, at a frequency close to $`\mathrm{\Delta }\nu `$, or twice that value (see for a review). These two results suggested that a beat-frequency mechanism was at work, with the third peak being close to the neutron star spin frequency or twice that. In sources for which only the twin kHz QPO, and no burst oscillations, were observed the frequency difference was interpreted in terms of the neutron star spin frequency as well. But the simple beat-frequency interpretation of the kHz QPOs is not without problems, and other ideas discarding one or more elements of this basic picture, but still predicting definite relations between the observed frequencies, have been proposed. One interesting result obtained from these RXTE observations is the complex dependence of the QPO frequencies upon X-ray flux, which is usually assumed to be a measure of the mass accretion rate $`\dot{M}`$. One example is 4U 1608–52: While on time scales of hours frequency and X-ray flux are well correlated, at epochs separated by days to weeks the QPOs span the same range of frequencies even if the average flux differs by a factor of 3 or more (see also). In this case, however, the QPO frequency is very well correlated to the position of the source in the color-color diagram. Here I summarize some results from RXTE observations of 4 LMXBs: Aql X–1, 4U 1728–34, 4U 1608–52, and 4U 1636-53. (Some of these results have been published before or will be presented in more detail elsewhere). Here I focus on the relation of the frequencies of the kHz QPOs to the X-ray flux and colors of the source. These results cast some doubt about the recently reported detection of the orbital frequency at the innermost stable orbit in 4U 1820–30. ## 2 Results As an example, in Figure 1 I show a power spectrum of 4U 1608–52 in the range $`300-1200`$ Hz, where the two QPOs are seen simultaneously. Two, sometimes simultaneous, kHz QPOs are also present in the power spectra of 4U 1728–34 and 4U 1636–53. For Aql X–1 only one kHz QPO has been observed so far. As I already mentioned, the frequencies of these kHz QPOs slowly change as a function of time. In Figure 2 I show the relation of the frequency of one of the kHz QPOs (for 4U 1728–34, 4U 1608–52, and 4U 1636–53 it is the kHz QPO at lower frequencies; for Aql X–1 it is the only kHz QPO detected so far) vs. source count rate in the $`2-16`$ keV energy range. From this figure it is apparent that the dependence of the kHz QPO frequencies upon X-ray intensity is complex (the same result is obtained if the $`2-16`$ keV source flux is used instead of the count rate). In Figure 3 I show the color-color diagrams of Aql X–1, 4U 1728–34, 4U 1608–52, and 4U 1636-53. 
The soft and hard colors are defined as $`I_{(3.5-6.4)\mathrm{keV}}/I_{(2.0-3.5)\mathrm{keV}}`$, and $`I_{(9.7-16.0)\mathrm{keV}}/I_{(6.4-9.7)\mathrm{keV}}`$, respectively, where $`I`$ is the background subtracted source count rate for the indicated energy range. These color-color diagrams are typical of the so-called Atoll sources. Except for 4U 1636-53, which RXTE only observed in the banana branch, the other three sources move across all the branches of the atoll. Interestingly, there seems to be a close relation between the position of the source in the color-color diagram and the appearance of kHz QPOs in the power spectrum: the QPOs are only detected in the lower banana and the moderate island states, and disappear both in the upper banana and in the extreme island states (red circles and blue dots indicate time intervals with and without kHz QPOs, respectively). In Figure 4 I present the relation of the frequency of one of the kHz QPOs (as in Fig. 2, for 4U 1728–34, 4U 1608–52, and 4U 1636–53 it is the kHz QPO at lower frequencies; for Aql X–1 it is the only kHz QPO detected so far) as a function of hard color (see Fig. 3) for the same intervals shown in Figure 2. The complexity seen in the frequency vs. count rate diagram (Fig. 2) is reduced to a single track per source in the frequency vs. hard color diagram. The shapes of the tracks in Figure 4 suggest that the hard color may not be sensitive to changes of state when these sources move into the banana in the color-color diagram (particularly in the case of Aql X–1, because the banana branch is almost horizontal in this diagram). To further investigate this, I parametrized the color-color diagram in terms of a one-dimensional variable that measures the position of the source along the atoll. I call this variable $`S_\mathrm{a}`$, in analogy to what is usually called $`S_\mathrm{Z}`$ for the other class of LMXBs, the Z sources. The shape of the color-color diagram is approximated with a spline, and a value of $`S_\mathrm{a}`$ is assigned to each point according to the distance (along the spline) of that point to a reference point in the diagram. In Figure 5 I present the relation between the frequencies of the two QPOs in 4U 1728–34 vs. $`S_\mathrm{a}`$. I arbitrarily defined $`S_\mathrm{a}=1`$ at colors (3.02,0.59), and $`S_\mathrm{a}=2`$ at colors (2.75,0.46) (see Fig. 3). Red circles in this figure correspond to the kHz QPO at lower frequencies (the same data presented in Figure 2). The blue squares correspond to the frequency of the kHz QPO at higher frequencies, and the yellow triangles correspond to measurements in which I only detect one of the kHz QPOs; however, from the location of each point in this diagram it is possible to determine whether it is the QPO at higher or lower frequencies. ## 3 Discussion These results show that a total lack of correlation between frequency and count rate on time scales longer than a day (Fig. 2) can coexist with a very good correlation between frequency and position in the X-ray color-color diagram (Fig. 4 and 5). The frequency of the QPO increases with $`S_\mathrm{a}`$, as the source moves from the island to the banana. Only on time scales of hours does the QPO frequency appear to also correlate well with count rate. The presence of the QPOs also correlates well with the position in the color-color diagram: the QPOs are only detected in the lower banana and the moderate island states, and disappear both in the upper banana and in the extreme island states (Fig. 3). 
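As a purely illustrative sketch of the $`S_\mathrm{a}`$ parametrization described above (this is not the code used for the analysis; the colour values below are placeholders except for the two reference points quoted for 4U 1728–34, and all function and variable names are assumptions), the spline-and-arc-length construction could look like this in Python:

```python
import numpy as np
from scipy.interpolate import splprep, splev

# Hypothetical colour-colour track, ordered roughly island -> banana.
soft = np.array([2.60, 2.75, 2.90, 3.02, 3.10, 3.15])
hard = np.array([0.40, 0.46, 0.52, 0.59, 0.62, 0.63])

# Fit a parametric spline through the track in the colour-colour plane.
tck, u = splprep([soft, hard], s=0.0)

# Sample the spline densely and accumulate arc length along it.
uu = np.linspace(0.0, 1.0, 2001)
xs, ys = splev(uu, tck)
arc = np.concatenate(([0.0], np.cumsum(np.hypot(np.diff(xs), np.diff(ys)))))

def arclength_at(point):
    """Arc length of the spline point closest to `point` = (soft, hard)."""
    i = np.argmin((xs - point[0])**2 + (ys - point[1])**2)
    return arc[i]

# Calibrate S_a so the two reference colours quoted in the text map to 1 and 2.
l1, l2 = arclength_at((3.02, 0.59)), arclength_at((2.75, 0.46))

def S_a(point):
    return 1.0 + (arclength_at(point) - l1) / (l2 - l1)

for p in zip(soft, hard):
    print(p, round(S_a(p), 2))
```

Each observation is then assigned the $`S_\mathrm{a}`$ of its nearest point on the spline, so the two reference colours map to $`S_\mathrm{a}=1`$ and $`S_\mathrm{a}=2`$ by construction.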
In the Atoll sources $`\dot{M}`$ is thought to increase monotonically with $`S_\mathrm{a}`$ along the track in the color-color diagram, from the island to the upper banana , whereas X-ray count rate tracks $`\dot{M}`$ much less well . The properties of the power spectra below $`100`$ Hz depend monotonically upon $`S_\mathrm{a}`$ . The result that the frequency of the kHz QPO is well correlated to $`S_\mathrm{a}`$, but not to X-ray count rate, implies that the kHz QPO frequency also depends monotonically upon inferred $`\dot{M}`$. In Z sources similar conclusions have also been reached . Further support for this interpretation comes from the simultaneous analysis of the low and high frequency parts of the power spectra. In 4U 1728–34 the kHz QPO frequencies were recently found to be very well correlated to several $`<100`$ Hz power spectral properties , while a similar result was obtained for a number of other Atoll (and Z) sources . In all these sources, not only the position in the color-color diagram and the various low frequency power spectral parameters, but also the frequencies of the kHz QPOs are all well correlated with each other. This indicates that the single parameter, inferred to be $`\dot{M}`$, which governs all the former properties also governs the frequency of the kHz QPO. X-ray intensity is the exception: it can vary more or less independently from the other parameters. In 4U 1608–52, it can change by a factor of $`4`$ (see Fig. 2) while the other parameters do not vary significantly. If as inferred, this constancy of the other parameters means that $`\dot{M}`$ does not change, then this indicates that strongly variable beaming of the X-ray flux, or large-scale redistribution of some of the radiation over unobserved energy ranges is occurring in order to change the flux by the observed factors, without any appreciable changes in the X-ray spectrum. Perhaps the $`\dot{M}`$ governing all the other parameters is the $`\dot{M}`$ through the inner accretion disk, whereas matter also flows onto the neutron star in a more radial inflow, or away from it in a jet. All current models propose that the frequencies of the kHz QPOs, which are thought to reflect the Keplerian orbital frequency of accreting matter in a disk around the neutron star, increase monotonically with $`\dot{M}`$, because the inner edge of the disk moves in when $`\dot{M}`$ increases. However, the accretion disk cannot move closer to the neutron star than the radius of the innermost stable circular orbit, even if $`\dot{M}`$ keeps increasing. This means that the frequency of the kHz QPOs should “saturate” at a value corresponding to the Keplerian frequency at the innermost stable orbit for a given source. None of these 4 sources show evidence for a saturation of the frequency of the kHz QPOs at a constant maximum value as $`\dot{M}`$ increases, different from what was inferred for 4U 1820–30 . In 4U 1820–30 the kHz QPO frequencies increase with count rate up to a threshold level, above which the frequencies remain approximately constant while the count rate keeps increasing. Assuming that count rate is a measure for $`\dot{M}`$, this was interpreted as evidence for the inner edge of the disk reaching the general-relativistic innermost stable orbit. However, the results presented above indicate that count rate is not a good measure for $`\dot{M}`$. Inspection of Figure 2 suggests that with sparser sampling some of those plots could easily have looked similar to that of 4U 1820–30. 
It will therefore be of great interest to see if in 4U 1820–30 the saturation of QPO frequency as a function of count rate is still there when this parameter is plotted as a function of position in the X-ray color-color diagram. ## Acknowledgments This work was supported in part by the Netherlands Research School for Astronomy (NOVA), the Leids Kerkhoven-Bosscha Fonds (LKBF), and the NWO Spinoza grant 08-0 to E.P.J. van den Heuvel. MM is a fellow of the Consejo Nacional de Investigaciones Cientificas y Tecnicas de la Republica Argentina. This research has made use of data obtained through the High Energy Astrophysics Science Archive Research Center Online Service, provided by the NASA/Goddard Space Flight Center. ## References
no-problem/9903/astro-ph9903316.html
ar5iv
text
# DUST-TO-GAS RATIO AND METALLICITY IN DWARF GALAXIES ## 1 INTRODUCTION Interstellar dust is composed of heavy elements made and ejected by stars. Dwek & Scalo (1980) demonstrated that supernovae are the dominant source for the formation of dust grains. They also showed that the dust is destroyed in supernova shocks (see also McKee 1989, Jones et al. 1994, and Jones, Tielens, & Hollenbach 1996). Thus, the dust is formed and destroyed in star-forming galaxies. Some recent galaxy-evolution models treat the evolution of total dust mass as well as that of metal abundance (Wang 1991; Lisenfeld & Ferrara 1998, hereafter LF98; Dwek 1998, hereafter D98; Hirashita 1999, hereafter H99). The processes of dust formation and destruction by supernovae are taken into account in LF98, in order to explain the relation between dust-to-gas mass ratio and metallicity of dwarf irregular galaxies (dIrrs) and blue compact dwarf galaxies (BCDGs). We can find detailed mathematical formulations to calculate the dust mass in any galactic system in D98, which treats the accretion of heavy elements onto preexisting dust grains in molecular clouds in addition to the processes considered in LF98. The model by D98 is successfully applied to nearby spiral galaxies in H99. It is suggested in D98 that the accretion process onto preexisting dust grains is not effective in dwarf galaxies because of the absence of dense molecular clouds there. If this is true, it is worth estimating the ineffectiveness quantitatively, by which we can obtain information about molecular clouds in dwarf galaxies. Direct observations of molecular clouds in some dwarf galaxies have been extensively carried out (e.g., Ohta, Sasaki, & Saitō 1988). However, it is generally difficult to observe extragalactic molecular clouds. Thus, theoretical constraints on the nature of molecular clouds are indispensable for investigations on the star formation processes in extragalactic objects. In this paper, we examine the dust-to-gas ratio as a function of metallicity by using a set of equations in H99. The model is one-zone (i.e., the spatial distribution of physical quantities within a galaxy is not taken into account) and the instantaneous recycling approximation is applied. The main purpose of this paper is to apply the model to dwarf galaxies (dIrrs and BCDGs). First of all, in the next section, we explain the model equation, through which the dust-to-gas ratio as a function of metallicity is calculated. The result is compared with observational data of dIrrs and BCDGs in §3. We present a summary in the final section. ## 2 REVIEW OF MODEL EQUATIONS In order to investigate the dust content in galaxies, H99 derived a set of equations describing the dust formation and destruction processes, based on the models in LF98 and D98. In H99, a simple one-zone model for a galaxy is adopted to extract global properties of galaxies. For the model treating the radial distribution of gas, element, and dust in a galaxy, see D98. The equation set is written as $`\frac{dM_\mathrm{g}}{dt}=-\psi +E-W,`$ (1) $`\frac{dM_i}{dt}=-X_i\psi +E_i-X_iW,`$ (2) $`\frac{dM_{\mathrm{d},i}}{dt}=f_{\mathrm{in},i}E_i-\alpha f_iX_i\psi +\frac{M_{\mathrm{d},i}(1-f_i)}{\tau _{\mathrm{acc}}}-\frac{M_{\mathrm{d},i}}{\tau _{\mathrm{SN}}}-\delta f_iX_iW.`$ (3) (See eqs. – in McKee 1989, eqs. – in LF98 and eqs. – in H99.) Here, $`M_\mathrm{g}`$ is the mass of gas. 
The metal is labeled by $`i`$ ($`i=\mathrm{O}`$, C, Si, Mg, Fe, …), and $`M_i`$ and $`M_{\mathrm{d},i}`$ denote the total mass of the metal $`i`$ (in gas and dust phases) and the mass of the metal $`i`$ in the dust phase, respectively. The star formation rate is denoted by $`\psi `$; $`E`$ is the total injection rate of mass from stars; $`W`$ is the net outflow rate from the galaxy; $`X_i`$ is the mass fraction of the element $`i`$ (i.e., $`X_i\equiv M_i/M_\mathrm{g}`$); $`E_i`$ is the total injection rate of element $`i`$ from stars; $`f_i`$ is the mass fraction of the element $`i`$ locked up in dust (i.e., $`f_i=M_{\mathrm{d},i}/M_i`$). The meanings of the other parameters in the above equations are as follows: $`f_{\mathrm{in},i}`$ is the value of the dust mass fraction in the injected material, in other words, the dust condensation efficiency in the ejecta<sup>1</sup><sup>1</sup>1In this formalism, we assume that the condensation efficiency in stellar winds is the same as that in supernova ejecta.; $`\alpha `$ refers to the efficiency of the dust destruction during star formation \[$`\alpha =1`$ corresponds to destruction of only the dust incorporated into the star, and $`\alpha >1`$ ($`\alpha <1`$) corresponds to a net destruction (formation) in the star formation\]; $`\tau _{\mathrm{acc}}`$ is the accretion timescale of the element $`i`$ onto preexisting dust particles in molecular clouds; $`\tau _{\mathrm{SN}}`$ is the timescale of dust destruction by supernova shocks; $`\delta `$ accounts for the dust content in the outflow ($`\delta =0`$ means no dust in the outflow, while $`\delta =1`$ indicates that the outflow is as dusty as the interstellar medium). We should comment on the parameter $`\alpha `$ here. Since the protostellar disk forms dust, $`\alpha <1`$ is expected. However, as to the circumstellar dust, the timescale of loss of angular momentum through the Poynting-Robertson effect is much shorter than the lifetime of stars (e.g., Rybicki & Lightman 1979). This means that the formed dust is lost effectively. Thus, we reasonably assume that $`\alpha =1`$ hereafter. The formation of planets also contributes to the loss of the dust. Here, we adopt the same assumption as LF98 and H99; the instantaneous recycling approximation (Tinsley 1980): Stars less massive than $`m_\mathrm{l}`$ live forever and the others die instantaneously. This approximation allows us to write $`E`$ and $`E_i`$, respectively, as $`E=\mathcal{R}\psi ,`$ (4) $`E_i=(\mathcal{R}X_i+𝒴_i)\psi ,`$ (5) where $`\mathcal{R}`$ is the returned fraction of the mass that has formed stars which is subsequently ejected into the interstellar space, and $`𝒴_i`$ is the mass fraction of the element $`i`$ newly produced and ejected by stars.<sup>2</sup><sup>2</sup>2$`\mathcal{R}=R`$ and $`𝒴_i=y(1-R)`$ for the notation in LF98. $`\mathcal{R}`$ and $`𝒴_i`$ can be obtained using the following formulae (Maeder 1992): $`\mathcal{R}=\int _{m_\mathrm{l}}^{m_\mathrm{u}}(m-w_m)\varphi (m)dm,`$ (6) $`𝒴_i=\int _{m_\mathrm{l}}^{m_\mathrm{u}}mp_i(m)\varphi (m)dm,`$ (7) In equation (6), $`\varphi (m)`$ is the initial mass function (IMF), and $`m_\mathrm{u}`$ is the upper mass cutoff of stellar mass. The IMF is normalized so that the integral of $`m\varphi (m)`$ in the full range of the stellar mass becomes 1. Therefore, $`\varphi (m)`$ has a dimension of the inverse square of the mass. 
In equation (7), $`w_m`$ is the remnant mass ($`w_m=0.7M_{\odot }`$ for $`m<4M_{\odot }`$ and $`w_m=1.4M_{\odot }`$ for $`m>4M_{\odot }`$) and $`p_i(m)`$ is the fraction of mass converted into the element $`i`$ in a star of mass $`m`$. Using the above parameters $`\mathcal{R}`$ and $`𝒴_i`$, and assuming that $`W`$ is proportional to the star formation rate ($`W=w\psi `$), equations (1)–(3) become $`\frac{1}{\psi }\frac{dM_\mathrm{g}}{dt}=-1+\mathcal{R}-w,`$ (8) $`\frac{M_\mathrm{g}}{\psi }\frac{dX_i}{dt}=𝒴_i,`$ (9) $`\frac{M_\mathrm{g}}{\psi }\frac{d𝒟_i}{dt}=f_{\mathrm{in},i}(\mathcal{R}X_i+𝒴_i)-[\alpha -1+\mathcal{R}-\beta _{\mathrm{acc}}(1-f_i)+\beta _{\mathrm{SN}}-w(1-\delta )]𝒟_i,`$ (10) where $`𝒟_i\equiv M_{\mathrm{d},i}/M_\mathrm{g}=f_iX_i`$, and $`\beta _{\mathrm{acc}}`$ and $`\beta _{\mathrm{SN}}`$ are, respectively, defined by $`\beta _{\mathrm{acc}}\equiv \frac{M_\mathrm{g}}{\tau _{\mathrm{acc}}\psi }`$ and $`\beta _{\mathrm{SN}}\equiv \frac{M_\mathrm{g}}{\tau _{\mathrm{SN}}\psi }.`$ (11) We can regard $`\beta _{\mathrm{acc}}`$ and $`\beta _{\mathrm{SN}}`$ as constant in time (§6.2 and §8.4 of D98). We note that the Galactic value shows $`\beta _{\mathrm{SN}}\simeq 5`$ (LF98). This value corresponds to $`\tau _{\mathrm{SN}}\sim 10^8`$ yr, which is consistent with Jones et al. (1994) and Jones, Tielens, & Hollenbach (1996). The relation $`\tau _{\mathrm{acc}}\simeq \tau _{\mathrm{SN}}/2`$ (D98) leads to $`\beta _{\mathrm{acc}}\simeq 2\beta _{\mathrm{SN}}\simeq 10`$. Combining equations (9) and (10), we obtain the following differential equation of $`𝒟_i`$ as a function of $`X_i`$: $`𝒴_i\frac{d𝒟_i}{dX_i}=f_{\mathrm{in},i}(\mathcal{R}X_i+𝒴_i)-[\alpha -1+\mathcal{R}-\beta _{\mathrm{acc}}+\beta _{\mathrm{SN}}-w(1-\delta )]𝒟_i-\frac{\beta _{\mathrm{acc}}𝒟_i^2}{X_i},`$ (12) where we used the relation, $`f_i=𝒟_i/X_i`$. Here, we take $`i=\mathrm{O}`$ to compare the result with the data in LF98. For further quantification, we need to fix the values of $`\mathcal{R}`$ and $`𝒴_i`$ for the traced element ($`i=\mathrm{O}`$). The values are given in LF98. The reason why LF98 chose oxygen as the tracer is as follows: (i) Most of oxygen is produced in Type II supernovae which are also responsible for the shock destruction of dust grains; (ii) oxygen is the main constituent of dust grains. The point (i) means that the instantaneous recycling approximation may be a reasonable approximation for the investigation of oxygen abundances, since the generation of oxygen is a massive-star-weighted phenomenon. In other words, results are insensitive to the value of $`m_\mathrm{l}`$. According to LF98, we put $`m_\mathrm{l}=1M_{\odot }`$ and $`m_\mathrm{u}=120M_{\odot }`$. We use a power-law form of the IMF: $`\varphi (m)\propto m^{-x}`$. The Salpeter IMF is assumed; i.e., $`x=2.35`$ (Salpeter 1955). According to LF98, $`(\mathcal{R},𝒴_\mathrm{O})=(0.79,\mathrm{\hspace{0.17em}1.8}\times 10^{-2})`$ for the Salpeter IMF. After the numerical integration of equation (12) by the Runge-Kutta method, we compare the result with observational data of dwarf galaxies in the next section. ## 3 APPLICATION TO DWARF GALAXIES To compare the solution of equation (12) with observational data of dust-to-gas ratio, we make an assumption that total mass of dust is proportional to that of oxygen in the dust phase. In other words, $`𝒟\equiv \underset{i}{\sum }𝒟_i=C𝒟_\mathrm{O},`$ (13) where $`C`$ is assumed to be constant for all galaxies. According to Table 2.2 of Whittet (1992), $`C\simeq 2.2`$ (the Galactic value). 
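As an illustration of the numerical step just described, the following minimal Python sketch integrates equation (12) with a Runge–Kutta solver. It is not the authors' code: the parameter values are those quoted above for the Salpeter IMF, while the starting metallicity, the integration range, and all variable names are assumptions made for this sketch.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Parameters quoted in the text for the Salpeter IMF; f_in is the condensation
# efficiency f_{in,O} varied in Fig. 1.  All names here are illustrative.
R_ret, Y_O = 0.79, 1.8e-2           # returned fraction and oxygen yield
alpha, delta, w = 1.0, 1.0, 0.0     # so that w*(1 - delta) = 0
beta_acc, beta_sn = 10.0, 5.0
f_in = 0.05

def dD_dX(X, D):
    """Right-hand side of eq. (12), rewritten as dD_O/dX_O."""
    bracket = alpha - 1.0 + R_ret - beta_acc + beta_sn - w * (1.0 - delta)
    return (f_in * (R_ret * X + Y_O) - bracket * D - beta_acc * D**2 / X) / Y_O

# Start from the low-metallicity limit, where D_O ~ f_in * X_O (eq. 15).
X0, X1 = 1.0e-6, 1.0e-2
sol = solve_ivp(dD_dX, (X0, X1), [f_in * X0], method="RK45",
                dense_output=True, rtol=1e-8, atol=1e-14)

for logX in (-5.0, -4.0, -3.0, -2.5):
    X = 10.0**logX
    print(f"log X_O = {logX:5.1f}   D_O/X_O = {sol.sol(X)[0] / X:.3f}")
```

The printout shows $`𝒟_\mathrm{O}/X_\mathrm{O}`$ staying close to $`f_{\mathrm{in},\mathrm{O}}`$ at low metallicity and departing from it once the accretion term becomes competitive, which is the qualitative behaviour discussed in the following subsections.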
We compare the solution of equation (12) with the data in LF98 (see also the references therein). The data sets of nearby dIrrs and BCDGs are presented in Tables 1 and 2 of LF98, respectively. The observed dust-to-gas ratio is defined as $`𝒟^{\mathrm{obs}}\equiv M_\mathrm{d}/M_{\mathrm{HI}},`$ (14) where $`M_\mathrm{d}`$ and $`M_{\mathrm{HI}}`$ are the total masses of dust and H i gas, respectively. The dust mass is derived from the luminosity densities at the wavelengths of 60 $`\mu `$m and 100 $`\mu `$m observed by IRAS. The dust mass derived from the far-infrared emission is about an order of magnitude smaller than the value found from the analysis of the interstellar extinction (Fig. 2 of LF98). The presence of cold or hot dust, emitting beyond 100 $`\mu `$m and below 60 $`\mu `$m may be responsible for the discrepancy (LF98). Thus, we should keep in mind that the dust-to-gas ratio adopted here is underestimated in this context. However, since we only take into account the H i gas as the gas content and do not consider H<sub>2</sub> gas, the dust-to-gas ratio is overestimated (by a factor of $`\sim 2`$). To sum up, we should keep in mind the uncertainty of an order of magnitude for the dust-to-gas ratio derived from the observation ($`𝒟^{\mathrm{obs}}`$). In the following subsections, we compare $`𝒟`$ calculated by using equations (12) and (13) with $`𝒟^{\mathrm{obs}}`$. We focus on the two processes of dust formation: One is the condensation of dust from heavy elements ejected by stars, and the other is the accretion onto preexisting dust grains. The efficiency of the former process is denoted by $`f_{\mathrm{in},\mathrm{O}}`$ and that of the latter by $`\beta _{\mathrm{acc}}`$. The latter process is not taken into account in LF98. For the dependences of the relation between dust-to-gas ratio and metallicity on IMF, see LF98 and H99. ### 3.1 Dependence on $`f_{\mathrm{in},\mathrm{O}}`$ We present the dependence of the result on the value of $`f_{\mathrm{in},\mathrm{O}}`$ in Figure 1a, in which the solid, dotted, and dashed lines show the $`𝒟`$–$`X_\mathrm{O}`$ relation for $`f_{\mathrm{in},\mathrm{O}}=0.1`$, 0.05 and 0.01, respectively. The other parameters are fixed to $`\alpha =1`$, $`\beta _{\mathrm{acc}}=2\beta _{\mathrm{SN}}=10`$, $`\delta =1`$ \[i.e., $`w(1-\delta )=0`$\]. Figure 1a shows that the larger the efficiency of production of dust from heavy elements is, the larger the dust-to-gas ratio becomes. The data points represent the relations between $`𝒟^{\mathrm{obs}}`$ and $`X_\mathrm{O}`$ of dIrrs and BCDGs in LF98. The filled and open squares show the data points of the dIrrs and the BCDGs, respectively. In the limit of $`X_\mathrm{O}\rightarrow 0`$, the solution reduces to $`𝒟_\mathrm{O}\simeq f_{\mathrm{in},\mathrm{O}}X_\mathrm{O}\text{or}𝒟\simeq Cf_{\mathrm{in},\mathrm{O}}X_\mathrm{O}.`$ (15) (See also LF98.) This means that $`𝒟`$ scales linearly with $`f_{\mathrm{in},\mathrm{O}}`$ for the extremely low metallicity. Thus, $`f_{\mathrm{in},\mathrm{O}}`$ can be constrained by low-metal galaxies (see also H99). Equation (15) means that we can constrain the parameter $`f_{\mathrm{in},\mathrm{O}}`$ by examining the relation between $`𝒟_\mathrm{O}/X_\mathrm{O}`$ (the fraction of oxygen in the dust phase) and $`X_\mathrm{O}`$. We show this relation in Figure 1b. The parameter adopted for each line is the same as Figure 1a. We also show the data points of the same sample as Figure 1a, assuming $`C=2.2`$ for all the galaxies to convert $`𝒟^{\mathrm{obs}}`$ into $`𝒟_\mathrm{O}`$. 
Roughly speaking, our model predicts that $`𝒟_\mathrm{O}/X_\mathrm{O}`$ is constant in the range of the dwarf galaxies. This indicates that the low-metal approximation used to derive equation (15) is applicable to dwarf galaxies. Thus, we can directly constrain the parameter $`f_{\mathrm{in},\mathrm{O}}`$ by dwarf galaxies. From the data points in Figure 1b, we see $`0.01\lesssim 𝒟_\mathrm{O}/X_\mathrm{O}\lesssim 0.1`$ or $`0.01\lesssim f_{\mathrm{in},\mathrm{O}}\lesssim 0.1`$. We note that this range is consistent with the analyses in H99. However, we should keep in mind the uncertainty of the data as described above in this section. We also see from Figure 1b that the relation between $`𝒟_\mathrm{O}`$ and $`X_\mathrm{O}`$ becomes nonlinear in the relatively high-metal region ($`\mathrm{log}X_\mathrm{O}>-3`$). The behavior of this nonlinear region depends on $`\beta _{\mathrm{acc}}`$ or $`\beta _{\mathrm{SN}}`$ (§3.2). ### 3.2 Dependence on $`\beta _{\mathrm{acc}}`$ We here investigate the dependence of the solution on $`\beta _{\mathrm{acc}}`$, which is proportional to the accretion efficiency of heavy elements onto the preexisting dust grains (§2). The resulting $`𝒟`$–$`X_\mathrm{O}`$ relations for various $`\beta _{\mathrm{acc}}`$ are shown in Figure 2a. We show the cases of $`\beta _{\mathrm{acc}}=5,10,20`$ (the solid, dotted, and dashed lines, respectively), where the relation $`\beta _{\mathrm{acc}}=2\beta _{\mathrm{SN}}`$ is fixed. The other parameters are set to $`f_{\mathrm{in},\mathrm{O}}=0.05`$ ($`\simeq `$ the center of the range constrained in §3.1), $`\alpha =1`$, and $`\delta =1`$. The value of $`\beta _{\mathrm{acc}}`$ is determined by the lifetime of molecular clouds (D98). The value $`\beta _{\mathrm{acc}}\simeq 10`$ corresponds to the accretion timescale of $`\sim 10^8`$ yr (§2). The increase of $`\beta _{\mathrm{acc}}`$ means that the accretion of heavy elements onto dust becomes efficient. Thus, for a fixed value for the metallicity, dust-to-gas ratio increases as $`\beta _{\mathrm{acc}}`$ increases. We also present an extreme case of $`\beta _{\mathrm{acc}}=0`$ and $`\beta _{\mathrm{SN}}=5`$ (long-dashed line). In this case, the accretion onto preexisting dust grains is neglected. We were not able to reproduce the observational data of nearby spiral galaxies without taking into account the accretion process (D98; H99). However, the solid and long-dashed lines in Figure 2 show that we cannot judge whether the accretion process is efficient or not because of the little difference between the results with and without the accretion process. Thus, the basic equations of LF98, which do not include the accretion term, were able to explain the observed relation between dust-to-gas ratio and metallicity. We note that the accretion process is properly considered in D98. The ineffectiveness of the accretion process is understood as follows. Two processes are responsible for the formation of dust in equation (3); the dust condensation from the heavy elements ejected by stars and the accretion of the heavy elements onto preexisting dust grains. The former is expressed as $`f_{\mathrm{in},i}E_i`$ and the latter as $`M_{\mathrm{d},i}(1-f_i)/\tau _{\mathrm{acc}}`$ (see eq. ). 
We define the following ratio, $`A_i`$: $`A_i\equiv \frac{M_{\mathrm{d},i}(1-f_i)/\tau _{\mathrm{acc}}}{f_{\mathrm{in},i}E_i}.`$ (16) If $`A_i<1`$, the accretion process is ineffective compared with the dust condensation process. We will show that $`A_i<1`$ for the sample dwarf galaxies. Using the instantaneous recycling approximation, $`A_i`$ is estimated as $`A_i\simeq \frac{\beta _{\mathrm{acc}}(1-f_i)𝒟_i}{f_{\mathrm{in},i}(\mathcal{R}X_i+𝒴_i)}.`$ (17) Now we put $`i=\mathrm{O}`$. In Figure 2b, we present the relation between $`f_\mathrm{O}=𝒟_\mathrm{O}/X_\mathrm{O}`$ and $`X_\mathrm{O}`$. The values of parameters for each line are the same as Figure 2a. This figure shows that we can consider that $`1-f_\mathrm{O}\simeq 1`$. Moreover, in the range in which we are interested here, $`𝒟_\mathrm{O}\simeq f_{\mathrm{in},\mathrm{O}}X_\mathrm{O}`$ (eq. ), and $`\mathcal{R}X_\mathrm{O}\ll 𝒴_\mathrm{O}`$ (as long as $`\mathrm{log}X_\mathrm{O}\lesssim -2`$ is satisfied). Thus, $`A_\mathrm{O}`$ can be approximated by $`A_\mathrm{O}\simeq \frac{\beta _{\mathrm{acc}}X_\mathrm{O}}{𝒴_\mathrm{O}}.`$ (18) If we put $`\beta _{\mathrm{acc}}=10`$, and $`𝒴_\mathrm{O}=10^{-2}`$, we obtain $`\mathrm{log}X_\mathrm{O}\lesssim -3`$ for the condition $`A_\mathrm{O}<1`$. This is consistent with Figure 2b, since the difference between the solid and the long-dashed lines is clear for $`\mathrm{log}X_\mathrm{O}>-3`$. Thus, if $`\mathrm{log}X_\mathrm{O}>-3`$ is satisfied, the dust accretion process is more effective than the dust condensation process. Actually, even for $`\mathrm{log}X_\mathrm{O}\simeq -2.5`$, the difference is within the typical error of the observed values. The ineffectiveness of the accretion process results from the low metallicity. Thus, in galaxies with high metallicity, the accretion process becomes important. Indeed, D98 and H99 showed that the process is effective in spiral galaxies, whose metallicity is much larger than the dwarf galaxies (H99). Since we can reproduce the relation between dust-to-gas ratio and metallicity of dwarf galaxies without considering the accretion process, D98 suggested that the accretion process is not efficient in dwarf galaxies because of the lack of dense molecular clouds. This may be true, but seeing that dIrrs and BCDGs show high star formation efficiency (Sage et al. 1992; Israel, Bontekoe, & Kester 1996), there may be a large amount of dense molecular gas in dwarf irregular galaxies. Indeed, we have shown in Figure 2 that the observed relation can be explained even if the efficiency of the accretion process $`\beta _{\mathrm{acc}}`$ is as high as that in the spiral galaxies considered in H99. ## 4 SUMMARY AND DISCUSSIONS Based on the models proposed by LF98 and D98, we have examined the dust content in dIrrs and BCDGs. The basic equations which describe the changing rate of dust-to-gas ratio include the terms of dust formation from heavy elements ejected by stars, destruction by supernova shocks, destruction in star-forming regions and accretion of elements onto preexisting dust grains (§2). This accretion process is important in molecular clouds, where gas densities are generally high. The results are compared with the observed values of dIrrs and BCDGs. Though the degeneracy of the parameters and the observational errors make it impossible to determine each of the parameters precisely, we were able to constrain the parameters to some extent. 
The efficiency of dust production from heavy elements (denoted by $`f_{\mathrm{in},i}`$) can be constrained by the galaxies with low metallicity (§3.1). The reasonable range is $`0.01\lesssim f_{\mathrm{in},i}\lesssim 0.1`$, which is consistent with H99. Thus, it is possible to understand the dust amount in dwarf systems as well as that in spiral systems through the model in this paper. As for the nearby spiral galaxies, unless we take into account the accretion process of heavy elements onto the preexisting dust particles, we cannot explain the observed relation between dust-to-gas ratio and metallicity (D98; H99). For the dwarf galaxies, however, we can explain the data without the accretion process (§3.2). This means that the accretion is not effective for dwarf galaxies. Even if the efficiency of the accretion $`\beta _{\mathrm{acc}}`$, determined by the lifetimes of molecular clouds (D98), is as high as that in spiral galaxies, the accretion is not effective because of the low metallicity in the dwarf galaxies. Therefore, we cannot attribute the ineffectiveness of the dust accretion process to the lack of molecular clouds. Finally, we note that our model has satisfied one condition which any model must fulfill: The model has to explain the observation of nearby galaxies. Then, it becomes a matter of concern whether our model can explain the galaxies in the high-redshift Universe. For theoretical modeling of the cosmic dust mass, see, e.g., Edmunds & Phillipps (1997). Observationally, it is interesting that high-redshift galaxies found recently show evidence of dust extinction (Soifer et al. 1998; Armus et al. 1998). The number count of galaxies in the far-infrared and submillimeter wavelengths, where dust reprocesses stellar light, is another interesting theme concerning high-redshift dust (e.g., Takeuchi et al. 1999). We would like to thank the anonymous referee for useful comments which improved this paper. We are grateful to S. Mineshige for continuous encouragement. We thank H. Kamaya, K. Nakanishi, T. T. Takeuchi and T. T. Ishii for kind help and helpful comments. This work was supported by the Research Fellowship of the Japan Society for the Promotion of Science for Young Scientists. We fully utilized the NASA’s Astrophysics Data System Abstract Service (ADS). FIGURE CAPTION FIG. 1a— The relation between the dust-to-gas ratio ($`𝒟_\mathrm{O}`$) and the oxygen abundance ($`X_\mathrm{O}`$) for various $`f_{\mathrm{in},\mathrm{O}}`$ (the condensation efficiency of dust from oxygen atoms). The data points for dIrrs (filled squares) and BCDGs (open squares) are from LF98. The other parameters are fixed to $`\beta _{\mathrm{acc}}=2\beta _{\mathrm{SN}}=10`$, $`\alpha =1`$, and $`\delta =1`$. The solid, dotted, and dashed lines represent different values of $`f_{\mathrm{in},\mathrm{O}}`$ (0.1, 0.05, and 0.01, respectively). FIG. 1b— The relation between $`f_\mathrm{O}=𝒟_\mathrm{O}/X_\mathrm{O}`$ (the fraction of oxygen in the dust phase) and $`X_\mathrm{O}`$ (oxygen abundance). The values of the parameters and the meanings of the data points are the same as Fig. 1a. FIG. 2a— The same as Fig. 1a but for the different parameter sets ($`f_{\mathrm{in},\mathrm{O}}=0.05`$, $`\alpha =1`$, $`\delta =1`$ and various $`\beta _{\mathrm{acc}}`$ and $`\beta _{\mathrm{SN}}`$). The solid, dotted, and dashed lines represent the cases of $`\beta _{\mathrm{acc}}=2\beta _{\mathrm{SN}}=5,10,20`$, respectively. 
The long-dashed line shows the case of no accretion process onto the preexisting dust grains ($`\beta _{\mathrm{acc}}=0`$ and $`\beta _{\mathrm{SN}}=5`$). The data points are identical to Fig. 1a. FIG. 2b— The same as Fig. 1b but for the parameter sets identical to Fig. 2a. The data points are identical to Fig. 1b.
no-problem/9903/nucl-th9903007.html
ar5iv
text
# A soluble statistical model for nuclear fragmentation ## I INTRODUCTION Nuclear fragmentation resulting from heavy ion collisions is a complex phenomenon. The role of equilibration and dynamics has not yet been determined as a plethora of approaches have been investigated. Examples of approaches are evaporative pictures, percolation models, lattice gas models, and dynamical models based on Boltzmann simulations. In this paper we consider the statistical approach where one considers sampling all configurations of non-interacting clusters. Recently, Chase and Mekjian derived relations which allow the exact calculation of the canonical partition function for such a system. By eliminating the need for computationally intensive Monte Carlo procedures and associated approximations, this technique allows a deeper insight into the thermodynamic principles which drive the statistics of fragmentation. In the next section we present the recursive technique of Chase and Mekjian and review the thermodynamic properties, some of which have already been presented in the literature. We emphasize that the surface energy is the most important parameter in determining the fragmentation and phase transition properties of the model. In the three subsequent sections, we present extensions of the model which are necessary for serious modeling of nuclear systems: excluded volume, Coulomb effects, and isospin degrees of freedom. In section VI we show how a microcanonical distribution may be generated from the canonical distribution. ## II The Model For completeness, we present an outline of the model, which is based on the work of Chase and Mekjian. The expressions used here are based on a picture of non-interacting liquid drops. Mekjian and Lee had also applied similar recursion relations to a more algebraically motivated fragmentation model that was not based on a liquid-drop picture. We consider that there are $`A`$ nucleons which thermalize in a volume $`V`$ much larger than $`V_0`$ where $`V_0=A/\rho _0`$ is the ground state volume of a nucleus of $`A`$ nucleons. These nucleons can appear as monomers but also as composites of $`a`$ nucleons. The canonical partition function of this system can be written as $`\mathrm{\Omega }_A=\underset{\mathrm{\Sigma }n_ka_k=A}{\sum }\mathrm{\Pi }_k\frac{\omega _k^{n_k}}{n_k!}`$ (1) where $`\omega _k`$ is the partition function of a single composite of size $`a_k`$, $`n_k`$ is the number of such composites and the sum goes over all the partitions which satisfy $`\mathrm{\Sigma }n_ka_k=A`$. A priori this appears to be a horrendously complicated problem but $`\mathrm{\Omega }_A`$ can be computed recursively via the formula, $$\mathrm{\Omega }_A=\frac{1}{A}\underset{k}{\sum }a_k\omega _k\mathrm{\Omega }_{A-a_k}$$ (2) Here $`\mathrm{\Omega }_0`$ is 1. It is this formula and the generalisation of it to more realistic cases (see later) that make this model so readily soluble. All properties of the system are determined by the partition functions of independent particles. The recursive formula above allows a great deal of freedom in the choice of partition functions for individual fragments, $`\omega _k`$. Any function of temperature, density and $`A`$ is allowed. However, explicit dependence on the configuration of the remainder of the system is outside the scope of this treatment. 
For the illustrative purposes of this section, we assume the form, $$\omega _k=\frac{V}{\hbar ^3}\left(\frac{a_kmT}{2\pi }\right)^{3/2}\times e^{-F_{k,\mathrm{int}}/T}$$ (3) The first part is due to the kinetic motion of the center of mass of the composite in the volume $`V`$ and the second part is due to the internal structure. Following the choice of reference we assume the form $$F_{k,\mathrm{int}}=-W_0a_k+S(T)a_k^{2/3}-T^2a_k/ϵ_0$$ (4) Here $`W_0`$ is the volume energy per nucleon (=16 MeV), $`S(T)`$ is the surface tension which is a function of the temperature $`T`$. The origin of the different terms in Eq. (4) is the following: $`-W_0k+Sk^{2/3}`$ is the ground state energy of the composite of $`k`$ nucleons, and the last term in the exponential arises because the composite can be not only in the ground state but also in excited states which are included here in the Fermi-gas approximation. Following reference the value of $`ϵ_0`$ is taken to be 16 MeV. Lastly the temperature dependence of $`S(T)`$ in ref is $`S(T)=S(0)[(T_c^2-T^2)/(T_c^2+T^2)]^{5/4}`$ with $`S(0)=18`$ MeV and $`T_c=18`$ MeV. Any other dependence could be used including a dependence on the average density. Upon calculation, the model described above reveals a first order phase transition. In Figure 1 the specific heat at constant volume, $`C_V=(1/A)dE/dT`$, is displayed as a function of temperature for systems of size, $`A=700`$, $`A=1400`$ and $`A=2800`$. The sharp peak represents a discontinuity in the energy density, which sharpens for increasingly large systems. The usual picture of a liquid-gas phase transition gives a discontinuity in the energy density when pressure is kept constant rather than when the volume is kept constant. To understand this result we consider a system divided into one large cluster and many small clusters. The pressure and energy may then be approximated as $`E/A\simeq ϵ_{\mathrm{bulk}}+\frac{3}{2}\frac{N_{\mathrm{cl}}}{A}T,`$ (5) $`P=\frac{N_{\mathrm{cl}}}{V}T,`$ (6) where $`N_{\mathrm{cl}}`$ is the number of clusters. The bulk term depends only on the temperature and not on the way in which the nucleons are partitioned into fragments. We have neglected the surface energy term which is proportional to $`A^{-1/3}`$. In this limit, $`C_V`$ and $`C_p`$ become $`C_V=\frac{\partial ϵ_{\mathrm{bulk}}}{\partial T}+\frac{3}{2}\frac{N_{\mathrm{cl}}}{A}`$ (7) $`C_p=C_V+\frac{N_{\mathrm{cl}}}{A}`$ (8) The bulk term depends only on the temperature and is therefore continuous across the phase transition. Thus, a spike in $`C_p`$ is equivalent to a spike in $`C_V`$ since both are proportional to $`N_{\mathrm{cl}}`$. It is difficult to make a connection between this approach and the standard Maxwell construction, since here interactions between particles enter only through the surface term. Intrinsic thermodynamic quantities may be calculated in a straightforward manner. For instance the pressure and chemical potentials may be calculated through the relations, $`\mu =-T\left(\mathrm{ln}\mathrm{\Omega }_A-\mathrm{ln}\mathrm{\Omega }_{A-1}\right)`$ (9) $`P=T\frac{\partial \mathrm{ln}(\mathrm{\Omega }_A)}{\partial V}`$ (10) Calculations of $`\mu `$ and $`P`$ are displayed in Figure 2 as a function of density for a system of size $`A=200`$. Both the pressure and chemical potential remain roughly constant throughout the region of phase coexistence. 
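To make the recursion concrete, here is a minimal Python sketch of Eq. (2) with the partition functions (3)–(4), evaluated in log space to avoid overflow, from which the specific heat is obtained by numerical differentiation as in Figure 1. It is only an illustration, not the calculation of the paper: the freeze-out volume, the temperature grid, and all function and variable names are choices made for this sketch.

```python
import numpy as np
from scipy.special import logsumexp

# Model constants from the text; the freeze-out volume is an assumption of
# this sketch, as is the temperature grid.
W0, EPS0, S0, TC = 16.0, 16.0, 18.0, 18.0   # MeV
HBARC, MN, RHO0 = 197.327, 938.0, 0.16      # MeV fm, MeV, fm^-3
A_TOT = 200
V = 4.0 * A_TOT / RHO0                      # fm^3 (assumed freeze-out volume)

def log_omega(a, T):
    """ln omega_k of a composite of a nucleons, eqs. (3)-(4)."""
    s_t = S0 * ((TC**2 - T**2) / (TC**2 + T**2))**1.25
    f_int = -W0 * a + s_t * a**(2.0 / 3.0) - T**2 * a / EPS0
    kinetic = V * (a * MN * T / (2.0 * np.pi))**1.5 / HBARC**3
    return np.log(kinetic) - f_int / T

def log_Z(T):
    """ln Omega_A for the full system via the recursion, eq. (2), in log space."""
    lw = np.array([log_omega(a, T) for a in range(1, A_TOT + 1)])
    lz = np.zeros(A_TOT + 1)
    for A in range(1, A_TOT + 1):
        a = np.arange(1, A + 1)
        lz[A] = logsumexp(np.log(a / A) + lw[:A] + lz[A - a])
    return lz[A_TOT]

# Energy and specific heat from numerical T-derivatives of ln Omega_A.
T = np.linspace(4.0, 9.0, 101)
lnZ = np.array([log_Z(t) for t in T])
E = T**2 * np.gradient(lnZ, T)              # <E>(T) at fixed V
Cv = np.gradient(E, T) / A_TOT              # C_V per nucleon
print("T of peak C_V ~", T[np.argmax(Cv)], "MeV")
```

With these (arbitrary) settings the specific heat develops a peak at an intermediate temperature, mirroring the behaviour of Figure 1, and the chemical potential of eq. (9) follows from the same recursion evaluated at $`A`$ and $`A-1`$.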
Of particular note is that the pressure actually falls in the coexistence region due to finite size effects. We now make some comments about influences of various factors in Eq. (4). The bulk terms, $`W_0+T^2/ϵ_0`$, contribute the same amount to the free energy for any partition of the nucleons, thus they may be ignored when calculating fragmentation observables. Their influence with respect to intrinsic thermodynamic quantities is of a trivial character. The surface term $`S(T)`$ is completely responsible for determining all observables related to fragmentation and therefore all aspects of the phase transition. Aside from the system size $`A`$, fragmentation is determined by two dimensionless parameters. The first is the specific entropy, $`(V/A)(mT/(2\pi \hbar ^2))^{3/2}`$ and the second is the surface term $`S(T)/T`$. At a given temperature the free energy $`F=E-TS`$ of $`A`$ nucleons should be minimized. With the surface tension term, $`E`$ is minimised if the whole system appears as one composite of $`A`$ nucleons but the entropy term encourages break-up into clusters. At low temperatures the surface term dominates while at high temperatures entropy prevails and the system breaks into small clusters. The mass distribution may be calculated given the partition function. $$n_k=\frac{\omega _k\mathrm{\Omega }_{A-a_k}}{\mathrm{\Omega }_A}$$ (11) The mass distribution is displayed in Figure 3 for three temperatures, 6.0, 6.25 and 6.5 MeV which are centered about the transition temperature of 6.25 MeV. The distributions have been multiplied by $`a_k`$ to emphasize the decomposition of the system. The mass distribution changes dramatically in this small temperature range. The behavior is reminiscent of that seen in the percolation or lattice gas models. ## III Excluded Volume The volume used to define the partition functions of individual fragments, $`\omega _k`$ given in Eq. (4), should reflect only that volume in which the fragments are free to move. Hahn and Stöcker suggested using $`V\rightarrow V-A/\rho _0`$ to incorporate the volume taken up by the nuclei. By inspecting Eq. (4) one can see that this affects the partition function by simply mapping the density or volume used to plot observables. More realistically, the excluded volume could depend upon the multiplicity. Nonetheless, in rather complicated calculations not reported here, it was found that for the purpose of obtaining $`pV`$ diagrams in the domain of interest in this paper, it is an acceptable approximation to ignore the multiplicity dependence of the excluded volume. Incorporating a multiplicity dependence would be outside the scope of the present model, as it would represent an explicit interaction between fragments. However, one could add an $`a`$-dependence to the volume term to account for the difficulty of fitting fragments of various sizes into a tight volume. This might affect the model in a non-trivial fashion. We would like to remind the reader that the parameter $`b`$ in the Van der Waals EOS: $`(p+a/V^2)(V-b)=RT`$ also has its roots in the excluded volume. But there $`b`$ plays a crucial role. We could not for example set $`b`$=0 without creating an instability at high density. Furthermore, the phase transition disappears when $`a`$ is set to zero. ## IV Coulomb effects It has been understood that the Coulomb effects alter the phase structure of nuclear matter. Although explicit Coulomb interactions are outside the scope of this treatment, they may be approximated by considering a screened liquid drop formula for the Coulomb energy as has been used by Bondorf and Donangelo. 
The addition to the internal free energy given in Eq. (4) is $$F_{\mathrm{coul}}=0.70\left(1-\left(\frac{\rho }{\rho _0}\right)^{1/3}\right)\frac{a_k^{5/3}}{4}\mathrm{MeV}.$$ (12) This form implies a jellium of uniform density that cancels the nucleons’ positive charge when averaged over a large volume. This may be more physically motivated for the modeling of stellar interiors where the electrons play the role of the jellium. We display $`C_V`$, both with and without Coulomb terms, for an $`A=100`$ system in Figure 4. Coulomb forces clearly reduce the temperature at which the transition occurs. For sufficiently large systems, Coulomb destroys the transition as large drops become unstable to the Coulomb force. ## V Conservation of Isospin The recursive approach employed here is easily generalized to incorporate multiple species of particles. If there exists a variety of particles with conserved charges $`Q_1`$, $`Q_2`$, …, one can write a recursion relation for each charge. $$\mathrm{\Omega }_{Q_1,Q_2\mathrm{}}=\underset{k}{\sum }\frac{q_{i,k}}{Q_i}\omega (k)\mathrm{\Omega }_{Q_1-q_{1,k},\mathrm{},Q_i-q_{i,k},\mathrm{}},$$ (13) where $`Q_i`$ is the net conserved charge of type $`i`$ and $`q_{i,k}`$ is the charge of type $`i`$ carried by the fragment noted by $`k`$. For the nuclear physics example, one would wish to calculate $`\mathrm{\Omega }_{N,Z}`$ where $`N`$ and $`Z`$ are the conserved neutron and proton numbers. To find $`\mathrm{\Omega }_{N,Z}`$ one must know $`\mathrm{\Omega }_{N^{},Z^{}}`$ for all $`N^{}<N`$ or $`Z^{}<Z`$. To accomplish this one must use both recursion relations. ## VI Obtaining the microcanonical distribution In nuclear collisions, one does not have access to a heat bath, but one can vary the excitation energy. A microcanonical treatment is therefore more relevant for practical calculations, particularly given the existence of a first order phase transition which occupies an infinitesimal (in the limit of large $`A`$) range of temperatures in a canonical calculation, but a finite range of energies in a microcanonical ensemble. The relevant partition function for a microcanonical ensemble is the density of states, $`\rho (E)=\underset{i}{\sum }\delta (E_i-E)`$ (14) $`=\frac{1}{2\pi }\int d\beta \underset{i}{\sum }e^{i\beta (E-E_i)}`$ (15) $`=\frac{1}{2\pi }\int d\beta \mathrm{\Omega }(i\beta )e^{i\beta E},`$ (16) where the sum over $`i`$ represents the sum over all many-body states. Although $`\mathrm{\Omega }(i\beta )`$ is easily calculable given the recursion relations discussed in the previous sections, one must perform the integral over $`\beta `$ numerically. The true solution for the density of states would be ill-defined given the discrete nature of quantum spectra which cannot be combined with a delta function. However, if one defines the density of states in a finite region of size $`\eta `$, the density of states becomes well-behaved even for discrete spectra. 
For that reason we more pragmatically define the density of states as $`\rho _\eta (E)\equiv \underset{i}{\sum }\frac{1}{\sqrt{2\pi }\eta }\mathrm{exp}\left[-\frac{(E-E_i)^2}{2\eta ^2}\right]`$ (17) $`=\frac{1}{2\pi }\int d\beta \mathrm{\Omega }(i\beta )e^{i\beta E-\eta ^2\beta ^2/2}`$ (18) One might have considered replacing the delta function by a Lorentzian rather than by a Gaussian, but this would be dangerous given that the density of states usually rises exponentially for a many-body system. The finite range $`\eta `$ used to sample the density of states might correspond to the range of excitation energies sampled in an experimental binning. In the limit $`\eta \rightarrow 0`$, $`\rho _\eta `$ approaches the density of states. As an example of a quantity one may wish to calculate with a microcanonical approach, we consider the average multiplicity of a fragment of type $`k`$ in a system whose total energy is within $`\eta `$ of $`E`$. $`\langle n_k\rangle _\eta (E)=\frac{\underset{i}{\sum }n_{i,k}\frac{1}{\sqrt{2\pi }\eta }\mathrm{exp}\left[-\frac{(E-E_i)^2}{2\eta ^2}\right]}{\rho _\eta (E)}`$ (19) $`=\frac{\frac{1}{2\pi }\int d\beta \omega _k(i\beta )\mathrm{\Omega }_{A-a_k}(i\beta )e^{i\beta E-\eta ^2\beta ^2/2}}{\rho _\eta (E)},`$ (20) where $`n_{i,k}`$ is the number of fragments of species $`k`$ within the many-body state $`i`$. The integration over $`\beta `$ clearly provides an added numerical challenge that increases for small $`\eta `$. For the purposes of generating a mass distribution, one must perform this integration for every species. It might be worthwhile to consider estimating the integrals over $`\beta `$ with the saddle point method, although one should be wary of taking derivatives of $`\mathrm{\Omega }`$ with respect to $`\beta `$ in the phase transition region. Microcanonical quantities might also be calculated in a completely different manner by discretizing the energy. For instance one might measure energies in units of 0.1 MeV. One might then treat energy on the same footing as any other conserved charge. One may then write recursion relations for $`N_{A,E}`$, the number of ways to arrange $`A`$ nucleons with net energy $`E`$, where $`E`$ is an integer. $`N_{A,E}=\underset{k,e_k}{\sum }\frac{a_k}{A}\omega _{k,e_k}N_{A-a_k,E-e_k}`$ (21) $`=\underset{k,e_k}{\sum }\frac{e_k}{E}\omega _{k,e_k}N_{A-a_k,E-e_k}`$ (22) Here, $`\omega _{k,e_k}`$ is the number of ways of arranging a fragment of type $`k`$ with net energy $`e_k`$. All other relevant microcanonical quantities may be calculated in a similar manner. Since one needs to calculate $`N`$ at all energies $`E^{}`$ less than the targeted energy $`E`$, and must sum over all energies less than $`E^{}`$ to obtain $`N_{A^{},E^{}}`$, the length of the calculation is proportional to $`E^2`$. Typically, nuclear decays occur with on the order of a GeV of energy deposited in a nucleus. Therefore, these calculations may become numerically cumbersome unless the energy is discretized rather coarsely. ## VII Summary The recursive techniques discussed here have several attractive features. They are easy to work with, incorporate characteristics of nuclear composites and appear to have standard features of liquid-gas phase transitions. In the present forms these models are restricted to low densities. 
For modeling nuclear disintegration this is not a serious problem, although for completeness it would be nice to be able to modify the model so that it can be extended to higher density. In this paper we have studied thermal properties of the model, and we emphasize the importance of the surface term in determining these properties. We can associate the discontinuity in the energy density as a function of temperature with the discontinuity in the number of clusters. In addition, we have seen that including Coulomb effects lowers the temperature at which the fragmentation transition occurs and reduces the sharpness of the phase transition. We have also presented an extension of the formalism for the inclusion of isospin degrees of freedom. For comparison with nuclear physics experiments, development of the microcanonical approaches presented here is of greatest importance. It remains to be seen whether the microcanonical formalisms are tenable, as they have yet to be implemented. ###### Acknowledgements. This work is supported in part by the Natural Sciences and Engineering Research Council of Canada and by le fonds pour la Formation de Chercheurs et l’aide á la Recherche du Québec, by the US Department of Energy, Grant No. DE FG02-96ER 40987, and by the US National Science Foundation, grant 96-05207.
no-problem/9903/cs9903013.html
ar5iv
text
# The Impact of Net Culture on Mainstream Societies : a Global Analysis ## 1 Introduction <sup>1</sup><sup>1</sup>1The superscripts on some words/set of words here give the reference number in the the glossary of selected technical terms provided at the end of this article (section 10). Evolving out of the United States $`ARPANET^1`$, first demonstrated in 1972, the Internet did not begin to approach it’s current global size until the mid 1980s. With an estimated user-base of nearly 30 million people in countries as diverse as Australia, Canada, France, Germany, Italy, India, Japan, Mexico and many other countries, these networks are not solely a reflection of any one culture, neither do they represent any isolated and singular society in ethnographic form. Rather the users of these networks represent an assortment of diverse cultures as well as a nonstandard homogeneous mixture of very many societies (societies which explicitly exhibit random heterogeneity in many aspects) from across the globe. The principal objective of this project is to determine if a new culture has emerged on these networks, distinct from that of the networks’ constituent countries. If such a culture has emerged, the objective is to * Provide an ethnographic analysis of this culture and * Determine what effects (either supporting or limiting) this culture has on the societies from which $`newbies^9`$ are being lured into the hypnotic black hole of the Internet. Though primarily intended to focus on mainstream Indian society, the overall analysis of this work is being pursued in a global context keeping in mind the technological ease of communication via Internet between two randomly chosen points on Cyberatlas, i,e, the concept of formation of an egalitarian global village is being streaked. Basically this work will try to provide the answers of the following questions in a self integrated manner * Do the users of the Net form a society with its own distinct culture? If so ……, * what are the key aspects of this culture? * what elements of this culture (if any) have an empowering and what elements of this culture (if any) have a discouraging effect on the standard mainstream society carried via the $`Internauts^7`$? * what measures can be taken to enhance or overcome the empowering and discouraging elements of the culture with the intent of easing the process of enculturation for Internauts to benefit and to prevent the declination of the ethical values of a mainstream society? The process of addressing these questions as well as finding their answers will be carried out through the proper analysis of present status of various fundamental aspects of mainstream society (i,e, education, business, entertainment, politics etc.) and their link with the Net. ## 2 Ethnography of the Net culture To examine whether the Net forms a distinct society supporting its own culture, it is necessary to define the terms Society and Culture in this context. 
A review of the anthropological literature provides the following most suitable definitions: * Society: An abstraction of the ways in which interaction among humans is patterned.(Howard(1989)) In layman's language the definition could be… * A more or less organized group of people of both sexes that share a common culture.(Barnouw(1987)) * Culture: A culture is the way of life of a group of people, the complex of shared concepts and patterns of learned behavior that are handed down from one generation to the next through the means of language and intuition.(Bernard(1988)) There are other ways to describe what society and culture are. Without giving details of other definitions, it can be said that, though not all in the same fashion, all definitions by and large make a clear distinction between the concepts of society and culture. What then are the dependencies between society and culture? Can a society have more than one culture, or can the same culture be shared by more than one society? It seems quite reasonable to assume that all the members (having equal financial background) of a particular society by and large share almost the same culture. It is also implied in the definitions of culture presented by Ember and Ember(1990) and Nanda(1991) that two different societies cannot possess exactly the same culture. One might concede that there is a small possibility that two unrelated societies may, perhaps as a result of similar environments, independently evolve identical cultures, but in practice it can be argued that usually this does not happen. The above discussion is significant in this context because these considerations together imply that if the users of the Net are to be described as having a distinct culture, then they need to form one particular separate society. It would appear from the global span of the Net that this is not the case. This dilemma can be resolved by viewing the Net as a pan-societal superstructure (North(1994)) which is freed of the responsibility of providing a number of properties that can reasonably be expected from any mainstream society (e.g., reproduction, food and shelter), by virtue of the fact that its members are also members of traditional mainstream societies that do supply all these features. Rather it is a melting pot of different components (such as socialization, economics, politics, entertainment) of many of the standard mainstream societies. Thus the Net should be viewed not as an independent society (for it does not provide all of the features of an independent society), but as a superstructural society which, in a statistical view, can be described as the superset of the intersection whose overlapping subsets are the distinct mainstream societies. Regarding the Net's culture we may say this: though it shares many elements with the cultures of the mainstream societies that it spans, and much of it is taken from these mainstream cultures, there is also much in it that represents cultural adaptation to a new environment. If the question arises whether the Net might be considered to be a subculture of some other culture of some particular mainstream society, it is reasonable to ask: what could the Net be a subculture of? Given its global spanning size, it is obvious that there is no one society or culture that can subsume the Net. It therefore seems ineffectual to think of it in subcultural terms. So while inheriting many cultural elements from the societies that it spans, this superstructural Net society nonetheless supports its own distinct culture. 
## 3 Analysis of the Net culture A survey of the literature and thorough netsurfing give the following key aspects of the Net culture….. * Written adaptations to the text-only medium for user-to-user communication include the use of $`emoticons^5`$ to denote emotional intent and the use of characters such as "\*" and "-" to denote emphasis. Conclusion: Language of communication gets a new shape? * In a mainstream society many factors determine how we judge and are judged by other people. Such factors include appearance, gender, race, wealth, dress and occupation. On the mainly text-only channels of intracommunication via the Net, these factors are difficult to determine and one is left with far fewer criteria on which Nauts can be judged. Prestige is acquired within the Net culture primarily through what one writes or through philanthropic actions such as maintaining a mailing list or writing freely distributed software. Conclusion: A new concept of acquiring social prestige, apparently healthier than the conventional one, is launched. * In contrast to the previous feature, for those individuals who rely on judging, or being judged, by such factors as appearance, wealth, sex appeal etc., the mainly text-only communication base of the Net may be an uncomfortable experience. Conclusion: Counter-culturists on the Net? <sup>2</sup><sup>2</sup>2Though the introduction of various highly powerful and user-friendly graphics packages is changing the text-only nature of communication day by day, there is still scope for those who do not want their features to be exposed. Various Net-friendship and Net-dating sites are proper examples of this. * The increasing size of the Net leads to an increased opportunity for the exchange of various resources. Resources such as advice, graphics and software are most often handed around on the Net through a reciprocal exchange system. With the increasing size of the Net yielding a higher population of users available to supply resources, it follows that there is a greater pool of resources from which to draw information. This is a curiously inverted position compared to that of a mainstream society. In such a society, when a resource is shared amongst a number of people, the greater the population the more thinly the resource is distributed. On the Net, however, if a resource is made available for $`FTP^6`$ then it can be copied by any number of people without lessening the share of the resource available to each. Conclusion: Reciprocity is an important feature of the Net's economy. A signature of symbiotic culturization? * Social stratification is present within the Net society; for example, system administrators and news group moderators have powers that most users do not. The most vital point in this regard (which will be discussed in detail later on) is that, due to financial, educational and technical constraints, not all users have equal access to the resources. Conclusion: Net capitalism? * The Net is viewed by some of its misusers as an anarchy which can cause (and in reality, does cause) severe moral degradation of the members (mainly of the teenagers) of the mainstream societies in a direct or indirect way. (A detailed discussion will be presented in the next section.) Conclusion: Degradation of ethical values through the Net. * However, the Net culture has a complex set of conventions and lore called Netiquette to which users are expected to conform. 
Transgressions of Netiquette are supposed to be dealt with by other users in various ways, ranging from written chastisement to the invocation of the police authorities of mainstream societies. Conclusion: Protection against teratoidation of the Net society? After these general discussions about the nature of the Net society and its culture, it is now quite reasonable to break the analysis down into diverse social issues (like education, business, politics etc.) case by case and to study the correlation between the mainstream societies and the Net in these contexts. ## 4 Correlation between the mainstream societies and the Net ### 4.1 Education As already discussed, Nauts have the facility to hide their racial, financial and even gender identity if they want. This makes the Net a suitable medium for distance education and learning. The Net has immense power to change the whole infrastructure of the educational system in a highly positive way. According to Neil Rudenstine, a famous academician from Harvard University, USA, education is basically a dynamic and continuous process which can give its best output if the student-teacher interaction is maximized. The chemistry of this interaction is the backbone of a proper education system. In our mainstream societies, the orthodox educational infrastructure is insufficient to maximize this interaction. Not all students get equal time to interact with the teacher, nor are they paid equal attention. There also remain some psychological barriers which prevent the mobilization of the dynamism of the teacher-student interaction process, for example, the shyness of a student about asking questions in class or the inferiority complex generated by social or financial backwardness. These problems can be overcome if the Net is properly used as an educational medium, because academic interactions via the Net will make the teacher-student interaction more effective and vibrant. According to Nancy Singer, a professor in the Department of Education of the University of Colorado, application of the Net as an educational medium helps students to overcome shyness and to enhance their power of communication, one example of which is the publication of the world's first fully Net-operated newspaper, maintained by the teenage students of the Centennial school. Moreover, the Net's voluminous information bank, along with the application of different graphics packages and audio clips, makes education more interesting and effective. Here one crucial point arises. What does education really mean? When should a person be called educated? Does the process of education mean only the accumulation of information and data and the making of a career? The answer is a strong NO! Real education helps to grow a certain sense of principle and ethics. Without them education is incomplete. It can provide a person a very good career and wealth but can never make him or her a proper human being. And as all these value-related issues of education depend concretely on the ethical views of the teacher concerned, the Net can never give a complete education independently. Regarding communication and information gathering, the Net has absolutely no parallel, but its control has to be taken over by human beings when the question of the generation of values arises. ### 4.2 Spreading of harmful materials As already mentioned, some of the misusers of the Net view it as an anarchy. 
The entry of any arbitrary user into the Net world has created many crucial social problems, all of which are carried to all the mainstream societies via the Net superstructure. The most fatal problem is cyberporno - the open business of pornography across the whole realm of the Net. As there is no bar to entry into the Net, companies like Brandy, Web and Penthouse have launched their nasty business of pornography throughout the Net. A recent report from a research group of Carnegie Mellon University, USA, shows that 83.5 percent of the stored pictures in Usenet are pornographic in nature. Three years back, in 1995, the US Government launched the Communications Decency Act (CDA), but it is a pity that nobody could be arrested under this act, because almost all the companies of the USA appealed to the Supreme Court against it, claiming that it violates their democratic freedom of speech. In 1996, the German Government declared some of the sites illegal and entry into those sites was prohibited. Recently the Singapore Government also had to take the same step. Smith System Engineering, a UK based consultancy, has been awarded a contract from the European Parliament to investigate the feasibility of jamming pornography and racism (another fatal problem, created by racist groups of people such as Neo-Nazis) on the Net. The project, expected to run for six months, will examine the methods whereby offensive materials are distributed and study technical methods of blocking their flow. Unfortunately all these measures could not make the situation much better. There are millions of Net-porno web sites, entry into which requires only an Adult Check ID password which can easily be obtained by paying only 15 - 20 US $`\$`$, and as payment is made by credit card via the Net, there is absolutely no way to check whether the applicants for this card are really over 18 years of age or not! So millions of teenagers are accessing these dirty sites regularly, which can result in (and practically, does result in) a severe degradation of ethical and moral values worldwide. In western capitalist countries, such as the USA and some countries of central and western Europe, numerous teenage crimes have been reported to the police which, according to modern sociologists, are believed to have been committed out of odd sexual obsessions generated by different pornographic sites of the Net. Not only in western countries: Asia's rapidly growing economies are using the Net to integrate themselves with the world economy, but the governments of some of the regions are not entirely comfortable with the phenomenon. The rise in the popularity of the global networks has alarmed some governments which fear unbridled access could lead to the distribution of pornography, spread religious unorthodoxy or encourage political dissidence. None of them has come up with any foolproof plan to screen undesirable materials on the Net, and experts say such control is technically very difficult. So it turns out that ultimately not the technology, but our ethical sense matters in making society free from the evils of misuse of the Net. The combined education, taste, culture, principles and ethics of the Nauts are the recipes which can make the Net culture clean, and here humanity takes over from technology. ### 4.3 Business and Industry Regarding business, the Net has brought massive changes in the way business is done in financially advanced countries. 
Small companies have got a golden opportunity to spread their business in the international market through commercial home pages and web sites. The introduction of the Net helped software companies to make maximum profit. Netscape Communications, a USA based software company, first launched the Netscape Navigator browser which made Netsurfing very easy. After that, Microsoft and Oracle made different software packages which nowadays have immense application in Netsurfing. Besides this, Digital Equipment and Cisco Systems are involved in data transfer technology via the Net. The most important contribution came from Sun Microsystems, which introduced Java, a very powerful language for different kinds of computer applications on the Net. On the other hand, the wide spanning of the networks has created problems for some computer companies as well. For example, the future market of the software packages Workgroup and Networking developed by Novell and Lotus (two branches of the world famous IBM group) became uncertain. Another important contribution of the Net is the introduction of a couple of very powerful online information services, e.g., NETCOM, UUS, PIPEX, DAEMON etc. Their wide application has lessened the business of the electronic world (e-world) packages of some pre-Net online services, such as those of Microsoft and Apple. Probably the most stirring social effect of the introduction of these online information services awaiting us in the near future is the sharp blow to the paper and printing industries. The heavily graphics- and audio-clip-supported WWW is fostering the commercial use of the Net by making it easy and fun for Nauts to check out information and products whenever they wish, with as much depth of multimedia detail as they want. Newspapers, books, magazines and journals can be accessed directly from the Net more easily and quickly than in printed form. This may cause a decreasing demand for printed materials in the near future, resulting in the deterioration of the condition of the paper and printing industries. Who will provide parallel jobs for the workers in these industries has become a major question. Not only in the paper and printing industries: the online shopping system may also create this kind of problem. The world's largest retailer Wal-Mart, the UK's famous Berkeley Group and other companies have launched their shopping malls on the Net, which results in the unemployment of regular workers in various shopping complexes. ## 5 Indian perspective Focusing the analysis on mainstream Indian society, it is quite reasonable to say that the situation in India differs from that in western countries; the reason for this will become clear from the discussion hereafter. The rate of literacy in India is around 52 $`\%`$; among the literate, the number of persons with a strong background in English is comparatively small. So a very small fraction of 950 million Indians have the academic qualifications to deal with the Net. Besides this, one needs around Rs. 50 thousand for buying a computer, along with the standard VSNL charge for getting an Internet connection. Only a small part of the literate Indian population can afford this amount. So the percentage of the total Indian population who are private Net users is very small. In our country, Net facilities are accessed basically by people involved in the academic world, e.g., in universities and research institutions. But the rate at which the market prices of computers and electronic components are decreasing may sharply change the situation in the near future. 
However, Net consciousness is continuously increasing. Here the view differs among the common people: though a part of the Net users are thinking of using the Net to enhance the academic standard in our country, the major part of the Net-using community welcomes the Net looking to their own business interests. The Internet Service Providers (ISPs) see the Indian market as very promising as well as virgin after the open market policy launched by the Indian Government. Here one crucial point must be emphasized: in a poor but promising country like India, all the policies regarding the use and spread of the Internet should be made very carefully, because dealing with the interests of the common people is an extremely sensitive and delicate issue in our country. India has its own completely different form of societal infrastructure compared to the western countries and even to the other developed Asian countries. The form of the social, political and economic problems of mainstream Indian society is an absolutely disjoint set. Here in our country, access to fundamental facilities and essential needs is very thin. Under these circumstances, determining the policy which will fix the usage of the Net needs very special attention as well as deep thought, because the prime aim of spreading the Net in a third world country should necessarily be focused on the issue of the development and prosperity of the common mass; like all other facilities, it should not be unevenly heaped onto a small class of capitalist elements. The policy-makers should not be biased into lumping all the facilities of the Net only among the upper class, keeping in mind the fact that mass communication is a vital factor for the overall development of an underdeveloped third world country and that the Net has immense potential for this purpose. The spreading of the application of the Net amongst the have-nots of Indian society should definitely bring a prosperous future, provided the supreme control is in proper hands. A spark of effort in this regard, though very tiny (yet showing a very bright prospect), has recently been observed in Andhra Pradesh. Ranadeep Sudan, an additional secretary of the Chandrababu ministry, has made a rough sketch of how to use the Net as the mass communicator between the various public sectors and the common people. This is being considered by experts as a very important step in the Government's information technology policy. In this concept paper on using the Net properly, it has been explained thoroughly how, by 2001 AD, an electronic network will start operating fully to communicate between the administration and the common mass. All sorts of official information, application forms for jobs, electricity, water, and even for individual ration cards will be available online, instantly, 24 hours a day via the Net. Parallel to the public sector administrations, hospitals, nursing homes, libraries and all other voluntary services and organizations will be connected through the Net. For the convenience of the common people, intermediate Internet booths (like the local telephone exchanges) will be operated which will connect the vital nodes throughout the state, resulting in smooth and fast operation of the Net. Clearly the vision behind this effort is absolutely magnificent. This will lessen the hazards encountered in various public sectors while dealing with the problems of the common people and will lubricate the dynamism of activity in the administration and secretariat. 
Parallel to these kinds of individual philanthropic activities, the open market policy in our country renders the role of VSNL (Videsh Sanchar Nigam Limited) a questionable issue. According to the present status of the Central Government policy, only VSNL will moderate all sorts of Internet connections (institutional as well as private) up to 2004 AD. This tenure may be stretched indefinitely in the future. The monopoly of VSNL as the gateway access has become a contentious issue among the people who are inclined towards the privatization of Net administration in India. ## 6 The Information Fatigue Syndrome: a new socio-psychological problem One striking social problem generated worldwide by Internet use is Internet addiction, which, though it has not directly affected mainstream Indian society that much till now, has spread widely among Net users in western countries. Internet addicts are said to spend increasing amounts of time online and to feel agitated or irritable when off-line. Even without a keyboard at hand, a few make voluntary or involuntary typing movements with their fingers. Many of them have practically withdrawn themselves from mainstream social activities. Some find themselves with monthly online bills of around 500 US $`\$`$ which they cannot afford. Some modern counsellors (psychoanalysts) have found, by extensive research on a broad group of people, that excessive Internet use can have serious consequences, from mental depression to dropping out of school or college, even divorce. Nowadays, in most of the western countries, Internet addiction has been compared to drug addiction. Being a $`behavioral`$ $`addiction_2`$, Net addiction has been closely compared to marijuana, because the marijuana plant Cannabis Sativa contains a group of chemicals called tetrahydrocannabinols (THC) which activate stimuli that are activated in almost the same way by heavy Internet use. However, the THC group can have more direct physiological effects than the Internet, like major physiological effects on the cardiovascular and central nervous systems (note that the Net can also have effects on the central nervous system, though smaller in extent compared to the THC group), short term loss of memory, loss of balance and difficulty in completing thought processes. According to one eminent psychiatrist, the Internet is a strong psychostimulant. Internet addiction, like any other addiction, has signs and symptoms which can be recognized by the addict and by those close to the addict. This addiction is technically termed the Information Fatigue Syndrome. Some of the possible physiological correlates of heavy Internet usage may be listed in the following manner… * A conditioned response (increased pulse, blood pressure) to the modem connecting. * An altered state of consciousness during long periods of $`dyads^4`$/ small group interaction (total focus and concentration on the screen, similar to a meditation/trance state). * Dreams that appear as scrolling text and pictures. * Extreme irritability when interrupted by people/things in real life while immersed in cyber-space. Like all other addictions, whether a chemical addiction or another type, Net addiction also has some strong side effects on mainstream societies. Net addiction makes users ignore their surroundings as well as withdraw themselves from the activities of the mainstream societies; they even tend to commit crimes when their online activities are interrupted by somebody. 
$`Chat`$ $`rooms^3`$ are said to be the most addictive aspects of the Internet. According to Dr. Howard Shaffer, the associate director of the division of addiction at Harvard University, and Dr. Ivan Goldberg, MD (Psychiatry), of the Internet Addiction Support Group, the most fatal and widespread effect of Internet addiction on Net users is that important occupational, interactive and recreational activities are abruptly reduced (or even completely given up in many cases) because of excessive use of the Net; this, in turn, affects mainstream society, resulting in the complete demolition of the well-organized intrasocietal interaction pattern. Internet addiction is considered to be a serious social problem for which treatments and remedies sincerely need to be found. The most prominent effort in this respect is the Centre For On-line Addiction (COLA), a cyberclinic devoted to the study of cyberspace addiction, headed by Dr. Kimberly S. Young. Some other voluntary online cyberclinics have also been established to fight against Net addiction, among which the most promising one is the Internet Addiction Disorder Support Group by Dr. Ivan Goldberg. ## 7 Future perspective (global) Thirty years back, telecommunication maestro Marshall McLuhan launched the dream of building the global village; it is not far from reality today with the gigantically powerful cybercommunication system in hand - the Internet. It should not sound exaggerated if the Internet is called the most powerful and the largest machine ever constructed by the human race. With around 50 thousand networks, 6.6 million computers and 30 million daily users, this information super-highway has linked 160 countries of the world. In 1993, the Clinton administration of the USA made a very important administrative plan - the National Information Infrastructure (NII) - the aim of which was to create a network of computer networks which would allow the integration of computer hardware, software, telecommunications and shells, making it easy to connect people with one another via computers and opening wide the access to a vast array of services and information. A proper application of this will lead us to the concept of the Global Information Infrastructure (GII), under the operation of which the remotest part of the globe can be connected to any other place in any country within a fraction of a second. In a recent interview given to Newsweek magazine, Prof. Russell Newmann (head, department of International Communication, Kennedy School of Government, Harvard) said that the future global village, independent of geographical and political barriers, would be built on the basis of the mentality of the members of different mainstream societies. People having the same or almost the same mentality and way of thinking will be grouped together to make a united, clean and egalitarian new society with its own culture adopted from, and given back to, the Net culture. But will this dream of equality among the people be fulfilled? Will a person from a poverty-stricken third world country get equal opportunity and facilities, in comparison to a Net user from western capitalist countries having a strong financial background (which usually makes all the difference in mainstream societies), in that global village? Or, more precisely, do people coming from different financial and social backgrounds really acquire equal prestige in the Net culture? Does the Net culture support that much democracy? 
If so, who will determine the ultimate fate of the human race - is it the class of couch potatoes, sitting lazily with the computer mouse in hand, who will dictate the future of mankind? The spreading of the Net across the globe at an exploding rate makes the whole human race face all these very crucial questions. An effort will be made to provide the answers to these questions in the next sections. ## 8 Quo Vadis? The whole analysis of this work reflects the idea that the structure of the mainstream societies as well as the activities of their members are influenced very much by the pan-societal superstructure of the Internet and by the culture evolved in it. This invites the following most fundamental questions…. * Does the Net society and its culture support enough democracy to make us dream of a future society where everybody will get an equal share of all fundamental rights in a socialistic infrastructure? If not, then…… * Will the Net culture, along with its social stratification, lead us to a future world where various essential resources will be accumulated only among people having a strong financial background, e.g., will the global village become a capitalism dominated society? And, in the worst case….. * Does the Net culture contain so many elements of anarchism that it can demolish all the ethical values in future mainstream societies as well as in the global village, by creating an ethically crippled future generation devoid of principles and values? To draw the final conclusion by searching for the answers to the questions addressed above, it is necessary to have a critical eye on the classification of the mainstream societies from anthropological, economic and political viewpoints and on the structured correlation of the Net culture with it. One important way in which anthropologists classify different societies is according to the degree to which different groups within a society have unequal access to advantages such as resources, prestige and power (Ember and Ember(1990), Murphy(1989), Nanda(1991), Howard(1989)). The stratification of different groups within a society gives rise to three different types of societies being generally recognized…… 1. Egalitarian Society: Societies with the least stratification. 2. Ranked Society: People are divided into hierarchically ordered groups that differ in terms of social prestige, but not significantly in terms of access to resources or power. 3. Class-based Society: People are divided into hierarchically ordered groups that differ in terms of access not only to prestige, but also and mainly to resources and power. However, though the reciprocity of exchanging various resources and the new concept of acquiring prestige (already discussed at the beginning) apparently represent the semi-egalitarian features of the pan-societal Net superstructure, the individual wealth necessary to support the cost of Net access via a commercial carrier limits the number of people from mainstream societies who can access the facilities of the Net, which clearly indicates the presence of strong capitalism in the Net culture. It is quite obvious from the argument given above that unless the Governments of all the mainstream societies can provide equal financial facilities to all the members of society for accessing the Net - which is far from reality in approximately all the class-based societies - then, instead of being an egalitarian one, the future global village will obviously become a strongly capitalism dominated society. 
So it seems that the only way to make the global village strongly egalitarian is the establishment of socialism in all the mainstream societies. Regarding the Net culture: because supporting advanced computer technology is strongly tied to the question of national wealth, it is quite obvious that, for all the mainstream societies, the percentage right as well as the scope to contribute their individual cultures to the melting pot of the grand Net culture will depend on their society-wise financial strength, the inequality of which clearly indicates that the Net culture will also be dominated by the members of the financially advanced societies only. In such a culture, though apparently the process of acquiring prestige is newly defined, ultimately individual as well as national wealth will be the only dominant factor. Along with the above mentioned complexities, there is a strong possibility that the Net culture may deteriorate the ethical values of the future generation of the mainstream societies worldwide through some of the anarchical activities of its misusers. Though the absence of a centralized decision-making body and the rule-of-the-people approach of the Net seem, at first, to accord classical democracy a good likeness to the political structure of the Net, the approach of classical orthodox democracy is not a good metaphor for the political structure of the Net, because democracy is best seen as a principle involving political consent and control on the part of the governed that may find expression in various political practices and forms of government, which, in practice, is absent in the Net culture. Rather the Net is viewed as an anarchy by many of its members, though this anarchism is not as formal as the nineteenth century definitions of anarchism described by Michael Bakunin and Peter Kropotkin, the essence of which is (Hagopian(1985)) ….. * The negation of the state. * The abolition of private property. * Revolution. * The rejection of religion. * The foundation of a new co-operative order. Judging from the revolutionary tone of the above list, it apparently seems quite incompatible to describe the Net culture as an anarchy in this sense. For example, till today no claim has been made that because the Net society is an anarchy, it should advocate the rejection of religion. (Rather, there are many websites which convey profound persuasion in favour of the different religions of very many diverse mainstream societies.) So what then does it mean if the Net culture is described as an anarchy? The following discussion provides insight into this. The term anarchy comes from anarch, which means without leader. Taking the definition of anarchy as a society with no government that polices itself by peer review, and with no laws imposed by a central authority, the Net perfectly fits the bill, as the anarchist's philosophy of individual responsibility is completely fulfilled in this culture. Here, though, the term individual may not always indicate the individuality of a single user; rather it points mostly to the individual websites. In general, the system administrator or the webmaster is the absolute ruler of his site, which in fact makes the sites dictatorships rather than anarchies when considered individually. (This is not necessarily always bad, because many are benevolent dictatorships.) 
At the same time there is no possibility for the system administrator (or webmaster) of a particular site to set the rules for any other site, and there is absolutely no concept of any super administrator who governs all the individual sites of the Net. So the Net is not centrally governed although the individual sites are. Thus, even though locally the individual sites may not be, the Net as a whole can really be described as a co-operative anarchy which carries the following features of anarchism… * Absence of government. * No form of centrally imposed control. * No mechanism available for correction to take place at a Net-wide level. Another very uncomfortable feature of the Net culture is the demolition of the identity and ethnic heritage of a classical society. Every member, every piece of information, every individual deep thought is nothing but a number, a website, an $`IP`$ $`address_8`$ in the Net culture. ## 9 Epilogue So ultimately it seems that though the Net culture has prominent positive effects on mainstream societies, central control of it should always be in proper hands, because computers can never take over from human beings as far as the question of ethical sense is concerned. We thus should always keep in mind that the Internet, being immensely powerful in changing the whole structure of human society, should be handled very carefully with the intent of philanthropic activities. The Internet, the crowning achievement of present technology, can be described as a sharp weapon. Whether this weapon is used as a surgical knife to operate on the evils of society or as a killing chopper - that decision is left to the humanism of its users as well as to the future generation. ## 10 Glossary of selected technical terms 1. ARPANET : The world's first computer network was a U.S. government-funded wide area network called the ARPANET (Advanced Research Projects Agency Network). The first ARPANET Interface Message Processor (IMP) was installed on Sept 1, 1969; it had only 12 KB of memory, although IMPs were considered powerful minicomputers in their time. In the early 1980s the ARPANET was split into two separate networks; the new counterpart was the MILNET, a non-classified US military network which is still in service, having its headquarters in Virginia. For an excellent overview of the history of the Net, see The History of the Net, Master's Thesis by Henry Edward Hardy, School of Communications, Grand Valley State University. 2. Behavioral Addiction : A behavioral addiction is one in which an individual is addicted to an action and not necessarily a substance. Here people may become addicted to activities even when there is no true physiological dependence. 3. Chat Room : Users within the Net community electronically interact with each other through chat programs which provide a textual interaction amongst their users. Participants do not speak to each other; rather they type their dialogue at their computer keyboard, which then appears on the screens of all the other users on-line. Websites devoted to this business are called chat rooms. 4. Dyads : Online interaction between two persons via computer. 5. Emoticons : The lack of verbal and visual cues in the textual medium of the Net has resulted in its users developing and adopting symbols (which can be typed on a keyboard) to clarify their emotional intent. Those symbols are called emoticons. For detailed information and a complete list of emoticons, see The New Hacker's Dictionary, Raymond, E. 
ed. (1993), 2nd edn., Cambridge: MIT Press. 6. FTP : The Net enables its users to copy files between other computers on the Net and their own, keeping the original source file intact. This process is called file transfer, and the program used for this purpose is called the File Transfer Protocol (FTP). 7. Internauts : A person who surfs through the Internet. 8. IP Address : Any computer connected to the Net has its own unique number (like 206.101.167.195) which is used to identify or refer to that computer in the network system. 9. Newbies : A person who surfs through different websites is called a webie, so new webies are referred to by this term.
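As a small illustration of glossary entries 6 and 8 above, the sketch below shows how a hostname is mapped to the numeric IP address that identifies a machine on the Net. It is only an illustrative Python example; the hostname and the printed address are placeholders, not details taken from this article.

```python
import socket

# Every machine reachable on the Net is identified by a unique numeric IP address
# (glossary entry 8). Hostnames are human-friendly labels that must be looked up and
# translated into such an address before, for example, an FTP transfer (glossary
# entry 6) or a chat session can take place.
hostname = "example.com"               # placeholder hostname, purely illustrative
address = socket.gethostbyname(hostname)
print(f"{hostname} -> {address}")      # prints a dotted number such as 206.101.167.195
```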
# Remarks on Inflation ## Abstract It has been shown that sub-Planckian models of inflation require initial homogeneity on super-Hubble scales under certain commonly held assumptions. Here I remark on the possible implications of this result for inflationary cosmology. The observed homogeneity of the universe on superhorizon scales can be explained if there was a period of accelerated expansion (inflation) in the early universe that took a relatively small homogeneous patch of space and blew it up to encompass a volume larger than what we observe today. In a recent paper VacTro98, Mark Trodden and I addressed the issue of how large the initial homogeneous patch has to be. The result we obtained is that there is a lower bound on the inflationary horizon, $`H_{inf}^{-1}`$, which depends on the pre-inflationary cosmology. In particular, if the pre-inflationary cosmology is a Friedman-Robertson-Walker (FRW) universe, we must have $`H_{inf}^{-1}>H_{FRW}^{-1}`$. If we further assume that, to get inflation with Hubble constant $`H_{inf}`$, the appropriate inflationary conditions (homogeneity, vacuum domination etc.) must be satisfied over a region of physical size $`L`$ which is larger than $`H_{inf}^{-1}`$, then we obtain $`L>H_{FRW}^{-1}`$. Hence the conditions for inflation need to be satisfied on cosmological scales specified by the pre-inflationary epoch. In the particular case of a radiation dominated FRW, the problem seems to be even more severe since the causal horizon coincides with $`H_{FRW}^{-1}`$. Hence, it is clear that to solve the homogeneity problem, inflationary models in which inflation emerges from a non-inflationary, classical epoch, must assume large-scale homogeneity as an initial condition. Perhaps more important than the result itself is the fact that we have identified the conditions under which such a derivation is possible. The key assumptions are that Einstein's equations and the weak energy conditions are valid, and that spacetime topology is trivial. (We also assume that singularities apart from the big bang are absent.) If these conditions hold, one would conclude that sub-Planckian inflation alleviates the large-scale homogeneity problem but does not solve it<sup>1</sup><sup>1</sup>1The word "solve" may hold different meanings for different individuals. My view is that a solution should not assume the result it purports to obtain.. Note that there are two related issues that are brought to the forefront in the above discussion. The first is - do inflationary models (within the stipulated assumptions) solve the homogeneity problem? - and my answer to this question is in the negative. The second is - can inflation occur, given that it seems to require unlikely initial conditions? Here the answer is in the positive - inflation can indeed occur as long as we have a suitable physical theory. However, one may wish to further consider the likelihood of having no inflation or the probabilities with which various kinds of inflation can occur, and this is where the questions become difficult. Hopefully some of the issues involved will become clearer by the end of this article. A way around the result in VacTro98 is to consider inflationary models in which inflation starts at the Planck epoch (for example, chaotic inflation Linrefs ) since these do not contain a classical pre-inflationary epoch. Hence, at least as far as classical physics goes, inflation is imposed as an initial condition in these models. 
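For readability, the chain of bounds discussed above can be collected in one place; this is only a restatement of the inequalities already given in the text, not an additional result:

$$L \;>\; H_{inf}^{-1} \;>\; H_{FRW}^{-1},$$

so that, in particular for a radiation-dominated pre-inflationary FRW epoch, where the causal horizon is of the order of $`H_{FRW}^{-1}`$, the initial patch must be homogeneous over a scale comparable to (or larger than) the causal horizon of the pre-inflationary era.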
Such inflationary initial conditions are sometimes justified by using quantum cosmology – if the wavefunction of the universe is “highly peaked” around the inflationary initial conditions, one might say that these conditions are favored. However, what if the wavefunction only has a small tail around the inflationary initial conditions? I do not think that that would exclude an inflating epoch of our universe since, it could be argued, most of the other non-inflating universes (where the peak of the wavefunction is) would not be able to harbor observers such as us. Hence the argument appears to be inconclusive at present. On the other hand, anthropic arguments are essential to any theory in which the creation of universes is probabilistic. As our understanding of astrophysics and gravity improves, the arguments are likely to get sharpened. As a very simple example of possible forthcoming refinements, if we assume that life can only exist on planets, then an understanding of the cosmological conditions leading to maximal planetary formation will help in narrowing down a measure for calculating probabilities. Guth Guth has given a persuasive argument for believing that inflation took place in the early universe in spite of any required unlikely initial conditions. He likens inflationary cosmology to the evolution of life. Today we observe a rich variety of life forms, and also a large homogeneous universe. It is hard to imagine how all the miraculous forms of life could have been created directly. Similarly, it is hard to explain the direct creation of our universe. These wonders are easier to comprehend in terms of evolutionary theory - life started out in the shape of some very simple molecules, which then inevitably evolved into the present forms of life. Similarly, a small patch of the universe that satisfied some suitable properties underwent inflation and inevitably evolved into our present universe. So Guth makes the correspondence shown in Fig. 1. I find this to be a very compelling argument for the existence of an inflationary phase of the universe. The discussion in Ref. VacTro98 impacts on this correspondence in the first stage - what is the chance of getting a small patch with the correct conditions? This is the same question that biologists may ask - what is the chance of getting the first few molecules from which life can follow? In the theory of evolution, the formation of the right kind of molecules would depend on the geological and climatic conditions at the time. For example, the occurrence of lightning storms could facilitate chemical reactions that could enhance the probability of forming the molecules. Similarly we have found that the chance of getting a small patch in the universe with all the right conditions for inflation is very low, but that there are conditions under which this probability can be enhanced. So I would like to propose a further extension to the correspondence as shown in Fig. 2. Our philosophy in adopting inflation as a paradigm is that it greatly enhances the probability for the creation of the universe that we see, even though it is at the expense of invoking a fundamental scalar field - the inflaton. Then, since the introduction of the other factors in the extended correspondence can further enhance the chances of creating an inflating universe, this same philosophy guides us to include them as part of the paradigm. 
I should add that the preceding argument is not completely obvious, as Guth and Linde explain, since inflation washes out the initial conditions and a scheme to obtain more likely initial conditions may only provide an infinitesimal increase in the final probability of observing a homogeneous universe. However, if we consider a physical theory in which direct creation of the universe, sub-horizon inflation, as well as super-horizon inflation are all possible, and in which the inflationary phases occur with the same expansion rates, it seems clear that the most spatial volume will be produced by the sub-horizon inflation. This will be true even in the case when the inflation is “eternal”. At this conference Koffman has raised the practical question of how these developments should affect the way inflationary cosmology is studied. I would first of all suggest recognition of the fact that current inflationary models do not “solve” the homogeneity problem but only alleviate it. To solve the problem, one must resort to relaxing at least one of the common assumptions. Linde has been advocating the path of Planckian inflation for a variety of reasons. The path of quantum cosmology has also been taken by a number of researchers but, based on what I discussed above, these approaches appear to leave open a number of difficult questions that need to be answered before we can claim to have understood the homogeneity of the universe. To me two other approaches seem more promising. The first is to study the conditions under which quantum effects can give rise to violations of the weak energy condition (such as pursued by Ford and collaborators For ), and subsequently to creation of baby universes (for an early related paper see FarGutGuv ). The second approach is to explore modifications of the classical Einstein equations. This also ties in with the fact that essentially any high energy theory that is being considered these days does include such modifications. If quantum effects or extensions to Einstein’s equations lead to the conclusion that inflation can begin from a microphysical patch, profound consequences will follow since it would open the possibility of creating universes in the laboratory. One could further imagine that stellar collapse, for example, may lead to baby universe creation, and that there may be other universes lurking in the centers of galaxies. It is a pleasure to thank Alan Guth, Andrei Linde and Alex Vilenkin for describing their views on inflation and also for their patience in this matter. I am grateful to Lawrence Krauss and Mark Trodden for discussions. This work was supported by the Department of Energy.
# A possible new phase of commensurate insulators with disorder: the Mott Glass ## Abstract A new thermodynamic phase resulting from the competition between a commensurate potential and disorder in interacting fermionic or bosonic systems is predicted. It requires interactions of finite extent. This phase, intermediate between the Mott insulator and the Anderson insulator, is both incompressible and has no gap in the conductivity. The corresponding phase is also predicted for commensurate classical elastic systems in the presence of correlated disorder. The interplay between disorder and interactions gives rise to fascinating problems in condensed matter physics. Although the physics of disordered noninteracting systems is by now well understood, when interactions are included the problem is largely open, with solutions existing for one dimensional systems or through approximate scaling or mean-field methods. A particularly interesting situation occurs when the non-disordered system possesses a gap. This arises in a large number of systems such as disordered Mott insulators, systems with external (Peierls or spin-Peierls systems) or internal commensurate potential (ladders or spin ladders, disordered spin-1 chains). This situation is also relevant for classical problems such as elastic systems subjected to both a periodic potential and correlated disorder, as encountered e.g. in vortex lattices in superconductors. Although in some cases an infinitesimal disorder can suppress the gap due to Imry-Ma effects, in most cases a finite amount of disorder is needed to induce gap closure. In the latter case, the complete description of the gap closure and of the physics of the resulting phases is extremely difficult with the usual analytic techniques such as perturbative renormalization group (RG), due to the absence of a weak coupling fixed point. In $`d=1`$, where attempts at solving this problem could be made, it was believed that two phases existed: a weak disorder phase where the gap is robust and the system has all the characteristics of the pure gapped insulator, and a strong disorder phase where the gap is totally washed out by disorder and the system is a simple compressible Anderson insulator. However the techniques used so far are either restricted to special points or suffer from serious limitations: the simple perturbative RG has to be used outside its regime of validity to describe the transition between two strong coupling phases. Furthermore, up to now mostly onsite interactions have been studied. In this Letter we thus reexamine this problem using better suited methods that capture some nonperturbative effects: a variational calculation and a functional renormalization group method. We focus first, for simplicity, on one dimensional interacting spinless fermions in the presence of a commensurate potential. As argued below we expect similar physics to hold for systems with spins. Our main finding is that, in addition to the above mentioned phases, an intermediate phase exists, as shown in Fig. 1. Quite remarkably, although this phase possesses some of the characteristics of an Anderson insulator, namely a non-zero ac conductivity at low frequency, it remains *incompressible*. Since it is also dominated by disorder, we call it a Mott glass. We discuss here the physical characteristics of this novel phase and its interpretation for fermionic as well as bosonic systems. 
We argue that such a phase is not specific to $`d=1`$ and discuss its generalization to higher dimensions, both for quantum and classical systems. Disordered interacting spinless fermions subjected to a commensurate periodic potential are described in one dimension by $$H=\int dx\left[-i\hbar v_F\left(\psi _+^{\dagger }\partial _x\psi _+-\psi _{-}^{\dagger }\partial _x\psi _{-}\right)-g\left(\psi _+^{\dagger }\psi _{-}+\psi _{-}^{\dagger }\psi _+\right)+\mu (x)\rho (x)\right]+\int dx_1\,dx_2\,V(x_1-x_2)\rho (x_1)\rho (x_2)$$ where $`\pm `$ denote fermions with momentum close to $`\pm k_F`$, as is standard in $`d=1`$. $`V`$ is the interaction, $`\mu (x)`$ an on-site random potential and $`g`$ the strength of the periodic potential opening the gap. In $`d=1`$ it is convenient to use a boson representation of the fermion operators. This leads to the action $$S/\hbar =\int dx\,d\tau \left[\frac{1}{2\pi K}\left[\frac{1}{v}(\partial _\tau \varphi )^2+v(\partial _x\varphi )^2\right]-\frac{g}{\pi \alpha \hbar }\cos 2\varphi +\frac{\xi (x)}{2\pi \alpha \hbar }e^{i2\varphi (x)}+\text{c.c.}\right]$$ (1) where $`\alpha `$ is a short distance cutoff (of the order of a lattice spacing) and the field $`\varphi `$ is related to the total density of fermions by $`\rho (x)=-\nabla \varphi (x)/\pi `$. The interactions are totally absorbed in the Luttinger liquid coefficients $`v`$ (the renormalized Fermi velocity) and $`K`$, a number that controls the decay of the correlation functions. The noninteracting case ($`V=0`$) gives $`K=1`$ and $`v=v_F`$. We focus in the following on the repulsive ($`V>0`$) case, for which $`0<K<1`$ depending on the strength and range of $`V`$. For weak disorder one can separate in the random potential the Fourier components close to $`q\approx 0`$ (forward scattering) and $`q\approx 2k_F`$ (backward scattering). We have retained here, for simplicity, only the backward scattering. Forward scattering, studied elsewhere, does not lead to qualitative changes in the physics presented here. The backward scattering is modeled by the complex gaussian random variable $`\xi `$ such that $`\overline{\xi (x)\xi ^{*}(x^{\prime })}=W\delta (x-x^{\prime })`$. Perturbatively, both the disorder (in the absence of the commensurate potential) and the commensurate potential (in the absence of disorder) are relevant (respectively for $`K<3/2`$ and $`K<2`$). Let us now study this problem using a variational method. We first use replicas to average over disorder in (1). We then approximate the replicated action by the best quadratic action $`S_0=\frac{1}{2}\sum _{q,\omega _n}\varphi _{q,\omega _n}^aG_{ab}^{-1}(q,\omega _n)\varphi _{-q,-\omega _n}^b`$ where the $`G`$ are the variational parameters and $`a,b`$ replica indices. Skipping the technical details, we find that the saddle point solution yields $$vG_c^{-1}(q,\omega _n)=\frac{1}{\pi \overline{K}}(\omega _n^2+v^2q^2)+m^2+\Sigma _1(1-\delta _{n,0})+I(\omega _n)$$ (2) where $`G_c=\sum _bG_{ab}`$ is the connected Green function. The parameters $`m`$, $`\Sigma _1`$ and the function $`I(\omega _n)`$ obey closed self consistent equations. The important physical quantities are simply given in terms of (2): the static compressibility reads $`\kappa =\lim _{q\to 0}\lim _{\omega \to 0}q^2G_c(q,\omega )`$ while the conductivity is given by the analytical continuation to real frequencies: $`\sigma (\omega )=[\omega _n\lim _{q\to 0}G_c(q,\omega _n)]|_{i\omega _n\to \omega +i\delta }`$. Note that both quantities stem from the *same* propagator but with *different* limits $`q\to 0`$ and $`\omega \to 0`$. 
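As an illustration of how the two orders of limits differ, one can read the static compressibility directly off Eq. (2); this is a short sketch using only the quantities defined above (not a statement from the original Letter), and the resulting values are the ones quoted for the individual phases below. Taking the static component first ($`\omega _n=0`$, so that the $`\Sigma _1(1-\delta _{n,0})`$ term drops out),

$$q^2G_c(q,0)=\frac{vq^2}{\frac{v^2q^2}{\pi \overline{K}}+m^2+I(0)}\;\underset{q\to 0}{\longrightarrow }\;\kappa =\begin{cases}0, & m^2+I(0)>0,\\ \pi \overline{K}/v, & m^2+I(0)=0,\end{cases}$$

so a nonzero mass $`m`$ forces $`\kappa =0`$ (incompressibility), while $`m=0`$ together with $`I(0)\to 0`$ (as in the Anderson glass solution described below) returns the pure-system value $`\pi \overline{K}/v`$.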
Since we expect the physics to be continuous for small enough $`K`$ (i.e. repulsive enough interactions), one can gain considerable insight by considering the classical limit $`\hbar \to 0`$, $`K\to 0`$ keeping $`\overline{K}=K/\hbar `$ fixed. In this limit one can solve analytically the saddle point equations and compute $`m`$, $`\Sigma _1`$ and $`I(\omega _n)`$. The resulting phase diagram is parameterized with two physical lengths (for $`K\to 0`$): The correlation length (or soliton size) of the pure gapped phase $`d=((4g\overline{K})/(\alpha v))^{-1/2}`$ and the localization (or pinning) length $`l_0=((\alpha v)^2/(16W\overline{K}^2))^{1/3}`$ in the absence of commensurability. We find three phases as shown in Figure 1: *Mott insulator* \[MI\]: At weak disorder we find a replica symmetric solution with $`\Sigma _1=0`$ but with $`m\neq 0`$ ($`m`$ depends on the disorder). $`m\neq 0`$ leads to zero compressibility $`\kappa =0`$. $`m`$ defines the correlation length $`\xi `$, with $`\xi ^2=v^2/(\pi \overline{K}m^2)`$, in the presence of both the disorder and the commensurate potential. The effect of disorder is to increase $`\xi `$ compared to the pure case, since it reduces the gap created by the commensurate potential. We find that $`\xi `$ is given by $`(d/\xi )^2\mathrm{exp}[\frac{1}{4}(\xi /l_0)^3]=1`$. We obtain $`\sigma (\omega )=0`$ if $`\omega <\omega _c=m\sqrt{1+\lambda -3(\lambda /2)^{2/3}}`$ where $`\lambda =(\xi /l_0)^3`$. The physics of this phase is similar to that of the simple Mott insulator. However, the gap in the conductivity decreases when disorder increases, and closes for $`\lambda =2`$. For $`\lambda >2`$ ($`d/l_0>0.98`$) the RS solution becomes unphysical *even though* the mass $`m`$ remains finite at this transition point. For stronger disorder one must break replica symmetry. In the absence of a commensurate potential such a solution describes well the $`d=1`$ Anderson insulator, in which $`\Sigma _1\neq 0`$. Here, however, *two* possibilities arise depending on whether the saddle point allows for $`m\neq 0`$ or not: *Anderson Glass* \[AG\]: For large disorder compared to the commensurate potential, $`d/l_0>1.58`$, $`m=0`$ is the only saddle point solution. In this case one recovers exactly the solution of the Anderson insulator with interactions but no commensurate potential. We call it Anderson glass to emphasize that this phase is dominated by disorder (the corresponding phase for bosons, also described by (1), is the Bose glass). Within the variational approach, the AG has a finite compressibility (identical to that of the pure system, $`\kappa =\pi \overline{K}/v`$) and the conductivity starts as $`\sigma (\omega )\sim \omega ^2`$, showing no gap. Physically this is what is naively expected if the disorder washes out the commensurate potential completely. While the MI and AG were the only two phases accessible by previous techniques, we find that an intermediate phase exists between them. *Mott Glass* \[MG\]: For intermediate disorder, $`0.98<d/l_0<1.58`$, a phase with *both* $`\Sigma _1\neq 0`$ and $`m\neq 0`$ exists. $`m`$ and $`\Sigma _1`$ define two characteristic lengths in the intermediate phase, $`m^2+\Sigma _1`$ remaining constant in the MI and MG. On the other hand $`m`$ varies and vanishes (discontinuously within the variational method) at the transition from MG to AG. The MG is thus neither a Mott nor an Anderson insulator. In particular, the optical conductivity has *no gap* for small frequencies, $`\sigma (\omega )\sim \omega ^2`$, while due to $`m\neq 0`$, the system is *incompressible*, $`\kappa =0`$. 
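As a minimal numerical sketch of the Mott-insulator results quoted above, the Python snippet below solves the self-consistency condition for $`\xi `$ and evaluates the conductivity gap $`\omega _c`$. The two formulas are simply taken from the text as reconstructed above (so this is a consistency check, not an independent derivation); the function names and the sample values of $`d/l_0`$ are illustrative choices, not from the original Letter.

```python
import numpy as np
from scipy.optimize import brentq

XI_MAX = (8.0 / 3.0) ** (1.0 / 3.0)  # turning point of the self-consistency condition (units of l0)

def correlation_length(d):
    """Solve (d/xi)^2 * exp[(xi/l0)^3 / 4] = 1 for xi, with all lengths in units of l0.
    This is the replica-symmetric (Mott insulator) branch; the physical root is the one
    continuously connected to xi = d when the disorder is switched off."""
    f = lambda xi: 2.0 * np.log(d / xi) + 0.25 * xi**3
    return brentq(f, d, XI_MAX)

def optical_gap(d):
    """Return (omega_c / m, lambda), with omega_c/m = sqrt(1 + lam - 3*(lam/2)**(2/3))
    and lam = (xi/l0)^3, as quoted in the text."""
    lam = correlation_length(d) ** 3
    return np.sqrt(max(1.0 + lam - 3.0 * (lam / 2.0) ** (2.0 / 3.0), 0.0)), lam

for d_over_l0 in (0.2, 0.5, 0.8, 0.9, 0.98):
    wc, lam = optical_gap(d_over_l0)
    print(f"d/l0 = {d_over_l0:4.2f}   lambda = {lam:5.3f}   omega_c/m = {wc:5.3f}")

# The gap shrinks as disorder grows and vanishes when lambda reaches 2, i.e. at
# d/l0 = 2^(1/3) * exp(-1/4) ~ 0.98, matching the MI -> MG boundary quoted above.
```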
These properties are shown in Fig. 2. This result is quite remarkable since by analogy with noninteracting electrons one is tempted to associate a zero compressibility to the absence of available states at the Fermi level and hence to a gap in the conductivity as well. Our solution shows this is not the case, when interactions are turned on the excitations that consists in adding one particle (the important ones for the compressibility) become quite different from the particle hole excitations that dominate the conductivity. Physical arguments are also in favor of the existence of the Mott Glass, both for systems with or without spins. Let us consider the atomic limit, where the hopping is zero. If the repulsion extends over at least one interparticle distance, leading to small values of $`K`$, particle hole excitations are lowered in energy by excitonic effects. For example for fermions with spins with both an onsite $`U`$ and a nearest neighbor $`V`$ the gap to add one particle is $`\mathrm{\Delta }=U/2`$. On the other hand the minimal particle-hole excitations would be to have the particle and hole on neighboring sites (excitons) and cost $`\mathrm{\Delta }_{\text{p.h.}}=UV`$. When disorder is added the gaps decrease respectively as $`\mathrm{\Delta }\mathrm{\Delta }W`$ and $`\mathrm{\Delta }_{\text{p.h.}}\mathrm{\Delta }_{\text{p.h.}}2W`$. Thus the conductivity gap closes, the compressibility remaining zero . According to this physical picture of the MG, the low frequency behavior of conductivity is dominated by excitons (involving neighboring sites). This is at variance from the AG where the particle and the hole are created on distant sites. This may have consequences on the precise low frequency form of the conductivity such as logarithmic corrections. When hopping is restored, we expect the excitons to dissociate and the MG to disappear above a critical value $`K>K^{}`$. Since finite range is needed for the interactions, in all cases (fermions or bosons) $`K^{}<1`$. In addition we expect $`K^{}<1/2`$ for fermions with spins. Similar excitonic arguments should also hold in in two- or three-dimensional bosonic and fermionic Mott insulators provided some finite range of interaction is taken into account. In higher dimension, since disorder has a weaker impact on the transport properties, one expects that the important change in the conductivity occurs at the transition between the MI and MG, whereas the compressibility would become non zero only for stronger disorder (transition MG to AG). Numerical investigations would prove valuable. Small gaps facilitate the observation of MG physics (see Fig. 2), making the study of systems already close to a metal – Mott insulator transition particularly interesting . The properties that we have obtained are thus quite general, depending only on the two gaps closing separately. Given the mapping between a $`d`$ dimensional quantum problem and a $`d+1`$ classical one our study also applies to commensurate classical systems in presence of disorder correlated in at least one direction (here the imaginary time $`\tau `$). (1) can be generalized to any dimension to describe a classical elastic system where $`\varphi `$ becomes a displacement field $`u`$. It applies to systems with internal periodicity, such as a crystal or a charge density wave (with $`2\varphi =K_0u`$ for reciprocal lattice vector $`K_0`$) or to non periodic systems such as interfaces. The periodic potential making the system flat while the disorder makes it rough. 
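Returning briefly to the atomic-limit estimate given above, the competition between the two gaps can be made concrete with a short sketch (plain Python; $`U=4`$ and $`V=3`$ are illustrative values chosen only so that $`V>U/2`$, and the linear reduction of the gaps with $`W`$ is the one quoted above, not a fit to any specific material):

```python
# Illustrative atomic-limit numbers (not taken from the paper):
U, V = 4.0, 3.0          # on-site and nearest-neighbour repulsion

def gaps(W):
    delta_1p = U / 2.0 - W         # cost of adding one particle  -> compressibility
    delta_ph = (U - V) - 2.0 * W   # cheapest particle-hole (exciton) -> conductivity
    return max(delta_1p, 0.0), max(delta_ph, 0.0)

for W in (0.0, 0.25, 0.5, 1.0, 1.5, 2.0, 2.5):
    d1, dph = gaps(W)
    phase = ("Mott insulator" if dph > 0 else
             "Mott glass" if d1 > 0 else "Anderson glass (atomic-limit caricature)")
    print(f"W = {W:4.2f}   one-particle gap = {d1:4.2f}   p-h gap = {dph:4.2f}   {phase}")
```

For these values the particle–hole gap closes at $`W=0.5`$ while the gap to add a particle survives up to $`W=2`$, so an intermediate window opens in which $`\sigma `$ is gapless but $`\kappa =0`$, as in the MG. With this picture in hand, we return to the classical elastic systems introduced above.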
For these systems a functional RG procedure (FRG) in $`d=4ϵ`$ (i.e near $`4+1`$-dim systems) can be used. For uncorrelated disorder in the absence of an external periodic potential an internally periodic system is described in $`d4`$ by a $`T=0`$ “Bragg glass” fixed point . Adding a periodic potential $`\mathrm{cos}(p\varphi )`$ as a perturbation, a transition was found at $`T=0`$ between the Bragg glass (for $`p>p_c(d)`$) (periodic potential irrelevant) and a commensurate phase (for $`p<p_c(d)`$) (periodic potential relevant). Such a $`T=0`$ transition also exists for correlated disorder . To confirm the existence of the MG phase it is necessary to study the phase where the periodic potential is relevant (since $`p=2<p_c(d)`$ here). Since this goes beyond the perturbative FRG analysis, we consider the toy model where the cosine is replaced by a quadratic term (a good approximation when the periodic potential is relevant) defined by the energy: $`{\displaystyle \frac{H}{T}}={\displaystyle \frac{1}{T}}{\displaystyle d^dx𝑑\tau [\frac{1}{2}(c(u)^2+c_{44}(_\tau u)^2+m^2u^2)+V(x,u(x,\tau ))]}`$ (3) where $`\tau `$ is the coordinate along which disorder is correlated (e.g. the magnetic field for vortices with columnar defects), $`T`$ is the classical temperature ($`\mathrm{}`$ for the quantum problem) and the gaussian disorder has a correlator $`\overline{V(x,u)V(x^{},u^{})}=\delta ^d(xx^{})R(uu^{})`$ (for a periodic problem, $`R(u)`$ is itself periodic). $`c`$ and $`c_{44}`$ are the elastic moduli, analogous to $`1/\overline{K}`$ for the quantum problem (1). This $`d+1`$ dimensional model can be studied perturbatively (for small $`m`$ and disorder). For $`m=0`$ and $`T=0`$, both for uncorrelated and correlated disorder a cusp-like nonanalyticity in the renormalized disorder $`R(u)`$ develops beyond the Larkin length $`R_c`$ (corresponding to metastability and barriers in the dynamics) . Remarkably, we find that this feature *persists* even when $`m>0`$, while one usually expects that a mass smoothes out singularities. This can be seen from the RG equation for $`\mathrm{\Delta }(u)=R^{\prime \prime }(u)`$: $`_l\mathrm{\Delta }(u)`$ $`=`$ $`ϵ\mathrm{\Delta }(u)+\stackrel{~}{T}_l\mathrm{\Delta }^{\prime \prime }(u)+f_l(\mathrm{\Delta }^{\prime \prime }(u)(\mathrm{\Delta }(0)\mathrm{\Delta }(u))\mathrm{\Delta }^{}(u)^2)`$ (4) with $`f_l=\frac{1}{8\pi ^2}(1+\mu e^{2l})^2`$, $`\mu =m^2a^2`$ and at zero temperature $`T_l=0`$. Integrating the closed equation for $`\mathrm{\Delta }^{\prime \prime }(0)`$ one finds that the cusp persists (i.e $`\mathrm{\Delta }_{l=+\mathrm{}}^{\prime \prime }(0)=+\mathrm{}`$) provided that $`R_c<R_c^{}(\mu )`$ (with $`R_c^{}(\mu )1/\sqrt{\mu }`$ for small $`\mu `$) while it is washed out ($`\mathrm{\Delta }_{l=+\mathrm{}}^{\prime \prime }(0)<+\mathrm{}`$) for weaker disorder. Our FRG study shows that this $`T=0`$ transition in the renormalized disorder, exists *both* for correlated and uncorrelated disorder. For uncorrelated disorder our findings are of interest for the question of the existence of an intermediate “glassy flat phase”. In that case, however, no sharp signature of this transition exists in two point correlation functions and a physical order parameter remains to be found, which makes the existence of such a phase still controversial . On the contrary for correlated disorder the transition seen in the FRG has much stronger physical consequences. 
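The competition between the mass and the build-up of the cusp can also be illustrated numerically. Differentiating Eq. (4) twice at $`u=0`$ (at $`T=0`$, assuming $`\mathrm{\Delta }_l`$ smooth and even before the cusp forms) gives the closed Riccati flow $`\partial _ly=\epsilon y+3f_ly^2`$ for $`y\equiv -\mathrm{\Delta }_l^{\prime \prime }(0)`$, which blows up at a finite scale (the cusp) precisely when the bare value exceeds $`y_c(\mu )=\left[3\int_0^{\infty }e^{\epsilon s}f_s\,ds\right]^{-1}`$. The sketch below is ours, not the paper's calculation: it assumes $`\epsilon =1`$ and reads the prefactor as $`f_l=\frac{1}{8\pi ^2}(1+\mu e^{2l})^{-2}`$, so that the mass cuts the flow off beyond $`l\approx \frac{1}{2}\mathrm{ln}(1/\mu )`$. It evaluates the threshold and checks that the corresponding Larkin length $`R_c^{*}\sim y_c^{-1/\epsilon }`$ scales as $`1/\sqrt{\mu }`$ for small $`\mu `$, in line with the $`R_c^{*}(\mu )\sim 1/\sqrt{\mu }`$ quoted above.

```python
import math

eps = 1.0                 # epsilon = 4 - d; illustrative value

def f(s, mu):
    # our reading: the mass suppresses the nonlinearity beyond s ~ 0.5*ln(1/mu)
    return (1.0 / (8.0 * math.pi ** 2)) / (1.0 + mu * math.exp(2.0 * s)) ** 2

def y_threshold(mu, ds=1e-3):
    """Bare y0 = -Delta''(0) above which dy/dl = eps*y + 3*f_l*y**2 blows up
    at a finite scale (the cusp).  Exact Riccati criterion:
    y0 > 1 / (3 * integral_0^inf exp(eps*s) f(s) ds)."""
    s_max = 0.5 * math.log(1.0 / mu) + 20.0
    n = int(s_max / ds)
    I = sum(math.exp(eps * (i + 0.5) * ds) * f((i + 0.5) * ds, mu)
            for i in range(n)) * ds
    return 1.0 / (3.0 * I)

for mu in (1e-2, 1e-4, 1e-6, 1e-8):
    yc = y_threshold(mu)
    Rc_star = yc ** (-1.0 / eps)            # threshold Larkin length, up to a constant
    # the last column approaches a constant, i.e. Rc* ~ mu^(-1/2)
    print(f"mu = {mu:8.1e}   y_c = {yc:10.3e}   Rc* * sqrt(mu) = {Rc_star * math.sqrt(mu):7.4f}")
```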
Because of the lack of rotational invariance (in $`(x,\tau )`$) the existence of the cusp and the transition directly affects two point correlation functions. Indeed, the tilt modulus $`c_{44}`$ renormalizes as $`_lc_{44}=f(l)\mathrm{\Delta }^{\prime \prime }(0)c_{44}`$. Integrating at $`T=0`$ one finds that $`c_{44}(l=+\mathrm{})`$ is finite for $`R_c>R_c^{}(\mu )`$ but that it is infinite for $`R_c<R_c^{}(\mu )`$. Furthermore we find (see for details) that for correlated disorder a small temperature ($`\stackrel{~}{T}_l>0`$) does not affect the transition (the phase where $`c_{44}(l=+\mathrm{})=\mathrm{}`$ survives) whereas for uncorrelated disorder the effective temperature goes to a constant ($`\mu ^{1ϵ/2}`$) washing out the cusp. Thus the toy model for correlated disorder exhibits at low temperature, within the FRG, a transition between two phases. The first one is identified with the MI, where the mass (commensurability) destroys the metastability and restores isotropy in $`x,\tau `$ at large scale. The second one is the MG, which is glassy with metastable states despite the presence of the mass. It shares some properties with the AG such as $`c_{44}=\mathrm{}`$. This implies non analyticity of the Green’s functions in frequency. The AG itself corresponds to the phase where the periodic potential is irrelevant ($`m=0`$). This provides strong evidence for the intermediate phase proposed in this paper, which, besides electrons, should be obtained in classical commensurate elastic systems with disorder.
# Hydrodynamics of black hole–neutron star coalescence. ## 1 Introduction Theoretical studies of the binary coalescence of a neutron star with a black hole are as venerable as the Texas Symposia (Wheeler 1971 ). Such coalescences have been implicated in the creation of heavy nuclei through the r–process and were suspected of causing the violent outbursts known as gamma–ray bursts (Lattimer & Schramm 1974, 1976 ; they are also candidate sources for the gravitational radiation detectors currently coming of age, as the double neutron star binaries have been for a long time (Clark & Eardley 1977 ). No binary system composed of a black hole and a neutron star has as yet been identified as such by the astronomers. However, theoretical estimates and studies of binary stellar evolution predict that such systems are created, and that the expected rate of their coalescence may be between about one per one hundred thousand and one per a million years per galaxy (Lattimer & Schramm 1974, 1976 ; Narayan, Piran & Shemi 1991 ; Tutukov & Yungelson 1993 ; Lipunov, Postnov & Prokhorov 1997 ). This rate is about right to explain the r–process and gamma–ray bursts (although it remains to be demonstrated that the phenomena are related to the coalescence), and would give a very satisfactory event rate for future gravity wave detectors (10 to 100 per year out to 200 Mpc). The expected outcome of the coalescence ranged from the neutron star “plunging” into the black hole (Bardeen, Press & Teukolsky 1972 ), through the disruption of the neutron star and the formation of a transient accretion stream (Wheeler 1971 , Lattimer & Schramm 1976 ) or of an accretion disk/torus (Paczyński 1991 , Jaroszyński 1993 , Witt et al. 1994 ), to rapid growth of the binary separation in a steady Roche-lobe overflow scenario (Blinnikov et al. 1984 , Portegies Zwart 1998 ). It seemed important to investigate the hydrodynamics of the process numerically, to determine which, if any, of these outcomes are likely. We have initially performed Newtonian simulations treating the neutron star as a stiff polytrope (Lee & Kluźniak 1995 ), motivated by a desire to determine the timescale of the coalescence with the black hole and to investigate the spatial distribution of the matter lost by the star. These questions were of relevance to the theory of gamma–ray bursts, which seems to allow the coalescence to power the burst in the accepted relativistic shock model, provided that the environment into which the fireball expands has a very small number of baryons at least in some directions (Mészáros & Rees 1992, 1993 ), and that the explosive event is highly variable in time (Sari & Piran 1997 ). We found the coalescence of a neutron star and a black hole promising in these respects, at least in our Newtonian model (Kluźniak & Lee 1998 ). However, the stiff polytrope ($`\mathrm{\Gamma }=3`$), used by us in the work cited above, differed in one crucial respect from a neutron star—it responded to mass loss by shrinking, instead of expanding. For this reason we have repeated the simulation with a softer polytrope, and we report the results below, and elsewhere (Lee & Kluźniak 1999 ). ## 2 Numerical method For the simulations presented here, we have used the method known as Smooth Particle Hydrodynamics. Our code is three–dimensional and is completely Newtonian, although removal of angular momentum by gravitational radiation was included, as described below. 
This method has been described often, we refer the reader to Monaghan (1992) for a review of the principles of SPH, and to Lee (1998) and Lee & Kluźniak (1998) for a detailed description of our own code. Here, we model the neutron star via a polytropic equation of state, $`P=K\rho ^\mathrm{\Gamma }`$ with $`\mathrm{\Gamma }=5/3`$. The unperturbed (spherical) neutron star has a radius R=13.4 km and mass M=1.4M. The black hole (of mass $`M_{\mathrm{BH}}`$) is modeled as a Newtonian point mass, with a potential $`\mathrm{\Phi }_{\mathrm{BH}}(r)=GM_{\mathrm{BH}}/r`$. We model accretion onto the black hole by placing an absorbing boundary at the Schwarzschild radius ($`r_{Sch}=2GM_{\mathrm{BH}}/c^2`$). Any particle that crosses this boundary is absorbed by the black hole and removed from the simulation. The mass and position of the black hole are continously adjusted so as to conserve total mass and total momentum. Initial conditions corresponding to tidally locked binaries in equilibrium are constructed in the co–rotating frame of the binary for a range of separations r and a given value of the mass ratio $`q=M_{\mathrm{NS}}/M_{\mathrm{BH}}`$ (Rasio & Shapiro 1994 ; Lee & Kluzniak 1998 ). The neutron star is modeled with $`N=17,256`$ particles at the start of the calculation in every case presented here. We have calculated the gravitational radiation signal emitted during the coalescence in the quadrupole approximation, and refer the reader to Lee & Kluźniak (1999) for details. We have included a term in the equations of motion that simulates the effect of gravitational radiation reaction on the components of the binary system in the quadrupole approximation (see Landau & Lifshitz 1975 ). This formulation of the gravitational radiation reaction has been used in SPH before (Davies et al. 1994 , Zhuge et al. 1996 , Rosswog et al. 1999 ) in the case of merging neutron stars, and it is usually switched off once the stars come into contact, when the point–mass approximation clearly breaks down. We are assuming then, that the polytrope representing the neutron star can be considered as a point mass for the purposes of including radiation reaction. Clearly, the validity of this assumption must be verified a posteriori when the simulation has run its course. If the neutron star is disrupted during the encounter with the black hole, this radiation reaction must be turned off, since our formula would no longer give meaningful results. We have adopted a switch for this purpose, as follows: if the center of mass of the SPH particles comes within a prescribed distance of the black hole (effectively a tidal disruption radius), then the radiation reaction is turned off. This distance is set to $`r_{tidal}=CR(M_{\mathrm{BH}}/M_{\mathrm{NS}})^{1/3}`$, where C is a constant of order unity. ## 3 Results ### 3.1 Evolution of the binary To allow comparisons of results for differing equations of state, we have run simulations with the same initial binary mass ratios as previously explored for $`\mathrm{\Gamma }=3`$ (Lee & Kluźniak 1998 ), namely $`q`$=1, $`q`$=0.8 and $`q`$=0.31. Additionally we have examined the case with mass ratio $`q`$=0.1. Equilibrium sequences of tidally locked binaries were constructed for a range of initial separations, terminating at the point where the neutron star overflows its Roche Lobe (at $`r=r_{RL}`$). In Figure 1a we show the variation of total angular momentum J for one of these sequences as a function of binary separation (solid line). 
Following Lai, Rasio & Shapiro (1993b), we have also plotted the variation in J that results from approximating the neutron star as a compressible tri–axial ellipsoid (dashed lines) and as a rigid sphere (dotted lines). In all cases, the SPH results are very close to the ellipsoidal approximation until the point of Roche–Lobe overflow. This result is easy to understand if one considers that the softer the equation of state, the more centrally condensed the neutron star is and the less susceptible to tidal deformations arising from the presence of the black hole. For $`\mathrm{\Gamma }=3`$ (Lee & Kluźniak 1998 ), the variation in angular momentum as a function of binary separation was qualitatively different (for high mass ratios) from our present findings for $`\mathrm{\Gamma }=5/3`$. For $`q`$=1 and $`q`$=0.8, total angular momentum attained a minimum at some critical separation before Roche–Lobe overflow occurred. This minimum indicated the presence of a dynamical instability, which made the binary decay on an orbital timescale. This purely Newtonian effect arose from the tidal interactions in the system (Lai, Rasio & Shapiro 1993a ). In the present study, we expect all orbits with initial separations $`r\ge r_{RL}`$ to be dynamically stable. For polytropes, the mass–radius relationship is $`R\propto M^{(\mathrm{\Gamma }-2)/(3\mathrm{\Gamma }-4)}`$. For $`\mathrm{\Gamma }`$=5/3, this becomes $`R\propto M^{-1/3}`$. Thus, the polytrope considered here responds to mass loss by expanding, as do neutron stars modeled with realistic equations of state (Arnett & Bowers 1977); the dynamical disruption of the star reported below seems to be related to this effect. For the polytropic index considered in Lee & Kluźniak (1998), the star was not disrupted (see also Lee & Kluźniak 1995 ; 1997 ; Kluźniak & Lee 1998 ), but we find no evidence in any of our dynamical calculations for a steady mass transfer in the binary, such as the one suggested in the literature (e.g. Blinnikov et al. 1984 ; Portegies Zwart 1998 ). Using the quadrupole approximation, one can compute the binary separation as a function of time for a point–mass binary, and obtain $`r=r_i\left(1-t/t_0\right)^{1/4},`$ (1) with $`t_0^{-1}=256G^3M_{\mathrm{BH}}M_{\mathrm{NS}}(M_{\mathrm{BH}}+M_{\mathrm{NS}})/(5r_i^4c^5)`$. Here $`r_i`$ is the separation at $`t`$=0. For the black hole–neutron star binaries studied here, the timescale for orbital decay because of angular momentum loss to gravitational radiation, $`t_0`$, is on the order of the orbital period, $`P`$ (for $`q`$=1, at an initial separation $`r_i`$=2.7R we find $`t_0`$=6.5 ms and $`P`$=2.24 ms). ### 3.2 Run parameters In Table 1 we present the parameters distinguishing each dynamical run we performed. All times are in milliseconds and all distances in kilometers. The runs are labeled with decreasing mass ratio (increasing black hole mass), from $`q`$=1 down to $`q`$=0.1. All simulations were run for the same length of time, $`t_{final}=22.9`$ ms (this covers on the order of ten initial orbital periods for the mass ratios considered). The initial separation for each dynamical run is given as $`r_i`$, and the separation at which Roche Lobe overflow from the neutron star onto the black hole occurs is given by $`r_{RL}`$. The fifth column in Table 1 shows the value of $`t_{rad}`$, when radiation reaction is switched off according to the criterion established in section 2. We note here that run E is probably at the limit of what should be inferred from a Newtonian treatment of such a binary system.
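As a cross-check of equation (1), the short script below evaluates $`t_0`$ and the Keplerian orbital period for the $`q=1`$ configuration at $`r_i=2.7R`$, using SI values of $`G`$, $`c`$ and $`\mathrm{M}_{\odot }`$ and the stellar parameters quoted in section 2 ($`R=13.4`$ km, $`M_{\mathrm{NS}}=M_{\mathrm{BH}}=1.4\mathrm{M}_{\odot }`$). It reproduces the quoted $`t_0=6.5`$ ms and $`P=2.24`$ ms to within a few percent; the small offset in $`t_0`$ presumably reflects rounding of the quoted masses and radius.

```python
import math

G, c, Msun = 6.674e-11, 2.998e8, 1.989e30      # SI units
R_ns = 13.4e3                                  # neutron-star radius [m]
M_ns = 1.4 * Msun
M_bh = 1.4 * Msun                              # q = 1 run
r_i = 2.7 * R_ns                               # initial separation [m]

# orbital-decay timescale of a point-mass binary, eq. (1):
#   t0^-1 = 256 G^3 M_bh M_ns (M_bh + M_ns) / (5 r_i^4 c^5)
t0 = 5.0 * r_i ** 4 * c ** 5 / (256.0 * G ** 3 * M_bh * M_ns * (M_bh + M_ns))

# Keplerian orbital period at the same separation
P = 2.0 * math.pi * math.sqrt(r_i ** 3 / (G * (M_bh + M_ns)))

print(f"t0 = {t0 * 1e3:.2f} ms   (quoted: 6.5 ms)")
print(f"P  = {P * 1e3:.2f} ms   (quoted: 2.24 ms)")

# separation as a function of time, r(t) = r_i (1 - t/t0)^(1/4)
for frac in (0.0, 0.5, 0.9, 0.99):
    print(f"t = {frac:4.2f} t0 :  r = {r_i * (1 - frac) ** 0.25 / R_ns:5.2f} R")
```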
The black hole is very large compared to the neutron star, and the initial separation ($`r_i=67.87`$ km) is such that the neutron star is well within the innermost stable circular orbit around a Schwarzschild black hole of the mass considered. ### 3.3 Morphology of the mergers The initial configurations are close to Roche Lobe overflow, and mass transfer from the neutron star onto the black hole starts within one orbital period for all runs presented here. In every run the binary separation (solid lines in in Figure 1) initially decreases due to gravitational radiation reaction. For high mass ratios, (runs A, B) the separation decays faster than what would be expected of a point–mass binary. This is also the case for a stiff equation of state, in black hole–neutron star mergers (Lee & Kluzniak 1998 ) as well as in binary neutron star mergers (Rasio & Shapiro 1994 ), and merely reflects the fact that hydrodynamical effects are playing an important role. For the soft equation of state studied here, there is the added effect of ‘runaway’ mass transfer because of the mass–radius relationship (see section 3.1). For run C, the solid and dashed lines in Figure 1b follow each other very closely, indicating that the orbital decay is primarily driven by angular momentum losses to gravitational radiation. For run E, the orbit decays more slowly than what one would expect for a point–mass binary. This is explained by the fact that there is a large amount of mass transfer (10% of the initial neutron star mass has been accreted by $`t=t_{rad}`$ in this case) in the very early stages of the simulation, substantially altering the mass ratio in the system (the dashed curves in Figure 1b are computed for fixed masses; at constant total mass, note that from equation 1, lowering the mass ratio in the system slows the orbital decay for $`q<0.5`$). The general behavior of the system is qualitatively similar for every run. Figure 2 shows density contours in the orbital plane (left panels) and in the meridional plane containing the black hole (right panels) for run D at $`t=5.73`$ ms and $`t=t_f=22.9`$ ms. The neutron star becomes initially elongated along the binary axis and an accretion stream forms, transferring mass to the black hole through the inner Lagrange point. The neutron star responds to mass loss and tidal forces by expanding, and is tidally disrupted. An accretion torus forms around the black hole as the initial accretion stream winds around it. A long tidal tail is formed as the material furthest from the black hole is stripped from the star. Most of the mass transfer occurs in the first two orbital periods and peak accretion rates reach values between 0.5 M/ms and 1.2M/ms (see Figure 3). The mass accretion rate then drops and the disk becomes more and more azimuthally symmetric, reaching a quasi–steady state by the end of the simulations. We show in Figure 4 the various energies of the system (kinetic, internal, gravitational potential and total) for run D. The dramatic drop in total internal energy reflects the intense mass accretion that takes place within the first couple of orbits. Figure 4 also shows \[panel (b)\] the total angular momentum of the system for runs A, B, D and E (the only contribution to the total angular momentum not plotted is the spin angular momentum of the black hole, see below). Angular momentum decreases for two reasons. First, if gravitational radiation reaction is still acting on the system, it will decrease approximately according to the quadrupole formula. 
Second, whenever matter is accreted by the black hole, the corresponding angular momentum is removed from our system. In reality, the angular momentum of the accreted fluid would increase the spin of the black hole. We keep track of this accreted angular momentum and exhibit its value in Table 3 as the Kerr parameter of the black hole. This shows up as a decrease in the total value of J. ### 3.4 Accretion disk structure In Table 2 we show several parameters pertaining to the final accretion structure around the black hole for every run. The mass that has been accreted by the black hole is denoted by M<sub>acc</sub>. The disk settles down to a fairly azimuthally symmetric structure within a few initial orbital periods (except for the long tidal tail, which always persists as a well–defined structure), and there is a baryon–free axis above and below the black hole in every case (see below). We have calculated the mass of the remnant disk, $`M_{disk}`$, by searching for the amount of matter that has sufficient specific angular momentum j at the end of the simulation to remain in orbit around the black hole (as in Ruffert & Janka 1998 ). This material has $`j>j_{crit}=\sqrt{6}GM_t/c`$, where $`M_t`$ is the total mass of the system. By the end of the simulations, between 70% and 80% of the neutron star has been accreted by the black hole. It is interesting to note that the final accretion rate (at $`t=t_f`$) appears to be rather insensitive to the initial mass ratio, and is between 2.4 M s<sup>-1</sup> and 6.1 M s<sup>-1</sup>. From this final accretion rate we have estimated a typical timescale for the evolution of the accretion disk, $`\tau _{disk}=M_{disk}/\dot{M}_{final}`$. Despite the difference in the initial mass ratios and the typical sizes of the disks, the similar disk masses and final accretion rates make the lifetimes comparable for every run. We have plotted azimuthally averaged density and internal energy profiles in Figure 5 for run D. The specific internal energy is greater towards the center of the disk, and flattens out at a distance from the black hole roughly corresponding the density maximum, at $`u3\times 10^{18}`$ erg g<sup>-1</sup>, or 3.1 MeV/nucleon, and is largely independent of the initial mass ratio. The inner regions of the disks have specific internal energies that are greater by approximately one order of magnitude. Additionally, panel (b) in the same figure shows the azimuthally averaged distribution of specific angular momentum j in the orbital plane for all runs. The curves terminate at $`r_{in}=2r_{Sch}`$. Pressure support in the inner regions of the accretion disks makes the rotation curves sub–Keplerian, while the flattening of distribution marks the outer edge of the disk and the presence of the long tidal tail (see Figure 2),which has practically constant specific angular momentum. The Kerr parameter of the black hole, given by $`a=J_{\mathrm{BH}}c/GM_{\mathrm{BH}}^2`$, is also shown in Table 3. We have calculated it from the amount of angular momentum lost by the fluid via accretion onto the black hole (see Figure 4b), assuming that the black hole is not rotating at $`t=0`$. The specific angular momentum of the black hole is smaller for lower mass ratios simply because the black hole is initially more massive when q is smaller. It is of crucial importance for the production of GRBs from such a coalescence event that there be a baryon–free axis in the system along which a fireball may expand with ultrarelativistic velocities (Mészáros & Rees 1992, 1993 ). 
We have calculated the baryon contamination for every run as a function of the half–angle $`\mathrm{\Delta }\theta `$ of a cone directly above the black hole and along the rotation axis of the binary that contains a given amount of mass $`\mathrm{\Delta }M`$. Table 3 shows these angles (in degrees) for $`\mathrm{\Delta }M/\mathrm{M}_{\odot }=1.4\times 10^{-3},1.4\times 10^{-4},1.4\times 10^{-5}`$. There is a greater amount of pollution for high mass ratios (the disk is geometrically thicker compared to the size of the black hole), but in all cases only modest angles of collimation are required to avoid contamination. We note here that the values for $`\theta _5`$ are rough estimates at this stage since they are at the limit of our numerical resolution in the region directly above the black hole. ## 4 Discussion The numerical simulations reported here were quasi-Newtonian. An important caveat to keep in mind is that the inclusion of general relativistic effects may lead to results qualitatively different from even a post-Newtonian treatment (Wilson, Mathews & Marronetti 1996 ). Our results indicate that the outcome of the binary coalescence depends on the nature of the star orbiting the black hole. As reported previously (Lee & Kluźniak 1995 , 1998 , Kluźniak & Lee 1998 ), when we modeled the star as a polytrope with adiabatic index $`\mathrm{\Gamma }=3`$, the coalescence appeared to be an intermittent process in which the core of the polytrope survives the initial encounters and increases its separation from the black hole, thus extending the merger to possibly $`0.1`$ s. For the softer polytrope discussed here ($`\mathrm{\Gamma }=5/3`$), the star is disrupted completely in a few milliseconds and all that remains after the initial mass transfer is an accretion disk, containing no more than $`1/5`$ of the initial mass, and some ejecta. Perhaps the current simulation, with $`\mathrm{\Gamma }=5/3`$, is the more realistic one, because for this polytrope $`dM/dR<0`$, as for physical models of neutron stars (e.g. Arnett & Bowers 1977 ). In agreement with earlier suggestions (Lattimer & Schramm 1974, 1976 ), we have found that some matter will be ejected from the system, in an amount sufficient to account for the abundance of the r–process nuclei (assuming the r–process does indeed occur during the merger). The binary coalescence of a neutron star with a black hole remains an attractive theoretical source of gamma–ray bursts. The energy requirements for at least one recently observed burst are so severe, if the emission is isotropic (e.g. Kulkarni et al. 1998 ), that some degree of beaming seems desirable. According to the simulations presented here, ultrarelativistic flows are possible in the post-merger system only along the rotational axis of the system, in a solid angle of about $`0.1`$ steradian. Proper inclusion of neutrino transport may change this angle somewhat. The rather short ($`50`$ ms) accretion timescale of the remnant disk reported here does not include possible interaction between the disk and the black hole (Blandford & Znajek 1977 ). In fact, the appearance in the simulation of a substantial toroidal disk around the black hole is encouraging, as it may allow the black hole spin to be extracted by the Blandford-Znajek mechanism, possibly powering in this manner the gamma–ray burst fireball (Mészáros & Rees 1997 ). ## ACKNOWLEDGMENTS We gratefully acknowledge support for this work from DGAPA–UNAM and KBN (grant P03D01311).
# On the differentiability of Cauchy horizons 1991 Mathematics Subject Classification 53C50, 53C80, 83C75. ## 1 Introduction Recently Chruściel and Galloway have constructed an example of a Cauchy horizon which fails to be differentiable on a dense subset. In this paper we show that densely nondifferentiable Cauchy horizons appear to be generic in a certain class of Cauchy horizons. Chruściel and Galloway have also shown that their example implies the existence of a densely nondifferentiable black hole event horizon. They point out that these examples raise definite questions concerning some major arguments that have been given in the past where smoothness assumptions were implicitly made. In the light of these new examples, it is clear that there is a real need for a deeper understanding of the differentiability properties of horizons. In a spacetime with a partial Cauchy surface $`S`$ the Cauchy horizon $`H(S)`$ is the boundary of the set of points where, in theory, one may calculate everything in terms of the initial data on $`S`$. Cauchy horizons are achronal (i.e., no two points on the horizon may be joined by a timelike curve) and this implies that Cauchy horizons (locally) satisfy a Lipschitz condition. This, in turn, implies that Cauchy horizons are differentiable almost everywhere. Because they are differentiable except for a set of (three-dimensional) measure zero, it seems that they have often been assumed to be smooth except for a set which may be more or less neglected. However, one must remember in the above that: (1) differentiable only refers to being differentiable at a single point, and (2) sets of measure zero may be quite widely distributed. For $`S`$ a closed achronal set each point $`p`$ of a Cauchy horizon $`H^+(S)`$ lies on at least one null generator . However, null generators may or may not remain on the horizon when they are extended in the future direction. If a null generator leaves the horizon, then there is a last point where it remains on the horizon. This last point is said to be an endpoint of the horizon. Endpoints where two or more null generators leave the horizon are points where the horizon must fail to be differentiable , . In addition, Chruściel and Galloway have shown that Cauchy horizons are differentiable at points which are not endpoints. Beem and Królak have shown that Cauchy horizons are differentiable at endpoints where only one generator leaves the horizon. These results give a complete classification of (pointwise) differentiability for Cauchy horizons in terms of null generators and their endpoints. Beem and Królak have also shown that if we consider an open subset $`W`$ of the Cauchy horizon $`H^+(S)`$ and assume that the horizon has no endpoints on $`W`$, then the horizon must be differentiable at each point of $`W`$ and, in fact, that the horizon must be at least of class $`C^1`$ on $`W`$. Conversely, the differentiability on an open set $`W`$ implies there are no endpoints on $`W`$. For general spacetimes, horizons may fail to be stable under small metric perturbations; however, some sufficiency conditions for various stability questions have been obtained , . ## 2 Preliminaries ###### Definition 1 A space-time $`(M,g)`$ is a smooth $`n`$dimensional, Hausdorff manifold $`M`$ with a semi-Riemannian metric $`g`$ of signature $`(,+,\mathrm{},+)`$, a countable basis, and a time orientation. A set $`S`$ is said to be achronal if there are no two points of $`S`$ with timelike separation. 
We give definitions and state our results in terms of the future horizon $`H^+(S)`$, but similar results hold for any past Cauchy horizon $`H^{}(S)`$. ###### Definition 2 The future Cauchy development $`D^+(S)`$ consists of all points $`pM`$ such that each past endless and past directed causal curve from $`p`$ intersects the set $`S`$. The future Cauchy horizon is $`H^+(S)=\overline{(D^+(S))}I^{}(D^+(S))`$. Let $`p`$ be a point of the Cauchy horizon; then there is at least one null generator of $`H^+(S)`$ containing $`p`$. Each null generator is at least part of a null geodesic of M. When a null generator of $`H^+(S)`$ is extended into the past it either has no past endpoint or has a past endpoint on $`edge(S)`$ \[see , p. 203\]. However, if a null generator is extended into the future it may have a last point on the horizon which then said to be an endpoint of the horizon. We define the multiplicity \[see \] of a point $`p`$ in $`H^+(S)`$ to be the number of null generators containing $`p`$. Points of the horizon which are not endpoints must have multiplicity one. The multiplicity of an endpoint may be any positive integer or infinite. We call the set of endpoints of multiplicity two or higher the crease set, compare . By a basic Proposition due to Penrose \[, Prop. 6.3.1\] $`H^+(S)`$ is an $`n1`$ dimensional Lipschitz topological submanifold of $`M`$ and is achronal. Since a Cauchy horizon is Lipschitz it follows from a theorem of Rademacher that it is differentiable almost everywhere (i.e. differentiable except for a set of $`n1`$ dimensional measure zero). This does not exclude the possibility that the set of non-differentiable points is a dense subset of the horizon. An example of such a behaviour was given by Chruściel and Galloway . Following let us introduce the notion of differentiablility of a Cauchy horizon. Consider any fixed point $`p`$ of the Cauchy horizon $`H^+(S)`$ and let $`x^0,x^1,x^2,x^3`$ be local coordinates defined on an open set about $`p=(p^0,p^1,p^2,p^3)`$. Let $`H^+(S)`$ be given near $`p`$ by an equation of the form $$x^0=f_H(x^1,x^2,x^3)$$ The horizon $`H^+(S)`$ is differentiable at the point $`p`$ iff the function $`f_H`$ is differentiable at the point $`(p^1,p^2,p^3)`$. In particular, if $`p=(0,0,0,0)`$ corresponds to the origin in the given local coordinates and if $$\mathrm{\Delta }x=(x^1,x^2,x^3)$$ represents a small displacement from $`p`$ in the $`x^0=0`$ plane, then $`H^+(S)`$ is differentiable at $`p`$ iff one has $$f_H(\mathrm{\Delta }x)=f_H(0)+a_ix^i+R_H(\mathrm{\Delta }x)=0+a_ix^i+R_H(\mathrm{\Delta }x)$$ where the ratio $`R_H(\mathrm{\Delta }x)/|\mathrm{\Delta }x|`$ converges to zero as $`|\mathrm{\Delta }x|`$ goes to zero. Here we use $$|\mathrm{\Delta }x|=\sqrt{(x^1)^2+(x^2)^2+(x^3)^2}.$$ If $`H^+(S)`$ is differentiable at the point $`p`$, then there is a well defined 3-dimensional linear subspace $`N_0`$ in the tangent space $`T_p(M)`$ such that $`N_0`$ is tangent to the 3-dimensional surface $`H^+(S)`$ at $`p`$. In the above notation a basis for $`N_0`$ is given by $`\{a_i/x^0+/x^i|i=1,2,3\}`$. ###### Theorem 1 (Chruściel and Galloway ) There exists a connected set $`KR^2=\{t=0\}R^{2,1}`$, where $`R^{2,1}`$ is a $`2+1`$ dimensional Minkowski space-time, with the following properties: 1. The boundary $`K=\overline{K}\mathrm{int}K`$ of $`K`$ is a connected, compact, Lipschitz topological submanifold of $`R^2`$. $`K`$ is the complement of a compact set $`R^2`$. 2. 
There exists no open set $`\mathrm{\Omega }R^{2,1}`$ such that $`\mathrm{\Omega }H^+(K)\{0<t<1\}`$ is a differentiable submanifold of $`R^{2,1}`$. ###### Proposition 1 (Beem and Królak ) Let $`W`$ be an open subset of the Cauchy horizon $`H^+(S)`$. Then the following are equivalent: 1. $`H^+(S)`$ is differentiable on $`W`$. 2. $`H^+(S)`$ is of class $`C^r`$ on $`W`$ for some $`r1`$. 3. $`H^+(S)`$ has no endpoints on $`W`$. 4. All points of $`W`$ have multiplicity one. Note that the four parts of Proposition 1 are logically equivalent for an open set $`W`$, but that, in general, they are not necessarily equivalent for sets which fail to be open. Using the equivalence of parts (1) and (3) of Proposition 1, it now follows that near each endpoint of multiplicity one there must be points where the horizon fails to be differentiable. Hence, each neighborhood of an endpoint of multiplicity one must contain endpoints of higher multiplicity. This yields the following corollary. ###### Corollary 1 () If $`p`$ is an endpoint of multiplicity one on a Cauchy horizon $`H^+(S)`$, then each neighborhood $`W(p)`$ of $`p`$ on $`H^+(S)`$ contains points where the horizon fails to be differentiable. Hence, the set of endpoints of multiplicity one is in the closure of the crease set. ## 3 A generic densely nondifferentiable Cauchy horizon We shall construct a densely nondifferentiable Cauchy horizon in the $`3`$-dimensional Minkowski space-time $`R^{2,1}`$, but our construction can be generalized in a natural way to higher dimensions. Let $`\mathrm{\Sigma }`$ be the surface $`t=0`$, and let $`K`$ be a compact, convex subset of $`\mathrm{\Sigma }`$. Let $`K`$ denote the boundary of $`K`$. Let $`\rho (x,R)`$ and $`D(x,R)`$ be respectively a circle and a disc with center at $`x`$ and radius $`R`$. ###### Definition 3 A circle $`\rho (x,R)`$ is internally tangent to the boundary $`K`$ of $`K`$ if the disc enclosed by $`\rho `$ is contained in $`K`$ and for all $`ϵ`$ the disc of radius $`R+ϵ`$ and center $`x`$ is not contained in $`K`$. Let $`\rho (x,R)`$ be internally tangent to $`K`$; then the point $`(x,R)R^{2,1}`$ belongs to the future Cauchy horizon $`H^+(K)`$ and conversely, if a point $`(x,R)R^{2,1}`$ belongs to $`H^+(K)`$ then the circle $`\rho (x,R)`$ is internally tangent to $`K`$. If $`\rho (x,R)`$ is internally tangent in at least two points of $`K`$ then it follows from Proposition 1 that $`H^+(K)`$ is not differentiable at the point $`(x,R)`$ and the point $`(x,R)`$ has multiplicity at least two. We shall first construct a continuous curve that is not differentiable on any open subset. Let us take a line segment $`l_0`$ and let us consider an isosceles triangle with base $`l_0`$ and let $`\alpha _0`$ be the angle at the base and let $`l_1`$ denote the broken line consisting of two equal arms of the triangle. In the next step we construct two isosceles triangles with bases that are segments of the broken line $`l_1`$ and we choose the angles $`\alpha _1`$ at the base equal $`q\times \alpha _0`$ where $`q<1/2`$. We iterate the above construction. At the $`N`$th step of the construction the number of nondifferentiable points of the curve increases by $`2N1`$. 
After the $`N`$th step of the iterative procedure the vertex angle of the isosceles triangle obtained in the $`i`$th step is given by $$\mathrm{}_N(x_i)=\pi 2\alpha _1\left[q^{i1}\frac{q^iq^N}{1q}\right].$$ (1) In the limit $`N\mathrm{}`$ the $`i`$th vertex angle is given by $`\pi 2\alpha _1q^i\frac{q^12}{1q}`$ and is strictly less than $`\pi `$ as $`q<1/2`$. Let us call the nowhere differentiable continuous curve constructed above a rough curve. Let us call a region of $`\mathrm{\Sigma }`$ that is bounded by a rough curve and two straight lines perpendicular to the rough curve at its two endpoints<sup>1</sup><sup>1</sup>1This notion is unambiguous, as the slope of the rough curve at an endpoint is given by a well-defined limit. a fan. The above construction can be generalized to higher dimensions, for example in the 4-dimensional Minkowski space-time we construct a rough surface in the following way. We consider a triangle and the first step is to construct a pyramid with the triangle as a base and all angles between the base and the sides of the pyramid equal to the same angle $`\alpha _1`$; we then iterate the construction decreasing at each step the angle $`\alpha `$ between the base and the sides of the pyramid by a factor $`q<1/2`$ as in the 3-dimensional case. As a result we obtain a nowhere differentiable surface and we define a 3-dimensional fan as the region of $`\mathrm{\Sigma }`$ bounded by the rough surface and planes perpendicular to the rough surface passing through the sides of the initial triangle. ###### Theorem 2 Let $`b`$ be a rough curve and $`F`$ the corresponding fan. Then the set of points of $`F`$ that are centers of circles tangent to $`b`$ in at least two points of $`b`$ is dense in the interior of the fan $`F`$. Proof: Each point of $`F`$ is the center of a circle tangent to $`b`$ at at least one point. If the claim of the theorem were false, then there would exist a disc $`D(x,R)`$ with nonempty interior with the property that every point $`a\mathrm{int}D`$ is the center of a circle tangent to the rough curve at exactly one point. 1. A vertex point cannot be a point of tangency of any circle with center in $`\mathrm{int}F`$. 2. By construction the set of vertices of $`b`$ is dense in $`b`$. Thus the complement of the set of vertices in $`b`$ is totally disconnected (i.e. only one-element subsets are connected). Let us consider a map $`P`$ from the disc to $`b`$ that assigns to every point $`y`$ of $`D`$ a point on $`b`$ that is tangent to the circle centered at $`y`$. By assumption this point is unique and thus the map is well-defined. Let us show that the map $`P`$ is continuous. It is enough to prove that if $`a_na`$ then $`P(a_n)P(a)`$. As $`b`$ is compact, $`P(a_n)`$ has a subsequence that converges to a point $`c`$ on $`b`$. Since the distance $`d(a_n,P(a_n))`$ is continuous on D we have $`d(c,a)=d(a,P(a))`$. Hence $`c`$ is a tangency point of a circle centered at $`a`$ and consequently $`c=P(a)`$. By the Darboux theorem the image $`P(D(x,R))`$ is connected and by 1. and 2. above, it is a one-point set. It then follows that R = 0 which is a contradiction. QED The above theorem generalizes to the 3-dimensional case. In the case of a 3-dimensional fan $`F`$ there exists a dense subset of $`F`$ such that every ball with the center in this subset has at least two tangency points to the rough surface. All steps of the proof of Theorem 2 carry over to this case in the natural way. 
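A minimal numerical rendering of this construction may be helpful. The sketch below (plain Python; $`\alpha _1=0.3`$ rad and $`q=0.4`$ are arbitrary admissible choices, and we erect every new triangle on the same, outward side of the directed curve, which is our reading of the construction) refines a unit segment $`N=8`$ times and measures the final interior angle at the first apex. Reading the signs in Eq. (1) as $`\measuredangle _N(x_i)=\pi -2\alpha _1\left[q^{i-1}-(q^i-q^N)/(1-q)\right]`$, the measured angle agrees with the formula for $`i=1`$ and stays below the limiting value $`\pi -2\alpha _1(1-2q)/(1-q)<\pi `$ for $`q<1/2`$.

```python
import math

def refine(points, alpha):
    """Replace every directed segment by the two arms of an isosceles triangle
    with base angle `alpha`, erected on the left of the segment."""
    new_pts = [points[0]]
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        dx, dy = x1 - x0, y1 - y0
        base = math.hypot(dx, dy)
        h = 0.5 * base * math.tan(alpha)          # apex height above the base midpoint
        nx, ny = -dy / base, dx / base            # unit left normal
        apex = (0.5 * (x0 + x1) + h * nx, 0.5 * (y0 + y1) + h * ny)
        new_pts += [apex, (x1, y1)]
    return new_pts

def interior_angle(p, q, r):
    """Angle at q between segments qp and qr, in radians (the one below pi)."""
    a = math.atan2(p[1] - q[1], p[0] - q[0])
    b = math.atan2(r[1] - q[1], r[0] - q[0])
    ang = abs(a - b) % (2.0 * math.pi)
    return min(ang, 2.0 * math.pi - ang)

alpha1, q, N = 0.3, 0.4, 8                        # arbitrary admissible choices (q < 1/2)
pts = [(0.0, 0.0), (1.0, 0.0)]
for step in range(N):
    pts = refine(pts, alpha1 * q ** step)

mid = len(pts) // 2                               # the apex created at the first step
numeric = interior_angle(pts[mid - 1], pts[mid], pts[mid + 1])
analytic = math.pi - 2.0 * alpha1 * (1.0 - (q - q ** N) / (1.0 - q))
print(f"{len(pts)} points; angle at the first apex: {numeric:.6f} vs eq.(1): {analytic:.6f}")
print("limiting bound pi - 2*alpha1*(1-2q)/(1-q) =",
      math.pi - 2.0 * alpha1 * (1.0 - 2.0 * q) / (1.0 - q))
```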
Let $``$ be the set of Cauchy horizons arising from compact convex sets $`K\mathrm{\Sigma }`$ . The topology on $``$ is induced by the Hausdorff distance on the set of compact and convex regions K. ###### Theorem 3 Let $``$ be the set of future Cauchy horizons $`H^+(K)`$ where $`K`$ are compact and convex regions of $`\mathrm{\Sigma }`$. The subset of densely nondifferentiable horizons is dense in $``$. Proof: Any compact and convex region K can be approximated in the sense of Hausdorff distance by a (sequence of) convex polygons contained in $`K`$. Each of the vertex angles of such a polygon is strictly less than $`\pi `$. Over each side of the polygon we constract a rough curve in such a way that the fans corresponding to the rough curves cover the polygon. This is always possible, since we may choose the starting angle $`\alpha _1`$ in the rough curve’s construction to obey the condition $$\varphi +\frac{2\alpha _1}{1q}<\pi ,$$ (2) where $`\varphi `$ is the largest vertex angle of the original polygon. When $`\alpha _1`$ decreases to 0 the rough-edged polygon converges to the original polygon in the sense of Hausdorff topology. QED It is clear that the above theorem generalizes to higher dimensions. ## 4 Some examples of densely nondifferentiable horizons In this Section we show that the construction of the previous Section implies the existence of densely nondifferentiable Cauchy horizons of partial Cauchy surfaces and also the existence of black hole event horizons. ###### Definition 4 A partial Cauchy surface $`S`$ is a connected, acausal, edgeless $`n1`$ dimensional submanifold of $`(M,g)`$. Example 1: A rough wormhole. Let $`R^{3,1}`$ be the 4-dimensional Minkowski space-time and let $`K`$ be a compact subset of the surface $`\{t=0\}`$ such that its Cauchy horizon is nowhere differentiable in the sense of the construction given in Section 3. We consider a space-time obtained by removing the complement of the interior of the set $`K`$ in the surface $`t=0`$ from the Minkowski space-time. Let us consider the partial Cauchy surface $`S=\{t=1\}`$. The future Cauchy horizon of $`S`$ is the future Cauchy horizon of set $`K\mathrm{edge}(K)`$, since $`\mathrm{edge}(K)`$ has been removed from the space-time. Thus the future Cauchy horizon is nowhere differentiable and it is generated by past-endless null geodesics. The interior of the set $`K`$ can be thought of as a “wormhole” that separates two “worlds”, one in the past of surface $`\{t=0\}`$ and one in its future. Example 2: A transient black hole. Let $`R^{3,1}`$ be the 4-dimensional Minkowski space-time and let $`K`$ be a compact subset of the surface $`\{t=0\}`$ such that its past Cauchy horizon is nowhere differentiable in the sense of the construction given in Section 3. We consider a space-time obtained by removing from Minkowski space-time the closure of the set $`K`$ in the surface $`t=0`$. Let us consider the event horizon E := $`\dot{J}^{}(𝒥^+)`$. The event horizon $`E`$ coincides with $`H^{}(K)\mathrm{edge}(K)`$ and thus it is not empty and nowhere differentiable. The event horizon disappears in the future of surface $`\{t=0\}`$ and thus we can think of the black hole (i.e. the set $`B:=R^{3,1}J^{}(𝒥^+)`$) in the space-time as “transient”. ## 5 Acknowledgments The authors would like to thank P.T. Chruściel for many helpful discussions and a careful reading of the manuscript. This work was supported by the Polish Committee for Scientific Research through grants 2 P03B 130 16 and 2 P03B 073 15.
# Untitled Document Strings and Discrete Fluxes of QCD. Z. Guralnik University of Pennsylvania Philadelphia PA, 19104 guralnik@ovrut.hep.upenn.edu We study discrete fluxes in four dimensional $`SU(N)`$ gauge theories with a mass gap by using brane compactifications which give $`𝒩=1`$ or $`𝒩=0`$ supersymmetry. We show that when such theories are compactified further on a torus, the t’Hooft magnetic flux $`m`$ is related to the NS two-form modulus $`B`$ by $`B=2\pi \frac{m}{N}`$. These values of $`B`$ label degenerate brane vacua, giving a simple demonstration of magnetic screening. Furthermore, for these values of $`B`$ one has a conventional gauge theory on a commutative torus, without having to perform any T-dualities. Because of the mass gap, a generic $`B`$ does not give a four dimensional gauge theory on a non-commutative torus. The Kaluza-Klein modes which must be integrated out to give a four dimensional theory decouple only when $`B=2\pi \frac{m}{N}`$. Finally we show that $`2\pi \frac{m}{N}`$ behaves like a two form modulus of the QCD string. This confirms a previous conjecture based on properties of large $`N`$ QCD suggesting a T-duality invariance. 1. Introduction Recently, an improved understanding of the infrared properties of QCD has been gained by studying classical configurations of M theory. In , a theory in the same universality class as four dimensional $`𝒩=1`$ QCD, known as MQCD, was studied by wrapping an M theory fivebrane on a holomorphic curve $`\mathrm{\Sigma }`$ embedded in the spacetime $`R^{10}\times S^1`$. $`𝒩=0`$ MQCD was also constructed in by taking $`\mathrm{\Sigma }`$ to be a non-holomorphic two-cycle of minimal area. The results of this paper are applicable to either case. We will further compactify the theory on a torus of finite size, so that the M-fivebrane wraps $`\mathrm{\Sigma }\times T^2`$ which is embedded in $`R^8\times T^2\times S^1`$. In the absence of fundamental matter, $`SU(N)`$ QCD on a two-torus has a discrete magnetic flux $`m`$ which is defined modulo $`N`$ . The supergravity quantities corresponding to discrete Yang-Mills fluxes in the ADS/CFT approach were discussed in . In that context, the discrete fluxes appeared as states in a topological field theory. In the present context, we will see that the ’t Hooft flux corresponds to the background M-theory three-form through the relation $$_{T^2\times S^1}C=2\pi \frac{m}{N}.$$ Since this result is independent of the radius of the $`S^1`$, we expect it to persist in the IIA limit, giving $$B=_{T^2}B_{NS}=2\pi \frac{m}{N}.$$ This relation is peculiar to the case in which there is a mass gap. The background three-form is of course not really discretely quantized. The vacuum energy of the five-brane has a periodic dependance on this background, with minima at $`C=2\pi \frac{m}{N}`$. At these minima, there exists a limit in which the low energy theory is QCD, with t’ Hooft flux $`m`$. For other values of $`C`$, one might naively expect to find a gauge theory on a non-commutative torus, but this is not the case. We will show that for $`C`$ away from the minima, it is impossible to decouple the Kaluza Klein modes on $`\mathrm{\Sigma }`$. Interestingly, the absence of a noncommutative deformation is consistent with arguments that the small instanton singularity plays an important role in QCD dynamics . Yang-Mills theories on non-commutative tori do not have a small instanton singularity. 
For $`\mathrm{\Sigma }`$ corresponding to a theory with a mass gap, the small instanton singularity exists precisely when $`C=2\pi \frac{m}{N}`$, with $`m`$ being the ’t Hooft flux. If one instead considered a curve $`\mathrm{\Sigma }`$ giving an $`𝒩=2`$ Super Yang-Mills theory, a small instanton singularity would only exist if $`C=2\pi m`$, which is gauge equivalent to $`C=0`$. Furthermore, for $`\mathrm{\Sigma }`$ giving $`𝒩=2`$ Super Yang-Mills theory, one can decouple the Kaluza-Klein modes for all values of $`C`$. At low energies, we expect a continuous class of four dimensional Yang-Mills theories on a noncommutative torus. The difference between the case with a mass gap and the case without is related to whether or not there is a $`U(1)`$ gauge field at low energies. Note that the non-commutative star product exists for $`U(N)`$ theories, but not for $`SU(N)`$. The relation (1.1) is a very simple illustration of magnetic screening. ’t Hooft has shown that in an electrically confining $`SU(N)/Z_N`$ theory with a mass gap, magnetic fluxes must be light . More precisely, the energy of a magnetic flux must vanish exponentially with the area of the torus. This is simply realized in the M-theory construction. Because of (1.1), the ’t Hooft flux is not the central charge associated with a membrane. In fact, the different magnetic fluxes correspond to classically degenerate M-fivebrane configurations. This result differs drastically from what one would obtain by considering a compactification of the M-fivebrane giving $`𝒩=2`$ Super Yang-Mills theory. In that case the magnetic flux is related to an additive central charge, rather than an element of $`Z_N`$ labeling degenerate configurations. We will also argue that $`2\pi \frac{m}{N}`$ behaves like a two-form modulus of the QCD string as well as the IIA string. This means that the imaginary part of the action of an MQCD string wrapping the two-torus $`n`$ times is given by $`n_{T^3}C`$. Again, we expect this result to persist in the IIA limit, so that the imaginary part of the action of a wrapped QCD string is given by $`2\pi n\frac{m}{N}`$. In the large $`N`$ limit, this becomes a continuous quantity. Such a relation has been shown explicitly in two dimensions in the large N limit . It was also conjectured to be true in four dimensions on the basis of properties of large $`N`$ QCD suggesting a T-duality invariance. Such a duality would map, $$\tau \frac{1}{\tau },$$ where $`\tau `$ is the Kähler modulus, $$\tau =\frac{m}{N}+i\frac{\mathrm{\Lambda }^2A}{2\pi }.$$ $`A`$ is the area of the torus, and $`\mathrm{\Lambda }^2`$ is the QCD string tension. An argument due to ’t Hooft states that in an electrically confining $`SU(N)/Z_N`$ theory with a mass gap, magnetic fluxes must be light . More precisely, the energy of a magnetic flux must vanish exponentially with the area of the torus. This is simply realized in the M-theory construction. Because of (1.1), the ’t Hooft flux is not the central charge for a membrane. In fact, when there is a mass gap, different magnetic fluxes correspond to classically degenerate M-fivebrane configurations. This result differs drastically from what one would obtain by considering a compactification of the M-fivebrane giving $`𝒩=2`$ Super Yang-Mills theory. In that case the magnetic flux is related to an additive central charge, rather than an element of $`Z_N`$ labeling degenerate configurations. 
In the IIA limit, the M-fivebrane wrapped on $`\mathrm{\Sigma }`$ becomes a configuration of N D4-branes suspended between NS fivebranes. For configurations giving $`𝒩=2`$ Super Yang-Mills on a commutative torus, the D4-brane worldvolume may also contain integral numbers of fundamental strings, D2 branes and D0 branes . However for configurations giving QCD-like theories with a mass gap, the allowed charges are more exotic. There can be no D2 branes or fundamental IIA strings in the low energy theory. This reflects the fact that both the magnetic and electric fluxes in QCD are elements of $`Z_N`$ rather than additive central charges. However there may be Euclidean D0-branes with fractional charge. When the theory is fully compactified on $`\mathrm{\Sigma }\times T^4`$, the D0 brane charge has a fractional part of the form $`\frac{1}{N}mm`$. This naively appears to violate Dirac quantization in the presence of a single D6 brane. However, due to the nonzero $`B=2\pi \frac{m}{N}`$, additional charges are induced on the D6-brane which preserve the Dirac quantization condition. The organization of this paper is as follows. In section II we briefly review the construction of four dimensional gauge theories by wrapping M-fivebranes on calibrated two-cycles. In section III we perform an additional compactification on a two-torus, allowing a three form modulus in M-theory, and show how this modulus is related to the ’t Hooft flux when the theory has a mass gap. In section IV we show how the ’t Hooft flux behaves like a two-form modulus for the QCD string on a torus. In section V we discuss the unusual D-brane charges allowed when there is a mass gap. 2. M-fivebranes and QCD A variety of four dimensional Yang-Mills theories may be obtained by wrapping an M fivebrane on calibrated two cycles embedded in $`S^1\times R^{10}`$. Here we consider two-cycles corresponding to an SU(N) Yang-Mills theory with a mass gap. For instance a theory in the same universality class as $`𝒩=1`$ $`SU(N)`$ Yang-Mills is obtained by wrapping the five-brane on a particular holomorphic curve embedded in three complex dimensions. For details of this curve, the reader is referred to . This theory becomes exactly $`𝒩=1,SU(N)`$ Yang Mills in the weakly coupled IIA limit in which the radius of the $`S^1`$ goes to zero in eleven dimensional Planck units. While taking this limit, the parameters of the curve are adjusted to keep the QCD scale fixed. At the same time one takes the limit $`\mathrm{\Lambda }_{QCD}l_p^{11}0`$ where $`l_p^{11}`$ is the eleven dimensional Planck length. We will begin by studying the M theory limit in which the radius of the $`S^1`$ is large. However the quantities which we will compute are discrete and independent of the radius. For our purposes the degree of supersymmetry is unimportant so long as there is a mass gap. Our results are be equally applicable to the theory in the same universality class as pure $`𝒩=0`$ QCD which is obtained from a non-holomorphic but minimal area two cycle. We will make use of two essential properties of two-cycles $`\mathrm{\Sigma }`$ corresponding to $`SU(N)`$ Yang-Mills theories with a mass gap. The first property is that $`H_1(\mathrm{\Sigma },Z)`$ is generated by a cycle $`S^1`$ wrapping the $`S^1`$ of space-time N times . The second property is that there are no square integrable harmonic one-forms on $`\mathrm{\Sigma }`$. A compactification of $`\mathrm{\Sigma }`$ obtained by adding points at infinity gives a curve of genus zero . 
Because of the latter property, there are no light states after dimensional reduction. At low energies, the only solution of the equations of motion for the self dual three-form field strength of the M5-brane is $$T^{(3)}=db^{(2)}C^{(3)}|_{M5}=0,$$ where $`b^{(2)}`$ is the two-form gauge potential of the M-fivebrane, and $`C^{(3)}|_{M5}`$ is the pullback of the bulk three-form potential to the M-fivebrane. Note that if we instead considered a curve $`\mathrm{\Sigma }`$ giving $`𝒩=2`$ super Yang-Mills theory, then there would be normalizable harmonic one forms associated with massless $`U(1)`$ vector multiplets in the low energy theory . If the M-fivebrane worldvolume $`X`$ has a non-trivial $`H_3(X,Z)`$, there may be several vacua corresponding to solutions of (2.1). We assume that the background $`C^{(3)}`$ is flat, so $`dC^{(3)}=0`$. We will see that these vacua are discrete. 3. Three-form moduli and ’t Hooft fluxes Let us now compactify M theory on $`S^1\times T^2\times R^8`$, and wrap an M-fivebrane on $`\mathrm{\Sigma }\times T^2\times R^2`$. The low energy theory is an SU(N) Yang-Mills theory on $`T^2\times R^2`$ with a mass gap. The M-fivebrane worldvolume now has non-trivial three cycle $`S^1\times T^2`$. Since the M5 brane theory contains strings coupling to $`b^{(2)}`$, there is a flux quantization condition $$_{S^1\times T^2}𝑑b^{(2)}=2\pi m,$$ where $`m`$ is an integer. Then because $`T^{(3)}=0`$, $$_{S^1\times T^2}C^{(3)}|_{M5}=2\pi m.$$ Since $`S^1`$ wraps N times around $`S^1`$, the bulk three-form modulus is given by $$_{S^1\times T^2}C^{(3)}=2\pi \frac{m}{N}.$$ In order to get solutions for other values of $`C^{(3)}`$, one must expand $`T^{(3)}`$ in the Kaluza Klein modes on $`\mathrm{\Sigma }`$. Note that in general $`\mathrm{\Sigma }`$ will change in the presence of a non-zero $`T^3`$, however the property that there are no normalizable harmonic one-forms persists. In this case we do not expect varying $`C^{(3)}`$ to give a continuous class of four dimensional Yang-Mill theories on non-commutative tori. We will now argue that the the integer $`m`$ is the ’t Hooft flux. To do so we will work in the IIA limit. In this limit the M5-brane becomes a set of $`N`$ coincident D4-branes stretched between NS5-branes. In the absence of the $`NS5`$-branes, the low energy theory would have a $`U(N)`$ gauge group. However in our case there is a mass gap, and the theory is $`SU(N)`$. The $`U(1)`$ degree of freedom is frozen: $`\frac{1}{N}trFB_{NS}=0`$. This follows from dimensional reduction of $`T^{(3)}=0`$. If the D4-brane is compactified on a torus, then fields on the torus are periodic up to gauge transformations $`U^1`$ and $`U^2`$. These gauge transformations are subject to a consistency condition , $$U^1(x^1,x^2)U^2(x^1+2\pi R^1,x^2)U^1(x^1+2\pi R^1,x^2+2\pi R^2)U^2(x^1,x^2+2\pi R^2)=I,$$ This condition permits the existence of fundamental matter, which appears when strings end on the D4 branes. Note that (3.1) follows from an analogous condition. The $`U`$’s may be broken up into a $`U(1)`$ factor and an $`SU(N)`$ factor, so that the above condition becomes $$e^{i_{T^2}\frac{1}{N}TrF}e^{2\pi i\frac{m}{N}}=I,$$ where $`m`$ is the $`SU(N)`$ ’t Hooft flux. Since $`\frac{1}{N}trF=B_{NS}`$, we find the ’t Hooft flux is precisely the discrete quantum number discovered in the M-theory approach. The fact that the ’t Hooft flux is only defined modulo $`N`$ is reflected in the fact that $`2\pi `$ shifts in $`B_{NS}`$ are gauge transformations. 
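The $`Z_N`$ phase in the consistency condition can be made explicit with constant transition functions built from the standard clock and shift matrices; the following is a minimal sketch (the choice of $`U^1`$ as the clock matrix and $`U^2`$ as the $`m`$-th power of the shift matrix is a textbook realization of a twist $`m`$, not something taken from the text):

```python
import numpy as np

N, m = 5, 2
w = np.exp(2j * np.pi / N)
P = np.diag(w ** np.arange(N))            # clock matrix
Q = np.roll(np.eye(N), 1, axis=0)         # shift matrix: Q e_k = e_{k+1 mod N}

U1 = P                                    # constant transition function in x^1
U2 = np.linalg.matrix_power(Q, m)         # constant transition function in x^2

# For constant U's the consistency condition reduces to the commutator
# U1 U2 U1^{-1} U2^{-1}, which equals the Z_N twist exp(2*pi*i*m/N) times 1.
comm = U1 @ U2 @ np.linalg.inv(U1) @ np.linalg.inv(U2)
print(np.allclose(comm, w**m * np.eye(N)))   # True

# In U(N) this phase can be cancelled by a U(1) factor whose flux supplies the
# opposite phase, which is how the twist ends up tied to B_NS in the text.
```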
Thus the ’t Hooft fluxes are associated with degenerate classical vacua of the M5-brane, all of which have $`T^{(3)}=0`$. This demonstrates magnetic screening and is precisely what one expects for for an electrically confining theory with a mass gap . The constraint $`F_{U(1)}B=0`$ arising from the existence of a mass gap means that the theory is a standard gauge theory on a commutative torus. In this quantity was argued to be a local measure of non-commutativity on the brane worldvolume. A simple way to see that the small instanton singularity is not resolved is to perform a T-duality along one of the cycles of the torus. The $`B`$ field then becomes the real part of the complex structure of the dual torus. The $`N`$ D4-branes become a D3-brane at an angle determined by $`\frac{1}{N}TrF`$. Consider wrapping a Euclidean D1-brane on the cycle of the torus on which the T-duality is performed. Because of the relation $`B=2\pi \frac{m}{N}`$ relating the real part of the complex structure to the angle of the D3-branes, the D1-brane is at a 90 degree angle to the D3-brane, giving a BPS configuration. There is a Coulomb branch describing motion relative to the D3-branes. The D1 can be placed such that it intersects the D3-branes. Then upon undoing the T-duality, the D1-brane becomes a Euclidean D0-brane corresponding to a small instanton. It has been argued that the small instanton singularity plays an important role in QCD dynamics. Our result supports this claim, since a non-commutative deformation does not exist when there is a mass gap. Note that this is related to the freezing of the $`U(1)`$ degree of freedom. A non-commutative star product does not exist when this degree of freedom is frozen. In $`U(N)`$ gauge theories on a non-commutative torus, gauge transformations mix the $`SU(N)`$ and $`U(1)`$ components. It is interesting to note that in the $`N\mathrm{}`$ limit of a of the theory with with a mass gap, one is allowed a continuous range of values for $`B`$, all of which correspond to a gauge theory on a commutative torus. 4. $`B`$ field for the QCD string It has long been suspected that large N QCD is a string theory. We now wish to prove the conjecture that the ’t Hooft flux behaves like the real part of a two-form modulus of the QCD-string. To do so we will show that the Euclidean action of a QCD string wrapping $`T^2`$ $`n`$ times has an imaginary part given by $`2\pi n\frac{m}{N}`$ Let us first compute the imaginary part of the action of a wrapped MQCD string. The MQCD string is an open membrane ending on the M5-brane. The worlvolume of the MQCD string is $`I\times `$, where $``$ is the world sheet of the QCD string and $`I`$ is an open interval generating the relative homology $`H_1(Y/\mathrm{\Sigma },Z_N)`$. $`Y`$ is the space in which $`\mathrm{\Sigma }`$ is embedded. For a detailed discussion of the MQCD string, the reader is referred to . The action of the open M-twobrane has an imaginary piece: $$ImS=_{I\times }C^{(3)}|_{M2}_{I\times }b^{(2)},$$ where $`C^{(3)}|_{M2}`$ is the pullback of the bulk three-form to the membrane, and $`b^{(2)}`$ is the two-form of the M-fivebrane world volume. The second term is necessary for gauge invariance of the open membrane action. Under $`C^{(3)}C^{(3)}+d\lambda ^{(2)}`$, $`b^{(2)}`$ transforms to cancel the the non-invariance due to the boundary: $`b^{(2)}b^{(2)}+\lambda ^{(2)}|_{M2}`$. 
Because $`db^{(2)}C^{(3)}|_{M5}=0`$, we may rewrite (4.1) as $$ImS=_{(I\mathrm{\Omega })\times }C^{(3)},$$ where $`\mathrm{\Omega }`$ is a chain in the M5-brane with boundary $`I\times `$. The MQCD string is homotopic to the IIA string, which is a membrane with the world volume $`S^1\times `$ . Therefore the above expression becomes $$ImS=_{S^1\times }C^{(3)}.$$ So if $``$ wraps $`T^2`$ $`n`$ times, $$ImS=2\pi n\frac{m}{N}.$$ This result is discrete and independent of the radius of the $`S^1`$, so we expect it to hold in the IIA limit. In the large $`N`$ limit $`\frac{2\pi m}{N}`$ becomes continuous, and behaves as the two-form modulus for the QCD-string, as well as the IIA string. 5. D-brane charges in theories with a mass gap The allowed D-brane charges in theories with a mass gap differ substantially from those without a mass gap. If we had considered a brane construction of an $`𝒩=2`$ theory instead, the D2 brane charge and the fundamental string charge would equal the magnetic and electric fluxes respectively. However the fluxes in QCD are very different objects. The term in the M5-brane action of the form $`S=C^{(3)}db^{(2)}`$ vanishes in MQCD because $`db^{(2)}C^{(3)}=0`$. Due to the self duality of $`db^{(2)}C^{(3)}=0`$, this term becomes $$S=C_{RR}^{(3)}Tr(FB)+B_{NS}Tr^{}(FB),$$ in the IIA limit. This vanishes, so there are no states in the low energy theory carrying either D2 brane or fundamental string charge. In brane constructions of QCD, the Euclidean D0-brane charge may be fractional. The importance of such fractional charges for confinement and dynamical generation of superpotentials has been discussed in . We will show that the such fractional charges are consistent with Dirac quantization in the presence of a D6-brane. The Chern-Simons term in the D4 brane action which determines whether there is also D0-brane charge is given by $$C_{RR}^{(1)}Tr\left[(FB)(FB)\right]$$ Since $`Tr(FB)=0`$, the D0-brane charge is given by the $`SU(N)/Z_N`$ contribution to the instanton number, which may be fractional . If one compactifies $`SU(N)/Z_N`$ QCD on $`T^4`$, then the instanton number is given by $$\frac{1}{16\pi ^2}=\nu +\frac{mm}{N},$$ where $`\nu `$ is an integer and $`m`$ are the integer ’t Hooft fluxes. The fractional piece apparently violates Dirac Quantization in the presence of a single D6-brane, since $`Q_6Q_0^{}`$ is not an integer multiple of $`2\pi `$. If a D6-brane is present then the theory on the D4 brane contains massive fundamental matter due to string stretched between the D6-brane and the D4-brane. Since fundamentals of $`SU(N)`$ are charged under the $`Z_N`$ center, it would naively seem that ’t Hooft flux must vanish modulo $`N`$ and the instanton number must be integer. However the matter introduced by the D6-brane is in a fundamental representation of $`U(N)`$, and a nontrivial $`SU(N)`$ ’t Hooft twist may be cancelled by a $`U(1)`$ twist, as in (3.1). The introduction of fundamental matter by a D6-brane does not prohibit non-trivial ’t Hooft fluxes or fractional D0-brane charge. The Dirac quantization problem is resolved as follows. When the ’t Hooft flux is non-zero, there is also a background $`B`$ given by the relation (1.1). For non-zero $`B`$ a D6-brane carries other D-brane charges . The non-integer terms which threaten to violate Dirac quantization turn out to cancel just as they would for a pair of dyons in the presence of a non-zero theta parameter. 
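A minimal arithmetic sketch of this cancellation follows (the sample values of $`N`$, $`m`$ and $`\nu `$, and the form of the induced coupling, minus $`(B/2\pi )^2`$ times the D4-brane charge $`N`$, are illustrative assumptions):

```python
from fractions import Fraction

N, m, nu = 4, 3, 2                      # rank, 't Hooft flux, integer instanton number

q0 = nu + Fraction(m * m, N)            # D0 charge with its fractional piece m*m/N
b = Fraction(m, N)                      # B/(2*pi) fixed by the relation (1.1)
induced = -(b ** 2) * N                 # charge induced on the D6-brane by B

total = q0 + induced                    # combination entering the Dirac phase
print(total, total.denominator == 1)    # 2 True: the fractional pieces cancel
```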
Suppose that the $`D4`$ brane on which the Yang-Mills theory lives is extended in the directions $`01234`$, with the directions $`0123`$ compactified on $`T^4`$. The D4-brane is bounded by NS-5 branes at fixed values of $`x^4`$. Consider the action of a D6-brane with the world-volume $`T^4\times S^3`$, where $`S^3`$ is embedded in the directions $`56789`$. Also, suppose that the D6-brane is at a value of $`x^4`$ between the NS5-branes and does not intersect the D4-D0 system. The path integral for the D6 brane contains a term $$e^{i\left(_{T^4\times S^3}C_{RR}^{(7)}\frac{BB}{4\pi ^2}_{S^3}C_{RR}^{(3)}\right)},$$ The second term reflects the fact that in the presence of $`B`$, a D2-brane charge is induced on the D6-brane. A D4-brane charge is induced as well, however the corresponding term in the action does not effect Dirac quantization. This is because the the D4-brane is hodge dual to a D2-brane, and the D2-brane charge vanishes on the D4-brane associated with the Yang-Mills theory. Because of the D4-D0 system, the Ramond-Ramond potentials $`C_{RR}^{(7)}`$ and $`C_{RR}^{(3)}`$ are not globally defined. For (5.1) to be well defined, one must have $$e^{i\left(_{T^4\times S^4}𝑑C_{RR}^{(7)}\frac{BB}{4\pi ^2}_{S^4}𝑑C_{RR}^{(3)}\right)}=1,$$ where $`S^4`$ is embedded in the directions $`56789`$ and surrounds the D4-D0 system, which is a point in $`56789`$. $`𝑑C^{(7)}`$ is simply the D0-brane charge, with fractional piece $`\frac{mm}{N}`$. Similarly $`𝑑C^{(3)}`$ is the D4-brane charge, or N. Since $`B=2\pi \frac{m}{N}`$, equation (5.1) is satisfied. Roughly speaking, we have found that $$Q^{(0)}Q^{(6)}+Q^{(2)}Q^{(4)}Q^{(4)}Q^{(2)}=2\pi l,$$ where the $`Q^{}`$ charges are associated with the D6-brane, and the $`Q`$ charges are associated with the D4-branes. Acknowledgments This work was supported in part by DOE under contract No. DE-AC02-76-ER-03071. I am especially grateful to Burt Ovrut and Edward Witten for enlightening conversations. I also profited greatly from discussions with Edna Cheung, Miriam Cvetic, Ori Ganor, Antal Jevicki, Robert de Mello Koch, Morten Krogh, Sanjaye Ramgoolam, and Daniel Waldram References relax E. Witten, ‘‘Branes and the dynamics of QCD,’’ Nucl. Phys. B507 (1997) 658-690, hep-th/9706109. relax A. Hanany, M. Strassler and A. Zaffaroni, ‘‘Confinement and strings in MQCD,’’ Nucl. Phys. B513 (1998) 87, hep-th/9707244. relax K. Hori and H. Ooguri, ‘‘Strong coupling dynamics of four-dimensional N=1 gauge theories from M theory five-brane.’’ Adv. Theor. Math. Phys. 1 (1998) 1, hep-th/9706082. relax G. ’t Hooft, "A property of electric and magnetic flux in non-abelian gauge theories," Nucl. Phys. B153 (1979) 141. relax S. Gubser, I. Klebanov, and A. Polyakov, ‘‘Gauge theory correlators from noncritical string theory,’’ Phys. Lett. B428 (1998) 105-114, hep-th/9802109. relax J. Maldacena, ‘‘The large N limit of superconformal field theories and supergravity,’’ Adv. Theor. Math. Phys. 2 (1998) 231, hep-th/9711200. relax E. Witten, ‘‘Anti-de-Sitter space and holography,’’ Adv. Theor. Math. Phys. 2 (1998) 253, hep-th/9802150. relax E. Witten, ‘‘Anti-de-Sitter space, thermal phase transition, and confinement in gauge theories,’’ Adv. Theor. Math. Phys. 2 (1998) 505, hep-th/9803131. relax E. Witten, ‘‘ADS/CFT correspondence and topological field theory,’’ JHEP 9812 (1998) 012, hep-th/9812012. relax A. Connes, M. Douglas, A. Schwarz, "Noncommutative geometry and Matrix theory: compactification on tori," JHEP 9802 (1998) 003, hep-th/9711162. relax M. Douglas and C. 
Hull, "D-branes and the noncommutative torus," JHEP 9802 (1998) 008, hep-th/9711165. relax J. Brodie, ‘‘Fractional branes, confinement, and dynamically generated superpotentials,’’ Nucl. Phys. B532 (1998) 137, hep-th/9803140. relax O. Aharony, M. Berkooz, N. Seiberg, ‘‘Light cone description of (2,0) superconformal theories in six-dimensions,’’ Adv. Theor. Math. Phys. 2 (1998) 119, hep-th/9712117. relax A. Astashkevich, N. Nekrasov, A. Schwarz, ‘‘Instantons on noncommutative $`R^4`$ and (2,0) superconformal six-dimensional theory,’’ Commun. Math. Phys. 198 (1998) 689, hep-th/9810147. relax M. Douglas, ‘‘Conformal field theory techniques in large N Yang-Mills theory,’’ Cargese Workshop on Strings, Conformal Models and Topological Field Theories, Cargese, France, May 12-26, 1993, hep-th/9311130. relax Z. Guralnik, ‘‘T-duality of large N QCD,’’ 1998 Paris workshop on Quantum Chromodynamics. hep-th/9903021. relax Z. Guralnik, ‘‘Duality of large N Yang-Mills theory on $`T^2\times R^n`$,’’ hep-th/9804057. relax M. Douglas, "Branes within branes," Cargese 1997, Strings, branes and dualities 267, hep-th/9512077. relax E. Witten, ‘‘Solutions of four-dimensional field theories via M Theory, Nucl. Phys. B500 (1997) 3, hep-th/9703166. relax Z. Guralnik and S. Ramgoolam, ‘‘Torons and D-brane bound states,’’ Nucl. Phys. B499 (1997) 241, hep-th/9702099. relax F. Ardalan, H. Arfaei and M.M. Sheikh-Jabbari, ‘‘Mixed branes and M(atrix) theory on non-commutative torus,’’ hep-th/9803067. relax F. Ardalan, H. Arfaei and M.M. Sheikh-Jabbari, ‘‘Noncommutative geometry from strings and branes,’’ JHEP 9902 (1999) 016, hep-th/9810072. relax C.-S Chu and P.-M Ho, ‘‘Noncommutative open string and D-brane,’’ Nucl. Phys. B550 (1999) 151, hep-th/9812219. relax G. ’tHooft, ‘‘Some twisted self-dual solutions for the Yang-Mills equations on a hypertorus,’’ Comm. Math. Phys 81 (1981) 267.
# On Solutions to the Nonlinear Phase Modification of the Schrödinger Equation ## 1 Introduction Recently we have presented the nonlinear phase modification of the Schrödinger equation . From the general scheme of the modification we selected the two simplest models which guarantee that the departure from the linear Schrödinger equation is minimal in some reasonable manner. One of the models turned out to have the same continuity equation as the continuity equation of the Doebner-Goldin modification and we demonstrated that its Lagrangian leads to a particular variant of this modification. The other model though constitutes a novel proposal not investigated in the literature before. It is the purpose of this report to present some physically interesting one-particle solutions<sup>1</sup><sup>1</sup>1Multi-particle solutions are discussed in . to this proposal that we called the simplest minimal phase extension (SMPE) of the Schrödinger equation. Before doing so, let us briefly recall it. In what follows, $`R`$ and $`S`$ denote the amplitude and the phase of the wave function<sup>2</sup><sup>2</sup>2We follow here the convention of that treats the phase as the angle. In a more common convention $`S`$ has the dimensions of action and $`\mathrm{\Psi }=Rexp(iS/\mathrm{})`$. $`\mathrm{\Psi }=Rexp(iS)`$, $`V`$ stands for a potential, and $`C`$ is the only constant of the modification that does not appear in linear quantum mechanics. The discussed extension, similarly as the Schrödinger equation, is invariant under the Galilean group of transformations and the space and time reflections. The Lagrangian density for the modification, $$L(R,S)=\mathrm{}R^2\frac{S}{t}+\frac{\mathrm{}^2}{2m}\left[\left(\stackrel{}{}R\right)^2+R^2\left(\stackrel{}{}S\right)^2\right]+CR^2\left(\mathrm{\Delta }S\right)^2+R^2V,$$ (1) enables one to derive the energy functional, $$E=d^3x\left\{\frac{\mathrm{}^2}{2m}\left[\left(\stackrel{}{}R\right)^2+R^2\left(\stackrel{}{}S\right)^2\right]+CR^2\left(\mathrm{\Delta }S\right)^2+VR^2\right\},$$ (2) a conserved quantity for the potentials that do not depend explicitly on time which coincides with the quantum-mechanical energy defined as the expectation value of the Hamiltonian for this modification . The equations of motion for the modification read, $$\mathrm{}\frac{R^2}{t}+\frac{\mathrm{}^2}{m}\stackrel{}{}\left(R^2\stackrel{}{}S\right)2C\mathrm{\Delta }\left(R^2\mathrm{\Delta }S\right)=0,$$ (3) $$\frac{\mathrm{}^2}{m}\mathrm{\Delta }R2R\mathrm{}\frac{S}{t}2RV\frac{\mathrm{}^2}{m}R\left(\stackrel{}{}S\right)^22CR\left(\mathrm{\Delta }S\right)^2=0.$$ (4) As argued in , the most natural way to represent the nonlinear coupling constant $`C`$ is as a product $`\pm \mathrm{}^2L^2/m`$, where $`L`$ is some characteristic length to be thought of as the size of an extended particle of mass $`m`$. This leads us to a nonlinear quantum theory of the particle of mass $`m`$ and finite size $`L`$, but still leaves open the question of sign of $`C`$. However, an alternative more traditional interpretation of $`C`$ as a universal coupling constant is also possible. In this approach $`L`$ is just a proxy for $`C`$ deprived of any additional physical meaning of its own. In general, the solutions to the modification do not possess the classical limit in the sense of the Ehrenfest theorem. 
It is so because the Ehrenfest relations for this modification contain some additional terms, $$m\frac{d}{dt}\stackrel{}{r}=\stackrel{}{p}+\frac{2Cm}{\mathrm{}}d^3x\stackrel{}{r}\mathrm{\Delta }\left(\mathrm{\Delta }SR^2\right),$$ (5) $$\frac{d}{dt}\stackrel{}{p}=\stackrel{}{}V+Cd^3x\left[2\stackrel{}{}S\mathrm{\Delta }\left(\mathrm{\Delta }SR^2\right)R^2\stackrel{}{}\left(\mathrm{\Delta }S\right)^2\right].$$ (6) However, in the one-dimensional case, as long as $`\mathrm{\Delta }S=f(t)`$, where $`f(t)`$ is an arbitrary function of time, these relations reduce to the standard Ehrenfest relations. For a system described by some Gaussian wave function an appropriate measure of its physical size seems to be the width of its probability density. We will take this measure as the definition of the physical size of the system. As we will see, $`L`$ is related to the physical size of the system in some situations that we will consider. Moreover, we will show that it is possible to determine $`L`$ in in the so-called subrelativistic approach discussed in connection with the solitonic solution. It is in this approach that $`L`$ can be given a physical meaning only as a truly particle-dependent parameter of the theory, the particle’s attribute similarly as its mass or spin. We will find that the physical size of the particle in this framework is that of its Compton wavelength. The stationary solutions to the linear Schrödinger equation for which $`S=Et/\mathrm{}+const`$, where $`E`$ is the energy of a system, are also stationary solutions to this modification. There may however exist other stationary solutions as well. The purpose of the next two sections is to present some non-stationary solutions that describe single systems in one dimension, but, as we will see, the solitonic solution reduces to a nontrivial stationary solution in the zero velocity limit. ## 2 Wave Packet Solutions We will assume in this section that $`\mathrm{}=1`$. Let us start from the simplest solution that is also a solution to the linear Schrödinger equation. It is a coherent state for which $$R^2=\frac{1}{\sqrt{\pi }x_0}\mathrm{exp}\left[\frac{\left(xx_0\sqrt{2}\mathrm{cos}(\omega t\delta )\right)^2}{x_{0}^{}{}_{}{}^{2}}\right]$$ (7) and $$S=\left(\frac{\omega t}{2}\frac{|\alpha |^2}{2}\mathrm{sin}2\left(\omega t\delta \right)+\frac{\sqrt{2}|\alpha |x}{x_0}\mathrm{sin}\left(\omega t\delta \right)\right).$$ (8) Since $`\mathrm{\Delta }S=0`$, the coherent state being a solution to the linear Schrödinger equation in the potential of a simple harmonic oscillator, $`V=m\omega ^2x^2/2`$, represents a solution to equations (3) and (4) in the very same potential. Here $`x_0=1/\sqrt{m\omega }`$, while $`\alpha `$ and $`\delta `$ are arbitrary constants, complex and real, respectively. The physical size of this system is $`x_0`$, but since $`\mathrm{\Delta }S=0`$, no relation between the size in question and the characteristic size $`L`$ introduced by the theory can be established. The coherent state is an example of a wave packet. Another member of this class, the ordinary Gaussian wave packet is not a solution to this modification. Unlike the Gaussian packet, the coherent state does not spread in time, but requires the potential of harmonic oscillator to support it. 
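Both statements, the vanishing of $`\mathrm{\Delta }S`$ for the coherent state and the vanishing of the extra Ehrenfest term whenever $`\mathrm{\Delta }S`$ depends on time only, are easy to check symbolically. The sketch below assumes only that the phase (8) is linear in $`x`$ (the overall signs are immaterial for this check) and uses a Gaussian as a stand-in for any normalizable $`R^2`$:

```python
import sympy as sp

x, t = sp.symbols('x t', real=True)
x0, omega, alpha, delta = sp.symbols('x_0 omega alpha delta', positive=True)

# Coherent-state phase of the form (8): linear in x, so its Laplacian vanishes
S = (omega*t/2 - alpha**2/2*sp.sin(2*(omega*t - delta))
     + sp.sqrt(2)*alpha*x/x0*sp.sin(omega*t - delta))
print(sp.diff(S, x, 2))                      # 0  ->  Delta S = 0

# If Delta S = f(t) and R is normalizable, the extra term of Eq. (5),
# the integral of x * d^2/dx^2 [ f(t) R^2 ], vanishes by parts:
f = sp.Function('f')(t)
R2 = sp.exp(-x**2)                           # stand-in normalizable density
extra = f * sp.integrate(x * sp.diff(R2, x, 2), (x, -sp.oo, sp.oo))
print(extra)                                 # 0
```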
Nevertheless, one can find another solution in this class that represents a modified Gaussian wave packet whose amplitude is the same as that of the ordinary Gaussian wave packet in the linear theory, $$R=R_L=\left[\frac{mt_0}{\pi \left(t^2+t_0^2\right)}\right]^{1/4}\mathrm{exp}\left[\frac{mt_0x^2}{2\left(t^2+t_0^2\right)}\right].$$ (9) However, its phase is different from the phase of the “linear” packet, $$S_L=\frac{mtx^2}{2\left(t^2+t_0^2\right)}\frac{1}{2}\mathrm{arctan}\frac{t}{t_0}.$$ (10) To ensure that this difference is minimal, we assume that the phase of the modified packet has the form $$S=S_L+\frac{1}{2}f(t)x^2+h(t).$$ (11) The parameter $`t_0`$ is related to the average of the square of the momentum of this system via $`p^2=m/2t_0`$, so $`t_0`$ has to be positive. This is also necessary for the normalization of the packet’s wave function. Moreover, this constant determines the minimal physical size of the system $`L_{ph}^2(t=0)=`$ $`4t_0/m`$. In principle, this size could be arbitrarily small as the momentum of the packet can be arbitrarily large. We will see however that for negative values of $`C`$ the parameter $`t_0`$ must be larger than some finite value that depends on $`C`$. Such a packet differs in the most minimal way from the Gaussian wave packet of linear theory, but unlike the latter it may not exist without the support of some external potential. We will now find the functions $`f(t)`$ and $`h(t)`$ and the potential $`V(x,t)`$ which is required to support this configuration. Denoting for simplicity $`\mathrm{\Delta }S`$ by $`g(t)`$, we find that the first equation of the modification reduces to $$\frac{1}{m}\stackrel{}{}\left(xf(t)R^22Cmg(t)\stackrel{}{}R^2\right)=0.$$ (12) The solution of this equation is possible only if the expression in the brackets is constant, but in order for the ratio $`f(t)/g(t)`$ to be a function of time only this constant has to be zero. Consequently, one obtains that $$\frac{f(t)}{2Cmg(t)}=\frac{2mt_0}{t_0^2+t^2},$$ (13) and since $`g(t)=\mathrm{\Delta }S_L+`$ $`f(t)`$, $$f(t)=\frac{4Cm^3t_0t}{\left(t_0^2+t^2\right)\left(t_0^2+t^2+4Cm^2t_0\right)}.$$ (14) The other equation of the modification will determine $`h(t)`$ and $`V(x,t)`$. It boils down to $$\frac{1}{2}\dot{f}(t)x^2+\frac{f^2(t)x^2}{2m}+\frac{f(t)S_L^{}x}{m}+Cg^2(t)+\dot{h}(t)+V(x,t)=0,$$ (15) where overdots denote differentiation with respect to time and the prime denotes differentiation with respect to $`x`$. Its solution requires that $`V(x,t)=A(t)x^2`$. One finds then that $$A(t)=\frac{2Cm^3t_0\left[t^4t_0^3(t_0+4Cm^2)\right]}{\left(t^2+t_0^2\right)^2\left(t^2+t_0^2+4Cm^2t_0\right)^2}$$ (16) and $$h(t)=C𝑑tg^2(t)=Cm^2\frac{dtt^2}{\left(t^2+t_0^2+4Cm^2t_0\right)^2}.$$ (17) Calculating this integral gives $$h(t)=\frac{Cm^2}{2}\left\{\frac{1}{\sqrt{B^2(t_0)}}\mathrm{arctan}\left(\frac{t}{\sqrt{B^2(t_0)}}\right)+\frac{t}{t^2+B^2(t_0)}\right\}+const,$$ (18) when $`B^2(t_0)=t_0^2+4Cm^2t_0`$ is non-negative, and $$h(t)=\frac{Cm^2}{2}\left\{\frac{t}{t^2B^2(t_0)}+\frac{1}{2a}\mathrm{ln}\left|\frac{t+\sqrt{|B^2(t_0)|}}{t\sqrt{|B^2(t_0)|}}\right|\right\}+const$$ (19) otherwise. The energy of this configuration is time-dependent, $$E=\frac{t^6+3t_0^2t^4+t_0^2\left[t_0^2+2t_0\left(t_0+8Cm^2\right)\right]t^2+\left[\left(t_0+4Cm^2\right)^2+4Cm^2\left(t_0+4Cm^2\right)\right]t_0^4}{4t_0\left(t^2+t_0^2\right)\left(t^2+t_0^2+4Cm^2t_0\right)^2},$$ (20) and, as seen from this expression, it is asymptotically bounded by $`E_{asymp}E(\left|t\right|\mathrm{})=1/4t_0`$. 
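Equation (20) can be evaluated directly; the short numeric sketch below takes the signs as displayed and enters the negative coupling as minus $`|C|`$ (the specific values of $`t_0`$, $`C`$ and $`m`$ are arbitrary illustrations):

```python
def E_packet(t, t0, C, m=1.0):
    """Energy of the modified Gaussian packet, Eq. (20), with hbar = 1."""
    num = (t**6 + 3*t0**2*t**4
           + t0**2*(t0**2 + 2*t0*(t0 + 8*C*m**2))*t**2
           + ((t0 + 4*C*m**2)**2 + 4*C*m**2*(t0 + 4*C*m**2))*t0**4)
    den = 4*t0*(t**2 + t0**2)*(t**2 + t0**2 + 4*C*m**2*t0)**2
    return num / den

C, m = -0.1, 1.0                      # negative coupling, entered as C = -|C|
t0 = 1.0
for t in (0.0, 1.0, 10.0, 1e3, 1e6):
    print(t, E_packet(t, t0, C, m))   # approaches 1/(4*t0) = 0.25 for large |t|
```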
Therefore, $`E_{asymp}=p^2/2m`$. The energy of the packet is asymptotically conserved, but it changes locally in time due to the time-dependent potential. Moreover, one observes that the energy of the Gaussian scales as $`1/|C|m^2`$, which is precisely as anticipated in based exclusively on dimensional arguments. A particularly simple form of the formula for energy is obtained for the negative coupling constant, $`C=|C|`$, and $`t_0=8|C|m^2`$, $$E=\frac{t^6+3\left(8Cm^2\right)^2t^4+\left(8Cm^2\right)^4t^2}{16|C|m^2\left(t^2+\left(8Cm^2\right)^2\right)\left(2t^2+\left(8Cm^2\right)^2\right)^2}.$$ (21) We see that in this case, $`E(t=0)=0`$ and $`E(t0)>0`$. What is the most interesting here is that the energy can become infinite for negative values of $`C`$ unless $`t_0>t_0^{cr,1}=4|C|m^2`$. This critical value of $`t_0`$ determines the lower bound on the minimal size of the packet in question as discussed earlier. This bound cannot be attained. Consequently, the lower bound for the minimal physical size of the packet is related to the characteristic size as $`L_{ph}^{lb,1}(t=0)=4L`$. It is through this relationship that $`L`$ could be, in principle, established experimentally if the bound on the minimal physical size of the packet proved to be somehow measurable. In the subrelativistic framework to be discussed in the next section, $`L=\lambda _c/4`$, which leads to $`L_{ph}^{lb,1}(t=0)=\lambda _c`$. Let us also note that for the energy to be non-negative, $`t_0t_0^{cr,2}=8|C|m^2`$. Using $`t_0^{cr,2}`$ would yield the higher lower bound on the minimal physical size of the packet under study. In particular, in the subrelativistic framework, $`L_{ph}^{lb,2}(t=0)=\sqrt{2}\lambda _c`$. This bound is attainable. The Gaussian wave packet under discussion does not alter the standard Ehrenfest relations. ## 3 Solitonic Solution We will now demonstrate that the modification discussed possesses a solitonic solution. By the soliton we mean an object whose amplitude is well localized and does not spread in time unlike that of ordinary Gaussian wave packets. It should also be a solution to the nonlinear equations of motion, i.e., we exclude the case of $`\mathrm{\Delta }S=0`$. We will seek a solution that resembles that of the Gaussian, but is not dispersive. Therefore, as an Ansatz for the amplitude we take $$R(x,t)=N\mathrm{exp}\left[\frac{(xvt)^2}{s^2}\right],$$ (22) where $`v`$ is the speed and $`s`$ is the half-width of the Gaussian amplitude to be determined through the coupling constant $`C`$ and other fundamental constants of the modification. The normalization constant $`N=\left(2/\pi s^2\right)^{1/4}`$. We will seek the phase in the form $$S(x,t)=a(xvt)^2+bvx+c(t),$$ (23) where $`a`$ and $`b`$ are certain constants and $`c(t)`$ is a function of time, all of which need to be found from the equations of motion. Assuming that $`V(x,t)=0`$ and substituting (22) and (23) into (3) and (4) reveals that the latter are satisfied provided $$b=m/\mathrm{},s^2=8mC/\mathrm{}^2,s^4a^2=1,$$ (24) and $$2\mathrm{}s^4m\frac{c(t)}{t}+2\mathrm{}^2s^2+\mathrm{}^2s^4b^2v^2+8Ca^2s^4m=0.$$ (25) We see that the coupling constant $`C`$ has to be negative, $`C=|C|`$. From (24) we now obtain that $$s^2=8m|C|/\mathrm{}^2=8L^2=8q\lambda _c^2,$$ (26) $$a=\pm \mathrm{}^2/8m|C|=\pm \frac{1}{8L^2}=\pm \frac{1}{8q\lambda _c^2},$$ (27) where $`q`$ is the Compton quotient equal to $`L^2/\lambda _c^2`$ and $`\lambda _c=\mathrm{}/mc`$ is the Compton wavelength of particle of mass $`m`$. 
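For orientation, the relations (24), (26) and (27) tie all of the soliton parameters to the single length $`L`$. The following is a small numeric sketch in units where the Planck constant and the mass are set to one, with an arbitrary negative coupling (the sign of $`a`$ is left open, as in (27)):

```python
import numpy as np

C = -0.02                                   # any negative coupling works
hbar = m = 1.0
L = np.sqrt(abs(C) * m) / hbar              # from C = -(hbar * L)**2 / m
s = np.sqrt(8 * m * abs(C)) / hbar          # Eq. (26)
a = 1.0 / s**2                              # |a| from Eq. (27)
b = m / hbar                                # slope of the linear term, Eq. (24)

print(np.isclose(s**4 * a**2, 1.0))         # consistency condition of Eq. (24)
print(s / L)                                # 2*sqrt(2): half-width set by L alone
```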
Combining (25-27) leads to $$c(t)=\frac{1}{16}\left(\frac{\mathrm{}^3}{m^2|C|}+\frac{8mv^2}{\mathrm{}}\right)t+const.$$ (28) The energy of the soliton is a function of its speed $`v`$, $$E\left(v\right)=E_{st}\left(L\right)+\frac{mv^2}{2},$$ (29) where $$E_{st}\left(L\right)=\frac{\mathrm{}^2}{16mL^2}=\frac{mc^2}{16q}$$ (30) is the stationary part of it. This part can become of the order of the rest energy of the particle and even bigger for appropriately small $`q`$’s. Nevertheless, as long as one remains outside the realm of special relativity, the decay of particles due to energetic reasons is not an issue and it is only the difference in the kinetic energy that matters and is actually observed. This difference can be observed in the process of changing the energy of the particle by slowing it down in some detector, in particular by stopping it. In the latter case one detects that the change in the particle’s energy is $`\mathrm{\Delta }E=mv^2/2`$. We also note the characteristic scaling of energy being proportional to $`\mathrm{}^2/mL^2`$, in agreement with what we anticipated in . It is tempting to assume that the stationary energy term represents the rest mass-energy of free particle, i.e., $`E_{st}\left(L\right)=\mathrm{}^2/16mL^2=mc^2`$. This determines the characteristic size $`L`$ of the particle to be a quarter of its Compton wavelength. However, as seen from (26), its physical size $`L_{ph}=\sqrt{2}s=4L`$ turns out to be equal to the Compton wavelength itself. Implicit in this assumption is the fact that the constant rest mass-energy term that one would obtain in the nonrelativistic approximation is dropped from the scheme and replaced by the self-energy term. The energy of this term gives rise to the rest mass-energy of the particle. We call this approach subrelativistic. It leads to a model type of particles whose physical size is precisely that of their Compton wavelength, but in no way can it describe particles of any other size.<sup>3</sup><sup>3</sup>3The requirement that the size of the quantum particle should be that of its Compton wavelength has recently been used as a postulate to build a model of nonlinear quantum mechanics of extended objects from first principles . It seems that the most appropriate way to interpret these solitons is as the fundamental particular constituents of quantum realm complementary to waves. Quantons, as we choose to call them, would then be the unique realization of the particle aspect of the wave-particle duality of quantum mechanics. This all seems to be too easy so one can suspect some trick here. The trick is that out of three constants $`\mathrm{}`$, $`m`$, and $`L`$ the last two having dimensions of kg and meter, respectively, it is always possible to form a quantity of the dimensions of energy, $`\mathrm{}^2/mL^2`$, and if this quantity is to be of the order of the rest mass-energy of the particle then $`L`$ should be of the order of its Compton wavelength $`\lambda _c`$. However, it is not necessarily as easy as this simple reasoning may suggest. First of all, this dimensional trick does not imply that the physical size of the quanton is to be precisely equal to its Compton wavelength. The fact that it is so is thus rather remarkable. Secondly, and even more importantly, if a good joke is not to be repeated too often ours is a good one indeed for it cannot be repeated neither in the Doebner-Goldin nor in the Białynicki-Birula modification , although for two different reasons. 
In the former, the dimensions of its nonlinear parameters do not allow to make any new dimensional quantities beyond those that can be made up of $`\mathrm{}`$ and $`m`$ and those two constants are not enough for our task. In the latter, the nonlinear parameter $`\epsilon `$ has the dimensions of energy and the dimensional analysis of the problem implies that the characteristic size of an object of such energy is inversely proportional to the square root of it. Indeed, the soliton of this modification has the radius $`\mathrm{}/\sqrt{2m\epsilon }`$. Experimentally established , the upper bound on the value of this parameter is so small that it implies the existence of objects of macroscopic size and thus easy, in principle, to observe. Nevertheless, they have not been empirically confirmed. One can however consider the self-energy independent of the rest-mass energy. The rest mass-energy would then constitute a separate part of the total energy or it could be eliminated from the considerations in a completely non-relativistic framework. None of these approaches is more advantegeous than the other, both are just models. The first of them attempts to model the rest mass-energy of the quanton and thus its inertia by means of the self-interaction term, the other approach is devoid of such a goal and treats the rest mass-energy as given and inconsequential. Let us now present the subrelativistic formulation in a more mathematical manner. As a subrelativistic Hamiltonian $`H_{sub}`$ of the free Schrödinger equation we define the Hamiltonian whose expectation value on its solutions is $$<H_{sub}>=E=\frac{mv^2}{2}+mc^2.$$ (31) It is through this equation that the quantum and classical world make contact. However, this equation by no means fixes the form of subrelativistic Hamiltonian. It is easy to find such a Hamiltonian in linear quantum mechanics. In fact, it is unique and it differs from the Hamiltonian of the free Schrödinger equation $`H_{Sch}`$ by the rest energy term. In other words, it is $$H_{sub}^L=H_{Sch}+H_{rest}^L=H_{Sch}+mc^2.$$ (32) The solution to the free linear subrelativistic Schrödinger equation is, similarly as in the nonrelativistic case, a plane wave $`\mathrm{\Psi }=\mathrm{exp}(iS_{plane})`$, but with a slightly different phase, $$S_{plane}=\frac{mvx}{\mathrm{}}\left(Emc^2\right)t.$$ (33) In general, the rest energy term is of no relevance in the linear formulation of the subrelativistic Schrödinger equation for it can be absorbed in the phase of the quantum system without any further consequences. In nonlinear quantum mechanics, things can be very different. Again, we expect that the following decomposition $$H_{sub}^{NL}=H_{Sch}+H_{rest}^{NL}$$ (34) will lead to the dispersion relation (31). Now, however, the choice of the rest energy Hamiltonian $`H_{rest}^{NL}`$ is not unique. It seems that the most reasonable and minimal in some sense way to enhance this uniqueness is to stipulate that a free nonlinear subrelativistic Schrödinger equation has a solitonic solution which satisfies (31). As argued above, the Doebner-Goldin modification is unable to produce (31) for its nonlinear solutions due to dimensional reasons. The Białynicki-Birula and Mycielski modification can be thought of as a subrelativistic nonlinear extension of the Schrödinger equation only for a sufficiently light particle due to the experimental smallness of $`\epsilon `$. 
It is not out of the question that the SMPE is the only nonlinear modification of the Schrödinger equation that can be considered a nonlinear subrelativistic Schrödinger equation which has a solitonic solution fulfilling (31) and which, in addition, entails the unique value for the physical size of the soliton. Let us note that the plane wave that is the solution of the subrelativistic linear Schrödinger equation is also a solution to the modification in question. Nevertheless, it should be noted that the coupling constant of this modification is no longer universal in this approach for it is determined by other parameters of the theory to the effect that $`C_{sub}=\mathrm{}^4/16m^3c^2`$. Consequently, if $`C`$ is ever experimentally found to be independent of the mass of the particle, i.e., $`C`$ is indeed a truly universal constant and not a product of $`\mathrm{}^2`$, $`m`$, and $`c`$, then the discussed approach is viable only for one mass and thus is much less appealing. For this reason, it is more appropriate in this case to use $`L`$ rather than $`C_{sub}`$, for the former, being the characteristic size of the particle represents its attribute and therefore cannot be thought of as a universal constant. A similar solitonic solution exists also in the following time-dependent potential of harmonic oscillator, $$V(x,t)=k(xvt)^2,$$ (35) for any negative value of the coupling constant $`C`$. The amplitude and phase of the soliton are assumed to be the same as before, i.e., given by (22) and (23). The parameter $`b`$ is determined by (24) and the half-width of the soliton by (26), and so none of them is affected by the potential. Moreover $`c(t)`$ is also determined by (25), except that now $`a`$ satisfies the equation $$a^2=\frac{1}{s^4}\frac{km}{2\mathrm{}^2}=\frac{1}{64L^4}\frac{km}{2\mathrm{}^2},$$ (36) which implies that the strength of the potential cannot be greater than $`k_{crit}=\mathrm{}^2/32mL^4=mc^2/32qL^2`$. Choosing the standard form of $`k`$, $`k=m\omega ^2/2`$, we obtain that for a fixed $`L`$, $`\omega `$ $`\omega _{crit}=\mathrm{}/4mL^2`$. For a given $`\omega `$, $`LL_{\mathrm{max}}=\sqrt{\mathrm{}/4m\omega }`$. The energy of this configuration is $$E(v,L;d)=E_{st}(L;k)+\frac{mv^2}{2},$$ (37) where $$E_{st}(L;d)=\frac{\mathrm{}^2}{16mL^2}+2kL^2=\frac{mc^2}{16q}+2qk\lambda _c^2$$ (38) represents the stationary part of it. We note that it is only the phase and the energy of the particle that depend on the potential. The average position $`x=vt`$ and momentum $`p=mv`$ are the same for both of these solitonic solutions. In the case when $`v=0`$, each of these solutions reduces to a stationary solution of energy $`E_{st}\left(L\right)`$ and $`E_{st}(L;k)`$, respectively. For a given $`L`$, the maximum stationary energy (38) equals $`E_{st}(L;k_{crit})=\mathrm{}^2/8mL^2=mc^2/8q`$. However, as a function of $`L`$, $`E_{st}(L;k)`$ does not have a maximum, but a minimum. This minimum is attained for $`L=L_{\mathrm{max}}`$ and it is equal to the ground state energy of the harmonic oscillator, $`E=\mathrm{}\omega /2`$. Since for $`L_{\mathrm{max}}`$, $`a=0`$ and $`\mathrm{\Delta }S=0`$, we see that this state corresponds to the ground state of linear theory. For $`L<L_{\mathrm{max}}`$, there exist two nodeless wave functions of which one corresponds to the ground state of linear theory ($`\mathrm{\Delta }S=0`$). 
The state described by the other wave function has the energy, $$E_{st}\left(\omega \right)=\frac{\mathrm{}\omega }{4}\left(\frac{1}{Q_h}+Q_h\right)=\frac{\mathrm{}\omega _{crit}}{4}\left(1+\frac{\omega ^2}{\omega _{crit}^2}\right)=mL^2\left(\frac{\mathrm{}^2}{16m^2L^4}+\omega ^2\right),$$ (39) where $`Q_h=\left(L/L_{\mathrm{max}}\right)^2=4L^2m\omega /\mathrm{}=4q\mathrm{}\omega /mc^2=\omega /\omega _{crit}`$. Therefore, $$\mathrm{\Delta }E_{new}=E_{st}E_g=\frac{\mathrm{}\omega }{4}\left(\frac{1}{Q_h}+Q_h2\right)=\frac{\mathrm{}\omega _{crit}}{4}\left(1\frac{\omega }{\omega _{crit}}\right)^2$$ (40) represents a new line in the spectrum of harmonic oscillator not predictable by the linear theory. In terms of the separation between consecutive energy levels $`E_{con}`$ in the spectrum of linear theory and the frequency ratio $`\eta =\omega /\omega _{crit}`$, ($`\eta 1`$) $$\frac{\mathrm{\Delta }E_{new}}{E_{con}}=\frac{1}{4}\left(\frac{1}{Q_h}+Q_h2\right)=\frac{1}{4\eta }\left(1\eta \right)^2.$$ (41) In principle, it is easy to verify the existence of the new line. One should start observing the spectrum of harmonic oscillator right below $`\omega _{crit}`$. It is at this critical frequency that the new line splits off of the ground state and as we keep lowering the frequency, it moves towards the first excited state of linear theory. At $`\eta =1/4`$ it is approximately half-way there. The critical frequency $`\omega _{crit}`$ expressed in Hz is approximately $$\omega _{crit}=3\times 10^{19}\frac{m}{q_em_e},$$ (42) where $`m_e`$ is the mass of electron and $`q_e`$ the Compton quotient of a particle with respect to the Compton wavelength of the electron. Even for the lightest stable particle, the electron, this is well above the top range of frequency of gamma rays of the order of $`10^7`$ Hz, which seems to make impossible to carry out this type of experiment. We should note that if $`\omega \omega _c`$, which is a much more accessible regime, the new level has to be sought among highly excited states of harmonic oscillator as seen from (41). This may not necessarily be feasible either. It is reasonable to expect that $`L`$ is of the order of $`\lambda _c`$. In the subrelativistic framework, it is $`L=\lambda _c/4`$ that should be chosen. This yields the formula $$E_{st}=mc^2+\frac{m\lambda _c^2\omega ^2}{8}$$ (43) valid for $`\omega \omega _{crit}=4\mathrm{}/m\lambda _c^2=4m/\mathrm{}`$. However, it seems more appropriate to impose the condition $`m\lambda _c^2\omega ^2<8`$, so that the particle creation-annihilation does not occur. This defines the subrelativistic frequency regime to be $`\omega <\omega _{creat}=2\sqrt{2}m/\mathrm{}`$. What we obtained is a hard core particle regime with the physical size of the oscillator depending only on universal constants and equal to its Compton wavelength, in contrast to a “soft” core type of oscillator of linear theory whose size can be modified by changing its frequency. If this modification describes reality then we should be able to observe that one of the energy levels in the spectrum of the harmonic oscillator depends quadratically on $`\omega `$. Solitonic solutions occur also in other nonlinear modifications of the Schrödinger equation, as, for instance, in the modification of Białynicki-Birula and Mycielski and in the Doebner-Goldin type of modifications . 
It should be pointed out that the solitons presented in this paper exist for arbitrary values of the (negative) coupling constant which is not always the case in other nonlinear modifications where for this to happen some threshold value of nonlinear parameter(s) must be exceeded. ## 4 Conclusions We have presented four non-stationary one-dimensional solutions to the simplest minimal phase extension of the Schrödinger equation introduced in . The simplest of them, being also a solution to the linear Schrödinger equation, represents a coherent state which is a particular form of a wave packet. Its existence requires the potential of harmonic oscillator. Similar in nature is the second solution, the modified Gaussian packet whose amplitude is identical with the amplitude of the “linear” Gaussian wave packet, its phase being slightly different but having the same spatial shape as the phase of the ordinary Gaussian packet. This solution exists in the potential of harmonic oscillator with a time-dependent strength. The wave packet in question is dispersive, which is not the case for the coherent state and the other two solutions, the free Gaussian soliton and a similar soliton in the potential of harmonic oscillator travelling with the velocity of the soliton. These two objects are characteristic of nonlinear structures. All of these solutions have the standard Ehrenfest limit. For the existence of the solitons it is necessary that $`C<0`$. The other solutions exist for any value of the coupling constant, but it is only for the negative $`C`$ that the Gaussian packet seems to corraborate our hypothesis that the theory discussed describes extended particles. Indeed, if the coupling constant is negative, the minimal physical size of the packets must be larger than some finite value for otherwise they would develop infinite energy at some point. This squares quite nicely with the idea of extended, i.e., not point-like particles. The most physically interesting of the solutions presented is the free solitonic solution. It is conceivable that this solution can serve as a particle representation of the wave-particle duality embodied in quantum mechanics. The standard quantum theory despite many successful years of development has not been able to provide an acceptable physical realization of this duality as only the wave aspect of the duality in question has been incorporated in the mathematical structure of the theory. The wave packets cannot serve as good models of particles for they spread in time, suggesting that there exist macroscopically extended quantum objects contrary to the empirical evidence in this matter. The fact that these packets are not free solutions to the SMPE can thus be viewed as a partial boon to the theory, even if the theory implies that it is possible to create similar wave packets if an appropriate time-dependent potential is applied. Other notable modifications of the Schrödinger equation also contain wave packet solutions for time-dependent potentials. A good mathematical model of the particle should represent an object that is well localized and non-dispersive. The free soliton presented in this paper meets these requirements. What is specially attractive about it is that it is a particle solution to the modification that does not alter well verified properties of the quantum world established by pure wave mechanics such as, for instance, the atomic structure. 
This solution seems to be particularly relevant in the context of de Broglie-Bohm formulation of quantum mechanics . It is this formulation that puts a considerable emphasis on the particle aspect of the wave-particle duality. Whereas in the Copenhagen interpretation of this theory it is either the wave or the particle, and the particle can be viewed as the result of interference of waves, in the approach pioneered by de Broglie it is both the wave and the particle. In this picture, the waves are always associated with particles and serve as guides for them according to the original de Broglie idea of pilot waves . Needless to say that without a particle solution to the equations of motion, this picture is rather incomplete. The free particle solution of our modification can coexist with any solution of linear wave mechanics in the sense that they can be part of a bigger system described by a factorizable wave function without violating separability. This is indeed a perfect marriage of wave and particle in that they always remain separated. Other nonlinear modifications also contain particle-like solutions that might fulfill the dream of de Broglie. In , in a model specifically designed for this purpose the existence of a class of possible solutions of particle-like properties is demonstrated. However, these are solutions to approximate nonlinear equations. In the Białynicki-Birula and Mycielski modification, the width of free Gaussian soliton is $`\mathrm{}/\sqrt{2m\epsilon }`$, where $`\epsilon `$ is the only nonlinear physically significant parameter of the theory. Since the current upper bound on this parameter is $`3.3\times 10^{15}`$ eV, it implies that the size of gausson of the electron mass is of the order of $`3`$ mm which is a macroscopic value! Such solitons would be easy to observe, but so far they have somehow managed to escape our attention. It is thus likely that they simply do not exist. A remarkable class of new type of solitons, finite-length solitons,<sup>4</sup><sup>4</sup>4This type of solitonic solutions are known as compactons in the broader literature. have been recently discovered in the Doebner-Goldin type of modifications . As observed in , ‘they realize the “dream of De Broglie,” in the sense that they permit to identify a quantum particle with a non-spreading wave-packet of finite length travelling with a constant velocity in the free space.’ However, the length in question depends on the speed and the frequency of the soliton, and in some cases the smaller these are the bigger the length of the soliton. In particular circumstances nothing can prevent this length from becoming arbitrarily large, and so if these objects are to resemble microscopic quantum particles some additional physically justifiable assumptions are necessary. The Doebner-Goldin modification itself does not seem to provide any insight on how to handle this problem, in part because the physical meaning of its parameters is not well elucidated. On the other hand, the width of the solitons found in this paper which is a measure of their localization is of the order of the characteristic length of the modification, the length of the extended particle-system which this theory can be thought of describing. 
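The quoted gausson width is easy to reproduce; a quick numeric check in SI units follows (the bound on $`\epsilon `$ is taken as $`3.3\times 10^{15}`$ eV with a negative exponent, consistent with the 3 mm figure):

```python
hbar = 1.0545718e-34        # J s
me = 9.1093837e-31          # kg
eV = 1.602176634e-19        # J

eps = 3.3e-15 * eV          # upper bound on the nonlinear parameter
width = hbar / (2 * me * eps) ** 0.5
print(width)                # ~3e-3 m, i.e. a few millimetres
```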
It seems rather unlikely that one can find a soliton of reasonably small size for an arbitrary value of a nonlinear coupling constant that would be a physically sound model of quantum particle in a theory which does not involve implicitly or explicitly a parameter proportional to some characteristic length or its power. The examples presented in the preceding paragraph were intended to illustrate precisely this point. The stationary soliton solution in the potential of harmonic oscillator implies that there exist an energy level in the spectrum of harmonic oscillator not predictable by the linear theory. The energy of this level depends on the characteristic size of the oscillator that is limited by a certain critical value $`L_{\mathrm{max}}`$ which corresponds to the linear theory. It is for this value that the level in question coincides with the ground state of the harmonic oscillator in linear quantum mechanics and attains its minimum. The solution that corresponds to the nonlinear theory must thus be more compact than the solution of linear theory. Indeed, unlike the discussed solution of linear quantum mechanics with a “soft” size of the oscillator that can be modified by changing its frequency, the nonlinear solution describes a hard core particle regime with the physical size of the oscillator depending only on universal constants. If this modification describes reality then we should be able to observe that one of the energy levels in the spectrum of the harmonic oscillator depends quadratically on $`\omega `$. Finally, let us note that a particularly fitting approach to the SMPE as the theory of extended particles is the approach that we term subrelativistic. This approach introduces the speed of light $`c`$, as in the rest mass-energy of a system, but the framework of special relativity is not needed; the Galilean transformation is the symmetry of the theory. Being nonrelativistic, the Schrödinger equation provides only a limited description of physical phenomena. One can derive it from the Klein-Gordon equation in the limit in which the Compton wavelength is much smaller than de Broglie’s wavelength of quantum particle. Yet, the Klein-Gordon equation cannot be used as an equation for a generic relativistic spinless quantum system due to the problem of negative probabilities. It is tempting to extend the limits of the Schrödinger equation to the domain between the completely nonrelativistic and relativistic world, to the subrelativistic realm. Subrelativistic phenomena are not necessarily of only speculative character. They may arise due to certain pecularities of nonrelativistic quantum mechanics that, as argued in , does not in all respects behave as a fully Galilean invariant theory as one would expect it in the nonrelativistic limit. The difference is empirically significant, as illustrated by the Sagnac effect , and is due to the fact that the “quantum” Galilei group is not identical with its classical counterpart for the former bears the remnants of its relativistic origin. Therefore, the nonrelativistic quantum-mechanical description is sometimes inevitably subrelativistic as ultimately based on a broader group of symmetry than the classical Galilei group. Other effects of this kind that justify the subrelativistic approach may, in principle, be possible too. A particularly important instance of such an effect is provided by the spin-orbit coupling. 
It is within the subrelativistic approach that one can uniquely determine the physical size of the free particle which turns out to be equal to its Compton wavelength, a most reasonable size for a quantum particle. However, the approach in question makes sense only if the nonlinear parameter of the theory is particle-dependent, i.e., this parameter, such as $`L`$, has to be the particle’s attribute in the same manner as its mass and not a universal constant. ## Acknowledgments I would like to thank Professor Paweł O. Mazur for bringing my attention to the work of Professor Staruszkiewicz that started my interest in nonlinear modifications of the Schrödinger equation. I am also grateful to Professor Andrzej Staruszkiewicz for the critical reading of the preliminary version of this paper, his comments and a discussion, and to Kurt Kołtko for his interest in this work. A correspondence with Dr. Marek Czachor whose comments helped in a better presentation of some of the ideas of this paper is particularly acknowledged. This work was partially supported by the NSF grant No. 13020 F167 and the ONR grant R&T No. 3124141.
# Critical and Near-Critical Branching Processes ## I Introduction Scale-free distributions, or power laws, have been observed in a variety of biological, chemical and physical systems. Such distributions can arise from different underlying mechanisms, but always involve a separation of scales, which forces the distribution to take a standard form. Scale-free distributions are most often observed in the distribution of sizes of events (such as the Gutenberg-Richter law ), the distribution of times between events (e.g., the inter-event interval distribution in neuronal spike trains ), and frequencies. An example of the latter is the well-known and ubiquitous $`1/f`$ noise. Some systems are even more interesting because they seem to exhibit self-organization or self-tuning, concomitant with scale-free behavior as an inherent and robust property of the system, not due to the tuning of a control parameter by the experimenter. Two systems to which such spontaneous scale-free behavior has been attributed are sandpile models and taxon creation in biological systems. The former has served as the paradigm of “self-organized criticality”(SOC) , while the latter, manifested in the form of near power law shapes of rank-abundance curves, has been advanced as evidence of a fractal geometry of evolution . A much simpler system where power laws are observed is the random walk . For example, the waiting times $`t`$ for first return to zero of the simple random walk in one dimension (starting at $`x=0`$, at each time step, $`x(t+1)=x(t)\pm 1`$ with equal probability) have a probability distribution $`t^{3/2}`$. Closely related to random walks, branching processes can also create power law distributions. They have been used to model the dynamics of many systems in a wide variety of disciplines, including demography, genetics, ecology, physiology, chemistry, nuclear physics, and astrophysics. Here, we use a branching process to model the creation and growth of evolutionary taxa, and discuss its application to avalanches in SOC sandpile models. In Section II, we examine the properties of the Galton-Watson process. We find that this process can generate power laws by appropriate tuning of a control parameter, and examine the dynamics of the system both at the critical point and away from it. In Section III, we apply this branching process model to the taxonomic rank-frequency abundance patterns of evolution, and discuss the universality of their underlying dynamics. Finally, in Section IV, we discuss the implications of our work, including a discussion of the order and control parameters for the branching process and its applications, and suggest further questions. ## II The Branching Process The Galton-Watson branching process was first introduced in 1874 to explain the disappearance of family names among the British peerage . It is the first branching process in the literature, and also one of the simplest. Consider an organism that replicates. The number of replicants (daughters) it spawns is determined probabilistically, with $`p_i`$ ($`i=0,1,2`$…) being the probability of having $`i`$ daughters. Each daughter replicates (with the same $`p_i`$ as the original organism) and the daughter’s daughters replicate and so on. We are interested in the rank-frequency probability distribution $`P(n)`$ of the total number of organisms descended from this organism plus $`1`$ (for the original organism), i.e., the historical size of the “colony” the ancestral replicant has given rise to. 
Note that this is equivalent to asking for the probability distribution of the length of a random walk starting from 1 and returning to 0 with step sizes given by $`P(\mathrm{\Delta }n=i-1)=p_i`$ ($`i=0,1,2`$…). The abundance distribution $`P(n)`$ can be found by defining a generating function $`F(s)=\sum_{i=1}^{\infty }P(i)s^i.`$ (1) This function satisfies the relationship $`F(s)=s\sum_{i=0}^{\infty }p_i[F(s)]^i,`$ (2) from which each $`P(n)`$ can be determined by equating coefficients of the same order in $`s`$. This result can also be written as $`P(n)=\frac{1}{n}Q(n,n-1),`$ (3) where $`Q(i,j)`$ is defined as the probability that $`j`$ organisms will give birth to a total of $`i`$ true daughters. However, these approaches are not numerically efficient, as the calculation of $`P(n)`$ for each new value of $`n`$ requires re-calculation of each term in the result. For our present purposes, we approach the problem in a different manner. Let $`P_{k|j}`$ be the probability that given $`j`$ original organisms, we end up with a total of $`k`$ organisms after all organisms have finished replicating. Obviously, $`P_{k|j}=0\quad (k<j),`$ (4) since it is impossible to have fewer total organisms than one starts out with, and $`P_{1|1}=p_0,`$ (5) i.e., the probability for one organism to have no daughters. A little less obviously, $`P_{k|1}=\sum_{j=1}^{k-1}p_jP_{(k-1)|j},`$ (6) $`P_{k|j}=\sum_{i=1}^{k-1}P_{i|1}P_{(k-i)|(j-1)}\quad (j\le k,\ k>1).`$ (7) These equations allow us to use dynamic programming techniques to calculate $`P(n)`$ ($`=P_{n|1}`$), significantly reducing the computational time required. Also, from Eq. (6), we can write $`\frac{P_{n|1}}{P_{(n-1)|1}}=p_1+p_2\frac{P_{(n-1)|2}}{P_{(n-1)|1}}+p_3\frac{P_{(n-1)|3}}{P_{(n-1)|1}}+\cdots .`$ (8) Since, for $`n\to \infty `$, $`P_{n|j}`$ is uniformly decreasing, we see $`\frac{P(n)}{P(n-1)}=\frac{P_{n|1}}{P_{(n-1)|1}}\to C\text{ as }n\to \infty ,\quad (C\le 1)`$ (9) where $`C`$ is a constant. $`C`$ indicates the asymptotic behavior of $`P(n)`$ as $`n\to \infty `$. If $`C<1`$, the probability distribution is asymptotically exponential, while for $`C=1`$, the probability distribution is a power law with exponent $`-3/2`$. Let us now examine the behavior of $`P(n)`$ when $`n\lesssim 10^4`$, the more relevant case in the examples to follow. Using Eqs. (4)-(7), we can numerically calculate $`P(n)`$ for different sets of $`p_i`$. We define $`m`$ as the expected number of daughters per organism, given a set of probabilities $`p_i`$; $`m=\sum_i ip_i.`$ (10) We see that the branching rate $`m`$ (the control parameter) is a good indicator of the shape of the probability curve (Fig. 1). When $`m`$ is close to $`1`$, the distribution is nearly a power law, and the further $`m`$ diverges from $`1`$, the further the curve diverges from a power law towards an exponential. When $`m=1/2`$, the curve is completely exponential. For a population of organisms, $`m`$ is a measure of the tendency for new generations to grow, or shrink, in number. A value of $`m>1`$ indicates a growing generation size, which implies that there will, on average, be no generation with no daughters, and that the expected number of total organisms is infinite.
Conversely, $`m<1`$ indicates a shrinking population size: There will be a final generation with no daughters, and the expected number of organisms is finite. When $`m=1`$, the system is in between the two regimes (the system is said to be “critical”), and only then is a power law distribution found. In general, the branching rate is determined by the ratio of the rate of introduction of competitors $`R_c`$ to the intrinsic rate of growth of existing assemblages $`R_p`$ via $`m=\left(1+\frac{R_c}{R_p}\right)^{-1},`$ (11) as can be shown by assuming stationarity. As this ratio goes to $`0`$, $`m\to 1`$ and the system becomes critical. In the following section, we explore systems where the “organisms” are individual members of species or taxa in a taxonomic tree, and $`m`$ is the average number of exact copies an individual makes of itself, or the average number of new taxa of the same supertaxon a taxon spawns, respectively. The same thinking can be applied to tumbling sites in a sandpile model, where $`m`$ would stand for the average number of new tumbles directly caused by a tumbling site.
## III Applications
### A Neutral Model
We first present a simple simulation to test our analysis and lay the groundwork for the exploration of more complicated systems. Consider a population of organisms on a finite two-dimensional Euclidean lattice, one organism to a grid square. Each organism can be viable or sterile. All viable organisms replicate approximately every $`\tau `$ time steps (there is a small random component to each individual’s replication time to avoid synchronization effects), while sterile organisms do not replicate. When an organism replicates, its daughter replaces the oldest organism in the parent’s $`9`$-site neighborhood (Fig. 2). We define the fidelity $`F`$ as the probability that the organism will create a daughter of the same type as itself and the corresponding genomic mutation rate $`R`$ ($`=1-F`$) at which it creates copies different from itself. The genomic mutation rate is actually the sum of two rates, a probability $`R_n`$ for the daughter to be viable but to be of a new genotype, different from that of the parent (neutrality rate), and a probability $`R_s`$ of the daughter being sterile. Of course, $`R_n+R_s=R`$. Note that all viable mutant daughters still share the same replication time $`\tau `$—all mutations are neutral (See Fig. 3). Such a system gives rise to abundance distributions of power law and near-power law type, which can be predicted as follows. The total number of organisms is determined by the size of the grid. We write equilibrium conditions for the total number of organisms $`\rho _A`$, and for the total number of viable organisms $`\rho _V`$, $`\mathrm{\Delta }\rho _A\propto a\rho _V-\rho _A=0,`$ (12) $`\mathrm{\Delta }\rho _V\propto v\rho _V-\rho _V=0,`$ (13) where $`a`$ is the average number of daughters (viable and sterile) a viable organism has, and $`v`$ is the average number of viable daughters a viable organism has. Introducing $`m`$—the average number of true daughters (daughters which share the parent’s genotype) for a viable organism—we see that $`v=\frac{F+R_n}{F}m=(F+R_n)a.`$ (14) From Eqs. (12)-(14), we obtain steady state solutions for $`a`$ and $`m`$, $`a=\frac{F^{-1}}{1+\frac{R_n}{F}},`$ (15) $`m=\frac{1}{1+\frac{R_n}{F}}.`$ (16) Using the branching process model, we can predict the abundance curve from the values of $`a`$ and $`m`$ (or conversely, $`F`$ and $`R_n`$).
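The recursion of Eqs. (4)–(7) is straightforward to implement. The sketch below is not from the original paper: it computes $`P(n)=P_{n|1}`$ by dynamic programming for an arbitrary set of offspring probabilities $`p_i`$, with the branching rate $`m`$ of Eq. (10) as input. The Poisson form chosen for the $`p_i`$ and the function names are illustrative assumptions only.

```python
# Minimal sketch, not from the paper: total-progeny distribution P(n) = P_{n|1}
# of a Galton-Watson process via the dynamic-programming recursion, Eqs. (4)-(7).
# The Poisson offspring distribution is an illustrative choice; any p_i works.
import math

def poisson_offspring(m, imax=30):
    """Offspring probabilities p_i with mean (branching rate) m, cf. Eq. (10)."""
    return [math.exp(-m) * m ** i / math.factorial(i) for i in range(imax + 1)]

def total_progeny_distribution(p, nmax):
    """Return a list P with P[n] = P_{n|1} for n = 1..nmax."""
    # T[j][n] = P_{n|j}; entries with n < j stay zero, which enforces Eq. (4)
    T = [[0.0] * (nmax + 1) for _ in range(nmax + 1)]
    T[1][1] = p[0]                                    # Eq. (5)
    for n in range(2, nmax + 1):
        for j in range(2, n + 1):                     # Eq. (7)
            T[j][n] = sum(T[1][i] * T[j - 1][n - i] for i in range(1, n))
        T[1][n] = sum(p[j] * T[j][n - 1]              # Eq. (6)
                      for j in range(1, min(n, len(p))))
    return T[1]

p = poisson_offspring(m=1.0)                          # critical branching rate
P = total_progeny_distribution(p, nmax=200)
print(P[10], P[50], P[200])
```

For $`m=1`$ the computed $`P(n)`$ falls off roughly as $`n^{-3/2}`$, while for $`m`$ well below 1 the tail is exponential, in line with the behavior described above.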
Fig. 4 shows abundance data for two neutral model runs with differing values of $`R_n`$ (and consequently $`m`$), along with predicted distributions (which use only $`R_n`$ and $`F`$ as parameters) based on the branching model. Although the distribution patterns are very different, both are fit extremely well by the branching process model’s predicted curves. In Eq. (16), note that $`R_n`$ is the rate of influx of new genotypes (and therefore new competitors for space), while $`F`$ is the rate of growth of existing genotypes. The value of $`m`$ is determined by the ratio of these two rates. Unless the total number of creatures is increasing, $`m\le 1`$ ($`m=1`$ if and only if $`R_n\to 0`$ and new competing genotypes are introduced at a vanishing rate).
### B Artificial Life
Our next system is the artificial life system sanda, an example of environments which host digital organisms. In this system, while the organisms occupy a two-dimensional grid as in the neutral model detailed above, the organisms are no longer simple, and instead each has a complex genotype consisting of a string of assembly language-like instructions (Fig. 5). Each organism independently executes the instructions of its genotype, and this genotype determines the organism’s replication time $`\tau `$. Unlike the neutral model, the system allows non-neutral mutations which lead to new genotypes with both lower and higher replication times than the parent. The system and the instructions are designed so that the organisms can self-replicate by executing certain sequences of instructions. The replication time of an organism is not a predetermined constant; rather, it is determined by the genotype of the organism: Organisms can replicate faster or slower than other competing organisms with different genotypes. For an organism to successfully replicate, its genotype must contain information which allows the organism to allocate temporary space (memory) for its daughter, replicate its genotype (one instruction at a time) into this temporary space, and then to divide, placing its daughter in a grid site of its own (Fig. 5). As in the neutral model, on division, the daughter replaces the oldest organism in its parent’s $`9`$-site neighborhood. Organisms, depending on their genotype, may not be able to replicate (may be sterile) or may only be able to replicate imperfectly (resulting in no true daughters). Also, the copy instruction, which the organisms must use to copy instructions from their own code into that of their nascent daughters, has a probability of failure (copy mutation rate), which can be set by the experimenter. When the copy instruction fails, an instruction is randomly chosen from all the instructions available to the organisms (the instruction set) and written in the string location copied to. Copy mutations also lead to non-true daughters. The instruction set is robust; copy errors (mutations) induced during the replication of viable organisms have a non-vanishing probability of creating viable new organisms and genotypes. Indeed, by selecting for certain traits (such as the ability to perform binary logical operations) by increasing the relative speed at which instructions are executed in organisms which carry these traits, the system can be forced to evolve and find novel genotypes which contain more information (and less entropy) than their ancestors. Even without this external selection, the system evolves organisms (and genotypes) which replicate more efficiently in fewer executed instructions.
As a result of this evolution, the fidelity and neutral mutation rate are not fixed, but can vary with the length of an organism’s genome and the instructions contained therein. Also, new genotypes formed by beneficial mutations that allow faster replication than previously existing genotypes will have (on average) an increasing number of organisms—$`m>1`$—until the new, faster replicating genotypes fill up a sizable portion of the grid. All these factors combine to make predicting the abundance distributions for sanda much harder than for the neutral model. Indeed, rather than being constant during the course of a sanda experiment, $`R_n`$ and $`F`$ will vary unpredictably as the population of organisms occupies different areas in genotypic phase space. Certain genotypes may be brittle, allowing very few mutations that result in new viable genotypes. The length of the organisms may change, changing both the genomic mutation rate and the neutrality rate. Genotypes exist which make systematic errors when copying, which decreases the fidelity. In short, the dynamics of these digital organisms are complex and messy, much like those of their biochemical brethren. These variations are observed at the same time across different organisms in the population, and are also observed with the progression of time. Still, we attempt to predict the abundance distributions by approximating the ratio of neutral mutations to true copies by the observed ratio of viable genotypes to total number of viable organisms ever created: $`\frac{R_n}{F}\approx \frac{N_g}{N_v},`$ (17) where $`N_g`$ is the total number of viable genotypes observed during a sanda run and $`N_v`$ is the total number of viable organisms. This relation should hold approximately under equilibrium conditions. Then, Eq. (16) becomes $`m\approx \left(1+\frac{N_g}{N_v}\right)^{-1},`$ (18) and from Eq. (15) $`a=\frac{m}{F}.`$ (19) The fidelity $`F`$ is inferred from the average length $`l`$ of genotypes during a run and the (externally enforced) per-instruction copy mutation rate $`\gamma `$, $`F=(1-\gamma )^l`$. Because we estimate $`m`$ and $`a`$ from macroscopic observables averaged over the length of a run, we expect some error in our results due to the shifting dynamics of the evolution of genotypes as the system moves in genotypic phase space. The abundance data from two different sanda runs are shown in Fig. 6 with the predicted abundance curves. The two runs shared the same grid size and per-instruction copy mutation rate, and were started with the same initial genotypes, but the runs evolved into different regions of genotypic phase space and consequently had significantly different statistics. Considering the many gross approximations we have made, the agreement between our prediction and the experimental data is surprisingly good (especially as no fitting is involved). Sanda is most closely related to an asexually replicating biological population, such as colonies of certain types of bacteria occupying a single niche. The genotype abundance distributions measured in sanda are analogous to the species or subspecies abundance distributions of its biological counterparts. In general, species abundance distributions are complicated by the effects of sexual reproduction, and of the localized and variable influences of other species and the environment on species abundances. However, we believe the branching model—used judiciously—can be helpful in the study of such distributions as well.
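As a concrete illustration of Eqs. (17)–(19), the short sketch below turns run-averaged observables into estimates of $`m`$ and $`a`$. It is not part of the sanda system itself, and the numerical values are hypothetical stand-ins for quantities that would be read off an actual run.

```python
# Minimal sketch, not from the paper: branching-rate estimate from run observables,
# Eqs. (17)-(19), with F = (1 - gamma)^l. All numbers below are hypothetical.
def branching_parameters(N_g, N_v, gamma, l):
    """N_g: viable genotypes observed, N_v: viable organisms ever created,
    gamma: per-instruction copy mutation rate, l: average genome length."""
    F = (1.0 - gamma) ** l             # fidelity of a whole-genome copy
    m = 1.0 / (1.0 + N_g / N_v)        # Eq. (18)
    a = m / F                          # Eq. (19)
    return m, a

print(branching_parameters(N_g=2.0e4, N_v=1.0e6, gamma=0.0075, l=20))
```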
### C Evolution
Rank-abundance distributions at taxonomic levels higher than species (e.g., the distribution of the number of families per order) are simpler to model than species abundance distributions, as the effects of the complications noted above are weak or nonexistent. We find that the available data is well fit by assuming no direct interaction or fitness difference between taxa. The shapes of rank-frequency distributions of taxonomic and evolutionary assemblages found in nature are surprisingly uniform. Indeed, Burlando has speculated that all higher-order taxonomic rank-frequency distributions follow power laws stemming from underlying fractal dynamics. We believe this conclusion is hasty: The divergence of the distributions from power law can be observed by applying appropriate binning methods to the data. (See Appendix.) Yule attempted a branching process model explanation of these distributions, and claimed that divergence from power law of rank-abundance patterns was transient and indicated a finite time since the creation of the evolutionary assemblage. Our model indicates that this is not generally the case. We find that the divergence from power law is not a result of disequilibration, but is an inherent property of the evolutionary assemblage under consideration and that this divergence provides insight into microscopic properties of the assemblage (e.g., the rate of innovation). Say, for example, that we are interested in the rank-frequency distribution of the number of families in each order for fossil marine animal orders. We assume that all new families and orders in this assemblage originate from mutations in extant families. Then, we can define rates of successful mutation $`R_f`$ for mutations which create new families in the same order as the original family, and $`R_o`$ for mutations which create an entirely new order. In this case, unlike the cases treated above, we approximate $`a\to \infty `$; many individual births and mutations occur, but the proportion that are family- or order-forming is minuscule. Finally, assuming a quasi-steady state (the total numbers of orders and families vary slowly), we rewrite Eq. (16), $`m\approx \left(1+\frac{R_o}{R_f}\right)^{-1}`$ (20) $`\approx \left(1+\frac{N_o}{N_f}\right)^{-1},`$ (21) in terms of $`N_o`$ and $`N_f`$, the total numbers of orders and families respectively. As in the previous systems studied, $`R_o`$ is the rate of creation of new—competing—orders, while $`R_f`$ is the rate of growth of existing orders, and $`m`$ is determined by their ratio. Data for the abundance distribution of number of families in fossil marine animal orders are shown in Fig. 7. We obtained values for $`N_o`$ and $`N_f`$ directly from the fossil data to generate the predicted curve with no free parameters. The agreement is very good, much better than that for the sanda runs where evolutionary parameters such as the fidelity $`F`$ and the neutrality $`R_n`$ were constantly changing. Comparing $`m`$ and the resultant abundance curves with those obtained above for the rank-abundance distribution of sanda genotypes leads us to the expected conclusion that the probability of creation of a new genotype in sanda per birth is much higher than the probability of a new family creating an order in natural evolution. Indeed, a wide variety of higher-order taxonomic assemblages have abundance distributions consistent with $`m`$ near 1. We believe this is a robust result of the evolutionary process.
Low values of $`m`$ may not be observed for large taxon assemblages for several reasons. A small value of $`m`$ implies either a small number of individuals in the assemblage, or a very specialized niche with a very low rate of taxon formation. A low number of individuals would lead to a low probability of the taxon being discovered and cataloged by biologists. A small number of individuals and taxa would result in an assemblage with too few taxa to give us a clear statistical picture. Also, since such an assemblage would have a small population, be incapable of further adaptation, or both, we expect it would be more susceptible to competition and environmental effects leading to early extinction.
### D Sandpile Models
It was originally suggested that the self-organization observed in the sandpile model of Bak, Tang and Wiesenfeld (BTW) (and the power laws it displayed) was an inherent property of the system, while it now seems established that the system is actually tuned by waiting until avalanches are over before dropping new grains—this is equivalent to allowing non-local interactions. The same conclusion is reached when using a branching process to describe the avalanche dynamics. Branching processes have been applied to sandpile models as early as 1988 (see also ). Using a mean-field approach in higher dimensions ($`d\ge 4`$), power law distributions for the size of avalanches $`s(n)`$ can be obtained analytically, and critical exponents can be calculated exactly to reveal $`s(n)\sim n^{-3/2}`$ in the limit of infinitesimally small driving. This is supported by numerical simulations. However, for lower dimensions, sandpiles will “interfere” with themselves, and a smaller exponent is found. Attempts to calculate the effects of this “final-state” interaction through renormalization have as yet not been completely successful. Still, the phenomenon of “violations” of power-law behavior due to $`m<1`$ (non-critical branching process) can be seen there as well.
## IV Discussion
The Galton-Watson branching process generates power law distributions when its control parameter $`m=1`$. In all the systems we have examined above, $`m=\left(1+\frac{R_c}{R_p}\right)^{-1}`$ (22) is determined by the ratio of the rate of introduction of competitors $`R_c`$ to the intrinsic rate of growth of existing assemblages $`R_p`$. As this ratio goes to $`0`$, $`m\to 1`$ and the system becomes critical. This relation can be translated into the standard relation between an order parameter $`\alpha =\frac{R_c}{R_p}`$ (23) and a new form for the control parameter $`\mu =m^{-1}.`$ (24) Writing $`\alpha `$ in terms of $`\mu `$, $`\alpha =\begin{cases}(\mu -\mu _c)^\beta & (\mu >\mu _c),\\ 0& (\mu \le \mu _c),\end{cases}`$ where $`\mu _c=1`$ and $`\beta =1`$ (Fig. 8). The order parameter represents the rate at which competition is introduced in the system (the strength of selection). A value of the control parameter $`\mu <\mu _c`$ implies a system with no competition and no selection—an exponentially growing population. Values of $`\mu `$ higher than $`\mu _c`$ indicate that new competition is always being introduced and that all existing species or avalanches must eventually die out. When $`\mu =\mu _c`$, competition is introduced at a vanishingly small rate, and we find the critical situation where separation of scales occurs.
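The separation of scales at $`\mu =\mu _c`$ can also be checked by direct Monte Carlo sampling of colony sizes, without the recursion of Section II. The sketch below is not from the paper; the Poisson offspring statistics and the sample sizes are illustrative assumptions.

```python
# Minimal Monte Carlo sketch, not from the paper: sample total colony sizes of a
# Galton-Watson process and compare m = 1 (power-law tail, ~ n^(-3/2)) with m < 1
# (exponential cutoff). Poisson offspring statistics are an illustrative choice.
import numpy as np

rng = np.random.default_rng(0)

def colony_size(m, cap=10**6):
    """Total number of organisms descended from (and including) one ancestor."""
    total = alive = 1
    while alive and total < cap:
        alive = int(rng.poisson(m * alive))   # daughters of the current generation
        total += alive
    return total

for m in (1.0, 0.9):
    sizes = np.array([colony_size(m) for _ in range(20000)])
    print(m, (sizes >= 100).mean(), (sizes >= 1000).mean())
```

At the critical branching rate a sizable fraction of colonies reach large sizes, while slightly below it the large-size tail is strongly suppressed, which is the separation of scales referred to above.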
For sandpile models, this $`\alpha `$ is arbitrarily set close to $`0`$ by using large lattice sizes (reducing dissipation) and waiting for avalanches to finish before introducing new perturbations (resulting in an infinitesimal driving rate and a diverging diffusion coefficient). For the biological and biologically-inspired systems we have considered, the control parameter is not set arbitrarily to a critical value. However, the dynamics of the evolutionary process, in which it is much harder to effect large jumps in fitness and function than it is to effect small ones, lead to naturally observed values of $`\alpha `$ being small, especially for higher taxonomic orders. The dynamics of evolution act, robustly, to keep $`\mu `$ near $`\mu _c`$. This in turn leads to a near power law pattern for rank-frequency distributions. We have shown that the apparent power laws of avalanches in species-abundance distributions in artificial life systems, as well as rank-abundance distributions in taxonomy can be explained by modeling the dynamics of the underlying system with a simple branching process. This branching process model successfully predicts, with no free parameters, the observed abundance distributions—including their divergence from power law. A branching process approach may allow the deduction of the microscopic parameters of the system directly from the macroscopic abundance distribution. We find that we can identify a control parameter—the average number of new events an event directly spawns, and an order parameter— the rate of introduction of competing events into the system, and that these are related in a form familiar from second order phase transitions in statistical physics. ###### Acknowledgements. We are grateful to the late Prof. J. J. Sepkoski for kindly sending us his amended data set for fossil marine animal families, as well as an anonymous referee for insightful comments. J. C. thanks M. C. Cross for continued support and discussions. Access to the Intel Paragon XP/S was provided by the Center of Advanced Computing Research at the California Institute of Technology. This research was supported by the NSF under contract Nos. PHY-9723972 and DEB-9981397. Part of this work was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration. ## A Binning Methods When dealing with event distributions best plotted on single log or double log scales (such as exponential and power law distributions), care must be taken in the proper binning of the experimental data. Say we are interested in the probability distribution $`P(n)`$ of an event distribution over positive integer values of $`n`$. We conduct $`N`$ trials, resulting in a data set $`Q(n)`$ of number of events observed for every $`n`$ value. For ranges of $`n`$ where the expected or observed number of events for each $`n`$ is much higher than 1, normally no binning is required. However, for ranges of $`n`$ where $`Q(n)`$ or $`P(n)`$ is small, binning is necessary to produce both statistically significant data points, and intuitively correct graphical representations. A constant bin size has several drawbacks: One must guess and choose an intermediate bin size to serve across a broad range of parameter space, and the shape and slopes of the curve (even in a double log plot) are distorted . These disadvantages can be overcome by using a variable bin size. 
However, choosing bin sizes for variable binning is time-consuming and arbitrary—different choices will lead to different conclusions. We propose two related methods of systematically determining appropriate variable bin sizes. Both methods lead to binned data which help in visualizing the underlying distribution (slopes and shapes are conserved). For the first method (the Data Threshold Method), we start by selecting a threshold value $`T`$. Starting from $`n=1`$ and proceeding to higher values, no binning is done until a value of $`n`$ is found for which $`Q(n)<T`$. When such a value $`n_s`$ is found, subsequent $`Q(n)`$ values are added to this amount until the sum of these values is greater than the threshold value, $`\sum_{n=n_s}^{n_l}Q(n)>T.`$ (A1) We then have a bin size $`(n_l-n_s+1)`$, with value $`\sum_{n=n_s}^{n_l}Q(n)`$. When plotting, it is convenient to plot this as a single point at the midpoint of $`[n_s,n_l]`$, with an averaged value, $`\left(\frac{n_s+n_l}{2},\ \frac{\sum_{n=n_s}^{n_l}Q(n)}{n_l-n_s+1}\right).`$ (A2) This yields a graphical representation with little distortion and good predictive power (Fig. 9). This binning procedure is continued until no more data remains to be binned. The second binning method (the Template Threshold Method) uses a predicted probability distribution $`P(n)`$, or a reasonable surrogate. Again, we define a threshold value for fitting $`T`$. However, in this case, the bin sizes are determined by comparing values of the expected distribution $`E(n)=P(n)\times N`$ (A3) to $`T`$. Starting from $`n=1`$ and proceeding to higher values, no binning is done until a value of $`n`$ is found for which $`E(n)<T`$. When such a value $`n_s`$ is found, subsequent $`E(n)`$ values are added to this amount until the sum of these values is greater than the threshold value, $`\sum_{n=n_s}^{n_l}E(n)>T.`$ (A4) We then have a bin of $`[n_s,n_l]`$ with corresponding size $`(n_l-n_s+1)`$. The average value associated with this bin is $`\frac{\sum_{n=n_s}^{n_l}Q(n)}{n_l-n_s+1}.`$ (A5) This procedure is repeated until the data is exhausted. For this method, the data may be graphically represented either as a single point per bin (as in the data threshold method above), or as a point (showing the associated average value) for each measured (non-zero) data point $`Q(n)`$. The data threshold method requires no a priori knowledge, and is a good predictor of the underlying distribution. However, when there are few data points, the template threshold method is more reliable. For both methods, a range of $`T`$ should be tried and the best $`T`$ (neither over- nor under-binning) chosen.
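A minimal sketch of the Data Threshold Method follows; it is not the authors' code, and the threshold, function name, and toy data are assumptions for illustration. The Template Threshold Method differs only in that the expected counts $`E(n)`$ of Eq. (A3), rather than the observed $`Q(n)`$, control the bin boundaries.

```python
# Minimal sketch, not from the paper: Data Threshold Method binning.
# Q[n] is the observed number of events of size n (index 0 unused); T is the threshold.
def data_threshold_bins(Q, T):
    """Return (n, value) points: raw counts while Q(n) >= T, then variable-size
    bins grown until their summed count exceeds T (Eq. A1), reported as the
    bin midpoint and per-n average (Eq. A2)."""
    points, n, nmax = [], 1, len(Q) - 1
    while n <= nmax and Q[n] >= T:            # dense head of the distribution
        points.append((n, float(Q[n])))
        n += 1
    while n <= nmax:                          # sparse tail: grow variable bins
        n_s, total = n, 0.0
        while n <= nmax and total <= T:       # Eq. (A1)
            total += Q[n]
            n += 1
        n_l = n - 1
        points.append(((n_s + n_l) / 2.0, total / (n_l - n_s + 1)))  # Eq. (A2)
    return points

# toy data with a roughly n^(-3/2) tail, for illustration only
Q = [0] + [int(1000 * k ** -1.5) for k in range(1, 200)]
print(data_threshold_bins(Q, T=5)[-3:])
```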
# OBSERVATIONS AND THEORY OF STAR CLUSTER FORMATION ## Abstract Young stars form on a wide range of scales, producing aggregates and clusters with various degrees of gravitational self-binding. The loose aggregates have a hierarchical structure in both space and time that resembles interstellar turbulence, suggesting that these stars form in only a few turbulent crossing times with positions that map out the previous gas distribution. Dense clusters, on the other hand, are often well mixed, as if self-gravitational motion has erased the initial fine structure. Nevertheless, some of the youngest dense clusters also show sub-clumping, so it may be that all stellar clustering is related to turbulence. Some of the densest clusters may also be triggered. The evidence for mass segregation of the stars inside clusters is reviewed, along with various explanations for this effect. Other aspects of the theory of cluster formation are reviewed as well, including many specific proposals for cluster formation mechanisms. The conditions for the formation of bound clusters are discussed. Critical star formation efficiencies can be as low as 10% if the gas removal process is slow and the stars are born at sub-virial speeds. Environmental conditions, particularly pressure, may affect the fraction and masses of clusters that end up bound. Globular clusters may form like normal open clusters but in conditions that prevailed during the formation of the halo and bulge, or in interacting and starburst galaxies today. Various theories for the formation of globular clusters are summarized. slugcomment: Protostars and Planets IV To be published in Protostars and Planets IV, eds. V. G. Mannings, A. P. Boss, and S. S. Russell, from the conference at Santa Barbara, CA, July 6-11, 1998. I. INTRODUCTION The advent of large array cameras at visible to infrared wavelengths, and the growing capability to conduct deep surveys with semi-automated searching and analysis techniques, have led to a resurgence in the study of stellar clusters and groupings in the disk and halo of our Galaxy, in nearby galaxies, and in distant galaxies. The complementary aspect of the cluster formation problem, namely the structure of molecular clouds and complexes, is also being realized by submm continuum mapping and comprehensive mm surveys. Here we review various theories about the origin of star clusters and the implications of young stellar clustering in general, and we discuss the requirements for gravitational self-binding in open and globular clusters. Previous reviews of cluster formation were in Wilking & Lada (1985), Larson (1990), Lada (1993), Lada, Strom, & Myers (1993), and Zinnecker, McCaughrean, & Wilking (1993). II. STRUCTURE OF YOUNG STELLAR GROUPS Stellar groupings basically come in two types: bound and unbound. Some of the unbound groups could be loose aggregates of stars that formed in the dense cores of weakly bound or unbound cloud structures, such as the Taurus star-forming complex. Other unbound groups could be dispersed remnants of inefficient star formation in strongly self-gravitating clouds or cloud cores. This section discusses loose aggregates of young stars first, considering hierarchical structure and a size-time correlation, and then more compact groups of stars, i.e. cluster formation in dense cloud cores, along with the associated process of stellar mass segregation, and the effects (or lack of effects) of high stellar densities on disks, binaries, and the stellar initial mass function. A. 
Hierarchical Structure in Clouds and Stellar Groups
1. Gas Structure
Interstellar gas is structured into clouds and clumps on a wide range of scales, from sub-stellar masses ($`10^{-4}`$ M) that are hundredths of a parsec in size, to giant cloud complexes ($`10^7`$ M) that are as large as the thickness of a galaxy. This structure is often characterized as consisting of discrete clumps or clouds with a mass spectrum that is approximately a power law, $`n(M)dM\propto M^{-\alpha }dM`$, with $`\alpha `$ in the range from $`1.5`$ to $`1.9`$ (for the smaller scales, see Heithausen et al. 1998 and Kramer et al. 1998, with a review in Blitz 1993; for the larger scales, see Solomon et al. 1987, Dickey & Garwood 1989, and the review in Elmegreen 1993). Geometrical properties of the gas structure can also be measured from power spectra of emission line intensity maps (Stutzki et al. 1998). The power-law nature of the result implies that the emission intensity is self-similar over a wide range of scales. Self-similar structure has also been found on cloud perimeters, where a fractal dimension of $`1.3\pm 0.3`$ was determined (Beech 1987; Bazell & Désert 1988; Scalo 1990; Dickman, Horvath, & Margulis 1990; Falgarone, Phillips, & Walker 1991; Zimmermann & Stutzki 1992, 1993; Vogelaar & Wakker 1994). This power-law structure includes gas that is both self-gravitating and non-self-gravitating, so the origin is not purely gravitational fragmentation. The most likely source is some combination of turbulence (see review in Falgarone & Phillips 1991), agglomeration with fragmentation (Carlberg & Pudritz 1990; McLaughlin & Pudritz 1996), and self-gravity (de Vega, Sanchez, & Combes 1996). Interstellar gas is not always fractal. Shells, filaments, and dense cores are not fractal in their overall shapes; they are regular and have characteristic scales. Generally, the structures that form as a result of specific high pressure events, such as stellar or galactic pressures, have an overall shape that is defined by that event, while the structures that are formed as a result of turbulence are hierarchical and fractal.
2. Stellar Clustering
Stellar clustering occurs on a wide range of scales like most gas structures, with a mass distribution function that is about the same as the clump mass distribution function in the gas. For open clusters, it is a power law with $`\alpha `$ in the range from $`1.5`$ (van den Bergh & Lafontaine 1984; Elson & Fall 1985; Bhatt, Pandey, & Mahra 1991) to $`2`$ (Battinelli et al. 1994; Elmegreen & Efremov 1997), and for OB associations, it is a power law with $`\alpha \approx 1.72`$, as determined from the luminosity distribution function of HII regions in galaxies (Kennicutt, Edgar, & Hodge 1989; Comeron & Torra 1996; Feinstein 1997; Oey & Clarke 1998). It is important to recognize that the gas and stars are not just clumped, as in the old plum-pudding model (Clark 1965), but they are hierarchically clumped, meaning that small pieces of gas, and small clusters, are usually contained inside larger pieces of gas and larger clusters, through a wide range of scales. Scalo (1985) reviewed these observations for the gas, and Elmegreen & Efremov (1998) reviewed the corresponding observations for the stars. Figure 1 shows a section of the Large Magellanic Cloud southwest of 30 Dor. Stellar clusterings are present on many scales, and many of the large clusters contain smaller clusters inside of them, down to the limit of resolution.
Stellar clumping on a kiloparsec scale was first studied by Efremov (1978), who identified “complexes” of Cepheid variables, supergiants, and open clusters. Such complexes trace out the recent 30–50 million years of star formation in the largest cloud structures, which are the HI “superclouds” (Elmegreen & Elmegreen 1983, 1987; Elmegreen 1995b) and “giant molecular associations” (Rand & Kulkarni 1990) that are commonly found in spiral arms. The sizes of star complexes and the sizes of superclouds or GMA’s are about the same in each galaxy, increasing regularly with galaxy size from $`300`$ pc or less in small galaxies like the LMC to $`600`$ pc or more in large galaxies like the Milky Way (Elmegreen et al. 1996). Each star complex typically contains several OB associations in addition to the Cepheids, because star formation continues in each supercloud, first in one center and then in another, for the whole 30–50 My. This overall timescale is smaller in smaller galaxies because the complexes are smaller, as Battinelli & Efremov (1999) confirmed for the LMC. Star complexes are well studied in the Milky Way and local galaxies (see reviews in Efremov 1989, 1995). In our Galaxy, they were examined most recently by Berdnikov & Efremov (1989, 1993), and Efremov & Sitnik (1988). The latter showed that 90% of the young ($`10`$ My) clusters and associations in the Milky Way are united into the same star complexes that are traced by Cepheids and supergiants. A similarly high fraction of hierarchical clustering has been found in M31 (Efremov, Ivanov, & Nikolov 1987; Battinelli 1991, 1992; Magnier et al. 1993; Battinelli, Efremov & Magnier 1996), M33 (Ivanov 1992), the LMC (Feitzinger & Braunsfurth 1984), and many other galaxies (Feitzinger & Galinski 1987). A map showing two levels in the hierarchy of stellar structures along the southwest portion of the western spiral arm in M31 is shown in figure 2 (from Battinelli, Efremov & Magnier 1996). The smaller groups, which are Orion-type OB associations, are shown with faint outlines displaced 0.1 to the south of the larger groups for clarity; the smaller groups are also shown as dots with their proper positions inside the larger groups. Evidently most of the OB associations are within the larger groups. The oldest star complexes are sheared by differential galactic rotation, and appear as flocculent spiral arms if there are no strong density waves (Elmegreen & Efremov 1996). When there are density waves, the complexes form in the arm crests, somewhat equally spaced, as a result of gravitational instabilities (Elmegreen & Elmegreen 1983; Elmegreen 1994; Rand 1995; Efremov 1998).
3. Hierarchical Clustering of Stars on Smaller Scales
Hierarchical clustering of stars continues from star complexes to OB associations, down to very small scales. Infrared observations reveal embedded clusters of various sizes and densities in star-forming regions. Many of these, as discussed in the next section, are extremely dense and deeply embedded in self-gravitating molecular cores. Others are more open and clumpy, as if they were simply following the hierarchical gas distribution around them. A good example of the latter is in the Lynds 1641 cloud in the Orion association, which has several aggregates comprised of 10–50 young stars, plus a dispersed population throughout the cloud (Strom, Strom & Merrill 1993; Hodapp & Deane 1993; Chen & Tokunaga 1994; Allen 1995). The distribution of young, x-ray active stars around the sky (Guillout et al.
1998) is also irregular and clumpy on a range of scales. The low-mass young stars seen in these x-ray surveys are no longer confined to dense cores. Sterzik et al. (1995), Feigelson (1996), Covino et al. (1997), Neuhäuser (1997), and Frink et al. (1997, 1998) found that the low mass membership in small star-forming regions extends far beyond the previously accepted boundaries. This is consistent with the hierarchical clustering model. 4. Two Examples of Hierarchical Stellar Structure: Orion and W3 There are many observations of individual clusters that are part of a hierarchy on larger scales. The Orion region overall contains at least 5 levels of hierarchical structure. On the largest scale (first level), there is the so-called local arm, or the Orion-Cygnus spur, which has only young stars (Efremov 1997) and is therefore a sheared star formation feature, not a spiral density wave (Elmegreen & Efremov 1996; compare to the Sgr-Car arm, which also has old stars – Efremov 1997). The largest local condensation (second level) in the Orion-Cygnus spur is Gould’s Belt, of which Orion OB1 is one of several similar condensations (third level; Pöppel 1997). Inside Orion OB1 are four subgroups (fourth level; Blaauw 1964), and the youngest of them, including the Trapezium cluster, contains substructure too (fifth level): one region is the BN/KL region, perhaps triggered by theta-1c, and another is near OMC-1S (Zinnecker, McCaughrean, & Wilking 1993). The main Trapezium cluster may have no substructure, though (Bate, Clarke, & McCaughrean 1998). A similar hierarchy with five levels surrounds W3. On the largest scale is the Perseus spiral arm (first level), which contains several giant star formation regions separated by 1–2 kpc; the W3 complex is in one of them, and the NGC 7538 complex is in another. The kpc-scale condensation surrounding W3 (second level) contains associations Cas OB6 and Per OB1 (which is below the galactic plane and includes the double cluster h and $`\chi `$ Per), and these two associations form a stellar complex. The association Cas OB8, which includes a compact group of five clusters (Efremov 1989, Fig. 16 and Table 7 on p. 77) may also be a member of this complex, as suggested by the distances and radial velocities. Cas OB6 is the third level for W3. Cas OB6 consists of the two main star-forming regions W4 (fourth level) and W5, and W4 has three condensations at the edge of the expanded HII region, in the associated molecular cloud (Lada et al. 1978). W3 is one of these three condensations, and therefore represents the fifth level in the hierarchy. The hierarchy may continue further too, since W3 contains two apparently separate sites of star formation, W3A and W3B (Wynn-Williams, Becklin, & Neugebauer 1972; Normandeau, Taylor, & Dewdney 1997). Most young, embedded clusters resemble Orion and W3 in this respect. They have some level of current star formation activity, with an age possibly less than $`10^5`$ years, and are also part of an older OB association or other extended star formation up to galactic scales, with other clusters forming in the dense parts here and there for a relatively long time. 5. Cluster Pairs and other Small Scale Structure Another way this hierarchy appears is in cluster pairs. Many clusters in both the Large Magellanic Cloud (Bhatia & Hatzidimitriou 1988; Kontizas et al. 1989; Dieball & Grebel 1998; Vallenari et al. 1998) and Small Magellanic Cloud (Hatzidimitriou & Bhatia 1990) occur in distinct pairs with about the same age. 
Most of these binary clusters are inside larger groups of clusters and stellar complexes. However, the clumps of clusters and the clumps of Cepheids in the LMC do not usually coincide (Efremov, 1989, p. 205; Battinelli & Efremov, 1999). Some embedded clusters also have more structure inside of them. For example, star formation in the cloud G 35.20-1.74 has occurred in several different and independent episodes (Persi et al. 1997), and there is also evidence for non-coeval star formation in NGC 3603, the most massive visible young cluster in the Galaxy (Eisenhauer et al. 1998). W33 contains three separate centers or sub-clusters of star formation inside of it that have not yet merged into a single cluster (Beck et al. 1998). The same is true in 30 Dor and the associated cluster NGC 2070 (Seleznev 1995), which appears to have triggered a second generation of star formation in the adjacent molecular clouds (Hyland et al. 1992; Walborn & Blades 1997; Rubio et al. 1998; Walborn et al. 1999). Similarly, NGC 3603 has substructure with an age difference of $`10`$ My, presumably from triggering too (Brandner et al. 1997). Lada & Lada (1995) found eight small subclusters with 10 to 20 stars each in the outer parts of IC 348. Piche (1993) found two levels of hierarchical structure in NGC 2264: two main clusters with two subclusters in one and three in the other. The old stellar cluster M67 still apparently contains clumpy outer structure (Chupina & Vereshchagin 1998). Some subclusters can even have slightly different ages: Strobel (1992) found age substructure in 14 young clusters, and Elson (1991) found spatial substructure in 18 rich clusters in the LMC. Evidence that subclustering did not occur in dense globular clusters was recently given by Goodwin (1998), who noted from numerical simulations that initial substructure in globular clusters would not be completely erased during the short lifetimes of some of the youngest in the LMC. Because these young populous clusters appear very smooth, their initial conditions had to be somewhat smooth and spherical too. The similarity between the loose clustering properties of many young stellar regions and the clumpy structure of weakly self-gravitating gas appears to be the result of star formation following the gas in hierarchical clouds that are organized by supersonic turbulence. Turbulence also implies motion and, therefore, a size-dependent crossing time for the gas. We shall see in the next section that this size-dependent timescale might also apply to the duration of star formation in a region. B. Star Formation Time Scales The duration of star formation tends to vary with the size $`S`$ of the region as something like the crossing time for turbulent motions, i.e., increasing about as $`S^{0.5}`$. This means that star formation in larger structures takes longer than star formation in sub-regions. A schematic diagram of this time-size pattern is shown in figure 3. The largest scale is taken to be that of a flocculent spiral arm, which is typically $`100`$ My old, as determined from the pitch angle (Efremov & Elmegreen 1998). This relationship between the duration of star formation and the region size implies that clusters forming together in small regions will usually have about the same age, within perhaps a factor of three of the turbulent crossing time of the small scale, while clusters forming together in larger regions will have a wider range of ages, proportional to the crossing time on the larger scale. 
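For reference, the crossing time implied by a Larson-type size-linewidth relation can be evaluated directly. The short sketch below is not from the chapter; the $`\mathrm{\Delta }v\propto S^{0.5}`$ scaling and the 1 km s<sup>-1</sup> normalization at 1 pc are illustrative assumptions rather than fits to the data discussed here.

```python
# Minimal sketch, not from the chapter: turbulent crossing time t = R / dv for a
# cloud obeying a Larson-type relation dv = dv0 * (R / 1 pc)^0.5. The 1 km/s
# normalization at 1 pc is an assumed, illustrative value.
KM_PER_PC = 3.086e13
SECONDS_PER_MYR = 3.156e13

def crossing_time_myr(R_pc, dv0_kms=1.0):
    dv = dv0_kms * R_pc ** 0.5                   # km/s
    return R_pc * KM_PER_PC / dv / SECONDS_PER_MYR

for R in (0.1, 1.0, 10.0, 100.0):
    print(R, round(crossing_time_myr(R), 2))     # grows as R^0.5: ~0.3 to ~10 Myr
```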
Figure 4 shows this relationship for 590 clusters in the LMC (Efremov & Elmegreen 1998). Plotted on the ordinate is the average difference in age between all pairs of clusters whose deprojected spatial separations equal the values on the abscissa. The average age difference between clusters increases with their spatial separation. In the figure, the correlation ranges between $`0.02^{}`$ and $`1^{}`$ in the LMC, which corresponds to a spatial scale of 15 to 780 pc. The correlation disappears above $`1^{}`$, perhaps because the largest scale for star formation equals the Jeans length or the disk thickness. A similar duration-size relation is also observed within the clumps of clusters in the LMC. Larger clumps of clusters have larger age dispersions (Battinelli & Efremov 1999). The correlations between cluster birth times and spatial scale are reminiscent of the correlation between internal crossing time and size in molecular clouds. The crossing time in a molecular cloud or cloud clump is about the ratio of the radius (half-width at half-maximum size) to the Gaussian velocity dispersion. The data for several molecular cloud surveys are shown in figure 5, with different symbols for each survey. On the top is a plot of the Gaussian linewidth versus size, and on the bottom is the crossing time versus size. Smaller clouds and clumps have smaller crossing times, approximately in proportion to size $`S^{0.5}.`$ Overlayed on this plot, as large crosses, are the age-difference versus separation points for LMC clusters, from figure 4. Evidently, the cluster correlation fits in nicely at the top part of the molecular cloud crossing time-size relation. These correlations underscore our perception that both cloud structure, and at least some stellar clusterings, come from interstellar gas turbulence. The cluster age differences also suggest that star formation is mostly finished in a cloud within only $`2`$ to 3 turbulent crossing times, which is very fast. In fact, this time is much faster than the magnetic diffusion time through the bulk of the cloud, which is $`10`$ crossing times in a uniform medium with cosmic ray ionization (Shu et al. 1987), and even longer if uv light can get in (Myers & Khersonsky 1995), and if the clouds are clumpy (Elmegreen & Combes 1992). Thus magnetic diffusion does not regulate the formation of stellar groups, it may regulate only the formation of individual stars, which occurs on much smaller scales (Shu et al. 1987; but see Nakano 1998). Star formation in a cluster may begin when the turbulent energy of the cloud dissipates. This is apparently a rapid process, as indicated by recent numerical simulations of supersonic MHD turbulence, which show a dissipation time of only 1–2 internal crossing times (MacLow et al. 1998; Stone, Ostriker, & Gammie 1998). Most giant molecular clouds have similar turbulent and magnetic energies (Myers & Goodman 1988) and they would be unstable without the turbulent energy (McKee et al. 1993), so the rapid dissipation of turbulence should lead to a similarly rapid onset of star formation (e.g., McLaughlin & Pudritz 1996). The turbulence has to be replenished for the cloud to last more than several crossing times. The observed age-size correlation is significantly different from what one might expect from simple crossing-time arguments in the absence of turbulence. 
If the velocity dispersion is independent of scale, as for an isothermal fluid without correlated turbulent motions, then the slope of the age-size correlation would be 1.0, not $`0.35`$. The correlation is also not from stochastic self-propagating star formation, which would imply a diffusion process for the size of a star formation patch, giving a spatial scale that increases as the square root of time. In that case the slope on figure 4 would be 2. The duration-size relation for stellar groupings implies that OB associations and $`10^5`$ M GMC’s are not physically significant scales for star formation, but just regions that are large enough to have statistically sampled the high mass end of the IMF, and young enough to have these OB stars still present. Regions with such an age tend to have a certain size, $`\sim 100`$ pc, from the size-time relation, but the cloud and star formation processes need not be physically distinct. The time-scale versus size correlations for star formation should not have the same coefficients in front of the power laws for all regions of all galaxies. This coefficient should scale with the total turbulent ISM pressure to the inverse $`1/4`$ power (from the relations $`P\sim GM^2/R^4`$ and $`\mathrm{\Delta }v^2\sim GM/\left(5R\right)`$ for self-gravitating gas; Chièze 1987; Elmegreen 1989). Combining these relations gives $`\mathrm{\Delta }v\propto P^{1/4}R^{1/2}`$, so the crossing time $`R/\mathrm{\Delta }v`$ at fixed size indeed scales as $`P^{-1/4}`$. Thus regions with pressures higher than the local value by a factor of $`10^2`$–$`10^4`$ should have durations of star formation shorter than the local regions by a factor of $`3`$–$`10`$, for the same spatial scale. This result corresponds to the observation for starburst galaxies that the formation time of very dense clusters, containing the mass equivalent of a whole OB association, is extraordinarily fast, on the order of $`1`$–$`3`$ My, whereas in our Galaxy, it takes $`10`$ My to form an aggregate of this mass. Similarly, high pressure cores in GMCs (Sect. IIC) should form stars faster than low pressure regions with a similar mass or size. There are many observations of the duration of star formation in various regions, both active and inactive. In the Orion Trapezium cluster, the age spread for 80% of the stars is very short, less than 1 My (Prosser et al. 1994), as it is in L1641 (Hodapp & Deane 1993). It might be even shorter for a large number (but not necessarily a large fraction) of stars in NGC 1333 because of the large number of jets and Herbig-Haro objects that are present today (Bally et al. 1996). In NGC 6531 as well, the age spread is immeasurably small (Forbes 1996). Other clusters have larger age spreads. Hillenbrand et al. (1993) found that, while the most massive stars (80 M) in NGC 6611 (=M16) have a 1 My age spread around a mean age of $`2`$ My, there are also pre-main sequence stars and a star of 30 M with an age of 6 My. The cluster NGC 1850 in the LMC has an age spread of 2 to 10 My (Caloi & Cassatella 1998), and in NGC 2004, there are evolved low mass stars in the midst of less evolved high mass stars (Caloi & Cassatella 1995). In NGC 4755, the age spread is 6 to 7 My, based on the simultaneous presence of both high and low mass star formation (Sagar & Cannon 1995). One of the best studied clusters for an age spread is the Pleiades, where features in the luminosity function (Belikov et al. 1998) and synthetic HR diagrams (Siess et al. 1997) suggest continuous star formation over $`30`$ My when it formed ($`100`$ My ago).
This is much longer than the other age spreads for small clusters, and may have another explanation, including the possibility that the Pleiades primordial cloud captured some stars from a neighboring, slightly older, star-forming region (e.g., Bhatt 1989). Recall that the age spreads are much larger than several My for whole OB associations and star complexes, as discussed above. C. Clusters in Dense Molecular Cores 1. Cluster Densities Infrared, x-ray, and radio continuum maps reveal dense clusters of young stars in many nearby GMC cores. Reviews of embedded infrared clusters, including 3-color JHK images, were written by Lada, Strom & Myers (1993) and Zinnecker, McCaughrean & Wilking (1993). Most observations of embedded young clusters have been made with JHK imagery. A list of some of the regions studied is in Table 1. These clusters typically have radii of $`0.1`$ pc to several tenths of a pc, and contain several hundred catalogued stars, making the stellar densities on the order of several times $`10^3`$ pc<sup>-3</sup> or larger. For example, in the Trapezium cluster, the stellar density is $`5000`$ stars pc<sup>-3</sup> (Prosser et al. 1994) or higher (McCaughrean & Stauffer 1994), and in Mon R2 it is $`9000`$ stars pc<sup>-3</sup> (Carpenter et al. 1997). Perhaps the more distant clusters in this list are slightly larger, as a result of selection effects. Some clusters, like W3, NGC 6334, Mon R2, M17, CMa OB1, S106, and the maser clusters, contain massive stars, even O-type stars in the pre-UCHII phase or with HII regions. Others, like rho Oph, contain primarily low mass stars. Although the mass functions vary a little from region to region, there is no reason to think at this time that the spatially averaged IMFs in these clusters are significantly different from the Salpeter (1955), Scalo (1986), or Kroupa, Tout, & Gilmore (1993) functions. Thus the clusters with high mass stars also tend to have low mass stars (Zinnecker, McCaughrean, & Wilking 1993), although not all of the low-mass stars are seen yet, and clusters with primarily low mass stars are not populous enough to contain a relatively rare massive star (see review of the IMF in Elmegreen 1998a). Embedded x-ray clusters have been found in NGC 2024 (Freyberg & Schmitt 1995), IC348 (Preibisch, Zinnecker, & Herbig 1996), IC1396 (Schulz, Berghöfer, & Zinnecker 1997), and the Mon R2 and Rosette molecular clouds (Gregorio-Hetem et al. 1998). These show x-ray point sources that are probably T Tauri stars, some of which are seen optically. The presence of strong x-rays in dense regions of star formation increases the ionization fraction over previous estimates based only on cosmic ray fluxes. At higher ionization fractions, magnetic diffusion takes longer and this may slow the star formation process. For this reason, Casanova et al. (1995) and Preibisch et al. (1996) suggested that x-rays from T Tauri stars lead to self-regulation of the star formation rate in dense clusters. On the other hand, Nakano (1998) suggests that star formation occurs quickly, by direct collapse, without any delay from magnetic diffusion. X-rays can also affect the final accretion phase from the disk. The X-ray irradiation of protostellar disks can lead to better coupling between the gas and the magnetic fields, and more efficient angular momentum losses through hydromagnetic winds (cf. Königl & Pudritz 1999). Such a process might increase the efficiency of star formation. 
The full implications of x-ray radiation in the cluster environment are not understood yet. A stellar density of $`10^3`$ M pc<sup>-3</sup> corresponds to an H<sub>2</sub> density of $`10^4`$ cm<sup>-3</sup>. Molecular cores with densities of $`10^5`$ cm<sup>-3</sup> or higher (e.g., Lada 1992) can easily make clusters this dense. Measured star formation efficiencies are typically 10%-40% (e.g., see Greene & Young 1992; Megeath et al. 1996; Tapia et al. 1996). Gas densities of $`10^5`$ cm<sup>-3</sup> also imply extinctions of $`A_V\approx 40`$ mag on scales of $`0.2`$ pc, which are commonly seen in these regions, and they imply masses of $`200`$ M and virial velocities of $`1`$ km s<sup>-1</sup>, which is the typical order of magnitude of the gas velocity dispersion of cold star-forming clouds in the solar neighborhood. There should be larger and smaller dense clusters too, of course, not a characteristic cluster size that is simply the average value seen locally, because unbiased surveys, as in the LMC (Bica et al. 1996), show a wide range of cluster masses with power-law mass functions, i.e., no characteristic scale (cf. Sect. IIA).
2. Cluster Effects on Binary Stars and Disks
The protostellar binary fraction is lower in the Trapezium cluster than the Tau-Aur region by a factor of $`3`$ (Petr et al. 1998), and lower in the Pleiades cluster than in Tau-Aur as well (Bouvier et al. 1997). Yet the binary frequency in the Trapezium and Pleiades clusters is comparable to that in the field (Prosser et al. 1994). This observation suggests that most stars form in dense clusters, and that these clusters reduce an initially high binary fraction at starbirth (e.g., Kroupa 1995a; Bouvier et al. 1997). The cluster environment should indeed affect binaries. The density of $`n_{star}=10^3`$ stars pc<sup>-3</sup> in a cloud core of size $`R_{core}\approx 0.2`$ pc implies that objects with this density will collide with each other in one crossing time if their cross section is $`\sigma \sim \left(n_{star}R_{core}\right)^{-1}\approx 0.005`$ pc<sup>2</sup>, which corresponds to a physical size of $`10^3`$–$`10^4\left(R_{core}(\mathrm{pc})\,n_{star}/10^3\right)^{-1/2}`$ AU. This is the scale for long-period binary stars. Another indication that a cluster environment affects binary stars is that the peak in the separation distribution for binaries is smaller (90 AU) in the part of the Sco-Cen association that contains early type stars than it is (215 AU) in the part of the Sco-Cen association that contains no early type stars (Brandner & Köhler 1998). This observation suggests that dissipative interactions leading to tighter binaries, or perhaps interactions leading to the destruction of loose binaries, are more important where massive stars form. Computer simulations of protostellar interactions in dense cluster environments reproduce some of these observations. Kroupa (1995a) got the observed period and mass-ratio distributions for field binaries by following the interactions between 200 binaries in a cluster with an initial radius of $`0.8`$ pc. Kroupa (1995b) also got the observed correlations between eccentricity, mass ratio, and period for field binaries using the same initial conditions. Kroupa (1995c) predicted further that interactions will cause stars to be ejected from clusters, and the binary fraction among these ejected stars will be lower than in the remaining cluster stars (see also Kroupa 1998). These simulations assume that all stars begin as binary members and interactions destroy some of these binaries over time.
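The encounter scale quoted earlier in this subsection follows from requiring roughly one encounter per crossing time, $`n_{star}\sigma R_{core}\approx 1`$. The sketch below is not from the chapter, and converting the cross section to a linear scale through a circular aperture is only one simple convention among several.

```python
# Minimal sketch, not from the chapter: close-encounter scale for one encounter per
# crossing time, sigma ~ 1 / (n_star * R_core). Converting sigma to a length via a
# circular cross section, d = sqrt(sigma / pi), is one convention among several.
import math

AU_PER_PC = 206265.0

def encounter_scale_au(n_star_pc3, R_core_pc):
    sigma_pc2 = 1.0 / (n_star_pc3 * R_core_pc)       # pc^2
    return math.sqrt(sigma_pc2 / math.pi) * AU_PER_PC

print(round(encounter_scale_au(1.0e3, 0.2)))         # ~8000 AU: long-period binary scale
```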
Another point of view is that the protostars begin as single objects and capture each other to form binaries. In this scenario, McDonald & Clarke (1995) found that disks around stars aid with the capture process, and they reproduced the field binary fraction in model clusters with 4 to 10 stars (see review by Clarke 1996). According to this simulation, the cluster environment should affect disks too. There are indeed observations of this nature. Mundy et al. (1995) suggested that massive disks are relatively rare in the Trapezium cluster, and Nürnberger et al. (1997) found that protostellar disk mass decreases with stellar age in the Lupus young cluster, but not in the Tau-Aur region, which is less dense. When massive stars are present, as in the Trapezium cluster, UV radiation can photoionize the neighboring disks, and this is a type of interaction as well (Johnstone et al. 1998). 3. Cluster Effects on the IMF? The best examples of cluster environmental effects on star formation have been limited, so far, to binaries and disks. Nevertheless, there are similar suggestions that the cluster environment can affect the stellar mass as well, and, in doing so, affect the initial stellar mass function (e.g. Zinnecker 1986). For example, computer simulations have long been able to reproduce the IMF using clump (Silk & Takahashi 1979; Murray & Lin 1996) or protostellar (Price & Podsiadlowski 1995; Bonnell et al. 1997) interaction models of various types. There is no direct evidence for IMF variations with cluster density, however (e.g., see Massey & Hunter 1998; Luhman & Rieke 1998). Even in extremely dense globular clusters, the IMF seems normal at low mass (Cool 1998). This may not be surprising because protostellar condensations are very small compared to the interstar separations, even in globular clusters (Aarseth et al. 1988), but the suggestion that massive stars are made by coalescence of smaller protostellar clumps continues to surface (see Zinnecker et al. 1993; Stahler, Palla, & Ho 1999). Another indication that cluster interactions do not affect the stellar mass comes from the observation by Bouvier et al. (1997) that the rotation rates of stars in the Pleiades cluster are independent of the presence of a binary companion. These authors suggest that the rotation rate is the result of accretion from a disk, and so the observation implies that disk accretion is not significantly affected by companions. Presumably this accretion would be even less affected by other cluster members, which are more distant than the binary companions. Along these lines, Heller (1995) found in computer simulations that interactions do not destroy protostellar disks, although they may remove about half of their mass. There is one way, easily overlooked, in which the cluster environment may affect the IMF. This is in the reduction of the thermal Jeans mass at the high pressure of a cluster-forming core. A lower Jeans mass might shift the turnover mass in the IMF to a lower value in dense clusters than in loose groups (Elmegreen 1997, 1999). D. Mass Segregation in Clusters One of the more perplexing observations of dense star clusters is the generally centralized location of the most massive stars. This has been observed for a long time and is usually obvious to the eye. For young clusters, it cannot be the result of “thermalization” because the time scale for that process is longer than the age of the cluster (e.g., Bonnell & Davies 1998).
Thus it is an indication of some peculiar feature of starbirth. The observation has been quantified using color gradients in 12 clusters (Sagar & Bhatt 1989), and by the steepening of the IMF with radius in several clusters (Pandey, Mahra, & Sagar 1992), including Tr 14 (Vazquez et al. 1996), the Trapezium in Orion (Jones & Walker 1988; Hillenbrand 1997; Hillenbrand & Hartmann 1998), and, in the LMC, NGC 2157 (Fischer et al. 1998), SL 666, and NGC 2098 (Kontizas et al. 1998). On the other hand, Carpenter et al. (1997) found no evidence from the IMF for mass segregation in Mon R2 at $`M<2`$M, but noted that the most massive star (10 M) is near the center nevertheless. Raboud & Mermilliod (1998) found a segregation of the binary stars and single stars in the Pleiades, with the binaries closer to the center, presumably because of their greater mass. A related observation is that intermediate mass stars always seem to have clusters of low mass stars around them (Testi, Palla, & Natta 1998), as if they needed these low mass stars to form by coalescence, as suggested by these authors. There are many possible explanations for these effects. The stars near the center could accrete gas at a higher rate and end up more massive (Larson 1978, 1982; Zinnecker 1982; Bonnell et al. 1997); they (or their predecessor clumps) could coalesce more (Larson 1990; Zinnecker et al. 1993; Stahler, Palla, & Ho 1999; Bonnell, Bate, & Zinnecker 1998), or the most massive stars and clumps forming anywhere could migrate to the center faster because of a greater gas drag (Larson 1990, 1991; Gorti & Bhatt 1995, 1996; Saiyadpour, Deiss, & Kegel 1997). A central location for the most massive pieces is also expected in a hierarchical cloud (Elmegreen 1999). The centralized location of binaries could be the result of something different: the preferential ejection of single stars that have interacted with other cluster stars (Kroupa 1995c). The presence of low-mass stars around high-mass stars could have a different explanation too: high-mass stars are rare so low-mass stars are likely to form before a high-mass star appears, whatever the origin of the IMF. III. CLUSTER FORMATION MODELS A. Bound Clusters as Examples of Triggered Star Formation? Section IIA considered loose stellar groupings as a possible reflection of hierarchical cloud structure, possibly derived from turbulent motions, and it considered dense cluster formation in cloud cores separately, as if this process were different. In fact the two types of clusters and the processes that lead to them could be related. Even the bound clusters, which presumably formed in dense cloud cores, have a power law mass distribution, and it is very much like the power law for the associations that make HII regions, so perhaps both loose and dense clusters get their mass from cloud hierarchical structure. The difference might be simply that dense clusters form in cloud pieces that get compressed by an external agent. There are many young clusters embedded in cores at the compressed interfaces between molecular clouds and expanded HII regions, including many of those listed in Table 1 here, as reviewed in Elmegreen (1998b). For example, Megeath & Wilson (1997) recently proposed that the embedded cluster in NGC 281 was triggered by the HII region from an adjacent, older, Trapezium-like cluster, and Sugitani et al. (1995) found embedded clusters inside bright rimmed clouds. 
Compressive triggering of a cluster can also occur at the interface between colliding clouds, as shown by Usami et al. (1995). A case in point is the S255 embedded cluster (Zinnecker et al. 1993; Howard et al. 1997; Whitworth & Clarke 1997). Outside compression aids the formation of clusters in several ways. It brings the gas together so the stars end up in a dense cluster, and it also speeds up the star formation processes by increasing the density. These processes can be independent of the compression, and the same as in other dense regions that were not rapidly compressed; the only point is that they operate faster in compressed gas than in lower density gas. The external pressure may also prevent or delay the cloud disruption by newborn stars, allowing a large fraction of the gas to be converted into stars, and thereby improving the chances that the cluster will end up self-bound (cf. Sect IV; Elmegreen & Efremov 1997; Lefloch et al. 1997). Cloud cores should also be able to achieve high densities on their own, without direct compression. This might take longer, but the usual processes of energy dissipation and gravitational contraction can lead to the same overall core structure as the high pressure from an external HII region. Heyer, Snell & Carpenter (1997) discussed the morphology of dense molecular cores and cluster formation in the outer Galaxy, showing that new star clusters tend to form primarily in the self-gravitating, high-pressure knots that occur here and there amid the more loosely connected network of lower pressure material. Many of these knots presumably reached their high densities spontaneously. B. Spontaneous Models and Large Scale Triggering The most recent development in cluster formation models is the direct computer simulation of interacting protostars and clumps leading to clump and stellar mass spectra (Klessen et al. 1998). Earlier versions of this type of problem covered protostellar envelope stripping by clump collisions (Price & Podsiadlowski 1995), the general stirring and cloud support by moving protostars with their winds (Tenorio-Tagle et al. 1993), and gas removal from protoclusters (Theuns 1990). The core collapse problem was also considered by Boss (1996) who simulated the collapse of an oblate cloud, forming a cluster with $`10`$ stars. A detailed model of thermal instabilities in a cloud core, followed by a collapse of the dense fragments into the core center and their subsequent coalescence, was given by Murray & Lin (1996). Patel & Pudritz (1994) considered core instability with stars and gas treated as separate fluids, showing that the colder stellar fluid destabilized the gaseous fluid. Myers (1998) considered magnetic processes in dense cores, and showed that stellar-mass kernels could exist at about the right spacing for stars in a cluster and not be severely disrupted by magnetic waves. Whitworth et al. (1998) discussed a similar characteristic core size at the threshold between strong gravitational heating and grain cooling on smaller scales, and turbulence heating and molecular line cooling on larger scales. Some cluster formation models proposed that molecular clouds are made when high velocity clouds impact the Galactic disk (Tenorio-Tagle 1981). Edvardsson et al. (1995) based this result on abundance anomalies in the $`\zeta `$ Sculptoris cluster. 
Lepine & Duvert (1994) considered the collision model because of the distribution of gas and star formation in local clusters and OB associations, while Phelps (1993) referred to the spatial distribution, ages, velocities, and proper motions of 23 clusters in the Perseus arm. Comeron et al. (1992) considered the same origin for stars in Gould’s Belt based on local stellar kinematics. For other studies of Gould’s Belt kinematics, see Lindblad et al. (1997) and De Zeeuw et al. (1999). Other origins for stellar clustering on a large scale include triggering by spiral density waves, which is reviewed in Elmegreen (1994, 1995a). According to this model, Gould’s Belt was a self-gravitating condensation in the Sgr-Carina spiral arm when it passed us $`60`$ My ago, and is now in the process of large-scale dispersal as it enters the interarm region, even though there is continuing star formation in the Lindblad ring and other disturbed gas from this condensation (see Elmegreen 1993). The evolution of a dense molecular core during the formation of its embedded cluster is unknown. The core could collapse dynamically while the cluster stars form, giving it a total lifetime comparable to the core crossing time, or it could be somewhat stable as the stars form on smaller scales inside of it. Indeed there is direct evidence for gas collapse onto individual stars in cloud cores (Mardones et al. 1997; Motte, Andre & Neri 1998), but not much evidence for the collapse of whole cores (except perhaps in W49 – see Welch et al. 1987; De Pree, Mehringer, & Goss 1997). IV. CONDITIONS FOR THE FORMATION OF BOUND CLUSTERS A. Critical Efficiencies The final state of an embedded cluster of young stars depends on the efficiency, $`ϵ`$, of star formation in that region: i.e., on the ratio of the final stellar mass to the total mass (stars + gas) in that part of the cloud. When this ratio is high, the stars have enough mass to remain gravitationally bound when the residual gas leaves, forming a bound cluster. When this ratio is low, random stellar motions from the time of birth disperse the cluster in a few crossing times, following the expulsion of residual gas. The threshold for self-binding occurs at a local efficiency of about 50% (von Hoerner 1968). This result is most easily seen from the virial theorem $`2T+\mathrm{\Omega }=0`$ and total energy $`E=T+\mathrm{\Omega }`$ for stellar kinetic and potential energies, $`T`$ and $`\mathrm{\Omega }`$. Before the gas expulsion, $`E=\mathrm{\Omega }_{before}/2<0`$ from these two equations. In the instant after rapid gas expulsion, the kinetic energy and radius of the cluster are approximately unchanged because the stellar motions are at first unaffected, but the potential energy changes because of the sudden loss of mass (rapid gas expulsion occurs when the outflowing gas moves significantly faster than the virial speed of the cloud). To remain bound thereafter, $`E`$ must remain less than zero, which means that during the expulsion, the potential energy can increase by no more than the addition of $`|\mathrm{\Omega }_{before}|/2`$. Thus immediately after the expulsion of gas, the potential energy of the cluster, $`\mathrm{\Omega }_{after}`$, has to be less than half the potential energy before, $`\mathrm{\Omega }_{before}/2`$. Writing $`\mathrm{\Omega }_{before}=-\alpha GM_{stars}M_{total}/R`$ and $`\mathrm{\Omega }_{after}=-\alpha GM_{stars}^2/R`$ for the same $`\alpha `$ and $`R`$, we see that this constraint requires $`M_{stars}>M_{total}/2`$ for self-binding.
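The energy bookkeeping in this argument can be made concrete with a small toy calculation (a sketch only; the structure constant $`\alpha `$ and the units drop out of the comparison):

```python
# Toy version of the gas-expulsion argument above: a virialized star cluster
# embedded in gas of total mass M_total suddenly loses the gas.  The stars
# keep their kinetic energy T = -Omega_before/2, the potential becomes
# Omega_after, and the remnant stays bound only if T + Omega_after < 0.
def bound_after_expulsion(eps, alpha=1.0, G=1.0, M_total=1.0, R=1.0):
    m_stars = eps * M_total
    omega_before = -alpha * G * m_stars * M_total / R
    T = -0.5 * omega_before              # virial equilibrium before expulsion
    omega_after = -alpha * G * m_stars**2 / R
    return T + omega_after < 0.0

for eps in (0.2, 0.4, 0.5, 0.6, 0.8):
    print("efficiency %.1f -> %s" % (eps, "bound" if bound_after_expulsion(eps) else "unbound"))
# Only efficiencies above 1/2 leave a bound remnant, as stated in the text
# (eps = 0.5 itself is the marginal case).
```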
Thus the efficiency for star formation, $`M_{stars}/M_{total}`$, has to exceed about $`1/2`$ for a cluster to be self-bound (see also Mathieu 1983; Elmegreen 1983). Another way to write this is in terms of the expansion factor for radius, $`R_{final}/R_{initial}`$, where R<sub>final</sub> is the cluster radius after the gas-free cluster readjusts its virial equilibrium. A cluster is bound if $`R_{final}`$ does not become infinite. Hills (1980) derived $`R_{final}/R_{initial}=ϵ/(2ϵ-1)`$, from which we again obtain $`ϵ>0.5`$ for final self-binding with efficiency $`ϵ`$. Danilov (1987) derived a critical efficiency in terms of the ratio of cluster radius to cloud radius; this ratio has to be $`<0.2`$ for a bound cluster to form. There can be many modifications to this result, depending on the specific model of star formation. One important change is to consider initial stellar motions that are less than their virial speeds in the potential of the cloud because the cloud is supported by both magnetic and kinematic energies, whereas the star cluster is supported only by kinematic energy. This modification was considered by Lada, Margulis, & Dearborn (1984), Elmegreen & Clemens (1985), Pinto (1987), and Verschueren (1990), who derived a critical efficiency for isothermal clouds that may be approximated by the expression, $$2\left(1-ϵ\right)\mathrm{ln}\left(\frac{ϵ}{1-ϵ}\right)+1+ϵ=1.5t^2$$ where $`t=a_s/a_{VT}<1`$ is the ratio of the stellar velocity dispersion to the virial velocity dispersion of the cloud. This expression gives $`ϵ`$ between 0.29 at $`t=0`$ and 0.5 at $`t=1`$. Other cloud structures gave a similar range for $`ϵ`$. This result is the critical star formation efficiency for the whole cloud; it assumes that the stars fall to the center after birth, and have a critical efficiency for binding in the center equal to the standard value of 0.5. A related issue is the question of purely gravitational effects that arise in a cluster-forming core once the stars comprise more than $`30`$% of the gas. In this situation, the stars may be regarded as a separate (collisionless) “fluid” from the gas. The stability of such two-fluid systems was considered by Jog & Solomon (1984) and Fridman & Polyachenko (1984). The Jeans length for a two-component fluid is smaller than that for either fluid separately. Dense stellar clusters might therefore fragment into sub-groups, perhaps accounting for some of the sub-structure that is observed in young embedded star clusters (Patel & Pudritz 1994). Lada, Margulis & Dearborn (1984) also considered the implications of slow gas removal on cluster self-binding. They found that gas removal on timescales of several cloud crossing times lowers the required efficiency by about a factor of 2, and when combined with the effect of slow starbirth velocities, lowers the efficiency by a combined factor of $`4`$. For clouds in which stars are born at about half the virial speed, and in which gas removal takes $`4`$ crossing times, the critical efficiency for the formation of a bound cluster may be only $`10`$%. Another way to lower the critical efficiency is to consider gas drag on the stars that form. Gas drag removes stellar kinetic energy and causes the stars to sink to the bottom of the cloud potential well, just like a low birth velocity. Saiyadpour et al. (1997) found that the critical efficiency can be only 0.1 in this case. Gas accretion also slows down protostars and causes them to sink to the center (Bonnell & Davies 1998).
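The transcendental expression above is easy to solve numerically. The sketch below (plain bisection on the lower branch of the equation; an illustration, not code from the references) reproduces the quoted limits of 0.29 and 0.5.

```python
# Solve 2(1-e) ln(e/(1-e)) + 1 + e = 1.5 t^2 for the critical efficiency e(t),
# using plain bisection on the lower branch 0 < e <= 0.5 (the relevant root).
import math

def f(e, t):
    return 2.0 * (1.0 - e) * math.log(e / (1.0 - e)) + 1.0 + e - 1.5 * t * t

def critical_efficiency(t, lo=1e-9, hi=0.5):
    for _ in range(100):                 # f(lo) < 0 <= f(hi) for 0 <= t <= 1
        mid = 0.5 * (lo + hi)
        if f(mid, t) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

for t in (0.0, 0.5, 1.0):
    print("t = %.1f  ->  critical efficiency ~ %.2f" % (t, critical_efficiency(t)))
# Reproduces the quoted limits: ~0.29 for stars born at rest (t = 0), rising
# to 0.5 when the stars are born with the full virial speed (t = 1).
```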
It follows from these examples that the critical efficiency for self-binding can be between $`0.1`$ and 0.5, depending on the details of the star formation process. B. Bound Clusters versus Unbound OB Associations The onset of massive star formation should mark the beginning of rapid cloud dispersal because ionizing radiation is much more destructive per unit stellar mass than short-lived winds from low-mass stars (e.g., see Whitworth 1979). According to Vacca, Garmany & Shull (1996), the ionizing photon luminosity scales with stellar mass approximately as $`M^4`$. In that case, the total Lyman continuum luminosity from stars with luminosities in the range $`\mathrm{log}L`$ to $`\mathrm{log}L+d\mathrm{log}L`$ increases approximately as $`L^{0.66}`$ for a Salpeter IMF (a Salpeter IMF has a number of stars in a logarithmic interval, $`n\left[M_{star}\right]d\mathrm{log}M_{star}`$, proportional to $`M_{star}^{-1.35}d\mathrm{log}M_{star}`$). Thus the total ionizing luminosity increases with cluster mass more rapidly than the total cloud mass, and cloud destruction by ionization follows the onset of massive star formation. If massive stars effectively destroy clouds, then the overall efficiency is likely to be low wherever a lot of massive stars form (unless they form preferentially late, as suggested by Herbig (1962), and not just randomly late). Thus we can explain both the low efficiency and the unboundedness of an OB association: the destructive nature of O-star ionization causes both. We can also explain why all open clusters in normal galaxy disks have small masses, generally less than several times $`10^3`$ M in the catalog of Lynga (1987; e.g., see Battinelli et al. 1994): low mass star-forming regions are statistically unlikely to produce massive stars. Discussions of this point are in Elmegreen (1983), Henning & Stecklum (1986), and Pandey et al. (1990). The idea that massive stars form late in the development of a cluster goes back to Herbig (1962) and Iben & Talbot (1966), with more recent work by Herbst & Miller (1982) and Adams, Strom & Strom (1983). However, Stahler (1985) suggested that the observations have a different explanation, and that the rare massive stars should form later than the more common low-mass stars anyway, for statistical reasons (Schroeder & Comins 1988; Elmegreen 1999). The efficiency of star formation has been estimated for several embedded clusters, giving values such as 25% for NGC 6334 (Tapia et al. 1996), 6-18% for W3 IRS5 (Megeath et al. 1996), 2.5% for Serpens (White et al. 1995), 19% for NGC 3576 (Persi et al. 1994), and 23% for rho Oph (Greene & Young 1992), to name a few. C. Variation in Efficiency with Ambient Pressure Variations in the efficiency from region to region could have important consequences because it might affect the fraction of star formation going into bound clusters (in addition to the overall star formation rate per unit gas mass). One consideration is that the efficiency may increase in regions of high pressure (Elmegreen, Kaufman & Thomasson 1993; Elmegreen & Efremov 1997). This is because the virial velocity of a gravitationally-bound cloud increases with pressure and mass as $`V_{VT}\propto (PM^2)^{1/8}`$, as may be determined from the relationships $`V_{VT}^2\approx GM/(5R)`$ and $`P\approx GM^2/R^4`$ for radius $`R`$. If the pressure increases and the virial velocity follows, then clouds of a given mass are harder to destroy with HII regions, which push on material with a fixed velocity of about $`10`$ km s<sup>-1</sup>.
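Both scalings invoked in this subsection can be checked in a few lines. The sketch below is an illustration with arbitrary normalizations: the $`M^4`$ ionizing-luminosity law and the Salpeter slope are simply taken as given, and the result confirms the quoted $`L^{0.66}`$ behaviour and the $`V_{VT}\propto (PM^2)^{1/8}`$ scaling.

```python
# Two quick checks of the scalings quoted above (normalizations arbitrary).
import math

# (1) Salpeter IMF: dN/dlogM ~ M^-1.35, ionizing output per star S ~ M^4, so
#     the ionizing luminosity contributed per logarithmic mass bin scales as
#     M^(4-1.35) = M^2.65, i.e. as S^(2.65/4) when re-expressed against S.
def ionizing_per_logM(M):
    return M ** (4.0 - 1.35)

M1, M2 = 10.0, 20.0
exponent = math.log(ionizing_per_logM(M2) / ionizing_per_logM(M1)) / math.log(M2**4 / M1**4)
print("total ionizing output rises as L^%.2f per logarithmic bin" % exponent)   # 0.66

# (2) Eliminating R between V^2 = GM/(5R) and P = GM^2/R^4 gives V ~ (P M^2)^(1/8).
def v_virial(M, P, G=1.0):
    R = (G * M * M / P) ** 0.25
    return math.sqrt(G * M / (5.0 * R))

v0 = v_virial(1.0, 1.0)
for M, P in ((10.0, 1.0), (1.0, 100.0), (10.0, 100.0)):
    slope = 8.0 * math.log(v_virial(M, P) / v0) / math.log(P * M * M)
    print("M = %g, P = %g  ->  8 dlnV/dln(PM^2) = %.3f" % (M, P, slope))         # 1.000
```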
In fact, a high fraction of star formation in starburst galaxies, which generally have a high pressure, could be in the form of bound clusters (Meurer et al. 1995). The lack of expansion of HII regions in virialized clouds with high velocity dispersions also means that the massive stars will not ionize much. They will only ionize the relatively small mass of high density gas initially around them. We can determine the average pressures in today’s globular clusters from their masses and sizes using the relationship $`P\approx GM^2/R^4`$. This gives $`P\sim 10^6`$–$`10^8`$ k<sub>B</sub> (Harris & Pudritz 1994; Elmegreen & Efremov 1997), which is $`10^2`$–$`10^4`$ times the local total ISM pressure. If the pressures of star-forming regions in the Galactic halo were this high when the globular clusters formed, and the globular cluster cloud masses were higher than those near OB associations by a factor of $`10`$, to account for the higher globular cluster masses, then the velocity dispersions in globular cluster cores had to be larger than the velocity dispersion in a local GMC by a factor $`\left(M^2P\right)^{1/8}=\left(10^2\times 10^4\right)^{1/8}\approx 5.6.`$ This puts the dispersion close to $`10`$ km s<sup>-1</sup>, making the globular cluster clouds difficult to disrupt by HII regions. V GLOBULAR CLUSTER FORMATION Globular clusters in the halos of galaxies are denser, smoother, and more massive than open clusters in galactic disks, and the globulars are also much older, but they have about the same power law mass distribution function as open clusters at the high mass end, and of course both are gravitationally bound systems. We are therefore faced with the challenging question of whether the similarities between these two types of clusters are more important than their differences. If so, then they may have nearly the same formation mechanisms, modified in the case of the globulars by the peculiar conditions in the early Universe. If the differences are too great for a unified model, then we need a unique formation theory for globular clusters. The history of the theory on this topic is almost entirely weighted toward the latter point of view, because the full mass distribution function for globular clusters is essentially a Gaussian (when plotted as linear in number versus logarithm in mass or luminosity; e.g., Harris & Racine 1979; Abraham & van den Bergh 1995), with a characteristic mass of several $`\times 10^5`$ M. Nearly all of the early models have attempted to explain this mass. For example, Peebles & Dicke (1968), Peebles (1984), Rosenblatt et al. (1988) and Padoan et al. (1997) regarded globular clusters as primordial objects produced by density fluctuations in the expanding Universe. Peebles & Dicke (1968) thought the characteristic mass was a Jeans mass. Other models viewed globulars as secondary objects, formed by thermal instabilities in cooling halo gas (Fall & Rees 1985; Murray & Lin 1992; Vietri & Pesce 1995) or gravitational instabilities in giant bubbles (Brown et al. 1995) or the shocked layers between colliding clouds (Zinnecker & Palla 1987; Shapiro, Clocchiatti, & Kang 1992; Kumai et al. 1993; Murray & Lin 1992). Schweizer (1987) and Ashman & Zepf (1992) suggested many globulars formed during galaxy mergers. This could explain the high specific frequency of globular clusters (number per unit galaxy luminosity; Harris & van den Bergh 1981) in ellipticals compared to spirals if the ellipticals formed in mergers. However, Forbes et al.
(1997) found that galaxies with high specific frequencies of globular clusters have lower cluster metallicities, whereas the opposite might be expected in the merger model. Also, McLaughlin (1999) has suggested that the specific frequency of globular cluster is the same everywhere when x-ray halo gas and stellar evolution are included. There is another point of view if the globular cluster mass function is not primordial but evolved from an initial power law. This is a reasonable hypothesis because low mass globulars evaporate and get dispersed first, depressing an initial power law at low mass to resemble a Gaussian after a Hubble time (Surdin 1979; Okazaki & Tosa 1995; Elmegreen & Efremov 1997). Observations of young globular clusters, forming in starburst regions, also show a power law luminosity function with a mixture of ages (Holtzman et al. 1992; Whitmore & Schweizer 1995; Meurer et al. 1995; Maoz et al. 1996; Carlson et al. 1998), and the high mass end of the old globular systems is nearly a power law too (Harris & Pudritz 1994; McLaughlin & Pudritz 1996; Durrell et al. 1996). In that case, there is a good possibility that old globular clusters formed in much the same way as young open clusters, i.e., in dense cores that are part of a large-scale hierarchical gas structure derived from cloud collisions (Harris & Pudritz 1994; McLaughlin & Pudritz 1996) or turbulent motions (Elmegreen & Efremov 1997). Direct observations of globular cluster luminosity functions at cosmological distances should be able to tell the difference between formation models with a characteristic mass and those that are scale free. Another model for globular cluster formation suggests they are the cores of former dwarf galaxies (Zinnecker et al. 1988; Freeman 1993), “eaten” by the large galaxy during dissipative collisions. The globulars NGC 6715, Terzan 7, Terzan 8, and Arp 2 that are comoving with the Sgr dwarf galaxy are possible examples (Ibata et al. 1995; Da Costa & Armandroff 1995). Other dwarf galaxies have globular cluster systems too (Durrell et al. 1996), so the globulars around large galaxies may not come from the cores of the dwarfs, but from the dwarf globulars themselves. It remains to be seen whether this formation mechanism can account for the globular cluster luminosity function. VI CONCLUSIONS 1. Loose hierarchical clusters form when the associated gas is only weakly self-gravitating and clumped in this fashion before the star formation begins. Dense clusters come from strongly self-gravitating gas, which may be triggered, and which also may be gravitationally unstable to bulk collapse. 2. Cluster formation is often quite rapid, requiring only a few internal crossing times to make most of the stars. This follows from the relatively small age differences between nearby clusters and from the hierarchical structure of embedded and young stellar groups. Such structure would presumably get destroyed by orbital mixing if the region were much older than a crossing time. 3. Dense cluster environments seem to affect the formation or destruction of protostellar disks and binary stars, but not the stellar initial mass function. 4. Bound clusters require a relatively high star formation efficiency. This is not a problem for typically low mass open clusters, but it requires something special, like a high pressure, for a massive globular cluster. Acknowledgements: We would like to thank the conference organizers for the opportunity to write this review. 
Helpful comments on the manuscript were provided by W. Brandner, D. McLaughlin, and P. Kroupa. Yuri Efremov appreciates partial support from the Russian Foundation for Basic Research and the Council for Support of Scientific Schools. The research of Ralph Pudritz is supported by the National Science and Engineering Research Council of Canada (NSERC). Travel for Hans Zinnecker was supported by the Deutsche Forschungsgemeinschaft (DFG).
no-problem/9903/astro-ph9903174.html
ar5iv
text
# Cosmological parameter dependence in local string theories of structure formation ## I Introduction Modern theories of structure formation are based on two paradigms: cosmological inflation and topological defects . Inflationary scenarios produce structure from vacuum quantum fluctuations, and have been well studied. Hence upcoming high precision experiments aimed at measuring the Cosmic Microwave Background (CMB) anisotropy and Large Scale Structure (LSS) power spectra should also provide information on the model’s free parameters. Parameters internal to the theory are, for example, the primordial tilt, the perturbations’ amplitude, or the percentage of tensor modes. Others describe the current state of the Universe, and include the ratios to the critical density, of the matter density, $`\mathrm{\Omega }`$, of the vacuum energy, $`\mathrm{\Omega }_\mathrm{\Lambda }`$, of the baryon density, $`\mathrm{\Omega }_b`$, and the Hubble constant $`H_0`$. Progress in defect theories has been slower. Here, as the Universe cools down, high temperature symmetries are spontaneously broken, and remnants of the unbroken phase may survive the transition . These “vacuum” defects later seed fluctuations in the CMB and LSS. The defect evolution is highly non linear, thereby complicating the computation of these fluctuations. However, recent breakthroughs in method and computer technology have allowed unprecedented progress , although the results for a standard combination of parameters ($`\mathrm{\Omega }=1`$, $`\mathrm{\Omega }_\mathrm{\Lambda }=0`$, $`\mathrm{\Omega }_b=0.05`$, $`H_0=50`$ km/sec/Mpc) were discouraging. Tuning parameters to fit the data has not been thoroughly attempted, although initial indications are that it is possible to improve the fits considerably . Also, combinations of strings and inflation with the standard parameters seem to fit the data much better than each separate component . Unlike inflation, defect theories only have one internal parameter: the symmetry breaking energy scale (which is also the string mass per unit length $`\mu `$ for local cosmic strings). Nonetheless, quantities like $`\mathrm{\Omega }`$, $`\mathrm{\Omega }_b`$, $`\mathrm{\Omega }_\mathrm{\Lambda }`$, or $`H_0`$ are also free parameters in defect theories, although it has been suggested in the context of global theories that the dependence on these is much weaker . The purpose of this paper is to reexamine this statement, in the context of local cosmic strings. Before such a study can be attempted a number of aspects of the work on strings need to be refined. The effect of the radiation to matter (RM) transition, and the transition to curvature or vacuum domination upon the network need to be established, if sensitivity to the cosmological parameters is to be fully considered. However, these effects were neglected in the recent highest accuracy calculations of structure formation with local cosmic strings , which make use of unequal time correlators (UETC). Here we will use the 3-scale model of Austin, Copeland and Kibble to evaluate those effects in a flat Universe with no cosmological constant. A further key ingredient of this model is the inclusion of gravitational backreaction (GBR). These results are incorporated into the UETCs of , and the CMB and LSS power spectra are obtained on a grid of values for $`\mathrm{\Omega }_b`$ and $`H_0`$. We argue that this combination of techniques provides the most accurate calculation of these spectra allowed by current computer technology. 
This paper is set up as follows. First, in section II, we discuss the aspects of the 3-scale model which are relevant to this work. In section III we then explain how these results are incorporated into the UETCs of . The resulting CMB and LSS power spectra, and their dependence on $`\mathrm{\Omega }_b`$ and $`H_0`$ are then presented in section IV. Conclusions are given in section V. ## II The 3-scale model The 3-scale model provides an analytic description of the string network in terms of three physical lengths $`\xi `$, $`\overline{\xi }`$ and $`\zeta `$. The mean string separation is $`\xi `$ so that the energy density in strings is $`\rho =\mu /\xi ^2`$. (Renormalisation effects are encoded in $`\xi `$ and not $`\mu `$.) The mean correlation length along the strings is $`\overline{\xi }(\xi )`$. Finally the small scale structure is described by $`\zeta `$ which is effectively the interkink distance. These scales are implicitly functions of each other, of time $`t`$ and of parameters describing the effects of expansion, gravitational radiation, GBR, loop formation and intercommutation. They also determine the root mean square string velocity $`v_{rms}`$. In this paper we have solved the evolution equations for $`d\xi /da`$, $`d\overline{\xi }/da`$ and $`d\zeta /da`$ in terms of the scale factor $`a`$ and the variables $`\gamma =1/H\xi `$, $`\overline{\gamma }=1/H\overline{\xi }`$ and $`ϵ=1/H\zeta `$ where $`H`$ is the Hubble parameter. We do not write out these rather long equations here: they follow directly from those given in by a simple change of variables from $`t`$ to $`a`$. Of particular interest is the effect of GBR, the behaviour of the network across the RM transition, and the question of whether or not the system has reached a scaling regime ($`ϵ`$, $`\gamma `$ and $`\overline{\gamma }`$ all constant) today. The 3-scale model could be criticised for containing many undetermined parameters. However, some of these, such as the infinite string intercommutation probability $`\chi `$, the chopping efficiency $`c`$ which gives the rate of loop formation, and $`\mathrm{\Gamma }G\mu `$ the rate of gravitational radiation, also appear in 1-scale models, for example . Other parameters are tightly constrained by numerical simulations of cosmic string evolution (see below). Indeed, the only undetermined parameters are $`k`$, which describes the excess small-scale kinkiness on loops compared to long strings, and $`\widehat{C}`$ which determines the rate at which GBR smooths the small-scale kinkiness; $`d\zeta /dt|_{GBR}=\widehat{C}\mathrm{\Gamma }G\mu `$. Neither quantity is to be found in current numerical simulations nor in other analytic models. One further point: we assume here that gravitational radiation and not particle production is the dominant source of energy loss . This implies that $`\widehat{C}>\widehat{C}_{crit}>k`$ for $`\widehat{C}_{crit}`$ a given constant of order 1 . The reader is referred to for a recent discussion of the effect of different decay mechanisms on the observational consequences of strings. Some important features of the solutions for $`\gamma `$, $`\overline{\gamma }`$, $`ϵ`$ and $`v_{rms}`$ are shown in Fig. 1 for different values of $`\widehat{C}`$. One of these is that there are two different scaling solutions. The first, ‘transient’ solution, has constant values of $`\gamma _{tr}`$, $`\overline{\gamma }_{tr}`$ and $`v_{rms}^{tr}`$ (independent of $`\widehat{C}`$), but $`ϵ`$ is growing: small scale structure is building up on the network.
We believe that this corresponds to the scaling solutions of numerical simulations which have no GBR (so $`\widehat{C}=0`$). From the evolution equations one can obtain expressions for $`\gamma _{tr}`$, $`\overline{\gamma }_{tr}`$ and $`v_{rms}^{tr}`$ and thus bound them by data from simulations, which in turn constrains many of the 3-scale model parameters as claimed above. We comment that the duration of the transient regime does depend on $`\widehat{C}`$, ending when $`ϵ\sim 1/\mathrm{\Gamma }G\mu \widehat{C}`$. Thus if, as in numerical simulations, we were to set $`\widehat{C}=0`$ it would last indefinitely, with $`ϵ\rightarrow \infty `$ eventually causing the strings to disappear ($`\gamma ,\overline{\gamma }\rightarrow 0`$). Finally, recall that in Ref particle production was found to be the dominant source of energy loss from the network. The analogue here corresponds to having $`k>\widehat{C}_{crit}`$ for which the evolution equations lead to a scaling regime with $`\gamma \sim ϵ\sim \overline{\gamma }`$; a 1-scale model . Fig. 1 shows that following the transient regime is a new ‘full’ scaling regime in which all the variables reach new constant values, apart from a period between $`-5<\mathrm{ln}(a/a_{eq})<10`$ (including recombination and even today) resulting from the RM transition. It appears that the string network might only just, if at all, be reaching a scaling solution today. Indeed, across the transition, different scales seem to reach scaling at different rates which are also dependent on $`\widehat{C}`$. We also see that as $`\widehat{C}`$ increases $`\zeta `$ increases since kinks are smoothed out more effectively; $`v_{rms}`$ is also increased. It is this full scaling solution, and not the transient one, that is clearly the relevant one for structure formation. However, we noted above that $`\widehat{C}`$ is so far unknown. We have therefore chosen to work in the limit where the scaling solutions are as insensitive as possible to $`\widehat{C}`$. From Fig. 1 this corresponds to large values of $`\widehat{C}`$ for which the decrease in physical energy density across the radiation-matter transition is also largest. In that limit, the solutions plotted in Fig. 1 are also insensitive to the precise value of $`k(<\widehat{C}_{crit})`$; the exception is $`ϵ`$ which decreases as ‘particle production’ ($`k`$) increases. ## III Method of determination of the Powerspectra We have used the full scaling solutions including GBR with large $`\widehat{C}`$ to update the results of , where the power spectra in string induced CMB fluctuations ($`C_{\ell }`$) and LSS ($`P(k)`$) were computed following methods first proposed in . Of key importance is to obtain from simulations the 2-point functions of the defect stress-energy tensor $`\mathrm{\Theta }_{\mu \nu }`$. These so-called unequal time correlators (UETC) are defined as $`\langle \mathrm{\Theta }_{\mu \nu }(𝐤,\tau )\mathrm{\Theta }_{\alpha \beta }^{*}(𝐤,\tau ^{\prime })\rangle \equiv 𝒞_{\mu \nu ,\alpha \beta }(k,\tau ,\tau ^{\prime })`$ where $`𝐤`$ is the wavevector and $`\tau `$ and $`\tau ^{\prime }`$ are any two (conformal) times. UETCs contain all the information required for computing the power spectra in defect induced fluctuations. They fall off sharply away from the equal-time diagonal ($`\tau =\tau ^{\prime }`$), and are tightly constrained by requirements of self-similarity (or scaling) and causality. These properties allow extrapolation of measurements made over the limited dynamical range of simulations, thus giving the defect history over the whole life of the Universe.
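The 3-scale evolution equations are not written out in this paper, but the qualitative behaviour described above can be illustrated with the much simpler, standard one-scale velocity-dependent (VOS) string network model. The sketch below is only such a stand-in: the chopping efficiency and momentum-parameter form are the usual VOS choices, not parameters of the 3-scale model, and the numbers should not be compared quantitatively with Fig. 1.

```python
# Illustration only: the standard one-scale velocity-dependent (VOS) string
# network model, integrated through the radiation-matter transition.  This is
# a simpler stand-in for the 3-scale model of the text; it shows how slowly
# gamma = 1/(H*xi) relaxes from its radiation-era to its matter-era scaling
# value around a = a_eq.
import math

C_TILDE = 0.23                       # usual VOS loop-chopping efficiency (assumed)

def k_mom(v):                        # usual VOS momentum parameter (assumed)
    return (2.0 * math.sqrt(2.0) / math.pi) * (1.0 - 8.0 * v**6) / (1.0 + 8.0 * v**6)

def evolve(lna_min=-9.0, lna_max=9.0, dlna=1e-3):
    # x = a/a_eq; for matter + radiation, m = -dlnH/dlna = (3x+4)/(2(x+1))
    gamma, v = 7.0, 0.66             # start near the radiation-era scaling values
    lna, history = lna_min, []
    while lna < lna_max:
        x = math.exp(lna)
        m = (3.0 * x + 4.0) / (2.0 * (x + 1.0))
        dgamma = gamma * (m - (1.0 + v * v) - 0.5 * C_TILDE * v * gamma)
        dv = (1.0 - v * v) * (k_mom(v) * gamma - 2.0 * v)
        gamma, v, lna = gamma + dgamma * dlna, v + dv * dlna, lna + dlna
        history.append((lna, gamma, v))
    return history

for i, (lna, gamma, v) in enumerate(evolve()):
    if i % 3000 == 0:
        print("ln(a/a_eq) = %+5.1f   gamma = 1/(H xi) = %5.2f   v_rms = %4.2f" % (lna, gamma, v))
# gamma drifts from ~7 (radiation-era scaling) toward ~2.4 (matter-era scaling)
# over many e-folds around equality, i.e. the network relaxes only slowly.
```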
However, it is clear that scaling breaks down near the RM transition (Fig. 1). This may be incorporated into the UETCs by measuring radiation and matter epoch UETCs, transition UETCs, and then interpolating . Such a procedure was applied to global defects, but cannot be followed with the simulations of , in which the expansion damping effects upon the network are neglected. Nonetheless these simulations are still the only available complete source of local string UETCs (see however ). The 3 scale model of , while allowing the most rigorous study to date of departures from scaling, does not provide sufficient information for constructing UETCs. We therefore decided to adopt a hybrid approach, and incorporate the information supplied by the 3-scale model of into the UETCs of . The UETCs are renormalised with a non-scaling factor describing the change in comoving string density across the RM transition, as determined from the 3-scale model. That is, we make the replacement $$𝒞_{\mu \nu ,\alpha \beta }(k,\tau ,\tau ^{\prime })\rightarrow 𝒞_{\mu \nu ,\alpha \beta }(k,\tau ,\tau ^{\prime })\alpha (\tau )\alpha (\tau ^{\prime })$$ (1) where $`\alpha (\tau )`$ measures the change in the scaled comoving defect density. More specifically, $`\alpha (\tau )=N\tau ^2\rho _c=N\mu (Ha\tau \gamma )^2`$ where $`N`$ is a normalisation factor chosen such that $`\alpha (\tau )=1`$ deep in the radiation era, and $`\rho _c`$ is the comoving energy density in strings. Note that $`\alpha `$ depends on $`\gamma `$ and hence (see figure 1) on $`\widehat{C}`$: this $`\widehat{C}`$ dependence of $`\alpha `$ is plotted in figure 2. (A toy numerical implementation of this rescaling is sketched below.) The procedure given in equation (1) was also followed in , in the context of a semi-analytical model for string induced fluctuations. The rationale lies in the fact that UETCs are proportional to the square of the defect comoving density, if one assumes no correlation between individual defects (as in the model of ). The presence of inter-defect correlations, non-negligible for small wavelengths, means that the shape of the correlators could in principle also change with $`\gamma `$. However, it is difficult to extract this information from the 3-scale model, and we expect the effect upon the CMB and LSS to be small. Naturally all UETCs (scalar, vector, and tensor) are multiplied by the same time-dependent factor. This then propagates to the various eigenmodes. It is crucial to notice that it is the comoving density, and not the physical density, that enters the above scaling argument. The conversion factor between physical and comoving densities changes appreciably from the radiation to the matter epoch. Indeed for large $`\widehat{C}`$ it accounts for most of the drop in defect physical density, and as a result the strings’ comoving density (or $`\alpha (\tau )`$ plotted in figure 2) drops only by about 20%. For small $`\widehat{C}`$ the comoving energy density actually increases as can be seen from figure 2. We reran the codes used in with these modifications (working throughout with large $`\widehat{C}`$), and with two purposes in mind: to compare the effects of neglecting the RM transition with approximations used by other groups, and to survey parameter space in cosmic string models. ## IV Results We found that the effect of neglecting the RM transition is not very large, and certainly much smaller than the errors induced by using the Albrecht and Stebbins (AS) approximation (or a variation thereof ).
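To make the renormalisation of equation (1) concrete, the toy script below applies it to a schematic scaling UETC on a grid of conformal times. Both the Gaussian correlator shape and the smooth 20% drop adopted for $`\alpha (\tau )`$ are placeholder assumptions, standing in for the measured quantities shown in Figs. 1 and 2.

```python
# Toy implementation of the UETC rescaling of Eq. (1):
#   C(k, tau, tau') -> C(k, tau, tau') * alpha(tau) * alpha(tau')
# The correlator shape and the form of alpha(tau) below are placeholders.
import numpy as np

def alpha(tau, tau_eq=1.0, drop=0.20):
    # ~1 deep in the radiation era, falling smoothly by ~20% well into the
    # matter era (the large-C-hat behaviour described in the text).
    return 1.0 - drop * 0.5 * (1.0 + np.tanh(np.log(tau / tau_eq)))

def toy_scaling_uetc(k, taus):
    # A scaling ansatz: peaked on the equal-time diagonal and falling off
    # away from it (purely illustrative, not a measured correlator).
    t1, t2 = np.meshgrid(taus, taus, indexing="ij")
    return np.exp(-0.5 * (k * (t1 - t2)) ** 2) / np.sqrt(t1 * t2)

k = 2.0
taus = np.geomspace(0.01, 100.0, 81)      # conformal times straddling tau_eq
C = toy_scaling_uetc(k, taus)
a = alpha(taus)
C_rescaled = C * np.outer(a, a)           # Eq. (1)

i_rad, i_mat = 10, 70                     # one early-time and one late-time entry
print("diagonal suppression, radiation era: %.3f" % (C_rescaled[i_rad, i_rad] / C[i_rad, i_rad]))
print("diagonal suppression, matter era   : %.3f" % (C_rescaled[i_mat, i_mat] / C[i_mat, i_mat]))
# The early-time correlator is essentially unchanged (alpha ~ 1), while the
# late-time one is suppressed by alpha^2 ~ 0.64 for a 20% drop in alpha.
```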
As suggested in , $`G\mu `$ normalized to COBE increases by a factor of the same order as the decrease in string density in the matter epoch. With standard parameters ($`\mathrm{\Omega }=1`$, $`\mathrm{\Omega }_\mathrm{\Lambda }=0`$, $`\mathrm{\Omega }_b=0.05`$ and $`H_0=50`$ km/s/Mpc) we found that $`G\mu `$ increases from $`1\times 10^6`$ to $`1.2\times 10^6`$ . The COBE normalized $`C_{\mathrm{}}`$ spectrum, in this cosmology, hardly changes at all. The bias in spheres with radius 100 Mpc $`h^1`$ changes from 4.9 to 4.2: the inclusion of the RM transition improves the bias problem of cosmic strings but not by much. The effect on the shape and normalization of $`P(k)`$ is plotted in Fig. 3. By contrast, using the AS approximation instead of a Boltzmann code (or one of its truncations ) induces large uncertainties in normalization and shape of $`P(k)`$. (We obtained the AS curves of Fig. 3 by replacing the variables $`\xi `$ and $`\chi `$ of with the $`\xi `$ and $`\overline{\xi }`$ of the 3-scale model respectively.) Although a small effect, only once the RM transition has been properly taken into account can we start exploring dependence on cosmological parameters. We considered $`\mathrm{\Omega }_b`$ and $`h`$, and solved for the $`C_{\mathrm{}}`$ and $`P(k)`$ on a grid with $`0.4<h<0.8`$ and $`0.01<\mathrm{\Omega }_bh^2<0.015`$. The latter satisfies nucleosynthesis constraints, the former accommodates currently accepted ranges of the Hubble parameter. In Figs. 4 and 5 we plot $`C_{\mathrm{}}`$ and $`P(k)`$ extracted from this grid to illustrate the observed variations. The $`C_{\mathrm{}}`$ spectrum of anisotropies induced by strings seems insensitive to $`\mathrm{\Omega }_b`$, but depends sensitively on $`h`$ (Fig. 4). In particular, local cosmic strings, unlike global defects, exhibit a Doppler peak, but only if the Hubble constant is not too large. The results of are therefore not generic, but depend on the choice of $`h`$; indeed they are consistent with the results of in which strings are directly coupled to a Boltzmann code. In that work it was found that strings do not have a Doppler peak, with $`h=0.8`$. Notice how the dependence of the $`C_{\mathrm{}}`$ on $`h`$ is non-linear, with a much larger jump from $`h=0.7`$ to $`h=0.8`$ than, say, from $`h=0.6`$ to $`h=0.7`$. The effect of both $`h`$ and $`\mathrm{\Omega }_b`$ upon the CDM power spectrum is very small if $`P(k)`$ is plotted using units of $`h`$/Mpc for wavenumbers (Fig. 5). This is in agreement with small $`k`$ analytical predictions (eg. ), but breaks down at large $`k`$ as can be seen by the diverging curves in Fig. 5. The string bias problem at 100 Mpc $`h^1`$ is clearly not solved by changing $`h`$ or $`\mathrm{\Omega }_b`$ (but see for other solutions). Finally, recall that we have worked with large $`\widehat{C}`$. For small $`\widehat{C}`$’s the strings’ comoving energy density increases across the RM transition and the bias problem is in fact worse. ## V Conclusion To summarize, in this paper we have studied the impact of GBR on cosmic string network evolution, and properly accounted for its properties during the RM transition. Not only did we suggest that this departure from scaling lasts longer than that obtained in other models, but we re-iterated the existence of two different scaling solutions. The full scaling solution is the relevant one for structure formation and cannot be observed in numerical simulations. 
These results, with large $`\widehat{C}`$, were then used to update the calculations of , and cut a first, safe slice through parameter space in string scenarios. Dependence on $`h`$ and $`\mathrm{\Omega }_b`$ was considered, the conclusions being that the CMB results are sensitive to $`h`$ but less so to $`\mathrm{\Omega }_b`$. If forthcoming experiments were to reveal a $`C_{\ell }`$ spectrum without secondary Doppler peaks (a tell-tale signal of a defect theory ), then we would lose the ability to determine $`\mathrm{\Omega }_b`$, since information on $`\mathrm{\Omega }_b`$ is hidden in the secondary Doppler peaks, which are generically erased by incoherence in defect scenarios. However, for $`h`$, our conclusions are more positive. Local strings exhibit a Doppler peak whose height, we have demonstrated, decreases quite sharply with an increasing Hubble constant. Larger regions of parameter space require surveying. In particular, as pointed out in refs , strings with a cosmological constant are a promising combination. However, we feel that all treatments, including our own, become too qualitative to be reliable if curvature or $`\mathrm{\Lambda }`$ are present. Considerable analytical and numerical work with $`\mathrm{\Omega }_\mathrm{\Lambda }\ne 0`$, and in open models, is required before a more quantitative discussion is possible. The first step in the context of the method we are proposing here would be to include $`\mathrm{\Lambda }`$ in the 3-scale model. ## Acknowledgements We thank Andy Albrecht, Anne Davis, Pedro Ferreira, Mark Hindmarsh, Tom Kibble and Neil Turok for useful discussions and comments. We acknowledge financial support from PPARC (E.J.C and D.A.S) and the Royal Society (J.M).
no-problem/9903/astro-ph9903026.html
ar5iv
text
# Lost in (Moduli) Space ## Lost in (Moduli) Space The duality revolution which occurred during the past five years has enormously advanced our knowledge and perspective regarding the theory formerly known as superstrings. There is now overwhelming evidence to the effect that there is a unique string theory (often called M theory), which contains, in addition to strings, a variety of membranes and even particle-like objects. In a kind of Planckian democracy, none of these degrees of freedom can truly be regarded as more fundamental than the others. In fact in one limit of this theory (somewhat confusingly called the M theory limit), the string degrees of freedom are completely absent. This recent explosion of our understanding of nonperturbative string physics nicely complements the impressive edifice of perturbative string knowledge built up during the previous decade. All of which leads to the obvious question: * If we know so much about string theory, why can’t we predict anything? The main reason for this embarrassing irony is that string theory does not have a unique (consistent, stable) ground state. This was known to be true at the perturbative level for many years, but only recently have we realized that this disturbing property appears to persist even when we bring to bear our full arsenal of nonperturbative string dynamics. At low energies string theory is described (mostly, at least) by an effective field theory; without a definite choice for the string vacuum we cannot even specify the degrees of freedom of this low energy theory, let alone the form or parameters of the effective Lagrangian. The problem is that we are lost in moduli space. The effective field theory limit of string theory contains a number of scalar fields, called moduli, with flat potentials. This is not surprising since most known string vacua preserve some spacetime supersymmetry, and moduli are a generic feature of supersymmetric field theories. We can parametrize a moduli space by the vacuum expectation values (vevs) of these scalar fields. As we move around in this moduli space, the effective field theory, defined by shifting by these vevs, can vary enormously. Not only are there variations in couplings, but at special points in moduli space the number and type of light degrees of freedom changes. We have only just begun to feel our way around the intricate tapestry which represents the full moduli space of consistent string vacua. We have probed around the edges, which represent the various possible perturbative limits of string theory, as well as the 11-dimensional (non-stringy) M theory limit. The interior remains largely terra incognita, although string duality relations help us trace the threads connecting these different limits. We know neither the dimensionality nor the topology of the full string moduli space. Nor do we know the connectivity of this space. The tapestry may be very frayed, with many ragged patches connected to the main body by only a few threads; there may even be “string islands”, points or patches of moduli space completely disconnected from the main body. One (at least) of these points ought to correspond to the Standard Model particle physics and FRW cosmology that we observe at low energies and long distance scales. But where? ## Moduli and Cosmology It is not surprising that the existence of moduli fields has implications for cosmology. Indeed this is true even for approximate moduli, i.e., scalars whose flat potentials are lifted by nonperturbative effects.
Since the number and type of moduli vary as we move around in moduli space, most of what we can say about string cosmology is very dependent on where we think the string ground state is. Certain moduli, however, have slightly more robust characteristics. Perturbative limits of string theory contain a dilaton, a weakly coupled scalar whose vev determines the string coupling, and thereby the basic relationships between the Planck scale, compactification scales, and gauge couplings. These same perturbative limits also contain a pseudoscalar axion, and indeed axions seem to be generic features of large classes of string vacua. Compactified dimensions in string theory can assume a wide variety of geometries; nevertheless certain features of the modulus describing the “overall” scale of compactification are somewhat generic. Another interesting class of moduli for cosmology are what I will call “invisible” moduli. The vevs of these scalars describe either dimensionless couplings or new mass and length scales associated with hidden sectors: exotic matter and gauge fields which couple only gravitationally to ordinary matter. The good news for cosmologists is that string moduli provide natural candidates for the scalar fields that may perform some cosmologically important tasks. These include the inflaton and perhaps quintessence. The bad news for cosmologists is that generic regions of string moduli space won’t look good cosmologically. Indeed generically moduli are more likely to be a cosmological headache than a panacea. The devil, furthermore, is in the details, and generically these details are difficult to tackle. The dilaton, for example, has properties which make it an attractive candidate for the inflaton. The dilaton acquires a nonvanishing potential only from nonperturbative effects; the relative gradient in this potential is naturally of order the inverse Planck mass, as desired for slow roll inflation. On the other hand, there are a number of problems with dilaton inflation. Specific scenarios require additional assumptions about the nonperturbative contributions to the potential; these assumptions are hard to pin down with our current level of knowledge. In many scenarios there is the additional problem that the dilaton kinetic energy dominates the potential energy. Furthermore, near any of the perturbative limits of string theory, the dilaton is generically unstable; its vev wants to run off to infinity, producing an infinitely weakly coupled theory. One can postulate nonperturbative fixes for this runaway behavior, but such scenarios are neither rigorous nor robust. ## String Islands and the Cosmological Constant The cosmological constant problem is the most notorious and vexing problem of quantum gravity. Any attempt to unify quantum mechanics with gravity leads (at least naively) to the conclusion that quantum fluctuations in the vacuum (i.e. zero-point energies) must couple to gravity. Since these zero-point energy sums are typically divergent, their natural scale in a quantum field theory is some ultraviolet cutoff $`U`$. One thus expects to generate a cosmological constant of order $$\mathrm{\Lambda }\sim U^4,$$ (1) and that the entropy associated with a system of linear size $`L`$ scales like $$S\sim L^3U^3.$$ (2) Note that $`\mathrm{\Lambda }`$ is positive if bosonic modes dominate the sum, and negative if fermionic modes dominate. Supersymmetric vacua have zero cosmological constant, due to bose-fermi cancellations.
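The size of the mismatch implied by Eq. (1) is easy to quantify. The script below is a rough illustration using standard ballpark values for the candidate cutoff scales and for the observed vacuum energy scale (of order a milli-electronvolt); none of these numbers are taken from the text.

```python
# Rough size of Lambda ~ U^4 for different choices of the cutoff U, compared
# with the observed vacuum energy density scale (~2e-3 eV)^4.  All numbers
# are standard ballpark values (in GeV), not taken from the text.
import math

cutoffs = {
    "Planck mass"           : 1.2e19,
    "SUSY breaking ~ 1 TeV" : 1.0e3,
    "Higgs vev ~ 100 GeV"   : 1.0e2,
}
observed_scale = 2e-12               # GeV, i.e. ~2e-3 eV

for name, U in cutoffs.items():
    excess = (U / observed_scale) ** 4
    print("%-22s  Lambda / Lambda_obs ~ 10^%d" % (name, round(math.log10(excess))))
# A Planck-scale cutoff overshoots by ~10^123 and a TeV-scale one by ~10^59;
# both are "roughly 10^100" in the order-of-magnitude-in-the-exponent sense
# used in the next paragraph.
```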
The cosmological constant problem arises because any reasonable choice for the ultraviolet cutoff scale $`U`$ leads to a $`\mathrm{\Lambda }`$ which exceeds the observational upper bound by a ridiculously large multiple, roughly $`10^{100}`$ (here I am invoking “roughly” as per standard usage in cosmology, meaning order of magnitude in the exponent). This is because particle physics scales such as the Planck mass ($`10^{19}`$ GeV), the Standard Model Higgs vev ($`10^2`$ GeV), and the apparent supersymmetry-breaking scale ($`10^2`$–$`10^3`$ GeV), greatly exceed the energy scale characterizing the current matter density of the universe ($`10^{-3}`$ eV). This is a disturbing problem, made worse by our desire to allow a rather large effective $`\mathrm{\Lambda }`$ during an earlier inflationary epoch. It should also be noted that recent ideas about quintessence in no way address the main cosmological constant problem; rather quintessence models evolve one very small effective $`\mathrm{\Lambda }`$ value to another very small effective $`\mathrm{\Lambda }`$ value. String theory, which is (if nothing else) a consistent theory of quantum gravity, ought to give us some profound insight into this problem. Unfortunately even with recent advances of the duality revolution, the cosmological constant problem remains a complete mystery even in string theory. A possible ray of hope is provided by a suggestion of Witten, which ties in nicely to some recent work on the idea of “string islands”. Witten observed that in 2+1 spacetime dimensions you can have supersymmetry of the vacuum (and thus $`\mathrm{\Lambda }`$$`=`$$`0`$) without supersymmetry of the spectrum (i.e. no bose-fermi degeneracy for particles). Furthermore, string theory in 2+1 dimensions actually becomes string theory in 3+1 dimensions in the limit where the string coupling (determined by the dilaton vev) goes to infinity. This peculiar phenomenon is similar to that which leads to the 11-dimensional M theory limit. The strongly-coupled 2+1 dimensional string theory has light solitons, which actually behave exactly like a set of light Kaluza-Klein modes. These solitonic degrees of freedom thus represent the degrees of freedom of a third spatial dimension compactified on a circle. In the strong coupling limit the radius of this circle becomes infinite, and a 3+1 dimensional theory results. This suggests a method for finding non-supersymmetric string vacua with zero cosmological constant, by starting with 2+1 dimensional string vacua which contain a dilaton. Note that it is important for this trick that the 2+1 string vacua do not contain any geometrical moduli associated with compactifications from higher dimensions. Such moduli would invalidate the original argument, leading presumably to a nonzero $`\mathrm{\Lambda }`$ whose scale is set by the square of the bose-fermi mass splittings divided by the compactification scale; this is too large unless we manage to keep all the mass splittings below about 100 GeV. Thus the pure version of this trick requires string islands (peninsula?): string vacua which contain the dilaton and its axion partner, but do not contain any geometrical moduli. Surprisingly, such string islands are known to exist even in the weakly-coupled limit of the heterotic string. There is, for example, a 3+1 dimensional string vacuum whose low energy limit is pure 3+1 dimensional $`N=4`$ supergravity. Many more examples have been constructed recently.
I should emphasize that even if one could exhibit string islands corresponding to non-supersymmetric vacua with zero cosmological constant, it is another matter entirely to show that any such vacuum is consistent with the Standard Model. An important conclusion for cosmologists, however, is that current thinking about making string vacua which are more “realistic” seems to favor reduced sets of moduli. A broader conclusion is that in the long run cosmological considerations are likely to play an important role in resolving mysteries about the vacuum state of string theory. ## Delightful D branes As mentioned above, string theory abounds with membranes of various types and dimensionalities. Of particular interest are D branes, objects which, considered as backgrounds for string propagation, preserve part of the underlying spacetime supersymmetry. D branes have a number of special properties, and occur with various dimensionalities (thus we have D instantons, D particles, D strings, and D membranes with up to 9 spatial dimensions). From the string point of view a D brane is a soliton whose mass (or tension, or mass per unit volume) is proportional to the inverse of the string coupling $`g_s`$. This means, among other things, that D branes become light in those regions of string moduli space where the string coupling is large. This simple fact has led to a new interpretation for singularities of various fixed spacetime backgrounds in which strings propagate. These singularities are associated with compactifying some of the original 9 or 10 spatial dimensions onto orbifolds, conifolds, or other singular geometries. A D brane can wrap around a D-dimensional closed cycle of this compact space; when this cycle is shrunk to a point a singularity appears, associated with the vanishing mass of the wrapped D brane. This observation provides a generic and physically intuitive mechanism for “smoothing” spacetime singularities. The obvious question, of course, is whether this D brane smoothing also applies to cosmological singularities. Recent work suggests that the answer is yes, although perhaps not in all cases. This is an exciting avenue for future research. ## Dark Matter in Parallel Universes The mass scale at which string physics becomes stringy is known as the string scale, $`m_s`$. For the weakly coupled heterotic string, the string scale can be shown to be about $`10^{18}`$ GeV, only about an order of magnitude smaller than the Planck mass, $`m_p`$. However in other regions of string moduli space the string scale can be much smaller. Since we don’t know where we are in moduli space, we also don’t know the value of $`m_s`$. The most we can say at present is that $`m_s`$ is greater than about 1 TeV, due to nonobservation of stringy effects in the Tevatron collider experiments. It is tempting to imagine that perhaps the string scale is not too far above the current lower bound, in the multi-TeV region which will eventually be accessible to colliders. If this bold hypothesis is correct, we are immediately faced with the problem of explaining the small ratio $`m_s/m_p`$. In string theory this small ratio is presumably related to certain moduli having very large or very small vevs, as measured in units of $`m_s`$. If these moduli are “invisible” moduli of the type discussed earlier, then their existence may have no other direct consequences for observable low energy physics. 
On the other hand, this small ratio could be a consequence of large compactified dimensions, through a scaling relation like
$$m_p^2\sim m_s^{n+2}R^n,$$ (3)
where $`R`$ is the size of the large compact dimensions, and $`n`$ is the number of such dimensions. For $`m_s`$ of order a TeV and $`n\geq 2`$, $`R`$ in the above relation can be as large as 1 mm! (A rough numerical check of this estimate is sketched below.) Actually, this form of the large extra dimensions scenario is completely ruled out by particle physics constraints, unless we make an additional bold hypothesis: that the entire Standard Model gauge theory is confined to live on a membrane orthogonal to the large extra dimensions. Since D branes are known to have supersymmetric gauge theories confined to their worldvolumes, this hypothesis fits rather nicely with our current picture of string theory. If correct, the graviton has many massive Kaluza-Klein copies, but the Standard Model particles know of the existence of large extra dimensions only through coupling to gravity. There are, not surprisingly, many interesting cosmological implications of this scenario. One intriguing observation is that, if the Standard Model gauge theory is confined to some configuration of branes, then there may be other gauge theories confined to other brane configurations, separated from us in one or more of the large extra dimensions. Such hidden sectors are very much like parallel universes, except that they are gravitationally coupled to the visible universe. If these other “brane-worlds” contain stable matter, planets, stars, galaxies, etc., these will all appear to us as dark matter. Since the laws of (non-gravitational) physics could be quite different in these parallel worlds, qualitatively new forms of macroscopic matter may also be produced. It would be interesting to determine the current observational bounds on (i) dark “planets” in the vicinity of our solar system, (ii) dark “stars” within our galaxy and the galactic halo, and (iii) the density and distribution of dark “galaxies”.
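Returning to the scaling relation (3), here is the rough numerical check promised above; it is nothing more than order-of-magnitude arithmetic. Take $`m_s\approx 1`$ TeV and $`n=2`$:
$$R\sim \left(\frac{m_p^2}{m_s^{n+2}}\right)^{1/n}=\frac{m_p}{m_s^2}\sim \frac{10^{19}\text{ GeV}}{(10^3\text{ GeV})^2}\sim 10^{13}\text{ GeV}^{-1}\sim 2\text{ mm},$$
using $`1\text{ GeV}^{-1}\approx 2\times 10^{-16}`$ m. For larger $`n`$ the radius shrinks rapidly, to a few times $`10^{-14}`$ m already for $`n=6`$.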
# State determination in continuous measurement ## I Introduction There is currently great interest in experiments which obtain useful information about a quantum system or state in single runs of an experiment. Very recently it has been possible to distinguish the quantum mechanical and classical models of the interaction of an atom with a mode of a high finesse optical cavity as the result of continuous monitoring of the output light while a single atom passes through the cavity . As a result of this continuous monitoring the quantum mechanical backaction of the measurement process may be expected to have a significant effect on the evolution of individual runs of the experiment. Moreover, it may be possible in the future to modify the evolution of the system through feedback based on this continuous measurement . Current experimental technology, such as that described in , is reaching the point at which determining the state of the system and observing the effects of backaction and feedback in a single run of an experiment is a real possibility. With this situation in mind, we discuss the identification of the state of the system following a period of continuous observation and the extent to which this state can be tracked, taking into account factors such as imperfect initial knowledge of the state and imperfect detection efficiency. We focus on a model of continuous position measurement of a mechanical oscillator which is relevant to the experiment of Mabuchi et al. but which also has relevance to other instances of interferometric position monitoring such as gravitational wave detection. The problem of describing quantum systems undergoing continuous measurement has attracted much theoretical interest in recent years. As discussed by Wiseman these theories admit a variety of interpretations; as tools for efficient stochastic calculation of ensemble averages in lieu of solving master equations , as equations describing the evolution of systems conditioned on measurements and as a description of the evolution of a system coupled to an environment, in which collapse of the wavefunction is supposed to be associated with the coupling to the environment . Here, we take the second viewpoint namely that the conditioned state represents the observer’s best description of the system state given the results of the continuous measurement process. Adopting the first or third viewpoints one is led to describe the system by a pure state vector throughout the evolution although the reasons for doing so are somewhat different in each case. By contrast, a description of one’s conditioned state of knowledge necessarily requires mixed states in order to account for incomplete knowledge of the system. From this viewpoint the fundamental equation for the conditioned evolution is the stochastic master equation (SME) . This is able to account for the effects of mixed initial states, imperfect detection efficiencies and the existence of unmeasured couplings to the environment. However, to date, relatively little work has attempted to address the evolution of the conditioned state in any of these situations . In this paper we consider a system which is simple enough that almost all the work can be done analytically and which admits a treatment of all of these imperfections. This helps in developing intuition about the role of SME’s and their possible relevance to experiments. 
A projective measurement has the property that if the result of the measurement is known, the state after the measurement is pure, and depends only on the measurement result. It would be hoped that in a continuous measurement there would be some finite interval of time after which the measurement has effectively given rise to a projection, so that the system is placed in a particular state which depends only on the sequence of measurement results and which can be calculated without knowledge of the initial state. If the resulting state is pure then a stochastic Schrödinger equation (SSE) would be a perfectly adequate tool for describing the subsequent system evolution. In this paper we investigate the conditions which lead to such an effective collapse and over what timescale it takes place. This is made possible by considering density matrices and the SME rather than wave vectors and the SSE. In a real experiment there will also be uncontrolled, unmeasured couplings of the system to the environment and in this case the effects of the measurement will compete not only with the coherent internal dynamics of the system but also with the randomizing effects of the coupling to the bath. This may lead to mixed conditioned states even after long periods of continuous measurement and limit the observer’s ability to make inferences about the system state. Understanding the process by which the conditioned state may collapse onto a pure state and the effects of noise as described by the SME allows us to define conditions under which continuous measurements in real experiments are approximations to ideal measurements. This paper is organized as follows. In Sec. II we establish our simplified model of the continuous position measurement of an oscillator and solve the SME for Gaussian initial states. We find the time over which the second-order moments approach their steady-state values and calculate the entropy of the conditioned state as it becomes pure. In Sec. III we discuss the classical problem of state identification for the noisy measurement of the position of an oscillator and derive a kind of uncertainty principle relating the observation and process noises if the classical model is to reproduce the SME. In Sec. IV we show that the time-scale over which the second-order moments of the conditioned state reach their steady state is the same as that over which the conditioned state is completely determined by the measurement record. Section V discusses the effect of heating, noise and detection inefficiency on these conclusions. ## II Solving the Stochastic Master Equation ### A A Generalized Model for Continuous Position Measurement In this paper we shall consider the abstract model of continuous position measurement discussed by Caves and Milburn with a harmonically bound rather than a free measured particle. Projective position measurements are imagined to be made on a sequence of meters coupled briefly to the system and the limit of very frequent meter interactions and very broad initial meter position distributions is taken. This leads to a continuous evolution for the system of interest. Although this model should correspond in some limit to any continuous position measurement of a single oscillator at the standard quantum limit, one system which does realize it at least approximately is the dispersive regime of single atom cavity quantum electrodynamics (QED).
In this system the position of an atom inside a high-finesse optical cavity causes a phase shift of the field driving the cavity which can be monitored by homodyne detection. The SME for the conditioned evolution of the system in its Itô form is
$$d\rho _\text{c}=-i[H_{\text{sys}},\rho _\text{c}]dt+2\alpha 𝒟[x]\rho _\text{c}dt+\sqrt{2\alpha }ℋ[e^{-i\varphi }x]\rho _\text{c}dW.$$ (1)
The superoperators $`𝒟[c]`$ and $`ℋ[c]`$ acting on a density matrix $`\rho `$ are $`𝒟[c]\rho =c\rho c^{\dagger }-\frac{1}{2}c^{\dagger }c\rho -\frac{1}{2}\rho c^{\dagger }c`$ and $`ℋ[c]\rho =c\rho +\rho c^{\dagger }-\mathrm{Tr}(c\rho +\rho c^{\dagger })\rho `$ where $`c`$ is an arbitrary operator. We will imagine that the atom is harmonically bound,
$$H_{\text{sys}}=\frac{p^2}{2m}+\frac{m\omega ^2x^2}{2}.$$ (2)
The constant $`\alpha `$ describes the strength of the measurement interaction and in the cavity QED example depends on the strength of the coherent driving and of the damping of the cavity. For the moment we will consider a one-sided cavity and perfect detection so that all the output light is detected. This assumption will be relaxed in Sec. (V). In a generalization of the Caves and Milburn model we will allow projective measurements of any quadrature of the meters, not just position, since this can be realized in the cavity QED experiments by varying the phase $`\varphi `$ of the local oscillator in the homodyne detection of the output light. The resulting measurement current $`i=\frac{dQ}{dt}`$ (suitably scaled) is
$$dQ=\mathrm{cos}(\varphi )\langle x\rangle _\text{c}dt+\sqrt{\frac{1}{8\alpha }}dW.$$ (3)
This stochastic master equation with the full dependence on $`\varphi `$ was discussed by Diosi in the context of a phenomenological model of position measurement through photon scattering where the kind of measurement made on the scattered photon determines the value of $`\varphi `$. Clearly if we choose $`\varphi =0,\pi `$ the homodyne detection is an effective measurement of the atomic position, whereas if $`\varphi =\pi /2,3\pi /2`$ the measurement results are independent of the system state and only contain information about the noisy potential seen by the atom. For $`\varphi =\pi /2,3\pi /2`$ the conditioned evolution is linear in the system state. This is somewhat like a continuous quantum eraser where the continuous measurement of one quadrature of the probe system destroys the position information written into the other quadrature of the probe. For $`\varphi =0`$ the SSE corresponding to this model in the case of pure states of the system has been considered previously. In this work we will use a simpler means of solving the evolution equations which straightforwardly applies to mixed states. It has been shown by Jacobs and Knight that the SSE corresponding to Eq. (1) is one for which Gaussian pure states remain Gaussian and pure under the evolution. Thus, if the system is initially in a mixture of Gaussian pure states, the conditioned state will remain Gaussian under the SME (1). This property holds true for single mode systems where the Hamiltonian is at most quadratic in $`x`$ and $`p`$ and the operator $`c`$ appearing in the Lindblad term $`𝒟[c]`$ is linear in $`x`$ and $`p`$ and in all likelihood for multimode linear systems also.
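To see the structure of Eq. (1) in code, the following is a minimal numerical sketch of the superoperators $`𝒟`$ and $`ℋ`$ and of a single Euler-Maruyama update of the conditioned density matrix. It is an illustration only, assuming numpy is available; the toy operators, parameter values and function names are placeholders rather than anything specified in the text.

```python
import numpy as np

def D(c, rho):
    # Lindblad dissipator: D[c]rho = c rho c^dag - (c^dag c rho + rho c^dag c)/2
    cd = c.conj().T
    return c @ rho @ cd - 0.5 * (cd @ c @ rho + rho @ cd @ c)

def H(c, rho):
    # measurement superoperator: H[c]rho = c rho + rho c^dag - Tr(c rho + rho c^dag) rho
    cd = c.conj().T
    t = c @ rho + rho @ cd
    return t - np.trace(t) * rho

def sme_step(rho, H_sys, x, alpha, phi, dt, rng):
    # one Euler-Maruyama step of Eq. (1); dW is a Gaussian increment of variance dt
    dW = rng.normal(0.0, np.sqrt(dt))
    drho = (-1j * (H_sys @ rho - rho @ H_sys) * dt
            + 2.0 * alpha * D(x, rho) * dt
            + np.sqrt(2.0 * alpha) * H(np.exp(-1j * phi) * x, rho) * dW)
    rho = rho + drho
    rho = 0.5 * (rho + rho.conj().T)       # re-hermitize against integration error
    return rho / np.trace(rho).real        # renormalize

# toy two-level example, purely to exercise the functions
rng = np.random.default_rng(0)
x_op = np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)
H_sys = np.diag([0.0, 1.0]).astype(complex)
rho = 0.5 * np.eye(2, dtype=complex)
rho = sme_step(rho, H_sys, x_op, alpha=0.1, phi=0.0, dt=1e-3, rng=rng)
```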
If we restrict ourselves to Gaussian initial states — for example to thermal states of the oscillator — then there are only five quantities which completely define the state — $`\langle x\rangle `$, $`\langle p\rangle `$, $`(\mathrm{\Delta }x)^2=\langle x^2\rangle -\langle x\rangle ^2`$, $`(\mathrm{\Delta }p)^2=\langle p^2\rangle -\langle p\rangle ^2`$ and $`\mathrm{\Delta }x\mathrm{\Delta }p_{\text{sym}}=\frac{1}{2}(\langle xp+px\rangle -2\langle x\rangle \langle p\rangle )`$. From now on we will use $`\langle c\rangle `$ to indicate the conditioned expectation value Tr$`(c\rho _\text{c})`$. The requirement that the initial states be Gaussian is not unduly restrictive since these are typically the states that are most stable in linear systems and would therefore be a reasonable assumption for an initial state. Moreover, there is numerical evidence that non-Gaussian pure initial states become approximately Gaussian under the stochastic Schrödinger equation evolution corresponding to the SME (1) on timescales fast compared to those considered here. It has also been shown that in at least one such linear system an arbitrary density matrix can eventually be written as a probabilistic sum over Gaussian pure states after sufficient evolution subject to the unconditioned master equation. In considering such a linear system we are in effect specifying a semiclassical evolution, since the equations of motion for the Wigner function of the state in the unconditioned evolution are similar to the classical Liouville equations for a phase space density. However we here include the important quantum feature of the measurement backaction, which is represented by the fact that the momentum diffusion is determined by the accuracy with which the particle’s position is monitored. So, as the noise in the position measurement is decreased by increasing $`\alpha `$, the momentum diffusion expressed by the Lindblad term in the SME (1) increases. Following Breslin and Milburn we can derive a system of differential equations for the first and second-order moments from the SME (1) since $`d\langle c\rangle =`$Tr$`(cd\rho _\text{c})`$. A similar calculation has been performed previously. We define the dimensionless quantities $`\stackrel{~}{x}=x/\sqrt{\hbar /2m\omega },\stackrel{~}{p}=p/\sqrt{\hbar m\omega /2}`$, the second-order moments $`V_{xx}=2m\omega (\mathrm{\Delta }x)^2/\hbar `$, $`V_{pp}=2(\mathrm{\Delta }p)^2/\hbar m\omega `$, $`V_{xp}=2\mathrm{\Delta }x\mathrm{\Delta }p_{\text{sym}}/\hbar `$ and a dimensionless parameter describing the relative strengths of the measurement and harmonic dynamics, $`r=m\omega ^2/2\hbar \alpha `$. The Heisenberg uncertainty principle now requires that $`V_{xx}V_{pp}\geq 1`$. A pure state has the property that $`V_{xx}V_{pp}-V_{xp}^2=1`$, representing the fact that a Gaussian pure state is a minimum uncertainty state for some pair of conjugate quadrature variables.
In terms of these new quantities, the Itô stochastic differential equations for the first and second-order moments and the measured photocurrent are
$$d\langle \stackrel{~}{x}\rangle =\omega \langle \stackrel{~}{p}\rangle dt+\sqrt{\frac{2\omega }{r}}\mathrm{cos}(\varphi )V_{xx}dW,$$ (5)
$$d\langle \stackrel{~}{p}\rangle =-\omega \langle \stackrel{~}{x}\rangle dt+\sqrt{\frac{2\omega }{r}}\left(\mathrm{cos}(\varphi )V_{xp}-\mathrm{sin}(\varphi )\right)dW,$$ (6)
$$d\stackrel{~}{Q}=\mathrm{cos}(\varphi )\langle \stackrel{~}{x}\rangle dt+\sqrt{\frac{r}{2\omega }}dW,$$ (7)
$$\frac{1}{\omega }\frac{dV_{xx}}{dt}=2V_{xp}-\frac{2}{r}\mathrm{cos}^2(\varphi )V_{xx}^2,$$ (8)
$$\frac{1}{\omega }\frac{dV_{pp}}{dt}=-2\left(1-\frac{\mathrm{sin}(2\varphi )}{r}\right)V_{xp}+\frac{2}{r}\mathrm{cos}^2(\varphi )-\frac{2}{r}\mathrm{cos}^2(\varphi )V_{xp}^2,$$ (10)
$$\frac{1}{\omega }\frac{dV_{xp}}{dt}=V_{pp}-\left(1-\frac{\mathrm{sin}(2\varphi )}{r}\right)V_{xx}-\frac{2}{r}\mathrm{cos}^2(\varphi )V_{xx}V_{xp}.$$ (12)
As in earlier treatments, the Itô rules for stochastic differential equations and the properties of Gaussian states result in deterministic equations for the conditioned second-order moments which are decoupled from the equations for the means. The constant term in the equation for $`V_{pp}`$ refers to the momentum diffusion due to the position measurement and remains in the master equation for the unconditioned evolution. The non-linear terms describe the conditioning of the state on the measurement. The noisy contribution to the equation for $`d\langle \stackrel{~}{x}\rangle `$ seems a little like a stochastic impulsive force, however it is perhaps better to think of this term as updating the expected position given the measurement result $`d\stackrel{~}{Q}`$ in analogy with classical Bayesian state estimation. Equations like those above for the second-order moments of the conditioned state arise very frequently in classical continuous-time observation and control problems. They can be collected into a Riccati matrix differential equation for the covariance matrix,
$$\frac{d}{dt}V=\omega \left(C-VBV-DV-VA\right),$$ (14)
$$V=\left(\begin{array}{cc}V_{xx}& V_{xp}\\ V_{xp}& V_{pp}\end{array}\right),$$ (17)
$$A=D^T=\left(\begin{array}{cc}0& \left(1-\frac{\mathrm{sin}(2\varphi )}{r}\right)\\ -1& 0\end{array}\right),$$ (20)
$$B=\left(\begin{array}{cc}\frac{2\mathrm{cos}^2\varphi }{r}& 0\\ 0& 0\end{array}\right),$$ (23)
$$C=\left(\begin{array}{cc}0& 0\\ 0& \frac{2\mathrm{cos}^2\varphi }{r}\end{array}\right),$$ (26)
where the entries of $`A=D^T`$, $`B`$ and $`C`$ are fixed by requiring consistency with Eqs. (8)-(12). A single variable Riccati equation which arose from the stochastic Schrödinger equation for this system was found and solved previously. In practice it may not be homodyne but rather heterodyne detection which can be experimentally achieved with noise at the quantum limit. In this case, the local oscillator is detuned from the cavity frequency by a frequency $`\mathrm{\Delta }_{\text{het}}`$ which is large compared to all system frequencies, with the result that the phase $`\varphi `$ changes very rapidly. The quantum theory of heterodyne detection is described by Wiseman and Milburn.
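The moment equations above are easy to integrate numerically. The following sketch is an illustration only; it assumes numpy, sets $`\omega =1`$ and $`\varphi =0`$, and uses arbitrary values of $`r`$ and of the initial variances. It propagates the conditioned means by the Euler-Maruyama method together with the deterministic variance equations.

```python
import numpy as np

def simulate_conditioned_moments(r, omega=1.0, dt=1e-3, T=200.0, V0=20.0, seed=1):
    """Euler-Maruyama integration of the conditioned moments for phi = 0."""
    rng = np.random.default_rng(seed)
    steps = int(T / dt)
    x = p = 0.0                       # conditioned means
    Vxx, Vpp, Vxp = V0, V0, 0.0       # conditioned (co)variances
    for _ in range(steps):
        dW = rng.normal(0.0, np.sqrt(dt))
        dVxx = omega * (2.0 * Vxp - (2.0 / r) * Vxx**2) * dt              # Eq. (8)
        dVpp = omega * (-2.0 * Vxp + 2.0 / r - (2.0 / r) * Vxp**2) * dt   # Eq. (10)
        dVxp = omega * (Vpp - Vxx - (2.0 / r) * Vxx * Vxp) * dt           # Eq. (12)
        dx = omega * p * dt + np.sqrt(2.0 * omega / r) * Vxx * dW         # Eq. (5)
        dp = -omega * x * dt + np.sqrt(2.0 * omega / r) * Vxp * dW        # Eq. (6)
        x, p = x + dx, p + dp
        Vxx, Vpp, Vxp = Vxx + dVxx, Vpp + dVpp, Vxp + dVxp
    return x, p, Vxx, Vpp, Vxp

print(simulate_conditioned_moments(r=20.0))   # variances approach their steady-state values
```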
The appropriate conditioned evolution can be described by averaging all the trigonometric functions of $`\varphi `$ in the evolution equations (II A) except where they are multiplied by Itô increments. Thus the equations for the second order moments are exactly those for homodyne detection with $`\varphi =0`$ where $`r`$ is replaced by $`2r,`$ corresponding to halving the signal to noise ratio of the measurement. Considering the stochastic integrals $`\int _t^{t+\delta t}\mathrm{cos}(\mathrm{\Delta }_{\text{het}}t^{\prime })dW(t^{\prime }),\int _t^{t+\delta t}\mathrm{sin}(\mathrm{\Delta }_{\text{het}}t^{\prime })dW(t^{\prime }),`$ in the limit firstly of infinite $`\mathrm{\Delta }_{\text{het}}`$ then of infinitesimal $`\delta t,`$ leads to equations for the first order moments under heterodyne detection
$$d\langle \stackrel{~}{x}\rangle =\omega \langle \stackrel{~}{p}\rangle dt+\sqrt{\frac{\omega }{r}}V_{xx}dW_1,$$ (28)
$$d\langle \stackrel{~}{p}\rangle =-\omega \langle \stackrel{~}{x}\rangle dt+\sqrt{\frac{\omega }{r}}\left(V_{xp}dW_1-dW_2\right),$$ (29)
where $`dW_1`$ and $`dW_2`$ are independent Wiener increments. Again this is formally identical to the evolution with homodyne detection and $`\varphi =0`$ where $`r`$ is replaced by $`2r`$ and in which a second independent noise process $`dW_2`$ is present. Scaled versions of the two quadrature components of the experimental photocurrent are given by
$$d\stackrel{~}{Q}_1=\langle \stackrel{~}{x}\rangle dt+\sqrt{\frac{r}{\omega }}dW_1,$$ (31)
$$d\stackrel{~}{Q}_2=dW_2.$$ (32)
Again note the replacement of $`r`$ by $`2r`$ in the equation for $`d\stackrel{~}{Q}_1`$ as compared to the equation for $`d\stackrel{~}{Q}`$ with $`\varphi =0`$. So if the quadrature-phase $`I_2(t)`$ current is collected and used to account for the noisy potential, or alternatively fed back in order to compensate this evolution, then heterodyne detection is equivalent to homodyne detection with half the signal to noise ratio as far as the motional state is concerned. ### B Steady State Conditioned Variances For all phases of the local oscillator $`\varphi \ne \pi /2,3\pi /2`$ the second-order moments possess a steady state. For example if $`\varphi =0`$
$$V_{xx}^{\text{ss}}=\frac{1}{\sqrt{2}}r\sqrt{\sqrt{1+\frac{4}{r^2}}-1},$$ (34)
$$V_{pp}^{\text{ss}}=\frac{1}{\sqrt{2}}r\sqrt{1+\frac{4}{r^2}}\sqrt{\sqrt{1+\frac{4}{r^2}}-1},$$ (35)
$$V_{xp}^{\text{ss}}=\frac{1}{2}r\left(\sqrt{1+\frac{4}{r^2}}-1\right),$$ (36)
which defines a pure state that agrees with the solution given previously. The steady states are found to have exactly the same second order moments regardless of the initial purity of the system. Assuming ideal detection, the observer is eventually able to ascribe a pure state to the system. When the harmonic oscillator dynamics dominates over the measurement ($`r\gg 1`$) the steady conditioned state is approximately a coherent state with $`V_{xx}^{\text{ss}}\approx V_{pp}^{\text{ss}}\approx 1,V_{xp}^{\text{ss}}\approx 1/r.`$ If the measurement dynamics dominate ($`r\ll 1`$) then $`V_{xx}^{\text{ss}}\approx \sqrt{r},V_{pp}^{\text{ss}}\approx \frac{2}{\sqrt{r}},V_{xp}^{\text{ss}}\approx 1`$ and the conditioned state is strongly squeezed in position, as one would expect for a position measurement which is rapid enough to overcome the internal dynamics of the system.
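The steady-state expressions above are simple enough to tabulate directly; the short script below (illustrative only, assuming numpy) evaluates Eqs. (34)-(36) for a few values of $`r`$ and verifies that the result is pure, $`V_{xx}V_{pp}-V_{xp}^2=1`$, in both limits.

```python
import numpy as np

def steady_state_variances(r):
    """Steady-state conditioned variances, Eqs. (34)-(36), for phi = 0."""
    s = np.sqrt(1.0 + 4.0 / r**2)
    Vxx = r / np.sqrt(2.0) * np.sqrt(s - 1.0)
    Vpp = r / np.sqrt(2.0) * s * np.sqrt(s - 1.0)
    Vxp = 0.5 * r * (s - 1.0)
    return Vxx, Vpp, Vxp

for r in (0.01, 1.0, 20.0, 1000.0):
    Vxx, Vpp, Vxp = steady_state_variances(r)
    # purity condition for a single-mode Gaussian state: Vxx*Vpp - Vxp**2 = 1
    print(r, Vxx, Vpp, Vxp, Vxx * Vpp - Vxp**2)
```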
The product of the position and momentum variances is greater than that required by the Heisenberg uncertainty principle as a result of the internal Hamiltonian which gives a non-zero correlation $`V_{xp}`$. Finally, the scaling we have chosen for the variances makes the limit of the free particle appear singular, however this limit exists and the results agree with Belavkin’s. Pure steady states also exist for other values of $`\varphi `$ but the full expressions are rather complicated so we will just consider two special cases. If the oscillator dynamics dominate ($`r\gg 1`$) the steady conditioned states are insensitive to the local oscillator phase leaving the conditioned state nearly in a coherent state: $`V_{xx}^{\text{ss}}\approx 1+\mathrm{sin}(2\varphi )/2r,V_{pp}^{\text{ss}}\approx 1-\mathrm{sin}(2\varphi )/2r,V_{xp}^{\text{ss}}\approx \mathrm{cos}^2\varphi /r.`$ If $`r\ll 1`$ the conditioned state is strongly dependent on the choice of local oscillator phase,
$$V_{xx}^{\text{ss}}\approx \frac{\sqrt{r}}{\left|\mathrm{cos}\varphi \right|}\sqrt{1/\left|\mathrm{cos}\varphi \right|+\mathrm{tan}\varphi },$$ (38)
$$V_{pp}^{\text{ss}}\approx \frac{2}{\sqrt{r}}\sqrt{1/\left|\mathrm{cos}\varphi \right|+\mathrm{tan}\varphi },$$ (39)
$$V_{xp}^{\text{ss}}\approx 1/\left|\mathrm{cos}\varphi \right|+\mathrm{tan}\varphi .$$ (40)
Rather surprisingly it is possible for some phases of the local oscillator that the momentum variance is in fact smaller than the position variance. This is because the measurement for non-trivial $`\varphi `$ is a simultaneous measurement of the position of the oscillator and the momentum kicks to which it is being subjected and this can result in a more sharply defined momentum than position. This is really only a possibility for phases of the local oscillator where there is very little position information in the record. In the cases $`\varphi =\pi /2,3\pi /2`$ the differential equations for the second order moments of the state are simply those that result from the unitary evolution of a simple harmonic oscillator and the SME (1) describes a stochastic unitary evolution. As a result there is no steady state for the moments, which grow according to the conventional Schrödinger equation. In this case the measurement current is white noise. It has been noted previously that where a Lindblad operator is hermitian there exists an unravelling of the master equation which does not localize the conditioned state. The reason for this is clear in this context: such an unravelling corresponds to a measurement in which the observer obtains no information about the system state. ### C Timescale for Determination of a Pure State through Measurement The matrix Riccati equation (14) has an analytic solution given by Reid: where $`U(t)`$ and $`W(t)`$ obey the linear coupled matrix equations
$$\frac{d}{dt}U=AU+BW,$$ (42)
$$\frac{d}{dt}W=CU-DW$$ (43)
and for times where $`U(t)`$ is non-singular, the solution for the covariance matrix is $`V(t)=W(t)U^{-1}(t).`$ The full solution for all the second-order moments and arbitrary initial conditions is complicated and not particularly illuminating.
In order to expose the general form of the solution we will just consider the position variance in the case of an initial state of the oscillator with $`V_{xx}(0)=V_{pp}(0)=V_0,V_{xp}(0)=0,`$
$$V_{xx}(t)=\frac{V_0\left(c^2\mathrm{cosh}2bt+b^2\mathrm{cos}2ct\right)+\left(1+V_0^2\right)\left(c\mathrm{sinh}2bt-b\mathrm{sin}2ct\right)}{\left(b^2+c^2\right)+V_0\left(c^3\mathrm{sinh}2bt+b^3\mathrm{sin}2ct\right)+\left(1+V_0^2\right)\left(c^2\mathrm{sinh}^2bt-b^2\mathrm{sin}^2ct\right)},$$ (49)
$$b=\frac{1}{\sqrt{2}}\sqrt{\sqrt{1+\frac{4}{r^2}}-1},$$ (50)
$$c=\frac{1}{\sqrt{2}}\sqrt{\sqrt{1+\frac{4}{r^2}}+1}=1/V_{xx}^{\text{ss}}.$$ (51)
We have scaled time by the harmonic oscillation frequency $`\omega `$. When $`2bt\gg 1`$ the system is close to its steady state value regardless of $`V_0`$. The non-linearity of the terms describing the conditioning of the system state causes the time over which the conditioned state becomes pure to be independent of the initial temperature of the state. For definiteness we will define a collapse time $`\tau _{\text{col}}=2/b\omega `$ as being the time at which the state has become effectively pure. When $`r>1`$ this collapse time is $`\tau _{\text{col}}\approx 2r/\omega =m\omega /\hbar \alpha `$ since in this regime $`b\approx 1/r`$. In this regime $`c\approx 1+1/2r^2`$ and there are many oscillations of the particle before the state is determined. That the time to determine a particular pure state of the system should increase with the frequency seems reasonable since unexpected values of the measurement current could be due to a mistaken idea of the position of the particle, the white noise in the measurement record or to motion due to the oscillation and these possibilities will be more difficult to distinguish if the atomic motion is fast. For smaller $`r`$, the measurement is becoming very good and this estimate for the collapse time is optimistic since it is hard to determine the state of the system in less than one period of the mechanical oscillation. Reductions in the conditioned momentum variance will only occur as the Hamiltonian evolution creates a correlation between the position and the momentum of the state. Even if the harmonic potential were absent a continuous measurement of position will give some information about the momentum of the particle. When $`r\ll 1`$ the particle is essentially free as far as the measurement is concerned and the time for the state reduction to occur turns out to be $`\tau _{\text{col}}\approx \sqrt{8m/\hbar \alpha }`$ which is determined solely by the strength of the measurement and the mass of the particle. In this situation $`b\approx c\approx 1/\sqrt{r}`$. Increasing the measurement coupling means that the time for the measurement to take place is reduced and in the limit of infinite $`\alpha `$ the model essentially describes a projective measurement of the position. If heterodyne detection is used rather than homodyne detection then $`r`$ is replaced by $`2r`$ in the above equations with obvious implications for the timescale of the system collapse. A pure state describes a situation in which an observer has maximal information about the system.
A mixed state describes less than maximal knowledge of the system and it turns out that the amount of missing information, in an information theoretic sense, required to complete the specification of the state may be measured by the von Neumann entropy $`S(\rho )=-`$Tr$`(\rho \mathrm{log}\rho )`$ of the density operator. Thus the entropy allows us to quantify the extent to which the measurement has determined the state of the system at a given time and also the extent to which other environmental couplings limit what an experimenter can say about the system state. Another commonly used measure of the “mixedness” of a given density matrix is the linear entropy or purity $`s(\rho )=1-`$Tr$`(\rho ^2)`$. For a single mode Gaussian state these quantities are simple functions of the unitless “area” of the state in phase space $`A=\sqrt{V_{xx}V_{pp}-V_{xp}^2}`$
$$s(\rho )=1-\frac{1}{A},$$ (53)
$$S(\rho )=\frac{A+1}{2}\mathrm{ln}(A+1)-\frac{A-1}{2}\mathrm{ln}(A-1)-\mathrm{ln}2.$$ (54)
Note that $`A^2`$ is just the determinant of the covariance matrix of the position and momentum probability distributions for the conditioned state. For a pure state $`A=1,s(\rho )=0,S(\rho )=0`$ and as the state becomes increasingly mixed it occupies a larger phase space area such that as $`A\to \mathrm{\infty },s(\rho )\to 1,S(\rho )\to \mathrm{\infty }`$. As we would expect if the state is widely spread in phase space then our knowledge of the system is poor and the information needed to complete the description of the state is large. The time evolution of the variances and the linear and von Neumann entropies of the conditioned state with $`V_{xx}(0)=V_{pp}(0)=20,V_{xp}=0,\omega =1,r=20`$ is plotted in Fig. (1). These parameters are chosen since the measurement dynamics are not fast enough to obscure the harmonic oscillation totally and because achieving very small values of $`r`$ will probably be difficult in practice. Several features of Fig. (1) are relevant. Firstly the initial very rapid reduction of the position variance is associated with the first part of the measurement record making a reasonably accurate determination of the position. Then over the timescale of the harmonic oscillation the momentum variance also reduces as the dynamics correlate the position and momentum. Note that the reduction in momentum variance occurs only when there is a strong correlation between the position and the momentum. As $`V_{xp}`$ becomes small the position variance decays more rapidly and the reduction of the momentum variance slows. Eventually all the second-order moments decay to the steady state values predicted above. This initial fast reduction of the position variance is accompanied by a fast reduction of the von Neumann entropy which damps to zero as the system approaches steady state. ### D Cavity QED Realization Although we have been considering an abstract model for continuous position measurement this work is motivated by emerging experimental possibilities in areas such as cavity quantum electrodynamics (QED). The position dependent phase shift induced by an atom strongly coupled to a high finesse optical cavity mode in the dispersive limit of cavity QED realizes the abstract position measurement coupling considered here. It is currently possible to detect the presence of a single cold atom in the cavity through measurements of the output field and great progress has been made in observing single atom events with broad bandwidth close to the dispersive regime.
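For reference, the two entropies of Eqs. (53) and (54) as functions of the phase space area are trivial to evaluate; the snippet below is a direct transcription (illustrative only, assuming numpy).

```python
import numpy as np

def linear_entropy(A):
    # Eq. (53): s = 1 - 1/A for a single-mode Gaussian state with area A >= 1
    return 1.0 - 1.0 / A

def von_neumann_entropy(A):
    # Eq. (54); the A -> 1 limit is 0 (pure state)
    if np.isclose(A, 1.0):
        return 0.0
    return 0.5 * (A + 1) * np.log(A + 1) - 0.5 * (A - 1) * np.log(A - 1) - np.log(2.0)

for A in (1.0, 1.5, 5.0, 40.0):
    print(A, linear_entropy(A), von_neumann_entropy(A))
```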
The phase shift changes most quickly with position where the gradient of the field is greatest and if the atom is harmonically confined in this region then the model discussed above would be approximately realized. There is also the far off-resonant optical potential which will lead to large forces on the atom in this regime which would move the atom quickly away from this region of the standing wave. However an optical standing wave at a nearby frequency could in principle be tuned to cancel the ac-Stark shift of the ground state in the region of the harmonic confinement. In fact the dipole force could straightforwardly be included in our simplified model resulting in only minor changes for sufficiently strong harmonic confinement. For the moment we will not specify the exact source of the potential confining the atom, but the use of light forces from a far off-resonant optical standing wave or a standing wave in another mode of the cavity are possibilities. There is also experimental work aimed at confining ions in high finesse optical cavities which could realize such a system. If we imagine a harmonic potential confining the atom to this region of the standing wave with some constant restoring force to overcome the dipole force then in the far-detuned and Lamb-Dicke limit the resulting SME would be exactly Eq. (1) above. The constant $`\alpha `$ would then be equal to $`2g_0^4nk_L^2/\mathrm{\Delta }^2\kappa `$, where $`g_0`$ is the maximal single photon Rabi coupling in the cavity, $`n`$ is the number of photons present in the driven cavity in the steady state, $`\mathrm{\Delta }`$ is the detuning between the atomic and cavity resonances (the external laser driving is on the cavity resonance), $`\kappa `$ is the cavity field decay rate and $`k_L`$ is the wave number for the light resonant inside the cavity. Using parameters based on the cavity parameters achieved by Hood et al. gives $`\alpha =2.4\times 10^{20}`$ $`\text{s}^{-1}\text{m}^{-2}`$ and this determines the rate of decay of the off-diagonal terms in the position representation of the density matrix under the unconditioned evolution. This rather large number means that the density matrix elements $`\langle x|\rho |x^{\prime }\rangle `$ where $`x`$ and $`x^{\prime }`$ are separated by nine nanometers or around 1% of a wavelength will decay in the unconditioned evolution at the rate $`2.0\times 10^4`$ Hz. The decay of off-diagonal elements of the density matrix in a particular basis is often associated with decoherence and the emergence of classicality. In this case the decoherence is due to the measurement coupling. Elsewhere this decay of off-diagonal density matrix elements is described as state reduction; in this work we are interested in state reduction onto a pure state, and the rate of this process is determined not only by $`\alpha `$ but also by the length scale of a typical pure state of the uncoupled system. Thus we found above that reduction onto a pure state took place in a time $`\tau _{\text{col}}\approx m\omega /4\hbar \alpha `$ and the dependence on the length scale of the harmonic oscillator is clear. In order to find the rate of collapse onto a pure state in the conditioned evolution it is therefore also necessary to know the oscillation frequency of the atom due to its harmonic confinement. Assuming this is achieved optically, the value for $`\omega /2\pi `$ could be in the range of tens to hundreds of kilohertz.
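The quoted decoherence rate follows directly from the unconditioned term $`2\alpha 𝒟[x]`$, under which a coherence $`\langle x|\rho |x^{\prime }\rangle `$ decays at the rate $`\alpha (x-x^{\prime })^2`$. The arithmetic, a quick check with the numbers quoted above and nothing more, is

```python
alpha = 2.4e20      # measurement strength quoted above, in s^-1 m^-2
dx = 9e-9           # separation of the two positions, roughly 1% of a wavelength

# coherence decay rate alpha*(x - x')^2 under the unconditioned term 2*alpha*D[x]
print(alpha * dx**2)   # ~1.9e4 s^-1, consistent with the 2.0e4 Hz quoted above
```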
So for example in the potential for a cesium atom resulting from the same cavity and driving parameters gives $`\omega /2\pi =2\sqrt{\mathrm{}k_L^2/2m}\sqrt{g_0^2n/\mathrm{\Delta }}/2\pi =180`$ kHz, while $`\omega /2\pi 60`$ kHz has been achieved for cesium in optical lattices by Haycock et al. . For such a hypothetical experiment with cesium we now have $`r=5.6`$ and $`r=0.63`$ respectively. Estimates, as outlined above, for the time for an experimenter to determine a pure state of the system through heterodyne detection are then $`19\mu `$s and $`8.9\mu `$s. Both these times are reasonably close to the minimum collapse time for this accuracy of detection which corresponds to the free particle limit with $`\tau _{\text{col}}3.9\mu s.`$ In current experiments detection efficiencies and bandwidths will have a significant effect on the information that can be gathered from the record. For trapped ions harmonic frequencies would be around an order of magnitude larger and state reduction times would then also be around an order of magnitude longer. However the cavity finesse used here for such experiments may be more difficult to achieve at typical frequencies of ion transitions and the size of the cavity will be limited by the ion trap electrodes. What is dramatic about this time is that it is so short, single cold atoms have been observed close to the centre of the cavity mode in the experiment at Caltech for times of the order of hundreds of microseconds. This is another confirmation of the extent to which this experiment operates near the Standard Quantum Limit of position measurement ( and references therein) and an indication of the importance of continuous quantum measurement theory to its interpretation. Note that it will be difficult to achieve very small values of $`r`$ for which the measurement dynamics dominate over the oscillation in the well. This is the result of the two main constraints on the applicability of the model. Firstly it is necessary that the harmonic potential confine the atom to well within a wavelength, thus justifying the Lamb-Dicke approximation in the master equation (1). this requires that the recoil frequency for the cavity transition $`\omega _{\text{rec}}=\mathrm{}k_L^2/2m`$ be much smaller than the harmonic oscillation. Moreover attaining the dispersive regime requires that the saturation parameter $`s=g_0^2n/\mathrm{\Delta }^2`$ be much smaller than one. It is possible to express $`r`$ in terms of these quantities: $`r=(\omega /\omega _{\text{rec}})(1/8s)(\omega /\mathrm{\Gamma }_{\text{cav}})`$where $`\mathrm{\Gamma }_{\text{cav}}=g_0^2/\kappa `$ is the cavity mediated spontaneous emission rate . Assuming that values of $`\omega `$and $`s`$ are chosen to satisfy a particular level of approximation, the rate at which the conditioned states become pure is only increased by changing the cavity parameters through increasing $`\mathrm{\Gamma }_{\text{cav}}`$ relative to the oscillation frequency. Moreover, since both the first two factors in this expression for $`r`$ must be large for the SME (1) to correspond to the cavity system, extremely large values of $`\mathrm{\Gamma }_{\text{cav}}`$are necessary to achieve rapid measurement of the system state. ## III Classical Analogue If we are to interpret the conditioned state as the best description of the observer’s knowledge of the quantum mechanical state given the results of a series of measurements, we would expect a similarity between these equations and classical Bayesian state observation. 
The analogy between the SME (1) and Kalman filtering for a classical position measurement was discussed previously, but only the equations for the position probability distributions were considered. Here we formulate the continuous time position measurement state observer for a classical harmonic oscillator and find that there is a close analogy between the SME and the classical theory for all moments of the conditioned probability distribution as long as we restrict ourselves to Gaussian states and allow for noisy driving of the classical oscillator. The problem of noisy, classical, continuous time position measurement of a harmonic oscillator can be formulated as
$$\frac{dx_C}{dt}=\omega p_C,$$ (56)
$$\frac{dp_C}{dt}=-\omega x_C+\sqrt{\frac{2\omega }{s}}ϵ,$$ (57)
$$\frac{dQ_C}{dt}=ax_C+\sqrt{\frac{r}{2\omega }}\eta ,$$ (58)
$$E[ϵ(t)ϵ(t^{\prime })]=E[\eta (t)\eta (t^{\prime })]=\delta (t-t^{\prime }),$$ (59)
$$E[ϵ(t)\eta (t^{\prime })]=f\delta (t-t^{\prime }).$$ (60)
We have used the same scaling of the variables as in the quantum problem. We imagine that as well as having an imperfect measurement of the system the oscillator is subject to a white noise force. There may be some correlation between the oscillator (plant or process) noise and the measurement noise and so $`ϵ`$ and $`\eta `$ are correlated Wiener processes. As in the quantum mechanical case the limit of small $`r`$ is the limit of good position measurement. The continuous time theory of Kalman filtering then provides the best estimate of the system state $`\widehat{x},\widehat{p}`$ and the second-order moments of the posterior probability distribution $`P(x_C,p_C)`$,
$$d\widehat{x}_C=\omega \widehat{p}_Cdt+a\sqrt{\frac{2\omega }{r}}V_{xx}dW,$$ (62)
$$d\widehat{p}_C=-\omega \widehat{x}_Cdt+\left(a\sqrt{\frac{2\omega }{r}}V_{xp}+f\sqrt{\frac{2\omega }{s}}\right)dW,$$ (63)
$$\frac{1}{\omega }\frac{dV_{xx}}{dt}=2V_{xp}-\frac{2}{r}a^2V_{xx}^2,$$ (64)
$$\frac{1}{\omega }\frac{dV_{pp}}{dt}=-2\left(1+\frac{2af}{\sqrt{rs}}\right)V_{xp}+\frac{2}{r}-\frac{2f^2}{s}-\frac{2}{r}a^2V_{xp}^2,$$ (66)
$$\frac{1}{\omega }\frac{dV_{xp}}{dt}=V_{pp}-\left(1+\frac{2af}{\sqrt{rs}}\right)V_{xx}-\frac{2}{r}a^2V_{xx}V_{xp}.$$ (67)
Here $`dW`$ is an independent Wiener increment with $`dW^2=dt`$, proportional to the innovation process $`dQ-a\widehat{x}dt`$. Note that the circumflex employed here indicates that the quantity is an estimate of the classical variable and not that it is a quantum operator. The moments of the posterior probability distribution have been given the same notation as the moments of the conditioned quantum mechanical state. If we make the identifications $`s=r,a=\mathrm{cos}\varphi ,f=-\mathrm{sin}\varphi `$ then this system of equations is formally identical to the system which determines the evolution of the quantum mechanical conditioned state Eqns (II A). We see that there is a classical model of noisy position measurement for which the equations of motion for the posterior probability distribution of the classical state given by the Kalman filter reproduce the stochastic master equation.
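As a consistency check of the identification just made, the classical Riccati equations (64)-(67) can be integrated with $`s=r`$, $`a=\mathrm{cos}\varphi `$, $`f=-\mathrm{sin}\varphi `$ and compared with the quantum steady state. The sketch below is illustrative only; it assumes numpy and scipy are available, sets $`\omega =1`$ and $`\varphi =0`$, and uses an arbitrary value of $`r`$.

```python
import numpy as np
from scipy.integrate import solve_ivp

def kalman_variance_rhs(t, V, r, s, a, f):
    # Eqs. (64)-(67): classical Riccati equations for (Vxx, Vpp, Vxp), with omega = 1
    Vxx, Vpp, Vxp = V
    dVxx = 2 * Vxp - (2 / r) * a**2 * Vxx**2
    dVpp = (-2 * (1 + 2 * a * f / np.sqrt(r * s)) * Vxp
            + 2 / r - 2 * f**2 / s - (2 / r) * a**2 * Vxp**2)
    dVxp = Vpp - (1 + 2 * a * f / np.sqrt(r * s)) * Vxx - (2 / r) * a**2 * Vxx * Vxp
    return [dVxx, dVpp, dVxp]

# quantum identification: s = r, a = cos(phi), f = -sin(phi); here phi = 0
r, phi = 20.0, 0.0
sol = solve_ivp(kalman_variance_rhs, (0.0, 200.0), [20.0, 20.0, 0.0],
                args=(r, r, np.cos(phi), -np.sin(phi)), rtol=1e-9, atol=1e-12)
Vxx, Vpp, Vxp = sol.y[:, -1]
print(Vxx, Vpp, Vxp, Vxx * Vpp - Vxp**2)   # last number approaches 1, i.e. a pure state
```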
What is specifically quantum mechanical in the SME is that we cannot, even in principle, specify the process noise and the measurement noise separately. Classically one could imagine isolating the system sufficiently that $`s`$ is as large as we like. With $`s\to \mathrm{\infty }`$ there would be no momentum disturbance on the atom and after a sufficiently long observation time the state of the system would be determined exactly so that $`V_{ij}^{\text{ss}}=0.`$ Clearly this does not correspond to a quantum state. However the quantum theory of the problem guarantees that any coupling to the system which gives position information about the state of the system must also disturb the momentum. This momentum disturbance must be sufficient that the conditioned state always obeys the Heisenberg uncertainty principle. Thus only some classically possible models of position measurement are allowed by quantum mechanics. For a given level of measurement noise, the process noise must be at least sufficient to ensure that the observer can never infer probability distributions for the position and momentum which do not constitute valid quantum states. This is a measurement-disturbance uncertainty relation for continuous measurement; reducing the noise in the measurement must increase the noise in the evolution. This backaction noise does not however behave like classical process noise; its properties are entirely determined by the measurement. If we vary the basis of the measurement on the meter — vary $`\varphi `$ — then we are able to alter the correlation between the measurement noise and the apparent process noise in the classical model. We have found here that the symmetric moments of the conditioned state always obey a system of equations which also describes Kalman filtration for a classical problem, but that altering the specific quantum measurement alters both the observation and the process noise of the relevant classical problem. Since the symmetric moments are the moments of the Wigner function, the conditioned Wigner function can be interpreted as the direct analogue of a classical posterior probability distribution for the system. This relationship is not going to be as straightforward for more complicated quantum systems where the conditioned Wigner function can be negative and cannot be interpreted as a probability distribution. ## IV Dependence on Initial State We have established above that the conditioned state of the system can in principle become pure after a finite observation time. What might not be clear is that the state which results is uniquely determined by the measurement record. In an ideal projective measurement the probabilities of obtaining the various measurement results depend upon the initial state. However, once a result has been obtained the conditional state after the measurement depends on that result alone. In a similar way, for continuous measurements we find that, while the initial state affects the probability of obtaining particular measurement records, the conditional state following the measurement is determined by the particular measurement record which was obtained, provided that record is sufficiently long. In particular if the initial state of the system is very poorly known to an observer then we might hope that there is effectively a maximum likelihood estimate of the system state which depends only on the measurement current and which converges to the actual system state within $`\tau _{\text{col}}`$.
If propagating the stochastic master equation with the actual initial state of the system provides the a posteriori estimate of the system state given the measurement results, then such a maximum likelihood estimate would result from propagating an initial state with very large position and momentum variances in the SME (1), say $`V_{xx}=V_{pp}=V_0\to \mathrm{\infty },V_{xp}=0,`$ which gives a nearly flat prior probability distribution for the initial state. In this section we demonstrate that such a strategy does indeed work. Thus the purity of the conditioned density matrix indicates, as would be hoped, that there is only one pure state of the system at time $`t>\tau _{\text{col}}`$ which is consistent with the known sequence of measurement results. Suppose that two observers, Alice and Bob say, postulate different initial states of the system $`\rho ^\text{A}(0),\rho ^\text{B}(0),`$ which we will continue to assume to be Gaussian. For example, Alice may have more information than Bob about the system in which case Bob would start with a more mixed initial density matrix reflecting his initial lack of knowledge. Given that they both receive the same measurement record $`I(t)`$ and propagate their conditioned states according to the SME (1), it should be the case that at some time $`t^{\prime }`$, around $`\tau _{\text{col}}`$, Alice and Bob agree on the system state, so that $`\rho ^\text{A}(t^{\prime })=\rho ^\text{B}(t^{\prime })`$. From the previous section we know that after the time $`\tau _{\text{col}}`$ the second order moments of both conditioned states will be equal $`V_{ij}^\text{A}=V_{ij}^\text{B}=V_{ij}^{\text{ss}}`$ and so we focus on the equations for the first order moments of the conditioned state for each observer
$$d\langle x\rangle ^\text{A}=\omega \langle p\rangle ^\text{A}dt+\sqrt{\frac{2\omega }{r}}V_{xx}^\text{A}dW,$$ (69)
$$d\langle p\rangle ^\text{A}=-\omega \langle x\rangle ^\text{A}dt+\sqrt{\frac{2\omega }{r}}V_{xp}^\text{A}dW,$$ (70)
$$d\langle x\rangle ^\text{B}=\omega \langle p\rangle ^\text{B}dt+\sqrt{\frac{2\omega }{r}}V_{xx}^\text{B}dW^\text{B},$$ (71)
$$d\langle p\rangle ^\text{B}=-\omega \langle x\rangle ^\text{B}dt+\sqrt{\frac{2\omega }{r}}V_{xp}^\text{B}dW^\text{B},$$ (72)
$$dQ=\langle x\rangle ^\text{A}dt+\sqrt{\frac{r}{2\omega }}dW=\langle x\rangle ^\text{B}dt+\sqrt{\frac{r}{2\omega }}dW^\text{B}.$$ (73)
In this section we will omit the tildes used above to indicate that we have scaled the position and momentum to the natural units for the harmonic oscillator. The stochastic increment $`dQ`$ is the infinitesimal increment of the measured current which both Alice and Bob have access to. We can express the increment $`dW^\text{B}`$ in terms of the other quantities $`dW^\text{B}=dW-\sqrt{2\omega /r}(\langle x\rangle ^\text{B}-\langle x\rangle ^\text{A})dt,`$ and find stochastic differential equations for the differences between the means $`e_x=\langle x\rangle ^\text{B}-\langle x\rangle ^\text{A},e_p=\langle p\rangle ^\text{B}-\langle p\rangle ^\text{A}`$,
$$de_x=\omega e_pdt-\frac{2\omega }{r}V_{xx}^\text{B}e_xdt+\sqrt{\frac{2\omega }{r}}\left(V_{xx}^\text{B}-V_{xx}^\text{A}\right)dW,$$ (76)
$$de_p=-\omega \left(1+\frac{2}{r}V_{xp}^\text{B}\right)e_xdt+\sqrt{\frac{2\omega }{r}}\left(V_{xp}^\text{B}-V_{xp}^\text{A}\right)dW.$$ (78)
The deterministic part of this system of equations describes a damped harmonic oscillation for $`e_x,e_p`$ where the damping and oscillation rates depend on the second-order moments of Bob’s conditioned state.
The damping in these equations is not present in an analogous equation for $`e_x`$ given by Mabuchi for a free particle. The equation adopted there is obtained from the continuous limit of the repeated position measurement model of Caves and Milburn and does not contain a noise term in the stochastic differential equation for $`x`$. In fact the omission of this term is in error and if the continuous limit of the repeated position measurement model of Caves and Milburn is taken correctly then the noisy contribution to $`de_x,`$ which we obtained from the SME (1), is in fact present. It is the damping which results from this term which leads to all observers agreeing about the conditioned state after a sufficiently long observation time. Note that after the time $`\tau _{\text{col}}`$ Eqs (IV) are in fact a system of ordinary differential equations since at that point the covariances of the two conditioned states are equal. The differences in the means then damp to the steady state values $`e_x=e_p=0`$ indicating that Alice and Bob do eventually agree about the state of the system regardless of their initial states. Thus we have shown that the conditioned state eventually depends only on the measurement record but not the time-scale over which this occurs. It is straightforward to use the Itô chain rule to find differential equations for the expectation values of the covariance matrix for the difference between the conditioned means of Alice and Bob
$$\frac{1}{\omega }\frac{d}{dt}\left(E[e_x^2]\right)=2E[e_xe_p]-\frac{4}{r}V_{xx}^\text{B}E[e_x^2]+\frac{2}{r}\left(V_{xx}^\text{B}-V_{xx}^\text{A}\right)^2,$$ (81)
$$\frac{1}{\omega }\frac{d}{dt}\left(E[e_p^2]\right)=-2E[e_xe_p]-\frac{4}{r}V_{xp}^\text{B}E[e_xe_p]+\frac{2}{r}\left(V_{xp}^\text{B}-V_{xp}^\text{A}\right)^2,$$ (83)
$$\frac{1}{\omega }\frac{d}{dt}\left(E[e_xe_p]\right)=E[e_p^2]-E[e_x^2]-\frac{2}{r}V_{xp}^\text{B}E[e_x^2]-\frac{2}{r}V_{xx}^\text{B}E[e_xe_p]+\frac{2}{r}\left(V_{xx}^\text{B}-V_{xx}^\text{A}\right)\left(V_{xp}^\text{B}-V_{xp}^\text{A}\right).$$ (86)
Although we have taken the expectation value for this system of equations, the noise terms all become zero after $`\tau _{\text{col}}`$ so these ordinary differential equations eventually describe the whole dynamics. We are now interested in the time-scale over which the determinant of this covariance matrix becomes zero. Unfortunately the time dependence of the conditioned state variances prevents a closed form solution of this system of equations and we have not found a matrix Riccati form for the overall system. It is however straightforward to investigate the problem numerically. We wish to show that all observers will agree about the conditioned state of the system in roughly $`\tau _{\text{col}}`$. To do this it is sufficient to show that an arbitrary observer will agree with some preferred observer in that time. For this reason we will assume that Alice has sufficient information to describe the state of the system as a pure state and that she has access to sufficient earlier measurement records that $`V_{ij}^\text{A}(t)=V_{ij}^{\text{ss}}`$. An experimenter is going to be in Bob’s position of not having precise knowledge of or control over the preparation of the state.
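The numerical investigation referred to here is straightforward to reproduce in outline. The sketch below is an illustration only, not the authors' code; it assumes numpy and scipy, sets $`\omega =1`$ and $`\varphi =0`$, and takes Alice to sit at the steady-state variances. It integrates Bob's variance equations together with Eqs. (81)-(86) and checks that the expected squared differences of the means damp away.

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y, r, VA):
    # y = (VxxB, VppB, VxpB, Exx, Epp, Exp); Alice is held at her steady state VA
    VxxB, VppB, VxpB, Exx, Epp, Exp = y
    VxxA, VppA, VxpA = VA
    # Bob's conditioned variances, Eqs. (8)-(12) at phi = 0, with omega = 1
    dVxxB = 2 * VxpB - (2 / r) * VxxB**2
    dVppB = -2 * VxpB + 2 / r - (2 / r) * VxpB**2
    dVxpB = VppB - VxxB - (2 / r) * VxxB * VxpB
    # expected squared differences of the means, Eqs. (81)-(86)
    dExx = 2 * Exp - (4 / r) * VxxB * Exx + (2 / r) * (VxxB - VxxA) ** 2
    dEpp = -2 * Exp - (4 / r) * VxpB * Exp + (2 / r) * (VxpB - VxpA) ** 2
    dExp = (Epp - Exx - (2 / r) * VxpB * Exx - (2 / r) * VxxB * Exp
            + (2 / r) * (VxxB - VxxA) * (VxpB - VxpA))
    return [dVxxB, dVppB, dVxpB, dExx, dEpp, dExp]

r = 20.0
s = np.sqrt(1 + 4 / r**2)
VA = (r / np.sqrt(2) * np.sqrt(s - 1), r / np.sqrt(2) * s * np.sqrt(s - 1), r / 2 * (s - 1))
y0 = [20.0, 20.0, 0.0, 20.0, 20.0, 0.0]          # Bob starts poorly informed
sol = solve_ivp(rhs, (0.0, 200.0), y0, args=(r, VA), rtol=1e-9, atol=1e-12)
print(sol.y[3:, -1])   # E[e_x^2], E[e_p^2], E[e_x e_p] damp towards zero within ~tau_col
```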
We would expect that if Bob makes an accurate assessment of his initial knowledge of the state then $`E[e_x^2]`$ is at first of the same order as $`V_{xx}`$ or smaller. Since $`V_{xx}^\text{B}V_{xx}^\text{A}`$ may be large initially, the stochastic term in the equations (IV) for Bob’s errors dominates for very short times and essentially determines a random initial condition for $`e_x`$ such that $`E[e_x^2]V_{xx}`$. This reflects the fact that for short times the measurement record is dominated by noise. From this point on we find numerically that $`E[e_x^2]V_{xx}`$ and that all the elements of the covariance matrix damp to zero within $`\tau _{\text{col}}`$. Thus we have found numerically that the conditioned state depends only on the measurement record after a time equal to the time over which the conditioned state becomes pure. Most importantly we found this to be the case even where all the variances were set to very large initial values, the largest we tried being $`V_{xx}^\text{B}=V_{pp}^\text{B}=E[e_x^2](0)=E[e_p^2](0)=10^{10}`$. Thus as we anticipated, Bob can make an accurate estimate of the state within the collapse time even in the absence of accurate information about the initial state. In this case Bob’s conditioned state corresponds to the maximum likelihood estimate discussed above. Interestingly, even with the pessimistic initial condition where $`E[e_x^2](0)`$ is significantly larger than $`V_{xx}^\text{B}(0)`$ — this corresponds to Bob overestimating the accuracy of his estimate of the particle’s position and the difference between his estimate and the actual value being very large — the determinant of the covariance matrix damps on the characteristic timescale $`\tau _{\text{col}}`$. This feature is more pronounced for larger values of $`V_{xx}^\text{B}(0).`$ Again this is because for large initial values of $`V_{ii}^\text{B}`$, Bob’s estimate of the system state is essentially dependent on the measurement current alone. These behaviours are demonstrated for the parameters used in the previous section in Figure (2). From this we can conclude that after the time $`\tau _{\text{col}}`$, any experimenter knows the state of the system regardless of how the system is initially prepared and of how much control the experimenter has over this process. In the next section when we introduce detection efficiency and thermal couplings, these will just modify the second-order moments of the conditioned state such that the steady conditioned state is no longer pure. Since the equations for the first order moments will be unmodified all that is necessary to reproduce the results of this section in this more general case is to use the collapse time $`\tau _{\text{col}}`$ which is appropriate for the new system. Although this will mean that the experimenter is left with broader position and momentum probability distributions, it will not mean that different experimenters disagree about the means of these distributions. So regardless of the detection efficiency the conditioned state is eventually uniquely determined by the measurement record and can be regarded as known by the experimenter. 
## V Thermal and Detection Efficiency Effects

In order to discuss the effects of detection efficiency and other uncontrolled coupling to a bath in the model we will add an extra momentum diffusion term to the SME to obtain for $`\varphi =0`$

$$d\rho _\text{c}=-i[H_{\text{sys}},\rho _\text{c}]dt+2\alpha 𝒟[x]\rho _\text{c}dt+2\beta 𝒟[x]\rho _\text{c}dt+\sqrt{2\alpha }ℋ[x]\rho _\text{c}dW.$$ (88)

This simple modification to the master equation is intended to model several possible imperfections in a real experiment. One contribution to $`\beta `$ is the effect of detection efficiency so that $`\beta `$ is at least $`\alpha (1-\eta )/\eta `$ where the overall detection efficiency is $`\eta `$. The effect of cavity loss through the unmonitored mirror in a cavity QED experiment is an effective detection inefficiency. Where the loss rates out of the two mirrors are $`\kappa _1,\kappa _2`$, and if only the light passing through the second mirror is detected, then we get $`\beta >\alpha \kappa _1/\kappa `$ where $`\kappa =\kappa _1+\kappa _2`$. The best situation is if the mirror in front of the detection apparatus has significantly higher transmission than the mirror used to drive the cavity. Scattering losses of the mirror will also be an effective detection inefficiency but these are typically much smaller than transmission losses. In experiments with single atoms there will also be heating due to spontaneous emission into free space which will lead to a contribution to $`\beta `$ equal to $`g_0^2n\mathrm{\Gamma }/4\mathrm{\Delta }^2`$ where $`\mathrm{\Gamma }`$ is the free space decay rate of the excited state of the atomic transition. Other than the restriction to the Lamb-Dicke regime this is the largest correction to the adiabatically eliminated master equation given above when a moderate detuning from the atomic transition is employed. The spontaneous emission contribution to the heating is also proportional to the measurement coupling $`\alpha `$ such that $`\beta _s=\alpha \mathrm{\Gamma }\kappa /4g_0^2`$ and to minimize the effect of spontaneous emission we must use cavities with the largest possible value of $`g_0^2/\kappa `$. Recall that if this rate is large the signal to noise of the position measurement also improves. Contributions to $`\beta `$ in other systems, for example the interferometric detection of the position of a moving mirror, will also come from any coupling of the oscillator to a thermal bath. We found that the standard quantum Brownian motion master equation led to steady conditioned states that did not obey the Heisenberg uncertainty principle for small values of $`r`$. This is a result of the non-Lindblad terms in this master equation. The master equation we adopt here solves this problem by considering only coupling to very high temperature thermal baths for which the thermal contribution to $`\beta `$ is $`\frac{\gamma k_BT}{\hbar ^2}`$, where $`\gamma `$ is the coupling rate to the thermal reservoir and $`T`$ is the temperature of the bath. This should be an adequate description of heating in the experiment as long as the bath to which the system is coupled is of sufficiently high temperature that only the diffusive evolution is significant for the timescales of interest. There is now no SSE equivalent to this SME (88) and pure states become mixed during the evolution. However the SME continues to preserve Gaussian states so the previous calculation for the evolution of the state can be straightforwardly modified.
The second-order moments of the conditioned state still reach a steady state and we can easily find an expression for the steady conditioned state phase space area $`A`$, which is given by

$$A^{\text{ss}}=\sqrt{1+\beta /\alpha }=q\ge 1/\sqrt{\eta },$$ (90)

$$s(\rho _\text{c}^{\text{ss}})=1-1/q\le 1-\sqrt{\eta },$$ (91)

$$S(\rho _\text{c}^{\text{ss}})=\frac{q+1}{2}\mathrm{ln}(q+1)-\frac{q-1}{2}\mathrm{ln}\left(q-1\right)-\mathrm{ln}2.$$ (92)

The linear and von Neumann entropies of the steady conditioned states are plotted for a range of detection efficiencies in Fig. (3). Even though we have effectively coupled the oscillator to an infinite temperature bath the conditioned steady states have in some sense a finite temperature but stochastically varying mean values of position and momentum. We will consider some limiting cases here for the steady state variances in this more general situation. Where the measurement is strong ($`r\ll 1`$) the position variance is insensitive to these imperfections, $`V_{xx}^{\text{ss}}\simeq \sqrt{r}\sqrt{q},V_{xp}^{\text{ss}}\simeq q,V_{pp}^{\text{ss}}\simeq 2\sqrt{q^3}/\sqrt{r}`$. If the dynamics dominate ($`r\gg 1`$) then the position and momentum variances have the same dependence on $`\beta `$ as $`A`$, $`V_{xx}^{\text{ss}}\simeq q,V_{xp}^{\text{ss}}\simeq q^2/r,V_{pp}^{\text{ss}}\simeq q`$. Finally the whole time evolution of the second-order moments of the conditioned state can be determined by solving the matrix Riccati equation. Again we will just consider the position variance as a function of time where $`V_{xx}(0)=V_{pp}(0)=V_0>1`$, $`V_{xp}(0)=0`$:

$$V_{xx}(t)=\frac{q^2V_0\left(c^2\mathrm{cosh}2bt+b^2\mathrm{cos}2ct\right)+q\left(V_0^2+q^2\right)\left(c\mathrm{sinh}2bt-b\mathrm{sin}2ct\right)}{q^2\left(b^2+c^2\right)+qV_0\left(c^3\mathrm{sinh}2bt+b^3\mathrm{sin}2ct\right)+\left(V_0^2+q^2\right)\left(c^2\mathrm{sinh}^2bt-b^2\mathrm{sin}^2ct\right)},$$ (98)

with

$$b=\frac{1}{\sqrt{2}}\sqrt{\sqrt{1+\frac{4q^2}{r^2}}-1},$$ (99)

$$c=\frac{1}{\sqrt{2}}\sqrt{\sqrt{1+\frac{4q^2}{r^2}}+1}=q/V_{xx}^{\text{ss}},$$ (100)

and we have scaled time by the harmonic oscillation frequency $`\omega `$. As before when $`2bt\gg 1`$ the system reaches its steady state value and the non-linearity of the terms describing the conditioning of the system state means that the time for this to occur is independent of the initial state. Interestingly this time is in fact shorter than was required to purify the conditioned state in the case of ideal detection. This is essentially because the extra noise means that past observations become irrelevant more quickly, not leaving enough time to determine a pure state completely. While the steady state is reached increasingly fast it corresponds to an increasingly high effective temperature. When $`r>1`$ this time to reach the steady state is $`\tau _\text{s}\simeq 2r/q\omega =m\omega /q\hbar \alpha =\tau _{\text{col}}/q`$ since in this regime $`b\simeq q/r`$. As was noted in the previous section the time for the conditioned state variances to reach their steady state is also the time that is necessary for different observers to agree about the system state. The time evolution of some of these quantities is plotted in Figure (4).
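As a quick check on how these expressions behave, the short sketch below evaluates Eqs. (90)–(92) for a few detection efficiencies. It assumes the simplest case in which the only contributions to $`\beta `$ are the detection inefficiency, $`\beta =\alpha (1-\eta )/\eta `$, plus an optional extra (thermal or spontaneous-emission) part passed in as a ratio to $`\alpha `$; the function name and the chosen values of $`\eta `$ are illustrative only.

```python
import numpy as np

def steady_state_mixedness(eta, beta_extra_over_alpha=0.0):
    """Phase-space area, linear entropy and von Neumann entropy of the
    steady conditioned state, Eqs. (90)-(92).  beta/alpha is taken to be
    (1 - eta)/eta plus any additional heating contribution."""
    beta_over_alpha = (1.0 - eta) / eta + beta_extra_over_alpha
    q = np.sqrt(1.0 + beta_over_alpha)               # Eq. (90)
    s_lin = 1.0 - 1.0 / q                            # Eq. (91)
    if q > 1.0:                                      # Eq. (92); q -> 1 is the pure-state limit
        S_vn = 0.5 * (q + 1) * np.log(q + 1) - 0.5 * (q - 1) * np.log(q - 1) - np.log(2.0)
    else:
        S_vn = 0.0
    return q, s_lin, S_vn

for eta in (1.0, 0.8, 0.5, 0.2):
    q, s_lin, S_vn = steady_state_mixedness(eta)
    print(f"eta = {eta:3.1f}:  q = {q:5.3f},  s = {s_lin:5.3f},  S = {S_vn:5.3f}")
```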
## VI Conclusions In this paper we have established that for a simple class of systems quantum trajectory theories allow the determination of a unique post-measurement state that depends only on the measurement results over a finite time. We have discussed the effects of experimental imperfections on this state determination and the analogy to classical state observation. The systems to which this analysis is applicable have the property that the conditioned density matrix is at all times Gaussian and its evolution is exactly that of the posterior probability distribution for an appropriate classical state observer. While we have considered only the case of position measurement the same treatment will be applicable to these other linear systems. Clearly in more complicated systems, such as the resonant interaction of an atom with the single mode of an optical cavity, this will not always be the case and the conditioned Wigner function will sometimes be non-positive. We would however expect that the central result we have shown here, the purity of the conditioned state after sufficiently long continuous observation and the dependence of this state on the initial state only through the measurement results, will still hold for these more interesting and complicated systems. This will however require numerical simulation of the stochastic master equation for such systems. Another feature that should generalize is the interpretation of the SME as a state observer — presumably an optimal one — for the quantum system. ## Acknowledgements A. D. would like to thank Kurt Jacobs for continuing stimulating discussions on continuous measurement theory and in particular for suggesting that it would be interesting to calculate the purity of conditioned states with and without thermal couplings. It is also a pleasure to acknowledge discussions with Hideo Mabuchi. This work is supported by the Marsden Fund of the Royal Society of New Zealand.
# The ATLAS Pixel Project

## I Introduction

The ATLAS experiment at the CERN Large Hadron Collider will function as an exploratory tool for the Higgs boson along with possible new physics phenomena beyond the scope of the Standard Model such as supersymmetry. An average of 23 minimum-bias events will be produced every 25ns at the design luminosity ($`\mathcal{L}`$ = 10<sup>34</sup>cm<sup>-2</sup>s<sup>-1</sup>). Hence interesting signatures will be buried amongst extraordinary degrees of background. In order to realise fully the physics potential of 14TeV $`pp`$ collisions at this luminosity, ATLAS must be capable of high-efficiency track reconstruction with excellent impact-parameter resolution for 3D-vertexing and b-tagging. Even at lower luminosities of 10<sup>33</sup>cm<sup>-2</sup>s<sup>-1</sup>, ATLAS will encounter millions of $`t\overline{t}`$ pairs per year for top-quark analyses along with a plethora of $`b`$-quarks enabling the possibility of CP-violation studies in the $`b`$-sector. For these areas of study, the tracking system must again deliver high spatial precision and reconstruction efficiency along with excellent momentum resolution. In addition to the performance requirements, the extremely harsh radiation environment defines perhaps the greatest challenge of all to the detector elements in ATLAS. At a radius of 30cm in the barrel region we expect to encounter a hadronic fluence<sup>*</sup><sup>*</sup>*expressed in terms of the 1MeV-neutron NIEL equivalent damage of $`\sim `$ 1.3$`\times `$10$`{}_{}{}^{14}n_{\text{eq}}^{}`$cm<sup>-2</sup> after 10 years of operation. At 10cm the figure is closer to 1.0$`\times `$10$`{}_{}{}^{15}n_{\text{eq}}^{}`$cm<sup>-2</sup>. The Inner Detector of ATLAS will incorporate high-resolution, low mass discrete tracking components in the form of silicon microstrip and silicon pixel detectors. The pixel elements will form the innermost layers by virtue of their higher granularity and because they are more suited to providing pattern recognition capability at the greatest track densities. Also, having a greater degree of segmentation than their strip-geometry counterparts naturally endows them with lower noise-per-channel and greater immunity to radiation damage.

## II Layout of the Pixel System

Figure 1 shows the relative positioning of the individual modules within the pixel sub-detector. There are 3 barrel layers at radii of 4.3, 10.1 and 13.2cm along with 5 disc structures which extend to $`\pm `$78cm in the forward directions, providing pseudorapidity coverage to 2.5. The innermost barrel layer, (known as the $`B`$-layer), is not expected to survive the 10-year duration of the experiment and will therefore be replaced periodically. A single module comprises a silicon sensor tile with 16 front-end readout chips bump-bonded to it along with one Module Controller Chip (MCC). The pixel detector will incorporate 2228 such modules with a total active area of 2.3m<sup>2</sup>.

## III Module structure and hybridization

There are currently two technology options being pursued for module hybridization known as the flex-hybrid approach and the MCM-D technique (Multi Chip Module - Direct). The former incorporates a thin kapton hybrid which is glued to the sensor backplane to provide the necessary bussing between the FE-chips, MCC and optical transmission components.
The front-end chips, which are bump-bonded to the opposite face of the sensor, protrude along the long-edges of the module in order that their 48 power,data and control bond-pads may be wire-bonded up to the hybrid. The MCC sits on top of the hybrid along with the other electrical components such as decoupling capacitors, optical-link termination devices and temperature sensor. The MCM-D method eliminates the need for wire bonds as the bussing is processed on to the sensor surface (active-side) in four layers of copper metallisation. The long sensor dimension is extended to provide a region of ‘dead’ silicon upon which the MCC and peripheral components are mounted. The sensor is also wider in order that the connections from the front-end chips to the power and control bus’ may be formed using bump-bonds in the same way that the individual pixel implants are connected to the preamp inputs. Bump-bonding at 50$`\mu `$m pitch was until recent years considered to be a serious issue. However, there are several vendors with whom very promising experience has been gained recently in both the Indium and $`PbSn`$-solder technologies with pre-flip-chip yields of order 10<sup>-5</sup>. For the Indium process, the flip-chip stage involves pressing together the electronics chip and sensor, (both of which have the bumps grown on them), to form a cold weld. Solder bumps are grown only on the electronics surface using a plating and re-flow technique. The recipient sensor surface is plated with a wettable metallisation. Flip-chipping in this case involves bringing the two surfaces into accurately aligned contact and re-flowing the solder at 250$`^\text{o}`$C. ## IV Prototype Sensors The pixel sensors will be formed from 250$`\mu `$m-thick high-$`\rho `$ Silicon except within the $`B`$-layer where it is hoped to utilise material as thin as 200$`\mu `$m. High-$`\rho `$ silicon which begins as $`n^{}`$-type inverts to $`p`$-type after a hadronic dose of $``$ 2.5$`\times `$10$`{}_{}{}^{12}n_{\text{eq}}^{}`$cm<sup>-2</sup>. Beyond this dose the depletion voltage rises as the effective carrier concentration continues to increase. After 10-years of radiation exposure, the required reverse-bias voltage for full depletion of the pixel sensors will be in excess of 1000V. Since the maximum operating voltage is specified to be 600V, we will be forced to operate them in a partially-depleted regime. For this reason all of the prototype designs are based on $`n^+`$-type pixel implants so that post-inversion, the junction resides on the active side. The first-phase of sensor prototyping was conducted with two vendors; Seiko in Japan and C.I.S. in Germany. Identical wafer designs were submitted to both and featured two full-size tile designs. One of these had p-stop isolation structures, (known as ‘Tile-1’), whilst the other, (Tile-2), used the p-spray technique to over-compensate the electron accumulation-layer which otherwise has the effect of shorting $`n^+`$-implants together. Along with these devices there were a variety of smaller detector designs with a pixel-array corresponding to that of a single readout chip. Some of these matched the Tile-1 and Tile-2 designs (known as ST1 and ST2 respectively). Figure 2 shows the layout of these two designs in detail. The ST1 design is on the left and the $`p^+`$-type isolation rings may be seen surrounding the $`n^+`$-pixel implants. 
In the ST2 design there are also ring-like structures around each pixel but these are now $`n^+`$-type and have the purpose of reducing the inter-pixel capacitance. The ST2 design also incorporates reach-through biasing which is not possible to implement with p-stop isolation. The third design shown in figure 2 is known as SSG (Single Small Gap). This also utilises p-spray isolation but does not include any intermediate $`n^+`$ structures. Neighbouring pixels rather have just small gaps between them.

## V Front-End Readout Electronics

There are many demands on the front-end readout electronics. Insensitivity to the changes in transistor thresholds and transconductance brought on by a total ionising dose of 30MRad, (in layer-1 after 10-years of operation), must be built into the design. In addition, the front-ends must be able to deliver high-efficiency performance when the operation of the sensors has been compromised by hadronic damage. This will involve leakage-current tolerance to the level of 100nA/channel whilst maintaining sensitivity to charge spectra much reduced by partial-depletion operation and charge-trapping effects. The power consumption must not exceed 40$`\mu `$W per channel, the timewalk must be contained to the 25ns bunch-crossing period of the LHC and the cross-coupling between neighbouring pixels should account for a charge loss of no more than 5%. Over the last year and a half, the demonstrator development programme has followed two concurrent design directions. The two designs, known as FE-A and FE-B, were aimed at the DMILL and Honeywell radiation-hard processes respectively. To date both have been realised in radiation-soft technologies, (namely AMS and HP), and evaluated extensively in laboratory and testbeam environments. The FE-A design, (which was initially a BiCMOS design), was also laid out and fabricated as a pure CMOS chip which is referred to as the FE-C. The two design efforts have joined forces and are now on the threshold of the first radiation-hard submission (to DMILL) of a common demonstrator design, (FE-D), which is largely based on combined features of the FE-A and FE-B approaches. Later in 1999 it is planned that a common design submission also be made for the Honeywell SOI process (FE-H). All of the demonstrator chips were designed to be fully pin-compatible enabling a common hardware test system to be used for all cases. Only minor software and firmware options are required to switch between different chips. There were several philosophical operational differences between FE-A and FE-B not least of which was the use of a dual-threshold discriminator for the FE-B front-ends as opposed to the faster single discriminator approach used in FE-A. Time information is derived from a lower time-threshold whilst the upper threshold is responsible for hit-adjudication. Both approaches incorporate negative polarity preamps for the $`n^+`$ in $`n^{-}`$ sensors, LVDS inputs and outputs for clock and control lines, a serial command protocol and serial readout scheme for line reduction and local data storage in the form of end-of-column-pair buffer sets and a FIFO for hits which have matched incoming triggers.
They also both incorporate eight 8-bit local DACs for front-end biases, a local chopper circuit for calibration charge injection, 7-bits of digital charge measurement based on time-over-threshold, channel-by-channel masking capability for separate strobe and readout selection and a 3-bit local threshold-tuning DAC, (TDAC), in every pixel cell for threshold-dispersion reduction. ## VI Results from demonstrator-assembly testing Throughout 1998 the collaboration followed a rich program of testing of ‘single-chip assemblies’ which comprised the radiation-soft demonstrator electronics bump-bonded to the first-generation prototype sensors, (both irradiated and non-irradiated samples). Testing has also progressed on a smaller number of full-scale modules with 16 FE readout-chips connected to a single tile. The top-left plot in figure 3 shows the distribution of thresholds for a bare chip. This is formed from scanning the input calibration-pulse amplitude in fine steps and deriving an ‘s-curve’ of efficiency versus charge (knowing the magnitude of the calibration capacitance) pixel-by-pixel. The width of this distribution is 265e- with all of the 3-bit TDACs set to the same value. On the right of this plot the TDAC values for all channels have been assigned in order to minimise the dispersion. This process results in a post-tune distribution width of 94e-. The two lower plots on the left of figure 3 show the threshold values plotted according to ‘channel-number’ which is defined to be 160$`\times `$column number + row number. The four rightmost plots show the tuned-threshold distribution and noise distribution for a typical FE-B single-chip assembly incorporating an ST2 sensor. The equivalent noise charge (ENC) is measured to be 105e-. Similar results are obtained for ST1 assemblies whereas the noise for an SSG device was measured to be 170e- due to the higher load capacitance. A range of assemblies were evaluated in the H8 testbeam of CERN’s SPS accelerator during four periods of 1998 running. The H8 facility offers a four cross-plane silicon microstrip hodoscope, providing an extrapolation resolution of 3$`\mu `$m for charged pions with momenta in excess of 100GeV/c. This enabled detailed studies of hit-efficiency, charge-collection efficiency, resolution and noise occupancy to form part of the comparative process for the different prototype sensor designs both before and after irradiation. The left-most plots in figure 4 show the charge spectra obtained from the three principal design types, ST2, SSG and ST1 for tracks of normal incidence. Each plot has two histograms, one corresponding to the case where only a single channel was seen to fire and the other for double-channel clusters. In all cases except for ST2, the spectra show the usual Landau/Gaussian-convolution shape, peaking at around the 21,000e- value expected for the 280$`\mu `$m thickness of Silicon. For the ST2 case the double-channel distribution has a distinctly lower peak value indicating some degree of charge-loss. In the central plots, a mean-charge surface is plotted as a function of the local extrapolation point within a 2-pixel cell. For the ST1 and SSG cases it is clear that there is a slight degree of charge-collection inefficiency in the region between neighbouring columns. For the ST2 design, the loss is extreme in this region and further loss is apparent all around the pixel-cell outline. This effect is directly attributable to the atoll $`n^+`$-rings which surround each pixel in the ST2 design. 
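The threshold and noise figures quoted above (figure 3) are obtained by fitting an error-function "s-curve" to the measured efficiency versus injected calibration charge, pixel by pixel. The fragment below is a generic illustration of that fit on synthetic data; the charge scale, threshold and ENC values are made up for the example, and it is not the actual test-system software used by the collaboration.

```python
import numpy as np
from scipy.special import erf
from scipy.optimize import curve_fit

def s_curve(q, threshold, enc):
    """Hit efficiency versus injected charge q (electrons) for a
    discriminator with Gaussian noise: 0.5*(1 + erf((q - thr)/(sqrt(2)*ENC)))."""
    return 0.5 * (1.0 + erf((q - threshold) / (np.sqrt(2.0) * enc)))

# synthetic threshold scan for one pixel (illustrative values only)
rng = np.random.default_rng(0)
true_thr, true_enc, n_inj = 3000.0, 120.0, 100      # electrons, injections per point
charges = np.arange(2000.0, 4000.0, 50.0)
hits = rng.binomial(n_inj, s_curve(charges, true_thr, true_enc))
eff = hits / n_inj

popt, pcov = curve_fit(s_curve, charges, eff, p0=(2500.0, 200.0))
thr_fit, enc_fit = popt
print(f"fitted threshold = {thr_fit:.0f} e-,  ENC = {enc_fit:.0f} e-")
```

Repeating such a fit for every channel gives the threshold and noise distributions whose widths (before and after TDAC tuning) are quoted in the text.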
The leftmost plots in figure 5 show residual distributions for an SSG assembly for tracks of normal incidence. The upper right distribution corresponds to the overall binary case in the $`r\varphi `$ dimension which yields an r.m.s. of 12.9$`\mu `$m. This is slightly better than the expected 50$`\mu `$m/$`\sqrt{12}`$ due to a small admixture of double-channel clusters. Similar results were obtained for all of the designs pre-irradiation. The table overleaf summarises the overall efficiency figures obtained for the ST1, ST2 and SSG assemblies for thresholds of approximately 2ke- and 3ke-. The values are at least equal to 99% in almost all cases upon application of a range of quality cuts in time and space, the single exception being ST2 at 3ke- threshold which registers 98.8%. This is due to the worse charge-collection efficiency mentioned earlier. Irradiated ST1 and ST2 samples were also tested in H8 at -10$`^\text{o}`$C. ST2 samples which had received 5.0$`\times `$10$`{}_{}{}^{14}n_{\text{eq}}^{}`$cm<sup>-2</sup> and 1.0$`\times `$10$`{}_{}{}^{15}n_{\text{eq}}^{}`$cm<sup>-2</sup> both performed extremely well for various reverse-bias voltages up to 600V, as tabulated in the right-hand table below. An efficiency of 98.2% was recorded for the maximum dose sample when the analysis was restricted to a region within each pixel cell which eliminates the charge-loss effects due to the atoll $`n^+`$ structures. The fake-occupancy probability for this device was recorded to be 0.9$`\times `$10<sup>-7</sup> and half of those ‘hits’ were identified as originating from physics. On the right of figure 5 are the distributions of residuals for the high-dose ST2 sample. The overall binary resolution is measured to be 16.5$`\mu `$m. The corresponding ST1 irradiated samples in contrast showed poorer performance. Extremely large numbers of noise hits were recorded for even the 5$`\times `$10$`{}_{}{}^{14}n_{\text{eq}}^{}`$cm<sup>-2</sup> sample for applied detector biases in excess of 10V. Noise analyses of the irradiated ST1 samples in the laboratory indicate severe micro-discharge occurrence once the backplane bias is raised much above 100V. At 250V a broad spectrum of noise values is recorded up to 15,000e-. Even the highly dosed ST2 assembly on the other hand yields a very tight noise distribution at 600V in the laboratory with a mean of just 260e-, (see figure 7). Figure 8 shows the results of I-V characterisations of the high-dose irradiated samples. The ST1 assembly breaks down at an applied bias of around 630V whereas the ST2 device shows no signs of breakdown even up to 1kV. All of these results strongly reinforce the view that p-spray isolation leads to lower surface fields than the p-stop approach and thus greater micro-discharge and breakdown immunity when operated at high-voltage. The next generation sensor prototypes reflect these observations with the principal design based on p-spray isolation without any intermediate $`n^+`$ structures. This design looks like the SSG approach except that the gaps are made slightly wider in order to optimise the noise performance.

## VII Conclusions

The ATLAS Pixel demonstrator program is progressing well with very encouraging results obtained from radiation-soft versions of the front-end readout electronics. The success of the electronics program has enabled evaluation of a series of prototype sensor designs which have provided the collaboration with a clear concept for a new prototype layout.
This new design is optimised for high-voltage operation post-irradiation whilst maintaining good charge-collection efficiency. The experience obtained with the two front-end design concepts (FE-A and FE-B) has been of great benefit in converging on combined designs for the upcoming radiation hard submissions.
# Noncollinear cluster magnetism in the framework of the Hubbard model

## I Introduction

Clusters may show specific phenomena that do not have an equivalent in the thermodynamic limit and that are therefore of interest in both basic and applied science. Moreover, the evolution of their physical properties with increasing cluster size provides new perspectives for understanding the microscopic origin of condensed-matter properties. In this context, the study of magnetism in metals poses a particularly interesting and challenging problem since the properties of magnetic metals often change dramatically as the electrons of an isolated atom become part of a cluster of several atoms and delocalize. It is therefore of fundamental importance to understand how itinerant magnetism, as found for example in $`3d`$ transition-metal (TM) solids, develops starting from localized atomic magnetic moments. The strong dependence of the magnetic behavior as a function of system size, structure and composition opens in addition the possibility of using clusters to tailor new magnetic materials for specific technological purposes. Consequently, the relation between local atomic environment, cluster structure and magnetism also has implications of practical interest. Most theoretical studies of itinerant cluster magnetism have been performed using Hartree-Fock (HF) and local spin density (LSD) methods. Exact many-body calculations are presently limited to simple models, such as the Hubbard model, and to systems containing a small number of sites. Within mean-field approximations (HF or LSD) the self-consistent spin polarizations are usually restricted to be collinear, i.e., the direction of all local magnetic moments is assumed to be the same. However, the well known sensitivity of itinerant magnetism to the local environment of the atoms suggests that other instabilities towards spiral-like spin-density-waves (SDW’s) or even more complex magnetic structures should also be possible in general. For example, the tendency to antiferromagnetic (AF) order close to half-band filling and the presence of triangular loops in compact structures generate magnetic frustrations that may easily yield complex arrangements of the local magnetic moments. Moreover, the reduction of local coordination numbers at the surface of clusters removes constraints between the local moments and could favor the development of low-symmetry spin polarizations. In solids, noncollinear magnetic structures have been identified experimentally and theoretically already for a long time. In contrast, very little is known at present in the case of finite systems. Fully unrestricted ab initio calculations of noncollinear spin arrangements have been performed only recently for very small Fe<sub>N</sub> clusters. The investigation of magnetic phenomena of this kind requires a symmetry unrestricted approach in which no a priori assumptions are made concerning the relative orientation of the local magnetic moments, thereby enlarging the number of degrees of freedom of the problem. The main purpose of this paper is to investigate the characteristics of noncollinear magnetic states in finite clusters. We consider the single-band Hubbard model and determine the ground-state magnetic properties in a fully unrestricted Hartree-Fock approximation. The theoretical approach, outlined in Sec. II, is applied to clusters having $`N\le 43`$ atoms. The results presented in Sec.
III analyze noncollinear magnetic behaviors for a few representative compact structures. Several examples are given that illustrate the large variety of 3-dimensional magnetic arrangements obtained in the self-consistent calculations as a function of the Coulomb repulsion strength $`U/t`$, band filling $`\nu /N`$ and cluster structure. In Sec. IV we compare the UHF results with exact diagonalization calculations for $`N\le 13`$ atoms (Lanczos method). The role of quantum fluctuations beyond mean field and the consequences of the often artificial breaking of spin-symmetry implied by the formation of the noncollinear local moments are discussed. Taking into account that approximations such as UHF are unavoidable for larger clusters and for more realistic model Hamiltonians, it is of considerable interest to test the validity of these methods in order to improve the interpretation of the approximate results and to obtain a more accurate description of magnetic phenomena in clusters.

## II Theoretical method

The single-band Hubbard Hamiltonian is given by

$$H=-t\sum _{l,m,\sigma }\widehat{c}_{l\sigma }^{\dagger }\widehat{c}_{m\sigma }+U\sum _l\widehat{n}_{l\uparrow }\widehat{n}_{l\downarrow },$$ (1)

where $`\widehat{c}_{l\sigma }^{\dagger }`$ ($`\widehat{c}_{l\sigma }`$) is the creation (annihilation) operator of an electron at site $`l`$ with spin $`\sigma `$, and $`\widehat{n}_{l\sigma }=\widehat{c}_{l\sigma }^{\dagger }\widehat{c}_{l\sigma }`$ is the corresponding number operator ($`\widehat{n}_l=\sum _\sigma \widehat{n}_{l\sigma }`$). The first sum runs over all pairs of nearest neighbors (NN) and the second over all sites. The model is characterized by the dimensionless parameter $`U/t`$, that measures the relative importance between kinetic and Coulomb energies, by the cluster structure, that defines the kinetic energy operator, and by the number of valence electrons $`\nu `$. The variations of $`U/t`$ can be associated to a uniform relaxation of the interatomic distances (e.g., $`t\propto R_{ij}^{-5}`$ for TM’s) or to changes in the spatial extension of the atomic-like wave function, as in different elements within the same group. Different $`\nu `$’s correspond to different band-fillings $`\nu /N`$ that may be associated qualitatively to the variations of $`\nu /N`$ across a TM $`d`$ series. In spite of its simplicity, this Hamiltonian has played, together with related models, a major role in guiding our understanding of the many-body properties of metals and of low-dimensional magnetism. It is the purpose of this work to use it to investigate the properties of noncollinear itinerant magnetism in small compact clusters. In the unrestricted Hartree-Fock (UHF) approximation the ground state for $`\nu `$ electrons is a single Slater determinant that can be written as

$$|\mathrm{UHF}\rangle =\left[\prod _k^\nu \widehat{a}_k^{\dagger }\right]|\mathrm{vac}\rangle .$$ (2)

The single-particle states

$$\widehat{a}_k^{\dagger }=\sum _{l,\sigma =\uparrow ,\downarrow }A_{l\sigma }^k\widehat{c}_{l\sigma }^{\dagger }$$ (3)

are linear combinations of the atomic-like orbitals associated to $`\widehat{c}_{l\sigma }^{\dagger }`$. Notice that in Eq. (3) we allow for the most general superposition of single-electron states since $`\widehat{a}_k^{\dagger }`$ may involve a mixture of both $`\uparrow `$ and $`\downarrow `$ spin components. The eventually complex coefficients $`A_{l\sigma }^k`$ are determined by minimizing the energy expectation value $`E_{\mathrm{UHF}}=\langle \mathrm{UHF}|H|\mathrm{UHF}\rangle `$.
In terms of the density matrix

$$\rho _{l\sigma ,m\sigma ^{}}\equiv \langle \mathrm{UHF}|\widehat{c}_{l\sigma }^{\dagger }\widehat{c}_{m\sigma ^{}}|\mathrm{UHF}\rangle =\sum _{k=1}^\nu \overline{A}_{l\sigma }^kA_{m\sigma ^{}}^k,$$ (4)

this is given by

$$E_{\mathrm{UHF}}=-t\sum _{l,m,\sigma }\rho _{l\sigma ,m\sigma }+U\sum _l\left(\rho _{l\uparrow ,l\uparrow }\rho _{l\downarrow ,l\downarrow }-|\rho _{l\uparrow ,l\downarrow }|^2\right).$$ (5)

The energy minimization and the normalization constraints on the wave function lead to the usual self-consistent equations

$$-t\sum _mA_{m\sigma }^k+U\left(A_{l\sigma }^k\rho _{l\overline{\sigma },l\overline{\sigma }}-A_{l\overline{\sigma }}^k\rho _{l\overline{\sigma },l\sigma }\right)=\epsilon _kA_{l\sigma }^k.$$ (6)

For a given solution, the average local electronic density $`n_l`$ is given by

$$n_l=\rho _{l\uparrow ,l\uparrow }+\rho _{l\downarrow ,l\downarrow },$$ (7)

and the spin polarization vectors $`\vec{S}_l=(S_l^x,S_l^y,S_l^z)`$ by

$$S_l^x=\left(\rho _{l\uparrow ,l\downarrow }+\rho _{l\downarrow ,l\uparrow }\right)/2,$$ (8)

$$S_l^y=i\left(\rho _{l\downarrow ,l\uparrow }-\rho _{l\uparrow ,l\downarrow }\right)/2,$$ (9)

$$S_l^z=\left(\rho _{l\uparrow ,l\uparrow }-\rho _{l\downarrow ,l\downarrow }\right)/2.$$ (10)

The usual collinear UHF approach is recovered when $`\rho _{l\sigma ,l\overline{\sigma }}=0`$ $`\forall l`$, i.e., when all magnetic moments $`\vec{S}_l`$ are parallel to $`z`$. In practice, several random spin arrangements are used as starting points of the selfconsistent procedure in order to ensure that the final result corresponds to the true UHF ground state. In case of multiple selfconsistent solutions (nonequivalent by rotations) the UHF energies are compared. $`E_{\mathrm{UHF}}`$ can be rewritten as

$$E_{\mathrm{UHF}}=-t\sum _{l,m,\sigma }\rho _{l\sigma ,m\sigma }+\frac{U}{4}\sum _ln_l^2-U\sum _l|\vec{S}_l|^2.$$ (11)

One observes that the Hartree-Fock Coulomb energy $`E_C^{HF}`$ — the sum of the 2nd and 3rd terms in Eq. (11) — favors a uniform density distribution and the formation of local moments $`\vec{S}_l`$. Due to the local character of Hubbard’s Coulomb interaction, the relative orientation of the different $`\vec{S}_l`$ does not affect $`E_C^{HF}`$. It is therefore the optimization of the kinetic energy that eventually leads to the formation of complex magnetic structures with $`|\vec{S}_l\cdot \vec{S}_m|<|\vec{S}_l||\vec{S}_m|`$ or to non-uniform density distributions $`n_l`$. As a result of the tendency to avoid double orbital occupancies, the UHF solutions often correspond to states of broken symmetry: spin-density waves (SDW’s), charge density waves (CDW’s) or both. The spin-rotational invariance of Eqs. (2)–(6) implies that the energy is unchanged after a rotation of the whole spin arrangement $`\{\vec{S}_l,l=1,\mathrm{},N\}`$. Therefore, if $`\vec{S}_l\ne 0`$ one has a set of linearly independent congruent solutions $`|\mathrm{UHF}k\rangle `$ ($`k\ge 2`$) which have the same average energy $`E_{\mathrm{UHF}}`$ and which differ from each other only by the orientation of the spin polarizations relative to the cluster structure. The illustrations of spin arrangements shown in Sec. III correspond to one of these SDW’s, which is chosen only for the sake of clarity. The set of UHF solutions may be used to restore the symmetry appropriate to the exact ground state $`|\mathrm{\Psi }_0\rangle `$ thereby improving the approximate wave function.
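Before turning to the symmetry-restored states discussed next, the self-consistency cycle defined by Eqs. (4)–(11) can be spelled out in a few lines. The sketch below is a minimal, illustrative implementation and not the code used for the results of this paper: the cluster (a triangle at half filling), the value $`U/t=8`$, the linear-mixing factor and the function names are all assumptions made only for the example.

```python
import numpy as np

t, U, nu = 1.0, 8.0, 3            # hopping, Hubbard U, electron number (example values)
bonds = [(0, 1), (1, 2), (2, 0)]  # nearest-neighbour pairs of a frustrated triangle
N = 3
rng = np.random.default_rng(1)

# spin-orbital index: a = 2*l + s, with s = 0 (up), 1 (down)
def mean_field_h(rho):
    h = np.zeros((2 * N, 2 * N), dtype=complex)
    for l, m in bonds:                            # kinetic term of Eq. (6)
        for s in (0, 1):
            h[2*l+s, 2*m+s] = h[2*m+s, 2*l+s] = -t
    for l in range(N):
        up, dn = 2*l, 2*l+1
        h[up, up] += U * rho[dn, dn].real          # U * rho_{l_dn,l_dn}
        h[dn, dn] += U * rho[up, up].real
        h[up, dn] += -U * rho[dn, up]              # spin-flip (exchange) term of Eq. (6)
        h[dn, up] += -U * rho[up, dn]
    return h

# random noncollinear starting guess for the density matrix
rho = np.eye(2 * N, dtype=complex) * nu / (2 * N)
for l in range(N):
    v = 0.4 * (rng.random(3) - 0.5)
    rho[2*l, 2*l+1] += v[0] + 1j * v[1]
    rho[2*l+1, 2*l] += v[0] - 1j * v[1]
    rho[2*l, 2*l] += v[2]
    rho[2*l+1, 2*l+1] -= v[2]

for it in range(500):
    eps, A = np.linalg.eigh(mean_field_h(rho))
    C = A[:, :nu]                                  # occupy the nu lowest orbitals
    rho_new = C.conj() @ C.T                       # rho_{ab} = sum_k conj(A^k_a) A^k_b, Eq. (4)
    if np.max(np.abs(rho_new - rho)) < 1e-10:
        rho = rho_new
        break
    rho = 0.5 * rho + 0.5 * rho_new                # simple linear mixing

for l in range(N):
    up, dn = 2*l, 2*l+1
    n_l = (rho[up, up] + rho[dn, dn]).real         # Eq. (7)
    Sx = rho[up, dn].real                          # Eq. (8)
    Sy = rho[up, dn].imag                          # Eq. (9)
    Sz = 0.5 * (rho[up, up] - rho[dn, dn]).real    # Eq. (10)
    print(f"site {l}: n = {n_l:.3f}, S = ({Sx:+.3f}, {Sy:+.3f}, {Sz:+.3f})")
```

As emphasized in the text, in practice several random starting arrangements must be tried and the converged energies compared in order to identify the true UHF ground state; on a frustrated triangle at large $`U/t`$ the converged moments are typically noncollinear.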
For instance, for some SDW’s having $`\sum _l\vec{S}_l=0`$ one may consider the spin-symmetrized Hartree-Fock (SSHF) wave function

$$|\mathrm{SSHF}\rangle =\frac{|\mathrm{UHF1}\rangle +|\mathrm{UHF2}\rangle }{\sqrt{2(1+\langle \mathrm{UHF1}|\mathrm{UHF2}\rangle )}},$$ (12)

where $`|\mathrm{UHF2}\rangle `$ is obtained from $`|\mathrm{UHF1}\rangle `$ by interchanging up and down spins. In spite of its simplicity, $`|\mathrm{SSHF}\rangle `$ goes beyond the UHF approximation and corresponds to a correlated state satisfying

$$|\langle \mathrm{SSHF}|\mathrm{\Psi }_0\rangle |=\sqrt{2/(1+\langle \mathrm{UHF1}|\mathrm{UHF2}\rangle )}\,|\langle \mathrm{UHF1}|\mathrm{\Psi }_0\rangle |$$ (13)

$$\ge |\langle \mathrm{UHF1}|\mathrm{\Psi }_0\rangle |=|\langle \mathrm{UHF2}|\mathrm{\Psi }_0\rangle |.$$ (14)

If quantum fluctuations between the two SDW’s $`|\mathrm{UHF1}\rangle `$ and $`|\mathrm{UHF2}\rangle `$ are non-negligible ($`\langle \mathrm{UHF1}|H|\mathrm{UHF2}\rangle \ne \langle \mathrm{UHF1}|\mathrm{UHF2}\rangle E_{\mathrm{UHF}}`$) an energy reduction relative to $`E_{\mathrm{UHF}}`$ is obtained since

$$E_{\mathrm{SSHF}}=\frac{E_{\mathrm{UHF}}+\langle \mathrm{UHF1}|H|\mathrm{UHF2}\rangle }{1+\langle \mathrm{UHF1}|\mathrm{UHF2}\rangle }.$$ (15)

$`\mathrm{\Delta }E=E_{\mathrm{UHF}}-E_{\mathrm{SSHF}}`$ measures the importance of these quantum spin fluctuations. More complex symmetrized states involving a linear combination of several degenerate UHF states may be constructed analogously. In order to quantify the role of electron correlations and to assess the goals and limitations of the UHF approximation in applications to finite clusters, we shall also compare some of our results with those obtained by applying exact diagonalization methods. In this case the Hubbard model is solved numerically by expanding its ground-state $`|\mathrm{\Psi }_0\rangle `$ in a complete set of basis states $`|\mathrm{\Phi }_m\rangle `$ which have definite occupation numbers $`n_{l\sigma }^m`$ at all orbitals $`l\sigma `$ ($`\widehat{n}_{l\sigma }|\mathrm{\Phi }_m\rangle =n_{l\sigma }^m|\mathrm{\Phi }_m\rangle `$, with $`n_{l\sigma }^m=0`$, $`1`$). $`|\mathrm{\Psi }_0\rangle `$ is written as

$$|\mathrm{\Psi }_0\rangle =\sum _m\alpha _{lm}|\mathrm{\Phi }_m\rangle ,$$ (16)

where

$$|\mathrm{\Phi }_m\rangle =\left[\prod _{l\sigma }(\widehat{c}_{l\sigma }^{\dagger })^{n_{l\sigma }^m}\right]|\mathrm{vac}\rangle .$$ (17)

The values of $`n_{l\sigma }^m`$ satisfy the usual conservation of the number of electrons $`\nu =\nu _{\uparrow }+\nu _{\downarrow }`$ and of the $`z`$ component of the total spin $`S_z=(\nu _{\uparrow }-\nu _{\downarrow })/2`$, where $`\nu _\sigma =\sum _ln_{l\sigma }^m`$. Taking into account all possible electronic configurations may imply a considerable numerical effort which depends on the number of atoms and on band filling. For not too large clusters, the expansion coefficients $`\alpha _{lm}`$ can be determined by sparse-matrix diagonalization procedures. The results presented in this work were obtained using a Lanczos iterative method. In order to calculate $`|\mathrm{\Psi }_0\rangle `$ one usually works in the subspace of minimal $`S_z`$ since this ensures that there are no a priori restrictions on the total spin $`S`$. The ground-state spin $`S`$ is then obtained by applying to $`|\mathrm{\Psi }_0\rangle `$ the total spin operator

$$\widehat{S}^2=\sum _{lm}\vec{S}_l\cdot \vec{S}_m=\sum _{lm}\left[\frac{1}{2}(\widehat{S}_l^+\widehat{S}_m^{-}+\widehat{S}_l^{-}\widehat{S}_m^+)+\widehat{S}_l^z\widehat{S}_m^z\right],$$ (18)

where $`\widehat{S}_l^+=\widehat{c}_{l\uparrow }^{\dagger }\widehat{c}_{l\downarrow }`$, $`\widehat{S}_l^{-}=\widehat{c}_{l\downarrow }^{\dagger }\widehat{c}_{l\uparrow }`$ and $`\widehat{S}_l^z=(\widehat{n}_{l\uparrow }-\widehat{n}_{l\downarrow })/2`$.
From the expectation values of $`\stackrel{}{S}_l\stackrel{}{S}_m`$ one also obtains the local magnetic moments ($`l=m`$) and the intersite spin correlation functions ($`lm`$). ## III Noncollinear magnetic order in compact clusters In Fig. 1 UHF results are given for the local magnetic moments $`\mu _l=|\stackrel{}{S}_l|`$ and the total magnetic moment $`\mu _T=|_l\stackrel{}{S}_l|`$ of fcc-like clusters having $`N=13`$, $`19`$ and $`43`$ atoms at half band filling ($`\nu =N`$). These clusters are formed by adding to a central atom ($`l=1`$) the successive shells of its first NN’s ($`N=13`$), second NN’s ($`N=19`$) and third NN’s ($`N=43`$). Several common properties are observed as a function of $`U/t`$. Starting from the uncorrelated limit ($`U=0`$), the total moment $`\mu _T`$ remains approximately constant for $`U/t3`$$`4`$. In the weakly interacting regime, the local moments $`\mu _l`$ do not depend strongly on $`U/t`$ and are generally small. Notice that for $`N=43`$, $`\mu _T`$ is not minimal at small $`U`$ ($`\mu _T=3/2`$) due to degeneracies in the single-particle spectrum. For larger $`U/t`$, $`\mu _T`$ decreases rapidly eventually with some discontinuities and a few oscillations reaching values close to $`\mu _T=0`$ for $`U/t5`$$`6`$ ($`U/t9`$ for $`N=43`$). At this intermediate range of $`U/t`$, the $`\mu _l`$ increase more rapidly reaching values not far from saturation at the $`U/t`$ for which $`\mu _T`$ is minimum ($`\mu _l0.40`$$`0.45`$). The opposite trends shown by $`\mu _l`$ and $`\mu _T`$ are a clear indication of the expected onset of strong AF-like order at half-band filling. We shall see in the following that this corresponds in fact to noncollinear spin arrangements. If $`U/t`$ is increased beyond $`U/t=5`$$`6`$ (beyond $`U/t=9`$ for $`N=43`$) the local moments do not vary significantly, and $`\mu _T`$ either does not change very much ($`N=19`$) or increases monotonically ($`N=13`$ and $`43`$) remaining always smaller than $`1/2`$. The changes in $`\mu _l`$ and $`\mu _T`$ are the result of qualitative changes in the magnetic order. As an example we show in Fig. 2 the selfconsistent spin arrangements obtained in fcc clusters with $`N=13`$ atoms for representative values of $`U/t`$ ($`\nu =N`$) . For small $`U/t`$ ($`U/t<3.7`$) one finds a collinear AF order with small $`\mu _l`$ \[Fig. 2(a)\]. Here we observe a charge and spin density-wave at the cluster surface that is related to degeneracies in the single-particle spectrum at $`U=0`$. The atoms belonging to the central (001) plane \[shown in grey in Fig. 2(a)\] have much larger moments than the atoms at the upper and lower (001) planes. For example for $`U/t=0.5`$, $`\mu _l=0.26\mu _\mathrm{B}`$ at the the central plane, while the other surface moments are $`\mu _l=0.0046\mu _\mathrm{B}`$ (see Fig. 1). The central atom has a small spin polarization ($`\mu _1=0.002\mu _\mathrm{B}`$ for $`U/t=0.5`$). Notice that the magnetic order within the upper and lower (001) planes is ferromagnetic-like (with small $`\mu _l`$) and that the surface moments belonging to successive (001) planes couple antiferromagnetically. Thus, the magnetic moments at the surface of the central plane are not frustrated since all its NN’s are antiparallel to them. This explains qualitatively the larger $`\mu _l`$ found at this plane. In contrast, unavoidable frustrations are found for the smallest magnetic moments at the central site and at the atoms of the upper and lower (001) planes \[see Fig. 
2(a)\] The crossover from the small-$`U/t`$ to the large-$`U/t`$ regime takes place as a succession of noncollinear spin arrangements which attempt to minimize the magnetic frustrations among the increasing local moments. A representative example is shown in Fig. 2(b). While in this case the spin arrangement is noncollinear, all the spin moments still lie in the same plane. The very small values of $`\mu _T`$ shown in Fig. 1 for intermediate $`U/t`$ indicate that there is an almost complete cancellation among the $`\stackrel{}{S}_l`$. However, this type of spin arrangements are quite unstable if the strength of Coulomb interactions is further increased. At $`U/t5.1`$ the cluster adopts a fully 3-dimensional spin structure which remains essentially unchanged even in the strongly correlated limit \[Fig. 2(c)\]. The spin arrangement can be viewed as a slight distortion of the spin ordering that minimizes the energy of a classical AF Heisenberg model on the surface shell, i.e., ignoring the interactions with the central site. In fact, if the central atom were removed or if it carried no local moment, as it is the case for $`\nu =12`$, the surface moments $`\stackrel{}{S}_l`$ would point along the medians of one of the triangles at the surface and would lie all within a plane. The magnetic interactions with the central spin $`\stackrel{}{S}_1`$ in the 13-atom cluster induce a small tilt $`S_l^z`$ of the surface spin polarizations $`\stackrel{}{S}_l`$ which is opposite to $`\stackrel{}{S}_1`$ ($`\stackrel{}{S}_1\widehat{z}`$ for $`\nu =N=13`$). The $`S_l^z`$ component is the same for all surface sites and depends moderately on $`U/t`$ ($`|S_l^z|=0.023`$$`0.056`$ for $`U/t5.1`$). Similar magnetic structures are also found in larger symmetric fcc clusters. In Fig. 3 the self-consistent spin configuration for $`N=43`$ and $`U/t=10`$ is illustrated ($`\nu =N`$). The moment $`\stackrel{}{S}_1`$ of the central site points along the (111) direction ($``$ to the plane of Fig. 3) . The other $`\stackrel{}{S}_m`$ lie almost in the plane of the figure with only small components $`S_m^z`$ along the (111) direction. As for $`N=13`$, $`S_m^z`$ is induced by the interactions with $`\stackrel{}{S}_1`$. In fact, if $`\stackrel{}{S}_1`$ vanished, all the spins would be in the plane of the figure. $`S_m^z`$ is the same for all atoms in a given NN shell, and changes sign as we move from the center to the surface of the cluster. For example for $`U/t=10`$, $`\mathrm{cos}\theta _{1m}=0.22`$ for $`m`$ belonging to the first shell, $`\mathrm{cos}\theta _{1m}=0.25`$ for $`m`$ in the second shell and $`\mathrm{cos}\theta _{1m}=0.01`$ for $`m`$ in the third shell ($`\mathrm{cos}\theta _{lm}=\stackrel{}{S}_l\stackrel{}{S}_m/|\stackrel{}{S}_l||\stackrel{}{S}_m|`$). Notice that the spin ordering of the innermost 13 atoms is similar to the one found in fcc clusters with $`N=13`$ \[Fig. 2(c)\]. While these trends hold for symmetric fcc clusters, it is worth to remark that strong modifications of the magnetic order generally occur if the symmetry of the cluster is lowered. For instance, for $`N=14`$ — an atom added to the closed-shell 13-atom cluster — one obtains a ferromagnetic-like coupling between the central atom and some of the first neighbors ($`\nu =N`$ and $`U/t=10`$). A more detailed account of the spin correlations in the UHF ground state of fcc clusters is provided by the expectation values $`\stackrel{}{S}_l\stackrel{}{S}_m`$. In Fig. 
4 the average of $`\stackrel{}{S}_l\stackrel{}{S}_m`$ between NN atoms at different shells is shown. In most cases we observe that $`\stackrel{}{S}_l\stackrel{}{S}_m<0`$ (AF correlations) and that $`|\stackrel{}{S}_l\stackrel{}{S}_m|`$ increases with increasing $`U/t`$. A particularly important increase of AF correlations is observed for intermediate $`U/t`$ when the local moments $`\mu _l`$ are formed ($`U/t4.5`$$`5.5`$ for $`N=13`$ and $`19`$, see also Fig. 1). This is consistent with the already discussed decrease of $`\mu _T`$ and the onset of AF order. There are, however, two exceptions to this trend. The correlations $`\gamma _{01}`$ between the central site and the surface shell for $`N=13`$ increase somewhat for $`U/t5`$ ($`\gamma _{01}<0`$) implying that these AF correlations first tend to be less important when the $`\mu _l`$ increase. It seems that, as a result of frustrations, the more important increase of AF correlations $`\gamma _{11}`$ among the surface NN’s is done at the expense of the correlations between the central atom and the surface. A similar behavior is found in exact calculations, as it will be shown in Sec. IV. Another interesting case concerns the correlations $`\gamma _{12}`$ between the first and second atomic shells for $`N=19`$ and $`43`$. Here we observe that $`\gamma _{12}`$ changes sign upon going from $`N=19`$ to $`N=43`$ ($`U/t>5`$). Moreover, once the local moments are formed, $`\gamma _{01}`$ decreases for $`N=19`$ \[$`\gamma _{01}(19)<0`$\] and increases for $`N=43`$ as $`U/t`$ increases \[$`\gamma _{01}(43)>0`$\]. This behavior can be qualitatively understood by comparing the surfaces of these clusters. For $`N=19`$ the outermost spins (shell 2) are free to couple antiferromagnetically with their NN’s (shell 1). However, the presence of an additional atomic shell for $`N=43`$, which has NN’s belonging to both the first and second shells, forces a parallel alignment of the spins in shells 1 and 2 in order to allow AF coupling with the third shell. In fact, as shown in Fig. 4(c), $`\gamma _{13}`$ and $`\gamma _{23}`$ are the strongest AF correlations in the 43-atom cluster. The same conclusions are drawn by comparing the relative orientations of the spin polarizations (Fig. 3). In particular we observe that the angles $`\theta _{23}`$ between NN spin polarizations at shells 2 and 3 are the largest in average ($`\theta _{23}=\pi `$ or close to it). In addition to the AF half-filled case it is also interesting to discuss other band fillings, for example $`\nu =N+1`$ that is known to develop a FM ground state in the limit of large $`U/t`$ . In Fig. 5 results are given for $`\mu _l`$ and $`\mu _T`$ in an fcc 13-atom cluster with $`\nu =14`$ electrons. In this case we observe an essentially monotonic increase of $`\mu _T`$ that reflects the progressive development of a fully polarized ferromagnetic state. Close to the threshold for the onset of ferromagnetism ($`4.5U/t6.5`$) the changes of the AF-like spin arrangement produce small oscillations of $`\mu _T`$ (see Fig 5). The approximately step-like behavior resembles at first sight the results obtained in collinear mean-field calculations. However, in the present case the physical picture behind the formation of a FM state is quite different. The increase of $`\mu _T`$ close to the steps involves a succession of noncollinear spin arrangements with increasing degree of parallel moment alignment. 
Moreover, notice that the local moments increase much more rapidly than $`\mu _T`$ approaching saturation ($`\mu _l0.40`$$`0.45`$) for values of $`U/t`$ at which $`\mu _T`$ is still small. Thus, the increase of $`\mu _T`$ with $`U/t`$ is the result of the parallel alignment of already existing local moments $`\mu _l`$. In contrast, in collinear Hartree-Fock calculations the increase of $`\mu _T`$ is associated to the formation of the local moments themselves since $`\mu _T`$ approximately proportional to $`\mu _l`$ . Comparison with exact calculations shows that the present noncollinear picture is qualitatively closer to the actual ground-state magnetic properties than the one derived in collinear calculations, at least for the single-band Hubbard model. Indeed, we have computed the local moments $`\mu _l^2=S_l^2`$ and the total spin $`S`$ of the same fcc 13-atom cluster ($`\nu =14`$) as a function of $`U/t`$ by using a Lanczos exact diagonalization method. Starting from the uncorrelated limit \[$`\mu _1^2(U=0)=0.28`$ and $`\mu _l^2(U=0)=0.36`$ for $`l=2`$$`13`$)\] $`\mu _l^2`$ increases rapidly with $`U/t`$ reaching values close to saturation already for $`U/t10`$ \[$`\mu _1^2(U=10)=0.56`$ and $`\mu _l^2(U=10)=0.64`$ while $`\mu _1^2(U=\mathrm{})=0.56`$ and $`\mu _l^2(U=\mathrm{})=0.70`$ ($`l=2`$$`13`$)\]. In contrast the ground-state spin $`S`$ remains equal to zero up to $`U/t40`$. This implies that for $`\nu =N+1=14`$ the local moments are formed well before FM order sets in, as observed in the noncollinear UHF calculations (Fig. 5). Notice, however, that the situation could be different for other band fillings where ground-state ferromagnetism is found at much smaller $`U/t`$ or where $`S`$ is a non-monotonous function of $`U/t`$ (e.g., for $`\nu =15`$$`18`$ ). UHF yields larger local magnetic moments at the cluster surface in agreement with the exact results but it underestimates severely both the Coulomb repulsion $`U/t`$ above which $`\mu _T>1/2`$ and the $`U/t`$ for reaching saturation. Calculations have been also performed for fcc-like 13-atom clusters having $`\nu =N1=12`$ electrons. For small $`U`$ ($`U/t<4.9`$) the calculated magnetic order is collinear ($`\stackrel{}{S}_l\widehat{z}`$). The UHF ground state is a broken symmetry state that shows a charge- and spin-density wave along the (001) direction. For $`U/t<3.5`$ the atoms $`l`$ in the central (001) plane present an enhanced electron density $`n_l1.12`$ and local magnetic moments $`\mu _l=0`$. The atoms of the upper and lower (001) planes show AF order within each plane and $`\mu _l=0.17`$. The vanishing $`\mu _l`$ at the central (001) plane can be interpreted as a consequence of magnetic frustrations since the sum of their NN spins is zero. For $`U/t>3.5`$, the surface atoms at the central (001) plane develop local moments $`\mu _l0`$. The spin arrangement remains collinear with important frustrations at some triangular faces. For $`U/t>4.9`$ the spin-density distribution changes to noncollinear with all surface spin having the same modulus. However, the central site remains unpolarized. In fact the magnetic order is the same as the one obtained if the central site is removed ($`N=12`$). The moments at the triangular faces point along the medians just as in the classical antiferromagnetic ground state of an isolated triangle. In spite of the three-dimensional geometry of the cluster the spin structure is two-dimensional (all magnetic moments lie on the same plane). 
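The point made above — that the local moments are already well formed while the total spin remains minimal — can be illustrated exactly on the smallest possible system. The sketch below diagonalizes a two-site Hubbard "dimer" in the $`S_z=0`$ sector; this is a toy example chosen only for illustration (it is not one of the clusters studied here), and it shows $`S_l^2`$ growing towards its saturated value 3/4 with increasing $`U/t`$ while the ground state stays a singlet.

```python
import numpy as np

def dimer_ground_state(U, t=1.0):
    """Two-site Hubbard model in the S_z = 0 sector, basis
    {|up1,dn2>, |dn1,up2>, |updn,0>, |0,updn>} with fermionic
    ordering (1up, 1dn, 2up, 2dn)."""
    H = np.array([[0.0, 0.0,  -t,  -t],
                  [0.0, 0.0,   t,   t],
                  [-t,    t,   U, 0.0],
                  [-t,    t, 0.0,   U]])
    w, v = np.linalg.eigh(H)
    psi = v[:, 0]                                  # ground state (a singlet for all U)
    # site 1 carries a spin 1/2 (S_l^2 = 3/4) only in the singly occupied states
    local_moment_sq = 0.75 * (psi[0]**2 + psi[1]**2)
    double_occ = psi[2]**2 + psi[3]**2
    return w[0], local_moment_sq, double_occ

for U in (0.0, 2.0, 4.0, 10.0, 40.0):
    E0, m2, d = dimer_ground_state(U)
    print(f"U/t = {U:5.1f}:  E0 = {E0:7.3f},  <S_l^2> = {m2:.3f},  double occupancy = {d:.3f}")
```

At $`U=0`$ this gives $`S_l^2=3/8`$, and it approaches 3/4 as $`U/t`$ grows, while the total spin of the ground state is zero throughout — the same qualitative behaviour found for the 13-atom cluster.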
The topology and symmetry of the cluster structure play a major role in determining the magnetic order and magnetic correlations within the cluster. In this context, it is interesting to consider different geometries and to compare their magnetic behavior. In Fig. 6 the UHF magnetic order in a 13-atom icosahedral cluster having $`\nu =12`$ electrons is illustrated for representative values of $`U/t`$. As a result of degeneracies in the single-particle spectrum ($`U=0`$) a noncollinear, coplanar arrangement of the local magnetic moments is found at the surface already for very small $`U/t`$ ($`U/t<5`$, see Fig. 6). At the central atom (not shown in the figure) the magnetic moment $`\mu _1=0`$. A similar behavior is observed in fcc clusters ($`N=13`$, $`\nu =12`$). The tendency to avoid frustrations within NN triangular rings conditions the arrangement of the local moments. In some triangles the spin polarizations point along the medians just as in the classical Heisenberg model, while in others two spins are parallel (i.e., fully frustrated). For larger values of $`U/t`$ ($`U/t>5`$) a less frustrated solution is favored by the antiferromagnetic correlations. This corresponds to a truly three-dimensional arrangement of the surface moments with $`\mu _1=0`$. The spin structure is such that if the surface $`\vec{S}_l`$ are brought to a common origin they form an icosahedron. The magnetic moments $`\vec{S}_l`$ on pentagonal rings present a small component that is perpendicular to the plane containing the atoms and that is antiparallel to the magnetic moment of the atom capping the ring. The projections of $`\vec{S}_l`$ on to the plane of the ring are ordered in the same way as in an isolated pentagon.

## IV Comparison between UHF and exact results

In Fig. 7 UHF and exact results are given for the local moments $`\mu _l^2=\langle S_l^2\rangle `$ and spin correlations $`\langle \vec{S}_l\cdot \vec{S}_m\rangle `$ in an fcc-like 13-atom cluster. The UHF results for $`\mu _l^2`$ are quantitatively not far from the exact results. Not only the uncorrelated limit ($`U=0`$) is reproduced, but good agreement is also obtained in the large $`U/t`$ regime. Main trends such as the larger $`\mu _l^2`$ at the cluster surface for small $`U/t`$ and the reduction of the difference between surface and inner moments for large $`U/t`$ are correctly given. However, UHF underestimates the increase of $`\mu _l^2`$ for $`U/t<3.7`$ and anticipates the tendency to localization with increasing $`U/t`$. This results in a much more rapid crossover from weak to strong interacting regimes than in the exact solution. The quantitative differences are more important in the case of spin correlation functions $`\langle \vec{S}_l\cdot \vec{S}_m\rangle `$, particularly for large $`U/t`$ \[Fig. 7(b)\]. Here we find that UHF underestimates the strength of AF spin correlations $`\gamma _{11}`$ at the surface, since the formation of permanent local moments blocks quantum spin fluctuations along the transversal directions. Still, in both UHF and exact calculations, the increase of AF correlations at the surface (increase of $`|\gamma _{11}|`$) is done at the expense of a decrease of the spin correlations with the central atom (decrease of $`|\gamma _{01}|`$). The discrepancies between UHF and exact results for $`N=13`$ show the limits of mean-field and give us an approximate idea of the corrections to be expected in correlated calculations on larger clusters.
As expected, UHF yields better results for properties like the local moments, which are related to the density distribution, than for the correlation functions. Nevertheless, since the trends given by UHF are qualitatively correct, it is reasonable to expect that the conclusions derived for larger clusters are also valid (Figs. 1 and 4). The determination of the structure of magnetic clusters is a problem of considerable importance since structure and magnetic behavior are interrelated . Moreover, since most calculations of cluster structures are based upon mean-field approximations, it would be very interesting to evaluate the role of electron correlations. We have therefore determined the relative stability of a few representative cluster structures as a function of band filling $`\nu /N`$ and Coulomb repulsion strength $`U/t`$ in the framework of the UHF approximation to the Hubbard model, and we have compared the results with available exact calculations . Four different symmetries are considered: icosahedral clusters, which maximize the average coordination number, face centered cubic (fcc) and hexagonal close packed (hcp) clusters, as examples of compact structures which are found in the solid state, and body centered cubic (bcc) clusters, as an example of a rather open bipartite structure. These cluster geometries are representative of the various types of structures found to be the most stable in rigorous geometry optimizations for $`N8`$. The results for $`N=13`$ are summarized in the form of a magnetic and structural diagram shown in Fig. 8. A qualitative description of the type of magnetic order obtained in the self-consistent calculations is indicated by the different shadings. One may distinguish three different collinear spin arrangements: non-magnetic solutions (NM), non-saturated or weak ferromagnetic solutions (WFM), and saturated ferromagnetic solutions (SFM). The NM case includes paramagnetic states in which the total moment $`\mu _T`$ is minimal ($`\mu _T=0`$ or $`1/2`$). Concerning the noncollinear spin arrangements we distinguish two cases: noncollinear nonmagnetic states (NC), in which non-vanishing (possibly large) local moments $`\mu _l`$ sum up to an approximately minimal total moment $`\mu _T<1`$, and noncollinear ferromagnetic states (NCFM), which show a net magnetization $`\mu _T1`$. The NC states include all sorts of frustrated antiferromagnetic-like spin structures, for example, those illustrated in Figs. 2 and 6. In order to quantify the effect of electron correlations we also show in Fig. 9 the corresponding magnetic and structural diagram as recently obtained by using exact diagonalization methods . For small $`U/t`$ ($`U/t<10`$) the UHF results for the most stable of the considered structures are in very good agreement with the exact calculations \[compare Figs. 8 and 9\]. The icosahedral cluster yields the lowest energy in the low carrier-concentration regime ($`\nu 6`$), as could have been expected taking into account that the largest coordination number is favored for small $`\nu `$, as in smaller clusters . In this case the kinetic energy dominates (uncorrelated limit) and therefore the structure with the largest bandwidth is stabilized. As $`\nu `$ is increased several structural transitions occur. For small $`U/t`$, both exact and UHF calculations present the same structural changes: from icosahedral to fcc structure at $`\nu =11`$, from fcc to hcp at $`\nu =17`$, and from hcp to bcc at $`\nu =20`$. 
At larger $`U/t`$ ($`U/t>12`$) the interplay between the kinetic and Coulomb energies introduces important correlations that cannot be accounted for within UHF and that play a central role in the determination of the magnetic and structural properties. Thus, UHF often fails to yield the lowest-energy structure in the limit of large $`U/t`$. A main source of discrepancy is the too strong tendency of UHF to yield SFM ground states, particularly above half-band filling, which often disagrees with the exact magnetic behavior. Consequently, the optimal structure is missed rather frequently. For example, for $`\nu =19`$ and large $`U/t`$, UHF predicts the fcc structure with a SFM ground state, while in the exact calculation the icosahedral cluster with $`S=1/2`$ is the optimum. Similar drawbacks are seen for other band fillings such as $`\nu =7`$, 10, 21, and 22 (large $`U/t`$). Still, in the event that the true ground state does show strong ferromagnetism, UHF succeeds, since the ground state is the superposition of single-particle states. Examples of this kind are $`\nu =12`$ and $`\nu =20`$ for large $`U/t`$. These are rather the exceptions, however, since the presence of a FM ground state in the exact solution is in general much less frequent than predicted by UHF. Around half-band filling UHF reproduces qualitatively well the transition from fcc or hcp to icosahedral structure with increasing $`U/t`$, as well as nontrivial re-entrant effects ($`\nu =13`$–$`18`$). However, UHF underestimates the value of $`U/t`$ at which the structural changes occur (see Figs. 8 and 9). Part of the quantitative differences could be remedied by using in the UHF calculations a reduced or renormalized $`U/t`$ which simulates some of the effects of correlations. Summarizing, UHF fails to reproduce the exact phase diagram in detail, particularly in some of the most interesting antiferromagnetic or weak ferromagnetic regimes. Structural transitions are sometimes missing (e.g., for $`\nu =10`$) and in other cases changes of structure appear artificially (e.g., bcc to hcp for $`\nu =21`$ and $`22`$). Nevertheless, it is also fair to say that UHF yields a good account of the relative stability between the considered structures well beyond the weakly interacting limit (up to $`U/t16`$) and that it also explains the larger stability of ferromagnetism above half-band filling. In the limit of strong interactions the validity of UHF breaks down, and correlation effects beyond the single-determinant wave function are indispensable in order to obtain the ground-state structure and magnetic behavior reliably. Improvements on the UHF wave function could be introduced by restoring the broken spin symmetry as proposed in Refs. \[Eqs. (12–15)\]. However, the success of such an approach is likely to depend on the geometry of the cluster. For example, in low-symmetry structures UHF tends to exaggerate the formation of spin- and charge-density redistributions which may be far from the actual exact solution and which can be very difficult to restore a posteriori. In such cases a Jastrow-like variational ansatz on restricted Hartree-Fock states could be more appropriate. Finally, it should be recalled that the Hubbard model for small clusters with one orbital per site is probably the most extremely low-dimensional system one may consider. Therefore, fluctuation and correlation effects are expected to be more drastic here than in larger clusters or in realistic multiband models more appropriate for the description of TM’s. 
###### Acknowledgements. This work was supported by CONACyT (Mexico) and CNRS (France). One of the authors (MAO) acknowledges scholarships from CONACyT and FAI (UASLP).
no-problem/9903/astro-ph9903407.html
ar5iv
text
# Distribution of compact object mergers around galaxies ## 1 Introduction The recent discoveries of X-ray afterglows of gamma-ray bursts by the Beppo SAX satellite \[Costa et al., 1997\] have revolutionized the approach to these phenomena. For the first time since their discovery 30 years ago \[Klebesadel et al., 1968\] gamma-ray bursts have been identified with sources at other wavelengths. In consequence, optical afterglows have been discovered \[Groot et al., 1997a\], which led to the identification of host galaxies \[Groot et al., 1997b\]. At the time of writing more than a dozen afterglows have been identified. The optical lightcurves of the GRB afterglows decay as a power law $`t^{-\alpha }`$ with typical values of the index $`\alpha =1.1`$ to $`1.3`$. In some cases the host galaxies have been found by observing the flattening of the lightcurve. The underlying steady flux is identified as the emission of the host galaxy. In a few cases the host galaxies themselves were found as extended objects. This made it possible to measure the offset between the location of the gamma-ray burst and the center of the host galaxy. A list of five GRBs and their offsets from the host galaxies is shown in Table 1. The offsets are generally small and the afterglows lie directly on the host galaxies. In the other cases, where the host galaxy has been found only from the flattening of the lightcurve, we know from the simple fact that the host galaxy has been identified that the locations of the host galaxy and the GRB do not differ substantially. There are two basic categories of theoretical models of the central engines of gamma-ray bursts within the cosmological model. The first class connects gamma-ray bursts with mergers of compact objects, e.g. neutron stars and/or black holes. There exist numerous scenarios in this set of models; some of them link GRBs with the coalescence of a black hole–neutron star binary \[Narayan et al., 1992\]. In other models, like the recent paper by \[Kluźniak and Ruderman, 1998\], the GRB events are related to mergers of two neutron stars. The compact object merger model provides enough energy to power a GRB, and it has been shown in numerical simulations \[Kluźniak and Lee, 1998\] that the coalescence may last up to a second. The analytical estimates of \[Portegies Zwart, 1998\] extend this timescale up to a minute. The second class of models \[Paczynski, 1998\] relates GRBs to explosions of supermassive stars, so-called hypernovae. A direct prediction in this class of models is that gamma-ray bursts are related to star-forming regions. The close association between GRB locations and host galaxies does not have to hold in the models that relate GRBs to compact object mergers. Tutukov and Yungelson \[Tutukov and Yungelson, 1993\] have calculated the compact object merger rates, and also found that compact object binaries may travel distances of up to $`1000`$kpc before merging. In a more detailed study, \[Bloom et al., 1998a\] calculated a population of compact object binaries using the population synthesis method of Pols and Marinus \[Pols and Marinus, 1994\] and then calculated the spatial distribution of mergers in the potentials of galaxies for a few representative masses. They found that approximately $`15`$% of mergers take place outside the host galaxies. 
They have used a Maxwellian kick velocity distribution with $`\sigma _v=190`$ km s<sup>-1</sup> \[Hansen and Phinney, 1997\]. In this work we use the population synthesis code based on \[Bethe and Brown, 1998\] and extended by Belczyński and Bulik \[Belczynski and Bulik, 1999\]. We concentrate on the dependence of the properties of the compact object binaries on the parameters used in the population synthesis code. We find that the most important parameter determining the population of compact object binaries is the kick velocity a neutron star receives at birth; however, this distribution is poorly known. Iben and Tutukov \[Iben and Tutukov, 1996\] claim that the properties of pulsars can be explained by recoil velocities alone, with no need for kicks. Blaauw and Ramachandran \[Blaauw and Ramachandran, 1998\] find that a single kick velocity of $`200`$km s<sup>-1</sup> suffices to reproduce the pulsar population. Cordes and Chernoff \[Cordes and Chernoff, 1997\] proposed a weighted sum of two Gaussians: 80 percent with a width of $`175`$ km s<sup>-1</sup> and 20 percent with a width of $`700`$ km s<sup>-1</sup>. We outline the model of the binary evolution and propagation in a galactic potential in section 2. The results of the calculation are presented in section 3 and we discuss them in section 4. ## 2 Model ### 2.1 Binary evolution In order to study the spatial distribution of compact object mergers we use the population synthesis method. We use the population synthesis code \[Belczynski and Bulik, 1999\] which concentrates on the population of massive star binaries, i.e. those that may eventually lead to the formation of compact objects and compact object binaries. We include the evolution of the binaries due to interaction and mass transfer, and also the kicks that a newly born neutron star receives in a supernova explosion. A binary may be disrupted in each of the supernova events. The surviving binaries obtain center of mass velocities, which change their trajectories and may even eject them from their galaxy. While the evolution of single stars depends only on their mass and metallicity, the evolution of binaries is also a function of the initial orbit (semimajor axis $`a`$ and eccentricity $`e`$) of the two stars. We assume that the distribution of the initial parameters can be expressed as a product of distributions of four parameters: the larger star (primary) mass $`M`$, the mass ratio of the less massive to the more massive star in the binary $`q`$, and the orbital parameters $`a`$ and $`e`$, i.e., that these quantities are independent. The distribution of primary masses used here is \[Bethe and Brown, 1998\] $$\mathrm{\Psi }(M)\propto M^{-3/2},$$ and we adopt a flat distribution of the mass ratio $`q`$. The semimajor axis distribution is scale invariant, i.e. $$\mathrm{\Gamma }(a)\propto a^{-1}$$ with the limits $`6R_{\odot }<a<6000R_{\odot }`$, and we draw the eccentricity from a distribution $`\mathrm{\Xi }(e)=2e`$. We assume that the kick velocity distribution is a three-dimensional Gaussian, and parameterize it by its width $`\sigma _v`$, i.e. $$p(v)=\frac{4}{\sqrt{\pi }}\sigma _v^{-3}v^2\mathrm{exp}\left(-\frac{v^2}{\sigma _v^2}\right).$$ (1) We generate populations of compact object binaries for a few values of $`\sigma _v`$ in order to assess the sensitivity of our results to this parameter. We describe the mass transfer in the common envelope evolution by the common envelope parameter $`\alpha _{CE}`$ (see e.g. \[Vrancken et al., 1991\]), and we use an intermediate value of $`0.8`$ for this parameter. 
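As an illustration only, the initial-parameter and kick-velocity distributions quoted above can be sampled as in the following sketch. This is not the population synthesis code of \[Belczynski and Bulik, 1999\]; the function names and the primary-mass limits are our own assumptions, and the kick speeds are generated by drawing each Cartesian component from a Gaussian of width $`\sigma _v/\sqrt{2}`$, which reproduces eq. (1).

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_initial_binaries(n, m_min=10.0, m_max=40.0, a_min=6.0, a_max=6000.0):
    """Draw initial binary parameters from the distributions quoted in the text.

    Psi(M) ~ M**(-3/2) for the primary mass, a flat mass ratio q, a scale-free
    semimajor-axis distribution Gamma(a) ~ 1/a (a in solar radii), and
    Xi(e) = 2e for the eccentricity.  The primary-mass limits are illustrative
    assumptions, not values taken from the paper.
    """
    u = rng.random(n)
    # Inverse-transform sampling of Psi(M) ~ M**(-3/2) on [m_min, m_max].
    M = (m_min**-0.5 + u * (m_max**-0.5 - m_min**-0.5)) ** -2.0
    q = rng.random(n)                                # flat mass ratio, 0 < q < 1
    a = a_min * (a_max / a_min) ** rng.random(n)     # Gamma(a) ~ 1/a (log-uniform)
    e = np.sqrt(rng.random(n))                       # Xi(e) = 2e  ->  e = sqrt(u)
    return M, q, a, e

def sample_kick(n, sigma_v=200.0):
    """Kick speeds distributed as in eq. (1): each Cartesian component is a
    Gaussian of width sigma_v/sqrt(2), so that p(v) ~ v^2 exp(-v^2/sigma_v^2)."""
    comps = rng.normal(0.0, sigma_v / np.sqrt(2.0), size=(n, 3))
    return np.linalg.norm(comps, axis=1)

M, q, a, e = sample_initial_binaries(10000)
v_kick = sample_kick(10000, sigma_v=200.0)
print("mean kick speed [km/s]:", v_kick.mean())   # analytic mean is 2*sigma_v/sqrt(pi)
```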
In this type of evolution the more massive star loses its envelope and becomes a helium star with mass approximately 30% of its initial value. The $`\beta `$ parameter, which describes the specific angular momentum of the material expelled from the binary in the Roche lobe overflow phase, is set to $`\beta =6`$ \[Pols and Marinus, 1994\]. Accretion onto a neutron star in a binary is treated as Bondi-Hoyle accretion, and we use the formalism developed by \[Bethe and Brown, 1998\] to find the amount of mass accreted onto the neutron star and the final orbital separation. Systems with nearly equal masses evolve at similar speeds, and lose the common envelope, shrinking their orbit at the same time. For a more detailed description of the population synthesis code see \[Belczynski and Bulik, 1999\]. We assume that a neutron star with a mass of $`1.4M_{\odot }`$ is formed in each supernova explosion. We draw a random time in the orbital motion to obtain the position on the orbit at which the supernova explodes. The remaining mass of the envelope is ejected from the system, and the newly formed neutron star receives a kick. We verify whether the system is still bound after the explosion. For bound systems we find the parameters of the new orbit and the kick velocity the whole binary receives, $$\mathrm{\Delta }\stackrel{}{V}=\frac{M_2^i-M_2^f}{M_1+M_2^f}(\stackrel{}{v}_2^i+\stackrel{}{v}_{kick}),$$ where $`M_1`$ is the mass of the companion, $`M_2^i,M_2^f`$ are the initial and final masses of the exploding star, and $`\stackrel{}{v}_2^i`$ is its orbital velocity at the time of the explosion. After each supernova explosion we verify whether the system survives as a binary. A compact object binary loses its energy through gravitational radiation. The time to merge is \[Peters, 1964\] $$t_{mrg}=\frac{5c^5a^4(1-e^2)^{7/2}}{256G^3Mm(M+m)}\left(1+\frac{73}{24}e^2+\frac{37}{96}e^4\right)^{-1},$$ (2) where $`a`$ is the semimajor axis of the orbit, $`e`$ is its eccentricity, and $`M,m`$ are the masses of the compact objects. ### 2.2 Orbit in Potentials of Galaxies Since little is known about the host galaxies of gamma-ray bursts, in particular about their types and masses, we will present two extreme cases: (i) propagation in the potential of a large spiral galaxy like the Milky Way, and (ii) propagation in empty space, corresponding to GRBs originating e.g. in globular clusters. In the latter case we assume that all binaries originate at one point and travel due to the kicks described above, and there is no gravitational potential. The potential of a spiral galaxy can be described as a sum of three components: bulge, disk, and dark matter halo. A convenient way to describe the Galactic potential has been proposed by \[Miyamoto and Nagai, 1975\], while a series of more detailed models were constructed by \[Kuijken and Gilmore, 1989\] and used in modeling the galactic halo population of neutron stars \[Bulik and Lamb, 1995, Bulik et al., 1998\]. The \[Miyamoto and Nagai, 1975\] potential for a galactic disk and bulge is $$\mathrm{\Phi }(R,z)=-\frac{GM}{\sqrt{R^2+(a_i+\sqrt{z^2+b_i^2})^2}},$$ where $`a_i`$ and $`b_i`$ are the parameters, $`M`$ is the mass, and $`R=\sqrt{x^2+y^2}`$. The dark matter halo potential is spherically symmetric, $$\mathrm{\Phi }(r)=-\frac{GM_h}{r_c}\left[\frac{1}{2}\mathrm{ln}\left(1+\frac{r^2}{r_c^2}\right)+\frac{r_c}{r}\mathrm{atan}\left(\frac{r}{r_c}\right)\right],$$ and corresponds to a mass distribution $`\rho =\rho _c/[1+(r/r_c)^2]`$. 
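A minimal sketch of the two expressions above — the center-of-mass velocity change $`\mathrm{\Delta }\stackrel{}{V}`$ and the merger time of eq. (2) — is given below. It assumes SI constants internally and solar units for the inputs; the function names are ours and the example values are arbitrary, chosen only to show the order of magnitude of a double neutron star merger time.

```python
import numpy as np

# Physical constants (SI units).
G = 6.674e-11          # m^3 kg^-1 s^-2
C = 2.998e8            # m s^-1
MSUN = 1.989e30        # kg
RSUN = 6.957e8         # m
YR = 3.156e7           # s

def merger_time(a_rsun, e, m1_msun, m2_msun):
    """Gravitational-wave merger time of eq. (2) [Peters, 1964].

    a is the semimajor axis in solar radii, the masses are in solar masses,
    and the eccentricity enhancement is the approximate closed form quoted in
    the text.  Returns the time to merge in years.
    """
    a = a_rsun * RSUN
    m1, m2 = m1_msun * MSUN, m2_msun * MSUN
    t_circ = 5.0 * C**5 * a**4 * (1.0 - e**2)**3.5 / (256.0 * G**3 * m1 * m2 * (m1 + m2))
    f_ecc = 1.0 + (73.0 / 24.0) * e**2 + (37.0 / 96.0) * e**4
    return t_circ / f_ecc / YR

def cm_velocity_change(m_companion, m2_initial, m2_final, v2_orb, v_kick):
    """Center-of-mass velocity change of the binary after the supernova,
    following the Delta-V expression quoted above (velocities as 3-vectors)."""
    factor = (m2_initial - m2_final) / (m_companion + m2_final)
    return factor * (np.asarray(v2_orb) + np.asarray(v_kick))

# Example: a tight double neutron star with a = 3 R_sun and e = 0.3.
print("t_merge [Myr]:", merger_time(3.0, 0.3, 1.4, 1.4) / 1e6)
```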
The mass of such a halo is infinite, so we introduce a cutoff value $`r_{cut}=100`$kpc above which the density of the halo falls to zero. While the details of the model of the galactic potential are not important for this study, we have to adopt particular values for the masses and sizes of each of the components. We use the values of the parameters as determined by \[Blaes and Rajagopal, 1991\] for the Milky Way: $`a_1=0`$kpc, $`b_1=0.277`$kpc, $`a_2=4.2`$kpc, $`b_2=0.198`$kpc, $`M_1=1.12\times 10^{10}M_{\odot }`$, $`M_2=8.78\times 10^{10}M_{\odot }`$, $`r_c=6.0`$kpc, and $`M_h=5.0\times 10^{10}M_{\odot }`$. We assume that the distribution of binaries in our model galaxy follows the mass distribution in the young disk \[Paczyński, 1990\], that is $$P(R,z)dRdz=P(R)dRp(z)dz.$$ The radial distribution is exponential, $$P(R)\propto e^{-R/R_{exp}}R,$$ with $`R_{exp}=4.5`$kpc, and extends to $`R_{max}=20`$kpc. The vertical distribution is $`p(z)\propto e^{-z/z_{exp}}`$ with $`z_{exp}=75`$pc. We note that this is not a self-consistent approach: the density inferred from the disk potential is not the same as the density of binaries. However, in this work we are not interested in determining high-accuracy positions around the host galaxy, but rather in an estimate of the general properties of the distribution of compact object mergers. Each binary moves initially with the local rotational velocity in the galactic disk. After each supernova explosion we add an appropriate velocity, provided that the system survives the explosion. We calculate the orbit of each system until its merger time, provided that the merger time is smaller than the Hubble time (15 Gyrs here). ## 3 Results The kick velocity distribution is not very well known. Therefore, we use the population synthesis code with four values of the kick velocity distribution width: with no kick velocities, $`\sigma _v=0`$km s<sup>-1</sup>, and with $`\sigma _v=200,\mathrm{\hspace{0.17em}400},\mathrm{\hspace{0.17em}800}`$km s<sup>-1</sup>. This covers the range of values this distribution is likely to have. This is the same approach as adopted in our previous work \[Belczynski and Bulik, 1999\]. The binaries receive kicks for two reasons. First, the envelope of the supernova is lost from the system and it carries away some momentum. Thus, even in the case when there is no kick velocity, a binary achieves an additional velocity \[Blaauw, 1960\]. Second, if the supernova explosion is asymmetric, the newly formed compact object receives a kick velocity which affects both the orbit of the binary after the explosion and its center of mass velocity. The fate of a binary system in a supernova explosion depends on the value and direction of the kick velocity, on the orbital phase at which the explosion occurs, and on the parameters of the binary: the masses and the orbital parameters $`a`$ and $`e`$. We present the population of compact object binaries in the plane spanned by the center of mass velocity after the second supernova explosion and the time to merge in Figure 1. The orbital \[Blaauw, 1960\] effects are isolated and shown in the top left panel of Figure 1, where we present the results of the simulation with $`\sigma _v=0`$. There is a tail of long-lived systems with lifetimes much longer than the Hubble time and small velocities, stretching outside the boundaries of the plot to lifetimes of even $`10^{20}`$ years. These systems originally had large orbital separations, and hardly interacted in the course of their binary lifetime. 
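For concreteness, the model potential and the birth-site distribution described above can be evaluated as in the following sketch, using the parameter values quoted in the text. The unit conventions, the rejection sampler for $`P(R)`$, the neglect of the halo cutoff, and the assumption of a vertical profile symmetric about the disk plane are ours; this is not the code used for the orbit integrations in this work.

```python
import numpy as np

G = 4.301e-6  # gravitational constant in kpc (km/s)^2 / Msun (approximate)

# Component parameters quoted in the text [Blaes and Rajagopal, 1991].
BULGE = dict(M=1.12e10, a=0.0, b=0.277)    # Msun, kpc, kpc
DISK  = dict(M=8.78e10, a=4.2, b=0.198)
HALO  = dict(M_h=5.0e10, r_c=6.0)          # r_cut = 100 kpc is ignored in this sketch

def phi_miyamoto_nagai(R, z, M, a, b):
    """Miyamoto-Nagai potential of a disk or bulge component, in (km/s)^2."""
    return -G * M / np.sqrt(R**2 + (a + np.sqrt(z**2 + b**2))**2)

def phi_halo(r, M_h, r_c):
    """Spherical dark-halo potential corresponding to rho ~ 1/[1+(r/r_c)^2]."""
    x = r / r_c
    return -G * M_h / r_c * (0.5 * np.log(1.0 + x**2) + np.arctan(x) / x)

def phi_total(R, z):
    r = np.sqrt(R**2 + z**2)
    return (phi_miyamoto_nagai(R, z, **BULGE) +
            phi_miyamoto_nagai(R, z, **DISK) +
            phi_halo(r, **HALO))

def sample_birth_sites(n, R_exp=4.5, R_max=20.0, z_exp=0.075,
                       rng=np.random.default_rng(1)):
    """Draw birth positions from the young-disk distribution quoted above:
    P(R) ~ R exp(-R/R_exp) truncated at R_max, and an exponential |z| profile
    with scale height z_exp (all lengths in kpc)."""
    R = []
    while len(R) < n:                      # simple rejection sampling in R
        cand = rng.uniform(0.0, R_max, n)
        accept = rng.random(n) < (cand / R_exp) * np.exp(1.0 - cand / R_exp)
        R.extend(cand[accept][: n - len(R)])
    z = rng.exponential(z_exp, n) * rng.choice([-1.0, 1.0], n)
    return np.array(R), z

R, z = sample_birth_sites(5)
print("sample birth radii [kpc]:", np.round(R, 2))
print("Phi(R=8.5 kpc, z=0) [(km/s)^2]:", phi_total(8.5, 0.0))
```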
In the case when there are no kicks, the center of mass velocity of the compact object binary depends on the amount of mass lost in the supernova explosion. In the extreme case of large mass loss, the center of mass velocity approaches the orbital velocity at the moment of the supernova explosion, and can never exceed it. The velocity of the system increases with increasing mass loss; however, the systems that lose too much mass become unbound. This is why the lower part of the plot, below $`t_{merge}10^8`$ years, is empty. As the kick velocity increases, the typical velocity of a system also increases, and short-lived systems in tight orbits appear. They can now survive a large mass loss when the kick velocity has a favorable direction. Thus, as the kick velocity is increased, only the tightly bound systems (with short merger times) can survive the supernova explosions. Another effect of the kick velocity is that the long-lived systems with $`t_{merge}`$ much longer than the Hubble time, which were present in the case $`\sigma _v=0`$km s<sup>-1</sup>, disappear. The typical velocity of a system increases with the kick velocity. However, the population of the compact merger binaries is not much affected when the kick velocity becomes large; e.g., changing the kick velocity distribution width from $`\sigma _v=0`$km s<sup>-1</sup> to $`200`$km s<sup>-1</sup> produces a much stronger effect than going from $`\sigma _v=400`$km s<sup>-1</sup> to $`800`$km s<sup>-1</sup>. Most of the systems are disrupted by such high velocities, and the surviving ones are only those for which the kicks are not too large and have a favorable direction. Another effect of increasing the kick velocity is that the typical lifetime of a system becomes smaller. When the kick velocity is large, only very tight and/or highly eccentric systems survive; hence the typical lifetime of compact object binaries decreases. It should be noted that the typical center of mass velocity of the compact object binaries increases roughly linearly with the kick velocity, while the lifetime decreases approximately exponentially. In Figure 1 we also plot, following \[Bloom et al., 1998a\], the line corresponding to the Hubble time (the dashed line), and we mark the region of the systems that will escape from a galactic potential. In order to escape, a binary must satisfy the following conditions: (i) it has to have a velocity larger than the escape velocity, and (ii) the distance $`vt_{merge}`$ must be larger than the size of the galaxy. We also draw the line at $`t_{mrg}=15`$Gyrs, to denote the systems that merge within the Hubble time. All the systems to the right of the solid line in Figure 1 have velocities above $`200`$km s<sup>-1</sup>, and live long enough to travel further than $`30`$kpc. We should also note that although each panel in Figure 1 contains $`10^3`$ systems, the production rate of compact object binaries decreases exponentially with the increasing kick velocity (see eq. 13 in Belczyński and Bulik \[Belczynski and Bulik, 1999\]). In Figures 2 and 3 we present the cumulative distributions of the projected distance of the systems that merge within the Hubble time, measured from the center of the host galaxy in case (i) and from the place of birth in case (ii), respectively. When the binaries propagate in the potential of a large galaxy, the kick velocity only weakly influences the distribution of the mergers. Below a radius of $`10`$kpc the distribution is determined by the potential well. 
This is where all short-lived and slow systems merge. There exists, however, a tail of high-velocity, long-lived systems (see Figure 1) that manage to escape. The escaping fraction is a weak function of the kick velocity. Typically, the fraction of systems that merge further than $`30`$kpc from the center of the host galaxy is $`30`$%, except for the unphysical case of no kick velocities, when it drops below $`20`$%. In the other extreme case of small host galaxies, for which we neglect the gravitational potential, the escaping fraction can be even larger. The escaping fraction decreases from $`80`$% for the kick velocity $`\sigma _v=200`$km s<sup>-1</sup> to about $`50`$% for $`\sigma _v=800`$km s<sup>-1</sup>. The reason for such behavior is clearly seen from Figure 1. In the product of the center of mass velocity and the time to merge, the dominant role is played by the fast decrease of the time to merge with increasing kick velocity $`\sigma _v`$. These quantitative results are visualized in Figures 4 and 5. Here we show the distribution of $`10^3`$ mergers around a massive galaxy and in empty space. We show four panels that cover scales from $`10`$kpc to $`10`$Mpc. In the case of propagation in a massive galaxy we show the projection onto the plane of the galactic disk, so the effects of the rotational velocity and the asymmetry of the potential well are visible. Both calculations have been done for the case of the kick velocity distribution width $`\sigma _v=200`$km s<sup>-1</sup>. ## 4 Conclusions We find that a significant fraction, i.e. more than $`20`$ percent, of compact object mergers take place outside of the host galaxies. The figures obtained for our case of no gravitational potential should be considered as an upper limit only. In contrast, the observations show that the GRB afterglows lie on the host galaxies \[Hogg and Fruchter, 1998\]. However, our sample of observed GRBs with afterglows is so far limited to bursts longer than $`6`$s, as BeppoSAX triggers on this timescale. Long bursts could be connected with hypernova-like events and would therefore be closely associated with galaxies. Compact object mergers may be connected with the short bursts, although it has been argued that mass transfer in the coalescence of compact objects may last much longer \[Portegies Zwart, 1998\]. Our results are consistent with the calculation by \[Bloom et al., 1998a\] for the case of a massive galaxy and the kick velocity distribution width $`\sigma _v=200`$km s<sup>-1</sup>. We have verified the dependence of the distribution of compact object mergers on $`\sigma _v`$. Our results show that for the case of massive galaxies the escaping fraction depends only weakly on the distribution of kick velocities. We also include binaries with objects of higher mass than the canonical $`1.4M_{\odot }`$. These binaries are formed through accretion from a giant companion onto the neutron star. In this calculation the highest mass of a compact object is below $`2.5M_{\odot }`$. The distribution of these more massive binaries is slightly more concentrated around the galaxies. It has to be noted that there is a number of potential selection effects which may affect the results of this study. Assuming that compact object mergers are responsible for GRBs, there may be qualitative differences between NS-NS mergers and NS-BH mergers. As indicated above, their spatial distribution around host galaxies is different. 
The typical timescale of the bursts may also differ between these two classes. Gamma-ray bursts form two separate classes (long vs. short) with different brightness distributions and spectra (short bursts are harder than the long ones). So far the study of afterglows has been possible only for the long bursts. It may be the case that compact object mergers are connected with the short bursts, for which so far no information about the host galaxies exists. The host galaxies have been identified through a long observational procedure: a gamma-ray burst leads to the identification of a fading X-ray source, and then to the discovery of the optical afterglow. Precise observations of the optical afterglow lead to the discovery of the host galaxy. There are bursts for which the X-ray or optical afterglows were not found. Since afterglows are usually connected with external shocks, gamma-ray bursts that take place outside of galaxies have much weaker afterglows because of the low density of intergalactic matter. Begelman et al. \[Begelman et al., 1993\] argue that the afterglow emission scales only with the square root of the density of the outside medium. The mean external densities measured from the analysis of the known afterglow lightcurves are typically $`n0.03`$ cm<sup>-3</sup> \[Galama and Wijers, 1998\], while the intergalactic medium may be as rarefied as $`10^{-6}`$ cm<sup>-3</sup>. Hence the afterglow of a burst taking place outside a galaxy may be up to two orders of magnitude weaker than one in a galaxy. This shows that there may be a strong bias against the identification of host galaxies for the bursts that take place outside of galaxies. Acknowledgments. This work has been funded by the following KBN grants: 2P03D01616, 2P03D00911, 2P03D00415 and 2P03D01113, and also made use of the NASA Astrophysics Data System. TB is grateful for the hospitality of Ecole Polytechnique where this work was finished.
no-problem/9903/math9903045.html
ar5iv
text
# Addendum To: On fibre space structures of a projective irreducible symplectic manifold (E-mail Address tyler@kurims.kyoto-u.ac.jp) In this note, we prove that every fibre space of a projective irreducible symplectic manifold is a Lagrangian fibration. ###### Theorem 1 Let $`X`$ be a projective irreducible symplectic manifold and $`f:X\to B`$ a fibre space with projective base $`B`$. Then $`f`$ is a Lagrangian fibration, that is, a general fibre of $`f`$ is a Lagrangian submanifold. Remark. Beauville proves that a Lagrangian fibration is a completely integrable system in \[2, Proposition 1\]. Thus, a general fibre of a fibre space of a projective irreducible symplectic manifold is an abelian variety. Remark. Markshevich states in \[4, Remark 3.2\] that there exists an irreducible symplectic manifold which has a family of non-Lagrangian tori. However, this family does not form a fibration. Proof of Theorem. Let $`\omega `$ be a nondegenerate two-form on $`X`$ and $`\overline{\omega }`$ its conjugate. Assume that $`dimX=2n`$. Then $`dimF=n`$ \[5, Theorem 2\], where $`F`$ is a general fibre of $`f`$. In order to prove that $`F`$ is a Lagrangian submanifold, it is enough to show $$\int _F\omega \overline{\omega }A^{n-2}=0,$$ where $`A`$ is an ample divisor on $`X`$. Let $`H^{}`$ be an ample divisor on $`B`$ and $`H:=f^{}H^{}`$. Then $$\int _F\omega \overline{\omega }A^{n-2}=c(\omega \overline{\omega }A^{n-2}H^n),$$ where $`c`$ is a nonzero constant. We shall show $`\omega \overline{\omega }A^{n-2}H^n=0`$. By \[3, Theorem 4.7\], there exists a bilinear form $`q_X`$ on $`H^2(X,)`$ which has the following property: $$a_0q_X(D,D)^n=D^{2n}$$ for all $`D\in H^2(X,)`$. We consider the following equation, $$a_0q_X(\omega +\overline{\omega }+sA+tH,\omega +\overline{\omega }+sA+tH)^n=(\omega +\overline{\omega }+sA+tH)^{2n}.$$ Calculating the left hand side, we obtain $$a_0(q_X(\omega +\overline{\omega })+s^2q_X(A)+2sq_X(\omega +\overline{\omega },A)+2tq_X(\omega +\overline{\omega },H)+2stq_X(A,H))^n.$$ Since $`\omega \in H^0(X,\mathrm{\Omega }_X^2)`$ and $`A,H\in H^1(X,\mathrm{\Omega }_X^1)`$, $$q_X(\omega +\overline{\omega },A)=q_X(\omega +\overline{\omega },H)=0$$ by \[1, Théorème 5\]. Thus we can conclude that $`\omega \overline{\omega }A^{n-2}H^n=0`$ by comparing the $`s^{n-2}t^n`$ terms of both sides. Q.E.D. Research Institute for Mathematical Sciences, Kyoto University. KITASHIRAKAWA, OIWAKE-CHO, KYOTO, 606-01, JAPAN.
no-problem/9903/math9903075.html
ar5iv
text
# The visual core of a hyperbolic 3-manifold ## 1 Introduction In this note we introduce the notion of the visual core of a hyperbolic 3-manifold $`N=𝐇^3/\mathrm{\Gamma }`$. One may think of the visual core as a harmonic analysis analogue of the convex core. Explicitly, the visual core $`𝒱(N)`$ is the projection to $`N`$ of all the points in $`𝐇^3`$ at which no component of the domain of discontinuity of $`\mathrm{\Gamma }`$ has visual (equivalently harmonic) measure greater than half that of the entire sphere at infinity. We investigate circumstances under which the visual core $`𝒱(N^{})`$ of a cover $`N^{}=𝐇^3/\mathrm{\Gamma }^{}`$ of $`N`$ embeds in $`N`$, via the usual covering map $`\pi :N^{}N`$. We begin by showing that the interior of $`𝒱(N^{})`$ embeds in $`N`$ when $`\mathrm{\Gamma }^{}`$ is a precisely QF-embedded subgroup of $`\mathrm{\Gamma }`$, while the visual core itself embeds when $`\mathrm{\Gamma }^{}`$ is a nicely QF-embedded subgroup. We define the notions of precisely and nicely QF-embedded subgroups of a Kleinian group and prove these embedding theorems in Section 3. Applying the results from , we are able to conclude that if the algebraic limit of a sequence of isomorphic Kleinian groups is a generalized web group, then the visual core of the algebraic limit manifold embeds in the geometric limit manifold. This result is part of our ongoing investigation of the relationship between algebraic and geometric limits of sequences of isomorphic Kleinian groups. Theorem 4.2: Let $`G`$ be a finitely generated, torsion-free, non-abelian group, let $`\{\rho _j\}𝒟(G)`$ be a sequence converging algebraically to $`\rho 𝒟(G)`$, and suppose that $`\{\rho _j(G)\}`$ converges geometrically to $`\widehat{\mathrm{\Gamma }}`$. If $`\rho (G)`$ is a generalized web group, then the visual core of $`N=𝐇^3/\rho (G)`$ embeds in $`\widehat{N}=𝐇^3/\widehat{\mathrm{\Gamma }}`$ under the covering map $`\pi :N\widehat{N}`$. There are two ways to view Theorem 4.2. On the one hand, one may think of it as a geometric analogue of the main result from , which asserts that under the same hypotheses, there is a compact core for the algebraic limit manifold which embeds in the geometric limit manifold. On the other hand, Theorem 4.2 can be thought of as a generalization of the result, proven in , that when the algebraic limit is a maximal cusp, the convex core of the algebraic limit manifold embeds in the geometric limit manifold. In fact, when $`\mathrm{\Gamma }`$ is a maximal cusp, the visual and convex cores of $`𝐇^3/\mathrm{\Gamma }`$ coincide. In Section 5, we discuss the relationship between the visual core and Klein-Maskit combination along component subgroups. Klein-Maskit combination gives a geometric realization of the topological operation of gluing hyperbolizable 3-manifolds together along incompressible surfaces in their boundaries. While the topology underlying Klein-Maskit combination is well-understood, the geometry is more mysterious. For example, the convex core of a summand of a Klein-Maskit combination need not embed in the resulting manifold. However, we show in Theorem 5.6 that the (interior of the) visual core of a summand does embed in the resulting manifold. There is a relationship between these two investigations, of limits of sequences of Kleinian groups and of Klein-Maskit combination, since in the case that the algebraic limit is a generalized web group, it is shown in that the algebraic limit is a summand of a Klein-Maskit decomposition of the geometric limit. 
This paper was completed while the first author was visiting Rice University, and he would like to thank the department there for their hospitality. ## 2 The visual core Before describing the basic properties of the visual core, we give some definitions. A Kleinian group is a discrete subgroup of $`\mathrm{PSL}_2(𝐂)`$, which we view as acting either on hyperbolic $`3`$-space $`𝐇^3`$ as isometries or on the Riemann sphere $`\widehat{𝐂}`$ as Möbius transformations. The action of $`\mathrm{\Gamma }`$ partitions $`\widehat{𝐂}`$ into the domain of discontinuity $`\mathrm{\Omega }(\mathrm{\Gamma })`$, which is the largest open subset of $`\widehat{𝐂}`$ on which $`\mathrm{\Gamma }`$ acts properly discontinuously, and its complement the limit set $`\mathrm{\Lambda }(\mathrm{\Gamma })`$. The stabilizer $$\mathrm{st}_\mathrm{\Gamma }(\mathrm{\Delta })=\{\gamma \mathrm{\Gamma }|\gamma (\mathrm{\Delta })=\mathrm{\Delta }\}$$ of a connected component $`\mathrm{\Delta }`$ of $`\mathrm{\Omega }(\mathrm{\Gamma })`$ is called a component subgroup of $`\mathrm{\Gamma }`$. Given a measurable set $`X\widehat{𝐂}`$, consider the harmonic function $`h_X`$ on $`𝐇^3`$ defined by setting $`h_X(y)`$ to be the proportion of the geodesic rays emanating from $`y𝐇^3`$ which end in $`X`$. Though we do not explicitly use this formulation, analytically we can write $`h_X`$ in the ball model of hyperbolic $`3`$-space as $$h_X(y)=\frac{1}{4\pi }_X\left(\frac{1|y|^2}{|y\zeta |^2}\right)^2𝑑m(\zeta ).$$ The visual hull of a Kleinian group $`\mathrm{\Gamma }`$ is then defined to be $$\stackrel{~}{𝒱}(\mathrm{\Gamma })=\left\{y𝐇^3\right|h_\mathrm{\Delta }(y)\frac{1}{2}\mathrm{for}\mathrm{all}\mathrm{components}\mathrm{\Delta }\mathrm{of}\mathrm{\Omega }(\mathrm{\Gamma })\}.$$ The visual core $`𝒱(N)`$ of $`N=𝐇^3/\mathrm{\Gamma }`$ is the quotient $`\stackrel{~}{𝒱}(\mathrm{\Gamma })/\mathrm{\Gamma }`$. Although $`𝒱(N)`$ is a closed subset of $`N`$, there is no reason, in general, to suppose that $`𝒱(N)`$ is a submanifold (or suborbifold) of $`N`$. Our first observation is that the visual core of a hyperbolic 3-manifold with finitely generated fundamental group is non-empty unless its domain of discontinuity is connected and non-empty. ###### Proposition 2.1 Let $`\mathrm{\Gamma }`$ be a finitely generated Kleinian group. Then $`\stackrel{~}{𝒱}(\mathrm{\Gamma })`$ is empty if and only if $`\mathrm{\Omega }(\mathrm{\Gamma })`$ is connected and non-empty. Proof of Proposition 2.1: By definition, $`\stackrel{~}{𝒱}(\mathrm{\Gamma })=𝐇^3`$ if and only if the domain of discontinuity $`\mathrm{\Omega }(\mathrm{\Gamma })`$ of $`\mathrm{\Gamma }`$ is empty. If $`\mathrm{\Omega }(\mathrm{\Gamma })`$ is connected and non-empty, then $`\mathrm{\Gamma }`$ is a function group, which is a finitely generated Kleinian group whose domain of discontinuity contains a component invariant under the action of the group. Soma shows that $`\mathrm{\Gamma }`$ is then topologically tame, that is the orbifold $`𝐇^3/\mathrm{\Gamma }`$ has a finite manifold cover which is homeomorphic to the interior of a compact 3-manifold. Corollary 1 from then implies that $`\mathrm{\Lambda }(\mathrm{\Gamma })`$ has measure zero, so that $`h_{\mathrm{\Omega }(\mathrm{\Gamma })}(x)=1`$ for all $`x𝐇^3`$. In particular, we have that $`\stackrel{~}{𝒱}(\mathrm{\Gamma })`$ is empty. 
If $`\mathrm{\Omega }(\mathrm{\Gamma })`$ contains at least two components, let $`\mathrm{}`$ be a geodesic in $`𝐇^3`$ whose endpoints at infinity lie in distinct components $`\mathrm{\Delta }_1`$ and $`\mathrm{\Delta }_2`$ of $`\mathrm{\Omega }(\mathrm{\Gamma })`$. The function $`h_{\mathrm{\Delta }_1}`$ varies continuously between 0 and 1 on $`\mathrm{}`$, and so there exists a point $`x`$ on $`\mathrm{}`$ such that $`h_{\mathrm{\Delta }_1}(x)=\frac{1}{2}`$. It follows that $`x\stackrel{~}{𝒱}(\mathrm{\Gamma })`$, and so $`\stackrel{~}{𝒱}(\mathrm{\Gamma })`$ is non-empty. Proposition 2.1 It is natural to contrast the definition of the visual core with that of the convex core. Recall that the convex hull $`\stackrel{~}{𝒞}(\mathrm{\Gamma })`$ of $`\mathrm{\Lambda }(\mathrm{\Gamma })`$ is obtained from $`𝐇^3`$ by removing each closed hyperbolic half-space which intersects the sphere at infinity in a closed disk contained in $`\mathrm{\Omega }(\mathrm{\Gamma })`$. The convex core $`𝒞(N)`$ of $`N=𝐇^3/\mathrm{\Gamma }`$ is the quotient $`\stackrel{~}{𝒞}(\mathrm{\Gamma })/\mathrm{\Gamma }`$. Equivalently, the convex core of $`N`$ is the smallest convex submanifold of $`N`$ whose inclusion is a homotopy equivalence. (See Epstein and Marden for further discussion of the convex core.) The following proposition describes the basic relationship between the visual and convex cores of a hyperbolic 3-manifold $`N`$. ###### Proposition 2.2 Let $`N=𝐇^3/\mathrm{\Gamma }`$ be a hyperbolic $`3`$-manifold. Then, its visual core $`𝒱(N)`$ is contained in its convex core $`𝒞(N)`$. Moreover, the visual core is equal to the convex core if and only if the boundary $`𝒞(N)`$ of the convex core is totally geodesic. Proof of Proposition 2.2: For each point $`x`$ of $`𝐇^3\stackrel{~}{𝒞}(\mathrm{\Gamma })`$, there exists a hyperplane $`H`$ in $`𝐇^3`$ containing $`x`$ so that the circle at infinity $`C`$ of $`H`$ bounds a closed disk contained entirely in a component $`\mathrm{\Delta }`$ of $`\mathrm{\Omega }(\mathrm{\Gamma })`$. Thus, $`h_\mathrm{\Delta }(x)>\frac{1}{2}`$, which implies that $`x\stackrel{~}{𝒱}(\mathrm{\Gamma })`$. Therefore, $`\stackrel{~}{𝒱}(\mathrm{\Gamma })\stackrel{~}{𝒞}(\mathrm{\Gamma })`$, which in turn implies that $`𝒱(N)𝒞(N)`$. For each point $`x`$ of $`\stackrel{~}{𝒞}(\mathrm{\Gamma })`$, there exists a hyperplane $`H`$ in $`𝐇^3`$ containing $`x`$ so that the circle at infinity $`C`$ of $`H`$ bounds an open disk $`D`$ contained entirely in a component $`\mathrm{\Delta }`$ of $`\mathrm{\Omega }(\mathrm{\Gamma })`$. If $`D`$ does not equal $`\mathrm{\Delta }`$, then $`h_\mathrm{\Delta }(x)>\frac{1}{2}`$, which implies that $`x\stackrel{~}{𝒱}(\mathrm{\Gamma })`$. Therefore, $`\stackrel{~}{𝒞}(N)=\stackrel{~}{𝒱}(N)`$ if and only if each component of $`\mathrm{\Omega }(\mathrm{\Gamma })`$ is a circular disc, which is equivalent to requiring that $`𝒞(N)`$ be totally geodesic. Proposition 2.2 ## 3 The visual core and coverings In this section, we develop a criterion, expressed in terms of limit sets, which guarantees that the visual core of a cover of a hyperbolic manifold embeds under the covering map. This criterion involves the introduction of two closely related notions of how a subgroup $`\mathrm{\Gamma }^{}`$ of a Kleinian group $`\mathrm{\Gamma }`$ sits inside $`\mathrm{\Gamma }`$. 
We begin by observing that if $`\mathrm{\Gamma }^{}`$ is a precisely QF-embedded subgroup of a Kleinian group $`\mathrm{\Gamma }`$, then the interior of $`𝒱(N^{})`$ embeds in $`N`$ under the covering map $`\pi :NN^{}`$ (where $`N=𝐇^3/\mathrm{\Gamma }`$ and $`N^{}=𝐇^3/\mathrm{\Gamma }^{}`$). Here, a subgroup $`\mathrm{\Gamma }^{}`$ of a Kleinian group $`\mathrm{\Gamma }`$ is precisely QF-embedded if, for each $`\gamma \mathrm{\Gamma }\mathrm{\Gamma }^{}`$, there is a component $`\mathrm{\Delta }`$ of $`\mathrm{\Omega }(\mathrm{\Gamma }^{})`$ so that $`\gamma (\mathrm{\Lambda }(\mathrm{\Gamma }^{}))`$ is contained in $`\overline{\mathrm{\Delta }}`$, $`\mathrm{st}_\mathrm{\Gamma }^{}(\mathrm{\Delta })`$ is quasifuchsian, and $`\mathrm{\Delta }`$ is a Jordan domain. Recall that a quasifuchsian group is a finitely generated Kleinian group whose limit set is a Jordan curve and which stabilizes both components of its domain of discontinuity. In particular, if $`\mathrm{\Delta }`$ is a component of the domain of discontinuity $`\mathrm{\Omega }(\mathrm{\Gamma })`$ of a Kleinian group $`\mathrm{\Gamma }`$, $`\mathrm{st}_\mathrm{\Gamma }^{}(\mathrm{\Delta })`$ is quasifuchsian, and $`\mathrm{\Delta }`$ is a Jordan domain, then $`\mathrm{\Lambda }(\mathrm{st}_\mathrm{\Gamma }^{}(\mathrm{\Delta }))=\mathrm{\Delta }`$. If $`\mathrm{\Gamma }`$ is finitely generated, then a component $`\mathrm{\Delta }`$ of $`\mathrm{\Omega }(\mathrm{\Gamma })`$ is a Jordan domain if and only if $`\mathrm{st}_\mathrm{\Gamma }(\mathrm{\Delta })`$ is quasifuchsian, see Lemma 2 of Ahlfors and Theorem 2 of Maskit . Hence, a finitely generated subgroup $`\mathrm{\Gamma }^{}`$ of a Kleinian group $`\mathrm{\Gamma }`$ is precisely QF-embedded if, for each $`\gamma \mathrm{\Gamma }\mathrm{\Gamma }^{}`$, there is a component $`\mathrm{\Delta }`$ of $`\mathrm{\Omega }(\mathrm{\Gamma }^{})`$ so that $`\gamma (\mathrm{\Lambda }(\mathrm{\Gamma }^{}))`$ is contained in $`\overline{\mathrm{\Delta }}`$ and $`\mathrm{\Delta }`$ is a Jordan domain. Precisely QF-embedded subgroups arise naturally in Klein-Maskit combination theory, as we see in Section 5. ###### Proposition 3.1 Let $`\mathrm{\Gamma }`$ be a Kleinian group and let $`\mathrm{\Gamma }^{}`$ be a precisely QF-embedded subgroup of $`\mathrm{\Gamma }`$. Let $`N=𝐇^3/\mathrm{\Gamma }`$ and $`N^{}=𝐇^3/\mathrm{\Gamma }^{}`$, and let $`\pi :N^{}N`$ be the covering map. Then, $`\pi `$ is an embedding restricted to the interior of the visual core $`𝒱(N^{})`$ of $`N^{}`$. Proof of Proposition 3.1: Since the interior of $`𝒱(N^{})`$ is an open submanifold (or possibly an open sub-orbifold, in the case that $`\mathrm{\Gamma }^{}`$ contains torsion) of $`N^{}`$, it suffices to show that $`\pi `$ is injective on the interior of $`𝒱(N^{})`$. As $`𝒱(N^{})`$ is covered by $`\stackrel{~}{𝒱}(\mathrm{\Gamma }^{})𝐇^3`$, it suffices to show that if $`\gamma \mathrm{\Gamma }\mathrm{\Gamma }^{}`$, then $`\gamma (\mathrm{int}(\stackrel{~}{𝒱}(\mathrm{\Gamma }^{})))\mathrm{int}(\stackrel{~}{𝒱}(\mathrm{\Gamma }^{}))`$ is empty. Let $`\gamma `$ be any element of $`\mathrm{\Gamma }\mathrm{\Gamma }^{}`$. Since $`\mathrm{\Gamma }^{}`$ is precisely QF-embedded in $`\mathrm{\Gamma }`$, there exists a component $`\mathrm{\Delta }_1`$ of $`\mathrm{\Omega }(\mathrm{\Gamma }^{})`$ so that $`\gamma (\mathrm{\Lambda }(\mathrm{\Gamma }^{}))\overline{\mathrm{\Delta }_1}`$, $`\mathrm{st}_\mathrm{\Gamma }^{}(\mathrm{\Delta }_1)`$ is quasifuchsian, and $`\mathrm{\Delta }_1`$ is a Jordan domain. 
Since $`U=\widehat{𝐂}\overline{\mathrm{\Delta }_1}`$ is a Jordan domain contained entirely in $`\gamma (\mathrm{\Omega }(\mathrm{\Gamma }^{}))=\mathrm{\Omega }(\gamma \mathrm{\Gamma }^{}\gamma ^1)`$, there exists a component $`\mathrm{\Delta }_1^{}`$ of $`\gamma (\mathrm{\Omega }(\mathrm{\Gamma }^{}))`$ such that $`U\mathrm{\Delta }_1^{}`$. In particular, $`\overline{\mathrm{\Delta }_1}\mathrm{\Delta }_1^{}=\widehat{𝐂}`$. Since $`\mathrm{\Delta }_1`$ is the limit set of the quasifuchsian group $`\mathrm{st}_\mathrm{\Gamma }^{}(\mathrm{\Delta }_1)`$, it has measure zero, and so $`h_{\mathrm{\Delta }_1}(x)+h_{\mathrm{\Delta }_1^{}}(x)1`$ for all $`x𝐇^3`$. Since $`h_{\mathrm{\Delta }_1}`$ is harmonic and non-constant, it cannot be locally constant. Thus, if $`x`$ is in $`\mathrm{int}(\stackrel{~}{𝒱}(\mathrm{\Gamma }^{}))`$, we see that $`h_{\mathrm{\Delta }_1}(x)<\frac{1}{2}`$. Since $`h_{\mathrm{\Delta }_1^{}}(x)1h_{\mathrm{\Delta }_1}(x)>\frac{1}{2}`$, we see that $`x`$ does not lie in $`\gamma (\mathrm{int}(\stackrel{~}{𝒱}(\mathrm{\Gamma }^{})))`$. Therefore $`\gamma (\mathrm{int}(\stackrel{~}{𝒱}(\mathrm{\Gamma }^{})))\mathrm{int}(\stackrel{~}{𝒱}(\mathrm{\Gamma }^{}))`$ is empty, as desired. Proposition 3.1 We next observe that if $`\mathrm{\Gamma }^{}`$ is nicely QF-embedded, then the visual core $`𝒱(N^{})`$ embeds in $`N`$ under the covering map. Here, a subgroup $`\mathrm{\Gamma }^{}`$ of a Kleinian group $`\mathrm{\Gamma }`$ is nicely QF-embedded if, for each $`\gamma \mathrm{\Gamma }\mathrm{\Gamma }^{}`$, there is a component $`\mathrm{\Delta }`$ of $`\mathrm{\Omega }(\mathrm{\Gamma }^{})`$ so that $`\gamma (\mathrm{\Lambda }(\mathrm{\Gamma }^{}))`$ is contained in $`\overline{\mathrm{\Delta }}`$, $`\mathrm{st}_\mathrm{\Gamma }^{}(\mathrm{\Delta })`$ is quasifuchsian, $`\mathrm{\Delta }`$ is a Jordan domain, and $`\gamma (\mathrm{\Lambda }(\mathrm{\Gamma }^{}))\mathrm{\Delta }\mathrm{\Delta }`$. More simply, if $`\mathrm{\Gamma }^{}`$ is a finitely generated subgroup of a Kleinian group $`\mathrm{\Gamma }`$, then $`\mathrm{\Gamma }^{}`$ is nicely QF-embedded if, for each $`\gamma \mathrm{\Gamma }\mathrm{\Gamma }^{}`$, there is a component $`\mathrm{\Delta }`$ of $`\mathrm{\Omega }(\mathrm{\Gamma }^{})`$ so that $`\gamma (\mathrm{\Lambda }(\mathrm{\Gamma }^{}))`$ is contained in $`\overline{\mathrm{\Delta }}`$, $`\gamma (\mathrm{\Lambda }(\mathrm{\Gamma }^{}))\mathrm{\Delta }\mathrm{\Delta }`$, and $`\mathrm{\Delta }`$ is a Jordan domain. A nicely QF-embedded subgroup is always precisely QF-embedded, though the converse need not hold. Nicely embedded QF-subgroups occur naturally in the study of algebraic and geometric limits, as we see in Section 4. ###### Proposition 3.2 Let $`\mathrm{\Gamma }`$ be a Kleinian group and let $`\mathrm{\Gamma }^{}`$ be a finitely generated, nicely QF-embedded subgroup of $`\mathrm{\Gamma }`$. Let $`N=𝐇^3/\mathrm{\Gamma }`$ and $`N^{}=𝐇^3/\mathrm{\Gamma }^{}`$, and let $`\pi :N^{}N`$ be the covering map. Then, $`\pi `$ is an embedding restricted to the visual core $`𝒱(N^{})`$ of $`N^{}`$. Proof of Proposition 3.2: We argue much as in the proof of Proposition 3.1 to show that $`\pi `$ is injective on $`𝒱(N)`$. Let $`\gamma `$ be any element of $`\mathrm{\Gamma }\mathrm{\Gamma }^{}`$. 
Since $`\mathrm{\Gamma }^{}`$ is nicely QF-embedded in $`\mathrm{\Gamma }`$, there exists a component $`\mathrm{\Delta }_1`$ of $`\mathrm{\Omega }(\mathrm{\Gamma }^{})`$ which is a Jordan domain, so that $`\gamma (\mathrm{\Lambda }(\mathrm{\Gamma }^{}))\overline{\mathrm{\Delta }_1}`$ and $`\mathrm{\Delta }_1\gamma (\mathrm{\Lambda }(\mathrm{\Gamma }^{}))`$ is non-empty. Thus $`\widehat{𝐂}\overline{\mathrm{\Delta }_1}`$ is a proper subset of some component $`\mathrm{\Delta }_1^{}`$ of $`\gamma (\mathrm{\Omega }(\mathrm{\Gamma }^{}))`$. In particular, $`\overline{\mathrm{\Delta }_1}\mathrm{\Delta }_1^{}=\widehat{𝐂}`$ and $`\mathrm{\Delta }_1\mathrm{\Delta }_1^{}\mathrm{}`$. Since $`\mathrm{\Delta }_1`$ is the limit set of a quasifuchsian group, it has measure zero. Thus, $`h_{\mathrm{\Delta }_1}(x)+h_{\mathrm{\Delta }_1^{}}(x)>1`$ for all $`x𝐇^3`$. If $`x\stackrel{~}{𝒱}(\mathrm{\Gamma }^{})`$, then $`h_{\mathrm{\Delta }_1}(x)\frac{1}{2}`$. So, we see that $`h_{\mathrm{\Delta }_1^{}}(x)>1h_{\mathrm{\Delta }_1}(x)\frac{1}{2}`$, which implies that $`x`$ does not lie in $`\gamma (\stackrel{~}{𝒱}(\mathrm{\Gamma }^{}))`$. Thus, $`\gamma (\stackrel{~}{𝒱}(\mathrm{\Gamma }^{}))\stackrel{~}{𝒱}(\mathrm{\Gamma }^{})`$ is empty, which proves that $`\pi `$ is injective on the visual core $`𝒱(N^{})`$. To verify that $`\pi `$ is an embedding restricted to $`𝒱(N^{})`$, it only remains to check that $`\pi `$ is proper. If not, then there must exist a sequence $`\{x_j\}`$ of points in $`𝒱(N^{})`$ which exits every compact subset of $`N^{}`$, but such that $`\{\pi (x_j)\}`$ converges to a point $`x`$ in $`N`$. By passing to a subsequence, we may assume that $`d(x_j,x_{j+1})1`$ and $`d(\pi (x_j),x)\frac{1}{3j}`$ for all $`j`$. Let $`\{\stackrel{~}{x}_j\}`$ be a sequence of lifts of $`\{x_j\}`$ to $`𝐇^3`$. Since $`d(\pi (x_j),\pi (x_{j+1}))<\frac{1}{j}`$ and $`d(x_j,x_{j+1})1`$, for each $`j`$ there exists an element $`\gamma _j\mathrm{\Gamma }\mathrm{\Gamma }^{}`$ such that $`d(\stackrel{~}{x}_j,\gamma _j(\stackrel{~}{x}_{j+1}))<\frac{1}{j}`$. Since $`\mathrm{\Gamma }^{}`$ is nicely QF-embedded, there exists, for each $`j`$, a component $`\mathrm{\Delta }_j`$ of $`\mathrm{\Omega }(\mathrm{\Gamma }^{})`$ which is a Jordan domain whose closure contains $`\gamma _j(\mathrm{\Lambda }(\mathrm{\Gamma }^{}))`$. Since $`\stackrel{~}{x}_{j+1}\stackrel{~}{𝒱}(\mathrm{\Gamma })`$, $`\gamma _j(\stackrel{~}{𝒱}(\mathrm{\Gamma }))\stackrel{~}{𝒱}(\mathrm{\Gamma })=\mathrm{}`$, and $`\gamma _j(\mathrm{\Lambda }(\mathrm{\Gamma }))\mathrm{\Delta }_j`$, we see that $`h_{\mathrm{\Delta }_j}(\gamma _j(\stackrel{~}{x}_{j+1}))>\frac{1}{2}`$. Since $`\stackrel{~}{x}_j\stackrel{~}{𝒱}(\mathrm{\Gamma }^{})`$, $`h_{\mathrm{\Delta }_j}(\stackrel{~}{x}_j)\frac{1}{2}`$. So, by continuity, there exists a point $`\stackrel{~}{q}_j`$ between $`\stackrel{~}{x}_j`$ and $`\gamma _j(\stackrel{~}{x}_{j+1})`$ such that $`d(\stackrel{~}{q}_j,\stackrel{~}{x}_j)<\frac{1}{j}`$ and $`h_{\mathrm{\Delta }_j}(\stackrel{~}{q}_j)=\frac{1}{2}`$. Thus, $`\{q_j\}`$ is a sequence of points in $`𝒱(N^{})`$ which exits every compact subset of $`N^{}`$, but such that $`\{\pi (q_j)\}`$ converges to a point $`x`$ in $`N`$. 
Since $`\mathrm{\Gamma }^{}`$ is finitely generated, there exist only finitely many inequivalent components of $`\mathrm{\Omega }(\mathrm{\Gamma }^{})`$, so we may assume (by choosing different lifts, passing to a subsequence, and relabelling) that there exists a fixed component $`\mathrm{\Delta }_0`$ of $`\mathrm{\Omega }(\mathrm{\Gamma }^{})`$ which is a Jordan domain, so that $`h_{\mathrm{\Delta }_0}(\stackrel{~}{q}_j)=\frac{1}{2}`$ for all $`j`$. Let $`\mathrm{\Gamma }_0=\mathrm{st}_\mathrm{\Gamma }^{}(\mathrm{\Delta }_0)`$, $`N_0=𝐇^3/\mathrm{\Gamma }_0`$ and $`p:𝐇^3N_0`$ be the covering map. Since $`h_{\mathrm{\Delta }_0}(\stackrel{~}{q}_j)=\frac{1}{2}`$, we conclude that $`\stackrel{~}{q}_j\stackrel{~}{𝒱}(\mathrm{\Gamma }_0)`$. Let $`y_j=p(\stackrel{~}{q}_j)`$. Since $`\{q_j\}`$ exits every compact subset of $`N^{}`$, $`\{y_j\}`$ must exit every compact subset of $`N_0`$. Proposition 2.2 guarantees that the sequence $`\{y_j\}`$ lies entirely in the convex core $`C(N_0)`$ of $`N_0`$. Since $`\mathrm{\Gamma }^{}`$ is finitely generated and $`\mathrm{\Delta }_0`$ is a Jordan domain, $`\mathrm{\Gamma }_0`$ is quasifuchsian. Therefore, the $`ϵ`$-thick part of the convex core, $$C(N_0)_ϵ=\{yC(N_0)|\mathrm{inj}_{N_0}(y)ϵ\},$$ is compact for all $`ϵ>0`$, see Bowditch . (Here, $`\mathrm{inj}_{N_0}(y)`$ denotes the injectivity radius of the point $`y`$ in $`N_0`$.) Thus, $`\mathrm{inj}_{N_0}(y_j)0`$, which implies that $`\mathrm{inj}_N(\pi (x_j))0`$. However, this contradicts the fact that $`\{\pi (x_j)\}`$ converges in $`N`$. Proposition 3.2 Remarks: (1) One may think of Proposition 3.1 as an analogue of Proposition 6.1 of , which asserts that if $`\mathrm{\Gamma }^{}`$ is a finitely generated, torsion-free, precisely embedded generalized web subgroup of $`\mathrm{\Gamma }`$, then there is a compact core for $`N^{}`$ which embeds (via the covering map $`\pi :NN^{}`$) in $`N`$. That result may be generalized, using the same techniques as in , to show that if $`\mathrm{\Gamma }^{}`$ is a finitely generated, torsion-free precisely QF-embedded subgroup of $`\mathrm{\Gamma }`$, then there is a compact core for $`N^{}`$ which embeds (via the covering map $`\pi `$) in $`N`$. This generalization is the more direct topological analogue of Proposition 3.1. (2) The arguments in the proofs of Propositions 3.1 and 3.2 may be used to show that larger subsets embed. Let $`\stackrel{~}{𝒲}(\mathrm{\Gamma }^{})`$ be the set of points $`x𝐇^3`$ such that $`h_\mathrm{\Delta }(x)\frac{1}{2}`$ for every component $`\mathrm{\Delta }`$ of $`\mathrm{\Omega }(\mathrm{\Gamma }^{})`$ such that $`\mathrm{st}_\mathrm{\Gamma }^{}(\mathrm{\Delta })`$ is quasifuchsian, and set $`𝒲(N^{})=\stackrel{~}{𝒲}(\mathrm{\Gamma }^{})/\mathrm{\Gamma }^{}`$. Then one can adapt the proof of Proposition 3.1 to show that if $`\mathrm{\Gamma }^{}`$ is a precisely QF-embedded subgroup of $`\mathrm{\Gamma }`$, then the interior of $`𝒲(N^{})`$ embeds in $`N`$. Similarly, one can adapt the proof of Proposition 3.2 to show that if $`\mathrm{\Gamma }^{}`$ is a finitely generated, nicely QF-embedded subgroup of $`\mathrm{\Gamma }`$, then $`𝒲(N^{})`$ embeds in $`N`$. (3) Note that the definitions for a precisely QF-embedded and of a nicely QF-embedded subgroup $`\mathrm{\Gamma }^{}`$ of a Kleinian group $`\mathrm{\Gamma }`$ both make sense for an infinitely generated subgroup $`\mathrm{\Gamma }^{}`$. In fact, Proposition 3.1 as stated holds for infinitely generated, precisely QF-embedded subgroups. 
The reason that in the definitions we require both that the component $`\mathrm{\Delta }`$ of $`\mathrm{\Omega }(\mathrm{\Gamma }^{})`$ be a Jordan domain and that $`\mathrm{st}_\mathrm{\Gamma }^{}(\mathrm{\Delta })`$ be quasifuchsian is that it is possible, by taking the Klein combination of a quasifuchsian group with an infinitely generated Kleinian group with trivial component subgroups, to construct an infinitely generated Kleinian group whose component subgroups are all quasifuchsian but the components of its domain of discontinuity are not all simply connected, and so in particular cannot all be Jordan domains. ## 4 Algebraic and geometric limits In an earlier paper , we proved that if the algebraic limit of a sequence of isomorphic Kleinian groups is a generalized web group, then it is a nicely QF-embedded subgroup of the geometric limit. In that paper, we used this result to establish that there is a compact core for the algebraic limit manifold which embeds in the geometric limit manifold, thus obtaining “topological” information about how the algebraic limit sits inside the geometric limit. In this section, we use the results of the previous section to obtain “geometric” information about how the algebraic limit sits inside the geometric limit. We briefly recall the basic definitions from the theory of algebraic and geometric limits. We refer the interested reader to Jørgensen and Marden for more details. Given a finitely generated group $`G`$, let $`𝒟(G)`$ denote the space of discrete, faithful representations of $`G`$ into $`\mathrm{PSL}_2(𝐂)`$. A sequence $`\{\rho _i\}`$ in $`𝒟(G)`$ converges algebraically to $`\rho `$ if $`\{\rho _i(g)\}`$ converges to $`\rho (g)`$ for each $`gG`$. A sequence $`\{\mathrm{\Gamma }_j\}`$ of Kleinian groups converges geometrically to a Kleinian group $`\widehat{\mathrm{\Gamma }}`$ if every element of $`\widehat{\mathrm{\Gamma }}`$ is the limit of a sequence $`\{\gamma _j\mathrm{\Gamma }_j\}`$ and if every accumulation point of every sequence $`\{\gamma _j\mathrm{\Gamma }_j\}`$ lies in $`\widehat{\mathrm{\Gamma }}`$. If $`G`$ is not virtually abelian and if $`\{\rho _i\}`$ converges to $`\rho `$ in $`𝒟(G)`$, then there is a subsequence $`\{\rho _j(G)\}`$ of $`\{\rho _i(G)\}`$ which converges geometrically to a Kleinian group $`\widehat{\mathrm{\Gamma }}`$ which contains $`\rho (G)`$. In this note, we restrict ourselves to sequences $`\{\rho _n\}`$ in $`𝒟(G)`$ so that $`\{\rho _n\}`$ converges algebraically to some $`\rho 𝒟(G)`$ and so that $`\{\rho _n(G)\}`$ converges geometrically to $`\widehat{\mathrm{\Gamma }}`$. The Kleinian group $`\rho (G)`$ is the algebraic limit of $`\{\rho _n\}`$, and $`\widehat{\mathrm{\Gamma }}`$ is the geometric limit of $`\{\rho _n(G)\}`$. If $`G`$ is torsion-free, we refer to $`𝐇^3/\rho (G)`$ as the algebraic limit manifold and to $`𝐇^3/\widehat{\mathrm{\Gamma }}`$ as the geometrical limit manifold. Since $`\rho (G)\widehat{\mathrm{\Gamma }}`$, there is a natural covering map $`\pi :𝐇^3/\rho (\mathrm{\Gamma })𝐇^3/\widehat{\mathrm{\Gamma }}`$. In order to understand the relationship between the algebraic and geometric limit, it is important to understand how $`\rho (G)`$ “sits inside” $`\widehat{\mathrm{\Gamma }}`$, which is closely related to understanding the covering map $`\pi `$. 
A finitely generated Kleinian group $`\mathrm{\Gamma }`$ is called a generalized web group if $`\mathrm{\Omega }(\mathrm{\Gamma })`$ is non-empty and if every component subgroup of $`\mathrm{\Gamma }`$ is quasifuchsian (or equivalently, if every component of $`\mathrm{\Omega }(\mathrm{\Gamma })`$ is a Jordan domain). Theorem A from asserts that if the algebraic limit is a generalized web group, then it is a nicely QF-embedded subgroup of the geometric limit. ###### Theorem 4.1 (Theorem A of ) Let $`G`$ be a finitely generated, torsion-free, non-abelian group, let $`\{\rho _j\}`$ be a sequence in $`𝒟(G)`$ converging algebraically to $`\rho 𝒟(G)`$, and suppose that $`\{\rho _j(G)\}`$ converges geometrically to $`\widehat{\mathrm{\Gamma }}`$. If $`\rho (G)`$ is a generalized web group, then $`\rho (G)`$ is a nicely QF-embedded subgroup of $`\widehat{\mathrm{\Gamma }}`$. One may combine Theorem 4.1 with Proposition 3.2 to obtain “geometric” information about how the algebraic limit sits within the geometric limit in this case. ###### Theorem 4.2 Let $`G`$ be a finitely generated, torsion-free, non-abelian group, let $`\{\rho _j\}𝒟(G)`$ be a sequence converging algebraically to $`\rho 𝒟(G)`$, and suppose that $`\{\rho _j(G)\}`$ converges geometrically to $`\widehat{\mathrm{\Gamma }}`$. If $`\rho (G)`$ is a generalized web group, then the visual core of $`N=𝐇^3/\rho (G)`$ embeds in $`\widehat{N}=𝐇^3/\widehat{\mathrm{\Gamma }}`$ under the covering map $`\pi :N\widehat{N}`$. Remarks: (1) One may think of Theorem 4.2 as one way to generalize Proposition 3.2 from , which shows that if the algebraic limit is a maximal cusp, then the convex core of the algebraic limit manifold embeds in the geometric limit manifold under the covering map. In fact, one may view our Theorem 4.2 and the result from that asserts that, under the same assumptions, a compact core for the algebraic limit manifold embeds in the geometric limit manifold, as two different generalizations of Proposition 3.2 from . (2) In general, even if the algebraic limit is a generalized web group, the convex core of the algebraic limit manifold need not embed in the geometric limit manifold. (3) The examples given in illustrate the point that the visual core of the algebraic limit manifold need not embed in the geometric limit manifold in the case that the algebraic limit is not a generalized web group. ## 5 Klein-Maskit Combination In this section, we discuss the relationship between the visual core and the operation of Klein-Maskit combination. We restrict our entire discussion to Klein-Maskit combination along component subgroups. For a more complete discussion of Klein-Maskit combination see Maskit . In this setting, we see that the interior of the visual core of a summand of a Klein-Maskit decomposition of a hyperbolic 3-manifold embeds in the manifold. There are two types of Klein-Maskit combination. The first, type I, corresponds topologically to gluing 2 hyperbolic 3-manifolds together along incompressible components of their conformal boundary. The second, type II, corresponds topologically to gluing together two incompressible components of the conformal boundary of a single hyperbolic 3-manifold. The following theorem summarizes the relevant properties of Klein-Maskit combination of type I along a component subgroup (see Theorem VII.C.2 in ). ###### Theorem 5.1 (Klein-Maskit combination I) Let $`\mathrm{\Gamma }_1`$ and $`\mathrm{\Gamma }_2`$ be Kleinian groups, and let $`\mathrm{\Phi }=\mathrm{\Gamma }_1\mathrm{\Gamma }_2`$. 
Suppose that $`\mathrm{\Phi }`$ is a quasifuchsian group which is a component subgroup of both $`\mathrm{\Gamma }_1`$ and $`\mathrm{\Gamma }_2`$, and that $`\mathrm{\Lambda }(\mathrm{\Gamma }_1)`$ and $`\mathrm{\Lambda }(\mathrm{\Gamma }_2)`$ lie in the closures of different components of $`\mathrm{\Omega }(\mathrm{\Phi })`$. Then, 1. $`\mathrm{\Gamma }=\mathrm{\Gamma }_1,\mathrm{\Gamma }_2`$ is a Kleinian group isomorphic to the amalgamated free product of $`\mathrm{\Gamma }_1`$ and $`\mathrm{\Gamma }_2`$ along $`\mathrm{\Phi }`$; 2. $`\mathrm{\Gamma }_1`$ and $`\mathrm{\Gamma }_2`$ are nicely QF-embedded subgroups of $`\mathrm{\Gamma }`$; 3. If $`\gamma \mathrm{\Gamma }\mathrm{\Gamma }_i`$, then $`\gamma (\mathrm{\Lambda }(\mathrm{\Gamma }_i))`$ is contained in a component $`\mathrm{\Delta }`$ of $`\mathrm{\Omega }(\mathrm{\Gamma }_i)`$ which is $`\mathrm{\Gamma }_i`$-equivalent to the component $`\mathrm{\Delta }_i`$ of $`\mathrm{\Omega }(\mathrm{\Gamma }_i)`$ bounded by $`\mathrm{\Lambda }(\mathrm{\Phi })`$. Moreover, $`\mathrm{\Delta }\gamma (\mathrm{\Lambda }(\mathrm{\Gamma }_i))`$ is non-empty; and 4. $`𝐇^3/\mathrm{\Gamma }`$ is homeomorphic to the manifold (or orbifold) obtained from $`(𝐇^3\mathrm{\Delta }_1)/\mathrm{\Gamma }_1`$ and $`(𝐇^3\mathrm{\Delta }_2)/\mathrm{\Gamma }_2`$ by identifying $`\mathrm{\Delta }_1/\mathrm{\Phi }`$ with $`\mathrm{\Delta }_2/\mathrm{\Phi }`$. In this case, we say that $`\mathrm{\Gamma }_1`$ is a summand of a simple type I Klein-Maskit decomposition of $`\mathrm{\Gamma }`$. Combining property (2) of Theorem 5.1 with Proposition 3.2 yields the following result: ###### Proposition 5.2 Let $`\mathrm{\Gamma }_1`$ be a finitely generated Kleinian group which is a summand of a simple type I Klein-Maskit decomposition of $`\mathrm{\Gamma }`$, and set $`N_1=𝐇^3/\mathrm{\Gamma }_1`$ and $`N=𝐇^3/\mathrm{\Gamma }`$. Then, the visual core $`𝒱(N_1)`$ of $`N_1`$ embeds in $`N`$ (via the covering map $`\pi :N_1N`$). If $`\mathrm{\Gamma }_1`$ is not finitely generated, the techniques in the proof of Proposition 3.2 may be adapted to show that $`𝒱(N_1)`$ still embeds in $`N`$. More simply, one may combine property (2) of Theorem 5.1 with Proposition 3.1 to obtain the following weaker result: ###### Proposition 5.3 Let $`\mathrm{\Gamma }_1`$ be a summand of a simple type I Klein-Maskit decomposition of a Kleinian group $`\mathrm{\Gamma }`$, and set $`N_1=𝐇^3/\mathrm{\Gamma }_1`$ and $`N=𝐇^3/\mathrm{\Gamma }`$. Then, the interior of the visual core $`𝒱(N_1)`$ of $`N_1`$ embeds in $`N`$ (via the covering map $`\pi :N_1N`$). Moreover, if $`\mathrm{\Gamma }`$ is a simple type I Klein-Maskit combination of $`\mathrm{\Gamma }_1`$ and $`\mathrm{\Gamma }_2`$, we may find a larger subset of $`N_1`$ which embeds in $`N`$. Consider the sets $$\stackrel{~}{𝒳}_i=\left\{x𝐇^3\right|h_\mathrm{\Delta }(x)\frac{1}{2}\mathrm{for}\mathrm{all}\mathrm{components}\mathrm{\Delta }\mathrm{of}\mathrm{\Omega }(\mathrm{\Gamma }^{})\mathrm{equivalent}\mathrm{in}\mathrm{\Gamma }_i\mathrm{to}\mathrm{\Delta }_i\}.$$ Clearly, the visual hull $`\stackrel{~}{𝒱}(\mathrm{\Gamma }_i)`$ of $`\mathrm{\Gamma }_i`$ is contained in $`\stackrel{~}{𝒳}_i.`$ Let $`𝒳_i=\stackrel{~}{𝒳}_i/\mathrm{\Gamma }_iN_i=𝐇^3/\mathrm{\Gamma }_i`$. Condition (3) above and the arguments in the proof of Proposition 3.2 can then be used to show that $`𝒳_i`$ embeds in $`N=𝐇^3/\mathrm{\Gamma }`$. 
The following theorem summarizes the relevant properties of Klein-Maskit combination of type II along a component subgroup (see Theorem VII.E.5 in ). ###### Theorem 5.4 (Klein-Maskit combination II) Let $`\mathrm{\Gamma }_1`$ be a Kleinian group, and let $`\mathrm{\Delta }`$ and $`\mathrm{\Delta }^{}`$ be components of $`\mathrm{\Omega }(\mathrm{\Gamma }_1)`$ which are Jordan domains. Suppose that $`\mathrm{\Phi }=\mathrm{st}_{\mathrm{\Gamma }_1}(\mathrm{\Delta })`$ and $`\mathrm{\Phi }^{}=\mathrm{st}_{\mathrm{\Gamma }_1}(\mathrm{\Delta }^{})`$ are quasifuchsian and are not conjugate by an element of $`\mathrm{\Gamma }_1`$. Let $`\gamma `$ be a Möbius transformation which conjugates $`\mathrm{\Phi }^{}`$ to $`\mathrm{\Phi }`$, and assume that $`\gamma (\mathrm{\Lambda }(\mathrm{\Gamma }_1))`$ and $`\mathrm{\Lambda }(\mathrm{\Gamma }_1)`$ lie in the closures of different components of $`\mathrm{\Omega }(\mathrm{\Phi })`$. Then, 1. $`\mathrm{\Gamma }=\mathrm{\Gamma }_1,\gamma `$ is a Kleinian group isomorphic to the HNN-extension of $`\mathrm{\Gamma }_1`$ with stable letter $`\gamma `$ and associated subgroups $`\mathrm{\Phi }`$ and $`\mathrm{\Phi }^{}`$; 2. $`\mathrm{\Gamma }_1`$ is a precisely QF-embedded subgroup of $`\mathrm{\Gamma }`$; 3. If $`\gamma \mathrm{\Gamma }\mathrm{\Gamma }_1`$, then $`\gamma (\mathrm{\Lambda }(\mathrm{\Gamma }_1))`$ is contained in a component of $`\mathrm{\Omega }(\mathrm{\Gamma }_1)`$ which is $`\mathrm{\Gamma }_1`$-equivalent to either $`\mathrm{\Delta }`$ or $`\mathrm{\Delta }^{}`$; 4. $`𝐇^3/\mathrm{\Gamma }`$ is homeomorphic to the manifold (or orbifold) obtained from $`(𝐇^3\mathrm{\Delta }\mathrm{\Delta }^{})/\mathrm{\Gamma }_1`$ by identifying $`\mathrm{\Delta }/\mathrm{\Phi }`$ with $`\mathrm{\Delta }^{}/\mathrm{\Phi }^{}`$ by the homeomorphism determined by $`\gamma `$. In this case, we say that $`\mathrm{\Gamma }_1`$ is a summand of simple type II Klein-Maskit decomposition of $`\mathrm{\Gamma }`$. Combining property (2) of Theorem 5.4 with Proposition 3.1 yields the following result: ###### Proposition 5.5 Let $`\mathrm{\Gamma }_1`$ be a summand of simple type II KIein-Maskit decomposition of $`\mathrm{\Gamma }`$, and set $`N_1=𝐇^3/\mathrm{\Gamma }_1`$ and $`N=𝐇^3/\mathrm{\Gamma }`$. Then, the interior of the visual core $`𝒱(N_1)`$ of $`N_1`$ embeds in $`N`$ (via the covering map $`\pi :N_1N`$). As in the type I situation, if $`\mathrm{\Gamma }_1`$ is a summand of simple type II Klein-Maskit decomposition of $`\mathrm{\Gamma }`$, we may find a larger subset of $`N_1`$ which embeds in $`N`$. Consider the set $$\stackrel{~}{𝒳}=\left\{x𝐇^3\right|h_\mathrm{\Delta }(x)\frac{1}{2}\mathrm{for}\mathrm{all}\mathrm{components}\mathrm{\Delta }\mathrm{of}\mathrm{\Omega }(\mathrm{\Gamma }^{})\mathrm{equivalent}\mathrm{in}\mathrm{\Gamma }_1\mathrm{to}\mathrm{\Delta }\mathrm{or}\mathrm{\Delta }^{}\},$$ which clearly contains the visual hull $`\stackrel{~}{𝒱}(\mathrm{\Gamma }_1)`$ of $`\mathrm{\Gamma }_1`$. Let $`𝒳=\stackrel{~}{𝒳}/\mathrm{\Gamma }_1N_1`$. Condition (3) above and the arguments in the proof of Proposition 3.1 can be used to show that the interior of $`𝒳`$ embeds in $`N=𝐇^3/\mathrm{\Gamma }`$. In general, if a Kleinian group $`\mathrm{\Gamma }`$ can be built from $`\mathrm{\Gamma }_1`$ and a collection of other Kleinian groups by repeatedly performing Klein-Maskit combinations of types I and/or II along component subgroups, we say that $`\mathrm{\Gamma }_1`$ is a summand of a Klein-Maskit decomposition of $`\mathrm{\Gamma }`$. 
By applying Propositions 5.3 and 5.5, we obtain the following summary of the results of this section. ###### Theorem 5.6 Let $`\mathrm{\Gamma }_1`$ be a summand of a Klein-Maskit decomposition of $`\mathrm{\Gamma }`$. If $`N_1=𝐇^3/\mathrm{\Gamma }_1`$ and $`N=𝐇^3/\mathrm{\Gamma }`$, then the interior of the visual core $`𝒱(N_1)`$ of $`N_1`$ embeds in $`N`$ (via the covering map $`\pi :N_1\to N`$). Remarks: (1) The definition of the visual core was suggested by Thurston’s reproof of the Klein-Maskit combination theorems. In our notation, Thurston shows that in the type I decomposition, $`N=𝐇^3/\mathrm{\Gamma }`$ is obtained from $`𝒳_1`$ and $`𝒳_2`$ by identifying points in their boundaries. (In general, one must be a little careful since $`𝒳_i`$ need not be a submanifold.) Similarly, in the type II situation he shows that $`N`$ is obtained from $`𝒳`$ by identifying points in the boundary. (This proof is discussed in outline in Section 8 of Morgan.) (2) Notice that if $`\mathrm{\Gamma }_1`$ is a summand of a simple type II Klein-Maskit decomposition of a Kleinian group $`\mathrm{\Gamma }`$, then $`\mathrm{\Gamma }_1`$ is a precisely QF-embedded subgroup of $`\mathrm{\Gamma }`$, but is not a nicely QF-embedded subgroup. In this same case, the interior of the visual core of $`N_1`$ embeds in $`N`$, but the visual core itself does not. (3) Corollary D of asserts that if the algebraic limit of a sequence of isomorphic Kleinian groups is a generalized web group, then it is a summand of a Klein-Maskit decomposition of the geometric limit. Hence, there is a close relationship between Theorems 4.2 and 5.6.
# The shape of a moving fluxon in stacked Josephson junctions. ## Abstract We study numerically and analytically the shape of a single fluxon moving in a double stacked Josephson junctions (SJJ’s) for various junction parameters. We show that the fluxon in a double SJJ’s consists of two components, which are characterized by different Swihart velocities and Josephson penetration depths. The weight coefficients of the two components depend on the parameters of the junctions and the velocity of the fluxon. It is shown that the fluxon in SJJ’s may have an unusual shape with an inverted magnetic field in the second junction when the velocity of the fluxon is approaching the lower Swihart velocity. Finally, we study the influence of fluxon shape on flux-flow current-voltage characteristics and analyze the spectrum of Cherenkov radiation for fluxon velocity above the lower Swihart velocity. Analytic expression for the wavelength of Cherenkov radiation is derived. I. INTRODUCTION Properties of stacked Josephson junctions (SJJ’s) are of considerable interest both for applications in cryoelectronics and for fundamental physics. A particular interest in SJJ’s was stimulated by the discovery of high-$`T_c`$ superconductors (HTSC). Highly anisotropic HTSC compounds, such as Bi<sub>2</sub>Sr<sub>2</sub>CaCu<sub>2</sub>O<sub>8+x</sub>, may be considered as stacks of atomic scale intrinsic Josephson junctions. The layered structure determines many of the unusual properties of HTSC. Behavior of model low-$`T_c`$ SJJ’s and HTSC exhibit many similarities. Due to mutual coupling of the junctions in the stack, the physical properties of SJJ’s can be qualitatively different from those of single Josephson junctions (JJ’s). Therefore, a one-to-one comparison between single and stacked Josephson junctions is difficult to do. Hence, the basic properties of SJJ’s have to be studied in order to describe correctly the Josephson behavior of layered superconductors. Perpendicular ($`c`$-axis) transport measurements in magnetic field, $`H`$, parallel to layers ($`ab`$-plane) is an explicit way of studying Josephson phenomena in SJJ’s. In this case the magnetic field penetrates the stack in the form of Josephson-type vortices (fluxons), and the c-axis voltage is caused by motion of such vortices along the layers. The shape of a fluxon in SJJ’s is different both from that of an Abricosov vortex in bulk superconductor, since it does not have a normal core, and from that of single JJ, since the circulating currents are not confined within one junction. The behavior of SJJ’s becomes particularly complicated when the length of the stack in one direction, $`L`$, is much longer than the Josephson penetration depth, $`\lambda _J`$. One of the unusual properties of long SJJ’s is the existence of multiple quasi-equilibrium fluxon modes, and submodes, which are characterized by different fluxon configurations in the stack. Due to the existence of such modes/submodes, the state of the stack is not well defined by external conditions. Rather it can be described only statistically with a certain probability of being in any of the quasi-equilibrium states. Experimental evidences for the existence of such modes were obtained both for HTSC intrinsic SJJ’s and low-$`T_c`$ multilayers. The existence of fluxon modes/submodes dramatically changes the behavior of long strongly coupled SJJ’s with respect to that of single long JJ’s. 
An example of this is the critical current, $`I_c`$, which becomes multiple valued; the fluctuations of $`I_c`$ become anomalously large; and the magnetic field dependence of $`I_c`$ becomes very complicated without periodicity in $`H`$. For understanding both the static and dynamic properties of SJJ’s, the shape of the fluxon in SJJ’s is important, and should be determined. In the static case, the shape of the single fluxon was studied for layered superconductors consisting of an infinite number of thin identical or nonidentical layers and for SJJ’s . In our previous work, we have shown that in double SJJ’s, two special single component fluxon solutions exist, which are characterized by different Swihart velocities and Josephson penetration depths. An approximate analytic fluxon solution was suggested as a linear combination of the single component solutions. For the static case, the approximate solution was shown to be in a quantitative agreement with numerically obtained solutions. Extending the approximate analytic solution to the dynamic case, it was shown that drastic changes in the fluxon shape could occur with increasing the fluxon velocity, resulting e.g. in possible inversion of the sign of the magnetic field in the second junction and appearance of attractive fluxon interaction. On the other hand, the choice between the special single component solutions and the approximate analytic fluxon solution was not addressed and the dependence of the fluxon shape on the junction parameters was not studied. Using the perturbation approach, the second order correction to the approximate analytic solution was derived and the accuracy of the solution was analyzed in the recent paper. To our knowledge, no comprehensive analysis of the single fluxon shape in SJJ’s exists for the dynamic case. The scope of the current paper is to study quantitatively the shape of the moving fluxon in double SJJ’s for various junction parameters. Our analysis is based on numerical simulations and analytical treatments of the coupled sine-Gordon equation, which describes the physical properties of SJJ’s. We show that the single moving fluxon in double SJJ’s may be described by both a single component solution and a double component solution depending on the parameters of the stack and the fluxon velocity. Moreover, the shape of the fluxon may be quite anomalous, with inverted magnetic field in the second junction and with nonmonotonous change of phase. The paper is organized as follows: in section II, we rewrite the coupled sine-Gordon equation for the case of solitonic fluxon motion and review analytic single fluxon solutions obtained in Refs.. In section III, we present the results of numerical simulations for frictionless fluxon motion for different parameters of SJJ’s, compare it with analytical predictions of Ref., we also formulate and verify conditions for observation of different fluxon solutions. In section IV, we discuss implementations of the fluxon shape in experimental situation. In subsection IV.A, we study the influence of finite damping and simulate current-voltage characteristics. Finally, in subsection IV.B we consider the case of nonsolitonic fluxon motion with the propagation velocity larger than the lower Swihart velocity. We have shown that such fluxon motion is accompanied by plasma wave exitations and derive the expression for the wavelength of such ”Cherenkov” radiation. II. 
GENERAL RELATIONS We consider a double stack with the overlap geometry, consisting of junctions 1 and 2 with the following parameters: $`J_{ci}`$ -the critical current density, $`C_i`$ -the capacitance, $`t_i`$\- the thickness of the tunnel barrier between the layers, $`d_i`$ and $`\lambda _{Si}`$ \- the thickness and London penetration depth of superconducting layers and L\- the length of the stack. Hereafter the subscript $`i`$ on a quantity represents its number. The elements of the stack are numerated from the bottom to the top, so that junction $`i`$ consists of superconducting layers $`i`$, $`i+1`$ and the tunnel barrier $`i`$. The fluxon will be placed in junction 1, if not stated otherwise. The physical properties of SJJ’s are described by the coupled sine-Gordon equation, which for a double stack with the overlap geometry can be written as: $$\left|\begin{array}{c}\phi _1^{\prime \prime }\\ \phi _2^{\prime \prime }\end{array}\right|=\left|\begin{array}{cc}1& S_2/\mathrm{\Lambda }_1\\ S_2/\mathrm{\Lambda }_1& \mathrm{\Lambda }_2/\mathrm{\Lambda }_1\end{array}\right|\left|\begin{array}{c}sin\left(\phi _1\right)+\stackrel{}{\phi _1}+\alpha _1\stackrel{}{\phi _1}\frac{J_b}{J_{c1}}\\ \frac{J_{c2}}{J_{c1}}sin\left(\phi _2\right)+\frac{C_2}{C_1}\stackrel{}{\phi _2}+\alpha _2\stackrel{}{\phi _2}\frac{J_b}{J_{c1}}\end{array}\right|,$$ (1) where $`\phi _{1,2}`$ are gauge invariant phase differences in JJ’s 1 and 2, ’prime’ and ’dot’ on the quantity represent partial derivatives in space and time, respectively. Space and time are normalized to Josephson penetration depth, $`\lambda _{J1}=\sqrt{\frac{\mathrm{\Phi }_0c}{8\pi ^2J_{c1}\mathrm{\Lambda }_1}}`$, and inverted Josephson plasma frequency, $`\omega _{p1}^1=\sqrt{\frac{\mathrm{\Phi }_0C_1}{2\pi cJ_{c1}}}`$, respectively, of the single JJ 1. Here $`\mathrm{\Phi }_0`$ is the flux quantum, $`c`$ is the velocity of light in vacuum and $`\mathrm{\Lambda }_i=t_i+\lambda _{Si}coth\left(\frac{d_i}{\lambda _{Si}}\right)+\lambda _{Si+1}coth\left(\frac{d_{i+1}}{\lambda _{Si+1}}\right),`$ $`S_i=\lambda _{Si}cosech\left(\frac{d_i}{\lambda _{Si}}\right).`$ The last terms in the right hand side of Eq.(1) represent total currents in the JJ’s, which consist of superconducting, displacement and quasiparticle contributions, and $`J_b`$ represents bias current density. Viscous damping due to quasiparticle current is characterized by the damping coefficient, $`\alpha _i=\beta _{ci}^{1/2}`$, where $`\beta _{ci}`$ is the McCumber parameter of single JJ $`i`$. The coupling strength in the double SJJ’s is described by a coupling parameter $`S=\frac{S_2}{\sqrt{\mathrm{\Lambda }_1\mathrm{\Lambda }_2}}.`$ The magnetic induction in the stack is equal to $`B_1={\displaystyle \frac{H_0}{2\left(1S^2\right)}}\left[\phi _1^{}+{\displaystyle \frac{S_2}{\mathrm{\Lambda }_1}}\phi _2^{}\right],`$ $$B_2=\frac{H_0}{2\left(1S^2\right)}\left[\frac{S_2}{\mathrm{\Lambda }_2}\phi _1^{}+\frac{\mathrm{\Lambda }_1}{\mathrm{\Lambda }_2}\phi _2^{}\right],$$ (2) where $`H_0=\frac{\mathrm{\Phi }_0}{\pi \lambda _{J1}\mathrm{\Lambda }_1}`$. For the soliton-like fluxon motion, the phase differences in the stack remain unchanged in the coordinate frame moving along with the fluxon. 
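As a quick numerical illustration of these definitions, the short Python sketch below evaluates $`\mathrm{\Lambda }_i`$, $`S_i`$ and the coupling parameter $`S`$ for the symmetric stack used later in Sec. III ($`d_i=t_i=0.01\lambda _{J1}`$, $`\lambda _{Si}=0.1\lambda _{J1}`$). All lengths are measured in units of $`\lambda _{J1}`$ and no parameters beyond those quoted in the text are assumed; the sketch reproduces the quoted value $`S\approx 0.5`$ for the stack of Fig. 1.

```python
import numpy as np

def Lambda_i(t, lam_s, d, lam_s_next, d_next):
    """Effective magnetic thickness Lambda_i = t_i + lam_Si*coth(d_i/lam_Si)
    + lam_S(i+1)*coth(d_(i+1)/lam_S(i+1)); all lengths in units of lambda_J1."""
    return t + lam_s / np.tanh(d / lam_s) + lam_s_next / np.tanh(d_next / lam_s_next)

def S_i(lam_s, d):
    """Coupling constant S_i = lam_Si*cosech(d_i/lam_Si) of the shared electrode."""
    return lam_s / np.sinh(d / lam_s)

# symmetric double stack of Fig. 1: d_i = t_i = 0.01, lambda_Si = 0.1 (units of lambda_J1)
t, d, lam_s = 0.01, 0.01, 0.1
Lam1 = Lambda_i(t, lam_s, d, lam_s, d)   # junction 1 (electrodes 1 and 2)
Lam2 = Lambda_i(t, lam_s, d, lam_s, d)   # junction 2 (electrodes 2 and 3)
S2 = S_i(lam_s, d)                       # middle electrode, shared by the two junctions
S = S2 / np.sqrt(Lam1 * Lam2)            # dimensionless coupling parameter

print(f"Lambda_1 = Lambda_2 = {Lam1:.3f}, S_2 = {S2:.3f}, S = {S:.3f}")
# prints S = 0.495, i.e. the value S ~ 0.5 quoted for the stack of Fig. 1
```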
Introducing the self-coordinate of the fluxon, $`\xi =xut`$, and neglecting damping coefficient, we simplify Eq.(1) and rewrite it as a system of coupled ordinary differential equations (ODE): $`\phi _{1\xi \xi }^{\prime \prime }\left[{\displaystyle \frac{abS^2}{1S^2}}\right]=asin\left(\phi _1\right){\displaystyle \frac{J_{c2}S_2}{J_{c1}\mathrm{\Lambda }_1}}sin\left(\phi _2\right),`$ $$\phi _{2\xi \xi }^{\prime \prime }\left[\frac{abS^2}{1S^2}\right]=b\frac{J_{c2}\mathrm{\Lambda }_2}{J_{c1}\mathrm{\Lambda }_1}sin\left(\phi _2\right)\frac{S_2}{\mathrm{\Lambda }_1}sin\left(\phi _1\right),$$ (3) where $`a=1{\displaystyle \frac{u^2}{c_{01}^2}}{\displaystyle \frac{C_2\mathrm{\Lambda }_2}{C_1\mathrm{\Lambda }_1}}\left(1S^2\right),`$ $$b=1\frac{u^2}{c_{01}^2}\left(1S^2\right),$$ (4) and $`c_{01}=\lambda _{j1}\omega _{p1}`$ is the Swihart velocity of the single junction 1. Comparing Eqs. (1) and (3) it is seen that solution of the coupled sine-Gordon equation for soliton-like fluxon motion is now reduced to solution of the static problem, but with parameters depending on the fluxon velocity. Eq.(3) has a first integral, $$\frac{1}{1S^2}\left[b\frac{\left(\phi _{1\xi }^{}\right)^2}{2}+a\frac{\mathrm{\Lambda }_1}{\mathrm{\Lambda }_2}\frac{\left(\phi _{2\xi }^{}\right)^2}{2}+\frac{S_2}{\mathrm{\Lambda }_2}\phi _{1\xi }^{}\phi _{2\xi }^{}\right]+cos\left(\phi _1\right)+\frac{J_{c2}}{J_{c1}}cos\left(\phi _2\right)=𝐂,$$ (5) which reduces to that from Ref. for the static case, $`u=0`$. Here $`𝐂`$ is a constant of the first integral. A. Special single component solutions In Ref. it was shown that Eq.(1), linearized with respect to $`\phi _2`$, allows two special single component solutions of the type $`\phi _1(\xi )`$ $`=`$ $`F_{1,2}=4arctan\left[exp\left(\xi /\lambda _{1,2}\right)\right],`$ (6) $`\phi _2`$ $`=`$ $`arcsin\left(1/\kappa _{1,2}sin\left(\phi _1\right)\right),`$ (7) where $`\kappa _{1,2}`$ are solutions of the quadratic equation: $$\frac{S_2}{\mathrm{\Lambda }_1}\kappa ^2+\kappa \left[1\frac{J_{c2}\mathrm{\Lambda }_2}{J_{c1}\mathrm{\Lambda }_1}+\frac{u^2}{c_{01}^2}\frac{\mathrm{\Lambda }_2}{\mathrm{\Lambda }_1}\left(\frac{J_{c2}}{J_{c1}}\frac{C_2}{C_1}\right)\left(1S^2\right)\right]\frac{J_{c2}S_2}{J_{c1}\mathrm{\Lambda }_1}=0.$$ (8) Therefore, for a double SJJ’s there exist two characteristic Josephson penetration depths, $$\lambda _{1,2}^2=\frac{\lambda _{j1}^2}{1+\kappa _{2,1}S_2/\mathrm{\Lambda }_1}\left(1\frac{u^2}{c_{1,2}^2}\right),$$ (9) and two characteristic velocities $$c_{1,2}^2=\frac{c_{01}^2}{1+\kappa _{2,1}\left(C_2J_{c1}S_2\right)/\left(C_1J_{c2}\mathrm{\Lambda }_1\right)}.$$ (10) B. Double component solution Taking the single component solutions as eigen-functions of the linearized coupled sine-Gordon equation, an approximate analytic single fluxon solution in JJ 1 was obtained in Ref.: $`\phi _1={\displaystyle \frac{\kappa _1F_1\kappa _2F_2}{\kappa _1\kappa _2}},`$ $$\phi _2=\frac{F_1F_2}{\kappa _1\kappa _2}.$$ (11) Here $`F_{1,2}`$ are the single component solutions, Eq.(6). Recently this solution was rederived more rigorously in Ref. . It was shown, that for the static case Eq.(10) gives perfect approximation for $`\phi _1`$ in the whole space region and for arbitrary parameters of the stack. Using the perturbation approach the second order correction to Eq.(10) was obtained in Ref.. As it is seen from Eq.(10) the single fluxon in double SJJ’s consists of two components. 
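Because the plain-text rendering above loses binary minus signs, the following Python sketch spells out how $`\kappa _{1,2}`$, $`c_{1,2}`$ and $`\lambda _{1,2}`$ can be evaluated and how the two-component profile is assembled. The restored signs (the denominators read as $`\kappa _1-\kappa _2`$, the Lorentz-type factors as $`1-u^2/c_{1,2}^2`$, and $`\kappa _1<0<\kappa _2`$) are my reading of the single- and double-component expressions above and should be treated as assumptions, although they reproduce the value $`\stackrel{~}{c}_1\approx 0.82c_{01}`$ quoted later for the stack of Fig. 1. Default parameters correspond to that stack; velocities are in units of $`c_{01}`$ and lengths in units of $`\lambda _{J1}`$.

```python
import numpy as np

# jc = Jc2/Jc1, lam = Lambda2/Lambda1, cc = C2/C1; u is in units of c01.
# Defaults correspond to the Fig. 1 stack: S2/Lambda1 ~ 0.495, S ~ 0.495, identical junctions.
def kappa_roots(u, jc=1.0, lam=1.0, cc=1.0, s2_lam1=0.495, S=0.495):
    """Roots of the quadratic for kappa; kappa_1 < 0 < kappa_2 (their product is -Jc2/Jc1).
    Minus signs lost in the rendered equation are restored here (assumption)."""
    b = 1.0 - jc * lam + u**2 * lam * (jc - cc) * (1.0 - S**2)
    r = sorted(np.roots([s2_lam1, b, -jc * s2_lam1]).real)
    return r[0], r[1]

def components(u, jc=1.0, lam=1.0, cc=1.0, s2_lam1=0.495, S=0.495):
    """kappa_1,2, penetration depths lambda_1,2 (units of lambda_J1) and
    characteristic velocities c_1,2 (units of c01) at fluxon velocity u."""
    k1, k2 = kappa_roots(u, jc, lam, cc, s2_lam1, S)
    c1 = 1.0 / np.sqrt(1.0 + k2 * s2_lam1 * cc / jc)   # note the swapped indices kappa_{2,1}
    c2 = 1.0 / np.sqrt(1.0 + k1 * s2_lam1 * cc / jc)
    l1 = np.sqrt((1.0 - (u / c1)**2) / (1.0 + k2 * s2_lam1))  # valid below the lower Swihart velocity
    l2 = np.sqrt((1.0 - (u / c2)**2) / (1.0 + k1 * s2_lam1))
    return k1, k2, l1, l2, c1, c2

def fluxon_profile(xi, u, **pars):
    """Approximate two-component fluxon:
    phi1 = (k1*F1 - k2*F2)/(k1 - k2), phi2 = (F1 - F2)/(k1 - k2)."""
    k1, k2, l1, l2, _, _ = components(u, **pars)
    F1 = 4.0 * np.arctan(np.exp(xi / l1))
    F2 = 4.0 * np.arctan(np.exp(xi / l2))
    return (k1 * F1 - k2 * F2) / (k1 - k2), (F1 - F2) / (k1 - k2)

# identical junctions: kappa_1,2 = -1, +1 and c_1 = c01/sqrt(1 + S2/Lambda1) ~ 0.82 c01
print(kappa_roots(0.0), components(0.0)[4])
xi = np.linspace(-10.0, 10.0, 2001)
phi1, phi2 = fluxon_profile(xi, u=0.8)   # u = 0.8 c01, i.e. ~0.98 of the lower Swihart velocity
```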
From Eq.(8) it is seen that both components contract with increasing velocity, but the characteristic velocities for the contraction are different for each component and are given by Eq.(9). For identical junctions the contraction of each component is of the Lorentz type, however, the contraction of the fluxon itself is different from Lorentz contraction. This is a consequence of the absence of Lorentz invariance for the coupled sine-Gordon equation. For nonidentical junctions, the parameters $`\kappa _{1,2}`$, depend on the fluxon velocity and thus contraction of the components is somewhat different from Lorentz contraction. In this case the maximum characteristic velocity should be obtained from the equation $`u=c_{1,2}`$ and Eqs. (7,9). By analogy with single JJ’s we’ll refer the maximum characteristic velocities to as Swihart velocities, $`\stackrel{~}{c}_{1,2}`$. In the general case, Swihart velocities are equal to $$\stackrel{~}{c}_{1,2}=\frac{\sqrt{2}c_{01}c_{02}}{\sqrt{c_{01}^2+c_{02}^2\pm \sqrt{\left(c_{01}^2c_{02}^2\right)^2+4S^2c_{01}^2c_{02}^2}}},$$ (12) where $`c_{02}=c_{01}\sqrt{\frac{C_1\mathrm{\Lambda }_1}{C_2\mathrm{\Lambda }_2}}`$ is the Swihart velocity of the single JJ2. The most crucial changes in the fluxon shape occur as the velocity approaches the lowest Swihart velocity, $`u\stackrel{~}{c}_1`$. Then the first component is totally squeezed, $`\lambda _10`$, while contraction of the second component remains marginal, see Eq.(8). In this case the two components become clearly distinguishable: the $`F_1`$ component transforms into a step-like function which changes from zero to 2$`\pi `$ within the distance $`\lambda _1`$ at the fluxon center, while outside the central region the shape of the fluxon is defined by the $`F_2`$ component. From Eq.(10) it follows that $$\frac{sin\left(\phi _1\right)}{sin\left(\phi _2\right)}=\{\genfrac{}{}{0pt}{}{\kappa _2,\left|x\right|\lambda _1;}{\kappa _1,x0.}\left(u\stackrel{~}{c}_1\right).$$ (13) For $`\frac{C_2\mathrm{\Lambda }_2}{C_1\mathrm{\Lambda }_1}=1`$, and $`u=\stackrel{~}{c}_1`$, the parameters $`\kappa _{1,2}`$ are equal to: $$\kappa _{1,2}=\sqrt{\frac{\mathrm{\Lambda }_1}{\mathrm{\Lambda }_2}};\sqrt{\frac{\mathrm{\Lambda }_2}{\mathrm{\Lambda }_1}}\frac{J_{c2}}{J_{c1}}.$$ (14) The parameters $`\kappa _{1,2}`$ determine the weight coefficients of the components. From Eqs.(10,13) it follows that $`F_1`$ component dominates in junction 1 for $`J_{c2}/J_{c1}1`$, and $`F_2`$ dominates for $`J_{c2}/J_{c1}1`$. From the analysis above it is seen that the linearized coupled sine-Gordon equation allows both the single component solutions, $`F_{1,2}`$, Eq.(6) and the double component solution, Eq.(10). At this stage it is not clear which of the solutions, Eqs.(6,10), should be realized in SJJ’s, since all three solutions have the same accuracy with respect to Eq.(1). In Refs. it was shown that it is the double component solution Eq.(10) which is realized in the static case. However, it was suggested that a single component solution could be achieved at high fluxon velocities. Indeed, as we will show below, in the dynamic case both single and double component solutions can exist and even coexist, depending on parameters of the stack and the fluxon velocity. What is important, however, is that these are always the components $`F_{1,2}`$ described by Eqs.(6,7) which constitute the fluxon. III. FRICTIONLESS CASE In this section we will consider unperturbed, $`\alpha _i=J_b=0`$, frictionless fluxon motion. 
We analyze the pure solitonic fluxon motion for various junction parameters, make general conclusions about transformation of the fluxon shape in dynamics and compare it with analytical predictions. The effect of finite damping and bias will be considered in section IV. For numerical simulations of frictionless fluxon motion we used Eq.(3). The numerical procedure was based on a finite difference method with successive iterations of ODE in Eq.(3). The boundary conditions were such that the total phase shift is equal to 2$`\pi `$ in the junction containing a fluxon and zero in the other one. The fluxon will be placed in JJ1 if not stated otherwise. We will consider five cases: A) a stack of identical junctions, B) stack with different critical current densities, C) fluxon in the junction with lower critical current density, D) fluxon in the junction with the higher critical current density, E) a stack with difference both in critical currents and electrodes. Finally in subsection III.F we derive and verify conditions for observation of single and double component fluxon at $`u\stackrel{~}{c}_1`$ for various parameters of SJJ’s. A. Identical junctions: double component solution. In Fig.1 profiles of a) phase differences $`\phi _{1,2}`$, b) the ratio $`sin(\phi _1)/sin(\phi _2)`$, and c) magnetic inductions $`B_{1,2}`$ of a single fluxon in junction 1 are shown for a double stack consisting of identical strongly coupled junctions and for different fluxon velocities, $`u/\stackrel{~}{c}_1`$=0, 0.61, 0.92, 0.98, 0.998, 0.9999 (from left to right curve). The curves were shifted for clarity along the x-axis. Parameters of the stack are: $`d_i=t_i=0.01\lambda _{J1}`$, $`\lambda _{Si}=0.1\lambda _{J1}`$, $`S0.5`$, $`C_1=C_2`$. In Fig.1 a) dotted lines show profiles obtained from the analytic double component solution Eq.(10). Solid and dashed lines in Fig. 1 a,c) represent results of numerical simulation for junctions 1 and 2, respectively. The data in Figs.1 b,c) are obtained numerically. The magnetic induction is normalized to $`H_0=\frac{\mathrm{\Phi }_0}{\pi \lambda _{J1}\mathrm{\Lambda }_1}`$. As the velocity approaches the lower Swihart velocity, $`\stackrel{~}{c}_1`$, the existence of the two fluxon components becomes clearly seen. For identical junctions, as it follows from Eqs.(10,13), exactly one half of the fluxon belongs to each component. The $`F_1`$ component transforms to a one-$`\pi `$ step. Outside the fluxon center the fluxon is defined entirely by the $`F_2`$ component, which is only marginally contracted. Moreover, from Fig.1 a) it is seen that at $`u\stackrel{~}{c}_1`$ the phase differences in both junctions are equal outside the fluxon center in agreement with analytical prediction, Eq.(12). This is illustrated in Fig. 1 b), from which it is seen that the ratio $`sin(\phi _1)/sin(\phi _2)`$ approaches unity as $`u\stackrel{~}{c}_1`$. From Fig.1 a) it is seen that the approximate analytic solution is in good agreement with numerical solution for all fluxon velocities. Another unusual feature of the moving fluxon in SJJ’s is seen from Fig. 1 c). A dip in $`B_2`$ is developed with increasing fluxon velocity, leading to inversion of the sign at high velocities. Such behavior was predicted analytically in Ref.. Here we confirm the existence of this phenomenon by numerical simulation. B. Nonidentical junctions: transformation from $`F_2`$ to $`F_1`$ component solution with decreasing $`J_{c2}/J_{c1}`$: In Ref. 
it was shown that it is the double component solution that is realized in the static case for arbitrary parameters of the stack. From the numerical analysis of the dynamic case we have found that the fluxon shape is well described by the double component solution up to velocities very close to $`\stackrel{~}{c}_1`$. This is illustrated by Fig.2, in which the shape of the fluxon in junction 1 moving with the velocity close to the lower Swihart velocity, $`u=0.98\stackrel{~}{c}_1`$, is shown for different critical currents, $`J_{c2}/J_{c1}`$=10; 2; 1; 0.5; 0.1, from left to right curve. The rest of the parameters of the stack and the way of presentation is the same as in Fig. 1a). From Fig.2 it is seen that the fluxon shape is in a good agreement with the analytical double component solution, Eq.(10), up to velocities very close to $`\stackrel{~}{c}_1`$. Note, that the agreement is even better at lower fluxon velocities. From Fig.2 it can be seen that a gradual transformation of the fluxon shape from the uncontracted $`F_2`$ component solution to the contracted $`F_1`$ component solution takes place as $`J_{c2}/J_{c1}`$ decreases. In terms of the analytical double component solution, Eq.(10), this is caused by a decrease of the weight coefficient $`\kappa _2`$ , Eq.(13), of the $`F_2`$ component. As the velocity approaches the lower Swihart velocity, $`\stackrel{~}{c}_1`$, the fluxon shape can remain the double-component, as shown in Fig.1, or a transformation of the fluxon shape could take place. As we have found from our numerical simulations the transformation and the final shape of the fluxon at $`u=\stackrel{~}{c}_1`$ strongly depend on parameters of the stack. C. Fluxon in a weaker junction: Uncontracted single component solution. In Fig. 3 the fluxon shape is shown for the case when the fluxon is placed in the weaker junction, $`J_{c2}/J_{c1}=2`$, the rest of the parameters and the way of presentation are the same as in Fig.1. As it is seen from Fig.3 at velocities up to 0.98$`\stackrel{~}{c}_1`$ the shape of the fluxon is well described by the double component solution, Eq.(10). At higher velocities transformation to the single $`F_2`$ component solution, Eq. 6, with takes place. This is illustrated in Fig. 3 b), from which it is seen that as $`u\stackrel{~}{c}_1`$, $`sin(\phi _1)/sin(\phi _2)\kappa _2=2`$ in the whole space region. From Fig. 3 c) it is seen that a dip in $`B_2`$ is reduced with respect to that in Fig.1 c), due to absence of the squeezed $`F_1`$ component. D. Fluxon in a stronger junction: two component solution. In Fig. 4 the fluxon shape is shown for the case when the fluxon is placed in the stronger junction, $`J_{c2}/J_{c1}=0.5`$, the rest of the parameters and the way of presentation are the same as in Fig. 1. It is seen that at velocities up to 0.98$`\stackrel{~}{c}_1`$ the shape of the fluxon is well described by the double component solution, Eq.(10). At higher velocities the fluxon still has contracted and uncontracted components $`F_{1,2}`$. The existence of the two fluxon components is clearly seen from Fig. 4 b). As $`u\stackrel{~}{c}_1`$, $`sin(\phi _1)/sin(\phi _2)\kappa _2=0.5`$ outside the center of the fluxon and $`sin(\phi _1)/sin(\phi _2)\kappa _1=1`$ in the center, in agreement with Eqs.(12,13). However, transformation of the fluxon shape with respect to Eq.(10) takes place. From Fig.4 a) it is seen that in the left half-space (i.e. 
from far left to the immediate left of the fluxon center) the phase shift in junctions 1 and 2 approaches zero and $`\pi `$, respectively, and belongs to the uncontracted $`F_2`$ component, as seen from Fig. 4b). Therefore, there is a single $`F_2`$ component fluxon placed in junction 2 (the weaker junction). The situation in the left half space is then analogous to that in Fig.3. This is shown in Fig.5, in which we replotted the curves at $`u=0.9999\stackrel{~}{c}_1`$ from Figs. 3a) and 4a). Solid and dashed curves in Fig.5 represent $`\phi _1`$ and $`\phi _2`$, respectively, for the fluxon in the stronger junction from Fig.4, while dashed-dotted and dotted curves represent $`\phi _1`$ and $`\phi _2`$, respectively, for the fluxon in the weaker junction from Fig.3. Note, that in Fig.5 we normalized the $`x`$-axis to $`\lambda _{J1}`$ for the case $`J_{c2}/J_{c1}=2`$ and to $`\lambda _{J2}`$ for the case $`J_{c2}/J_{c1}=0.5`$, since the $`F_2`$ component is then situated in junction 2. From Fig.5 it is seen that after such rescaling $`\phi _{2,1}`$ from Fig. 4a) merge with $`\phi _{1,2}`$ from Fig. 3a) in the left half-space. In the central region a step-like change of phase shift takes place in both junctions. In junction 1 the phase jumps on $`+2\pi `$, which means that there is a single $`F_1`$ component fluxon and in junction 2 the phase drops on $`2\pi `$, representing the single $`F_1`$ component antifluxon. The overall fluxon shape at $`u=\stackrel{~}{c}_1`$ for the case when the fluxon is in the stronger junction, Fig. 4, can be written as: $$\{\begin{array}{c}\phi _1=F_1+Image(\phi _2)\\ \phi _2=F_2F_1\end{array},$$ (15) so that in junction 1 there is the contracted single component $`F_1`$ fluxon plus an image in a form of a ripple from junction 2 and in junction 2 there is an uncontracted fluxon $`F_2`$ \- contracted antifluxon $`F_1`$ pair. The total phase shifts in junction 1 and 2 are $`2\pi `$ and zero, respectively, however, the phase shift in junction 1 increases nonmonotonously and has two local maxima and minima, see Fig. 4 a). From Fig.4 c) it is seen that the dip in $`B_2`$ in this case is even more pronounced than that for identical SJJ’s, Fig.1 c). This is due to the increase of the weight coefficient of the squeezed $`F_1`$ component. Fig. 6 shows profiles of the fluxon moving with velocity very close to the lowest Swihart velocity, $`u=0.9999\stackrel{~}{c}_1`$, for different critical current densities $`J_{c2}/J_{c1}=10÷0.1`$ increasing sequentially from the left to the right curve. The rest of the stack parameters and the way of presentation is the same as in Fig. 1. From Fig.6 it is seen how the shape of the fluxon is changed with $`J_{c2}/J_{c1}`$. When the fluxon is placed in the weaker junction, the fluxon shape at the lowest Swihart velocity is described by the single $`F_2`$ component. For the case of Figs. 1-6, $`C_2/C_1=\mathrm{\Lambda }_2/\mathrm{\Lambda }_1=1`$, so that $`\kappa _2=\frac{J_{c2}}{J_{c1}}`$. In Fig. 6 b) the dependence $`sin(\phi _1)/sin(\phi _2)=\frac{J_{c2}}{J_{c1}}`$ is clearly visible in the whole space region for $`J_{c2}/J_{c1}>1`$. When the fluxon is placed in the stronger junction, it has two components, $`F_{1,2}`$. From Fig. 
6 b) it is seen that for the case $`J_{c2}/J_{c1}<1`$, the fluxon shape in the center is determined by the $`F_1`$ component, $`sin(\phi _1)/sin(\phi _2)=\kappa _1=1`$, while outside the center the shape is given by the $`F_2`$ component, $`sin(\phi _1)/sin(\phi _2)=\kappa _2=\frac{J_{c2}}{J_{c1}}`$, confirming that the parameters of the single component solutions are always those that predicted by Eqs.(6,7). From Fig. 6 it is seen that the transition from a double to a single component solution for $`J_{c2}/J_{c1}>1`$ is gradual. Outside the fluxon center this transition is well described by a gradual increase of the weigh coefficient of the $`F_2`$ component, Eq.(10,13). In the center the exact shape of the fluxon can be obtained from the first integral, Eq.(5). For the case $`\frac{C_2\mathrm{\Lambda }_2}{C_1\mathrm{\Lambda }_1}=1`$, at $`u=\stackrel{~}{c}_1`$ the first integral reduces to: $$\frac{S}{1S^2}\left(1+\frac{J_{c1}\mathrm{\Lambda }_1}{J_{c2}\mathrm{\Lambda }_2}\frac{cos\left(\phi _1\right)}{cos\left(\phi _2\right)}\right)^2\frac{\left(\phi _{1\xi }^{}\right)^2}{2}=1cos\left(\phi _1\right)+\frac{J_{c2}}{J_{c1}}\left(1cos\left(\phi _2\right)\right),$$ (16) for the $`F_2`$ single component solution. From Eq.(15) it is seen that at the fluxon center, $`x=0`$, the effective Josephson depth is equal to: $$\lambda _{eff}(F_2)=\lambda _{J1}\left(1\frac{J_{c1}\mathrm{\Lambda }_1}{J_{c2}\mathrm{\Lambda }_2}\right)\sqrt{\frac{S}{1S^2}},$$ (17) so that $`\lambda _{eff}`$ gradually increases from zero to $`\lambda _{J1}\sqrt{\frac{S}{1S^2}},`$as $`\frac{J_{c2}\mathrm{\Lambda }_2}{J_{c1}\mathrm{\Lambda }_1}`$ becomes larger than unity. The inequality $$\frac{J_{c2}\mathrm{\Lambda }_2}{J_{c1}\mathrm{\Lambda }_1}>1,$$ (18) is then a necessary (but as we’ll show below not sufficient) condition for the existence of the single component $`F_2`$ solution at $`u=\stackrel{~}{c}_1`$, for $`\frac{C_2\mathrm{\Lambda }_2}{C_1\mathrm{\Lambda }_1}=1`$, since then $`\lambda _{eff}`$ remains finite at $`u=\stackrel{~}{c}_1`$. On the other hand, the transition from a double component solution, Eq.(10), to the solution, Eq.(14), for $`J_{c2}/J_{c1}<1`$ is sharp. However, the closer is $`J_{c2}/J_{c1}`$ to unity, the closer must be the fluxon velocity to $`\stackrel{~}{c}_1`$ in order to observe this transformation, as it can be seen from Figs. 4,6. E. Nonidentical electrodes: Bifurcations and more complicated two component solutions. So far we have considered the case when only critical current densities of JJ’s in the stack were different. Another common type of the difference between JJ’s is the difference in the properties of electrodes. In Fig.7 the fluxon shape is shown for the case, $`\mathrm{\Lambda }_22.5\mathrm{\Lambda }_1`$. Physically this means that the third electrode has either larger London penetration depth, $`\lambda _{S3}=2\lambda _{J1,2}`$, or it is thinner than the rest of the electrodes, $`d_{1,2}=4d_3`$, see definitions in sec.II. The rest of the parameters are $`J_{c2}/J_{c1}=0.5`$, $`d_{1,2}=t_i=0.01\lambda _{J1}`$, $`\lambda _{S1,2}=0.1\lambda _{J1}`$, $`S0.31`$, $`\frac{C_2\mathrm{\Lambda }_2}{C_1\mathrm{\Lambda }_1}=1`$. The way of presentation is the same as in Fig. 1. As it is seen from Fig.7, at velocities up to 0.98$`\stackrel{~}{c}_1`$ the shape of the fluxon is well described by the analytic double component solution, Eq.(10). From Fig. 
7b) it is seen that outside the fluxon center the phase distribution is determined by the $`F_2`$ component with $`\kappa _20.79`$ given by Eq.(13). However, at $`u0.998\stackrel{~}{c}_1`$ the system exhibit bifurcations and a sudden switching to the solution given by Eq.(14) occurs. At slightly larger velocity another bifurcation takes place resulting in switching to yet another solution. The switching between the solutions is hysteretic. If we start reducing the fluxon velocity, the switching back to the double component solution takes place at somewhat lower velocity. Therefore there is a certain region of fluxon velocities for which those solutions coexist. Such situation is illustrated in Figs.8 a-c), in which phase distributions $`\phi _{1,2}`$ for three possible solutions are shown for $`u0.998\stackrel{~}{c}_1`$ and for the same stack as in Fig.7. As usual, solid and dashed curves in Fig. 8 represent $`\phi _1`$ and $`\phi _2`$, respectively, obtained from numerical simulations. Fig 8 a) shows the solution coming from low velocities. Dashed-dotted and dotted curves in Fig. 8 a) represent $`\phi _1`$ and $`\phi _2`$, respectively, given by the approximate double component solution, Eq.(10), showing good agreement with numerical solution. With increasing velocity, switching to the solution of the type given by Eq.(14) takes place. This solution is shown in Fig. 8 b). This solution exist only in a very narrow velocity interval and with further increase of velocity a switching to a new solution occurs. The new solution is shown in Fig. 8 c). It is characterized by a large phase variation in the second junction. Such solution exists up to $`\stackrel{~}{c}_1`$. Looking at the fluxon shape at $`u=0.9999\stackrel{~}{c}_1`$ from Fig.7 we see that it consists of three parts: (i) at the fluxon center, $`x=0`$, in junction 1 there is a $`2\pi `$ step, which as we will show in a moment, belongs to the pure $`F_1`$ component and in junction 2 there is a $`4\pi `$ (!) drop belonging to two flux quantum, $`2F_1`$, antifluxon. (ii) At $`x=x_00.7\lambda _{J1}`$ and (iii) $`x=x_0`$ in junction 2 there is a $`2\pi `$ increase and in junction 1 there is a ripple like phase shift. As it can be seen from Figs. 7 a,c) the features at $`x=\pm x_0`$ contain both contracted and uncontracted parts and are in fact given by the double component solutions, Eq.(10). This is illustrated in Fig.9 a) in which solid and dashed lines show phase distributions $`\phi _1`$ and $`\phi _2`$, respectively, from Fig. 7 a) for $`u=0.9999\stackrel{~}{c}_1`$ and dashed-dotted and dotted curves represent $`\phi _1`$ and $`\phi _2`$, respectively, given by the analytic double component solution, Eq.(10), shifted by $`x_0`$ along the $`x`$\- axis. Obviously, the features at $`x=\pm x_0`$ correspond to the double component soliton placed in junction 2. Due to spatial separation between the contracted centrum of the fluxon and the double component features at $`x=\pm x_0`$, we can analyze the shape of the central contracted part. From Fig. 7 c) it is seen that the magnetic induction in junction 1 at $`x=0`$, $`B_1(0)`$, increases sharply as the velocity approaches $`\stackrel{~}{c}_1`$. If the phase distribution in the central region is given by the $`F_1`$ component, then from Eq. (2) it follows that $`B_1(0)F_1^{}(0)\lambda _1^1\left(1\frac{u^2}{\stackrel{~}{c}_1^2}\right)^{1/2}`$, representing Lorentz contraction of $`F_1`$ component at the lowest Swihart velocity, $`\stackrel{~}{c}_1`$. Fig. 
9 b) shows the inverse value of $`B_1(0)`$ versus the Lorentz factor $`\left(1\frac{u^2}{\stackrel{~}{c}_1^2}\right)^{1/2}`$, for the high velocity solution from Fig.(7). Dots represent the numerically obtained values, solid line is the apparent linear fit. Lorentz contraction of the central region at the lowest Swihart velocity, $`\stackrel{~}{c}_1`$, is clearly seen from Fig. 9 b), therefore confirming that the central region is given by the pure $`F_1`$ component. The overall shape of the soliton in this case can be described as: $$\{\begin{array}{c}\phi _1=F_1+Image(\phi _2)\\ \phi _2=F_{1+2}(xx_0)2F_1+F_{1+2}(x+x_0)\end{array},$$ (19) where $`F_{1+2}`$ is the double component solution, given by Eq.(10). From Eq.(18) it is seen that the overall phase shift is $`2\pi `$ in junction 1 and zero in junction 2, however the phase growth in junction 1 is nonmonotonous. F. Conditions for the existence of single and two component solutions. For practical applications of SJJ’s in flux-flow oscillators, the shape of the fluxon at the highest propagation velocity is crucial. It is then important to know how the shape of the fluxon at $`u=\stackrel{~}{c}_1`$ depends on the parameters of the stack. Eq.(17) formulates the necessary condition for the existence of uncontracted single component $`F_2`$ solution at $`u=\stackrel{~}{c}_1`$, for $`\frac{C_2\mathrm{\Lambda }_2}{C_1\mathrm{\Lambda }_1}=1`$, since then $`\lambda _{eff}`$ remains finite at $`u=\stackrel{~}{c}_1`$. However this condition is not sufficient. Indeed, for the case of Figs.7,8, $`\frac{J_{c2}\mathrm{\Lambda }_2}{J_{c1}\mathrm{\Lambda }_1}1.25`$, i.e. Eq.(17) is satisfied. However, we obviously do not observe the single component $`F_2`$ solution at $`u=\stackrel{~}{c}_1`$, but rather a switching to more complicated two component solutions, Eqs.(14,18), takes place. From Fig. 8 a) it is seen that the switching takes place when maximum value of $`\phi _2`$ approaches $`\pi /2`$. If the solution were to stay at the special single component $`F_2`$ solution, then according to Eq.(6), the maximum value of $`sin(\phi _2)`$ would be equal to $`1/\kappa _2`$. This gives us an additional condition for the observation of the single component $`F_2`$ solution: $$\kappa _2>1,$$ (20) In Fig.10, regions of the existence of the two component (shaded area) and the single component $`F_2`$ solutions at $`u=\stackrel{~}{c}_1`$ are shown for $`\frac{C_2\mathrm{\Lambda }_2}{C_1\mathrm{\Lambda }_1}=1`$. Numbers in Fig. 10 show the number of components obtained numerically. Solid and dashed lines represent the conditions Eq.(19) and Eq.(17), respectively. Arrows indicate the cases considered in Figs. 1,3,4,7. It is seen that the single component solution exists when both conditions, Eqs.(17,19), are satisfied. IV. IMPLEMENTATION FOR EXPERIMENTAL SITUATION In the previous section we have studied the unperturbed fluxon motion, $`\alpha _i=0,J_b=0`$. This is an idealized case. In real experimental situation $`\alpha _i0`$ and $`J_b0`$. To quantitatively study the influence of damping and bias on the fluxon shape and IVC’s, we performed numerical simulation of the coupled sine-Gordon equation, Eq.(1), with the dissipation and bias terms. Here we used two different approaches: (i) Considering solitonic-type fluxon motion, $`\phi _i=\phi _i(\xi )`$, we derive the system of one-dimensional ordinary differential equations (ODE), similar to Eq.(3), but with dissipation terms, $`\alpha _i0`$, and bias terms, and used the same numerical procedure to solve it. 
(ii) Alternatively, we directly integrated the system of partial differential equations (PDE), Eq. (1) using explicit finite difference method. Both approaches have certain advantages and disadvantages. Using ODE, it is possible to calculate fluxon shape for arbitrary small damping, while PDE require relatively large damping coefficient. In addition, ODE in comparison to PDE, does not have problem with accumulation of error and does not require a long relaxation and averaging times. For solitonic fluxon motion both approaches give identical results. On the other hand, ODE are restricted to the study of solitonic motion, while the PDE allow more complicated solutions. For simulation of PDE, periodic boundary conditions were used, which correspond to fluxon motion in annular SJJ’s with $`L=10÷100\lambda _{J1}`$. A. Effect of damping and current-voltage characteristics. First of all fluxon shape will affect the shape of the flux-flow current-voltage characteristics (IVC’s) caused by fluxon motion in the stack. The shape of flux-flow IVC is determined by a balance between the input power to the system from the current bias source and the power dissipated due to a finite damping, see e.g. Ref.. In a single JJ’s as the fluxon velocity approaches the Swihart velocity, Lorentz contraction takes place and the fluxon energy increases sharply. The fluxon velocity asymptotically approaches the Swihart velocity with increasing current. In the IVC this result in the appearance of an almost vertical step at the velocity matching condition. Since the existence of this step is closely related to the Lorentz contraction of the fluxon, we expect that the step at the velocity matching condition should also exist in SJJ’s with the characteristic velocity equal to the lower Swihart velocity, $`\stackrel{~}{c}_1`$, whenever the fluxon contains the Lorentz contracted part $`F_1`$. On the other hand, when a pure $`F_2`$ component solution takes place, the fluxon will reach $`u=\stackrel{~}{c}_1`$ at a finite current and flux-flow IVC should have a finite slope at $`u=\stackrel{~}{c}_1`$. For the case when the transformation of the fluxon shape given by Eq.(18) (and probably by Eq.(14) as well ) takes place, this would result in a premature switching from the flux-flow branch and, possibly, in the existence of an extra metastable flux-flow branch in the IVC with the same limiting velocity, $`u=\stackrel{~}{c}_1`$, but with larger dissipation. The average DC voltage in JJ 1 is $`V_1/V_{01}=u/c_{01}`$, where $`V_{01}=\frac{\mathrm{}\pi c_{01}}{2eL}`$, and in JJ 2 is zero. Therefore, we now plot current-velocity characteristics to represent IVC’s. In Fig. 11, the single fluxon IVC’s are shown for double stacks with equal damping coefficients, $`\alpha _{1,2}=0.05`$, and for $`J_{c2}/J_{c1}=1`$, (solid diamonds); $`J_{c2}/J_{c1}=0.5`$, (open circles); $`J_{c2}/J_{c1}=2`$, (open squares). The rest of parameters are the same as in Fig. 1. In Fig. 11, symbols represent solutions obtained from ODE and subsequent solid lines show solution of the full PDE, Eq. (1), dashed gray line shows the IVC of an uncoupled single junction 1 and dotted line indicates the position of the lower Swihart velocity, $`\stackrel{~}{c}_10.817c_{01}`$. Insets in Fig. 
11 show spatial distribution of $`sin(\phi _1)`$ (solid lines), and $`sin(\phi _2)`$ (dashed lines), for maximum propagation velocities, for which ODE based numerical procedure converged: $`u_{max}(J_{c2}/J_{c1}=1)0.999\stackrel{~}{c}_1`$, $`u_{max}(J_{c2}/J_{c1}=0.5)0.97\stackrel{~}{c}_1`$, $`u_{max}(J_{c2}/J_{c1}=2)0.99\stackrel{~}{c}_1`$. Solid lines in Fig.11 show that PDE allow solutions propagating with even larger velocity, however those solutions are not of solitonic type, and will be discussed in the next section. From Fig. 11 it is seen that the IVC’s for $`J_{c2}/J_{c1}=1`$ and $`0.5`$, exhibit velocity matching behavior at $`u\stackrel{~}{c}_1`$. On the other hand, for $`J_{c2}/J_{c1}=2`$, no velocity matching behavior is observed, and the velocity reaches $`\stackrel{~}{c}_1`$ at a finite current. Such behavior is in agreement with the absence of contracted $`F_1`$ component, as discussed above. From the insets in Fig. 11 it is seen, that the fluxon shape for the case of small damping does not differ much from the frictionless case, considered in the previous section. On the other hand, damping reduces the stability of the fluxon state at high propagation velocities, so that the maximum fluxon velocity for pure solitonic motion becomes less than $`\stackrel{~}{c}_1`$. This is due to the finite bias current in the stack, which result in asymmetry of phase distribution in the stack. Such asymmetry is clearly seen from inset in Fig. 11 for $`J_{c2}/J_{c1}=0.5`$. From Fig. 11 it is seen, that reduction of stability in the flux-flow state depends on parameters of the stack. E.g. for the case of double stack with nonidentical junctions, flux-flow state is more stable for fluxon in the weaker junction, $`J_{c2}/J_{c1}=2`$, than for fluxon in the stronger junction, $`J_{c2}/J_{c1}=0.5`$. For the case $`J_{c2}/J_{c1}=0.5`$, switching of the second junction to the quasiparticle branch occurs first at $`I/I_{c1}0.22`$. In real life of course junctions in the stack are not prenumerated and if junctions are not identical, the stable state will correspond to a fluxon placed in the weaker junction. Therefore whenever properties of junctions in the stack are considerably different, the stable dynamic state at $`u=\stackrel{~}{c}_1`$ would correspond to the existence of uncontracted $`F_2`$ component soliton in the weaker junction. Experimentally, steps at the velocity matching condition were observed for low-$`T_c`$ SJJ’s, on the other hand, for high-$`T_c`$ intrinsic SJJ’s the flux-flow IVC’s always have a finite slope. We note, that for stacks with large number of junctions with thin electrodes, $`d\lambda _s`$, it is not necessary to have a variation in $`J_{ci}`$ to make the junctions different. In this case the middle junctions have a lower critical field, $`H_{c1}`$, approximately half of that compared to the outmost junctions due to the fact that fluxon in the outmost junctions carries only half a flux quantum. In a double stack, considered here, a corresponding thing happens when the junctions have different electrode thicknesses. The junction with thicker electrodes may have lower $`H_{c1}`$ even if $`J_c`$ of this junction is larger, see Ref. . In a sense, the criterion for transition from ’weak’ to ’strong’ junction is given by Eqs. (17,19). Although the stable state corresponds to the case when fluxon is placed in the weaker junction, the situation when the fluxon is placed in the stronger junction can also be achieved in experiment as shown in Ref.. 
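For readers who wish to reproduce this kind of flux-flow simulation, the sketch below shows one possible explicit finite-difference integration of Eq. (1) on a periodic (annular) grid with damping and bias, in the spirit of approach (ii) above. The 2×2 coupling matrix is passed in explicitly: its entries have magnitudes 1, $`S_2/\mathrm{\Lambda }_1`$ and $`\mathrm{\Lambda }_2/\mathrm{\Lambda }_1`$ as in Eq. (1), but the sign of the off-diagonal terms is not visible in the rendered equation, so the negative sign used here (which makes the out-of-phase mode the slow one, consistent with $`\stackrel{~}{c}_1<c_{01}`$) is an assumption. Grid size, time step and the seeded initial kink are illustrative choices, not values from the paper.

```python
import numpy as np

def make_stepper(A, alpha=(0.05, 0.05), jc2_jc1=1.0, c2_c1=1.0, jb=0.0,
                 dx=0.05, dt=0.0125):
    """One explicit leapfrog step for Eq.(1) read as Phi_xx = A.(sin-term + phi_tt + alpha*phi_t - jb).
    x in units of lambda_J1, t in units of 1/omega_p1, currents in units of J_c1;
    periodic grid with a 2*pi winding in junction 1 (single fluxon in an annular stack)."""
    Ainv = np.linalg.inv(A)
    cap = np.array([1.0, c2_c1])[:, None]
    al = np.array(alpha)[:, None]

    def step(phi, phi_old):
        lap = (np.roll(phi, -1, axis=1) - 2.0 * phi + np.roll(phi, 1, axis=1)) / dx**2
        lap[0, -1] += 2.0 * np.pi / dx**2     # quasi-periodic seam: phi1(x + L) = phi1(x) + 2*pi
        lap[0, 0] -= 2.0 * np.pi / dx**2
        total_j = Ainv @ lap                  # the bracketed current vector of Eq.(1)
        sin_term = np.vstack([np.sin(phi[0]), jc2_jc1 * np.sin(phi[1])])
        phi_t = (phi - phi_old) / dt          # backward difference for the damping term
        phi_tt = (total_j - sin_term - al * phi_t + jb) / cap
        return 2.0 * phi - phi_old + dt**2 * phi_tt

    return step

# annular double stack, L = 40 lambda_J1, Fig.1-like coupling S2/Lambda1 ~ 0.5
L, N, dt = 40.0, 800, 0.0125
x = np.linspace(0.0, L, N, endpoint=False)
phi = np.zeros((2, N))
phi[0] = 4.0 * np.arctan(np.exp(x - L / 2.0))   # crude initial 2*pi kink in junction 1
phi_old = phi.copy()

A = np.array([[1.0, -0.5], [-0.5, 1.0]])        # off-diagonal sign is an assumption (see text)
step = make_stepper(A, jb=0.1, dt=dt)
for _ in range(40000):                          # let the fluxon reach a steady velocity
    phi, phi_old = step(phi, phi_old), phi
V1 = np.mean(phi[0] - phi_old[0]) / dt          # <dphi1/dt>, the dc voltage in JJ 1
# (in practice one averages V1 over a long time window to build the IVC point by point)
```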
Obviously such state can be achieved in annular SJJ’s. In this case the fluxon can eventually be introduced in the stronger junction and if so it will stay there and can be accelerated by the bias current. C. Velocity above $`\stackrel{~}{c}_1`$: Cherenkov radiation. So far we have considered the case $`u\stackrel{~}{c}_1`$. For $`u>\stackrel{~}{c}_1`$, coefficients before the second derivative of phase in Eq.(3) become negative. The equation can still be written in a sine-Gordon type form if we substitute $`\phi (\xi )=\pi +\varphi (\xi )`$. Indeed, a 2$`\pi `$ soliton like solution for $`\varphi (\xi )`$, moving with $`u>\stackrel{~}{c}_1`$, does exist. However the energy of this state is decreasing with increasing velocity and therefore such state would correspond to unstable IVC branch with negative resistance. In Ref. it was suggested that for $`u>\stackrel{~}{c}_1`$ the fluxon is a combination of a soliton with Josephson plasma waves. From Eq. (8) it is seen that $`\lambda _1`$ becomes imaginary for $`u>\stackrel{~}{c}_1`$ and the $`F_1`$ component transforms into a travelling Josephson plasma wave. The fluxon solution is then given by the $`F_2`$ component soliton accompanied by Josephson plasma waves from the degenerate $`F_1`$ component. Recently, the existence of such type of solution was shown by numerical simulation in Ref. and was interpreted as Cherenkov radiation in SJJ’s, when fluxon velocity exceeds the phase velocity of electromagnetic waves. This solution is not of soliton type and can not be obtained from ODE. Therefore solution of full PDE, Eq.(1), is required. From Fig. 11 it is seen that for the case $`J_{c2}/J_{c1}=2`$, PDE allow solutions propagating with $`u>\stackrel{~}{c}_1`$. In Fig.12 results of numerical simulations of PDE, Eq.(1), for the case $`J_{c2}/J_{c1}=2`$, $`u>\stackrel{~}{c}_1`$ are shown. Parameters of SJJ’s are the same as in Fig.11. Insets show spatial distributions of $`sin(\phi _1)`$ (solid lines) and $`sin(\phi _2)`$ (dashed lines) for a) $`u/\stackrel{~}{c}_11.015`$, $`I/I_{c1}=0.15`$ and b) $`u/\stackrel{~}{c}_11.155`$, $`I/I_{c1}=0.5`$, respectively. Simulations were done for annular SJJ’s with $`L=100\lambda _{J1}`$. From our simulations we observe that fluxon shape changes gradually as $`u`$ exceeds $`\stackrel{~}{c}_1`$. Therefore, there are no peculiarities at $`u=\stackrel{~}{c}_1`$ in the IVC, see solid line in Fig.11 for $`J_{c2}/J_{c1}=2`$. Indeed, from inset a) in Fig. 12 it is seen that for $`u`$ slightly above $`\stackrel{~}{c}_1`$, fluxon shape in the left half-space is similar to that at $`u<\stackrel{~}{c}_1`$, see the bottom inset in Fig.11. However, small oscillations appear behind the fluxon (fluxon is propagating from right to left). As the velocity increases, both amplitude and wavelength of the oscillations increase, as illustrated in inset b) in Fig. 12. To clarify the physical origin of the oscillations, in Fig. 12 we have plotted the average wavelength of oscillations (circles) as a function of the absolute value of the Lorentz factor, $`\sqrt{(u/\stackrel{~}{c}_1)^21}`$. The solid line in Fig. 12 shows the absolute value, $`2\pi \left|\lambda _1\right|`$, given by Eq.(8) and describing small amplitude plasma waves from the degenerate $`F_1`$ component (the factor $`2\pi `$ is due to different definition of the wavelength and the penetration depth). 
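For reference, the wavelength plotted as the solid line in Fig. 12 can be evaluated directly; the short sketch below computes $`2\pi \left|\lambda _1\right|`$ above the lower Swihart velocity for the Fig. 12 stack ($`J_{c2}/J_{c1}=2`$, otherwise Fig. 1 parameters). As in the earlier sketch, the minus signs restored in the quadratic for $`\kappa `$ and in the factor $`1-u^2/c_1^2`$ are my reading of the expressions of Sec. II and should be treated as assumptions.

```python
import numpy as np

def cherenkov_wavelength(u, jc=2.0, lam=1.0, cc=1.0, s2_lam1=0.495, S=0.495):
    """2*pi*|lambda_1| in units of lambda_J1, for u above the lower Swihart velocity
    (u in units of c01); lambda_1^2 < 0 there and its modulus sets the plasma wavelength."""
    b = 1.0 - jc * lam + u**2 * lam * (jc - cc) * (1.0 - S**2)
    k2 = max(np.roots([s2_lam1, b, -jc * s2_lam1]).real)   # positive root kappa_2
    c1 = 1.0 / np.sqrt(1.0 + k2 * s2_lam1 * cc / jc)       # characteristic velocity c_1(u)
    lam1_sq = (1.0 - (u / c1)**2) / (1.0 + k2 * s2_lam1)   # negative above the Swihart velocity
    return 2.0 * np.pi * np.sqrt(-lam1_sq)

for u in (0.85, 0.90, 0.95):   # all above the lower Swihart velocity (~0.82 c01) of this stack
    print(f"u = {u:.2f} c01  ->  wavelength ~ {cherenkov_wavelength(u):.2f} lambda_J1")
```

The wavelength grows monotonically with the fluxon velocity in this sketch, in line with the increase of the oscillation period seen in the insets of Fig. 12.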
Excellent agreement between the wavelength of Cherenkov radiation and Josephson plasma wavelength from the degenerate $`F_1`$ component is observed without any fitting, thus confirming the idea of Ref. that Cherenkov radiation is due to plasma wave generation from the degenerate $`F_1`$ component. We would like to note, that $`\left|\lambda _1\right|`$ is not linear as a function of the Lorentz factor, although deviations from the linear dependence are small. For high propagation velocities, an uncertainty appears in determination of $`\lambda `$. This is caused by the increase of the amplitude of oscillations, see inset b) in Fig. 12. Here oscillations are not exactly monochromatic but the wavelength slightly increase with the amplitude. The uncertainty in determination of $`\lambda `$ at high velocities is shown by error bars in Fig.12. CONCLUSIONS In conclusion, we have shown that the shape of a single fluxon in double stacked Josephson junctions can be described by existence of two components given by Eqs. (6,7) and with characteristic lengths and velocities given by Eqs.(8,9). At velocities up to $`u0.98\stackrel{~}{c}_1`$, the soliton shape is well described by the approximate double component solution, Eq.(10), for all studied junction parameters, as illustrated in Figs.1-4,7. In the very vicinity of the lower Swihart velocity the fluxon shape may undergo radical transformations. The final shape of the fluxon at $`u=\stackrel{~}{c}_1`$ strongly depends on parameters of the stack. From our numerical simulations we have found that the fluxon may remain double component, as shown in Fig.1, transform to a pure $`F_2`$ component solution, see Fig.3, or be a more complicated combination of $`F_1`$ and $`F_2`$ components, see Eqs.(14,18) and Figs. 4,7,8. Those more complicated solutions do not, strictly speaking, represent the single fluxon state, but are combinations with fluxon-antifluxon pairs. However, even in this case these are always the components $`F_{1,2}`$ described by Eqs.(6,7) that constitute the solution. This implies that the components $`F_{1,2}`$ are real and may live their own life and appear in different combinations. Conditions for observing the single and the two-component solutions at $`u=\stackrel{~}{c}_1`$ were formulated and verified. We have shown that as the velocity approaches $`\stackrel{~}{c}_1`$, the phase shift may become nonmonotonous. A prominent feature of a soliton moving at the velocity close to $`\stackrel{~}{c}_1`$ in SJJ’s is the possible inversion of magnetic field $`B_2(0)`$, see Figs.1,3,4 c). Such behavior was predicted analytically in Ref.. Here we confirmed the existence of this phenomenon by numerical simulation. The inversion of magnetic field in junction 2 may lead to attractive fluxon interaction for fluxons in different junctions. Then the so-called ’in-phase’ or ’bunched’ state with fluxons one on the top of the other in adjacent junctions may become favorable at high enough fluxon velocity. In experiment this would result in appearance of the extra flux-flow branch in the current-voltage characteristics with higher voltage, as shown by numerical simulations and observed in experiment on low-$`T_c`$ SJJ’s. In Ref. it was shown that the bunched state can be stable at $`u>\stackrel{~}{c}_1`$, however, no mechanism for overcoming the mutual fluxon repulsion and transformation into the bunched state was suggested. The existence of the field inversion might be a criterion for the appearance of the bunched state in SJJ’s. 
As we have shown, the sign inversion and the dip in $`B_2(0)`$ disappear when junction 2 becomes considerably stronger than junction 1 and the fluxon shape transforms into a single $`F_2`$ component solution, see Fig. 3 c). The shape of the flux-flow IVC’s was analyzed for various parameters of the SJJ’s, and it was shown that velocity matching behavior at $`u=\stackrel{~}{c}_1`$ is observed when the fluxon contains the contracted $`F_1`$ component. Finally, Cherenkov radiation at $`u>\stackrel{~}{c}_1`$ was shown to be due to the generation of plasma waves from the degenerate $`F_1`$ component, in agreement with the prediction of Ref. . An analytic expression for the wavelength of the Cherenkov radiation was derived. The work was supported by the Swedish Superconductivity Consortium and in part by the Russian Foundation for Basic Research under Grant No. 96-02-19319.
# SUSY Production Cross Sections ## I Perturbative QCD Results The possibility of supersymmetry (SUSY) at the electroweak scale and the ongoing search for the Standard Model (SM) Higgs boson constitute two major related aspects of the motivation for the Tevatron upgrade currently under construction at Fermilab. The increase in the center-of-mass energy to 2 TeV and the luminosity to an expected 2 fb<sup>-1</sup>, together with detector improvements, should permit discovery or exclusion of supersymmetric partners of the standard model particles up to much higher masses than at present . Estimates of the production cross sections for pairs of supersymmetric particles may be computed analytically from fixed-order quantum chromodynamics (QCD) perturbation theory. Calculations that include contributions through next-to-leading order (NLO) in QCD have been performed for the production of squarks and gluinos , top squark pairs , slepton pairs , gaugino pairs , and the associated production of gauginos and gluinos . The cross sections can be calculated as functions of the sparticle masses and mixing parameters. In a recent paper , Berger, Klasen, and Tait provide numerical predictions at next-to-leading order for the production of squark-antisquark, squark-squark, gluino-gluino, squark-gluino, and top squark - antitop squark pairs in proton-antiproton collisions at the hadronic center-of-mass energy 2 TeV. These calculations are based on the analysis of Refs. , and the CTEQ4M parametrization of parton densities. The hard scale dependence of the cross section at leading order (LO) in perturbative QCD is reduced at NLO but not absent. An estimate of the theoretical uncertainty at NLO is approximately $`\pm 15`$ % about a central value. The central value is obtained with the hard scale chosen to be equal to the average of the masses of the produced sparticles, and the band of uncertainty is determined from a variation of the hard scale from half to twice this average mass. The next-to-leading order contributions increase the production cross sections by 50 % and more from their LO values. For example, in the case of squark-antisquark production the next-to-leading order cross section lies above the leading order cross section by 59 %. This increase translates into a shift in the lower limit of the produced squark mass of 19 GeV. The cross sections for squark-antisquark production, gluino pair production, and the associated production of squarks and gluinos of equal mass are of similar magnitude, whereas the squark pair production and top squark-antitop squark production cross sections are smaller by about an order of magnitude . The cross sections reported in Ref. are for inclusive yields, integrated over all transverse momenta and rapidities. In the search for supersymmetric states, a selection on transverse momentum will normally be applied in order to improve the signal to background conditions. The theoretical analysis can also be done with similar selections. A tabulation of cross sections for various squark and gluino masses is available upon request from the authors of Ref. . Next-to-leading order calculations of the production of neutralino pairs, chargino pairs, and neutralino-chargino pairs are reported to be on the way to completion , but final numerical predictions are not yet available for general use. The strongly interacting squarks and gluinos may also be produced singly in association with charginos and neutralinos. 
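Since these cross sections fall steeply with sparticle mass, a change in normalization maps directly onto a shift in mass reach. The few lines below make that mapping explicit for a cross section that falls off exponentially with mass over a scale $`\mathrm{\Lambda }`$; the value of $`\mathrm{\Lambda }`$ is an assumed, purely illustrative parameter (chosen so that the quoted 59% NLO increase reproduces the quoted 19 GeV shift), not a number taken from Ref. .

```python
import math

def mass_shift(k_factor, lam_gev):
    """Shift of a mass limit when the predicted cross section is multiplied by K,
    for a locally exponential fall-off sigma(m) ~ exp(-m / lam_gev)."""
    return lam_gev * math.log(k_factor)

K_NLO = 1.59     # squark-antisquark NLO/LO ratio quoted in the text
LAM   = 41.0     # GeV, assumed effective fall-off scale (illustrative only)

print(f"Delta m from K = {K_NLO}:             {mass_shift(K_NLO, LAM):.1f} GeV")
print(f"Delta m from a +15% scale change:  {mass_shift(1.15, LAM):.1f} GeV")
```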
Leading-order production cross sections for the associated production of a chargino plus a squark or gluino and of a neutralino plus a squark or gluino are published , and a next-to-leading order calculation of associated production of a gaugino plus a gluino is now available . Berger, Klasen, and Tait compute total cross sections for all the gaugino-gluino production reactions $`\stackrel{~}{g}\stackrel{~}{\chi }_{(14)}^0`$ and $`\stackrel{~}{g}\stackrel{~}{\chi }_{(12)}^\pm `$ in next-to-leading order SUSY-QCD. For numerical results, they select an illustrative mSUGRA scheme in which the GUT scale common scalar mass $`m_0=100`$ GeV, the common gaugino mass $`m_{1/2}=150`$ GeV, the trilinear coupling $`A_0=300`$ GeV, $`\mathrm{tan}(\beta )=4`$ and $`\mathrm{sgn}(\mu )=+`$. (The sign convention for $`A_0`$ is opposite to that in the ISASUGRA code). They convolute the NLO hard partonic cross sections with the CTEQ4M parametrization of parton densities, and present physical cross sections as a function of the $`\stackrel{~}{g}`$ mass or of the average mass $`m=(m_{\stackrel{~}{\chi }}+m_{\stackrel{~}{g}})/2`$. For $`p\overline{p}`$ collisions at $`\sqrt{S}=2`$ TeV the cross sections at $`m_{\stackrel{~}{g}}=300`$ GeV range from $`𝒪(1\mathrm{p}\mathrm{b})`$ for the $`\stackrel{~}{\chi }_2^0`$ and the $`\stackrel{~}{\chi }_1^\pm `$ to $`𝒪(10^3\mathrm{pb})`$ for the $`\stackrel{~}{\chi }_3^0`$. The $`\stackrel{~}{g}\stackrel{~}{\chi }_{(1,2)}^0`$ and $`\stackrel{~}{g}\stackrel{~}{\chi }_1^\pm `$ cross sections are of hadronic size despite the fact that the overall coupling strength is $`𝒪(\alpha _{EW}\alpha _s)`$ not $`𝒪(\alpha _s^2)`$. The masses of the $`\stackrel{~}{\chi }_{(1,2)}^0`$ and $`\stackrel{~}{g}\stackrel{~}{\chi }_1^\pm `$ are significantly smaller in a typical mSUGRA scenario than those of the squarks and gluinos. The phase space and the parton luminosity are therefore greater for associated production of a gluino and a gaugino than for a pair of squarks or gluinos, and the smaller coupling strength is compensated. The next-to-leading-order cross sections are enhanced by typically 10% to 25% relative to the leading order values. The theoretical uncertainty resulting from variations of the factorization/renormalization scale is approximately $`\pm 10\%`$ at NLO for the $`\stackrel{~}{\chi }_2^0`$ and the $`\stackrel{~}{\chi }_1^\pm `$, a factor of 2 smaller than the LO variation. Shown in Fig. 1 are the predicted cross sections as a function of the average mass. Baer, Harris, and Reno compute total cross sections for all the slepton pair production reactions $`\stackrel{~}{e}_L\stackrel{~}{\nu }_L`$, $`\stackrel{~}{e}_L\overline{\stackrel{~}{e}_L}`$, $`\stackrel{~}{e}_R\overline{\stackrel{~}{e}_R}`$ and $`\stackrel{~}{\nu }_L\overline{\stackrel{~}{\nu }}_L`$ in next-to-leading order QCD. The analytic calculations are very similar to the QCD corrections to the Standard Model massive lepton-pair production (Drell-Yan) process. Numerical results are based on the CTEQ4M parametrization of parton densities. For $`p\overline{p}`$ collisions at $`\sqrt{S}=2`$ TeV, the cross sections range from $`𝒪(1\mathrm{p}\mathrm{b})`$ at $`m_{\mathrm{slepton}}=50`$GeV to $`𝒪(10^3\mathrm{pb})`$ at $`m_{\mathrm{slepton}}=200`$GeV. The next-to-leading-order cross sections are enhanced by typically 35% to 40% relative to the leading order values. The theoretical uncertainty resulting from variations in the hard scattering scale and parton distribution functions is approximately $`\pm 15\%`$. 
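All of the numbers above come from the same factorized structure: NLO partonic cross sections convolved with the CTEQ4M parton densities. The sketch below shows only that structure; the parton density, the partonic cross section, the single-channel treatment and the 100 GeV mass are placeholders (hence the names `toy_pdf` and `toy_sigma_hat`), so the output carries no physical normalization.

```python
import numpy as np

S_HAD  = 2000.0**2      # (2 TeV)^2 in GeV^2, Tevatron Run II energy
M_PAIR = 100.0          # illustrative sparticle mass in GeV (placeholder)

def toy_pdf(x, q):
    """Placeholder parton density f(x, q) ~ x^(-0.5) (1-x)^3 -- NOT CTEQ4M; a real
    evaluation would interpolate the fitted CTEQ4M tables at scale q."""
    return x**(-0.5) * (1.0 - x)**3

def toy_sigma_hat(shat):
    """Placeholder partonic cross section: threshold at 2*M_PAIR, 1/shat fall-off."""
    if shat <= 4.0 * M_PAIR**2:
        return 0.0
    beta = np.sqrt(1.0 - 4.0 * M_PAIR**2 / shat)
    return beta / shat

def hadronic_xsec(npts=200):
    """sigma = int dx1 dx2 f(x1, mu) f(x2, mu) sigma_hat(x1 x2 S) on a log-x grid."""
    t  = np.linspace(np.log(4.0 * M_PAIR**2 / S_HAD), 0.0, npts)   # t = ln x
    dt = t[1] - t[0]
    xs = np.exp(t)
    total = 0.0
    for x1 in xs:
        for x2 in xs:
            total += (x1 * dt) * (x2 * dt) * toy_pdf(x1, M_PAIR) \
                     * toy_pdf(x2, M_PAIR) * toy_sigma_hat(x1 * x2 * S_HAD)
    return total

print(f"toy hadronic cross section (arbitrary units): {hadronic_xsec():.3e}")
```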
In the mSUGRA model, slepton pair production is most important for small values of the parameter $`m_0`$. The next-to-leading order enhancements of slepton pair cross sections at Tevatron energies can push predictions for leptonic SUSY signals to higher values than typically quoted in the literature in these regions of model parameter space. For current expectations of the hierarchy of masses and cross sections, consult Ref. . ## II Monte Carlo Methods Experimental searches for supersymmetry rely heavily on Monte Carlo simulations of cross sections and event topologies. Two Monte Carlo generators in common use for hadron-hadron collisions include SUSY processes; they are ISAJET and SPYTHIA . Both the Monte Carlo approach and the fixed order pQCD approach have different advantages and limitations. Next-to-leading order perturbative calculations depend on very few parameters, e.g., the renormalization and factorization scales, and the dependence of the production cross sections on these parameters is reduced significantly in NLO with respect to LO. Therefore, the normalization of the cross section can be calculated quite reliably if one includes the NLO contributions. On the other hand, the existing next-to-leading order calculations provide predictions only for fully inclusive quantities, e.g., a differential cross section for production of a squark or a gluino, after integration over all other particles and variables in the final state. In addition, they do not include sparticle decays. This approach does not allow for event shape studies nor for experimental selections on missing energy or other variables associated with the produced sparticles or their decay products that are crucial if one wants to enhance the SUSY signal in the face of substantial backgrounds from Standard Model processes. The natural strength of Monte Carlo simulations consists in the fact that they generate event configurations that resemble those observed in experimental detectors. Through their parton showers, these generators include, in the collinear approximation, contributions from all orders of perturbation theory. In addition, they incorporate phenomenological hadronization models, a simulation of particle decays, the possibility to implement experimental cuts, and event analysis tools. However, the hard-scattering matrix elements in these generators are accurate only to leading order in QCD, and, owing to the rather complex nature of infra-red singularity cancellation in higher orders of perturbation theory, it remains a difficult challenge to incorporate the full structure of NLO contributions successfully in Monte Carlo simulations. The limitation to leading-order hard-scattering matrix elements leads to large uncertainties in the normalization of the cross section. The parton shower and hadronization models rely on tunable parameters, another source of uncertainties. In Ref. a method is suggested to improve the accuracy of the normalization of cross sections computed through Monte Carlo simulations. In this approach, the renormalization and factorization (hard) scale in the Monte Carlo LO calculation is chosen in such a way that the normalization of the Monte Carlo LO calculation agrees with that of the NLO perturbative calculation. The scale choice depends on which partonic subprocess one is considering and on the kinematics. This choice of the hard scale will affect both the hard matrix element and the initial-state and final-state parton shower radiation. 
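The scale-matching prescription just described amounts to a one-dimensional root find: choose the scale $`\mu ^{}`$ at which the LO result equals the NLO one. The sketch below does this for a toy LO cross section whose scale dependence enters only through the square of the strong coupling, with one-loop running; the reference scale of 300 GeV, the K-factor of 1.59 and the neglect of the PDF scale dependence are illustrative assumptions, not the treatment of Ref. .

```python
import math

MZ, ALPHAS_MZ, NF = 91.19, 0.118, 5

def alpha_s(mu):
    """One-loop running coupling (illustrative; a shower MC uses its own running)."""
    b0 = (33.0 - 2.0 * NF) / (12.0 * math.pi)
    return ALPHAS_MZ / (1.0 + b0 * ALPHAS_MZ * math.log(mu**2 / MZ**2))

def sigma_lo(mu, sigma_ref=1.0, mu_ref=300.0):
    """Toy LO cross section for a process of order alpha_s^2: only the scale
    dependence through alpha_s(mu)^2 is modeled (PDFs are ignored here)."""
    return sigma_ref * (alpha_s(mu) / alpha_s(mu_ref))**2

def matching_scale(sigma_nlo, lo=10.0, hi=3000.0, tol=1e-6):
    """Bisect for the scale mu* at which sigma_LO(mu*) equals the NLO value."""
    f = lambda mu: sigma_lo(mu) - sigma_nlo
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Suppose NLO is 1.59 x the LO value at the conventional scale mu = 300 GeV
# (the squark-antisquark K-factor quoted earlier); the 'matched' LO scale is then
mu_star = matching_scale(1.59)
print(f"mu* = {mu_star:.0f} GeV,  sigma_LO(mu*) = {sigma_lo(mu_star):.3f}")
```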
By contrast, an alternative rescaling of the cross section by an overall $`K`$-factor has no bearing on the parton shower radiation. A reduction in the hard scale generally leads to less evolution and less QCD radiation in the initial- and final-state showers, and vice versa. A change of the hard scale is therefore reflected both in the normalization of the cross section and in the event shape. Investigations are underway to determine how significant the resulting changes are in computed final-state momentum distributions.
# Reduced O diffusion through Be doped Pt electrodes ## Abstract Using first principles electronic structure calculations we screen nine elements for their potential to retard oxygen diffusion through poly-crystalline Pt (p-Pt) films. We determine that O diffuses preferentially as interstitial along Pt grain boundaries (GBs). The calculated barriers are compatible with experimental estimates. We find that Be controls O diffusion through p-Pt. Beryllium segregates to Pt GBs at interstitial (i) and substitutional (s) sites. i-Be is slightly less mobile than O and it repels O, thus stuffing the GB. s-Be has a high diffusion barrier and it forms strong bonds to O, trapping O in the GB. Experiments confirm our theoretical predictions. A serious problem preventing the use of high dielectric oxide materials (e. g. BaSrTiO<sub>3</sub>) for capacitors as part of dynamic random access memory devices is the O diffusion through the electrodes and subsequent oxidation under the electrodes. Pt electrodes do not oxidize but they allow O to diffuse through the p-Pt film during deposition and anneal of the dielectric. This causes unwanted oxidation below the Pt film. It has been proposed (but not yet demonstrated) that O diffusion can be reduced by doping of the Pt film. The addition of barrier layers or the use of different electrode materials are other attempts to cope with the oxidation issue. All the theoretical results reported in this study are obtained using the first principles total-energy code VASP (Vienna Ab-initio Simulation Package) within the generalized gradient approximation (GGA). Electronic wavefunctions are expanded in a plane wave basis. The atomic cores are represented by ultrasoft pseudopotentials which allow for a reduced plane wave basis set (cut-off 270 eV). It has long been assumed that O diffuses along grain boundaries in the p-Pt films used as electrodes. The model GB we choose for our diffusion study is the $`\mathrm{\Sigma }5(310)[001]`$ symmetric tilt GB (See Fig. 1). To determine the O diffusion mechanism from theory we first have to find the preferential site for O in Pt (see Table 1). The calculated binding energy for the O<sub>2</sub> molecule is 4.91 eV per O atom. The most stable site for O in bulk fcc-Pt is the tetrahedral interstitial with a binding energy of 3.45 eV. Interstitial sites at the $`\mathrm{\Sigma }5`$ GB have binding energies up to 4.60 eV. Thus O in Pt strongly segregates to GBs. In agreement with experiment our results indicate that Pt does not oxidize and even the O in GBs is unstable with respect to the formation of gaseous O<sub>2</sub>. Any diffusion of O involving sites in bulk Pt has a high activation energy of at least 2.64 eV. To diffuse through bulk-Pt, O first has to move from the GB to a bulk site which costs 1.15 eV as bulk-interstitial or 2.17 eV as bulk-substitutional. For O interstitial migration one has to add the tetrahedral-octahedral energy difference of 1.49 eV. Substitutional O likely migrates with the help of a Pt vacancy which has a calculated formation energy of 0.73 eV. Thus crystalline Pt films act as O diffusion barriers. If such films could be deposited reliably, at low stress, and with good adhesion the O inter-diffusion problem would be solved. To calculate the diffusion barriers for interstitial O in the GB we use two different techniques. In the traditional approach we determine the potential energy surface (PES) for an O atom moving within the GB. 
This is done by calculating the total energy of the GB system with an O interstitial fixed at the points of a rectangular 10$`\times `$6 grid spanning the irreducible (310) interface cell. The O’s coordinate perpendicular to the GB plane and the position of most of the Pt atoms are relaxed at each mesh point. Four Pt atoms distant to the O are fixed to prevent a rigid translation of the Pt film. The resulting potential energy surface (PES), Fig. 2, indicates one main minimum at coordinate (5.5, 4.5), at least one secondary minimum at (9,4), and a clear diffusion path in the , i.e. the short, direction with a barrier (at 5.5,1) of about 0.7 eV. In the \[$`\overline{1}`$30\] direction the O has to cross at least two saddle points. The highest saddle is at (8,4.5) and appears to have a barrier of about 0.9 eV. Application of the nudged elastic band (NEB) method leads to a more accurate determination of the O diffusion barriers in the $`\mathrm{\Sigma }5`$ GB. In the direction the barrier is 0.68 eV and in the \[$`\overline{1}`$30\] direction we identify two saddle points with almost the same barriers: 0.68 eV at (1,3) and 0.67 eV at (8,2). Note that the low barrier at (8,2) is not obvious from Fig. 2. Also, the full variability of the energy along the diffusion path in the \[$`\overline{1}`$30\] direction is not reflected in the calculated PES. The NEB is more reliable in predicting diffusion paths and barriers. It is not clear if diffusion along the chosen $`\mathrm{\Sigma }5`$ GB is representative of O diffusion in p-Pt. Thus we compare our results to experimental findings. In the most careful study of O diffusion through nanocrystalline Pt films Schmiedl et al. have determined the diffusion rate of O at room temperature. The measured value is about $`D=10^{19}`$cm<sup>2</sup>/s, depending on the microstructure of the Pt film, especially the density of GBs. This dependence indicates that GB diffusion is the dominant diffusion mechanism. To compare with the calculated value we estimate an O diffusion barrier $`E_d`$ from the experimental diffusion rate $$D=D_0e^{E_d/k_BT}.$$ (1) The grain size of the Pt films grown by Schmiedl et al. is about 100 Å. This means that only about 1/100 of the area of the film allows O diffusion. Thus the diffusion rate $`D_{GB}`$ in the GB is about 10<sup>-17</sup> cm<sup>2</sup>/s. A typical diffusion prefactor $`D_0`$ is $`10^3`$ cm<sup>2</sup>/s and $`k_BT`$ is 0.025 eV. Solving for $`E_d`$ results in $`E_d=0.8`$ eV. The error margin for this estimate is at least 0.2 eV. We conclude that the calculated diffusion barrier of 0.68 eV is compatible with experiment and that O diffusion in p-Pt proceeds along GBs. The main goal of this study is to identify dopants that reduce the O diffusion through p-Pt films and thus keep the oxidation of material below the p-Pt at tolerable levels. Potential dopants have to meet certain conditions. For example, the dopant element should have a high melting point so that sputter targets can be produced easily. In addition, the dopant should not form a volatile oxide. From the elements that pass those pre-conditions we choose Be, B, Mg, Ti, V, Cu, Rh, Ta, and Ir for initial investigation. Only if a dopant segregates to the GB can it affect O GB-diffusion effectively. 
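The 0.8 eV estimate above is a one-line inversion of the Arrhenius form, and repeating it makes the sensitivity to the assumed prefactor explicit. The sketch uses the values quoted in the text and the standard sign convention $`D=D_0e^{-E_d/k_BT}`$; the grain-boundary area fraction of 1/100 is the geometric estimate made above.

```python
import math

kT_room     = 0.025     # eV, room temperature value used in the text
D0          = 1.0e-3    # cm^2/s, "typical" Arrhenius prefactor assumed in the text
D_film      = 1.0e-19   # cm^2/s, measured room-temperature diffusivity of the film
gb_fraction = 0.01      # grain-boundary area fraction for ~100 Angstrom grains

D_gb = D_film / gb_fraction                 # diffusivity within the boundaries
E_d  = kT_room * math.log(D0 / D_gb)        # invert D = D0 * exp(-E_d / kT)
print(f"D_gb = {D_gb:.1e} cm^2/s,  E_d = {E_d:.2f} eV")

# sensitivity: one order of magnitude in D0 moves the estimate by kT*ln(10) ~ 0.06 eV
for d0 in (1e-4, 1e-3, 1e-2):
    print(f"D0 = {d0:.0e} cm^2/s  ->  E_d = {kT_room * math.log(d0 / D_gb):.2f} eV")
```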
To check which elements segregate to Pt GBs we calculate the dopant’s binding energy in the stable elemental phase, in a fictitious Pt<sub>3</sub>-dopant alloy, at substitutional and interstitial sites in a Pt bulk matrix, and at different substitutional and interstitial sites at the Pt GB (see Table 1). Only the binding energies for the most stable sites out of five substitutional and three interstitial sites at the GB are listed. We find that only the two smallest elements, Be and B segregate strongly to the grain boundaries and we determine their effect on O diffusion. A dopant at the GB can reduce O diffusion as interstitial (i) or substitutional (s) species. If the dopant prefers i-sites and if the dopant or a stable dopant-O complex diffuses more slowly than the O alone the dopant blocks the O diffusion by “stuffing” of the GB. Boron goes interstitial and it repels O in the GB. The calculated i-B diffusion barrier in the GB is 0.54 eV, thus it diffuses faster than O. Most likely O exposure of B doped p-Pt films causes out-diffusion of the B from the GB to form B<sub>2</sub>O<sub>3</sub> at the surface. In the B-free films the O diffusion is not affected. This scenario agrees with experiment. If the dopants prefer s-sites at the GB then a slow diffusion of the dopant is almost guaranteed. Diffusion of substitutional species in close packed materials typically involve vacancies. Calculated vacancy formation energies at the Pt $`\mathrm{\Sigma }5`$ GB range from 0.72 eV to 1.08 eV, i. e. they are at or above the calculated vacancy formation energy in bulk Pt. Adding the vacancy migration barrier of about 1 eV results in a diffusion activation energy of at least 1.5 eV. If the dopant-O interaction is sufficiently different from the Pt-O interaction in the GB a substitutional dopant reduces O GB diffusion. A comparatively attractive dopant traps diffusing O atoms directly, a repulsive dopant traps the O’s at sites with no dopant neighbors. Beryllium reduces O diffusions as i- and as s-dopant at Pt $`\mathrm{\Sigma }5`$ GBs. Isolated i-Be at the GB experiences an isotropic migration barrier of 0.70 eV as determined by a set of NEB calculations. In the vicinity of s-Be the barrier increases to about 0.8 eV caused by a small repulsive interaction. A neighboring i-Be leads to a larger increase because of a strong short range repulsion between the two i-Be atoms. To quantify the barrier increase requires the study of a large number of paths which has not been done here. The interaction between i-Be and O is repulsive. The repulsion energy is about 0.5 eV at neighboring local minima and about 0.1 eV at second nearest neighbor sites depending on the detailed configuration. This and the slow diffusion of i-Be indicates that i-Be retards O diffusion. To confirm this we test two possible saddle configurations for O diffusion with i-Be near by. We find that i-Be increases the barrier for O diffusion by about 0.1 eV. The interaction between s-Be and O is strongly attractive which leads to high O diffusion barriers. We calculate the O binding energy at seven different sites close to a s-Be at the GB. At the new global minimum the O is 0.90 eV lower in energy than at the most stable site at the GB without Be. The diffusion barrier of O is increased from 0.68 eV to 0.81 eV in the direction and to above 2 eV in the \[$`\overline{1}`$30\] direction. In the lowest energy configuration containing 2 s-Be atoms, an O atom is bridging between the two s-Be atoms. 
The barrier for the O to leave the bridge site is above 2.5 eV. We note here that in the presence of O in the GB it is energetically favorable for i-Be to convert to s-Be that is bound to O, thus trapping the O. We conclude that Be reduces O diffusion along Pt GBs in various ways. The actual O diffusion rate in the presence of Be depends on the Pt microstructure and the Be concentration and absorption site. We test our theoretical prediction experimentally by measuring the WO<sub>3</sub> formation beneath a clean and a Be doped p-Pt film under O anneal. The Pt is sputter deposited at 400 C to a nominal thickness of 100 nm on a substrate consisting of 450 nm of CVD W on a TiN adhesion layer on SiO<sub>2</sub>/Si. The 1.5 at.% Be implant is performed at 40 keV and 7 tilt with a dose of 10<sup>16</sup> cm<sup>-2</sup>. For comparison we prepare an identical wafer without the Be implantation. Each of the two wafers are pre-annealed at 600 C in a carefully controlled Ar ambient for 30 minutes prior to the 5 minutes oxidation heat treatment at 500 C in open air. XRD analysis revealed WO<sub>3</sub> peaks in both samples. The WO<sub>3</sub> (110) peak counts of the Be-implanted sample are 3.1 times smaller than those of the Be free sample. This result indicates that Be implantation indeed delays the diffusion of O<sub>2</sub> through Pt. We expect that the Be effect would be even bigger if the Be was co-deposited with the Pt such that more of the Be has a chance to segregate to the Pt GBs. In conclusion, we show that Be doping of p-Pt films reduces the O inter-diffusion. This conclusion is based on our first principles atomistic model and first experimental results. We expect that the optimization of the Be doping recipe will increase the Be effect significantly. Our study shows how first principles based modeling not only helps to understand fairly complicated processes like inter-diffusion in poly-crystalline materials; modeling even shows the way to modify the diffusion properties. Modeling also helps us determine rules that govern segregation and diffusion properties, e. g. the role of atomic size. This will be subject of a longer paper. This work was made possible by advancements in first principles codes (e. g. VASP) that allow to calculate large systems containing “hard” elements (e. g. O and Pt) efficiently and accurately. The NEB enables us to calculate diffusion paths in complicated cases. Finally, we need parallel computers to consider the hundreds of configurations necessary to solve “real world” materials science problems. We acknowledge the use of Sandia National Laboratories computing resources.
# References The dynamics of $`n`$ D0-branes of IIA superstring theory at an energy scale $`E`$ is described, in the limit $$\sqrt{\alpha ^{}}E0,g_s0,$$ (1) by a non-relativistic $`U(n)`$ supersymmetric gauge quantum mechanics with 16 supersymmetries, otherwise known as the M(atrix) model . Here, $`\alpha ^{}`$ is the inverse string tension and $`g_s`$ is the string coupling constant. The M(atrix) model is just a D=10 super-Yang-Mills (SYM) theory dimensionally reduced to D=1. The (dimensionful) coupling constant of this SYM theory is $$g_{YM}=g_s^{1/2}(\alpha ^{})^{3/4}.$$ (2) The group of symmetries of the M(atrix) model (excluding supersymmetries) is the D=10 Bargmann group, which is a central extension of the Galilean group; the central charge is the D0-brane mass. This group is a subgroup of the D=11 Poincaré group for which a null component of the 11-momentum is central. The (super)Bargmann invariance of the $`U(1)`$ theory follows from the fact that the action is the null reduction of the action for the D=11 massless superparticle ; the extension to $`U(n)`$ then follows from the fact that the relative D0-brane coordinates of translation and boost invariant. We conclude that the non-relativistic limit described above is equivalent to a limit in which the spacelike circle of $`S^1`$ compactified M-theory becomes lightlike. It has been argued that all degrees of freedom of IIA superstring theory other than D0-branes decouple in this limit , so that the M(atrix) model provides a definition of M-theory on a lightlike circle, as originally conjectured . According to the M(atrix) model, the UV regime of D=11 supergravity is described by the IR dynamics of the SYM theory. But the IR limit of the gauge theory is its strong coupling limit. This can be investigated in the limit of large $`n`$ by ’t Hooft’s topological expansion , for which the effective dimensionless coupling constant at the energy scale $`E`$ is $$g_{eff}(E)=\frac{g_{YM}N^{1/2}}{E^{3/2}}.$$ (3) The topological expansion is an asymptotic expansion in $`g_{eff}`$. According to a generalization of the $`adS/CFT`$ correspondence , the dual asymptotic expansion in $`g_{eff}^1`$ is provided by IIA supergravity in the background of its D0-brane solution . In the string frame the non-vanishing fields of this solution are $`ds_{st}^2`$ $`=`$ $`H^{\frac{1}{2}}dt^2+H^{\frac{1}{2}}ds^2(\text{𝔼}^9)`$ $`e^\varphi `$ $`=`$ $`g_sH^{\frac{3}{4}}`$ $`\stackrel{~}{F}_8`$ $`=`$ $`g_s^1_9dH`$ (4) where $`\stackrel{~}{F}_8`$ is the 8-form dual of the 2-form Ramond-Ramond (RR) field strength, $`_9`$ is the Hodge dual on $`\text{𝔼}^9`$, and $`H`$ is a harmonic function on $`\text{𝔼}^9`$. The $`g_s`$-dependence may be determined from the solution with $`g_s=1`$ by means of the transformation $`\varphi `$ $``$ $`\varphi +\lambda `$ $`\stackrel{~}{F}_8`$ $``$ $`e^\lambda \stackrel{~}{F}_8`$ (5) where $`\lambda `$ is a constant. This is not an invariance of the action but it is an invariance of the field equations. In coordinates such that $$ds^2(\text{𝔼}^9)=dr^2+r^2d\mathrm{\Omega }_8^2,$$ (6) where $`d\mathrm{\Omega }_8^2`$ is the $`SO(9)`$-invariant metric on $`S^8`$, we may choose the harmonic function $`H`$ to be $$H=1+g_sN\left(\frac{\sqrt{\alpha ^{}}}{r}\right)^7.$$ (7) Given the factor of $`g_s^1`$ in $`\stackrel{~}{F}_8`$, this choice corresponds to N coincident D0-branes at the origin of $`\text{𝔼}^9`$. 
We can now rewrite $`H`$ as $$H=1+\frac{g_{eff}^2(U)}{(\sqrt{\alpha ^{}}U)^4}$$ (8) where $$U=r/\alpha ^{}.$$ (9) The variable $`U`$ has dimensions of energy. It is the energy of a string of length $`r`$, although one should not read too much into this fact as $`U`$ will shortly be seen to be merely a convenient intermediate variable. For the moment we need only suppose (subject to later verification) that ‘low energy’ corresponds to the limit $`\sqrt{\alpha ^{}}U0`$. For any non-zero $`g_{YM}`$ this implies $`g_{eff}(U)\mathrm{}`$, which we need in any case for the validity of the dual IIA supergravity description of the D0-brane dynamics. The low-energy limit is therefore a ‘near-horizon’ limit in the sense that $$H\frac{g_{eff}^2(U)}{(\sqrt{\alpha ^{}}U)^4}.$$ (10) There is a problem with this limit, however, because the string frame metric of (S0.Ex1) has a curvature singularity at singularities of $`H`$, i.e. at $`U=0`$. Although the string frame is natural in the context of IIA superstring theory, it is not obviously the preferred frame in the context of the M(atrix) model. Of course, no frame is really ‘preferred’ because the physics cannot depend on the choice of frame, but there may be a frame in which the physics is simplest. It was argued in that the preferred frame in this sense is the ‘dual’ frame, defined for a general p-brane (up to homothety) as the one for which the dual brane (the D6-brane in our case) has a tension independent of the dilaton. In this frame, and for all $`p5`$, the singularities of the harmonic function $`H`$ in the p-brane metric are Killing horizons near which the D-dimensional geometry is $`adS_{p+2}\times S^{Dp2}`$ . This result generalizes the interpolation property of p-branes, such as the M2,M5 and D3 branes, that do not couple to a dilaton . For the D0-brane the dual frame metric $`d\stackrel{~}{s}^2`$ is related to the string frame metric as follows: $$d\stackrel{~}{s}^2=(e^\varphi N)^{\frac{2}{7}}ds_{st}^2.$$ (11) The factor of $`N`$ is included here for later convenience. The D0-brane metric is now $$d\stackrel{~}{s}^2=\left(g_sN\right)^{\frac{2}{7}}\left[H^{\frac{5}{7}}dt^2+H^{\frac{2}{7}}(dr^2+r^2d\mathrm{\Omega }_8^2)\right],$$ (12) and in the near-horizon limit we have $$d\stackrel{~}{s}^2\alpha ^{}\left[\left(g_{YM}^2N\right)^1U^5dt^2+U^2dU^2+d\mathrm{\Omega }_8^2\right].$$ (13) The singularity of the metric at at $`U=0`$ is now only a coordinate singularity at a Killing horizon of $`_U`$, but the metric still depends on the SYM coupling constant. To circumvent this, we define the a new radial variable $`u`$ (with dimensions of energy) by $$u^2=\frac{U^5}{g_{YM}^2N}.$$ (14) For convenience we also introduce the rescaled time coordinate $$\stackrel{~}{t}=\frac{5}{2}t.$$ (15) The near-horizon D0-brane solution is now $`(\alpha ^{})^1d\stackrel{~}{s}^2`$ $`=`$ $`{\displaystyle \frac{4}{25}}\left[u^2d\stackrel{~}{t}^2+u^2du^2\right]+d\mathrm{\Omega }_8^2`$ $`e^\varphi `$ $`=`$ $`N^1\left[g_{eff}(u)\right]^{7/5}`$ $`\stackrel{~}{F}_8`$ $`=`$ $`7Nvol(S^8)`$ (16) We recognise this as $`adS_2\times S^8`$, with standard (horospherical) coordinates for the $`adS_2`$ factor. As $`u`$ is now the only dimensionful variable it sets the energy scale for solutions of the massless wave equation in the near-horizon geometry. This fact implies that an infra-red cut-off of supergravity at length $`\alpha ^{}u`$ corresponds, via holography , to an ultraviolet cut-off of the D0-brane SYM theory at energy $`u`$ . 
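The coordinate change is mechanical and easy to verify symbolically. The fragment below checks, with a computer-algebra package, that Eq. (14) together with $`\stackrel{~}{t}=\frac{5}{2}t`$ turns the $`U`$-dependent part of the near-horizon metric (13) into the $`adS_2`$ combination appearing in (16); the overall factor of $`\alpha ^{}`$, the $`S^8`$ part and the metric signature are suppressed, and the symbol `g2N`, standing for $`g_{YM}^2N`$, is our shorthand.

```python
import sympy as sp

U, g2N = sp.symbols('U g2N', positive=True)   # g2N is shorthand for g_YM^2 * N

u  = sp.sqrt(U**5 / g2N)                      # Eq. (14):  u^2 = U^5 / (g_YM^2 N)
du = sp.diff(u, U)                            # du/dU

# |dt^2| and dU^2 coefficients of the near-horizon dual-frame metric, Eq. (13):
g_tt_13 = U**5 / g2N
g_UU_13 = U**(-2)

# the same coefficients of the adS_2 form, Eq. (16), with t~ = (5/2) t:
g_tt_16 = sp.Rational(4, 25) * u**2 * sp.Rational(25, 4)    # (4/25) u^2 (dt~/dt)^2
g_UU_16 = sp.Rational(4, 25) * u**(-2) * du**2              # (4/25) u^-2 (du/dU)^2

print(sp.simplify(g_tt_16 - g_tt_13))   # -> 0
print(sp.simplify(g_UU_16 - g_UU_13))   # -> 0
```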
The $`adS_2`$ metric has an $`SL(2;R)`$ isometry group. This does not extend to a symmetry of the full near-horizon solution because the dilaton field is invariant only under the one-dimensional subgroup generated by $`_t`$, However, scale transformations generated by the Killing vector field $`t_tu_u`$ take one hypersurface of constant $`u`$ into another such hypersurface, which leads to a rescaling of $`g_{eff}(u)`$. A hypersurface of constant $`u`$ is thus the vacuum of the M(atrix) model at coupling $`g_{eff}(u)`$. As we rescale $`u`$ we go either to a free theory with $`g_{eff}=0`$ at the $`adS_2`$ boundary, which is obviously scale invariant, or towards a strongly coupled theory at the Killing horizon of $`_t`$ at $`u=0`$. In order to keep $`e^\varphi `$ small in the latter limit we must take $`N`$ large. However, for any finite $`N`$ the effective string coupling constant will still become large sufficiently near $`u=0`$ and the IIA supergravity description will break down. This is an indication that we should pass to D=11 supergravity. Given that the full D0-brane solution (S0.Ex1) is the reduction of the M-wave, one might wonder what the near-horizon limit of the D0-brane solution lifts to in D=11. In view of the fact that the non-relativistic D0-brane action is the null reduction of the D=11 massless superparticle action, the obvious guess is that the near-horizon limit of the D0-brane solution is a null reduction of the M-wave. This is true, in the following sense . The M-wave metric is $$ds_{11}^2=dudv+Kdu^2+ds^2(\text{𝔼}^9)$$ (17) where $`K`$ is harmonic on $`\text{𝔼}^9`$; it is also an arbitrary function of $`u`$ but in order to reduce to D=10 we must choose it to be $`u`$-independent. The choice $`K=Q/r^7`$ where $`r`$ is the distance from the origin in $`\text{𝔼}^9`$ now leads to the D0-brane solution (S0.Ex1) after reduction along orbits of the timelike Killing vector field $`_u_v`$. This is the standard timelike reduction. We may instead reduce along orbits of the Killing vector field $`_u`$, which is null at spatial infinity. To this end we set $`v=2t`$ and write (17) as $$ds_{11}^2=K(du+K^1dt)^2+K^{1/2}ds_{st}^2$$ (18) where $$ds_{st}^2=K^{\frac{1}{2}}dt^2+K^{\frac{1}{2}}ds^2(\text{𝔼}^9).$$ (19) is the string frame 10-metric. The IIA solution resulting from reduction on orbits of $`_u`$ is therefore the D0-brane solution in the near-horizon limit, i.e. with $`H`$ replaced by $`K=H1`$. It is satisfying that the dual supergravity description of $`n`$ D0-branes for large $`n`$ is a wave solution because this is what one would expect from the Bohr correspondence principle. However, we have still to consider what happens at $`u=0`$. In many cases, singularities of IIA solutions are resolved by their D=11 interpretation , but this does not happen here. The singularities of the harmonic function $`K`$ are curvature singularities of the M-wave solution, so the D=11 supergravity description must break down there. The reason that the IIA supergravity dual description breaks down is that the effective string coupling becomes large. While this implies a decompactification to D=11 it also implies that the neglect of string loop corrections, and hence M-theory corrections in D=11, cannot be ignored. These are UV corrections to D=11 supergravity that should be determined by the IR physics of the M(atrix) model, but this is its strong coupling limit that we hoped to understand via its supergravity dual. 
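The window in which this dual description can be trusted follows directly from the two formulas already quoted: the near-horizon supergravity regime requires $`g_{eff}(u)\gg 1`$, while string loops stay under control only while $`e^\varphi =N^{-1}[g_{eff}(u)]^{7/5}\ll 1`$, i.e. $`g_{eff}(u)\ll N^{5/7}`$. A short numerical illustration follows; the values $`g_{YM}=1`$ and $`N=1000`$ are arbitrary, and only the scalings matter.

```python
import numpy as np

def g_eff(u, g_ym, N):
    """Effective dimensionless coupling of Eq. (3), evaluated at the scale u."""
    return g_ym * np.sqrt(N) * u**(-1.5)

def e_phi(u, g_ym, N):
    """Effective string coupling of the near-horizon solution: e^phi = g_eff^{7/5} / N."""
    return g_eff(u, g_ym, N)**1.4 / N

g_ym, N = 1.0, 1000.0                               # arbitrary illustrative values
u_upper = (g_ym * np.sqrt(N))**(2.0 / 3.0)          # where g_eff(u) = 1
u_lower = u_upper * N**(-10.0 / 21.0)               # where g_eff(u) = N^(5/7), e^phi = 1
print(f"supergravity window:  {u_lower:.3g} < u < {u_upper:.3g}   (g_YM = 1, N = 1000)")
print(f"e^phi at the lower edge: {e_phi(u_lower, g_ym, N):.3g}   (-> 1 by construction)")
```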
Although we have failed to learn much about the IR physics of the M(atrix) model from its supergravity dual, we can presumably learn how to resolve the singularity of the M-wave solution of D=11 supergravity from the IR physics of the M(atrix) model; it is just that the M(atrix) model/$`adS_2`$ correspondence doesn’t help us to accomplish this. On the positive side, it does shows that the M(atrix) model proposal is a close cousin of Maldacena’s adS/CFT proposal (as argued by other means in ). The latter is generally considered to provide an illustration of the concept of holography . If this is extended to the M(atrix) model and, more generally, to other branes then the general statement would seem to be that the bulk gravitational physics is determined by the physics of the ‘matter’ on branes. M-theory provides a natural realization of this idea (which is also suggested by the global nature of observables in general relativistic theories) because the uniqueness of D=11 supergravity ensures the absence of bulk matter. This is all rather similar to Mach’s principle, as Hořava has previously pointed out in the context of an alternative proposal for the degrees of freedom of M-theory . The utility of Mach’s principle is rather limited for asymptotically flat spacetimes because the local inertial frames are then predominantly determined by the existence of asymptopia. Holography is similarly limited; its applicability to adS spacetimes is evidently linked to the fact that timelike spatial infinity can be interpreted as a brane. This limitation would not be a problem if the universe were spatially closed, but this invokes cosmology to resolve an apparently unrelated problem. Perhaps this is an indication that they are not unrelated and that a consistent nonperturbative formulation of quantum gravity must incorporate cosmology. Acknowledgements: This article is largely an elaboration for $`p=0`$ of work reported for general $`p`$ in . I thank my collaborators HarmJan Boonstra and Kostas Skenderis, and also Gary Gibbons and Arkady Tseytlin for discussions on related issues. I thank Mohab Abou-Zeid, Melanie Becker, Iouri Chepelev and Petr Hořava for helpful comments on an earlier version, and the organisers of the 3’rd Puri workshop for the opportunity to visit India.
# Recent Experiments with Bose-Condensed Gases at JILA ## 1 INTRODUCTION The experimental realization of Bose-Einstein condensation (BEC) in the alkalis has stimulated a flurry of scientific interest. In addition to single trapped condensates, double condensates (of two different spin states) have also been observed.<sup>2</sup><sup>2</sup>2A recent preprint by the MIT group also alludes to a multiple-component BEC confined in an optical trap. In the experiments described below, another double condensate system is described, in which the two condensate spin states overlap to a higher degree than in the previous experiment. Moreover, transitions can be driven between the two states to couple them, produce superposition states, or measure their relative phase. This double condensate also makes possible a number of novel experiments which probe the interactions between two interpenetrating quantum fluids. This paper describes the new double condensate system. We discuss in particular the conditions under which spin states are trapped and can be made to overlap. Since the experiments described here are performed in a trap with a rotating magnetic field, we also explore the peculiarities of spectroscopy in such a field. ## 2 CONDENSATE PRODUCTION We use a new, third-generation JILA BEC apparatus, in which the time-averaged orbiting potential (TOP) magnetic trap of generation I is combined with the double magneto-optical trap (MOT) system of generation II to produce condensates containing up to one million atoms. $`{}_{}{}^{87}\mathrm{Rb}`$ atoms are first collected in a vapor cell MOT and then magnetically guided through a transfer tube into a second ultrahigh vacuum ($`10^{12}`$ Torr) MOT, where up to $`10^9`$ atoms are collected in $`15`$ seconds. After further cooling in optical molasses and optically pumping to the $`|F=1,m_f=1`$ hyperfine state, the trapped atoms are loaded into a magnetic trap. The atoms are further cooled by forced evaporation, in which an applied radiofrequency (rf) magnetic field drives transitions in the most energetic atoms to untrapped states (*e.g.*, $`|1,0`$). We adjust the final evaporation frequency to choose the fraction of atoms which are condensed into the ground state, which can range from 0%, a cold thermal cloud, to nearly 100%, a pure condensate. The atoms are held in the trap for up to several seconds and subsequently released and allowed to expand ballistically for imaging. The entire cycle typically takes under one minute, and the apparatus is reliable enough to operate unattended, producing nearly identical condensates without interruption for over an hour at a time. The technique of resonant absorption imaging is used to probe the cloud. In this destructive process, a brief pulse of repump light ($`5S_{1/2},F=1`$ to $`5P_{3/2},F^{}=2`$) transfers the $`|1,1`$ atoms to the $`F=2`$ hyperfine level. The atoms are subsequently illuminated by the probe beam, which drives the $`5S_{1/2},F=2`$ to $`5P_{3/2},F^{}=3`$ cycling transition. Atoms scatter the light out of the probe beam and the resulting shadow is imaged upon a charge-coupled device (CCD) array. We process these data to extract the optical depth as a function of position, which permits us to determine the size of the cloud and number of atoms of which it is composed, as well as thermodynamic quantities such as its temperature $`T`$. 
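The conversion from an absorption image to an atom number is a short calculation: the measured transmission gives the optical depth pixel by pixel, and dividing its sum by the resonant cross section $`\sigma _0=3\lambda ^2/2\pi `$ gives the number. The sketch below assumes an unsaturated, exactly resonant probe with unit Clebsch-Gordan factor and ignores imaging-system corrections; the pixel size and the synthetic Gaussian shadow are placeholders, not parameters of the apparatus.

```python
import numpy as np

LAMBDA = 780.24e-9                           # m, Rb D2 probe wavelength
SIGMA0 = 3.0 * LAMBDA**2 / (2.0 * np.pi)     # resonant two-level cross section

def atom_number(I_probe, I_background, pixel_area):
    """Atom number from a resonant absorption image (unsaturated probe assumed)."""
    transmission = np.clip(I_probe / I_background, 1e-6, None)
    optical_depth = -np.log(transmission)
    return optical_depth.sum() * pixel_area / SIGMA0

# toy example: a Gaussian shadow with peak optical depth 1.5 on a 100 x 100 grid
x = np.linspace(-5.0, 5.0, 100)
X, Y = np.meshgrid(x, x)
od_true = 1.5 * np.exp(-(X**2 + Y**2) / 2.0)
I_bg = np.full_like(od_true, 1000.0)
I_pr = I_bg * np.exp(-od_true)
print(f"N = {atom_number(I_pr, I_bg, (5e-6) ** 2):.2e} atoms")   # 5 micron pixels assumed
```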
## 3 TWO OVERLAPPING CONDENSATES In a recent experiment at JILA, Myatt and co-workers produced two overlapping condensates in the $`|2,2`$ and $`|1,1`$ hyperfine states of $`{}_{}{}^{87}\mathrm{Rb}`$. Due to their different magnetic moments, their mutual repulsion, and the effect of gravity, the condensates occupied slightly different regions in the trap and could not be made to overlap one another completely. A third spin state of $`{}_{}{}^{87}\mathrm{Rb}`$ may also be trapped which has (to first order) the same magnetic moment as the $`|1,1`$ state, and (as will be shown below) can be made to overlap more completely with it. This system permits the realization of a rudimentary Rb clock in a magnetic trap, and provides a vehicle for understanding the interplay and dynamics between two condensates trapped in different spin states without the complications introduced by the lack of condensate overlap. ### 3.1 Magnetic Moments In the low-field limit, the magnetic energy of a state characterized by magnetic moment $`\mu `$ and quantum number $`m_f`$ quantized along the direction of the field $`B`$ is $$U=\mu 𝐁=g_Fm_f\mu _BB$$ (1) where $`g_F`$ is the Landé $`g`$-factor and $`\mu _B`$ is the Bohr magneton. The magnetic moment precesses about the field at the Larmor frequency $$\omega _L=\frac{g_F\mu _BB}{\mathrm{}}.$$ (2) States for which the product $`g_Fm_f`$ is positive minimize their energy in low fields (“weak-field seekers”) and may be trapped in the minimum of a magnetic field. For $`{}_{}{}^{87}\mathrm{Rb}`$, this leads to three trappable states (Fig. 1): the $`|2,2`$ and $`|2,1`$ states in the $`F=2`$ hyperfine manifold (where $`g_F=+\frac{1}{2}`$), and the $`|1,1`$ state in the $`F=1`$ manifold (where $`g_F=\frac{1}{2}`$). Note that, because of the sign of $`g_F`$, the trapped $`|1,1`$ state precesses in a sense opposite to that of the other two trapped states. This difference gives rise to some subtle behavior in the rotating magnetic field of the TOP trap, as we shall see below. In the absence of gravity, all three of the trapped states sit atop one another, although the $`|2,2`$ atoms are bound a factor of two more tightly by their larger magnetic moment. The centers of the atomic clouds are displaced (“sag”) in the presence of gravity by an amount which scales as $`g/\mu `$, where $`g`$ is the acceleration due to gravity. As a result, the $`|2,2`$ atoms sag about half as much than their $`|2,1`$ and $`|1,1`$ counterparts, and the degree of spatial overlap between the states is reduced. To a first approximation, however, the $`|2,1`$ and $`|1,1`$ atoms sag equally in the trap; consequently, we choose these two states for the fully overlapping condensate experiments. Beyond first order, the magnetic moments of the $`|2,1`$ and $`|1,1`$ states depend (differently) upon the applied magnetic field, and a more exact analysis is required. The Hamiltonian for these atoms includes couplings between the nuclear spin $`I=3/2`$, the electronic spin $`J=1/2`$, and the externally applied magnetic field; this is expressed $$\widehat{H}_{\mathrm{BR}}=\frac{h\nu _{\mathrm{hfs}}}{2}𝐈𝐉+g_J\mu _B𝐉𝐁g_I\mu _n𝐈𝐁,$$ (3) where $`\nu _{\mathrm{hfs}}=6834682612.8`$ Hz is the zero-field hyperfine splitting in $`{}_{}{}^{87}\mathrm{Rb}`$, $`g_J`$ and $`g_I`$ are the electronic and nuclear $`g`$-factors, and $`\mu _n`$ is the nuclear magneton. The energy of a particular state as a function of the magnitude of the magnetic field is found by diagonalizing Eq. 
(3), which yields the well-known Breit-Rabi formula: $$E_{\mathrm{BR}}(B)=\frac{h\nu _{\mathrm{hfs}}}{2(2I+1)}g_I\mu _nBm_f\pm \frac{h\nu _{\mathrm{hfs}}}{2}\sqrt{1+\frac{4m_fx}{2I+1}+x^2}$$ (4) where the upper (lower) sign is taken for $`F=2`$ ($`F=1`$), and the parameter $`x`$ is defined as $$x=\frac{(g_J\mu _B+g_I\mu _n)B}{h\nu _{\mathrm{hfs}}}.$$ (5) A plot of the splitting between the $`|2,1`$ and $`|1,1`$ states is shown in Fig. 2(b). The magnetic moments of the two states are identical when the slope of this difference becomes zero, *i.e.*, at $`B=3.24`$ G. At this field<sup>3</sup><sup>3</sup>3This requirement is only on the magnitude of the field seen by the atoms, and can arise from a combination of (rotating) bias fields and quadrupole components. two noninteracting atomic clouds (or condensates) will overlap. ### 3.2 Creating the Double Condensate The double condensate is produced by driving atoms from the $`|1,1`$ condensate to the $`|2,1`$ state with a two-photon excitation involving a microwave photon at $`6.8`$ GHz and an rf photon at $`1`$ MHz.<sup>4</sup><sup>4</sup>4Note that this scheme differs from that used in the previous JILA experiment, in which the two condensates were produced simultaneously during the evaporative cooling cycle. The two-photon drive is detuned 1.24 MHz from the intermediate $`|2,0`$ state, as shown in Fig. 1. The rf pulse is generated by a Wavetek 395 synthesizer and coupled into the system in the same manner as the rf used for forced evaporative cooling. The microwave radiation is generated by an HP 8672A frequency synthesizer, amplified to approximately 1 W by a travelling wave tube amplifier, and coupled into the system through a truncated waveguide. The coupling is rather inefficient, and the resulting two-photon Rabi frequency is only 1.1 kHz. For stability and absolute hyperfine interval measurements (described below) both synthesizers are frequency-locked to the 10 MHz reference of an HP 53570A global positioning system (GPS)-stabilized time and frequency reference receiver. By varying the rf detuning we can sweep the drive across the transition frequency and measure the number of atoms in the $`|2,1`$ condensate (Fig. 3). The repump light is blocked in the imaging sequence in order to prevent atoms in the $`|1,1`$ state from being imaged. Each point corresponds to a measurement with a pure condensate (*i.e.*, evaporated to the point at which there was no visible thermal cloud). For this particular measurement the transfer pulse was applied after the atoms were released from the trap, but identical results are obtained with atoms transferred to the upper state while in the trap. The width of the line is consistent with the length of the applied pulse, which is approximately $`500\mu `$sec. Strictly speaking, any transfer of atoms to the upper state which is incomplete results in a single condensate in a superposition of the two states. The probe light projects these states after the condensate is released from the trap. Unless the relative phase between the two condensates is to be measured (as in the Ramsey method described below) there is little experimental difference between the superposition state and two independent condensates. ### 3.3 Effect of a Rotating Field The preceding analysis is sufficient for a static magnetic trap, such as that used in the experiment of Myatt *et. al.*. Additional complications arise, however, when a time-dependent magnetic trap (such as the TOP trap) is used to provide the confinement. 
The TOP trap involves a magnetic field of magnitude $`B_b`$ rotating counterclockwise (as viewed from the positive $`z`$-axis) at angular frequency $`\omega _t`$ in the $`xy`$-plane, $$𝐁_𝐛(t)=B_b(\mathrm{cos}\omega _tt\widehat{𝐱}+\mathrm{sin}\omega _tt\widehat{𝐲}).$$ (6) The effective Hamiltonian in a frame co-rotating with the magnetic field transforms as $$\widehat{H}_{\mathrm{eff}}=R(\omega _tt)\widehat{H}_{\mathrm{BR}}R^{}(\omega _tt)i\mathrm{}R^{}(\omega _tt)\frac{}{t}R(\omega _tt)$$ (7) with the time-dependent rotation operator $`R`$ defined by $$R(\omega _tt)=\mathrm{exp}(\frac{i}{\mathrm{}}F_z\omega _tt).$$ (8) The operator $`F_z`$ is the $`z`$-component of the total (nuclear plus electronic) spin vector, and the sign of the argument of $`R`$ is chosen to rotate the coordinate axes in the same sense as the field. The Breit-Rabi Hamiltonian $`\widehat{H}_{\mathrm{BR}}`$ is invariant under the transformation. We therefore obtain $$\widehat{H}_{\mathrm{eff}}=\widehat{H}_{\mathrm{BR}}F_z\omega _t.$$ (9) For weak fields (*i.e.*, for $`\omega _L\omega _t`$) this additional term may be approximated as an effective magnetic field $`B_\omega `$ pointing along the $`z`$-axis which causes Larmor precession of the spin at the trap rotation frequency, *i.e.*, $$\omega _t=\frac{g_F\mu _BB_\omega }{\mathrm{}}B_\omega =\frac{\mathrm{}\omega _t}{g_F\mu _B}.$$ (10) The total spin vector for each state precesses about the magnetic field in a direction specified by the sign of its Landé $`g`$-factor $`g_F`$; as a consequence, the direction of the effective magnetic field $`B_\omega `$ is also determined by this sign. For our two trapped states we can take out the overall sign of $`B_\omega `$ explicitly and write $$B_{\mathrm{total}}=\sqrt{B_b^2+(|B_\omega |)^2}$$ (11) where the upper (lower) sign is for the $`|2,1`$ ($`|1,1`$) state. Up to this point we can see that the effective field is the same for both states. In TOP traps, however, there is also a magnetic quadrupole field $$𝐁_𝐪=B_q^{}(x\widehat{𝐱}+y\widehat{𝐲}2z\widehat{𝐳})$$ (12) (where $`B_q^{}=B/x`$ and the sign of the quadrupole has been chosen such that the field points toward the center of the trap on the $`z`$-axis). Gravity causes the equilibrium displacement of the atoms to sag below the center of the trap as defined by the quadrupole gradient. If the force on the atoms due to gravity is in the $`z`$-direction and the atoms are displaced some distance $`z`$ from the center of the trap, Eq. 11 must be modified to read $$B_{\mathrm{total}}=\sqrt{B_b^2+(2B_q^{}z|B_\omega |)^2}$$ (13) where the upper (lower) sign is taken for atoms in the $`|2,1`$ ($`|1,1`$) state. As a first result, we note that the measured transition frequency between the two states in the TOP trap will not be that shown in Fig. 2(b), which assumes that both states see the same field. In the TOP trap, each state sees a field different from that seen by the other since the rotating term $`B_\omega `$ adds to the field produced by the quadrupole gradient for one state and is subtracted from it for the other. The equilibrium position of the atoms is where the total force on the atoms is zero, or $$\frac{}{z}\left[E_{\mathrm{BR}}(B_{\mathrm{total}})\right]=M_{\mathrm{Rb}}g.$$ (14) Our second result is that the two equilibrium displacements will in general not be the same when $`B_b=3.24`$ G. 
We may, however, find new conditions under which the two states once again share an equilibrium displacement in the TOP trap by calculating the relative sag as a function of the magnetic quadrupole gradient as well as the bias field magnitude, rotation frequency, and sense of rotation (all of which are TOP trap parameters which we can adjust within certain limits). For instance, Fig. 4 shows the calculated sag for a trap rotation frequency $`\omega _t=2\pi 7.202`$ kHz and a quadrupole gradient $`B_q^{}=61.5`$ G/cm, where the upper curve is for clockwise rotation ($`\omega _t<0`$) and the lower curve for counterclockwise rotation ($`\omega _t>0`$). In this case, the relative sag is calculated to be zero when $`B=9.4`$ G and the field rotates in the counterclockwise sense. Note there is no value of $`B_b`$ for which the equilibrium displacements in the trap are the same for atoms in both states when the TOP field rotates in the clockwise sense. In order to confirm the relative sag as predicted by the theory for the rotating magnetic field, we drove the $`m=0`$ center-of-mass (“sloshing”) motion by discontinuously changing the trap fields (and, hence, oscillation frequencies) for pure $`|1,1`$ and $`|2,1`$ condensates. The resulting condensate axial motion is shown in Fig. 5. For a field of 3.2 G, the $`|2,1`$ atoms are seen to oscillate about an equilibrium position somewhat higher than that of the $`|1,1`$ atoms; the reverse is true for 11 G. These observations agree with the theoretical prediction of Fig. 4. ## 4 APPLICATIONS, PRELIMINARY RESULTS, AND CONCLUSIONS The $`|1,0`$$``$ $`|2,0`$ “clock” transition in $`{}_{}{}^{87}\mathrm{Rb}`$ is widely used as a frequency reference. In addition to systematic effects, the precision to which such an atomic transition can be measured is limited by the time over which the transition can be observed. By using condensates confined in a magnetic trap, the observation time can be on the order of the lifetime of the condensates, which may be up to several seconds. Unfortunately, the states traditionally used in the “clock” transition cannot be magnetically trapped since $`m_f=0`$. As noted in Sec. 3.1 (above), the $`|2,1`$ to $`|1,1`$ transition frequency is (like the clock transition) independent of $`B`$ to first order. We can measure this frequency by using Ramsey’s technique of separated oscillatory fields (SOF). Unlike a beam experiment, the fields here occur as two pulses separated in time rather than two interaction regions separated in space. We begin with atoms in the $`|1,1`$ state. A first $`\pi /2`$ pulse creates a condensate in a superposition of the two states, at which point its wavefunction freely evolves at the frequency of the hyperfine splitting $`\omega _{\mathrm{clock}}=\omega _{|2,1}\omega _{|1,1}`$. Our local oscillator (the sum of the microwave and rf frequencies) evolves at $`\omega _{\mathrm{LO}}`$. The number of atoms observed in the upper state after some interaction time $`t`$ is given by $$\frac{N_{2,1}}{N_{\mathrm{tot}}}=\mathrm{sin}^2\left((\omega _{\mathrm{clock}}\omega _{\mathrm{LO}})\frac{t}{2}+\varphi _0\right)$$ (15) where $`\varphi _0`$ is an unimportant relative phase originating from the details of the pulse timing. We detune the local oscillator roughly 1 kHz from the transition and measure the number of atoms in the $`|2,1`$ condensate for various interaction times. From a fit to the data we obtain a measurement of the detuning, as shown in Fig. 
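Both ingredients of the calculation behind Fig. 4 have already been quoted: the Breit-Rabi energies of Eq. (4) and the rotation-induced field of Eq. (10). The fragment below implements Eq. (4) with standard $`{}_{}{}^{87}\mathrm{Rb}`$ constants (assumed values, not taken from this paper) and reproduces two numbers used in the discussion: the field-insensitive point of the $`|2,1`$ to $`|1,-1`$ splitting near 3.2 G and the size of $`|B_\omega |`$ for $`\omega _t=2\pi \times 7.202`$ kHz.

```python
import numpy as np

# Rb-87 constants (assumed standard values)
h, hbar = 6.62607e-34, 1.05457e-34
muB     = 9.27401e-24            # J/T
nu_hfs  = 6.834682611e9          # Hz
I_nuc   = 1.5
gJ, gI  = 2.002331, -0.0009951   # electronic / nuclear g-factors, Bohr-magneton units

def E_breit_rabi(B, F, m):
    """Breit-Rabi energy of |F, m> in Hz; E(2, m) - E(1, m') -> nu_hfs as B -> 0."""
    x = (gJ - gI) * muB * B / (h * nu_hfs)
    sign = +1.0 if F == 2 else -1.0
    return (-nu_hfs / (2.0 * (2.0 * I_nuc + 1.0)) + gI * muB * m * B / h
            + sign * 0.5 * nu_hfs * np.sqrt(1.0 + 4.0 * m * x / (2.0 * I_nuc + 1.0) + x * x))

# (a) field-insensitive point of the |2,+1> - |1,-1> splitting
B = np.linspace(1e-5, 10e-4, 20000)                  # 0.1 G ... 10 G, in tesla
nu = E_breit_rabi(B, 2, 1) - E_breit_rabi(B, 1, -1)
print(f"stationary point of the splitting: {B[np.argmin(nu)] * 1e4:.2f} G")

# (b) rotation-induced effective field of Eq. (10) for the TOP frequency used here
omega_t = 2.0 * np.pi * 7.202e3
B_omega = hbar * omega_t / (0.5 * muB)               # |g_F| = 1/2 for both states
print(f"|B_omega| = {B_omega * 1e7:.1f} mG")
```

Inserting the same `E_breit_rabi` into the axial field profile of Eq. (13) and minimizing $`E_{\mathrm{BR}}(B_{\mathrm{total}}(z))+M_{\mathrm{Rb}}gz`$ over $`z`$ for each state, with the state- and rotation-dependent sign of $`|B_\omega |`$ taken from Eqs. (10)-(13), then yields the relative sag curves of Fig. 4.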
From a fit to the data we obtain a measurement of the detuning, as shown in Fig. 6. (This single measurement does not give us the *sign* of the detuning, and we are required to choose a different detuning and repeat the measurement to get at this quantity.) If the equilibrium positions of the two condensates are not identical then the wavefunctions for the two condensate states will begin to separate in space and will not effectively interfere with one another when the second $`\pi /2`$ pulse is applied. The data presented in this figure determine the hyperfine splitting between the two states to about 1 part in $`10^9`$, and were taken with atoms released from the trap and allowed to expand ballistically in only the rotating magnetic field. Ramsey fringes have recently been obtained with condensates confined in the magnetic trap (after having adjusted the trap parameters to make the condensate equilibrium positions identical) and will be considered in a future paper.

Another set of future experiments focuses upon the interactions between two condensates in two different hyperfine states, which constitutes one of a class of studies of mixed condensates. Choosing the equilibrium displacements to be identical simplifies the interpretation of the dynamical behavior of the two condensates, and maximizes the overlap which can be achieved. In particular, the system described here permits very good control over the relative number of atoms in the two states. Experiments under consideration include the measurement of the relative scattering length of the two states by examining the mode spectrum excited by transferring all of the atoms from the $`|1,-1\rangle `$ state to the $`|2,1\rangle `$ state, and the phase separation dynamics of the two condensates.

In conclusion, we have produced a new two-condensate system out of the $`|1,-1\rangle `$ and $`|2,1\rangle `$ hyperfine states of $`{}_{}{}^{87}\mathrm{Rb}`$. Due to their very similar magnetic moments these two condensates can be made to overlap to a very high degree in both static and rotating-field magnetic traps. This system, and others like it, may be used as a vehicle for studying the dynamics of interpenetrating quantum fluids and for precision metrology.

## ACKNOWLEDGMENTS

This work is supported by the National Institute of Standards and Technology, the National Science Foundation, and the Office of Naval Research. The authors would also like to thank the other members of the JILA BEC Collaboration, and in particular J. L. Bohn, for their contributions and discussions on these topics. This paper first appeared in the Proceedings of the SPIE, Volume 3270, Pages 98–106 (1998). Equations (2) and (10), and their accompanying text, have been corrected for this postprint edition.
# The Lyman Alpha Forest in Hierarchical Cosmologies

## Introduction

A physical picture of the Ly$`\alpha `$ forest in hierarchical cosmologies has recently emerged from numerical simulations sims ; bryan in which the absorbers that give rise to low column density lines ($`N_{HI}<10^{15}`$ cm<sup>-2</sup> at $`z\sim 3`$) are large, unvirialized objects with sizes of $`\sim 100`$ kpc and densities comparable to the cosmic mean. Since the absorbers grow from the primordial density fluctuations through gravitational amplification, statistics of the forest may be used to test various models of structure formation. We discuss the numerical stability of the statistics against changes in simulation box size and spatial resolution in detail elsewhere. bryan We focus here on examples from our model comparison study machacek in which statistics of the Ly$`\alpha `$ forest are computed in five cosmological models: the standard cold dark matter model (SCDM), a flat cold dark matter model with nonvanishing cosmological constant (LCDM), a low density cold dark matter model (OCDM), a flat cold dark matter model with a tilted power spectrum (TCDM), and a critical model with both cold dark matter and two massive neutrinos (CHDM). The initial fluctuations, assumed to be Gaussian, are normalized using $`\sigma _{8h^{-1}}`$ to agree with the observed distribution of clusters of galaxies, although all but SCDM are also consistent with the COBE measurements of the cosmic microwave background. By varying $`\sigma _{8h^{-1}}`$ within a given model (SCDM), we also investigate the dependence of the statistics on changes in the fluctuation power spectrum.

The simulation technique uses a particle-mesh algorithm to follow the dark matter and the piecewise parabolic method bryan95 to simulate gas dynamics. The simulation box length is $`9.6`$ Mpc (comoving) with spatial resolution of $`18.75h^{-1}`$ ($`37.5h^{-1}`$) kpc for the model (power) comparison studies, respectively. Nonequilibrium effects are followed anninos for six particle species (HI, HII, HeI, HeII, HeIII, and the electron density). We assume a spatially constant radiation field computed from the observed QSO distribution haardt which reionizes the universe around $`z\sim 6`$ and peaks at $`z\sim 2`$. Synthetic spectra are generated along $`300`$ random lines of sight through the volume, including the effects of peculiar velocity and thermal broadening. The spectra are normalized to give a mean optical depth $`<\tau >=0.3`$ at $`z=3`$ to agree with observation. We do not include the effects of radiative transfer, self-shielding or star formation and so cannot address the physics of the highest column density absorbers ($`N_{HI}>10^{16}`$ cm<sup>-2</sup>).

## Fit Dependent Statistics

The synthetic spectra are fit by Voigt profiles at these low column densities to obtain column densities and Doppler widths for each line. The slope of the column density distribution is insensitive to changes in the size of the simulation volume or grid resolution. bryan Fig. 1 confirms analytic work gnedin that the slope of the column density distribution depends primarily on the power in the model at scales of $`100`$–$`200`$ kpc and steepens for models with lower power. Each of the five cosmologies in our comparison study agrees with the data at the $`3\sigma `$ level, although models (SCDM, LCDM, OCDM) with moderate power at these scales are favored.
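The slope comparison just described rests on fitting a power law to the distribution of Voigt-profile column densities. A minimal, generic sketch of such a fit is given below; it is not the analysis code of this paper, the completeness limit and synthetic sample are invented, and a real fit would also account for the upper column density cutoff and incompleteness.

```python
# Illustrative sketch: maximum-likelihood estimate of the column density
# distribution slope beta, assuming f(N_HI) proportional to N_HI^(-beta) above
# some completeness limit N_min.  Generic estimator; the sample is synthetic.
import numpy as np

def power_law_slope(N_HI, N_min):
    """MLE slope for an (untruncated) power-law distribution above N_min."""
    N = np.asarray(N_HI, dtype=float)
    N = N[N >= N_min]
    beta = 1.0 + N.size / np.sum(np.log(N / N_min))
    err = (beta - 1.0) / np.sqrt(N.size)          # approximate 1-sigma error
    return beta, err

# Synthetic line list drawn from beta = 1.7, for demonstration only.
rng = np.random.default_rng(1)
N_min = 10**12.5                                            # cm^-2
sample = N_min * (1.0 - rng.random(5000)) ** (-1.0 / 0.7)   # inverse-CDF draw
beta, err = power_law_slope(sample, N_min)
print(f"beta = {beta:.2f} +/- {err:.2f}")
```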
The Doppler $`b`$ parameter distributions are determined not only by thermal broadening and peculiar velocities of the absorbers, but also by the Hubble expansion across their width, and require high spatial resolution to be modeled properly. bryan ; theuns The shape of the distribution including the high $`b`$ tail is well fit by hierarchical models. However, as shown in Fig. 2, observations by Kim, et. al kim yield median $`b`$ parameters significantly higher than predicted for the models favored by the column density distributions (LCDM, OCDM, SCDM). ## Conclusion Although hierarchical cosmologies reproduce the general characteristics of the Ly$`\alpha `$ absorption spectra quite well, detailed tests of the models may require new methods of analysis for the simulations and observations. Statistics derived directly from the flux or optical depth distributions without recourse to any line fitting algorithm are particularly interesting.rauch For example, the optical depth probability distribution, like the column density distribution, is stable to changes in simulation spatial resolution. Fig. 3 shows that the distribution narrows for models with lower power and that the shape of the distribution varies significantly for the different models over the range $`0.05<\tau <4`$ accessible to observations. Comparison of high quality observations with high resolution simulations using an ensemble of such statistics may soon clarify the physical properties of the intergalactic medium at intermediate redshifts when galaxies were young. This work is done under the auspices of the Grand Challenge Cosmology Consortium and supported in part by NSF grant ASC-9318185 and NASA Astrophysics Theory Program grant NAG5-3923.
# Light Gluino Search for Decays Containing 𝜋⁺𝜋⁻ or 𝜋⁰ from a Neutral Hadron Beam at Fermilab.

## Abstract

We report on two null searches, one for the spontaneous appearance of $`\pi ^+\pi ^-`$ pairs, another for a single $`\pi ^0`$, consistent with the decay of a long-lived neutral particle into hadrons and an unseen neutral particle. For the lowest level gluon-gluino bound state, known as the $`R^0`$, we exclude the decays $`R^0\to \pi ^+\pi ^-\stackrel{~}{\gamma }`$ and $`R^0\to \pi ^0\stackrel{~}{\gamma }`$ for the masses of $`R^0`$ and $`\stackrel{~}{\gamma }`$ in the theoretically allowed range. In the most interesting $`R^0`$ mass range, $`\sim 3\mathrm{GeV}/\mathrm{c}^2`$, we exclude $`R^0`$ lifetimes from $`3\times 10^{-10}`$ seconds to as high as $`10^{-3}`$ seconds, assuming perturbative QCD production for the $`R^0`$.

preprint: RUTGERS-99-12

Light masses for gluinos ($`\stackrel{~}{g}`$) and photinos ($`\stackrel{~}{\gamma }`$) arise naturally in many supersymmetry (SUSY) models, including those that solve the SUSY-CP problem by eliminating dimension-3 operators . They predict light gauginos and heavy squarks, and have not been ruled out conclusively. In such models the gluino and photino masses are expected to be $`\lesssim 1.0\mathrm{GeV}/\mathrm{c}^2`$. The gluino should form bound states with normal quarks and gluons ($`g`$), the lightest of which is called the $`R^0`$, a spin 1/2 $`g\stackrel{~}{g}`$ bound state. Estimates of the mass and lifetime of the $`R^0`$ vary from 1 to 3 $`\mathrm{GeV}/\mathrm{c}^2`$ and $`10^{-10}`$ to $`10^{-5}`$ s respectively . Chung, Farrar, and Kolb show that a stable $`\stackrel{~}{\gamma }`$ as a relic dark matter candidate requires the ratio of masses $`M_{R^0}/M_{\stackrel{~}{\gamma }}\equiv r`$ to be $`1.3\lesssim r\lesssim 1.8`$. The range $`1.3\lesssim r\lesssim 1.55`$ is favored. Values of $`r`$ below 1.3 yield an insufficient abundance of dark matter, while too large a value of $`r`$ overcloses the universe.

A previous direct search for the $`R^0`$ by the KTeV collaboration is described in . That result, based on 5% of the data collected by the KTeV experiment in 1996, excluded the $`R^0`$ with the constraint $`M_{R^0}-M_{\stackrel{~}{\gamma }}\geq 0.648\mathrm{GeV}/\mathrm{c}^2`$. Thus for $`r\simeq 1.4`$, our previous result was insensitive to $`M_{R^0}\lesssim 2.3\mathrm{GeV}/\mathrm{c}^2`$, which represents a large portion of the region of primary interest . A search for the C-suppressed decay $`R^0\to \eta \stackrel{~}{\gamma }`$ is discussed in . References to other searches can be found in .

We assume exactly the same production mechanism and rates as described in . The $`R^0`$ is expected to decay mainly into $`\rho \stackrel{~}{\gamma }`$. The decay into $`\pi ^0\stackrel{~}{\gamma }`$ or $`\eta \stackrel{~}{\gamma }`$ is suppressed due to conservation of $`C`$-parity. Figure 1 illustrates the lower limit of sensitivity in $`M_{R^0}-M_{\stackrel{~}{\gamma }}`$ of the three decay modes mentioned above, along with the cosmological constraints. We report on null searches for two decay modes. The first is the dominant decay, $`R^0\to \rho \stackrel{~}{\gamma }`$, $`\rho \to \pi ^+\pi ^-`$. The second is the $`C`$-violating decay $`R^0\to \pi ^0\stackrel{~}{\gamma }`$. In both decays, the $`\stackrel{~}{\gamma }`$ escapes undetected.

The KTeV experiment as used in the $`R^0`$ search is described in . The data used in the $`R^0\to \rho \stackrel{~}{\gamma }`$, $`\rho \to \pi ^+\pi ^-`$ analysis were collected during the 1996 run of KTeV (FNAL E832). The trigger and analysis cuts used are similar to those described in .
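Before turning to the data, the two kinematic quantities at the heart of the $`\pi ^+\pi ^-`$ search — the two-track invariant mass under the pion hypothesis and the squared transverse momentum of the pair with respect to the beam — can be written down compactly. The sketch below is generic and illustrative, not KTeV reconstruction code, and the example momenta are invented.

```python
# Generic sketch of the two-track invariant mass under the pion hypothesis and
# the pair transverse momentum.  Not KTeV code; the example momenta are invented.
import numpy as np

M_PI = 0.13957  # charged pion mass [GeV/c^2]

def m_pipi(p_plus, p_minus, m=M_PI):
    """Invariant mass of two tracks, each assigned mass m (GeV units, c = 1)."""
    p1 = np.asarray(p_plus, dtype=float)
    p2 = np.asarray(p_minus, dtype=float)
    e1 = np.sqrt(m**2 + p1 @ p1)
    e2 = np.sqrt(m**2 + p2 @ p2)
    p_tot = p1 + p2
    return np.sqrt((e1 + e2)**2 - p_tot @ p_tot)

def pt2(p_plus, p_minus, beam_axis=(0.0, 0.0, 1.0)):
    """Squared transverse momentum of the pair with respect to the beam axis."""
    p_tot = np.asarray(p_plus, float) + np.asarray(p_minus, float)
    n = np.asarray(beam_axis, float)
    n = n / np.linalg.norm(n)
    p_par = (p_tot @ n) * n
    return float((p_tot - p_par) @ (p_tot - p_par))

# Example: two nearly collinear ~30 GeV/c tracks (numbers invented).
pp = (0.05, 0.00, 30.0)
pm = (-0.04, 0.01, 28.0)
print(f"M_pipi = {m_pipi(pp, pm):.3f} GeV/c^2,  PT^2 = {pt2(pp, pm):.5f} (GeV/c)^2")
```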
To detect a possible $`R^0`$ signal we examined decays with two charged particles; specifically, the shape of the invariant mass distribution, with the assumption that the particles were pions ($`M_{\pi ^+\pi ^-}`$). An online filter was used during data collection to classify the events according to $`\pi ^+\pi ^-`$ invariant mass. The data with $`M_{\pi ^+\pi ^-}<0.45\mathrm{GeV}/\mathrm{c}^2`$ were prescaled. Backgrounds consisted of $`K_L\to \pi ^\pm l^\mp \nu `$ ($`l=e,\mu `$) decays with leptons mis-identified as pions (semi-leptonic decays); $`K_L\to \pi ^+\pi ^-`$ and $`K_L\to \pi ^+\pi ^-\gamma `$ decays; as well as $`K_L\to \pi ^+\pi ^-\pi ^0`$ decays with undetected $`\pi ^0`$’s. The offline analysis, including cuts using photon veto energies, semileptonic decay rejection, and track and vertex quality requirements, was similar to that described in . Additional cuts reduced the probability of track reconstruction errors that paired the wrong combination of horizontal and vertical track components. The $`K_L\to \pi ^+\pi ^-`$ decays were rejected by requiring the square of the transverse momentum of the $`\pi ^+\pi ^-`$ pair with respect to the beam direction ($`P_T^2`$) to be greater than 0.001 $`(\mathrm{GeV}/\mathrm{c})^2`$, and the $`K_L\to \pi ^+\pi ^-\gamma `$ and $`K_L\to \pi ^+\pi ^-\pi ^0`$ decays were rejected by using the calorimeter to identify the photons from the respective decays. In the region $`M_{\pi ^+\pi ^-}\le 0.36\mathrm{GeV}/\mathrm{c}^2`$, additional cuts ($`K_L\to \pi ^+\pi ^-\pi ^0`$-specific cuts), including restricting the total energy deposited in the calorimeter to $`\le 5`$ GeV, further reduced the $`K_L\to \pi ^+\pi ^-\pi ^0`$ decays.

We performed a maximum-likelihood fit to the $`M_{\pi ^+\pi ^-}`$ distribution, using Monte Carlo distributions for $`K_L\to \pi e\nu `$, $`K_L\to \pi \mu \nu `$, $`K_L\to \pi ^+\pi ^-`$, $`K_L\to \pi ^+\pi ^-\pi ^0`$, and $`R^0\to \pi ^+\pi ^-\stackrel{~}{\gamma }`$. The Monte Carlo events were subjected to all the same cuts as the data; however, the $`e^\pm `$ identification cuts were not applied to the $`K_L\to \pi e\nu `$ events, and the muon veto requirement was not applied to the $`K_L\to \pi \mu \nu `$ events. These lepton identification cuts have no effect on the $`M_{\pi ^+\pi ^-}`$ distribution. The amplitudes for all the simulated $`M_{\pi ^+\pi ^-}`$ shapes were allowed to vary independently.

Figure 2(a) shows the $`M_{\pi ^+\pi ^-}`$ distribution for all data, before applying the $`K_L\to \pi ^+\pi ^-\pi ^0`$-specific and $`P_T^2`$ cuts. There are $`\sim `$ 2.1$`\times 10^6`$ CP-violating $`K_L\to \pi ^+\pi ^-`$ decays in the peak at $`M_K`$. The sharp edge at $`M_{\pi ^+\pi ^-}=0.45\mathrm{GeV}/\mathrm{c}^2`$ is due to the online filter prescale. The $`K_L\to \pi ^+\pi ^-\pi ^0`$ decays are evident at $`M_{\pi ^+\pi ^-}\le 0.36\mathrm{GeV}/\mathrm{c}^2`$, and the kinematic limit is evident at $`2M_\pi =0.28\mathrm{GeV}/\mathrm{c}^2`$. Also shown in Figure 2(a) is the sum of the various kaon decay distributions from Monte Carlo. The data and the kaon decay simulation are in agreement over six orders of magnitude. In addition, the $`P_T^2`$ distributions for data and the sum of decay simulations (not shown) are in good agreement, using the amplitudes for the various kaon decays found in the $`M_{\pi ^+\pi ^-}`$ fit. Figure 2(b) shows the $`M_{\pi ^+\pi ^-}`$ distribution for the data with all the cuts, as well as the sum of the $`K_L`$ decay simulations, and distributions for two sample $`R^0`$, $`\stackrel{~}{\gamma }`$ combinations.
The $`K_L\to \pi ^+\pi ^-`$ peak in data is significantly reduced due to the $`P_T^2`$ cut. The agreement between data and the sum of $`K_L`$ decay simulations has an overall $`\chi ^2`$/degree of freedom of $`\sim `$ 194/148 for the region $`0.28\mathrm{GeV}/c^2\le M_{\pi ^+\pi ^-}\le 0.58\mathrm{GeV}/c^2`$. The two sample $`R^0`$ distributions shown in Figure 2(b) are scaled to the expected rate for $`R^0`$’s with a lifetime equal to the lifetime of the $`K_L`$. Since the shape due to $`R^0`$ decay is significantly different from those due to kaon decay, we searched for the $`R^0`$ by examining the difference between the $`M_{\pi ^+\pi ^-}`$ shape and the shape expected from kaon decays. The data show no deviation in the $`M_{\pi ^+\pi ^-}`$ shape that could indicate a contribution from an $`R^0`$ decay.

Limits on the $`R^0`$ were obtained using the maximum likelihood fit explained above. There are 10 events with $`M_{\pi ^+\pi ^-}\ge 0.6\mathrm{GeV}/\mathrm{c}^2`$ that are not simulated by $`K_L`$ decays. These events, which are consistent with residual gas interactions in the vacuum, are treated as signal in the fit. Various $`R^0`$, $`\stackrel{~}{\gamma }`$ combinations, with $`1.3\le r\le 2.2`$, were used in the fit. All fits yielded $`R^0`$ components consistent with zero. An upper limit for a given $`R^0`$ was determined by evaluating the maximum-likelihood curve at the 90% confidence level (C.L.) interval, which limited the number of $`R^0`$ decays to upper bounds ranging from a few tens to a few hundred events. The detector acceptance and the $`R^0`$ flux at production were then determined as a function of the $`R^0`$ lifetime. The normalization was performed using the 2.1$`\times 10^6`$ $`K_L\to \pi ^+\pi ^-`$ events observed, from which we determined that 37.7 $`\times 10^{10}`$ $`K^0`$ exited the absorbers . Figure 3 shows the 90% C.L. upper limit on the ratio $`R^0/K^0`$, as well as the expectation for the $`R^0/K^0`$ ratio for $`r=1.4`$. Figure 4 shows the variation of the 90% C.L. upper limit on the $`R^0/K^0`$ flux ratio with the $`R^0`$ lifetime for two sample $`R^0`$, $`\stackrel{~}{\gamma }`$ combinations. Particles with lifetimes much shorter than that of the $`K_L`$ decay too close to the target to be visible in the detector, while those with much longer lifetimes exit the detector without decaying. We use the $`R^0/K^0`$ flux expectation to exclude a range of lifetimes for a given $`R^0`$, $`\stackrel{~}{\gamma }`$ combination.

Figure 5 shows $`R^0`$ lifetimes excluded at 90% C.L. for a given mass, assuming a 100% branching fraction for the $`R^0\to \pi ^+\pi ^-\stackrel{~}{\gamma }`$ decay. Contours are shown for $`r=1.3,1.4,1.55,1.73`$. In this analysis, we are able to exclude $`R^0`$’s with masses well below the lower limits of previous searches ($`\sim `$ 2.2 GeV/$`c^2`$). The lower limit of the exclusion contour (at 1.3 GeV/$`c^2`$ for $`r=1.3`$) is due to the kinematic limit of $`M_{R^0}-M_{\stackrel{~}{\gamma }}=2M_\pi `$. We note that in the theoretically interesting regions of $`M_{R^0}`$ and $`r`$, our exclusion covers lifetimes as low as $`3\times 10^{-10}`$ seconds, and as high as $`10^{-3}`$ seconds, effectively spanning the theoretically interesting range of lifetimes. If the mass difference $`M_{R^0}-M_{\stackrel{~}{\gamma }}`$ is less than $`2M_\pi `$ then the $`R^0`$ can only decay via $`R^0\to \pi ^0\stackrel{~}{\gamma }`$.
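The lifetime dependence invoked above — particles with very short lifetimes decay upstream of the fiducial region, while very long-lived ones exit it before decaying — follows from elementary decay kinematics, sketched below before we turn to the $`\pi ^0`$ search. This is an illustrative estimate, not the acceptance calculation behind Figs. 4 and 5; the decay-volume boundaries and the $`R^0`$ momentum in the example are assumptions chosen only to show the shape of the dependence.

```python
# Illustrative decay-probability-vs-lifetime estimate for a neutral particle of
# mass m and momentum p produced at a target, with a fiducial decay region
# [z1, z2] downstream.  Not the KTeV acceptance calculation; z1, z2, p are assumed.
import numpy as np

C = 2.998e8  # speed of light [m/s]

def decay_probability(tau, m_gev, p_gev, z1, z2):
    """P(decay occurs between z1 and z2) for proper lifetime tau [s]."""
    gamma_beta = p_gev / m_gev                 # p/(m c) in natural units
    lab_decay_length = gamma_beta * C * tau    # mean decay length [m]
    return np.exp(-z1 / lab_decay_length) - np.exp(-z2 / lab_decay_length)

z1, z2 = 120.0, 160.0        # assumed decay-volume boundaries downstream of target [m]
m, p = 2.5, 75.0             # assumed R0 mass [GeV/c^2] and momentum [GeV/c]
for tau in (1e-10, 1e-9, 1e-8, 1e-6, 1e-4, 1e-3):
    prob = decay_probability(tau, m, p, z1, z2)
    print(f"tau = {tau:7.1e} s  ->  P(decay in volume) = {prob:.2e}")
```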
We searched for the decay $`R^0\to \pi ^0\stackrel{~}{\gamma }`$, $`\pi ^0\to \gamma \gamma `$ in data taken during a special run whose primary purpose was to search for the decay $`K_L\to \pi ^0\nu \overline{\nu }`$ . Since the signatures for the $`K_L`$ and $`R^0`$ decays, two photons with missing transverse momentum ($`P_T`$), are similar, these data are sensitive to $`R^0\to \pi ^0\stackrel{~}{\gamma }`$ decays. Only one narrow beam of neutral kaons was used in this run, with a transverse beam size of $`4\mathrm{cm}\times 4\mathrm{cm}`$ at the calorimeter. The trigger was designed to select events with two energy clusters in the calorimeter, together with four-cluster events ($`K_L\to \pi ^0\pi ^0`$) for normalization. The longitudinal distance of the decay vertex from the target ($`Z`$) and the transverse momentum of the two photons were determined by constraining the invariant mass of the two photons to that of the $`\pi ^0`$. The average $`P_T`$ resolution was $`8\mathrm{MeV}/\mathrm{c}`$. The selection criteria used in this data sample are similar to those used in the $`\pi ^0\nu \overline{\nu }`$ analysis , with the exception of the $`P_T`$ cut at 260 MeV/$`c`$. Photon veto detectors and drift chambers were used to suppress backgrounds from other kaon decays and hadronic interactions in the detector. The events were required to have the decay vertex in vacuum, with the vertex $`Z`$ position in the range $`125\le Z\le 157`$ m.

We examined the shape of the $`P_T`$ distribution to isolate $`R^0`$ candidates. The $`P_T`$ distribution of the final data sample is shown in Figure 6, along with the background expected from $`K_L\to \gamma \gamma `$ and $`\mathrm{\Lambda }\to n\pi ^0`$ decays. The peak near $`P_T=0`$ is from the decay $`K_L\to \gamma \gamma `$, and the remaining events below $`P_T=160`$ MeV/$`c`$ are from $`\mathrm{\Lambda }`$ decays. The signal search region at $`P_T>160`$ MeV/$`c`$, chosen to minimize background from hyperon and $`K_L`$ decays, is indicated by the arrow. Clearly, we are sensitive to $`R^0`$ masses for which the $`\pi ^+\pi ^-`$ decay cannot proceed. After all cuts, two events remain in the signal region. From studies of $`\mathrm{\Lambda }\to n\pi ^0`$ decays , we expect the number of events from hadronic interactions to be $`4.7\pm 1`$, consistent with the number of events remaining in our sample. Treating these two events as signal, the corresponding 90% C.L. upper limit on the number of observed $`R^0`$ signal events is 5.32. The $`K_L`$ flux in this data sample was measured from 3466 observed $`K_L\to \pi ^0\pi ^0`$ decays. The pQCD prediction for the $`R^0/K_L`$ flux ratio was used as before to obtain upper and lower lifetime limits at the 90% C.L., assuming the $`R^0`$ decays 100% of the time to $`\pi ^0\stackrel{~}{\gamma }`$. Figure 7 shows the exclusion contours for $`r=1.3`$ and $`r=1.4`$ from the $`\pi ^0`$ analysis, together with those from the $`\pi ^+\pi ^-`$ analysis. Note that using the $`\pi ^0`$ analysis, we extend the range of excluded $`M_{R^0}`$ down to $`0.8\mathrm{GeV}/\mathrm{c}^2`$, for lifetimes between $`2.5\times 10^{-10}`$ and $`5.6\times 10^{-6}`$ seconds.

The analyses presented in this paper exclude most $`R^0`$ masses, over six decades in lifetime. A significant portion of the region allowed by the cosmological constraint, $`r\lesssim 1.4`$ and $`M_{R^0}\lesssim 2.2\mathrm{GeV}/\mathrm{c}^2`$, which was not addressed by previous searches is now excluded. We thus definitively close the light gaugino window. Our null results eliminate most SUSY models in which gauginos remain massless at tree-level.
More generally, our understanding of the $`M_{\pi ^+\pi ^-}`$ shape will constrain future models that predict long-lived particles with a $`\pi ^+\pi ^-`$ component in their decays.

We thank Glennys Farrar for suggesting this search and for discussions concerning this work and, along with Rocky Kolb, for pointing out the cosmological significance of this search. We gratefully acknowledge the support and effort of the Fermilab staff and the technical staffs of the participating institutions for their vital contributions. This work was supported in part by the U.S. DOE, The National Science Foundation and The Ministry of Education and Science of Japan. In addition, A.R.B., E.B. and S.V.S. acknowledge support from the NYI program of the NSF; A.R.B. and E.B. from the Alfred P. Sloan Foundation; E.B. from the OJI program of the DOE; K.H., T.N. and M.S. from the Japan Society for the Promotion of Science. P.S.S. acknowledges receipt of a Grainger Fellowship.
# The Quiescent Accretion Disk in IP Peg at Near-Infrared Wavelengths ## 1. Introduction Observations at near-infrared wavelengths (NIR) are well-suited for studying the companion star and the outer accretion disk in cataclysmic variables (CVs) and other compact binaries. The spectral type of the late-type secondary star can be determined from its NIR colors or its absorption line strengths, although both methods usually require an estimate of the contribution of the accretion disk to the near-infrared flux (Berriman et al. (1985); Dhillon & Marsh (1995)). In high-inclination compact binary systems, the NIR light curves typically show ellipsoidal variations from the Roche-lobe shaped secondary star. The ellipsoidal variations constrain the inclination and mass ratio of the binary and, when combined with the observed mass function, place limits on the mass of the primary object, an important means of modeling black hole binary systems (e.g. Haswell et al. 1993, Shahbaz et al. 1994) . Near-infrared photometry of CVs also probes their accretion disks. The visible and ultraviolet light curves of CVs are dominated by flux from the accretion disk, the white dwarf, and the bright spot where the mass transfer stream impacts the disk. The disk flux seen at these wavelengths emanates predominantly from hotter, inner annuli, the disk/white dwarf boundary layer, and the disk chromosphere. Near-infrared data supplement these shorter wavelength observations by probing the cool disk at larger radii. It is in precisely these regions of the disk, where material accreted from the companion star accumulates in quiescence, that many physical phenomena of interest take place, including the initiation of some dwarf novae outbursts. High-inclination CVs show eclipses of the accretion disk at inferior conjunction of the secondary star; in the near-infrared, secondary eclipses of the companion star by the disk are also seen (Bailey et al. (1981); Sherrington et al. (1982)). Flux-ratio diagrams of near-infrared emission from CVs indicate the presence of both opaque and transparent material in the accretion disk, and show that the fraction of optically thick to optically thin gas varies from system to system (Berriman et al. (1985)). A combination of multicolor NIR light curves and spectra can be used to determine the relative contributions of the secondary star and the accretion disk to the total flux, and the amount of optically thick and optically thin emission from the disk. In nearly edge-on systems, where the companion star occults the primary star and the accretion disk, the shape of the eclipse contains information on the pattern of emissivity across the disk. Maximum entropy modeling methods have been successfully used at optical and UV wavelengths to construct maps of the accretion disk intensity and determine its radial brightness temperature profile (see Horne 1993 for a review). In dwarf novae in outburst, visible disk maps show brightness temperature ($`T_{br}`$) profiles roughly consistent with optically thick, steady-state emission (e.g. Horne and Cook 1985; Rutten et al. 1992). Maps of quiescent dwarf novae, however, show flat $`T_{br}`$ profiles and accretion disks that are far from steady-state (Wood et al. (1986); Wood, Horne, & Vennes (1992)). In order to extend multiwavelength modeling of accretion disks, we present the first near-infrared map of a disk, in the cataclysmic variable, IP Peg. 
IP Peg is a dwarf nova consisting of a mass-donating, M-dwarf secondary star and an accretion disk around a white dwarf primary star. It undergoes regular outbursts every few months in which it brightens by approximately two magnitudes in the visible. IP Peg has an orbital period of 3.8 hours. With its high inclination, it is one of the few dwarf novae above the period gap with eclipses of the white dwarf, accretion disk and bright spot. Visible eclipse timings have constrained the geometry of the system, although the fits are complicated by blending of the eclipse features due to the bright spot and white dwarf ingress (Wood & Crawford (1986); Wolf et al. (1993)). The large contribution of the bright spot to the eclipse profile has also precluded any quiescent, visible maps of the accretion disk. Szkody & Mateo (1986) observed IP Peg in the near-infrared. They obtained mean J, H and K colors and their light curves show ellipsoidal variations and both primary and secondary eclipses. In this paper, we present H-band light curves of IP Peg, fits to the ellipsoidal variations of the secondary star, and a map of the accretion disk. Section 2 summarizes the observations and data reduction. Section 3.1 discusses the morphology of the light curves, while Section 3.2 describes modelling the ellipsoidal variations. We describe the eclipse mapping method and present the quiescent disk map in Section 3.3, and consider the dependence of the results on the choice of modeling parameters in Section 3.4. Section 4 discusses the results, and concluding remarks are presented in Section 5. ## 2. Observations and Data Reduction We observed IP Peg for three nights in 1993 September and five nights in 1994 September and October using the infrared imaging camera, ROKCAM (Colomé & Harvey (1993)), on the 2.7-m telescope at McDonald Observatory. We obtained 15 hours of H-band data; the observations are summarized in Table 1. IP Peg was in quiescence during the observations; the next outbursts began on 1993 October 25 and on 1994 December 7 (Bortle 1993a ; Bortle (1995)). We observed IP Peg and a nearby field star, located approximately 34 arcseconds to the southwest, and measured the sky background by nodding the telescope in a grid of nine positions. The data were reduced using the standard IRAF data reduction packages and the DAOPHOT aperture photometry routines. The data initially showed evidence of an instrumental artifact, causing variations of up to $`\pm 0.1`$ mag in both the target and the comparison stars. The variations were correlated with the grid position of the star on the array, which we attributed to a poor match between the dome flats and the actual sensitivity variations across the ROKCAM chip. We removed most of the instrumental signature by applying corrective terms to the calculated instrumental magnitudes. The correction for each grid position and each star was computed from the difference between the mean magnitude of the star and its mean magnitude in a given grid position for each night. We then subtracted the field star magnitude from the IP Peg magnitude in each frame to correct for atmospheric effects. The shapes of the resulting light curves did not vary appreciably from one night to the next, so we combined the data into three mean light curves: 1993 September, 1994 September, and 1994 October. The individual nights were shifted to have the same mean magnitude before combining (a typical shift of 0.008 mag). 
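The per-grid-position correction and the differential photometry described above amount to a few lines of bookkeeping. The sketch below is illustrative rather than the original reduction scripts; the table layout and column names (frame, night, star, grid_pos, mag) are invented for the example.

```python
# Minimal sketch of the per-grid-position correction and differential photometry
# described above.  Not the original reduction scripts; column names are invented.
import pandas as pd

def correct_grid_positions(df):
    """Remove the grid-position offset: subtract (per-night, per-star, per-position
    mean magnitude) minus (per-night, per-star mean magnitude)."""
    pos_mean = df.groupby(["night", "star", "grid_pos"])["mag"].transform("mean")
    night_mean = df.groupby(["night", "star"])["mag"].transform("mean")
    out = df.copy()
    out["mag_corr"] = df["mag"] - (pos_mean - night_mean)
    return out

def differential_light_curve(df, target="IPPeg", comparison="field_star"):
    """Target-minus-comparison magnitudes, matched frame by frame."""
    t = df[df["star"] == target].set_index("frame")["mag_corr"]
    c = df[df["star"] == comparison].set_index("frame")["mag_corr"]
    return (t - c).dropna()

# Usage (with a DataFrame holding columns: frame, night, star, grid_pos, mag):
# frames = correct_grid_positions(frames)
# dm = differential_light_curve(frames)
```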
The data from 1994 October 27 were not used in the mean 1994 October light curve because IP Peg appeared to be fainter on that night, and with the limited data, it was impossible to determine whether this was real or an instrumental effect. We converted the data from magnitude to arbitrary intensity units and binned them using phase bins equal to 0.5% of the orbital period. We used the Wolf et al. (1993) ephemeris, in which phase zero corresponds to the phase of white dwarf egress. A bin size of 0.005 is equal to a time interval of 1.14 minutes. We calibrated the data using standard star observations from 1994 September 30 (Elias et al. (1982)). Since only two standard stars were observed, we fit the data using a simple transformation equation with a fixed extinction coefficient and no color terms (Allen (1976)). The mean H magnitude for IP Peg on 1994 September 30 was $`12.14\pm 0.11`$, a value consistent with previous observations (Szkody & Mateo (1986)). The conversion from magnitude to intensity units was calibrated using a flux calibration of $`10^6`$ mJy for a zero-magnitude M-dwarf star (Berriman & Reid (1987)). The standard deviation of the mean after flux calibration averaged $`\pm 0.12`$, $`\pm 0.06`$, and $`\pm 0.11`$ mJy for 1993 September, 1994 September and 1994 October, respectively. The uncertainties were dominated by systematic effects such as the sensitivity variations of pixels across the array, so we set the error bars in each light curve equal to the mean error. We tripled the error bar on a single bin near primary minimum in the 1994 September light curve ($`\varphi `$ = 0.0625; see Figure 11), because the deviation of this point from the surrounding points is not real, but a remnant of the instrumental calibration error. ## 3. Analysis of the Quiescent Light Curves ### 3.1. Morphology The mean H-band light curves of IP Peg are shown in Figures 1 through 3. Due to the difficulty in measuring the white dwarf ingress in IP Peg, the published ephemeris designates $`\varphi `$ = 0 as the phase of white dwarf egress, but convention – and the maximum entropy modeling program – place the point of zero phase at inferior conjunction of the secondary star. The observed light curves can be shifted to the conventional zero phase using the known width of the white dwarf eclipse (Wood & Crawford (1986)). Due to changes in IP Peg’s orbital period, however, the linear ephemeris is imprecise. As a result, we shifted the light curves by half the white dwarf eclipse width plus an additional amount ($`\mathrm{\Delta }\varphi `$ = 0.027) to correct for deviations from the ephemeris. This additional shift was determined while mapping the accretion disk (see Section 3.3) and applied iteratively to the data. The orbital phasing shown in Figures 1 through 3 incorporates this correction. All three light curves are dominated by what appears to be ellipsoidal variations from the late-type companion star, but it is clear that the secondary star is not the only near-infrared emitter. The peak in the light curve near $`\varphi `$ = 0.75 is greater than the peak near $`\varphi `$ = 0.25; typically, this phenomenon is indicative of beamed emission from the bright spot. The primary eclipse of the accretion disk is visible in the 1993 September and 1994 September light curves. A primary eclipse is probably present in the 1994 October data, but it is less obvious in that light curve. 
The 1994 September light curve also shows bright spot ingress and egress features (near $`\varphi `$ = 0.9 and 0.1 in Figure 2 and more clearly in Figure 5). The primary eclipse is not deep, suggesting that the integrated flux from the accretion disk in IP Peg is small in the H-band. The second minimum in the data near $`\varphi `$ = 0.5 is caused partially by the ellipsoidal variations, but a secondary eclipse of the companion star by the accretion disk may also be present. The 1993 September and 1994 September light curves are morphologically similar. The flux at primary minimum is nearly the same, but the 1993 data has smaller peak-to-peak variations, indicating that the accretion disk and/or the bright spot fluctuate in brightness. The 1994 October light curve is significantly different from the September data: the primary eclipse is shallower and the eclipse minimum is shifted to a later orbital phase. The secular variations in IP Peg precluded combining all the data into a single light curve. Since the 1994 September light curve has the best signal-to-noise ratio, we concentrated on modeling it. All further references to the data, unless stated otherwise, refer to Figure 2. ### 3.2. Modeling the Ellipsoidal Variations The first step in analyzing the light curve required modeling and removing the contribution of the secondary star. The ellipsoidal variations were modeled using a light-curve synthesis program (Zhang et al. (1986); Haswell et al. (1993)). The program calculates the flux from a Roche lobe-filling secondary star by dividing the surface of the star into a grid. The flux at each grid position is modified to account for the effects of gravity-darkening and limb-darkening: $$T(r,\theta ,\varphi )=T_{pole}\left[\frac{g(r,\theta ,\varphi )}{g_{pole}}\right]^\beta $$ (1) and $$I=I_0(1u+u\mathrm{cos}\gamma )$$ (2) where $`\beta `$ and $`u`$ are the gravity darkening and linear limb-darkening coefficients, respectively, $`T_{pole}`$ and $`g_{pole}`$ are the temperature and surface gravity at the pole of the star, $`I_0`$ is the intensity emitted normal to the surface of the star, and $`\gamma `$ is the angle between the line of sight and the normal to the surface. The output of the program is a light curve of flux versus orbital phase in arbitrary flux units. Traditionally, the program has been used to create model light curves by varying the free parameters until the fit to the observed data has been optimized. The free parameters for a model of the secondary star are the mass ratio, $`q`$, the orbital inclination, $`ı`$, $`T_{pole}`$, $`\beta `$, and $`u`$. Since the secondary star is not the only source of modulation in the light curve of IP Peg, we initially fit the model light curve to the data in the orbital phases between $`\varphi `$ = 0.1 and 0.4. We assumed that the flux at these phases consists of the ellipsoidal variations plus a constant disk component, that is: $$F_{obs}(\varphi )=cF_2(\varphi )+F_d$$ (3) where $`F_{obs}`$ is the observed data, $`F_2`$ is the flux from the secondary star and $`F_d`$ is the contribution of the accretion disk. The constant term, $`c`$, scales the modeled secondary star flux to the data. We then solved for $`c`$ and $`F_d`$ using a least-squares fit to determine the fractional contributions of the secondary star and the disk to the observed data. 
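For reference, the least-squares solution of Equation 3 over a phase window is a two-parameter linear fit. The sketch below is illustrative only; the array names are placeholders, and the actual fit described in the text uses the synthesized ellipsoidal light curve for $`F_2(\varphi )`$.

```python
# Sketch of the linear least-squares solution of Eq. (3), F_obs = c*F_2 + F_d,
# over a chosen phase window.  Illustrative only; the arrays below are placeholders.
import numpy as np

def fit_secondary_plus_disk(phase, f_obs, f_model, window=(0.1, 0.4)):
    """Return (c, F_d) minimizing |f_obs - (c*f_model + F_d)|^2 within the window."""
    sel = (phase >= window[0]) & (phase <= window[1])
    A = np.column_stack([f_model[sel], np.ones(sel.sum())])   # design matrix [F_2, 1]
    (c, f_d), *_ = np.linalg.lstsq(A, f_obs[sel], rcond=None)
    return c, f_d

# Usage with phase-binned data and a synthesized ellipsoidal model F_2(phase):
# c, f_d = fit_secondary_plus_disk(phase, flux_obs, flux_model)
# A negative f_d would signal the failure mode discussed in the text.
```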
This method assumed that the uneclipsed disk flux is constant over the observing period and that — consistent with the visible light curves of IP Peg — the bright spot does not contribute to the data at these orbital phases (Wood & Crawford (1986)). To create the model light curve of the secondary star, $`F_2(\varphi )`$, we initially set $`q`$ = 0.49 and $`ı`$ = 80$`\stackrel{}{\mathrm{.}}`$9, values derived from the visible eclipses of IP Peg (Wood & Crawford (1986)). The spectral type of the secondary star and its temperature were determined from the observed (H-K) color and set to $`T_{pole}`$ = 3050 K (Szkody & Mateo (1986); Leggett (1992)). The gravity darkening coefficient was set to $`\beta `$ = 0.05 (Sarna (1989)), and the limb-darkening coefficient was extrapolated from Wade & Rucinski (1985) and set to $`u`$ = 0.30. We also varied the parameters to cover the range of physically permitted models for the secondary star, varying $`q`$ from 0.35 to 0.6, $`ı`$ from 89$`\stackrel{}{\mathrm{.}}`$5 to 79, $`T_{pole}`$ from 2800 to 3200 K, $`u`$ from 0.2 to 0.5, and $`\beta `$ from 0.05 to 0.08. We were unable to fit a viable model to the observed data for any of the parameter values. Every fit to $`\varphi `$ = 0.1 to 0.4 generated more modeled flux than observed flux at other orbital phases. This was particularly evident at mid-eclipse, $`\varphi `$ = 0, where the modeled secondary star flux must be equal to or less than the observed flux, and instead was higher. The best fits to Equation 3 for the secondary star models also generated negative, unphysical values for the disk flux, $`F_d`$. The failure of the ellipsoidal variations plus a constant disk component to fit the data for $`\varphi `$ = 0.1 – 0.4 demonstrates that an additional, phase-dependent source contributes to the IP Peg light curve in the near-infrared, partly mimicking an ellipsoidal variation. Since the near-infrared light curve alone is insufficient to constrain the contribution of the secondary star, we abandoned the use of Equation 3 and calculated the model for the ellipsoidal variations using best-guess parameter values obtained from previous visible and near-infrared observations: $`q`$ = 0.49, $`ı`$ = 80$`\stackrel{}{\mathrm{.}}`$9, and $`T_{pole}`$ = 3050 K. The model light curve was then scaled to the data at the primary eclipse. The scaling was done iteratively. First, we assumed that the secondary star is the only source of flux at $`\varphi `$ = 0 and set the model flux equal to the observed flux at this phase. After modeling the accretion disk, we determined that 8% of the area of the accretion disk remains unocculted at primary minimum (i.e. the sides of the disk stick out at mid-eclipse). Using the primary eclipse depth (see Figure 6) and assuming a roughly uniform distribution of flux in the disk, we determined that the unocculted disk emits 0.15 mJy at minimum light. Accordingly, we rescaled the model to the observed data assuming 0.15 mJy of unocculted light at primary minimum. For the final model, we improved the light curve synthesis program via the addition of improved limb-darkening coefficients and specific intensities for the secondary star ($`I_0`$). We obtained these parameters using the Allard model atmospheres for cool M dwarf stars (Allard & Hauschildt (1995)). For the secondary star in IP Peg, we assumed T = 3000 K, log g = 4.5, and solar metallicity. The resulting linear limb-darkening coefficient in the H-band is $`u0.20`$. 
The limb-darkening profile in the H-band is decidedly non-linear, however, so the limb-darkening equation (Equation 2) was modified to use a quadratic approximation: $$I=I_0(1a(1\mathrm{cos}\gamma )b(1\mathrm{cos}\gamma )^2)$$ (4) where $`a`$ and $`b`$ are the quadratic limb-darkening coefficients. The coefficients were derived from the model atmospheres following the method outlined in Wade & Rucinski (1985) . For IP Peg, the coefficients were $`a`$ = 0.022 and $`b`$ = 0.321. Finally, we tested the effects of including irradiation of the secondary star in the model (assuming a 15,000 K white dwarf; Marsh 1988). The change in the depth of the ellipsoidal dip at $`\varphi `$ = 0.5 was 0.04 mJy, or 0.36%. We neglected the small effect of irradiation to avoid additional model-dependent choices for the temperature of the white dwarf and the albedo of the secondary star. Figure 4 shows the model ellipsoidal variations scaled to the observed data and Figure 5 is the light curve from which the model secondary star flux has been subtracted. The ratio of the mean secondary flux to the mean observed flux shows that the secondary star provides approximately 85% of the observed H-band flux, a value consistent with the estimate in Szkody & Mateo (1986) that the disk contributes up to 20% of the H-band flux. The subtracted light curve has both a primary and a secondary eclipse and shows bright spot ingress and egress features during primary eclipse. The peak in the light curve near $`\varphi `$ = 0.8 is early in phase compared to the visible (where it peaks near $`\varphi `$ = 0.9; Wood and Crawford 1986). The gradual decline from the peak to the start of bright spot ingress is more extended as a result. If the peak is due to beamed emission from the bright spot, its phasing suggests an unusual position for the spot on the disk. The subtracted light curve also confirms the presence of a secondary eclipse of the companion star by the accretion disk. Unlike the visible data, the near-infrared light curve is not flat after the primary eclipse, and instead shows a second peak. It was this hump which accounted for our failure in fitting the ellipsoidal variations to this region of the light curve. The appearance of the subtracted light curve resembles a double-hump variation such as those seen in the visible light curves of the dwarf novae WZ Sge and Al Com (Robinson et al. (1978); Patterson et al. (1996)). A double-hump variation in IP Peg would account for the early phase position of the hump before primary eclipse, and for the presence of the second hump in Figure 5. This phase-dependent variation may be present in the light curves of other CVs; if so, the double-hump variation may have been confused with the ellipsoidal variations in other binary systems. ### 3.3. Maximum entropy eclipse mapping of the accretion disk The accretion disk was modeled using the maximum entropy eclipse mapping program developed by Horne (1985) . The model assumes that a Roche-lobe filling secondary star eclipses a flat accretion disk which lies in the plane of the binary orbit. The maximum entropy method (MEM) divides the surface of the accretion disk into a two-dimensional grid and numerically solves for the intensity distribution that best fits the eclipse data while maximizing the entropy of the model relative to a default map of the disk. The program is iterated repeatedly and the default map regularly updated until the desired quality of fit ($`C_{aim}\chi _\nu ^2`$) is achieved. 
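The two quantities that such a maximum entropy iteration balances can be sketched as follows. This is not Horne's code; the entropy expression is one common (Skilling-type) form written relative to a default map, and the sketch should be read only as an illustration of the trade-off between the fit statistic $`C`$ and the map entropy.

```python
# Sketch of the two quantities an eclipse-mapping MEM iteration trades off:
# the chi-squared per data point of the predicted eclipse light curve, and the
# entropy of the map relative to a default map (a common Skilling-type form,
# not necessarily the exact expression used in the mapping program).
import numpy as np
from scipy.ndimage import gaussian_filter

def fit_statistic(flux_obs, flux_pred, sigma):
    """C = chi^2 per data point for the predicted eclipse light curve."""
    return float(np.mean(((flux_obs - flux_pred) / sigma) ** 2))

def relative_entropy(image, default):
    """Entropy of the map relative to the default map (larger = closer/smoother)."""
    p = np.clip(np.asarray(image, float), 1e-30, None)
    q = np.clip(np.asarray(default, float), 1e-30, None)
    return float(np.sum(p - q - p * np.log(p / q)))

def gaussian_smoothed_default(image, sigma_pix):
    """Default map built by Gaussian-smoothing the current map (method 2 in the text)."""
    return gaussian_filter(np.asarray(image, float), sigma_pix)
```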
We chose a disk radius equal to $`R_d`$ = 0.56 $`R_{L_1}`$ based on the variation of the visible bright spot position with time after outburst (Wolf et al. (1993)) and the date of the 1994 September observations relative to the previous outburst. After initial modeling (see below), the disk radius was increased to $`R_d`$ = 0.6 $`R_{L_1}`$. The initial modeling also demonstrated that fitting the data to values of $`C_{aim}<`$ 2.0 introduced spurious features in the disk map by fitting flickering in the data. To avoid this, we set the target fit to $`C_{aim}`$ = 2.0. The maximum entropy program assumes that any variation in the light curve is due to the eclipse, necessitating the removal of all non-eclipse features from the light curve before mapping (i.e. the orbital humps and/or the anisotropic emission from the bright spot). To remove the double-hump variation, we fit a double-sine curve to the data, excepting the regions of the primary and secondary eclipses. The fit is shown in Figure 5. This figure also shows the extent of the primary and secondary eclipses in orbital phase for a disk radius of $`R_d`$ = 0.6 $`R_{L_1}`$. The fit was used to rectify the light curve to a constant value outside of eclipse. The rectified light curve is shown in Figure 6; the flatness of the light curve outside of eclipse is evidence that the double-sine curve is a good approximation to the orbital humps in Figure 5. Usually, the default maps are smoothed azimuthally so that the maximum entropy solution favors the most axisymmetric model to fit the observed data. This has the effect of suppressing azimuthal information in the model accretion disk while the radial intensity distribution remains largely constrained by the data (Horne (1985)). The traditional method of choosing an axisymmetric default map in MEM modeling is ill-suited to eclipses of IP Peg in quiescence, however, because of the dominant contribution of the bright spot to the eclipse profile. We tried many alternative default maps and modeling schemes to determine the best disk map and to test the dependence of the map on the method chosen. Two methods will be discussed in this text: 1. We started with a uniform default map of constant intensity and iterated the program until $`C_{aim}`$ was reached and the entropy maximized. The default map was not updated while iterating. The resulting disk map is the least model-dependent map possible, but also the least physical, subject to ’crossed-arch’ distortions of compact features in the map (Horne (1985)). 2. We allowed the initially uniform default map to evolve as the disk map converged on a solution. The default map was updated regularly by smoothing the disk map with a Gaussian of width $`\sigma `$ = 0.01 $`R_{L_1}`$. The constraint of maximizing entropy then suppresses disk structure on scales smaller than $`\sigma `$, while allowing the data to constrain broad features, producing the smoothest disk map consistent with the data (Horne (1985)). Figure 7 is the contour map of the accretion disk for the flat default map. By assuming blackbody emission at each annulus, we derived the disk’s radial brightness temperature profile, shown in Figure 8. We used a distance to IP Peg of 121 pc, based on the Szkody & Mateo (1986) distance determination but correcting for the variation in the K-band surface brightness with (V–K) (Ramseyer (1994)). We set $`R_{L_1}=4.5\times 10^{10}`$ cm (Wood & Crawford 1986, assuming the intermediate value for the white dwarf boundary layer in Table 4). 
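Converting a mapped intensity into a brightness temperature amounts to inverting the Planck function at the observing frequency. The sketch below is a generic illustration, not the mapping code used for Figs. 8 and 10; the geometry (pixel area, distance, projection) is collapsed into a single specific intensity, and the example numbers are invented.

```python
# Sketch: brightness temperature from a blackbody inversion at the H band.
# Illustrative only -- the example flux density and patch size are invented.
import numpy as np

H_PLANCK = 6.626e-27   # erg s
K_B      = 1.381e-16   # erg/K
C_LIGHT  = 2.998e10    # cm/s

def brightness_temperature(i_nu, wavelength_cm=1.65e-4):
    """Invert B_nu(T) = i_nu for T.

    i_nu: specific intensity [erg s^-1 cm^-2 Hz^-1 sr^-1]; 1.65 um ~ H band.
    """
    nu = C_LIGHT / wavelength_cm
    x = 2.0 * H_PLANCK * nu**3 / (C_LIGHT**2 * i_nu)
    return (H_PLANCK * nu / K_B) / np.log(1.0 + x)

def specific_intensity(flux_density_mjy, solid_angle_sr):
    """Observed flux density [mJy] from a patch of solid angle [sr] -> intensity."""
    return flux_density_mjy * 1.0e-26 / solid_angle_sr   # 1 mJy = 1e-26 erg/s/cm^2/Hz

# Example: a patch of projected size (0.1 R_L1)^2 with R_L1 = 4.5e10 cm at 121 pc,
# emitting 0.01 mJy (numbers invented for illustration).
d_cm = 121.0 * 3.086e18
omega = (0.1 * 4.5e10) ** 2 / d_cm**2
print(f"T_br = {brightness_temperature(specific_intensity(0.01, omega)):.0f} K")
```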
Despite the distortions present in the map, several features stand out. The accretion disk has a flat radial brightness temperature profile and a bright spot. We found the radius of the spot by creating three disk maps of varying outer disk radius: $`R_d`$ = 0.56, 0.6 and 0.7 $`R_{L_1}`$. In the latter two cases, the spot intensity peaked at $`x`$ = 0.47 and $`y`$ = 0.34 $`R_{L_1}`$. This places the radius of the spot, and thus a lower limit to the disk radius, at 0.58 $`R_{L_1}`$. This value is larger than the typical radius of the visible spot during this time in quiescence, so the disk map with an outer radius $`R_d`$ = 0.56 was too small. We subsequently set the disk radius in the maps to $`R_d`$ = 0.6 $`R_{L_1}`$. The position of the spot on the disk does not correspond to the range of positions for the bright spot in IP Peg in visible light; nor does it lie along the theoretical mass stream trajectory for material being accreted from the secondary star (the mass stream trajectory is shown in Figure 9; Wood et al. 1989b ). The disk map in Figure 7 was also used to determine the position of $`\varphi `$ = 0 in the observed light curve. This was done by shifting the light curve in phase until the intensity contours at small disk radii and the central ’crossed-arch’ distortion were aligned with the geometric center of the disk. The corrected time of $`\varphi `$ = 0 was used to calculate the orbital phasing shown in Figures 1 – 3. Figure 9 shows the contour map of the accretion disk created using the Gaussian-smoothed default map. Figure 10 is the radial brightness temperature profile for this disk map, and Figure 11 shows the fit of the disk map to the observed light curve. Again, the brightness temperature profile shows that the bulk of the accretion disk has a flat profile with a brightness temperature $`T_{br}`$ 3000 K. A bright spot with a peak temperature $`T_{br}`$ 10,000 K is located at the edge of the disk. The disk map also shows a region of enhanced intensity in the back of the disk. In some disk maps, a bright region in the back of the disk is caused by an underestimate of the unocculted background flux in the system (Rutten, van Paradijs & Tinbergen (1992)). This is not the case in IP Peg, where alterations in the amount of background light do not affect this region of the disk map. Excess light in the back of the disk has also been indicative of a flare in the opening angles of the disk (Robinson et al. (1995)). It appears more likely that the enhanced emission in the map of IP Peg is an artifact of the maximum entropy constraint, which favors producing the smoothest disk map possible, spreading emission from the bright spot over the disk along lines of constant shadow (Horne (1985)). The disk map also shows a lower surface brightness edge at large radii in the fourth quadrant of the disk in $`xy`$ space. The feature is very robust — showing up in every disk map we created — and is dictated by the shape of the observed eclipse. ### 3.4. Dependence of the results on the modeling parameters chosen. Many steps in the modeling of the light curve required fixing parameters whose true values are uncertain. Below, we discuss how the modeling results are affected by changes in several key parameter values: 1. The binary geometry, $`q`$ and $`ı`$: The value of the mass ratio in IP Peg, $`q`$, could range from 0.35 to 0.6, although most observations indicate that $`q`$ 0.49 (Wood & Crawford (1986); Marsh (1988); Wolf et al. (1993)). The inclination is most likely between 79 and 81. 
The disk maps we present assumed $`q`$ = 0.49 and $`ı`$ = 80$`\stackrel{}{\mathrm{.}}`$9. We repeated the entire modeling process for $`q`$ = 0.6 and $`ı`$ = 79 (Wolf et al. (1993)). The morphology of the double-hump variation and the relative depths and shapes of the two eclipses remained the same in the new light curve, while the peaks in the light curve (near $`\varphi `$ = 0.25 and 0.75) increased by 0.1 mJy. The disk maps consequently showed the same flat brightness temperature distribution and the prominent bright spot seen in Figures 7 and 9. Changing the system geometry did decrease the brightness temperature in the accretion disk from roughly 3000 K to 2000–2500 K, with a slight rise in temperature with disk radius. The peak bright spot temperature declined from 10,000 K to 6000 K, although much of this decline appears to be due to the flux from the spot being distributed across the disk (hence the rise in disk temperature with radius). 2. The orbital ephemeris and choice of $`\varphi `$ = 0: Given the rapid changes in IP Peg’s orbital period and the absence of a white dwarf egress feature in the near-infrared, it is difficult to determine the correct orbital phasing for the light curve. We chose the time of $`\varphi `$ = 0 using the intensity distribution in the accretion disk map. We tested this assumption by shifting $`\varphi `$ = 0 by 0.01 in phase, so that the primary eclipse minimum coincides with inferior conjunction of the secondary star. Moving $`\varphi `$ = 0 causes small changes in the fits of the ellipsoidal variations and double-sine curves to the data and also rotates features in the accretion disk map. The resulting disk map had the same intensity distribution, but the intensity contours were no longer centered on the geometric center of the disk and the spot was rotated closer to the white dwarf–secondary star axis. Even with the rotation, however, the bright spot position did not coincide with the visible bright spot position or with the theoretical mass stream trajectory. 3. Fitting the ellipsoidal model to the data: We scaled the model ellipsoidal light curve to the data by assuming that 8% of the accretion disk flux in Figure 6 was unocculted at primary minimum. Since the bright spot is a significant source of flux in IP Peg, the assumption of uniform disk emission is not strictly true (see Figure 9); however, 0.15 mJy of background light is a good estimate, and changing the background light by up to 0.05 mJy did not affect the results. Larger changes in the background light (either by eliminating it or by adding large amounts of extra flux) showed up as artifacts in the disk maps, in particular as light being removed from or added to the edges of the disk, which remain unocculted at primary minimum. 4. Fitting and removing the double-hump variation: Subtracting the double-sine fit across the primary eclipse (Figure 5) introduces uncertainty in interpreting the subsequent disk maps. Without knowing the source of the double-hump variation, and in particular if it is wholly or partially an anisotropic emitting source, it is difficult to determine the shape of the rectification across the primary eclipse. To test the dependence of the models on the shape of the fit across the eclipse, we repeated the disk mapping procedure after using a straight line drawn between the beginning and end-points of the eclipse to rectify the light curve. 
The resulting accretion disk map was unchanged, with the exception of a slightly lower peak bright spot temperature (from 10,000 K to 9000 K) and a larger spot extent on the face of the disk. 5. The white dwarf: Throughout the modeling, we ignored the contribution of the white dwarf to the H-band flux. Marsh (1988) states that the temperature of the white dwarf is less than 15,000 K, and that it contributes less than 0.2 mJy in the visible. We used the light curve synthesis models (discussed in Section 3.2) to estimate that a 15,000 K white dwarf would contribute approximately 0.08 mJy to the H-band flux, a small, but not undetectable, amount (geometric parameters for the white dwarf came from Wood & Crawford 1986). The light curves showed no evidence of white dwarf egress features, however, and the accretion disk maps did not add light to small disk radii, as would be expected for unaccounted–for white dwarf flux. This suggests that the white dwarf may be much cooler than the upper limit of 15,000 K. The alternate modeling methods discussed above demonstrate that key features in the accretion disk map of IP Peg are robust: the flat intensity distribution in the disk, the unusual location for the bright spot, and the brightness temperatures of the accretion disk and bright spot are relatively unaffected by changes in the modeling parameters used. ## 4. Discussion Figure 5 shows the H-band light curve of IP Peg after the ellipsoidal variations of the secondary star have been removed. There is a double-peaked component to the light curve that mimics but cannot be attributed to the ellipsoidal variations. This orbital modulation is not apparent in the visible light curves of IP Peg, which are flat following primary eclipse. The peak before primary eclipse also occurs at an earlier phase in the NIR than in the visible. While the visible peak is typically attributed to anisotropic beaming from the bright spot, the early phase position of the peak in the near-infrared implies an additional contribution from the double-hump source. The variation may originate in any of the cool emitters seen in the near-infrared: the accretion disk, the bright spot, or the secondary star. Star spots caused by magnetic activity have been observed on late-type stars and may preferentially occur at the inner Lagrangian point in binaries (Ramseyer, Hatzes, & Jablonski (1995); Livio & Pringle (1994)). A star spot at the inner Lagrangian point in IP Peg could contribute to the depth of the light curve at secondary minimum ($`\varphi `$ = 0.5), although a second persistent spot on the opposite side of the star would also be necessary to create the observed double-humped variation. The morphology of the variation observed in IP Peg most strongly resembles the double-hump profile seen in the quiescent, visible light curves of WZ Sge and AL Com, two dwarf novae with extremely short orbital periods. In these two systems, the double-hump variation appears to originate in the accretion disk. In WZ Sge, for example, the secondary star is virtually invisible even in the near-infrared, and cannot be the source of the strong variability (Dhillon (1998)). A double-hump variation is also present in both systems during the early days of outburst (after which the more common superhumps are seen), again pointing to a disk origin for the variability (Patterson et al. (1981, 1996); Kato et al. (1996)). 
Models of the double-hump profile in WZ Sge and AL Com have included attributing it to the bright spot being visible on both the near and far sides of the accretion disk, or to a spiral dissipation pattern in the disk caused by a 2:1 resonance instability (Robinson et al. (1978); Patterson et al. (1996)). Observations of visible continuum emission in IP Peg have shown a second hump in the light curve, which may be caused by the bright spot shining through the disk (Wolf et al. (1998)). Recent visible spectra of IP Peg on the rise to outburst and in quiescence have also indicated the presence of multiple bright emission sites in the accretion disk, which could be due to spiral structure in the (outbursting) disk or to a second bright spot from mass-stream overflow (Steeghs et al. (1997); Wolf et al. (1998)). The presence of the double-hump variation in the NIR light curves of other CVs cannot be discounted, particularly since the phenomenon may be inadvertently attributed to ellipsoidal variations from the secondary star. It is possible that the contribution of the secondary star to the light curve could be estimated incorrectly, affecting determinations of the absolute magnitudes for the secondary star and the accretion disk. Ellipsoidal variations have also been used to constrain the values of $`q`$ and $`ı`$ in other compact binaries. When these values are used to determine the mass of the primary object, such as in black-hole binaries, the presence of a variable disk contribution to the light curve can significantly alter the calculated results (cf. Sanwal et al. 1995, Shahbaz et al. 1996). The contour map of the quiescent accretion disk in IP Peg (Figure 9) shows that the disk has a flat surface brightness distribution. The prominent feature is the bright spot. The location of the bright spot on the disk is unusual. It does not correspond to the range of positions of the bright spot in the visible nor to the theoretical trajectory of the mass stream from the secondary star (shown in Figure 9). The near-infrared bright spot is located at a larger azimuth than the spot seen at visible wavelengths (where azimuth is measured relative to the line between the secondary star and the inner Lagrangian point); and it is at a larger disk radius than is usually seen at this time in quiescence in the visible, although disk radii this large have been seen in visible measurements at times soon after outburst (Wolf et al. (1993)). The disk map also has a lower-intensity edge at large radii in the part of the disk facing the inner Lagrangian point in Figure 9. It is unclear why this occurs, but the feature is robust and unaffected by the method used to create the disk map or the system parameters chosen. Except for the spot and turned-down edge, the brightness temperature of the disk (Figure 10) is flat and clearly deviates from the $`Tr^{3/4}`$ law for a steady-state disk. In this respect, it resembles the quiescent disk maps of the other eclipsing dwarf novae, Z Cha, OY Car, and HT Cas, at visible wavelengths (Wood et al. (1986); Wood et al. 1989a ; Wood, Horne, & Vennes (1992)). The $`T_{br}(r)`$ profile of IP Peg is even flatter than in these systems, however, and the brightness temperature is lower as well. The effect could be caused by differing opacities in different wavebands, or the brightness temperature in the visible (where the disk in IP Peg has not been mapped) may be lower in IP Peg than in the other eclipsing dwarf novae. 
Temperatures as low as 2500 K have been seen in OY Car soon after an outburst; the disk becomes steadily warmer during quiescence (Wood (1986)). The observations we modeled were taken a month after the previous outburst of IP Peg (Bortle 1993b ), so the low temperatures observed are not likely to be caused by a similar phenomenon. The flat, non-steady state brightness temperature profiles seen in these quiescent CVs provide more support for the disk instability model of normal dwarf novae outbursts, in which an outburst is caused by a thermal limit cycle instability in the accretion disk (e.g. Cannizzo 1993). The presence of the secondary eclipse in the near-infrared light curves limits the interpretation of the models of the primary eclipse. The secondary eclipse indicates that the accretion disk is occulting some of the flux from the secondary star, so the observed near-infrared disk flux cannot completely originate in an optically thin disk. The secondary eclipse is too shallow to be caused by a fully opaque disk, however. We tested the expected depth of the secondary eclipse for an optically thick accretion disk by allowing a dark, opaque disk to occult the secondary star (using the light curve modeling code discussed in Section 3.2). For an accretion disk radius of $`R_d`$ = 0.6 $`R_{L_1}`$, the secondary hump at $`\varphi `$ = 0.5 would be 1.85 mJy deeper than it is without the secondary eclipse; the effect of adding the secondary eclipse is shown in Figure 12. This depth is much larger than the dip at $`\varphi `$ = 0.5 in Figure 5, even before the double-sine fit is subtracted. A smaller disk radius of $`R_d`$ = 0.56 $`R_{L_1}`$ (the original guess based on visible observations) occults 1.7 mJy at secondary eclipse, still too deep to be consistent with the observed hump at $`\varphi `$ = 0.5. The shallow secondary eclipse also rules out a two-phase accretion disk (an opaque, cool disk with a hot, optically thin chromosphere), unless the size of the opaque disk is smaller than the disk radius inferred from the observed visible and near-infrared bright spot positions. The shallow secondary eclipse shows that optically thin emission dominates the observed H-band flux from most of the disk in IP Peg. This result is consistent with the near-infrared colors of IP Peg, which, when plotted on a flux-ratio diagram point to a strong optically thin component to the flux (Szkody & Mateo (1986); Berriman et al. (1985)). The distribution of the transparent and opaque gas is unclear, however. The disk emissivity could be patchy. In particular, while the accretion disk is primarily optically thin, the bright spot emission is probably optically thick because its brightness temperature is greater than 10,000 K, and is near the temperature of the spot determined from visible observations (Marsh (1988)). The effective temperature of the accretion disk is a more complex issue. For a one-phase disk, either completely opaque or transparent, the radial brightness temperatures in Figure 10 give a rough lower limit to the effective temperature of the disk. For a transparent disk, the brightness temperature is likely to be a severe underestimate of the actual disk temperature. Since the secondary eclipse in IP Peg is too shallow to allow for an opaque accretion disk, models for the quiescent disk which have temperatures of $`T_{eff}`$ 5000 – 6000 K (and are optically thin) are more consistent with the data than models which invoke a cold ($`T_{eff}`$ 2000 – 3000 K), optically thick disk. 
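The contrast between the mapped brightness temperatures and a steady-state profile is easy to make quantitative. The short sketch below is only an illustration (it is not the modeling code used here): the white-dwarf mass, accretion rate, and disk radii are placeholder values chosen to give representative numbers. It inverts the Planck function at 1.65 μm to turn an H-band surface brightness into a brightness temperature, and evaluates the standard steady-state effective temperature profile, which falls roughly as $`T\propto r^{-3/4}`$ away from the inner edge.

```python
import numpy as np

# Physical constants (SI) and a solar mass
h, c, k_B = 6.626e-34, 2.998e8, 1.381e-23
sigma_SB, G, M_sun = 5.670e-8, 6.674e-11, 1.989e30

def brightness_temperature(I_lam, lam=1.65e-6):
    """Temperature of the blackbody whose Planck function equals the observed
    surface brightness I_lam (W m^-2 m^-1 sr^-1) at wavelength lam (here H band)."""
    return (h * c / (lam * k_B)) / np.log(1.0 + 2.0 * h * c**2 / (lam**5 * I_lam))

def steady_state_Teff(r, M_wd=1.0 * M_sun, Mdot=1e-11 * M_sun / 3.156e7, R_in=8.0e6):
    """Steady-state disk effective temperature; far from R_in it follows T ~ r^(-3/4).
    M_wd, Mdot (kg/s, here 1e-11 solar masses per year) and R_in are placeholders."""
    f = 1.0 - np.sqrt(R_in / r)
    return (3.0 * G * M_wd * Mdot * f / (8.0 * np.pi * sigma_SB * r**3)) ** 0.25

# Check: the surface brightness of a 3000 K blackbody maps back to T_br = 3000 K
lam, T_true = 1.65e-6, 3000.0
B_lam = 2.0 * h * c**2 / lam**5 / np.expm1(h * c / (lam * k_B * T_true))
print(brightness_temperature(B_lam))

# Steady-state profile at a few radii (metres), for contrast with the flat mapped profile
radii = np.array([1.6e7, 5.0e7, 1.5e8, 4.0e8])
print(steady_state_Teff(radii))
```

With these placeholder numbers the steady-state profile spans roughly 10,000 K near the inner edge down to somewhat over 1000 K at large radii, the steep fall-off that the flat mapped profile does not show.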
The 1993 September and 1994 October light curves were too noisy to model. It is clear, however, particularly from the variations in the shape of the light curves in the month from 1994 September to October, that parameters in the accretion disk in IP Peg—the amplitude of the double-hump variation, and the temperatures and densities in the disk and the bright spot—vary over the course of the quiescent cycle. As a result, additional and repeated modeling of the double-hump variation and both eclipses would better resolve the optical depth and temperature of the quiescent disk in IP Peg. Future work should also include simultaneous, multicolor eclipse maps in the near-infrared to further constrain the optical depth and temperature of the disk by probing how the NIR colors vary throughout the binary orbit. ## 5. Conclusions 1. The quiescent H-band light curve of IP Peg contains contributions from the late-type companion star, the accretion disk and the bright spot, as well as a primary eclipse of the accretion disk and a secondary eclipse of the companion star. The characteristic ellipsoidal variations from the secondary star dominate the light curve, but the amplitude of the variation is not enough to account for all of the orbital modulation seen in the light curve outside of eclipse. 2. The light curve after the model secondary star flux has been subtracted shows a phase-dependent, double-hump profile reminiscent of the quiescent light curves of WZ Sge and AL Com. In these two systems, the double-hump variation is believed to originate in the accretion disk. The presence of a double-hump variation in the near-infrared light curves of other compact binaries may complicate determinations of the relative flux contributions of the accretion disk and the secondary star. 3. The primary eclipse was modeled using maximum entropy eclipse mapping techniques. The bulk of the disk has a flat surface brightness distribution and a cool brightness temperature ($`T_{br}`$ 3000 K). There is a prominent bright spot on the edge of the disk ($`T_{br}`$ 10,000 K). The position of the near-infrared bright spot is not the same as the position of the theoretical mass stream trajectory or the range of measured visible bright spot positions in IP Peg. 4. The flat radial brightness temperature distribution of the accretion disk is consistent with those of other eclipsing dwarf novae in quiescence, although the near-infrared disk in IP Peg is both flatter and cooler than in the other systems. The flat intensity distribution is consistent with the quiescent, non-steady-state behavior predicted by the disk instability model of normal dwarf novae outbursts. 5. The secondary eclipse of the companion star indicates that some occulting material is present in the disk, but the eclipse depth is too shallow to allow for an opaque accretion disk. The disk emissivity could be patchy. In particular, while optically thin emission dominates the H-band flux, the bright spot is probably optically thick with a temperature around 10,000 K. The temperature of the bulk of the accretion disk depends on its optical depth, but is probably higher than the brightness temperature. The authors wish to thank Chris Johns-Krull for generously providing the Allard model atmospheres for M dwarf stars and the software to generate model atmospheres of IP Peg.
# On a Result of Atkin and Lehner ## 1 Introduction We wish to give a new proof of one of the main results of Atkin-Lehner . That paper depends, among other things, on a slightly strengthened version of Theorem 1 below, which characterizes forms in $`S_k(\mathrm{\Gamma }_0(N))`$ whose Fourier coefficients satisfy a certain vanishing condition. Our proof involves rephrasing this vanishing condition in terms of representation theory; this, together with an elementary linear algebra argument, allows us to rewrite our problem as a collection of local problems. Furthermore, the classical phrasing of Theorem 1 makes the resulting local problems trivial; this is in contrast to the method of Casselman , whose local problem relies upon knowledge of the structure of irreducible representations of $`\mathrm{GL}_2(𝐐_p)`$. Our proof is therefore much more accessible to mathematicians who aren’t specialists in the representation theory of $`p`$-adic groups; the method is also applicable to other Atkin-Lehner-style problems, such as the level structures that were considered in Carlton . Our proof of Theorem 1 occupies Section 2. In Section 3, we explain the links between this Theorem and the rest of Atkin-Lehner theory; in particular, we show that Theorem 1, together with either the Global Result of Casselman or Theorem 4 of Atkin-Lehner , can be used to derive all of the important results of Atkin-Lehner theory. ## 2 The Main Theorem Recall that, if $`N|M`$ and $`d|(M/N)`$, there is a map $`i_d:M_k(\mathrm{\Gamma }_0(N))M_k(\mathrm{\Gamma }_0(M))`$ defined by $$c_m(i_d(f))=\{\begin{array}{cc}0\hfill & \text{if }dm\hfill \\ c_{m/d}(f)\hfill & \text{if }d|m.\hfill \end{array}$$ This map sends cusp forms to cusp forms and eigenforms to eigenforms (with the same eigenvalues); up to multiplication by a constant, it is given by $`ff|_{\left(\begin{array}{cc}d& 0\\ 0& 1\end{array}\right)}`$. ###### Theorem 1. Let $`fM_k(\mathrm{\Gamma }_0(N))`$ be such that $`c_m(f)=0`$ unless $`(m,N)>1`$. Then $`f=_{p|N}i_p(f_p)`$, where $`p`$ varies over the primes dividing $`N`$ and where $`f_pM_k(\mathrm{\Gamma }_0(N/p))`$. Furthermore, if $`f`$ is a cusp form (resp. eigenform) then the $`f_p`$’s can be chosen to be cusp forms (resp. eigenforms with the same eigenvalues as $`f`$). Our proof rests on two elementary linear algebra lemmas: ###### Lemma 2. Let $`V_1,\mathrm{},V_n`$ be vector spaces and, for each $`i`$, let $`f_i`$ be an endomorphism of $`V_i`$. Then $$\mathrm{ker}(f_1\mathrm{}f_n)=\underset{i=1}{\overset{n}{}}V_1\mathrm{}(\mathrm{ker}f_i)\mathrm{}V_n.$$ ###### Proof. We can easily reduce to the case $`n=2`$. If we write $`V_i=(\mathrm{ker}f_i)V_i^{}`$ then $`f_i|_{V_i^{}}`$ is an isomorphism onto its image, and $$V_1V_2=((\mathrm{ker}f_1)(\mathrm{ker}f_2))((\mathrm{ker}f_1)V_2^{})(V_1^{}(\mathrm{ker}f_2))(V_1^{}V_2^{}).$$ We see that $`f_1f_2`$ kills the first three factors, and is an isomorphism from the fourth factor onto its image; $`\mathrm{ker}(f_1f_2)`$ is therefore the sum of the first three factors, which is what we wanted to show. ∎ ###### Lemma 3. Let $`V_1,\mathrm{},V_n`$ be vector spaces and, for each $`i`$, let $`V_i^{}`$ and $`V_i^{\prime \prime }`$ be subspaces of $`V_i`$. Then $$\begin{array}{c}\left(\underset{i=1}{\overset{n}{}}V_1\mathrm{}V_i^{}\mathrm{}V_n\right)(V_1^{\prime \prime }\mathrm{}V_n^{\prime \prime })\hfill \\ \hfill =\underset{i=1}{\overset{n}{}}V_1^{\prime \prime }\mathrm{}(V_i^{}V_i^{\prime \prime })\mathrm{}V_n^{\prime \prime }.\end{array}$$ ###### Proof. 
Again, we can assume that $`n=2`$. Write $`V_i=V_{i1}V_{i2}V_{i3}V_{i4}`$ where $`V_{i1}=V_i^{}V_i^{\prime \prime }`$, $`V_i^{}=V_{i1}V_{i2}`$, and $`V_i^{\prime \prime }=V_{i1}V_{i3}`$. Then $`V_1^{}V_2+V_1V_2^{}`$ is the direct sum of those $`V_{1j}V_{2k}`$’s where at least one of $`j`$ or $`k`$ is in the set $`\{1,2\}`$. Also, $`V_1^{\prime \prime }V_2^{\prime \prime }`$ is the direct sum of the $`V_{1j}V_{2k}`$’s where $`j`$ and $`k`$ are both in the set $`\{1,3\}`$. Thus, their intersection is $`(V_{11}V_{21})(V_{11}V_{23})(V_{13}V_{21})`$, as claimed. ∎ ###### Proof of Theorem 1.. If $`fM_k(\mathrm{\Gamma }_0(N))`$ then $`f|_{\left(\begin{array}{cc}N^1& 0\\ 0& 1\end{array}\right)}M_k(\mathrm{\Gamma }^0(N))`$, where we define the group $`\mathrm{\Gamma }^0(N)`$ by $$\mathrm{\Gamma }^0(N)=\left\{\left(\begin{array}{cc}a& b\\ c& d\end{array}\right)\mathrm{SL}_2(𝐙)\right|b0(modN)\}.$$ Furthermore, up to multiplication by a constant, $`f|_{\left(\begin{array}{cc}N^1& 0\\ 0& 1\end{array}\right)}`$ has the same Fourier coefficients as $`f`$, except that we have to take the $`q`$-expansion with respect to $`e^{2\pi \sqrt{1}z/N}`$ instead of $`e^{2\pi \sqrt{1}z}`$. Our Theorem, then, is equivalent to the statement that, if $`fM_k(\mathrm{\Gamma }^0(N))`$ satisfies the condition $`c_m(f)=0`$ unless $`(m,N)>1`$ (1) then $`f=_{p|N}f_p`$ where $`f_pM_k(\mathrm{\Gamma }^0(N/p))`$. Let $`M=M_k(\mathrm{\Gamma }(N))`$; it comes with an action of $`\mathrm{SL}_2(𝐙/N𝐙)`$. If $`fM`$ and $`d|N`$, define $`\pi _d(f)`$ to be $`_{d|m}c_m(f)q^m`$. Then $`\pi _d(f)M`$: in fact, $$\pi _d(f)=\frac{1}{d}\underset{b=0}{\overset{d1}{}}f|_{\left(\begin{array}{cc}1& bN/d\\ 0& 1\end{array}\right)}.$$ The principle of inclusion and exclusion implies that $`f`$ satisfies (1) if and only if $$f=\underset{p|N}{}\pi _p(f)\underset{\begin{array}{c}p_1,p_2|N\\ p_1<p_2\end{array}}{}\pi _{p_1p_2}(f)+\mathrm{}.$$ Thus, if $`V`$ is an irreducible $`\mathrm{SL}_2(𝐙/N𝐙)`$-representation contained in $`M`$, it suffices to prove our Theorem for a form in $`V`$, since the conditions of our Theorem can be expressed in terms of the action of $`\mathrm{SL}_2(𝐙/N𝐙)`$. Let $`N=_{i=1}^np_i^{n_i}`$ be the prime factorization of $`N`$. Then $`\mathrm{SL}_2(𝐙/N𝐙)=_i\mathrm{SL}_2(𝐙/p_i^{n_i}𝐙)`$, so $`V=_iV_i`$ where $`V_i`$ is a representation of $`\mathrm{SL}_2(𝐙/p_i^{n_i}𝐙)`$. Also, $`\pi _{p_i}`$ acts as the identity on the $`V_j`$ for $`ji`$. So if we define $$\pi (f)=f\underset{p|N}{}\pi _p(f)+\underset{\begin{array}{c}p_1,p_2|N\\ p_1<p_2\end{array}}{}\pi _{p_1p_2}(f)\mathrm{}$$ then $`\pi =(1\pi _{p_1})\mathrm{}(1\pi _{p_n})`$ and $`\mathrm{ker}(\pi )`$ is the space of forms satisfying (1). Thus, Lemma 2 implies that $$\mathrm{ker}(\pi )=\underset{i=1}{\overset{n}{}}V_1\mathrm{}(\mathrm{ker}(1\pi _{p_i}))\mathrm{}V_n.$$ Turning now to the question of a form’s being in $`M_k(\mathrm{\Gamma }^0(N))`$, that is the case if and only if the form is both in $`M_k(\mathrm{\Gamma }(N))`$ and is invariant under the image $`B(N)`$ of $`\mathrm{\Gamma }^0(N)`$ in $`\mathrm{SL}_2(𝐙/N𝐙)`$. Also, $`B(N)=_iB(p_i)`$. 
Thus, setting $`V_i^{}`$ to be $`\mathrm{ker}(1\pi _{p_i})`$ and $`V_i^{\prime \prime }`$ to be the space of $`B(p_i)`$-invariant elements of $`V_i`$, Lemma 3 implies that an element of $`V`$ is both in $`\mathrm{ker}\pi `$ and invariant under $`B(N)`$ if and only if it is in $$\underset{i=1}{\overset{n}{}}V_1^{\prime \prime }\mathrm{}(V_i^{}V_i^{\prime \prime })\mathrm{}V_n^{\prime \prime }.$$ But if $`v_iV_i`$ is in $`V_i^{}V_i^{\prime \prime }`$ then it is invariant both under $`B(p_i)`$ and under projection to the subspace of invariants under the cyclic subgroup generated by $`\left(\begin{array}{cc}1& p_i^{n_i1}\\ 0& 1\end{array}\right)`$; this last condition is equivalent to its being invariant under $`\left(\begin{array}{cc}1& p_i^{n_i1}\\ 0& 1\end{array}\right)`$. Thus, our vector $`v_i`$ is invariant under $$\left\{\left(\begin{array}{cc}a& b\\ c& d\end{array}\right)\mathrm{SL}_2(𝐙/p_i^{n_i}𝐙)\right|b0(modp_i^{n_i1})\},$$ and $`V_1^{\prime \prime }\mathrm{}(V_i^{}V_i^{\prime \prime })\mathrm{}V_n^{\prime \prime }`$ is the set of invariants under $$\left\{\left(\begin{array}{cc}a& b\\ c& d\end{array}\right)\mathrm{SL}_2(𝐙/N𝐙)\right|b0(modN/p_i)\},$$ i.e. the elements of $`VM_k(\mathrm{\Gamma }^0(N/p_i))`$, completing our proof. The cusp form case is similar, replacing $`M`$ by the space of cusp forms. The eigenform case then follows from the facts that the Hecke operators are simultaneously diagonalizable and that their action is preserved by the operators $`i_p`$. ∎ ## 3 Newforms, Oldforms, and All That In this Section, we explain the relation between Theorem 1 and the rest of Atkin-Lehner theory. We shall see that the whole theory follows from Theorem 1 together with facts about $`L`$-series associated to modular forms, as expressed by Theorem 4 of Atkin-Lehner or the Global Result of Casselman . We claim no originality in the methods used in this Section. Define $`K_0(N)`$ to be the subspace of $`fS_k(\mathrm{\Gamma }_0(N))`$ such that $`c_m(f)=0`$ unless $`(m,N)>1`$: thus, $`K_0(N)`$ is the subspace characterized in Theorem 1. Define $`\overline{S}_k(\mathrm{\Gamma }_0(N))`$ to be $`S_k(\mathrm{\Gamma }_0(N))/K_0(N)`$; for $`f\overline{S}_k(\mathrm{\Gamma }_0(N))`$, $`c_m(f)`$ is well-defined exactly when $`(m,N)=1`$. Also, let $`𝐓^N`$ be the free polynomial algebra over $`𝐂`$ generated by commuting operators $`T_m`$ for $`(m,N)=1`$. Then $`𝐓^N`$ acts on $`S_k(\mathrm{\Gamma }_0(N))`$ (where $`T_m`$ acts as the $`m`$’th Hecke operator), and its action is diagonalizable; it is easy to see that its action descends to $`\overline{S}_k(\mathrm{\Gamma }_0(N))`$. (For example, $`T_m`$ commutes with the action of the operators $`\pi _d`$ defined in the proof of Theorem 1.) ###### Proposition 4. The $`𝐓^N`$-eigenspaces in $`\overline{S}_k(\mathrm{\Gamma }_0(N))`$ are one-dimensional; furthermore, an eigenform $`f\overline{S}_k(\mathrm{\Gamma }_0(N))`$ is zero if and only if $`c_1(f)=0`$. ###### Proof. If $`f\overline{S}_k(\mathrm{\Gamma }_0(N))`$ is an eigenform for $`T_m`$ with eigenvalue $`\lambda _m(f)`$ then $`c_m(f)=\lambda _m(f)c_1(f)`$. Thus, if $`f`$ is a $`𝐓^N`$-eigenform then it is determined by its eigenvalues and by $`c_1(f)`$. ∎ This Proposition, together with Theorem 1, sometimes allows one to reduce questions about the spaces $`S_k(\mathrm{\Gamma }_0(N))`$ to spaces whose eigenspaces are one-dimensional. ###### Proposition 5. 
If $`f`$ and $`g`$ are eigenforms in $`\overline{S}_k(\mathrm{\Gamma }_0(N))`$ such that, for some $`D`$, they have the same eigenvalues $`\lambda _m`$ for all $`m`$ with $`(m,ND)=1`$, then they have the same eigenvalues for all $`m`$ with $`(m,N)=1`$. ###### Proof. This is part of the Global Result of Casselman , or of Theorem 4 of Atkin-Lehner . ∎ We should also point out that our Theorem 1 isn’t quite the same as Theorem 1 of Atkin-Lehner . Their Theorem 1 assumes that $`c_m(f)=0`$ unless $`(m,ND)=1`$, and thus breaks down into two parts: showing that you can assume that $`D=1`$, and our Theorem 1. It is easy to show that the first part is equivalent to Proposition 5, at least in the eigenform case; the cusp form case takes a bit more work. We now present what is traditionally thought of as the core of Atkin-Lehner theory. ###### Theorem 6. If $`\{\lambda _m\}`$ is a set of eigenvalues (for all $`m`$ relatively prime to a finite set of primes) that occurs in some space $`S_k(\mathrm{\Gamma }_0(N))`$ then there is a unique minimal such $`N`$ (with respect to division) for which those eigenvalues occur, and the corresponding eigenspace is one-dimensional. If $`f`$ is a basis element for that eigenspace and if $`M`$ is a multiple of $`N`$ then the corresponding eigenspace in $`S_k(\mathrm{\Gamma }_0(M))`$ has a basis given by the forms $`i_d(f)`$ where $`d`$ varies over the (positive) divisors of $`M/N`$. ###### Proof. For any positive integer $`M`$, write $`V_0(M)`$ for the set of eigenforms in $`S_k(\mathrm{\Gamma }_0(M))`$ with eigenvalues $`\{\lambda _m\}`$. By Proposition 5, we don’t have to worry exactly about which primes are avoided in our set of eigenvalues, so this notation makes sense. Furthermore, let $`N`$ be a minimal level such that $`V_0(N)`$ is nonzero. By Proposition 4, the image of $`V_0(N)`$ in $`\overline{S}_k(\mathrm{\Gamma }_0(N))`$ is one-dimensional. Theorem 1 shows that any element of the kernel of the map from $`V_0(N)`$ to $`\overline{S}_k(\mathrm{\Gamma }_0(N))`$ is of the form $`_{p|N}i_p(f_p)`$, where $`f_pV_0(N/p)`$. But the minimality of $`N`$ shows that there aren’t any such forms; the kernel is therefore zero, so $`V_0(N)`$ is one-dimensional. To see that $`N`$ is unique, let $`S_k`$ be the space of adelic cusp forms of weight $`k`$ but of arbitrary level structure; it comes with an action of $`\mathrm{GL}_2(𝐀^{\mathrm{}})`$, and elements of $`S_k(\mathrm{\Gamma }_0(M))`$ correspond to elements of $`S_k`$ invariant under the action of a certain subgroup $`U_0(M)=_pU_0(p^{m_p})`$, where $`p`$ varies over the set of all primes and $`p^{m_p}`$ is the highest power of $`p`$ that divides $`M`$. Casselman’s Global Result says that the set $`V`$ of forms in $`S_k`$ with eigenvalues $`\{\lambda _m\}`$ gives an irreducible representation of $`\mathrm{GL}_2(𝐀^{\mathrm{}})`$; thus, it can be written as a restricted tensor product $`V=_pV_p`$, and $$V_0(M)=\underset{p}{}V_p^{U_0(p^{m_p})}.$$ Since $`U_0(p^m)`$ contains $`U_0(p^{m+1})`$, for each $`p`$ it is the case that, if for some power $`p^m`$, $`V_p^{U_0(p^m)}`$ is nonzero, then there is a minimal such power. Thus, taking $`N`$ to be the product of those minimal powers of $`p`$, we see that, if for some $`M`$, $`V_0(M)`$ is nonzero, then it is nonzero for a unique minimal $`M`$, namely our $`N`$. (Alternatively, the uniqueness of the minimal level is part of Theorem 4 of Atkin-Lehner .) Finally, to see that the eigenspace grows as indicated, let $`f`$ be a nonzero element of $`V_0(N)`$ for $`N`$ minimal. 
By Proposition 4, we can assume that $`c_1(f)=1`$, since our argument above showed that the image of $`f`$ in $`\overline{S}_k(\mathrm{\Gamma }_0(N))`$ is nonzero. Fix some multiple $`M`$ of $`N`$, and assume that we have shown that, for all proper divisors $`M^{}`$ of $`M`$ with $`N|M^{}`$, $$V_0(M^{})=\underset{d|(M^{}/N)}{\bigoplus }i_d(f)𝐂.$$ (2) We then want to show that the same statement holds with $`M`$ in place of $`M^{}`$. Thus, let $`g`$ be an element of $`V_0(M)`$. By Proposition 4, the image of $`g-c_1(g)i_1(f)`$ in $`\overline{S}_k(\mathrm{\Gamma }_0(M))`$ is zero, so by Theorem 1, $$g=c_1(g)i_1(f)+\underset{p|M}{\sum }i_p(g_p)$$ for some forms $`g_p\in V_0(M/p)`$. Also, $`g_p=0`$ unless $`p|(M/N)`$, since otherwise $`N`$ wouldn't divide $`M/p`$, contradicting the unique minimality of $`N`$. But then (2) implies that each $`g_p`$, and hence $`g`$, can be written as a linear combination of the forms $`i_d(f)`$ for $`d|(M/N)`$; it is easy to see that such an expression for $`g`$ is unique. ∎
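The coefficient bookkeeping behind the maps $`i_d`$, the projections $`\pi _d`$, and the inclusion-exclusion identity used in the proof of Theorem 1 can be illustrated with a few lines of code. The sketch below is purely illustrative: it manipulates truncated $`q`$-expansions as plain coefficient lists and says nothing about actual spaces of modular forms; the toy coefficients and the "level" are arbitrary.

```python
from math import gcd
from itertools import combinations

def i_d(coeffs, d):
    # c_m(i_d(f)) = c_{m/d}(f) if d | m, and 0 otherwise (truncated q-expansion).
    return [coeffs[m // d] if m % d == 0 else 0 for m in range(len(coeffs))]

def pi_d(coeffs, d):
    # Keep only the coefficients c_m with d | m.
    return [c if m % d == 0 else 0 for m, c in enumerate(coeffs)]

def inclusion_exclusion(coeffs, primes):
    # sum_p pi_p(f) - sum_{p1<p2} pi_{p1 p2}(f) + ..., as in the proof of Theorem 1.
    out = [0] * len(coeffs)
    for k in range(1, len(primes) + 1):
        for subset in combinations(primes, k):
            d = 1
            for p in subset:
                d *= p
            out = [a + (-1) ** (k + 1) * b for a, b in zip(out, pi_d(coeffs, d))]
    return out

# A toy coefficient list c_0, ..., c_19 standing in for a form of "level" N = 6
N, primes = 6, [2, 3]
f = [0] + [((7 * m) % 11) + 1 for m in range(1, 20)]

# The inclusion-exclusion combination keeps exactly the c_m with gcd(m, N) > 1
g = inclusion_exclusion(f, primes)
assert all((g[m] == f[m]) == (gcd(m, N) > 1) for m in range(1, len(f)))

# A list in the image of i_p automatically satisfies the vanishing condition for p
assert pi_d(i_d(f, 2), 2) == i_d(f, 2)
print(g)
```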
# Anisotropies in the CMB ## I Introduction Over the last few years it has become common place to speak of the Cosmic Microwave Background (CMB) anisotropy as the premier laboratory for cosmology and early universe physics. In the titles to most talks the word “anisotropy” is often absent, it being understood that the talk will be about the anisotropy. This is an interesting phenomenon. Consider that when we speak of the CMB we could speak about 3 major properties. Firstly, the existence of the CMB is one of the pillars of the hot big bang model of cosmology. Secondly, the black body spectrum of the CMB, the most perfect black body ever measured in nature, confirms the cosmological origin of the CMB and puts extraordinarily strong constraints on early energy injection in the universe (e.g. through decaying particles, see ). These first two properties show that the CMB has already delivered important cosmological information. Our current focus is the third area: the anisotropy. This fact alone indicates the high level of promise that a study of the anisotropy holds. Like much of cosmology, the CMB is a data driven subject. However, in this proceedings I focus on the theory behind the CMB anisotropies<sup>*</sup><sup>*</sup>*Because of space, I have referenced primarily work that I have been involved in. Much more representative referencing can be found in those sources., and the current status of theoretical efforts, rather than on the CMB data. It is a blessing of this field that numerous experimental efforts underway will make any statements about the experimental situation obsolete before they reach print (even on the web!). As an overly brief summary of the current status: the current data are in good agreement with our general paradigm and support a spatially flat universe with an almost scale-invariant spectrum of adiabatic fluctuations in predominantly cold dark matter. Departures from that statement in any direction fit the data less well, though at present large error bars and theorist’s ingenuity limit the strength of statements that can be made. The calculation of CMB anisotropies is now a highly refined subject. While most calculations focus on the “standard” models, the theory is in fact very general. This generality also leads to complexity, but the basic physics behind the CMB is very simple. To understand CMB anisotropies it is helpful to recall several general points: * The universe was once hot and dense. At these early times ($`10^5`$yr after the bang) the plasma was highly ionized. Thomson ($`\gamma e`$) scattering was rapid and tightly coupled the CMB photons to the “baryons” ($`p+e`$). In the limit that this scattering was rapid, the mean free path was small, a fluid approximation is valid. Thus we speak of the photon-baryon fluid. In this fluid the baryons provide much of the inertia (mass) and the photons the pressure ($`p_\gamma =\rho _\gamma /3`$, $`p_B0`$). * The observed large-scale structure grew through gravitational instability from small perturbations at early times. These density perturbations imply, through Poisson’s equation, small perturbations in the gravitational field. * Combining the above two observations, we infer that the fundamental modes of the system would be gravity sourced sound waves in the fluid. The equation of motion for the sound waves can be derived by taking the tight-coupling limit of the equations of radiative transfer. 
In this case (see below) $$\left[m_{\mathrm{eff}}\mathrm{\Delta }T_k^{}\right]^{}+\frac{k^2}{3}\mathrm{\Delta }T_k=F_k$$ (1) where $`F_k`$ is the gravitational forcing term, $`m_{\mathrm{eff}}`$ describes the inertia of the fluid, and primes denote derivatives with respect to (conformal) time. The forcing term contains derivatives of the potential (and spatial curvature) while $`m_{\mathrm{eff}}`$ depends on the baryon-to-photon ratio, which evolves with time. * Finally, recombination (when protons captured electrons to form hydrogen and the universe became neutral) occurred suddenly, but not instantaneously. With the decrease in the free $`e^{}`$ density the mean free path for photons rises from essentially zero to the size of the observable universe. The CMB photons travel freely to us, giving us a snapshot of the fluid at a fixed instant in time. The energy density, or temperature, fluctuations in the fluid are seen as CMB temperature differences (anisotropy) across the sky. * The temperature fluctuations arise from 3 terms: the gravitational redshifts as photons climb out of potential wells , density perturbations (with $`\mathrm{\Delta }T/T=(1/4)\delta \rho _\gamma /\rho _\gamma `$) and Doppler shifts from line-of-sight velocity perturbations. On large angular scales the first two terms dominate, while on smaller angular scales the last two are most important. The density and velocity contributions are out of phase, with the velocity being smaller than the density contribution (see later). While this way of looking at the anisotropy is physically clear, it is not how the calculations are actually done. Remember that the fluctuations are observed to be small ($`\delta 10^5`$ c.f. $`\alpha _{QED}10^2`$). Thus one writes down the Einstein, fluid and radiative transfer equations, expands about an exact solution and truncates the expansion at linear orderThe second order terms have been computed and shown to be small as expected.. This procedure gives a set of coupled ODEs which describe the evolution of each (independent) FourierIn hyperbolic geometries the Fourier decomposition needs to be generalized, but this is a technical point. mode. While in some cases the equations can be solved analytically, usually a numerical solution is performed. Since the Fourier modes decouple in linear theory it is advantageous to work in the Fourier basis in the observations also. Unfortunately the sky is curved, so plane waves are not the natural basis. But a “curved sky Fourier expansion” can still be performed using the spherical harmonics: $`\mathrm{\Delta }T/T=_{\mathrm{}}a_\mathrm{}mY_\mathrm{}m`$. We focus then not on $`\mathrm{\Delta }T/T`$ but on the $`a_\mathrm{}m`$, known as multipole moments. By definition $`a_\mathrm{}m=0`$, so the first non-vanishing correlator is the two-point function. Since the $`Y_\mathrm{}m`$ are a complete orthonormal basis, $`a_\mathrm{}^{}m^{}a_\mathrm{}m^{}\delta _{\mathrm{}^{}\mathrm{}}\delta _{m^{}m}`$ and by rotational symmetry the proportionality constants can only depend on $`\mathrm{}`$. Thus we write $`|a_\mathrm{}m|^2=C_{\mathrm{}}`$. If the fluctuations are Gaussian, having specified the mean and variance we have completely specified the model. For more general distributions the higher moments also need to be specified. We show in Fig. 1 a typical $`C_{\mathrm{}}`$ curve for a standard cold dark matter (CDM) model. A readable introduction to the physics can be found in Refs. among others. 
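For Gaussian fluctuations the $`C_\ell `$ fully characterize the anisotropy, and the best estimate from a single sky averages the $`2\ell +1`$ available $`a_{\ell m}`$, leaving an irreducible "cosmic variance" $`\mathrm{\Delta }C_\ell /C_\ell =\sqrt{2/(2\ell +1)}`$. The short sketch below illustrates this with synthetic multipoles; it is only an illustration, in that the fiducial spectrum is an arbitrary power law (not the output of a Boltzmann code) and the coefficients are drawn in a real basis.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_cl_hat(ells, C_ell, n_sky=2000):
    # For each ell draw the 2*ell+1 real-basis coefficients a_lm as independent
    # N(0, C_ell) variables and form the estimator C_hat = sum_m a_lm^2 / (2*ell+1).
    C_hat = np.empty((n_sky, len(ells)))
    for i, (ell, C) in enumerate(zip(ells, C_ell)):
        a = rng.normal(0.0, np.sqrt(C), size=(n_sky, 2 * ell + 1))
        C_hat[:, i] = (a ** 2).mean(axis=1)
    return C_hat

# A made-up fiducial spectrum, roughly flat in ell(ell+1)C_ell (not a real model)
ells = np.arange(2, 200)
C_fid = 1.0 / (ells * (ells + 1.0))

C_hat = simulate_cl_hat(ells, C_fid)
scatter = C_hat.std(axis=0) / C_fid
cosmic_variance = np.sqrt(2.0 / (2.0 * ells + 1.0))
print(np.allclose(scatter, cosmic_variance, rtol=0.1))   # True: the irreducible scatter
```

This sample variance, rather than instrumental noise, is what ultimately limits how well the lowest multipoles can be measured.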
The precise shape of the power spectrum depends upon cosmological parameters as well as the underlying density perturbations and thereby encodes a wealth of information; see Fig. 1. The current theoretical situation can be summarized as follows: * The formalism for computing $`C_{\mathrm{}}`$ (and the higher moments) for any FRW space-time and any model of structure formation exists . Since this is essentially a statement in General Relativity, the proof can be made quite rigorous. * Not every model has been calculated, but for those where independent calculations have been done (mostly CDM models) independent codes agree to $`𝒪(1\%)`$. * The spectrum encodes information on the cosmology and the model of structure formation and can be measured with exquisite precision. * Model dependent parameter extraction can simultaneously fit a dozen parameters to an accuracy of $`𝒪(10\%)`$ or better, e.g. . * This stunning promise has overshadowed an important additional fact however. Even if our models do not fit every nuance of the observed data, model independent constraints on the parameters exist as do cosmology independent tests of the model of structure formation . For some time the promise of the CMB to strongly constrain numerous cosmological parameters has been evident. Both the measurements and the calculations can be done with high precision, and the theories predict a rich structure to the spatial power spectrum. A multi-parameter fit of theory to data, assuming that the fit is good, then allows simultaneous constraints on the model parameters. It is important to understand however that the predictions for cosmological parameter estimation depend both on the assumed theory and on the parameter space which one searches. As an example, for the MAP satellite scheduled to launch next year, a fit to a 7 parameter family $`\mathrm{\Lambda }`$CDM model gives errors on $`\mathrm{\Omega }_\mathrm{B}h^2`$ of 4%, $`\mathrm{\Omega }_{\mathrm{mat}}h^2`$ of 7%, $`\mathrm{\Omega }_\mathrm{\Lambda }`$ of 14% and the optical depth to reionization of 14%. The tensor-to-scalar ratio is essentially unconstrained, as is the tensor spectral index. In combination these constraints, and the assumed spatial flatness, allow a constraint on the Hubble constant of 14%. If we allow both curvature and a cosmological constant then the error on $`\mathrm{\Omega }_\mathrm{\Lambda }`$ goes up by a factor of 2 and on the Hubble constant by 4! Much of the effort in parameter estimation of late has focussed on numerical issues (where much earlier work was deficient ), on combining CMB observations with other measurements and on extending the parameter space . In addition to highlighting the promise of near future CMB missions, the work on parameter estimation elucidates the often complex interplay of cosmological parameters on the detailed structure of the anisotropy spectra. In this regard the work on extending the parameter space is very important, since it allows one to explore in detail the relationships that exist in our “favored” models that may not exist in general. If we can find parameters which move us off our surface of preferred theories in a controlled manner, at the very least we can constrain such departures when the data become available, strenghtening our belief in the fundamental paradigm . The other area of much recent interest is the combination of CMB data with the many other areas of astrophysics experiencing rapid growth. 
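For orientation, forecast errors like those quoted above are usually derived from a Fisher-matrix analysis, in which the curvature of the likelihood around a fiducial model is estimated from the derivatives of $`C_\ell `$ with respect to the parameters. The fragment below is a deliberately oversimplified sketch: a two-parameter toy spectrum (amplitude and tilt), an invented flat noise level, and a nominal sky fraction stand in for a real experiment and a real Boltzmann-code calculation.

```python
import numpy as np

ells = np.arange(2, 1001)

def cl_model(params):
    # Toy spectrum: an overall amplitude A and a tilt n applied to a smooth
    # template; this stands in for a real Boltzmann-code calculation.
    A, n = params
    return A * (ells / 60.0) ** (n - 1.0) / (ells * (ells + 1.0))

def fisher_matrix(params, noise, f_sky=0.8, eps=1e-4):
    # F_ij = sum_ell f_sky (2*ell+1)/2 * dC/dp_i dC/dp_j / (C + N)^2,
    # with the derivatives taken by central finite differences.
    p = np.asarray(params, dtype=float)
    derivs = []
    for i in range(len(p)):
        dp = np.zeros_like(p)
        dp[i] = eps * max(abs(p[i]), 1.0)
        derivs.append((cl_model(p + dp) - cl_model(p - dp)) / (2.0 * dp[i]))
    weight = f_sky * (2.0 * ells + 1.0) / 2.0
    cov = (cl_model(p) + noise) ** 2
    return np.array([[np.sum(weight * di * dj / cov) for dj in derivs] for di in derivs])

fiducial = (1.0, 0.96)     # toy amplitude and spectral tilt
noise = 2e-7               # invented flat noise spectrum (placeholder)
F = fisher_matrix(fiducial, noise)
errors = np.sqrt(np.diag(np.linalg.inv(F)))   # marginalized 1-sigma forecasts
print(errors)
```

Marginalized errors follow from the inverse of the Fisher matrix; strongly correlated parameters, such as the curvature and the cosmological constant mentioned above, show up as near-degenerate directions that inflate these errors.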
As an example of the power of the CMB, combined with other measurements, and of extending the parameter space, let us consider a possible measurement of the fluctuations in the $`2K`$ neutrino background (here the 3 neutrino species are assumed massless – see Ref. for neutrino mass effects). The hot big bang model predicts that this background must be present, and it too will have a fluctuation spectrum. Detecting this neutrino signal directly is almost impossible, and detecting the fluctuations in the neutrino signal even more so. However the fluctuations should be there, and their form can be predicted. In the left panel of Fig. 2 we show a calculation of the anisotropy spectrum, from Ref. . The $`y`$-axis scale is $`\mathrm{\Delta }T/T10^5`$ as for the photons. In the right panel of Fig. 2 we show a possible future $`1\sigma `$ “detection” of the fluctuations in the neutrino background inferred from a combination of large-scale structure and CMB data. Clearly this particular example is somewhat fanciful. The “detection” is extremely marginal and the assumptions going into the calculation quite optimistic. However considering how hard it is to do this detection any other way, it serves to illustrate the power of combinations of astrophysical measurements to constrain fine details of all the components making up the energy density of the universe. As another (topical) example, I show in Fig. 3 (also taken from Ref. ) the $`1\sigma `$ limits on the equation of state and fraction of critical density in “dark energy” such as a cosmological constant or dynamical scalar field (sometimes known as $`x`$CDM or quintessence). Note that regardless of the equation of state a $`<10\%`$ measurement of both the energy density and equation of state is possible. These somewhat random examples should illustrate the power of the CMB to constrain cosmological parameters, under the assumption that our current models provide a good fit to the data. Of course it may always turn out that while the paradigm within which we are working is correct, our models are deficient in some detail which prevents a good fit to the data. The strategy in this case is to relax our assumptions and try to reconstruct the model from the observed spectrum. Perhaps eventually the missing ingredient can be found and utopia regained. In the meantime all is not lost. There exist several model-independent measurements of the cosmological parameters . As an example I show in Fig. 4 two different models of structure formation, both in a critical density universe. The models are chosen not to be good fits to the data, but to be very different from each other. In relativistic perturbation theory there are two kind of perturbations: adiabatic and isocurvature. Any fluctuation can be decomposed into these two basis modes. The solid line in the left panel of Fig. 4 is an example of a pure adiabatic model while the dashed line is an example of a pure isocurvature model. Note that while many things are different in these two models, there are two things that remain fixed. First the damping tail is at the same angular scale in both models ($`\mathrm{}1500`$). Secondly the separation between e.g. the 2nd and 3rd peaks is the same in both models. The first statement is easy to understand. The damping comes from photon diffusion during the time it takes the universe to recombine . Perturbations on scales smaller than the photon diffusion scale are erased, leading to the damping of power at high-$`\mathrm{}`$. 
Clearly this process is independent of the source of the fluctuations. The second statement is also easy to understand. The photon-baryon fluid behaves like an oscillator with a natural frequency. Once the “bell” is struck it wants to ring at that natural frequency. Thus even though the driving forces in the adiabatic and isocurvature models are different, the “ringing” of the higher peaks proceeds at the same frequency. So the peak spacing is fixed. While the peak spacing $`\mathrm{}_A`$ and the damping scale $`\mathrm{}_D`$ are (nearly) independent of the model of structure formation, they do depend on the cosmology. Specifically on the mapping between physical scales at the surface of last scattering ($`z10^3`$) and angles on the sky. They are thus probes of the angular diameter distance to last scattering, as shown in the right panel of Fig. 4 for the case of an open universe. More general constraints in the $`\mathrm{\Omega }_m\mathrm{\Omega }_\mathrm{\Lambda }`$ plane or the $`\mathrm{\Omega }_gw`$ plane can be found in respectively. To turn the problem around one can look for tests of the model of structure formation independent of the cosmology. Our most succesful class of models is those with an early epoch of accelerated expansion, i.e. inflationary models. Since accelerated expansion requires a fluid with negative pressure, it is intimately related to quantum mechanical considerations (the inner space–outer space connection). One of the greatest triumphs of the inflationary idea is that it provides a source of small adiabatic fluctuations which can grow, through gravitational instability, to form the CMB anisotropies and large-scale structure that we observe today. How can we test this paradigm for the generation of primoridal fluctuations? Any model of fluctuations should produce all 3 modes of perturbations: scalar modes (density perturbations), vector modes (fluid vorticity) and tensor modes (gravitational waves). The vectors have no growing mode and so after a few expansion times they have decayed away, leaving scalar and tensor modes. The presence of vector modes would thus be evidence for fluctuation generation activity while the CMB anisotropy was being formed, i.e. not inflation. The mere presence of tensor modes does not however argue one way or the other. In inflationary models based on a single, slow-rolling scalar field the scalar modes are enhanced over the tensor modes by a large factor which is related to the tensor spectral index (see Ref. for a review). The relation becomes an inequality if more than one field is important. Unfortunately the tensor spectral index is quite hard to measure unless the tensor signal is large, and usually an additional (model dependent) relation to the scalar spectral index is assumed instead. It has been argued that if the inflationary idea is to find a home in modern high-energy physics theories, rather than in effective or “toy” models, then the tensor signal is quite likely to be small . While our ignorance of physics above the electroweak scale makes it dangerous to take particle physics predictions as gospel in cosmology, the observational situation also argues against a large tensor signal . In some sense this is good news: a generic mechanism would presumably make scalar, vector and tensor perturbations in roughly the same amounts leading to $`T/S1`$ today (the vectors having decayed). 
Inflation on the other hand predicts that the scalar signal is enhanced, lowering $`T/S`$ from this naive prediction, as observations currently prefer. Luckily CMB based tests of inflation exist which do not require a measurement of the tensor signal . They rely on the fact that the only known way to generate adiabatic fluctuations (i.e. fluctuations in the energy density or curvature of space) on cosmological scales today is to have a period of accelerated expansion , i.e. inflation. The key then is to test for the adiabaticity of the fluctuations, which can be done with broader features than detailed fitting to extract small signals. Plausibility arguments suggest that if a peak in the anisotropy spectrum is observed near $`\mathrm{}200`$, the fluctuations are adiabatic. Isocurvature models generically predict a peak shifted to higher $`\mathrm{}`$ (see Fig. 4). Further support for this inference could be gained by measuring the 2nd and 3rd peaks, though some loopholes still remain . The sharpest tests of the model can be performed if information about the polarization of the CMB is obtained (as both MAP and Planck intend). Since $`\gamma e`$ scattering depends on polarization and angle as $`ϵ_fϵ_i`$, where $`ϵ_{f,i}`$ are the polarization vectors of the final and inital radiation, a quadrupole anisotropy generates linear polarization (see Fig. 5). An introduction to polarization can be found in Ref. , and the numerous references therein. For our purposes here the key feature of polarization is that it is generated only by scattering. The small angle polarization is thus localized to the last-scattering surface, and provides us with a probe of the anisotropies (as a function of scale) at that time. The behaviour of the anisotropy around the horizon scale, and the slope of the spectrum at larger scales, then gives a test of the presence or absence of large-scale fluctuations in the curvature . In conclusion, cosmology is now in a “golden age”. We finally have the data to answer our most fundamental questions, and to generate new puzzles. Within a decade we hope to have a standard model of structure formation. Our current theoretical structure, starting with quantum fluctuations in the early universe, continuing with general relativistic dynamics and ending with free-fall of radiation and matter, is one of the most beautiful and complete in all of physics. Far from the cosmology of old, where order of magnitude estimates held sway, modern cosmology emphasizes precision calculations using well controlled approximations. The archetypical system of this “new era” is the microwave background. If our models are close to correct, high precision studies of the CMB anisotropy will revolutionize cosmology. If our models are wrong, one could not hope for a better data set with which to find the right path. We are all eagerly awaiting imminent experimental advances in this field.
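The geometric content of the peak-location and peak-spacing tests discussed above can be illustrated in a few lines. The sketch below assumes a spatially flat ΛCDM background with placeholder parameter values and an assumed last-scattering redshift of 1090; it simply computes the comoving sound horizon at last scattering and the comoving distance to last scattering, whose ratio sets the acoustic scale $`\ell _A=\pi D/r_s`$ and hence the peak spacing.

```python
import numpy as np
from scipy.integrate import quad

# Illustrative flat LCDM parameters (placeholders, not a fit)
h, Om, Ob = 0.67, 0.31, 0.049
Og = 2.47e-5 / h**2              # photons
Orad = 4.15e-5 / h**2            # photons + massless neutrinos
OL = 1.0 - Om - Orad
H0 = 100.0 * h                   # km/s/Mpc
c = 299792.458                   # km/s
z_star = 1090.0                  # assumed last-scattering redshift

def H(z):
    return H0 * np.sqrt(Om * (1 + z)**3 + Orad * (1 + z)**4 + OL)

def sound_speed(z):
    R = 0.75 * Ob / Og / (1.0 + z)      # baryon loading 3*rho_b / (4*rho_gamma)
    return c / np.sqrt(3.0 * (1.0 + R))

# Comoving distance to last scattering and comoving sound horizon at last scattering
D_com = quad(lambda z: c / H(z), 0.0, z_star)[0]                  # Mpc
r_s = quad(lambda z: sound_speed(z) / H(z), z_star, np.inf)[0]    # Mpc

ell_A = np.pi * D_com / r_s
print(r_s, D_com, ell_A)    # roughly 145 Mpc, 14000 Mpc, and ell_A near 300
```

In an open universe the relevant distance is larger, so a fixed physical scale at last scattering subtends a smaller angle and the peaks move to higher $`\ell `$, which is the behaviour probed in the right panel of Fig. 4.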
# References SINP/TNP/99-3 hep-ph/9903490 Effects of $`R`$-parity violation on direct $`CP`$ violation in $`B`$ decays and extraction of $`\gamma `$ Gautam Bhattacharyya $`^{1,}`$ <sup>*</sup><sup>*</sup>*Electronic address: gb@tnp.saha.ernet.in and Amitava Datta $`^{2,}`$ Electronic address: adatta@juphys.ernet.in <sup>1</sup>Saha Institute of Nuclear Physics, 1/AF Bidhan Nagar, Calcutta 700064, India <sup>2</sup>Department of Physics, Jadavpur University, Calcutta 700032, India Abstract > In the standard model, direct $`CP`$-violating asymmetries for $`B^\pm \pi ^\pm K`$ are $`2\%`$ based on perturbative calculation. Rescattering effects might enhance it to at most $`(2025)\%`$. We show that lepton-number-violating couplings in supersymmetric models without $`R`$-parity are capable of inducing as large as $`𝒪(100\%)`$ $`CP`$ asymmetry in this channel. Such effects drastically modify the allowed range of the CKM parameter $`\gamma `$ arising from the combinations of the observed charged and neutral $`B`$ decays in the $`\pi K`$ modes. With a multichannel analysis in $`B`$ decays, one can either discover this exciting new physics, or significantly improve the existing constraints on it. > PACS number(s): 11.30.Er, 13.25.Hw, 12.60.Jv, 11.30.Fs It is well-known that $`CP`$-violating $`B`$ decays might constitute an important hunting ground for new physics. This is particularly so since many $`CP`$-violating asymmetries related to $`B`$ decays, which are predicted to be very small in the SM , are likely to be measured with unprecedented precision in the upcoming $`B`$ factories. Measurements larger than the SM predictions would definitely signal the presence of new physics. Our primary concern in this paper is how to identify and extract such information. To illustrate this point, let us consider direct $`CP`$ violation in charged $`B`$ decays. The decay amplitude for $`B^+f`$ can be written as $`A(B^+f)=_i|A_i|e^{i\varphi _i^W}e^{i\varphi _i^S}`$. Here $`\varphi _i^W`$ and $`\varphi _i^S`$ are the weak and strong phases, respectively, for the $`i`$th term. $`|A_i|`$ depend crucially on nonperturbative strong interaction dynamics and have not as yet been reliably computed. One usually measures direct $`CP`$ violating rate asymmetry in $`B^\pm `$ decays through $`a_{CP}[(B^+f)(B^{}\overline{f})]/[(B^+f)+(B^{}\overline{f})]`$. Requiring $`CPT`$ invariance and assuming that only two terms dominate in a given decay amplitude, the above asymmetry can be expressed as $$a_{CP}=\frac{2|A_1||A_2|\mathrm{sin}(\varphi _1^W\varphi _2^W)\mathrm{sin}(\varphi _1^S\varphi _2^S)}{|A_1|^2+|A_2|^2+2|A_1||A_2|\mathrm{cos}(\varphi _1^W\varphi _2^W)\mathrm{cos}(\varphi _1^S\varphi _2^S)}.$$ (1) For $`a_{CP}`$ to be numerically significant the following conditions need to be satisfied: $`(i)|A_1||A_2|`$, $`(ii)\mathrm{sin}(\varphi _1^W\varphi _2^W)1`$ and $`(iii)\mathrm{sin}(\varphi _1^S\varphi _2^S)1`$. In the SM, the $`B`$ decay amplitude in a given channel receives multiple contributions from the so-called ‘tree’ and ‘penguin’ diagrams. In many cases, however, all but one of the interfering amplitudes are highly suppressed yielding almost unobservable $`a_{CP}`$. Conversely, observation of large $`CP`$ asymmetries in these channels would indicate presence of amplitudes of comparable magnitudes arising from some new physics. Consider, as an example, the decay $`B^+\pi ^+K^0`$. The corresponding quark level process is $`\overline{b}\overline{s}d\overline{d}`$. 
In the SM, this receives contributions only from colour-suppressed penguin operators. Using the unitarity of the Cabibbo-Kobayashi-Maskawa (CKM) matrix, the decay amplitude could be expressed as (following the notations of ) $$A^{\mathrm{SM}}(B^+\pi ^+K^0)=A\lambda ^2(1\lambda ^2/2)\left[1+\rho e^{i\theta }e^{i\gamma }\right]|P_{tc}|e^{i\delta _{tc}},$$ (2) where $`\lambda =0.22`$ is the Wolfenstein parameter; $`A|V_{cb}|/\lambda ^2=0.81\pm 0.06`$; $`\gamma \mathrm{Arg}(V_{ub}^{}V_{ud}/V_{cb}^{}V_{cd})`$ is the CKM weak phase; $`\theta `$ and $`\delta _{tc}`$ are $`CP`$-conserving strong phases; $`P_{tc}P_t^SP_c^S+P_t^WP_c^W`$ (the difference between top- and charm-driven strong and electroweak penguins); and, finally, $`\rho `$ depends on the dynamics of the up- and charm-penguins. For calculating $`P_{tc}`$ we employ the factorization technique, which has been suggested to be quite reliable in a recent analysis . One can express $`|P_{tc}|G_Ff(\overline{C}_i)/\sqrt{2}`$, where $`=(m_{B_d}^2m_\pi ^2)f_KF_{B\pi }`$, with $`F_{B\pi }=0.3`$, and $`f(\overline{C}_i)`$ is an analytic function of the Wilson coefficients<sup>1</sup><sup>1</sup>1See, e.g., Eq. (36) of Fleischer and Mannel for the explicit dependence on the Wilson coefficients.. The $`\overline{C}_i`$’s refer to the renormalization scale $`\mu =m_b`$ and denote the next-to-leading order (NLO) scheme-independent Wilson coefficients (see the formalism developed in Refs. ). Assuming an average four-momentum ($`k`$) of the virtual gluons and photons appearing in the penguins, we obtain, to a good approximation, $`f(\overline{C}_i)0.09`$. If one admits $`0.25<k^2/m_b^2<0.5`$, the NLO estimate of $`(B^\pm \pi ^\pm K)0.5[(B^+\pi ^+K^0)+(B^{}\pi ^{}\overline{K^0})]`$ varies in the range $`(1.01.8)\times 10^5`$ for $`\rho =0`$ . This will be relevant when we subsequently confront the experimental branching ratio with the theoretical prediction. Following Eq. (2), the $`CP`$ asymmetry in the $`B^+\pi ^+K^0`$ channel is given by (neglecting tiny phase space effects) $$a_{CP}^{\mathrm{SM}}=2\rho \mathrm{sin}\theta \mathrm{sin}\gamma /(1+\rho ^2+2\rho \mathrm{cos}\theta \mathrm{cos}\gamma ).$$ (3) In the perturbative limit, $`\rho =𝒪(\lambda ^2R_b)1.7\%`$, where $`R_b=|V_{ub}|/\lambda |V_{cb}|=0.36\pm 0.08`$. However, it would be quite misleading to interpret a measurement of $`a_{CP}`$ larger than a few $`\%`$ as a signal of new physics. Rescattering effects , such as, $`B^+\pi ^0K^+\pi ^+K^0`$, i.e., long-distance contributions to the up- and charm-driven penguins, could enhance $`\rho `$ to as large as $`𝒪(10\%)`$, based on an order-of-magnitude estimate using Regge phenomenology. On account of such final state interactions, $`a_{CP}`$ could jack up to $`𝒪(20\%)`$, which is non-negligible. It has recently been shown that new physics enhanced colour dipole coupling and destructive interference could push $`a_{CP}`$ to as large as $`40\%`$ . What if a much larger $`a_{CP}`$ is observed? In the minimal supersymmetric standard model, there are additional contributions to the $`B^\pm \pi ^\pm K`$ penguins, and $`a_{CP}`$ could go up to $`30\%`$ . But switching on $`R`$-parity-violating ($`\overline{)}R`$) interactions triggers topologically different new diagrams and their interference with the SM penguins could generate large $`a_{CP}`$. Its quantitative estimate, as far as practicable, is the thrust of the present paper. 
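Before turning to the $`R`$-parity-violating contribution, it is useful to see explicitly how the numbers quoted above follow from Eq. (3). The sketch below is purely illustrative: the CKM angle is fixed at a representative value inside the currently allowed range, and the strong phase is scanned over.

```python
import numpy as np

def a_cp_sm(rho, theta, gamma):
    # CP asymmetry of Eq. (3):
    # a_CP = 2 rho sin(theta) sin(gamma) / (1 + rho^2 + 2 rho cos(theta) cos(gamma))
    num = 2.0 * rho * np.sin(theta) * np.sin(gamma)
    den = 1.0 + rho**2 + 2.0 * rho * np.cos(theta) * np.cos(gamma)
    return num / den

gamma = np.radians(76.0)          # an illustrative CKM angle within the allowed range
theta = np.linspace(0.0, np.pi, 721)

for rho in (0.017, 0.10):         # perturbative estimate vs. rescattering-enhanced value
    a = a_cp_sm(rho, theta, gamma)
    print(rho, np.max(np.abs(a)))  # a few per cent and about 20%, as quoted above
```

Since the asymmetry is bounded by roughly $`2\rho `$ for small $`\rho `$, only a sizeable $`\rho `$, or a new amplitude of comparable magnitude, can produce a large effect.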
The key point is that the lepton-number-violating interactions of the type $`\lambda _{ijk}^{}L_iQ_jD_k^c`$, which is a part of the $`\overline{)}R`$ superpotential, could contribute to non-leptonic $`B`$ decays at the tree level. For some of these channels the leading SM contributions arise from penguins. In view of the current upper bounds on the relevant $`\lambda ^{}`$ couplings and sparticle masses , the possibility that the $`\overline{)}R`$ tree contributions are of the same order of magnitudes as the SM penguins is very much open. Moreover, $`\overline{)}R`$ interactions are potentially important sources of new weak phases. To clarify this point, we first stress that the $`\lambda _{ijk}^{}`$ are in general complex. Even if a given $`\lambda ^{}`$ is predicted to be real in some specific model, the phase rotations of the left- and right-handed quark fields required to keep the mass terms real and reduce the CKM matrix to its standard form will automatically introduce a new weak phase in this term, barring accidental cancellations. Indeed, this phase can be absorbed by redefining the slepton or the sneutrino field. But the $`\overline{)}R`$ contribution to a $`B`$ decay amplitude depends on the product of the type $`\lambda _{ij3}^{}\lambda _{ilm}^{}`$. This product cannot be rendered real by transforming the single slepton or sneutrino field corresponding to the index $`i`$. This point has not been emphasized in the literature. How $`R`$-parity violation induces $`CP`$ violation in neutral $`B`$ decays have been analysed before . In this paper, we examine the $`\overline{)}R`$ effects on direct $`CP`$ violation in $`B^\pm \pi ^\pm K`$ decays. We show that these effects could cast a much larger numerical impact than the non-perturbative dynamics (e.g., the rescattering effects) in the SM. In the process we have derived new upper bounds on the $`\lambda _{i13}^{}\lambda _{i12}^{}`$ combinations from the existing data. We also study how the same $`\overline{)}R`$ couplings contaminate the extraction of the CKM phase $`\gamma `$ from $`B\pi K`$ decays. These issues have not been addressed in the past. How to compute the $`\overline{)}R`$ contributions to the decay $`B^+\pi ^+K^0`$? One can generate a colour non-suppressed tree level amplitude for this process using the simultaneous presence of the $`\lambda _{i13}^{}`$ and $`\lambda _{i12}^{}`$ couplings. Following the standard practice, we shall assume that only one such pair (for a given $`i`$) of $`\overline{)}R`$ couplings is numerically significant at a time. This constitutes a sneutrino ($`\stackrel{~}{\nu }_i`$) mediated decay. Using a simple Fierz transformation, one can rearrange this $`(SP)(S+P)`$ interaction in a $`(VA)(V+A)`$ form. The amplitude of this new contribution turns out to be<sup>2</sup><sup>2</sup>2In Ref. , a correction factor ($`f_{\mathrm{QCD}}(\alpha _S(m_b)/\alpha _S(\stackrel{~}{m}))^{24/23}2`$ for a sneutrino mass ($`\stackrel{~}{m}`$) of 100 GeV) to such $`\overline{)}R`$ interactions has been computed. We do not indulge ourselves with this nitty-gritty for our order of magnitude estimate. $$A^\overline{)}R(B^+\pi ^+K^0)=(|\lambda _{i13}^{}\lambda _{i12}^{}|/8\stackrel{~}{m}^2)e^{i\gamma _\overline{)}R}|\mathrm{\Lambda }_\overline{)}R|e^{i\gamma _\overline{)}R},$$ (4) where $`\gamma _\overline{)}R`$ is the weak phase associated with the product of $`\lambda ^{}`$s. The total amplitude (sum of expressions in Eqs. 
(2) and (4)) can be parametrized as $$A(B^+\pi ^+K^0)=A\lambda ^2(1\lambda ^2/2)|P_{tc}|e^{i\delta _{tc}}\left(1+\rho e^{i\gamma }e^{i\theta }+\rho _\overline{)}Re^{i\gamma _\overline{)}R}e^{i\theta _\overline{)}R}\right),$$ (5) where $$\rho _\overline{)}R|\mathrm{\Lambda }_\overline{)}R|/A\lambda ^2(1\lambda ^2/2)|P_{tc}|.$$ (6) Note that $`\rho _\overline{)}R`$ is free from uncertainties due to factorization. A straightforward computation for the net $`CP`$ asymmetry yields $$a_{CP}=\frac{2\rho \mathrm{sin}\theta \mathrm{sin}\gamma +2\rho _\overline{)}R\mathrm{sin}\theta _\overline{)}R\mathrm{sin}\gamma _\overline{)}R+2\rho \rho _\overline{)}R\mathrm{sin}(\theta \theta _\overline{)}R)\mathrm{sin}(\gamma \gamma _\overline{)}R)}{1+\rho ^2+\rho _\overline{)}R^2+2\rho \mathrm{cos}\theta \mathrm{cos}\gamma +2\rho _\overline{)}R\mathrm{cos}\theta _\overline{)}R\mathrm{cos}\gamma _\overline{)}R+2\rho \rho _\overline{)}R\mathrm{cos}(\theta \theta _\overline{)}R)\mathrm{cos}(\gamma \gamma _\overline{)}R)}.$$ (7) It is clear that $`a_{CP}`$ is numerically insignificant if both $`\rho `$ and $`\rho _\overline{)}R`$ are vanishingly small. As an illustrative example to demonstrate that $`R`$-parity violation alone has the potential to generate a $`CP`$ asymmetry much larger than the most optimistic expectation in the SM, we set $`\rho =0`$ in Eq. (7). A non-zero $`\rho (0.1)`$ could dilute the effect at most by $`20\%`$. At this point, an estimate of how large $`\rho _\overline{)}R`$ could be is in order. We choose $`\stackrel{~}{m}=`$ 100 GeV throughout our analysis. Employing the current upper limits on $`\lambda _{i13}^{}\lambda _{i12}^{}`$ , we obtain, for $`i=`$ 1, 2, and 3, $`\rho _\overline{)}R<`$ 0.17, 3.45, and 4.13 respectively<sup>3</sup><sup>3</sup>3These upper limits correspond to the products of the individual upper limits obtained from different physical processes assuming a common superpartner scalar mass of 100 GeV .. The upshot is that it is possible to arrange $`\rho _\overline{)}R=1`$ (for $`i=`$ 2, 3), which implies that a 100% $`CP`$ asymmetry is no longer unattainable, once we set $`\gamma _\overline{)}R=\theta _\overline{)}R=\pi /2`$. Such a drastic hike of $`CP`$ asymmetry is somewhat unique and we emphasize that it is hard to find such large effects in other places . Notice that a minimum $`\rho _\overline{)}R`$ is required to generate a given $`a_{CP}`$. This is given by (for $`\rho =0`$) $$\rho _\overline{)}R>(1\sqrt{1a_{CP}^2})/|a_{CP}|.$$ (8) Eq. (8) has been obtained by minimizing $`\rho _\overline{)}R`$ with respect to $`\gamma _\overline{)}R`$ and $`\theta _\overline{)}R`$ for a given $`a_{CP}`$. Numerically, $$\rho _\overline{)}R>1.0(1.0),0.50(0.8),0.33(0.6),0.21(0.4),0.10(0.2);$$ (9) where the numbers within brackets refer to the corresponding $`a_{CP}`$’s. At the same time we must ensure that $`\rho _\overline{)}R`$ does not become too large to overshoot the experimental constraint on $`(B^\pm \pi ^\pm K)`$. The latest CLEO measurement reads $`^{\mathrm{exp}}(B^\pm \pi ^\pm K)=(1.4\pm 0.5\pm 0.2)\times 10^5`$, which means $`^{\mathrm{exp}}1.9\times 10^5`$ (1$`\sigma `$) . Recall that $`^{\mathrm{SM}}(1.01.8)\times 10^5`$ in view of the present uncertainty of the SM (for $`\rho =0`$; varying $`\rho `$ within $`00.1`$ cannot change $`^{\mathrm{SM}}`$ significantly). Therefore the tolerance to accommodate a multiplicative new physics effect could at most be a factor of 1.9 (at 1$`\sigma `$). It is easy to extract from the denominator of Eq. 
(7) that pure $`\overline{)}R`$ effects modify the SM prediction of the branching ratio by a multiplicative factor $`(1+\rho _\overline{)}R^2+2\rho _\overline{)}R\mathrm{cos}\theta _\overline{)}R\mathrm{cos}\gamma _\overline{)}R)`$. The maximum allowed value of $`\rho _\overline{)}R`$ is then obtained by setting one of the two angles appearing in this factor to zero and the other to $`\pi `$ (i.e., arranging maximum possible destructive interference in the branching ratio). This leads to the conservative upper bound $$\rho _\overline{)}R<2.4;\mathrm{implying}|\lambda _{i13}^{}\lambda _{i12}^{}|<5.7\times 10^3(1\sigma ).$$ (10) Note that, in spite of the large uncertainties as discussed above, for $`i=2,3`$, these limits are already stronger than the existing ones (see discussion just above Eq. (8)). Moreover, the existing bounds from semi-leptonic processes necessarily depend on the exchanged squark masses and have been derived by assuming a common mass of 100 GeV for them. With present data from Tevatron on the squarks and gluino searches, this seems to be a too optimistic assumption. On the other hand, our bounds from hadronic $`B`$ decays are based on a more realistic assumption that the virtual sneutrinos have a common mass of 100 GeV. We also note that the choice of phases that leads to Eq. (10) kills the $`\overline{)}R`$ contribution to the $`CP`$ asymmetry. If, on the other hand, we are interested in finding the upper limit on $`\rho _\overline{)}R`$, with $`a_{CP}`$ maximized with respect to $`\gamma _\overline{)}R`$ and $`\theta _\overline{)}R`$, we must set each of the angles to $`\pi /2`$: the interference term vanishes, and we obtain a stronger limit $`\rho _\overline{)}R<1.0`$. This, in conjunction with Eqs. (9) and (10), defines a range of the $`\overline{)}R`$ couplings to be probed in the upcoming $`B`$ factories. The $`\overline{)}R`$ couplings responsible for such large $`CP`$ asymmetries might leave important side-effects that could surface out while extracting the angles ($`\alpha `$, $`\beta `$ and $`\gamma `$) of the unitarity triangle. In the SM, the three angles measured independently should sum up to $`\pi `$. In the presence of new physics, if one attempts to determine those angles assuming the validity of the SM alone, one may not obtain their true values. The SM amplitude of $`B_d\mathrm{\Psi }K_s`$, regarded as the gold-plated channel for extracting $`\beta `$, is dominated by a sum of operators all multiplying the same CKM combination $`V_{cs}^{}V_{cb}`$. It so happens that the present bounds on the corresponding $`\overline{)}R`$ couplings, viz. $`\lambda _{i23}^{}`$ and $`\lambda _{i22}^{}`$, are too strong to contaminate the extraction of $`\beta `$ in a numerically significant manner ($`A^\overline{)}R(B_d\mathrm{\Psi }K_s)<0.02A^{\mathrm{SM}}(B_d\mathrm{\Psi }K_s)`$) . On the other hand, extraction of $`\alpha `$ via $`B_d\pi ^+\pi ^{}`$ may be contaminated in a significant way, once $`\lambda _{i13}^{}\lambda _{i11}^{}`$ combinations are switched on. However, in the scenario that we are considering (i.e., where only $`\lambda _{i13}^{}\lambda _{i12}^{}`$ combinations are non-zero), extractions of both $`\alpha `$ and $`\beta `$ remain unaffected. Having thus measured $`\alpha `$ and $`\beta `$, $`\gamma `$ can be determined indirectly using the relation $`\gamma =\pi \alpha \beta `$. Let us now consider a direct measurement of $`\gamma `$, as suggested in Ref. 
, where one uses the observable $`R[(B_d\pi ^{}K^+)+(\overline{B}_d\pi ^+K^{})]/[(B^+\pi ^+K^0)+(B^{}\pi ^{}\overline{K^0})].`$ The present experimental range is $`R=1.0\pm 0.46`$ . Neglecting rescattering effects, a measurement of $`R`$ could be used to bound $`\gamma `$ within the framework of the SM as $`\mathrm{sin}^2\gamma <R`$ . Within errors, it may still be possible that $`R`$ settles to a value significantly smaller than unity, disfavouring values of $`\gamma `$ around $`90^{}`$. This will certainly be in conflict if, for example, $`\gamma 90^{}`$ is preferred by indirect determination. Now considering the same $`\overline{)}R`$ scenario expressed through Eq. (4), the SM bound is modified to ($`\rho =0`$) $$\mathrm{sin}^2\gamma <R(1+\rho _\overline{)}R^2+2\rho _\overline{)}R\mathrm{cos}\theta _\overline{)}R\mathrm{cos}\gamma _\overline{)}R).$$ (11) Thus the bound on $`\gamma `$ either gets relaxed or further constrained depending on the magnitude of $`\rho _\overline{)}R`$ and the signs of $`\gamma _\overline{)}R`$ and $`\theta _\overline{)}R`$. For $`\rho _\overline{)}R1`$ and $`\gamma _\overline{)}R=\theta _\overline{)}R\pi /2`$, $`\mathrm{sin}^2\gamma <2R`$, and hence there is no constraint on $`\gamma `$, if $`R`$ turns out to be $``$$`>`$ 0.5. Notice that with these choices of $`\rho _\overline{)}R`$, $`\gamma _\overline{)}R`$ and $`\theta _\overline{)}R`$, one expects to observe large $`CP`$ asymmetry as well. Therefore the lesson is that if a large $`a_{CP}`$ is observed, more care is necessary in extracting $`\gamma `$. But the latter in isolation cannot provide a comprehensive signal of new physics. The existing bound on $`\gamma `$ in the SM ($`41^{}<\gamma <134^{}`$) has been obtained from a global fit of the unitarity triangle using data on $`|V_{cb}|`$, $`|V_{ub}|/|V_{cb}|`$, $`B_d`$$`\overline{B}_d`$ mixing and $`CP`$ violation in the $`K`$ system . The allowed zone is almost symmetric around $`\gamma =90^{}`$. Interestingly, a measurement of $`R<1`$, which is still possible within errors, excludes a region symmetric with respect to $`\gamma =90^{}`$. Eq. (11) implies that these two contrasting features can be reconciled by $`R`$-parity violation. It would be worth performing a multichannel analysis combining all kinds of $`B\pi \pi ,\pi K`$ and $`BDK`$ modes (recommended for measuring $`\gamma `$) to enhance the significance of any nonzero asymmetry and to identify the sources of new physics in a more unambiguous way. In this context, $`R`$-parity violation has a distinctive feature that it can enhance both $`a_{CP}`$ and the branching ratios simultaneously. With our choice of $`\overline{)}R`$ couplings, only $`B^\pm \pi ^\pm K`$ modes are affected. Note that if we turn on the $`\lambda _{i23}^{}`$ and $`\lambda _{i11}^{}`$ couplings instead of those we have considered here, we obtain the same diagrams except that the new amplitude picks up a colour-suppression factor. On the other hand, it is possible to probe $`\overline{)}R`$ effects on other channels, e.g., $`B_s\varphi K`$ ($`b\overline{s}ss`$), by turning on other $`\overline{)}R`$ couplings. Even if no enhanced $`a_{CP}`$ is observed, or no disparity between $`^{\mathrm{SM}}`$ and $`^{\mathrm{exp}}`$ is established, new constraints on $`\overline{)}R`$ couplings could be obtained, as has already been hinted by Eq. (10). 
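Since several of the statements above are purely algebraic consequences of Eqs. (7)-(11), they are easy to verify directly. The short sketch below (plain Python with numpy; the enhancement factor 1.9 is the one quoted above, while the trial value $`R=0.7`$ is a hypothetical measurement chosen only for illustration) reproduces the minimum-$`\rho _\overline{)}R`$ values of Eq. (9), the conservative bound of Eq. (10), and the weakening of the $`\mathrm{sin}^2\gamma <R`$ bound expressed by Eq. (11).

```python
import numpy as np

def a_cp(rho_r, theta_r, gamma_r, rho=0.0, theta=0.0, gamma=0.0):
    """CP asymmetry of Eq. (7); all angles in radians."""
    num = (2*rho*np.sin(theta)*np.sin(gamma)
           + 2*rho_r*np.sin(theta_r)*np.sin(gamma_r)
           + 2*rho*rho_r*np.sin(theta - theta_r)*np.sin(gamma - gamma_r))
    den = (1 + rho**2 + rho_r**2
           + 2*rho*np.cos(theta)*np.cos(gamma)
           + 2*rho_r*np.cos(theta_r)*np.cos(gamma_r)
           + 2*rho*rho_r*np.cos(theta - theta_r)*np.cos(gamma - gamma_r))
    return num/den

def br_factor(r, th, ga):
    """Multiplicative change of the SM branching ratio (denominator of Eq. (7), rho = 0)."""
    return 1 + r**2 + 2*r*np.cos(th)*np.cos(ga)

# maximal R-parity-violating asymmetry: rho_r = 1, gamma_r = theta_r = pi/2
print(a_cp(1.0, np.pi/2, np.pi/2))                       # -> 1.0, i.e. 100%

# minimum rho_r needed for a given asymmetry, Eq. (8), vs. the list in Eq. (9)
for a in (1.0, 0.8, 0.6, 0.4, 0.2):
    print(a, (1 - np.sqrt(1 - a**2))/a)

rho_grid = np.linspace(0, 4, 4001)
# maximal destructive interference, factor (1 - rho_r)^2 <= 1.9 -> Eq. (10)
print(rho_grid[br_factor(rho_grid, 0.0, np.pi) <= 1.9].max())        # ~2.4
# angles pi/2 (maximal a_CP, no interference) -> stronger limit
print(rho_grid[br_factor(rho_grid, np.pi/2, np.pi/2) <= 1.9].max())  # ~0.95, i.e. close to 1.0

# Eq. (11): for rho_r ~ 1 and both angles ~ pi/2 the bound becomes sin^2(gamma) < 2R,
# so a measurement R >~ 0.5 no longer constrains gamma at all
R = 0.7
print(np.sin(np.radians(90.0))**2 < R*br_factor(1.0, np.pi/2, np.pi/2))  # True: gamma = 90 deg allowed
```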
All this suggests that a systematic study of many observables, together with different sets of $`\overline{)}R`$ couplings, could constitute an exciting program in view of the upcoming $`B`$ factories. The work of AD has been supported by the DST, India (Project No. SP/S2/k01/97) and the BRNS, India (Project No. 37/4/97 - R & D II/474). Both authors thank A. Raychaudhuri for a careful reading of the manuscript, and M. Artuso of the CLEO Collaboration for communication regarding new data.
# LPT Orsay 99/19 hep-ph/9903338 Event–by–event fluctuations in heavy–ion collisions and the quark–gluon string model
## I Introduction
An event–by–event analysis of heavy–ion collisions can give important information on the dynamics of these processes. In the paper it was proposed to use an event–by–event analysis of transverse momentum fluctuations as a method for the study of ”equilibration” in high–energy nucleus–nucleus collisions. For this purpose a special variable $`\mathrm{\Phi }`$ has been introduced in ref. (the definition of this variable will be given below). The problem of ”equilibration” in high–energy heavy–ion collisions is very important in order to understand the conditions for quark–gluon plasma formation. Recent experimental results on the event–by–event analysis of Pb–Pb collisions at CERN SPS by the NA49 collaboration show that the value of $`\mathrm{\Phi }`$ is substantially smaller than expected in the case of independent nucleon–nucleon collisions, and its smallness was considered an indication of ”equilibration” in the system. This result has been discussed in the framework of different theoretical models. It was shown that the increase of transverse momenta of hadrons due to multiple rescatterings leads to a substantial increase of $`\mathrm{\Phi }`$. Incorporating this effect in the model of ref. one finds an even stronger disagreement with the NA49 result. It was also demonstrated that string fusion leads to an increase of $`\mathrm{\Phi }`$, in disagreement with experiment . The results on the influence of final-state interactions on the observable $`\mathrm{\Phi }`$ are contradictory: in ref. it was shown that final-state interactions in the framework of the string model of ref. have a small effect and do not allow one to reach agreement with experiment, while in ref. it was argued that final-state interactions in the framework of the UrQMD model are essential and can decrease $`\mathrm{\Phi }`$ to a value consistent with experiment. On the other hand, it was shown in ref. that, in the case of a fully equilibrated hadronic gas made mostly of pions, one expects large positive values of the variable $`\mathrm{\Phi }`$, not consistent with the experimental observation. In this note we study event–by–event fluctuations using the Monte–Carlo formulation of the Quark–Gluon String Model (QGSM) . The QGSM and the Dual Parton Model (DPM) are closely related dynamical models based on the 1/N–expansion in QCD, string fragmentation and reggeon calculus. They give a good description of many characteristics of multiparticle production in hadron–hadron, hadron–nucleus and nucleus–nucleus collisions (for a review see refs. ). Nuclear interactions in this model are treated in the Glauber–Gribov approach. It will be shown that the model reproduces the results of the event–by–event analysis of the NA49 experiment for the quantity $`\mathrm{\Phi }`$ as well as for the other fluctuations observed in this experiment . We analyze the reason for the decrease of the quantity $`\mathrm{\Phi }`$ from p–p to Pb–Pb collisions seen by the NA49 experiment and come to the conclusion that the quantity $`\mathrm{\Phi }`$ is sensitive to many details of the interaction and can hardly be considered a good measure of ”equilibration” in the system. We predict a strong increase of the value of $`\mathrm{\Phi }`$ at the energies of RHIC and higher. The model also gives definite predictions for event–by–event fluctuations in p–Pb collisions. 
## II Analysis of event–by–event fluctuations of transverse momenta
Let us briefly recall the method for studying event–by–event fluctuations of the transverse momenta of produced particles introduced in ref. . It was proposed to define for each particle in a given event a variable $`z_i=p_{Ti}-<p_T>`$, where $`p_{Ti}`$ is the transverse momentum of the particle $`i`$ and $`<p_T>`$ is the mean transverse momentum of particles averaged over all events. Using $`z_i`$ the quantity $`Z=\mathrm{\Sigma }_{i=1}^Nz_i`$ is defined, where N is the total number of particles in the event. If nucleus–nucleus collisions can be considered as a superposition of independent nucleon–nucleon collisions then it can be shown that $$\frac{<Z^2>_{AA}}{<N>_{AA}}=\frac{<Z^2>_{NN}}{<N>_{NN}}.$$ (1) A derivation of this result is given in Appendix 1. The averaging in eq.(1) is over all events in a given kinematical region. It was proposed in ref. to characterize the degree of fluctuations by the variable $$\mathrm{\Phi }=\sqrt{\frac{<Z^2>}{<N>}}-\sqrt{<z^2>},$$ (2) where $`<z^2>`$ is the second moment of the single-particle inclusive z–distribution. The quantity $`<z^2>`$ corresponds to purely statistical fluctuations and is determined by mixing particles from different events. It was emphasized in ref. that, if nucleus–nucleus collisions were a simple superposition of independent nucleon–nucleon collisions, then the variable $`\mathrm{\Phi }`$ would be the same as in the nucleon–nucleon case (see Appendix 1). In nucleon–nucleon collisions the quantity $`\mathrm{\Phi }`$ is different from zero due to dynamical correlations and, in particular, due to the dependence of $`<p_T>`$ on the number of produced particles. It was proposed to attribute a possible decrease of the quantity $`\mathrm{\Phi }`$ in A–A collisions to the effects of ”equilibration”. Let us note that the model of independent N–N collisions for nucleus–nucleus interactions is an extremely oversimplified one. The Glauber model at high energies is not equivalent to independent N–N collisions even for N–A interactions. The space–time picture of hadron–nucleus interactions at high energies is completely different from a simple picture of successive reinteractions of an initial hadron with nucleons of the nucleus (see e.g. ). For nucleus–nucleus interactions there are extra correlations . The model of independent N–N collisions does not even satisfy energy–momentum conservation, as a nucleon of one nucleus cannot interact inelastically several times with nucleons of another nucleus having the same energy at each interaction. In the QGSM as well as in the DPM, the effects of multiple interactions in hadron–nucleus and nucleus–nucleus collisions are taken into account in the approach based on the topological expansion in QCD . Probabilities of rescatterings are calculated in the framework of the Glauber–Gribov theory and multiparticle configurations in the final state are determined using AGK cutting rules. In these models the Pomeron is related to the cylinder-type diagrams, which correspond to the production of two chains of particles due to decays of two $`qqq`$ strings. Multi–Pomeron exchanges are related to multi–cylinder diagrams which produce extra chains of type $`q\overline{q}`$. They are especially important in interactions with nuclei. Fragmentation of strings into hadrons is described according to ”regge counting rules” , which give the correct triple–regge and double–regge limits of inclusive cross sections. 
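Since $`\mathrm{\Phi }`$ is used throughout what follows, it may help to spell out how it is computed from a sample of events. The toy sketch below (our own Python illustration, not part of the QGSM code; the multiplicity and $`p_T`$ distributions are invented) evaluates Eqs. (1)-(2) directly: independent emission gives $`\mathrm{\Phi }`$ compatible with zero, while a built-in correlation between $`<p_T>`$ and multiplicity produces a positive $`\mathrm{\Phi }`$, as described above for nucleon–nucleon collisions.

```python
import numpy as np

rng = np.random.default_rng(1)

def make_events(n_events, correlated):
    """Toy events: exponential p_T spectrum; optionally <p_T> grows with multiplicity."""
    events = []
    for _ in range(n_events):
        n = rng.poisson(8) + 1
        mean_pt = 0.30 + (0.01 * n if correlated else 0.0)   # GeV
        events.append(rng.exponential(mean_pt, size=n))
    return events

def phi_observable(events):
    """Phi of Eq. (2): sqrt(<Z^2>/<N>) - sqrt(<z^2>), with z_i = p_Ti - <p_T>."""
    all_pt = np.concatenate(events)
    mean_pt = all_pt.mean()                        # inclusive <p_T>, all events mixed
    z2 = ((all_pt - mean_pt) ** 2).mean()          # <z^2>
    Z = np.array([np.sum(ev - mean_pt) for ev in events])
    N = np.array([len(ev) for ev in events])
    return np.sqrt((Z ** 2).mean() / N.mean()) - np.sqrt(z2)

print("independent emission   :", 1e3 * phi_observable(make_events(20000, False)), "MeV")  # ~0 within statistics
print("<p_T>-N correlation on :", 1e3 * phi_observable(make_events(20000, True)), "MeV")   # clearly positive
```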
Note that in the Monte Carlo version used in this paper the Artru–Menessier string fragmentation scheme is implemented instead of these rules. All conservation laws (including energy–momentum conservation) are satisfied in this approach. Let us emphasize that at energies $`\sqrt{s}\sim 10`$ GeV the cylinder–type diagrams give the dominant contributions for N–N collisions. Extra $`q\overline{q}`$ chains due to multi–cylinder diagrams have a rather small length in rapidity (short chains) and do not lead to substantial contributions to particle production. In nucleon–nucleus and nucleus–nucleus collisions, the number of short chains is strongly increased compared to the nucleon–nucleon case (it is proportional to the number of collisions) and they should be taken into account in any realistic calculation of multiparticle production on nuclei. This means that for p–A and A–B collisions there are extra ”clusters” of particles (short chains, of type $`q\overline{q}`$) compared to the nucleon–nucleon interaction ”clusters” (long chains connecting valence quarks and diquarks of the colliding nucleons). So we come to the conclusion that in the relativistic Glauber–Gribov dynamics the characteristics of final particles in N–A and A–B collisions cannot be expressed in terms of N–N collisions only, as was assumed in ref., and eq.(1) is not valid in general. In Appendix 1 we give, as an illustrative example, the results in a model with two types of clusters. This model is a generalization of the single-cluster model of ref. , and is much closer to the QGSM and DPM. The results of the Monte Carlo calculation for the quantity $`\mathrm{\Phi }`$ are shown in Table 1 for p–p, p–Pb and central Pb–Pb collisions at SPS energies ($`\sqrt{s}=19.4`$ GeV) and at RHIC ($`\sqrt{s}=200`$ GeV). Predictions of the model for $`\mathrm{\Phi }`$ are quite different for these two energies. At SPS there is a strong reduction of the quantity $`\mathrm{\Phi }`$ for nuclear collisions compared to N–N collisions, while at RHIC the quantity $`\mathrm{\Phi }`$ is predicted to be much larger than at SPS and about the same for Pb–Pb and p–p collisions. At LHC energies the value of $`\mathrm{\Phi }`$ obtained in our model is 160 MeV for p–p and even larger for Pb–Pb collisions (the Monte Carlo code we use does not allow us to calculate precise values of the correlations at LHC energies due to the very large number of particles produced in each event; so in Table 1 we give predictions of the model at $`\sqrt{s}=540`$ GeV and $`\sqrt{s}=1`$ TeV to show the energy dependence of the fluctuations). The results for SPS energies are in reasonable agreement with the experimental data of the NA49 Collaboration . The result for p–p interactions at this energy is even higher than the estimate, based on the dependence of $`<p_T>`$ on the number of charged particles, given in ref. . The model reproduces this correlation reasonably well and shows that this is not the only source of fluctuations leading to a non–zero value of $`\mathrm{\Phi }`$. We find that the quantity $`\mathrm{\Phi }`$ is also sensitive to other types of correlations and, in particular, to the correlations related to conservation of $`p_T`$ in the process. Let us note that the quantity $`\mathrm{\Phi }`$ at SPS energies is very small and is defined in eq.(2) as a difference of two large numbers (see Table 1), so it is very sensitive to all details of dynamical models. 
Because of its smallness it is difficult to obtain good accuracy in a Monte Carlo calculation of this quantity (especially for nucleus–nucleus collisions, where the maximum statistics possible in the Monte–Carlo is $`\sim `$ 5000 events). In order to increase the statistics and to reduce this uncertainty we give in Table 1 results obtained for the total rapidity interval, while the experimental data of the NA49 Collaboration were obtained in a fixed rapidity interval $`4<y_\pi ^{lab.}<5.5`$. The error in the values of $`\mathrm{\Phi }`$ in Table 1 is about 1 MeV for the lowest energies, and increases at high energy. Other properties of event-by-event fluctuations observed by the NA49 Collaboration were calculated under the conditions of the experiment and are reproduced by the model reasonably well, as shown in Fig. 1. It follows from Table 1 that at SPS energies there is an increase of the quantity $`\sqrt{\frac{<Z^2>}{<N>}}`$ from p–p to Pb–Pb collisions, but there is an even larger increase for $`\sqrt{<z^2>}`$ due to the increase of $`<p_T>`$ and to a change in the form of the $`p_T`$ distribution. The effect of the correlations between $`<p_T>`$ and the number of charged particles due to rescatterings is, to a large extent, compensated at these energies by energy–momentum conservation effects. As a result, we find no dependence of $`<p_T>`$ on $`n_{ch}`$ for Pb–Pb collisions at SPS. For RHIC energies a strong increase in the values of both $`\sqrt{\frac{<Z^2>}{<N>}}`$ and especially of $`\mathrm{\Phi }`$ is predicted (see Table 1). At these energies the increase of average transverse momentum with the number of rescatterings becomes very important in p–p interactions and is reproduced by the QGSM (Fig. 2a). It is shown in Fig. 2b that an increase of $`<p_T>`$ with multiplicity is predicted at these energies even for Pb–Pb collisions, although the effect is less pronounced for heavy-ion collisions than for p–p. At LHC energies all these effects will be stronger than at RHIC and will produce an increase of the quantity $`\mathrm{\Phi }`$ in p–p, p–A and A–A collisions as energy increases.
## III Conclusions
We have shown, in a Monte Carlo version of the QGSM, that at SPS energies the quantity $`\mathrm{\Phi }`$, characterizing event–by–event transverse momentum fluctuations, decreases from $`\mathrm{\Phi }\approx 9`$ MeV in p–p collisions to $`\mathrm{\Phi }\approx 2`$ MeV in central Pb–Pb collisions. This result for Pb–Pb collisions agrees with the measurement of the NA49 Collaboration. In ref. , such a decrease between p–p and Pb–Pb was considered to be a test of equilibration of the dense system produced in central heavy ion collisions. We have obtained the same result in the framework of an independent string model. At RHIC energies, we predict an increase in the value of $`\mathrm{\Phi }`$ ($`\mathrm{\Phi }=75÷80`$ MeV). In this case $`\mathrm{\Phi }`$ will be approximately the same in p–p and central Pb–Pb collisions. At higher energies the value of $`\mathrm{\Phi }`$ is predicted to be larger and to increase from p–p to central Pb–Pb collisions. Our analysis indicates that the quantity $`\mathrm{\Phi }`$ can hardly be considered a good measure of ”equilibration” in the system. However, it can be used as a sensitive test of dynamical models.
Acknowledgments This work was supported in part by INTAS grant 93-0079ext, NATO grant OUTR.LG 971390, RFBR grants 96-02-191184a, 96-15-96740 and 98-02-17463. 
One of the author (E. G. F.) thanks Fundación Ramón Areces of Spain for finantial support. ## Appendix 1 Here we will consider a simplified model of multiparticle production with two types of ”clusters”, which is a generalization of the model of ref., where clusters of only a single type were produced. As discussed above, these ”clusters” correspond to $`qqq`$ chains (clusters of the first type) and $`q\overline{q}`$ chains (clusters of the second type). Nucleon–nucleon collision at SPS–energies can be described with a good accuracy by the production of two clusters of the first type, while for proton–nucleus and nucleus–nucleus interactions, production of the second type of clusters is important even in this energy range. For independent production of $`k_1`$ clusters of the first type and $`k_2`$ \- of the second type, with average transverse momenta $`<p_T>_i`$ and multiplicity $`<n>_i`$ for the cluster $`i`$, the following results for $`<Z>`$ and $`<Z^2>`$ can be obtained: $$<Z>_{k_1k_2}=k_1<Z>_1+k_2<Z>_2$$ (A.1) where $`<Z>_i=<n>_i(<p_T>_i<P_T>)`$ and $`<P_T>=\frac{(<k_1><n_1><p_T>_1+<k_2><n>_2<p_T>_2)}{(<k_1><n>_1+<k_2><n>_2)}`$ . $$<Z^2>_{k_1k_2}=k_1<Z^2>_1+k_2<Z^2>_2+k_1(k_11)<Z>_1^2+$$ $$+k_2(k_21)<Z>_2^2+2k_1k_2<Z>_1<Z>_2$$ (A.2) The expression for $`<Z^2>_{k_1k_2}`$ can be rewritten as $$<Z^2>_{k_1k_2}=k_1(<Z^2>_1<Z>_1^2)+k_2(<Z^2>_2<Z>_2^2)+<Z>_{k_1k_2}^2.$$ (A.3) For the quantity $`\mathrm{\Phi }`$ in eq.(2) it is important that contrary to the case of a single cluster, the expression for $`<Z^2>`$ contains negative terms proportional to $`<Z>_i^2`$. Next we take the average over the number of produced clusters with some distribution $`P_{k_1k_2}`$ $$<<Z^2>_{k_1k_2}>=\underset{k_1,k_2}{}P_{k_1k_2}<Z^2>_{k_1k_2}=$$ $$=<k_1>(<Z^2>_1<Z>_1^2)+<k_2>(<Z^2>_2<Z>_2^2)+<<Z>_{k_1k_2}^2>.$$ (A.4) In the following we will denote this averaging simply by $`<Z^2>`$. We shall concentrate on p–A collisions. In this case, it is easy to show that the last term in eq.(A.4) is small and can be neglected. To prove this we note that for p-A collisions, $`k_2=k_12`$ with $`k_1=\overline{\nu }+1`$, where $`\overline{\nu }`$ is the average number of collisions and, thus, $$<<Z>_{k_1k_2}^2><<Z>_{k_1k_2}>^2=(<k_1^2><k_1>^2)(<Z>_1+<Z>_2)^2.$$ (A.5) Taking into account that (for fixed impact parameter) the distribution in $`k_1`$ is of a Poisson type with $`(<k_1^2><k_1>^2)=c_1<k_1>`$ and that $`<<Z>_{k_1k_2}>=<k_1><Z>_1+<k_2><Z>_2=<k_1><Z>_1+(<k_1>2)<Z>_2=0`$ we obtain: $$<<Z>_{k_1k_2}^2>=\frac{4c_1<Z>_2^2}{<k_1>}.$$ (A.6) For large values of $`<k_1>`$ this quantity is much smaller than the other terms in the right–hand side of eq.(A.4). The expressions for the quantities $`<N>`$ and $`<z^2>`$, that enter into the definition of $`\mathrm{\Phi }`$ are selfevident $$<N>=<k_1><n_1>+<k_2><n_2>$$ (A.7) and $$<z^2>=\frac{<k_1><n>_1<z^2>_1+<k_2><n>_2<z^2>_2}{<k_1><n>_1+<k_2><n>_2}.$$ (A.8) Let us denote $`\frac{<Z^2>_i}{<n>_i}`$ as $`<z^2>_i(1+\delta _i)`$ and $`\frac{<Z>_i^2}{<Z^2>_i}\gamma _i`$, $`\frac{<k_2><n>_2}{<k_1><n>_1}\alpha `$. Taking into account that $`\delta _i2\frac{\mathrm{\Phi }_i}{\sqrt{z_i^2}}`$ and $`\gamma _i`$ are much smaller than unity we obtain the following approximate expression for $`\mathrm{\Phi }`$: $$\mathrm{\Phi }=\frac{[(\delta _1\gamma _1)<z^2>_1+(\delta _2\gamma _2)\alpha <z^2>_2]}{2\sqrt{A}}$$ (A.9) where $`A=(1+\alpha )(<z^2>_1+\alpha <z^2>_2)`$. 
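Equation (A.9) is easy to evaluate numerically. The sketch below (illustrative cluster parameters only, invented to show the trend rather than taken from the QGSM) reduces to the single-cluster result $`\mathrm{\Phi }=\frac{\delta _1}{2}\sqrt{<z^2>}`$ quoted below when the two species are identical, and shows how a second, softer cluster species changes the answer.

```python
import numpy as np

def phi_two_clusters(n1, pt1, k1, z2_1, delta1, n2, pt2, k2, z2_2, delta2):
    """Approximate Phi (in GeV) from Eq. (A.9) for two cluster species.
    n_i, pt_i : mean multiplicity and mean p_T (GeV) of one cluster of type i
    k_i       : mean number of clusters of type i per event
    z2_i      : single-particle <z^2> (GeV^2) inside clusters of type i
    delta_i   : defined through <Z^2>_i = <n>_i <z^2>_i (1 + delta_i)"""
    PT = (k1*n1*pt1 + k2*n2*pt2) / (k1*n1 + k2*n2)       # global <p_T>
    Z1, Z2 = n1*(pt1 - PT), n2*(pt2 - PT)                # <Z>_i
    g1 = Z1**2 / (n1*z2_1*(1 + delta1))                  # gamma_1
    g2 = Z2**2 / (n2*z2_2*(1 + delta2))                  # gamma_2
    alpha = (k2*n2) / (k1*n1)
    A = (1 + alpha) * (z2_1 + alpha*z2_2)
    return ((delta1 - g1)*z2_1 + (delta2 - g2)*alpha*z2_2) / (2*np.sqrt(A))

# identical clusters: gamma_i = 0 and the single-cluster limit is recovered
print(1e3 * phi_two_clusters(4.0, 0.35, 2.0, 0.09, 0.05,
                             4.0, 0.35, 2.0, 0.09, 0.05), "MeV")   # 7.5 MeV = (delta/2) sqrt(<z^2>)
# a second, softer cluster species: the negative gamma_i terms pull Phi down
print(1e3 * phi_two_clusters(4.0, 0.35, 2.0, 0.09, 0.05,
                             2.0, 0.25, 4.0, 0.05, 0.05), "MeV")
```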
It is important that the terms proportional to $`\gamma _i`$ give negative contributions to $`\mathrm{\Phi }`$ and can substantially decrease the value of $`\mathrm{\Phi }`$. For the case of clusters of the same type ($`\gamma _i=0,<Z^2>_1=<Z^2>_2,\delta _1=\delta _2`$) we obtain: $$\mathrm{\Phi }=\frac{\delta _1}{2}\sqrt{<z^2>}$$ (A.10) both for p–p and p–A. In this way we recover the result of ref. . The discussion in this Appendix has been restricted to p–A interactions. The situation is more complicated in A–B collisions. Actually, even for p–A, we do not claim that the effect discussed in this Appendix is the main reason for the decrease of $`\mathrm{\Phi }`$, obtained in the Monte Carlo calculations (see Table 1), between p–p and p–A collisions at SPS energies. Nevertheless, our example illustrates the important effect that a modification of the model (i.e., going from one to two types of clusters) can have on the quantity $`\mathrm{\Phi }`$.
## Table captions
Table 1. The results of the Monte-Carlo calculation for the quantities $`\sqrt{\frac{<Z^2>}{<N>}}`$ (MeV), $`\sqrt{<z^2>}`$ (MeV) and $`\mathrm{\Phi }`$ (MeV) for p-p, p-Pb and central Pb-Pb collisions at SPS ($`\sqrt{s}=19.4`$ GeV), RHIC ($`\sqrt{s}=200`$ GeV) and higher ($`\sqrt{s}=540`$ GeV and 1 TeV) energies. The results at 1 TeV have only an indicative value (see main text).
## Figure captions
Figure 1. Event spectra characterising the multiplicity, transverse momentum and rapidity distribution of charged particles per event for Pb-Pb collisions at $`P_{lab}=158`$ AGeV/c and $`b<3.5`$ fm in the rapidity interval $`4<y<5.5`$. The full lines are the Monte Carlo results. Experimental data are from ref. .
Figure 2. The dependence of the average transverse momentum on the multiplicity of charged particles in the window $`|\eta |<2.5`$ at $`\sqrt{s}=200`$ GeV for $`\mathrm{p}\overline{\mathrm{p}}`$ collisions compared to experimental data (Fig. 2a) and for Pb-Pb central collisions (Fig. 2b).
Table 1
| Reaction | $`\sqrt{s}`$ (GeV) | $`\sqrt{\frac{<Z^2>}{<N>}}`$ (MeV) | $`\sqrt{<z^2>}`$ (MeV) | $`\mathrm{\Phi }`$ (MeV) |
| --- | --- | --- | --- | --- |
| p-p | 19.4 | 244.5 | 235.5 | 9.0 |
| p-Pb | 19.4 | 243.5 | 243.0 | 0.5 |
| Pb-Pb | 19.4 | 265.6 | 263.2 | 2.4 |
| p-p | 200 | 387.0 | 310.6 | 76.4 |
| p-Pb | 200 | 433.7 | 367.8 | 65.9 |
| Pb-Pb | 200 | 508.9 | 429.4 | 79.5 |
| p-p | 540 | 450.1 | 323.6 | 126.5 |
| p-Pb | 540 | 524.3 | 397.7 | 126.6 |
| Pb-Pb | 540 | 622.6 | 475.2 | 147.4 |
| p-p | 1000 | 455.5 | 324.4 | 131 |
| p-Pb | 1000 | 524.5 | 397.5 | 127 |
| Pb-Pb | 1000 | 704.2 | 484.3 | 220 |
# Inflation and Preheating in NO models ## I Introduction Usually it is assumed that the inflaton field $`\varphi `$ after inflation rolls down to the minimum of its effective potential $`V(\varphi )`$, oscillates, and eventually decays. The stage of oscillations of the inflaton field is a necessary part of the standard mechanism of reheating of the universe . However, there exist some models where the inflaton potential $`V(\varphi )`$ gradually decreases at large $`\varphi `$ and does not have a minimum. In such theories the inflaton field $`\varphi `$ does not oscillate after inflation, so the standard mechanism of reheating does not work there. Investigation of inflationary models of this type has been rather sporadic , and each new author has given them a new name, such as deflation , kination , and quintessential inflation . However, the universe does not deflate in these models, and in general they are not related to the theory of quintessence. From our perspective, the main distinguishing feature of inflationary models of this type is the non-oscillatory behavior of the inflaton field, which makes the standard mechanism of reheating inoperative. Therefore we will call such models “non-oscillatory models,” or simply “NO models.” In addition to describing the most essential feature of this class of theories which makes reheating problematic, this name reflects the rather negligent attitude towards these models which existed until now. One of the reasons why NO models have not attracted much attention was the absence of an efficient mechanism of reheating. For a long time it was believed that the only mechanism of reheating possible in NO models was the gravitational particle production which occurs because of the changing metric in the early universe . This mechanism is very inefficient, which may lead to certain cosmological problems. However, recently the situation changed. The mechanism of instant preheating which was found in is very efficient, and it works in NO models even better than in the models where $`V(\varphi )`$ has a minimum. In this paper we will describe various features of NO models. First of all, we will discuss the problem of initial conditions in these models, which in our opinion has not been properly addressed before. The standard assumption made in is that at the end of inflation in NO models one has a large and heavy inflaton field $`\varphi `$ which rapidly changes and creates light particles $`\chi `$ minimally coupled to gravity from a state where the classical value of the field $`\chi `$ vanishes. We will show that this setting of the problem should be reconsidered. If the fields $`\varphi `$ and $`\chi `$ do not interact (which was the standard assumption of Refs. ), then at the end of inflation the field $`\chi `$ typically does not vanish. Usually the last stages of inflation are driven by the light field $`\chi `$ rather than by the heavy field $`\varphi `$. But in this case reheating occurs due to oscillations of the field $`\chi `$, as in the usual models of inflation. In addition to reexamining the problem of initial conditions, we will point out potential difficulties associated with isocurvature perturbations and gravitational production of gravitinos and moduli fields in NO models. In order to provide a consistent setting for the NO models one needs to introduce interaction between the fields $`\varphi `$ and $`\chi `$. 
This resolves the problem of initial conditions in these models and makes it possible to have a non-oscillatory behavior of the inflaton field after inflation. We show that all of these problems can be resolved in the context of the recently proposed scenario of instant preheating if there is an interaction $`\frac{g^2}{2}\varphi ^2\chi ^2`$ of the inflaton field $`\varphi `$ with another scalar field $`\chi `$, with $`g^210^{14}`$. In this case the mechanism of instant preheating in NO models is much more efficient than the usual mechanism of gravitational particle production studied in . ## II On the initial conditions in NO models without interactions NO models considered in described an inflaton field which does not interact with other fields except gravitationally. As an example, we will consider here the simplest theory of the inflaton field $`\varphi `$ with an effective potential $`V(\varphi )`$ which behaves as $`\frac{\lambda }{4}\varphi ^4`$ at $`\varphi <0`$, and (gradually) vanishes when $`\varphi `$ becomes positive. In addition, in accordance with , we will consider a light scalar field $`\chi `$ which is not coupled to the inflaton field $`\varphi `$, and which is minimally coupled to gravity. Reheating in this model occurs because of the gravitational production of $`\chi `$ particles. Application of the general theory of gravitational particle creation to the last stages of inflation and immediate stages after inflation was considered in many papers; see in particular . However, this theory (and the interpretation of its results) may change dramatically if one investigates initial conditions for inflation and studies quantum fluctuations produced during inflation. In particular, in all previous works on NO models it was assumed that at the beginning of inflation $`|\varphi |`$ is very large and $`\chi =0`$. Let us show that in this case at the end of the stage of inflation driven by the field $`\varphi `$ the long-wavelength fluctuations of the field $`\chi `$ typically become so large that it leads to a new stage of inflation which is driven not by the field $`\varphi `$, but by the field $`\chi `$. This conclusion is rather general and may be extended to other models of $`V(\varphi )`$. The explanation goes back to the paper , where it was found that in the presence of several scalar fields the last stage of multiple inflation is typically driven by the lightest scalar field. Indeed, the field $`\varphi `$ during inflation obeys the following equation: $$3H\dot{\varphi }=\lambda _\varphi \varphi ^3.$$ (1) Here $$H=\sqrt{\frac{2\pi \lambda _\varphi }{3}}\frac{\varphi ^2}{M_p}.$$ (2) These two equations yield the solution $$\varphi =\varphi _0\mathrm{exp}\left(\sqrt{\frac{\lambda _\varphi }{6\pi }}M_pt\right).$$ (3) If the field $`\chi `$ is very light, then in each time interval $`H^1`$ during inflation fluctuations $`\delta \chi =\frac{H}{2\pi }`$ will be produced. The equation describing the growth of fluctuations of the field $`\chi `$ can be written as follows: $$\frac{d\chi ^2}{dt}=\frac{H^3}{4\pi ^2}.$$ (4) In de Sitter space with $`H=const`$ this equation would give $$\chi ^2=\frac{H^3t}{4\pi ^2}.$$ (5) For the theory under consideration $`H`$ depends on time, and Eq. 
(4) reads $$\frac{d\chi ^2}{dt}=\frac{\lambda _\varphi \sqrt{\lambda }_\varphi }{3\sqrt{6\pi }}\frac{\varphi _0^6}{M_p^3}\mathrm{exp}\left(\sqrt{\frac{6\lambda _\varphi }{\pi }}M_pt\right).$$ (6) The result of integration at large $`t`$ converges to $$\chi ^2=\frac{\lambda _\varphi \varphi _0^6}{18M_p^4}.$$ (7) These fluctuations from the point of view of a local observer look like a classical scalar field $`\chi `$ which is homogeneous on the scale $`H^1`$ and which has a typical amplitude $$\overline{\chi }=\sqrt{\chi ^2}=\sqrt{\frac{\lambda _\varphi }{18}}\frac{\varphi _0^3}{M_p^2}.$$ (8) This quantity is greater than $`M_p`$ for $$\varphi _0\lambda _\varphi ^{1/6}M_p.$$ (9) This condition is quite natural. For example, if, in accordance with , inflation begins at $`VM_p^4`$, one has $`\varphi _0\lambda _\varphi ^{1/4}M_p`$, which is much greater than $`\lambda _\varphi ^{1/6}M_p`$. If the field $`\chi `$ has a shallow polynomial potential such as $`m_\chi ^2\chi ^2/2`$ (with a small mass $`m_\chi `$), or $`\lambda _\chi \chi ^4/4`$ (with small $`\lambda _\chi `$), then the existence of a homogeneous field $`\overline{\chi }M_p`$ leads to a new stage of inflation. This stage will be driven by the light field $`\chi `$, and it will begin after the end of the stage of inflation driven by the field $`\varphi `$. The condition (9) coincides with the condition that chaotic inflation with respect to the field $`\varphi `$ enters the stage of self-reproduction . In this regime the field $`\varphi `$ may stay large even much longer than is suggested by our classical equations which do not take self-reproduction into account. As a result, fluctuations of the field $`\chi `$ will be even greater, and the last stage of inflation will be driven not by the field $`\varphi `$ but by the lighter field $`\chi `$. In such a case, the results of gravitational particle production obtained in all previous papers on NO models do not apply. Instead of particles $`\chi `$ produced at the end of inflation driven by the field $`\varphi `$ (or in addition to these particles), we have long-wavelength fluctuations of the field $`\chi `$ which initiate a new stage of inflation driven by the field $`\chi `$. A similar result can be obtained in the model $`V(\varphi )=\frac{m^2}{2}\varphi ^2`$. In this case after inflation driven by the field $`\varphi `$ one has $`\overline{\chi }=\sqrt{\chi ^2}=m\varphi _0^2/(\sqrt{3}M_p)`$. This leads to inflation with respect to the field $`\chi `$ (i.e. one has $`\overline{\chi }>M_p`$) for $`\varphi _0M_p\sqrt{\frac{M_p}{m}}`$. Again, according to , the most natural initial value of $`\varphi `$ is $`\varphi _0\frac{M_p^2}{m}M_p\sqrt{\frac{M_p}{m}}`$. This suggests that if the field $`\chi `$ is very light (which was assumed in ), then it is this field rather than the field $`\varphi `$ that is responsible for the end of inflation. Therefore instead of studying gravitational production of the field $`\chi `$ due to the non-oscillatory motion of the field $`\varphi `$ during preheating in this NO model, one should study the mechanism of production of particles of the field $`\varphi `$ by the oscillating field $`\chi `$. Of course, one can avoid some of these problems by choosing a specific set of scalar fields and a specific version of the theory which does not allow any of these scalar fields except the field $`\varphi `$ to drive inflation. 
For example, if the field $`\chi `$ is an axion field (and no other light scalar fields are present), then it simply cannot be large enough to be responsible for inflation. One can also assume that the field $`\chi `$ is nonminimally coupled to gravity and has a large effective mass $`O(H)`$ during inflation (see Sect. IV). Then the long-wavelength fluctuations of this field will not be produced. Thus there are some ways to overcome the problem mentioned above. But in general this problem is very serious, and one should be aware of its existence. ## III Isocurvature perturbations in the NO models In the previous section we showed that even if one assumes that $`\chi =0`$ at the beginning of inflation, the assumption that one still has $`\chi =0`$ at the beginning of reheating in general is incorrect. The long-wavelength perturbations of the field $`\chi `$ generated during inflation typically are very large, and they look like a large homogeneous classical field $`\chi `$. Now we will consider a more general question: If the two fields $`\varphi `$ and $`\chi `$ do not interact, then why should we assume that one of them should vanish in the beginning of inflation? And if it does not vanish, then how does it change the whole picture? Suppose for example that the field $`\chi `$ is a Higgs field with a relatively small mass and with a large coupling constant $`\lambda _\chi \lambda _\varphi `$. The total effective potential in this theory (for $`\varphi <0`$) is given by $$V(\varphi ,\chi )=\frac{\lambda _\varphi }{4}\varphi ^4+\frac{\lambda _\chi }{4}(\chi ^2v^2)^2.$$ (10) Here $`v`$ is the amplitude of spontaneous symmetry breaking, $`vM_p`$. During inflation and at the first stages of reheating this term can be neglected, so we will study the simplified model $$V(\varphi ,\chi )=\frac{\lambda _\varphi }{4}\varphi ^4+\frac{\lambda _\chi }{4}\chi ^4.$$ (11) This model was first analyzed in . It is directly related to the Peebles-Vilenkin model if the field $`\chi `$ is the Higgs boson field with a small mass $`m`$. This model exhibits the following unusual feature. In general, at the beginning of inflation one has both $`\varphi 0`$ and $`\chi 0`$. Thus, unlike in the previous subsection, we will not assume that $`\chi =0`$, and instead of studying quantum fluctuations of this field which can make it large, we will assume that it could be large from the very beginning. Even though the fields $`\varphi `$ and $`\chi `$ do not interact with each other directly, they move towards the state $`\varphi =0`$ and $`\chi =0`$ in a coherent way. The reason is that the motion of these fields is determined by the same value of the Hubble constant $`H`$. The equations of motion for both fields during inflation look as follows: $$3H\dot{\varphi }=\lambda _\varphi \varphi ^3.$$ (12) $$3H\dot{\chi }=\lambda _\chi \chi ^3.$$ (13) These equations imply that $$\frac{d\varphi }{\lambda _\varphi \varphi ^3}=\frac{d\chi }{\lambda _\chi \chi ^3},$$ (14) which yields the general solution $$\frac{1}{\lambda _\varphi \varphi ^2}=\frac{1}{\lambda _\chi \chi ^2}+\frac{1}{\lambda _\varphi \varphi _0^2}\frac{1}{\lambda _\chi \chi _0^2},$$ (15) Since the initial values of these fields are much greater than the final values, at the last stages of inflation one has $$\frac{\varphi }{\chi }=\sqrt{\frac{\lambda _\chi }{\lambda _\varphi }}.$$ (16) Suppose $`\lambda _\varphi \lambda _\chi `$. 
In this case the “heavy” field $`\chi `$ rapidly rolls down, and then from the last equation it follows, rather paradoxically, that the Hubble constant at the end of inflation is dominated by the “light” field $`\varphi `$. Thus we can consistently consider the creation of fluctuations of the field $`\chi `$ ($`\chi `$ particles) at the end of and after the last inflationary stage driven by the $`\varphi `$ field. But now these fluctuations occur on top of a nonvanishing classical field $`\chi `$. To study the behavior of the classical fields $`\varphi `$ and $`\chi `$ and their fluctuations analytically, one should remember that during the inflationary stage driven by $`\varphi `$ one has $`H=\sqrt{\frac{2\lambda _\varphi \pi }{3}}\frac{\varphi ^2}{M_p}`$, as in the previous section. In this case, as before, the solution for the equation of motion of the field $`\varphi `$ is given by $$\varphi =\varphi _0\mathrm{exp}\left(\sqrt{\frac{\lambda _\varphi }{6\pi }}M_pt\right).$$ (17) Meanwhile, according to Eq. (16), $$\chi =\varphi _0\sqrt{\frac{\lambda _\varphi }{\lambda _\chi }}\mathrm{exp}\left(\sqrt{\frac{\lambda _\varphi }{6\pi }}M_pt\right),$$ (18) whereas for perturbations of the field $`\chi `$ one has: $$\delta \chi =\delta \chi _0\mathrm{exp}\left(3\sqrt{\frac{\lambda _\varphi }{6\pi }}M_pt\right).$$ (19) Let us consider, for example, the behavior of the fields and their fluctuations at the end of inflation, starting from the moment $`\varphi =\varphi _i`$. One may take, for example, $`\varphi _i4M_p`$, which corresponds to a point approximately 60 e-folds before the end of inflation. The fluctuations $`\delta \chi _iH(\varphi _i)/2\pi `$ decrease according to (19), and at the end of inflation one gets $$\frac{\delta \chi }{\chi }=\frac{H(\varphi _i)}{2\pi \chi _i}\mathrm{exp}\left(2\sqrt{\frac{\lambda _\varphi }{6\pi }}M_pt\right)=\frac{\sqrt{\lambda _\chi }\varphi _i}{\sqrt{6\pi }M_p}\frac{\varphi _e^2}{\varphi _i^2}.$$ (20) Here $`\varphi _e0.3M_p`$ corresponds to the end of inflation.<sup>*</sup><sup>*</sup>*We are grateful to Peebles and Vilenkin for pointing out that the factor $`\frac{\varphi _e^2}{\varphi _i^2}`$ should be present in this equation. After that moment the fields $`\varphi `$ and $`\chi `$ begin oscillating, and the ratio of $`\delta \chi `$ to the amplitude of oscillations of the field $`\chi `$ remains approximately constant. This gives the following estimate for the amplitude of isocurvature perturbations in this model: $$\frac{\delta V(\chi )}{V(\chi )}4\frac{\delta \chi }{\chi }=2\times 10^2\sqrt{\lambda _\chi }.$$ (21) Initially perturbations of $`V(\chi )`$ give a negligibly small contribution to perturbations of the metric because $`V(\chi )V(\varphi )`$; that is why they are called isocurvature perturbations. However, the main idea of preheating in NO models is that eventually $`\chi `$ fields or the products of their decay will give the dominant contribution to the energy-momentum tensor because the energy density of the field $`\varphi `$ rapidly vanishes due to the expansion of the universe ($`\rho _\varphi a^6`$). However, because of the inhomogeneity of the distribution of the field $`\chi `$ (which will be imprinted in the density distribution of the products of its decay on scales greater than $`H^1`$), the period of the dominance of matter over the scalar field $`\varphi `$ will happen at different times in different parts of the universe. 
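For orientation, the size of Eqs. (20)-(21) can be evaluated in a few lines. The sketch below (Planck units $`M_p=1`$, $`\lambda _\varphi =10^{-13}`$ as in realistic versions of such models, and a Higgs-like $`\lambda _\chi =10^{-7}`$ chosen purely for illustration) uses the attractor relation (16) for the background value of $`\chi `$ and prints the resulting isocurvature amplitude.

```python
import numpy as np

lam_phi = 1e-13        # inflaton self-coupling
lam_chi = 1e-7         # illustrative "Higgs-like" coupling of the second field
Mp = 1.0

phi_i = 4.0 * Mp       # about 60 e-folds before the end of inflation
phi_e = 0.3 * Mp       # end of inflation

# attractor relation (16): the chi background tracks phi
chi_i = np.sqrt(lam_phi / lam_chi) * phi_i

# Eq. (20): delta_chi / chi at the end of inflation
H_i = np.sqrt(2 * np.pi * lam_phi / 3) * phi_i**2 / Mp
ratio = (H_i / (2 * np.pi * chi_i)) * (phi_e / phi_i) ** 2

print("delta_chi/chi =", ratio)
print("closed form   =", np.sqrt(lam_chi) * phi_i / (np.sqrt(6*np.pi) * Mp) * (phi_e/phi_i)**2)
print("delta V / V   =", 4 * ratio)      # Eq. (21), roughly 2e-2 * sqrt(lam_chi)
```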
In other words, the epoch when the universe begins expanding as $`a\sqrt{t}`$ or $`at^{2/3}`$ instead of $`at^{1/3}`$ will begin at different moments $`t`$ (at different total densities) in different parts of the universe. Starting from this time the isocurvature fluctuations (21) will produce metric perturbations, and, as a result, perturbations of CMB radiation. Note that if the equation of state of the field $`\chi `$ or of the products of its decay coincided with the equation of state of the scalar field $`\varphi `$ after inflation, fluctuations of the field $`\chi `$ would not induce any anisotropy of CMB radiation. For example, these fluctuations would be harmless if the field $`\chi `$ decayed into ultrarelativistic particles with the equation of state $`p=\rho /3`$ and if the equation of state of the field $`\varphi `$ at that time were also given by $`p=\rho /3`$. However, in our case the field $`\varphi `$ has equation of state $`p=\rho `$, which is quite different from the equation of state of the field $`\chi `$ or of its decay products. Isocurvature fluctuations lead to approximately 6 times greater large scale anisotropy of the cosmic microwave radiation as compared with adiabatic perturbations. To avoid cosmological problems, one would need to have $`\frac{\delta V(\chi )}{V(\chi )}5\times 10^6`$. If $`\chi `$ is the Higgs field with $`\lambda _\chi 10^7`$, then the perturbations discussed above will be unacceptably large. This may be a rather serious problem. Indeed, one may expect to have many scalar fields in realistic theories of elementary particles. To avoid large isocurvature fluctuations each of these fields must be extremely weakly coupled, with $`\lambda _\chi 10^7`$, The general conclusion is that the theory of reheating in NO models, as well as their consequences for the creation of the large-scale structure of the universe, may be quite different from what was anticipated in the first papers on this subject. In the simplest versions of such models inflation typically does not end in the state $`\chi =0`$, and large isocurvature fluctuations are produced. ## IV Cosmological production of gravitinos and moduli fields If the inflaton field $`\varphi `$ is sterile, not interacting with any other fields, the elementary particles constituting the universe should be produced gravitationally due to the variation of the scale factor $`a(t)`$ with time. This was one of the basic assumptions of all papers on NO models . Not all species can be produced this way, but only those which are not conformally invariant. Indeed, the metric of the Friedmann universe is conformally flat. If one considers, for example, massless scalar particles $`\chi `$ with an additional term $`\frac{1}{12}\chi ^2R`$ in the Lagrangian (conformal coupling), one can make conformal transformations of $`\chi `$ simultaneously with transformations of the metric and find that the theory of $`\chi `$ particles in the Friedmann universe is equivalent to their theory in flat space. That is why such particles would not be created in an expanding universe. Since conformal coupling is a rather special requirement, one expects a number of different species to be produced. An apparent advantage of gravitational particle production is its universality . 
There is a kind of “democracy” rule for all particles non-conformally coupled to gravity: the density of such particles produced at the end of inflation is $`\rho _X\alpha _XH^4`$, where $`\alpha _X10^2`$ is a numerical factor specific for different species and $`H`$ is the Hubble parameter at the end of inflation. Unfortunately, democracy does not always work; there may be too many dangerous relics produced by this universal mechanism. One of the potential problems is related to the overproduction of gravitons mentioned in . In order to solve it one needs to have models with a very large number of types of light particles. This is difficult but not impossible . However, even more difficult problems will arise if NO models are implemented in supersymmetric theories of elementary particles. For example, in supersymmetric theories one may encounter many flat directions of the effective potential associated with moduli fields. These fields usually are very stable. Moduli particles decay very late, so in order to avoid cosmological problems the energy density of the moduli fields must be many orders of magnitude smaller than the energy density of other particles . Moduli fields typically are not conformally invariant. There are several different effects which add up to give them masses $`CH`$ during expansion of the universe, with $`C=O(1)`$ (in general, $`C`$ is not a constant) . This is very similar to what happens if, for example, one adds a term $`\frac{\xi }{2}R\varphi ^2`$ to the lagrangian of a scalar field. Indeed, during inflation $`R=12H^2`$, so this term leads to the appearance of a contribution to the mass of the scalar field $`\mathrm{\Delta }m^2=12\xi H^2`$. Conformal coupling would correspond to $`m^2=2H^2`$. According to , the energy density of scalar particles produced gravitationally at the end of inflation is given by $`10^2H^4(16\xi )^2`$. Thus, unless the constant $`C`$ is fine-tuned to mimic conformal coupling, we expect that in addition to the energy of classical oscillating moduli fields, at the end of inflation one has gravitational production of moduli particles with energy density $`10^2H^4`$, just as for all other conformally noninvariant particles. In usual inflationary models one also encounters the moduli problem if the energy of classical oscillating moduli fields is too large . Here we are discussing an independent problem which appears even if there are no classical oscillating moduli. Indeed, in NO models all particles created by gravitational effects at the end of inflation will have similar energy density $`10^2H^4`$. But if the energy density of moduli fields is not extremely strongly suppressed as compared with the energy density of other particles, then such models will be ruled out . A similar problem appears if one considers the possibility of gravitational (nonthermal) production of gravitinos. Usually it is assumed that gravitinos have mass $`m_{3/2}=10^210^3`$ GeV, which is much smaller than the typical value of the Hubble constant at the end of inflation. Therefore naively one could expect that gravitinos, just like massless fermions of spin $`1/2`$, are (almost exactly) conformally invariant and should not be produced due to expansion of the Friedmann universe. However, in the framework of supergravity, the background metric is generated by inflaton field(s) $`\varphi _j`$ with an effective potential constructed from the superpotential $`W(\varphi _j)`$. 
The gravitino mass in the early universe acquires a contribution proportional to $`W(\varphi _j)`$. Depending on the model, the gravitino mass soon after the end of inflation may be of the same order as $`H`$ or somewhat smaller, but typically it is much greater than its present value $`m_{3/2}`$. A general investigation of the behavior of gravitinos in the Friedmann universe shows that the gravitino field in a self-consistent Friedmann background supported by scalar fields is not conformally invariant . For example, the effective potential $`\lambda \varphi ^4`$ can be obtained from the superpotential $`\sqrt{\lambda }\varphi ^3`$ in the global supersymmetry limit. This leads to a gravitino mass $`\sqrt{\lambda }\varphi ^3/M_p^2`$. At the end of inflation $`\varphi M_p`$, and therefore the gravitino mass is comparable to the Hubble constant $`H\sqrt{\lambda }\varphi ^2/M_p`$. This implies strong breaking of conformal invariance. The theory of gravitational production of gravitinos is strongly model-dependent, and in some models it might be possible to achieve a certain suppression of their production as compared to the production of other particles. The problem is that, just like in the situation with the moduli fields, this suppression must be extraordinary strong. Indeed, to avoid cosmological problems one should suppress the number of gravitinos as compared to the number of other particles by a factor of about $`10^{15}`$ . We will present a more detailed discussion of the cosmological production of gravitinos and moduli in a separate publication . The gravitino/moduli problem and the problem of isocurvature perturbations are interrelated in a rather nontrivial way. Indeed, the gravitino and moduli problems are especially severe if the density of gravitinos and/or moduli particles produced during reheating is of the same order of magnitude as the energy density of scalar fields $`\chi `$. We assumed, according to , that the energy density of the fields $`\chi `$ after inflation is $`O(10^2H^4)`$. But this statement is not always correct. It was derived in under an assumption that particle production occurs during a short time interval when the equation of state changes. Meanwhile in inflationary cosmology the long-wavelength fluctuations of the field $`\chi `$ minimally coupled to gravity are produced during inflation all the time when the Hubble constant $`H(t)`$ is smaller than the mass of the $`\chi `$ particles $`m_\chi `$. The energy density of $`\chi `$ particles produced during inflation will contain a contribution $`\rho _0=\frac{m_\chi ^2}{2}\chi ^2`$, which may be many orders of magnitude greater than $`10^2H^4`$. For the sake of argument, one may consider inflation in the theory $`\frac{\lambda }{4}\varphi ^4`$ and take $`m_\chi `$ equal to the value of $`H`$ at the end of inflation, $`m_\chi \sqrt{\lambda }_\varphi M_p`$. Then, according to Eq. (7), after inflation one has $`\rho _0\frac{\lambda _\varphi m_\chi ^2\varphi _0^6}{36M_p^4}10^2H^4\left(\frac{\varphi _0}{M_p}\right)^610^2H^4`$, because $`\varphi _0M_p`$. This is the same effect which we discussed in Section II: If $`\varphi _0`$ is large enough, we may even have a second stage of inflation driven by the large energy density of the fluctuations of the field $`\chi `$. But even if $`\varphi _0`$ is not large enough to initiate the second stage of inflation, it still must be much greater than $`M_p`$ to drive the first stage of inflation, which makes the standard estimate $`\rho 10^2H^4`$ incorrect . 
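Both estimates entering this comparison are simple enough to check explicitly. The sketch below (Planck units, $`\lambda _\varphi =10^{-13}`$ and an illustrative $`\varphi _0`$) first verifies Eq. (7) by integrating Eq. (4) along the slow-roll solution for $`\varphi `$ (written here with an explicit minus sign in the exponent, since the field decreases and the integral must converge), and then compares $`\rho _0=\frac{m_\chi ^2}{2}\chi ^2`$ with the standard estimate $`10^{-2}H^4`$.

```python
import numpy as np

lam = 1e-13                  # lambda_phi
Mp = 1.0
phi0 = 5.0 * Mp              # illustrative amplitude at the start of the last inflationary stage

# --- check of Eq. (7): integrate d(chi^2)/dt = H^3 / 4 pi^2 along the
#     decaying slow-roll solution phi(t) = phi0 exp(-sqrt(lam/6 pi) Mp t)
rate = np.sqrt(lam / (6 * np.pi)) * Mp
t = np.linspace(0.0, 30.0 / rate, 300001)
H = np.sqrt(2 * np.pi * lam / 3) * (phi0 * np.exp(-rate * t)) ** 2 / Mp
integrand = H ** 3 / (4 * np.pi ** 2)
chi2_num = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(t))
chi2_eq7 = lam * phi0 ** 6 / (18 * Mp ** 4)
print(chi2_num, chi2_eq7)                         # the two agree

# --- rho_0 for m_chi ~ H at the end of inflation vs. the 1e-2 H^4 estimate
m_chi = np.sqrt(lam) * Mp                          # m_chi taken equal to H at the end of inflation
H_end = np.sqrt(lam) * Mp                          # order-of-magnitude value
rho_0 = 0.5 * m_chi ** 2 * chi2_eq7
rho_grav = 1e-2 * H_end ** 4
print(rho_0 / rho_grav, (phi0 / Mp) ** 6)          # enhancement of order (phi0/Mp)^6
```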
There is one more effect which should be considered, in addition to gravitational particle production. The effective mass of the particles $`\varphi `$ at $`\varphi <0`$ is given by $`\sqrt{3\lambda }\varphi `$. At the end of inflation, at $`\varphi M_p`$, this mass is of the same order as the Hubble constant $`\sqrt{\lambda }\varphi ^2/M_p`$. Then, within the Hubble time $`H^1`$ the field $`\varphi `$ rolls to the valley at $`\varphi >0`$ and its mass vanishes. This is a non-adiabatic process; the mass of the scalar field changes by $`O(H)`$ during the time $`O(H^1)`$. As a result, in addition to gravitational particle production there is an equally strong production of particles $`\varphi `$ due to the nonadiabatic change of their mass . This may imply that the fraction of energy in gravitinos will be much smaller than previously expected, simply because the fraction of energy in the fluctuations of the field $`\chi `$ will be much larger. But there is no free lunch. For example, the production of large number of nearly massless particles $`\varphi `$ may lead to problems with nucleosynthesis. Large inflationary fluctuations of the field $`\chi `$ can create large isocurvature fluctuations. In the end of Section II we mentioned that one can avoid this problem if one assumes, for example, that the fields $`\chi `$ acquire effective mass $`O(H)`$ in an expanding universe. Then their fluctuations will not be produced during inflation. But in such a case their density after inflation will be given by $`10^2H^4`$, and therefore we do not have any relaxation of the gravitino and the moduli problems. ## V Saving NO models: Instant preheating As we will see, the problems discussed above will not appear in theories of a more general class, where the fields $`\varphi `$ and $`\chi `$ can interact with each other. We will consider a model with the interaction $`\frac{g^2}{2}\varphi ^2\chi ^2`$. First we will show that in this case it really makes sense to study preheating assuming that $`\chi =0`$. Then we will describe the scenario of instant preheating, which allows a very efficient energy transfer from the inflaton field to particles $`\chi `$. ### A Initial conditions for inflation and reheating in the model with interaction $`\frac{g^2}{2}\varphi ^2\chi ^2`$ Consider a theory with an effective potential dominated by the term $`V(\varphi ,\chi )=\frac{g^2}{2}\varphi ^2\chi ^2`$. This means that we will assume that the constant $`g`$ is large enough for us to temporarily neglect the terms $`\frac{\lambda _\varphi }{4}\varphi ^4+\frac{\lambda _\chi }{4}\chi ^4`$ in the discussion of initial conditions. In this case the Planck boundary is given by the condition $$\frac{g^2}{2}\varphi ^2\chi ^2M_p^4,$$ (22) which defines a set of four hyperbolas $$g|\varphi ||\chi |M_p^2.$$ (23) At larger values of $`\varphi `$ and $`\chi `$ the density is greater than the Planck density, so the standard classical description of space-time is impossible there. On the other hand, the effective masses of the fields should be smaller than $`M_p`$, and consequently the curvature of the effective potential cannot be greater than $`M_p^2`$. This leads to two additional conditions: $$|\varphi |g^1M_p,|\chi |g^1M_p.$$ (24) We assume that $`g1`$. Suppose for definiteness that initially the fields $`\varphi `$ and $`\chi `$ belong to the Planck boundary (23) and that $`|\varphi |`$ is half-way towards its upper bound (24): $`|\varphi |g^1M_p/2`$. 
The choice of the coefficient $`1/2`$ here is not essential; we only want to make sure that the field $`\chi `$ initially is of order $`M_p`$, though it can be slightly greater than $`M_p`$. This allows for an extremely short stage of inflation when the field $`\chi `$ rolls down towards $`\chi =0`$. The equations for the two fields are $$\ddot{\varphi }+3H\dot{\varphi }=-g^2\varphi \chi ^2.$$ (25) and $$\ddot{\chi }+3H\dot{\chi }=-g^2\varphi ^2\chi .$$ (26) The curvature of the effective potential in the $`\varphi `$ direction initially is $`g^2\chi ^2\sim g^2M_p^2`$, which is very small compared to the initial value of $`H^2\sim M_p^2`$. Thus the field $`\varphi `$ will move very slowly, so one can neglect the term $`\ddot{\varphi }`$ in Eq. (25): $$3H\dot{\varphi }=-g^2\varphi \chi ^2.$$ (27) If the field $`\varphi `$ changes slowly, then the field $`\chi `$ behaves as in the theory $`\frac{m_\chi ^2}{2}\chi ^2`$ with $`m_\chi \simeq g|\varphi |`$ being slightly smaller than $`M_p`$ and with the initial value of $`\chi `$ being slightly greater than $`M_p`$. This leads to a very short stage of inflation which ends within a few Planck times. After this short stage the field $`\chi `$ rapidly oscillates. During this stage the energy density of the oscillating field drops down as $`a^{-3}`$, the universe expands as $`a\propto t^{2/3}`$, and $`H=\frac{2}{3t}`$. Thus the square of the amplitude of the oscillations of the field $`\chi `$ decreases as follows: $`\chi ^2\simeq \chi _0^2a^{-3}\propto t^{-2}`$. This leads to the following equation for the field $`\varphi `$: $$\frac{\dot{\varphi }}{\varphi }\simeq -\frac{g^2}{t}.$$ (28) The solution of this equation is $`\varphi =\varphi _0\left(\frac{t}{t_0}\right)^{-g^2}`$ with $`t_0\sim M_p^{-1}`$, and $`\varphi _0\sim M_p/g`$. (The condition $`t_0\sim M_p^{-1}`$ follows from the fact that the initial value of $`H=\frac{2}{3t}`$ is not much below $`M_p`$.) This gives $$\varphi \simeq \frac{M_p}{g}\left(M_pt\right)^{-g^2}.$$ (29) The inflaton field $`\varphi `$ becomes equal to $`M_p`$ after the exponentially large time $$t\sim M_p^{-1}\left(\frac{1}{g}\right)^{g^{-2}}.$$ (30) During this time the energy of oscillations of the field $`\chi `$ becomes exponentially small, and the small term $`\frac{\lambda _\varphi }{4}\varphi ^4`$ which we neglected until now becomes the leading term driving the scalar field $`\varphi `$. At this stage we will have the usual chaotic inflation scenario with $`|\varphi |>M_p`$ and with the fields evolving along the direction $`\chi =0`$. Thus in the presence of the interaction term $`\frac{g^2}{2}\varphi ^2\chi ^2`$ one can indeed consider inflation and reheating with $`\chi =0`$. As we have seen, this possibility was rather problematic in the models where $`\varphi `$ and $`\chi `$ interacted only gravitationally. The effective mass of the field $`\chi `$ during inflation is $`g|\varphi |`$, which is much greater than the Hubble constant $`\frac{\sqrt{\lambda }\varphi ^2}{M_p}`$ for $`\frac{g^2}{\lambda }\gg \frac{\varphi ^2}{M_p^2}`$. In realistic versions of this model one has $`\lambda \sim 10^{-13}`$, and $`g^2\gg \lambda `$. Therefore long-wavelength fluctuations of the field $`\chi `$ are not produced during the last stages of inflation, when $`\varphi \sim M_p`$. A similar conclusion is valid if at the last stages of inflation the effective potential of the field $`\varphi `$ is quadratic, $`V(\varphi )=\frac{m^2}{2}\varphi ^2`$. In this case $`H\simeq \frac{m\varphi }{M_p}`$, and inflationary fluctuations of the field $`\chi `$ are not produced for $`g\gtrsim \frac{m}{M_p}`$.
In realistic versions of this model one has $`m10^6M_p`$ , and fluctuations $`\delta \chi `$ are not produced if $`g^210^{12}`$. This means that the problem of isocurvature fluctuations does not appear. ### B Instant preheating in NO models To explain the main idea of the instant preheating scenario in NO models, we will assume for simplicity that $`V(\varphi )=\frac{m^2}{2}\varphi ^2`$ for $`\varphi <0`$, and that $`V(\varphi )`$ vanishes for $`\varphi >0`$. We will discuss a more general situation later. We will assume that the effective potential contains the interaction term $`\frac{g^2}{2}\varphi ^2\chi ^2`$, and that $`\chi `$ particles have the usual Yukawa interaction $`h\overline{\psi }\psi \chi `$ with fermions $`\psi `$. For simplicity, we will assume here that $`\chi `$ particles do not have any bare mass, so that their effective mass is equal to $`g|\varphi |`$. In this model inflation ends when the field $`\varphi `$ rolls from large negative values down to $`\varphi 0.3M_p`$ . Production of particles $`\chi `$ begins when the effective mass of the field $`\chi `$ starts to change nonadiabatically, $`|\dot{m}_\chi |m_\chi ^2`$, i.e. when $`g|\dot{\varphi }|`$ becomes greater than $`g^2\varphi ^2`$. This happens only when the field $`\varphi `$ rolls close to $`\varphi =0`$, and the velocity of the field is $`|\dot{\varphi }_0|mM_p/1010^7M_p`$ . (In the theory $`\frac{\lambda }{4}\varphi ^4`$ with $`\lambda =10^{13}`$ one has a somewhat smaller value $`|\dot{\varphi }_0|6\times 10^9M_p^2`$.) The process becomes nonadiabatic for $`g^2\varphi ^2g|\dot{\varphi }_0|`$, i.e. for $`\varphi _{}\varphi \varphi _{}`$, where $`\varphi _{}\sqrt{\frac{|\dot{\varphi }_0|}{g}}`$ . Note that for $`g10^5`$ the interval $`\varphi _{}\varphi \varphi _{}`$ is very narrow: $`\varphi _{}M_p/10`$. As a result, the process of particle production occurs nearly instantaneously, within the time $$\mathrm{\Delta }t_{}\frac{\varphi _{}}{|\dot{\varphi }_0|}(g|\dot{\varphi }_0|)^{1/2}.$$ (31) This time interval is much smaller than the age of the universe, so all effects related to the expansion of the universe can be neglected during the process of particle production. The uncertainty principle implies in this case that the created particles will have typical momenta $`k(\mathrm{\Delta }t_{})^1(g|\dot{\varphi }_0|)^{1/2}`$. The occupation number $`n_k`$ of $`\chi `$ particles with momentum $`k`$ is equal to zero all the time when it moves toward $`\varphi =0`$. When it reaches $`\varphi =0`$ (or, more exactly, after it moves through the small region $`\varphi _{}\varphi \varphi _{}`$) the occupation number suddenly (within the time $`\mathrm{\Delta }t_{}`$) acquires the value $$n_k=\mathrm{exp}\left(\frac{\pi k^2}{g|\dot{\varphi }_0|}\right),$$ (32) and this value does not change until the field $`\varphi `$ rolls to the point $`\varphi =0`$ again. A detailed description of this process including the derivation of Eq. (32) was given in the second paper of Ref. ; see in particular Eq. (55) there. This equation (32) can be written in a more general form. 
For example, if the particles $`\chi `$ have bare mass $`m_\chi `$, this equation can be written as follows : $$n_k=\mathrm{exp}\left(\frac{\pi (k^2+m_\chi ^2)}{g|\dot{\varphi }_0|}\right).$$ (33) This can be integrated to give the density of $`\chi `$ particles $$n_\chi =\frac{1}{2\pi ^2}\underset{0}{\overset{\mathrm{}}{}}𝑑kk^2n_k=\frac{(g\dot{\varphi }_0)^{3/2}}{8\pi ^3}\mathrm{exp}\left(\frac{\pi m_\chi ^2}{g|\dot{\varphi }_0|}\right).$$ (34) As we already mentioned, in the theory $`\frac{m^2}{2}\varphi ^2`$ with $`m=10^6M_p`$ one has $`|\dot{\varphi }_0|=10^7M_p^2`$ . This implies, in particular, that if one takes $`g1`$, then in the theory $`\frac{m^2}{2}\varphi ^2`$ there is no exponential suppression of production of $`\chi `$ particles unless their mass is greater than $`m_\chi 2\times 10^{15}`$ GeV. This agrees with a similar conclusion obtained in . Let us now concentrate on the case $`m_\chi ^2g|\dot{\varphi }_0|`$, when the number of produced particles is not exponentially suppressed. In this case the number density of particles at the moment of their creation is given by $`\frac{(g\dot{\varphi }_0)^{3/2}}{8\pi ^3}`$, but then it decreases as $`a^3(t)`$: $$n_\chi =\frac{(g\dot{\varphi }_0)^{3/2}}{8\pi ^3a^3(t)}.$$ (35) Here we take $`a_0=1`$ at the moment of particle production. Particle production occurs only in a small vicinity of $`\varphi =0`$. Then the field $`\varphi `$ continues rolling along the flat direction of the effective potential with $`\varphi >0`$, and the mass of each $`\chi `$ particle grows as $`g\varphi `$. Therefore the energy density of produced particles is $$\rho _\chi =\frac{(g\dot{\varphi }_0)^{3/2}}{8\pi ^3}\frac{g\varphi (t)}{a^3(t)}.$$ (36) The energy density of the field $`\varphi `$ drops down much faster, as $`a^6(t)`$. The reason is that if one neglects backreaction of produced particles, the energy density of the field $`\varphi `$ at this stage is entirely concentrated in its kinetic energy density $`\frac{1}{2}\dot{\varphi }^2`$, which corresponds to the equation of state $`p=\rho `$. We will study this issue now in a more detailed way. The equation of motion for the inflaton field after particle production looks as follows: $$\ddot{\varphi }+3H\dot{\varphi }=g^2\varphi \chi ^2$$ (37) We will assume for simplicity that the field $`\chi `$ does not have bare mass, i.e. $`m_\chi =g\varphi `$. As soon as the field $`\varphi `$ becomes greater than $`\varphi ^{}`$ (and this happens practically instantly, when particle production ends), the particles $`\chi `$ become nonrelativistic. In this case $`\chi ^2`$ can be easily related to $`n_\chi `$: $$\chi ^2\frac{1}{2\pi ^2}\frac{n_kk^2dk}{\sqrt{k^2+g^2\varphi ^2}}\frac{n_\chi }{g\varphi }\frac{(g\dot{\varphi }_0)^{3/2}}{8\pi ^3g\varphi a^3(t)}.$$ (38) Therefore the equation for the field $`\varphi `$ reads $$\ddot{\varphi }+3H\dot{\varphi }=gn_\chi =g\frac{(g\dot{\varphi }_0)^{3/2}}{8\pi ^3a^3(t)}.$$ (39) To analyze the solutions of this equation, we will first neglect backreaction. In this case one has $`at^{1/3}`$, $`H=\frac{1}{3t}`$, and $$\varphi =\frac{M_p}{2\sqrt{3\pi }}\mathrm{log}\frac{t}{t_0},$$ (40) where $`t_0=\frac{1}{3H_0}=\frac{M_p}{2\sqrt{3\pi }\dot{\varphi }_0}\frac{5}{\sqrt{3\pi }m}`$. One can easily check that this regime remains intact and backreaction is unimportant for $`t<t_1\frac{8\pi ^3}{\sqrt{g^5\dot{\varphi }_0}}`$, until the field $`\varphi `$ grows up to $$\varphi _1\frac{5M_p}{4\sqrt{3\pi }}\mathrm{log}\frac{1}{g}.$$ (41) This equation is valid for $`g1`$. 
For example, for $`g=10^3`$ one has $`\varphi _13M_p`$. For $`g=10^1`$ one has $`\varphi _1M_p`$. Note that the terms in the left hand side of the Eq. (39) decrease as $`t^2`$ when the time grows, whereas the backreaction term goes as $`t^1`$. As soon as the backreaction becomes important, i.e. as soon as the field $`\varphi `$ reaches $`\varphi _1`$, it turns back, and returns to $`\varphi =0`$. When it reaches $`\varphi =0`$, the effective potential becomes large, so the field $`\varphi `$ cannot become negative, and it bounces towards large $`\varphi `$ again. Now let us take into account interaction of the $`\chi `$ field with fermions. This interaction leads to decay of the $`\chi `$ particles with the decay rate $$\mathrm{\Gamma }(\chi \psi \psi )=\frac{h^2m_\chi }{8\pi }=\frac{h^2g|\varphi |}{8\pi }.$$ (42) Note that the decay rate grows with the growth of the field $`|\varphi |`$, so particles tend to decay at large $`\varphi `$. In our case the field $`\varphi `$ spends most of the time prior to $`t_1`$ at $`\varphi M_p`$ (if it does not decay earlier, see below). The decay rate at that time is $$\mathrm{\Gamma }(\chi \psi \psi )\frac{h^2gM_p}{8\pi }.$$ (43) If $`\mathrm{\Gamma }_\chi ^1<t_1\frac{8\pi ^3}{g^{5/2}\dot{\varphi }_0}`$, then particles $`\chi `$ will decay to fermions $`\psi `$ at $`t<t_1`$ and the force driving the field $`\varphi `$ back to $`\varphi =0`$ will disappear before the field $`\varphi `$ turns back. In this case the field $`\varphi `$ will continue to grow, and its energy density will continue decreasing anomalously fast, as $`a^6`$. This happens if $$\frac{h^2gM_p}{8\pi }\frac{g^{5/2}\dot{\varphi }_0^{1/2}}{8\pi ^3}.$$ (44) Taking into account that in our case $`\dot{\varphi }_0\frac{mM_p}{10}`$ and $`m10^6M_p`$, one finds that this condition is satisfied if $`h5\times 10^3g^{3/4}`$. This is a very mild condition. For example, it is satisfied for $`h>5\times 10^3`$ if $`g=1`$, and for $`h>5\times 10^7`$ if $`g=10^4`$. This scenario is always 100% efficient. The initial fraction of energy transferred to matter at the moment of $`\chi `$ particle production is not very large, about $`10^2g^2`$ of the energy of the inflaton field . However, because of the subsequent growth of this energy due to the growth of the field $`\varphi `$, and because of the rapid decrease of kinetic energy of the inflaton field, the energy density of the $`\chi `$ particles and of the products of their decay soon becomes dominant. This should be contrasted with the usual situation in the theories where $`V(\varphi )`$ has a minimum. As was emphasized in , efficient preheating is possible only in a subclass of such models. In many models where $`V(\varphi )`$ has a minimum the decay of the inflaton field is incomplete, and it accumulates an unacceptably large energy density compared with the energy density of the thermalized component of matter. The possibility of having a very efficient reheating in NO models may have significant consequences for inflationary model building. It is instructive to compare the density of particles produced by this mechanism to the density of particles created during gravitational particle production, which is given by $`\rho _\chi 10^2H^4\rho _\varphi \frac{\rho _\varphi }{M_p^4}`$, where $`\rho _\varphi `$ is the energy density of the field $`\varphi `$ at the end of inflation. 
In the model $`\frac{\lambda _\varphi }{4}\varphi ^4`$ one has $`\rho _\varphi 10^{16}M_p^4`$, and, consequently, $`\rho _\chi \rho _\varphi \frac{\rho _\varphi }{M_p^4}10^{16}\rho _\varphi `$. Meanwhile, as we just mentioned, at the first moment after particle production in our scenario the energy density of produced particles is of the order of $`10^2g^2\rho _\varphi `$ , and then it grows together with the field $`\varphi `$ because of the growth of the mass $`g\varphi `$ of each $`\chi `$ particle. Thus, for $`g^210^{14}`$ the number of particles produced during instant preheating is much greater than the number of particles produced by gravitational effects. Therefore one may argue that reheating of the universe in NO models should be described using the instant preheating scenario. Typically it is much more efficient than gravitational particle production. This means, in particular, that production of normal particles will be much more efficient than the production of gravitinos and moduli. In order to avoid the gravitino problem altogether one may consider versions of NO models where the particles produced during preheating remain nonrelativistic for a while. Then the energy density of gravitinos during this epoch decreases much faster than the energy density of usual particles. New gravitinos will not be produced if the resulting temperature of reheating is sufficiently small. ## VI Other versions of NO models The mechanism of particle production described above can work in a broad class of theories. In particular, since parametric amplification of particle production is not important in the context of the instant preheating scenario, it will work equally well if the inflaton field couples not to bosons $`\chi `$ but to fermions . Indeed, the creation of fermions with mass $`g|\varphi |`$ also occurs because of the nonadiabaticity of the change of their mass at $`\varphi =0`$. The theory of this effect is very similar to the theory of the creation of $`\chi `$ particles described above; see in this respect . Returning to our scenario, production of particles $`\chi `$ depends on the interactions between the fields $`\varphi `$ and $`\chi `$. For example, one can consider models with the interaction $`\frac{g^2}{2}\chi ^2(\varphi +v)^2`$. Such interaction terms appear, for example, in supersymmetric models with superpotentials of the type $`W=g\chi ^2(\varphi +v)`$ . In such models the mass $`m_\chi `$ vanishes not at $`\varphi _1=0`$, but at $`\varphi _1=v`$, where $`v`$ can take any value. Correspondingly, the production of $`\chi `$ particles occurs not at $`\varphi =0`$ but at $`\varphi =v`$. When the inflaton field reaches $`\varphi =0`$, one has $`m_\chi gv`$, which may be very large. If one takes $`vM_p`$, one can get $`m_\chi gM_p`$, which may be as great as $`10^{18}`$ GeV for $`g10^1`$, or even $`10^{19}`$ GeV for $`g1`$. If one takes $`vM_p`$, the density of $`\chi `$ particles produced by this mechanism will be exponentially suppressed by the subsequent stage of inflation. In the previous section we considered the simplest model where $`V(\varphi )=0`$ for $`\varphi >0`$. However, in general $`V(\varphi )`$ may becomes flat not at $`\varphi =0`$, but only asymptotically, at $`\varphi M_p`$. Such theories have become rather popular now in relation to the theory of quintessence; for a partial list of references see e.g. . In such a case the backreaction of created particles may never turn the scalar field $`\varphi `$ back to $`\varphi =0`$. 
Therefore the decay of the particles $`\chi `$ may occur very late, and one can have very efficient preheating for any values of the coupling constants $`g`$ and $`h`$. On the other hand, if the $`\chi `$ particles are stable, and if the field $`\varphi `$ continues rolling for a very long time, one may encounter a rather unusual regime. If the particle masses $`g|\varphi |`$ at some moment approach $`M_p`$, the $`\chi `$ particles may convert to black holes and immediately evaporate. Indeed, in conventional quantum field theory, an elementary particle of mass $`M`$ has a Compton wavelength $`M^1`$ smaller than its Schwarzschild radius $`2M/M_p^2`$ if $`MM_p`$. Therefore one may expect that as soon as $`m_\chi =g|\varphi |`$ becomes greater than $`M_p`$, each $`\chi `$ particle becomes a Planck-size black hole, which immediately evaporates and reheats the universe. If this regime is possible, it should be avoided. Indeed, black holes of Planck mass may produce similar amounts of all kinds of particles, including gravitinos. Therefore if reheating occurs because of black hole evaporation, then we will return to the gravitino problem again. Thus, the best possibility is to consider those versions of the instant preheating scenario which do not lead to the creation of stable particles of Planckian mass. It may seem paradoxical that one needs to be careful about this constraint. Several years ago it would have seemed impossible to produce particles of mass greater than $`5\times 10^{12}`$ GeV during the decay of an inflaton field of mass $`m_\varphi 10^{13}`$ GeV. Here we consider a nonperturbative mechanism of preheating which may produce particles 5 orders of magnitude heavier than $`m_\varphi `$. It is interesting that the mechanism of instant preheating discussed in this paper works especially well in the context of NO models where all other mechanisms are rather inefficient. ## Acknowledgments It is a pleasure to thank R. Kallosh, P.J.E. Peebles, A. Van Proeyen, A. Riotto, A. Starobinsky, I. Tkachev, and A. Vilenkin for useful discussions. This work was supported by CIAR and by NSF grant AST95-29-225. The work of G.F. and A.L. was also supported by NSF grant PHY-9870115. We are grateful to Nick Pritzker and to the organizers of the Pritzker Symposium in Chicago where some of the results reported in this paper were obtained.
no-problem/9903/astro-ph9903077.html
ar5iv
text
# EVOLUTION OF INTERSTELLAR CLOUDS IN LOCAL GROUP DWARF SPHEROIDAL GALAXIES IN THE CONTEXT OF THEIR STAR FORMATION HISTORIES ## 1 INTRODUCTION Recent observations have been revealing the properties of the Local Group dwarf spheroidal galaxies (dSphs). The dSphs have luminosities of order $`10^{5\text{}7}L_{}`$ and are characterized by their low surface brightnesses (see Gallagher & Wyse 1994 for review). The dSphs contain such small amounts of gas that they show no evidence of present star formation.<sup>1</sup><sup>1</sup>1In fact, a dwarf galaxy with present star formation is not generally called dSph. Saito (1979) showed that instantaneous gas ejection from supernovae (SNe) can make the gas in proto-dSphs escape in their initial burst of star formation (see also Larson 1974). This so-called SN feedback mechanism nicely accounts for the observed scaling relations among mass, luminosity, and metallicity of each dSph (Dekel & Silk 1986; see also Hirashita, Takeuchi, & Tamura 1998). The stellar population analyses of the Local Group dSphs show that their star formation histories are full of variety (Mateo et al. 1998; Aparicio 1999; Grebel 1999). Some dSphs has prominent intermediate-age ($`3`$–10 Gyr ago) stellar populations, and others have only small numbers of such populations. A dSph located close to the Galaxy tends to have poor intermediate-age stars. This may indicate that the Galaxy has affected their star formation histories through ultraviolet (UV) radiation or the Galactic wind (van den Bergh 1994). In this paper, we aim at understanding of the evolution of interstellar medium in the Local Group dSphs in the context of their star formation histories. Though it is generally difficult to infer the physical properties of interstellar gas of the dSphs in their star formation epochs, star formation histories derived from the stellar color-magnitude diagrams (e.g., Gallagher & Wyse 1994) help us to obtain some information on the physical quantities of the gas. A merit of using the Local Group dSphs is that they are so close to the Galaxy that their star formation histories are directly inferred from their stellar populations. This work can be applied to dwarf irregular galaxies, elliptical galaxies or distant galaxies in the future. This paper is organized as follows. First of all, in the next section, we consider the evolution of interstellar clouds in initial star formation epoch. Then, in §3, we apply the cloud-cloud collision model to star formation in the intermediate ages of dSphs. Finally, the last section is devoted to discussions. ## 2 SURVIVAL OF CLOUDS IN PROTO-DWARF GALAXIES The collapse of gas in a proto-dSph induces the initial burst of star formation. This initial burst occurs in the dynamical timescale determined by the dark matter potential ($`10^7`$ yr). During the burst, the hot gas (temperature of $`T10^6`$ K) originating mainly from SNe (McKee & Ostriker 1977) contributes to heating of interstellar gas through thermal conduction (e.g., Draine & Giuliani 1984). In this section, we examine the effect of the thermal conduction. ### 2.1 Evaporation Timescale of Clouds It is widely accepted that interstellar medium is a cloudy fluid (e.g., Elmegreen 1991). Interstellar medium is multiphase gas with various temperature and number density (McKee & Ostriker 1977). Here, we simply consider two-phase interstellar gas composed of hot ($`T10^6`$ K) diffuse gas and cool ($`T<10^4`$ K) clouds. 
For the evolution of interstellar medium in the context of multiphase interstellar medium, see e.g., Fujita, Fukumoto, & Okoshi (1996). The hot gas originating from the successive SNe can heat cool interstellar clouds. The cool gas may finally evaporate. The evaporating gas as well as the hot gas escapes freely out of the proto-dwarf galaxy, since the thermal energy at $`T10^6`$ K is much larger than the gravitational potential. The timescale for the gas to escape out of the galaxy is estimated by crossing time $`t_{\mathrm{cross}}`$ defined by $`t_{\mathrm{cross}}{\displaystyle \frac{R_{\mathrm{dSph}}}{c_\mathrm{s}}},`$ (1) where $`c_\mathrm{s}`$ is the sound speed of the hot gas (typically 100 km s<sup>-1</sup>) and $`R_{\mathrm{dSph}}`$ is the typical size of the proto-dSph. Thus, the crossing time is estimated by $`t_{\mathrm{cross}}1.0\times 10^7\left({\displaystyle \frac{R_{\mathrm{dSph}}}{1\mathrm{kpc}}}\right)\left({\displaystyle \frac{c_\mathrm{s}}{100\mathrm{km}\mathrm{s}^1}}\right)^1\mathrm{yr}.`$ (2) Hirashita, Takeuchi, & Tamura (1998) also estimated the crossing time. The estimation from the view point of thermal energy also provides the timescale of $`10^{6\text{}7}`$ yr for the escape of hot gas from dwarf galaxies (Yoshii & Arimoto 1987; Nath & Chiba 1995). Now we discuss the process of evaporation. After treating the conservation of mass and energy, Cowie & McKee (1977) derived the typical mass loss rate ($`\dot{m}`$) of a cool cloud embedded in the hot medium as $`\dot{m}={\displaystyle \frac{16\pi \mu m_\mathrm{H}\kappa _\mathrm{h}R_\mathrm{c}}{25k_\mathrm{B}}},`$ (3) where $`\mu `$ is the mean weight of a particle normalized by the mass of a hydrogen atom ($`m_\mathrm{H}`$), $`R_\mathrm{c}`$ is the radius of the cloud \[a spherical cool ($`TT_\mathrm{h}`$, where $`T_\mathrm{h}`$ is the temperature of the hot gas) cloud is assumed\], $`k_\mathrm{B}`$ is the Boltzmann constant, and $`\kappa _\mathrm{h}`$ is the thermal conductivity estimated at the temperature of $`T_\mathrm{h}`$ and the electron number density of $`n_\mathrm{h}`$ (the electron number density of the hot gas). The thermal conductivity is expressed as $`\kappa _\mathrm{h}=1.8\times 10^5{\displaystyle \frac{T_\mathrm{h}^{5/2}}{\mathrm{ln}\mathrm{\Lambda }}}\mathrm{erg}\mathrm{s}^1\mathrm{deg}^1\mathrm{cm}^1,`$ (4) where $`\mathrm{ln}\mathrm{\Lambda }`$ is the Coulomb logarithm which is a function of electron number density and electron temperature (Spitzer 1956; Cowie & McKee 1977). Using $`\dot{m}`$, the timescale for the evaporation, $`t_{\mathrm{evap}}`$, of the cloud is estimated as $`t_{\mathrm{evap}}={\displaystyle \frac{m_\mathrm{c}}{\dot{m}}}=1.4\times 10^7\left({\displaystyle \frac{\overline{n}_\mathrm{c}}{1\mathrm{cm}^3}}\right)\left({\displaystyle \frac{R_\mathrm{c}}{10\mathrm{pc}}}\right)^2\left({\displaystyle \frac{T_\mathrm{h}}{10^6\mathrm{K}}}\right)^{5/2}\left({\displaystyle \frac{\mathrm{ln}\mathrm{\Lambda }}{30}}\right)\mathrm{yr},`$ (5) where $`\overline{n}_\mathrm{c}`$ is the mean number density of gas in the cloud and $`m_\mathrm{c}`$ is the mass of the cloud estimated by $`m_\mathrm{c}={\displaystyle \frac{4\pi }{3}}R_\mathrm{c}^3\mu m_\mathrm{H}\overline{n}_\mathrm{c}.`$ (6) According to Larson (1974), hot gas fills more than half of the dwarf galaxy in a short timescale ($`<10^6`$ yr) if successive and multiple SNe are considered. Thus, our picture of continuous evaporation during the crossing time is justified. 
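As a quick numerical cross-check of Eqs. (1)–(6), the sketch below evaluates the crossing and evaporation timescales in cgs units for the fiducial values quoted above. The mean molecular weight $`\mu \simeq 0.6`$ is an assumption of the sketch (the text leaves $`\mu `$ unspecified).

```python
# Numerical cross-check of the crossing and evaporation timescales, Eqs. (1)-(6).
import math

pc  = 3.086e18          # cm
yr  = 3.156e7           # s
m_H = 1.673e-24         # g, hydrogen mass
k_B = 1.381e-16         # erg/K
mu  = 0.6               # assumed mean weight per particle (not given in the text)

# crossing time, Eqs. (1)-(2)
R_dSph = 1e3 * pc       # 1 kpc
c_s    = 1e7            # 100 km/s in cm/s
t_cross = R_dSph / c_s
print(f"t_cross ~ {t_cross / yr:.1e} yr")        # ~1e7 yr

# conductive mass-loss rate and evaporation time, Eqs. (3)-(6)
T_h, lnLambda = 1e6, 30.0
R_c, n_c = 10.0 * pc, 1.0                        # cloud radius and mean density
kappa_h = 1.8e-5 * T_h**2.5 / lnLambda           # conductivity, Eq. (4)
mdot = 16 * math.pi * mu * m_H * kappa_h * R_c / (25 * k_B)   # Eq. (3)
m_c  = (4 * math.pi / 3) * R_c**3 * mu * m_H * n_c            # Eq. (6)
t_evap = m_c / mdot                              # Eq. (5)
print(f"t_evap ~ {t_evap / yr:.1e} yr")          # ~1e7 yr, comparable to t_cross
```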
However, we should note that the timescale is largely dependent on the stellar initial mass function. ### 2.2 Cooling Timescale of the Hot Gas In this subsection, we estimate the cooling timescale $`t_{\mathrm{cool}}`$. The cooling timescale is expressed by $`t_{\mathrm{cool}}={\displaystyle \frac{3k_\mathrm{B}T_\mathrm{h}}{2n_\mathrm{h}\mathrm{\Lambda }_{\mathrm{cool}}(T_\mathrm{h})}},`$ (7) where $`n_\mathrm{h}`$ is the number density of electrons in the hot gas and $`\mathrm{\Lambda }_{\mathrm{cool}}(T_\mathrm{h})`$ is the cooling function as a function of temperature. The cooling function is composed of two contributions as $`\mathrm{\Lambda }_{\mathrm{cool}}(T_\mathrm{h})=\mathrm{\Lambda }_{\mathrm{ff}}(T_\mathrm{h})+\mathrm{\Lambda }_{\mathrm{line}}(T_\mathrm{h}),`$ (8) where $`\mathrm{\Lambda }_{\mathrm{ff}}`$ and $`\mathrm{\Lambda }_{\mathrm{line}}`$ represent the cooling rates through free-free radiation and through metal-line emission, respectively. According to Gaetz & Salpeter (1983), the metal cooling is estimated as $`\mathrm{\Lambda }_{\mathrm{line}}(T_\mathrm{h}=10^6\mathrm{K})\simeq 1.3\times 10^{-22}\zeta \mathrm{erg}\mathrm{s}^{-1}\mathrm{cm}^3,`$ (9) where $`\zeta `$ is the metallicity normalized by the solar system abundance (see also Raymond, Cox, & Smith 1976 for the cooling function). On the other hand, the free-free cooling function is estimated as $`\mathrm{\Lambda }_{\mathrm{ff}}(T_\mathrm{h})\simeq 2\times 10^{-24}\left({\displaystyle \frac{T_\mathrm{h}}{10^6\mathrm{K}}}\right)^{1/2}\mathrm{erg}\mathrm{s}^{-1}\mathrm{cm}^3`$ (10) (Rybicki & Lightman 1979). If we assume $`\zeta \sim 0.01`$, which is a typical value for the stellar metallicity observed in the present dSphs (Aaronson & Mould 1985; Buonanno et al. 1985), $`\mathrm{\Lambda }_{\mathrm{line}}(T_\mathrm{h}=10^6\mathrm{K})\simeq 1.3\times 10^{-24}\mathrm{erg}\mathrm{s}^{-1}\mathrm{cm}^3`$. The resulting cooling timescale becomes $`t_{\mathrm{cool}}\simeq 2\left({\displaystyle \frac{T_\mathrm{h}}{10^6\mathrm{K}}}\right)\left({\displaystyle \frac{n_\mathrm{h}}{10^{-3}\mathrm{cm}^{-3}}}\right)^{-1}\left({\displaystyle \frac{\mathrm{\Lambda }_{\mathrm{cool}}}{3\times 10^{-24}\mathrm{erg}\mathrm{s}^{-1}\mathrm{cm}^3}}\right)^{-1}\mathrm{Gyr}.`$ (11) Hence, the cooling timescale is much longer than the crossing timescale (eq. 2). This means that the effect of cooling of the hot gas can be neglected. The present stellar metallicity of the dSph sample is at most $`\zeta \sim 0.1`$ if we consider the error range. Adopting this upper-limit value, we obtain a shorter cooling time of about $`0.5`$ Gyr. Even in this case, the cooling time is much longer than the crossing timescale.<sup>2</sup><sup>2</sup>2We note that we should consider the metallicity of the gas, not of the stars. Though the lack of gas content in dSphs makes it impossible to know the metallicity of their gas, we imagine, from the present gas content in low-mass dwarf irregular galaxies, that the gas metallicity did not exceed 0.1. We have considered a uniform hot gas distribution. The confinement of the hot gas might be possible. With the confinement, the density of the hot gas can be so high that the cooling timescale becomes shorter than the crossing timescale. However, since the crossing time of the confined region also becomes shorter than that in the previous estimation, it seems difficult to realize a cooling timescale shorter than the crossing time. Once the hot gas crosses a dense region, the hot gas blows away easily. Thus, it is reasonable to assume that the cooling time is longer than the crossing time.
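A back-of-the-envelope evaluation of Eqs. (7)–(11) is sketched below, using the hot-gas electron density $`n_\mathrm{h}=10^{-3}\mathrm{cm}^{-3}`$ adopted in Eq. (11); it is meant only as an illustrative check of the numbers quoted above.

```python
# Illustrative check of the cooling timescale, Eqs. (7)-(11).
k_B = 1.381e-16          # erg/K
Gyr = 3.156e16           # s
T_h = 1e6                # K, hot-gas temperature
n_h = 1e-3               # cm^-3, electron density of the hot gas

def t_cool(zeta):
    lam_line = 1.3e-22 * zeta                    # Eq. (9), erg s^-1 cm^3
    lam_ff   = 2e-24 * (T_h / 1e6) ** 0.5        # Eq. (10)
    lam_cool = lam_ff + lam_line                 # Eq. (8)
    return 3 * k_B * T_h / (2 * n_h * lam_cool)  # Eq. (7)

for zeta in (0.01, 0.1):
    print(f"zeta = {zeta}: t_cool ~ {t_cool(zeta) / Gyr:.1f} Gyr")
# ~2 Gyr and ~0.4 Gyr, i.e. always far longer than t_cross ~ 1e7 yr
```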
We note that if a physically reasonable confining mechanism of the hot gas is found the estimation in this section may need modification. The mixing of the hot gas with the warm gas can make the cooling timescale short. The mixing produces the gas with temperature of $`10^5`$ K (Begelman & Fabian 1990; Slavin, Shull, & Begelman 1993), at which the line cooling rate becomes an order of magnitude larger than that at $`10^6`$ K (Gaetz & Salpeter 1983). Thus, the cooling timescale may be shorter by an order of magnitude than the previous estimate. However, even in this case, the cooling time is larger than the crossing time. ### 2.3 Condition for Survival of a Cloud Here, we estimate the size of a cloud that survives the evaporation. Since the evaporation process is effective in the timescale of $`t_{\mathrm{cross}}`$, the condition for the survival is expressed by $`t_{\mathrm{evap}}>t_{\mathrm{cross}}.`$ (12) From equations (2) and (5), the above condition is written as $`R_\mathrm{c}`$ $`>`$ $`R_{\mathrm{crit}}`$ (13) $``$ $`8\left({\displaystyle \frac{\overline{n}_\mathrm{c}}{1\mathrm{cm}^3}}\right)^{1/2}\left({\displaystyle \frac{T_\mathrm{h}}{10^6\mathrm{K}}}\right)^{5/4}\left({\displaystyle \frac{\mathrm{ln}\mathrm{\Lambda }}{30}}\right)^{1/2}`$ $`\times \left({\displaystyle \frac{R_{\mathrm{dSph}}}{1\mathrm{kpc}}}\right)^{1/2}\left({\displaystyle \frac{c_\mathrm{s}}{100\mathrm{km}\mathrm{s}^1}}\right)^{1/2}\mathrm{pc},`$ where we define the critical radius for the survival, $`R_{\mathrm{crit}}`$. Thus, the cloud larger than $`R_{\mathrm{crit}}10`$ pc can survive the thermal conduction during the initial burst of star formation. The motions of interstellar clouds produce velocity shear between the clouds and ambient hot gas. The shear may lead to the Kelvin-Helmholtz (K-H) instability (Chandrasekhar 1961). Since the growth timescale of the K-H instability is as short as the conduction timescale (Appendix A) for $`R_\mathrm{c}10`$ pc (for the notation in Appendix A, $`R_\mathrm{c}=\lambda `$), the K-H instability as well as the conduction can determine the minimum mass of the clouds. ## 3 CLOUD-CLOUD COLLISION One of the direct test of survival of the cloud in the initial star formation in proto-dSph is to examine star formation histories of the dSphs. Their star formation histories are investigated from the stellar population analyses (Gallagher & Wyse 1994; Mateo 1998). The second star formation produced what is called “intermediate-age stellar populations” (e.g., Gallagher & Wyse 1994). There are evidences of second star formation in each Local Group dSphs. Assuming that the second star formation is due to the surviving clouds within a dSph, we examine the physical properties of the clouds. Before the examination, we should fix the mechanism of the second star formation. We assume that star formation is induced by collisions between clouds. This physically means that the compression of clouds during the collision makes free-fall time and cooling time of the clouds shorter (the physical process is described in Kumai, Basu, & Fujimoto 1993), which leads to formation of dense molecular clouds and finally to star formation. Even if a cloud has formed stars, the shell of a stellar wind or a supernova remnant associating with the cloud induce star formation of another cloud which collides with it (see also Roy & Kunth 1995). The idea that cloud-cloud collisions induce star formation has a long history (e.g., Field & Saslaw 1965). 
We note that another possible mechanism of star formation should be investigated in future works. The cloud-cloud collision timescale $`t_{\mathrm{coll}}`$ can be estimated by $`t_{\mathrm{coll}}{\displaystyle \frac{1}{N\sigma V}},`$ (14) where $`N`$ is the number of clouds per unit volume, $`\sigma `$ is the geometrical cross section of a cloud, expressed as $`\sigma =\pi R_\mathrm{c}^2`$, and $`V`$ is the velocity of a cloud. In fact, $`N\sigma V`$ should be written as $`N\sigma V`$, where $``$ means that the physical quantity is averaged for all the clouds. The number of the clouds is the largest in the region where the gravitational potential well is the deepest. The size of the deepest region is typically estimated by the core radius (typically 100–1000 pc for dSphs; e.g., Mateo 1998). Thus, the physical quantities in this section represents the typical values within the core radius. The velocity $`V`$ is determined by the virial-equilibrium value in gravitational potential of a dSph ($`10`$ km s<sup>-1</sup>). If we give a typical collision timescale, we can estimate the number density of clouds in a dSph by $`N{\displaystyle \frac{1}{\pi R_\mathrm{c}^2Vt_{\mathrm{coll}}}}.`$ (15) Observationally, the duration of second star formation seems a few Gyr (van den Bergh 1994; Mateo et al. 1998; Grebel 1999). Based on the assumption that the timescale of star formation is determined by the collision timescale of interstellar clouds, $`t_{\mathrm{coll}}`$ should be $`\text{a few Gyr}`$. Thus, the following estimation for $`N`$ is possible according to equation (15): $`N1.0\times 10^7\left({\displaystyle \frac{R_\mathrm{c}}{10\mathrm{pc}}}\right)^2\left({\displaystyle \frac{V}{10\mathrm{km}\mathrm{s}^1}}\right)^1\left({\displaystyle \frac{t_{\mathrm{coll}}}{3\mathrm{Gyr}}}\right)^1\mathrm{pc}^3,`$ (16) where we estimated the size of the cloud by $`R_{\mathrm{crit}}`$ estimated in §2.3. This value of $`N`$ corresponds to the mean cloud-cloud interval of $`200`$ pc. We note that $`R_{\mathrm{crit}}`$ is the lower limit of the size of a cloud which survived the first star formation. If the typical size is larger, $`N`$ becomes smaller and the typical interval between clouds gets larger. This argument about the cloud size becomes clear if we introduce the mass spectrum of clouds. The mass spectrum $`𝒩(m)dm`$ is defined by the number density of clouds of mass between $`m`$ and $`m+dm`$. If we assume that the spectrum can be expressed by the power-law, $`𝒩(m)m^p`$, and that mass ($`m`$) and size ($`R`$) of any cloud is related by $`mR^3`$ (the constant mass density of any cloud), we obtain the size distribution of the clouds: $`𝒩(R)dRR^{3p+2}dR`$. Using this form, we calculate the mean size of the clouds, $`R`$, as $`R`$ $`=`$ $`{\displaystyle _{R_{\mathrm{crit}}}^{R_{\mathrm{up}}}}R𝒩(R)𝑑R/{\displaystyle _{R_{\mathrm{crit}}}^{R_{\mathrm{up}}}}𝒩(R)𝑑R`$ (17) $`=`$ $`\left({\displaystyle \frac{3p4}{3p3}}\right)\left[{\displaystyle \frac{(R_{\mathrm{up}}/R_{\mathrm{crit}})^{3p+4}1}{(R_{\mathrm{up}}/R_{\mathrm{crit}})^{3p+3}1}}\right]R_{\mathrm{crit}},`$ where $`R_{\mathrm{up}}`$ is the upper cutoff of the size. Setting as $`R_{\mathrm{up}}R_{\mathrm{crit}}`$, we obtain $`RR_{\mathrm{crit}}`$ for $`p>1`$. In this case, estimation of $`R_\mathrm{c}`$ with $`R_{\mathrm{crit}}`$ (eq. 16) is justified. Field & Saslaw (1965) suggested $`p=3/2`$ to explain observed star formation rate. We further estimate the total mass of surviving clouds. 
The total mass $`M_\mathrm{c}`$ can be estimated by $`M_\mathrm{c}`$ $``$ $`{\displaystyle \frac{4\pi }{3}}R_{\mathrm{dSph}}^3Nm_\mathrm{c}`$ (18) $``$ $`5\times 10^4\left({\displaystyle \frac{R_{\mathrm{dSph}}}{1\mathrm{kpc}}}\right)^3\left({\displaystyle \frac{\overline{n}_\mathrm{c}}{1\mathrm{cm}^3}}\right)\left({\displaystyle \frac{R_\mathrm{c}}{10\mathrm{pc}}}\right)\left({\displaystyle \frac{V}{10\mathrm{km}\mathrm{s}^1}}\right)^1\left({\displaystyle \frac{t_{\mathrm{coll}}}{3\mathrm{Gyr}}}\right)^1M_{},`$ where $`m_\mathrm{c}`$ is defined in equation (6). Thus, the total mass of the stellar population, which is formed in the second star formation epoch (so-called intermediate age), is roughly $`10^4M_{}`$ (or less, since we can hardly expect all the gas is converted to stellar mass). This is 1–3 orders of magnitude smaller than the typical stellar mass of the dSphs ($`10^{5\text{}7}M_{}`$). Indeed, the number of the intermediate-age population is much smaller than the old stellar population for the dSph, though the Carina dSph has prominent intermediate-age populations. (e.g., Mateo 1998). We will comment on the exception later in §4. Here, we should mention the effect of SNe during the second star formation. As shown above, the second star formation is not so active as the initial burst of star formation. Thus, once cloud size is determined during the initial burst through the thermal conduction, the SN heating in the intermediate age has little influence on the clouds. Thus, we can reasonably ignore the SN heating in the intermediate age. ## 4 DISCUSSIONS AND IMPLICATIONS We have inferred the evolution of interstellar clouds in the Local Group dSphs from their observed star formation histories. Owing to the hot gas supplied by initial star formation, small interstellar clouds evaporates. However, clouds larger than $`10`$ pc can survive during the burst of star formation. The surviving clouds contribute to second star formation to form so called “intermediate-age stellar populations.” There are observational evidences that the second star formation occurred in the intermediate age ($`3`$–10 Gyr ago). The timescale of the second star formation is typically a few Gyr (e.g., Mateo et al. 1998). Assuming that star formations are induced by cloud-cloud collisions, the collision timescale should be a few Gyr to realize the observed timescale of the second star formation. Since the collision timescale is related to the number of clouds, the number is constrained. The expected number density of clouds is typically $`1.0\times 10^7`$ pc<sup>-3</sup>, which indicates that the total mass of gas contributing to the second star formation is typically $`10^4M_{}`$. This is 1–3 orders of magnitude smaller than the observed stellar mass of the Local Group dSphs. This indicates that almost all the stars in the dSphs are formed in the initial star formation. Recently, Hirashita, Takeuchi, & Tamura (1998) suggested that the luminous mass of a dSph is determined by the depth of dark matter potential. Their suggestion is true if the first star formation is dominant in the star formation histories of dSphs. Indeed, our cloud-cloud collision model implies that the second stellar population is not dominant in mass. We should also consider environmental effects. Van den Bergh (1994) suggested that environmental effects on dSphs may be important for their star formation histories. 
He showed that the star formation histories of the Local Group dwarf galaxies correlate with the Galactocentric distances (the distances from the Galaxy): dwarf galaxies near the Galaxy, such as Ursa Minor and Draco, contain only a small fraction of intermediate-age or recent stellar populations, while there is observational evidence of recent star formation in distant dwarf galaxies. In the same paper, he also suggested that star formation in the dwarf galaxies is affected by the existence of UV radiation or the wind from the Galaxy. Hirashita, Kamaya, & Mineshige (1997) showed that the Galactic wind can strip the gas of nearby dwarf galaxies. In the epoch of the initial burst, the effects of OB star radiation and SNe in dSphs are much stronger than environmental effects and determine the structure of dSphs (Hirashita, Takeuchi, & Tamura 1998). However, after the first star formation, such effects become weak, so that environmental effects such as the Galactic wind become the dominant factor determining the physical conditions (Hirashita, Kamaya, & Mineshige 1997). Thus, the environmental effect may be responsible for the physical nature of the second star formation. Since Ursa Minor and Draco are located closer to the Galaxy than other dSphs, they are easily affected by the ram pressure of the Galactic wind, which strips their gas. Therefore, the second star formation is less prominent in these two dSphs than in other dSphs (van den Bergh 1994). Recently, the star formation histories of the companion dSphs of M31 have begun to be made clear (e.g., Armandroff, Davies, & Jacoby 1998). The increase of the number of sample dSphs will contribute to testing the environmental effects. Here we should note that Einasto, Saar, & Kaasik (1974) pointed out environmental effects on the structures of galaxies. Another environmental effect is possible. The burst of star formation may be induced by infall of intergalactic gas clouds. If a cloud is captured in the gravitational potential well of a dSph, the cloud may form stars. Hirashita, Kamaya, & Mineshige (1997) pointed out that the present activity of star formation in the Magellanic Clouds may be due to such infall of gas. However, since the potential wells of dSphs are much shallower than those of the Magellanic Clouds, it seems difficult for dSphs to capture the intergalactic clouds. Contrary to most dSphs, the Carina dSph shows a burst of star formation in the intermediate age. One possibility to explain this peculiar nature of the galaxy is the UV radiation field. It is suggested that the UV radiation field suppresses the formation of dwarf galaxies (Babul & Rees 1992; Efstathiou 1992). The UV from the Galaxy may have suppressed the formation of Carina. When the UV field at the galaxy became weak in the intermediate age, Carina experienced a burst of star formation. To show that this is true, we must also explain why most dSphs did not suffer the suppression from UV radiation. Future work on the history of the UV radiation field in the Local Group may provide us with a hint to solve this problem. We note that the cloud-cloud collision model is applicable to the star formation histories of dwarf irregular galaxies, giant elliptical galaxies, or distant galaxies. For the application of the collision model to dense molecular clouds in a high-redshift object, see e.g., Ohta et al. (1998). The following points remain to be solved in this paper: What determines the number of stars formed in the initial burst of star formation?
This is the problem raised also in Hirashita, Takeuchi, & Tamura (1998). What determines the number of clouds that contribute to the intermediate-age stellar populations? This question is related to the mass spectrum of interstellar clouds (§3). We wish to thank the anonymous referee for helpful comments that substantially improved the discussion of the paper. We are grateful to S. Mineshige for continuous encouragement. We thank H. Kamaya, T. T. Takeuchi, H. Nomura, N. Tamura, and K. Yoshikawa for helpful comments and useful discussions. This work was motivated by discussions with S. van den Bergh at the IAU meeting. We would like to thank him for his stimulating comments. This work is supported by the Research Fellowship of the Japan Society for the Promotion of Science for Young Scientists. We fully utilized the NASA’s Astrophysics Data System Abstract Service (ADS). APPENDIX ## Appendix A THE KELVIN-HELMHOLTZ INSTABILITY The Kelvin-Helmholtz (K-H) instability has been discussed in various astrophysical contexts. For example, Miyahata & Ikeuchi (1995) discussed the stability of protogalactic cloud against the K-H instability (see also Murray et al. 1993). The K-H instability is also applied to interstellar physics: Klein, McKee, & Collella (1994) examined the interstellar clouds, while Fleck (1984) and Kamaya (1996) investigated molecular clouds. All of these works assume that dense clouds are embedded in a diffuse medium. Since the dense medium generally moves at the velocity determined by the depth of gravitational potential, there is generally a relative motion between the dense and diffuse media. The presence of the relative motion makes it possible to discuss the K-H instability (e.g., Chandrasekhar 1961). In §2, we considers interstellar clouds embedded in hot tenuous gas originating from successive SNe. Since clouds in a galaxy generally move at the velocity determined by the gravitational potential of the galaxy, it is necessary to examine the timescale of the growth of the K-H instability. The growth rate ($`\omega `$) in the linear regime at a flat interface between the cloud and ambient hot gas can be expressed as $`\omega =k{\displaystyle \frac{(\rho _\mathrm{c}\rho _\mathrm{h})^{1/2}U}{\rho _\mathrm{c}+\rho _\mathrm{h}}},`$ (A1) where $`k`$ is a wavenumber of a mode ($`k=2\pi /\lambda `$, where $`\lambda `$ is the wavelength), $`\rho _\mathrm{c}`$ and $`\rho _\mathrm{h}`$ are, respectively, the mass densities of cloud and hot gas, and $`U`$ is the relative velocity (Drazin & Reid 1981). Since $`\rho _\mathrm{c}\rho _\mathrm{h}`$, we obtain the following estimation for a typical proto-dwarf galaxy considered in §2: $`\omega 6.4\times 10^{14}\left({\displaystyle \frac{\lambda }{10\mathrm{pc}}}\right)^1\left({\displaystyle \frac{\rho _\mathrm{h}/\rho _\mathrm{c}}{10^3}}\right)^{1/2}\left({\displaystyle \frac{U}{10\mathrm{km}\mathrm{s}^1}}\right)\mathrm{s}^1.`$ (A2) The timescale of the growth, $`t_{\mathrm{KH}}`$, is estimated by $`t_{\mathrm{KH}}{\displaystyle \frac{2\pi }{\omega }}3.1\times 10^7\left({\displaystyle \frac{\lambda }{10\mathrm{pc}}}\right)\left({\displaystyle \frac{\rho _\mathrm{h}/\rho _\mathrm{c}}{10^3}}\right)^{1/2}\left({\displaystyle \frac{U}{10\mathrm{km}\mathrm{s}^1}}\right)^1\mathrm{yr},`$ (A3) which is comparable with the evaporation time defined in §2. Thus, the instability may determine the size of clouds (see e.g., Kamaya 1998 for stabilizing effects).
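A short numerical check of Eqs. (A1)–(A3) for the fiducial values quoted there is sketched below; for these inputs the growth time comes out at the quoted $`3\times 10^7`$ yr, with a corresponding growth rate of about $`6\times 10^{-15}\mathrm{s}^{-1}`$.

```python
# Numerical check of the Kelvin-Helmholtz growth rate and time, Eqs. (A1)-(A3).
import math

pc, yr = 3.086e18, 3.156e7        # cm, s
wavelength = 10.0 * pc            # perturbation wavelength ~ cloud size
U = 1e6                           # relative velocity, 10 km/s in cm/s
density_ratio = 1e-3              # rho_h / rho_c (tenuous hot gas, dense cloud)

k = 2 * math.pi / wavelength
# for rho_c >> rho_h, (rho_c*rho_h)**0.5 / (rho_c + rho_h) ~ (rho_h/rho_c)**0.5
omega = k * math.sqrt(density_ratio) * U          # Eq. (A1) in this limit
t_KH = 2 * math.pi / omega                        # Eq. (A3)
print(f"omega ~ {omega:.1e} s^-1, t_KH ~ {t_KH / yr:.1e} yr")
# ~6e-15 s^-1 and ~3e7 yr, comparable to the evaporation time of Section 2
```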
no-problem/9903/cond-mat9903147.html
ar5iv
text
# Thermal Contraction and Disordering of the Al(110) Surface \[ ## Abstract Al(110) has been studied for temperatures up to 900 K via ensemble density-functional molecular dynamics. The strong anharmonicity displayed by this surface results in a negative coefficient of thermal expansion, where the first interlayer distance decreases with increasing temperature. Very shallow channels of oscillation for the second-layer atoms in the direction perpendicular to the surface support this anomalous contraction, and provide a novel mechanism for the formation of adatom-vacancy pairs, preliminary to the disordering and premelting transition. Such characteristic behavior originates in the free-electron-gas bonding at a loosely packed surface. PACS numbers: 71.15.Pd, 65.70.+y, 68.35.Ja, 68.45.Gd \] Metal surfaces exhibit a remarkable behavior as a function of the temperature. Thermodynamic stability is often determined by a delicate balance between energetic and entropic effects, and can lead to a rich phenomenology for the phase diagrams of different systems. Unreconstructed face-centered cubic (110) surfaces (e.g. Al, Cu, Ni) display a damped oscillatory pattern of interlayer relaxations, starting with a large contraction between the first and the second layer. Such behavior originates in the response of surface atoms to under-coordination: moving towards the underlying layer, they increase their surrounding charge density while reducing the corrugation of the surface and the lateral tensile strain. When the temperature is raised, this under-coordinated layer can start to disorder even before the melting temperature of the bulk is reached. While the suggestion that a surface could act as a nucleation stage for melting had long been made, experimental evidence of a reversible melting transition limited to the outer surface layers came only recently. For the case of Al(110), several experimental techniques (ion blocking and shadowing, electron or neutron diffraction, He scattering) have since shown a clear onset of disordering at temperatures between 770 K and 815 K, whereas the bulk melting temperature is 933 K. Computer simulations based on different models (effective-medium theory, embedded-atom method, glue models) have then been applied to the study of several (110) surfaces (Pb, Al, Cu, Ni), and surface premelting was observed in all cases. However, many issues remain unresolved. Extensive low-energy electron diffraction (LEED) studies in Al(110)(c) show a negative thermal expansion coefficient for the first interlayer distance, and a large positive one (twice the bulk value) for the second interlayer distance. These findings are at variance with widely held general theoretical considerations, and with the results of available computer simulations for Cu, Ni, and Al which predict an expansion of the first interlayer distance with temperature. In addition, model calculations fail to reproduce the zero-temperature multilayer relaxation pattern, predicting only the contraction of the first interlayer. On Al(110) the premelting transition is preceded by an anomalous proliferation of adatoms on the surface, for which there is no reliable microscopic picture. Finally, the degree of anharmonicity and anisotropy of the different surface layers, as opposed to the bulk, is not known, due to the experimental difficulty in resolving different layers. Ab-initio molecular dynamics (MD) simulations of metal surfaces are very challenging, and only few and limited studies have been attempted. 
We use here an approach that we recently introduced (ensemble density-functional theory (eDFT)), together with a technical improvement for the Brillouin Zone (BZ) integrations (so-called “cold smearing”), which is particularly suited to MD simulations. Applying this scheme to the case of Al(110), we provide the first theoretical confirmation of a negative thermal expansion for this surface, and excellent agreement with the experiments for the temperature-dependent multilayer relaxations. Moreover, we present a novel, and in retrospect simple, picture of the microscopic mechanisms that lead to this anomalous thermal contraction and to the surface disordering associated with premelting. In first-principles calculations for metals it is customary to introduce a fictitious electronic temperature $`\sigma `$, to broaden the density of states and to smooth the discontinuities at the Fermi energy $`\mu `$, greatly improving the sampling accuracy of a given set of k-points. It is very convenient to choose a broadening that has zero first and second moments, so that the resulting electronic free energy does not have any quadratic dependence on the broadening temperature, and neither do its derivatives with respect to any external parameter (e.g. the Hellmann-Feynman forces, or the stress tensor). In the existing schemes this is achieved at the price of allowing for negative orbital occupancies, so that problems can arise in self-consistent calculations where the total electronic density may become negative. Here, we present a broadening scheme leading to an occupation function that is positive definite. Occupation broadening convolves the density of states with a broadened $`\delta `$ function; the cold-smearing broadening is $$\tilde{\delta }(x)=\frac{2}{\sqrt{\pi }}e^{-\left(x-\frac{1}{\sqrt{2}}\right)^2}\left(2-\sqrt{2}x\right).$$ (1) Spin-degeneracy is assumed here, and $`x=\frac{\mu -ϵ}{\sigma }`$. The “generalized entropic functional” $`S=\sum _is_i`$ that can be derived and the occupation numbers $`f_i=\int _{-\infty }^{x_i}\tilde{\delta }(x)dx`$ can all be expressed in terms of pseudoenergies $`\tilde{ϵ}_i`$ ($`x_i=\frac{\mu -\tilde{ϵ}_i}{\sigma }`$); in particular $$s_i=\frac{1}{\sqrt{\pi }}e^{-\left(x_i-\frac{1}{\sqrt{2}}\right)^2}\left(1-\sqrt{2}x_i\right).$$ (2) No practical difficulty to the self-consistent calculations is caused by the fact that some spin-degenerate occupancies can still exceed 2; this was also the case for the choices set forth in Ref. . The calculations use the local-density approximation (LDA) and norm-conserving pseudopotentials, with a plane-wave basis cutoff of 11 Ry. The bulk properties of Al are well represented: the lattice parameter is 3.96 (4.02) Å, the elastic constants C<sub>11</sub>=117 (114) GPa, C<sub>12</sub>=66 (62) GPa, and C<sub>44</sub>=39 (32) GPa (experimental results at 0 K are in parentheses). The simulation cell is a $`3\times 3`$ 8-layer Al(110) slab, containing 72 atoms separated by 8.5 Å of vacuum. k-point sampling is performed with the $`\frac{1}{4},\frac{1}{4},\frac{1}{4}`$ Baldereschi point, using $`\sigma `$=0.5 eV of cold smearing. The zero-temperature structural properties are summarized in Table I: good and consistent agreement with the experimental results is registered.
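To make the stated properties of this broadening concrete, the short sketch below numerically integrates $`\tilde{\delta }(x)`$ of Eq. (1): its zeroth moment equals 2 (the spin-degenerate norm), its first and second moments vanish (which is what removes the quadratic dependence on $`\sigma `$), and the resulting occupation is positive, rising from 0 and saturating at 2 with a small overshoot, consistent with the remark above that occupancies can exceed 2. This is an illustrative snippet, not the code used for the calculations reported here.

```python
# Moments and occupations of the cold-smearing broadening, Eq. (1).
import numpy as np
from scipy.integrate import quad

def delta_tilde(x):
    # (2/sqrt(pi)) * exp(-(x - 1/sqrt(2))^2) * (2 - sqrt(2)*x)
    return (2.0 / np.sqrt(np.pi)
            * np.exp(-(x - 1.0 / np.sqrt(2.0)) ** 2)
            * (2.0 - np.sqrt(2.0) * x))

for n in range(3):
    moment, _ = quad(lambda x: x ** n * delta_tilde(x), -20.0, 20.0)
    print(f"moment {n}: {moment:+.6f}")      # expect +2, then 0, then 0

def occupation(x):
    """f(x) = integral of delta_tilde from -infinity up to x."""
    value, _ = quad(delta_tilde, -20.0, x)
    return value

print([round(occupation(x), 3) for x in (-3.0, 0.0, 1.0, 3.0)])
# roughly [0.0, 0.8, 2.05, 2.0]: positive everywhere, with a slight overshoot above 2
```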
The 8-layer calculation has been performed using the same finite cell and sampling of the MD simulations; this introduces some small finite-size errors, that can be evaluated exactly at 0 K comparing them with a fully converged calculation (a $`1\times 1`$ 15-layer slab with $`12\times 12\times 2`$ k-point sampling). Constant-temperature MD simulations have been performed using a Gaussian thermostat and a leapfrog velocity Verlet algorithm to integrate the ionic equations of motion, using a timestep of 8 fs. A set tolerance for each timestep of 5 meV/cell in the spread of the total energies over the last 5 electronic iterations resulted in a negligible drift of the constant of motion (less than 1.5 meV/atom/ps for a microcanonical run). The lattice parameter parallel to the surface was fixed applying the experimental thermal expansion coefficient for the bulk to the LDA equilibrium lattice parameter. We followed five runs, at increasing temperatures of 400, 600, 700, 800 and 900 K, for 5, 10, 6, 6, and 6 ps respectively. We present in Fig. 1 our results for the mean square displacements (MSDs) in the different layers, from the surface to the interior of the crystal, during a 5 ps run at 400 K. The horizontal scale shows the decremental time: the plot starts with the averages over the full run, then proceeds by discarding a progressively longer initial segment. This approach highlights the initial thermalization time (negligible in this case), the flatness of the plateau for the converged time average, and provides an estimate for the statistical errors. The left panel of Fig. 1 shows the $`[001]`$ component for the MSDs (we label it $`x`$); this component is parallel to the surface and perpendicular to the $`[\overline{1}10]`$ rows that characterize the $`(110)`$ surface. The right panel shows the $`[110]`$ $`z`$ component, perpendicular to the surface. The time averages are well converged, with the third- and fourth-layer results very close to each other (giving us confidence on the absence of finite-size effects), and close to the experimental bulk values. Two results stand out from the simulation. First, the MSDs in the $`x`$ direction are twice as large for the surface atoms than for those in all the other inner layers. While it can be expected that the undercoordinated atoms on the surface should be more loosely bound, the large difference with the averages for the lower layers is notable. Second, the MSDs in the $`z`$ direction (i.e. perpendicular to the surface) are much larger in the second layer than in the first layer. This is a distinctive feature of this crystallographic orientation that was first encountered in embedded-atom simulations of Ni(110) and Cu(110). In Al the effect is more striking due to its free-electron-gas behavior. A simple rationalization can be offered: since the (110) surface is very open, atoms in the second layer have natural channels of oscillation perpendicular to the surface and directed towards the vacuum. The charge density on the top of the second-layer atoms is still quite homogeneous, and the bonds are easily stretched, leaving thus the freedom for the atoms to move back and forth along these channels. On the other hand, atoms in the first (surface) layer see the vacuum acting as a hard wall, limiting their mobility outwards; their largest oscillations are thus parallel to the surface and perpendicular to the $`[\overline{1}10]`$ rows. The anisotropic behavior of the surface dynamics can be gauged by looking at Fig. 
2, where the MSDs at 700 K are plotted in all three crystallographic orientations as a function of the layer depth. Moving from the bulk to the surface, one can observe that the third layer still behaves in a bulk-like fashion: the MSDs are isotropic, and they are only slightly larger than those in the two layers below. The anisotropy becomes very distinct in the second layer, with its characteristic large MSDs perpendicular to the surface, and persists in the first layer, for which the ‘easy’ channels are parallel to the surface and across the close-packed $`[\overline{1}10]`$ rows. Some of the components for the MSDs in the first and second layers can be up to 2-3 times their bulk counterparts. These enhancements near the surface are due to the lower coordination; in addition, a higher degree of anharmonicity makes these surface MSDs increase much more rapidly with temperature than the bulk ones. This becomes apparent from the plot in Fig. 3 of the MSDs as a function of the temperature. The innermost layers show isotropic MSDs, with some deviation from the linear regime only above 700 K (in the harmonic regime the MSDs dependence on the temperature is exactly linear). The outer layers, on the contrary, are strongly anharmonic. The very large increases in the vibrational amplitudes along the ‘easy’ channels are precursors to the creation of adatoms and vacancies on the surface, that lead to the disordering and premelting of the surface. In fact, we observe that with increasing temperature atoms in the second-layer start making increasingly large slow excursions towards the surface. One of these events is shown in Fig. 4; the highlighted atom temporarily pops out from the surface. In another event the second-layer atom remained outside the surface, creating an adatom-vacancy pair where the vacancy is initially in the second layer (this void is quickly filled up by a surface atom). In this second case the adatom diffused away via exchange diffusion. The microscopic dynamics provides a clear explanation of the behavior of this surface, that displays an increasing contraction of the first interlayer distance with temperature, where a large expansion would have been expected. The contraction can be understood looking at the motion of the second-layer atoms along these channels that are shallower towards the vacuum. With increasing temperature, the center of mass of the second layer moves outwards, since it is not hampered by nearest-neighbors directly on top (the first layer is staggered with respect to the second). The first layer is more limited in its expansion, since the vacuum acts as a hard wall. The end result is that the average distance between the first and the second layer decreases with temperature. This decrease is then offset by a larger thermal increase of the distance between the second and the third layer. The results of our simulations for the interlayer relaxation as a function of temperature (see Fig. 5) are in very good quantitative agreement with the LEED data (Ref. (c)). In conclusion, our calculations on Al(110) represent the first extensive first-principles molecular-dynamics simulations of the dynamics on a metal surface, presenting both an insightful picture of the microscopic dynamics and a remarkable agreement with the available experimental data. The microscopic dynamics of this surface is peculiar, and governed by the interplay between the free-electron-gas behavior of the bulk and the quasi-covalent bonding of the undercoordinated surface atoms. 
Two distinct soft channels of oscillation have been identified. One channel lies in the surface plane, along the $`[001]`$ direction, perpendicular to the close-packed surface grooves. The other, unexpected one is perpendicular to the surface but confined to the second-layer atoms. It is this channel that is responsible for the observed anomalous contraction of the surface with temperature. In addition, it provides a novel, favored mechanism for the generation of adatom-vacancy pairs, whose proliferation is a precursor to the disordering and premelting transition. N.M. acknowledges support from NSF grant DMR-96-13648; these calculations have been performed on the Hitachi S3600 at the University of Cambridge High Performance Computing Facility. Present address: Center for Computational Materials Science, Naval Research Laboratory, Washington DC.
# 1 Introduction ## 1 Introduction From the point of view of computer simulation, lattice approach to non perturbative aspects of quantum field theory is a mature technique; apart from few exceptions, well consolidated schemes of simulation do exist, something like a recipes book, that allow studies, for example, of the most interesting features of QCD. The progress in the results is quite slow, in view of the large computing power needed for realistic calculation, but the field appears well founded. The mentioned ”few exceptions”, however, concern very interesting problems, as well. The most paradigmatic of these dark zones is the study of thermodynamic of QCD in presence of non-zero baryonic density, shortly Finite Density QCD. The standard way to include the effects of baryonic matter on QCD vacuum leads to complex action in Euclidean formulation and this prevents the use of standard simulation algorithms, based on the idea of importance sampling, defined through a positive definite density of probability, e.g. the exponential of minus the Euclidean action. This problem can be rephrased stating the impossibility of defining a Boltzmann weight for each field configuration: only calculating the partition function we can define correctly the observables and obtain sensible results for quantities of physical interest. Calculations of partition functions are not infrequent in lattice simulations , but their nature of extensive quantities raises the problem of the feasibility of this type of calculation with limited statistics, as forced from finite computing power. In the following section we will argue that, although reliable evaluation of the partition function of fermions coupled lattice gauge theories at zero baryon density is possible and successful , the extension of such technique for Finite Density QCD appears out of reaching for any reasonable statistics, at least in a range of theory parameters: for some values of the chemical potential $`\mu `$ the phase of the fermionic determinant can be estimated only averaging over $`O(e^V)`$ configurations. In two recent papers we have presented our results for Finite Density QCD, mainly working with light quarks, in the infinitely strong coupling limit. Our approach consisted in trying to approximate the correct partition function using the modulus of the determinant, as suggested by the analysis of the SU(3) linear chain model , . The emerging picture is quite disappointing: we are essentially unable to reproduce the results of ref. , obtained with the MDP (Monomer-Dimer-Polymer) technique. Moreover we did show as the simulation scheme, known as Glasgow method , and possibly any other method based on the Grand Canonical Partition Function (GCPF) approach , produce essentially the same results of our calculations, when analyzed in a way to avoid perverse numerical effects due to rounding errors . In spite of these disappointing aspects, there are regions in the parameters space of the theory, in particular at large bare quark masses, in which one can hope to obtain reliable results, of some interest from a methodological point of view. In fact, at large quark masses (and any $`\beta `$), the interval of chemical potential where the contribution of the phase can not be appreciated shrinks. The rest of the work, consequently, is devoted to the investigation of large quark mass limit, in some sense a favorite laboratory in which numerical techniques can be tested. 
Monitoring the expectation value of the phase of the Dirac determinant we can distinguish the regions in the parameter space where our evaluation of the partition function of finite density QCD is (in principle) exact from the ones where we miss a possible contribution to $`𝒵`$. A coherent picture seems to emerge from our data: a saturation transition exists at all couplings and merges, in the scaling region, to the true deconfining critical line that, with respect to $`\mu =0`$, moves towards smaller $`\beta `$ with increasing $`\mu `$. In the next section we will give arguments to explain why the contribution of the phase can not be measured and will present, in the strong coupling limit, a quantitative check of the Grand Canonical formulation results using data obtained with different techniques. The third section is devoted to the exposition of our approach to simulations of finite density QCD at finite coupling which exploits the main advantage of the MFA approach, i.e. the free mobility in the $`(\beta ,\mu )`$ plane. In the fourth section we present results for fermionic and gluonic observables, discussing the fate of the deconfining phase transition, expected, on phenomenological ground, when one increases the baryon density. The analisys is complemented with informations coming from the partially solved infinite bare mass limit . In the last section a final discussion of the most important results is done. ## 2 The partition function of Finite Density QCD The Finite Density QCD partition function can be written as $$𝒵=[dU]e^{\beta S_g(U)}det\mathrm{\Delta }(U,m_q,\mu )$$ (1) where, using the staggered formulation, the fermionic matrix $`\mathrm{\Delta }`$ takes the standard form $`\mathrm{\Delta }_{i,j}=m_q\delta _{i,j}+{\displaystyle \frac{1}{2}}{\displaystyle \underset{\nu =1,2,3}{}}\eta _\nu (i)[U_\nu (i)\delta _{j,i+\widehat{\nu }}U_\nu ^{}(i\widehat{\nu })\delta _{j,i\widehat{\nu }}]`$ $`+{\displaystyle \frac{1}{2}}[U_4(i)\delta _{j,i+\widehat{4}}e^\mu U_4^{}(i\widehat{4})\delta _{j,i\widehat{4}}e^\mu ]`$ The contribution of modulus $`|det\mathrm{\Delta }|`$ of Dirac determinant and its phase $`\varphi _\mathrm{\Delta }`$ can be separated as $$𝒵=𝒵_{}e^{i\varphi _\mathrm{\Delta }}_{}$$ (2) where $$𝒵_{}=[dU]e^{\beta S_g(U)}|det\mathrm{\Delta }(U,m_q,\mu )|$$ (3) is the partition function of the model with the modulus of the determinant (modulus QCD in the following), and $$e^{i\varphi _\mathrm{\Delta }}_{}=\frac{[dU]e^{\beta S_G(U)}|det\mathrm{\Delta }|e^{i\varphi }}{[dU]e^{\beta S_G(U)}|det\mathrm{\Delta }|}$$ (4) It is clear from eq. (2) that, in the thermodynamical limit, the theory defined by means of $`𝒵_{}`$ is physically different from the original theory only when the expectation value of the cosine of the phase of fermion determinant is vanishing exponentially with the system volume. In the regions of parameter space where the aforementioned expectation value is not $`O(e^V)`$ modulus QCD is an equivalent formulation of Finite Density QCD i.e. indistinguishable in the thermodynamical limit. In the rest of parameter space, modulus QCD clearly overestimates the true QCD partition function. Let us try to better illustrate this concept looking at figure 1. It refers to infinitely strong coupling limit $`\beta =0`$ and $`V=6^3\times 4`$. At fixed quark mass $`m_q`$ the partition function of the system is only dependent on chemical potential $`\mu `$. 
If we plot the free energy versus $`\mu `$ we can extract the phase structure from the appearance of a singularity in (some derivative of) the curve. Two extreme limits are well known. At $`\mu =0`$ we get the logarithm of the usual fermion determinant averaged over gauge field configurations with a flat distribution: an average of a well defined (real and positive) quantity that can be computed. On the other hand in the large $`\mu `$ limit only the last term of Grand Canonical Partition Function (see later) survives: $`det\mathrm{\Delta }\left(1/2\right)^{3V}e^{3V\mu }`$ and the free energy is a straight line with slope $`3V`$. In this limit the (baryon) number density, defined as: $$N(\mu )=\frac{1}{3V}\frac{}{\mu }\mathrm{log}𝒵$$ (5) is equal to $`1`$, and we can say that we are in a saturation regime, with Pauli exclusion principle preventing further increase of baryon density. In these two limits modulus QCD is coincident with the true theory and deviations are possible only in the intermediate region. Starting from $`\mu =0`$, we can use the data of fig. 5 in ref. , regarding number density at $`m_q=0.1`$, in order to reconstruct the free energy of the true theory as seen from the MDP approach. This is shown in figure 1 as the dotted line. If we superimpose the results of modulus QCD (continuous line) we can easily identify three regions: * $`\mu <\mu _1=0.3`$, which defines the onset in modulus QCD, where the number density is essentially zero; * $`\mu >\mu _2=1.0`$ , the saturated region; * $`\mu _1<\mu <\mu _2`$, the region where modulus QCD grossly overestimates the free energy of true theory. (as stated in , using Glasgow prescription for dealing with the complex determinant, we obtain, for the free energy, exactly the same results as in modulus QCD). In figure 2 we report, for the same lattice and quark mass, the difference between the free energy of modulus QCD and the estimation based on data of ref. . Superimposed to that we plot the expectation value of $`e^{i\varphi _\mathrm{\Delta }}_{}`$ at the same value of the parameters. It is evident that the intermediate region is where the phase term is vanishing within statistical errors. If we concentrate on a value of $`\mu `$ inside this region, for example $`\mu =0.7`$, and we plot the distributions of the phase and the (logarithm of the) modulus of fermion determinant of single field configurations, we can see (figure 3) that modulus distribution is behaved as expected, while the phase distribution is almost flat. These distributions have been computed using $`N2500`$ configurations of a $`6^34`$ lattice. With this statistics we can hope to measure accurately the phase term $`e^{i\varphi _\mathrm{\Delta }}_{}`$ only down to $`O(1/\sqrt{N})`$ ($`0.02`$ for our runs), far from the $`O(e^V)`$ order needed in principle. Even with a statistics of some thousands of configurations, we can say nothing on free energy of true theory in the range $`\mu _1<\mu <\mu _2`$, that covers the region where the number density varies rapidly. This does not imply necessarily that the phase is relevant in this region: for example it could go to zero as $`e^{V_S}`$, with $`V_S`$ the spatial volume, being in this case at the same time irrelevant and non measurable! The situation becomes somewhat better if we move to large quark mass: the range $`(\mu _1,\mu _2)`$, where finite statistics effects prevent to obtain a sensible evaluation of free energy, becomes narrower (see later), thus allowing the study of the model in a wider parameters range. 
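The severity of this statistics problem can be illustrated with a simple reweighting toy model (our own construction; the flat phase distribution and the target value $`10^{-3}`$ are artificial and not taken from the simulations): since the statistical error of the phase average falls off only as $`1/\sqrt{N}`$, even of order $`10^6`$ configurations barely resolve an average of order $`10^{-3}`$, let alone one of order $`e^{-V}`$.

```python
import numpy as np

rng = np.random.default_rng(1)

def phase_average(true_value, n_conf):
    """Estimate <cos(phi)> = true_value from n_conf configurations by
    reweighting phases drawn from a flat distribution; the statistical
    error decreases only like 1/sqrt(n_conf)."""
    phi = rng.uniform(-np.pi, np.pi, size=n_conf)      # flat phase ensemble
    # ratio of the tilted distribution (1 + 2*a*cos(phi))/(2*pi), whose exact
    # average of cos(phi) equals a = true_value, to the flat one 1/(2*pi)
    ratio = 1.0 + 2.0 * true_value * np.cos(phi)
    w = np.cos(phi) * ratio
    return w.mean(), w.std(ddof=1) / np.sqrt(n_conf)

for n in (2_500, 40_000, 640_000, 2_560_000):
    est, err = phase_average(true_value=1.0e-3, n_conf=n)
    print(f"N = {n:9d}:  <cos(phi)> = {est:+.5f} +/- {err:.5f}")
```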
The same scenario holds at finite coupling too, allowing us to investigate a great part of the parameter space. ## 3 Simulation Scheme In this section we will present the simulation scheme that we have used in our work. Our simulations are based on the GCPF (Gran Canonical Partition Function) formalism with an MFA (Microcanonical Fermionic Average) inspired approach for intermediate coupling analysis. The GCPF formalism allows one to write the fermionic determinant as a polynomial in the fugacity $`z=e^\mu `$: $`det\mathrm{\Delta }(U;m,\mu )`$ $`=`$ $`det(G+e^\mu T+e^\mu T^{})=z^{3V}det(P(U;m)z^1)`$ $`=`$ $`{\displaystyle \underset{n=3L_s^3}{\overset{3L_s^3}{}}}a_nz^{nL_t}`$ where the propagator matrix $`P`$ is $$P(U;m)=\left(\begin{array}{cc}GT& T\\ T& 0\end{array}\right)$$ in which $`G`$ contains the spatial links and the mass term, $`T`$ contains the forward temporal links and $`V=L_s^3L_t`$ is the lattice volume. Once fixed the quark mass $`m_q`$, a complete diagonalization of $`P`$ allows one to reconstruct, trough a recursion algorithm, the coefficients $`a_n`$, hence $`det\mathrm{\Delta }`$, for any value of the chemical potential $`\mu `$. Due to the $`Z(L_t)`$ symmetry of the eigenvalues of $`P`$ it is possible to write $`P^{L_t}`$ in a block form and we only need to diagonalize a $`6L_s^3\times 6L_s^3`$ matrix. This general method has been implemented in the framework of an MFA inspired approach. The basic idea in MFA is the exploitation of the physical equivalence between the canonical and microcanonical formalism via the introduction of an explicit dependence on the pure gauge energy in the computation of the partition function. Indeed (1) can be written as: $$𝒵(\beta ,\mu ,m)=𝑑En(E)e^{6V\beta E}<S_{eff}^F(\mu ,m_q)>_E$$ (6) where $$n(E)=[dU]\delta (6VES_g[U])$$ (7) is the density of states at fixed pure gauge energy $`E`$, and $$<S_{eff}^F(\mu ,m_q)>_E=\frac{[dU]\delta (6VES_g[U])S_{eff}^F([U],\mu ,m_q)}{n(E)}$$ (8) is the average over gauge field configurations at fixed energy $`E`$ of a suitable definition of effective fermionic action. For the calculation of $`<S_{eff}^F>_E`$ we proceed as follows: first, we choose a set of energies selected to cover the range of $`\beta `$ we are interested in. Secondly, for all the energies in the set, we generate gauge field configurations using a pseudo-microcanonical code; the generation of gauge fields at fixed energy is not the costly part of the whole procedure, so we can well decorrelate the configurations used for measuring the Dirac operator. Then, a standard NAG routine is used in order to obtain the complete set of eigenvalues of the propagator matrix $`P`$. At this point we can reconstruct the fugacity expansion coefficients $`a_n`$ or, without any substantial additional computer cost, use the eigenvalues to explore the possibilities offered by alternative prescriptions for the fermionic effective action, i.e. evaluate the modulus of the determinant and hence $`𝒵_{}`$. At the end, we have the fermionic effective action evaluated at discrete energy values: a polynomial interpolation allows the reconstruction at arbitrary values of the energy $`E`$, in order to perform the numerical one-dimensional integration in (6) and obtain the partition function $`𝒵_{}(\beta ,\mu ,m)`$. In a previous work we have found evidence for numerical instabilities in the standard evaluation of coefficients $`a_n`$, whose origin lies on the ordering of the eigenvalues of $`P`$ as calculated by a standard diagonalization routine. 
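To make the bookkeeping of the fugacity expansion explicit, the following schematic sketch (random complex numbers stand in for the actual eigenvalues of the $`6L_s^3\times 6L_s^3`$ propagator matrix, and overall signs and normalizations, which depend on conventions, are left out) assembles the polynomial coefficients with the usual one-root-at-a-time recursion, applies the random reordering of the eigenvalues that is discussed next as a remedy for rounding problems, and evaluates a number-density-like quantity through Eq. (5); for random input only the approach to saturation, $`N(\mu )\rightarrow 1`$ at large $`\mu `$, is meaningful.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy lattice: Ls = 2, Lt = 4, so the reduced propagator matrix would have
# 6*Ls^3 = 48 eigenvalues; here they are replaced by random complex numbers.
Ls, Lt = 2, 4
lam = rng.normal(size=6 * Ls**3) + 1j * rng.normal(size=6 * Ls**3)

# Random rearrangement of the eigenvalue order before the recursion
# (the remedy for rounding errors discussed below).
rng.shuffle(lam)

# Coefficients of prod_k (zeta + lam_k) = sum_m c[m] * zeta^m, built by
# multiplying in one root at a time.
coeff = np.array([1.0 + 0.0j])
for l in lam:
    coeff = np.concatenate(([0.0], coeff)) + l * np.concatenate((coeff, [0.0]))

def log_det(mu):
    """Schematic log|det Delta| for one configuration, zeta = exp(Lt*mu);
    the zeta^(-3*Ls^3) prefactor centres the powers at n = -3Ls^3..3Ls^3."""
    zeta = np.exp(Lt * mu)
    poly = np.sum(coeff * zeta ** np.arange(len(coeff)))
    return -3 * Ls**3 * Lt * mu + np.log(np.abs(poly))

def density(mu, dmu=1.0e-4):
    """Number density of Eq. (5) from a centred finite difference."""
    V = Ls**3 * Lt
    return (log_det(mu + dmu) - log_det(mu - dmu)) / (2.0 * dmu) / (3.0 * V)

for mu in (0.0, 0.25, 0.5, 1.0, 2.0):
    print(f"mu = {mu:4.2f}   N(mu) = {density(mu):+.3f}")
```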
A random eigenvalue arrangement, before the calculation of the coefficients $`a_n`$, is necessary in order to control rounding effects. In the present work we have always used this procedure to calculate the GCPF expansion coefficient. To conclude this section let us, briefly, summarize the usefulness of MFA. This algorithm does not require a separate fermionic simulation for each value of $`\beta `$, as the standard HMC (Hybrid Monte Carlo) algorithms, thus allowing us to extend the analysis to the whole relevant $`\beta `$ range without an additional computer cost. Moreover, the basic idea of MFA is to consider the fermionic determinant (or its absolute value) as an observable. So $`det\mathrm{\Delta }`$ is not in the integration measure and one avoids, in principle, the problem of dealing with a complex quantity in the generation of configurations. We have seen, however, that an unaffordable (eventually exponential) statistics is necessary to calculate $`𝒵`$ in some $`\mu `$ range so that this advantage on direct simulation schemes can not, in general, be exploited. ## 4 Large quark mass results In this section we will present results in the large bare quark mass limit both in the strong and intermediate coupling QCD at finite density. We have performed simulations in a $`4^3\times 4`$ lattice ($`10`$ masses $`m_q=1.05.0`$) in the range of the chemical potential $`\mu [0.0,4.0]`$ and, for intermediate coupling, of $`\beta [4.0,6.0]`$. Some (low statistic) data for a $`6^3\times 4`$ lattice will also be shown. Firstly we have located the range $`(\mu _1,\mu _2)`$ (as a function of $`\beta `$ and $`m_q`$), in which the average (4) is, with available statistics, indistinguishable from zero (see fig. 2). In figure 4 we report the gap $`\mathrm{\Delta }\mu =\mu _2\mu _1`$ versus the quark mass computed fixing the gauge coupling value $`\beta =5.5`$, a value near to the temperature induced transition. From figure 4 it is evident the tendency of $`\mathrm{\Delta }\mu `$ to decrease as the quark mass $`m_q`$ is increased. We have calculated the partition function of QCD in the $`\beta \mu `$ plane. This calculation is in principle exact for $`\mu [\mu _1(\beta ),\mu _2(\beta )]`$ and is complemented with data of modulus QCD in the region where we are not able to measure the phase. All the data are presented in such a way to make evident when a possible contribution from the phase has been discarded. In figure 5-a we report $`N(\mu )/\mu `$ evaluated at $`m_q=1.5`$ and different values of the gauge coupling $`\beta `$. We have chosen two values of $`\beta `$: $`\beta =5.3`$ for which the system is inside the confining phase, and $`\beta =5.7`$ in the deconfined one. The figure shows a sharp peak for the derivative, moreover the position of the peak does not move with $`\beta `$. The same quantity for the larger lattice is reported in figure 5-b. We do not attempt to do a finite size scaling analisys but only note that the height of the peaks grows considerably suggesting a (saturation) transition at $`\mu =\mu _c^S`$ and independent from $`\beta `$. The same scenario holds at smaller $`m_q`$ with the only difference that the peaks becomes broader for masses smaller than $`1.0`$ signalling the well known phenomena of early onset in the number density , . 
These findings are to be compared with the one obtained analytically at infinite $`m_q`$ for which the infinite bare mass QCD partition function factorizes $$𝒵(\beta ,\mu )=(\beta ,\mu )𝒵_{PG}(\beta )𝒵(\beta =0,\mu )$$ (9) with $``$ a irrelevant factor in the zero temperature thermodynamical limit ($`1`$ for $`V=L^4\mathrm{}`$) and $`𝒵(\beta =0,\mu )`$ developing a first order saturation transition at $`T=0`$. Due to the independence of $`\mu _c^S`$ from $`\beta `$, we will investigate its dependence on the bare quark mass from the behavior of $`N(\mu )`$ at $`\beta =0`$, where we are able to compare with other results (analytical as well as numerical). In figure 6 we report $`\mu _c^S(m_q)`$: at $`m_q2`$ we approach a linear dependence for $`\mu _c^S`$ in good agreement with the numerical results by MDP while, at larger masses, $`\mu _c^S`$ coincide asymptotically with the large mass limit $`\mathrm{log}(2m_q)`$ (as well as with the $`1/3`$ of the nucleon mass ). Coming to the phase structure in the $`\beta \mu `$ plane it is evident that the critical line at constant $`\mu =\mu _c^S`$ can not be the only one. In order to separate the confined phase from the deconfined one we need a transition line that, starting at the zero density finite temperature critical point, eventually merges with the saturation transition at smaller values of $`\beta `$. This critical line is the relevant one from a physical point of view in the sense that here we can obtain finite quantities for physical observables when the lattice spacing goes to zero. In the infinite mass limit this line is vertical (at the critical coupling of the finite temperature pure gauge theory, see formula (9)) so we can expect that, for our large masses, it moves only slightly to smaller $`\beta `$. If this is the case it is crucial the possibility to rely on data at fixed $`\mu `$ and any $`\beta `$ to extract relevant informations. The saturation of number density is a pure lattice artifact (a saturated lattice corresponds, in the continuum limit, to a divergent number density). Therefore, in order to search for evidence of the transition expected on phenomenological grounds, it is necessary to restrict our analisys to values of $`\mu `$ where the discretization effects are smaller i.e. to a number density smaller than the lattice half filling value $`1/2`$. In this region of the parameter space the phase gives no contribution (see fig. 2) and we can rely on our results. We have studied the plaquette energy $`E(\beta ,\mu )`$, the Polyakov loop $`P(\beta ,\mu )`$ and the number density as a function of $`\beta `$. In figure 7-a,7-b we report $`E(\beta ,\mu )`$ and $`E(\beta ,\mu )/\beta `$, evaluated at bare quark mass $`m_q=1.8`$ and at different values of the chemical potential $`\mu <\mu _c^S`$. In fig. 7-a we can clearly see a rapid variation of the observable for all the values of $`\mu `$; for the $`\mu =0`$ curve this happens in correspondence with the pseudo-temperature transition of zero density full QCD. The critical gauge coupling moves to smaller $`\beta `$ as we increase $`\mu `$. This phenomenon is also evident as a sharp peak in the figure of the derivative (fig 7-b). It is tempting to interpret this as an evidence of a temperature induced phase transition extending at non zero values of $`\mu `$. Also the behavior of the Polyakov loop points in this direction: as can be seen in figure 8, $`P_\mu (\beta )`$ changes rapidly at values of $`\beta `$ consistent with the ones obtained from the energy. 
The number density gives a less clear signal since it is forced to be a constant function of $`\beta `$ at $`\mu =0`$. Nevertheless we can see in figure 9 that the plot of this observable is still consistent with previous findings for the gluonic quantities. It is useful to remark that plotting the same quantities at fixed $`\beta `$ as a function of $`\mu `$ we would be practically unable to see any signal. Signals for a developing discontinuity in all these observables rely on data in the region where the contribution of the phase is negligible but similar behavior is found at larger values of $`\mu `$ too (where we can only compute the observables of modulus QCD). All these findings are consistent with phenomenological expectations for the temperature-density QCD phase diagram where, increasing the baryonic density, the critical temperature of the deconfinement phase transition decreases. On the lattice this translates in a critical line that, starting at the zero density-finite temperature critical point, continues at smaller values of $`\beta `$ for finite $`\mu `$. To conclude our analysis we report in figure 10 the $`(\beta ,\mu )`$ phase diagram of the theory at $`m_q=1.8`$ and $`V=4^4`$. We can see two (critical) lines; the horizontal one is due to saturation, while the other should be the physical one. If the same scenario holds in the zero mass limit we can expect that, as the lattice spacing goes to zero, the latter critical line extends in all the scaling window eventually coinciding with the former in the zero temperature limit. ## 5 Conclusions In this paper we have studied Finite Density Lattice QCD by means of numerical simulations. As well known this approach, probably the only one able to tackle the non perturbative effects leading to quark-gluon plasma transition, suffers severe problems due to the lack of hermiticity of Dirac operator for a single realization of gauge fields. In the first part of the paper we have shown as, for small quark masses and strong coupling, any numerical algorithm based on the GCPF approach gives results different from what expected in the region where the contribution of the phase can not be evaluated. To our understanding only a statistic exponentially large with the system volume (and a consequently high accuracy in numerical calculations) can solve this problem. Moving to large quark mass region we meet a much better situation and a large part of the parameter space becomes accessible to numerical simulations. We get, independently from the gauge coupling $`\beta `$, a saturation transition at a chemical potential $`\mu _c^S`$ well compatible with the one predicted, in the strong coupling regime, by previous numerical and analytical analysis. The new result is the evidence of another transition line that connects the previous one to the second order critical point of the four flavor $`\mu =0`$ theory. This has to be regarded as the lattice counterpart of the transition line in the temperature-chemical potential plane that should separate the standard hadronic phase from the quark-gluon plasma phase. For the first time we have got some evidences that the behavior of finite density lattice QCD can be consistent with standard phenomenological expectations. Larger lattices could clarify the nature of the transitions but the volumes attainable with reasonable computer resources make this program not effective. 
To extend these results to the small quark mass region is impossible since the contribution of the phase to the partition function becomes not measurable practically in the whole parameter space. At the end we have to conclude that, until now, finite density lattice QCD, far from providing non perturbative quantitative insights in the behavior of quarks and gluons, can at most give us some qualitative indication. This work has been partly supported through a CICYT (Spain) - INFN (Italy) collaboration. A.G. was supported by a Istituto Nazionale di Fisica Nucleare fellowship at the University of Zaragoza. Figure Captions * Fig. 1: Free energy at $`\beta =0`$ and $`m_q=0.1`$ of true QCD in the MDP approach (dotted line) and of Modulus QCD (continuous line). * Fig. 2: Normalized difference between the free energy of Modulus QCD and the free energy of true QCD in the MDP approach (continuous line) superimposed to the expectation value of the determinant phase at $`\beta =0`$ and $`m_q=0.1`$. * Fig. 3: Distributions of the logarithm of the modulus (a) and of the phase (b) of the fermionic determinant in a $`6^3\times 4`$ lattice at $`\beta =0`$, $`m_q=0.1`$ and $`\mu =0.7`$ with $`N=2500`$ configurations. * Fig. 4: Width of the $`\mu `$ region $`(\mathrm{\Delta }\mu )`$ in which the QCD partition function fails to be positive versus the quark bare mass $`m_q`$ in a $`4^3\times 4`$ lattice at $`\beta =5.5`$. * Fig. 5: Derivative of the number density respect to the chemical potential in a $`4^3\times 4`$ (a) and $`6^3\times 4`$ (b) lattice at $`m_q=1.5`$ for $`\beta =5.3`$ (dashed line), and $`\beta =5.7`$ (continuous line). * Fig. 6: Saturation critical chemical potential $`\mu _c^S`$ versus the quark bare mass $`m_q`$ in a $`4^3\times 4`$ lattice at $`\beta =0`$. Dotted line is the large mass limit, the dashed one is the result of . * Fig. 7: Plaquette energy $`E(\beta ,\mu )`$ (a) and its derivative $`E(\beta ,\mu )/\beta `$ (b) evaluated in a $`4^3\times 4`$ lattice at $`m_q=1.8`$ for $`\mu =0.01.5`$ (from the right to the left) in steps of $`0.1`$. Dashed line are for $`\mu >\mu _1`$. * Fig. 8: Polyakov loop $`P(\beta ,\mu )`$ evaluated in a $`4^3\times 4`$ lattice at $`m_q=1.8`$ for $`\mu =0.01.5`$ (from the right to the left) in steps of $`0.1`$. Dashed line are for $`\mu >\mu _1`$. * Fig. 9: Number density evaluated in a $`4^3\times 4`$ lattice at $`m_q=1.8`$ for $`\mu =0.01.5`$ (from the right to the left) in steps of $`0.1`$. Dashed line are for $`\mu >\mu _1`$. * Fig. 10: Phase diagram for the $`4^3\times 4`$ lattice at $`m_q=1.8`$ in the $`(\beta ,\mu )`$ plane, the dotted line is for $`\mu >\mu _1`$.
# A scalable, tunable qubit, based on a clean DND or grain boundary D-D junction ## Abstract Unique properties of a ballistic DND or grain boundary D-D junction, including doubly degenerate ground state with tunable potential barrier between the ”up” and ”down” states and non-quantized spontaneous magnetic flux, make it a good candidate for a solid state qubit. The role of quantum ”spin” variable is played by the sign of equilibrium superconducting phase difference on the junction, which is revealed in the direction of spontaneous supercurrent flow in equilibrium. Possibilities of design-specific simultaneous operations with several integrated qubits are discussed. The pronounced shift from ”software” to ”hardware” in theoretical research on quantum computing (QC) is a good measure of growing confidence that QC can be realized on practically interesting scale (of at least $`10^3`$ qubits). Though first experimental realizations of QC used such technologies as NMR and ion trapping, the problem of scalability for such approaches still looks formidable. Therefore much effort is directed at search for a practical solid state qubit (SSQ), with natural candidates being such mesoscopic devices as quantum dots , mesoscopic Josephson junctions and superconducting single-electron transistors (parity switches) . The evident advantage of a SSQ is scalability, where all the potential of existing solid state technologies could be used, while among the problems the main are (1) to achieve quantum beatings between distinguishable states of a single qubit, (2) to prevent loss of coherence during calculations, and (3) to minimize statistical dispersion of the properties of individual qubits. The problems (1) and (2) pose specific difficulties in an SSQ due to huge number of degrees of freedom coupled to it, and to necessity to fine-tune two states of a mesoscopic system chosen as working ones to a resonance. A possibility to circumvent these obstacles is presented by Josephson systems with d-wave cuprates, which violate time-reversal symmetry and as a result have doubly degenerate groundstate with a potential for quantum beatings (macroscopic quantum tunneling), or at least quantum noise . Ioffe et al. recently incorporated this property in their ”quiet qubit” design, which uses tunneling (SID) or dirty SND junctions with equilibrium phase difference $`\phi _0=\pm \frac{\pi }{2}`$. Let us consider a device shown in Fig.1a. Its main part is a clean mesoscopic D-D junction (i.e. ballistic DND or D-(grain boundary)-D junction). Though we will concentrate on the DND qubit design, the same considerations are applicable, mutatis mutandis, to high quality grain-boundary D-D junctions, where recently an analogous current-phase dependence was observed. The grain boundary region where the superconducting gap is suppressed, is naturally modeled by a normal conductor with the same lattice parameters and chemical potential as in the superconducting banks, which only enhances the amplitude of purely Andreev scattering in the system. The terminal B of the junction is formed by a massive d-wave superconductor; in a multiple-qubits system, they will all use it as a common ”bus” bar. The terminal A is small enough to allow - when isolated - quantum phase fluctuations. It is essentially the sign of the superconducting phase difference $`\phi `$ between the terminals A and B that plays the role of ”spin variable” of quantum computing. 
The collapse of the wave function is achieved by connecting the terminal A with the external source of electrons (”ground”), thus blocking the phase fluctuations due to phase-number uncertainty relation. So far, the best way to do this is presented by using a ”parity key”, PK (superconducting single-electron transistor), which only passes Cooper pairs, and only at a certain gate voltage $`V_g`$. Other parity keys, with different parameters, are used to link adjacent qubits, allowing for controllable entanglement (Fig.1b). Such an architecture allows reasonably easy way to integrate a large number of qubits in a 1D or 2D matrix. We will see that it also provides a natural way of preparing all qubits on the same bus in the same initial quantum state, thus facilitating the implementation of error correction algorithms. The readout of the state of a qubit is simplified by the presence of small spontaneous, non-dissipative currents and spontaneous fluxes (of order $`10^210^3\mathrm{\Phi }_0`$ depending on the setup) concentrated in the central part of the DND junction, which have opposite directions in two degenerate equilibrium states. While too small to lead to unwanted inductive coupling between the qubits or decoherence, they can be still used to read out the state of the qubit once it was collapsed in one of the states with $`\pm \phi _0`$, e.g. using the magnetic force microscope tip M (which is removed during the computations). Collapsing and reading processes are thus time separated, and the computation results will be automatically preserved for the time limited only by the thermal fluctuations. A necessary condition for a qubit to work is $`t_{t(unneling)}<t_{g(ateapplication)}<t_{d(ecoherence)}`$. Here a clean DND junction has a very important advantage following from the fact that the absolute value of equilibrium phase difference, $`|\phi _0|`$, can now vary from $`0`$ to $`\pi `$ depending on the angle $`\mathrm{\Omega }`$ between the crystal axes of the d-wave superconductors and the ND boundary . In practice, since the lattice structure of the d-wave superconductors allows only a limited set of easy cleavage directions, e.g. $`\left(010\right)`$ and $`(100)`$, the equilibrium phase can be varied by preparing a steplike ND interface, with the equilibrium phase determined by the relative weight of Andreev zero- and $`\pi `$levels (coupled to the lobes of d-wave order parameter in A and B with the same or opposite sign respectively), produced by such an arrangement. Since the shape of the effective potential barrier between states with $`\pm \phi _0`$ depends on $`\phi _0`$, it can now be varied. Therefore the tunneling rate can be chosen in exponentially wide limits to achieve the optimal performance. (In an SND junction the equilibrium phase can also be varied, but it cannot be made less than $`\pi (\sqrt{2}1)/\sqrt{2}`$).) Besides, this allows to fix the working interval of the device to $`|\phi |\pi `$, since due to exponential dependence of the tunneling amplitude on the barrier action, for $`|\phi _0|<\pi /2`$ the tunneling to the states in the next cell, $`\phi _0\phi _0\pm 2\pi `$, can be completely neglected, whatever the inductance of the system. The supercurrent through the normal part of the system is carried by a set of Andreev levels formed by reflections at the ND boundaries . We will calculate it using the quasiclassical approach following from Eilenberger equations which is well suited to our problem. 
Here we can consider ”Andreev tubes” along the quasiparticle trajectories in the normal part of the system each carrying the supercurrent density which in our case should be written as $$𝐣(𝐫,𝐧)=j_c𝐧\underset{p=1}{\overset{\mathrm{}}{}}(1)^p\frac{[𝐫,𝐧]}{l_T}\frac{\mathrm{sin}p\phi [[𝐫,𝐧]]}{\mathrm{sinh}p[𝐫,𝐧]/l_T}e^{[𝐫,𝐧]/l_{imp}}.$$ (1) Here the critical current density $`j_c(ev_F\mathrm{\Pi })/(\lambda _FS)`$, $`S`$ and $`\mathrm{\Pi }`$ being area and half-perimeter of the system. Equation (1) is valid in the limit $`\xi _0S/\mathrm{\Pi }`$, the characteristic separation between the superconducting electrodes and is thus applicable to the case of superconducting cuprates. (The opposite limit of large coherence length is considered in .) The quantity $`[𝐫,𝐧]`$ is the length of the quasiclassical trajectory linking two superconductors which passes through the point $`𝐫`$ in the direction $`𝐧`$; $`\phi [[𝐫,𝐧]]`$ is the phase gain along this trajectory (in this paper we do not concern with the effects of current-induced field, and therefore in the absence of external fields this is simply the phase difference between the superconductors on the ends of the trajectory, including the extra $`\pi `$ if $`[𝐫,𝐧]`$ connects the lobe of the d-wave order parameter with opposite signs). The normal metal coherence length $`l_T=v_F/2\pi k_BT`$, and $`l_{imp}`$ takes into account effects of weak elastic scattering by nonmagnetic impurities (in the ballistic regime, by definition, $`l_{imp}S/\mathrm{\Pi }`$). We have used standard approximation of steplike behaviour of the order parameter at the ND boundary, and neglected the own magnetic field of the supercurrent. The total supercurrent density at a point r is thus given by $$𝐣(𝐫)=_0^\pi \frac{d\theta }{\pi }𝐣(𝐫,𝐧(\theta )).$$ (2) Calculating the total current flowing in A, we find in the limit $`l_{imp},l_T\mathrm{}`$ $$I(\phi )=\frac{2j_cW}{\pi }\left[\frac{1+Z(\mathrm{\Omega })}{2}F(\phi )+\frac{1Z(\mathrm{\Omega })}{2}F(\phi +\pi )\right],$$ (3) where $`F(\phi )`$ is the $`2\pi `$-periodic sawtooth of unit amplitude, and the imbalance factor $`Z(\mathrm{\Omega })`$ determines the equilibrium phase difference $$|\phi _0|=\left|\frac{1Z(\mathrm{\Omega })}{2}\right|\pi .$$ (4) In the setup of Fig.1a $$|\phi _0(\mathrm{\Omega })|=\frac{\mathrm{sin}|\mathrm{\Omega }|}{\sqrt{2}}\pi .$$ (5) The current-phase dependence (3) and corresponding Josephson energy $`E_J(\phi )=\frac{\mathrm{}}{2e}^\phi 𝑑\phi I(\phi )`$ are plotted in Fig.2, and current density distribution in the normal part of the system is shown in Fig.3. The vortex pattern is clearly seen. The spontaneous flux in the system is $$\mathrm{\Phi }_s\kappa (\mathrm{\Omega })\frac{j_c\mathrm{\Pi }^2}{c}\frac{\kappa (\mathrm{\Omega })eN_{}v_F}{c}\frac{1}{137}\frac{\kappa (\mathrm{\Omega })}{\pi }\frac{v_F}{c}N_{}\mathrm{\Phi }_0,$$ (6) where $`\kappa <1`$ is a geometry-dependent attenuation factor (e.g. in SND junctions $`\kappa =0`$ by symmetry if ND boundary is parallel to (100) or (010) ); $`N_{}\mathrm{\Pi }/\lambda _F`$ is the number of transport modes in the system. Let us make some estimates. Taking the size of the system $`10^3`$Å, $`v_F10^7`$cm/s, we find that $`\mathrm{\Phi }_s\kappa 10^3\mathrm{\Phi }_0`$. 
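A minimal numerical sketch of the current-phase relation (3) may help visualize the double-well structure (this is an illustration we add here, not part of the original derivation; we use units with $`2j_cW/\pi =1`$ and $`\mathrm{\hbar }/2e=1`$, and the value $`\mathrm{\Omega }=30^{\circ }`$ is chosen arbitrarily): the two sawtooth branches are superposed with the weights fixed by $`Z(\mathrm{\Omega })`$, the Josephson energy is obtained by integration, and its two degenerate minima reproduce $`\pm \phi _0`$ of Eq. (4).

```python
import numpy as np

def sawtooth(phi):
    """2*pi-periodic sawtooth of unit amplitude: F(phi) = phi/pi on (-pi, pi)."""
    return ((phi + np.pi) % (2.0 * np.pi) - np.pi) / np.pi

def current(phi, Z):
    """Current-phase relation of Eq. (3), in units of 2*jc*W/pi."""
    return 0.5 * (1.0 + Z) * sawtooth(phi) + 0.5 * (1.0 - Z) * sawtooth(phi + np.pi)

def josephson_energy(phi_grid, Z):
    """E_J(phi) as the integral of I(phi), with hbar/2e = 1 (arbitrary offset)."""
    I = current(phi_grid, Z)
    increments = 0.5 * (I[1:] + I[:-1]) * np.diff(phi_grid)
    return np.concatenate(([0.0], np.cumsum(increments)))

Omega = np.deg2rad(30.0)                                # illustrative angle
phi0_expected = np.pi * np.sin(Omega) / np.sqrt(2.0)    # Eq. (5)
Z = 1.0 - 2.0 * phi0_expected / np.pi                   # invert Eq. (4)

phi = np.linspace(-np.pi, np.pi, 4001)
E = josephson_energy(phi, Z)
phi0_numeric = abs(phi[np.argmin(E)])
barrier = E[np.argmin(np.abs(phi))] - E.min()
print(f"|phi_0|: expected {phi0_expected:.4f}, from energy minimum {phi0_numeric:.4f}")
print(f"barrier E_J(0) - E_J(phi_0) = {barrier:.4f}")
```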
The magnetic moment of the spontaneous current will be of order $`m_s\kappa (N_{}e\mathrm{\Pi }v_F)/c\kappa 10^5\mu _B.`$ The tunneling rate between the states is $`\mathrm{\Gamma }\omega _0\mathrm{exp}[U(0)/\mathrm{}\omega _0],`$ where the frequency of oscillations near $`\pm \phi _0`$ is $`\omega _0\sqrt{N_{}\overline{ϵ}ϵ_Q}/\mathrm{}`$ and the height of the potential barrier $`U(0)(\phi _0/2\pi )^2`$ . Due to spatial Andreev quantization, there are no elementary excitations in the normal part with the system with energies below $`\overline{ϵ}\mathrm{}v_F/2\mathrm{\Pi }10^{15}`$ erg. At temperatures below $`\overline{T}=\overline{ϵ}/k_B10`$K thermal excitations are frozen out, and dissipation can only be due to interlevel transitions generated by ac Josephson voltage generated by phase fluctuations. Therefore it will be absent if $$2e<V_J>\mathrm{}\sqrt{<\dot{\phi }^2>}\mathrm{}\omega _0<\overline{ϵ},$$ (7) which can be rewritten as $`ϵ_Q<\overline{ϵ}/N_{}`$ (where the charging energy $`ϵ_Q=2e^2/C,`$ and $`C`$ is the capacitance of the terminal A), or $`\omega _0<v_F/2\mathrm{\Pi }.`$ The latter condition is a physically clear requirement that the quantum oscillations of superconducting phase allow time for readjustment of Andreev levels in the system (which is indeed $`2\mathrm{\Pi }/v_F`$, the time necessary for the electron and Andreev reflected hole to travel across the system). Otherwise the coherent transport through the normal part of the system cannot be established, and dissipative currents flow instead. The maximum value of $`\omega _0`$ allowed by the above limitation is $`\omega _{\mathrm{max}}10^{12}\mathrm{s}^1`$. (That is, the capacitance of the terminal A cannot be lower than $`C_{\mathrm{min}}=2e^2N_{}/\overline{ϵ}10^{11}`$F.) The corresponding tunneling rate is $$\mathrm{\Gamma }_{\mathrm{max}}\omega _{\mathrm{max}}\mathrm{exp}[N_{}(\phi _0/2\pi )^2].$$ (8) Therefore we would require $`\phi _00.2\pi `$ to have tunneling rate in the 100 MHz region. For a system of integrated DND qubits of Fig.1b the bulk d-wave ”bus” provides, the possibility for operations performed over all the qubits simultaneously by creating a supercurrent flow along the bus. In particular, one can easily prepare the whole register of qubits in the same (up or down) state. This is an attractive property, e.g. for implementation of a quantum correction algorithm. If take the size of a unit qubit with its periphery as $`510^3`$Å, a 2D 100$`\times `$100-qubit block will occupy only an area of 50$`\times `$50 $`\mu `$m<sup>2</sup>, which is realistic to keep below the dephasing length due to thermal excitations. Application of quantum gates to individual qubits can be effected in various ways, lifting the degeneracy between up/down states either by directly applying localized magnetic field to a qubit (using a magnetic scanning tip), by creating local supercurrents in the bus, or by using laser beams with circular polarization. The entanglement of the states of adjacent qubits is achieved simply by opening a key between them for a certain time. In conclusion, we have suggested a new design for a solid state superconducting, scalable qubit. Besides using the degeneracy of the ground state, common to all D-D junctions, it strongly relies on unique properties of clean DND or grain boundary junctions: tunability of the equilibrium phase difference across the junction, and spontaneous currents and fluxes in equilibrium. 
The former makes it possible to optimize the design for the fastest possible tunneling rate (which is vital in order to beat the dephasing processes) and thus makes it easier to integrate qubits into a computer. The latter provides a simple way to manipulate and read out the state of a qubit. Our estimates show that there is a real chance of creating a working solid-state qubit along these lines using existing experimental capabilities. Acknowledgements: I am grateful to M. Beaudry, D. Bonn, S. Lacelle, P. Stamp and A.-M. Tremblay for helpful discussions and critical comments, and to the Dept. de physique, Université de Sherbrooke, for hospitality. This research was supported in part by CIAR.
# Metal-insulator transition in the one-dimensional Holstein model at half filling \[ ## Abstract We study the one-dimensional Holstein model with spin-1/2 electrons at half-filling. Ground state properties are calculated for long chains with great accuracy using the density matrix renormalization group method and extrapolated to the thermodynamic limit. We show that for small electron-phonon coupling or large phonon frequency, the insulating Peierls ground state predicted by mean-field theory is destroyed by quantum lattice fluctuations and that the system remains in a metallic phase with a non-degenerate ground state and power-law electronic and phononic correlations. When the electron-phonon coupling becomes large or the phonon frequency small, the system undergoes a transition to an insulating Peierls phase with a two-fold degenerate ground state, long-range charge-density-wave order, a dimerized lattice structure, and a gap in the electronic excitation spectrum. \] A long time ago Peierls suggested that a one-dimensional metal should exhibit an instability against a periodic lattice distortion of wave vector equal to twice the Fermi wave vector. Although this distortion increases the lattice elastic energy, it opens a gap in the electronic spectrum at the Fermi surface, lowering the electronic energy. Thus, the Peierls insulating state can be energetically favored over the metallic state. A wide range of quasi-one-dimensional materials, such as MX chains, charge-density-wave (CDW) compounds, conjugated polymers and charge-transfer salts, have electronic properties that are dominated or at least affected by the Peierls instability. These systems are often modeled by the one-dimensional Holstein model, the Su-Schrieffer-Heeger model or various spin-Peierls models. The Peierls instability is well understood in the static lattice (adiabatic) limit and within mean-field theory. An interesting and still controversial question is how the Peierls ground state is modified when quantum lattice fluctuations are taken into account. These quantum lattice fluctuations could have an important effect in most quasi-one-dimensional materials with a Peierls ground state because the lattice zero-point motion is often comparable to the amplitude of the Peierls distortion. Thus, this question has motivated several studies of quantum lattice fluctuation effects in the Holstein , Su-Schrieffer-Heeger and spin-Peierls models. In spinless fermion models and spin-Peierls models these studies have shown that the transition to a Peierls state occurs only when the electron-phonon coupling exceeds a finite critical value or when the phonon frequency drops below some finite threshold value. Thus, in these systems quantum lattice fluctuations destroy the Peierls instability for small electron-phonon coupling or large phonon frequency. In more realistic models with spin-1/2 electrons, however, previous studies have generally concluded that the ground state is a Peierls state for any finite electron-phonon coupling at finite phonon frequency, in qualitative agreement with mean-field theory. Here we consider the one-dimensional Holstein model with spin-1/2 electrons at half-filling. This model describes electrons coupled to dispersionless phonons, represented by local oscillators. 
It has as Hamiltonian $`H=\frac{1}{2M}\sum _ip_i^2+\frac{K}{2}\sum _iq_i^2-\alpha \sum _iq_i(n_i-1)`$ (1) $`-t\sum _{i\sigma }\left(c_{i\sigma }^{\dagger }c_{i+1\sigma }+c_{i+1\sigma }^{\dagger }c_{i\sigma }\right),`$ (2) where $`q_i`$ and $`p_i`$ are the position and momentum operators for a phonon mode at site $`i`$, $`c_{i,\sigma }^{\dagger }`$ and $`c_{i,\sigma }`$ are creation and annihilation operators for an electron of spin $`\sigma `$ on site $`i`$, and $`n_i=c_{i,\uparrow }^{\dagger }c_{i,\uparrow }+c_{i,\downarrow }^{\dagger }c_{i,\downarrow }`$. The half-filled band case corresponds to a density of one electron per site. At first sight, there are four parameters in this model: the oscillator mass $`M`$ and spring constant $`K`$, the electron-phonon coupling constant $`\alpha `$, and the electron hopping integral $`t`$. However, if phonon creation and annihilation operators are denoted by $`b_i^{\dagger }`$ and $`b_i`$, respectively, the Holstein Hamiltonian can be written (up to a constant term) $`H=\omega \sum _ib_i^{\dagger }b_i-\gamma \sum _i\left(b_i^{\dagger }+b_i\right)(n_i-1)`$ (3) $`-t\sum _{i\sigma }\left(c_{i+1\sigma }^{\dagger }c_{i\sigma }+c_{i\sigma }^{\dagger }c_{i+1\sigma }\right),`$ (4) where the phonon frequency is given by $`\omega ^2=K/M`$ (we set $`\mathrm{\hbar }=1`$) and a new electron-phonon coupling constant is defined by $`\gamma =\alpha a`$, with the range of zero-point phonon position fluctuations given by $`a^2=\omega /2K`$. We can set the parameters $`t`$ and $`a`$ equal to 1 by redefining the overall energy scale and the units of phonon displacements. Thus, the properties of the Holstein Hamiltonian (4) depend only on the two interaction parameters $`\omega `$ and $`\gamma `$. Mean-field theory predicts that the ground state of this model is a Peierls state for any non-zero electron-phonon coupling and $`\omega <\mathrm{\infty }`$. Early works based on strong-coupling perturbation theory and quantum Monte Carlo simulations, as well as variational calculations, seemed to support this point of view. However, the quantum Monte Carlo results were limited to small systems (up to 16 sites) and their interpretation relied on a questionable finite-size-scaling analysis. The strong-coupling perturbation theory is based on the formation of small bipolarons in the $`\gamma /\omega \rightarrow \mathrm{\infty }`$ limit, but it has been argued that, as the coupling $`\gamma `$ decreases, the bipolaron size becomes large and the strong-coupling picture breaks down. On the other hand, a functional integral calculation suggests that the transition occurs at finite electron-phonon coupling, but the accuracy of this approach is hard to estimate. Moreover, the static and dynamical properties of small clusters (up to six sites) show that there is a sharp crossover at a finite electron-phonon coupling from a quasi-free electron ground state to an ordered bipolaronic ground state, which can be seen as a precursor to the Peierls ground state of the infinite system. In this paper we discuss the ground state properties of the Holstein model of spin-1/2 electrons in the thermodynamic limit. We demonstrate that quantum lattice fluctuations suppress the Peierls instability for small electron-phonon coupling or large phonon frequency. In this regime the ground state is unique, gapless and shows only power-law correlations between electron positions and between phonon displacements.
This ground state is similar to the ground state of the non-interacting system ($`\gamma =0`$). When the electron-phonon coupling becomes large or the phonon frequency becomes small the system undergoes a transition to an insulating Peierls phase, which is qualitatively described by mean-field theory. In this regime the ground state is doubly degenerate, and there is a gap in the electronic spectrum, long-range CDW order and a dimerized lattice structure. Our results are based on density matrix renormalization group (DMRG) calculations. DMRG is as accurate as exact diagonalization on small systems but can be applied to much larger systems while maintaining very good precision. It has already been applied successfully to the study of the Peierls instability in quantum lattices with spinless fermion or spin degrees of freedom. There have not been any application of DMRG to models of spin-1/2 electrons coupled to phonons yet, because these systems are significantly harder to deal with due to the additional degrees of freedom and the larger amplitude of phonon displacements. For this work we have used an improved DMRG method for systems with boson degrees of freedom, which has been described in a previous work. With this approach both the error due to the necessary truncation of the phonon Hilbert space and the DMRG truncation error can be kept negligible. The accuracy of this new DMRG technique has been demonstrated by comparison with many numerical and analytical methods for the polaron problem (a single electron) in the one-dimensional Holstein model. The maximum number of density matrix eigenstates $`m`$ used in our calculations is 600, giving truncation errors from $`10^7`$ to $`10^{11}`$ depending on the system size and parameters. The error in the ground state energy is estimated to be smaller than $`10^5t`$. The actual number of phonon states kept for each local oscillator ranges from 8 to 32 depending on the electron-phonon coupling strength. We have studied open chains with an even number $`N`$ of sites (up to 100) and extrapolate results to the thermodynamic limit. Open boundary conditions are used because the DMRG method usually performs much better in this case than for periodic boundary conditions. In previous studies of the Peierls instability in the Holstein model the ground state symmetry was explicitly broken as in the mean-field and adiabatic approximations. Thus, the Peierls ground state was revealed by a lattice distortion (dimerization) $$q_i=(1)^im_p$$ (5) and a CDW $$n_i=1+(1)^im_e$$ (6) with $`m_e,m_p0`$, where $`\widehat{O}`$ means the ground state expectation value of operator $`\widehat{O}`$, and $`m_p`$ and $`m_e`$ are the phonon and electronic order parameter, respectively. For $`m_e,m_p0`$, the ground state was two-fold degenerate \[this degeneracy corresponds to the two possible phases of the oscillations (5) and (6)\]. Note that, as all eigenstates of the Holstein Hamiltonian satisfy $$q_i=\frac{\alpha }{K}(n_i1),$$ (7) the order parameters are related by $$m_p=\frac{\alpha }{K}m_e.$$ (8) Our DMRG method gives an excellent approximation to the exact ground state of the Holstein model on a lattice of finite size. It is known exactly that the ground state of the half-filled Holstein model on a finite lattice is unique for $`\omega 0`$, implying that there is no degenerate broken symmetry ground state at any finite electron-phonon coupling or non-zero phonon frequency. 
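As a small illustration of this phonon-basis truncation (a toy check added here for orientation, not the DMRG calculation itself), the atomic limit $`t=0`$ of Hamiltonian (4) can be solved exactly: on a site with a fixed electron number $`n`$ it reduces to a displaced oscillator with ground-state energy $`-\gamma ^2(n-1)^2/\omega `$, so one can see directly how many phonon states are needed before the truncated energy converges as the coupling grows.

```python
import numpy as np

def site_ground_energy(omega, gamma, n_elec, n_ph):
    """Lowest eigenvalue of H = omega*b^dag*b - gamma*(b^dag + b)*(n_elec - 1)
    on a single site, with the phonon Hilbert space truncated to n_ph states."""
    b = np.diag(np.sqrt(np.arange(1, n_ph)), k=1)   # phonon annihilation operator
    bdag = b.T
    H = omega * bdag @ b - gamma * (n_elec - 1) * (bdag + b)
    return np.linalg.eigvalsh(H)[0]

omega = 1.0
for gamma in (0.5, 1.0, 2.0):
    exact = -gamma**2 / omega      # empty (or doubly occupied) site: n_elec = 0 (2)
    print(f"gamma = {gamma}:  exact E0 = {exact:+.6f}")
    for n_ph in (4, 8, 16, 32):
        e = site_ground_energy(omega, gamma, n_elec=0, n_ph=n_ph)
        print(f"    n_ph = {n_ph:2d}   truncated E0 = {e:+.6f}")
```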
Instead, there is a quasi-degeneracy of the ground state when the electron-phonon coupling exceeds a finite critical value (this point will discussed in more detail later). Therefore, we always find $`q_i=0`$ and $`n_i=1`$ in our calculations. This property follows directly from the uniqueness of the ground state and the electron-hole symmetry, i.e., the invariance of the Hamiltonian (2) under the transformation $`c_{i\sigma }^{}(1)^ic_{i\sigma },q_iq_i.`$ (9) To observe the consequences of the Peierls instability we have to look at correlation functions. The most important ones for a Peierls state are the staggered charge density correlation function $$C_n(m)=(1)^m(n_in_{i+m}1)$$ (10) and the staggered phonon displacement correlation function $$C_q(m)=(1)^mq_iq_{i+m}.$$ (11) We have found that, for small electron-phonon coupling $`\gamma `$ or large phonon frequency $`\omega `$, both correlation functions decrease as a power-law $`m^\beta `$ with $`2\beta >0`$ as a function of the distance $`m`$. An example is shown in Fig. 1(a). As the electron-phonon coupling increases or the phonon frequency decreases, the exponent $`\beta `$ becomes smaller. For sufficiently large electron-phonon coupling or small phonon frequency the behavior of both correlation functions is completely different. As seen in Fig. 1(b), in this case both functions tend to finite values at large distances, showing the existence of long-range order. It is not always possible to determine the presence or absence of long-range order in the thermodynamic limit from the correlation functions of a finite chain. A better approach is to compute the electronic and phononic static staggered susceptibilities defined as $$\chi _e=\frac{1}{N}\underset{m}{}C_n(m)$$ (12) and $$\chi _p=\frac{1}{N}\underset{m}{}C_q(m),$$ (13) respectively. It is clear that both $`\chi _e`$ and $`\chi _p`$ vanish in the thermodynamic limit if there is no long-range order. For instance, both susceptibilities vanish as $`1/N`$ in the non-interacting limit ($`\gamma =0`$). In Fig. 2(a) we show both $`\chi _e`$ and $`\chi _p`$ as a function of the inverse chain length for a weak electron-phonon coupling. Both quantities clearly tend to zero in an infinite chain. Thus, we conclude that there is no long-range CDW order nor lattice distortion in the ground state of the Holstein model for the parameters ($`\gamma =0.4,\omega =1`$) used in this example. On the other hand, it is clear that $`\chi _e`$ and $`\chi _p`$ remain finite for $`N\mathrm{}`$ if there is long-range CDW order or a lattice dimerization, respectively. For instance, in the mean-field approximation, one finds $$\chi _e=m_e^2,\chi _p=m_p^2.$$ (14) Figure 2(b) shows $`\chi _e`$ and $`\chi _p`$ as a function of the inverse system size for a relatively strong electron-phonon coupling. In this case, both susceptibilities remain finite for $`N\mathrm{}`$ and thus, reveals the presence of a Peierls state with long-range CDW order and lattice dimerization for the parameters considered in this example ($`\gamma =1,\omega =1`$). Using Eqs. (8) and (14), one sees that $$\sqrt{\chi _p}=\frac{\alpha }{K}\sqrt{\chi _e}$$ (15) in the mean-field approximation. It is possible to demonstrate that this relation holds for the exact ground state in several special cases, such as the adiabatic limit ($`\omega 0`$) and the anti-adiabatic limit ($`\omega \mathrm{}`$). 
Although we cannot prove the validity of (15) for the general case, our numerical results show that it is always satisfied (within numerical errors) in an infinite system. This simply means that lattice dimerization and CDW are two inseparable features of the Peierls ground state. Therefore, we define a unique order parameter $`\mathrm{\Delta }`$ as $$\mathrm{\Delta }=\alpha \sqrt{\chi _p}=\frac{\alpha ^2}{K}\sqrt{\chi _e},$$ (16) where $`\chi _p`$ and $`\chi _e`$ are the infinite system extrapolations of the ground state susceptibilities (12) and (13) calculated from DMRG simulations. If the ground state of the Holstein model is a Peierls state, one has $`\mathrm{\Delta }>0`$, and otherwise $`\mathrm{\Delta }=0`$. Obviously, this definition of $`\mathrm{\Delta }`$ is just a generalization of the usual gap parameter of mean-field theory $`\mathrm{\Delta }_{MF}`$, which is related to the other mean-field order parameters $`m_e`$ and $`m_p`$ by $$\mathrm{\Delta }_{MF}=\alpha |m_p|=\frac{\alpha ^2}{K}|m_e|.$$ (17) In the mean-field approximation the Peierls distortion opens a gap $`2\mathrm{\Delta }_{MF}`$ in the electronic spectrum. It is sometimes assumed that this relation between Peierls gap and order parameters remains valid when quantum lattice fluctuations are taken into account. In such a case the exact Peierls gap would simply be given by $`2\mathrm{\Delta }`$. However, it is likely that the Peierls gap is reduced more strongly by the quantum lattice fluctuations than the dimerization or CDW amplitude is, and thus becomes smaller than the value $`2\mathrm{\Delta }`$ obtained from (16). Unfortunately, calculating the optical gap of the Holstein model with a DMRG method is not possible yet. To find how the appearance of the Peierls ground state correlates with a gap in the infinite system we have calculated the charge gaps $$E_{g1}=2(E_0(1)-E_0(0))$$ (18) and $$E_{g2}=E_0(2)-E_0(0),$$ (19) where $`E_0(x)`$ is the DMRG ground state energy with $`x`$ electrons added to ($`x>0`$) or removed from ($`x<0`$) the half-filled band. In these definitions we implicitly use the electron-hole symmetry of the model at half filling, which implies that $`E_0(-x)=E_0(x)`$. It should be noted that with these definitions the charge gaps incorporate lattice relaxation effects occurring when the band filling is modified. Therefore, $`E_{g1}`$ and $`E_{g2}`$ are not always equal to the optical gap of the system. $`E_{g1}`$ can be interpreted as the energy required to create a quasi-particle excitation made of an electron dressed by phonons. Similarly, $`2E_{g2}`$ represents the energy required to create a quasi-particle excitation which is a bound pair of electrons dressed by phonons, when such electron binding occurs ($`E_{g2}<E_{g1}`$). Otherwise, one expects $`E_{g2}\approx E_{g1}`$. Figures 3(a) and (b) show both gaps for several system sizes. If there is no long-range order ($`\mathrm{\Delta }=0`$) we find that the gaps extrapolate to zero in the limit $`N\to \infty `$ \[Fig. 3(a)\]. Therefore, we think that in this regime the system is still a metal, as in the non-interacting case ($`\gamma =0`$). However, if the ground state of the infinite system is a Peierls state ($`\mathrm{\Delta }>0`$), we find that both gaps extrapolate to a non-zero value in the thermodynamic limit \[Fig. 3(b)\]. For $`\gamma =1`$ and $`\omega =1`$, $`E_{g1}=0.82`$ and $`E_{g2}=0.18`$, which are much smaller than the value that one would anticipate from the amplitude of the Peierls distortion $`2\mathrm{\Delta }=2.5`$. 
For comparison, the mean-field result for the same parameters is $`2\mathrm{\Delta }_{MF}=3.1`$. This confirms that the quantum lattice fluctuations have a much stronger effect on the Peierls gap than on the amplitude of the Peierls distortion. Nevertheless, we have never found that either $`E_{g1}`$ or $`E_{g2}`$ vanishes for $`N\to \infty `$ in the Peierls ground state. In small clusters, a sharp drop of the Drude weight occurs simultaneously with the crossover to the ordered bipolaronic ground state. Therefore, the opening of the electronic gap always seems to accompany the appearance of long-range order in the ground state and we conclude that a Peierls ground state is always an insulator. We have also analyzed the scaling of the lowest excitation energies $`\epsilon _n=E_n-E_0`$ with the system size, where $`E_n`$ is the energy of the $`n`$-th lowest eigenstate of the Hamiltonian (4) at half filling. In the phase without long-range order we have found that the $`\epsilon _n`$ decrease as a power-law for increasing system size and vanish in the thermodynamic limit, as seen in Fig. 4(a). These results confirm that in this case the infinite system has a unique ground state but is gapless; there is a continuous band of excitations starting from the ground state, as expected for a metal. In the Peierls phase, the energy difference $`\epsilon _1`$ between the ground state and the first excited state is very small even in small chains and the other excited states have a much higher energy. Thus, the ground state appears almost degenerate in finite systems. Moreover, we observe completely different scalings for the $`\epsilon _n`$. Figure 4(b) shows that $`\epsilon _1`$ decreases exponentially with increasing system size, while the energy differences between the two lowest eigenstates and the higher excited states remain finite in the thermodynamic limit. This shows that the ground state of the Peierls phase is two-fold degenerate in the thermodynamic limit. We have also checked that the order parameter $`\mathrm{\Delta }`$ calculated for the first excited state tends to the same finite value as for the ground state in the thermodynamic limit. Therefore, both states are Peierls states with long-range CDW order and lattice dimerization, in qualitative agreement with mean-field predictions. The gap between the degenerate ground state and the other eigenstates also confirms the insulating nature of the system in the Peierls phase. Our results demonstrate that the ground state of the one-dimensional Holstein model for spin-1/2 electrons at half filling can be either a metallic state or an insulating Peierls state depending on the interaction parameters $`\gamma `$ and $`\omega `$. The system undergoes a quantum phase transition between the metallic phase and the Peierls insulating phase at finite critical values $`\gamma _c`$ and $`\omega _c`$. In this aspect, the Holstein model for spin-1/2 electrons is similar to spin-Peierls and spinless fermion models. Unfortunately, DMRG simulations become less accurate and harder to carry out in the vicinity of the transition while, at the same time, the finite-size-scaling analysis requires more accurate results and larger system sizes. Therefore, determining the critical values $`\gamma _c`$ and $`\omega _c`$ for which this metal-insulator transition occurs demands a substantial amount of computer time and we have not attempted to draw a phase diagram. 
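The two finite-size behaviors of $`\epsilon _1`$ described above (a power-law decrease in the metallic phase, an exponential decrease in the Peierls phase) can be distinguished with a simple fit. The sketch below uses made-up values as a stand-in for the DMRG data and compares the residuals of an exponential and a power-law model; it only illustrates the scaling analysis, not the actual data of this work.

```python
import numpy as np

# Hypothetical lowest excitation energies epsilon_1(N) (illustrative only).
N = np.array([8, 16, 24, 32, 48, 64], dtype=float)
eps = 0.5 * np.exp(-N / 10.0)        # here: an exponentially vanishing quasi-degeneracy

# Exponential hypothesis: log(eps) is linear in N.
a_e, b_e = np.polyfit(N, np.log(eps), 1)
res_exp = np.sum((np.log(eps) - (a_e * N + b_e))**2)

# Power-law hypothesis: log(eps) is linear in log(N).
a_p, b_p = np.polyfit(np.log(N), np.log(eps), 1)
res_pow = np.sum((np.log(eps) - (a_p * np.log(N) + b_p))**2)

print("exponential-fit residual:", res_exp)   # essentially zero for this data set
print("power-law-fit residual :", res_pow)    # much larger, so the exponential form wins
```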
Nevertheless, we can show the evolution of the order parameter $`\mathrm{\Delta }`$ as a function of the electron-phonon coupling $`\gamma `$ for $`\omega =1`$ in Fig. 5. We see that the transition to the Peierls state occurs around $`\gamma =0.8`$. This is in good agreement with calculations based on a functional integral approach, which predicts $`\gamma _c\approx 1`$ for a slightly larger phonon frequency $`\omega =1.1`$. As the adiabatic and anti-adiabatic limits are usually investigated for finite values of the electron-phonon coupling constant $`\lambda =\alpha ^2/2K`$ ($`=\gamma ^2/\omega `$ with our choice of units), we show $`\mathrm{\Delta }`$ as a function of the phonon frequency $`\omega `$ for a fixed value $`\lambda =0.64`$ in Fig. 6. One can see that our results converge to the exact adiabatic result for small $`\omega `$ and that the transition from the Peierls phase to the metallic phase occurs around $`\omega =1`$. In summary, we have studied the ground state properties of the one-dimensional Holstein model for spin-1/2 electrons at half filling using DMRG. We have shown that this system undergoes a transition from a metallic phase to an insulating Peierls phase at finite values of the electron-phonon coupling and of the phonon frequency. ###### Acknowledgements. We thank S. Moukouri and I. Peschel for helpful discussions. E.J. thanks the Institute for Theoretical Physics of the University of Fribourg, Switzerland, for its kind hospitality during the preparation of this manuscript. S.R.W. acknowledges support from the NSF under Grant No. DMR-98-70930, and from the University of California through the Campus Laboratory Collaborations Program.
# Towards an effective potential for the monomer, dimer, hexamer, solid and liquid forms of hydrogen fluoride ## I Introduction Hydrogen fluoride (HF) is one of the simplest molecules capable of forming hydrogen bonds. Despite the simplicity of the compound, the theoretical description of its behavior, especially in the liquid phase, is still far from being fully satisfactory. Much relevant theoretical work has concentrated on the problem of determining a potential model suitable for computer simulations. For this purpose a model is needed that reproduces correctly the main features of the real interaction potential and which is simple enough to be computed efficiently. In the last two decades several empirical potentials have been developed for Molecular Dynamics (MD) or Monte Carlo (MC) simulations of liquid HF. Of these models, only the three-site ones are able to reproduce correctly the dipole and quadrupole moments of the monomer. These models represent the charge distribution of each monomer by fractional charges placed at three sites on the molecular axis, two on the F and H nuclei and the third one at an appropriate position X along the F-H bond. A first three-site model (called HF3) was developed by Klein and McDonald by fitting ab initio results for the potential energy surface of the (HF)<sub>2</sub> dimer. Cournoyer and Jorgensen proposed a second three-site model (called HFC), with a simplified non-Coulombic part consisting of a single Lennard-Jones interaction between the fluorines. The parameters of the model were fitted directly to experimental thermodynamic data for the liquid, while simultaneously providing a reasonable equilibrium geometry of the dimer. Recently, Jedlovszky and Vallauri presented two further models, hereafter referred to as HF-JV1 and HF-JV2, respectively. HF-JV1 is a variant of HFC, with charges reproducing both the monomer dipole and quadrupole and with an accurate treatment of the long range part of the Coulombic interactions, neglected in the original work on HFC. The HF-JV2 model includes molecular polarizability, by adding an induced point dipole moment at each F site, while keeping the charge distribution of HF-JV1 unchanged. The scalar polarizability of the molecules was set equal to its experimental value, while the two parameters of the Lennard-Jones interaction between F atoms were fitted to the experimental values of the liquid density and internal energy. Unfortunately, none of the available models is able to reproduce, in a fully satisfactory way, both the thermodynamics and the structure of liquid and gaseous HF. In the search for a new potential, the essential ingredients can be identified by reviewing the known properties of the gas, liquid and solid phases of HF. The simplest associated form of HF is the gas phase dimer, (HF)<sub>2</sub>, whose structure and rovibrational spectrum were first characterized by the microwave spectra of Dyke, Howard and Klemperer. The equilibrium configuration of the isolated dimer F-H<sup>′</sup>⋯F-H<sup>′′</sup> is planar but bent, as a consequence of a competition between dipolar and quadrupolar electrostatic interactions (whenever ambiguity is possible, H<sup>′</sup> denotes the hydrogen atom involved in the hydrogen bond, while H<sup>′′</sup> is the other one). The atoms F-H<sup>′</sup>⋯F form a nearly linear arrangement, with H<sup>′</sup> placed slightly off the FF axis, and with a distance between the hydrogen-bonded fluorines $`r_{\mathrm{FF}}\approx 2.7`$ Å. 
The second hydrogen atom forms an angle $`\widehat{\mathrm{FFH}^{\prime \prime }}115^{}`$ and the H–F bonds are slightly stretched with respect to the monomer. The (HF)<sub>2</sub> bent dimer appears to be the basic structural motif of all the best known associated forms of HF, namely the gas phase (HF)<sub>6</sub> cyclic hexamer, the low temperature crystal and even the liquid. Pairs of adjacent HF units in the hexamer exhibit a bent arrangement similar to that of the dimer. The same structure is found in crystalline deuterium fluoride (DF) which is made up of infinite zig-zag chains of DF units. Interchain F–F distances ($`3.2`$ Å) are much larger than intrachain F–F distances ($`2.5`$ Å), an indication that interchain interactions are weaker than the interactions between adjacent HF units within the chains. The very small entropy difference between the solid and the liquid suggests that the liquid is largely associated. Best fit analysis of dielectric constant data and of Raman and IR spectra support this conclusion and indicate that acyclic zig-zag chains are abundant in the liquid. Finally, the neutron scattering measurements of the radial correlation function of liquid DF yield H–F and F–F average neighbor distances consistent with a most probable local structure similar to that of the other forms of HF. Since all associated forms of HF share the (HF)<sub>2</sub> dimer as a common structural unit, any satisfactory potential model should reproduce the main features of the (HF)<sub>2</sub> potential energy surface. Ab initio quantum mechanical calculations have contributed significantly to clarify the interactions present in the isolated dimer (HF)$`_2,`$ by determining accurately its equilibrium properties and by mapping out the entire potential energy surface. Much of this work has been directed to obtain analytical models for the potential surface, usually by fitting the numerical results to a chosen functional form. The analytical expressions of these models, which have been developed mainly to study the rovibrational states of (HF)<sub>2</sub>, are usually very complex and therefore not suitable for simple MD or MC simulations. Fortunately, as already mentioned, there is evidence that the main features of the real molecular interactions may be approximately reproduced by much simpler empirical models with effective potentials. In fact, it has been long known that the bent structure of the (HF)<sub>2</sub> dimer is dominated by the classical two-body electrostatic interactions between the permanent multipole moments of the monomer (mainly the dipolar and quadrupolar terms). The incorporation of many-body polarization effects gives rather small refinements on the predicted equilibrium angles, a finding consistent with the experimental observation that the dipole moment of the dimer is only weakly enhanced relative to that of the isolated monomer. The effects of the environment on the HF molecules do not seem to be large. However, a comparison of the structures in the sequence “monomer $``$ dimer $``$ hexamer $``$ liquid $``$ solid” (as shown by Tables II and IV below) evidences a tendency in which a larger degree of association implies a small increase of the H–F bond distance, together with a much larger decrease of the F–F distance (from 2.8 Å in the gas phase dimer to 2.49 Å in the solid). 
Unfortunately, as stressed by Röthlisberger and Parrinello, the available three-site models refer to rigid molecules and, consequently, they cannot reproduce the relaxation of the interatomic distances in going from the gas phase dimer to the condensed phases. The previous observations suggest a relatively clear picture of the ingredients required for a satisfactory potential model for HF. First, a charge distribution is required which approximately reproduces the first few multipolar moments of the HF molecule. Second, the H–F bond cannot be considered as rigid and intra-molecular interactions must be included to allow for the observed variations of the H–F and F–F distances in the various aggregation forms of HF. Finally, further atom–atom interactions must be introduced to model the remaining non-Coulombic intermolecular forces. Having all this in mind, we have tried to construct a new three-site model by simultaneously using data on the gas, liquid and solid phases. For this purpose: (a) the molecules are not rigid, but the H–F bond length can vary, and, (b) the parameters are fitted to theoretical and experimental data, including the ab initio structure of the HF dimer, the room temperature density of liquid DF and the experimental structure of solid DF at 4 K. For the structure of the dimer we have used the ab initio (HF)<sub>2</sub> potential energy surface developed by Bunker, Jensen, Karpfen, Kofranek and Lishka (BJKKL). The BJKKL calculations, in which the potential energy of the (HF)<sub>2</sub> complex has been determined for over 1000 different configurations, represent the most complete and accurate scan of the energy surface to date. These quantum mechanical results are in excellent agreement with the experiments on gas phase (HF)<sub>2</sub>. The decision to add to the fit some data on solid and liquid HF (more precisely, DF) has been taken because of the partial failure of our preliminary models fitted only to the ab initio surface. These models reproduced well the zig-zag (HF) chains characteristic of the crystal, but gave totally wrong inter-chain distances and, furthermore, did not agree with the experimental density of the liquid. This behavior was to be expected. In fact, the (HF)<sub>2</sub> potential surface only accounts for the basic HF–HF interactions within a chain, and obviously exclude the weak long-range interactions responsible for the distances between different (HF) chains. Since the density of the liquid is affected by interactions between distant pairs of HF molecules in relative orientations which are not sampled in the solid, it is also understandable that only by fine-tuning the long range potential it has been possible to reproduce the experimental density of the liquid. Finally, it must be mentioned that the addition of data on solid and liquid HF to the fit gave only a small deterioration of the agreement with the ab initio data on the dimer. This fact confirms that solid and liquid data add information on regions of the potential surface that are not sampled by the dimer. 
## II Methods and Calculations ### A Potential model The potential model is represented by intra- and inter-molecular parts: $$V_{\text{HF}}^{\mathrm{intra}}(r_{\text{HF}})=D_e\left\{1-\mathrm{exp}[-\alpha (r_{\text{HF}}-r_e)]\right\}^2,$$ (1) $$V_{AB}^{\mathrm{inter}}=V_{AB}^{\mathrm{Coul}}+V_{AB}^{\mathrm{non}\mathrm{Coul}},$$ (2) $$V_{AB}^{\mathrm{Coul}}=\sum _{i\in A}\sum _{j\in B}\frac{q_iq_j}{r_{ij}},$$ (3) $$V_{AB}^{\mathrm{non}\mathrm{Coul}}=A_{\mathrm{FF}}\mathrm{exp}(-B_{\mathrm{FF}}r_{\mathrm{FF}})-C_{\mathrm{FF}}r_{\mathrm{FF}}^{-6}.$$ (4) BJKKL fitted the intra-molecular part of their own ab initio surface with a Morse potential, eq. (1). Here $`r_e`$ represents the equilibrium H-F distance, $`D_e`$ the dissociation energy and $`\alpha `$ is an effective range parameter. Because of their simplicity and accuracy, the BJKKL functional form and parameter values are adopted in this paper. The Coulombic interactions between molecules $`A`$ and $`B`$ are modeled through three point charges for each HF monomer, two at the nuclear positions $`𝑹_\mathrm{H}`$ and $`𝑹_\mathrm{F}`$, and the third at a position $`𝑹_\mathrm{X}`$ along the H–F bond. In eq. (3) $`q_i`$ and $`q_j`$ are the fractional charges on the $`i`$th site of molecule $`A`$ and $`j`$th site of molecule $`B,`$ respectively; $`r_{ij}`$ is the distance between these sites. The motion of the site X is constrained so that it remains at the same relative position along the bond, $$𝑹_\mathrm{X}=\beta 𝑹_\mathrm{F}+(1-\beta )𝑹_\mathrm{H},$$ (5) where $`\beta `$ is an adjustable parameter between 0 and 1. The charges are $`+q`$ at both the H and F nuclei, and $`-2q`$ at the third site to preserve neutrality. This three-site charge model is related to that of Refs. . By allowing for changes in the H–F bond length, the present model effectively accounts for a part of the polarization effects. For solid and liquid DF Ewald’s method has been used to ensure complete convergence of the Coulombic interactions (which have an infinite range). The remaining non-Coulombic part of the inter-molecular potential is represented in a simplified way, using only a Buckingham “exp-6” atom-atom interaction between the fluorines, eq. (4). This term is meant to represent the interactions between the electronic clouds around two far away atoms. Since the hydrogens in HF are essentially bare nuclei, it makes good physical sense to avoid atom-atom interactions involving them. As a matter of fact, no improvement in the quality of the fit is found by adding similar H–H and H–F non-Coulombic interactions. A rather well defined hierarchy of interactions may be identified in the chosen potential model. The length of the HF monomer is solely determined by the intra-molecular potential. The structure of the dimer is also influenced by the position of the charge site X and by the F–F equilibrium distance, i.e. by the position of the minimum of the exp-6 interaction between the fluorines. Finally, the charge and the remaining properties of the exp-6 model, mainly the strength of the long range attractive term $`C_{\mathrm{FF}}r_{\mathrm{FF}}^{-6}`$, affect the structure and density of solid and liquid HF. The presence of this hierarchy implies that small changes in the charge and in the long range attraction can be compensated by the remaining parameters to maintain the correct monomer and dimer structures. 
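To make the structure of Eqs. (1)–(5) concrete, the following sketch evaluates the model energy of a pair of HF molecules. Every parameter value in it is a placeholder standing in for the fitted values of Table I (not reproduced here), and the hand-placed geometry is arbitrary, so the printed number is purely illustrative.

```python
import numpy as np

KCAL_COUL = 332.06   # e^2/angstrom expressed in kcal/mol (Coulomb conversion factor)

# Placeholder parameters (the actual fitted values are listed in Table I).
D_e, alpha, r_e = 141.0, 2.2, 0.917       # Morse part: kcal/mol, 1/angstrom, angstrom
q, beta = 0.59, 0.83                      # site charge (e) and X-site position parameter
A_FF, B_FF, C_FF = 5.0e4, 4.0, 1.5e3      # exp-6 F-F term: kcal/mol, 1/angstrom, kcal*A^6/mol

def morse(r_hf):
    """Intra-molecular H-F energy, Eq. (1)."""
    return D_e * (1.0 - np.exp(-alpha * (r_hf - r_e)))**2

def sites(R_F, R_H):
    """Positions and charges of the three sites F, H and X of one molecule, Eq. (5)."""
    R_X = beta * R_F + (1.0 - beta) * R_H
    return [(R_F, q), (R_H, q), (R_X, -2.0 * q)]

def pair_energy(molA, molB):
    """Inter-molecular energy of two molecules, Eqs. (2)-(4)."""
    (F_A, H_A), (F_B, H_B) = molA, molB
    e_coul = sum(KCAL_COUL * qi * qj / np.linalg.norm(ri - rj)
                 for ri, qi in sites(F_A, H_A) for rj, qj in sites(F_B, H_B))
    r_ff = np.linalg.norm(F_A - F_B)
    e_buck = A_FF * np.exp(-B_FF * r_ff) - C_FF / r_ff**6
    return e_coul + e_buck

# A rough, hand-placed dimer-like configuration (angstrom): (F position, H position).
molA = (np.array([0.00, 0.0, 0.0]), np.array([0.92, 0.00, 0.0]))
molB = (np.array([2.75, 0.0, 0.0]), np.array([3.15, 0.85, 0.0]))

E = (morse(np.linalg.norm(molA[1] - molA[0]))
     + morse(np.linalg.norm(molB[1] - molB[0]))
     + pair_energy(molA, molB))
print("total model energy of the pair (illustrative):", round(E, 3), "kcal/mol")
```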
### B Potential optimization The potential model contains five adjustable parameters, $`\beta `$, $`q`$, $`A_{\mathrm{FF}}`$, $`B_{\mathrm{FF}}`$ and $`C_{\mathrm{FF}}`$, and three parameters fixed at the ab initio values, $`r_e`$, $`\alpha `$ and $`D_e`$. In a first series of attempts, we fitted the present model (as well as other preliminary ones) only to the ab initio potential energy of the dimer. The $`\chi ^2`$ deviation between the model and the ab initio surface was computed, for each given combination of parameter values, with the same relative weights used by BJKKL. The $`\chi ^2`$ was minimized by searching the parameter space with the Nelder-Mead simplex method. The resulting potential was then tested by computing some properties of the other associated forms of HF. In particular, the liquid phase was studied by isothermal-isobaric MD simulations, as described in the next Subsection. As discussed in the introduction, these preliminary models fitted only to the ab initio surface gave unsatisfactory results, so that it was decided to add more data to the fit. For this purpose, the equilibrium geometries at $`T=0`$ K of the HF dimer, hexamer and crystal were determined as a function of the potential parameters by minimizing, with the WMIN program, the total potential energy with respect to the structural parameters. The deviations of the calculated geometries from the ab initio dimer structure and from the experimental DF crystal structure at 4 K are then added to $`\chi ^2`$, with weights subjectively chosen to make the contributions from the surface, the dimer and the crystal roughly equal. Since the new set of parameters, although more satisfactory, still did not give the correct density of the liquid, it became necessary to tune the long range interactions. In a further set of minimization runs, a range of $`q`$ and $`C_{\mathrm{FF}}`$ values was searched, again with the Nelder-Mead method. The three remaining parameters $`\beta `$, $`A_{\mathrm{FF}}`$ and $`B_{\mathrm{FF}}`$ were determined as a function of $`q`$ and $`C_{\mathrm{FF}}`$ by fitting the ab initio and crystal data. Each complete set of five parameters was then used in a short MD simulation to determine the equilibrium density of the liquid at 293 K and to add to the $`\chi ^2`$ the deviation from the relevant experimental value. This method, which involves two nested fit procedures, was found to be reasonably efficient and converged to a minimum in about fifty cycles. The main problem encountered in the fit was the noise in the computed liquid density due to insufficient MD equilibration with our computer time constraints. To reduce this noise, the MD equilibration was done in parallel with the potential optimization, by accepting after successful $`\chi ^2`$ minimization cycles the final configuration of the MD run as the initial configuration for the next run. With this strategy the current MD configuration was always the one with the best potential parameters so far. Since the parameters change rather slowly, the simulated system tended to remain close to equilibrium. No structural data on the liquid, besides the density, have been included in the fit. This rather drastic choice avoids the repeated calculation of equilibrated radial correlation functions, which would have required even longer MD runs than those needed for equilibrated densities. 
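The two nested fit procedures can be mimicked with a schematic script. In the sketch below both objective functions are toys: the inner one stands in for the fit of $`\beta `$, $`A_{\mathrm{FF}}`$ and $`B_{\mathrm{FF}}`$ to the ab initio surface and the crystal structure at fixed ($`q`$, $`C_{\mathrm{FF}}`$), and the outer “density penalty” stands in for the MD estimate of the liquid density; only the nesting of the two Nelder-Mead searches reflects the strategy actually used.

```python
import numpy as np
from scipy.optimize import minimize

def inner_chi2(inner_params, q, C_FF):
    """Toy stand-in for fitting (beta, A_FF, B_FF) to the ab initio surface
    and to the crystal structure at fixed (q, C_FF)."""
    beta, A_FF, B_FF = inner_params
    return (beta - 0.5 * q)**2 + (A_FF - 10.0 * C_FF)**2 + (B_FF - 4.0)**2

def outer_chi2(outer_params):
    """Outer objective: optimize the inner parameters for each (q, C_FF),
    then add a toy penalty playing the role of the MD liquid-density deviation."""
    q, C_FF = outer_params
    inner = minimize(inner_chi2, x0=[0.3, 10.0, 3.0], args=(q, C_FF),
                     method="Nelder-Mead")
    density_penalty = (q * C_FF - 0.6)**2   # placeholder for |rho_MD - rho_exp|
    return inner.fun + density_penalty

best = minimize(outer_chi2, x0=[0.5, 1.0], method="Nelder-Mead")
print("outer optimum (q, C_FF):", best.x, " total chi2:", best.fun)
```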
As a technical detail, it must be noticed that no attempt was made to embed the potential surface calculation, MD simulation, dimer and crystal energy minimizations and the two nested Nelder-Mead procedures into a single monolithic program, which would have been unmanageably complex. Separate programs, calling each other as distinct processes at the operating system level, were used instead. The two Nelder-Mead procedures, in particular, were actually a single program invoking a second copy of itself. No special changes were required for the surface, MD and energy minimization programs. The optimal set of parameter values is shown in Tab. I and represents a compromise among the best results that can be obtained separately for the dimer, the crystal and the liquid. As usual, no special physical significance should be attributed to the potential parameters. In fact, because of the possible compensation among different terms in the potential model, alternative slightly different sets of parameters might have been used. The model may be simply regarded as a tool to reproduce the observed data and predict new results. ### C Molecular Dynamics The MD calculations employed 500 deuterium fluoride (DF) molecules, in a cube with periodic boundary conditions, and, using Andersen’s isothermal-isobaric (NPT) method, simulated a liquid sample in contact with a heat bath and subject to a hydrostatic pressure of 1 atm ($`10^{-4}`$ GPa). The simulated liquid was obtained by melting and equilibrating at 293 K an initial crystalline configuration. The behavior of the system as a function of temperature was then determined by raising or lowering the bath temperature in steps of $`10`$ K. Each temperature was maintained for at least 5 ps for equilibration and a further 5 ps for analysis. The equations of motion were integrated using the velocity Verlet algorithm, with a time step of 0.25 fs. As previously described, each DF molecule carried three point charges, at D, F and X sites. The method of Ciccotti, Ferrario and Ryckaert for linear constraints has been used to maintain each massless X charge at a fixed fraction $`\beta `$ of the DF bond. ## III Results The most important properties calculated with the present potential model for the monomer, dimer, planar (HF)<sub>n</sub> rings, hexamer, crystal and liquid forms of HF (or DF) are compared in Tables II, III and IV with the available experimental and ab initio data. As described in section II C, the properties of the liquid have been determined through MD calculations, while the equilibrium geometries at 0 K of the other forms of HF are found by minimizing the potential energy. ### A Monomer, dimer and cyclic polymers of HF The excellent results (Tab. II) for the equilibrium bond length, dissociation energy and spectroscopic parameters of the monomer, which all depend only on the parameters fixed to the BJKKL values, indicate that the Morse model accurately reproduces the main features of the true intra-molecular potential. The spectroscopic parameters $`\nu _e`$ (harmonic frequency) and $`x_e`$ (anharmonicity constant) for the energy levels of a Morse oscillator, $`E_n=h\nu _e[(n+\frac{1}{2})-x_e(n+\frac{1}{2})^2]`$, are directly obtained from $`D_e`$ and $`\alpha `$ through $`\nu _e=\alpha \sqrt{D_e/\mu }`$ and $`x_e=h\nu _e/4D_e`$, where $`\mu `$ is the reduced HF mass. The multipole moments computed from our three-charges model (taking the molecular center of mass as origin) are very close to the experimental and ab initio moments of the monomer. 
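As an illustration of how such moments follow from the three-site charge distribution, the sketch below computes the dipole and the traceless quadrupole of a single molecule from point charges placed on the molecular axis, taking the center of mass as origin. The values of $`q`$, $`\beta `$ and the bond length used here are placeholders rather than the fitted parameters of Table I, and the unit conversion factors are approximate.

```python
# Placeholder charge-model parameters (the fitted values are those of Table I).
q, beta = 0.59, 0.83
r_HF = 0.917                      # monomer bond length in angstrom (approximate)

# Site coordinates along the molecular axis, with F at the origin.
z_F, z_H = 0.0, r_HF
z_X = beta * z_F + (1.0 - beta) * z_H
charges = [(z_F, q), (z_H, q), (z_X, -2.0 * q)]

# Shift the origin to the center of mass (m_F ~ 19, m_H ~ 1 atomic mass units).
z_com = (19.0 * z_F + 1.0 * z_H) / 20.0
charges = [(z - z_com, qi) for z, qi in charges]

# For charges on the z axis, r_i^2 = z_i^2, so the traceless Theta_zz = sum q_i z_i^2.
dipole = sum(qi * z for z, qi in charges)        # in e*angstrom
theta_zz = sum(qi * z**2 for z, qi in charges)   # in e*angstrom^2

# 1 e*angstrom ~ 4.803 debye; 1 e*angstrom^2 ~ 4.803e-26 esu cm^2 (approximate factors).
print("dipole   :", round(dipole * 4.803, 3), "D (illustrative)")
print("Theta_zz :", round(theta_zz, 4), "e*angstrom^2 (illustrative)")
```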
The dipole and quadrupole values follow the trend of the ab initio results and therefore are slightly underestimated with respect to the experimental data. The octupole and hexadecupole moments are also in reasonable agreement with the ab initio calculations. This overall agreement is an indication of a good match between the electrostatic interactions in real HF and in the model. The computed minimum energy structure of the dimer (Tab. II) compares well with the experimental and ab initio data. The experimental F-F distance is excellently reproduced, as well as both the angles $`\widehat{\mathrm{H}^{}\mathrm{FF}}`$ and $`\widehat{\mathrm{FFH}^{\prime \prime }}`$ of the bent equilibrium configuration. Moreover, the increased length of the HF molecule in going from the monomer to the dimer, and the slight length difference between the two HF intramolecular bonds, are well predicted. Since these length changes were the primary reason for allowing non-rigid molecules, such a behavior must be considered very satisfactory. Unfortunately, in spite of the excellent dimer geometry, the dimerization energy is clearly underestimated. This drawback was not completely unexpected. In fact, in real HF and in quantum mechanical models dimerization is accompanied by hydrogen bonding, which is not explicitly incorporated in the present classical treatment. As a further test of the potential, we have computed the equilibrium geometry of the planar (HF)<sub>n</sub> rings, with $`n=2,3\mathrm{}8`$. As shown in Tab. III, the structural parameter computed for planar rings, with $`C_{nh}`$ symmetry, are in good agreement with the available ab initio results. Though the model systematically underestimates the ab initio binding energies, it nevertheless reproduces correctly the relative stability of the different structures. The smallest ring, which is the cyclic dimer, has a binding energy of $`\mathrm{\Delta }E=3.18`$ kcal/mole, and is thus substantially less stable than the bent dimer (Tab. II). For cyclic (HF)<sub>3</sub> the binding energy for each hydrogen bond, $`\mathrm{\Delta }E/3`$, is slightly less that the binding energy of the bent dimer. The bond stabilization energy, $`\mathrm{\Delta }E/n`$, increases for larger rings, up to the hexamer, and then decreases again. The particularly favorable stability of the (HF)<sub>n</sub> rings with $`n6`$ is readily understood by noticing that for these rings the $`\widehat{\mathrm{H}^{}\mathrm{FF}}`$ and $`\widehat{\mathrm{FFH}^{\prime \prime }}`$ angles (Tab. III) are close to those of the bent dimer (Tab. II). Planar (HF)<sub>n</sub> rings must satisfy the geometric constraint $`\widehat{\mathrm{FFH}^{\prime \prime }}\widehat{\mathrm{H}^{}\mathrm{FF}}=\alpha _n`$, where $`\alpha _n=180^{}360^{}/n`$ is the inner angle of the $`n`$-sides regular polygon. The hexamer, for which $`\alpha _n=120^{}`$, can be obtained by joining essentially undeformed bent dimers, and is the most stable structure. The hexamer can be stabilized even further by allowing for non planar structures. We find that the stablest structure has a non-symmetric “chair” shape, with average bond lengths and angles (Tab. II) which compare well with the available experiments and which are almost identical to those found in the dimer. A “boat” structure slightly above in energy, at $`4.96`$ kcal/mole, is also found, with lengths and angles close to those of the “chair” structure. 
### B Crystal and liquid DF Low temperature crystals of DF are orthorhombic, space group $`Cmc2_1`$ ($`C_{2v}^{12}`$), with four molecules per unit cell on the $`\sigma _v`$ plane sites. The minimum energy structure computed for the DF crystal (see Tab. IV) is close to the experimental structure at low temperature. The discrepancies in the lengths of the cell axes partially compensate each other, yielding a density only slightly smaller than the experimental one. The increased H-F intra-molecular lengths with respect to both monomer and dimer, and the F-F distance smaller than in the dimer, are well reproduced. In Fig. 1 the densities predicted by the present model over the whole temperature range of liquid HF (which, at atmospheric pressure, goes from the freezing point at $`-83`$ °C up to the boiling point at $`19.75`$ °C) are compared with the experimental data. The experimental densities show a nearly linear dependence on $`T.`$ The data are from two sources, covering different temperature ranges. The slight vertical shift between the two data sets is due to the experimental uncertainties. It is very satisfactory to note that, although our potential model has been fitted only to a single density at $`293`$ K, the straight line corresponding to its predictions is somewhat vertically shifted, but has essentially the same slope as the experimental density. This slope is not reproduced by the HFC, HF-JV1 and HF-JV2 models, since their density straight lines (obtained by joining the corresponding results, which, unfortunately, are available only at $`203`$ and $`273`$ K) intersect the experimental curve. Another pleasant feature of the present model, also shown in Fig. 1, is that at $`303`$ K and above the average density continued to decrease over the whole MD analysis period. At each $`T<303`$ K, the density oscillated around an equilibrium value. In our opinion, this behavior indicates that the simulated liquid boils at some point in the range $`293÷303`$ K, in good agreement with the experimental normal boiling point of HF ($`T_b=292.9`$ K). With regard to the internal energy $`U,`$ our MD results exhibit a nearly linear dependence on $`T.`$ However, their absolute values (Tab. IV) are underestimated with respect to the available experimental data (a similar drawback is also present in the HF3 model). Fig. 2 reports the partial pair radial correlation functions $`g_{ij}(r)`$ computed for the liquid at 293 K, together with the real space function $`d(r)\equiv 4\pi \rho _mr\left[G(r)-1\right]`$. Here $`\rho _m`$ is the molecular number density, and $`G(r)`$ is a composite (or total) pair correlation function, obtained by adding the three partial pair correlation functions with the appropriate nuclear weights and then by convoluting the sum with the same experimental resolution function used in the neutron scattering experiments. The resulting theoretical prediction for $`d(r)`$ is compared in Fig. 2 with the corresponding neutron diffraction data for liquid DF at 293 K, which are the only available real space data. The first peak at $`r\approx 0.95`$ Å is due to the intra-molecular H–F bond distance and here the agreement between simulation and experiment is excellent. The second and third peaks of the experimental $`d(r)`$ occur at $`r\approx 1.6`$ Å and $`r\approx 2.55`$ Å, and correspond to the hydrogen-bond $`r_{\mathrm{HF}}`$ inter-molecular distance and to the $`r_{\mathrm{FF}}`$ separation, respectively. 
Unfortunately, the model (with the present choice of parameters) fails to reproduce these peaks and the complex liquid structure at longer distances. The reason of such a shortcoming may be found by comparing the partial pair correlation functions $`g_{ij}(r)`$ of the present model Fig. (2) with the analogous ab initio MD results of Röthlisberger and Parrinello. The position of the first inter-molecular peak of all our $`g_{ij}(r)`$ is shifted toward larger $`r`$ values (a similar trend occurs in the polarizable HF-JV2 model ). In addition, the height of the hydrogen-bond peak of $`g_{\mathrm{HF}}(r)`$ is nearly half the correct value, and the height of the first peak of $`g_{\mathrm{HH}}(r)`$ is also underestimated. It is to be recalled that the HF3 potential reproduces the three principal peaks of the experimental data and gives the best performance for $`d(r)`$ among the available models. This quite good agreement was obtained by modeling the hydrogen-bond interaction with a Morse term. To our knowledge, no other empirical model takes explicitly into account the hydrogen bond. Before concluding this Section, it may be useful to summarize the performances of the best available models for liquid HF. The HF3 potential gives, as already mentioned, quite good agreement with neutron diffraction structural data at the normal boiling point (293 K), but systematically fails to reproduce thermodynamics: the predicted internal energies are largely underestimated (i.e., their absolute values are too small) and the pressures in MD calculations at constant volume are several kilobars too high, indicating that the model system is less strongly bound than real HF. In comparison with HF3, the HFC model yields a slightly better thermodynamics but a slightly worse structure for the liquid, with charges corresponding to a dipole moment enhanced with respect to the monomer. Then, the predictions of HF-JV1 for the liquid phase are not very different from the HFC ones, but HF-JV1 fails completely to reproduce the properties of the isolated dimer. Finally, whereas HF-JV1 works reasonably well only at room temperature, the polarizable model potential HF-JV2 represents a true improvement as regards the predicted density at low temperature, i.e. at $`203`$ K. However, problems with the pair correlation functions are encountered also for HF-JV2. ## IV Conclusions We have shown that a simple potential model suitable for MD or MC simulations can reproduce, quantitatively or semiquantitatively, many physical properties of the hydrogen fluoride over the whole set of its solid, liquid and gaseous associated forms. To cover such a wide range of environmental conditions, it has been necessary to consider a molecular model with variable bond length. The present investigation confirms the plausibility of assuming the dimer as the basic structural unit, but also stresses the need of fitting the potential to a set of experimental and theoretical data which includes information on the condensed phases. In the liquid, the correct trend of the density is very encouraging, but the underestimated energies and the problems with the radial distribution functions indicate that some physical effect is still misrepresented by the model. Unfortunately, it is not likely that adding data on the liquid structure could improve the results with the current type of model (without an hydrogen bond term). 
In fact, our experience with the fit shows that those few parameter sets which give better liquid structures were incompatible with the gas and crystal data. We believe that the essential shortcoming of the model is the neglect of any explicit representation for the hydrogen bond, which cannot be reduced to purely electrostatic multipole interactions. Such an explicit modeling of the hydrogen bond is also lacking in most other three-site potentials for HF, with the exception of the HF3 model. Although the hydrogen bonding has a quantum mechanical origin, an approximate classical treatment is however possible and may have significant consequences, as seen from the rather good structural results for the HF3 potential. The inclusion of potential terms representing the hydrogen bond interactions appears therefore the next necessary step for more accurate HF models. More data on the liquid, including at least the radial correlation function, need to be incorporated in the fit. In conclusion, our model cannot be considered as a definitive solution of the problem, but can be seen as a significant step towards a really satisfactory potential for MD or MC simulations. It has the merit of pointing out the importance of a variable molecular length and of the hydrogen bond. Moreover, it shows that the strategy of a simultaneous fit to data covering all the associated forms of HF can be successful and should be considered as the appropriate way to fully accomplish the difficult task of finding a potential model for such a strongly associating system. ###### Acknowledgements. Work done with funds from MURST (Ministero dell’Università e della Ricerca Scientifica e Tecnologica) through the INFM (Istituto Nazionale di Fisica della Materia), from CNR and from the University of Bologna (“Finanziamento speciale alle strutture”). We thank Bunker, Jensen, Karpfen, Kofranek, and Lishka for providing their ab initio data.
# AN ANALYSIS OF THE X-RAY EMISSION FROM THE SUPERNOVA REMNANT 3C 397 ## 1 Introduction The bright radio and X-ray extended source 3C 397 (also called G41.1$`-`$0.3 or HC 26) used to be regarded as one of the youngest supernova remnants (SNRs). It has been observed by several telescopes for more than two decades, and some preliminary understanding has been achieved. The 20 cm observation of the Fleurs synthesis telescope first resolved a shell structure with a small average diameter ($`D\approx 3^{\prime }.6`$), which shows an apparent departure from spherical symmetry (Caswell et al. 1982). The HII region G41.1$`-`$0.2 lies $`6^{\prime }`$ to the west and in the foreground (Caswell et al. 1975) and may extinguish the emission from 3C 397 (Cersosimo & Magnini 1990). The HI absorption measurements provide a distance estimate for 3C 397: $`d\gtrsim 7.5\mathrm{kpc}`$ (Caswell et al. 1975), and the $`\mathrm{\Sigma }`$–$`D`$ relationship suggests a distance of $`12`$–$`13\mathrm{kpc}`$ (Caswell & Lerche 1979; Milne 1979). A preliminary estimate of the remnant age is only $`t\approx 600\mathrm{yr}`$ (Caswell et al. 1982; Becker, Markert, & Donahue 1985). The spectral index of the synchrotron emission was found to vary across the source without evident correlation with the radio intensity features, possibly due to an inhomogeneous ambient medium (Anderson & Rudnick 1993). The Einstein observation was reported by Becker et al. (1985). Two concentrations of soft X-ray emission appear in the IPC image, and Becker et al. suggested an association of some X-ray emission with the central VLA radio component. No stellar remnant was indicated by the spectrum. A temperature $`T<0.25\mathrm{keV}`$ and a column density of hydrogen $`N_H>5\times 10^{22}\mathrm{cm}^{-2}`$ were estimated from the IPC data while $`T\approx 1.0`$–$`6.0\mathrm{keV}`$ and $`N_H\approx 1.4\times 10^{22}\mathrm{cm}^{-2}`$ were estimated from the MPC data. However, Becker et al. did not consider the Einstein data to be of high enough quality for these parameters to be determined reliably. Until now the emission lines and element abundances have been poorly known. However, this situation could change, and many of the basic physical properties could be known more explicitly than before, because of the recent X-ray observations of 3C 397. In this paper, we first present the X-ray spectral analysis based on the archival ASCA SIS data and ROSAT PSPC data in combination. The ROSAT HRI data, ASCA SIS data, and NVSS 20 cm data (Condon et al. 1998) are used for the investigation of the remnant morphology. The data analysis is given in §2, the physical features of the remnant are discussed in §3, and the results are summarized in §4. ## 2 Data Analysis The SNR 3C 397 was observed by the ASCA satellite on April 8, 1995 with the SIS and GIS detectors. The SIS data are in 1-CCD clocking mode, and hence no residual dark distribution (RDD) correction is made. After the standard screening process, the effective exposure times were 34.1 kiloseconds for SIS0 and 33.7 kiloseconds for SIS1, and the total events amount to 33943 and 27116 for SIS0 and SIS1, respectively. The ROSAT PSPC observation was made on October 28, 1992 with an effective exposure time of $`4165\mathrm{s}`$. The ROSAT HRI observation was made on October 21, 1992 and the effective exposure time was $`6153\mathrm{s}`$. The standard software FTOOL4.0, IRAF, and AIPS are used to process the X-ray and NVSS data. 
### 2.1 Spectral Analysis In view of the fact that the ASCA SIS has a better spectral resolution (2% FWHM at 6 keV) than the GIS (8%), we use the SIS data in the spectral analysis. The spectrum of the remnant is extracted from the SIS data within a circular region of radius $`4^{\prime }`$ centered at $`\text{R.A.}=19^\mathrm{h}07^\mathrm{m}33^\mathrm{s}`$, $`\text{decl.}=07^{\circ }08^{\prime }00^{\prime \prime }`$ (J2000). Background spectra are extracted from the archival data on blank sky near 3C 397. The distinct emission lines in the spectrum are Mg He$`\alpha `$ ($`1.35\mathrm{keV}`$), Si He$`\alpha `$ ($`1.85\mathrm{keV}`$), S He$`\alpha `$ ($`2.43\mathrm{keV}`$), and the Fe K$`\alpha `$ complex. The Fe L complex also contributes considerably in the range about 0.65 — 1.8$`\mathrm{keV}`$. The centroid of the Fe K$`\alpha `$ complex is 6.59 keV (with a 90% confidence range $`\pm 0.02\mathrm{keV}`$, plus comparable errors caused by calibration uncertainties, such as non-uniform charge transfer inefficiency), which indicates the significance of B- through Li-like iron ions. It is known that the distinct Fe K$`\alpha `$ lines indicate the existence of a high temperature ($`\gtrsim 1\mathrm{keV}`$) plasma component, and such a high temperature and a low centroid of the complex indicate that this plasma component has not yet reached ionization equilibrium (e.g. Borkowski & Szymkowiak 1996). The SIS0 spectrum is fitted with a two-component non-equilibrium ionization (NEI) model using the spectral code SPEX (Kaastra et al. 1996) in the energy range 0.5 — 8.4 keV. The model of Morrison & McCammon (1983) is used for the interstellar absorption of the spectrum. In order to avoid too many degrees of freedom, we only allowed the abundances of those elements (Mg, Si, S, & Fe) showing distinct emission lines to vary during fitting. Two cases, in which the elemental abundances of the two components either vary independently or are coupled, are investigated. The fitting results are listed in Table 1. The reduced $`\chi ^2\approx 1.3`$ in the independent case is better than the $`\chi ^2\approx 1.6`$ in the coupled case. Considering that the ROSAT PSPC has a better response in the soft band ($`<0.9\mathrm{keV}`$) than the ASCA, the combined spectra of both the ASCA SIS0 and the ROSAT PSPC (0.2 — 2 keV) are fitted with the NEI model again, and the results are very similar to the pure SIS case (see Table 1). No significant contributions of power-law, bremsstrahlung, and blackbody components are found. From Figure 1 one can find that the emission below about $`2\mathrm{keV}`$ arises predominantly from the cool component plasma, while the emission above $`2\mathrm{keV}`$ arises mainly from the hot component. The cool component is responsible for the lines of Mg, Si, and Fe L, while the hot one is responsible for the lines of S and Fe K$`\alpha `$. Noticeably, the ionization parameter $`n_et\gtrsim 1\times 10^{12}\mathrm{cm}^{-3}\mathrm{s}`$ of the cool component (see Table 1) is high enough to indicate that the component is approaching ionization equilibrium (e.g. Masai 1984). Motivated by the possibility of equilibrium ionization (EI), we also apply the XSPEC code (using the VMEKAL model therein) (Mewe et al. 1995) to fit the SIS0 and SIS1 spectra in the range 0.5 — 8.8 keV. Besides the spectrum extracted from the circle mentioned above, we also extract the spectra from the two circles centered on positions A and B (labeled in Figure 2), each with a radius of $`1^{\prime }.65`$ covering an X-ray concentration. 
The fitting results are listed in Table 2. Due to the broad point spread function (PSF) of the ASCA SIS and the small separation of the two concentrations ($`2^{\prime }`$), we must consider the mutual contamination of the photons between regions A and B and be careful about the results. The signal-to-noise here is not very good, so that a sophisticated spatio-spectral analysis can hardly be applied. We use the ROSAT PSPC data to check our results on the interstellar absorption to A and B obtained from the ASCA data. The statistical significance of the PSPC spectra is not very high, so we only consider the energy band 0.8 — 1.8 keV which is dominated by the cool component (see Figure 1). The temperature is fixed at 0.23 keV, the value obtained from ASCA fitting. The abundances of heavy elements in the regions A and B are all fixed at the average values of the SIS results. Then we get $`N_H(\mathrm{A})=2.83_{-0.13}^{+0.13}\times 10^{22}`$cm<sup>-2</sup> and $`N_H(\mathrm{B})=3.06_{-0.15}^{+0.15}\times 10^{22}`$cm<sup>-2</sup>, which are both very similar to the SIS results. These results obtained from the PSPC and SIS data favor the trend $`N_H(\mathrm{A})<N_H(\mathrm{B})`$. Furthermore, in spite of the low statistical significance of the PSPC data, we still find signs of a difference in the Mg He$`\alpha `$ emission between the spectra of regions A and B. More detailed spatio-spectral analysis of this source could be an interesting subject for future AXAF and XMM observations. The low temperatures derived by the NEI (SPEX) and the EI (VMEKAL) models are very similar, around $`0.23\mathrm{keV}`$, but the high temperatures are quite different from one another. This can be easily understood due to the NEI nature of the hot component. Notably, the low temperatures in both models are in the temperature range previously determined from the Einstein IPC data and the high temperatures are in the range determined from the MPC data (Becker et al. 1985). The abundances of Si in the hot component are nearly zero either from the NEI model (except in the coupled case) or from the EI model. Both of the models yield high values of the emission measure (EM) for the cool component compared with those of the hot one. Coincidentally, for the cool component, the sum of the EMs of the regions A and B obtained from the EI model is comparable with the EM obtained from the NEI model. ### 2.2 Image Production Using the SIS data we make maps in the bands 0.5 — 10 keV, 0.5 — 2 keV, and 2 — 10 keV. The 0.5 — 10 keV map with an overlay of the 20 cm radio emission contours is shown in Figure 2. The 2 — 10 keV (hard) contours and the 20 cm radio emission contours are superposed on the ROSAT HRI colored image (Figure 3, Plate). The reason we select 2 keV to separate the soft map from the hard map is that 2 keV is the approximate demarcation between the hot and cool components in the spectral analysis (§2.1). These SIS maps are produced after the corrections for exposure and vignetting. Here the “Lucy-Richardson” method is applied for 40 iterations to deconvolve the ASCA images with the PSF (e.g. Jalota, Gotthelf, & Zoonematkermani 1993). The 0.5 — 10 keV image is very similar to the 0.5 — 2 keV (soft) image because the 2 — 10 keV (hard) emission from the hot component is much weaker than the soft emission. Both the 0.5 — 10 keV and the soft images are similar to the ROSAT PSPC image. Here the 20 cm radio contour maps are made from the NVSS. 
The resolution of the NVSS is 45 arcsec. (The rms brightness fluctuation of the NVSS is $`\sigma \approx 0.45`$ mJy/beam $`\approx 0.14`$ K \[Stokes I\]. The peak brightness is about 1.1 Jy/beam at $`\text{R.A.}=19^\mathrm{h}07^\mathrm{m}27^\mathrm{s}.83`$, $`\text{decl.}=07^{\circ }08^{\prime }24^{\prime \prime }.31`$ \[J2000.0\].) ### 2.3 Search for the Pulsed Signal The GIS data are used in the temporal analysis owing to their better time resolution. After barycentering the photon times of arrival, we extract the GIS2 (GIS3) light curves from the same region as we extract the SIS spectra. Only high bit-rate mode (resolution = 62.5 ms) data are used. Light curves in the 0.5 — 10 keV and 2 — 10 keV bands are obtained. A $`2^{20}`$ point fast Fourier transform is applied to the light curves. No significant pulsed signal is found in the range 0.125 — 30 s. ## 3 Discussion ### 3.1 Distance Both the NEI and EI fitting results give a hydrogen column density $`N_H`$ around $`2.9\times 10^{22}\mathrm{cm}^{-2}`$ with a range 2.6 — 3.2$`\times 10^{22}\mathrm{cm}^{-2}`$. The extinction per unit distance in the direction of 3C 397 can be estimated from the contour diagrams given by Lucke (1978): $`E_{B-V}/d\approx 0.60\mathrm{mag}\mathrm{kpc}^{-1}`$. Using the relation $`N_H=5.9\times 10^{21}E_{B-V}\mathrm{cm}^{-2}`$ (Spitzer 1978), a distance $`d\approx 8.2\mathrm{kpc}`$ (with a range 7.4 — $`9.0\mathrm{kpc}`$) is obtained. This is in agreement with the limit $`d\gtrsim 7.5\mathrm{kpc}`$ (Caswell et al. 1975). Adopting an average angular radius of $`2^{\prime }`$, the radius of the remnant would be $`r\approx 4.7\mathrm{pc}`$. ### 3.2 Emission Measure In a two-component model, the hot component is usually ascribed to the surrounding medium swept up by the blast wave and the cool component to the ejecta heated by the reverse shock. This seems not to be the case here, however, because the EM of the cool component is much higher than that of the hot component. A competing factor may be the inhomogeneity of the surrounding medium. The hot component may correspond to the shocked low density intercloud matter (ICM) and the cool component to the shocked dense cloud matter which overwhelms the ICM in mass. Similar cases were also encountered in other remnants, such as the young SNR N132D (Favata et al. 1997). For 3C 397, in fact, inhomogeneities in the surrounding medium have been invoked to account for the irregular distribution of the radio spectral indices over the source, in view of the turbulence stimulated by the onset of plasma instabilities when the ejecta collide with the cloudlets (Anderson & Rudnick 1993). If the volume emission measure of the cool component is taken as $`fn_en_{H,c}V\approx 340\times 10^{58}\mathrm{cm}^{-3}`$, where $`f`$ is the filling factor of dense cloudlets, then the hydrogen number density in the cloudlets is $`n_{H,c}\approx 30(f/0.25)^{-1/2}\mathrm{cm}^{-3}`$ and the electron density $`n_e`$ in the cloudlets is $`\approx 36(f/0.25)^{-1/2}\mathrm{cm}^{-3}`$. The mass of the cloudlets inside the remnant amounts to $`110(f/0.25)^{1/2}M_{\odot }`$. From the volume EM of the hot component, $`(1-f)n_en_{H,ICM}V\approx 1\times 10^{58}\mathrm{cm}^{-3}`$, the intercloud hydrogen number density obtained is $`n_{H,ICM}\approx 0.9[(1-f)/0.75]^{-1/2}\mathrm{cm}^{-3}`$. About $`10[(1-f)/0.75]^{1/2}M_{\odot }`$ of intercloud gas is contained inside the remnant. 
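The numerical estimates of §§3.1–3.2 can be reproduced with a few lines of arithmetic. The sketch below uses the quantities quoted above; the $`n_e\approx 1.2n_H`$ factor for an ionized plasma of roughly solar composition and the filling factor $`f=0.25`$ are assumptions of the sketch, which is only a consistency check and not part of the original analysis.

```python
import numpy as np

PC   = 3.086e18      # cm per parsec
MSUN = 1.989e33      # g
M_H  = 1.673e-24     # g

# Quantities quoted in Sections 3.1-3.2.
N_H = 2.9e22                      # cm^-2, fitted hydrogen column density
EBV_per_kpc = 0.60                # mag/kpc toward 3C 397 (Lucke 1978)
EM_cool, EM_hot = 340e58, 1e58    # cm^-3, volume emission measures
f = 0.25                          # assumed filling factor of the dense cloudlets

# Distance from the column density, N_H = 5.9e21 * E(B-V) cm^-2 (Spitzer 1978).
EBV = N_H / 5.9e21
d_kpc = EBV / EBV_per_kpc
print("distance ~ %.1f kpc" % d_kpc)                 # ~8.2 kpc

# Remnant radius for a 2 arcmin angular radius.
r = np.radians(2.0 / 60.0) * d_kpc * 1.0e3 * PC      # cm
print("radius   ~ %.1f pc" % (r / PC))               # ~4.7 pc

# Densities from EM = (filling factor) * n_e * n_H * V, taking n_e ~ 1.2 n_H (assumption).
V = 4.0 / 3.0 * np.pi * r**3
n_H_cloud = np.sqrt(EM_cool / (1.2 * f * V))
n_H_icm   = np.sqrt(EM_hot / (1.2 * (1.0 - f) * V))
print("cloudlet n_H   ~ %.0f cm^-3" % n_H_cloud)     # ~30
print("intercloud n_H ~ %.1f cm^-3" % n_H_icm)       # ~0.9

# Swept-up masses (1.4 m_H per hydrogen atom for solar composition).
M_cloud = 1.4 * M_H * n_H_cloud * f * V / MSUN
M_icm   = 1.4 * M_H * n_H_icm * (1.0 - f) * V / MSUN
print("cloudlet mass ~ %.0f Msun, intercloud mass ~ %.0f Msun" % (M_cloud, M_icm))
```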
### 3.3 Age and Explosion Energy Because the time since the gas was engulfed by the shock front is contained in the ionization parameter $`n_et`$, the value $`n_et\approx 1.3\times 10^{12}\mathrm{cm}^{-3}\mathrm{s}`$ in the cool component implies an age of the remnant: $`t\gtrsim 1.1\times 10^3(f/0.25)^{1/2}\mathrm{yr}`$, and $`n_et\approx 6.3\times 10^{10}\mathrm{cm}^{-3}\mathrm{s}`$ in the hot component implies $`t\gtrsim 1.8\times 10^3[(1-f)/0.75]^{1/2}\mathrm{yr}`$, which are both appreciably higher than the previous estimate of age, $`\approx 600\mathrm{yr}`$. We consider that the blast wave propagates into the intercloud medium and transmits a cloud shock into the cloudlets. If the high temperature $`T_h\approx 2.6\mathrm{keV}`$ (for the hot component in the NEI case) is ascribed to the blast wave, the postshock temperature may be $`T_s=0.77T_h\approx 2\mathrm{keV}`$ (e.g. Rappaport, Doxsey, & Solinger 1974). The blast wave velocity would be $`v_s=(16kT_s/3\mu m_H)^{1/2}\approx 1.3\times 10^8\mathrm{cm}\mathrm{s}^{-1}`$ (where the mean atomic weight $`\mu =0.61`$), and the age could be estimated as $`t=2r/5v_s\approx 1.4\times 10^3\mathrm{yr}`$ using the Sedov model. This estimate is between the values given by the two ionization parameters. Its being smaller than one of the above values ($`\gtrsim 1.8\times 10^3\mathrm{yr}`$) is understandable due to the severe departure from spherically symmetric evolution, the possible error in the determination of the distance, and the uncertainty in the filling factor. If the temperature ($`1.6\mathrm{keV}`$) of the hot component obtained in the EI case is used, the age estimate would be $`\approx 1.8\times 10^3\mathrm{yr}`$. On the assumption that the interaction with the cloudlets can be neglected in the propagation of the blast wave, we estimate that the explosion energy is $`E=(1.4n_{H,ICM}m_H/\xi )(r^5/t^2)\approx 2\times 10^{50}[(1-f)/0.75]^{-1/2}\mathrm{ergs}`$ (for $`t\approx 1.8\times 10^3\mathrm{yr}`$), where $`\xi =2.026`$. ### 3.4 Issues Concerning the Inhomogeneous Medium While the scenario of inhomogeneity reconciles the emission measures of the two plasma components and produces estimates of the age and explosion energy, it raises some questions to be discussed as follows. The first is the pressure equilibrium between the ICM and the cloudlets ($`p_{ICM}`$ vs. $`p_c`$) (McKee & Cowie 1975). From the values of temperatures and densities obtained above, we have $`p_{ICM}/p_c\approx (1/3)[3f/(1-f)]^{1/2}`$. Although here the filling factor $`f`$ is adjustable, the ratio would be $`\approx 1/3`$ if $`f\approx 0.25`$ is taken as above. It was recently suggested that intercloud magnetic fields are a plausible pressure source for balancing the cloudlets (Chevalier 1998). Thus a mean magnetic field of $`4.7\times 10^{-4}\mathrm{G}`$ would be needed inside the remnant, which is similar to the observed strengths in other SNRs evolving in clouds (Claussen et al. 1997). The second is the thermal evaporation of the cloudlets within the remnant. Since the remnant is very likely to expand in a cloudy medium, according to the model of White & Long (1991), the X-ray image should appear centrally brightened and smoothly darkened to the limb. As will be shown in §3.6, the eastern concentration of 3C 397 is indeed near the center and hence one cannot exclude the possibility of evaporation. However, its overall brightness distribution is much more complicated than the model describes, and the eastern concentration looks compact and may be caused by some other mechanism. 
If the cloud evaporation is unimportant here, a possible factor may be the inhibition of evaporation by the electromagnetic instabilities (Levinson & Eichler 1992) excited in the collision of the ejecta with the cloudlets (Anderson & Rudnick 1993). The third is the photoevaporation of the cloudlets by the progenitor star. According to McKee, Van Buren, & Lazareff (1984), a region of radius $`28\mathrm{pc}`$ could be made homogeneous by the photoionization of an O4-B0 star during its main sequence lifetime, given an average medium density $`\approx 8\mathrm{cm}^{-3}`$ as in the case of 3C 397. Here, however, the cloudlets exist within a radius of $`2^{\prime }`$ ($`\approx 4.7\mathrm{pc}`$). Hence the likelihood that the progenitor was a massive early-type star is low, and a type Ia supernova explosion history may be favored. The fourth is the zero abundance of Si (with an upper limit of about 0.1) in the hot component. The hot component is now ascribed to the intercloud gas behind the blast wave and possibly contains most of the ejecta. It has not yet reached ionization equilibrium, and it is improbable that its Si is completely depleted. On the other hand, it is also difficult to understand the absence of Si in the ejecta. If the ejecta have a mass of $`1M_{\odot }`$ with a solar abundance of Si, their mass ratio to the intercloud gas ($`\approx 0.1`$) would entail the absence of Si in the unshocked ICM.

### 3.5 Ancient Records? The SGR Counterpart?

A remnant age as young as $`1400`$ – $`1800\mathrm{yr}`$ is of interest in connection with ancient guest stars. However, the extinction to 3C 397, $`A_V\approx 4.5\times 10^{-22}N_H\approx 13`$ mag (Gorenstein 1975), is too high for ancient astronomers to have observed the SN explosion with the naked eye, given the hydrogen column $`N_H\approx 2.9\times 10^{22}\mathrm{cm}^{-2}`$. That may explain why there is no mention of a candidate historical record. Some researchers are interested in the seemingly compact central source, which could possibly harbor a neutron star (see Jones et al. 1998). 3C 397 was also suggested as a possible candidate for the extended X-ray counterpart of SGR 1900+14 (Greiner 1996). However, the absence of a pulsed signal and of a power-law component in our analysis does not seem to support these speculations. It was recently proposed that a quiescent X-ray source, which may be associated with the SNR G42.8+0.6, is responsible for the SGR (Hurley et al. 1999).

### 3.6 Morphology

In the ASCA SIS and ROSAT HRI maps (Figure 2 and Figure 3), the X-ray emission is confined within the radio boundary, and again shows two concentrations similar to those in the Einstein IPC map. (Here the maximum positional error of the ASCA images could be as large as about $`40^{\prime \prime }`$.) Two bright arcs in the hard contour map are roughly coincident with the two concentrations (see Figure 3); therefore the hot component is mainly concentrated in the two bright portions. The two hard arcs seem to compose a bilateral structure in the western half. The eastern half of the HRI image (Figure 3) seems to show a broken bubble-like structure, with the eastern concentration of emission on its western boundary. The eastern X-ray concentration, which looks like a compact source in the center of the remnant, is coincident with the intersecting portions of the eastern and western bubble-like halves. The hard contour map, the HRI image, and the VLA map all show an outward protrusion on the eastern side. The elongation direction of the whole remnant is essentially perpendicular to the galactic plane.
The high EM of the cool component suggests that the remnant is evolving in a cloudy medium. The brighter radio and X-ray emission on the west side, close to the galactic plane, implies a density gradient of the medium toward the plane, as has been speculated by Anderson & Rudnick (1993). The $`N_H`$ values for regions A and B obtained from the SIS data and from the PSPC data using the VMEKAL code (§2.1) are also in favor of such a gradient. In fact, the HII region G41.1$`-`$0.2 is known to be adjacent to the west (see §1). The remnant morphology in the X-ray images (especially the ROSAT HRI image) is suggestive of a bipolar (or peanut-like) bubble with roughly an east-west orientation, its symmetry axis not perpendicular to the line of sight. The western half may be a little more distant than the eastern half, as implied by the $`N_H`$ values of regions A and B. The western X-ray concentration is located at the apex of the western bright portion, which coincides rather well with the western bright radio emission (see Figure 3). This apex should result from the shock wave interacting with the denser cloudy medium in the west. On the other side, the plasma may leak out of the broken eastern bubble, so that the X-ray emission from the eastern half is softer than that from the western half. It is unclear why the remnant takes a bipolar shape. The possibility that it was caused by two explosions seems low in view of the similar sizes and similar plasma temperatures of the east and west bubbles. Another possibility is that dense matter was accumulated around the equatorial plane, but there is no direct evidence for such a conjecture. Two possible mechanisms could be responsible for the matter accumulation. One is a proto-stellar disk of gas left over from the time of the formation of the progenitor star (McCray & Lin 1994). The other could be mass loss of the progenitor star(s) along the equatorial plane; because the progenitor was probably not a massive star, it could have been a mass-losing binary system, which is consistent with a type Ia supernova explosion (as mentioned in §3.4).

## 4 Conclusion

The ASCA SIS0 and the ROSAT PSPC spectral data on SNR 3C 397 are analysed with a two-component NEI model using the SPEX code. We also fit simultaneously the ASCA SIS0 and SIS1 spectra with the VMEKAL model for an EI case. The hard ($`\gtrsim 2\mathrm{keV}`$) X-ray emission is found to arise primarily from the hot component, which is responsible for the S and Fe K$`\alpha `$ lines. The cool component contributes dominantly to the soft emission and is responsible for the Mg, Si, and Fe L lines. The cool component is found to be approaching ionization equilibrium, and its high emission measure suggests that the remnant evolves in a cloudy medium. The intercloud magnetic fields may be a pressure source to balance the dense clouds. The existence of an inhomogeneous surrounding medium implies that the supernova progenitor may not have been a massive early-type star. The zero abundance of Si in the hot component needs further explanation. We did not find power-law, bremsstrahlung, or blackbody components in the spectral analysis, nor pulsed signals in the temporal analysis. We restored the X-ray maps using the ASCA SIS data and compared them with the ROSAT HRI and 20 cm VLA maps. The two bright concentrations may imply that the remnant, with a bipolar structure, encounters a denser medium in the west. The $`N_H`$ values obtained suggest a distance of $`\approx 8\mathrm{kpc}`$. The Sedov model for the dynamics and the ionization parameters for the two components yield an age $`\gtrsim 1.4`$ – $`1.8\times 10^3\mathrm{yr}`$, which is much greater than the previous estimate of $`600\mathrm{yr}`$.
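The Sedov-model age and explosion energy quoted here and in §3.3 can be cross-checked with a few lines; the constants and the choice $`t\approx 1.8\times 10^3\mathrm{yr}`$ for the energy estimate follow the text, while the cgs values are our own.

```python
import numpy as np

# Cross-check of the Sedov-model age and explosion energy (Sec. 3.3).
pc, m_H, yr, keV = 3.086e18, 1.673e-24, 3.156e7, 1.602e-9   # cgs

kT_h = 2.6 * keV                                   # hot-component temperature (NEI fit)
kT_s = 0.77 * kT_h                                 # postshock temperature, ~2 keV
v_s  = np.sqrt(16.0 * kT_s / (3.0 * 0.61 * m_H))   # blast-wave velocity, ~1.3e8 cm/s
r    = 4.7 * pc
t    = 2.0 * r / (5.0 * v_s)                       # Sedov age, ~1.4e3 yr
print(f"v_s = {v_s:.2e} cm/s, t = {t/yr:.0f} yr")

# explosion energy for t ~ 1.8e3 yr (the value favoured in the text)
n_H_icm, xi = 0.9, 2.026
t18 = 1.8e3 * yr
E = (1.4 * n_H_icm * m_H / xi) * r**5 / t18**2     # ~2e50 erg
print(f"E = {E:.1e} erg")
```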
We would like to thank J. Trümper and R. McCray for critical comments during the preparation of this manuscript. We would also like to thank J. Condon for critical reading of and comments on the manuscript, and J.-H. Huang, Q.-S. Gu, and W. Becker for technical help. Data analysis is carried out on the SUN workstation at the Laboratory for Astronomical Data Analysis of the Department of Astronomy, Nanjing University. This work is supported by a grant from the NSF of China, a grant from the Ascent Project of the State Scientific Ministry of China, and a grant from the State Education Ministry of China for scholars coming back from abroad.

## 5 Figure captions

TABLE 1. Fitting results obtained with the NEI model by applying SPEX to the ASCA SIS0 and ROSAT PSPC spectral data. The element abundances of the two temperature components are treated as independent and coupled, respectively, in the two cases. The 90% confidence ranges ($`\mathrm{\Delta }\chi ^2=2.706`$) are indicated.

| | independent case: SIS0 | independent case: SIS0+PSPC | coupled case: SIS0 |
| --- | --- | --- | --- |
| **hot component** | | | |
| $`n_en_HV`$ ($`10^{58}\mathrm{cm}^{-3}`$) | $`1.29_{-0.18}^{+0.33}`$ | $`1.33_{-0.14}^{+0.40}`$ | $`0.99_{-0.16}^{+1.06}`$ |
| $`T`$ (keV) | $`2.63_{-0.53}^{+0.43}`$ | $`2.53_{-0.57}^{+0.31}`$ | $`3.84_{-0.04}^{+0.00}`$ |
| $`n_et`$ ($`10^{12}\mathrm{cm}^{-3}\mathrm{s}`$) | $`6.26_{-1.63}^{+3.83}\times 10^{-2}`$ | $`6.49_{-1.61}^{+4.49}\times 10^{-2}`$ | $`5.01_{-0.36}^{+0.14}\times 10^{-2}`$ |
| \[Mg/H\] | $`1.89_{-1.89}^{+1.36}`$ | $`1.74_{-1.53}^{+1.49}`$ | $`0.69_{-0.00}^{+0.00}`$ |
| \[Si/H\] | $`0.00_{-0.00}^{+0.08}`$ | $`0.00_{-0.00}^{+0.19}`$ | $`0.67_{-0.00}^{+0.05}`$ |
| \[S/H\] | $`1.88_{-0.64}^{+0.49}`$ | $`1.87_{-0.70}^{+0.41}`$ | $`1.98_{-0.00}^{+0.02}`$ |
| \[Fe/H\] | $`2.45_{-0.67}^{+1.74}`$ | $`2.65_{-0.62}^{+2.28}`$ | $`0.91_{-0.00}^{+0.00}`$ |
| **cool component** | | | |
| $`n_en_HV`$ ($`10^{58}\mathrm{cm}^{-3}`$) | $`344_{-88}^{+232}`$ | $`341_{-79}^{+298}`$ | $`490_{-29}^{+489}`$ |
| $`T`$ (keV) | $`0.239_{-0.015}^{+0.010}`$ | $`0.238_{-0.019}^{+0.007}`$ | $`0.226_{-0.053}^{+0.003}`$ |
| $`n_et`$ ($`10^{12}\mathrm{cm}^{-3}\mathrm{s}`$) | $`1.34_{-0.98}^{+\mathrm{\infty }}`$ | $`1.38_{-1.09}^{+\mathrm{\infty }}`$ | $`2.67_{-2.11}^{+0.07}`$ |
| \[Mg/H\] | $`0.38_{-0.16}^{+0.18}`$ | $`0.37_{-0.19}^{+0.16}`$ | —— |
| \[Si/H\] | $`0.94_{-0.25}^{+0.35}`$ | $`0.91_{-0.10}^{+0.33}`$ | —— |
| \[S/H\] | $`0.82_{-0.82}^{+2.48}`$ | $`0.83_{-0.83}^{+1.82}`$ | —— |
| \[Fe/H\] | $`0.33_{-0.28}^{+0.41}`$ | $`0.34_{-0.31}^{+0.54}`$ | —— |
| $`N_H`$ ($`10^{22}\mathrm{cm}^{-2}`$) | $`2.87_{-0.12}^{+0.12}`$ | $`2.87_{-0.13}^{+0.20}`$ | $`3.04_{-0.05}^{+0.00}`$ |
| $`\chi ^2/\mathrm{d}.\mathrm{o}.\mathrm{f}.`$ | $`341/254`$ | $`579/459`$ | $`416/258`$ |

TABLE 2. Fitting results obtained with the EI model by applying XSPEC simultaneously to the SIS0 and SIS1 data. The 90% confidence ranges ($`\mathrm{\Delta }\chi ^2=2.706`$) are indicated.
| | whole | portion A | portion B |
| --- | --- | --- | --- |
| **hot component** | | | |
| $`n_en_HV/(d/8\mathrm{kpc})^2`$ ($`10^{58}\mathrm{cm}^{-3}`$) | $`2.04_{-0.27}^{+0.26}`$ | $`0.59_{-0.14}^{+0.07}`$ | $`0.99_{-0.19}^{+0.11}`$ |
| $`T`$ (keV) | $`1.67_{-0.09}^{+0.11}`$ | $`1.55_{-0.14}^{+0.19}`$ | $`1.23_{-0.06}^{+0.08}`$ |
| \[Mg/H\] | $`7.67_{-2.97}^{+4.26}`$ | $`8.19_{-4.32}^{+8.24}`$ | $`0.00_{-0.00}^{+2.51}`$ |
| \[Si/H\] | $`0.00_{-0.00}^{+0.10}`$ | $`0.00_{-0.00}^{+0.18}`$ | $`0.00_{-0.00}^{+0.33}`$ |
| \[S/H\] | $`0.60_{-0.29}^{+0.32}`$ | $`0.52_{-0.51}^{+0.62}`$ | $`1.00_{-0.46}^{+0.78}`$ |
| \[Fe/H\] | $`3.72_{-0.56}^{+0.65}`$ | $`3.73_{-1.06}^{+1.94}`$ | $`5.57_{-1.37}^{+3.02}`$ |
| **cool component** | | | |
| $`n_en_HV/(d/8\mathrm{kpc})^2`$ ($`10^{58}\mathrm{cm}^{-3}`$) | $`521_{-61}^{+76}`$ | $`117_{-26}^{+32}`$ | $`197_{-38}^{+60}`$ |
| $`T`$ (keV) | $`0.223_{-0.003}^{+0.004}`$ | $`0.224_{-0.007}^{+0.008}`$ | $`0.222_{-0.033}^{+0.006}`$ |
| \[Mg/H\] | $`0.58_{-0.06}^{+0.08}`$ | $`0.61_{-0.13}^{+0.14}`$ | $`0.48_{-0.11}^{+0.15}`$ |
| \[Si/H\] | $`0.97_{-0.11}^{+0.12}`$ | $`1.10_{-0.23}^{+0.27}`$ | $`0.97_{-0.17}^{+0.20}`$ |
| \[S/H\] | $`5.60_{-0.83}^{+1.03}`$ | $`6.48_{-1.83}^{+2.63}`$ | $`4.12_{-1.60}^{+1.79}`$ |
| \[Fe/H\] | $`0.00_{-0.00}^{+0.15}`$ | $`0.00_{-0.00}^{+0.25}`$ | $`0.00_{-0.00}^{+0.35}`$ |
| $`N_H`$ ($`10^{22}\mathrm{cm}^{-2}`$) | $`2.82_{-0.04}^{+0.07}`$ | $`2.66_{-0.07}^{+0.12}`$ | $`3.08_{-0.06}^{+0.27}`$ |
| $`\chi ^2/\mathrm{d}.\mathrm{o}.\mathrm{f}.`$ | $`1142/559`$ | $`613/559`$ | $`593/559`$ |
no-problem/9903/cond-mat9903131.html
ar5iv
text
# Comment on “Macroscopic Equation for the Roughness of Growing Interfaces in Quenched Disorder”

In a recent Letter Braunstein and Buceta introduced a “macroscopic” equation for the time evolution of the width of interfaces belonging to the directed percolation depinning (DPD) universality class . From numerical simulations of the DPD model, they inferred an ansatz (Eq. (1) in Ref. ) for the time derivative of the interface width (called DSIW in Ref. ) at the depinning transition. Braunstein and Buceta found that their formula fitted the numerical data at the depinning transition, for $`q_c=0.539`$ and $`\beta =0.63`$, with an appropriate choice of some arbitrary constants. Here we argue that, contrary to what is claimed in Ref. , Braunstein and Buceta’s formula does not describe the “macroscopic” behaviour of the interface. The formula proposed in Ref. for the DSIW is an approximation valid only in the very-short-times regime (when less than one layer has been completed), which is not significant for the description of the surface dynamics at large scales. We obtain analytically the short time behaviour of the DPD model, which is valid for any $`q`$ and explains the appearance of an exponential term in the formula of Ref. for the DSIW.

Let us consider the DPD model in a system of size $`L`$ with a density $`q`$ of blocked cells ($`p=1-q`$ being the density of free cells). We are interested in the very short times regime, when the first monolayer has not yet been completed, i.e. the number of growth attempts $`N`$ is $`N\lesssim L`$ (this corresponds to times $`t=N/L\lesssim 1`$). In this regime, the probability of having a column $`i`$ with height $`h_i>\mathrm{min}(h_{i-1},h_{i+1})+2`$ is negligible and the columns grow almost independently. The growth at this early stage can be seen as a random deposition (RD) process in which every column grows by one unit with probability $`p/L`$. The short time regime of the DPD model is then like RD, which is exactly solvable, but with the additional ingredient of a density $`q`$ of blocked sites. One can see that, within this approximation, the probability of having a column with height $`h`$ after $`N`$ growth attempts is given by

$$P(N,h)=\frac{(Nsp)^h}{h!}e^{-Ns}+qp^h\underset{r=h+1}{\overset{N}{\sum }}\frac{(Ns)^r}{r!}e^{-Ns},$$ (1)

where $`s=1/L`$ is the probability of attempting to grow a given column and the usual approximation $`s^r(1-s)^{N-r}N!/[(N-r)!r!]\approx (Ns)^r\mathrm{exp}(-Ns)/r!`$ has been made. From the probability (1) one can calculate the interface width $`W^2=\langle h^2\rangle -\langle h\rangle ^2`$ and then its time derivative, whose leading terms are

$$\frac{dW^2}{dt}=pe^{-qt}+2p^2e^{-qt}\left(\frac{e^{-qt}-1}{q}+t\right),$$ (2)

where $`t=Ns=N/L`$ is the time in the units used in Ref. . This formula gives the exact time evolution of $`\frac{dW^2}{dt}`$ for any $`q`$ (not only at $`q_c=0.539`$) and is valid for times $`t\lesssim 1`$. For times $`t>1`$ differences between neighbouring columns are likely to be larger than $`2`$, resulting in horizontal correlations and the breakdown of (2). A comparison of Eq. (2) with numerical simulations of the DPD model is presented in Figure 1. Our calculation suggests that the exponential term in the ansatz of Ref. is actually produced by the usual random-deposition-like dynamics, which occurs in any growth model at short times.
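The formulas above are easy to test directly. The sketch below samples the independent-column approximation itself (not the full DPD model): each column receives a Poisson($`t`$) number of growth attempts and is capped by the first blocked cell above it. The closed-form $`W^2(t)`$ used for comparison is our own term-by-term integral of Eq. (2).

```python
import numpy as np

# Direct sampling of the independent-column approximation behind Eqs. (1)-(2).
rng = np.random.default_rng(1)
L, runs, q = 100_000, 50, 0.539
p = 1.0 - q

def w2_mc(t):
    vals = []
    for _ in range(runs):
        cap = rng.geometric(q, size=L) - 1      # maximum reachable height of each column
        tries = rng.poisson(t, size=L)          # growth attempts per column, N s = t
        h = np.minimum(tries, cap)
        vals.append(h.var())
    return np.mean(vals)

def w2_eq2(t):
    # our integral of Eq. (2) with W^2(0) = 0
    A = (p / q) * (1.0 - np.exp(-q * t))                                    # <h>
    B = A + 2.0 * p**2 * (1.0 - np.exp(-q * t) * (1.0 + q * t)) / q**2      # <h^2>
    return B - A**2

for t in (0.1, 0.3, 0.6, 1.0):
    print(f"t = {t:.1f}:  MC = {w2_mc(t):.4f}   Eq.(2) integrated = {w2_eq2(t):.4f}")
```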
Juan M. López, Department of Mathematics, Imperial College, 180 Queen’s Gate, London SW7 2BZ, United Kingdom

José J. Ramasco<sup>∗,†</sup> and Miguel A. Rodríguez

Departamento de Física Moderna, Universidad de Cantabria, Avenida Los Castros s/n, Santander E-39005, Spain

Instituto de Física de Cantabria, Consejo Superior de Investigaciones Científicas – Universidad de Cantabria, Santander E-39005, Spain
no-problem/9903/hep-lat9903032.html
ar5iv
text
# 1 Introduction ## 1 Introduction As lattice QCD computations evolve towards quantitative predictions, it becomes increasingly important to make better use of computational resources by using more sophisticated discretizations of the continuum action, or “improved actions”. With Kogut-Susskind quarks the errors from the lattice discretization of the fermion action are proportional to $`a^2`$, where $`a`$ is the lattice spacing. In contrast, for the (unimproved) Wilson quark formulation, the errors are proportional to $`a^1g`$. However, for the lattice spacings that are now practical for calculations with dynamical quarks, the effects of these errors in the Kogut-Susskind formulation are not small. In particular, the breaking of flavor symmetry is a large effect. The Kogut-Susskind formulation naturally describes four flavors of quarks, which is conventionally reduced to two in dynamical simulations by taking the square root of the determinant. However, only a U(1) subgroup of the original SU(4) chiral symmetry is an exact symmetry, so for nonzero lattice spacing only one of the sixteen pseudoscalar mesons has a vanishing mass when the quark mass vanishes. In principle, flavor symmetry breaking is also present for vector mesons, nucleons, etc., but these effects are generally small, and we can concentrate on the pseudoscalar mesons. Flavor symmetry breaking can be reduced by “fattening” the links, which means that the conventional parallel transport using the link matrix is replaced by an average over paths connecting the points. This was demonstrated in Ref. , where the single link was replaced by an average of the simple link and three link paths, or “staples”. This modification was introduced as the simplest possible gauge invariant modification of the nearest neighbor coupling, and the relative weight of the staples and simple connection was treated as a free parameter. The improvement of flavor symmetry was found to be fairly insensitive to the value of the weighting coefficient. Lepage pointed out that this improvement could be understood as an introduction of a form factor suppressing the coupling to high momentum gluons which scatter quarks from one corner of the Brillouin zone to another. Lagäe and Sinclair used this understanding to construct an action where successive smearings in all four directions canceled the tree level couplings to all gluons with any momentum component equal to $`\pi /a`$, and showed that this further reduced flavor symmetry breaking. This action includes paths to nearest neighbor points that are up to nine links in length. A fat link Kogut-Susskind fermion action, motivated by a perfect Kogut-Susskind fermion action, was tested by Bietenholz and Dilger on the 2D Schwinger Model. They found that their action shows very good scaling for the pion and the eta masses. Furthermore, perfect action motivated fattening has been used by DeGrand for Wilson-like fermions, showing very good scaling and renormalization factors very close to unity. In an earlier paper we investigated the single staple fattening, together with the Naik term improving the dispersion relation in more detail. Ref. also describes an algorithm for using these actions in full QCD simulations, and extends studies of flavor symmetry breaking to the nonlocal pions, which turn out to have much larger breakings than the local non-Goldstone pion which was used in the earlier studies. This paper extends our studies of flavor symmetry breaking to actions involving paths longer than three links. 
Many of these results were briefly presented at the Lattice-98 conference. We investigate actions that involve five and seven link paths to the nearest neighbor point, in addition to the one and three link paths. The coefficients of the various paths are chosen to minimize or to completely eliminate the tree level couplings to gluons with transverse momentum components $`\pi /a`$. Motivated by encouraging results using links fattened by “Ape smearing”, we also try a variant of the fattened action which makes the fattened links approximately unitary. Recently Lepage has analyzed flavor symmetry breaking using the language of Symanzik improvement. In addition to providing a simple construction of the seven-link action which cancels tree level couplings at momentum $`\pi `$, Lepage points out in this work that the fattening which reduces flavor symmetry breaking introduces an error of order $`a^2`$ in the low momentum physics (effectively an $`a^2p^2`$ error, where $`p`$ is a momentum), and shows how this error can be removed by an additional term in the action. This work concentrates on the effect of the valence quark action on the flavor symmetry breaking. We have not done a study of the effects of changing the dynamical quark action; however, we have found in previous studies that improvements in flavor symmetry arising from changes in the valence quark action are indicative of improvements in full QCD simulations. Also, we have not studied scaling, or the independence of mass ratios on the lattice spacing, although we expect that they will show improved scaling properties. Our concentration on the splittings of the pions is motivated by experience that suggests that this is the worst practical problem with Kogut-Susskind quarks. (For example, in Ref. we found that rotational symmetry of the pion propagator was essentially restored at coarser lattice spacings than was the flavor symmetry.) Furthermore, we expect that the study of the pion spectrum provides a good guide to the quality of actions, since the low energy dynamics of QCD is approximately the dynamics of a pion gas.

## 2 The actions

Figure 1 illustrates the coupling to gluons with momentum components $`\pm \pi `$. The figure illustrates the single link couplings in $`D_\mu `$, connecting the central point to both the forwards and backwards nearest neighbors, as well as three link staples connecting to the same points. Each horizontal arrow represents a parallel transport by $`U_\mu \approx \mathrm{𝟏}+igaA_\mu `$. The directions of the arrows in the coupling to the backwards neighbor include one minus sign because this transport involves $`U^{\dagger }`$ and another minus sign appearing explicitly in the derivative. Now consider the coupling to a $`\mu `$-direction gluon ($`A_\mu (\pi \widehat{e}_\nu )`$) with momentum component $`\pm \pi `$ in the $`\nu `$ direction. The $`\pm `$ signs above the $`\mu `$ direction links indicate the sign of the coupling of this gluon relative to the forward simple link. Thus, if $`c_1`$ and $`c_3`$ are the weights of the one-link and three-link paths, the coupling to this gluon from the paths pictured here is $`c_1-2c_3`$. (This is not the whole story — one must also include the staples in the directions that are not shown in this figure.) Similarly, the parenthesized signs below the $`\mu `$ direction links show the relative coupling to a gluon with momentum $`\pi `$ in the $`\mu `$ direction ($`A_\mu (\pi \widehat{e}_\mu )`$).
Note that this coupling is automatically cancelled between the forward and backward parts of $`D_\mu `$, so we don’t have to worry about longitudinal momentum $`\pi `$. Now one might worry that the $`\nu `$ direction links, required to keep the expression gauge invariant, might introduce couplings to $`\nu `$ direction gluons with momentum components $`\pm \pi `$. But for $`A_\nu (\pi \widehat{e}_\mu )`$ gluons, the contributions of the vertical links in the center cancel, and the contributions of the links from the left and right sides cancel, since they are separated by $`2a`$ and traversed in opposite directions. Similarly, the coupling to $`A_\nu (\pi \widehat{e}_\nu )`$ gluons cancels between the top and bottom halves of the figure. This argument extends to the five and seven link paths illustrated in Fig. 2, so the end result is that we can compute the couplings to gluons with momentum components equal to $`\pi `$ by just considering the $`\mu `$ direction links in $`D_\mu `$. Taking into account the multiplicities of the various paths and using $`C_n`$ for the weight of the $`n`$ link paths, the couplings are given below. In these expressions the coefficients explicitly show how many paths of each length contribute with positive weight and how many with negative weight. We also write explicitly $`(2C_5)`$ and $`(6C_7)`$ to indicate that in $`D_x`$ there are two shortest paths connecting the starting point to the $`\widehat{x}`$ direction link displaced by $`+\widehat{y}+\widehat{z}`$, and six paths to the link displaced by $`+\widehat{y}+\widehat{z}+\widehat{t}`$.

* Coupling to $`k=(0,0,0,0)`$: $`C_1+6C_3+12(2C_5)+8(6C_7)`$
* Coupling to $`k=(0,\pi ,0,0)`$: $`C_1+(4-2)C_3+(4-8)(2C_5)+(-8)(6C_7)`$
* Coupling to $`k=(0,\pi ,\pi ,0)`$: $`C_1+(2-4)C_3+(4-8)(2C_5)+(+8)(6C_7)`$
* Coupling to $`k=(0,\pi ,\pi ,\pi )`$: $`C_1+(-6)C_3+(+12)(2C_5)+(-8)(6C_7)`$

If we are willing to use all the paths up to length seven, we can normalize the zero momentum coupling to one and set all the others to zero with $`C_1=1/8`$, $`C_3=1/16`$, $`(2C_5)=1/32`$ and $`(6C_7)=1/64`$. This defines our “Fat7” action and, with tadpole improved coefficients, the “Fat7tad” action. However, it is interesting to ask how helpful this complexity is — could we get by with shorter paths? If we restrict ourselves to five link and shorter paths, we can no longer satisfy all of these conditions. However, we can choose the couplings to minimize the maximum of the couplings to the high momentum gluons. This leads to $`C_1=1/7`$, $`C_3=1/14`$ and $`(2C_5)=1/28`$, with all couplings to high momentum gluons reduced by a factor of seven. This defines our “Fat5” action. Here we note that in two dimensions the equivalent of our “Fat7” action has only a 3-link staple. Our tree level formula for the relative weight of the staple would be 0.25 in two dimensions. This is very close to the relative weight of 0.238 introduced in the approximate perfect action constructed in Ref. for the 2D Schwinger Model. All of these actions can be tadpole improved by inserting a factor of $`(1/u_0)^{L-1}`$ in the coefficient of each path, where $`L`$ is the length of the path. Using $`L-1`$ instead of $`L`$ amounts to absorbing one power of $`u_0`$ into the quark mass. We use the average plaquette to define $`u_0`$.
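The linear conditions above can be checked mechanically. The following sketch (our own, not part of the original analysis) solves for the Fat7 coefficients and evaluates the residual couplings of the quoted Fat5 coefficients.

```python
import numpy as np

# Tree-level couplings of the fattened D_mu to gluons with transverse momentum
# components 0 or pi, in terms of x = (C1, C3, 2*C5, 6*C7) as listed above.
M = np.array([[1.,  6.,  12.,  8.],     # k = (0,0,0,0)
              [1.,  2.,  -4., -8.],     # k = (0,pi,0,0)
              [1., -2.,  -4.,  8.],     # k = (0,pi,pi,0)
              [1., -6.,  12., -8.]])    # k = (0,pi,pi,pi)

# "Fat7": unit coupling at zero momentum, zero at the three pi-momenta
fat7 = np.linalg.solve(M, [1., 0., 0., 0.])
print("C1, C3, 2C5, 6C7 =", fat7)                 # -> 1/8, 1/16, 1/32, 1/64

# "Fat5": drop the seven-link paths; the quoted coefficients leave residual
# couplings of magnitude 1/7 at all three pi-momenta
fat5 = np.array([1/7., 1/14., 1/28., 0.])
print("Fat5 couplings:", M @ fat5)                # -> 1, 1/7, -1/7, 1/7
```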
While the paths shown in Fig. 2 can be used to reduce or eliminate couplings to gluons with transverse momentum components equal to $`\pi `$, Lepage has pointed out that they have the undesirable effect of modifying the coupling to gluons with small nonzero transverse momentum components, essentially by introducing a second derivative coupling proportional to $`a^2p^2`$. Following Ref. , we can correct for this by introducing a flavor conserving five link path, $`+\widehat{y}+\widehat{y}+\widehat{x}-\widehat{y}-\widehat{y}`$, into $`D_x`$, giving an action with no tree level order $`a^2`$ corrections. While it is not clear what observable quantities are affected by this correction, it is a relatively cheap and aesthetically pleasing addition to the Fat7+Naik action. We have tested flavor symmetry breaking with this action, which we call the “Asq” action, or, when the coefficients are tadpole improved, the “Asqtad” action.

To summarize and clarify these actions, we give the coefficients of the paths in the $`a^2`$ improved action in a form useful for simulation. Here $`c_1`$ is the coefficient of the simple link; $`c_3`$, $`c_5`$ and $`c_7`$ are the coefficients of the three, five and seven link paths in Fig. 2; $`c_N`$ is the coefficient of the three link path to the third nearest neighbor (Naik term); and $`c_L`$ is the coefficient of the five link path implementing the correction introduced by Lepage. The origin of each term in the coefficients is identified by a subscript: “F” for flavor symmetry, “N” for the Naik correction to the quark dispersion relation, and “L” for the small momentum form factor correction. The factors of $`1/2`$ and $`1/6`$ in $`c_5`$ and $`c_7`$ compensate for the number of different paths connecting the starting point to the $`\mu `$ direction link parallel to the simple link. For example, the $`x`$ direction link displaced by $`+\widehat{y}+\widehat{z}`$ (coefficient $`c_5`$) is included in both $`+\widehat{y}+\widehat{z}+\widehat{x}-\widehat{z}-\widehat{y}`$ and $`+\widehat{z}+\widehat{y}+\widehat{x}-\widehat{y}-\widehat{z}`$, and we average over both paths.

$`c_1`$ $`=`$ $`(1/8)_F+(3/8)_L+(1/8)_N`$
$`c_3`$ $`=`$ $`(1/16)_F`$
$`c_5`$ $`=`$ $`(1/32)_F(1/2)`$
$`c_7`$ $`=`$ $`(1/64)_F(1/6)`$
$`c_L`$ $`=`$ $`-(1/16)_L`$
$`c_N`$ $`=`$ $`-(1/24)_N`$ (1)
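The coefficients of Eq. (1), together with the tadpole factors $`(1/u_0)^{L-1}`$ described earlier, can be assembled as in the short sketch below; the plaquette value $`u_0`$ used there is illustrative, not a measured one.

```python
from fractions import Fraction as F

# Path coefficients of Eq. (1) and their tadpole-improved values (1/u0)**(L-1).
u0 = 0.885                              # illustrative plaquette-based u0
coeffs = {                              # name: (coefficient, path length L)
    "one-link  c_1": (F(1, 8) + F(3, 8) + F(1, 8), 1),
    "staple    c_3": (F(1, 16),                    3),
    "5-link    c_5": (F(1, 32) * F(1, 2),          5),
    "7-link    c_7": (F(1, 64) * F(1, 6),          7),
    "Lepage    c_L": (-F(1, 16),                   5),
    "Naik      c_N": (-F(1, 24),                   3),
}
for name, (c, Lpath) in coeffs.items():
    print(f"{name}: {str(c):>7}   tadpole-improved: {float(c) / u0**(Lpath - 1):+.5f}")
```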
We have also experimented with an action in which the fat links are approximately unitary. Theoretically, it is not clear why unitarity should be a concern in suppressing flavor symmetry breaking. However, results in Ref. led us to consider “APE-smeared” links, where an average over the single link and nearby paths is constructed as above, and then the resulting fat link is projected back onto the nearest element of SU(3). Of course, such actions are not explicitly expressed as a sum over paths, and dynamical simulations using them would require an extension of our algorithms. However, spectrum calculations using APE-smeared links for the valence quarks are straightforward. In particular, we ran a calculation using a single iteration of APE smearing on a fairly coarse lattice. This action differs from the one link plus staple action only in the projection onto a special unitary matrix, yet, as will be seen in a later section, it produced smaller flavor symmetry breaking than the one link plus staple plus Naik action. (The Naik term has little effect on flavor symmetry.) This motivated us to construct an action for which the fat links are approximately unitary, but are still expressed as an explicit sum over paths.

An almost unitary matrix $`M`$ can be expressed as a unitary matrix $`U`$ times a correction:

$$M=U\left(\mathrm{𝟏}+ϵ\right),$$ (2)

where $`ϵ`$ is Hermitian. Now look at $`M^{\dagger }M`$ to first order in $`ϵ`$:

$$M^{\dagger }M=\left(\mathrm{𝟏}+ϵ\right)U^{\dagger }U\left(\mathrm{𝟏}+ϵ\right)\approx \mathrm{𝟏}+2ϵ.$$ (3)

Now, inverting Eq. 2 to first order,

$$U=M\left(\mathrm{𝟏}-ϵ\right)=\frac{3}{2}M-\frac{1}{2}MM^{\dagger }M.$$ (4)

We want to use this equation to make a fat link approximately unitary. Let $`M`$ be a generic fat link:

$$M=L+\alpha S$$ (5)

where $`L`$ is the simple link and $`S`$ is some sum over other paths connecting the ends of $`L`$. $`S`$ may be the sum of the staples for a simple fattening, or something more complicated involving longer paths. We will work to first order in $`\alpha `$. Customarily we also rescale $`M`$ by some factor which is $`1+b\alpha `$. However, this factor cancels in the derivation, so we suppress it here. Unitarize $`M`$ as above to make an approximately unitary fat link $`F`$:

$`F`$ $`=`$ $`{\displaystyle \frac{3}{2}}M-{\displaystyle \frac{1}{2}}MM^{\dagger }M`$ (6)
$`=`$ $`{\displaystyle \frac{3}{2}}\left(L+\alpha S\right)-{\displaystyle \frac{1}{2}}\left(L+\alpha S\right)\left(L^{\dagger }+\alpha S^{\dagger }\right)\left(L+\alpha S\right)`$

Expand, keeping terms up to order $`\alpha `$ and using $`L^{\dagger }L=\mathrm{𝟏}`$, and remarkable cancellations occur:

$$F=L+\frac{\alpha }{2}\left(S-LS^{\dagger }L\right)$$ (7)

$`F`$ is an approximately unitary fat link expressed as a sum over paths — a form suitable for dynamical simulations. The only change is that the sum over fattening paths, $`S`$, has been replaced by an average of the paths traversed in each direction, $`\frac{1}{2}\left(S-S^{\dagger }\right)`$, with $`L`$ inserted as necessary to maintain gauge invariance. At this point one can forget about how it was derived, and just verify from Eq. 7 that $`F^{\dagger }F=\mathrm{𝟏}`$ to order $`\alpha `$. It is also easy to verify, by setting link matrices to $`\mathrm{𝟏}+iA_\mu `$, that $`F`$ averages $`A_\mu `$ over position in the same way that $`M`$ did. The minus sign in front of the $`LS^{\dagger }L`$ term compensates for the fact that this path is now traversed in the opposite direction. This action is illustrated in Fig. 3 for the case where $`S`$ is the three link staple. The two terms can be made to look more symmetric by factoring $`L`$ out on the right (or left):

$$F=\left(\mathrm{𝟏}+\frac{\alpha }{2}\left(SL^{\dagger }-LS^{\dagger }\right)\right)L$$ (8)

In this form, we are just following the parallel transport by $`L`$ with a difference of going around closed loops in opposite directions, or a term proportional to $`F_{\mu \nu }`$.
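A quick numerical way to see the $`F^{\dagger }F=\mathrm{𝟏}+O(\alpha ^2)`$ property is to build $`F`$ from random SU(3) matrices; the "staple sum" $`S`$ below is a stand-in built from random links, not an actual lattice staple sum.

```python
import numpy as np

# Check that F = L + (alpha/2)(S - L S^dag L) is unitary to first order in alpha.
rng = np.random.default_rng(2)

def random_su3():
    # QR of a complex Gaussian matrix gives a Haar-random U(3); fix the phases
    # of the R diagonal, then divide out the determinant phase to land in SU(3).
    z = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
    Q, R = np.linalg.qr(z)
    Q = Q @ np.diag(np.diag(R) / np.abs(np.diag(R)))
    return Q / np.linalg.det(Q) ** (1.0 / 3.0)

L = random_su3()
S = sum(random_su3() @ random_su3() @ random_su3() for _ in range(6))  # stand-in staple sum

for alpha in (0.1, 0.05, 0.025):
    F = L + 0.5 * alpha * (S - L @ S.conj().T @ L)
    dev = np.linalg.norm(F.conj().T @ F - np.eye(3))
    print(f"alpha = {alpha}:  ||F^dag F - 1|| = {dev:.2e}")   # scales like alpha^2
```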
## 3 The simulations

Most of our spectrum calculations used two of the same sets of sample lattices used in Ref. . These were lattices with two flavors of dynamical quarks, where the dynamical quarks used the “Staple+Naik” action, and a Symanzik improved gauge action at $`10/g_{imp}^2=7.3`$ and $`7.5`$. At $`10/g_{imp}^2=7.3`$ we used $`12^3\times 32`$ lattices with dynamical quark masses of $`0.02`$ and $`0.04`$, with lattice spacings determined from $`r_0`$ of about 0.15 fm and 0.16 fm, respectively. The $`10/g_{imp}^2=7.5`$ runs used $`16^3\times 48`$ lattices with $`m_q=0.015`$ and $`0.030`$, with lattice spacings of $`0.13`$ fm and $`0.14`$ fm, respectively. Sample sizes ranged from 48 to 60 lattices, with four source time slices per lattice for spectrum calculations.

In addition we computed the meson spectrum on a set of eleven large ($`32^3\times 64`$) quenched lattices with a lattice spacing around $`0.07`$ fm, using the one plaquette gauge action at $`6/g_{conv}^2=6.15`$. For each of the three gauge couplings we computed spectra at two values of the quark mass, which allows us to interpolate results to make fair comparisons of the actions. (For the two flavor runs, the sea quark mass was also changed.) The meson masses were interpolated assuming that the squared meson masses are linear in the quark mass. One approach to comparing actions is to interpolate to a fixed ratio of $`m_G`$ to $`m_\rho `$, where $`m_G`$ is the Goldstone pion mass. In so doing, we are letting each action “determine its own lattice spacing”. Since the $`\rho `$ mass is also dependent on the valence quark action, this results in a slightly different estimate of the lattice spacing for each action. An alternate approach is to assume that each set of lattices has a fixed lattice spacing, which could be determined either from the rho mass for a fixed choice of action, or from the static quark potential. Operationally, this results in interpolating all the valence actions on a set of lattices to the same pion mass, and we choose the Goldstone pion for this interpolation. For the runs with $`10/g_{imp}^2=7.3`$ and $`7.5`$ we will present results with both sets of assumptions. For the fine lattices, with $`6/g_{conv}^2=6.15`$, the differences in the $`\rho `$ masses with different actions are insignificant, which is expected as the lattice spacing gets smaller.

As in Ref. , we parameterize the splitting of the pions by the dimensionless quantity

$$\delta _2=\frac{m_\pi ^2-m_G^2}{m_\rho ^2-m_G^2},$$ (9)

where $`m_\pi `$ is one of the non-Goldstone pion masses, $`m_G`$ is the Goldstone pion mass, and $`m_\rho `$ is one of the local $`\rho `$ masses. Since the $`\rho `$ masses are nearly degenerate, it makes little difference which one we use. In our analysis we used the local $`\gamma _i\gamma _i`$ ($`\rho _2`$) mass because for heavy quarks it is often estimated more accurately. We use the squared meson masses since they are approximately linear in the quark mass and we interpolate and extrapolate in quark mass. Empirically $`\delta _2`$ is fairly insensitive to the quark mass, and the theoretical analysis in Ref. predicts that the numerator is independent of quark mass for small quark masses. The squared Goldstone pion mass is subtracted in the denominator to give more sensible behavior at large quark mass. At large $`am_q`$, all of the meson masses become degenerate, but it is still possible to use the difference between the Goldstone pion mass and the rho mass as a natural scale for flavor symmetry breaking.

## 4 Results

Figure 4 shows the spectrum of pion masses obtained with the “Staple+Naik” action at $`10/g_{imp}^2=7.3`$ and $`7.5`$, respectively. Here, as in all of our calculations, the pattern of near-degeneracies predicted in Ref. is evident. Note that the local non-Goldstone pion, the flavor $`\gamma _0\gamma _5`$ pion, is one of the lightest non-Goldstone pions, so a realistic assessment of the flavor symmetry breaking requires consideration of the nonlocal pions. All of the actions we have tested produce a qualitatively similar pattern of pion masses.
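In practice the $`\delta _2`$ bookkeeping amounts to a linear interpolation of squared masses in the bare quark mass followed by the ratio of Eq. (9); a minimal sketch is given below. The mass values there are placeholders chosen only to show the mechanics, not our measured masses; interpolating to fixed $`m_G/m_\rho =0.55`$ simply means choosing the target quark mass so that the interpolated ratio takes that value.

```python
import numpy as np

def interp_msq(m_q, msq, m_q_target):
    """Linear interpolation of a squared meson mass between two quark masses."""
    (m1, m2), (y1, y2) = m_q, msq
    return y1 + (y2 - y1) * (m_q_target - m1) / (m2 - m1)

def delta2(m_pi2, m_G2, m_rho2):
    # Eq. (9): flavor-symmetry-breaking parameter
    return (m_pi2 - m_G2) / (m_rho2 - m_G2)

m_q = (0.02, 0.04)                               # the two valence quark masses of a run
mG2   = interp_msq(m_q, (0.060, 0.118), 0.03)    # Goldstone pion (placeholder values)
mpi2  = interp_msq(m_q, (0.105, 0.160), 0.03)    # a non-Goldstone pion
mrho2 = interp_msq(m_q, (0.36, 0.42), 0.03)      # local rho
print("delta_2 =", delta2(mpi2, mG2, mrho2))
```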
The improvement of the flavor symmetry breaking is evidenced by the smaller non-Goldstone pion masses for the improved actions. In this figure we have determined the lattice spacing for each action using the $`\rho `$ mass evaluated with this action. Thus, even though the spectra for the different actions are evaluated on the same sets of lattices, they appear at different horizontal positions. However, as we would hope, the ambiguity in lattice spacing coming from the difference in $`\rho `$ mass with action becomes smaller at the smaller lattice spacings. (It is neither surprising nor upsetting that the $`\rho `$ mass should depend on the action. After all, in the end we expect that these actions will have better scaling behavior in all quantities, meaning that we should come closer to the continuum limit at these large lattice spacings. Among other things, this means that we expect the nucleon to rho mass ratio, a well known problem on coarse lattices, to be improved with these actions.) In Table 2 we show $`\delta _2`$ for the various actions. We tabulate this for only three of the pions: the local non-Goldstone pion with flavor structure $`\gamma _0\gamma _5`$, and the two three-link pions, with flavor structures $`\gamma _0`$ and $`\mathrm{𝟏}`$. The local pion is the most often studied one, and so allows comparison with other work. The flavor $`\mathrm{𝟏}`$ is the worst case pion. The other three link pion is in the second worst multiplet, but it is interesting to study because its parity partner would have exotic quantum numbers, so the propagator can be fit to a simple exponential, leading to smaller errors for the mass estimates. We note in passing that the flavor $`\mathrm{𝟏}`$ pion is properly called a “pion” here instead of an “eta”, since we did not compute quark-line disconnected diagrams in the propagator. In other words, one may imagine that the quark and antiquark in this pion carry a flavor quantum number in addition to that coming from the Kogut-Susskind quarks’ natural four flavors, and so cannot annihilate each other. Thus, it is correct to demand that an improved action should make this pion degenerate with the others. In order to make a fair comparison of the actions, we have interpolated (or extrapolated) the spectrum to a fixed $`m_G/m_\rho =0.55`$ point. This is done using two different bare masses for the valence quarks. We also interpolated (or extrapolated) our spectrum to a fixed $`m_G=545MeV`$ using the heavy quark potential parameter $`r_0`$ in order to fix the scale. As we can see from the table, the computed $`\delta _2`$’s in both cases are in agreement with in errors. The errors in both cases were computed with jackknife analysis. Since $`\delta _2`$ involves a ratio of mass differences, and the different masses are all correlated, naive error propagation would lead to an overestimate of the errors in $`\delta _2`$. Therefore we used a jackknife analysis to compute these errors. When comparing actions, we are interested in the difference in $`\delta _2`$ between the two actions. Since we used the same lattices for all the actions, these $`\delta _2`$’s are not independent, and one should really do a jackknife analysis in order to determine the errors on the differences of $`\delta _2`$’s. We have done this and we find in general the same error as the one computed with error propagation from the quoted $`\delta _2`$ errors. 
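A minimal delete-one jackknife for $`\delta _2`$ of the kind described above might look as follows; in the real analysis the masses come from correlated fits on each jackknife subsample rather than from per-configuration estimates, so this is only a sketch of the error propagation.

```python
import numpy as np

def jackknife_delta2(m_pi2, m_G2, m_rho2):
    """Delete-one jackknife for delta_2 from per-configuration squared-mass estimates.
    Building the ratio sample by sample lets the correlations between the three
    masses (measured on the same configurations) cancel instead of being ignored."""
    m_pi2, m_G2, m_rho2 = map(np.asarray, (m_pi2, m_G2, m_rho2))
    n = len(m_pi2)
    samples = np.array([
        (np.mean(np.delete(m_pi2, i)) - np.mean(np.delete(m_G2, i))) /
        (np.mean(np.delete(m_rho2, i)) - np.mean(np.delete(m_G2, i)))
        for i in range(n)
    ])
    center = samples.mean()
    err = np.sqrt((n - 1) / n * np.sum((samples - center) ** 2))
    return center, err
```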
Our results show that all the variants of the fat actions significantly improve flavor symmetry, with larger improvement for the actions that suppress more couplings to gluons with transverse momentum $`\pi `$. Generally, tadpole improving the tree level coefficients results in a better action. The approximately reunitarized action with coefficient 0.25 seems to work better than the Staple+Naik action. We have no clear understanding of why this happens, but this may be a hint that a more careful (non-perturbative) tuning of the fattening coefficients would result in better actions. As expected, the Fat7tad action is the best for suppressing flavor symmetry violations. The Asqtad action is slightly worse than the Fat7tad action in this regard. Extra $`O(a^4)`$ flavor symmetry violation introduced by the Naik term and the Lepage term may be responsible for this. However, the Asqtad action has improved rotational symmetry due to the Naik term, and has no additional $`O(a^2)`$ flavor conserving errors introduced by the fattening. Thus, of all the actions we have studied, we consider this one to be the best candidate for an improved Kogut-Susskind action. In Ref. it was shown that smearing the links by “Ape smearing”, where the link and staples are averaged and the result in projected back onto SU(3) (i.e. replaced by the SU(3) matrix which maximizes $`Tr(U^{}F)`$, where $`F`$ is the fattened link) improved the flavor symmetry, among other nice features. We find similar results on our set of sample lattices. It is interesting to compare the “Ape1” action in Table 2 with the “Staple-un(.25)” action, since the difference between the two is that the Ape1 action uses links that are exactly unitary, while the Staple-un(.25) uses the same fattening, but is only unitary to first order in the staple weight. We see that the approximately unitary action is only slightly worse than the Ape1 action. Also, we see as expected that that the Ape4 valence action gives a very good suppression of flavor symmetry breaking, although using it for a dynamical action is difficult. The approximately reunitarized actions seem to have slightly better flavor symmetry breaking than the comparable Link+Staple action, but probably not enough to justify the extra complexity. Another practical advantage of the fat link actions is that the fat link configurations are smoother than the original configurations, and the conjugate gradient computation of the propagators converges in fewer iterations. Quantifying this statement is tricky because, unlike the flavor symmetry breaking, where $`\delta _2`$ is approximately independent of quark mass, the number of conjugate gradient iterations is very sensitive to the quark mass. For example, at $`10/g_{imp}^2`$ at a fixed bare quark mass of 0.02 the Link+Staple action requires 21% fewer conjugate gradient iterations than the conventional action. However, after interpolation to $`m_G/m_\rho =0.55`$ this advantage disappears, and both of these actions require about the same number of iterations. On the coarse lattice the approximately unitary actions do better — the Staple-un(.25) action and the Ape1 action require about 10% fewer iterations than the conventional or Link+Staple actions, and the Ape4 21% fewer. On the fine lattice, with $`6/g_{conv}^2=6.15`$, there is much less ambiguity since the various pion masses are much closer. Also, the fattening does a much better job of smoothing the configurations on the fine lattice. 
In this case the Link+Staple action requires 46% fewer iterations than the conventional action, and the Fat7 action 52% fewer. However, the advantage of the approximately reunitarized action has disappeared – it requires just about as many iterations as the Link+Staple. Regrettably, the Asqtad action does not do as well as the Fat7 action, only reducing the number of iterations by 37% as compared to the conventional action. This is probably because of the negative coefficient associated with the Lepage term (see Table 1), meaning that this term is actually undoing part of the smoothing accomplished by the other paths. We should note that with these complicated actions the cost of simulation with dynamical quarks is no longer completely dominated by the conjugate gradient. Except for very light quark masses, the cost of computing the fermion force and precomputing the fat links becomes comparable to the conjugate gradient cost. ## 5 Conclusions In this paper we have investigated the possibility of constructing a Kogut-Susskind action with improved flavor and rotational symmetry suitable for dynamical fermion simulations. For this reason we want to keep the amount of new paths introduced into the action as small as possible. An action containing fat links with paths up to length seven was constructed. At tree level this action has no couplings of quarks to gluons with a transverse momentum component $`\pi `$. As a result, at tree level the flavor symmetry violating terms in the action are completely removed. In addition to flavor symmetry, the rotational symmetry is improved by introducing the Naik term. Finally, as Lepage pointed out, we need to introduce an extra five link staple in order to cancel errors of $`O(a^2p^2)`$ introduced by the fattening. The resulting action can be further improved by tadpole improvment. This action, which we call “Asqtad”, is an order $`O(a^4,a^2g^2)`$ accurate fermion action. Asqtad is an action simple enough to be useful for dynamical simulations. Preliminary tests using our generic code show about a factor of 4 higher cost than the code implementing the standard Kogut-Susskind action. We have found optimizations specific to the Asqtad action that could bring the cost factor down to 2-2.5. The $`O(a^2)`$ precision at a cost of a factor of 2-2.5 makes such an action very competitive with the other popular improved actions such as D234, perfect or approximately perfect actions, the Neuberger action, and domain wall fermion actions. The highly improved chiral symmetry of actions respecting (or approximately respecting) the Ginsparg-Wilson relation is not something that Asqtad can compete with. On the other hand, cost may favor the Asqtad action. The Neuberger action, approximately perfect actions and domain wall fermions are all fairly costly to implement for dynamical fermions. In view of the enormous price one has to pay in order to have highly improved chiral symmetry on the lattice, we think that the Asqtad action is a good candidate for a fermion action to be used in the next generation of dynamical simulations. Flavor symmetry breaking in the traditional Kogut-Suskind action at lattice spacings commonly used in high temperature QCD studies results in pions as heavy as the kaons, making it impossible to study the effects of the strange quark. Our study of the Asqtad action shows that one can achieve a good separation between the pions and the kaons at accessible lattice spacings. 
Thus Asqtad is an action that may prove very useful in projects in which the effects of the strange quark are to be studied. ## Acknowledgements This work was supported by the U.S. Department of Energy under contract DE-FG03-95ER-40906 and by the National Science Foundation grant number NSF–PHY97–22022. Computations were done on the Paragon at Oak Ridge National Laboratory, and the T3E’s at NERSC, NPACI and the PSC. We would like to thank Peter Lepage for helpful communications, and the members of the MILC collaboration for inspiration and many discussions.
no-problem/9903/astro-ph9903392.html
ar5iv
text
# The Question of the Peak Separation in the Vela Pulsar1footnote 11footnote 1to be published in Proc. 19th Texas Symposium (Mini-symposium: “Pulsars and Neutron Stars”) ## 1 Introduction In a recent analysis of gamma-ray double-peak pulses of the Crab, the Vela and Geminga, as detected by EGRET , Kanbach (1999) pointed to an intriguing possibility that for the Vela pulsar the phase separation ($`\mathrm{\Delta }^{\mathrm{peak}}`$) between the two peaks may actually be energy dependent. The energy-averaged value of the separation is very large in all three objects, between 0.4 and 0.5 (Fierro, Michelson & Nolan 1998), and the effect by itself is of the order of a few percent or less. In the case of the Vela pulsar (the object with the best photon statistics), the plot of $`\mathrm{\Delta }^{\mathrm{peak}}`$ against energy (Fig.2 middle panel of Kanbach 1999) shows that $`\mathrm{\Delta }^{\mathrm{peak}}`$ is decreasing by about $`5\%`$ over 20 energy intervals covering the range between $`50\mathrm{MeV}`$ and $`9\mathrm{GeV}`$. The scatter of the points seems to us, however, also consistent with the separation staying at a constant level of $`0.43`$, provided that we reject the lowest and the highest energy interval. The problem raised by Kanbach is important by itself from a theoretical point of view, regardless of whether his finding becomes well established or not. With future high-sensitivity missions like GLAST, any firm empirical relation between the peak-separation $`\mathrm{\Delta }^{\mathrm{peak}}`$ and the photon energy $`ϵ`$ may serve as a tool to verify some models of pulsar activity. The presence of two peaks in gamma-ray pulses with large (0.4 - 0.5) phase separation may be understood within a scenario of a single canonical polar cap (e.g. Daugherty & Harding 1996, Miyazaki & Takahara 1997). One need to assume, however, a nearly aligned rotator, i.e. a rotator where three characteristic angles are of the same order: $`\alpha `$ \- the angle between spin axis $`\stackrel{}{\mathrm{\Omega }}`$ and the magnetic moment $`\stackrel{}{\mu }`$, $`\theta _\gamma `$ \- the opening angle between a direction of the gamma-ray emission and $`\stackrel{}{\mu }`$, and $`\zeta `$ \- the angle between $`\stackrel{}{\mathrm{\Omega }}`$ and the line of sight. For a canonical polar cap and instant electron acceleration, $`\theta _\gamma `$ roughly equals $`0.02/\sqrt{P}`$ radians only (where $`P`$ denotes a spin period). To avoid uncomfortably small characteristic angles, Daugherty & Harding (1996) postulated that primary electrons come from more extended polar caps, and with the acceleration occuring at a height $`h`$ of several neutron-star radii $`R_{\mathrm{ns}}`$. The latter assumption may be justified by a GR effect found by Muslimov & Tsygan (1992). The aim of this paper is to present general properties of $`\mathrm{\Delta }^{\mathrm{peak}}(ϵ)`$ obtained numerically (and to some degree semi-analytically) for four simplified versions of a polar-cap activity model. In Section 2 we outline the model. Section 3 describes the results and offers some explanation of the presented effects. Conclusions follow in Section 4. ## 2 The Model We use a polar cap model with beam particles (primary electrons) distributed evenly along a hollow cone formed by the magnetic field lines from the outer rim of a canonical polar cap, i.e. 
with an opening angle $`\theta _{\mathrm{init}}=\theta _{\mathrm{pc}}`$, where $`\theta _{\mathrm{pc}}\approx (2\pi R_{\mathrm{ns}}/cP)^{1/2}`$ radians at the stellar surface level ($`h=0`$). In one case (model C, see below) we assume $`\theta _{\mathrm{init}}=2\theta _{\mathrm{pc}}`$. The essential ingredients of the high-energy processes were introduced by Daugherty & Harding (1982), with high-energy radiation due to curvature and synchrotron processes (CR and SR, respectively) induced by primary electrons accelerated to ultrarelativistic energies. The pulsar parameters are those of the Vela: the spin period $`P=0.0893\mathrm{s}`$, and the dipolar magnetic field at the polar cap $`B_{\mathrm{pc}}\approx 10^{12}\mathrm{G}`$. Within the hollow-cone geometry we have considered three scenarios for electron acceleration: model A, in which beam particles are injected at a height $`h_{\mathrm{init}}=0`$ with some initial ultrarelativistic energy $`E_{\mathrm{init}}`$ (the values of $`E_{\mathrm{init}}`$ are listed in Table 1) and no subsequent acceleration; model B, similar to model A but with beam particles injected at a height $`h_{\mathrm{init}}=1R_{\mathrm{ns}}`$; and model C, in which beam particles are injected at a height $`h_{\mathrm{init}}=2R_{\mathrm{ns}}`$ with a low energy $`E_{\mathrm{init}}`$ and then accelerated by a longitudinal electric field $`\mathcal{E}`$ present over a characteristic scale height $`\mathrm{\Delta }h=0.6R_{\mathrm{ns}}`$, resulting in a total potential drop $`V_0`$:

$$\mathcal{E}(h)=\{\begin{array}{cc}V_0/\mathrm{\Delta }h,\hfill & \mathrm{for}\ h_{\mathrm{init}}\le h\le h_{\mathrm{init}}+\mathrm{\Delta }h\hfill \\ 0,\hfill & \mathrm{elsewhere}.\hfill \end{array}$$ (1)

For comparison, we considered a model with a uniform electron distribution over the entire polar cap surface (i.e. $`\theta _{\mathrm{init}}\in [0,\theta _{\mathrm{pc}}]`$): model D, in which beam particles are injected at a height $`h_{\mathrm{init}}=0`$ with initial ultrarelativistic energy $`E_{\mathrm{init}}`$ as in model A, and no subsequent acceleration. The values of $`E_{\mathrm{init}}`$ in models A, B and D, and the potential drop $`V_0`$ in model C, were chosen to yield a similar number of secondary pairs, about $`10^3`$ per beam particle. Table 1 summarizes the properties of the models.

Table 1. Model parameters.

| | $`B_{\mathrm{pc}}`$ \[$`10^{12}`$ G\] | $`\alpha `$ \[deg\] | $`h_{\mathrm{init}}`$ \[$`R_{\mathrm{ns}}`$\] | $`\theta _{\mathrm{init}}`$ \[$`\theta _{\mathrm{pc}}`$\] | primary electrons |
| --- | --- | --- | --- | --- | --- |
| Model A | 1.0 | 3.0 | 0.0 | 1.0 | $`E_{\mathrm{init}}=8.68`$ TeV, no acceleration |
| Model B | 1.0 | 5.0 | 1.0 | 1.0 | $`E_{\mathrm{init}}=20.0`$ TeV, no acceleration |
| Model C | 3.0 | 10.0 | 2.0 | 2.0 | $`E_{\mathrm{init}}=0.5\mathrm{MeV}`$, acceleration (see eq. (1)) with $`V_0=2.5\times 10^7`$ volts |
| Model D | 1.0 | 3.0 | 0.0 | $`[0,1]`$ | $`E_{\mathrm{init}}=8.68`$ TeV, no acceleration |

## 3 Results

We have calculated numerically the pulse shapes as a function of photon energy for all four models. The main difference between models A and B is due to the different values of $`h_{\mathrm{init}}`$, which result in different locations of origin of the secondary particles. Changing these locations is an easy way to modify the spectral properties of the emergent radiation and makes it possible to change (preferably, to increase) the angle $`\alpha `$ as constrained by the observed energy-averaged peak separation $`\mathrm{\Delta }^{\mathrm{peak}}\approx 0.43`$ (Kanbach 1999).
For a given model there are two possible values of the angle $`\zeta `$ resulting in a desired peak separation $`\mathrm{\Delta }_{\mathrm{peak}}`$ at some energy $`ϵ`$ (which we choose to be $`1\mathrm{GeV}`$), and we took the larger one in each case: $`\zeta =3.75,4.5,15.`$, and $`3.65`$ degrees for models A, B, C, and D, respectively. In Fig. 1 we present dependency of peak separation on photon energy for all four models. Within a ‘low-energy‘ range (below a few GeV) the peak separation either remains constant (Model B) or slightly decreases with the increasing photon energy (Models A, C, and D). At a critical energy $`ϵ_{\mathrm{turn}}`$ around a few GeV, the separation $`\mathrm{\Delta }^{\mathrm{peak}}`$ undergoes a sudden turn: for $`ϵ>ϵ_{\mathrm{turn}}`$ it either rapidly increases (models A, B and C) or rapidly decreases (model D). For comparison, the overall trend (taking into account a substantial scatter of points) resulting from the analysis by Kanbach (see Fig.2, middle panel, of Kanbach 1999) is marked schematically. It would be easy to understand why $`\mathrm{\Delta }^{\mathrm{peak}}`$ should stay at a constant level in the range $`ϵ<ϵ_{\mathrm{turn}}`$ if the gamma-ray photons were due exclusively to CR (especially in models with no acceleration). A preferred direction of this emission would be set up already at the lowest magnetospheric altitudes. This is because a monotonic decrease of the electron’s energy $`E`$, and initial increase in the dipolar curvature radius $`\rho _{\mathrm{cr}}`$ make the level of contribution to the one-particle CR-spectrum at all energies to be the highest one just at the initial altitude $`h_{\mathrm{init}}`$. However, in the energy range below a few GeV the gamma-ray emission in our models is dominated by the synchrotron radiation (SR) due to secondary $`e^\pm `$ pairs in the cascades. The resulting behaviour of $`\mathrm{\Delta }^{\mathrm{peak}}`$ \- either its slight decrease with increasing $`ϵ`$ (models A and C) or no change at all (model B) - is actually mediated by several factors which influence directional and spectral properties of the SR. These include energy and pitch angle distributions of secondary $`e^\pm `$ pairs, as well as the spatial spread of these pairs within the magnetospere (see e.g. Rudak & Dyks 1999). These factors do change from one model to another. At the energy $`ϵ_{\mathrm{turn}}`$ the separation $`\mathrm{\Delta }^{\mathrm{peak}}`$ undergoes a sudden turn and starts changing rapidly as $`ϵ`$ increases. (This occurs above $`1\mathrm{GeV}`$, where most photons are due to CR.) This is the regime where the position of each peak in the pulse is determined by the magnetospheric opacity due to $`\gamma \stackrel{}{B}e^\pm `$. For the hollow-cone models (A, B and C) the photons in both peaks of a pulse come from low magnetospheric altitudes with narrow opening angles. When $`ϵ`$ is high enough these photons will be absorbed by the magnetic field with subsequent pair-creation. In other words, inner parts of the ‘original’ peaks in the pulse will be eaten-up and the gap between the peaks (i.e. the peak separation) will increase. Photons which now found themselves in the ‘new’ peaks come from higher altitudes (the magnetosphere is transparent to them) and have wider opening angles. To quantify this line of arguments we present a simple semi-analytical solution for $`\mathrm{\Delta }^{\mathrm{peak}}`$ as a function of $`ϵ`$, which reproduces with astonishing accuracy our Monte Carlo results (see Fig.1). 
For each point (at a given radial coordinate $`r`$) lying on a magnetic field line with a known angular coordinate $`\theta `$ on the stellar surface, one defines the 'escape energy' (Harding et al. 1997), which is the upper limit $`ϵ_{\mathrm{esc}}`$ on the photon energy if the photon is to avoid magnetic absorption when propagating outwards. We have approximated $`ϵ_{\mathrm{esc}}`$ with a power-law formula $`ϵ_{\mathrm{esc}}(r)=a\left(r/R_{\mathrm{ns}}\right)^b`$ MeV, where the values of $`a`$ and $`b`$ were found by fits to numerical solutions for each model. We found $`a\simeq 7.83\times 10^2`$, $`b\simeq 2.49`$ for $`B=10^{12}\mathrm{G}`$ and $`\theta =\theta _{\mathrm{pc}}`$ (models A and B), and $`a\simeq 1.25\times 10^2`$, $`b\simeq 2.50`$ for $`B=3\times 10^{12}\mathrm{G}`$ and $`\theta =2\theta _{\mathrm{pc}}`$ (model C). Photons of some energy $`ϵ`$ will escape the magnetosphere only when they are emitted at $`r\ge r_{\mathrm{esc}}`$, where $`r_{\mathrm{esc}}`$ is the solution of the equation $`ϵ_{\mathrm{esc}}(r)=ϵ`$.

Let us now assume for simplicity that the photons which form the 'new' peaks originate just at $`r=r_{\mathrm{esc}}(ϵ)`$ (this is a reasonable assumption, especially for the models with no acceleration). With the coordinate $`r`$ of such an emitting ring determined, we calculate the corresponding opening angle (for dipolar magnetic field lines) and then $`\mathrm{\Delta }^{\mathrm{peak}}`$. In Fig. 1 the blue lines represent the values of $`\mathrm{\Delta }^{\mathrm{peak}}(ϵ)`$ found semi-analytically, while the red dots are the Monte Carlo results. This branch of the solution intersects the horizontal line set by $`\mathrm{\Delta }^{\mathrm{peak}}=0.43`$ at $`ϵ_{\mathrm{turn}}\simeq 0.9`$, $`4.5`$, and $`3`$ GeV for models A, B, and C, respectively.

For model D, with a uniform distribution of primary electrons over the polar cap (but otherwise identical to model A), the changes of $`\mathrm{\Delta }^{\mathrm{peak}}`$ above $`ϵ_{\mathrm{turn}}`$ occur in the opposite sense. Unlike in the previous models, here both peaks of the pulse are formed by photons emitted along magnetic field lines attached to the polar cap at some opening angle $`\theta _{\mathrm{init}}<\theta _{\mathrm{pc}}`$. These photons are less attenuated than those coming from the outer rim, and in consequence the peak separation drops. Similar behaviour was obtained by Miyazaki & Takahara (1997) in their model of a homogeneous polar cap. Regardless of the actual shape of the active part of the polar cap (i.e. the part 'covered' with primary electrons) - either an outer rim, an entire cap, or a ring - one does expect in general strong changes in the peak separation to occur at photon energies close to the high-energy spectral cutoff due to magnetic absorption. To illustrate this, the lower panels of Fig. 1 present the energy output per logarithmic energy bandwidth at the first peak as a function of photon energy $`ϵ`$ for models A, B, C and D.
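
Before turning to the summary, the semi-analytical recipe above can be put into a few lines of code. The sketch below is only an illustrative implementation under additional simplifying assumptions not spelled out in the text: a small-angle dipole geometry (field-line colatitude growing as $`\theta \simeq \theta _{\mathrm{pc}}(r/R_{\mathrm{ns}})^{1/2}`$, tangential emission at about $`1.5\theta `$ from the magnetic axis) and the standard spherical-geometry relation between the emission-cone half-angle, the angles $`\alpha `$ and $`\zeta `$, and the phase separation of the two peaks. The numbers it prints are therefore indicative only.

```python
import numpy as np

# Power-law fit to the escape energy, eps_esc(r) = a*(r/R_ns)**b [MeV]
# (values quoted above for B = 1e12 G and theta = theta_pc, i.e. models A and B).
A_FIT, B_FIT = 7.83e2, 2.49

def r_esc(eps_mev, a=A_FIT, b=B_FIT):
    """Lowest emission radius (in units of R_ns) from which photons of energy eps escape."""
    return (eps_mev / a) ** (1.0 / b)

def delta_peak(eps_mev, alpha_deg, zeta_deg, theta_pc=0.0484):
    """Peak separation (in rotational phase) for eps > eps_turn, hollow-cone geometry.

    Assumes photons of the 'new' peaks are emitted at r = r_esc(eps), on field lines
    with colatitude theta ~ theta_pc*sqrt(r/R_ns), tangentially at ~1.5*theta from the
    magnetic axis (emission-cone half-angle rho), and uses the spherical relation
    cos(rho) = cos(alpha)cos(zeta) + sin(alpha)sin(zeta)cos(pi*Delta_peak).
    theta_pc ~ 0.048 rad corresponds to P = 0.0893 s and an assumed R_ns = 10 km.
    """
    rho = 1.5 * theta_pc * np.sqrt(r_esc(eps_mev))
    a, z = np.radians(alpha_deg), np.radians(zeta_deg)
    x = (np.cos(rho) - np.cos(a) * np.cos(z)) / (np.sin(a) * np.sin(z))
    return np.arccos(np.clip(x, -1.0, 1.0)) / np.pi

# Model A: alpha = 3 deg, zeta = 3.75 deg
for eps_gev in (1.0, 2.0, 4.0):
    print(f"{eps_gev:3.1f} GeV -> Delta_peak ~ {delta_peak(1e3 * eps_gev, 3.0, 3.75):.2f}")
```

For model A this gives a separation close to $`0.43`$ near $`1`$ GeV and a rapid increase above it, qualitatively matching the semi-analytical branch described above.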
## 4 Summary

Motivated by the recent suggestion (Kanbach 1999) that the peak separation of the gamma-ray pulses of the Vela pulsar may be energy dependent over the range $`50\mathrm{MeV}`$ to $`9\mathrm{GeV}`$, we calculated the gamma-ray pulses expected in polar-cap models with magnetospheric activity induced by curvature radiation of beam particles. Two types of geometry of the magnetospheric column above the polar cap were assumed: a hollow-cone column attached to the outer rim of the polar cap, and a filled column. Four models were considered, with three scenarios for beam acceleration.

The emission showing up as double-peak pulses is a superposition of curvature radiation due to the beam particles and synchrotron radiation due to the secondary $`e^\pm `$ pairs in the cascades. The changes in the peak separation were investigated with Monte Carlo numerical simulations and then reproduced (to some extent) with semi-analytical methods. We found that in the energy range $`ϵ<ϵ_{\mathrm{turn}}`$ (where $`ϵ_{\mathrm{turn}}`$ is a few GeV) the peak separation $`\mathrm{\Delta }^{\mathrm{peak}}`$ either slightly decreases with increasing photon energy $`ϵ`$, at a rate consistent with Kanbach (1999), or stays at a constant level. The gamma-ray emission in this range is dominated by synchrotron radiation in all four models considered. The actual behaviour of $`\mathrm{\Delta }^{\mathrm{peak}}`$ depends on the physical properties of the pairs as well as on their spatial extent in the magnetosphere, which vary from one model to the other.

At $`ϵ\simeq ϵ_{\mathrm{turn}}`$ the peak separation $`\mathrm{\Delta }^{\mathrm{peak}}`$ makes an abrupt turn, and for $`ϵ>ϵ_{\mathrm{turn}}`$ it changes dramatically: it increases in the hollow-cone models (A, B, C) and decreases in the filled-column model (D), at a rate of $`\sim 0.28`$ in phase per decade of photon energy. This is due to magnetic absorption effects ($`\gamma \stackrel{\vec{B}}{\rightarrow }e^\pm `$). The numerical behaviour of $`\mathrm{\Delta }^{\mathrm{peak}}`$ in the hollow-cone models was reproduced with good accuracy by a simple semi-analytical treatment of the condition of magnetospheric transparency for a photon of energy $`ϵ`$, originating at a given point in the dipolar magnetic field and propagating outwards in a given direction. The value of $`ϵ_{\mathrm{turn}}`$ is model dependent, and for the cases considered here it stays between $`0.9\mathrm{GeV}`$ and $`4.5\mathrm{GeV}`$.

To find such a hypothetical turnover of $`\mathrm{\Delta }^{\mathrm{peak}}`$ in real observational data would require, however, high-sensitivity detectors, since for $`ϵ>ϵ_{\mathrm{turn}}`$ the expected flux of gamma-rays drops significantly (Fig. 1, lower panels). If observed, this turnover would be a signature of polar-cap activity in gamma-ray pulsars, with high-energy cutoffs in their spectra due to magnetic absorption. A detailed account of the effects presented above will be given elsewhere (Dyks & Rudak 1999, in preparation).

## Acknowledgments

This work has been financed by the KBN grants 2P03D-00911 and 2P03D-01016. We are grateful to Gottfried Kanbach for discussions on the EGRET data of the Vela pulsar.

## References

Daugherty, J.K., Harding, A.K., 1982, ApJ, 252, 337
Daugherty, J.K., Harding, A.K., 1996, ApJ, 458, 278
Fierro, J.M., Michelson, P.F., Nolan, P.L., 1998, ApJ, 494, 734
Harding, A.K., Baring, M.G., Gonthier, P.L., 1997, ApJ, 476, 246
Kanbach, G., 1999, Proceedings of the 3rd INTEGRAL Workshop, in press
Miyazaki, J., Takahara, F., 1997, MNRAS, 290, 49
Muslimov, A.G., Tsygan, A.I., 1992, MNRAS, 255, 61
Rudak, B., Dyks, J., 1999, MNRAS, 303, 477
# Sum Rule of the Hall Conductance in Random Quantum Phase Transition

## Abstract

The Hall conductance $`\sigma _{xy}`$ of two-dimensional lattice electrons with random potential is investigated. We focus on the change of $`\sigma _{xy}`$ due to randomness, which is a quantum phase transition in which the sum rule of $`\sigma _{xy}`$ plays an important role. Using the string (anyon) gauge, a numerical study becomes possible in the sufficiently weak magnetic field regime, which is essential for discussing the floating scenario of the continuum model. Topological objects in the Bloch wavefunctions, charged vortices, are obtained explicitly. The anomalous plateau transitions ($`\mathrm{\Delta }\sigma _{xy}=2,3,\dots >1`$) and the trajectory of the delocalized states are discussed.

Effects of randomness are crucial in the quantum Hall effect (QHE). According to the scaling theory of Anderson localization, all states in two dimensions are localized by randomness. There are, however, some exceptions. Symmetry effects, which govern the universality classes of Anderson localization, allow the existence of delocalized states in several two-dimensional systems. For example, states at the center of each Landau band become delocalized in the presence of a strong magnetic field. They are not extended in the usual manner but critical, with an associated multifractal character. They play an essential role in the quantization of the Hall conductance, and their behavior determines the plateau transition in the QHE.

The plateau transition occurs as the strength of randomness or the magnetic field is varied. It is a typical quantum phase transition, and the Hall conductance characterizes each phase. A non-zero Hall conductance means the existence of delocalized states below the Fermi energy. When the randomness is sufficiently strong, the system is expected to become an Anderson insulator, which implies the disappearance of the delocalized states below the Fermi energy. If we assume that the delocalized states do not disappear discontinuously, they must float upward across the Fermi energy. This is the floating scenario for the delocalized states. It was later extended in the discussion of the global phase diagram. The scenario also predicts a selection rule between different integer quantum Hall states: transitions with $`\mathrm{\Delta }\sigma _{xy}\ne \pm 1`$ are prohibited. On the other hand, the breakdown of this selection rule is observed in some experiments and numerical simulations.

In this paper, based on topological arguments for the Hall conductance and a numerical study of the lattice model, we try to clarify these points. The plateau transition in the QHE is a quantum phase transition where the sum rule restricts the transition type; it is therefore also of interest as a quantum phase transition problem. There are several studies on the delocalized states of two-dimensional lattice electrons with uniform magnetic field and random potential. Here we clarify the topological nature of the Bloch wavefunctions and the Hall conductance in the sufficiently weak magnetic field regime, which has a close connection with the continuum model and may shed some light on the experiments. The physical reason for the anomalous plateau transition is stated clearly for the first time in this paper.
The Hamiltonian is defined on a two-dimensional square lattice as
$$H=\sum _{\langle l,m\rangle }c_l^{\dagger }e^{i\theta _{lm}}c_m+\mathrm{h}.\mathrm{c}.+\sum _nw_nc_n^{\dagger }c_n,$$
where $`c_n^{\dagger }`$ ($`c_n`$) creates (annihilates) an electron at site $`n`$ and $`\langle l,m\rangle `$ denotes nearest-neighbor sites. The magnetic flux per plaquette, $`\varphi `$, is given by $`\sum _{\mathrm{plaquette}}\theta _{lm}=2\pi \varphi `$, where the summation runs over the four links around a plaquette. The last term, $`w_n=Wf_n`$, is the strength of the random potential at site $`n`$, where the $`f_n`$'s are uniform random numbers in $`[-1/2,1/2]`$. Although the system is infinite, a two-dimensional periodicity of $`L_x\times L_y`$ is imposed on $`\theta _{lm}`$ and $`w_n`$ (the infinite-size limit corresponds to $`L_x,L_y\to \infty `$). When the randomness is weak and the temperature is sufficiently low, the interactions between the electrons may play a dominant role. However, we focus on the situation where they are not important and use this non-interacting model.

When the Fermi energy lies in the lowest $`j`$-th energy gap, the Hall conductance $`\sigma _{xy}`$ (in units of $`e^2/h`$) is obtained by summing the Chern numbers $`C_n`$ below the Fermi energy,
$$\sigma _{xy}=\sum _{n=1}^jC_n,$$
$$C_n=\frac{1}{2\pi i}\int d^2k\,\widehat{z}\cdot \left(\mathbf{\nabla }_k\times 𝑨_n\right),\qquad 𝑨_n=\langle u_n(𝒌)|\mathbf{\nabla }_k|u_n(𝒌)\rangle ,$$
where $`|u_n(𝒌)\rangle `$ is a Bloch wavefunction of the $`n`$-th energy band with $`L_xL_y`$ components and $`u_n^\gamma (𝒌)`$ is its $`\gamma `$-th component. The integration is over the Brillouin zone. Arbitrarily choosing the $`\alpha `$-th and $`\beta `$-th components of the wavefunction and focusing on the winding number (vorticity, or charge) at each zero point (vortex) of the $`\alpha `$-th component, the expression can be rewritten as
$$C_n=\sum _{\ell }N_{n\ell },\qquad N_{n\ell }=\frac{1}{2\pi }\oint _{\partial R_{\ell }}d𝒌\cdot \mathbf{\nabla }_k\,\mathrm{Im}\,\mathrm{ln}\left(\frac{u_n^\alpha (𝒌)}{u_n^\beta (𝒌)}\right),$$
where $`N_{n\ell }`$ is the charge of a vortex at $`𝒌_{\ell }`$ (a zero point of $`u_n^\alpha (𝒌)`$ in the Brillouin zone), $`R_{\ell }`$ is a region around $`𝒌_{\ell }`$ which contains no other zero points of the $`\alpha `$-th or $`\beta `$-th components, and $`\partial R_{\ell }`$ is its boundary. The arbitrariness in the choice of $`\alpha `$ and $`\beta `$ corresponds to a freedom in gauge fixing; in other words, the gauge choice does not affect observables such as the Hall conductance. However, we need to fix the gauge in order to obtain physical quantities, and gauge-dependent objects, e.g. the configuration of vortices, are helpful for understanding the physics. As discussed below, the sum rule of the Hall conductance can be clearly understood by tracing the vortices.

Here we comment on the implication of the Chern number in the infinite-size limit. If all states in the $`n`$-th band are localized, the corresponding Chern number vanishes, $`C_n=0`$. Therefore a non-zero $`C_n`$ means the existence of delocalized states. On the other hand, even if $`C_n=0`$, we cannot exclude the existence of delocalized states in a rigorous sense. However, this is the situation of the Hall insulator, and we believe that all the states in such a band become localized, as in the Anderson insulator of the orthogonal (not unitary) class. There are several previous studies on the Hall conductance (Chern number) in this model.
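
As an aside, a compact way to evaluate such Chern numbers numerically is the standard link-variable discretization of the Berry curvature over the Brillouin zone. The sketch below is an illustrative alternative to the vortex-counting procedure described above (for an isolated band and a sufficiently fine k-grid both give the same integer); the grid size and array layout are arbitrary choices of this sketch.

```python
import numpy as np

def chern_number(u):
    """Chern number of a single band from its Bloch vectors on an Nk x Nk grid.

    u[i, j, :] is the normalized eigenvector |u_n(k)> at the grid point
    k = (k_i, k_j); the Brillouin zone is treated as periodic in both directions.
    """
    Nk = u.shape[0]

    def link(a, b):
        # U(1) link variable <a|b>/|<a|b>|
        overlap = np.sum(np.conj(a) * b)
        return overlap / abs(overlap)

    total_flux = 0.0
    for i in range(Nk):
        for j in range(Nk):
            u00 = u[i, j]
            u10 = u[(i + 1) % Nk, j]
            u11 = u[(i + 1) % Nk, (j + 1) % Nk]
            u01 = u[i, (j + 1) % Nk]
            # Berry flux through one k-space plaquette (principal branch of the log)
            plaq = link(u00, u10) * link(u10, u11) * link(u11, u01) * link(u01, u00)
            total_flux += np.angle(plaq)

    return int(round(total_flux / (2.0 * np.pi)))
```

Summing the returned integers over the occupied bands gives $`\sigma _{xy}`$ in units of $`e^2/h`$.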
In this paper, the plateau transition in the sufficiently weak magnetic field regime is studied using the topological arguments and the numerical study. This regime has a close connection with the continuum model. In order to explore it, the choice of gauge is important; there are several choices for a given geometry of the system. Employing the novel string (anyon) gauge, we can study the topological nature of the Bloch wavefunctions in the sufficiently weak magnetic field regime. An example of the $`\theta _{ij}`$'s in the string gauge is shown for a $`3\times 3`$ square lattice in Fig. 1; the extension to other geometries is straightforward. Choosing a plaquette $`S`$ as the starting one, we draw outgoing arrows (strings) from the plaquette $`S`$. The $`\theta _{ij}`$ on a link $`ij`$ is given by $`2\pi \varphi n_{ij}`$, where $`n_{ij}`$ is the number of strings which cut the link $`ij`$ (with the orientation taken into account). Then it is clear that the magnetic flux is uniform except at the plaquette $`S`$. At the plaquette $`S`$, the condition of uniformity gives $`e^{i2\pi \varphi (L_xL_y-1)}=e^{-i2\pi \varphi }`$. It restricts the possible magnetic flux to
$$\varphi =\frac{n}{L_xL_y},\qquad n=1,2,\dots ,L_xL_y.$$
In the case of the standard Landau gauge in the $`x`$-direction, the smallest magnetic flux $`\varphi `$ compatible with the periodicity is $`\varphi =1/L_x`$, so a system with a rectangular geometry is needed to reach the weak magnetic field regime. In our string gauge, on the other hand, it is $`\varphi =1/(L_xL_y)`$. For a square $`L\times L`$ geometry this allows us to use a magnetic flux $`L`$ times smaller in the string gauge than in the Landau gauge. The string gauge thus enables us to study the sufficiently weak magnetic field regime.
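
A minimal sketch of how a string-type gauge and the corresponding Hamiltonian can be set up numerically is given below. The particular string routing used here (from $`S=(0,0)`$ first along $`+x`$, then along $`+y`$, to every other plaquette) is an assumption of this sketch rather than the pattern of Fig. 1, but any such routing gives a uniform flux $`\varphi =n/(L_xL_y)`$ per plaquette; the second helper assembles the real-space Hamiltonian defined above with the random potential.

```python
import numpy as np

def string_gauge_phases(Lx, Ly, n_flux):
    """Link phases on an Lx x Ly torus in a string-type gauge, flux phi = n_flux/(Lx*Ly).

    One string runs from the starting plaquette S = (0, 0) to every other plaquette,
    first along +x (in row 0) and then along +y.  theta_x[i, j] sits on the link
    (i, j) -> (i+1, j), theta_y[i, j] on the link (i, j) -> (i, j+1).
    """
    phi = n_flux / (Lx * Ly)
    theta_x = np.zeros((Lx, Ly))
    theta_y = np.zeros((Lx, Ly))
    for px in range(Lx):
        for py in range(Ly):
            if (px, py) == (0, 0):
                continue
            for a in range(px):                 # step from plaquette (a, 0) to (a+1, 0)
                theta_y[a + 1, 0] -= 2 * np.pi * phi
            for b in range(py):                 # step from plaquette (px, b) to (px, b+1)
                theta_x[px, b + 1] += 2 * np.pi * phi
    return theta_x, theta_y

def hamiltonian(Lx, Ly, n_flux, W, seed=0):
    """Real-space lattice Hamiltonian H = sum c^+ e^{i theta} c + h.c. + sum w_n c^+ c."""
    theta_x, theta_y = string_gauge_phases(Lx, Ly, n_flux)
    rng = np.random.default_rng(seed)
    idx = lambda i, j: (i % Lx) * Ly + (j % Ly)
    H = np.zeros((Lx * Ly, Lx * Ly), dtype=complex)
    for i in range(Lx):
        for j in range(Ly):
            H[idx(i, j), idx(i + 1, j)] = np.exp(1j * theta_x[i, j])
            H[idx(i, j), idx(i, j + 1)] = np.exp(1j * theta_y[i, j])
    H = H + H.conj().T
    for i in range(Lx):
        for j in range(Ly):
            H[idx(i, j), idx(i, j)] = W * (rng.random() - 0.5)   # w_n = W*f_n
    return H

# Uniformity check: the plaquette flux equals 2*pi*phi (mod 2*pi) on every plaquette.
Lx, Ly = 8, 8
tx, ty = string_gauge_phases(Lx, Ly, 1)
flux = np.array([[tx[i, j] + ty[(i + 1) % Lx, j] - tx[i, (j + 1) % Ly] - ty[i, j]
                  for j in range(Ly)] for i in range(Lx)])
print(np.allclose(np.exp(1j * flux), np.exp(2j * np.pi / (Lx * Ly))))   # -> True

energies = np.linalg.eigvalsh(hamiltonian(Lx, Ly, 1, W=1.0))
```

For an $`8\times 8`$ lattice the smallest flux is $`\varphi =1/64`$, the value used in the discussion of the lowest Landau band below.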
We performed numerical diagonalizations of the above Hamiltonian, in the string gauge, and obtained the spectrum and the Bloch wavefunctions. The zero points (vortices) of the Bloch wavefunctions and the winding number (charge) of each vortex were also calculated for all the energy bands. In Figs. 2, the configurations of the vortices and their charges are shown for different randomness strengths $`W`$. As discussed above, when the Fermi energy lies in an energy gap, the Hall conductance is quantized to an integer; the integers shown in some of the energy gaps in Figs. 2 are given by the sum of all the Chern numbers below each gap. Note that, although the energy gap has to close in order to change the Hall conductance, as discussed below, the exact gap-closing points cannot be resolved within the numerical accuracy. However, by tracing the vortices we can identify the gap-closing points, and some of them are shown by the triangles in Figs. 2.

Let us first summarize some features of the numerical results in Figs. 2. As the strength of randomness $`W`$ is changed, the vortices in each energy band move continuously, and the motion of a vortex forms a vortex line. However, under a small change of $`W`$, the Chern number of each energy band is stable. This is because the Chern number is a topological invariant of the energy band, and a topology change is necessary to change it. As seen in Figs. 2, when the Chern number changes, two energy bands touch. Generically the two bands touch at one point in the Brillouin zone, which forms a singularity; near this point, the low-energy physics is generally described by massless Dirac fermions.

Associated with the appearance of the gap-closing point, a vortex line passes through it and the Chern numbers of the two touching bands change by $`+1`$ and $`-1`$, so their sum is unchanged. This is the sum rule in our model. It also leads to the selection rule $`\mathrm{\Delta }\sigma _{xy}=\pm 1`$. As shown in Figs. 2, an overlap of the energy bands can also happen; the energy gap then seems to be closed. However, a direct gap remains at each point of the Brillouin zone (the situation is similar to a semi-metal), and the Chern number of each energy band is still well defined. Therefore the vortex motion is still governed by massless Dirac fermions.

As discussed above, the basic observation from Figs. 2 is that the change of the Chern number in each energy band is described by massless Dirac fermions and, in general, obeys the selection rule $`\mathrm{\Delta }\sigma _{xy}=\pm 1`$. In fact, one can see the transition $`\sigma _{xy}=3\to 2\to 1\to 0`$ in Fig. 2 (a), where the electron density is fixed. However, the change of the observed $`\sigma _{xy}`$ can break the rule (the anomalous plateau transition). The definition of the observed $`\sigma _{xy}`$ has subtle aspects: in this transition region, $`\sigma _{xy}`$ depends strongly on the randomness realization, the boundary condition and the geometry. Therefore the naive infinite-size limit is not well defined mathematically. One needs to define the observed Hall conductance by its average over different realizations, and then take the infinite-size limit. Physically, in a realistic situation, there are several possibilities for justifying the ensemble average: for example, (i) due to the finite coherence length, the system effectively decouples into several domains with different realizations of randomness, or (ii) since thermal fluctuations exist, $`\sigma _{xy}`$ is averaged near the Fermi energy and the energy average may be replaced by the ensemble average.

In our model, small energy gaps can appear due to the randomness, as is clearly seen in Fig. 2 (b). The Hall conductance is quantized even when the Fermi energy lies in such a small gap. However, these small gaps depend strongly on the randomness realization and, after the ensemble average, the corresponding Hall conductance is generally not quantized. This is in contrast to the case when the Fermi energy lies in a Landau gap, where the Hall conductance remains quantized even after the ensemble average. In fact, after the ensemble average, the plateau transition $`\sigma _{xy}=3\to 2\to 1\to 0`$ in Fig. 2 (a) becomes $`\sigma _{xy}=3\to 0`$ ($`\mathrm{\Delta }\sigma _{xy}=3`$), which demonstrates a transition with $`\mathrm{\Delta }\sigma _{xy}\ne \pm 1`$ (see also Fig. 3). Although the plateau transitions generally obey the selection rule ($`\mathrm{\Delta }\sigma _{xy}=\pm 1`$) for a given realization of randomness, a transition with $`\mathrm{\Delta }\sigma _{xy}\ne \pm 1`$ is observed due to the ensemble average. This is the anomalous transition.

Finally we comment on the trajectory of the delocalized states in the lowest Landau band. As seen in Fig. 2 (b), the lowest Landau band splits into several subbands due to the randomness. In our case ($`\varphi =1/64`$), before the collapse of the lowest Landau gap, the Hall conductance is $`\sigma _{xy}=1`$ when the Fermi energy lies in that gap, which implies that there are delocalized states in the lowest Landau band.
In Fig. 2 (b), when $`W`$ is sufficiently small, only one of the subbands of the lowest Landau band carries a non-zero Chern number ($`=+1`$). Therefore we can assign the position of the delocalized states to that subband, and in this way the delocalized states can be traced. Through an energy-gap closing, the Chern number changes and the delocalized states move from one subband to another. Furthermore, when the gap closes, the position of the delocalized states can be identified with the gap-closing points, i.e. the massless Dirac fermions. These gap-closing points are shown by the triangles.

It can be seen in Fig. 2 (b) that, when the randomness is sufficiently weak, the delocalized states float up relatively within the lowest Landau band. At the same time, the lowest Landau band broadens and moves downward in energy. However, before the delocalized states float across the lowest Landau gap, the gap collapses and the states annihilate in pairs with other delocalized states falling down from the higher-energy region. The pair-annihilation point depends strongly on the randomness realization, so no definite annihilation energy is observed after the ensemble average (which corresponds to the experimental situation); this has the same origin as the anomalous transition discussed above. Therefore we do not observe any sign of floating across the Landau gap.

This work was supported in part by a Grant-in-Aid from the Ministry of Education, Science and Culture of Japan and also by the Kawakami Memorial Foundation. The computation has been done in part using the facilities of the Supercomputer Center, ISSP, University of Tokyo.
# Critical Crashes?

## Acknowledgments

I am grateful to Didier Sornette for his valuable comments on an earlier version of the paper. I also greatly appreciate a number of enjoyable and revealing discussions of the subject which I had during the last year.